PENILE SHADOW ARTEFACT OVERLAPPING FRACTURES
Soft tissue shadows are commonly seen on pelvic radiographs, and radiographers may overlook them or be unaware that these shadows can act as artefacts. In the case reported here, the shadow of the penis was superimposed on fracture lines at the pubic ramus, raising the question of whether a fracture of the ischio-pubic ramus was present. Further radiographic views were performed to demonstrate the fractures free of the artefact. Other pelvic artefacts may likewise be overlooked on pelvic radiographs and can lead to misdiagnosis of pelvic fracture. This essay should serve as a reminder for radiographers to recognize artefacts and differentiate them from pathology.
Introduction
An AP pelvis view is routinely requested for a patient who has suffered blunt pelvic trauma. It is common practice to remove all of the patient's clothing in the area of interest, if possible. Other anatomical parts, such as fat, skin, and the penis, are often not repositioned to avoid overlapping the pelvis. This case report highlights such an anatomical artefact to remind radiographers of its negative effect on image quality.
Case Report
A 41-year-old male patient was sandwiched between doors. He then presented to the emergency department of an acute metropolitan public hospital in western Melbourne, Victoria, Australia. A pelvic radiographic examination was requested to query any fracture. An AP pelvic view confirmed fractures at the pubic tubercle, the superior pubic ramus, and the left ischio-pubic synchondrosis (Figure 1). These fractures were superimposed by a shadow cast by the patient's penis. To provide a clearer view of the fractures, the radiographer produced a craniocaudal view of the pubic ramus (Figure 2). In this view, the penile shadow was projected away from the area of interest, so the fractures were shown in their entirety.
Discussion
"Penile shadow" has been discussed in the literature and has arguably been proposed as a radiological sign, the "John Thomas Sign". It is believed that the penile shadow points to the side of the pelvic fracture, and this is considered a positive John Thomas Sign (Murphy, Murphy, & Heffernan, 2014; Thomas, Lyons, & Walker, 1998). However, the sign is inconclusive in this case because there are bilateral fractures in the pelvis and the penis pointed to the left side. In fact, the penile shadow was superimposed on the fracture lines, so it is undetermined whether the John Thomas Sign would be a hindrance or a useful indicator in the diagnosis. This example of a penile shadow artefact should be used as a reminder for radiographers to be aware of anatomical artefacts. Other examples, including tampon and bowel gas artefacts, may be encountered in everyday radiographic practice.
A case study highlighted an abnormal soft tissue finding on a pelvic radiographic examination of a female patient, and the finding was subsequently concluded to be a tampon (Turton & Silverman, 1994). Hypothetically, similar objects, such as pads, may degrade the quality of a pelvic image in the same way as thick clothing or a wet diaper on an infant (Markowitz, 2007; Markowitz, Altes, & Jaramillo, 2009).
When we approach patients about removing an artefact-causing object, we should maintain professionalism and sensitivity. Cultural and gender concerns should be considered, whether asking a female patient to remove her tampon or asking a male patient to adjust the position of his penis. It is also worth remembering that, for some religions or personal preferences, it may be offensive for a male radiographer to perform an examination on a female patient.
A breathing technique is useful for producing a sharper image in a lateral thoracic spine projection, and the same principle can be applied to other body parts such as the lumbar spine and pelvis. When bowel gas obstructs the view of the area of interest, a long exposure time combined with a low mA can be used to blur out the bowel gas, producing a clearer view of the anatomy of interest. Fuller (2011) discussed and endorsed this technique. Although Fuller (2011) suggested it only for lumbar spine examinations, the same technique could also be applied to pelvic examinations to achieve a similar improvement.
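To keep the overall exposure unchanged while lengthening the exposure time, the tube current must be lowered in proportion, because the product of current and time (mAs) determines the total exposure. The figures below are purely illustrative and are not taken from the case:

```latex
% mAs is the product of tube current (mA) and exposure time (s)
\mathrm{mAs} = \mathrm{mA} \times t
% Illustrative values only: the same 20 mAs can be delivered as
200\,\mathrm{mA} \times 0.1\,\mathrm{s} = 20\,\mathrm{mAs}
\quad\text{or}\quad
50\,\mathrm{mA} \times 0.4\,\mathrm{s} = 20\,\mathrm{mAs}
% the longer exposure allows peristaltic motion to blur the bowel gas
```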
Conclusion
Besides the examples listed above, there are probably other anatomical or foreign body artefacts that could be avoided; it would be an interesting topic to discuss with colleagues to see whether there are more examples. This case study should serve as a reminder for all radiographers to pay closer attention to pelvic images. We should recognize any anatomical artefact, move it if possible, and differentiate it from pathology. There would then be no confusion when radiologists read the images, and patients would not be recalled for further imaging examinations because of suboptimal image quality.
Figure 1:
Figure 1: An AP pelvic projection in which one of several fractures is overlapped by the penile shadow.
Figure 2:
Figure 2: A craniocaudal view of the pubic ramus. The penile shadow is projected away from the area of interest.
Toothbrushing alters the surface roughness and gloss of composite resin CAD/CAM blocks
This study investigated the surface roughness and gloss of composite resin CAD/CAM blocks after toothbrushing. Five composite resin blocks (Block HC, Cerasmart, Gradia Block, KZR-CAD Hybrid Resin Block, and Lava Ultimate), one hybrid ceramic (Vita Enamic), one feldspar ceramic (Vitablocs Mark II), one PMMA block (Telio CAD), and one conventional composite resin (Filtek Z350 XT) were evaluated. Surface roughness (Ra) and gloss were determined for each group of materials (n=6) after silicon carbide paper (P4000) grinding and after 10k, 20k, and 40k toothbrushing cycles. One-way repeated measures ANOVA indicated significant differences in the Ra and gloss of each material except for the Ra of GRA. After 40k toothbrushing cycles, the Ra of BLO and TEL showed significant increases, while CER, KZR, ULT, and Z350 showed significant decreases. GRA, ENA, and VIT maintained their Ra. All of the materials tested, except CER, demonstrated significant decreases in gloss after 40k toothbrushing cycles.
INTRODUCTION
In dentistry, many different CAD/CAM blocks used in preparing restorations are available commercially, and their use has gained in popularity. Restorations prepared from CAD/CAM blocks offer uniform material quality, reproducibility, and low fabrication cost. There are numerous types of CAD/CAM blocks available, including lithium disilicate glass ceramics, leucite-reinforced glass ceramics, feldspathic glass ceramics, aluminum-oxide and yttrium tetragonal zirconia polycrystals 1) , composite resin 2) , titanium, and titanium alloy 3) . Currently, the demands for esthetics, shorter treatment time, and nonmetallic restorations from both dentists and patients are critical challenges for researchers trying to develop the best restorative material. The advantages of a composite resin block are that it is easy to polish and to mill, does not require sintering or crystallization firing, and is repairable in the mouth. Newly available CAD/CAM composite resin blocks are fabricated by high-pressure/high-temperature polymerization, resulting in improved mechanical properties 4,5) . The flexural properties of composite resin CAD/CAM blocks were reported to be comparable to those of ceramic blocks and were suggested to be suitable for single premolar crown restorations 6) . However, a restoration in a patient's mouth is subjected to wear from several factors, such as food and daily cleaning.
The surface of a restoration placed on a tooth should be smooth and shiny. However, these surfaces may deteriorate intraorally due to several factors 7) . Toothbrushing is a primary factor affecting the surface roughness and gloss of tooth-color-like restorations 8) .
Toothbrushing is an example of three-body wear. The bristles of the toothbrush act as an antagonist while the toothpaste slurry is used as the medium. Many investigations have shown the effects of toothbrushing on the surface roughness and gloss of composite resins, in terms of brushing time 9-11) , brushing force 12,13) , and the abrasivity of the particles contained in the toothpaste 14) . An increase in toothbrushing cycles was shown to deteriorate the smoothness and glossy appearance of conventional composite resins 15) . However, there is scant information regarding the effect of toothbrushing on composite resin CAD/CAM blocks. A study evaluating two-body wear, gloss retention, and surface roughness of CAD/CAM blocks found that all ceramic-based, one nanocomposite, and two PMMA CAD/CAM blocks behaved similarly to or better than natural enamel in terms of two-body and toothbrushing wear 16) . A study demonstrated that PMMA CAD/CAM blocks had a lower wear rate than conventionally polymerized acrylic resin when enamel was used as the antagonist 17) . Recently, the two-body and three-body wear characteristics of composite resin CAD/CAM blocks were evaluated using water and poppy seeds in vitro, and it was reported that all CAD/CAM block materials tested exhibited low wear compared to direct posterior composite resins 18) .
The most commonly used parameters for the evaluation of surface characteristics are surface roughness, gloss, and scanning electron microscope (SEM) images 19) . The arithmetic average of the surface roughness in a two-dimensional measurement (Ra), determined using a profilometer, is commonly used to quantitatively describe surface roughness; however, it does not describe the appearance of the examined surface. Surface roughness exceeding a threshold Ra value of 0.2 µm was claimed to increase plaque accumulation and staining in vitro 20) . The determination of Ra values in vivo is not possible because the profilometer cannot be used intraorally. However, gloss is a feature that can be easily recognized and perceived by dentists and patients. Gloss characterizes the specular reflection of the restoration surface 21) . Regression analysis has been used to predict the relationship between the Ra and gloss of an examined surface 22) . However, a consensus on their relationship has not been established.
Therefore, the objective of the present study was to investigate the surface roughness and gloss of CAD/ CAM blocks after toothbrushing. The null hypothesis was that there would be no significant differences in the surface roughness and gloss of each CAD/CAM block after toothbrushing.
Specimen preparation
Five composite resin blocks, one hybrid ceramic block, one feldspar ceramic block, one PMMA block and one conventional composite resin were used in the present study. Their compositions and manufacturers are listed in Table 1.
The CAD/CAM block specimens (8×8×2 mm) were prepared using a lathe and a low-speed diamond saw (Isomet, Buehler, Lake Bluff, IL, USA). The specimens were embedded in the center of 12×20×10 mm autopolymerized resin (Fastray, Bosworth Company, Skokie, IL, USA) blocks. For the Z350 specimens, a 12×20×10 mm acrylic resin block was prepared using the same autopolymerized resin in silicone molds. A cylindrical cavity, 8 mm in diameter and 3 mm in depth, was drilled in the center of the acrylic block. The cavity was filled with Z350, covered with a Mylar strip, and pressed flush under a glass slide. The composite resin was light activated with a light curing unit (GC Prima II, GC, Tokyo, Japan; light intensity: 600 mW/cm²) in contact with the Mylar strip for 40 s.
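As a quick consistency check (our arithmetic, not a value reported by the authors), the radiant exposure delivered through the Mylar strip is the product of the stated intensity and curing time:

```latex
% radiant exposure = intensity x time (illustrative calculation)
H = E \cdot t = 600~\mathrm{mW/cm^2} \times 40~\mathrm{s} = 24~\mathrm{J/cm^2}
```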
The specimens were wet-ground for 1 min each using P2400 grit (average abrasive particle size 12.2 µm) and P4000 grit (average abrasive particle size 6.5 µm) silicon carbide (SiC) paper (Leco, St. Joseph, MI, USA) at 150 rpm on a polishing machine (Nano 2000, Pace Technologies, Tucson, AZ, USA). The specimens were thoroughly rinsed with tap water, ultrasonically cleaned for 5 min to remove any debris, and dried with compressed air for 20 s.
Toothbrush testing
After storing the specimens for 7 days in 37°C deionized water, the specimens were mounted in a toothbrushing machine (V-8 Cross Brushing Machine, SABRI Dental Enterprises, Villa Park, IL, USA) operating with 55 mm back-and-forth brushing strokes at a frequency of 2 Hz. The specimens were brushed with a vertical force of 2.5 N on the toothbrushes (GUM Classic #411, Sunstar Americas, Chicago, IL, USA) following ISO specification #14569-1 23) . The specimens and toothbrushes were immersed in containers of toothpaste slurry, prepared using a homogenizer from 50 mL of deionized water and 25 g of toothpaste (Colgate Cavity Protection, Colgate-Palmolive, Chonburi, Thailand; RDA 80, dicalcium phosphate as the abrasive system 24) ). The toothpaste slurry was changed every 10,000 cycles. After 10,000 (10k) brushing cycles in the toothpaste slurry, the specimens were prepared for surface roughness and gloss determination as described below. The same specimens were subjected to an additional 10,000 (20k total) and 20,000 (40k total) toothbrushing cycles. The surface roughness and gloss were again determined after the 20k and 40k toothbrushing cycles.
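For a sense of scale (our arithmetic, not stated in the paper), each 10,000-cycle stage at 2 Hz corresponds to roughly 83 minutes of continuous machine brushing:

```latex
t_{10\mathrm{k}} = \frac{10{,}000~\text{cycles}}{2~\text{cycles/s}} = 5000~\mathrm{s} \approx 83~\mathrm{min},
\qquad
t_{40\mathrm{k}} = 4 \times t_{10\mathrm{k}} \approx 5.6~\mathrm{h}
```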
Surface roughness measurement
The Ra value of each specimen was determined using a profilometer (Talyscan 150, Taylor Hobson, Leicester, England) equipped with an inductive gauge stylus with a 2 µm tip radius. The tracing length was 2 mm, the tracing speed was 500 µm/s, and the cut-off length was 0.25 mm. Five parallel measurements, each 400 µm apart, were performed perpendicular to the toothbrushing direction. The Ra value was calculated as the average of the 5 measurements of each specimen.
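For reference, Ra is the arithmetic mean of the absolute deviations of the profile z(x) from its mean line over the evaluation length l, and the value reported per specimen is the mean of the five traces (standard definition, restated here rather than quoted from the paper):

```latex
R_a = \frac{1}{l}\int_{0}^{l} \lvert z(x) \rvert \, dx,
\qquad
\bar{R}_a = \frac{1}{5}\sum_{i=1}^{5} R_{a,i}
```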
Gloss measurement
Gloss was determined with a gloss meter (IG-331, Horiba, Kyoto, Japan) calibrated on a black glass standard provided by the manufacturer. The 60-degree measurement mode was selected. Each specimen was centrally placed over the reading aperture. The light beam was transmitted to the surface and reflected to the sensor. The measured area was oval-shaped (3×6 mm²). Each specimen was measured, rotated 180 degrees, and re-measured, and the two gloss values were averaged.
Scanning electron microscope (SEM) observation
Two representative specimens of each material, one after P4000 grit SiC grinding and the other after 40k toothbrushing cycles, were selected and sputter-coated with gold. An SEM (JSM-5410LV, JEOL, Tokyo, Japan) was used to observe and capture images of the surfaces at an acceleration voltage of 20 kV and a magnification of 500×.
Statistical analysis
The Ra values and the gloss units of each material tested were separately analyzed using one-way repeated measures analysis of variance (ANOVA), followed by the Bonferroni method (α=0.05). The relationship between the Ra value and gloss units of each material was analyzed using linear regression.
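A minimal sketch of how such an analysis could be reproduced in Python (the paper does not state the software used; the CSV file, column names, and stage labels below are hypothetical):

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one row per specimen (n=6) per measuring stage.
df = pd.read_csv("roughness.csv")  # columns: specimen, stage, Ra, gloss
df = df.sort_values(["stage", "specimen"])  # align rows by specimen within each stage

# One-way repeated measures ANOVA on Ra within a single material.
anova = AnovaRM(df, depvar="Ra", subject="specimen", within=["stage"]).fit()
print(anova.anova_table)

# Bonferroni-corrected pairwise comparisons between stages (paired t tests).
stages = ["P4000", "10k", "20k", "40k"]
pairs = [(a, b) for i, a in enumerate(stages) for b in stages[i + 1:]]
for a, b in pairs:
    x = df.loc[df.stage == a, "Ra"].to_numpy()
    y = df.loc[df.stage == b, "Ra"].to_numpy()
    t, p = stats.ttest_rel(x, y)
    print(a, b, min(p * len(pairs), 1.0))  # Bonferroni adjustment of p-values

# Linear regression between Ra and gloss for the same material.
reg = stats.linregress(df["Ra"], df["gloss"])
print(reg.rvalue ** 2)  # coefficient of determination (R^2)
```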
RESULTS
The results of the one-way repeated measures ANOVA of the Ra values of each material are shown in Table 2. All of the materials tested showed significant differences in Ra between the measuring stages (p<0.001) except the GRA group, where the differences were not significant (Table 2). All of the materials tested showed significant differences in gloss between the measuring stages (p<0.001). Apart from the CER group, all of the materials tested presented a significant decrease in gloss after 40k toothbrushing cycles (Table 4). In contrast, the CER group demonstrated a significant increase in gloss after the toothbrushing cycles. The SEM images of each material tested after P4000 grit SiC grinding and 40k toothbrushing cycles are shown in Fig. 1. The BLO samples (Fig. 1-A1) demonstrated large and small spherical filler particles that have been reported to be silica and zirconium silicate, respectively 6) . After 40k toothbrushing cycles, wide scratches were observed on the large silica particles and a large number of small pits resulting from filler particle exfoliation were seen (Fig. 1-A2). The CER sample showed a surface with minor scratches from SiC grinding (Fig. 1-B1). After toothbrushing, small filler particles homogeneously dispersed on the CER surface were observed, and the minor scratches seen after SiC grinding showed an obvious decrease (Fig. 1-B2). The GRA samples (Figs. 1-C1, 2) demonstrated large irregularly shaped glass filler particles surrounded by smaller irregularly shaped particles; the surface characteristics were similar between the SiC-ground and 40k-toothbrushed surfaces. The TEL samples (Fig. 1-H1) showed minor scratches from SiC grinding. In contrast, heavy deep scratches along the brushing direction were seen on the TEL surface after toothbrushing (Fig. 1-H2). The linear relationships between Ra values and gloss units are shown in Table 5. The coefficient of determination between Ra values and gloss units varied from 0.068 to 0.711.
DISCUSSION
The present study investigated the surface roughness and gloss of CAD/CAM blocks after toothbrushing. The five commercial composite resin blocks for CAD/CAM were selected based on market availability in Japan. Z350 was selected as a representative direct composite resin material. VIT and TEL were selected as the negative and positive controls, respectively. Statistical analysis indicated significant differences in the Ra values and gloss units of each material tested between the measuring stages, except for the Ra value of the GRA group. Therefore, the null hypothesis was rejected for gloss units but partially accepted for Ra values.
Clinically, dentists perform contouring and finishing of a restoration using a diamond or tungsten carbide bur before polishing. However, there are some difficulties in standardizing the surface of the specimens under laboratory conditions due to the diversity of the materials. A previous study 25) stated that polishing using SiC paper with abrasive particles smaller than 9 µm rendered a clinically acceptable surface roughness and gloss. Thus, P4000 grit SiC paper was selected in our study to obtain a standard baseline reference point.
In the present study, the design of the toothbrushing wear protocol followed the ISO technical specification 23) on brushing force (2.5 N) as employed in previous studies 8,15,26,27) . Soft-bristle toothbrushes and toothpaste were used as in a previous study 15) . Toothpastes containing less abrasive particles may slow the rate of gloss reduction, with a smaller increase in the surface roughness of composite resin 14) . A high number of toothbrushing cycles is necessary to produce an unequivocal effect on the roughness and gloss of the materials' surfaces.
To evaluate material surface characteristics, several parameters have been used to investigate the surface of a material. Some investigators measured the volume loss from surface wear 8,11,17,18) . In the present study, three parameters were chosen for evaluating the alteration of surface morphology after polishing and toothbrushing. Surface roughness (Ra) is an important laboratory parameter based on the depth of the scratches present on a material's surface. Surface gloss is a parameter that is more clinically perceptible to clinicians and patients. SEM provides an overall understanding of a material's surface morphology. Therefore, it is desirable to evaluate surface characteristics both quantitatively and qualitatively.
The statistical analyses indicated that significant differences were found in Ra between the measuring stages for each material tested except the GRA group, which maintained its Ra value. GRA consists of large irregularly shaped silicate glass and numerous prepolymerized filler particles that could possibly protect its soft resin matrix from toothbrushing, as observed in the SEM images. It is also interesting that the Ra values of the ENA and VIT samples were not significantly different between the SiC grinding and 40k toothbrushing evaluations. This greater wear resistance might be attributed to the strong ceramic network structure and greater hardness values of these materials 18) . These findings are consistent with a recent study that reported that the surface hardness of ENA and VIT was relatively high when compared to the other materials tested 6) . The BLO and TEL samples showed a significant increase in Ra after 40k toothbrushing cycles. BLO, which consists of large spherical silica filler particles with a lower filler content, is easily abraded by toothbrushing, as seen in our SEM analysis. TEL is a PMMA block with no filler content. Therefore, it is reasonable that toothbrushing can easily create scratches along the brushing direction, resulting in an extremely high increase in Ra after 40k toothbrushing cycles. This result is supported by that of a previous study which showed that PMMA CAD/CAM blocks exhibited higher vertical wear loss compared with ceramic blocks 17) . The CER, KZR, ULT, and Z350 samples showed significant decreases in Ra after the toothbrushing cycles. This was due to their compositions, which consist of small-sized filler particles. The high filler loading and small filler particle size result in a smaller distance between the filler particles. This might contribute to the better toothbrush wear resistance observed in our study, as supported by a previous investigation of the effect of filler particle size in a monomodal experimental composite system 28) .
All of the materials tested demonstrated significant differences in gloss units between the measuring stages. ADA specifications state that the gloss units of a polished restoration should range from 40-60 29) . In the present study, the gloss units of all the materials ranged from 56.3 to 89, a spread attributable to the differences in the composition of each material. However, these values were within the acceptable range after SiC grinding. This demonstrated the effectiveness of the P4000 grit SiC abrasive paper in rendering a high gloss on the samples' surfaces regardless of the nature of the materials. This result corresponded to that of a previous study stating that particle sizes of <9 µm can provide improved gloss of a material 24) . The gloss of all materials tested showed significant decreases after 40k toothbrushing cycles except for the CER samples, which showed a significant increase. The gloss units of the composite resin blocks and ceramic blocks were still within the ADA's range after 40k toothbrushing cycles except for the TEL samples, which are composed of PMMA. The gloss of the TEL samples was completely eradicated by toothbrushing, as was seen in a previous study 16) . The CER samples demonstrated increased gloss units after 40k toothbrushing cycles. This result might be attributed to well-distributed spherical nano-sized filler particles. The SEM images of the CER samples showed no aggregated filler particles, which supports the notion that the nano-sized filler particles are well distributed. This composition is believed to be self-polishing during toothbrushing.
The results of the present study indicated that most of the composite resin CAD/CAM blocks were comparable or superior to ceramic blocks in terms of Ra and gloss retention, especially the CER samples, which maintained their surface characteristics after toothbrushing. The Ra values of the composite resin and ceramic blocks at each measuring stage were less than 0.2 µm, which is the threshold limit for plaque accumulation proposed by Bollen et al. 20) , and also below 0.64 µm, which is the average roughness value of enamel 30) . This suggests that the surface roughness of all the CAD/CAM blocks evaluated after toothbrushing would not be susceptible to plaque accumulation and staining. Furthermore, differences in Ra can only be discriminated by patients when the difference is over 0.5 µm 31) . Restorations visually appear to be smooth when a surface has a roughness of less than 1 µm Ra 32) . However, the Ra of the TEL blocks exceeded these threshold limits. Therefore, TEL blocks should be used only for provisional restorations, as suggested by the manufacturer, and should not be used for more than 12 months.
The surface characteristics after toothbrushing of the ULT samples were similar to those of the Z350 samples. This result is in agreement with that of a previous study 16) . This result indicated that the new polymerization method used in this material did not improve its toothbrush wear resistance or gloss retention.
The present study only provided information on the surface roughness and gloss of CAD/CAM blocks after toothbrushing in vitro. Many parameters have to be carefully evaluated to better understand the surface characteristics of the composite resin CAD/CAM blocks. Furthermore, long-term clinical observations of restorations in the mouth are recommended.
CONCLUSION
Within the limitations of the present study, it can be concluded that most of the composite resin CAD/CAM blocks are comparable or superior to ceramic blocks regarding Ra and gloss after toothbrushing, especially the CER blocks, which maintained their surface characteristics. Therefore, it is suggested that all the composite resin CAD/CAM blocks evaluated in the present study possess acceptable toothbrush wear resistance.
Fusion with and without lever reduction in degenerative lumbar spondylolisthesis: a retrospective study
Background The reduction of slipped vertebra is often performed during surgery for degenerative lumbar spondylolisthesis (DLS). This approach, while potentially improving clinical and radiological outcomes, also carries a risk of increased complications due to the reduction process. To address this, we introduced an innovative lever reduction technique for DLS treatment. This study aims to investigate the clinical efficacy, radiological outcomes, and complications of fusion with or without lever reduction. Methods We conducted a retrospective review of prospectively collected data from a registry of patients who underwent lumbar fusion surgery for DLS, with a follow-up of at least 24 months. Self-reported measures included visual analog scale (VAS) for back or leg pain, Oswestry Disability Index (ODI), and the achievement of minimal clinically important difference (MCID). Radiological assessments encompassed spondylolisthesis percentage (SP), focal lordosis (FL), and lumbar lordosis (LL). Complications were categorized using the modified Clavien–Dindo classification (MCDC) scheme. Patients were assigned to the reduction group (RG) and non-reduction group (NRG) based on the application of the lever reduction technique. Clinical and radiological outcomes at baseline, immediately after surgery, and at the last follow-up were compared. Results A total of 281 patients were analyzed (123 NRG, 158 RG). Baseline patient demographics, comorbidities, and surgical characteristics were similarly distributed between groups except for operating time (NRG 129.25 min, RG 138.04 min, P = .009). Both groups exhibited significant clinical improvement after surgery (all, P = .000), with no substantial difference between groups (VAS, ODI, or the ability to reach MCID). Patients in RG showed statistically lower SP and higher FL during follow-up (all, P = .000). LL was comparable at different time points within each group or at the same time point between the two groups (all, P > .050). The overall complication rate (NRG 38.2%, RG 27.2%, P = .050) or specific complication rates per MCDC were similar between groups (all, P > .050). Patients in RG were predisposed to a lower risk of adjacent segment degeneration (ASDeg) (NRG 9.8%, RG 6.3%, P = .035). Conclusions There were no significant differences in postoperative measures such as VAS scores for back and leg pain, ODI, the ability to reach MCID, overall complication rate, or specific complication rates per MCDC between surgical approaches. However, fusion with lever reduction demonstrated a notable advantage in restoring segmental spinal sagittal alignment and reducing the occurrence of ASDeg compared to in situ fusion.
Background
Degenerative lumbar spondylolisthesis (DLS) is a common pathological condition in the elderly population, characterized by the anterior displacement of a superior vertebra over the adjacent caudal vertebra, while the neural arch remains intact [1,2]. Due to the spinal canal stenosis, compression of the nerve root in the lateral recess or in the foramen, and segmental instability secondary to spondylolisthesis, patients with DLS usually present with neurogenic claudication, radicular leg pain, or back pain [3]. In severe cases or when conservative treatments fail, decompression of the affected neural structures and stabilization of the spinal segment, so-called decompression and fusion surgery, are considered as a means to provide satisfactory long-term results [4,5]. Besides the aforementioned interventions, whether or not to reduce the spondylolisthesis intraoperatively still needs to be determined by surgeons. Theoretically, the reduction procedure contributes to reducing slip distance, increasing segmental lumbar lordosis or intervertebral disc height, and potentially leads to better clinical outcomes or a higher fusion rate [6-9]. Nonetheless, conventional reduction methods predominantly rely on distraction of the disc space and direct elevating pull of the pedicle screws, which may also introduce a higher risk of complications, such as neurologic deficits, hardware failure (screw loosening or pullout), prolonged operating time, or loss of reduction [6, 10-13].
To reduce the surgery-related complications linked to the reduction procedure, we introduced a composite reduction technique encompassing both traditional elevating-pull reduction and an innovative lever reduction. The clinical utility of this technique was previously demonstrated in a case series [14]. This study aims to delve deeper into the clinical efficacy, radiological outcomes, and complications associated with fusion with or without lever reduction for DLS treatment. By doing so, we intend to offer valuable insights into the comparative effectiveness and safety of these two surgical approaches for DLS.
Patient population
Following approval by the ethics committee at our hospital, a retrospective review of the spine registry data was conducted on a consecutive cohort of 488 patients diagnosed with lumbar spondylolisthesis between May 2015 and December 2020. All the clinical and radiological data had been prospectively collected at the respective follow-up visits. Inclusion criteria were as follows: (1) single-level DLS (Meyerding grade I or II), (2) refractory to conservative treatments for more than 6 months, and (3) at least 2 years of follow-up with complete clinical and radiological data. Patients with other types of spondylolisthesis, multilevel (≥ 2) spondylolisthesis, high-grade spondylolisthesis (Meyerding grade III or IV), hip disorders, previous spinal surgery or trauma, or incomplete data were excluded from the analysis.
Surgical techniques
All included patients experienced stenosis caused by DLS and underwent decompression and lumbar interbody fusion during the subsequent surgery. The surgeries were performed through an open posterior midline approach. Before bony decompression, bilateral pedicle screws were placed. Decompression consisted of bilateral facetectomy and partial foraminotomy, including removal of the hypertrophic ligamentum flavum. The disc space was opened and thoroughly cleaned with intradiscal drills and pituitary rongeurs. The cartilaginous endplates were cleaned with caution so as not to cause injury to the bony endplates. The bilateral nerve roots were liberated before reduction. The reduction of the slipped vertebra was conducted following the lever reduction technique (Fig. 1) [14]. The extent of slip reduction was verified with fluoroscopy. Then, the interspace was packed with autologous bone graft material, and an appropriately sized polyetheretherketone cage filled with bone was inserted into the disc space.
Patients undergoing fusion with lever reduction were assigned to the reduction group (RG). Conversely, patients undergoing in situ fusion (where intentional surgical reduction was not performed) were assigned to the non-reduction group (NRG). The assignment was made per the surgeon's choice.
Clinical measurements
Clinical assessments included the visual analog scale (VAS) for back pain, the VAS for leg pain, and the Oswestry Disability Index (ODI). The VAS was utilized to measure the severity of back and leg pain based on a 10-cm line, with "painless" (0) and "most severe pain" (10 cm) at each respective end [15]. The validated ODI is a self-administered questionnaire for evaluating back-specific functional disability, consisting of 10 items with scores from 0 to 5; a higher ODI indicates more severe disability [16]. The minimal clinically important difference (MCID) was introduced to analyze the clinical significance of variations in clinical outcomes [17]. MCID values were set at 14.9 points for the ODI, 2.1 points for VAS back pain, and 2.8 points for VAS leg pain [18]. All clinical outcomes were assessed by research assistants before surgery, immediately after surgery, and at each follow-up.
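As an illustration of how the MCID criterion translates into an achievement flag, the following sketch applies the thresholds quoted above to hypothetical baseline and follow-up scores (the column names and values are made up for the example and are not data from the registry):

```python
import pandas as pd

# MCID thresholds reported in the text (improvement = baseline - follow-up).
MCID = {"odi": 14.9, "vas_back": 2.1, "vas_leg": 2.8}

# Hypothetical patient-level scores; in the study these came from the registry.
df = pd.DataFrame({
    "odi_pre": [49, 52], "odi_last": [16, 40],
    "vas_back_pre": [5, 6], "vas_back_last": [2, 5],
    "vas_leg_pre": [5, 4], "vas_leg_last": [1, 3],
})

for measure, threshold in MCID.items():
    improvement = df[f"{measure}_pre"] - df[f"{measure}_last"]
    df[f"{measure}_mcid"] = improvement >= threshold

# Proportion of patients reaching MCID for each measure.
print(df[[c for c in df.columns if c.endswith("_mcid")]].mean())
```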
Radiological data acquisition
Measurements of the radiological parameters are illustrated in Fig. 2 and cover: (1) spondylolisthesis percentage (SP), the ratio of the interval between the two extended lines of the posterior aspects of the superior slipped vertebra and the inferior normal vertebra to the length of the superior endplate of the inferior normal vertebra; (2) focal lordosis (FL), the Cobb angle between the superior endplate of the upper slipped vertebra and the inferior endplate of the lower normal vertebra; and (3) lumbar lordosis (LL), the Cobb angle between the superior endplates of L1 and S1. All radiological measurements were taken by two trained spinal surgeons (WW and YW) before surgery, immediately after surgery, and at each follow-up. The average of the two measurements was taken as the final result.
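Expressed as a formula (with d denoting the sagittal offset between the two posterior-wall lines and w the length of the superior endplate of the inferior vertebra; the symbols are ours, not the authors'), the slip magnitude is:

```latex
\mathrm{SP} = \frac{d}{w} \times 100\%
```

FL and LL are measured directly as Cobb angles between the endplates named above.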
Complications assessment
All complications were recorded in light of the modified Clavien-Dindo classification (MCDC) scheme, which contains five types of complications: Type I, normal recovery without any treatment; Type II, pharmacologic treatment needed; Type III, invasive intervention under general anesthesia needed; Type IV, intensive care unit admission needed; Type V, death [19]. Adjacent segment degeneration (ASDeg) was diagnosed when plain radiographs, computerized tomography, or magnetic resonance imaging demonstrated one or more of the following lesions at the segment adjacent to the fused segment that were not present preoperatively: (1) development of anterolisthesis or retrolisthesis > 4 mm, (2) range of motion between adjacent vertebral bodies > 10°, (3) loss of disc height > 10%, (4) osteophyte formation > 3 mm, as well as (5) spinal stenosis caused by facet joint hypertrophy, compression fracture, or degenerative scoliosis [20,21]. Symptomatic ASDeg requiring reoperation was diagnosed as adjacent segment disease (ASDis). Radiographic fusion was assessed using Bridwell's grading criteria; grades I and II were considered radiographic signs of successful fusion, while grades III and IV were considered signs of unsuccessful fusion [22]. Pedicle screw or cage loosening was defined as a radiolucency of ≥ 1 mm around the screw or the cage [23].
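A purely illustrative encoding of the radiographic ASDeg criteria listed above (the threshold values come from the text; the function and parameter names are hypothetical and are not part of the study):

```python
def meets_asdeg_criteria(delta_listhesis_mm: float,
                         rom_deg: float,
                         disc_height_loss_pct: float,
                         osteophyte_mm: float,
                         new_stenosis: bool) -> bool:
    """Return True if any new radiographic ASDeg criterion is met."""
    return (
        delta_listhesis_mm > 4          # antero-/retrolisthesis > 4 mm
        or rom_deg > 10                 # segmental range of motion > 10 degrees
        or disc_height_loss_pct > 10    # loss of disc height > 10%
        or osteophyte_mm > 3            # osteophyte formation > 3 mm
        or new_stenosis                 # stenosis from facet hypertrophy, fracture, or scoliosis
    )

print(meets_asdeg_criteria(2.0, 12.0, 5.0, 1.0, False))  # True (ROM criterion met)
```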
Statistical analysis
Data were analyzed using SPSS Statistics software (version 26.0, IBM Corp., Armonk, NY, USA). Statistical significance was set at a level of P < 0.05.
Continuous data are reported as mean values ± standard deviation. The assumption of normal distribution for the data was verified using the Shapiro-Wilk test. The independent samples t test and the Mann-Whitney U test were employed for intergroup comparisons at each time point. The paired t test was used for the intra-group comparison of different time points. The Chi-square test was utilized to compare categorical variables between groups. Intraclass correlation coefficients (ICC) were calculated to evaluate the inter-rater reliability of radiographic assessments. ICC values below 0.5, between 0.5 and 0.75, between 0.75 and 0.9, and above 0.90 indicate poor, moderate, good, and excellent reliability, respectively.
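A minimal sketch of the intergroup comparison logic in Python with SciPy (the study itself used SPSS; the data below are simulated and the variable names are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nrg = rng.normal(129.25, 27.41, 123)   # e.g., operating time in NRG (simulated)
rg = rng.normal(138.04, 28.02, 158)    # e.g., operating time in RG (simulated)

# Check normality with the Shapiro-Wilk test, then pick the comparison test.
normal = stats.shapiro(nrg).pvalue > 0.05 and stats.shapiro(rg).pvalue > 0.05
if normal:
    stat, p = stats.ttest_ind(nrg, rg)        # independent samples t test
else:
    stat, p = stats.mannwhitneyu(nrg, rg)     # Mann-Whitney U test
print(f"p = {p:.3f}")

# Categorical variables (e.g., sex distribution) compared with a chi-square test.
table = np.array([[92, 31], [112, 46]])       # illustrative 2x2 contingency counts
chi2, p_cat, dof, _ = stats.chi2_contingency(table)
print(f"chi-square p = {p_cat:.3f}")
```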
Demographics
A total of 312 patients initially met the inclusion and exclusion criteria. However, 31 patients (9.94%) were lost to follow-up. Among the remaining 281 patients, 123 underwent in situ fusion, while 158 underwent fusion with lever reduction (Fig. 3). The enrolled patients were followed up for an average duration of 29 months, ranging from 24 to 41 months. The gender distribution was similar in both groups, with the majority being female (NRG: 74.8% vs. RG: 70.9%, P = 0.466). Surgery at the L4-L5 level was the most common for both groups (NRG: 77.2% vs. RG: 70.9%, P = 0.231). Notably, there was a significant difference in operating time between the groups (NRG: 129.25 ± 27.41 min vs. RG: 138.04 ± 28.02 min, P = 0.009), while no statistically significant differences were observed in other demographic metrics (Table 1).
Patient-reported outcomes
In terms of back pain measured by the VAS, there was a remarkable reduction in NRG from 4.89 ± 1.45 preoperatively to 1.80 ± 1.32 postoperatively (P = 0.000) and 1.95 ± 1.12 at the final follow-up. Similarly, in RG, VAS back pain significantly improved from 5.06 ± 1.43 before surgery to 1.73 ± 1.21 (P = 0.000) postoperatively and 1.81 ± 1.14 at the last follow-up. However, no substantial differences in back pain intensity were observed between the two groups at corresponding evaluation time points. Following surgery, 70.7% of NRG patients and 78.5% of RG patients achieved MCID, though the statistical difference was not significant (P = 0.136) (Table 2).
When considering VAS leg pain, there was a decrease from 5.00 ± 1.82 to 1.84 ± 1.45 postoperatively (P = 0.000) and 1.56 ± 1.10 at the last follow-up in NRG. In RG, VAS leg pain decreased from 5.17 ± 1.74 before surgery to 1.78 ± 1.20 (P = 0.000) postoperatively and 1.73 ± 1.16 at the last follow-up. Comparable VAS leg pain scores were noted between groups at corresponding assessment time points. The proportion of patients achieving MCID was similar between NRG (78.0%) and RG (82.3%) without a statistically significant difference (P = 0.375) (Table 2).
A similar decreasing trend was evident in the ODI, with scores reducing from 49.40 ± 11.58 to 15.99 ± 11.08 postoperatively (P = 0.000) and 15.37 ± 8.99 at the last follow-up in NRG. In RG, the preoperative ODI decreased from 48.61 ± 10.45 to 16.35 ± 9.07 postoperatively (P = 0.000) and 16.87 ± 6.28 at the last follow-up. No statistically significant differences were detected between groups at any assessment time point. The proportion of patients achieving MCID was similar between NRG (79.7%) and RG (80.4%), with no significant statistical difference (P = 0.883) (Table 2).
Radiological outcomes
Results of ICC analysis indicated good or excellent reliability for all radiographic assessments (SP: 0.771, FL: 0.816, LL: 0.901).
The preoperative FL was similar between the two groups (NRG: 11.58° ± 6.10° vs. RG: 12.68° ± 6.01°, P = 0.130). However, FL significantly increased to 13.99° ± 6.22° (P = 0.000) in NRG and 17.13° ± 5.90° (P = 0.000) in RG after surgery. There were no statistically significant differences in FL at the last follow-up compared with the postoperative values in either group. Patients in the RG demonstrated greater FL both postoperatively and at the last follow-up compared with those in the NRG (Table 3).
Regarding LL, no statistically significant differences were observed at different time points within each group or at the same time point between the two groups (Table 3).
Complications and reoperations
A total of 90 complications were documented based on the MCDC classification, comprising 47 complications (Type I: 28, Type II: 16, Type III: 3) in the NRG and 43 complications (Type I: 24, Type II: 17, Type III: 2) in the RG. Patients undergoing in situ fusion demonstrated a higher incidence of ASDeg compared to those undergoing fusion with lever reduction (NRG: 9.8% vs. RG: 6.3%, P = 0.035). Reduction at L4-5 would thus appear beneficial in terms of preventing ASDeg; however, a larger sample size is still needed to validate this finding. No significant differences were observed between the groups in the proportions of ASDis, cage malposition, cerebrospinal fluid leakage, unsuccessful fusion, residual pain or numbness, screw loosening, wound infection, and other general complications (Table 4).
Two patients, one from each group, required revision surgery due to ASDis. One patient in the NRG experienced recurrent pain caused by cage malposition 3 months postoperatively, which was resolved through reoperation. Wound infection was observed in two patients, one in each group, necessitating postoperative debridement.
Discussion
The necessity of a concomitant reduction procedure during fusion surgery for DLS remains a controversial topic. Currently, conventional wisdom suggests that the reduction of spondylolisthesis holds theoretical appeal due to its potential for indirect decompression of the neuroforamina and restoration of the sagittal lumbosacral alignment. Within this context, multiple reduction approaches, such as translation reduction, distraction and slip reduction, the cantilever technique, or minimally invasive slip reduction, have been developed and utilized in the treatment of DLS [24-27]. However, the implementation of these methods relies upon adequate contact force between the instrumentation and vertebra and might instead result in implant-related complications, especially for elderly DLS patients with diminished bone quality. In response to these challenges, our clinical center introduced a novel lever reduction procedure in combination with transforaminal lumbar interbody fusion [14]. The present study compared the clinical efficacy, radiographic outcomes, and complications of in situ fusion versus fusion with lever reduction in a cohort of 281 patients. The results of our study highlighted the benefits associated with lever reduction in terms of restoring segmental sagittal alignment and reducing complications, while no superiority of the additional reduction procedure over in situ fusion in improving clinical outcomes was exhibited. The impact of reduction on clinical outcomes in lumbar spondylolisthesis remains uncertain, as comparative studies have yielded conflicting results. A randomized trial conducted by Lian et al. involving 73 patients with DLS revealed similar postoperative VAS, ODI, and Japanese Orthopedic Association (JOA) scores between patients who underwent fusion with or without reduction [6]. Another study involving 65 patients with symptomatic spondylolisthesis, conducted by Heo et al., demonstrated that intraoperative reduction led to greater improvements in ODI after surgery [8]. Conversely, Tay et al. did not find any significant clinical benefits associated with reduction in cohorts with low-grade spondylolisthesis [23]. Regarding high-grade spondylolisthesis, a recent meta-analysis indicated that slip reduction correlated with more substantial overall enhancements in ODI when compared to in situ fusion [28]. In our current investigation, we did not identify a connection between spondylolisthesis reduction and improvements in clinical outcomes or an increased proportion of patients achieving the MCID (Table 2). Considering that the majority of patients in this study showed only slight degenerative spondylolisthesis, one plausible explanation for this result might be that the indirect decompression effect resulting from reduction was marginal when contrasted with the direct decompression achieved during the fusion procedure. Therefore, the reduction procedure had minimal effect on clinical improvement.
In line with previous findings, our results also indicate that the focal lordosis increases significantly as the spondylolisthesis percentage decreases [6,8]. In contrast, the overall lumbar lordosis shows variation with no substantial differences across the three assessment time points, whether reduction was performed or not (Table 3). From a practical standpoint, establishing a connection between the restoration of spinal alignment and the perceived enhancements in treatment effectiveness for patients is crucial. In a prospective study enrolling 57 patients with DLS who underwent lumbar fusion surgery, Kuhta et al. reported that obtaining adequate SL was correlated with favorable ODI 5 years postoperatively [29]. Similarly, Takahashi et al. showed that DLS patients with a higher increase in SL were predisposed to a higher JOA recovery rate after lumbar fusion surgery for DLS [30]. On the contrary, loss of overall lumbar lordosis resulted in a higher risk of poor clinical outcomes [31]. Therefore, despite our inability to identify a statistical distinction in clinical outcomes between NRG and RG as mentioned earlier, the significance of the reduction procedure remains worthy of contemplation since it both improves segmental morphology and maintains overall lordosis, which provides potential therapeutic benefit in patients with DLS.
The choice of the most appropriate surgical plan for a surgeon can be influenced by the complications associated with various surgical techniques. However, there remains a lack of consensus regarding the definition and grading of complications arising from spine surgeries. In this study, all complications were categorized according to the MCDC system [19]. The results indicated that patients undergoing fusion with lever reduction were inclined to experience a lower overall complication rate and MCDC Type I complication rate compared to those undergoing in situ fusion, although this difference was not statistically significant. Regarding specific categories, the reduction technique exhibited a distinct advantage in reducing the incidence of ASDeg compared to in situ fusion (Table 4). This difference might be attributed to the increased FL resulting from the reduction procedure. As reported in previous studies, the proper restoration of FL curbs the compensatory increase in mobility and loading at the adjacent fused segment, thereby delaying the degeneration process [32,33]. Moreover, the additional stresses during the reduction maneuvers might induce a higher risk of screw loosening or even pullout, as previously reported [6,34]. Nonetheless, such negative effects were not evident in patients who underwent fusion with the lever reduction procedure in our research (Table 4). The lever device's distractive force can mitigate the pull force exerted on the instrumentation to some extent, potentially leading to fewer implant-related complications [14]. Considering these factors collectively, the superiority of fusion with lever reduction is primarily manifested in reducing the risk of complications rather than enhancing patient-reported outcomes. We believe that fusion with lever reduction could emerge as a viable alternative for DLS patients and is worthy of application, contributing to an enhanced long-term prognosis.
Our study has certain limitations that need to be acknowledged. Firstly, the retrospective nature of our study made it challenging to completely eliminate selection bias and attrition bias. Secondly, the decision to pursue lever reduction was primarily influenced by surgeon preferences and, in some cases, the availability of the lever reduction device. This introduces the possibility of unmeasured factors affecting the decision-making process that were not accounted for in our study. Thirdly, only patients undergoing fusion with lever reduction and in situ fusion were included in the analysis. Therefore, the present study cannot conclusively prove the superiority or inferiority of the lever reduction technique compared to other reduction techniques. The ongoing data collection of relevant research may address this gap in the future. Lastly, it is important to note that our cohort size was relatively small, which might marginally impact the robustness of our conclusions. Despite these limitations, our study yields valuable insights into the efficacy and safety of the innovative lever reduction technique for DLS. Furthermore, it contributes previously unavailable data that can help reconcile the ongoing debate surrounding fusion options with or without reduction.
Conclusions
We conducted a comparison of clinical effectiveness, radiological outcomes, and complications between fusion with and without the innovative lever reduction technique in a group of 281 patients with DLS. There were no significant differences in postoperative measures such as VAS scores for back and leg pain, ODI, the ability to reach MCID, overall complication rate, or specific complication rates per MCDC between surgical approaches. However, a notable advantage was observed for fusion with lever reduction compared to in situ fusion in terms of restoring segmental spinal sagittal alignment and reducing the occurrence of ASDeg. From a long-term perspective, fusion with lever reduction might be a reasonable alternative for the treatment of DLS.
Fig. 1
Fig. 1 Reduction process of a slipped vertebra. (1) Forward slippage of L5; (2) pedicle screws were placed in both vertebrae of the slipped level; (3) the nerve roots were decompressed before reduction. After removal of the disc tissue and endplate preparation, a rod was placed unilaterally and the pedicle screw of the lower vertebra was locked; (4) a lever repositioner was placed at the anterior rim of the slipped vertebra under fluoroscopy; (5) with the lower vertebra as the lever fulcrum, force was applied to gradually pry the slipped vertebra upward; (6) the pedicle screws of the slipped vertebra were locked. Then, an additional rod was placed and all screws were locked. Quoted from the study by Chao et al. (https://doi.org/10.1186/s12891-019-3028-8)
Fig. 2
Fig. 2 Illustration of the radiological measurements. A SP, spondylolisthesis percentage, the ratio of the interval between the two extended lines of the posterior aspects of the superior slipped vertebra and the inferior normal vertebra to the length of the superior endplate of the inferior normal vertebra; B FL, focal lordosis, the Cobb angle between the superior endplate of the upper slipped vertebra and the inferior endplate of the lower normal vertebra; C LL, lumbar lordosis, the Cobb angle between the superior endplates of L1 and S1
Fig. 3
Fig. 3 Screening procedure of the patients
Table 1
Demographic data of the patients
Table 2
Clinical measures of the patients
Table 3
Radiological outcomes of the patients **P < 0.01 † The discrepancy was statistically different between preoperative and postoperative values ¶ The discrepancy was statistically different between postoperative and last follow-up values
Table 4
Frequency and type of complications
Transcriptomic Responses in the Livers and Jejunal Mucosa of Pigs under Different Feeding Frequencies
Simple Summary Nutrition management strategies are closely related to body development and health, and feeding frequency affects pig feed intake, feed efficiency, body composition, and growth performance. However, the effect of feeding one time daily versus two times daily on the intestine has been given less attention. In this study, we investigated the transcriptomic responses induced in the livers and jejunal mucosa of growing pigs by different daily feeding schedules. We found that, compared with feeding once daily, feeding twice daily had no significant effect on the growth performance of growing pigs with the same average daily feed intake. A two-meal regimen reduced the concentration of triglycerides in serum and liver and affected body metabolism by promoting lipid transport, lipogenesis, fatty acid oxidation, chylomicron formation and transport, and gluconeogenesis, and by inhibiting adipocyte differentiation. These findings support the idea that different feeding regimens could affect lipid metabolism and can be effective in nutritional strategies against metabolic dysfunction. Abstract Feeding frequency in one day is thought to be associated with nutrient metabolism and the physical development of the body in both experimental animals and humans. The present study was conducted to investigate transcriptomic responses in the liver and jejunal mucosa of pigs to evaluate the effects of different feeding frequencies on the body's metabolism. Twelve Duroc × Landrace × Yorkshire growing pigs with an average initial weight (IW) of 14.86 ± 0.20 kg were randomly assigned to two groups: feeding one time per day (M1) and feeding two times per day (M2); each group consisted of six replicates (pens), with one pig per pen. During the one-month experimental period, pigs in the M1 group were fed on an ad libitum basis at 8:00 am, and the M2 group was fed half of the standard feeding requirement at 8:00 am and adequate feed at 16:00 pm. The results showed that average daily feed intake, average daily gain, feed:gain, and the organ indices were not significantly different between the two groups (p > 0.05). The total cholesterol (T-CHO), triglyceride (TG), high-density lipoprotein-cholesterol (HDL-C), and low-density lipoprotein-cholesterol (LDL-C) concentrations in the serum, and the TG concentration in the liver, were significantly lower in the M2 group than in the M1 group, while the T-CHO concentration in the liver was significantly higher in the M2 group (p < 0.05). Jejunal mucosa transcriptomic analysis showed that Niemann-Pick C1-Like 1 (NPC1L1), Solute carrier family 27 member 4 (SLC27A4), Retinol binding protein 2 (RBP2), Lecithin retinol acyltransferase (LRAT), and the apolipoprotein genes (APOA1, APOA4, APOB, and APOC3) were upregulated in the M2 group, indicating that fat digestion was enhanced in the small intestine, whereas Perilipin (PLIN1 and PLIN2) were downregulated, indicating that body fat was not deposited. Fatty acid binding proteins (FABPs) and Acetyl-CoA acyltransferase 1 (ACAA1) were upregulated in the M2 group, indicating that feeding twice daily could promote the oxidative decomposition of fatty acids. In conclusion, under the conditions of this study, feeding frequency had no significant effect on the growth performance of pigs but affected the body's lipid metabolism, and increasing the feeding frequency promoted fat digestion in the small intestine and the oxidative decomposition of fatty acids in the liver.
Introduction
It is well known that nutrition management strategies are closely related to growth performance and health, and feeding frequency is one of the key factors affecting the growth of the body and feed efficiency [1,2]. Previous studies have reported that different feeding frequencies affect the feed intake, feed efficiency, body composition, and growth performance of pigs [2][3][4][5][6]; however, owing to different trial conditions, conflicting results regarding changes in feed efficiency and body composition have been reported [2,4]. Feeding once or twice per day are common feeding patterns in swine production in China; therefore, it is necessary to investigate and compare the responses in growth performance and body metabolism of pigs under these two feeding patterns.
So far, the mechanism by which feeding frequency regulates the growth and body metabolism of pigs is not fully clear. A higher feeding frequency could improve the glucose clearance rate and prevent ruminants from accumulating fat in visceral adipose depots because of higher insulin sensitivity [7]. Meal frequency could change the time-course profiles of plasma concentrations of glucose, insulin, and lactate in pigs [2]. A twelve meals regimen increased the liver concentrations of total lipid, glycogen, and triglyceride compared with those in pigs fed two meals daily [8]. Compared with pigs fed ad libitum, the activities of citrate synthase, β-hydroxylacyl-CoA dehydrogenase, and lactate dehydrogenase were greater in the longissimus muscle of pigs on the two meals daily regimen [9]. The intestine and liver are the major digestive, absorption, and metabolism organs of the body and play a key role in the metabolism of nutrients [10,11]. To date, information on the transcriptomic responses in the liver and jejunal mucosa of pigs under different feeding frequencies is not available.
The present study hypothesized that, compared with feeding once daily, feeding pigs twice daily could change gene expression in the intestine and liver, eventually affecting body metabolism. Therefore, the objective of this study was to compare the growth performance, body metabolism, and transcriptional profiles in the livers and jejunal mucosa of pigs under different feeding frequencies.
Experimental Animals, Design, and Diet
This experiment was approved and conducted under the supervision of the Animal Care and Use Committee of Nanjing Agricultural University (Nanjing, Jiangsu Province, China). The ethics approval code is SYXK (SU) 2017-0007. All pigs were raised and maintained on a local commercial farm under the care of the Animal Care and Use Guidelines of Nanjing Agricultural University. In our study, all animals were individually housed in metal-floor cages (height, 0.85 m; length, 1.2 m; width, 0.70 m) with a smooth walled pan, a nipple drinker, and a feeder. The room temperature was maintained at 25 ± 2 °C during the experimental period. All pigs were kept on a 24 h light-dark cycle, with lights being turned on from 8:00 am to 20:00 pm. Twelve 42-d Duroc × Landrace × Yorkshire growing barrows (BW = 14.86 ± 0.20 kg) were randomly allocated to the one time feeding daily (M1) group and two times feeding daily (M2) group, and each group consisted of six replicates (pens) with one pig per pen. Pigs in the M1 group were fed the adequate diet on an ad libitum basis at 8:00 am on each experimental day. Pigs in the M2 group were fed half of the standard feeding requirement according to the National Research Council (NRC, 2012) at 8:00 am and adequate feed at 16:00 pm [12], and the refusals were removed from the feeder and weighed at 20:00 pm. The composition and nutrient level of the diet in our study is shown in Table 1, and all pigs had free access to water during the 30-day trial period. Individual initial and final body weight as well as feed intake were recorded during the experiment to calculate the average daily gain (ADG) and average daily feed intake (ADFI). Feed efficiency (feed:gain) was expressed as F:G.
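The growth-performance measures defined above are simple ratios of weight gain and feed intake over the trial period; the sketch below (with made-up numbers, not study data) illustrates the calculation:

def growth_performance(initial_bw, final_bw, total_feed_intake, days):
    """Return ADG, ADFI and feed:gain (F:G) for one pig over a trial of the given length (kg and days)."""
    adg = (final_bw - initial_bw) / days          # average daily gain, kg/d
    adfi = total_feed_intake / days               # average daily feed intake, kg/d
    return adg, adfi, adfi / adg                  # F:G = feed consumed per kg of gain

# Hypothetical pig: 14.9 kg to 30.5 kg over the 30-day trial, eating 28.2 kg of feed in total
adg, adfi, fg = growth_performance(14.9, 30.5, 28.2, 30)
print(f"ADG = {adg:.3f} kg/d, ADFI = {adfi:.3f} kg/d, F:G = {fg:.2f}")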
Sampling
After 12 h fasting, all pigs were euthanized at 8:00 am with a jugular vein injection of 4% sodium pentobarbital solution (40 mg/kg BW). Blood samples were collected and centrifuged at 2000× g for 10 min at 4 °C, and the supernatant was stored at −70 °C until subsequent biochemical analysis. The animals were bled and opened immediately, and the liver, kidney, spleen, heart, and the entire gastrointestinal tract were removed and weighed. Tissues from the liver and jejunal mucosa were collected and kept in liquid nitrogen, then stored at −80 °C for further transcriptome analysis.
Biochemical Parameters of Serum and Liver
Glucose, total bile acid (TBA), triglyceride (TG), total cholesterol (T-CHO), high-density lipoprotein-cholesterol (HDL-C), and low-density lipoprotein-cholesterol (LDL-C) in the serum of pigs were measured with an Olympus AU400 Automatic Biochemical Analyzer (Olympus Optical Co., ltd., Tokyo, Japan). The concentrations of liver T-CHO, TG, HDL-C, LDL-C, and TBA were determined by using commercial biochemical assay kits (Nanjing Jiancheng Bioengineering Institution, Nanjing, China).
Library Construction for RNA Sequencing and Sequencing Procedures
Total RNA of the liver tissues and jejunal mucosa was isolated using an RNeasy mini kit (Qiagen, Hilden, North Rhine-Westphalia, Germany). As there were six replicates in each group, three biological replicates were randomly selected for the RNA-Seq to reduce the costs of the experiment. A total amount of 1 µg RNA per sample was used as input material for the RNA sample preparations. Sequencing libraries were generated using the NEBNext Ultra RNA Library Prep Kit for Illumina (New England BioLabs, Ipswich, MA, USA) following the manufacturer's recommendations, and index codes were added to attribute sequences to each sample. In order to select cDNA fragments of preferentially 240 bp in length, the library fragments were purified with the AMPure XP system (Beckman Coulter, Beverly, MA, USA). Then, 3 µL USER Enzyme (New England BioLabs, Ipswich, MA, USA) was used with size-selected, adaptor-ligated cDNA at 37 °C for 15 min followed by 5 min at 95 °C before PCR. PCR was then performed with Phusion High-Fidelity DNA polymerase (New England BioLabs, Ipswich, MA, USA), universal PCR primers, and index (X) primer (New England BioLabs, Ipswich, MA, USA). Finally, the PCR products were purified (AMPure XP system) and the library quality was assessed on an Agilent Bioanalyzer 2100 system (Agilent Technologies, Santa Clara, CA, USA). The clustering of the index-coded samples was performed on a cBot Cluster Generation System using the TruSeq PE Cluster Kit v4-cBot-HS (Illumina, San Diego, CA, USA) according to the manufacturer's instructions. After cluster generation, the library preparations were sequenced on an Illumina platform and paired-end reads were generated.
Quantitative Real-Time PCR
The first-strand cDNA synthesis was performed using 1 µg of total RNA with a reverse transcription kit (Takara Bio, Shiga, Japan). The expression of the genes Apolipoprotein A1 (APOA1), Apolipoprotein A4 (APOA4), Apolipoprotein C3 (APOC3), Niemann-Pick C1-Like 1 (NPC1L1), Acetyl-CoA acyltransferase 1 (ACAA1), Fatty acid binding protein 1 (FABP1), and Phosphoenolpyruvate carboxykinase 1 (PCK1) was measured by quantitative real-time PCR using an ABI 7300 sequence detector (Applied Biosystems, Foster City, CA, USA). The PCR reactions were performed in a final volume of 20 µL with the Roche SYBR Green PCR Kit (Roche, Hercules, CA, USA), according to the manufacturer's instructions. The sequences of the primers are listed in Appendix A Table A1. The expression of the genes was calculated relative to the expression of β-actin with the 2^(−ΔΔCt) formula [13].
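The 2^(−ΔΔCt) calculation referenced above normalizes each target gene to β-actin and then to a calibrator sample; a minimal sketch follows, with Ct values that are purely illustrative and not data from this study:

def relative_expression(ct_target, ct_actin, ct_target_cal, ct_actin_cal):
    """Relative expression by the 2^-ddCt method, normalized to beta-actin and a calibrator sample."""
    d_ct_sample = ct_target - ct_actin                # normalize target to beta-actin in the test sample
    d_ct_calibrator = ct_target_cal - ct_actin_cal    # same normalization in the calibrator (e.g., an M1 pig)
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for one target gene
print(relative_expression(22.1, 18.4, 23.6, 18.5))  # fold change relative to the calibrator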
Statistics
Growth performance, organ weight, serum metabolites, and hepatic metabolite data were analyzed by SPSS version 22.0 software (SPSS Inc., Chicago, IL, USA) as a randomized complete block design. The pen was used as the experimental unit (n = 6), and the significance of differences between the M1 and M2 groups was evaluated by a Student's t test. Significant differences were declared when p < 0.05.
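As a rough illustration of the comparison described above (the original analysis used SPSS 22.0), the following sketch runs a two-sample Student's t test on hypothetical pen-level values with six pens per group:

from scipy import stats

# Hypothetical pen-level average daily gain (kg/d), six pens per group
m1 = [0.52, 0.55, 0.49, 0.53, 0.51, 0.54]
m2 = [0.50, 0.56, 0.52, 0.53, 0.49, 0.55]

# Student's t test between M1 and M2; significance declared at p < 0.05
t_stat, p_value = stats.ttest_ind(m1, m2, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}, significant: {p_value < 0.05}")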
Growth Performance
All pigs were kept healthy during the experiment. No differences in final body weight (FBW), average daily gain (ADG), average daily feed intake (ADFI) and feed:gain (F:G) were found between the two groups ( Table 2). The weight of the liver, kidney, spleen, whole intestinal tract, and their percentage of body weight were not affected by feeding frequency (Appendix A Table A2).
Serum and Liver Metabolites
As shown in Table 3, the concentrations of serum T-CHO, TG, HDL-C, and LDL-C in pigs of the M2 group were lower than those in the M1 group (p < 0.05), while no significant differences in the concentrations of serum glucose and TBA were found between the two groups. The concentrations of liver TBA, HDL-C, and LDL-C were not affected by feeding frequency. The twice feeding regimen significantly decreased the concentration of liver TG, while it significantly increased the concentration of liver T-CHO (p < 0.05).
Gene Expression Profiles in the Liver and Jejunal Mucosa
The gene expression profiles of the liver showed that 256 differentially expressed transcripts were identified and annotated between the two groups at the particular cutoff criteria (FC ≥ 1.5 or < 0.67; p < 0.05). In total, 145 genes were upregulated and 111 genes were downregulated among the annotated genes. Overall, the most upregulated gene was TNF receptor superfamily member 17 (TNFRSF17), which showed a 33.09-fold increase with the free feeding treatment, whereas the most downregulated gene was Wnt-11, with an 8.82-fold decrease.
In the jejunal mucosa, 817 differentially expressed transcripts were identified and annotated between the two groups at the particular cutoff criteria (FC ≥ 1.5 or < 0.67; p < 0.05). In total, 401 genes were upregulated and 416 genes were downregulated among the annotated genes. Overall, the most upregulated gene was fibroblast growth factor 19 (FGF19), which showed a 6.34-fold increase with the free feeding treatment, whereas the most downregulated gene was homeobox B3 (HOXB3), with a 9.25-fold decrease.
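The cutoff criteria used above (FC ≥ 1.5 or < 0.67; p < 0.05) amount to a simple filter over per-gene statistics; a sketch is given below with a hypothetical results table, not the actual analysis pipeline:

import pandas as pd

# Hypothetical per-gene results: fold change (M2 relative to M1) and p value
results = pd.DataFrame({
    "gene": ["NPC1L1", "FABP1", "PLIN1", "HOXB3", "ACTB"],
    "fold_change": [2.10, 1.80, 0.45, 0.11, 1.05],
    "p_value": [0.003, 0.012, 0.020, 0.001, 0.800],
})

significant = results["p_value"] < 0.05
up = significant & (results["fold_change"] >= 1.5)
down = significant & (results["fold_change"] < 0.67)

# Keep only differentially expressed genes and label the direction of change
degs = results[up | down].copy()
degs["direction"] = ["up" if fc >= 1.5 else "down" for fc in degs["fold_change"]]
print(degs)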
The DEGs were subjected to GO functional category analysis and then to GO enrichment analysis. All DEGs were enriched in the three main functional categories, including biological process, cellular components, and molecular function (FC ≥ 1.5 or < 0.67; p < 0.05). At the biological process level of the GO categories, most of the genes in the liver were significantly represented in the cellular amino acid metabolic process, establishment of tissue polarity, and regulation of cardiac muscle cell action potential (Figure 1A). In the jejunal mucosa, most of the genes were significantly represented in the lipid metabolic process, regulation of immune system process, organic acid metabolic process, positive regulation of immune system process, cellular lipid metabolic process, small molecule biosynthetic process, and so forth (Figure 1B).
To better understand the functional changes, the DEGs were subjected to the KEGG database (Sus scrofa) for pathway enrichment analysis. The significantly changed pathway in the liver of pigs was arginine biosynthesis. In the jejunal mucosa, the significantly enriched pathways included glycine, serine and threonine metabolism, the peroxisome proliferator-activated receptor (PPAR) signaling pathway, fat digestion and absorption, vitamin digestion and absorption, and hematopoietic cell lineage (Figure 2, gene information is shown in Table 4). DEGs in the PPAR signaling pathway, fat digestion and absorption, and vitamin digestion and absorption were constructed into a co-expression network by using the ClueGO and CluePedia plugins in the Cytoscape network analyzer tool (Figure 3). The genes NPC1L1, GOT2, FABP1, SLC27A4, ACSL3, ACSL5, MTTP, ACAA1, PCK1, APOA1, APOA4, APOB, PLIN1, and PLIN2 may play an important role in the network. The related pathways containing these selected genes were analyzed for further research (Figure 4). In detail, several genes involved in chylomicron formation (LRAT, RBP2, APOA1, APOA4, APOB, AGPAT2, MTTP, MOGAT2, NPC1L1, FABP1, and GOT2), lipid metabolism (APOA1, APOC3, FABP1, SLC27A4, ACSL3, SCD1, and ACAA1), and gluconeogenesis (PCK1) were upregulated, and several genes involved in adipocyte differentiation (PLIN1, PLIN2) were downregulated.
Figure 2. Pathway assignments of differentially expressed transcripts in the liver and jejunal mucosa of pigs based on the Kyoto Encyclopedia of Genes and Genomes (KEGG). ClueGO (a plugin for Cytoscape) was used for KEGG analysis, and a p value < 0.05 was used as the threshold to select significant KEGG pathways. The pink bars indicate the number of upregulated genes, while the green bars indicate the number of downregulated genes.
Table 4. Pathways enriched with differentially expressed genes in the livers and jejunal mucosa of pigs induced by feeding one time daily when compared with feeding twice daily 1.
Validation of RNA-Seq Results by qRT-PCR
To validate the transcriptomic results, seven upregulated genes (FABP1, ACAA1, NPC1L1, PCK1, APOA1, APOA4, and APOC3) were examined by quantitative RT-PCR (qRT-PCR). As shown in Table 5, the expression profiles of these genes detected by qRT-PCR were consistent with those detected by the transcriptome analysis, which confirmed the reliability of our RNA sequencing data.
Discussion
Restricting meal frequency is considered a potentially effective treatment for metabolic disease, in addition to limiting caloric intake [18][19][20]. A previous study showed that the liver fat content of pigs fed two meals daily was lower than that in pigs fed twelve meals [8]. In the present study, compared with the M1 group, the M2 regimen significantly decreased the concentrations of serum T-CHO, TG, HDL-C, and LDL-C and of liver TG, although the growth performance of pigs was not affected. In order to explore the underlying mechanisms, transcriptomic responses in the livers and jejunal mucosa of growing pigs were investigated under the two different feeding frequency regimens.
By recording the feed intake at different time points in the day, we found that pigs in the M1 group consumed 75-85% of the total diet before 16:00 pm every day, while the M2 group consumed only half of the total diet by 16:00 pm every day, which indicates that the feeding patterns did change the meal regimen. However, across the one-month trial, the two different feeding regimens had no significant effect on the growth performance of the growing pigs. The average daily feed intake (ADFI) was correlated with the time spent eating; therefore, the lack of a difference in ADFI in our study was likely due to the similar duration of eating time in the M2 group when compared to free access. Colpoys et al. observed a decrease in ADFI in gilts fed two meals daily when compared to free access [5]. Newman et al. observed a tendency for boars fed two 1-h meals to eat less than those fed ad libitum [4]; however, the ADFI was not significantly changed when boars were fed two 90-min meals. The inconsistent results may partly be due to the differences in the sex of the pigs used in different studies. In addition, the different growth period of the pigs used in this study may partly explain the differences from earlier studies [2][3][4]. Schneider et al. pointed out that feeding two or six meals daily had the same effect on the growth performance of gestating gilts [21], but the average daily gain (ADG) of gilts was increased from day 0 to 42. In pigs maintained on a high fat dietary regimen, two meals per day decreased fat deposition when compared to twelve meals per day [22]. However, Hatori et al. showed that an increase in the feeding frequency significantly increased the ADG of rats fed a normal diet or a high-fat diet [18]. This suggests that different feeding regimens may have different effects on the growth performance of animals of different species, sexes, and physiological stages. Both the diet composition and the frequency of daily feeding could affect growth performance.
The accumulation of triglyceride-rich lipoproteins (TRLs) in blood is believed to be related to the occurrence of atherosclerotic dyslipidemia. The blood TG level and/or the remnant cholesterol (remnant cholesterol = T-CHO − LDL-C − HDL-C) reflect the level of TRL [23]. In our study, the two times feeding regimen significantly decreased the concentrations of serum T-CHO, HDL-C, and LDL-C. In addition, pigs on the M2 regimen showed a decreased TG concentration in the serum and liver, which is consistent with the results of previous studies on pigs or rodents [2,18,19]. Our study supports the idea that different feeding regimens can affect lipid metabolism.
To gain insights into the mechanisms underlying this process, transcriptomic analysis was conducted to identify the DEGs between the M1 and M2 regimens. In the liver, only the arginine biosynthesis pathway was enriched by the M2 regimen; however, in the jejunal mucosa, the significantly enriched pathways included glycine, serine, and threonine metabolism, the PPAR signaling pathway, fat digestion and absorption, vitamin digestion and absorption, and hematopoietic cell lineage, which suggests that the different feeding frequencies in our experiment appeared to have a more pronounced effect on body metabolism at the intestinal level.
The NPC1L1 transmembrane protein in the intestinal villi is responsible for the efficient and specific transport of cholesterol into the absorbing cells. Previous studies showed that NPC1L1 gene ablation reduced the absorption of cholesterol by the small intestine and increased cholesterol synthesis in the liver [24][25][26]. A previous study found that the concentration of cholesterol was significantly increased in mice transgenic for a human NPC1L1 gene [27], while NPC1L1-deficient mice exhibited a drastic reduction of dietary cholesterol absorption [28,29]. The GOT2 gene, known as plasma membrane-associated fatty acid-binding protein (FABPpm), can promote the transport and metabolism of fatty acids. It was reported that FABPpm participated in fatty acid metabolism by transporting long-chain fatty acids into the cell [30]. Acetyl-CoA acetyltransferase 2 (ACAT2) can catalyze the esterification of cholesterol absorbed into cells to form cholesteryl esters [31]. ACAT2 deficiency in the intestine reduced the efficiency of cholesterol absorption and of its transport by chylomicrons, as well as the absorption of dietary cholesterol [32]. Under the action of apolipoprotein APOB48 and MTTP, cholesteryl esters are assembled together with triglycerides, phospholipids, and a small portion of free cholesterol to form chylomicrons, which can be secreted into the lymphatic circulation through the basement membrane [33][34][35]. In the present study, the genes NPC1L1, FABP1, MOGAT2, and the apolipoproteins (APOA1, APOA4, and APOB) were upregulated by the M2 regimen in the fat digestion and absorption pathway, combined with the upregulation of the genes PLB1, RBP2, and the apolipoproteins (APOA1, APOA4, and APOB) in the vitamin digestion and absorption pathway, which indicates that the M2 regimen promoted the formation and transport of chylomicrons into the bloodstream and the digestion of fat in the small intestine (Figure 4).
Long-chain acyl-coenzyme A synthetase (ACSL) catalyzes the first step in the activation of intracellular fatty acid metabolism by converting long-chain fatty acids into long-chain acyl-CoAs [36]. The ACSL3 gene can promote the synthesis of lecithin and the formation of intracellular lipid droplets, participate in fatty acid oxidation, and maintain lipid droplet formation in two metabolic pathways [37]. ACSL5 is a key enzyme in the process of fatty acid beta oxidation and of triglyceride synthesis and metabolism in the animal body. Cao et al. found that upregulating transcription of the ACSL3 gene with a cytokine tumor suppressor in HepG2 cells enhanced β-oxidation and reduced the TG content of the cells [38]. Acetyl-CoA acyltransferase (ACAA), also known as 3-ketoacyl-CoA thiolase, includes two subtypes, ACAA1 and ACAA2, which catalyze the last step of fatty acid beta oxidation. Previous studies have shown that an increase in ACAA activity promotes the oxidation of fatty acids [39,40]. The ACAA1 gene is mainly involved in physiological and biochemical processes such as the oxidation of very long-chain fatty acids, bile acid metabolism, and regulation of peroxisome proliferation [41]. In the present study, we found that the M2 regimen significantly upregulated the expression of the genes ACSL3 and ACSL5 in the pig jejunum, which suggests that two times feeding daily could increase fatty acid β-oxidation (Figure 4).
FABP1 (L-FABP) can be affected by PPAR and in turn affect PPARα [42]. FABP can assist in the transport of fatty acids to the mitochondria to enter the β-oxidation pathway and to the endoplasmic reticulum to enter the triglyceride esterification pathway. A previous study on mice fed a high-fat diet indicated that the L-FABP gene promoted long-chain fatty acid oxidation and inhibited weight gain and obesity [43]. The over-expression of L-FABP could significantly increase the uptake of long-chain and medium-chain fatty acids into the nucleus [44]. When L-FABP binds to PPARα, it can enhance the transcriptional activity of PPAR towards long-chain fatty acid oxidase, regulate the utilization and metabolism of intracellular fatty acids, and maintain the relative balance of fatty acid metabolism in vivo [44]. In adipocytes, Perilipin (PLIN) anchors on the surface of the lipid droplets to produce a barrier effect, which shields TG from lipohydrolases [45,46]. In the present study, the PLIN genes were downregulated, so their barrier effect was weakened and lipohydrolase activity increased; the lipid droplets then became significantly smaller, which accelerated fat decomposition and reduced lipid deposition in cells. In addition, phosphoenolpyruvate carboxykinase (PCK), a rate-limiting enzyme of hepatic gluconeogenesis, was also upregulated in the M2 group. Therefore, the results suggest that the two times feeding regimen can increase fatty acid β-oxidation and reduce fatty acid accumulation (Figure 4). However, Liu et al. found that the protein abundances of PCK2 in glucose and energy metabolism and of APOA1, APOB, FABP1, MTTP, and GOT2 in lipid metabolism were significantly higher in pigs fed two meals a day than in pigs fed twelve meals a day [8]. The inconsistent results between the two studies may be partly due to the different feeding regimens and physiological stages of the pigs.
Interestingly, the transcriptome data of this study revealed significant changes in the arginine biosynthesis pathway in the livers of pigs. In the enriched arginine biosynthesis pathway, the ARG2 and NAGS genes in the M2 group were upregulated when compared with the M1 group. These genes contribute to the synthesis of glutamic acid, glutamine, and proline in the small intestine of pigs, which are precursors for the synthesis of citrulline and arginine [47]. Arginine can detoxify ammonia through the urea cycle, promote insulin and growth hormone secretion, and regulate nutrient metabolism, and it has been widely used as an immune nutrient in clinical practice [48]. However, there was no difference in the growth performance between the two groups, and the influence of feeding frequency on immune function needs further study. A study of proteomics in the liver revealed 35 differentially expressed proteins between pigs fed two and twelve meals per day, which were involved in the regulation of glucose and energy metabolism, lipid metabolism, protein and amino acid metabolism, and other general biological processes [8]. The inconsistent results between the two studies may be partly due to the different feeding frequencies and diet compositions used in the pig trials. In addition, the present study mainly compared the body's response to different feeding frequencies at the transcriptomic level; therefore, enzyme activities or protein expression need to be determined in further studies.
Conclusions
The present study investigated the transcriptomic responses induced in the livers and jejunal mucosa of growing pigs by different feeding frequencies. We found that, when compared with one time feeding daily, the two times feeding regimen had no significant effect on the growth performance of growing pigs with the same average daily feed intake. The two times feeding regimen reduced the concentration of triglycerides in the serum and liver and affected body metabolism by promoting lipid transport, lipogenesis, fatty acid oxidation, chylomicron formation and transport, and gluconeogenesis, and by inhibiting adipocyte differentiation. These findings support the idea that different feeding regimens affect lipid metabolism and can be effective as nutritional strategies to prevent metabolic dysfunction.
Conflicts of Interest:
The authors declare no conflicts of interest.
Table A1. Description of primers used in the RT-qPCR analysis of gene expression.
The combined effect of cigarette smoking and occupational noise exposure on hearing loss: evidence from the Dongfeng-Tongji Cohort Study
The combined effect of cigarette smoking and occupational noise exposure on hearing loss has rarely been evaluated among the Chinese population, especially among females. This cross-sectional study was conducted in 11196 participants of the Dongfeng-Tongji cohort study. Smoking status was self-reported through a questionnaire and occupational noise exposure was evaluated through workplace noise level and/or the job titles. Hearing loss was defined as a pure-tone mean of 25 dB or higher at 0.5, 1, 2, and 4 kHz in both ears. Compared with participants without occupational noise exposure, the risk of hearing loss was significantly higher for noise exposure duration ≥20 years (OR = 1.45, 95%CI = 1.28–1.65). The association was particularly evident among individuals who were male (OR = 1.74, 95%CI = 1.45–2.08) and aged ≥70 (OR = 1.74, 95%CI = 1.30–2.33). Similarly, the risks increased with increasing pack-years in males and all age groups except for those aged <60. As to the combined effect, the hearing loss risk was highest for noise exposure duration ≥20 years and pack-years ≥25 (OR = 2.41, 95%CI = 1.78–3.28), especially among males (OR = 2.42, 95%CI = 1.74–3.37) and those aged ≥70 (OR = 2.76, 95%CI = 1.36–5.60). Smoking may be an independent risk factor for hearing loss, and it may synergistically affect hearing when combined with occupational noise exposure, especially among males and older participants.
noise was thought to be a key reason for hearing loss 11 . Noise-induced hearing loss (NIHL) is a change in the hearing threshold at different frequencies; it is chronic and characterized as sensorineural, usually symmetrical and bilateral 12 . NIHL can bring a lot of inconvenience to people in later life and causes a large social and economic burden, and hearing loss due to occupational noise has been described as a primary condition of modern society 13 . Published papers reported that an estimated 278 million people in the world had disabling hearing impairment 14 , and the economic costs have been estimated to be billions of dollars 15 . Although NIHL is permanent and irreversible, it is still preventable 16 .
Besides the occupational noise mentioned above, several other risk factors are also reported to be associated with hearing loss, such as age, sex, race, ototoxic medication, hypertension, and diabetes mellitus 17,18 . In addition, cigarette smoking could lead to elevated susceptibility to noise damage, as well as causing its own damage to the hearing system 19 . Cigarette smoking is a common habit in the world; according to the World Health Organization, China has about 320 million smokers, who represent one third of the world's total smokers 20 . As a modifiable lifestyle factor, the association between cigarette smoking and hearing loss has received more attention in recent years. However, the results of the published reports are inconsistent.
Several epidemiology studies have found a positive association between smoking and hearing loss [21][22][23] , but others do not support a uniform relationship 24,25 . The association between smoking and hearing loss has rarely been evaluated among female participants and evidence is limited in China. Moreover, the combined effect involving smoking and occupational noise exposure on hearing loss is noticeable due to the high prevalence of smoking among occupational workers in China 26 . Therefore, we conducted a cross-sectional study to examine the independent and combined effects of smoking and occupational noise exposure on hearing loss in a large middle-aged and older Chinese population, especially to explore the association among female participants.
Results
Descriptive. Characteristics of the 11196 participants included in the analysis were reported by categories of hearing loss (Table 1). Among them, 54.8% of participants were females, and 84.1% aged over 60. Overall, 34.7% were exposed to occupational noise (38.0% for male, 1922/5060; 32.1% for female, 1969/6136) and 14.7% were current smokers (37.2% for male, 1883/5060; 2.9% for female, 180/6136). The number of pack-years of smoking ranged from 0 to 195, with an average of 34.6 pack-years for current smokers and 24.9 for ex-smokers. The prevalence of hearing loss among all participants was 61.5% (72.9% for male and 52.1% for female). We observed pronounced differences in hearing loss prevalence by demographic characteristics. Prevalence of hearing loss was higher among men, aged over 70, current drinkers and subjects with hypertension, diabetes mellitus, coronary heart disease, myocardial infarction, and stroke. Current smokers and those exposed to occupational noise for 20 years or more were more inclined to have hearing loss.
Occupational noise exposure and hearing loss. Table 2 presents the odds ratios (ORs) and 95% confidence intervals (CIs) for the effect of occupational noise exposure on hearing loss. Compared with participants not exposed to occupational noise, the risk of hearing loss was significantly higher in the group with noise exposure duration of 20 years or more (OR = 1.45, 95%CI = 1.28-1.65), after adjusting for potential confounders. Stratified analyses revealed that the association between the longest noise exposure duration group (≥20 years) and hearing loss was significant in each sex and age group except for those aged less than 60. The associations were more pronounced in males (OR = 1.74, 95%CI = 1.45-2.08) and those aged over 70 (OR = 1.74, 95%CI = 1.30-2.33). Meanwhile, the association was also found among females (OR = 1.21, 95%CI = 1.01-1.44) and those aged 60 ~ <70 (OR = 1.37, 95%CI = 1.17-1.61).
Smoking and hearing loss. The effect of smoking status on hearing loss was revealed in Table S1. Compared with nonsmokers, current smokers had higher risk of hearing loss (OR = 1.38, 95%CI = 1.20-1.59). Stratified analyses indicated that the association was significant in males (OR = 1.34, 95%CI = 1.14-1.58) and all age groups but the oldest age group (≥70). Besides, it was not significant in females either (OR = 1.29, 95%CI = 0.92-1.82).
In order to explore whether the risk varied by amount of smoking exposure, the association between pack-years of exposure and hearing loss was also further evaluated in Table 3. The odds ratios increased with the increasing of pack-years, and the highest exposure category (≥25 pack-years) got the highest risk (OR = 1.42, 95%CI = 1.21-1.66). In the stratified analysis, the similar trend was also found in males, and all age groups except for the youngest age group (<60). Besides, the association between pack-years of smoking and hearing loss was also found among females, the risk of hearing loss was statistically significant (OR = 1.54, 95%CI = 1.02-2.33) in the median exposure category (0 ~ <25 pack-years), not the highest exposure group (OR = 1.42, 95%CI = 0.71-2.84).
Combined association of occupational noise exposure and smoking on hearing loss. We further explored the combined association of occupational noise exposure and smoking on hearing loss (Table S2). Compared with non-smoking participants not exposed to occupational noise, all other groups had higher risks of hearing loss except for ex-smokers not exposed to occupational noise, and those exposed to occupational noise who smoked had the highest risk (OR = 1.96, 95%CI = 1.60-2.41). Meanwhile, the combined associations were also found in the subgroup analyses, especially among males (OR = 2.06, 95%CI = 1.63-2.59) and the youngest age group (OR = 2.74, 95%CI = 1.41-5.34).
No occupational noise exposure and no smoking were treated as the reference group. The association of pack-years of smoking and noise exposure duration was further assessed to test the dose-response relationship (Table 4). Individuals with a longer noise exposure duration (≥20 years) and more pack-years of smoking (≥25 pack-years) had the highest risk (OR = 2.41, 95%CI = 1.78-3.28). In the subgroup analyses, the combined associations were also found in males (OR
Discussion
To the best of our knowledge, few studies have evaluated the association of smoking and hearing loss among female subjects, because of the low percentage of smokers in this group. We also explored the independent and combined dose-response associations of occupational noise exposure and smoking with hearing loss. Moreover, the data of our study were based on a large middle-aged and older Chinese population, which could provide more reliable results.
In the present study of middle-aged and older Chinese retired workers, occupational noise exposure and cigarette smoking, independently or in combination, were found to be associated with an increased risk of hearing loss. High risks of hearing loss were observed among participants with long-term exposure to occupational noise and/or more pack-years of smoking, even after controlling for other confounders. Long-term exposure to occupational noise can damage hair cells of the organ of Corti directly, resulting in irreversible and progressive hearing loss 7 , which is known as noise-induced hearing loss (NIHL). Besides, our results also revealed that smoking may be an independent risk factor for hearing loss and may have a dose-response relationship with hearing loss, which is consistent with part of the previous studies [27][28][29] . The divergent outcomes may be due to the different definitions and criteria of NIHL among studies. Though the mechanism by which cigarette smoking increases the risk of hearing loss is not clear, it has been suggested that smoking may damage hair cells by increasing carboxyhaemoglobin or by reducing blood flow to the cochlea 23,30,31 .
Our study showed that the combined effect of smoking and occupational noise exposure was greater than their independent effects. Individuals with the longest noise exposure duration and the maximum pack-years had the highest risk in the present study, which has also been reported in other studies. Pouryaghoub et al. in Iran found that smoking could accelerate noise-induced hearing loss 32 , and Ferrite and Santana in Brazil found that smoking may synergistically affect hearing when in combination with noise exposure 33 . Mizoue et al. in Japan also found that smoking was a risk factor for high frequency hearing loss and had a positive combined association with occupational noise exposure 19 . However, these studies were conducted among male workers, not including female workers. The results of our study showed that the prevalence of hearing loss in females was lower than that in male participants. However, increased risks of hearing loss were also observed among female participants with cigarette smoking and long-term exposure to occupational noise. The low percentages of smoking (male vs female, 37.2% vs 2.9%) and of exposure to occupational noise (male vs female, 38.0% vs 32.1%) may contribute to this phenomenon in females.
In the present study, we also explored the independent and combined associations in subgroups of sex and age. For the independent associations, we did not find significant associations between the maximum pack-years of smoking (≥25 pack-years) and hearing loss among females and the youngest age group (<60). Meanwhile, the combined associations were not found in the aforementioned two groups either. However, these associations could be found in the other female (0 ~ <25 pack-years) and age groups (60 ~ <70 and ≥70). This may mainly be due to the small number of participants in these categories (ranging from 4 to 58), resulting in limited statistical power. Thus, the independent and combined associations among females and the youngest age group (<60) need to be confirmed in future studies with larger sample sizes.
Some limitations in the present study should also be acknowledged. First, our study had a cross-sectional design, which restricts causal inference. Second, although a number of confounders were adjusted for in our study, some other factors were not included, such as leisure-time noise exposure, which has also been reported to be associated with hearing loss 17 . However, the sample size of the present study was large, which could reduce possible bias. Third, the results of our study were limited to middle-aged and older adults, and thus may not be generalizable to populations of all ages.
In summary, the present study reveals that smoking is an independent risk factor for hearing loss. And it may synergistically affect hearing when in combination with occupational noise exposure, especially among males and older participants. The associations among females and the youngest age group need to be confirmed in future studies with larger sample size.
Methods
Study population. The study is embedded in the Dongfeng-Tongji Cohort Study, a population-based cohort study aimed at assessing the relationship of dietary, lifestyle, occupational and environmental factors with the development of chronic diseases, which has been reported elsewhere 34 . Briefly, all the participants were retired workers of Dongfeng Motor Corporation (DMC).
Table 3. Odds ratios (95% CIs) of hearing loss by pack-years of smoking. * Unadjusted. † Adjusted for age/sex. ‡ Adjusted for age/sex, race, shift work, occupational noise exposure, drinking status, hypertension, ototoxicity medicine, chronic diseases (diabetes mellitus, coronary heart disease, myocardial infarction and stroke). # The sample sizes vary slightly because of missing data.
Auditory measures and ascertainment of hearing loss. Each subject was given a general physical and an otologic examination first. Then pure-tone audiometry was performed in a sound-isolated room with a calibrated pure-tone audiometer (Micro-CD21) by certified audiologists. Air conduction thresholds were determined for each ear at 0.5, 1, 2, 4 and 8 kHz across an intensity range of −10 to 120 dB. The non-responses were coded as missing values. As occupational hearing loss is usually supposed to be bilateral and fairly symmetrical 35 , it was defined as a pure-tone mean of 25 dB or higher at 0.5, 1, 2, and 4 kHz in both ears 36,37 .
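The case definition above reduces to a pure-tone average over 0.5, 1, 2 and 4 kHz of at least 25 dB in both ears; a minimal sketch of that classification follows, using hypothetical thresholds rather than study data:

def pure_tone_average(thresholds_db):
    """Mean air-conduction threshold (dB HL) over 0.5, 1, 2 and 4 kHz for one ear."""
    keys = (0.5, 1, 2, 4)
    return sum(thresholds_db[k] for k in keys) / len(keys)

def has_hearing_loss(left_ear, right_ear, cutoff_db=25):
    """Hearing loss as defined in this study: pure-tone average >= 25 dB in both ears."""
    return pure_tone_average(left_ear) >= cutoff_db and pure_tone_average(right_ear) >= cutoff_db

# Hypothetical audiogram, dB HL by frequency in kHz
left = {0.5: 30, 1: 35, 2: 40, 4: 55}
right = {0.5: 25, 1: 30, 2: 35, 4: 50}
print(has_hearing_loss(left, right))  # True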
Noise exposure assessment. The occupational information of each subject was self-reported and obtained from the questionnaire, which contained personal data on employment and variables about the work, such as the corporation, the job title and the duration of each job. The occupational noise level for each job title at the workplace came from the company records, which were measured by qualified institutions. Noise exposure levels for workplaces outside of DMC were determined through the job description in consultation with local occupational hygienists. Occupational noise exposure was defined as exposure to a time-weighted-average (TWA) noise level of 80 dB (A) for at least a year 29 . The years of occupational noise exposure were divided into four groups: 0, 1 ~ <10, 10 ~ <20, ≥20.
Smoking assessment. Smoking status was self-reported and was divided into three groups: current smokers, ex-smokers and nonsmokers. Individuals smoking at least one cigarette per day for more than half a year were defined as current smokers. Nonsmokers were defined as those who seldom or had never smoked in their lifetime, and ex-smokers were those who had ever smoked but did not smoke at present. The number of cigarettes smoked or ever smoked per day and the duration of smoking were also recorded for current smokers and ex-smokers. Total pack-years were defined as the number of cigarettes smoked per day divided by 20 per pack, and then multiplied by years of smoking 21 .
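The two exposure variables defined above are simple arithmetic and grouping rules; the sketch below (with hypothetical inputs) mirrors those definitions:

def pack_years(cigarettes_per_day, years_smoked):
    """Pack-years = cigarettes per day / 20 (cigarettes per pack) * years of smoking."""
    return cigarettes_per_day / 20 * years_smoked

def noise_duration_group(years_exposed):
    """Occupational noise exposure duration groups used in the analysis."""
    if years_exposed < 1:
        return "0"
    if years_exposed < 10:
        return "1~<10"
    if years_exposed < 20:
        return "10~<20"
    return ">=20"

print(pack_years(20, 30))        # 30.0 pack-years
print(noise_duration_group(22))  # '>=20'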
Ethical approval. The study was approved by the Ethics and Human Subject committee of Tongji Medical College, and Dongfeng General Hospital, DMC. The methods were carried out in accordance with the relevant guidelines and regulations. The written informed consents were obtained from all the participants.
Covariates. Information on sociodemographic characteristics (sex, age, and race), shift work, drinking status, hypertension, ototoxic medication, and chronic disease history was collected through a questionnaire in face-to-face interviews with trained interviewers. As hearing loss varies by age 17 , age was classified into three groups: <60, 60 ~ <70, ≥70. Individuals who drank at least one time per week for more than half a year were defined as current drinkers, and non-drinkers were defined as those who seldom or had never drunk in their lifetime. Hypertension was defined as blood pressure ≥140/90 mmHg, self-reported physician diagnosis of hypertension, or self-reported current use of antihypertensive medication. Use of ototoxic medication was counted when participants reported medications of loop diuretics, aminoglycosides or non-steroidal anti-inflammatory drugs. Chronic disease history diagnosed by a physician was reported by the participants, including diabetes mellitus, coronary heart disease, myocardial infarction, and stroke.
Statistical Analysis. Sociodemographic characteristics of participants were reported as mean (SD) for continuous variables and as number (percentage) for categorical variables. Logistic regressions were performed to evaluate the independent and combined associations of occupational noise exposure and smoking with hearing loss. The associations were then further evaluated stratified by sex and age, based on previously published reports suggesting that age and sex may be important factors for hearing loss 21,38 . The models were conducted with those who did not smoke and had no occupational noise exposure as the reference group. We chose covariates that could affect hearing loss according to evidence from the published literature 17,21 . Covariates included age, sex, race, shift work, drinking status, hypertension, ototoxicity medicine, and chronic disease history (diabetes mellitus, coronary heart disease, myocardial infarction and stroke). All statistical analyses were performed using SAS version 9.2 software (SAS Institute Inc., Cary, NC). The statistical tests were two sided, and significance was set at P < 0.05.
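The adjusted odds ratios reported here come from logistic regression; as a rough sketch of the computation (the original analysis used SAS 9.2), the example below fits a logistic model on a hypothetical data frame with statsmodels and exponentiates the coefficients to obtain ORs and 95% CIs. All variable names and data are illustrative, not the study data set:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis data set mimicking the adjusted model described above
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "hearing_loss": rng.integers(0, 2, n),     # 1 = hearing loss, 0 = normal
    "noise_ge20y": rng.integers(0, 2, n),      # >= 20 years of occupational noise exposure
    "current_smoker": rng.integers(0, 2, n),
    "age": rng.integers(55, 85, n),
    "male": rng.integers(0, 2, n),
})

model = smf.logit("hearing_loss ~ noise_ge20y + current_smoker + age + male", data=df).fit(disp=False)
ci = model.conf_int()
or_table = pd.DataFrame({"OR": np.exp(model.params),
                         "CI_low": np.exp(ci[0]),
                         "CI_high": np.exp(ci[1])})
print(or_table.round(2))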
Data Availability. The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Boundary States and Symplectic Fermions
We investigate the set of boundary states in the symplectic fermion description of the logarithmic conformal field theory with central charge c=-2. We show that the thus constructed states correspond exactly to those derived under the restrictions of the maximal chiral symmetry algebra for this model, W(2,3,3,3). This connects our previous work to the coherent state approach of Kawai and Wheater.
Introduction
During the last 20 years, conformal field theory (CFT) in two dimensions [1] has become a very important tool in theoretical physics. In particular, two different directions are the subject of current interest: The study of critical systems on surfaces involving boundaries led to a good knowledge of the so-called boundary CFTs (BCFT) [2,3,4,5]. On the other hand, already in 1991, Saleur showed the existence of density fields with scaling dimension zero occurring in the treatment of dense polymers [6]. These fields may cause the existence of operators yielding logarithmically diverging correlation functions. The two subjects, BCFT and logarithmic conformal field theory (LCFT), enjoy increasing popularity in both condensed matter physics and string theory. Even though there has been much progress in the field, LCFTs are not yet completely understood. However, it has been found that many properties of ordinary rational CFTs can be generalized to LCFT, such as characters, partition functions and fusion rules, see, e. g., [7,8,9,10,11,12,13,14] and [15,16] for some recent reviews. In ordinary CFTs, especially in unitary minimal models, the presence of a boundary is mathematically and physically described by a standard procedure introduced by Ishibashi [2] and Cardy [3] that allows one to derive boundary states encoding the physical boundary conditions. Unfortunately, LCFTs involving a boundary happen to be more difficult to treat. There have been different approaches towards a consistent description of boundary LCFTs in terms of boundary states, which first emerged about two years ago [18,19,20,21], see also [22,23]. LCFT in the vicinity of a boundary is also dealt with in [24,25]. All those works focus on the best understood example of an LCFT, the c = −2 realization with the maximally extended chiral symmetry algebra W(2, 3, 3, 3). The earlier results are different and partly contradictory. The most successful seem to be the ideas of Kawai and Wheater [20], using symplectic fermions and coherent states, and our own [21]. The concept of symplectic fermions was first introduced by Kausch [17] in order to describe the rational c = −2 (bulk) LCFT. In our own work, a general, very basic approach towards the derivation of boundary states in the case of the W-algebra is presented that allows one to handle complicated structures such as indecomposable representations in LCFTs. This letter is positioned exactly at this point. We show that the two different symmetries -the symplectic fermions vs. the W(2, 3, 3, 3)-algebra -lead to the same set of boundary states. In particular, the former one, though extending the latter, implies no additional restrictions on the boundary states. By this, we can show that the coherent state approach is fully equivalent to ours, yielding the same results. This corresponds to the presumption of Kawai [23] that the coherent states are indeed as good as taking the usual Ishibashi states. The paper proceeds as follows: In section 2, a short introduction to the rational c = −2 LCFT is given both for the W(2, 3, 3, 3)-algebra and in the symplectic fermion picture. Then, sections 3 and 4 review the results of Kawai and Wheater and those deduced by us. In section 5, the boundary states for the symplectic fermion symmetry algebra are derived using the method of [21] and compared to both of the previous results. Finally, section 6 concludes the paper with a short discussion.
The model
The CFT realization at c = −2 is based on the extended chiral symmetry algebra W(2, 3, 3, 3) consisting of the energy-momentum tensor L(z) and a triplet of spin-3 fields W a (z). With the two quasi-primary normal-ordered fields Λ = : L 2 : −3/10 ∂ 2 L and V a = :LW a : −3/14 ∂ 2 W a the commutation relations for the corresponding modes read: +f ab Here,ĝ ab is the metric andf ab c are the structure constants of su (2). It is convenient to arrange the fields in a Cartan-Weyl basis W 0 , W ± . In this framework, we haveĝ 00 = 1, The algebra yields a set of six representations that close under fusion. There are four ordinary highest weight representations: V 0 is based on the vacuum state Ω with weight h = 0, V −1/8 emerges from the state µ with weight h = −1/8. Then, one has two doublet representations V 1 based on the states φ ± and V 3/8 built from ν ± . Furthermore, two indecomposable or generalized highest weight representations R 0 and R 1 emerge. They base on the states ω and ψ ± , respectively. These states form rank-2 Jordan blocks in L 0 together with the states Ω and φ ± . Thus, V 0 and V 1 are subrepresentations of R 0 and R 1 , respectively. R 0 also contains two subrepresentations of type V 1 built on the states For the bulk states of the R 0 and R 1 we use the metric of [21] that reads: where d and t are in principle arbitrary real numbers. The fusion rules for this model read: Here, m and n can take the values 0, 1. From (4) one reads off that R 0 , R 1 , V −1/8 , V 3/8 is a sub-group closed under fusion itself. The characters for the model are given by: Note, that the physical characters are only χ V −1/8 , χ V 3/8 and χ R forming a three-dimensional representation of the modular group that corresponds to the above-mentioned subgroup. Here, η(q) = q 1/24 n∈N (1 − q n ) is the Dedekind eta function and Θ r,2 (q) and (∂Θ) 1,2 (q) are the ordinary and affine Riemann-Jacobi theta functions: In ordinary CFTs, the characters coincide with the torus amplitudes. Here, this is no longer the case: The torus amplitudes form a slightly larger, five-dimensional representation of the modular group. It reads: This representation was analyzed by Flohr [7]. There, the S-matrix transforming the "characters" under τ −→ −1/τ was constructed and it was shown that it yields the fusion rules (4) only in the limit α −→ 0 under which the logarithmic term in (7) vanishes. However, in this limit, S became singular. There exists an explicit Lagrangian formulation for the c = −2 LCFT based on two fermionic fields η and ξ of scaling dimension 1 and 0, respectively: This is the fermionic ghost system at c = −2 with the operator product expansions All other products are regular. Kausch [17] showed, that these two fields combine into a two-component symplectic fermion The choice assures that χ + and χ − have the same conformal weight h = 1. This description differs from the ghost system only by the treatment of the zero modes in χ − and ξ. The fermion modes are defined by the usual power series expansion where λ = 0 in the untwisted (bosonic) sector and λ = 1 2 in the twisted (fermionic) sector. The modes satisfy the anticommutation relations with the totally antisymmetric tensor ε ±∓ = ±1. 
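For orientation, the standard conventions for the Riemann-Jacobi theta functions and the symplectic fermion modes in this c = −2 model can be written as follows; this is a reconstruction under the usual conventions, which may differ from the original normalizations:

\Theta_{r,k}(q) = \sum_{n \in \mathbb{Z}} q^{(2kn+r)^2/(4k)}, \qquad
(\partial\Theta)_{r,k}(q) = \sum_{n \in \mathbb{Z}} (2kn+r)\, q^{(2kn+r)^2/(4k)},

\chi^{\alpha}(z) = \sum_{n \in \mathbb{Z}+\lambda} \chi^{\alpha}_{n}\, z^{-n-1}, \qquad
\{\chi^{\alpha}_{m}, \chi^{\beta}_{n}\} = m\, \varepsilon^{\alpha\beta}\, \delta_{m+n,0}.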
The Virasoro modes and the modes of the three spin-3 fields W^a(z) of W(2, 3, 3, 3) decompose into bilinears of the symplectic fermion modes [8,9,15]. The highest weight states become related to each other by the fermion symmetry: in the twisted sector, the doublet states of weight h = 3/8 are connected to the singlet at weight h = −1/8 by ν^α = χ^α_{−1/2} μ, and the states of weight 0 in the untwisted sector are related in an analogous way. Thus, this additional symmetry intertwines the representation R_0 with R_1 and V_{−1/8} with V_{3/8}.
Approach 1: Coherent boundary states
Starting point for any derivation of boundary conditions is the absence of energy-momentum flow across the boundary and corresponding gluing conditions for the extended symmetry fields. On a cylinder, the boundary conditions are identified with an initial and final state of a propagating closed string: the boundary states B . After radial ordering and in the framework of symplectic fermions this yields the following consistency equations: where φ is a phase that occurs in the gluing condition of χ and χ. The latter equation implies the first one due to (13). Kawai Here, N is a normalization factor and 0 φ is a non-chiral ground state. The boundary states were designed in such a way that they are compatible with the W-algebra and thus obey (14) and This implies that the phase φ can only take the values φ = 0 and φ = π. Therefore, the non-chiral ground states are given by the "invariant vacua" Ω ⊗ Ω , ω ⊗ ω , µ ⊗ µ . This yields six possible boundary states, denoted by (+) if φ = 0 and (−) for φ = π: The corresponding cylinder amplitudes are given by the natural pairings B q C = B q H C = B (q 1/2 ) (L 0 +L 0 +1/6) C . For the interesting (untwisted) sector, they are The different factors and signs in contrast to [20] arise due to our different normalization of the metric. To get rid of the unphysical terms proportional to log(q)Θ 1,2 (q), one of the states B ω± was discarded and the physical boundary conditions were derived with this reduced set. This was possible according to the 2 symmetry φ −→ φ + π mod 2π. Candidates for the Ishibashi states were deduced by diagonalizing the cylinder amplitudes, i. e., i q j = δ ij χ i (q). However, it was not possible to express the physical boundary states in terms of this basis. Kawai and Wheater proposed the following five states and five corresponding duals: The (ket-)states form only a four-dimensional space. Especially, R is associated to the indecomposable representations but only built on the subrepresentations. It is evident that the states B ω± cannot obey equation (15) without further restrictions because they are based on the state ω ⊗ ω which is obviously not a proper ground state: unless the right-hand side state is discarded as in the unique local c = −2 LCFT [12]. There, a chiral and an anti-chiral version of the rational c = −2 LCFT are glued together to obtain a non-chiral theory. In order to keep locality of the correlators, certain states had to be divided out, namely the image of (L 0 − L 0 ). This was not mentioned by Kawai and Wheater. It is shown in the following that their considerations are indeed compatible with the result of [21] and lead to the same results if starting from the "vacua" of the complete chiral theory.
Approach 2: Boundary states for the W-algebra
In [21] the span of boundary states under the constraints of the W(2, 3, 3, 3)-algebra was derived. This was done by inventing a straight-forward method that uses only basic properties of the theory and its representations. Due to that it was possible to keep especially the inner structure of the indecomposable representations R 0 and R 1 and their subrepresentations visible. This allowed to find relations between the derived states. Ten boundary states were identified: The states V −1/8 and V 3/8 corresponding to the admissible irreducible representations V −1/8 and V 3/8 are the usual Ishibashi states for these modules. For the indecomposable representations R λ , to stay close to the usual notions, the definition of the Ishibashi states was generalized. The two states are called generalized Ishibashi states. Here, l, m ; l = h, h + 1, . . . , m = 1, . . . is an arbitrary basis over the representation R λ where l counts the levels beginning from the top-most, which is h = 0 in our case. The basis states on each level of the representation are counted by m. Similarly, l, n is the basis for the anti-holomorphic module R λ . The matrix γ λ was identified to be the inverse metric on R λ . In ordinary CFTs, these bases can be chosen orthonormal and then the result would coincide with the usual Ishibashi state. It was argued in [21] that this is not applicable here. The Ishibashi states corresponding to the two subrepresentations V 0 and V 1 were derived with the help of an operatorN =δ +δ, whereδ is the off-diagonal part of L 0 that was considered to be in Jordan form. Since there are rank-2 Jordan cells at most,δ 2 = 0 and thus,N 3 = 0. It was argued that the states do not vanish and fulfill (14) and (17), i. e., are properly defined boundary states. These are called level-2 Ishibashi states and contain only contributions from the corresponding subrepresentations.
In addition, two doublets of states were found that glue together the two different indecomposable representations R 0 and R 1 at the boundary. They were given in terms of operatorsP andP † that intertwine the two representations and have the following action on the (bulk) states: This yields the so-called mixed Ishibashi states R ± 01 and R ± 10 : These relations can be drawn schematically. It is not quite unexpected that there is a one-to-one correspondence to the embedding scheme of the local theory [12]: The states that are divided out there are due to (14) exactly those that do not contribute to the boundary states. figure 1: boundary states vs. local theory The lines in the left picture in figure 1 refer to the action ofN ,P,P † ,P, andP † while in the right one they denote the action of the (non-chiral) symmetry algebra. The non-vanishing natural pairings of the boundary states are given by These coincide with the physical characters forming the three-dimensional representation of the modular group. The torus amplitudes, on the other hand, are first seen with the help of additional, so-called weak boundary states X λ and Y λ , λ = 0, 1, that obey These states could be chosen uniquely in such a way that they serve as the duals to the null states V λ obtaining Obviously, this does not exactly reproduce the elements of the five-dimensional representation given in (5) but rather linear combinations of them and the unphysical contribution log(q)Θ 1,2 (q). This has to be taken care of when calculating physical relevant boundary conditions with the help of Cardy's consistency equation.
With the general method
The method presented in [21] provides an efficient tool for the investigation of the boundary states under the restrictions of the symplectic fermion algebra. It is based on a general ansatz for a boundary state connecting a holomorphic representation M_h and an anti-holomorphic representation M̄_h̄ at the boundary, with a coefficient matrix c. The task is to calculate this matrix directly, which is done in an iterative procedure.
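Schematically, and with the caveat that the precise normalization is an assumption rather than a reproduction of the paper's equation (29), the ansatz takes the following form, using the basis notation of the generalized Ishibashi states described in the previous section:

\[
|B\rangle \;=\; \sum_{l}\sum_{m,n} c^{\,l}_{mn}\, |l,m\rangle \otimes \overline{|l,n\rangle},
\]

where l labels the level in M_h, m and n run over the basis states at that level, and the coefficient matrices c^l are determined level by level from the gluing conditions.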
Since the sum in (29) is infinite, the coefficients c^l_{mn} can only be derived up to some finite level l = L. The idea is that this provides the basis for the second step, the identification of the boundary states. The boundary state consistency equation for this symmetry algebra is given by (15), where φ is the spin, which can take the values φ = 0, π at the boundary since we force |B⟩ to be compatible with the W-algebra. It is then clear that (14) and (17) are automatically satisfied once (30) is valid. This implies that the solutions are linear combinations of the boundary states of section 4. The naturally arising question, especially when comparing the results presented in the two previous sections, is whether the fermion symmetry is more restrictive than the W-algebra, i.e., whether fewer states are found here than in the latter theory. The opposite is the case: using the method of [21] we again find ten proper boundary states. Denoting the φ = 0 case by the quantum number (+) and φ = π by (−) as in the previous discussion, these states are

|Ω, ω; ±⟩ = |Ω, ω⟩ + |ω, Ω⟩ ± |ξ^+, ξ^−⟩ ∓ |ξ^−, ξ^+⟩ + … ,
|Ω, ξ^a; ±⟩ = |Ω, ξ^a⟩ ± |ξ^a, Ω⟩ + … ,  a = +, − ,
|μ, μ; ±⟩ = |μ, μ⟩ ± |ν^+, ν^−⟩ ∓ |ν^−, ν^+⟩ + … .
Here, m, n is used as a short-hand for m ⊗ n . This result may be compared to the one for the W-algebra. We obtain the following identities This identification uses the fact that the boundary states fulfill (14) and (17). Thus, the first level contributions of (31) can be compared to the results of section 4 to gain the corresponding linear combinations of the states given there.
To show that the result (20) of Kawai and Wheater is compatible to ours one has to keep in mind that the coherent states obey the consistency equation (30), and hence (14) and (17). Therefore, they can be expressed in terms of the states (31). Indeed, we find up to possible additional contributions from null-states and the different normalization. It seems contradictory that here, no boundary state based on ω ⊗ ω is found. But reviewing [20] as quoted in section 3 these are the states B ω± (or rather B ω± ) which occur only as the duals to B Ω ± in the Ishibashi states. This is remarkable, since in our framework the only states having such logarithmic contributions, i. e., ω ⊗ ω -like terms, are X λ that we used in precisely the same manner. This suggests, that the coherent states based on ω ⊗ ω are related to X λ in the same way as above: Observe the fact that these states do not exactly correspond to B ω± due to the connection to the local theory as discussed above.
The generic procedure of [21] yields a much bigger collection of states in comparison to Kawai and Wheater. Especially, the mixed boundary states were not discussed by them and the Ishibashi boundary state for the module R was obtained by the identification 2V 0 + 2V 1 ≡ R. Presumably therefore and by referring to the local theory by setting Ω ⊗ ω − ω ⊗ Ω to zero, their physical boundary conditions differ from the set of Ishibashi states. Indeed, we find that the coherent state method produces exactly the same amount of states when starting from the "invariant vacua" that we have: The symplectic fermions decompose the L 0 operator in such a way that With respect to (24) and (25) this suggest that the intertwining operatorsP andP † and the corresponding boundary states R a 01 and R a 10 might be closely related to the fermionic zero modes.
Discussion
We worked out the space of boundary states in the rational LCFT with central charge c = −2 under the restrictions of the symplectic fermion symmetry. It turned out that these states coincide with the solution we presented in [21]. In particular, this implies that the symplectic fermion algebra gives no additional constraints on the boundary states in comparison to the W(2, 3, 3, 3)-algebra of the rational c = −2 LCFT. This is interesting because the latter one is embedded in the former. One might guess that the boundary state consistency equation for the symplectic fermion symmetry is more restrictive than the one for the W-algebra. On the other hand, already in [21] we noticed the close relation between the derivation of boundary states and the construction of a local theory (see fig. 1). At least for c = −2 the latter one is uniquely defined which would suggest, that there exists exactly one consistent solution for the set of boundary states. To construct the boundary states, we used the same method that we presented in [21] for the W-algebra case. This shows that this method really yields a general prescription for the treatment of boundary states and is easily adoptable to different frameworks (like the symplectic fermions in this case). Thus, it seems natural that the presented results generalize to more complicated theories. For the coherent states this was already pointed out by Kawai [23]. We compared the results to the coherent state solution of Kawai and Wheater and were able to show that both approaches are equivalent, leading to exactly the same set of states. However, our results differ in some crucial aspects compared to [20]: They had to divide out the image of (L 0 − L 0 ) by hand while in our prescription this is implicitly included.
Fumaria indica (Hausskn.) Pugsley Hydromethanolic Extract: Bioactive Compounds Identification, Hypotensive Mechanism, and Cardioprotective Potential Exploration
Fumaria indica (Hausskn.) Pugsley (FIP), a member of the Papaveraceae family, has a documented history of use in traditional medicine to treat cardiovascular ailments, particularly hypertension, and has shown substantial therapeutic efficacy among native cultures worldwide. However, the identification of bioactive compounds and the mechanism of hypotensive effect with the cardioprotective potential investigations are yet to be determined. The study aimed to identify bioactive compounds, explore the hypotensive mechanism and cardioprotective potential, and assess the safety of Fumaria indica (Hausskn.) Pugsley hydromethanolic extract (Fip.Cr). LC ESI-MS/MS analysis was performed to identify the bioactive compounds. In vitro experiments were conducted on isolated rat aorta and atria, and an in vivo invasive BP measurement model was used. Acute and subacute toxicities were assessed for 14 and 28 days, respectively. Isoproterenol (ISO) was used to develop the rats’ myocardial infarction damage model. The mRNA levels of NLRP3 inflammasome and the abundance level of Firmicutes and Lactobacillus were measured by qRT-PCR. The hypotensive effect of FIP bioactive compounds was also investigated using in silico methods. Fip. Cr LC ESI−MS/MS analysis discovered 33 bioactive compounds, including alkaloids and flavonoids. In isolated rat aorta, Fip.Cr reversed contractions induced by K+ (80 mM), demonstrating a calcium entry-blocking function, and had a vasorelaxant impact on phenylephrine (PE) (1 μM)-induced contractions unaffected by L-NAME, ruling out endothelial NO participation. Fip.Cr caused negative chronotropic and inotropic effects in isolated rat atria unaffected by atropine pretreatment, eliminating cardiac muscarinic receptor involvement. Safety evaluation showed no major adverse effects. In vivo, invasive BP measurement demonstrated a hypotensive effect comparable to verapamil. Fip.Cr protected the rats from ISO-induced MI interventions significantly in biometrical and cardiac serum biochemical indicators and histological examinations by reducing inflammation via inhibiting NLRP3 inflammasome and elevating Firmicutes and Lactobacillus levels. The network pharmacology study revealed that the FIP hypotensive mechanism might involve MMP9, JAK2, HMOX1, NOS2, NOS3, TEK, SERPINE1, CCL2, and VEGFA. The molecular docking study revealed that FIP bioactive compounds docked better with CAC1C_ HUMAN than verapamil. These findings demonstrated that Fip.Cr’s hypotensive mechanism may include calcium channel blocker activity. Fip.Cr ameliorated ISO-induced myocardial infarction in rats by attenuating inflammation, which might be via inhibiting NLRP3 inflammasome and may prove beneficial for treating MI.
INTRODUCTION
Cardiovascular diseases (CVDs) are the leading cause of mortality and significantly detract from quality of life. CVDs include peripheral arterial and ischemic heart disease, heart failure, stroke, and other vascular and cardiac diseases.1,2 In 2017, cardiovascular disease caused 17.8 million fatalities, the loss of 330 million years of life, and an additional 35.6 million years lived with disability.3 According to the WHO, 17.5 million people die yearly from CVDs, representing 31% of all deaths globally.4 Heart disease accounts for approximately 7.4 million annual deaths, whereas stroke is responsible for 6.7 million annual fatalities.5,6 People with heart conditions may also be more vulnerable to COVID-19 illness.7,8 Hypertension is a major cardiovascular disease risk factor. The WHO has identified it as one of the most significant worldwide risk factors for death and morbidity; approximately nine million fatalities yearly are attributable to it.9,10 In the United Kingdom, hypertension is defined by NICE (National Institute for Health and Care Excellence) as a blood pressure of 140/90 mmHg or higher in a medical clinic, or 135/85 mmHg or higher as assessed with follow-up ambulatory blood pressure monitoring at home throughout the day. It is not only elderly persons that develop hypertension; in 2015, over 2.1 million persons under 45 in England had hypertension.9 Myocardial infarction is the leading cause of disability and mortality worldwide.11 Myocardial infarction refers to the death of cardiac myocytes brought on by ischemia, which results from a mismatch between blood flow demand and supply.12,13 Despite substantial advances in prognosis over the previous decade, acute MI is the most severe manifestation of coronary artery disease, impacting more than seven million people worldwide and inflicting more than four million deaths annually in Northern Asia and Europe.14,15 Oxidative stress and inflammation are well established as the primary pathophysiological processes involved in myocardial infarction.16,17 The global endeavor is to investigate treatments that prevent MI from damaging cardiac tissues. Common treatments can have unwanted consequences and require constant attention.18 Secure, effective, and economical alternative treatments are increasingly used to alleviate various medical issues.19 Ethnopharmacological investigations are highly relevant to the development of herbal remedies.−22 Recent accomplishments in the rapidly expanding field of ethnopharmacology are numerous. Cutting-edge methodologies and specialized extraction techniques have facilitated the transition from traditional ethnopharmacology to drug discovery. These leading-edge techniques include liquid chromatography−mass spectrometry, high-performance liquid chromatography, chemoinformatics, advancements in isolation and classification, gas chromatography−mass spectrometry, and a surge in computing capability incorporating chemical gene-target prediction and molecular docking.23 Ethnopharmacology is a viable method for identifying bioactive compounds with uses beyond their original traditional applications.24 Medicinal herbs have been utilized to treat ischemic heart problems for centuries.25 Fumaria indica (Hausskn.) Pugsley is a member of the Papaveraceae family and is prevalent in the lower hills and plains of Pakistan, India, Turkey, Iran, Afghanistan, and Central Asia. F. indica, also known as Fumitory, Earth smoke, and Fumus in English, is known locally as 'Pitpapra' and 'Shahtrah'.−29 Reportedly, it possesses hepatoprotective,30 spasmolytic and spasmogenic, antileishmanial, antidiabetic,31 antianxiety, cognitive-modulating,32 gastroprotective,33 anti-inflammatory,34 chemoprotective,35 antidengue,36 and antioxidant activities.37,38 Predominantly, isoquinoline alkaloids are responsible for these actions, with protopine being the most commonly identified.39 Previous research has demonstrated that protopine possesses hypotensive, smooth muscle relaxant, and antiarrhythmic effects.27,40 However, research on the hydromethanolic extract of F. indica in hypertension and myocardial ischemia is lacking in the scientific literature. Moreover, scientific evidence supporting the traditional usage and toxicity assessment of medicinal herbs remains limited despite their increasing popularity and ubiquity.41,42 Therefore, the main aim of this study was to identify bioactive compounds via LC ESI-MS/MS analysis of Fumaria indica (Hausskn.) Pugsley hydromethanolic extract, to explore its hypotensive mechanism based on in vitro, in vivo, and in silico studies, to investigate its cardioprotective effect, and to assess its safety.
Bioactive Compound Identification via LC-ESI-MS/ MS Analysis.
Fip.Cr LC-ESI-MS/MS analysis discovered 33 bioactive compounds, including ten alkaloids, five coumarins, four flavonoids, two phenylpropanoids, one compound of hydroxycinnamic acid, fatty acyl, gallic acid and derivative, glucosinolate, cinnamic acid and derivative, amino compound, dihydroisoquinoline, tetrahydroisoquinoline, carbazole, terpenoid, indole and derivative, and aporphine.In the negative ionization mode, Sr. No. 1−11 compounds were tentatively recognized as [M − H]− deprotonated molecules.In the positive ionization mode, Sr. No. 12−34 compounds were tentatively recognized as [M + H]+ protonated molecules (Table 1).Scopoletin was identified in both modes.Supplementary Figure S1 depicts the total ion chromatogram (TIC) containing a full scan of both positive and negative ionization modes, mass spectra (MS/MS), and chemical structures of all bioactive compounds identified.
In Vitro Experiments. In Vitro Vascular Reactivity Investigations.
To evaluate the vasorelaxant effect of Fip.Cr, an endothelium-intact aortic preparation was used. In the undamaged endothelium, Fip.Cr exerted a relaxing action without any constriction. In endothelium-denuded preparations, Fip.Cr exhibited a minor vasoconstrictor effect at lower concentrations (0.01−1.0 mg/mL), followed by a vasorelaxant effect at the subsequent higher concentration of 3.0 mg/mL. Yohimbine (1 μM) inhibited these constrictions in endothelium-denuded preparations (Figure 1).
In Vivo Experiments. Safety Assessment. Observations of Clinical Signs and Survival: Acute Toxicity Study
In the acute toxicity study, neither 4 h of continuous observation nor 24 h revealed any mortalities.In addition, no lethal effects were discovered for 14 days after giving the extract during the experiment.The morphological attributes (fur, skin, eyes, and nose) were normal.There was no evidence of salivation, diarrhea, lethargy, or other abnormal behaviors.Intake of food and water, as well as respiration, were normal.
Nonetheless, modest sedation was noticed in the first 4 h following administration of the test extract, particularly at the maximum dosage of 2,000 mg/kg. The sedative effect diminished within minutes, and the rats resumed normal behavior. No further harmful consequences were detected for the remainder of the study. This finding implies that Fip.Cr dosages of 1000, 1500, and 2000 mg/kg (body weight) were safe. No mortalities were documented at any dosage, so the LD50 value exceeded the test dose limit of 2,000 mg/kg. Therefore, oral dosages of 125, 250, and 500 mg/kg Fip.Cr were used to investigate subacute toxicity. The vehicle-administered normal control group did not experience any toxicity or fatality during the research period.
Subacute Toxicity Study
For 28 days, no mortality was observed in rats administered Fip.Cr orally at 125, 250, or 500 mg/kg bw.For the 28-day research period, none of the rats demonstrated any clinical toxicity or observable signs of morbidity, including changes in skin, eyes, hair, respiratory rate, stereotypical behaviors, and autonomic (sweating, salivation, and piloerection).The normal control group had no clinical symptoms of toxicity.Any modest changes or behaviors seen in rats throughout the research period can be categorized as typical for Sprague−Dawley rats.
Body Weight.
During the experimental period of acute and subacute toxicity study, the animals treated with the extract did not exhibit significant changes in body weight and percentage of body weight gain of rats by weeks compared to the normal control (NW) group (p > 0.05), indicating that the extract did not affect the animals' normal growth.The findings are depicted in Table 2 and Figure S2.
Necropsy and Organ Weight.
The macroscopic examinations of key organs such as the kidneys, lungs, stomach, intestines, spleen, liver, pancreas, and heart following administration of Fip.Cr at varied dosages did not reveal any notable alterations or the development of lesions. Table 3 displays the absolute (g) and relative (%) weights of the heart, kidney, liver, spleen, and lungs. The results demonstrated that the administration of Fip.Cr at various dosages had no notable harmful effect on the macroscopic appearance of the vital organs or on the absolute and relative weights of the selected organs compared to the normal control (NW) group (p > 0.05). This finding corroborates the safety of Fip.Cr administered orally in doses up to 2,000 mg/kg, as the relative weight of organs can be altered by hazardous chemicals.43

Hematological Analysis.
Compared to the normal control group, all parameters exhibited statistically insignificant and dose-independent alterations (p > 0.05) (Table 4).
Biochemical Analysis.
Compared to the normal control group, all parameters exhibited statistically insignificant and dose-independent alterations (p > 0.05) (Table 4).
Effect on Blood Pressure and Hemodynamic Parameters. The hypotensive response to 10, 20, and 30 mg/kg Fip.Cr was dose-dependent. In normotensive anesthetized rats, intravenous administration of Fip.Cr produced a dose-dependent decrease in systolic (SBP) and diastolic (DBP) blood pressure, along with a reduction in pulse pressure and heart rate (BPM) (Figure 5A). At dosages of 1, 3, and 10 μg/kg, verapamil lowered MABP in normotensive anesthetized rats by 95 ± 5.1, 82.90 ± 3.9, and 62.23 ± 2.0, respectively (Figure 5B). In this investigation, the physical activity of rats in the intoxicated (ISO) group decreased after the fourth dose of subcutaneous isoproterenol, and the decline was complete by the tenth treatment, accompanied by a decrease in respiration, whereas the verapamil-, carvedilol-, and Fip.Cr (100 and 200 mg/kg)-treated rats had only mild breathlessness and remained active. Observations during the investigation revealed that rats in the IC group lost weight and that their weariness and shortness of breath worsened. There was a significant distinction between control and IC (ISO) biomarkers (p < 0.05 vs NC group); the increase in these biomarkers in the ISO-administered group implies persistent MI. The carvedilol, verapamil, and Fip.Cr (100 and 200 mg/kg) groups, by contrast, exhibited blood biomarkers within the permissible range, and these treated groups demonstrated significant protection (p < 0.05 vs IC group) against MI. The treatment groups did not differ statistically significantly from the normal control group (p > 0.05 vs NC group). These findings demonstrated that Fip.Cr protects rats against the cardiotoxicity generated by ISO. Values are presented as the mean ± SD (n = 5) and analyzed through one-way ANOVA followed by the Dunnett test.
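As a rough illustration of the statistical workflow described above (one-way ANOVA followed by Dunnett's test against a control group), the following Python sketch uses SciPy; the group arrays are placeholder values, not the study's data, and scipy.stats.dunnett requires SciPy 1.11 or newer.

```python
import numpy as np
from scipy import stats

# Placeholder biomarker values for n = 5 rats per group;
# these numbers are illustrative only, not the values reported in the paper.
ic      = np.array([310.0, 295.0, 322.0, 301.0, 315.0])   # intoxicated control (ISO)
verap   = np.array([180.0, 172.0, 190.0, 185.0, 178.0])
carved  = np.array([175.0, 169.0, 182.0, 171.0, 180.0])
fip_100 = np.array([210.0, 198.0, 215.0, 205.0, 202.0])
fip_200 = np.array([188.0, 179.0, 195.0, 186.0, 183.0])

# One-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(ic, verap, carved, fip_100, fip_200)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's test: each treatment compared against the IC control
res = stats.dunnett(verap, carved, fip_100, fip_200, control=ic)
for name, p in zip(["VRP", "CAR", "FLD", "FHD"], res.pvalue):
    print(f"{name} vs IC: p = {p:.4f}")
```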
Various recovery profiles were seen for each compound.All compounds displayed favorable pharmacokinetic features and no significant side effects, indicating their potential therapeutic use.Therefore, ADMET analysis enabled the identification of ligands with important pharmacokinetic features within acceptable limits.
Each of the isolated bioactive compounds exhibited water solubility, human serum albumin affinity, and lipophilicity partition coefficient within the acceptable range; bioavailability, metabolism, CNS permeability, excretion, and distribution volume were all adequate; high GI absorption except for bergenin, (methylsulfanyl) butyl glucosinolate, myricetin, and chlorogenic acid, which can be modified during formulation development; and epsilon-Viniferin was outside the acceptable range of HERG K+ channel blockage.These factors all contribute to the proper internal distribution of ligands.
The rule-of-five analysis established by Lipinski rejected myricetin and chlorogenic acid; AMES mutagenesis investigations identified koumidine, bergenin, hydrocotarnine, ellipticine, 2′,5-dimethoxyflavone, koumine, myricetin, gardnutine, isocorydine, oxoglaucine, ethylrhoeagenine, and S,R-noscapine as positive; therefore, these compounds did not fall within the permissible limits. Serotonin, an amino compound, was excluded from the in silico research. After screening the 33 compounds with in silico drug-likeness and ADMET prediction, only 19 bioactive compounds remained for the network pharmacology study.
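For context, a Lipinski rule-of-five screen of this kind can be reproduced with RDKit; the sketch below is a generic illustration, where the SMILES strings and the simple violation count are assumptions for demonstration rather than the exact screening pipeline used in the study.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

# Example SMILES (publicly known structures, used purely for illustration)
candidates = {
    "myricetin":    "O=c1c(O)c(-c2cc(O)c(O)c(O)c2)oc2cc(O)cc(O)c12",
    "sinapic acid": "COc1cc(/C=C/C(=O)O)cc(OC)c1O",
}

def lipinski_violations(mol):
    """Count violations of Lipinski's rule of five."""
    rules = [
        Descriptors.MolWt(mol) > 500,       # molecular weight
        Descriptors.MolLogP(mol) > 5,       # lipophilicity
        Lipinski.NumHDonors(mol) > 5,       # hydrogen-bond donors
        Lipinski.NumHAcceptors(mol) > 10,   # hydrogen-bond acceptors
    ]
    return sum(rules)

for name, smiles in candidates.items():
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        print(f"{name}: could not parse SMILES")
        continue
    v = lipinski_violations(mol)
    print(f"{name}: {v} violation(s) -> {'rejected' if v > 1 else 'acceptable'}")
```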
Modeling Protein Homology.
A homology model of the voltage-dependent L-type calcium channel subunit alpha-1C protein (Q13936: CAC1C_HUMAN) was constructed using sequence data from UniProtKB.
Physicochemical Characteristics. The physicochemical properties of the CAC1C_HUMAN sequence were investigated using the ProtParam web tool. The amino acid profile of CAC1C_HUMAN comprised 2221 amino acids with a total molecular weight of 248,976.62, with Leu (L) 227 (10.2%), Ala (A) 171 (7.7%), and Ser (S) 159 (7.2%) as the most abundant residues, 240 negatively charged residues (Asp + Glu), and 225 positively charged residues (Arg + Lys). The elevated aliphatic index of the protein, 92.53, indicated exceptional thermostability, while a GRAVY score of −0.052 suggested a greater likelihood of the protein interacting efficiently with water. The protein's pI was 6.33, indicating a slightly acidic character, while its instability index of 48.87 classified it as unstable; this instability is attributable to the presence of dipeptides that are typically absent in stable proteins. The extinction coefficient at 280 nm in water was 244,660 M−1 cm−1, corresponding to an absorbance of 0.983 at 0.1% (= 1 g/L), assuming that all pairs of Cys residues form cystines; in solution, this metric is used to assess protein−protein and protein−ligand interactions. Due to the absence of data in the databases mentioned above, hulupinic acid was excluded from the network pharmacology investigation, and umbelliferone failed to intersect with any target genes after applying a 0.03 probability-based cutoff threshold. Tables S1 and S2 present the top 150 pathogenic target genes for hypertension analyzed via VarElect and the most likely macromolecular targets of the bioactive compounds, respectively. In Figure S4, a Venn diagram depicts the relationship between hypertension-related genes and bioactive compound targets; 15 putative hypertension disease target genes overlap. These overlapping targets were utilized for GO and KEGG pathway analysis and network construction.
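Properties of this kind can also be computed locally with Biopython's ProtParam module; the minimal sketch below uses an arbitrary short example sequence as a stand-in for the full 2221-residue Q13936 entry, which would be fetched from UniProtKB in practice.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Arbitrary example sequence (placeholder for the full CAC1C_HUMAN sequence)
sequence = "MKTWQEALRVANDLPIVFGSEHHRKLADAYGVTPEQLSRW"

pa = ProteinAnalysis(sequence)
print(f"Length:             {len(sequence)} residues")
print(f"Molecular weight:   {pa.molecular_weight():.2f}")
print(f"Isoelectric point:  {pa.isoelectric_point():.2f}")
print(f"Instability index:  {pa.instability_index():.2f}")  # > 40 suggests an unstable protein
print(f"GRAVY:              {pa.gravy():.3f}")               # negative values indicate hydrophilicity
print(f"Aromaticity:        {pa.aromaticity():.3f}")

counts = pa.count_amino_acids()
print(f"Asp+Glu (negative): {counts['D'] + counts['E']}")
print(f"Arg+Lys (positive): {counts['R'] + counts['K']}")
```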
GO and KEGG Evaluation.
The GO biological process analysis for Fumaria indica (Hausskn.) Pugsley hydromethanolic extract demonstrated a prevalence of hypertension target genes for the bioactive compounds, emphasizing regulation of blood pressure and of cell migration, blood circulation, circulatory system process, positive regulation of locomotion, cell migration, and cellular component movement, and cellular response and response to oxygen-containing compounds (Figure 10A and Table S3). The KEGG enrichment evaluation revealed a predominance of hypertension target genes for the bioactive compounds, with emphasis on the following signaling pathways: calcium, fluid shear stress and atherosclerosis, steroid hormone biosynthesis, relaxin, lipid and atherosclerosis, cGMP-PKG, HIF-1, AGE-RAGE in diabetic complications, PI3K-Akt, and pathways in cancer (Figure 10B and Table S4).
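Enrichment p-values of the kind reported for GO and KEGG terms are conventionally computed with a hypergeometric (one-sided Fisher) test; the sketch below illustrates the calculation with SciPy using made-up counts, not the study's actual gene numbers.

```python
from scipy.stats import hypergeom

# Illustrative counts only (assumptions for demonstration):
M = 20000   # background: annotated genes in the genome
n = 180     # genes annotated to the term (e.g., "calcium signaling pathway")
N = 15      # overlapping target genes submitted for enrichment
k = 4       # of those, genes that belong to the term

# P(X >= k): probability of observing at least k pathway genes by chance
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"Enrichment p-value: {p_value:.3e}")
```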
Network Construction.
The protein−protein Interaction (PPI) STRING network for hypertension disease target genes had 15 nodes and 38 edges (Figure 11A).Cytoscape developed a CTD network of 25 nodes and 15 edges between putative hypertension disease target genes and bioactive chemicals from FIP (Figure 11B and Table S5).The CTP network of GO biological processes for FIP bioactive compounds consisted of 33 nodes and 96 edges for hypertension disease target genes (Figure 11C and Table S6).The CTP network of KEGG pathways for FIP bioactive compounds comprised 35 nodes and 62 edges for hypertension disease target genes (Figure 11D and Table S7).
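Compound-target and PPI networks of the kind described above (node and edge counts, hub identification) can be assembled and interrogated with a graph library; the following sketch uses NetworkX with a few gene and compound names taken from the text, but the edge list itself is illustrative rather than the actual STRING/Cytoscape output.

```python
import networkx as nx

# Illustrative compound-target edges (a small invented subset, not the full CTD network)
edges = [
    ("eugenol", "NOS3"), ("eugenol", "VEGFA"),
    ("sinapic acid", "MMP9"), ("sinapic acid", "HMOX1"),
    ("scoparone", "JAK2"), ("scoparone", "NOS3"),
    ("epsilon-viniferin", "VEGFA"), ("epsilon-viniferin", "SERPINE1"),
]

g = nx.Graph()
g.add_edges_from(edges)
print(f"Nodes: {g.number_of_nodes()}, Edges: {g.number_of_edges()}")

# Rank target genes by degree centrality to spot hub genes
centrality = nx.degree_centrality(g)
targets = {target for _, target in edges}
for gene, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    if gene in targets:
        print(f"{gene}: degree centrality = {score:.2f}")
```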
DISCUSSION
Substantial evidence supports the traditional use of Fumaria indica (Hausskn.)Pugsley as a treatment for various ailments and its remarkable therapeutic advantages.Consequently, this study aimed to establish the veracity of Fumaria indica's folkloric claims for the treatment of hypertension by investigating its mechanism of action, exploring the cardioprotective potential, identifying bioactive compounds, and assessing its toxicity.
The LC−MS/MS analysis of Fumaria indica (Hausskn.) Pugsley hydromethanolic extract revealed the predominant presence of alkaloids, coumarins, phenylpropanoids, and flavonoids; other identified chemicals included sinapic acid, bergenin, (methylsulfanyl)butyl glucosinolate, p-coumaraldehyde, dehydrosalsolidine, hydrocotarnine, ellipticine, hulupinic acid, isocorydine, koumine, serotonin, and linoleic acid.−46 Fip.Cr was evaluated for potential vasorelaxant effects on endothelium-intact and endothelium-denuded aortic tissue preparations and exerted a vasorelaxant effect on both. Experiments on vascular preparations revealed that cumulative addition of Fip.Cr relaxed K+ (80 mM)-induced contractions at a concentration of 1.0 mg/mL. K+ (80 mM) induces depolarization and raises intracellular Ca2+ through voltage-dependent Ca2+ channels.47 Calcium channel blockade may underlie the vasorelaxant effect of Fip.Cr, as evidenced by its capacity to relieve the K+ (80 mM) precontraction, because smooth muscle relaxation follows repolarization when the inward calcium current is prevented.48 After testing Fip.Cr on a vascular preparation precontracted with phenylephrine, we discovered that it generated full relaxation at the same concentration and that the nitric oxide synthase inhibitor L-NAME49 had no impact on this relaxation, thereby excluding the involvement of the nitric oxide pathway. PE stimulates α-adrenergic receptors, generating inositol-1,4,5-trisphosphate via the conversion of phosphatidylinositol and activating voltage-dependent calcium channels to raise intracellular calcium.50 Consequently, similarly to verapamil, Fip.Cr exhibited a relaxant response to K+ (80 mM)- and phenylephrine (1 μM)-induced contractions, indicating a blockade of calcium influx via calcium channels. Medications that block calcium channel function are essential to the treatment of hypertension and angina.51 Fip.Cr was further evaluated on isolated rat atrial strips to gain insight into its effects on cardiac muscle. Fip.Cr completely blocked the rate (negative chronotropic) and force (negative inotropic) of spontaneous atrial contractions, comparably to verapamil. The negative inotropic and chronotropic effects were unaffected by atropine pretreatment, ruling out a role for cardiac muscarinic receptors. Electrical stimulation of cardiac myocytes initiates heart muscle contraction (excitation-contraction coupling, ECC).52 The most plausible explanation for these results is that the hypotensive mechanism of Fip.Cr may be manifested via calcium ion channel blockade.
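Concentration-response data of this kind (cumulative Fip.Cr additions against a K+ or PE precontraction) are usually summarized by fitting a sigmoidal (Hill) curve and reporting an EC50; the sketch below does this with SciPy on made-up percent-relaxation values, purely to illustrate the calculation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative data: cumulative bath concentrations (mg/mL) and % relaxation
conc       = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])
relaxation = np.array([5.0, 12.0, 30.0, 55.0, 85.0, 98.0])

def hill(c, emax, ec50, n):
    """Sigmoidal concentration-response (Hill) equation."""
    return emax * c**n / (ec50**n + c**n)

params, _ = curve_fit(hill, conc, relaxation, p0=[100.0, 0.3, 1.0])
emax, ec50, n = params
print(f"Emax = {emax:.1f} %, EC50 = {ec50:.3f} mg/mL, Hill slope = {n:.2f}")
```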
To ensure the safety of plant products, conducting exhaustive studies of their toxicity is vital, offering scientific evidence for determining acceptable doses for animals, including humans.53 Acute toxicity testing is a preliminary investigation that forms the foundation for categorization and labeling. In addition, it gives preliminary information regarding the mode of toxic action of a chemical, allowing the dose of a novel molecule to be determined and assisting in dose selection for animal research.54,55 Our investigations revealed that acute toxicity testing with single escalating dosages of Fip.Cr did not result in any mortality or significant abnormalities at any level. However, a moderate sedative effect was seen in rats at the 2000 mg/kg b.wt. dose. The study's limit test dose of 2000 mg/kg b.wt. did not demonstrate significant toxicity or fatality; therefore, subacute toxicity tests were performed with doses (500, 250, and 125 mg/kg b.wt.) that were a quarter, an eighth, and a sixteenth of the limit test dosage, respectively.
In rats, subacute administration of Fip.Cr did not result in any behavioral abnormalities or mortality.Initial warning indications of toxicity from chemicals and medications manifest in changes in general behavior and body weight. 56Compared to the control group, the rise in rat body weight was insignificant and deemed normal.Consequently, it can be inferred from the acute and subacute toxicity investigations that oral administration of Fip.Cr did not cause significant clinical symptoms or alter the normal development pattern of rats.
Rats' kidneys, lungs, stomach, intestines, spleen, liver, pancreas, and heart were analyzed for any discernible morphological differences.A comprehensive evaluation of the vital organs of the rats found no abnormalities or degeneration.When assessing the toxicity of a product, drug, or chemical, relative organ weight is a good indicator and reliable predictor of toxicity.Our investigation indicated that after administering Fip.Cr at varying dosages to rats, all organ modifications were insignificant; hence, it can be inferred that administration of Fip.Cr at varying doses did not produce any major complications to the vital organs.
The hematopoietic system, one of the most susceptible bodily systems to the impacts of toxic compounds, is also a good indication of human and animal pathological and physiological status. 57An examination of hematological parameters, which may also be used to explain the blood-related activities of extracts, can indicate the potentially harmful effects of an extract on an animal's blood. 58,59Rats' hepatic and renal functions were analyzed using clinical biochemistry to determine whether or not the extract had any effect.The liver and kidney are essential for an organism's survival, so the biochemical parameters are considered a crucial toxicity assessment signal. 60The hematological and biochemical profile of rats treated with Fip.Cr was mostly nonsignificant compared to the control group and was within the usual range.These findings demonstrated that acute and subacute treatment of Fip.Cr did not significantly affect the hematological and biochemical profile of rats, nor did it cause noteworthy harmful repercussions.
Histopathological slides allow for a more in-depth investigation of toxic effects and ailments, since the tissue architecture is preserved throughout preparation.61 Compared to the normal control group, histological examinations of the heart, liver, kidneys, and lungs revealed no significant alterations. These results were consistent with the hematological and biochemical measures, which showed no substantial changes. Since no detrimental effects on the hematopoietic system were observed, these results indicate the extract's safety at doses up to 2000 mg/kg.62 Experimental results on the aorta and atrial preparations suggested that Fip.Cr might have hypotensive effects in normotensive anesthetized rats. Indeed, Fip.Cr exerted hypotensive effects in anesthetized rats, with decreases in mean arterial (MABP), systolic (SBP), and diastolic (DBP) blood pressure comparable to those observed with verapamil. These findings demonstrated that voltage-dependent calcium channel-blocking activity might play a crucial role in the antihypertensive potential of Fip.Cr.
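Hemodynamic read-outs such as pulse pressure and mean arterial blood pressure are derived from the recorded systolic and diastolic values; the snippet below shows the conventional formulas (MABP estimated as DBP plus one-third of the pulse pressure) with illustrative numbers rather than recorded data.

```python
def pulse_pressure(sbp: float, dbp: float) -> float:
    """Pulse pressure (mmHg) from systolic and diastolic blood pressure."""
    return sbp - dbp

def mean_arterial_pressure(sbp: float, dbp: float) -> float:
    """Conventional estimate: MABP = DBP + (SBP - DBP) / 3."""
    return dbp + (sbp - dbp) / 3.0

# Illustrative reading from an anesthetized normotensive rat
sbp, dbp = 118.0, 82.0
print(f"Pulse pressure: {pulse_pressure(sbp, dbp):.1f} mmHg")
print(f"MABP:           {mean_arterial_pressure(sbp, dbp):.1f} mmHg")
```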
After exposing rats to isoproterenol (ISO)-induced myocardial infarction, we determined that Fip.Cr exhibited cardioprotective properties. ISO is a synthetic, nonselective beta-adrenergic agonist, and ISO-induced myocardial damage has frequently been used in pharmacological investigations to examine the effect of drugs on myocardial infarction.63,64 Elevated biometrical indicators, serum levels of CPK, CK-MB, lipid profile markers, LDH, cTnT, ANP, AST, ALT, and serum IL-6, as well as an abnormal cardiac microstructure on histopathology in the ISO-administered group, indicate that the current protocol is an efficient rat model of myocardial injury. These findings are consistent with prior in vivo investigations.65,66 Consequently, the current study confirmed that Fip.Cr could ameliorate ISO-induced myocardial infarction injury, as evidenced by a significant decrease in biometrical indicators and serum levels of myocardial injury markers, and a marked reduction in histopathological alterations, comparable to carvedilol and verapamil.
Oxidative stress-induced cellular inflammation involves the inflammasome containing the NOD-like receptor pyrin superfamily domain 3 (NLRP3).67 This inflammasome protein complex includes ASC, NLRP3, and caspase-1 as constituents. Patients with acute myocardial infarction have peripheral blood monocytes with an activated NLRP3 inflammasome. Furthermore, it has been observed that treatment with NLRP3 siRNA plus BAY 11-7082, an inflammasome inhibitor, greatly improved myocardial ischemia/reperfusion injury.68,69,70 Myocardial infarction (MI) damage is also influenced by tumor necrosis factor-α (TNF-α), IL-1β, and IL-6, and TNF-α antagonists perform a therapeutic function in MI.71 These data suggest that the NLRP3 inflammasome, TNF-α, IL-1β, and IL-6 have a role in myocardial infarction damage. In the current work, we demonstrated that ISO-induced myocardial injury activated the NLRP3 inflammasome in the heart, as determined by increased cardiac expression levels of ASC, NLRP3, and caspase-1; TNF-α, IL-1β, and IL-6 expression levels were also elevated. Our findings revealed that Fip.Cr, like carvedilol and verapamil, decreased NLRP3 inflammasome activation and TNF-α, IL-1β, and IL-6 levels, which are prospective therapeutic targets for the treatment of myocardial infarction damage.
Cardiovascular disorders are associated with an abnormal gut microbiome.72 The gut microbiota of AMI patients contains a decreased proportion of the phylum Firmicutes.73 By enhancing cardiac pump performance in AMI patients, Lactobacillus may minimize the incidence of severe outcomes such as HF.74 In the present investigation, the administration of ISO lowered the abundance of Firmicutes and Lactobacillus. Fip.Cr affected the gut microbiome by increasing the abundance of Firmicutes and Lactobacillus relative to carvedilol and verapamil, demonstrating its cardioprotective potential. Previous investigations have revealed that alkaloids and flavonoids are cardioprotective.75 Consequently, the bioactive compounds identified in the current investigation may be responsible for Fip.Cr's cardioprotective effect.
In 2007, Hopkins proposed network pharmacology to discover how ligands and drugs function within cells. 76The generated networks of network pharmacology depict the interplay between many targets, bioactive chemicals, and the pathways of bioactive substances in herbal and complicated diseases. 77Recently, network pharmacology has emerged as an innovative method for highlighting the bioactivity of herbs and elucidating cellular pathways. 78The hypertension-treating mechanism of FIP may include MMP9, JAK2, HMOX1, NOS2, NOS3, TEK, SERPINE1, CCL2, and VEGFA, according to network pharmacology research in the current study.According to computational assessments, the bioactive components of FIP manifested to treat hypertension are implicated in the control of many major KEGG pathways, including the calcium signaling pathway.
Molecular docking has become essential to drug discovery.79 Molecular docking studies demonstrated that (methylsulfanyl)butyl glucosinolate, epsilon-viniferin, eugenol, scoparone, sinapic acid, 2-coumaric acid, and dehydrosalsolidine have a greater affinity for the voltage-dependent calcium channel (CAC1C_HUMAN) than verapamil. Consequently, it is plausible to hypothesize that the antihypertensive effect of FIP is due to the high affinity of the identified compounds for the voltage-dependent calcium channel protein of smooth muscle, which might hinder contraction-related signal transmission.
CONCLUSIONS
It can be concluded that the alkaloids and flavonoids identified in Fip.Cr may be responsible for its hypotensive and cardioprotective effects. Fip.Cr markedly mitigated ISO-induced myocardial infarction damage in rats by attenuating inflammation, via inhibition of the NLRP3 inflammasome and elevation of the levels of Firmicutes and Lactobacillus. In vitro and in vivo experiments demonstrated that the antihypertensive mechanism of Fip.Cr may involve calcium channel blocker activity. The network pharmacology investigation revealed that FIP bioactive compounds may interact with hypertension-related genes and KEGG pathways; MMP9, JAK2, HMOX1, NOS2, NOS3, TEK, SERPINE1, CCL2, and VEGFA may all have a role in the hypotensive mechanism of FIP. In addition, the molecular docking study demonstrated that FIP bioactive compounds, including (methylsulfanyl)butyl glucosinolate, epsilon-viniferin, eugenol, scoparone, sinapic acid, 2-coumaric acid, and dehydrosalsolidine, exhibited higher docking scores with the voltage-dependent calcium channel protein (CAC1C_HUMAN) than verapamil. The acute and subacute toxicity studies of Fip.Cr at doses up to 2000 mg/kg in rats found no significant adverse effects; the oral LD50 of Fip.Cr is greater than 2000 mg/kg, and the extract can thus be considered broadly safe. Therefore, the current work provides a mechanistic basis for compound identification, exploration of the hypotensive mechanism and cardioprotective potential, and safety evaluation of Fip.Cr. However, more research is required to determine the precise mechanism by which Fip.Cr prevents activation of the NLRP3 inflammasome in myocardial infarction-damaged heart tissue.
Plant Material Collection and Crude Extract Preparation. The whole plant of Fumaria indica (Hausskn.) Pugsley was collected in February 2020 from wheat fields and alongside railway tracks in Multan, Pakistan. Dr. Zafar-Ullah Zafar, a professor at Bahauddin Zakariya University, Multan, Pakistan, verified the identity of the specimen (voucher number: www.theplantlist.org/tpl1.1/record/kew-2815398), and a specimen was deposited in the same institution's herbarium. Cleaning, shade drying, and milling the plant material produced a fine powder. One kilogram of plant powder was steeped in 70% aqueous methanol for 1 week at 25 °C in an amber-colored glass jar with an open cover and frequent shaking. The percolate was filtered through muslin and then through filter paper (Whatman no. 1). Following three rounds of maceration, all filtrates were combined. A rotary evaporator (BUCHI 9230, Labtechnik AG, Switzerland) removed the organic solvent entirely under reduced pressure at 37 °C. A dark brown, semisolid extract (Fip.Cr) with a yield of about 7.5% was obtained. Fip.Cr was dissolved in 10% DMSO in preparation for the in vitro experimentation.
Chemicals.
In all studies, analytical research-grade chemicals and reagents were used. The following chemicals were obtained from Sigma Chemical Company (St. Louis, Missouri, United States): atropine sulfate, acetylcholine chloride (ACh), phenylephrine hydrochloride (PE), verapamil hydrochloride, and potassium chloride (KCl). Sodium chloride (NaCl) was obtained from BDH Laboratory Supplies (Poole, United Kingdom). Calcium chloride (CaCl2), magnesium sulfate (MgSO4), potassium dihydrogen phosphate (KH2PO4), sodium bicarbonate (NaHCO3), and glucose (C6H12O6) were purchased from Merck (Darmstadt, Germany). Heparin (China) and Ketarol 500 mg/mL (ketamine; Global Pharmaceuticals) were also used. On the day of each experiment, dilutions and physiological salt solutions were freshly prepared in distilled water/saline. In control studies, the solubilization vehicle had no effect.
LC ESI-MS/MS Analysis.
Identification of potentially bioactive compounds of Fumaria indica (Hausskn.)Pugsley hydromethanolic extract was evaluated using a Linear Ion Trap Mass Spectrometer (United States − Thermo Scientific) connected to a source of electrospray ionization (LTQ XL Model). 80,81Ultrasonic dissolution of plant extract Fip.Cr (20 mg/mL) in the methanol of LC−MS grade for 5 min at 25 °C, followed by 10 min of 12,000 rpm centrifugation, produced the sample solution.The supernatant was filtered using a filter nylon syringe 0.45 μm after collection.Methanol was used to dilute each approximately 50 μL sample stock solution until the final volume was 1 mL.This solution was injected directly by a syringe pump into the ESI interface using a 7.5 μL/min flow rate.The following are the optimal operational parameters of negative and positive ESI/MS ion spectroscopy: N2 auxiliary and sheath gas flow rates at 20 and 5.0 au, respectively; the capillary's temperature and voltage were set at 286 °C and 4.6 kV, respectively; and 50−2000 m/z range for mass scanning.Modifying collision-induced decomposition energy from 2 to 30 according to ionic parent molecule type enabled ESI-MS fragmentation.The ESI-MS/MS data were collected and interpreted using the software X-Calibur 2.0.7 by Thermo Scientific (Massachusetts City of Waltham, United States). 82.3.1.Identification of Compounds.The bioactive compounds in the extract were identified by cross-referencing the mass spectra chromatogram with previously published reference libraries and data, such as the European Mass Bank (https:// massbank.eu/MassBank/)and the North American Mass Bank (http://mona.fiehnlab.ucdavis.edu/).
Animals and Housing
Conditions.The Department of Pharmacology ethics committee, BZU Multan, approved the procedures (reference number EC/06-PhDL/S2018), and the studies were conducted in strict conformity with the Laboratory Animal Resources Commission's guidelines. 83Rats of the Sprague−Dawley strain were provided by the same institution for practical reasons, weighing 170 ± 20 g, regardless of gender.The rats were housed in a facility that regulated temperature (22 ± 4 °C) and lighting with conventional food and unlimited tap water access.5.6.In Vitro Tissue Experimentation.For in vitro experimentation, the approach outlined in previous studies 84−86 was utilized.
5.6.1.Vascular Investigations of Isolated Rat Aorta.Following a cervical dislocation, the rats' thoracic descending aortas were isolated and submerged in the carbogen-containing solution of Kreb.Without harming the aorta endothelium, all adhering fatty tissues were meticulously removed.Each (2−3 mm) preparation for an aortic ring was fixed in tissue organ baths of 15 mL containing carbogen accreted solution of Kreb and kept at 37 °C by a thermoregulator with circulation.Before drug testing, aortic ring preparations were subjected to 2 ± 0.1 g preloaded force, and after 10 ± 2 min, new Kreb's solution buffer drainage was permitted to stabilize for 55 ± 5 min.Using a FORT100 force−displacement transducer, the tissue's physiologic response was recorded.Lab Chart Pro Software Version 7 displayed the results of signal amplification achieved using Power Lab (4/25).Immediately before administering the test drug, the physiological activity was measured and quantified as a contraction percentage.After equilibration, the drug or test substances were delivered in sequence.
This research evaluated aorta preparations with intact and denuded endothelium in terms of nitric oxide and cholinergicbased (EDRF) relaxing factors produced from endothelium.Curved forceps were used to scrape the intima of the aortic ring to create an aortic preparation devoid of the endothelium.The ability of a bolus of acetylcholine (1 μM) administration to relax the constriction caused by phenylephrine (1 μM) validated the endothelial function.The endothelial activity was indicated by relaxing contraction of 70−80% (endothelium-intact). Contrary to this, additional contraction increases demonstrated endothelial dysfunction (endothelium-denuded). 87Both aortic preparations were subjected to K + (80 mM) and phenylephrine (1 μM)generated contractions in an organ tissue bath to investigate the potential vasorelaxant effect of Fip.Cr.L-NAME (10 μM) was utilized to assess the involvement of nitric oxide in the vasculature.
5.6.2.Mechanistic Basis of Isolated Rat Atria.Following a cervical dislocation, the rats' hearts were isolated and submerged in the carbogen-containing solution of Kreb.Ventricles and any adhering fatty tissues were meticulously removed without causing pacemaker damage.Atria's preparation was fixed in tissue organ baths of 15 mL containing carbogen accreted solution of Kreb and kept at 37 °C by a thermoregulator with circulation.Before drug testing, atria's preparations were subjected to 1 ± 0.1 g preloaded force, and after 10 ± 2 min, new Kreb's solution buffer drainage was permitted to stabilize for 25 ± 5 min.Using a FORT100 force−displacement transducer, the tissue's physiologic response was recorded.Lab Chart Pro Software Version 7 displayed the results of signal amplification achieved using Power Lab (4/25).Immediately before administering the test drug, the physiological activity was measured and quantified as a contraction percentage.After equilibration, the drug or test substances were delivered in sequence.
In Vivo Tissue Experimentation. Safety Assessment. Acute Toxicity. The acute toxicity experiments on Sprague−Dawley rats were performed under OECD guideline 423, with minor adjustments. Four groups of five rats each were formed from the selected rats. Before the start of the experiment, each rat was weighed, marked for identification purposes, and fasted overnight while retaining free access to water.
1st group: The normal control group (NW) was administered a single oral dosage of 10 mL/kg bw of distilled water.
2nd group: The hydromethanolic extract-treated group (FAL) was administered a single oral dosage of 1000 mg/kg bw of Fip.Cr.
3rd group: The hydromethanolic extract-treated group (FAM) was administered a single oral dosage of 1500 mg/kg bw of Fip.Cr.
4th group: The hydromethanolic extract-treated group (FAH) was administered a single oral dosage of 2000 mg/kg bw of Fip.Cr.

Following dosing, the rats fasted for an additional 4 h, and every rat in each group was observed continuously for the first 4 h and again at 24 h after drug administration for mortality and aberrant alterations. Tremors, lethargy, diarrhea, respiration, sleep, aberrant behavior, drink and food intake, convulsions, changes in skin, hair, and eye color, salivation, motor activity, and coma were evaluated twice daily for 14 days to determine any toxicological effects. The acute toxicity study was performed for informational purposes based on the test extract's short-term toxicity level, which assists in selecting the dosages for the repeated oral toxicity study.88

Subacute Toxicity. A 28-day investigation of the subacute toxicity of Fip.Cr in rats followed OECD guideline 407.89 Four groups of five rats each were constructed.
1st group: The normal control group (NW) was administered a dosage of 10 mL/kg bw of distilled water orally.
2nd group: The hydromethanolic extract-treated group (FSL) was administered a dosage of 125 mg/kg body weight of Fip.Cr orally.
3rd group: The hydromethanolic extract-treated group (FSM) was administered a dosage of 250 mg/kg body weight of Fip.Cr orally.
4th group: The hydromethanolic extract-treated group (FSH) was administered a dosage of 500 mg/kg body weight of Fip.Cr orally.

Observations of Clinical Signs and Survival. All of the animals were examined twice a day for death and disease. The clinical observation included alterations in the animals' skin, fur, mucous membranes, eyes, and autonomic activity, including pupil size, piloerection, abnormal breathing pattern, and lacrimation. Alterations in gait and posture were studied alongside stereotypical behaviors, including obsessive grooming and frequent circling. Observation began 1 week before test drug delivery and continued until the scheduled autopsy.

Body Weight. Throughout both studies, the weekly body weight of every animal was recorded. Body weights were also recorded at the beginning and at the end of each study.
The following relation was used to determine the percentage gain in body weight: weight gain (%) = [(final body weight − initial body weight)/initial body weight] × 100. All blood samples were collected on the final day of both investigations, and each animal was fasted overnight before the necropsy. After administration of ketamine hydrochloride (100 mg/kg, intramuscularly), the animals were exsanguinated. The macroscopic examination comprised a review of the exterior surfaces, including the abdominal, thoracic, pelvic, and cranial cavities. The macroscopic characteristics of the kidneys, lungs, stomach, intestines, spleen, liver, pancreas, and heart removed during necropsy were observed. The organs were cleansed and deposited in a neutrally buffered 10% formalin solution. These organs were also examined macroscopically for the possible development of lesions and other pathological indications.
Absolute (g) and relative (%) weights were determined for the following important organs: the heart, kidney, liver, spleen, and lungs. The absolute weight of each organ was determined by removing it, placing it on absorbent sheets for a few minutes, and then weighing it (g). The relative weight of each rat's organs was calculated as RW (%) = (ow/bw) × 100, where ow = organ weight, bw = body weight, and RW = relative weight.

Hematological Analysis. The blood samples were stored in sterile anticoagulant-containing tubes, and the hematological profile was analyzed using an automated hematological analyzer (Sysmex XS-1000i). The following hematological parameters were examined: mean corpuscular volume, mean corpuscular hemoglobin, and mean corpuscular hemoglobin concentration (MCV, MCH, and MCHC); red and white blood cell and differential leucocyte counts (RBC, WBC, lymphocytes, neutrophils, monocytes, eosinophils); and platelets, hematocrit (HCT), and hemoglobin (Hb %).

Biochemical Analysis. Serum was isolated from blood by centrifugation at 1500 × g for 15 min (no anticoagulant used) and stored at −20 °C until use. The following biochemical parameters were assessed: glucose, total bilirubin, serum glutamic pyruvic transaminase (SGPT), urea, sodium, chloride, creatinine, potassium, alkaline phosphatase (ALP), total cholesterol, triglycerides, total protein, HDL cholesterol, serum glutamic oxaloacetic transaminase (SGOT), LDL cholesterol, and albumin. These values were determined using commercially available biochemistry analyzers and diagnostic test kits.
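The two simple calculations above (percentage body weight gain and relative organ weight) can be captured in a few lines; the helper below is a hypothetical illustration with made-up numbers, not part of the study's analysis pipeline.

```python
def weight_gain_percent(initial_bw: float, final_bw: float) -> float:
    """Percentage body weight gain over the study period."""
    return (final_bw - initial_bw) / initial_bw * 100.0

def relative_organ_weight(organ_weight_g: float, body_weight_g: float) -> float:
    """Relative organ weight, RW (%) = (ow / bw) x 100."""
    return organ_weight_g / body_weight_g * 100.0

# Example values for a single rat (illustrative only)
print(f"Weight gain: {weight_gain_percent(172.0, 198.0):.1f} %")
print(f"Relative heart weight: {relative_organ_weight(0.65, 198.0):.3f} %")
```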
Histopathological Studies. Histopathological studies were performed on organ samples of the heart, liver, kidneys, and lungs. The primary organs were surgically removed and preserved in a 10% buffered formalin solution (pH 7.4).92 Following fixation, tissue samples were dehydrated in a graded series of ethanol concentrations ranging from 70 to 99.9%, rinsed with toluene, and then embedded in paraffin. The tissue was sectioned to a thickness of 5 μm with a rotary microtome, and the sections were stained with hematoxylin and eosin (HE). The sections were then photomicrographed and examined microscopically for pathological diagnosis.88

Blood Pressure and Hemodynamic Parameters Assessment in Anesthetized Rats. Diazepam (5 mg/kg) and ketamine (85 mg/kg) were administered intraperitoneally to 200−250 g rats individually to achieve adequate anesthesia without shifting cardiovascular markers outside the normotensive range. The rats were positioned supine on the dissection table, and an overhead lamp was used to maintain body temperature. After a tracheotomy, an 18-gauge polyethylene tube was inserted into the trachea to facilitate breathing. The right jugular vein was catheterized for intravenous delivery of drugs and Fip.Cr. The carotid artery was cannulated with polyethylene tubing filled with heparinized saline (50 IU/mL) to measure changes in hemodynamic parameters using an MLT 0699 pressure transducer. Lab Chart Pro Software Version 7 displayed the amplified signals acquired with a PowerLab (4/25) system. Before delivery of any drug, the exposed region was covered with a damp cotton swab and left to equilibrate for 15 ± 5 min. Standard 1 μg/kg dosages of acetylcholine and adrenaline were administered to induce hypotension and hypertension, respectively, before testing.93

Isoproterenol-Induced Chronic Myocardial Infarction. The 21-day study was designed to induce MI. Normal saline was used to dissolve Fip.Cr and isoproterenol (ISO). Thirty-five rats weighing 120−170 g were randomly divided into seven groups and housed in separate cages, with five SD rats per group.
1st Group: The negative control (NC) group was orally administered 10 mL/kg bw of normal saline.
2nd Group: The intoxicated control (IC) group was administered a dosage of 5 mg/kg bw of ISO subcutaneously daily for 14 days (from the experiment's eighth day to its twenty-first day).
3rd Group: The group treated with the standard verapamil (VRP) was administered a dosage of 10 mg/kg bw of verapamil orally daily for 21 days.
4th Group: The group treated with the standard carvedilol (CAR) was administered a dosage of 10 mg/kg bw of carvedilol orally daily for 21 days.
5th Group: The group treated with the hydromethanolic extract (FLD) was administered a dosage of 100 mg/kg bw of Fip.Cr orally daily for 21 days.
6th Group: The group treated with the hydromethanolic extract (FHD) was administered a dosage of 200 mg/kg bw of Fip.Cr orally daily for 21 days.
7th Group: The control group treated with the hydromethanolic extract (FWI) was administered a dosage of 200 mg/kg bw of Fip.Cr orally daily for 21 days. This cohort was used to assess the effects of Fip.Cr without ISO on the model parameters.
After 1 h of pretreatment with verapamil, carvedilol, or Fip.Cr, subcutaneous injections of ISO (5 mg/kg) were administered for 14 consecutive days, beginning on the eighth day and ending on the twenty-first day, except for the seventh group, which did not receive ISO.94
Sample Collection. Twenty-four hours after the last injection of ISO, the rats were anesthetized with 40 mg/kg of intraperitoneal ketamine, and retro-orbital sinus blood was collected into coagulant and anticoagulant (EDTA-containing) tubes to determine biochemical indicators. Serum and plasma were separated from the blood samples by centrifugation at 4500 rpm for 15 min at 4 °C, and the samples were then frozen at −20 °C. The animals were euthanized via cervical dislocation, and cardiac tissue was extracted for analysis. For RT-PCR investigation, the cardiac tissues were immediately rinsed with ice-cold saline, immersed in TRIzol, and frozen. For histological studies, cardiac tissues were preserved in 10% formalin. Rat feces samples were collected and frozen for RT-PCR microbiota analysis.
qRT-PCR was used to examine the mRNA expression levels of ASC, NLRP3, caspase-1, TNF-α, IL-1β, and IL-6. According to the manufacturer's guidelines, TRIzol reagent (Invitrogen, Carlsbad, California, USA) was used to isolate and purify total RNA from rat heart tissues after homogenization of the frozen tissues. cDNA was subsequently synthesized from the total RNA with an M-MLV reverse transcription enzyme kit according to the manufacturer's instructions. mRNA levels of ASC, NLRP3, caspase-1, TNF-α, IL-1β, and IL-6 were quantified by qPCR using SYBR-Green Supermix (Bio-Rad, Hercules, California, USA) after reverse transcription. The data were analyzed with the 2^(−ΔΔCt) method, with GAPDH as the housekeeping gene for the mRNA study.
5.8.2. Modeling Protein Homology. The three-dimensional structure of a protein is determined by homology modeling based on the sequence similarity between at least one PDB template and the UniProt sequence. This approach is predicated on the notion that a protein's structural conformation is more strongly conserved than its amino acid sequence. The ExPASy ProtParam application (https://web.expasy.org/protparam, accessed on June 2, 2023)99 was used to calculate numerous chemical and physical characteristics, including the predicted isoelectric point, extinction coefficient, molecular weight, instability index, total positively and negatively charged residues, amino acid content, GRAVY, and aliphatic index.
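The descriptors listed above were obtained with the ExPASy ProtParam web tool. As a hedged local analogue (not the authors' workflow), Biopython's ProtParam module can compute several of the same quantities; the short sequence below is a placeholder rather than the CAC1C_HUMAN sequence, and the aliphatic index is computed by hand from Ikai's formula since Biopython does not expose it directly.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # placeholder sequence, not the study target
pa = ProteinAnalysis(seq)

# Amino acid composition as mole percent, then Ikai's aliphatic index
pct = {aa: 100.0 * frac for aa, frac in pa.get_amino_acids_percent().items()}
aliphatic_index = pct["A"] + 2.9 * pct["V"] + 3.9 * (pct["I"] + pct["L"])

print("Molecular weight (Da): %.1f" % pa.molecular_weight())
print("Isoelectric point:     %.2f" % pa.isoelectric_point())
print("Instability index:     %.2f" % pa.instability_index())
print("GRAVY:                 %.3f" % pa.gravy())
print("Aliphatic index:       %.1f" % aliphatic_index)
print("Negatively charged residues (D+E):", seq.count("D") + seq.count("E"))
print("Positively charged residues (K+R):", seq.count("K") + seq.count("R"))
```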
Templates were identified from the UniProt sequence by a BLAST homology search against the NCBI PDB database; sequence alignment was performed with ClustalW, and a model was formulated and subsequently validated during homology modeling (Schrodinger suite 2021−3, Maestro 12.9). Ramachandran plots and the Protein Reliability Report indicated that the initial protein model did not meet the quality standards.
The built model was therefore refined using the Prime refinement loop and Prime minimization modules, with the VSGB solvation model and the OPLS4 force field. PROCHECK Ramachandran plot analysis was then run on the minimized protein model through the SAVES v6.0 web server (https://saves.mbi.ucla.edu/), accessed on June 3, 2023.100,101

5.8.3. Network Pharmacology Analysis. We utilized the techniques previously outlined by Xiao et al.102 for the network pharmacology analysis.
Prospective Target Evaluation. The Swiss Target Prediction and DrugBank databases were used to predict the proteins most likely to be targeted by the bioactive compounds. A probability-based cutoff value of 0.03 led to the selection of 307 targets.
The DisGeNET (https://www.disgenet.org/), PubMed, and GeneCards databases were queried on May 21, 2023, to identify disease targets associated with the term "hypertension". The VarElect database (https://ve.genecards.org/), accessed on May 22, 2023, was used to compute genetic and phenotypic disease-association scores for the hypertension targets. The top 150 targets were retained for the intersection analysis between bioactive-compound targets and disease targets, which was carried out with Venny 2.1, a web-based tool (https://bioinfogp.cnb.csic.es/tools/venny/, accessed on May 22, 2023); the relationship between the targets of the bioactive substances and the disease-associated proteins is depicted in a Venn diagram. ShinyGO version 0.77 was used for KEGG pathway enrichment and Gene Ontology analysis of the overlapping targets. The top ten KEGG pathways and GO biological processes were obtained using the "Human" species setting and an FDR cutoff of 0.05; pathways were ranked by fold enrichment and selected on the basis of FDR.
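The intersection of compound targets and hypertension targets described above can be reproduced with simple set operations; the gene symbols below are placeholders used only to show the mechanics, not the actual target lists of this study.

```python
# Hypothetical target lists (gene symbols) -- placeholders, not study data
compound_targets = {"ACE", "CACNA1C", "NOS3", "ADRB1", "REN", "HTR2A"}
hypertension_targets = {"ACE", "AGT", "CACNA1C", "NOS3", "EDN1", "ADRB1"}

overlap = sorted(compound_targets & hypertension_targets)
print("Shared targets (candidates for enrichment):", overlap)
print("Compound-only targets:", sorted(compound_targets - hypertension_targets))
```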
Networks Construction. Protein−protein interaction (PPI) data for the hypertension-related targets were retrieved from the STRING database and imported into Cytoscape 3.9.1. The Compounds−Targets−Disease (CTD) and Compounds−Targets−Pathways (CTP) networks were also constructed with Cytoscape 3.9.1.
Target Analysis and Pathways Evaluation. MCODE was used to detect clusters (densely connected regions) in the PPI network. A separate list of putative FIP targets falling within the MCODE clusters for hypertensive illness was compiled; these common targets were considered potential hub proteins for FIP in hypertension treatment. GenomeNet was used to identify the components linked with these targets, and, based on the KEGG pathway enrichment results, the pathways comprising the key proteins were identified and considered further.
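MCODE itself is a Cytoscape plugin; as a rough, hedged stand-in for cluster and hub detection, the sketch below uses networkx to rank nodes by degree and to extract a densely connected k-core from a toy PPI edge list. The edges are illustrative only and are not the STRING output used in the study.

```python
import networkx as nx

# Toy PPI edge list (illustrative only)
edges = [("ACE", "AGT"), ("ACE", "REN"), ("ACE", "NOS3"),
         ("AGT", "REN"), ("AGT", "NOS3"), ("REN", "NOS3"),
         ("NOS3", "CACNA1C"), ("NOS3", "EDN1")]

G = nx.Graph()
G.add_edges_from(edges)

# Rank candidate hub proteins by degree
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)
print("Degree ranking:", hubs)

# Densely connected region: nodes surviving a 3-core decomposition
core = nx.k_core(G, k=3)
print("3-core nodes:", sorted(core.nodes))
```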
5.8.4. Molecular Docking. Following the network pharmacology analysis, molecular docking was used to explore the binding affinity and stability of the target proteins with the significant ligands. The two-dimensional ligand structures were retrieved from PubChem (https://pubchem.ncbi.nih.gov, accessed on June 4, 2023) in SDF format and submitted to LigPrep for ligand minimization, ionization, and optimization. The OPLS4 force field was used to optimize the ligands and minimize their energies, which enhanced the efficacy and precision of the energy-optimized protein−ligand conformational predictions. Epik was used to generate ionization states appropriate for pH 7.0 ± 2.0 for the putative protomers. Each ionized ligand was converted into its tautomeric forms, and the desalt tool was used to retain the ligand component with the highest atom count. Up to 10 stereoisomers were generated per ligand with the stereoizer while retaining the acquired information on chiral centers.
Using the protein preparation workflow, hydrogen bond assignment, het-state minimization, optimization, and ionization of the resulting model protein's three-dimensional structure were performed. Redundant binding sites were removed from the developed model protein structure to assess whether the ligand−protein interaction is multimeric or dimeric. Unstructured amino acid residues and het groups were rearranged with the Chemical Components Dictionary database. The charges typically associated with metal ions and their neighboring atoms were adjusted, and the number of covalent bonds between metal ions and adjoining atoms was set to zero. Disulfide bonds were formed where protein sulfur atoms were less than 3.2 Å apart. The Prime module validated and rectified structural defects in the protein, especially those resulting from missing side-chain atoms. Across a pH range of 7.4 ± 2.0, het-group protonation and metal/cofactor charge states were determined with the Epik tool, considering groups within 5.0 Å. Hydrogen atoms were assigned to water molecules and het groups within 5.0 Å of the protein structure. The PROPKA application at pH 7.4 optimized the hydrogen-bond network throughout the protein structure. Finally, a restrained minimization with the OPLS4 force field reduced the energy and improved the protein geometry.
The binding sites were determined via receptor grid generation. Ligand binding to the protein pockets was facilitated in part by predicting the grid-cube coordinates from the literature and from the SiteMap module. For the VGCC model (PDB: 8EOI), the grid was centered at x = 175.22, y = 203.5, z = 169.06. A partial charge threshold of 0.25 was applied, and the van der Waals radii of nonpolar receptor atoms were scaled by a factor of 1, softening the receptor potential in the nonpolar regions.
Extra-precision (XP) Glide docking was performed against the previously prepared receptor grid file to assess the docking fitness of the protein with the prepared ligands. The ligand partial charge threshold and van der Waals radii scaling factor were 0.15 and 0.80, respectively. Flexible ligand sampling was enabled to generate conformers, including ring conformations and inversions of nonring nitrogens, and biased sampling of torsions was applied for amide bonds. Epik state penalties were added to the docking scores.
Prime MM-GBSA was applied to the extra-precision docking results to compute the binding energies of the ligands to the protein structure, using the VSGB solvation model and the OPLS4 force field (Schrodinger suite 2021−3, Maestro 12.9).103 Inhibition Constant. The inhibition constant was calculated from the ligand's computed binding free energy.
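The inhibition constant is commonly estimated from a predicted binding free energy via Ki = exp(ΔG/RT); whether this relation is applied to the Glide docking score or to the MM-GBSA energy is a modelling choice not stated in the text, so the sketch below is only an illustration of the relation with an assumed ΔG value.

```python
import math

R = 1.987e-3   # gas constant, kcal / (mol K)
T = 298.15     # temperature, K

def inhibition_constant(delta_g_kcal_per_mol: float) -> float:
    """Ki in mol/L from a binding free energy, using dG = RT * ln(Ki)."""
    return math.exp(delta_g_kcal_per_mol / (R * T))

# Assumed example value, not a result from this study
dg = -8.0  # kcal/mol
ki = inhibition_constant(dg)
print("Ki = %.2e M (~%.1f uM)" % (ki, 1e6 * ki))
```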
5.9. Statistical Analysis. The findings of the experiments are expressed as means ± SD. Concentration−response curves were plotted as logarithmic sigmoidal dose−response graphs. The EC50 (median effective concentration) with a 95% CI (confidence interval) was determined from the sigmoidal dose−response curves using a nonlinear regression fit. The in vivo results were evaluated using one-way ANOVA followed by Dunnett's test in GraphPad Prism 9 to ascertain statistical significance. Results were considered significant at p ≤ 0.05.
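The study used GraphPad Prism for the sigmoidal fits; as a rough SciPy analogue, the sketch below fits a four-parameter logistic curve to log-transformed concentrations and reads the EC50 off as a fit parameter. The data points are synthetic placeholders, not measurements from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(logc, bottom, top, log_ec50, hill):
    """Four-parameter logistic dose-response curve on log10(concentration)."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_ec50 - logc) * hill))

# Synthetic example data: concentration (mg/mL) and % response
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
resp = np.array([3.0, 8.0, 22.0, 45.0, 71.0, 90.0, 97.0])

popt, pcov = curve_fit(logistic4, np.log10(conc), resp,
                       p0=[0.0, 100.0, np.log10(0.3), 1.0])
ec50 = 10.0 ** popt[2]
ec50_se = np.log(10) * ec50 * np.sqrt(np.diag(pcov))[2]   # delta-method standard error
print("EC50 = %.3g mg/mL (SE %.2g)" % (ec50, ec50_se))
```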
Supporting Information
The Supporting Information is available free of charge at https://pubs.acs.org/doi/
Figure 1. Effect of Fip.Cr on endothelium-intact and endothelium-denuded aortic tissue preparation.
Figure 3. Concentration−response curves illustrating the effect of Fip.Cr on heart rate and force of contraction in rat atrial preparation (A) in the absence and (B) in the presence of atropine (1 μM). (C) Effect of verapamil on heart rate and force of contraction in rat atrial preparation. Values are presented as the mean ± SD (n = 5).
Figure 6. Effects of Fip.Cr on cardiac (A) hypertrophy biometric and (B) serum biochemical indicators in myocardial infarction.
Figure 8. Photomicrographs of myocardial tissues from all experimentally treated groups (H&E staining, 10× magnification). (A) Negative control group showed normal cardiac architecture. (B) ISO-administered group exhibited (a) necrosis accompanied by (b) inflammatory cell infiltration and (c) local interstitial tissue edema (black arrows). The (C) verapamil, (D) carvedilol, (E) Fip.Cr 100 mg/kg, and (F) Fip.Cr 200 mg/kg groups ameliorated ISO-induced myocardial infarction damage, as shown by a substantial decrease in histological abnormalities. (G) The Fip.Cr 200 mg/kg group without isoprenaline-induced MI exhibited no major histopathological alterations compared with the control group.
a CNS PERM: values above −2 indicate capability of penetration; QPlogPo/w: octanol/water partition coefficient (lipophilicity; predicted range −2 to 6.5); QPlogHERG: blockage of HERG K+ channels (predicted IC50 value greater than −5); QPlogKhsa: binding to human serum albumin (anticipated to range from −1.5 to 1.5); QPlogKp: predicted skin permeability (between −8 and −1 cm/s); QPPCaco: Caco-2 cell permeability.

Figure 9A illustrates the prepared CAC1C_HUMAN (Q13936) protein model based on 8EOI, and Figure S3 depicts the secondary-structure sequence alignment of 8EOI. PROCHECK was used to check the model, and the Ramachandran plot, showing the distribution of amino acid phi and psi angles, validated the Q13936 model: 83.2% of the amino acid residues of the CAC1C_HUMAN (Q13936) protein lay in the most favored regions and 0.2% in disallowed regions (Figure 9B). When adequately aligned and refined, a model built from a template of low sequence similarity may be suitable for molecular docking research. 2.4.3. Network Pharmacology Analysis. 2.4.3.1. Prospective Target Evaluation. The Swiss Target Prediction and DrugBank databases were queried to identify possible protein targets for
Figure 10. Functional enrichment based on (A) Gene Ontology (GO) and (B) Kyoto Encyclopedia of Genes and Genomes (KEGG), displaying the top ten enriched terms by number of genes and fold enrichment.
Figure 11. (A) PPI STRING network of hypertension; (B) CTD network of bioactive compounds with hypertension disease target genes; CTP networks of (C) GO biological processes and (D) KEGG pathways of bioactive compounds with hypertension disease target genes; (E) Fip.Cr hub proteins and MCODE cluster network. Note: interactive Fip.Cr key targets in hypertension therapy. PPI: protein−protein interaction; CTD: Compounds−Targets−Disease; CTP: Compounds−Targets−Pathways.
5.5. Ethics Statement. All experiments were conducted following the Animals (Scientific Procedures) Act of the United Kingdom, enacted in 1986, and associated recommendations; the ARRIVE guidelines; National Institutes of Health recommendations (1978 revision of NIH Publication Number 8040); or Directive 2010/63/EU for animal experimentation for the use and care of laboratory animals.
Table 1. continued
a Rt: retention time; m/z: mass-to-charge ratio.
Table 2. Evaluating Body Weight and Weight Gain Percentage in Acute and Subacute Toxicity Studies of Rats with Fip.Cr Treatment a
Table 3. Evaluation of Absolute Organ Weight (g) and Relative Organ Weight Percentage in Acute and Subacute Toxicity Studies of Rats with Fip.Cr Treatment a
Table 6. ADMET Analysis Based on QikProp and admetSAR a
Table 7. KEGG Pathways of FIP Hub Genes Associated with Hypertension Therapy
10.1021/acsomega.3c07655. LC ESI-MS/MS spectra in negative and positive mode for tentative compounds of Fumaria indica (Hausskn.) Pugsley hydromethanolic extract; body weight gain percentage of individual rats by weeks: in the acute toxicity study on (A) 7th day and (B) 14th day; in the subacute toxicity study on (C) 7th day, (D) 14th day, (E) 21st day, and (F) 28th day; secondary structural sequence alignment of the 8EOI; top 150 pathogenic target genes related to hypertension analyzed via VarElect; most probable macromolecular targets of bioactive compounds; Venn diagram between compounds' macromolecular targets and hypertension genes; top ten GO biological processes of bioactive compounds of Fumaria indica (Hausskn.) Pugsley hydromethanolic extract for hypertension-related target genes; top ten KEGG pathways of bioactive compounds of Fumaria indica (Hausskn.) Pugsley hydromethanolic extract for hypertension-related target genes; network analysis of hypertensive pathogenic target genes' interactions with phytoconstituents; network analysis of hypertensive pathogenic target genes' interactions with phytoconstituents and GO biological process (BP); network analysis of hypertensive pathogenic target genes' interactions with phytoconstituents and KEGG pathways; and interaction of Fip.Cr bioactive compounds and verapamil with CAC1C_HUMAN (Q13936) (PDF)

Corresponding Author: Ambreen Aleem − Department of Pharmacology, Faculty of Pharmacy, Bahauddin Zakariya University, Multan 60800, Pakistan; orcid.org/0000-0002-7722-2643; Phone: +923226100313; Email: ambreenaleem@hotmail.com, ambreen.aleem@bzu.edu.pk
Determination of Heavy Metals and Organochlorine Pesticides in the Leaves and Flowers from Linden Trees in Kırklareli Province
For five different regions in Kırklareli province, heavy metals such as Pb, Ni, Cu, Mn, Cd, Cr, Co, Zn, Mo, and Fe in a mixture of leaves and flowers from linden trees (Tilia tomentosa L.) were analyzed by flame atomic absorption spectroscopy after the samples were dissolved with a microwave method. In addition, organochlorine pesticides, namely ∑BHC [α-BHC, β-BHC, γ-BHC, and δ-BHC], ∑DDT [4,4'-DDD, 4,4'-DDE, and 4,4'-DDT], α-Endosulfan, β-Endosulfan, Endosulfan sulfate, Heptachlor, Heptachlor-endo-epoxide, Aldrin, Dieldrin, Endrin aldehyde, Endrin ketone, Endrin, and Methoxychlor, were determined in these samples by gas chromatography mass spectrometry after the samples were prepared for analysis with the QuEChERS method. The metal concentrations in the samples ranged from 45.3 to 268 mg/kg for Mn, 0.25 to 18.8 mg/kg for Cu, 11.5 to 46.1 mg/kg for Zn, 128 to 1310 mg/kg for Fe, 10.4 to 38.6 mg/kg for Mo, 0.82 to 1.34 mg/kg for Cd, 0 to 6.45 mg/kg for Ni, 0 to 19.2 mg/kg for Pb, and 0 to 8.25 mg/kg for Cr. Moreover, the concentrations of organochlorine pesticides in the samples were usually determined to be lower than the maximum residue level values given in the pesticide residue limit regulation of the Turkish Food Codex.
Introduction
Heavy metals and pesticides, which occur in various forms in ecological systems, cause serious damage to the biological structures of living cells when taken up at high concentrations [1]. Long-term exposure to chlorinated pesticides and to some heavy metals, such as Cd, can lead to cancer because of their endocrine-disrupting effects, and short-term exposure at high doses can result directly in death [2][3][4][5]. Trace elements play significant roles in plant structures and life functions. Some elements are quite toxic to living beings, while others are essential; however, even essential elements can display toxic effects if they are taken up at high doses [6].
It is known that metals such as Pb, Ni, and Cr are toxic and carcinogenic, while metals such as Fe, Zn, Mn, and Cu are necessary for our biological systems [7]. For human beings, the metal intake from foods should not exceed 10−18 mg for Fe, 15 mg for Zn, 2.5−5 mg for Mn, and 2−3 mg for Cu [8]. The World Health Organization has reported acceptable weekly intakes of 0.007 and 0.025 mg/kg for Pb and Cd, respectively [9]. Acute Cd poisoning in humans is extremely damaging to tissues, causing high blood pressure, kidney damage, and destruction of testicular tissue and red blood cells [10][11][12]. Cd accumulates mostly in the kidneys and can lead to kidney failure at high levels. High Cd intake can also cause stomachache, vomiting, bone fractures, reproductive impairment, immune system impairment, DNA damage, and cancer [13]. Furthermore, there is sufficient evidence for a relation between lung cancer and nickel sulfate and nickel sulfides [14]. Pb is an extremely toxic metal; according to the World Health Organization, the acceptable weekly Pb intake is 25 µg/kg body weight because of its extremely toxic properties [15,16]. Co is an essential component of vitamin B12 in living species, but it is very toxic at high intake levels [17]. Also, Cu levels higher than 2 mg/kg in drinking water have been found to cause vomiting, stomachache, and nausea [17,18].
Organochlorine pesticides are compounds that need to be monitored and controlled for pollution control and health protection because they remain active in the environment for a long time, tend to bioaccumulate, and are potentially hazardous to the environment and to living organisms. Organochlorine pesticides can persist without degradation for long periods, dissolve in lipids, and undergo only very slow biological degradation and biotransformation. Therefore, they exert adverse effects by biomagnifying through food webs, and they have been shown to reach human beings via the food chain [19]. Insecticides can travel from one region to another by means of winds and insect movements and affect the environment adversely. Among the insecticides, the organochlorine compounds most frequently found in the atmosphere are DDT, α-HCH, γ-HCH (lindane), heptachlor, and dieldrin [20,21].
Plants take up pesticides directly or indirectly from soil, water, and air, and these pesticides then join the food chain when the plants are consumed by living species [22][23][24][25][26]. Hence, pesticides and their amounts in nature need to be tracked and determined continuously in order to keep their damage under control.
Turkey is one of the developing countries of the world, but rapid industrialization and uncontrolled activities have brought about environmental problems. The amounts of pollutants and heavy metals have increased in Turkey because of increasing traffic density, urbanization, and fast industrialization [27]. In this study, heavy metals (Pb, Ni, Cu, Mn, Cd, Cr, Co, Zn, Mo, and Fe) and organochlorine pesticides (∑BHC: [α-BHC, β-BHC, γ-BHC, and δ-BHC], ∑DDT: [4,4'-DDD, 4,4'-DDE, and 4,4'-DDT], α-Endosulfan, β-Endosulfan, Endosulfan sulfate, Heptachlor, Heptachlor-endo-epoxide, Aldrin, Dieldrin, Endrin aldehyde, Endrin ketone, Endrin, and Methoxychlor) were analyzed in a mixture of leaves and flowers from linden trees in Kırklareli, Turkey. For this aim, flame atomic absorption spectroscopy (FAAS) was used after the samples were dissolved with a microwave method, and gas chromatography mass spectrometry (GC-MS) was used after the samples were prepared for analysis with the QuEChERS method.
Study Area and Sample Collection
As a biomonitor for the determination of heavy metals and pesticides, leaves and flowers from linden trees were collected in five different regions of Kırklareli, Turkey, shown in Figure 1. Kırklareli province borders the province of Istanbul to the southeast, the province of Tekirdağ to the south, and the province of Edirne to the west; it also neighbours Bulgaria to the north, and its total population, including villages, is 356,050 (year 2017). In Figure 1, the traffic densities of the sample collection areas increase in the order 1<4<2<3<5, and the coordinates of these sites are given in Table 1. Site 1 was chosen as the control point because its population is only 250 and agriculture and livestock breeding are the dominant activities there.
Linden trees (Tilia tomentosa L.) grow abundantly in Kırklareli. Linden belongs to the plant family Tiliaceae, and a linden tree can reach a height of 20 to 30 meters. The leaves of linden trees grow to 5−10 cm, and the leaves and flowers are widely used to make tea in Turkey; linden tea consumption increases in the cold months because of its curative effects. The samples collected in the five regions were washed with tap water and then with ultra-distilled water. After that, the samples were dried at 35 °C and homogenized in an agate mortar, each sample separately. The materials used were soaked in HNO3 (10%, v/v) before being rinsed with distilled water and dried in an incubator.
Preparation of Metal and Organochloride Standard Solutions
From the 1000 ppm stock solutions, 50 ppm solutions were prepared by dilution with ultra-distilled water. These standard solutions were then adjusted to 0.01−1.0 mg/L for Cd, 0.1−2.0 mg/L for Pb, 0.5−10.0 mg/L for Fe, 0.1−2.5 mg/L for Cr, 0.5−5.0 mg/L for Mo, 0.1−2.0 mg/L for Ni, 0.1−5.0 mg/L for Mn, 0.05−2.5 mg/L for Cu, and 0.1−2 mg/L for Zn measurements. For the organochlorine pesticides, 1 mL of a 2000 ng/µL stock solution was prepared in a toluene/hexane (1/1) mixture. From this stock solution, standard solutions of 5, 10, 25, 50, 100, 500, and 1000 ppb were prepared by dilution in hexane. In the optimization studies, the 100 ppb organochlorine pesticide standard was used.
Methods
Metal analyses were performed by FAAS (Agilent 240 Duo), and the measurement parameters are presented in Table 2. The correlation coefficients (R²) of all metal standards, determined from their calibration curves, were higher than 0.99. The organochlorine pesticides selected in SIM mode and their retention times at GC-MS are presented in Table 3; the correlation coefficients (R²) of all organochlorine pesticide standards were likewise determined to be higher than 0.99 from their calibration curves.
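As an illustration of the calibration check mentioned above, the sketch below fits a straight line to hypothetical standard concentrations and instrument responses and reports R²; the numbers are placeholders chosen to resemble the Cd working range, not the calibration data of this study.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical Cd calibration standards (mg/L) and absorbance readings
conc = np.array([0.01, 0.05, 0.1, 0.25, 0.5, 1.0])
absorbance = np.array([0.004, 0.019, 0.041, 0.099, 0.202, 0.405])

fit = linregress(conc, absorbance)
print("slope = %.4f, intercept = %.4f, R^2 = %.4f"
      % (fit.slope, fit.intercept, fit.rvalue ** 2))

# A sample absorbance is converted back to concentration with the fitted line
sample_abs = 0.150
print("sample conc = %.3f mg/L" % ((sample_abs - fit.intercept) / fit.slope))
```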
Solubilization of Linden Samples by Microwave for FAAS
0.5 g of linden sample was solubilized in a microwave oven using a mixture of nitric acid and hydrogen peroxide (1:2, v/v). The analyses of Mn, Mo, Cu, Fe, Zn, Pb, Ni, Cd, and Cr in the prepared linden samples were performed with six parallel samples by FAAS. The samples for Fe, Zn, and Mn were diluted ten times, and the dilutions were taken into account in the calculations.
Extraction method for GC-MS
Organochlorine pesticides were analyzed using the QuEChERS method. After the linden samples were homogenized, 10 g portions were placed into 50 mL falcon centrifuge tubes. The samples were then extracted by adding a mixture of 5 mL of ultra-distilled water and 40 mL of hexane-dichloromethane (1:1, v/v) to the falcon tubes on a vortex mixer. After that, the samples were centrifuged at 7000 rpm and 5 °C for 5 min. After the separation of the solid-liquid phase and the organic phase, the QuEChERS method was applied. First, the organic phases were vortexed with an AOAC 2007.01 extraction kit (1.5 g of sodium acetate and 6 g of MgSO4, to remove water from the organic phase) and centrifuged at 7000 rpm and 5 °C for 5 min. The supernatants (QuEChERS dSPE) were passed through a cleaning column containing 400 mg of PSA and 1200 mg of MgSO4 and collected into centrifuge tubes using a vacuum manifold system. They were subsequently dried at temperatures below 35 °C under a nitrogen atmosphere in an evaporation system. The dried samples were then dissolved in 1 mL of hexane and transferred into 2 mL vials after passing through 0.45 µm filters. Finally, the prepared samples were analyzed with GC-MS, with each sample injected five times for the calculations.
Metal analyses of SRMs and Linden Samples
Microwave solubilization is one of the most widely used methods for metal analysis in solid samples by atomic absorption spectroscopy. To test the accuracy of the solubilization method, various standards were used, and their analysis results are given in Table 4. The optimized solubilization method was applied to the spinach SRM and the linden samples, and the analyses of Mn, Cu, Zn, Fe, Mo, Cd, Ni, Pb, and Cr were carried out by FAAS. The metal contents of the linden samples solubilized by microwave are presented in Table 5. As seen in Table 5, the heavy metals Ni, Pb, and Cr were not detected in linden samples collected from the regions with low traffic density (traffic densities: 1<4<2<3<5), but the metal concentrations of linden samples collected from regions near highways and industrial plants exceed the daily uptake limit of a person (especially for regions 3 and 5). Furthermore, the concentrations of Zn, Fe, Mo, and Cd in samples collected from regions 3 and 5 were higher than in the other regions. The lowest metal concentrations in the linden samples were found in region 1, except for Fe, and the high Fe concentration there can be related to the soil structure of region 1.
The lowest Fe concentration was measured as 128 ± 8 mg/kg (dry weight), while the highest Fe concentration was 1310 ± 120 mg/kg (dry weight). Thus, the leaves and flowers of linden trees can serve as a biomonitor for Fe.
Organochloride Pesticide Analyses of Linden Samples
According to the pesticide residue limit regulation of the Turkish Food Codex [28], the MRL (maximum residue level) values for the linden samples are given in Table 6. To test the accuracy of the analysis method, recovery experiments were carried out using the standard addition method, and the sensitivity of the analysis for each organochlorine pesticide was found to be good. In the standard addition step, each sample was separated into four portions; the first portion was made up to 1 mL with hexane, and, using the 1000 ppb standard solution, the second, third, and fourth portions were spiked to 10, 25, and 50 ppb, respectively, and then made up to a final volume of 1 mL. All solutions were measured by GC-MS, and the correlation coefficients (R²) determined from the resulting calibration curves were higher than 0.99. The real concentration values calculated in this way for the linden samples, i.e. the organochlorine pesticide concentrations, are presented in Table 6.
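The standard-addition quantification described above can be summarised as a linear extrapolation: the GC-MS response is regressed against the added concentration, and the unspiked concentration follows from the x-intercept (intercept/slope), assuming equal final volumes for all four portions. The response values in this sketch are placeholders, not measured peak areas from the study.

```python
import numpy as np

# Added pesticide concentration in each portion (ppb) and hypothetical GC-MS peak areas
added = np.array([0.0, 10.0, 25.0, 50.0])
area = np.array([1250.0, 2980.0, 5630.0, 9950.0])

slope, intercept = np.polyfit(added, area, 1)
c_sample = intercept / slope                      # concentration in the unspiked extract (ppb)
r2 = np.corrcoef(added, area)[0, 1] ** 2

print("slope = %.1f area/ppb, R^2 = %.4f" % (slope, r2))
print("estimated concentration in extract = %.2f ppb" % c_sample)
```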
The concentrations of ∑DDT, Heptachlor, Heptachlor-endo-epoxide, α-Endosulfan, β-Endosulfan, Endosulfan sulfate, Endrin ketone, and Methoxychlor in the samples were determined to be lower than their maximum residue level values (MRLs). On the other hand, the concentrations of ∑HCH, Aldrin, and Dieldrin were close to their MRL values. Also, the Endrin and Endrin aldehyde values in the samples collected from region 5 were 1.5−2 times higher than their MRLs. Moreover, the Endrin concentrations were the highest among the organochlorine pesticides in the samples collected from all regions, except for Endrin aldehyde in the sample collected from region 5. Pesticides can reach the leaves and flowers of linden trees not only from the soil but also from the air by winds; therefore, pesticides deposited on the leaves and flowers may penetrate them or be adsorbed on their surfaces. These high Endrin concentrations may be caused by intensive agricultural activities in these regions or their neighboring regions.
Conclusions
This work was carried out to obtain information about the metal and organochlorine pesticide concentrations in the leaves and flowers of linden trees in Kırklareli province. Flame atomic absorption spectroscopy was used after the samples were dissolved with a microwave method, and gas chromatography mass spectrometry was used after the samples were prepared for analysis with the QuEChERS method. The heavy metal concentrations in the linden samples collected from urban settlements with high traffic densities were usually higher than those in the linden samples collected from rural areas. Also, the concentrations of organochlorine pesticides in the samples were usually lower than their maximum residue level values, but the Endrin and Endrin aldehyde concentrations in the samples collected from region 5 were 1.5−2 times higher than their MRLs.
Figure 1. Collection areas of the leaves and flowers from linden trees in Kırklareli province.
Table 2.
Figure 3.
Table 1. Coordinates of each sampling region in Kırklareli province.
2.2. Reagents. All chemical reagents used were from Merck and of analytical grade. Metal standards (SRM; standard reference material) were NIST stock solutions of 1000 ppm. Dr. Ehrenstorfer organochlorine pesticide Mix 2 (HCH;
Table 3. Selected organochlorine pesticides in SIM mode and retention times of the organochlorine pesticides at GC-MS.
Table 4. The certificate values, measured values, and recovery percentages of various metals in spinach standard (NIST-SRM
Table 5. Metal concentrations in leaves and flowers from linden trees (mg/kg, dry weight) (n=6).
Table 6. Organochlorine pesticide concentrations (µg/kg, dry weight) in the leaves and flowers from linden trees and their MRL values (mg/kg).
Improved prescription for winding an electromagnet
We describe an improvement on the magnetic scalar potential approach to the design of an electromagnet, which incorporates the need to wind the coil as a helix. Any magnetic field that can be described by a magnetic scalar potential is produced with high fidelity within a Target region; all fields are confined within a larger Return. The helical winding only affects the field in the Return.
In practical uses of magnetism it is sometimes desirable to be able to create a very well characterized magnetic field; for example, a field that is uniform within a defined region (hereafter, the Target), and with a surrounding region (the Return) which confines the field, so that there is no magnetic disturbance outside the device. Ref [1] has described an algorithm to do this that makes use of the magnetic scalar potential to determine the surface current densities on the interfaces.
Here is how the algorithm works: the specified magnetic field inside the Target is represented by a scalar potential such that H_target(r) = −∇Φ_target. A current field K(r) at the surface of the Target is constructed so that only the normal component of H is present just outside the target. It can be shown that this surface current density flows along the equipotentials of Φ_target; having the current between any two equipotentials be equal to the potential difference ensures that the tangential components of the field are terminated. This establishes the physical interpretation of magnetic scalar potential as a "source potential" in analogy with charge being the source of electric flux lines.
The Return envelopes the Target, excepting possibly parts of the Target where there is no normal field. The normal component of the field is already specified at the interface between the Target and the Return, and it is required to be zero at the exterior surface. Then the magnetic scalar potential Φ return defined in the Return is determined by Neumann boundary conditions. The current field on both the inner and outer surfaces of the Return is determined by this scalar potential in the same way as above.
The result is a complete description how to construct surface current distributions that exactly produce within the Target any field configuration that is consistent with Maxwell's equations. Choosing equal increments of the scalar potentials divides the surfaces into ribbons which can be turned into physical wires, all carrying the same current.
For the case that the Return is one region that completely envelopes the Target, a simplification of this algorithm is possible: as described, there are two current sheets lying right next to each other at the interface between the Target and the Return, and these can be merged. Then the current sheet on the surface is determined by the gradient of the difference between the two scalar potentials, and flows along contours of constant value of this difference. In the continuum limit (i.e. very fine wires) the surface current density is

K = ∇(δΦ) × n̂,     (1)

where the stream function [2][3][4] δΦ = Φ_return − Φ_target is the difference between the scalar potentials on the two sides of the surface and n̂ is the outward-directed normal. A small source of dissatisfaction with this algorithm for constructing designed-field coils is that what it prescribes is the current that should flow in closed loops around the surface of the Target, and on the surfaces of the Return. The effect of breaking each loop and connecting it to its neighbors to make a series circuit introduces the equivalent of a single wire lying transverse to the loops; its effect can be nearly canceled by running the return current wire right on top of the interconnections, but this leaves a magnetic dipole line source. Since the perfect field envisioned by the algorithm is otherwise only spoiled by exponentially small corrections due to wire discreteness, this would appear to be the largest design error in the constructed field.
Here we will describe a modification of the algorithm which removes this defect, allowing the construction of a more nearly perfect coil. In the next section we will work through a problem that is exactly solvable, but not quite trivial; later we will generalize this to arbitrary geometry.
II. SPHERICAL ELECTROMAGNET
Winding an infinite solenoid as a helix also introduces an axial current, but this does not affect the field inside it. This can be readily generalized to any azimuthally symmetric object. First consider the case that the field in the Target is zero, but there is an axial current on the surface of the Target. Let the symmetry axis be z, and the shape of the object be ρ(z). Current conservation requires that the current through any cross section be the same; then from outside this is indistinguishable from a long straight wire along the axis (this wire will have to actually exist beyond the object); the added field outside will be the corresponding field, which has only a φ̂ component. This adds no field component normal to the surface, and the parallel component is exactly canceled by the surface current. The field inside is unaffected. Adding the axially symmetric field inside the Target, we find the corresponding field in the Return by the algorithm described above, and add the currents and fields just constructed to find a consistent set of fields produced by wires that wind around the surfaces.
For the case of a sphere of radius P with only an axial current, the surface current density is K_axial = −θ̂ I/(2πP sin θ) (in spherical coordinates) and the field outside is H = φ̂ I/(2πr sin θ). We can represent this as a magnetic scalar potential H = −∇Φ_axial, where

Φ_axial = −Iφ/2π.     (2)

This is a multiply valued function; we can make it single valued by introducing a branch surface interrupting any path that wraps around the sphere or the wire extension. It has the same discontinuity I everywhere across the branch surface, which can be taken to be the half-plane φ = 0 for r > P — but note that the gradient of Φ_axial (the magnetic field) is continuous everywhere except at the poles θ = 0 and θ = π. Now consider how the magnetic potential construction describes the same sphere when it has a uniform magnetic field H_0 ẑ inside (but no axial current), and all magnetic flux is enclosed within a Return of radius Q. Inside, the magnetic potential is

Φ_target = −H_0 r cos θ.     (3)

The potential in the Return (P < r < Q) is

Φ_return = H_0 [P³/(Q³ − P³)] (r + Q³/2r²) cos θ.     (4)

At the boundary r = P the normal component of the magnetic field is continuous, −∂Φ_return/∂r = −∂Φ_target/∂r, while at the outer surface r = Q it vanishes, −∂Φ_return/∂r = 0. The surface current density that matches up the tangential magnetic fields flows along contours of constant Φ_return − Φ_target on the interface between the Target and the Return, and along contours of constant Φ_return on the outer surface of the Return. If we slice up the sphere surface at equal intervals of the magnetic potential, or equivalently at equal intervals of the coordinate z (call this interval D), each of the resulting rings on the surface of the Target is carrying the same current I = D S_inner, where S_inner = ∂(Φ_return − Φ_target)/∂z is the (constant) slope of the potential difference along the axis. The rings have the same extent in the coordinate z but varying width on the surface of the sphere. Now we make this set of rings helical, so that after one turn the bottom of this ring is higher (measured along the z axis) by the amount D. This adds an axial current I_helix = D S_inner directed along ẑ in the form of the surface current density K_axial. This modifies the potential in the Return Φ_return (4) by adding an axial potential of the form Φ_axial (2) given above. As explained above, this has no effect inside the sphere.
The interesting point is that in moving up one turn, the scalar potential difference Φ return − Φ target decreases by DS, and in going around one turn, the potential outside due to the axial current Φ axial increases by DS. This shows that the condition Φ return + Φ axial − Φ target = constant generates a helical winding for a spherical electromagnet that completely and efficiently covers the surface. The same construction applies to the outer surface of the return with the omission of Φ target ; observe that all of the current that was moving along the z axis on the Target will return along the outer surface in agreement with both Ampere's law and current conservation.
Including the effects of the axial currents, the magnetic potential in the Return is given by Φ̃_return = Φ_return + Φ_axial. The potential difference across the surface of the Target and across the outer surface of the Return determines the surface current density. Slicing each boundary surface along lines of constant potential difference in increments of I turns each one into a ribbon that winds around the surface, connecting up with the next ribbon. Explicitly, the relevant potential differences are Φ̃_return − Φ_target at the interface between the Target and the Return, and Φ̃_return on the exterior surface of the Return. The tangential (along the surface) gradient of the potential differences corresponds to a surface current density (Eq. 1) that twists about the sphere in such a way as to match the tangential component of the field inside with the tangential field outside according to Ampere's law. Beyond the Target, there is a pair of wires along the axis, carrying the current I to the outer surface of the Return. In constructing the spherical magnet, the field inside and the current I are independent parameters, which determine the width of the windings (or the spacing of the equivalent wires). Once the potentials have been chosen, the path of the windings is determined by the condition of constant potential difference. The combined potential Φ̃_return actually has a nonphysical discontinuity DS at each crossing of the branch cut, which must be built into numerical solutions, but the result is equivalent to the multi-valued Φ_axial potential with no discontinuities as described above, for which the entire winding is traced out by a single equipotential contour on both the inside and outside of the Return.
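The constant-potential-difference condition above implies that on the spherical Target the winding climbs linearly in z, advancing by D per turn. The following sketch, under that assumption and with an arbitrarily chosen winding sense, generates the Cartesian coordinates of such a helical winding path; it is an illustration only, not the authors' design code.

```python
import numpy as np

def spherical_winding(P=1.0, D=0.05, points_per_turn=200):
    """Helical winding on a sphere of radius P: z advances by D per 2*pi in phi."""
    n_turns = 2.0 * P / D                          # from z = -P to z = +P
    phi = np.linspace(0.0, 2.0 * np.pi * n_turns, int(points_per_turn * n_turns))
    z = -P + D * phi / (2.0 * np.pi)               # constant potential difference -> z linear in phi
    rho = np.sqrt(np.maximum(P**2 - z**2, 0.0))    # distance from the axis on the sphere
    return rho * np.cos(phi), rho * np.sin(phi), z

x, y, z = spherical_winding()
print("number of path points:", x.size, " final z/P:", z[-1])
```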
III. THE GENERAL CASE
The construction described above will work with little modification for any system with an axis of rotation; the only difference is that the construction of Φ return will entail solving the Laplace problem with Neumann boundary conditions in a more complicated geometry. However, it can also be generalized to any Target region of arbitrary shape and physically allowable magnetic field. The first step is to follow the basic algorithm [1] to learn how to produce a designed field, by finding the scalar potentials Φ target and Φ return . Equal intervals of δΦ describe ribbon loops on the surfaces which carry equal current. Though this doesn't yet specify how to make a coil, it approximates its form. The points (the four dots in Fig. 2) where δΦ is maximal and minimal on each surface are the places where the coil on a surface must start and end. For fields of dipolar character in a suitable Target and Return, there will be just one maximal and minimal point on each surface. Now we need to choose a "wiring diagram": Jumper wires connecting the minimal point on the Target to the maximal point on the Return, and vice versa for the other pair of extrema, with the current supply interpolated in one of these paths via a twisted pair. At this point we also decide how large a current I to use relative to the size of the field H 0 to be constructed (which determines the width of the ribbons, just as in the case of a solenoid). Assuming that the Return wraps around the Target, choose a branch surface whose boundary edge includes the wires connecting the Target to the Return, and traverses across the surfaces of the Target and the Return so that the surface interrupts any path through the Return that encloses the Target. Treating this as a special kind of boundary, the Return is now singly connected.
As in the case of the spherical electromagnet, we now construct a second scalar potential Φ axial in the Return that has has vanishing normal derivative at the surface of the Target and the Return, and a discontinuity of magnitude I across the branch surface. This is not a standard Neumann boundary condition, though perfectly well defined.
FIG. 2: An illustration of generalized axial windings, used to connect all equipotential contours of the Target and Return fields into a continuous helical winding. a) axial currents flow from the maximum to minimum of Φreturn − Φtarget on the inner surface of the Return and back from maximum to minimum of Φreturn on the outer surface. b) two Jumpers connect the axial currents to make a complete circuit. The corresponding toroidal winding field is confined to the Return so it does not disturb the inner or outer specified fields. c) the entire coil is energized from a twisted pair inserted into one Jumper on the outer surface. d) the two Jumpers can be extended outside the Return to form a filament circuit with calculable potential Φwire with the same singularity as Φ axial , so that the latter can be solved from the difference using standard Neumann boundary conditions.
(Here's a way to turn it into a standard Neumann boundary value problem, as illustrated in Fig 2: choose a wire circuit that carries the chosen current, such that it incorporates the two Jumpers, connected by wires which lie outside the Return and inside the Target. The Biot-Savart integral for the magnetic field of this circuit corresponds to a scalar potential Φ wire ( r) = IΩ/4π where Ω is the solid angle subtended by the circuit viewed from the point r, and has a branch surface which can be chosen to coincide with that of Φ axial . The difference between Φ axial and Φ wire is a solution to Laplace's problem inside the Return, and with normal derivative which is the difference between those of the two potentials. The former is specified by the field in the Target and the latter is calculable, so this is a standard Neumann boundary value problem in a singly connected region.) As above we construct Φ chiral = Φ return +Φ axial in the Return. The difference Φ chiral −Φ target is constant along a path that slices the surface of the Target into one long ribbon that defines the appropriate winding; similarly, Φ chiral is constant along a path on the outer surface of the return that defines the winding there.
The wires connecting the Target to the outer surface of the Return should join the extrema of the relevant potential differences, but the positioning of these has already been determined earlier in the algorithm. We claim that the wire positions continue to be appropriate: the wires themselves are singular points of the fields where Φ chiral takes on different values when the wire is approached along different paths. Then the "perturbation" due to the tangential fields will automatically be absorbed in the determination of the surface windings.
The axial potential Φ axial produces an additional magnetic "winding field". In the specific and general cases described above, this was a toroidal field circling about the Jumpers. The boundary conditions were constructed to confine the winding field completely inside the Return, so that it didn't perturb either the Target or external fields.
The algorithm describes how to make a perfect electromagnet in the limit of a continuous surface current distribution when I → 0. The effect of replacing this with discrete windings is exponentially localized near the surfaces, with a healing length whose scale is set by the winding spacing [1]. This is largest near the wire connection points where the current density vanishes when the surfaces are smooth (this can be readily seen in the case of the spherical magnet), and has the consequence that the largest discreteness error occurs near the poles [6]. We propose that the Target and Return be deformed near the wire attachments, to make them somewhat conical about the wire (the Target becomes a lemon, and the Return has the shape of an apple). The linear (rather than quadratic) variation of the potential near the connection point implies that the width of the spiralling ribbon will not grow so much in that limit, thus decreasing the healing length and making the field in the Target closer to the design field. In summary, this paper provides a general method to convert the series of disjoint equipotential contour loops described in [1], each carrying current I, into a continuous helical winding through use of an auxiliary axial potential Φ_axial representing the current I flowing from the lowest to the highest equipotential along each surface of the Return. This method is very general in the sense that the choice of wiring circuit, current, and equipotential constants classifies all possible helical windings which approximate the specified Target and external field with a discrete wire or trace winding.
IV. ACKNOWLEDGMENTS
This work is supported in part by the U.S. Department of Energy, Office of Nuclear Physics under contracts DE-SC0008107 and DE-SC0014622, and by the National Science Foundation under award number PHY-0855584.
Influence of the approach boundary layer on the flow over an axisymmetric hill at a moderate Reynolds number
Large Eddy Simulations of a flow at a moderate Reynolds number over and around a three-dimensional hill have been performed. The main aim of the simulations was to study the effects of various inflow conditions (boundary layer thickness and laminar versus turbulent boundary layers) on the flow behind the hill. The main features of the flow behind the hill are similar in all simulations, however various differences are observed. The topology of the streamlines (friction lines) on the surface adjacent to the lower wall was found to be independent of the inflow conditions prescribed and comprised four saddle points and four nodes (of which two are foci). In all simulations a variety of vortical structures could be observed, ranging from a horseshoe vortex - that was formed at the foot of the hill - to a train of large hairpin vortices in the wake of the hill. In the simulation with a thick incoming laminar boundary layer also secondary vortical structures (i.e. hairpin vortices) were observed to be formed at either side of the hill, superposed on the legs of the horseshoe vortex. Sufficiently far downstream of the hill, at the symmetry plane the mean velocity and the rms of the velocity fluctuations were found to become quasi-independent of the inflow conditions, while towards the sides the influence of the hill decreases and the velocity profiles recover the values prevailing at the inflow.
Introduction
Flow over obstacles occurs in many engineering applications. From a fundamental point of view, the flow over obstacles features a variety of phenomena and is particularly complex: it is three-dimensional (also in the mean), highly unsteady, involves separation and reattachment (possibly at several locations) and contains several interacting vortex systems. Typical examples are flow over axisymmetric obstacles Hunt and Snyder [1980], wall-mounted prismatic obstacles Martinuzzi and Tropea [1993] or finite-height circular cylinders Palau-Salvador et al. [2010]. In the case of obstacles without sharp edges, the separation location is not fixed by the geometry and is highly dependent on the incoming flow characteristics, like Reynolds number or boundary layer thickness. Therefore, it is very challenging to compute this kind of flow accurately.
Recently, a series of experiments performed by Simpson and co-workers Simpson et al. [2002], Byun et al. [2004], Ma and Simpson [2005], Byun and Simpson [2006] renewed the interest in analysing and predicting three-dimensional separation. The configuration they considered was an axisymmetric three-dimensional hill subjected to a turbulent boundary layer. The Reynolds number of the flow based on the free-stream velocity and the height of the hill was relatively high (Re = 130000). Since then a number of researchers have tried to reproduce the experimental results using various computational techniques: RANS Wang et al. [2004], Persson et al. [2006], hybrid RANS-LES Tessicini et al. [2007] and pure LES Persson et al. [2006], Patel and Menon [2007], Krajnović [2008], García-Villalba et al. [2009]. It was shown that the RANS predictions were generally poor, while both the hybrid techniques and the pure LES provided promising results although not completely satisfactory. In the simulations reported in García-Villalba et al. [2009], it was observed that, in spite of the high resolution employed, the presence of a thin recirculation region made the flow very sensitive to the grid resolution.
Apart from the grid resolution, one of the most important differences between all computational studies was the modelling of the incoming flow. In the experiment the hill was subjected to a turbulent boundary layer whose thickness was half of the hill height. Because the incoming boundary layer was turbulent, the specification of the inlet conditions had to be done in an unsteady manner. For instance, in García-Villalba et al. [2009] a precursor simulation was used, while in Patel and Menon [2007] a boundary layer profile plus random noise was employed. Because of the high Reynolds number of the flow, it is computationally very expensive to perform parametric studies and, hence, it is very difficult to assess the impact on the results of the modelling of the incoming flow.
In the present paper, we report simulations of flow over the same hill as considered in the previous studies, but at a significantly lower Reynolds number. The aim is to study the influence of the flow characteristics of various approach flows: two simulations with incoming laminar boundary layers of different thicknesses and one simulation with an incoming turbulent boundary layer.
Numerical model
The LES were performed with the in-house code LESOCC2 (Large Eddy Simulation On Curvilinear Coordinates). The code has been developed at the Institute for Hydromechanics. It is the successor of the code LESOCC developed by Breuer and Rodi Breuer and Rodi [1996] and is described in Hinterberger [2004]. The code solves the Navier-Stokes equations on body-fitted, curvilinear grids using a cell-centered Finite Volume method with collocated storage for the cartesian velocity components and the pressure. Second order central differences are employed for the convection as well as for the diffusive terms. The time integration is performed with a predictor-corrector scheme, where the explicit predictor step for the momentum equations is a low-storage 3-step Runge-Kutta method. The corrector step covers the implicit solution of the Poisson equation for the pressure correction (SIMPLE). The scheme is of second order accuracy in time because the Poisson equation for the pressure correction is not solved during the sub-steps of the Runge-Kutta algorithm in order to save CPU-time. The Rhie and Chow momentum interpolation Rhie and Chow [1983] is applied to avoid pressure-velocity decoupling. The Poisson equation for the pressure-increment is solved iteratively by means of the 'strongly implicit procedure ' Stone [1968]. Parallelization is implemented via domain decomposition, and explicit message passing is used with two halo cells along the inter-domain boundaries for intermediate storage.
The configuration mentioned above consists of the flow over and around an axisymmetric hill of height H and base-to-height ratio of 4. The hill shape is described by an expression involving Bessel functions, in which Λ = 3.1926, J_0 is the Bessel function of the first kind and I_0 is the modified Bessel function of the first kind Simpson et al. [2002]. The approach-flow boundary layer has a thickness δ which varies depending on the simulation (see Table 1). The Reynolds number of the flow based on the free-stream velocity U_ref and the hill height H is Re = 6650. The size of the domain is 22H × 6H × 12H in streamwise, wall-normal and spanwise directions, respectively. The grid consists of 504 × 256 × 400 cells in these directions. The choice of the computational mesh is based on the experience gained in performing an LES of a similar flow problem at a significantly higher Re García-Villalba et al. [2009], with results compared to experimental data. In order to minimize numerical errors, the grid is quasi-orthogonal close to the hill's surface and the grid points are concentrated in the boundary layer.
As in García-Villalba et al. [2009], the dynamic Smagorinsky subgrid-scale model, first proposed by Germano et al. [1991], has been used in the simulations. The model parameter is determined using an explicit box filter of width equal to twice the mesh size and is smoothed by temporal under-relaxation (Breuer and Rodi [1996]). The impact of the subgrid-scale model on the results is likely to be small. For instance, the maximum value of the ratio of the time-averaged eddy viscosity to the molecular (kinematic) viscosity is ν sgs /ν ∼ 1 in the region of interest. For comparison, in García-Villalba et al. [2009] this ratio was ν sgs /ν ∼ 6.
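As a rough illustration of the ν_sgs/ν diagnostic quoted above, the sketch below evaluates the eddy-viscosity ratio from a resolved velocity field with a Smagorinsky-type closure. In the simulations the model coefficient is obtained dynamically from the Germano identity, which is not reproduced here; the fixed coefficient, grid spacing and synthetic field are purely illustrative assumptions.

```python
import numpy as np

def smagorinsky_nu_ratio(u, v, w, dx, nu, cs=0.17):
    """Ratio nu_sgs / nu for a Smagorinsky-type closure on a uniform grid.

    cs is a fixed illustrative coefficient; in the paper it is computed
    dynamically with a test filter of twice the mesh size.
    """
    grad = lambda f, axis: np.gradient(f, dx, axis=axis)
    g = np.array([[grad(u, 0), grad(u, 1), grad(u, 2)],
                  [grad(v, 0), grad(v, 1), grad(v, 2)],
                  [grad(w, 0), grad(w, 1), grad(w, 2)]])
    S = 0.5 * (g + np.swapaxes(g, 0, 1))               # strain-rate tensor
    S_mag = np.sqrt(2.0 * np.sum(S * S, axis=(0, 1)))  # |S| = sqrt(2 S_ij S_ij)
    nu_sgs = (cs * dx) ** 2 * S_mag
    return nu_sgs / nu

# Exercise the routine on a synthetic field (not LES data).
rng = np.random.default_rng(0)
u, v, w = (rng.standard_normal((16, 16, 16)) for _ in range(3))
print(np.max(smagorinsky_nu_ratio(u, v, w, dx=0.05, nu=1.0e-3)))
```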
A no-slip condition is employed at the bottom wall while a free-slip condition is employed at the top boundary. Free-slip conditions are also used at the lateral boundaries and convective conditions are employed at the exit boundary. In two of the simulations the inflow conditions are time-independent and no turbulence is added to them. In these two cases, a Blasius profile is imposed at the inflow plane. In Simulation S1, δ/H = 1 and in Simulation S2, δ/H = 0.1. In Simulation S3, the approaching boundary layer is turbulent. The mean inflow profile corresponds to one of the cases reported in Spalart [1988]. As in García-Villalba et al. [2009], the time-dependent inflow conditions are obtained by performing simultaneously a separate periodic LES of channel flow in which the mean velocity is forced to assume the desired vertical distribution using a body-force technique Pierce [2001]. By using this technique, the distribution of turbulent stresses obtained at the inflow plane is very similar to the standard distribution Spalart [1988] in a fully developed turbulent boundary layer. In this precursor calculation, the number of cells in streamwise direction is 72. The cost of this simulation is, therefore, 1/8 of the total cost. The three inflow profiles are shown in Fig. 1 (in the case of Simulation S3 the mean inflow profile is shown). The Reynolds number of the incoming boundary layer based on the momentum thickness, Re θ , for the three cases is also provided in Table 1.
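The body-force technique used in the precursor channel simulation can be illustrated with a toy problem: at every time step a force proportional to the difference between the prescribed mean profile and the current plane-averaged velocity is added, so that the mean relaxes to the target while resolved fluctuations are retained. The profile, relaxation time and viscosity below are illustrative assumptions, not the values used in the actual precursor LES.

```python
import numpy as np

# Toy 1-D illustration of the forcing idea: drive the plane-averaged velocity
# toward a target profile with a relaxation force f = (u_target - u)/tau.
ny, nu, dt, tau = 65, 1.0e-3, 1.0e-3, 0.05
y = np.linspace(0.0, 1.0, ny)
u_target = np.sin(0.5 * np.pi * y)     # smooth stand-in for the desired mean profile
u = np.zeros(ny)                       # plane-averaged streamwise velocity

for _ in range(20000):
    lap = np.zeros(ny)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / (y[1] - y[0]) ** 2
    force = (u_target - u) / tau       # body force nudging u toward the target
    u += dt * (nu * lap + force)
    u[0], u[-1] = 0.0, u_target[-1]    # no-slip wall and free stream

print(np.max(np.abs(u - u_target)))    # ~1e-4: the prescribed mean profile is recovered
```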
Results
After discarding initial transients, statistics have been collected for a time span of roughly 250 H/U ref .
This corresponds approximately to 11 flow-through times of the computational domain.
Pressure distribution
The mean drag coefficient of the hill, C D = D/(0.5 ρ U 2 ref S), where D is the drag including pressure and viscous terms, ρ is the fluid density, and S is the frontal surface, is reported in Table 1 for the three cases. Note that using the free stream velocity U ref in the definition of C D might not be ideal in the present case because of the different amounts of momentum present in the incoming flow for y/H < 1 (Fig. 1). The most important contribution to the drag is due to the pressure force on the surface. As an example, the pressure force is responsible for 91% of the drag in case S3. Similar values are obtained in the other two cases.
Profiles of the pressure coefficient C p = (p − p ∞ )/(0.5ρU 2 ref ) along the hill centreline are shown in Fig. 2. A similar trend is observed in the three cases. Upstream of the hill the pressure coefficient increases as the hill is approached, reaching a local maximum shortly after the windward slope of the hill starts. The maximum is more pronounced in case S2, which is the case in which more momentum is present below y/H = 1 (Fig. 1). As a consequence, this is the case with the highest drag coefficient of the three. Thereafter, the flow accelerates and the pressure drops significantly reaching a local minimum near the top of the hill. Further downstream, the flow decelerates and the pressure recovers somewhat but due to the adverse pressure gradient the flow separates soon, producing a typical plateau in the profile of C p up to x/H ∼ 2.5. The peak pressure at reattachment occurs around x/H ∼ 4 in all cases.
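The sketch below shows how the pressure coefficient and the pressure contribution to the drag could be evaluated from the wall pressure on the hill surface. The hill shape, frontal-area estimate and pressure field are simple analytic placeholders, not the actual LES data; the viscous contribution to the drag (about 9% according to the text) is not included.

```python
import numpy as np

H, rho, U_ref = 1.0, 1.0, 1.0
S_frontal = 0.5 * np.pi * 2.0 * H * H      # rough half-ellipse frontal area (placeholder)

nx, nz = 201, 201
x = np.linspace(-2.0 * H, 2.0 * H, nx)
z = np.linspace(-2.0 * H, 2.0 * H, nz)
dx, dz = x[1] - x[0], z[1] - z[0]
X, Z = np.meshgrid(x, z, indexing="ij")
r = np.sqrt(X**2 + Z**2)
Y = np.where(r < 2.0 * H, H * np.cos(np.pi * r / (4.0 * H)) ** 2, 0.0)  # placeholder hill shape

# Placeholder "wall pressure": overpressure on the windward side, suction at the crest.
p_inf = 0.0
p = p_inf + 0.2 * np.exp(-((X + 1.5 * H) ** 2 + Z**2)) - 0.4 * np.exp(-(X**2 + Z**2))
C_p = (p - p_inf) / (0.5 * rho * U_ref**2)

# For the surface y = Y(x, z) the outward surface-element vector is
# (-dY/dx, 1, -dY/dz) dx dz, so the streamwise pressure force on the hill is
# D_p = sum_ij p_ij * (dY/dx)_ij * dx * dz.
dYdx = np.gradient(Y, x, axis=0)
D_p = np.sum(p * dYdx) * dx * dz
C_D = D_p / (0.5 * rho * U_ref**2 * S_frontal)
print(f"min C_p = {C_p.min():.3f},  pressure-drag coefficient = {C_D:.4f}")
```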
Mean flow topology
Closely connected with the pressure distribution is the mean flow topology map displayed in Fig. 3. This figure shows streamlines of the mean flow projected onto a wall-parallel surface at a distance to the wall of y/H = 0.01. In addition, the blue patches indicate the region where backflow is present (U < 0). The overall pattern is similar in all three cases but a few differences are observed.
A total of eight topological features can be observed in all cases (see Fig. 3d for case S3): four saddle points, which are all located along the centreline, and four nodes, two of them located on the centreline and two foci located on either side of the hill, around x/H ∼ 1.5, z/H ∼ ±0.5. As expected, these topological features satisfy the conditions provided by Hunt et al. [1978] for flow over obstacles (same number of saddle and nodal points). In all cases, there are two main areas of backflow, one in the windward part of the hill and a second one in the rear part. The backflow in the windward part of the hill is located between a saddle point and a nodal point on the centreline. The appearance of this region is related to a well-known phenomenon in the flow around wall-mounted obstacles: the formation of a horseshoe vortex at the foot of the obstacle. This has been observed in flow over wall-mounted cubes, cylinders, etc. However, this is not observed for the present geometry at significantly higher Reynolds numbers García-Villalba et al. [2009]. It is noticeable that this region is largest in Simulation S1, while the smaller regions in S2 and S3 are of comparable size. The streamlines arriving at the first saddle point from upstream are deviated to both sides to go around the hill. In Simulation S1 they are deviated as far as |z/H| ∼ 4 while in S2 and S3 they remain within |z/H| ∼ 3.
The backflow region in the rear part has a more complex structure. It is split into two parts with a forward flow region in between. It turns out that this forward flow region is extremely thin (as discussed below), which is why we refer to the rear backflow region as a single region. The differences between the three simulations are minor. The locations of the saddle and nodal points are approximately the same in all cases and therefore the topology of the streamlines that connect them is the same. Figure 4 shows a comparison of streamlines of the mean flow in the symmetry plane of Simulations S1, S2 and S3. The main recirculation region behind the hill is longer and higher than the one observed for this same geometry at a significantly higher Reynolds number García-Villalba et al. [2009]. The centre of the main recirculation bubble behind the crest of the hill is identified by the label RB, while HS 1 identifies the small upstream area of recirculation (vortex) obtained at the foot of the hill. Because of the strongly accelerating mean flow along the foot of the hill, the upstream vortex HS 1 is stretched in the streamwise direction. As it is wrapped partially around the foot of the hill, the horseshoe vortex mentioned above is formed. The shape of the horseshoe vortex is reflected in the lateral streamlines that originate from the upstream saddle point, shown in Figure 3. Because of the larger Re θ (Tab. 1) of the incoming (laminar) boundary layer, compared to Simulations S2 and S3, a significantly larger upstream separation bubble HS 1 is generated in Simulation S1. Also, the streamlines - originating from the crest of the hill - that bound the wake-like wall-parallel flow show that the height of the wake increases with increasing Re θ of the inflow profile (see Table 1).
Mean velocity distribution
Below the main recirculation region, a very shallow secondary bubble is obtained in all three simulations. Evidence of this can be seen in Fig. 3. However, this bubble is not visible in Fig. 4. A zoomed view of simulation S3, presented in Fig. 5, illustrates the shape of the secondary bubble. This thin region is resolved in the simulation with 8 to 10 grid points in wall-normal direction. Figure 6 shows the streamwise velocity profiles of the three simulations at x/H = −4, −2, 2, 5, 8, 11. At x/H = −2 the profiles confirm the presence of the separation bubble HS 1 at the foot of the hill in all simulations. The presence of the main recirculation bubble RB is clearly reflected in all three simulations by the reverse flow in the profiles shown at x/H = 2. Despite the difference in the profiles and the state (laminar/turbulent) of the flow at the inflow plane, for x/H = 2, . . . , 11 the mean profiles of all three simulations do not present significant differences. Furthermore, the profiles for x/H = 5, 8, 11 all exhibit the characteristic fully turbulent boundary layer profile at the bottom with a wake-like region on top. Figure 7 shows a comparison of the mean u-velocity profiles at various stations z/H = 0, 1, 2, 3 and 4, extracted in the cross section at x/H = 5. While at the symmetry plane (z/H = 0) the velocity profiles almost collapse, towards the edges of the computational domain gradually more and more differences can be observed. At z/H = 4, finally, for each simulation the shape of the mean u-velocity profile is found to be very similar to the mean inflow velocity profile. Hence, the downstream influence of the hill (in the form of a wake) is only noticeable in a spanwise region of limited size. Figure 8 shows the turbulent kinetic energy, k, in the symmetry plane for Simulations S1, S2 and S3. Immediately upstream of the hill, at x/H ≈ −2, in each simulation a small patch with an elevated k-level can be observed that coincides with the upstream separation bubble labelled HS 1 in Figure 4. The production of kinetic energy leading to increased values of k in the re-circulation zone of a separation bubble was observed earlier in Wissink [2003], Wissink et al. [2006] and was accounted for by an elliptic instability of the rolled-up shear layer. Downstream of the small separation bubble, the streamwise pressure gradient turns favourable and the energized boundary layers in all simulations re-attach. Also, at the centre of the recirculation bubble downstream of the crest of the hill - labelled RB in Figure 4 - production of kinetic energy is observed in all simulations. Of the three simulations, the momentum thickness of the incoming boundary layer in Simulation S2 is smallest, followed by Simulation S3 and then Simulation S1. Because of this, in the inflow region the wall-shear in Simulation S2 is significantly larger than in Simulations S1 and S3. As the boundary layer separates from the crest of the hill, the mean shear in the free-shear layers generates turbulence. Because in Simulation S2 the mean shear is much stronger than in Simulations S1 and S3, the production of k in Simulation S2 is significantly higher (as confirmed in Figure 8). Similarly, because of the difference in wall-shear strength, in Simulation S1 the production of k is found to be lower than in Simulation S3. Profiles of the rms values of the u-, v- and w-velocity components of the three simulations are shown in Figure 9.
Close to the inflow plane, at x/H = −4, the rms-values of all velocity components of the two Simulations S1 and S2 are observed to be zero. In the upstream separation bubble, HS 1 , at x/H = −2, all three velocity components exhibit fluctuations, as reflected by the locally increased rms-values. On the lee side of the hill, at x/H = 2, the presence of the large recirculation bubble RB induces significant fluctuations in all simulations. Because of the increased momentum thickness of Simulation S1's incoming boundary layer, the fluctuations in the rms-values are found to be slightly less than in Simulations S2 and S3. For x/H ≥ 5, the profiles of the rms values of the velocity components become similar, indicating that the significant differences in the shape and state (laminar/turbulent) of the boundary layers at the inflow plane are no longer identifiable by studying differences in the rms values in the symmetry plane. The only significant difference is observed in the w rms values of Simulation S2 at x/H = 5 which, close to the lower wall, are significantly higher than in Simulations S1 and S3. This is likely related to the increased mass flux in Simulation S2.
Secondary motions
In the wake of the hill, the boundary layer recovers with a combination of streamwise acceleration (Fig. 6) and transverse (secondary) circulation. The secondary motion is relatively weak with a peak velocity around 10-15% of U ref and it originates from the realignment of the vorticity generated upstream of the hill (horseshoe vortex) and additional vorticity shed from the surface of the hill. The vorticity generation at the wall and its subsequent re-orientation is discussed in §3.6. Fig. 10 shows the streamwise evolution of the secondary motion at three locations in the near wake of the hill, namely x/H = 1, 2 and 5. At x/H = 1, the legs of the horseshoe vortex are clearly visible around |z/H| ∼ 2. It is interesting to note that in Simulation S1 two vortices (|z/H| ∼ 2 and |z/H| ∼ 2.8) are present with a counterrotating region in between. On the other hand, in Simulations S2 and S3 only one such vortex is visible. Further downstream for these two simulations there is a trace of a second vortex (labeled HS 2 ) although much weaker than in Simulation S1. At x/H = 2, a secondary vortex labelled HP appears in all three simulations at |z/H| < 1. Further downstream, at x/H = 5, for Simulations S1 and S3 this vortex remains present, with the eye slightly displaced upwards. In Simulation S2, this secondary vortex seems to have collapsed in the midplane.
Contours of turbulent kinetic energy k are also included in Fig. 10. The development of the turbulent kinetic energy in the wake occurs at a later streamwise position in Simulation S1 compared to Simulations S2 and S3. At x/H = 1 patches of k can be observed in the region |z/H| < 1 in Simulations S2 and S3, but not in Simulation S1 (see also Fig. 8). At x/H = 2, the peak of k is stronger in Simulations S2 and S3. The turbulent kinetic energy is concentrated in the shear layer between the free stream and the recirculation region, in all three simulations. Additionally, there is a patch further outwards, which is related to the recirculation in the horseshoe vortex. Further downstream, at x/H = 5, the decay of k with respect to the previous location is clearly visible, a typical phenomenon of wakes. In order to illustrate in a more quantitative manner the differences between the three simulations at this streamwise location (x/H = 5), vertical profiles of k at various spanwise locations are shown in Fig. 11. At the midplane (z/H = 0), k is largest in Simulation S2, by a factor of about 20%, while Simulations S1 and S3 present similar values. At z/H = 1, k of Simulation S1 has decreased significantly compared to Simulation S3, indicating that the width of the wake is smaller in Simulation S1. Further outwards, at z/H = 2 and 3, k of Simulation S1 presents larger values than Simulations S2 and S3, and in particular at larger heights. This indicates that the horseshoe vortex is strongest in Simulation S1 and transition to turbulence happens later. Finally, at z/H = 4, all simulations have recovered the values specified at the inlet: S1 and S2 are laminar while S3 is turbulent. Therefore, at this location the boundary layer is undisturbed by the hill.
Vorticity flux
Vorticity production at a solid boundary can be described in terms of vorticity flux. For three-dimensional flows, the mean vorticity flux can be defined, following Panton [1984] and Andreopoulos and Agui [1986], as the diffusive flux of mean vorticity through the surface, where σ is the mean vorticity-flux vector, n is the normal vector to the surface, pointing towards the fluid, and ω is the mean vorticity vector. As an illustration, Fig. 12 displays the three components of the mean vorticity flux at the wall from case S2 (the distributions of the other two cases are similar). There is production of spanwise vorticity everywhere in the flow, while the production of vertical and streamwise vorticity is concentrated in the hill region. Obviously, in the absence of the hill, ω x and ω y would be zero. In the vicinity of the hill, σ z (Fig. 12c) reverses sign as a consequence of the reverse flow which occurs both upstream and downstream of the hill. The negative vorticity flux peaks in the region where the flow strongly accelerates. Production of ω y occurs when the oncoming flow is deviated sidewards to pass along the left and the right of the hill (only the deviation to the right is illustrated). On the half-domain displayed in Fig. 12b, only negative ω y is generated. Finally, production of ω x (Fig. 12a) is concentrated towards the side of the hill: negative ω x is generated upstream of the crest due to the flow moving upwards and to the right, while downstream of the crest positive ω x is generated due to the flow moving downwards and to the left.
It is interesting to compare the magnitude of the mean vorticity flux σ = √ σ i σ i for the three cases. This is done in Fig. 13. Away from the hill, the values of σ are higher in cases S2 and S3 compared to S1. This is due to the much lower wall-shear in case S1 (see Fig. 1). In the hill region, the maximum values of σ are of the same order in all cases (σ ∼ 3U 2 ref /H). These high values are attained just upstream of the top: in the region of strong acceleration of the streamwise velocity (which leads to production of ω z ), and towards the sides: where the flow is deviated to the left and to the right (which leads to production of ω x and ω y ). It can be seen that, while in all three cases similarly shaped contours are obtained, case S2 has the highest flux, followed by S3 and, finally, S1. The reason for this is that in S2 more momentum is present below y/H = 1 (Fig. 1), than in S3, while S3 has more momentum below y/H = 1 than S1. The trend (and the argument) is the same as for the pressure coefficient discussed above. By comparing σ in the region of the hill (σ H ) to its value at the inflow (σ 0 ), it is possible to quantify the relative influence of the upstream vorticity and the vorticity generated over the hill. Due to the low σ 0 in case S1, σ H /σ 0 reaches values as high as 50, while in the other two cases this ratio is lower; σ H /σ 0 ∼ 8 in S2 and σ H /σ 0 ∼ 6 in S3.
After being produced at the wall, vorticity is subsequently convected and re-oriented by the mean flow. This is illustrated in Fig. 14, which displays iso-surfaces of the vertical vorticity for case S2. Because similar features are observed in the other two cases, we limit the discussion to S2. Fig. 12b shows that, for the region considered, only negative ω y is produced at the wall. The blue isosurface ω y = −0.5U ref /H originates exactly in the region of production and is then convected into the wake. The red iso-surface ω y = 0.5U ref /H is, however, not produced at the wall. Instead, this region corresponds to the horseshoe vortex and is formed through re-orientation of spanwise vorticity from the incoming boundary layer. Fig. 15 displays the values of ω x and ω y in the wake of the hill. These two components, which are generated as the flow passes over and around the hill, can be seen to be gradually dissipated in the downstream direction. The vanishing non-spanwise mean vorticity indicates that, with increasing distance from the hill, the wake-flow is becoming more and more homogeneous in the spanwise direction. This figure is related to the secondary motions that were displayed in Fig. 10. In particular, the patches of ω x and ω y , which are located in the region |z/H| ≲ 1, are related to the secondary vortex labelled HP in Fig. 10. The outer patches are related to the horseshoe vortices labelled HS1 and HS2 in the same figure. Therefore, we can conclude that the secondary vortex HP is a direct consequence of the vorticity production at the surface of the hill while the horseshoe vortices HS1 and HS2 are only indirectly generated by the hill through re-orientation of vorticity.
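The mean vorticity components discussed above (ω_x, ω_y in the wake, Fig. 15) are obtained from the time-averaged velocity field by finite differences. A minimal sketch of that post-processing step is given below; the grid and the solid-body-rotation test field are synthetic placeholders used only to check the routine.

```python
import numpy as np

def mean_vorticity(U, V, W, dx, dy, dz):
    """Vorticity (wx, wy, wz) of a velocity field on a uniform Cartesian grid."""
    dUdy, dUdz = np.gradient(U, dy, axis=1), np.gradient(U, dz, axis=2)
    dVdx, dVdz = np.gradient(V, dx, axis=0), np.gradient(V, dz, axis=2)
    dWdx, dWdy = np.gradient(W, dx, axis=0), np.gradient(W, dy, axis=1)
    return dWdy - dVdz, dUdz - dWdx, dVdx - dUdy   # (wx, wy, wz)

# Check on a solid-body rotation about the y axis: u = z, w = -x gives wy = 2.
n = 32
x = y = z = np.linspace(-1.0, 1.0, n)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
U, V, W = Z.copy(), np.zeros_like(X), -X.copy()
wx, wy, wz = mean_vorticity(U, V, W, x[1] - x[0], y[1] - y[0], z[1] - z[0])
print(wy.mean())   # 2.0
```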
Instantaneous flow
A visualization of the instantaneous coherent structures of the flow is displayed in Fig. 16. The figure (and corresponding animations) shows an iso-surface of pressure fluctuations for the value p − ⟨p⟩ = −0.02. This visualization technique has often been used in the past Fröhlich et al. [2005], García-Villalba et al. [2006]. Coherent structures are observed to form in the lee of the hill and are convected downstream. Many of them have the shape of a hairpin vortex. Similar structures have been also observed at high Reynolds number, although in that case, they were more irregular García-Villalba et al. [2009]. It is also well-known that at lower Reynolds number, a hemisphere protuberance in a laminar boundary layer generates a train of very regular hairpin vortices Acarlar and Smith [1987]. It was shown in the cited study that the behaviour of the wake was quite regular up to Re H ∼ 3400. Beyond that the shedding became irregular. The present investigation (Re H = 6650) lies already in the irregular regime.
An important difference with respect to the high Reynolds number case is the visibility of structures originating upstream of the hill at the reduced Reynolds number considered. These structures are secondary vortical structures that appear on top of the horseshoe vortex. In the present case, these hairpin vortices are more clearly visible in the cases with laminar inflow, in particular, in Simulation S1, see Fig. 16(top) and Animation S1. In the middle, a train of large hairpin vortices can be observed, while at both sides of the hill a sequence of much smaller hairpin vortices can be seen. In Simulations S2 and S3 (Fig. 16 middle and bottom, respectively) the large hairpin vortices in the middle can still be observed, but the smaller vortices at each side of the hill have become more difficult to identify (Simulation S2 and Animation S2) or have almost completely vanished (Simulation S3 and Animation S3). It appears that the reduced wall-shear in Simulation S1 provides ideal conditions for the generation of a horseshoe vortex on which secondary vortical structures are formed that, in the downstream direction, turn into hairpin vortices. Note that the primary horseshoe vortex can only be indirectly detected by studying iso-surfaces of the pressure fluctuations. The turbulence, combined with the increased wall-shear stress in Simulation S3, virtually prevents the formation of secondary vortical structures on the horseshoe vortex.
Conclusions
In this paper, results of three LES of flow over and around a three-dimensional hill at moderate Reynolds numbers have been presented. The Reynolds number is lower than in previous investigations García-Villalba et al. [2009] and this has a significant impact on the wake region. While in the present investigation the flow is massively separated, leading to a large recirculation region behind the hill, at high Re the recirculation region is very shallow García-Villalba et al. [2009]. Therefore, no quantitative comparison with the previous case is reported in this paper. Two of the simulations have incoming laminar boundary layers of different thicknesses and the third one has an incoming turbulent boundary layer. It has been shown that the main features of the flow behind the hill are very similar in all three simulations: for instance, the similar size of the main recirculation bubble behind the hill, the presence of a horseshoe vortex originating immediately upstream of the hill, a similar wall-topology map and a virtual collapse of the mean streamwise velocity profiles in the midplane beyond x/H = 5. In spite of this, there are various differences which need to be pointed out: 1) the height of the wake increases with increasing Re θ of the inflow profile; 2) the maximum level of kinetic energy in the wake varies from 20% to 30% depending on the simulation; 3) the horseshoe vortex is observed to be significantly affected by the inflow characteristics; 4) the main secondary motion in the central region is found to be quite sensitive to the actual inflow condition prescribed, which might have a significant impact on heat and mass transport; 5) the instantaneous coherent structures show significant variations as well. All these fine details indicate that, when trying to reproduce physical experiments, special care has to be taken concerning the modelling of the inflow conditions in order to avoid observable differences in the region of interest.
Limits on hadron spectrum from bulk medium properties
We bring up the fact that the bulk thermal properties of the hadron gas, as measured on the lattice, preclude a very fast rising of the number of resonance states in the QCD spectrum, as assumed by the Hagedorn hypothesis, unless a substantial repulsion between hadronic resonances is present. If the Hagedorn growth continued above masses ~1.8 GeV, then the thermodynamic functions would noticeably depart from the measured lattice values at temperatures above 140 MeV, just below the transition temperature to quark-gluon plasma.
In this talk we point out the sensitivity of thermal bulk medium properties (energy density, entropy, sound velocity...) to the spectrum of the hadron resonance gas. In particular, we explore the effects of the high-lying part of the spectrum, above ∼ 1.8 GeV, where it is poorly known, on the thermal properties still below the cross-over transition to the quark-gluon plasma phase. Such investigations were carried out in the past by various authors, see [1,2,3,4,5] and references therein, where the reader may find more details and results.
The presently established QCD spectrum reaches about 2 GeV, and it is a priori not clear what happens above. Does the growth continue, or is it saturated? As is evident from Fig. 1, the Hagedorn hypothesis [7] works very well up to about 1.8 GeV [8]. In the following, we explore two models: 1) the hadron resonance gas with the Breit-Wigner width, HRG(Γ), which takes into account all states listed in the Particle Data Group tables [6] with mass below 1.8 GeV, and 2) this model amended with the states above 1.8 GeV, modeled with the Hagedorn formula fitted to the spectrum at lower masses (see Fig. 1). In short, model 1) includes the up-to-now established resonances, and model 2) extends them according to the Hagedorn hypothesis.

Figure 2: The QCD trace anomaly (divided by T^4) plotted as a function of temperature T. The inclusion of the widths of the resonances in the hadron resonance gas improves the agreement with the lattice data from the Wuppertal-Budapest (WB) [9] and HotQCD [10] collaborations.
First, we recall the fact that the inclusion of the widths of resonances [11], as listed in the Particle Data Group tables, affects the results noticeably and in fact improves them. This is shown in Fig. 2, which presents the hadron resonance gas calculation of the QCD trace anomaly, ε − 3p, divided by T^4. Here ε stands for the energy density, p for the pressure, and T for the temperature. In the calculation, the hadrons are treated as components of an ideal gas of fermions and bosons. We note that the overall agreement of the hadron resonance gas model HRG(Γ) with the lattice measurement is remarkable.
The virial expansion of Kamerlingh Onnes yields p/T = ρ + B_2(T)ρ^2 + B_3(T)ρ^3 + . . . . Correspondingly, the partition function of a thermodynamic system including the 1 → 1, 2 → 2, etc., processes splits into a non-interacting part and a second-order virial part. The non-interacting term includes the sum over all stable particles, whereas the second-order virial term involves the sum over pairs of stable particles denoted as K, where δ_K(M) stands for the phase shift in the channel K. For narrow resonances the correction to the density of two-body states, dδ_K(M)/(π dM) [12], can be accurately approximated with the Breit-Wigner form, which is a basis of the hadron resonance gas model. In Fig. 3 we show the result of extending the Hagedorn hypothesis above the present experimental limit on the QCD spectrum. We note that the inclusion of extra (non-interacting) states above M = 1.8 GeV has a quite dramatic effect on the trace anomaly θ^μ_μ = ε − 3p, placing it way above the lattice data at T > 140 MeV (the model calculation is credible below T ≈ 170 MeV, where a cross-over to the quark-gluon plasma occurs). A similar conclusion is drawn for other thermodynamic quantities, such as the entropy (cf. Fig. 4) or the sound velocity (cf. Fig. 5).
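The qualitative effect of appending a Hagedorn tail to the hadron spectrum can be illustrated with a minimal numerical sketch. The sketch below uses a Boltzmann (classical-statistics) ideal gas of zero-width states, whereas the paper uses full quantum statistics and Breit-Wigner widths; the handful of light hadrons, their degeneracies, and the Hagedorn parameters A and T_H are illustrative assumptions, not the authors' PDG-based input.

```python
import numpy as np
from scipy.special import kn

# Trace anomaly (eps - 3p)/T^4 of an ideal Boltzmann gas of zero-width states:
# each state of mass m and degeneracy g contributes g/(2*pi^2) * (m/T)^3 * K1(m/T).
def trace_anomaly(T, masses, degs):
    x = np.asarray(masses)[:, None] / T
    return np.sum(np.asarray(degs)[:, None] / (2.0 * np.pi**2) * x**3 * kn(1, x), axis=0)

# A few illustrative light states (mass in GeV, degeneracy); the real calculation
# uses the full PDG table up to 1.8 GeV.
masses = [0.138, 0.138, 0.138, 0.496, 0.496, 0.496, 0.496, 0.775, 0.783]
degs   = [1,     1,     1,     1,     1,     1,     1,     9,     3]

# "Model 2": add a Hagedorn continuum rho(m) = A * exp(m / T_H) above 1.8 GeV.
# A and T_H are assumed values for illustration only.
A, T_H, m_min, m_max = 0.6, 0.180, 1.8, 3.5
m_grid = np.linspace(m_min, m_max, 400)
dm = m_grid[1] - m_grid[0]

T = np.linspace(0.100, 0.170, 8)              # temperatures in GeV
theta_trunc = trace_anomaly(T, masses, degs)
theta_tail = theta_trunc + np.sum(
    A * np.exp(m_grid / T_H)[:, None] / (2.0 * np.pi**2)
    * (m_grid[:, None] / T) ** 3 * kn(1, m_grid[:, None] / T) * dm, axis=0)

for t, a, b in zip(T, theta_trunc, theta_tail):
    print(f"T = {t*1000:5.1f} MeV   truncated: {a:7.3f}   with Hagedorn tail: {b:7.3f}")
```

Even in this crude form, the contribution of the tail grows rapidly with temperature, which is the qualitative behaviour behind the departure from the lattice data discussed above.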
Therefore, if the hadron resonances were non-interacting, there would be no room for extra states above 1.8 GeV in the QCD spectrum. This conclusion may be affected by repulsion between the states (e.g., the excluded volume corrections), which decreases the contribution to the partition function. The issue is discussed quantitatively in [3,4], where a reduction of the contributions to the thermodynamic quantities is assessed. The excluded volume reduces the contribution of resonances, and this makes it possible for them to appear in the spectrum in an "innocuous" way. The effect is explicit in Eq. (3), as repulsion leads to a decrease of the phase shift with M, or a negative correction to the density of states dδ_K(M)/(π dM).
An important example of such an explicit cancellation occurs in the case of the σ meson, whose contribution to one-body observables is canceled by the isospin-2 channel [12]. The case of the trace anomaly is shown in Fig. 6. Note that the phase shift taken into account in this analysis automatically includes the short-distance repulsion in the specific channel, hence there is no need to model it separately. The cancellation experienced by the σ state may occur also for other states with higher mass.

Figure 6: Contributions to the trace anomaly from the pions, ρ mesons, σ meson, and the isospin-2 component of the pion-pion interaction. We note an almost perfect cancellation of the σ and isospin-2 channels.
In conclusion, the thermodynamic quantities offered by the modern lattice QCD calculations allow one to place limits on the high-lying spectrum of the QCD resonances, but the interactions between the states, such as the short-range repulsion, must be properly taken into account, as the two effects, increasing the number of states and introducing repulsion, work in opposite ways.
Cohomological properties of Hermitian symplectic threefolds
A Hermitian symplectic manifold is a complex manifold endowed with a symplectic form $\omega$, for which the bilinear form $\omega(I\cdot,\cdot)$ is positive definite. In this work we prove $dd^c$-lemma for 1- and (1,1)-forms for compact Hermitian symplectic manifolds of dimension 3. This shows that Albanese map for such manifolds is well-defined and allows one to prove K\"ahlerness if the dimension of the Albanese image of a manifold is maximal.
Introduction
A Hermitian symplectic manifold is a complex manifold (M, I) together with a symplectic form ω, for which the bilinear form ω(I·, ·) is positive definite (that is, ω(IX, X) > 0 for any vector field X on M). Any Kähler manifold is obviously Hermitian symplectic, and it is an open problem whether there exist other examples of Hermitian symplectic manifolds. Hermitian symplectic manifolds were studied by Streets and Tian in [ST2] and [ST1]; they constructed an appropriate Ricci flow on Hermitian symplectic manifolds, and studied its convergence properties. Since then, many people have searched for non-trivial examples of Hermitian symplectic manifolds.
The search for non-Kähler examples of Hermitian symplectic manifolds was vigorous, but ultimately unsuccessful. All common sources of examples of non-Kähler manifolds were tapped at some point.
For complex dimension 2, Hermitian symplectic structures are all Kähler. This was shown by Streets and Tian in [ST2]. Another proof could be obtained from Lamari's result ([L]) about the existence of a positive, exact (1, 1)-current on any non-Kähler complex surface.
In [EFV] it was shown that no complex nilmanifold can admit a Hermitian symplectic structure, and in [FKV] this result was extended to all complex solvmanifolds and Oeljeklaus-Toma manifolds.
The existence of a Kähler metric implies some restrictions on the cohomology of a manifold: for example, the Frölicher spectral sequence of a Kähler manifold always degenerates at the first page. Results of Cavalcanti ([Ca]) show that the Frölicher spectral sequence for Hermitian symplectic manifolds degenerates at the first page.
In this work we define some Laplacian-like operators, whose kernels are conjecturally isomorphic to the spaces of cohomology, and, with the help of these operators, prove the dd^c-lemma for (1,1)-forms on Hermitian symplectic threefolds. An argument of Gauduchon ([G]) shows that the dd^c-lemma for (1,1)-forms is equivalent to the equality b_1 = 2h^{0,1}. It follows that the Albanese map is well-defined and, if its image is not a point, the generic fiber of Alb is Kähler. The question of the existence of special (e.g. Kähler or balanced) metrics on total spaces of maps with Kähler base and fibers is studied, for example, in [HL] and [Mi]. Using the Albanese map, we are able to prove that if a Hermitian symplectic threefold M has dim Alb(M) = 3, then it admits a Kähler metric, and if dim Alb(M) = 1, M is balanced. If the dd^c-lemma holds for (2, 2)-forms, then by [HL] dim Alb(M) = 2 would also imply that M is Kähler, but, unfortunately, we have not yet proven the dd^c-lemma in full generality.
Acknowledgements. The author would like to thank M. Verbitsky for many extremely helpful discussions. Work on sections 1-3 was supported by RSCF, grant number 14-21-00053, within the Laboratory of Algebraic Geometry. Work on section 4 was supported by RFBR 15-01-09242.
1 Preliminaries

Definition 1.1: Let M be a smooth manifold of dimension 2n, I : TM −→ TM an integrable complex structure, A^{p,q} the corresponding Hodge decomposition on the bundle of differential forms: A^n ⊗ C = ⊕_{p+q=n} A^{p,q}, and ω^{1,1} a form in A^{1,1}. We will say that ω^{1,1} is Hermitian if the tensor h(·, ·) := ω^{1,1}(·, I·) is a Riemannian metric on M, and we will say that ω^{1,1} is Hermitian symplectic if there exists a symplectic form ω such that ω^{1,1} is the (1,1)-component in the Hodge decomposition of ω. If M is endowed with such I and ω^{1,1}, we will call it a Hermitian symplectic manifold.
For a Hermitian symplectic manifold (M, I, ω), let d : A^• −→ A^{•+1} be the usual de Rham differential acting on forms, and d^c := I d I^{−1} : A^• −→ A^{•+1}. We will denote by L_{1,1} the operator of multiplication by the hermitian form ω^{1,1}, and by Λ_{1,1} the adjoint operator to L_{1,1}.
Definition 1.3: Let α be a differential form on M. We will say that α is primitive with respect to ω if Λα = 0, and that α is primitive with respect to ω 1,1 if Λ 1,1 α = 0.
Remark 1.6: ∆ is not a Laplacian associated to the Riemannian metric h. Nevertheless they differ by a differential operator of first order (see e.g. [LY] for the exact formula), therefore they have equal symbols, so ∆ is elliptic.
Proof: Follows simply from the Jacobi identity. Proof: The decomposition is in fact proven in [BGV, Proposition 2.36] (∆ is a generalized Laplacian in their terminology); one has to apply the spectral theorem for compact operators: a compact operator on a Hilbert space has a canonical Jordan form with finite-dimensional generalized eigenspaces ([Co]).
By Lemma 1.7, ∆ commutes with d and d c , so all generalized eigenspaces are in fact subcomplexes.
Theorem 1.9: Let α be a closed form in ⊕_{λ_i ≠ 0} A^•_{λ_i}(M). Then α is exact.
Forms on a Hermitian symplectic manifold
In this section M is assumed to be compact.
We will now investigate whether holomorphic forms on M are closed. Lemma 2.2: Let n be the complex dimension of M. Then every holomorphic (n − 2)-form is closed.
Hence α is closed.
Remark 2.3: Obviously, on any compact complex manifold of complex dimension n, every holomorphic function and every holomorphic n-form is closed. Every holomorphic (n − 1)-form is also closed, as a simple argument with integration shows. So, any holomorphic form on a Hermitian symplectic threefold is closed.
If Λη = c, then η = cω + B, where B is a primitive form. Since η = dγ, the cohomology classes of cω and B are equal, but the cohomology class of a symplectic form cannot be represented by a primitive form. Indeed, ω ∧ ω^{n−1} is a volume form, hence nonzero in cohomology, but B ∧ ω^{n−1} = 0. So c = 0 and η is primitive. Proof: Note first that, since η is primitive with respect to ω, it is primitive with respect to ω^{1,1}, so, by the Weil identities, ∗η = η ∧ (ω^{1,1})^{∧(n−2)}, where ∗ is the Hodge star operator associated with the Hermitian metric h with corresponding 2-form equal to ω^{1,1} ([GH]). Then h(η, η) can be written as an integral whose integrand involves dd^c ω^{1,1}. But dd^c ω^{1,1} = 0 on a Hermitian symplectic manifold, so the integral vanishes. Since h is a hermitian metric, η also equals zero.
In order to complete the proof of dd c -lemma for (1,1)-forms on Hermitian symplectic manifolds, we have to prove that an exact, primitive (1,1)-form vanishes.
Proof: The square of the Hermitian norm of η is equal to the integral of η ∧ η ∧ ω^{1,1}, but in dimension 3 we have the equality η ∧ η ∧ ω^{1,1} = η ∧ η ∧ ω; the latter form is exact, therefore η = 0. Proof: If Alb(M) is smooth, then Alb is an immersion, and the pullback of the Kähler form Alb^∗ω is a Kähler form on M. Otherwise, we can desingularize the morphism Alb to obtain a Kähler metric on some manifold M̃ bimeromorphic to M (M is then a manifold in the Fujiki class C). On the other hand, M admits an SKT structure (Lemma 1.2). From the theorem of Chiose ([Ch]) it follows that M is Kähler. When dim Alb(M) = 1, the fibers of the Albanese map are Hermitian symplectic (and therefore Kähler) surfaces, and the pullback of the volume form Alb^∗ Vol_C is a closed, non-exact (1, 1)-form on M. By the dd^c-lemma for (1, 1)-forms and Remark 2.3, it cannot be cohomologous to a form of type (2, 0) + (0, 2). By a theorem of Michelsohn ([Mi]), in that situation there exists a balanced metric on M, that is, a Hermitian form ω such that dω^{dim M − 1} = 0. Actually, the smoothness of C is not necessary, because a manifold bimeromorphic to a balanced manifold is balanced itself ([AB]).
$D$-dimensional Arrays of Josephson Junctions, Spin Glasses and $q$-deformed Harmonic Oscillators
We study the statistical mechanics of a $D$-dimensional array of Josephson junctions in presence of a magnetic field. In the high temperature region the thermodynamical properties can be computed in the limit $D \to \infty$, where the problem is simplified; this limit is taken in the framework of the mean field approximation. Close to the transition point the system behaves very similar to a particular form of spin glasses, i.e. to gauge glasses. We have noticed that in this limit the evaluation of the coefficients of the high temperature expansion may be mapped onto the computation of some matrix elements for the $q$-deformed harmonic oscillator.
Introduction
In this paper we are interested in studying the statistical mechanics of arrays of Josephson junctions in D dimensions in the limit where D → ∞. We will construct here the solution of the mean field theory in the high temperature phase. We postpone to a later stage the computation of the corrections to the mean field approximation and the study of the low temperature phase. The model has been studied in two dimensions, especially in the low temperature region [1,2], but no results are known in very high dimensions.
The model we consider is described by the Hamiltonian (1), in which nearest-neighbour spins are coupled through the link variables U. Here c(D) is a normalisation constant, which will be useful later to rescale the Hamiltonian in order to obtain a non-trivial limit when D goes to infinity. The spins φ_j are defined on a D-dimensional hypercubic lattice. We can consider three possibilities: • The spins φ_j are constrained to be of modulus one.
• The spins φ j have modulus one in the average at β = 0: in this limit they have a Gaussian distribution.
• The spins satisfy the constraint Σ_i |φ_i|^2 = N. This is the spherical model which is intermediate between the two previous models.
In the limit where the dimension D goes to infinity the properties of the first model and of the third model can be obtained from that of the Gaussian model. We will concentrate our attention on the Gaussian case.
The couplings U are non-zero only for nearest-neighbour sites. They are complex numbers of modulus one and they satisfy the symmetry relation U_{j,i} = Ū_{i,j} (eq. (2)). In other words the couplings U are the link variables of a U(1) lattice gauge field. We will select the couplings U to give a constant magnetic field. Many different orientations of the magnetic field can be chosen. For simplicity we restrict our computation to the case where the flux through each elementary plaquette is given by B (or −B), independently of the plane to which the plaquette belongs. This corresponds to constant uniform frustration on all the plaquettes. In the extreme case (B = π) we obtain a fully frustrated model, while for B = 0 we recover the ferromagnetic case. Random point-dependent B values correspond to a particular form of spin glasses, i.e. to gauge glasses [3]-[6].
More precisely we set B α,β = S α,β B, where S α,β may take the values 1 or −1, B α,β is the antisymmetric tensor corresponding to the magnetic field, which in the continuum limit is given by ∂ α A β − ∂ β A α . The ordered product of the four links of a plaquette in the α, β plane is equal to exp(i B α,β ).
We must now specify S_{α,β}, i.e. the sign of B_{α,β}. A possible choice would be to take S_{α,β} = 1 for α > β and S_{α,β} = −1 for α < β, which implies B_{α,β} = B for α > β.
In two and in three dimensions this choice is equivalent to any other possible choice of the sign. In three dimensions the magnetic field is a vector and all the vectors corresponding to different choices of the sign may be obtained one from the other by a rotation. The choice of S does not influence the thermodynamics.
In more than three dimensions different choices of the matrix S are not equivalent 1 and we must select one among all the possible ones. In this note we consider the case in which the matrix S is a generic one, i.e. the signs of B are randomly chosen. The system is translation invariant and the randomness appears only in the relative orientation of the magnetic field with respect to the crystal axes.
In the two dimensional case we recover the usual description for an XY system (or equivalently an array of Josephson junctions) in constant magnetic field.
The aim of this note is to compute the statistical properties of this model in the mean field approximation in the high temperature region. The first difficulty we face consists in finding the spectral properties of the lattice discretised Laplacian in the presence of a magnetic field. The lattice Laplacian is defined in the usual way, with the nearest-neighbour hopping terms multiplied by the corresponding couplings U. The spectral properties of the lattice Laplacian in two dimensions have been carefully studied. They depend on the arithmetic properties of B/π, i.e. different results are obtained for rational and irrational B/π [2].
The study of the lattice Laplacian in higher dimensions is much less developed. In any dimension the explicit construction of the field U shows that for rational B/π, of the form B = 2πr/s, with both r and s integers, there is a gauge in which the U couplings are periodic functions of the position, with period s. In this case the spectrum of the Laplacian has the typical band form, the edges of the bands being related to the eigenvalues of an s^D × s^D matrix. When both s and D are large, a direct study of the eigenvalues is rather complex.
We will study this problem in the limit of an infinite number of dimensions. We cannot solve it in a completely satisfactory way, but we can put forward some educated guesses. We will find some unexpected relations with the properties of the q-deformed harmonic oscillator. In the end the behaviour of the model will come out very similar to that of spin glasses. The reader should notice that it is not clear how much of our results survives in large, but finite, dimensions and that the properties of the model in high dimensions may be quite different from those of the two dimensional model.
In section II we present some general considerations. In the next section we show some general properties of the high temperature expansion in the limit D → ∞. We consider in detail the ferromagnetic case, the spin glass case and the constant frustration model. In section IV we show the relation among the high temperature expansion for the constant frustration model in infinite dimension and the q-deformed harmonic oscillator. In the next section we study the behaviour of our model near the critical point and we find that it is very similar to that of spin glasses. In section VI we briefly discuss the problems related to the exchange of limits (β → β c and D → ∞). Finally (in the last section) we present our conclusions and express our points of view on the open problems. In the appendix we will describe some interesting features of the q-deformed harmonic oscillator, which shows an anomalous behaviour for q = exp(2π iθ), when θ is rational.
General Considerations
There are two extreme cases for the U which are very well studied for the Hamiltonian (1). The first is U = 1 on every link: in this way we obtain the usual ferromagnetic XY model. There is a ferromagnetic transition at β = 1 in the limit D → ∞, if we set c(D) = 1/(2D), i.e. c(D) has to be equal to the inverse of the coordination number of the hypercubic lattice.
The second is U = exp(ir), where the r are random numbers belonging to the interval [0, 2π], such that the symmetry condition eq. (2) is satisfied.
In this way we obtain a spin glass model of the XY type, which is called a gauge glass. The transition temperature is β = 1 in the limit D → ∞, if we set c(D) = (2D)^{−1/2}, i.e. c(D) is equal to the inverse of the square root of the coordination number [3,4,5].
The model we study is intermediate between the previous two problems. In order to define it properly, it is convenient to introduce the so-called Wilson loop. Let us consider a closed oriented circuit (C) on the lattice, which goes from the point j back to the same point j, and let us define W(C) as the product of the U's along the circuit. The Wilson loop W(C) is gauge invariant. The knowledge of W(C) for any C gives all gauge-invariant information concerning the gauge field.
In the continuum limit we have W(C) = exp(iΦ(C)), where Φ(C) is the magnetic flux entangled within C. In 2 dimensions, in the presence of a constant magnetic field, the Wilson loop is given by W(C) = exp(iB S(C)), where S(C) is the signed area of the loop C.
In D dimensions there are D(D − 1)/2 planes oriented in the directions of the lattice. For the choice of the magnetic field we study here, the flux Φ(C) is the sum over all these planes of B_{ν,μ} times the projected signed area, where the indices ν and μ denote one of the D possible different directions and S_{ν,μ} is the signed area of the projection of the curve C on the ν, μ plane.
As a consequence of gauge invariance there are infinitely many choices of the U which correspond to these Wilson loops. All these choices are physically equivalent. In two dimensions the U can be written explicitly in terms of j_ν, the ν-th component of the vector j, where we have introduced the short-hand notation n_ν for the unit vector in the ν direction. This construction can be generalised to the D-dimensional case, for example to 4 dimensions. Our main task will be the study of the associated Gaussian model. The solution of this associated Gaussian model is a crucial step in the computation of the properties of the high temperature expansion.
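One concrete gauge choice for the two-dimensional case is sketched below: the links in the x direction are trivial and the links in the y direction carry a phase proportional to the x coordinate. This Landau-like gauge is an illustration only (it is not necessarily the gauge chosen in the text); the point of the sketch is simply that the Wilson loop of every elementary plaquette equals exp(iB).

```python
import cmath
import numpy as np

B, L = 2.0 * np.pi * 3.0 / 7.0, 7          # flux per plaquette and lattice size (illustrative)

def U(j, nu):
    """Link variable from site j = (jx, jy) in direction nu (0 = x, 1 = y)."""
    jx, jy = j
    return 1.0 + 0.0j if nu == 0 else cmath.exp(1j * B * jx)

def plaquette(j):
    """Ordered product of the four links around the plaquette with corner j."""
    jx, jy = j
    return (U((jx, jy), 0) * U((jx + 1, jy), 1)
            * np.conj(U((jx, jy + 1), 0)) * np.conj(U((jx, jy), 1)))

fluxes = [cmath.phase(plaquette((jx, jy))) for jx in range(L) for jy in range(L)]
print(np.allclose(np.mod(fluxes, 2.0 * np.pi), np.mod(B, 2.0 * np.pi)))   # True
```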
The high temperature expansion
In the case of the Gaussian model the free energy density can be written as a sum over all the closed lattice circuits with a given starting point, where L(C) is the length of the circuit [6]. In a model (like the present one) where gauge invariant quantities are translationally invariant [7], we can choose the origin (and the end) of the circuit at an arbitrary point of the lattice. In other cases, like spin glasses, we must average over all the possible starting points [8].
The previous formula can also be written in terms of W(C)_n, the average of W over all the circuits of length n, and N(n), the number of (rooted) closed circuits.
Differentiating the previous formulae we obtain a similar result for the internal energy density, in which the factor 1/n has disappeared.
The ferromagnetic case
This is the simplest case. We have only to compute N(n) since W(C)_n = 1. It is evident that N(n) = 0 for odd n. The first non-zero contributions occur at small even n and are easily enumerated. We could also compute N(n) using an explicit representation. If we use the correct normalisation of c(D), which puts the critical temperature at β = 1, we immediately find that when D → ∞ all these contributions vanish. This is a well known fact: in the high temperature phase in the mean field approximation the internal energy of a ferromagnetic system is zero. The fluctuations contribute only in the subdominant terms of the large D expansion.
This behaviour implies that one should be careful in taking the limit D → ∞. Indeed it is easy to check that in the limit where n ≫ D one finds a different behaviour [9], while in the opposite limit D ≫ n one gets eq. (20), N(n) ≃ (n − 1)!! (2D)^{n/2}. The equation (20) is very simple to understand. In a closed circuit, for each step in one direction there must be a step in the opposite direction. In infinite dimensions all the steps are taken in different directions (in a way compatible with this constraint). The generic circuit will thus be identified by the directions in which these steps are done (we have to make a choice n/2 times between these directions) and by the locations of the steps at which two opposite directions are chosen. In high dimensions all the steps are done in different directions and in this way one obtains the previous formula, i.e. the number of pairings of n objects ((n − 1)!!), multiplied by the number of choices for the directions ((2D)^{n/2}).
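The large-D counting of eq. (20) can be checked by brute force for small circuit lengths. The sketch below enumerates all rooted closed circuits of length 4 on the hypercubic lattice for a few values of D and compares the exact count with (n − 1)!! (2D)^{n/2}; the ratio approaches 1 as D grows, as expected.

```python
from itertools import product
import numpy as np

def closed_walks(n, D):
    """Exact number of rooted closed lattice circuits of length n in D dimensions."""
    steps = [tuple(s if k == axis else 0 for k in range(D))
             for axis in range(D) for s in (+1, -1)]
    count = 0
    for walk in product(steps, repeat=n):
        if all(sum(step[k] for step in walk) == 0 for k in range(D)):
            count += 1
    return count

def estimate(n, D):
    """Large-D estimate: (n-1)!! pairings of the steps times (2D)^(n/2) direction choices."""
    double_fact = int(np.prod(np.arange(n - 1, 0, -2)))
    return double_fact * (2 * D) ** (n // 2)

for D in (1, 2, 3, 4, 5):
    exact, est = closed_walks(4, D), estimate(4, D)
    print(f"D = {D}: N(4) = {exact:5d},  (n-1)!!(2D)^(n/2) = {est:5d},  ratio = {exact/est:.3f}")
```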
If we were not aware of the correct normalisation factor and we had put c(D) = (1/(2D))^{1/2} with the aim of obtaining a non-trivial perturbative expansion, we would have found that the high temperature expansion has a zero radius of convergence. This is not a surprise [10] because on this scale the critical temperature is at β = 0 and any non-zero value of β is already in the low temperature regime.
In the ferromagnetic case the singularity of the free energy disappears when D → ∞ in the high temperature expansion with the correct c(D). This effect can be easily explained. The ferromagnetic transition is characterised by the building up of a singularity at momentum k = 0 in the two point correlation function. The free energy in the high temperature phase is given by an integral over the first Brillouin zone. When D → ∞ the region of momenta near the origin has a vanishing weight and its contribution to the singularity disappears. We can see a transition in the specific heat in the limit of infinite dimensions only if the directions of the most relevant modes are not orthogonal to the boundary of the Brillouin zone, where the measure is concentrated in momentum space.
Spin Glasses
In this case we will compute the spectrum of the random Laplacian. This can be done in the infinite dimensions limit since we recover the old problem of computing the spectrum of a random matrix, which is given by a semicircle law 2 . Instead of using directly this result we prefer to follow a diagrammatic approach.
In this case the U's have zero average and are random elements of the U(1) group. After the average over all the possible starting points, W (C) gets contributions only from those circuits for which for any step going from i to k there is a step going from k to i. In other words we must sum only over backtracking circuits.
Let us count the number of these circuits in infinite dimensions. We must compute the coefficients G_n. It is easy to check that for n = 1 we do not get any new contribution with respect to the previous case and G_1 = 1.
For larger values of n a more detailed computation must be done. At this end it is convenient to denote by a, b, c.. one of the different 2D possible directions in which a step could be done.
In the case n = 2 we have 3!! circuits which differ in the ordering possibilities: aabb, abba and abab, where it is implicit that the second identical letter denotes a back step in the opposite direction of the first one. We do not attach any meaning to the letters a or b: we could have written aabb or bbaa indifferently. In both cases the second and the fourth steps are in the opposite direction of the first and of the third step, respectively. Each of the 3!! choices corresponds to (2D)^2 lattice circuits (we neglect subleading terms for large D). The first two are backtracking circuits, the third is not. We thus find G_2 = 2.
In the case n = 3 we have 5!! circuits which differ in the ordering possibilities. We list here all the backtracking ones: aabbcc, abbcca, abccba, aabccb, abbacc.
Therefore G 3 = 5. It is easy to verify that a circuit is backtracking if and only if the corresponding word may be reduced to the null one by subsequent elimination of consecutive identical letters.
The computation of G_n can thus be cast in the following graphical form. For each given word, we put its 2n letters (two by two equal) on a circle, starting from a given point, in the same order as the letters of the corresponding word. We connect those points which have identical letters by a line and we count the number of intersections of the lines. This number is a topological invariant: it does not depend on the point where the letters have been placed on the circle, but only on their order.
We can associate to each word the number of intersections. Let us call I_n(m) the number of words which have m intersections (m ≤ n(n − 1)/2). It is easy to check that G_n = I_n(0): indeed, only in the case in which the resulting diagram is planar may the word be reduced to the null one by removing consecutive equal letters. The combinatorial problem of computing I_n(0) has been solved [11] in the past 3. After a short computation one finds that I_n(0) = (2n)!/(n!(n + 1)!), the Catalan numbers. The result of the computation can also be written in a slightly different form. We consider a Hilbert space with a basis |m⟩, where m ranges in the interval [0, ∞]. We define on this space two shift operators R and L, with R|m⟩ = |m + 1⟩ and L|m⟩ = |m − 1⟩, where |−1⟩ is identified with the null vector. These two operators satisfy the relation LR = 1, which is a particular case (for q = 0) of the q-deformed commutation relations 4. It is easy to see that G_n = ⟨0|(L + R)^{2n}|0⟩, where the state |0⟩ could also be characterised by the condition L|0⟩ = 0. The existence of these two other formulations should not be a surprise. The condition of zero intersections implies that the diagram is planar, and the theory of random matrices may be reformulated in terms of planar diagrams. The theory of random matrices can also be formulated in terms of the orthogonal polynomials with respect to a given measure [14] and in this context it is well known that the shift operators play a crucial role [15].
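The counting of words by number of intersections can be verified by brute force: the sketch below enumerates all pairings of 2n points on a circle, records the crossing number of each chord diagram, and checks that the number of crossing-free diagrams reproduces the Catalan numbers 1, 2, 5, 14, ... The full histogram I_n(m) also reproduces the coefficients of the polynomials G_n(B) in q quoted later in the text.

```python
from itertools import combinations
from math import comb

def pairings(points):
    """All perfect matchings of an even set of labelled points."""
    if not points:
        yield []
        return
    first = points[0]
    for i in range(1, len(points)):
        rest = points[1:i] + points[i + 1:]
        for rest_match in pairings(rest):
            yield [(first, points[i])] + rest_match

def crossings(matching):
    """Number of intersecting chord pairs: (a,b) and (c,d) cross iff a < c < b < d."""
    cnt = 0
    for (a, b), (c, d) in combinations(matching, 2):
        if (a < c < b < d) or (c < a < d < b):
            cnt += 1
    return cnt

for n in range(1, 6):
    hist = {}
    for m in pairings(list(range(2 * n))):
        c = crossings(m)
        hist[c] = hist.get(c, 0) + 1
    catalan = comb(2 * n, n) // (n + 1)
    print(f"n = {n}: I_n(m) = {dict(sorted(hist.items()))},  I_n(0) = {hist[0]} (Catalan = {catalan})")
```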
We finally find that there is a transition at β = 1/2, which is characterised by a singularity of the specific heat of the form (β_c − β)^{−1/2}. In other words the critical exponent α is equal to 1/2. Equation (34) gives the result for spin glasses in the Gaussian approximation. Starting from it one can obtain the more familiar results for the Ising spin glass or for the spherical spin glass.
Josephson junctions in Magnetic Field
In this case we first need to compute the functions G_n(B). We will follow the strategy of first dividing the circuits into classes corresponding to different words of 2n letters (as in the previous case) and then evaluating the contribution of each class. Let us start by computing G_2(B) (it is trivial that G_1(B) = 1). The backtracking circuits, which correspond to the planar diagrams (the corresponding words are aabb and abba), give a contribution 1 each. More generally we can define the area of a circuit as the minimal area of a surface of lattice plaquettes which has that circuit as boundary. Backtracking circuits can be characterised as area-zero circuits.
For large D the word abab corresponds to $(2D)^2$ circuits with area 1. For half of them the signed area $S(C)$ (defined in eq. 10) is equal to 1, for the other half it is equal to −1. If we recall that $W(C) = \exp(i\Phi(C))$, the contribution of these circuits averages to $\cos(B)$. We finally find $G_2(B) = 2 + q$, where $q = \cos(B)$.
Generally speaking, each different word of length 2n is associated with $(2D)^n$ circuits having the same area. The signed areas of these circuits, which all have the same area A, are however different. In a large number of dimensions (in the generic case where all the independent steps are done in different directions) the projected signed areas $S_{\mu,\nu}$ take only the values 0 or ±1 and $\sum_{\mu<\nu} |S_{\mu,\nu}| = A$.
If we average the contribution coming from the circuits associated with the same word over all the possible orientations of the lattice, we find that the average value of $W(C)$ depends only on A and is given by
$$\overline{W(C)} = \cos(B)^A \equiv q^A .$$
We finally find that
$$G_n(B) = \sum_w q^{A(w)} ,$$
where the sum is taken over all words w of 2n letters and $A(w)$ is the area associated with each of these words.

We now show that the area of a circuit is exactly equal to the number of intersections of the lines connecting equal letters in the corresponding diagram. We can decrease the area by one unit by interchanging two adjacent letters. For example, interchanging the second and third letters turns abab into aabb: indeed the area of the projection on the a−b plane goes from 1 to 0, while the projected areas on the other planes are the same for the two circuits corresponding to the two words. The same braiding operation decreases the number of intersections by 1. By repeated operations of this kind we can arrive at the zero-intersection case (planar diagrams), decreasing at each step both the area and the number of intersections by one. We have already remarked the relation between the number of planar diagrams and the coefficients of the high temperature expansion for spin glasses ($G_n = G_n(0)$). We have thus transformed the problem of computing the high temperature expansion into a combinatorial problem, although not a very easy one, which generalises the computation of planar diagrams. The solution of this problem will be presented in the next section.
The q-deformed harmonic oscillator plays a role
We have reduced the problem of evaluating the high temperature expansion for the Gaussian model in the presence of a magnetic field to the computation of the number of words of 2n letters, equal in pairs, such that the number of intersections in the corresponding diagram is equal to a given number.
We claim that
$$G_n(B) = \langle 0 | (L_q + R_q)^{2n} | 0 \rangle , \qquad (42)$$
where $L_q |0\rangle = 0$ and the operators $L_q$ and $R_q$ satisfy the commutation relation of a q-deformed harmonic oscillator:
$$L_q R_q - q\, R_q L_q = 1 .$$
Therefore $L_q$ may be identified with the destruction operator and $R_q$ with the creation operator of a q-deformed harmonic oscillator. For q = 1 we recover the ferromagnetic case, for q = −1 the fully frustrated case and for q = 0 the spin glass case. These operators may be represented as
$$R_q|m\rangle = \sqrt{[m+1]_q}\,|m+1\rangle , \qquad L_q|m\rangle = \sqrt{[m]_q}\,|m-1\rangle , \qquad [m]_q \equiv \frac{1-q^m}{1-q} ,$$
where m ranges over the non-negative integers. In the limit q → 1 we obtain the usual Bosonic oscillator and we recover the usual formulae. It is a simple matter of computation to verify that eq.(42) gives $G_1(B) = 1$, $G_2(B) = 2 + q$, $G_3(B) = 5 + 6q + 3q^2 + q^3$, $G_4(B) = 14 + 28q + 28q^2 + 20q^3 + 10q^4 + 4q^5 + q^6$.
These results coincide with the output of an explicit enumeration of the diagrams. We have not been able to find a neat proof of eq.(42); however, we have checked it in many special cases (large q, small q, q = 1, q = 0 and q = −1) and we are convinced of its validity.
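For readers who wish to repeat this kind of check, eq.(42) can be tested numerically by building a truncated matrix for $X_q = L_q + R_q$ in the basis $|m\rangle$ and comparing $\langle 0|X_q^{2n}|0\rangle$ with the polynomials quoted above. The Python sketch below is only an illustration of such a consistency check; it assumes the representation with matrix elements $\sqrt{[m]_q}$ given in the previous paragraph, and the cutoff M and test values of q are arbitrary.

```python
import numpy as np

def X_q(q, M=30):
    """Truncated matrix of X = L_q + R_q with off-diagonal elements sqrt([m]_q)."""
    m = np.arange(1, M)
    bracket = m.astype(float) if q == 1 else (1 - q**m) / (1 - q)   # [m]_q
    off = np.sqrt(bracket)
    return np.diag(off, 1) + np.diag(off, -1)

def G(n, q, M=30):
    """<0| X^{2n} |0>, the claimed value of G_n(B) with q = cos(B)."""
    return np.linalg.matrix_power(X_q(q, M), 2 * n)[0, 0]

for q in (0.0, 0.3, -0.7):
    print(q, G(2, q), 2 + q)                                         # should agree
    print(q, G(3, q), 5 + 6*q + 3*q**2 + q**3)                       # should agree
    print(q, G(4, q), 14 + 28*q + 28*q**2 + 20*q**3 + 10*q**4 + 4*q**5 + q**6)
```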
Intuitively eq.(42) tells us that, when we use the Wick theorem for q-deformed harmonic oscillators, we must bring together the terms we contract, and for each contraction we get a factor q raised to the power of the number of objects we have to cross.
If we use this result, we finally arrive at a quite simple formula which gives a remarkable connection between the high temperature behaviour of the Gaussian model and the q-deformed harmonic oscillator.
In this way we have reduced the combinatorial problem of computing the high temperature expansion to an algebraic problem.
Near the critical transition
The problem is now reduced to the computation of the spectrum of the operator $X_q = L_q + R_q$ of the q-deformed harmonic oscillator. The computation is apparently non trivial. We are however interested in the computation of the spectral density near the largest eigenvalues.
A simple case is q = 1, where the operator $X_q$ is not bounded and the high temperature expansion is divergent. In this case X has a continuous spectrum and the highest eigenvalues of X are concentrated in the large-m region. Let us assume that this feature remains valid for q inside the interval [−1, 1]. One finds that
$$L_q \simeq (1-q)^{-1/2}\, L , \qquad R_q \simeq (1-q)^{-1/2}\, R$$
when the operators are applied to a state $|m\rangle$ in the region of large m (L and R are the two shift operators for q = 0 which are used in the planar case). The difference between $L_q$ and $(1-q)^{-1/2} L$ can be seen only when the two operators act on a state of low m. It is very reasonable to assume that the spectral radius and the spectral density near the maximum eigenvalue are the same in the two cases. We have verified numerically that this conjecture is consistent (at least for q not too close to 1) by estimating the spectral density of $X_q$ in subspaces of various sizes (m < M, with M up to 300).
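The numerical check just mentioned can be organised, for instance, as in the sketch below (an illustration, not the original program): one diagonalises the truncated matrix of $X_q$ for increasing cutoff M and watches the largest eigenvalue approach $2/\sqrt{1-q}$, the spectral radius implied by the large-m approximation $X_q \simeq (1-q)^{-1/2}(L+R)$. The choice of test values of q and M is arbitrary.

```python
import numpy as np

def spectral_radius(q, M):
    """Largest eigenvalue of the truncated X_q = L_q + R_q (cutoff m < M)."""
    m = np.arange(1, M)
    off = np.sqrt((1 - q**m) / (1 - q))      # sqrt([m]_q), valid for |q| < 1
    X = np.diag(off, 1) + np.diag(off, -1)
    return np.max(np.linalg.eigvalsh(X))

for q in (0.0, 0.5, -0.5, -0.9):
    for M in (50, 150, 300):
        print(q, M, spectral_radius(q, M), 2 / np.sqrt(1 - q))
```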
We find therefore that the critical temperature is given by
$$\beta_c(q) = \frac{\sqrt{1-q}}{2} ,$$
which is the inverse of the spectral radius of X, i.e. $|X| = 2/\sqrt{1-q}$.
The behaviour of the spectral density near the edge is the same as for the random matrix model, i.e. as in the spin glass. In this way we find the same critical exponents as in spin glasses in the Gaussian approximation. A possible physical interpretation is the following. In computing the internal energy one has to sum over all the closed circuits. Circuits with large physical area average to zero and only fattened backtracking circuits survive. The situation is very similar to spin glasses, where only backtracking circuits contribute, the only effect being a renormalization of the temperature.
The issue of exchanging limits
A very serious problem in assessing the relevance of these results is related to the exchange of the limits D → ∞ and β → β_c. If we exchange the limits we become blind to any singularity whose strength vanishes in the limit D → ∞. Sometimes this exchange is quite justified, sometimes it leads to disaster [16], [17].
The cases q = 1 and q = −1 are particularly instructive. The case q = 1 has already been discussed. The case q = −1 is quite interesting. We notice the following facts.
• The spectrum of the lattice Laplacian for the fully frustrated model is well known [17]. A simple way to compute it consists in using the relation between the Gaussian fully frustrated model and the naive Wilson Fermions on the lattice [18]. Indeed, let us start from the Hamiltonian of the naive Wilson Fermions, in which $\hat\mu$ denotes the unit vector in the µ direction, the $\gamma_\mu$ are the appropriate Dirac gamma matrices in D dimensions (which satisfy the usual algebra) and the ψ are the spinors on which these matrices act. For even D the gamma matrices may be taken to have dimension $2^{D/2}$. In order to simplify the notation we have not indicated the spinorial indices. If we introduce a suitably transformed field, it is a well known fact that the lattice Dirac operator reduces to the Laplacian of a fully frustrated model.
• The previous remark implies that for q = −1 one obtains, in the Gaussian approximation (with the appropriate rescaling of β), an explicit result valid for all even values of the dimension.
• If we send D to infinity we find a result in perfect agreement with the direct computation. (In this case the creation and annihilation operators act on a two-dimensional Fermionic space.)

• In any finite dimension [17] the closest singularity to the origin of the function U(β) is located at $\beta^2 = 1/2$, which corresponds to the integration point where all the momenta are at the boundary of the Brillouin zone (i.e. $k_\mu = \pm\pi/2$).
• In infinite dimensions the function $\beta_c(q)$ is discontinuous at q = −1. Indeed, as already found in [17], at q = −1 the limit D → ∞ of $\beta_c$ is smaller by a factor of 2 than the value of $\beta_c$ obtained from the high temperature expansion computed directly at D = ∞. However this difficulty seems to be confined to q = −1. If we first take the limit D → ∞ at q = 1, we recover the correct critical point for the q = 1 case. In other words, if we first compute the critical temperature at D = ∞ for q close to −1, we obtain the correct value of the critical temperature at q = −1, while we would get the wrong result if we performed the limit D → ∞ directly at q = −1. By consistency we find that the prefactor in front of the nearest singularity vanishes when q → −1, so that for q = −1 this singularity disappears.
It seems that we are free to conjecture that (apart from the two well understood problems at q = −1 [17] and q = 1 [10]) the correct value of the critical temperature is obtained when we first send D to infinity. A numerical verification of the validity of this conjecture may be attempted for q = 0 or q = ±1/2, where $\beta_c$ can be computed by diagonalizing matrices of size $2^D$ or $3^D$ respectively.
Open problems
Let us suppose that the difficulties discussed in the previous section are not serious. We still face the problem of presenting a full computation of the high temperature expansion for the XY model. We must include higher order terms which come from the fact that the distribution of the spins is not Gaussian. In the case of spin glasses these corrections are relevant; however they are identical in the Ising, XY and spherical models. In this last case they can be computed by tuning the coefficient of the quadratic term in such a way that the spherical constraint is satisfied.
We have not checked that this happens also in our case, but it seems rather plausible. If this argument is correct, the knowledge of the Gaussian propagator is sufficient to reconstruct the high temperature expansion.
What happens in a finite number of dimensions is not clear. The first step is to verify whether the equality of the two models survives in perturbation theory. Even if this check is satisfied one should be very careful because of non-perturbative effects. It seems to me rather likely, although I do not have solid arguments in this direction, that for rational B the critical theory should behave differently from spin glasses, and that the only hope of having a spin-glass-like behaviour is for generic irrational B. It would be very interesting to connect this approach with the results obtained in two dimensions, where quantum groups have been used to compute the spectrum [19].
The possibility of having a spin glass behaviour for this non random system [20] is fascinating and deserves more careful investigations.
Acknowledgements
It is a pleasure for me to thank V. Anders for suggesting this field of investigation to me and for a very useful discussion. I am also happy to thank G. Immirzi, E. Marinari, S. Sciuto and G. Vitiello for useful suggestions.
Appendix
In this short appendix I report on some numerical findings that I have obtained on the behaviour of the spectral radius of X as a function of θ for q = exp(i2πθ). I find a function which is discontinuous at all rational points, but the discontinuity vanishes as the rational point approaches an irrational one.
If we apply the previous formulae we find that the spectral radius of X should satisfy
$$|X|^2 = \frac{4}{|1-q|} = \frac{2}{\sin(\pi\theta)} .$$
The argument breaks down for rational θ. Indeed, if θ = r/s, with r and s the smallest integers with this property, X reduces to a finite dimensional operator of size s. In this case the previous formula is not correct. However, in the limit where s goes to infinity it seems to become correct again. This can be seen by considering the function R(θ), defined by
$$|X|(\theta)^4 = \frac{4}{\sin^2(\pi\theta)} \left(1 - \frac{\pi^2}{2 s^2}\right) + R(\theta) .$$
The function R(θ) is the difference between the analytic continuation of the spectral radius from |q| less than 1 and the actual spectral radius (apart from the presence of a multiplicative factor which goes to zero as $s^{-2}$ when s → ∞ at fixed θ).
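A check of this kind can be sketched as follows (an illustration only, not the original program; it assumes that for complex q the same representation with matrix elements $\sqrt{[m]_q}$ is used, so that for θ = r/s the operator truncates exactly to an s × s block, and the list of denominators s is arbitrary):

```python
import numpy as np
from math import gcd, pi, sin

def radius(theta, s):
    """Spectral radius of X for q = exp(2*pi*i*theta) with theta = r/s (exact s x s block)."""
    q = np.exp(2j * pi * theta)
    m = np.arange(1, s)
    off = np.sqrt((1 - q**m) / (1 - q))      # sqrt([m]_q), complex in general
    X = np.diag(off, 1) + np.diag(off, -1)
    return np.max(np.abs(np.linalg.eigvals(X)))

for s in (5, 8, 13, 21):
    for r in range(1, s):
        if gcd(r, s) != 1:
            continue
        theta = r / s
        exact = radius(theta, s)
        continued = (2 / sin(pi * theta)) ** 0.5   # analytic continuation |X|^2 = 2/sin(pi*theta)
        print(s, r, round(exact, 4), round(continued, 4))   # their mismatch reflects R(theta)
```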
I have computed the function R(θ) for all rationals with s ≤ 21 (70 cases) and I have found that it goes to zero quickly with s (quite likely as $s^{-2}$). It seems likely that the function R(θ) is discontinuous at rational points, but the value of the discontinuity goes to zero when the rational approaches an irrational (i.e. when s → ∞).
Unfortunately I am not aware of a physically interesting model in which the properties of X for complex q enter. This appendix should therefore be considered as a curiosity.
Sensitivity, specificity, positive and negative predictive values of identifying atrial fibrillation using administrative data: a systematic review and meta-analysis
Introduction Atrial fibrillation (AF) is the commonest arrhythmia and a major cause of stroke and health care utilization. Researchers and administrators use electronic health data to assess disease burden, quality and variance in care, value of interventions and prognosis. We performed a systematic review and meta-analysis to assess the validity of AF case definitions in administrative databases. Methods Medline was searched from 2000 to 2018. Extracted information included sensitivity, specificity, positive and negative predictive values (PPV and NPV) for various AF case definitions. Estimates were pooled using random-effects models due to significant heterogeneity between studies. Results We identified 24 studies, including 21 from North America or Scandinavia. Hospital, ambulatory and mixed data sources were assessed in 10, 4 and 10 studies, respectively. Nine different AF case definitions were evaluated, most based on ICD-9 or 10 codes. Twenty-two studies assessed case definitions in patients diagnosed with AF and thus could generate PPV alone. Half the studies sampled unrestricted populations including a mix of those with and without AF to assess sensitivity. Only 13 studies included ECG confirmation as a gold standard. The pooled random effects estimates were: sensitivity 80% (95% CI 72–86%); specificity 98% (96–99%); PPV 88% (82–94%); NPV 97% (94–99%). Only 3 studies reported all accuracy parameters and included rhythm monitoring in the gold standard definition. Conclusion Relatively few studies examined sensitivity, and fewer still included rhythm monitoring in the gold standard comparison. Administrative data may fail to identify a significant proportion of patients with AF. This, in turn, may bias estimates of quality of care and prognosis.
Introduction
Atrial fibrillation (AF) increases risk of stroke, heart failure and death, and is one of the few cardiac conditions whose prevalence continues to rise. 1,2 Most developed health systems collect reasons for hospital and ambulatory encounters for administration, service planning, quality improvement and reimbursement. Health services researchers use these administrative electronic databases to monitor the burden of disease, quality of care, and ascertain exposures or outcomes. The accuracy of AF identification is central to these applications. Sensitivity and specificity, though theoretically independent, typically trade-off and are inversely related. 3 The "optimal" approach to identifying AF depends on the purpose. High sensitivity more completely captures a population, improves generalizability and is important when defining AF as an exposure. By contrast, high specificity ensures persons identified truly have AF and is central to adjudicating treatment uptake, which appears inappropriately low if patients with sinus rhythm are misclassified as having AF. 4 Conceptually, the AF patient journey involves ambulatory and acute contacts dissociated in time and space, between which information flows by varying amounts and rates. Interrogating data sources over short time intervals or single environments may miss infrequent encounters. A previous systematic review examined the accuracy of AF detection, but was limited to ICD-9 codes only, noncontemporary electronic sources, North American cohorts and narrative synthesis without consideration of the impact of different health care settings (indeed the focus was largely on hospitalization data). 5 We, therefore, undertook a systematic review to address these evidence gaps.
Participants, outcomes and study designs
Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were followed (Table S1). We examined the accuracy of AF case definitions in electronic administrative health data, namely sensitivity (SN), specificity (SP), positive and negative predictive values (PPV and NPV). Inpatient, outpatient and mixed populations were included. All study designs were accepted. The study protocol was not published.
Search strategy and data collection

MEDLINE was searched from January 2000 to February 2018, limited to adult humans and English language, excluding case studies, reviews and conference abstracts. Search terms were determined by literature review and database query. The search strategy combined Medical Subject Headings (MeSH) terms and keywords in title and abstract to define three groups: atrial fibrillation (including atrial flutter (AFL) if not differentiated); administrative and electronic medical databases; and studies examining accuracy of AF identification within these records (Table S2). The search returned 1007 unique records. Manual bibliography searches identified an additional 31 publications (Figure S1). Titles and abstracts were screened for inclusion, and 302 full-text articles reviewed. Studies fulfilling the participant, outcomes and study design criteria were included. Variables of interest were decided a priori, expanded iteratively after a pilot, and collected in Microsoft Excel. The following information was extracted: bibliographic details, sample size, population characteristics, inclusion and exclusion criteria, codes and algorithms, AF confirmation gold standard and accuracy parameter outcomes.
Data synthesis
Weighted averages of sensitivity, specificity, positive and negative predictive values were calculated using the DerSimonian-Laird random effects model. 6 Forest plots of estimates with 95% confidence intervals (CI) were generated. Publication bias was assessed through visual inspection of funnel plots and the Begg-Mazumdar rank correlation test for asymmetry. 7,8 Heterogeneity was tested with visual forest plot inspection and the Cochran Q, I² and τ² statistics. 9 Estimates with significant heterogeneity (I² > 90%) were examined manually and formally for moderating effects including country, publication year and reference standard, none of which were significant. The leave-one-out method was used to determine whether the results were sensitive to the inclusion of extreme values from specific studies. 10
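To make the pooling step concrete, the sketch below shows a minimal DerSimonian-Laird random-effects calculation for a set of study-level proportions (for example per-study sensitivities), pooled on the logit scale; the input counts are invented for illustration and are not the review's data, and the variance approximation is the usual logit-scale one.

```python
import numpy as np

def dersimonian_laird(events, totals):
    """Pool proportions (e.g. per-study sensitivity) on the logit scale."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = events / totals
    y = np.log(p / (1 - p))                     # logit-transformed estimates
    v = 1 / events + 1 / (totals - events)      # approximate within-study variances
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)          # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1 / (v + tau2)                     # random-effects weights
    mu = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    expit = lambda x: 1 / (1 + np.exp(-x))
    ci = (expit(mu - 1.96 * se), expit(mu + 1.96 * se))
    return expit(mu), ci, tau2

# hypothetical per-study true positives and AF-positive totals
pooled, ci, tau2 = dersimonian_laird([80, 150, 45, 200], [100, 170, 60, 260])
print(pooled, ci, tau2)
```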
Study characteristics
Twenty-four studies were identified (Table 1). Most originated from countries with established administrative databases that are often interrogated by health services researchers, including 10 from the United States, 3 from Canada and 8 from Sweden or Denmark. The populations were heterogeneous, including general unselected, stroke and post-operative cohorts. Hospital, ambulatory and mixed data sources were assessed in 10, 4 and 10 studies, respectively. Only 3 studies outside Scandinavia examined mixed populations. 4,11,12 One Canadian study included administrative data from emergency departments separate from hospitalizations. 11

Coding and case definition algorithms

The case definitions and coding algorithms evaluated are summarized in Table 2. The impact of coding position (primary versus secondary diagnosis in hospitalization data) was never examined. Four studies compared the accuracy of two versus one encounter coded as AF in ambulatory data sources within a single year. 4,11-13 Requiring two encounters consistently increased specificity but decreased sensitivity. A single study compared 2 versus 1 year for case ascertainment in Veterans Affairs outpatient records, finding greater sensitivity with only slightly reduced specificity. 13 Overall in that study, 2 diagnoses over 2 years were optimal for detecting AF. 13 Only one study, from Canada, examined more complex algorithms, including cardioversion codes and pharmacy dispensations for antiarrhythmic drugs. 11
Characteristics of AF
Prevalent and incident AF were assessed in two-thirds and one-third of studies, respectively (Table 1). Incident cases were typically defined by exclusion of prior AF diagnoses since the records began, or the methods were not specified. The incidence and prevalence of AF varied markedly depending on the population studied, from 0.3% to 55%. 14,15 The incidence and prevalence were highest in studies following cardiac surgery (32-36%) and stroke (10-28%), lower in general hospitalizations (7-9%) and lowest in unselected outpatients (0.3-1%). No study distinguished between persistent and paroxysmal AF. Three studies reported from 7.0% to 20.4% of the coded AF to be transient. 4,16,17 Two defined transient as a single episode without recurrence, 4,16 while two added precipitants including cardiac surgery and/or hyperthyroidism. 16,17

Gold standard for diagnosis of AF

With the exception of two studies, medical chart review was considered the gold standard by which history of AF was classified (Table 2). Of these, 13 studies specifically included ECG review, of which 2 employed ECG alone for confirmation of AF. 18,19 No prospective protocols or frequencies for ECG were reported. A median of 11 ECGs per patient with AF was noted in a Swedish outpatient setting. 15 Only 4 studies mentioned use of longer term rhythm monitoring such as Holter, although these results may also have been available in medical record review.
Discussion
This analysis reports several key findings. The overall specificity and NPV of an AF diagnosis using the ICD case definitions were high, 98% and 97%, respectively. The sensitivity and PPV were lower though reasonable, 80% and 88%, respectively. Only half the studies sampled patients with and without an assigned diagnosis of AF to determine the sensitivity of the case definitions and thus the proportion potentially missed by using administrative data. Half the studies confirmed AF using electrocardiography as the gold standard, while the remainder employed medical record review, alternative databases (like primary care EMRs) and/or patient questionnaires. Only 3 studies reported all accuracy parameters and included rhythm monitoring in the gold standard definition. 15,17,20

Sensitivity

High sensitivity improves case finding as it more completely captures a population, increases the estimated incidence and prevalence and enhances generalizability. This is particularly relevant when estimating the burden of disease and to reduce bias when studying health inequalities. Sensitivity is also important when defining AF as an exposure. Misclassification of exposure (eg, AF) as nonexposure (eg, no AF) attenuates the association with outcomes such as stroke. 21 By contrast, sensitivity is less of a concern when defining AF as an outcome, for example in pharmacovigilance studies. In these circumstances, estimates of relative risk are not biased providing misclassification occurs to the same degree in exposed and nonexposed patients. Sensitivity is reduced when cases are missed and AF is misclassified as normal (ie, false negatives). This occurs in two circumstances. First, when recording or coding is incorrect. Second, when correctly recorded and coded diagnoses are missed in time or space. Examining shorter time frames may miss infrequent encounters, as evidenced in the Veterans Affairs study where sensitivity increased using a 2 versus 1 year period for case ascertainment. 13 Information also flows by varying amounts and rates through health systems. Although the median time from AF on ECG to diagnosis in the Swedish Patient Register was 16 days, this time lapse exceeded 6 months in one-third of patients. 15

Sensitivity may also be viewed from different perspectives: local, horizontal level of care (eg, primary care), vertical (eg, health maintenance organization) or global (entire health care system). Examining a single health care setting may miss encounters meeting the AF case definition in another. For example, hospitalization data alone misses patients managed entirely in the community, causing under-estimates of prevalence rates and over-estimates of adverse outcome rates. There were insufficient studies to accurately compare sensitivity between ambulatory, hospital and mixed populations. However, one of the mixed population studies did compare the accuracy of coding between primary care, secondary care or both together. In that study from Ontario, the sensitivity was 45%, 39% and 75% for hospitalization, emergency department or outpatient data sources alone, respectively, and 83% combining the three sources. 11 The true population incidence and prevalence may also be influenced by access to rhythm monitoring (ECG, Holter, event or loop recorder), reporting standards (eg, training, quality assurance) and information transfer (eg, interface to electronic medical record).
These factors are potentially more challenging in community than in hospital settings, are particularly relevant to measuring inequalities, and are difficult to quantify. None of the included studies described these aspects of access.
Positive predictive value
Since sensitivity and specificity are typically inversely related, higher sensitivity reduces specificity, which increases false positives and lowers PPV. The impact on PPV is magnified for diseases with a relatively low prevalence such as AF. A high PPV ensures persons identified truly have AF (fewer false positives). This is central to adjudicating treatment uptake, which will appear inappropriately withheld if patients with sinus rhythm are misclassified as having AF, unless OAC is prescribed for an alternate reason. 4 A PPV exceeding 85-90% has been suggested as adequate for research purposes. 19,20 The reasons for false positives were rarely explored. 22 Potential scenarios include: 1) miscoding, eg, allergic rhinitis written as "AR" and coded as AF; 22 2) rhythm misinterpretation, such as atrial tachycardia; 3) misreporting if based on medical history alone; and 4) AF defined by an intervention shared with other conditions, eg, cardioversion. PPV is also highly dependent on disease prevalence: as many studies focused on older or high-risk individuals, they may overestimate the true PPV for that case definition if applied in a younger population.
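The dependence of PPV on prevalence follows directly from Bayes' rule. The short calculation below is illustrative only; it uses the pooled sensitivity (80%) and specificity (98%) reported in this review with a range of hypothetical prevalences (not taken from any individual study) to show how the same case definition yields very different PPVs in low- and high-prevalence settings.

```python
def ppv(sens, spec, prev):
    """Positive predictive value from sensitivity, specificity and prevalence."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

def npv(sens, spec, prev):
    """Negative predictive value."""
    true_neg = spec * (1 - prev)
    false_neg = (1 - sens) * prev
    return true_neg / (true_neg + false_neg)

for prev in (0.005, 0.01, 0.09, 0.30):   # unselected outpatients up to post-operative cohorts
    print(f"prevalence {prev:5.3f}  PPV {ppv(0.80, 0.98, prev):.2f}  "
          f"NPV {npv(0.80, 0.98, prev):.3f}")
```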
Oral anticoagulation is the only treatment to improve survival in patients with AF, and is thus a key quality indicator. Although the overall PPV was high (88%), the specificity and PPV to identify AF requiring anticoagulation (as opposed to any AF) could be lower for several reasons. First, up to 10% of incident AF is isolated, with a defined precipitant and low recurrence, and may not require anticoagulation. 4,16,23 Only three studies reported or excluded such patients. 4,16,17 Second, anticoagulation adjudication requires accurate coding of embolic and bleeding risk factors, which like AF exhibit high specificity but are under-reported. 12,22 More subjective bleeding risks such as frailty and falls are particularly difficult to quantify, although a frailty score based on administrative data (the Hospital Frailty Risk Score) has recently been described. 24 Finally, patient preferences are major drivers of anticoagulation decisions but are never captured in administrative databases.
Atrial fibrillation phenotype and coding considerations
The disease spectrum (permanent, persistent, paroxysmal, isolated unprovoked or provoked episodes) was rarely reported yet also impacts accuracy of AF detection. Permanent or persistent AF is associated with greater comorbidity and hence health care encounters during which arrhythmia is continuously present. By contrast, isolated or paroxysmal AF may be under-represented by health care encounters. Treatment including rate versus rhythm control and anticoagulation also varies based on symptoms, AF duration, risk of recurrence and thromboembolism. 25 The accuracy of administrative data to identify AF requiring anticoagulation is thus further lessened by limited phenotypic characterization. 4 The AF phenotype may also impact the "gold standard" for diagnosing AF, whereby paroxysmal AF is missed by ECG alone but detected by chart review. In the only study examining this issue, ECG review did not improve sensitivity of AF detection over diagnosis codes alone. 4 Most developed health systems collect reasons for hospital and ambulatory encounters for administration, service planning, quality improvement and reimbursement. A single primary or most responsible diagnosis is typically assigned, while conditions complicating or prolonging stay are coded in multiple secondary positions, sometimes further categorized as pre-existing or de novo disease. Differences in coding accuracy, treatment and prognosis are reported between primary and secondary positions for conditions such as heart failure. 26,27 To our knowledge, such differences have not been explored in patients with AF, and no study identified by our search compared coding positions.
Strengths and limitations
Several strengths and limitations merit consideration. Our analysis is contemporary, included varied health systems, ICD-8 to ICD-10 codes, and both ambulatory and hospital populations. However, most studies originated from North America or Scandinavia, and examined ICD codes in administrative data sources. This potentially limits generalizability to other health care systems. There was significant heterogeneity in terms of population, prevalence of AF and reported accuracy parameters. Most studies assessed accuracy in restricted cohorts as opposed to the broader population.
Directions for future research
Health service researchers and administrators may interpret administrative data using either our pooled estimates or locally relevant studies from among those identified. Jurisdictions would ideally conduct nationally representative validation studies to provide estimates specific to their populations and data sources. These should examine existing codes and test new case definition algorithms in all data sources with differences in coding practices and diagnostic accuracy (eg, hospitalization, emergency department, ambulatory primary and secondary care), and in scenarios with varying disease prevalence. Though challenging and costly, random sampling of representative populations is essential to define sensitivity, enhance generalizability and reduce bias when studying inequality. To understand the true disease burden, algorithms should combine primary and secondary care data sources.
More complex algorithms utilizing advanced analytics such as natural language processing and machine learning to mine free-text medical records merit investigation. Potential avenues include integrating corroboratory data such as medications and procedures, and temporal and spatial coding patterns. Future work should investigate the optimal gold standard including rhythm monitoring, electronic data sources and chart review. The reasons for false positives and negatives need to be explored in detail, as does the impact of AF phenotype and coding position.
Finally, the accuracy of embolic and bleeding risk factor case definitions requires further validation in order to adjudicate appropriateness of anticoagulation management choices.
Conclusion
The overall accuracy of AF identification was reasonable for system planning and surveillance of prevalence, quality and outcomes. However, there is a marked disconnect between the volume of publications in these domains, and those examining the underpinning data. Sensitivity and PPV were the least accurate parameters with greatest uncertainty in terms of evidence and interpretation. This potentially underestimates the burden of disease and may bias estimates of outcomes and treatment quality. The optimal AF case definition should consider the purpose of the study and the data sources available. Health service administrators, researchers and clinicians should be mindful of these factors, and work together to refine our use of electronic data.
Vehicle tracking system using working sensors of a partially functional android mobile phone
Vehicle tracking has found applications in a variety of domains. Most currently available vehicle tracking systems are based upon expensive tracking units that employ only GPS signals to determine position and carry very high monthly subscription charges from their service providers. To reduce the cost of such applications, this paper proposes the use of a partially damaged or low-cost android device with GSM capability and other useful working sensors to periodically report the location, and to display and monitor the reported locations on a remote digital map. This device can be integrated with a vehicle and made to boot up and start tracking automatically as soon as the vehicle's ignition is turned on, and to stop tracking and shut down automatically after the vehicle's ignition is turned off. Such devices may find application in fleet management, public transport systems, etc., as discussed in this paper.
Introduction
In today's world of technology, mobile devices are no longer used only for voice communication or short messaging service; the evolution of various technologies such as smaller integrated circuits, compact and fast processors, and multimedia input-output capability has brought about a revolution in the field of mobile technology [1]. The cell phones of today can deliver most of the services offered by traditional computers, and their integration with various other sensors like the GPS antenna, digital compass and gyroscope has further enhanced their utility [2][3]. These devices are often termed smart phones. Different operating systems have been designed to operate on these devices in order to optimize their battery life, responsiveness, etc.
Android is a new-generation smart mobile phone platform launched by Google [5][6]. Android supports GPS, video camera, compass and 3D accelerometer, and provides rich APIs for map and location functions, which is probably of interest to a vast number of developers nowadays. Users can flexibly access, control and process the free Google map and implement location-based mobile services in their systems at low cost. But owing to their sophisticated hardware, these devices are even more vulnerable to crashes. A simple drop may leave the device bricked and it may then seem useless to a normal user; however, it has been observed that in most cases only some of the hardware, such as the screen, outer surface, speaker or camera, is damaged, leading to malfunctioning of the overall system. These partially functional android devices can be employed as vehicle tracking devices after examining whether the required hardware, such as the GPS antenna, network receptor and gyroscope, is properly functional or not [4]. The paper introduces various sensor combinations and generic algorithms for location reporting that come under the study of Telematics [7]. This paper contributes to the field of vehicle tracking systems by proposing a novel approach that leverages the working sensors of a partially functional Android mobile phone to track the location and driving behavior of vehicles. The paper presents a system that uses the GPS and accelerometer sensors of a mobile phone to track the vehicle's location and speed and analyzes the data to detect aggressive driving behaviors. The system is designed to be low-cost and accessible, as it uses existing hardware and software that are widely available.
One of the main contributions of the paper is that it shows the feasibility of using a mobile phone's built-in sensors to track vehicles accurately. This is significant because it opens up the possibility of implementing vehicle tracking systems in a wide range of applications where specialized hardware and software may not be available or practical. For example, the proposed system could be used in developing countries where the cost of specialized tracking systems may be prohibitive.
Overall, the paper makes an important contribution to the field of vehicle tracking systems by proposing a novel approach that is low-cost, accessible, and effective. The proposed system has the potential to be implemented in a wide range of applications, from fleet management to personal vehicle tracking, and could help improve safety, reduce emissions, and save costs.
The use of working sensors of a partially functional Android mobile phone for vehicle tracking systems is a novel approach that has the potential to revolutionize the way vehicle tracking systems are implemented. Additionally, the use of a camera sensor allows the detection of road signs and other objects, which could be used to further enhance the system's capabilities.
The studies in Section 2 show that there are a few areas where the existing work could be improved or extended to make it even more innovative. One area that could be explored further is the integration of additional sensors to improve the accuracy of the tracking system. For example, the use of a magnetometer sensor could help improve the accuracy of the system in areas where the GPS signal is weak or unreliable. Another area where innovation could be added is in the development of algorithms to analyze the data generated by the sensors. The use of machine learning techniques has already been explored in some of the existing work, but there is potential for the development of more sophisticated algorithms that could improve the accuracy of the system even further. For example, the use of deep learning algorithms could enable the system to learn from previous data and make more accurate predictions about the vehicle's behavior. The existing work falls short in demonstrating the potential for using machine learning techniques to analyze the data generated by the sensors and detect aggressive driving behaviors. By detecting aggressive driving behaviors such as sudden acceleration and harsh braking, the system could help reduce fuel consumption, improve safety, and reduce emissions.
There is also scope for improving how the data is presented to users. This could involve the use of visualizations to help users better understand the data generated by the sensors, or the development of a mobile app that allows users to access the data and track their vehicles in real time. There are many opportunities for innovation in the development of vehicle tracking systems using the working sensors of a partially functional Android mobile phone. By exploring these opportunities, it may be possible to develop a system that is even more accurate, efficient, and user-friendly than the existing approaches.
Literature Review
Vehicle tracking systems have become increasingly important in recent years for managing and monitoring vehicle fleets. Traditionally, these systems have relied on specialized hardware and software to track the location of vehicles. However, with the advancement of technology, it is now possible to leverage the sensors of a partially functional Android mobile phone to track vehicles. In this literature review, we will explore the existing research on the use of working sensors of a partially functional Android mobile phone for vehicle tracking systems.
The study by Brahim, S.B et al., 2022,[8] focused on driver behavior profiling, which is important for insurance industries and fleet management. The use of mobile applications to classify driver behavior is in the spotlight of autonomous driving, but using mobile sensors may raise security, privacy, and trust issues. To address these challenges, the authors proposed using the Carla Simulator available on smartphones to collect data from sensors such as accelerometer, gyroscope, and GPS, which will help to classify driver behavior based on speed, acceleration, direction, and 3-axis rotation angles. The authors also explored different machine learning algorithms for time series classification to evaluate the one that results in the highest performance.
Lindow and Kashevnik, 2019 [9], investigated the use of smartphone sensors and machine learning to detect abnormal driving behavior. The authors conducted a literature review to explore current studies in this area and found that different machine learning approaches and sensor data were used. Based on their findings, the authors proposed a driver decision support system that uses neural networks for classification and smartphone-based sensor data. This approach allowed the system to be accessible to a wider range of people, regardless of their car type.
The paper by Jahan et al. 2019 [10], proposed an easy system for tracking real-time bus location using the GPS and SMS features of mobile devices. The system consisted of a server device and a client device, with the server device installed on the bus to provide its exact location to the server or the user in case of an SMS query. The client device can find the bus location either through SMS or a mobile application, and experiments showed that the proposed system outperformed other similar vehicle tracking systems.
Another study by Júnior J. F. et al. 2017 [11] focused on driver behavior profiling and its impact on traffic safety, fuel consumption and gas emissions. It also explored the automated collection of driving data and the application of computer models to generate a driver aggressiveness profile. The paper investigated the usage of different Android smartphone sensors and classification algorithms to achieve high-performance classification. The results showed that specific combinations of sensors and intelligent methods allow classification performance improvement.
Chaudhary et al., 2017, [12] explained that a vehicle tracking system is an Android-based mobile application that uses GPS to track nearby vehicles and help people find them quickly, especially during emergencies. The system connects drivers and passengers, reducing travel time and energy consumption. The development of mobile applications has been made possible by the mobile trend and 3G network. The authors proposed a system that used Java programming language, Android OS, PHP web server, and GPS location provider to provide a smooth and hassle-free user experience.
The article by Saha S et al. 2015 [13], discussed the battery life toll of GPS tracking on mobile devices and proposes a low-power and low-cost location tracking system that utilizes the accelerometer, magnetometer, and gyroscope sensors in a smartphone to track continuous locations of a mobile device with good accuracy. The system was tested on both indoor and outdoor locations and has generated an accuracy level of as low as 2 meters distance. The system offered huge savings in terms of battery power consumption, up to 20% for a run of 3 hours, and can be a good alternative to the costly GPS system for location tracking.
If the sensors can be extracted from the partially damaged mobile phone, they provide an alternative to the expensive sensors. By using the built-in sensors of a partially damaged mobile phone, vehicle tracking systems can be implemented at a lower cost, making it accessible to a wider range of users. Additionally, machine learning techniques can be used to analyze the data and improve the accuracy of the tracking system. Overall, the existing research has shown that leveraging the sensors of a partially functional Android mobile phone for vehicle tracking systems is a promising approach.
Cheapest Vehicle Tracking System Currently in use
There are various vehicle tracking systems available; some of those currently in use are shown in figure 1.
Passive tracking devices
Devices such as the Tracking Key, Tracking Key Pro, GPS 3100, etc. are the smallest GPS devices available in the market. Such a device is secretly placed in the car, where it sits idle until the car is started and placed in motion. Motion sensors then activate the covert GPS tracker data recorder, which stores data on a flash drive that can later be plugged into any Windows-based computer and downloaded. These trackers are passive in nature and make no provision for indoor tracking. The ever-increasing demand for cost-efficient vehicle tracking systems dealing with almost all aspects of security and privacy is the major motivation behind our work. This paper proposes the use of a partially functional or low-cost Android device as a vehicle tracking unit. It is easy to implement self-location, to draw the driving trace, to perform queries and to flexibly control the real-time map on Android [14]. The actual system also achieves high running performance. The proposed system combines the features of active and passive trackers with the database API in Android, records the path even when GSM signals are not available and syncs it when connected, boots up and turns off automatically in accordance with the vehicle's ignition, and can even use SMS functionality in case of an emergency. Moreover, there is no third-party interference, relieving the client from high subscription charges and ensuring privacy of data as well. Features of the android device such as the camera, microphone, etc. can be used to communicate or interact with the current driver. Sensors such as the gyroscope can be used to detect sudden collisions or sudden changes in the inertial activity of the vehicle in case of accidents when the driver is unable to communicate. More extensibility and easy automatic client application updating are possible with Android [15].
Features of current tracking systems:
Alerts are sent to you, for example when your vehicle moves from a stationary position or when your car is moving faster than a preset speed. Basically, you will know where your vehicle is, where it is likely going, how fast it is going and more, meaning you will have full knowledge of and control over your vehicles when you are not in them. Such systems are cost efficient, but the proposed system is less expensive still. They make no provision for indoor tracking and no provision for path tracing; they only display the current location.
Proposed Work
In this section the sensors and common positioning methods used in this research work are discussed, followed by the proposed method. The sensors used for positioning in the Android runtime environment are given below:
Useful Sensor Combinations for Positioning
Normally an android device has the following sensors, which can be used alone or in combination with one another to determine the location of the device: GPS antenna (measures the most accurate position, to within about 3 meters), Wi-Fi adapter (for Wi-Fi positioning), GSM signal receptor (for cell-ID positioning), gyroscope (for sensing orientation), magnetometer (used as a digital compass), accelerometer (motion sensor), USB OTG (a specification to add USB devices) and barometer (for altitude calculation) [16][17][18][19][20]. Figure 2 shows a screenshot of an android device running the application "phone tester", demonstrating the use of the various sensors in the programming interface.
GPS Positioning:
The Global Positioning System (GPS) is a precise satellite-based navigation system providing three-dimensional positioning, velocity and time information on a twenty-four-hour basis [20][21][22][23][24]. The tracking unit receives the GPS signal, calculates how far each satellite is and then determines its own position based upon these distances. Nowadays another satellite system, GLONASS, is also used for the same purpose. Most new devices use signals from both systems for higher accuracy. Some new devices also use a barometer to reduce the time to first fix, as altitude is provided quickly, leading to faster positioning. One system that uses a combination of different satellite systems and a barometer has been discussed in [27].
Wi-Fi Positioning:
Based upon the MAC addresses of various hotspots, an open source online database provides MAC-to-position mapping. On the basis of the signals available and their strengths, the approximate location of a device is determined. Accuracy is within a range of about 100 meters [25].
Cell-ID Positioning:
Similar to Wi-Fi positioning, but the database in this type of positioning is maintained using GSM BTS identifiers. Accuracy is within a range of about 1 to 1.5 kilometers [19-20].
Gyroscope with Accelerometer:
By determining the orientation and the speed of motion, the continuous change in location can be monitored [1]. Such a system calculates its current position very quickly and is hence used in missile guidance. However, because the inertial sensors in a mobile device are weak, they may accumulate a large error when used for a long duration. The use of these inertial sensors integrated with GPS can give an all-time reliable positioning technology [24][25].
Laser Mouse Optic Sensor with magnetometer
A combination of an optical mouse microprocessor and an electronic compass (magnetometer) may be used to measure speed and direction. The optical mouse is a very low-cost sensor and has the advantage that the measured displacement is independent of the kinematics of the vehicle, because the optical sensor uses external natural microscopic ground landmarks to obtain the effective relative displacement. This algorithm is used for implementing odometry in robotics [20]. This combination can also be used to determine the current position when GPS signals are weak. For this system, the focal length of a camera lens has to be adjusted, and the sensor can then be attached to the phone using the USB OTG port [26]. The approach is illustrated in figure 3 below. Both generic techniques require a high-precision reference point to start with, which is provided via GPS positioning.
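As a rough illustration of how such displacement-plus-heading measurements can be fused into a position estimate (a minimal dead-reckoning sketch; the sampling values, units and variable names are assumptions, not taken from the paper), each optical-flow displacement step is rotated by the compass heading and accumulated from the last GPS fix:

```python
import math

def dead_reckon(start_xy, steps):
    """Accumulate (distance_m, heading_deg) samples from a known GPS fix.

    start_xy -- (x, y) position of the last reliable GPS fix, in meters
    steps    -- iterable of (distance_m, heading_deg) from mouse sensor + compass
    """
    x, y = start_xy
    for distance, heading_deg in steps:
        heading = math.radians(heading_deg)      # compass heading, 0 deg = north
        x += distance * math.sin(heading)        # east component
        y += distance * math.cos(heading)        # north component
    return x, y

# hypothetical samples: 0.5 m per reading, slowly turning towards east
samples = [(0.5, h) for h in range(0, 90, 5)]
print(dead_reckon((0.0, 0.0), samples))
```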
Proposed Tracking System
The vehicle tracking system using working sensors of a partially functional Android mobile phone was built around the phone's GPS and accelerometer sensors. The system was designed to be low-cost and accessible, using widely available hardware and software. The basic components and steps involved in building the system are as follows: 1) Mobile phone with working GPS and accelerometer sensors: the first step was to prepare an Android mobile phone with working GPS and accelerometer sensors; the GPS sensor is used to track the location and speed of the vehicle, while the accelerometer sensor is used to detect sudden acceleration and harsh braking. 2) Data collection and processing: the system collects data from the GPS and accelerometer sensors using a custom-built mobile application; the data is collected continuously and transmitted to a server for processing. 3) Server-side data processing and analysis: on the server, the data is processed and analyzed to detect aggressive driving behavior; the system uses machine learning algorithms to analyze the data and detect sudden acceleration, harsh braking, and other driving behaviors that may indicate aggressive driving. 4) Visualization and reporting: the analyzed data is then visualized and reported to the system's users, allowing them to track the location and driving behavior of their vehicles in real time and identify areas where improvements can be made. The complete working of the proposed system is shown in figure 4. Depending upon which available sensor combination is used in addition to GPS for optional indoor position estimation (when GPS signals are not available), the client application is designed. The application reports the location at regular intervals or when a significant displacement from the previously reported position is detected. These geo-coordinates are sent using mobile data over a GSM network. To receive and store these coordinates, either cloud storage services are used or a dedicated server is set up (recommended) that syncs the locations of the various location-reporting devices mounted on different vehicles. A web application that enables vehicle owners or authorities to view the position of a vehicle on digital maps also needs to be developed.
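One simple way to realize the detection step described above is a threshold rule on the longitudinal accelerometer stream, applied before any heavier machine-learning model. The snippet below is only an illustrative sketch; the threshold values, sampling rate and field names are assumptions rather than parameters taken from the paper.

```python
def detect_harsh_events(samples, accel_thresh=3.5, brake_thresh=-3.5):
    """Flag sudden acceleration / harsh braking in longitudinal accelerometer data.

    samples -- list of (timestamp_s, longitudinal_accel_mps2) tuples
    Thresholds (in m/s^2) are illustrative and would need tuning per vehicle.
    """
    events = []
    for t, a in samples:
        if a >= accel_thresh:
            events.append((t, "sudden_acceleration", a))
        elif a <= brake_thresh:
            events.append((t, "harsh_braking", a))
    return events

# hypothetical 1 Hz samples: cruising, then a hard brake at t = 4 s
trace = [(0, 0.2), (1, 0.4), (2, 0.1), (3, -0.5), (4, -4.2), (5, -1.0)]
print(detect_harsh_events(trace))
```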
The Eclipse IDE with the ADT plug-in is normally used for Android application development, which requires a sound knowledge of the Java programming language in addition to knowledge of the various APIs provided for using the resources of the system. On the server side, the coordinate data along with timestamps is stored in a database, and the web application is designed using technologies like JSP, PHP, etc. To display the path trace of the vehicle a digital map API is required; such APIs usually have very expensive private licenses. Thus a popular, freely accessible map API, the Google Maps API [22][23], which offers up-to-date maps and an easy-to-use programming interface, is recommended. The overall scenario is depicted in figure 4.
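As a minimal illustration of the recommended dedicated server (a sketch only; the endpoint names, field names and the use of Flask with SQLite are assumptions made for the example, not choices made in the paper), coordinates with timestamps can be received over HTTP and stored for the web application to query:

```python
import sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)
DB = "tracking.db"

def init_db():
    with sqlite3.connect(DB) as con:
        con.execute("""CREATE TABLE IF NOT EXISTS positions (
                           device_id TEXT, ts TEXT, lat REAL, lon REAL)""")

@app.route("/report", methods=["POST"])
def report():
    """Store one geo-coordinate reported by a vehicle-mounted device."""
    d = request.get_json()
    with sqlite3.connect(DB) as con:
        con.execute("INSERT INTO positions VALUES (?, ?, ?, ?)",
                    (d["device_id"], d["ts"], d["lat"], d["lon"]))
    return jsonify(status="ok")

@app.route("/trace/<device_id>")
def trace(device_id):
    """Return the stored path of one vehicle for display on a digital map."""
    with sqlite3.connect(DB) as con:
        rows = con.execute("SELECT ts, lat, lon FROM positions "
                           "WHERE device_id = ? ORDER BY ts", (device_id,)).fetchall()
    return jsonify(points=rows)

if __name__ == "__main__":
    init_db()
    app.run()
```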
Integrating with Vehicle
Integration here refers to the automatic boot-up and shutdown of the initially switched-off Android device in accordance with the vehicle's ignition being turned on or off. This function requires kernel-level modifications to the Android software running on the device. Thus we either need to root the device or compile a custom ROM for it, in which we remove the malfunctioning hardware drivers so that Android no longer recognizes them, and make it boot when the charger is connected instead of displaying the charging-battery screen. The application "NoMoarPowah" [27] needs root access on the phone and enables the user to automatically boot up and shut down some Samsung devices at defined times. It also replaces the default charging screen of a switched-off android device.
Some mobile devices from Sony (Xperia mini) and Spice have this automatic boot feature in their stock ROM, but for a normal mobile this feature is considered obsolete and has hence been removed in later firmware updates [28]. Almost all vehicles have a battery and a 12-volt DC source inside them, normally termed the cigarette lighter slot. This slot is powered when the engine is ready for ignition and the supply stops when the ignition is turned off. A good quality USB charger (1000 mA, as tracking applications drain the battery heavily) is attached to the power slot to power the android device, so it can recognize the engine's ignition being turned on or off. Next, a service that can automatically start the tracking application and turn on mobile data is implemented on the device. This service also monitors the charger connection status for at least 60 seconds after power becomes unavailable, in order to determine whether the engine has actually been turned off.
Map Trace Improvement Algorithm
While tracking the vehicle live from a remote location, the location reporting frequency needs to be high. But when no one is observing the vehicle in real time, the location is reported only after a certain time interval. This may lead to an inaccurate path trace on digital maps, as depicted in figures 5 and 6.
Instead of reporting the location at fixed time intervals, a variable reporting interval can be used. This is based upon the detection of turns and curves in the track using sensors such as the gyroscope, or the magnetometer combined with the accelerometer. If a device equipped with a gyroscope is kept horizontal (parallel to the ground), the measured angular velocity about the Z axis can help determine changes in the track. In fact, the following relation connects the reporting frequency with the rotation of the vehicle:

f ∝ |ω|

Here f is the frequency of taking GPS coordinate references and ω is the angular velocity recorded by the vehicle in the direction perpendicular to the plane of motion.
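A minimal sketch of this adaptive rule is given below (illustrative only; the proportionality constant, the bounds on the interval and the variable names are assumptions): the reporting interval shrinks when the gyroscope registers a large angular velocity about the vertical axis and grows again on straight stretches.

```python
def reporting_interval(omega_z, k=2.0, min_interval=1.0, max_interval=30.0):
    """Seconds to wait before the next GPS report, given omega_z in rad/s.

    Implements f proportional to |omega_z| (i.e. interval ~ 1/|omega_z|),
    clamped to sensible bounds.
    """
    omega = abs(omega_z)
    if omega < 1e-3:                      # essentially driving straight
        return max_interval
    return max(min_interval, min(max_interval, k / omega))

# sharper turn (larger omega_z) -> more frequent reporting
for w in (0.0, 0.05, 0.2, 1.0):
    print(w, reporting_interval(w))
```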
Experimental Results
The proposed tracking system described in the paper is designed to track the location of vehicles using a combination of sensors, including GPS and potentially other sensors for indoor position estimation when GPS signals are not available. The system works by collecting and reporting the vehicle's location at regular intervals or when there is a significant displacement from the previously reported location. The location data is transmitted using mobile data in a GSM network to a server for storage and processing. The system can use cloud storage services or a dedicated server to store and sync the location data from different reporting devices mounted on various vehicles.
To display the vehicle's position on digital maps, a web application is developed using technologies like JSP, PHP, etc. The web application uses a digital map API to display the vehicle's position and path trace. The paper recommends using the Google Maps API, which is a popular, widely used and freely accessible map API that provides updated maps and an easy-to-use programming interface.
The proposed method can record the track with better accuracy, as shown in figure 6. A good path trace may assist in lane detection at geographical locations where automatic lane detection is not possible with the help of satellite images. It may also help in marking temporary paths in deserts, mountains, etc. for special-purpose tours and travels.
Conclusion
The paper describes an approach for live vehicle tracking that makes use of a partially functional, low-cost Android device with various useful working sensors and built-in GPS and web capabilities. The generic architecture discussed in the paper presents an approach towards indoor positioning. The map trace optimization method suggested above records the track with better accuracy and can be employed in automatic lane detection. Therefore, the proposed system incorporates most of the functionalities offered by existing tracking devices along with its own, reports the location even more accurately, and does so at an affordable and comparatively low factory price. Moreover, Android being a productive and growing technology, the work has even better scope for future enhancements.
The following are the future scope of the proposed work. An efficient, low-cost dedicated tracking device can be built along the lines of the low-cost AKASH TABLET [28] by adding the required sensors and removing the unnecessary ones from the existing device. Taxi management system: people can request a taxi pick-up and a centralized system can help avail the nearest possible taxi service. Intelligent public transportation system: centralized public transport vehicle routing on the basis of the number of passengers waiting at each stop and their destined locations. Entertainment purposes: the system alleviates the need for an additional entertainment system such as a music player, radio, etc. if the application framework is utilized accordingly. The "text to speech" API of Android may be used for sending messages and instructions to the vehicle driver over the network in textual form.
Apart from the above future scope, further research is required to address challenges such as battery life, signal reception, and data security.
|
2023-05-18T15:03:13.280Z
|
2023-05-30T00:00:00.000
|
{
"year": 2023,
"sha1": "8f6aa1d7f0e693cc729f52ac167f6231030b3a10",
"oa_license": "CCBY",
"oa_url": "https://wjaets.com/sites/default/files/WJAETS-2023-0134.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "0a76c4f35b84d6b03ea1cf0a4b3bc8ec3e7f4d1b",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
}
|
5545469
|
pes2o/s2orc
|
v3-fos-license
|
The Use of the S-Quattro Dynamic External Fixator for the Treatment of Intra-Articular Phalangeal Fractures: A Review of the Literature
Intra-articular phalangeal fractures are a common injury. If left untreated, these injuries can lead to poor functional outcomes with severe debilitating consequences, especially in younger patients. The S-Quattro external fixator device (Surgicraft®, UK) can be used to treat such injuries. Its use has been widely documented and has shown many advantages in comparison to other conventional treatments. Advantages include reduced operative time, rigid fixation and early range of motion. We present a review of the current literature and the use of the S-Quattro serpentine system in the management of intra-articular phalangeal fractures.
INTRODUCTION
Phalangeal fractures of the hand are common injuries, especially in contact sports. In younger patients, with higher functional demands, the effects of suboptimal treatment can be devastating.
The average accident and emergency department will see several hundred each year, with causal factors including falls and road traffic accidents. More commonly in sport, especially football and cricket, the injuries may be severe [1]. It is estimated from the current literature that approximately 18% of all phalangeal fractures extend into a joint; in most cases the proximal interphalangeal joint is involved, and 8% are associated with comminution. Such injuries are associated with considerable morbidity, the main problems being stiffness and deformity [1].
The management of displaced intra-articular phalangeal fractures of the hand is difficult, challenging and controversial. These joints are uniquely susceptible to injury due to their limited, singular plane of motion. Most phalangeal and metacarpal fractures can be managed conservatively by temporary splintage followed by rehabilitation [2]. Immobilization in extension has shown varying results, with finger stiffness being the main problem.
Displaced comminuted intra-articular fractures pose many problems to the surgeon. Displaced comminuted phalangeal fractures or intra-articular fractures, especially where more than 40% of the joint surface is involved, present the greatest difficulties. Common complications associated with intra-articular phalangeal fractures include angulation, flexion deformity, malrotation, malunion, joint stiffness and joint subluxation [3]. The relative indications for operative intervention include displaced intra-articular fractures, failed conservative treatment in unstable fractures, multiple fractures and open fractures with associated soft tissue injury [2].
To achieve a successful outcome in the majority of cases, it is prudent to initiate early mobilization. This is appropriate for minor avulsions and other stable fractures; however, in complicated injuries, a more aggressive approach is required. Various techniques have been advocated, but treatment of these difficult fractures is often complex and inconvenient for the patient [1]. Fahmy (1990) and Fahmy & Harvey (1992) described treatment of such fractures using a dynamic external fixator [2]. Treatment relies on the concept that the fixator applies traction but also allows some movement of the involved, injured joint. This concept enables early mobilization without jeopardizing the accuracy of the fracture reduction [4].
The S-Quattro (Surgicraft Limited®, Redditch, UK) is an external fixator designed to treat displaced comminuted intra-articular phalangeal fractures which works on the principle of ligamentotaxis to reduce and hold the fracture. The system is elastic and this allows some movement of the affected joint [2,4]. An arthrodiastasis is maintained throughout a limited range of motion of injured joint effecting moulding of the articular cartilage and restoration of joint congruency [2].
PATTERN OF INJURY
A spectrum of joint injury pattern occurs depending on the direction, rate and force of loading. Pilon-type injuries can occur if the loading force is severe and axially directed. Such fractures are characterized by comminution that involves the whole phalanx base and are associated with central depression or splaying of the concave articular surface (in the coronal plane, sagittal plane or both) [3].
When force is directed from the dorsal or palmar direction, a fracture of the corresponding lip of the articular surface is likely to ensue. This may be associated with a dislocation or a subluxation. When the fracture involves a significant proportion of the articular surface, subluxation occurs [3].
Other fracture patterns include unicondylar and bicondylar fractures. The literature suggests that poor results are achieved with non-operative management of these injuries, which usually comprises immobilization with splintage. Complications associated with this method of treatment include pain, stiffness and reduced range of motion. For the best functional results, it is important for any subluxation to be reduced efficiently and early mobilization instigated. Although these fractures can usually be reduced by the traction principle, methods of maintaining reduction until the fracture has united are difficult and often compromise joint movement [3].
Stiffness is a particular problem with the proximal interphalangeal joint (PIPJ), whose normal range of movement exceeds that of the distal interphalangeal joint (DIPJ) and which contributes considerably to grip strength. Fractures that involve the PIPJ can often lead to dorsal subluxation of the base of the middle phalanx, and although stable in flexion, it is undesirable to immobilize the joint, in order to prevent future stiffness.
CONVENTIONAL TREATMENT METHODS
Closed reduction with zimmer splintage or volar slab may be adequate for simple fractures, but may not always prevent recurrent displacement and also risks future stiffness.
In many cases, open reduction and internal fixation (ORIF) may become necessary after failure of splintage to maintain reduction. In such fractures, single large fragments may be held with Kirschner wire (K-wire) fixation introduced (either percutaneously or open) or by AO screw fixation. As with all surgical procedures, there are associated complications. These include risk of tendon adherence, ligament and capsule fibrosis and avascular necrosis of the fragment. Trans-articular K-wire fixation precludes early mobilization and further damages the healthy articular surface [3].
Much of the current literature for more comminuted or severely compound injuries suggests poor results. Intraarticular unstable fractures with comminution and a depressed fragment very often have no satisfactory treatment. This compromises patient care and renders the patient susceptible to future surgery. The patient inevitably has to accept the deformity and later be considered for corrective osteotomy, arthrodesis or even joint replacement [3].
THE S-QUATTRO SYSTEM
The Stockport Serpentine Spring System or S-Quattro (Surgicraft®) is a flexible mini external fixator designed to treat comminuted unstable intra-articular phalangeal fractures [1]. It consists of a unique, dual, parallel but opposing action, spring column system and was devised by Fahmy (1990). It has now become a well established successful system in the management of intra-articular phalangeal fractures of the hand [3].
There are advantages of the S-Quattro system over conventional methods of fixation. Amongst those documented are a relatively lightweight system that is effective in the management of fracture dislocations; a reduced operative time; distraction of joints in different degrees of flexion, extension and radial-ulnar deviation; allowance of movement in intra-articular fractures; and its use in some compound fractures and in cases of mal-union.
The S-Quattro system works on the principle of ligamento-taxis, reduction being achieved and maintained by tension in the joint capsule and ligamentous structures produced by dynamic distraction. It is a flexible system and therefore allows early active mobilization and guards against tendon adherence [1].
PRINCIPLES OF FIXATION WITH THE S-QUATTRO SYSTEM
There are two main principles in managing intra-articular fractures: (1) maintaining congruency of the joint by reduction and stabilization of fragments, and (2) promoting early joint mobilisation [2,3].
By maintaining joint congruity, the device aids in the prevention of joint stiffness and allows pain-free movement by permitting free gliding of adjacent tendons. This inevitably causes less pain and reduces long-term arthritis. Vidal et al. described the principle of ligamentotaxis and demonstrated how simple traction can be used to reduce displaced and comminuted fractures by tightening various ligamentous and capsular structures. This principle is now in everyday use in the management of various types of fractures [3,4].
Early joint mobilisation reduces swelling and facilitates joint nutrition, surface remodelling, contouring and healing. It guards against tendon adherence and subsequent joint stiffness. It also prevents the fibrous thickening of collateral ligaments and contractures of the palmar plate with subsequent restriction of extension. Traction by application of the external fixator prevents shortening of the ligaments, which would otherwise contribute to joint stiffness.
Conventional methods of treating intraarticular phalangeal fractures have proven difficult in obtaining and maintaining anatomical alignment and stable fixation to allow early motion. In contrast to existing techniques, the S-Quattro external fixation device works on the principle of ligamentotaxis, which overcomes the potential difficulties described. It has the advantage of restoration of the articular surface and early joint motion. Even though, the S-Quattro system results in limited movement at the injured joint, it allows free movement of the other digital joints. This reduces swelling, prevents tendon adherence and allows quick recovery after the removal of the external fixator [3]. In most units, the common practice is to remove the device between 4 to 6 weeks post application.
SURGICAL TECHNIQUE
The surgeon may perform the operation under general or local anaesthesia. The S-Quattro system consists of two modified K-wires and two serpentine springs [1]. Manual traction is applied to the fingertip and the injured joint is gently distracted. This allows correction of the deformity (angular or rotational) [5]. The unthreaded notched wires are introduced percutaneously using a power drill, through the dorsal or mid-lateral approach, on either side of the injured joint [1].
The device manufacturer (Surgicraft, UK) suggests that if the dorsal approach is chosen, the distal pins should be inserted in the bare area just distal to the insertion of the central slip to avoid the extensor tendon. The proximal pin is inserted through a small incision splitting the extensor tendon longitudinally to avoid transfixing it with the pin. Both pins should be inserted in the sagittal plane. Furthermore, in the mid-lateral approach, the pins should not be introduced through the phalangeal necks. The mid-lateral approach is reported as technically more difficult and does not allow the distal interphalangeal joint to be placed in 30° flexion, which is stated as the optimum position [1]. Repeated attempts at introducing the pin at the same level should be avoided due to the risk of fracturing the phalanx. It is advised that the pins are inserted through the most convex part of the shaft [5]. Both cortices must be breached and in the same cortical plane.
The springs are then inserted between the two pins (in the first or second grooves) near the tapered ends [5]. The stiffer spring is applied first (to act as a fulcrum), and the second spring applied to provide distraction [1]. By bringing the free ends of the pins closer, a greater degree of distraction is achieved and by placing the second spring further away from the finger maintains the distraction [5]. This allows the system to achieve distraction or compression of the fracture as appropriate or in the neutral format [1]. Tension in both the ligaments and capsule maintains the reduction and prevents rotation. All other joints remain mobile, with the use of the hand remaining intact whilst the external fixator is applied [1].
A check X-ray should be taken to confirm a satisfactory positioning of the device and reduction of the fracture. The procedure is completed by applying gauze dressings over the pin 'entry' sites and adhesive dressings should be used to secure the junctions between the pins and the springs. The pins and serpentine springs need to be trimmed and gauze dressings applied over the sharp ends [5]. The device is secured with adhesive ('plastic padding') and further protection is afforded by gauze dressing and a 5cm elasticated cling bandage (3) (Figs. 1-3).
Post operatively, the patients are encouraged to exercise the finger regularly to prevent complications.
CURRENT EVIDENCE
Since its introduction two decades ago, evidence supporting the use of the S-Quattro in the surgical management of intra-articular phalangeal fractures has become widely documented. Although a great number of clinical trials have not been conducted, those that have demonstrate promising results. Results from the original studies using the S-Quattro device have been shown to be reproducible in both large regional hand units and smaller district general hospitals. Figs. (1-3) show three sets of radiographs for phalangeal fractures managed with the S-Quattro external fixator and radiographs at final follow-up. Table 1 summarises the main clinical studies involving the use of the S-Quattro fixator in the management of these injuries. Fahmy (1990) reported the use of the S-Quattro in 20 cases of intra-articular phalangeal fractures. In most cases the fixator was applied in the first week following fracture, the longest interval from injury to application being 21 days. Fahmy left the device in place for 2-6 weeks. A minimum of 6 months' follow-up was achieved, and a mean range of movement of 81% was recorded in the affected joints. In the majority of patients, movements were pain free after removal of the device. No pin tract infections were reported [1,6]. Furthermore, early controlled mobilization restores the congruity of the joint surface, preventing stiffness and later arthritis.
A further study by Fahmy & Harvey (1992) of 14 cases of displaced intra-articular phalangeal fractures (5 malunions, 5 comminuted condylar fractures, 4 communited compound fractures; mean presentation 31 days post injury) yielded good results. They showed a mean total deformity (angular, flexion and rotation) at the start of treatment of 70°. The mean residual deformity after an average 11.4 months follow-up was 14° [1,7].
In a study of 37 patients treated over a 7-year period using the S-Quattro system, Mullett et al. (1999) demonstrated a good outcome. Thirty fractures were intra-articular and nine extra-articular. The average follow-up was 22.5 months. The indication in all 30 intra-articular cases was a displaced fracture. The average total range of motion for the affected digit at follow-up was 232° for intra-articular and 241° for extra-articular fractures. The external fixator device was removed at 4-6 weeks [1,2]. Khan et al. conducted a retrospective study of acute intra-articular phalangeal fractures of the hand treated using the S-Quattro. One hundred patients with a variety of fractures underwent fixation over a 6-year period, with all fractures involving a single joint. All were closed injuries and the mean follow-up was 6 months. Results compared favourably with those in other published series. Interestingly, from their study, Khan et al. noted that patients regained more movement and had less pain during the second 6 months of the first year. A similar trend was also shown by Fahmy (1990), with favourable results expected if the patients are less than 40 years of age, have no associated osteoarthritis and are treated within 1 week of injury [6].
Other trials demonstrate similar results. Byrne et al. (2008) reported outcomes in 10 patients who underwent S-Quattro external fixation for complex fractures of the base of the thumb. Between 1996 and 2003, 9 men and one woman (mean age of 3 years) were treated, with the dominant hand involved in 8 patients. Three patients had Bennett fractures, 5 had Rolando fractures, one had an open multi-fragmented fracture, and one had a fracture-subluxation. Mean follow-up was 10.7 months. The mean loss of total active movement (TAM) at the carpometacarpal joint was 7.5°. After a mean of 41 months of treatment, the mean disability of arm, shoulder and hand (DASH) score was 3.4. This study therefore also demonstrates a good outcome for complex intra-articular base of thumb fractures fixed with the S-Quattro system [8].
Studies of injuries sustained during sports also demonstrate a good outcome with the use of the S-Quattro system. One series examined 11 consecutive cases in which the S-Quattro was used for a phalangeal fracture sustained during sports (patients aged 19 to 51 years). The fracture was displaced, comminuted and intra-articular in all cases. None of the injuries was compound. Ten patients had a good range of movement at the injured joint (75-110°), whilst in one case there was marked stiffness (35°), which was subsequently improved with tenolysis (55°). 73% of patients were pain free; all were satisfied with the outcome of surgery [1].
Furthermore, the largest series of patients with sports injuries treated with the S-Quattro revalidates these results. Twenty patients were treated over a three-year period. The results demonstrated an average arc of movement of the affected joint of 94° at a mean follow-up of 14 months. The mean DASH score was 5, indicating mild impairment. 100% of the patients were satisfied with the results following surgery, thus reinforcing the argument for the routine use of the S-Quattro in difficult sports injuries [9].
DISCUSSION
The S-Quattro system has gained popularity over recent years as supporting evidence grows. It owes its success to many of the attributes this external fixator possesses. The device is a light, dynamic, and versatile system, which enables distraction of the injured joint and controlled movement of uninjured joints to maintain joint congruency [8].
Use of the S-Quattro has been reported in a spectrum of hand traumas and it is a versatile device with an increasing range of applications [4]. The Stockport Hand Unit reports good results following trapeziectomy or excision arthroplasty of the proximal interphalangeal joint. Other indications range from intra-articular comminuted phalangeal fractures and fracture-dislocations to mal-united phalangeal fractures. Modifications of the S-Quattro device have also been used for treating neglected dorsal interphalangeal dislocations and chronic subluxation at the proximal interphalangeal joint [8].
It is particularly suitable where fragments are too small to fix and where comminution affects the joint surface. The device achieves and maintains excellent reduction, prevents deformity and allows early mobilization of the affected joint. The procedure is short and straightforward, complications are few, and functional results are surprisingly good considering the severity of these injuries and the inadequacy of other treatments [4].
Several other techniques of distraction have been described; all with varying results. All have demonstrated advantages and disadvantages in the treatment of phalangeal fractures. Thus far, there has been no review comparing the outcomes of these treatments. However, all treatments have problems associated with their use, which are complexity, infection and loosening. Examples include external fixators [10], pins and rubber traction systems [11], dynamic springs [12], force couple splint [13], dynamic longitudinal traction [14] and compass hinges [15]. Discussion of these treatment modalities is beyond the scope of this review.
Principal indications for using the S-Quattro are displaced and comminuted intra-articular phalangeal fractures [1]. Anatomic reduction is achieved by capsuloligamentotaxis, which provides stability for fracture healing with early active digital movement [8]. Compared to traditional treatment modalities such as splinting and Kirschner wiring, the S-Quattro appears, according to the current literature, to provide superior results in regaining and maintaining the congruity of articular joint surfaces, on the basis of many clinical trials demonstrating good clinical outcomes [8]. Much of the literature is in favour of early mobilisation to prevent future stiffness and, indeed, return of hand function in young working patients is an essential goal.
Many advocate the use of this system in injuries that may be difficult to treat successfully with conventional methods. The surgical technique described previously enables accurate reduction with minimal soft tissue dissection and is a relatively straightforward procedure with reduced operative time [2]. Open phalangeal fractures can be easily stabilised whilst preserving soft tissue, and this device overcomes the need for extensive soft tissue dissection for fracture fragment fixation, avoiding ensuing complications such as avascular necrosis [6]. Fahmy & Harvey (1992) showed good results in compliant patients in treating intra-articular subluxations that were treated within 2 weeks of the injury. They suggest that their results are transferrable to regional trauma units [2].
Many of the papers discuss technical points regarding the use of the S-Quattro system. Some suggest avoiding excessive tension on the serpentine springs when attempting reduction, so as to avoid overdistraction of the joint and resultant stiffness [2]. In those intra-articular fractures where there is a single fragment of sufficient size, it may be fixed with a K-wire or an AO screw (open or percutaneously). This method can also be used for condylar fractures and in selected cases of dorsal or volar fracture dislocation [1].
CONCLUSION
It is clear from the studies presented that the S-Quattro external fixator system is an effective and useful treatment option for the management of acute intra articular fractures of the phalanges. It may also be a treatment option for mal or non unions of such injuries following previous conservative or surgical attempts. However, further clinical evaluation is needed to consolidate its use for these fractures.
|
2014-10-01T00:00:00.000Z
|
2012-02-23T00:00:00.000
|
{
"year": 2012,
"sha1": "2a31b02a127e31c145259d3237a7cb842b94edb8",
"oa_license": "CCBYNC",
"oa_url": "https://openorthopaedicsjournal.com/VOLUME/6/PAGE/54/PDF/",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "2a31b02a127e31c145259d3237a7cb842b94edb8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
14203989
|
pes2o/s2orc
|
v3-fos-license
|
Ultrasound-guided Intraarticular Hip Injection for Osteoarthritis Pain in the Emergency Department
Ultrasound-guided intraarticular hip corticosteroid injections may be useful for emergency care providers treating patients with painful exacerbations of osteoarthritis of the hip. Corticosteroid injection is widely recommended as a first-line treatment for painful osteoarthritis of the hip. Bedside ultrasound is readily available in most emergency departments; however, using ultrasound to guide therapeutic hip injections has not yet been described in emergency practice. Herein, we present the first description of a successful emergency physician-performed ultrasound-guided hip injection of local anesthetic and corticosteroid for pain control in a patient with an acute exacerbation of osteoarthritis.
INTRODUCTION
Hip osteoarthritis (OA) is a common painful condition in the emergency department (ED). It is estimated that one in four people will develop painful hip OA in their lifetime. 1 Acute exacerbations of painful hip OA can be severe and disabling, often presenting a clinical challenge for emergency physicians (EP). Intraarticular corticosteroid hip injection is a well-established and effective treatment used extensively by rheumatologists, orthopedists, and pain physicians. 2 Landmark-based techniques of intraarticular injections are technically difficult with a relatively high failure rate and are associated with inadvertent neurovascular injury. 3 Over the past decade, ultrasound-guided intraarticular hip injection has emerged as a safe and efficacious alternative to landmark techniques. 4-6,8 Despite the evidence for the efficacy and excellent safety profile of ultrasound-guided hip corticosteroid injections as a first-line treatment for acute OA pain, this technique has never been described in emergency medicine practice.
Entering the hip joint space can be technically challenging due to its depth and proximity to the femoral neurovascular bundle. Fluoroscopy, which has been shown to be relatively safe and accurate, requires significant resource allocation and introduces the risk associated with exposure to ionizing radiation. Also, fluoroscopy does not visualize soft tissue or neurovascular structures. 2,3 As a result, the ultrasound-guided technique for hip joint injections has been widely accepted as a safe and effective alternative by interventional radiologists, pain physicians, and orthopedists. 4-12 Within emergency medicine, diagnostic ultrasound-guided hip arthrocentesis for cases of suspected septic joint has been recently described. 13-15 Herein we present the first description of EP-performed ultrasound-guided native hip injections for pain control.
Ultrasound-guided technique Preparation
An ultrasound system (Sonosite M-Turbo Bothell, Washington) is positioned contralateral to the affected hip with the screen in the line of sight of the operator. The skin overlying the hip should be cleaned in a sterile manner.
Survey Scan
A low-frequency curvilinear probe (5-2 MHz), covered with an adhesive sterile dressing, is placed in a transverse plane parallel to the inguinal ligament and used to identify the femoral artery and vein above the hyperechoic femoral head. The probe is then moved laterally to just above the hyperechoic femoral head and rotated to an oblique sagittal position so that the probe marker is aimed towards the umbilicus. The femoral head, femoral neck, anterior capsular recess, and iliofemoral ligament should be visualized.
Needle insertion and injection.
A superficial wheal of local anesthetic is placed at the point of planned needle entry. A 6 mL mixture of 5 mL of bupivacaine 0.5% and 1 mL of 40 mg/mL triamcinolone is placed in a 10 cc syringe. A 20-gauge 3.5 inch standard cutting spinal needle is guided in-plane under real-time ultrasound guidance to the anterior capsular recess (Figure). When the needle tip is clearly visualized in the joint space, 1-2 mL aliquots of the solution are slowly injected under low pressure. Successful targeting of the joint space is confirmed by spread of anechoic fluid under the iliofemoral ligament within the anterior capsular recess.
Disposition
Only a short period of observation is necessary as the volume of local anesthetic is quite small. It is, however, possible that inadvertent partial femoral nerve blockade could result and all patients should demonstrate full muscle strength before discharge.
CASE PRESENTATION
A 47-year-old male presented to the ED with an acute flare of his chronic severe left hip pain due to osteoarthritis. He was being followed as an outpatient by the orthopedic service and was scheduled for an upcoming fluoroscopy-guided therapeutic hip injection. He complained of 8/10 pain in the left hip that was significantly limiting his daily functioning. In conjunction with orthopedics consultation and after informed consent, the left hip was injected according to the previously stated technique. At follow-up 1 week later, the patient reported significant improvement in his pain and increased daily activity without evidence of infection.
DISCUSSION
We present the first description of an ultrasound-guided injection for pain relief from osteoarthritis of the hip by an EP. In the ED there are few options for treating a patient's pain from osteoarthritis and other forms of degenerative hip disease. The American College of Rheumatology has no recommendations for control of pain in the acute care setting. They do, however, recommend oral pain medications and intraarticular cortisone injections as initial treatment in outpatient practice. 16 Not only is osteoarthritis of the hip common, patients with pain from hip osteoarthritis are more likely to visit EDs than patients without such pain. 1,17 The economic costs of disability from osteoarthritis are staggering - patients with pain have higher healthcare costs and higher economic costs from missed work and disability compared to patients without pain from osteoarthritis. 17 With such a common, painful and costly problem presenting frequently to EDs, an approach to pain control with only oral pain medications may be limited. Oral medications such as NSAIDs and opiates are often inadequate and not without complications. 18,19 Hip injections with corticosteroids are not without potential complications. In a review of the literature by Kruse in Current Reviews in Musculoskeletal Medicine in 2008, there were 3 main clinically significant complications of intraarticular corticosteroid injection of the hip: septic arthritis, osteonecrosis, and infection of total hip replacement following pre-operative joint injection. 12 Of the 4 randomized controlled trials of imaging-guided hip injections with corticosteroids involving 265 patients, no clinically significant adverse events were reported. 12 There were some minor side effects noted, such as flushing, flare of pain in the days following injection, and hyperglycemia.
The incidence of septic arthritis after hip injection has not been well studied, and only 2 case reports were found in the literature. One case details septic arthritis after a single injection, while in the second case it occurred after repeated injections of sodium hyaluronate and a single injection of triamcinolone. 20,21 There is 1 case report of osteonecrosis after 1 injection of methylprednisolone, although it is unclear if this was due to disease progression or the steroid injection. 22 The injection of corticosteroids for pain control of hip osteoarthritis in the ED has not been studied or commented on in the emergency medicine literature to our knowledge. The idea is not without potential concerns. Patients may present to EDs on a regular basis requesting injections for pain control, and without the follow-up and consultation of an orthopedist, this would be outside the standard of care for managing hip osteoarthritis. A patient presenting to the ED with hip pain has many more diagnostic and therapeutic considerations than a patient presenting with chronic hip pain to a subspecialty outpatient clinic. Ultrasound-guided hip injections are commonly performed in the outpatient clinic and could easily be transferred to the ED for pain control in the properly selected patient. 5-8 Emergency providers familiar with point-of-care ultrasound can become proficient in this procedure, which has proven to be safe in office-based settings. 7,8 We present a case of an alternative technique for pain reduction in patients with clinical signs and symptoms of chronic OA of the hip. In conjunction with consultative services, this novel technique may be a potentially useful method to reduce pain from OA in the ED setting.
Table 1. Ultrasound-guided intraarticular hip injection in the emergency department.
Emergency care indications: Pain from osteoarthritis and other degenerative disease of the hip.
Ultrasound-guided technique: A low-frequency curvilinear transducer is used to visualize the hip joint and target the anterior synovial recess for injection.
Positioning: Supine with the hip slightly abducted and internally rotated.
Needle approach: In-plane approach with a 20-22 gauge, 3.5 inch standard cutting spinal needle.
Important anatomy: Inguinal crease, femoral neurovascular bundle, and the anterior synovial recess.
Potential complications: Significant complications are rare. Flare of chronic pain and an increased risk of post-operative infection are possible if total hip arthroplasty is performed within 3-4 months of injection. 12 Local irritation and puncture of the femoral artery or vein are possible but have not been reported with ultrasound guidance.
|
2018-02-10T13:36:11.485Z
|
2013-09-01T00:00:00.000
|
{
"year": 2013,
"sha1": "6bbf12f195583ded2751e8cd3fb6f5287b3b978e",
"oa_license": "CCBYNC",
"oa_url": "https://cloudfront.escholarship.org/dist/prd/content/qt3130v3hh/qt3130v3hh.pdf?t=ozfcmg",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6bbf12f195583ded2751e8cd3fb6f5287b3b978e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
10923588
|
pes2o/s2orc
|
v3-fos-license
|
A Novel msDNA (Multicopy Single-Stranded DNA) Strain Present in Yersinia frederiksenii ATCC 33641 Contig01029 Enteropathogenic Bacteria with the Genomic Analysis of Its Retron
Retron is a retroelement that encodes msDNA (multicopy single-stranded DNA), which is found mainly in Gram-negative pathogenic bacteria. We screened Yersinia frederiksenii ATCC 33641 contig01029 for the presence of a retroelement by using bioinformatics tools and characterized a novel retron, retron-Yf79, on the chromosome that encodes msDNA-Yf79. In this study, we observed that the codon usage of retron-Yf79 was notably different from that of the Y. frederiksenii genome. This suggests that retron-Yf79 is a foreign DNA element that was integrated into the genome of this organism during its evolution. In addition, we observed a transposase gene located just downstream of retron-Yf79, so this enzyme might be responsible for the transposition of this novel retron element.
Introduction
Over the past 21 years, it has been shown that some pathogenic Gram-negative bacterial strains contain genetic elements called retrons. A retron is a retroelement consisting of msr, which encodes the RNA part of msDNA, msd, which encodes the DNA part of msDNA, and the ret gene for reverse transcriptase (RT) [1]. Reverse transcriptase (RT) was originally discovered in viruses [2] as an essential enzyme for the replication of retroviruses. Since the discovery of RT in myxobacteria [3] and Escherichia coli [4], intriguing questions have been raised concerning its origin and function in prokaryotes [5].
The msDNA (multicopy single-stranded DNA) is composed of a small, single-stranded DNA linked to a small, single-stranded RNA molecule. The 5′ end of the DNA molecule is joined to an internal guanine base (G) residue of the RNA molecule by a unique 2′,5′-phosphodiester bond [6]. Since msDNA was originally discovered in the Gram-negative soil bacterium Myxococcus xanthus [7], it has also been isolated from aggregative adherence E. coli (AAEC) [8], a classical enteropathogenic E. coli (EPEC) [9] and, more recently, from Vibrio cholerae [10], Salmonella enterica serovar Typhimurium [5], V. parahaemolyticus and V. mimicus (Shimamoto T, 2003, unpublished data). Hence, RT might have a role in the diversification of pathogenic bacterial genomes.
Although msDNAs have been isolated from various pathogenic Gram-negative bacteria, in this study we characterized a novel retron region, encoding msr and msd together with a ret gene, by screening the complete genome sequence of Yersinia frederiksenii [11] and by best-hit RT sequence similarity with V. cholerae, V. parahaemolyticus and S. Typhimurium. These provide insight into the important roles of this mysterious element in these bacterial species. To determine the particular location of retron-Yf79, the complete nucleotide genome sequence of Yersinia frederiksenii ATCC 33641 contig01029 was retrieved from the National Center for Biotechnology Information (NCBI) resource at (http://www.ncbi.nlm.nih.gov/) with the following accession number (AALE02000035) [11]. To investigate the evolutionary relationship among the amino acid sequences of the reverse transcriptases from Y. frederiksenii, V. cholerae, V. parahaemolyticus and S. Typhimurium, these were collected from the ExPASy proteomics server at (http://www.expasy.org/). In addition, the 16S ribosomal RNA (16S rRNA) nucleotide sequences of Y. frederiksenii, V. cholerae, V. parahaemolyticus and S. Typhimurium were collected from the Kyoto Encyclopedia of Genes and Genomes (KEGG) organism database available at the GenomeNet server, Japan (http://www.genome.jp/), to observe the possible evolutionary scenario among those species.
Sequence Alignment.
The genomic organization of msd-msr region of retron-Yf79 was determined according to their nucleotide sequences analyzing, that is, the presence of conserved region nucleotides with other msr-msd coding regions which have been isolated from various pathogenic bacteria-(V. cholerae, V. parahaemolyticus and S. Typhimurium) by using (ClustalW) program available at (http://www .genome.jp/tools/clustalw/), Japan [12]. To evaluate the similarity of RT-Yf79 with others RT-Vc95 from V. cholerae [10], RT-Vp96 from V. parahaemolyticus (Shimamoto T, 2003, unpublished data) and RT-St85 from S. Typhimurium [5], the alignment program was utilized at the site (http://www.genome.jp/tools/clustalw/) [12], after determining the best hit of RTs sequence similarity search by the BLAST program at NCBI Blast homepage (http://www .blast.ncbi.nlm.nih.gov/Blast.cgi).
Structure Prediction and Codon Bias Analysis.
The DNA and RNA secondary structures of msDNA-Yf79 were predicted by using the database (http://www.ncrna.org/centroidfold/) [13]. The promoter sequence of retron-Yf79 was predicted on the basis of conserved promoter sequences [14]. To appraise whether the retron is a foreign DNA element, a codon bias analysis was carried out. The codon bias of retron-Yf79 and of the whole organism genome was determined by using the codon usage database (http://www.kazusa.or.jp/codon) [15].
Phylogenetic Analysis.
To evaluate the origin and similarity of RT-Yf79 from Y. frederiksenii, phylogenetic tree was constructed by using other RTs from (V. cholerae, V. parahaemolyticus and S. Typhimurium). These amino acid sequences were aligned along with each other by using (ClustalW) at (http://www.genome.jp/tools/clustalw/), Japan [12]. The sequence alignment was performed under default conditions and the phylogenetic tree was constructed by the neighbor-joining method. The phylogenetic tree of 16S ribosomal RNAs was also constructed based on their nucleotide sequences by using same database available at (http://www.genome.jp/tools/clustalw/), Japan [12].
Results
3.1. The Structure of msDNA-Yf79. Analysis of the msd nucleotide sequence showed that the DNA part of the msDNA found in Y. frederiksenii is predicted to consist of 79 bases of single-stranded DNA, and hence it was named msDNA-Yf79; the RNA part of msDNA-Yf79 consists of 70 bases encoded by the msr gene of retron-Yf79 (Figure 1(a)). Furthermore, the guanine base (G) residue at position 12 of the RNA molecule branches out via a unique 2′,5′-phosphodiester link (Figure 1(a)). The msDNAs isolated from other bacteria contain at least one mismatched base pair in their DNA stems, which could be mutagenic [16,17]. However, in this study we observed that the DNA structure of msDNA-Yf79 contains no mismatched base pairs, unlike most msDNAs isolated from other pathogenic bacteria (Figure 1(a)). Further, msDNA-Yf79 shares a number of conserved nucleotide sequences with other msDNAs (msDNA-St85, -Vc95 and -Vp96) (Figure 1), except for a thymine (T) at position 67 in the DNA part of msDNA-Yf79 (Figure 1(a)).
Genomic Organization of Retron-Yf79. The retron-Yf79
consists of a nucleotide sequence of about 2.8 kb, and the retron element is transcribed from the −35 and −10 conserved promoter sequences located 5 bp upstream of the msr-msd coding region (Figures 2(a) and 2(b)). In addition, two open reading frames (ORFs) are located just downstream of the msr and msd coding sequences: one is RT-Yf79, which encodes a retron-type reverse transcriptase of 310 amino acids, and the other is ORF-541, which encodes a putative ATP-binding protein of 541 amino acids (Figure 2(a)). The upstream and downstream regions of the retron element also contain the yfred0001 42820 gene, which encodes a hypothetical protein (356 AAs), and the Yred0001 42860 gene, which encodes a transposase (308 AAs), respectively (Figure 2(a)).
Codon Usage of Retron-Yf79.
To identify the origin of the RT-Yf79 and ORF-541 genes in the Y. frederiksenii genome, codon usage analyses were carried out. These revealed that the RT-Yf79 and ORF-541 genes use the AAA codon for lysine with frequencies of 55% and 74%, respectively, whereas the Y. frederiksenii genome uses the AAA codon for lysine with a frequency of only 20% (data not shown). This observation suggests that retron-Yf79 is a foreign DNA element that was probably acquired by this organism's chromosome from another ancestral species during evolution.
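As an illustration of how such a codon-preference figure can be obtained, the following is a minimal sketch that computes the fraction of lysine codons encoded as AAA (versus AAG) in a coding sequence read in frame; the toy sequence is invented for the example and is not the actual retron-Yf79 ORF.

from collections import Counter

def lysine_aaa_fraction(cds):
    """Fraction of lysine codons that are AAA (versus AAG) in a coding sequence
    read in frame from the first base."""
    codons = [cds[i:i + 3].upper() for i in range(0, len(cds) - 2, 3)]
    counts = Counter(c for c in codons if c in ("AAA", "AAG"))
    total = counts["AAA"] + counts["AAG"]
    return counts["AAA"] / total if total else 0.0

# Toy coding sequence (not the real retron-Yf79 ORF): 3 AAA and 1 AAG -> 0.75.
toy_cds = "ATGAAAGCTAAAGGTAAGTTTAAATAA"
print(lysine_aaa_fraction(toy_cds))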
Comparative Study of RT-Yf79 with Other ret Genes
from Different Pathogenic Bacteria. The RT-Yf79 encoded by the retron-Yf79 consists of 310 AA residues. Surprisingly, all retron RTs in pathogenic bacteria were shown to have the highest identities to RT-Yf79: RT-Vc95 (from V. cholerae, 44% identity), RT-Vp96 (from V. parahaemolyticus, 45% identity), and RT-St85 (from S. Typhimurium, 43% identity) when these RTs were compared with each other by using multiple amino acids alignment (Figure 3). These four RTs shared approximately similar number of amino acids (Figure 3). In addition, they all shared a conserved domain along with each other (data not shown).
Phylogenetic Analysis of RTs and 16S Ribosomal RNA Gene Sequences.
To observe the genomic diversity of the ret genes and of the orthologous 16S ribosomal RNA genes (from Y. frederiksenii, V. cholerae, V. parahaemolyticus and S. Typhimurium), phylogenetic trees were constructed by using ClustalW at (http://www.genome.jp/tools/clustalw/), Japan [12] (Figure 4). The phylogenetic tree analysis showed a fundamental diversity among the ret genes in relation to the host bacterial species: RT-Yf79 from Y. frederiksenii [11] was closely related to RT-Vp96 from V. parahaemolyticus (Shimamoto T, 2003, unpublished data) rather than to RT-St85 from S. Typhimurium [5] or RT-Vc95 from V. cholerae [10], while RT-St85 was closely related to RT-Vc95 (Figure 4(a)). Although both RT-Vc95 and RT-Vp96 were from Vibrio species, they diverged from each other, being closely related to RT-St85 and RT-Yf79, respectively (Figure 4(a)). The 16S ribosomal RNA phylogenetic analysis suggested that these pathogenic bacterial genomes might have acquired these retron elements during their evolution (Figure 4(b)).
Discussion
In this study, we demonstrated that a new msDNA, msDNA-Yf79, exists in Y. frederiksenii ATCC 33641 contig01029 and compared its properties to those of St85 [5], Vc95 [10] and Vp96 (Shimamoto T, 2003, unpublished data). Retron-Yf79 was responsible for the production of msDNA-Yf79 in this Gram-negative pathogenic Y. frederiksenii strain. However, the gene organization of retron-Yf79 was similar to those found in E. coli (retron-Ec83 and -Ec78) [8,9], that is, it contains only two open reading frames (ORFs) in its retroelement. On the other hand, the gene organization of retron-Vc95 [10] and retron-Vp96 (Shimamoto T, 2003, unpublished data) is completely different, as they contain a third ORF. msDNA-Yf79 has sequence similarity to msDNA-St85, msDNA-Vc95 and msDNA-Vp96, as these msDNAs share a number of highly conserved bases in their nucleotide sequences, indicating that they might be descended from a common origin (i.e., from a common ancestor). The conserved guanine base (G) at position 12 in the RNA part of msDNA-Yf79 is involved in branch formation via a 2′,5′-phosphodiester link in the DNA-RNA complex (Figure 1(a)). Lima and Lim suggested that mutation of this guanine base (G) prevents msDNA synthesis and that the primary product of reverse transcription may be a branched DNA-RNA compound [9], which supports our observation.
Furthermore, it was quite interesting that the stem structure of msDNA-Yf79 did not contain any mismatched base pairs, unlike most of the msDNAs isolated from other pathogenic bacteria. Moreover, the codon usage of this retron element, together with the phylogenetic analysis of the RTs and 16S rRNAs from pathogenic bacteria, revealed that this retron is a foreign DNA element. The region downstream of retron-Yf79 contains a transposase gene, indicating that this enzyme might participate in the transposition of this novel retron element in the genome.
Figure 3: Comparison of the amino acid sequence alignment of RT-Yf79 with the three highest-identity RT sequences: RT-Vc95 (44% identity), RT-Vp96 (45% identity), and RT-St85 (43% identity). Amino acids conserved in all four RTs are marked with asterisks and in black; conserved and well-conserved amino acid residues are marked with dots, and the number of amino acids of each RT is written at the end of the alignment.
We decided to look closely at the nucleotide sequence of this retron-Yf79 in Y. frederiksenii because this organism has generated significant interest for its role in pathogenicity. The functions of msDNA are still not clear. However, this DNA-RNA complex, which has been identified in Gram-negative pathogenic bacteria, may have a role in the process of pathogenicity. In addition, the retron element may play an essential role in the adaptation of such bacteria to different stressful conditions by changing the expression of their regulatory or social behaviour under densely populated conditions. Further experiments will be required to demonstrate the functions of msDNA, which may open a new arena in the study of pathogenicity or adaptation to stressful conditions.
|
2014-10-01T00:00:00.000Z
|
2011-11-17T00:00:00.000
|
{
"year": 2011,
"sha1": "d149542c043729e976c72e9dd5dd2f66201d39c7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4061/2011/693769",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8cf70d39b1255459a2c4c0b2ad84ddf93bb80c59",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
56475633
|
pes2o/s2orc
|
v3-fos-license
|
Sustainable Discoloration of Textile Chromo-Baths by Spent Mushroom Substrate from the Industrial Cultivation of Pleurotus ostreatus
Synthetic dyes are recalcitrant to degradation and toxic to different organisms. Physical-chemical treatments of textile wastewaters are not sustainable in terms of costs. Biological treatments can be more convenient, and the lignin-degrading extracellular enzymatic battery of basidiomycetes is capable of discoloring synthetic dyes. Many basidiomycetes are edible mushrooms whose industrial production generates significant amounts of spent mushroom substrate (SMS) with residual high levels of lignin-degrading extracellular enzymatic activities. We have demonstrated that a low-cost organic substrate, the SMS deriving from the cultivation of the basidiomycete Pleurotus ostreatus, is able to discolor anthraquinonic, diazo and monoazo dyes when incubated in dyeing chromo-reactive and chromo-acid baths containing surfactants and anti-foams, where the concentrations of the different dyes exceed those recovered in the corresponding wastewaters. Laccase was the lignin-degrading extracellular enzyme involved in the discolouring process. The exploitation of the low-cost SMS in the treatment of textile wastewaters is proposed. Accordingly, a toxicological assessment, based on a cytotoxicity test on a human amnion epithelial cell line (WISH) and the estimation of the germination index (GI%) of Lactuca sativa, Cucumis sativus and Sorghum bicolor, has been performed, showing the loss of toxicity of the chromo-baths after being discoloured by the SMS.
Introduction
Synthetic dyes find application in different industrial sectors including the textile industry. An annual consumption of about 0.7 million tons of synthetic dyes has been reported [1]. These compounds are recalcitrant to degradation and toxic to higher animals [2] and surface aquifers [3]. The textile industry alone accounts for two-thirds of the total dyestuff market. Accordingly, the discoloration of textile wastewaters has been one of the major environmental concerns of recent decades. Physical-chemical treatments such as coagulation/adsorption, electrolysis or ozonation are sometimes unsuccessful or very expensive and frequently produce large amounts of toxic wastes [4]. At the same time, the strong electron-withdrawing groups characterizing the chemical structure of dyes protect them from attack by bacterial oxygenases [5], hindering the treatment of textile wastewaters by conventional activated sludge plants. On the other hand, anaerobic digestion of various dyes produces toxic amines [6], which makes these processes not recommended unless combined with subsequent aerobic treatments to oxidize the toxic intermediates [7].
In this scenario, the already reported capacities of fungi [8,9] with respect to the aerobic oxidation of synthetic azo-dyes [10,11], anthraquinones [12], and phthalocyanines [13] are of extreme interest. Ligninolytic basidiomycetes causing white rot on wood were shown to be the most promising fungi because of their capacity to produce a complex array of lignin-degrading extracellular enzymes with very low substrate specificity towards xenobiotics [9]. The ligninolytic battery of extracellular enzymes of the basidiomycete P. ostreatus has been described as capable of transforming a very broad spectrum of waste substrates [14][15][16], including textile dyes [17]. However, it is worth mentioning that P. ostreatus and many other basidiomycetes are edible mushrooms and their industrial cultivation produces a significant amount of spent mushroom substrate (SMS), reported as harboring high levels of residual oxidative enzymatic activity [18]. Actually, an average of 5 kg of SMS is produced per 1 kg of mushrooms on the market, and P. ostreatus production has increased over 500% in the last ten years, ranking second or third in the context of edible mushroom industrial production in the world [19]. Thus, SMS would be a low-cost source of ligninolytic enzymes [20][21][22]. However, the costs of purification of the latter can still negatively influence the exploitation of the spent matrix in pollutant biodegradation. As far as we know, with the exception of the description of the potential of the spent mushroom compost of Pleurotus pulmonarius in synthetic pentachlorophenol-contaminated waters [23], the potential of the SMS from the cultivation of edible mushrooms as an oxidising matrix for the treatment of industrial effluents has not been reported.
The aim of this work was the recovery and direct exploitation of a low-cost organic substrate, the SMS from the industrial production of P. ostreatus, to discolour complex chromo-baths used in the textile industry. The feasibility of the process has been verified by incubating the SMS directly in reactive and acid chromo-baths containing anthraquinonic, monoazo and diazo dyes, as single dyes or as a mix. In addition to dyes, the chromo-baths contained surfactants and anti-foams that are used as auxiliaries in the colouring process and normally released in textile wastewaters. Although usually not specifically investigated in terms of disappearance and generally used as biodegradable ingredients, surfactants and anti-foams might interfere with the discolouring process or contribute to the toxicity of the textile effluents. The progressive discolouration of the different complex chromo-baths by the SMS was recorded in parallel with the evaluation of the loss of the toxicity recorded here. The latter was monitored by a cyto-test on a human amnion epithelial cell line (WISH) and the estimation of the germination index (GI%) of Lactuca sativa, Cucumis sativus and Sorghum bicolor in the presence of untreated and discoloured chromo-baths. The ligninolytic enzymes potentially involved in the discolouring and detoxifying process were also monitored.
Chemicals, SMS, Seeds and Epithelial Cell Line
Analytical grade chemicals were purchased from Sigma Aldrich (Milan, Italy). The chromo-reactive and chromo-acid baths were provided by manufacturers; their chemical components are reported in Table 1. The real structures of the different dyes and auxiliaries were not disclosed by the manufacturers, which categorized the different chromo-baths in relation to the general classification of anthraquinonic, monoazo and diazo groups. Surfactants (Setavin group) and anti-foams (Kolassol group) were respectively classified as alkylamine-ethoxylated and silicone-based chemicals. The SMS of Pleurotus ostreatus was obtained from a local mushroom farm. The human amnion epithelial cell line (WISH) was kindly provided by Prof. Francesco Sgarrella, University of Sassari (Italy). Plant seeds were obtained from the USDA-ARS North Central Regional Plant Introduction Station, Iowa State University, Ames, IA, USA.
Discoloration of the Chromo-Baths by the SMS
The capacity of the SMS to discolor the chromo-baths was tested at 21 ± 2 °C, in static and dynamic conditions (orbital shaking at 250 rpm), in sterile 2 L glass flasks incubated with plugs of SMS (0.3 cm thick) visually and homogeneously colonized by the fungal mycelium. A total of 130 g (fresh weight) of SMS was collected and submerged in each glass flask containing 500 mL of the coloring bath diluted in 150 mM NaCl solution in water. The Black chromo-Reactive Mix (BRM) was 10- and 5-fold diluted; the Blue chromo-Reactive anthraquinonic Bath (BRaB), the Blue chromo-Acid anthraquinonic Bath (BAaB) and the Red chromo-Acid monoazo Bath (RAmaB) were 5-fold diluted. Two flasks (one set) were prepared for each dilution. One set of flasks was not inoculated with SMS and the other was inoculated with SMS previously autoclaved at 121 °C and 1 atm for 20 minutes, serving, respectively, as control sets for abiotic discoloration and for adsorption of the dye onto the SMS. A further set of flasks was prepared in sterile 150 mM NaCl solution in water to evaluate the contribution of pigments released from the SMS to the increase in the spectral absorbance of flask supernatants in the visible range. At each time-point, three volumes of supernatant from each flask were collected and analyzed for UV-Vis absorption between 400 and 800 nm (Beckman Instruments, Fullerton, CA, USA). All samples at time zero were diluted to give an absorbance < 1, and the same dilution was used for the samples corresponding to the successive time points. The 150 mM NaCl solution served as blank. The percentage of discoloration over the incubation time was quantified as the decrease of the areas under the absorbance spectra of flask supernatants at the different time-points of analysis, calculated against a baseline defined by the absorbance of supernatants from flasks without chromo-bath addition at the corresponding time of analysis. The areas under the absorbance spectra were calculated using PeakFit (Systat Software Inc., San Jose, California, USA). The decrease of the area under the spectra of supernatants from flasks inoculated with autoclaved SMS, relative to the areas under the spectra of supernatants from flasks not inoculated with SMS, was interpreted as the percentage of adsorption of the dye onto the SMS.
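The area-based quantification described above was carried out with PeakFit. Purely as an illustration, and not the software or data used in the study, a minimal numerical sketch of the same calculation (trapezoidal integration of the 400-800 nm spectra, baseline correction with the SMS-only flasks) could look as follows; all spectra and values are hypothetical:

```python
import numpy as np

def spectrum_area(wavelengths_nm, absorbance):
    """Area under an absorbance spectrum (400-800 nm) by trapezoidal integration."""
    return np.trapz(absorbance, wavelengths_nm)

def percent_discoloration(area_t, area_t0, baseline_t, baseline_t0):
    """Decrease of the baseline-corrected area at time t relative to time zero."""
    corrected_t0 = area_t0 - baseline_t0   # flasks without chromo-bath define the baseline
    corrected_t = area_t - baseline_t
    return 100.0 * (1.0 - corrected_t / corrected_t0)

# Hypothetical 400-800 nm scans of a supernatant at time zero and after 24 h.
wl = np.linspace(400, 800, 401)
abs_t0 = 0.9 * np.exp(-((wl - 600) ** 2) / 5000)     # untreated chromo-bath
abs_24h = 0.1 * np.exp(-((wl - 600) ** 2) / 5000)    # after SMS incubation
abs_blank = np.full_like(wl, 0.02)                   # SMS-only flask (released pigments)

a0 = spectrum_area(wl, abs_t0)
a24 = spectrum_area(wl, abs_24h)
ab = spectrum_area(wl, abs_blank)
print(f"discoloration after 24 h: {percent_discoloration(a24, a0, ab, ab):.1f}%")
```

The adsorption percentage would be obtained in the same way, comparing the autoclaved-SMS flasks with the flasks containing no SMS.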
Enzymatic Activity
Ligninolytic oxidative capacities related to laccase (EC 1.10.3.2), manganese peroxidase (EC 1.11.1.13), lignin peroxidase (EC 1.11.1.14) and versatile peroxidase (EC 1.11.1.16) were quantified by spectrophotometric methods in sample volumes of each flask supernatant collected in triplicate. Enzymatic activities were expressed as specific activities. Total proteins were determined according to [24], using bovine serum albumin as standard. All enzyme assays were performed at 37 ± 0.5 °C. Linearity with time and protein concentration was observed for all enzyme activities assayed. Laccase activity was determined as described in [25]. Manganese peroxidase and versatile peroxidase activities were determined as described in [26]. Lignin peroxidase activity was measured as described in [27]. Enzymatic activities were detected before the chromo-bath amendments, at the moment of the amendment of the chromo-baths, and at the end of the discoloring process. Laccase stability at different incubation pH values was monitored on 20 µL of control flask supernatant (no chromo-bath amended) incubated in different buffers and quantified under standard conditions at time zero and after 3 and 24 hours of incubation. Laccase stability at different temperatures was monitored on 50 µL of control flask supernatant incubated for 15 min at each temperature. The effect of ionic strength on laccase activity was monitored in the presence of increasing amounts of NaCl in the laccase assay mixture.
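The cited assay protocols [25-27] define the exact reaction mixtures. Purely as an illustration of how a specific activity (U/mg protein) is derived from a spectrophotometric rate, the sketch below assumes a generic ABTS-type readout; the extinction coefficient, volumes and protein concentration are illustrative assumptions, not values taken from the paper:

```python
def enzyme_activity_units(delta_abs_per_min, epsilon_m1_cm1, path_cm, assay_vol_ml, sample_vol_ml):
    """Activity in U/mL of supernatant (1 U = 1 µmol of substrate converted per minute)."""
    # Beer-Lambert: rate (M/min) = (dA/min) / (epsilon * path length); convert to µmol/min
    # in the cuvette, then normalize to the volume of supernatant added to the assay.
    rate_m_per_min = delta_abs_per_min / (epsilon_m1_cm1 * path_cm)
    umol_per_min = rate_m_per_min * 1e6 * (assay_vol_ml / 1000.0)
    return umol_per_min / sample_vol_ml

def specific_activity(activity_u_per_ml, protein_mg_per_ml):
    """Specific activity in U per mg of total protein."""
    return activity_u_per_ml / protein_mg_per_ml

# Hypothetical ABTS-based laccase readout (assumed extinction coefficient 36,000 M^-1 cm^-1).
u_per_ml = enzyme_activity_units(delta_abs_per_min=0.12, epsilon_m1_cm1=36000,
                                 path_cm=1.0, assay_vol_ml=1.0, sample_vol_ml=0.05)
print(f"laccase: {u_per_ml:.3f} U/mL, {specific_activity(u_per_ml, 0.8):.3f} U/mg")
```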
Isolation of P. ostreatus Mycelium
The mycelium of P. ostreatus was aseptically collected from the fungal fruiting body, streaked on a sterile potato dextrose agar plate and incubated for 1 week at 28 °C in the dark. The mycelium was maintained through periodic transfer at 4 °C on potato dextrose agar plates supplemented with 0.5% yeast extract. Laccase activity was monitored in liquid cultures prepared by incubating 3 plugs (1 cm diameter) of mycelium, grown on the maintenance agar plates, in shaking flasks (125 rpm) containing 250 mL of potato dextrose broth (24 g/L) with 0.5% yeast extract at 28 °C in the dark. At successive times, 1 mL of supernatant was collected in triplicate and analyzed for laccase activity.
Native Protein Gel Electrophoresis in Gradient of Polyacrylamide
Electrophoresis on a 5-30% polyacrylamide gradient gel was performed at alkaline pH under non-denaturing conditions. The separating gel contained an acrylamide gradient from 5 to 30%, while the stacking gel contained 4% acrylamide. The electrode reservoir solution was 25 mM Tris-190 mM glycine (pH 8.4). A total of 60 ng of proteins from the supernatants of 1) P. ostreatus mycelium grown in potato dextrose broth, 2) SMS incubated in control flasks, and 3) SMS incubated in 5-fold diluted Blue chromo-Reactive bath (BRaB) was loaded on the gel, which was stained for laccase activity using 2,2'-azinobis-(3-ethylbenzthiazoline-6-sulfonic acid) (ABTS) as substrate. For 1), sample supernatants were collected at the time of maximum laccase activity.
Seed Germination Test
The germination indexes (GI%) of Lactuca sativa, Cucumis sativus and Sorghum bicolor seeds in the presence of aliquots of the undiluted flask supernatants, before and after the treatment with the P. ostreatus SMS, were compared. Seed germination tests were performed in sterile Petri dishes containing Whatman No. 1 ashless filter paper imbibed with 10 mL of 1) control solution (150 mM NaCl in water), or flask supernatants 2) before and 3) after the SMS incubation. The plates were kept for 120 hrs in the dark at 25 ± 1 °C. The germination indexes were calculated from the number of germinated seeds and the corresponding root length values, according to the formula: GI% = (Gs × Ls) / (Gc × Lc) × 100, where Gs is the mean number of germinated seeds and Ls the mean root length of germinated seeds in the sample, and Gc is the mean number of germinated seeds and Lc the mean root length of germinated seeds in the control. The analyses were done in triplicate in plates containing 15 seeds per set.
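The GI% formula given above translates directly into code; the following minimal sketch uses hypothetical counts and root lengths only:

```python
def germination_index(n_germinated_sample, mean_root_len_sample,
                      n_germinated_control, mean_root_len_control):
    """GI% = (Gs x Ls) / (Gc x Lc) x 100, as defined in the text."""
    return 100.0 * (n_germinated_sample * mean_root_len_sample) / \
           (n_germinated_control * mean_root_len_control)

# Hypothetical values for one plate of 15 Lactuca sativa seeds (root length in mm).
print(f"GI% = {germination_index(13, 21.0, 14, 23.5):.1f}")
```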
Cytotoxic Test
The established WISH cell line, derived from human amnion epithelium, was routinely grown in RPMI medium with 10% FBS and 2% antibiotics at 37 ± 0.5 °C in a humidified 5% CO2/95% air atmosphere. The experiments were performed on 35 mm plates containing approximately 1 × 10^6 cells in RPMI complete medium, maintained for 24 hours in the absence (control) or presence of volumes of the different flask supernatants corresponding to the dilutions of the chromo-baths used for the SMS discoloration. After incubation, plates were evaluated for the number of cells adhering to the plates and, by the trypan blue method [28], for the percentage of dead cells, in order to assess the acute toxicity of the chromo-baths before and after discoloration with the SMS.
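As a small illustration of the readouts described above (adhering-cell counts and trypan blue staining), with all numbers hypothetical:

```python
def trypan_blue_summary(adhering_cells, stained_cells, total_counted):
    """Adhering-cell count plus % dead cells (trypan-blue stained) from a haemocytometer count."""
    pct_dead = 100.0 * stained_cells / total_counted
    return adhering_cells, pct_dead

adhering, pct_dead = trypan_blue_summary(adhering_cells=8.2e5, stained_cells=12, total_counted=240)
print(f"adhering cells: {adhering:.2e}, dead cells: {pct_dead:.1f}%")
```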
Statistical Treatment of Data
The analysis of variance of the data (ANOVA) was performed using GraphPad InStat version 3.00 for Windows 95 (GraphPad Software, San Diego, California, USA) to evaluate the effects of the different incubation parameters on the discolouring process. The means of the significantly different main effects were compared by Duncan's test at the 5% level using the Statistica program (Statsoft Inc., 1997).
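The original analysis was carried out with GraphPad InStat and Statistica; a rough open-source equivalent of the one-way ANOVA step is sketched below with hypothetical discoloration data (Duncan's multiple range test is not implemented in SciPy, so only the ANOVA itself is shown):

```python
from scipy import stats

# Hypothetical % discoloration at 24 h for three incubation conditions (triplicates).
dynamic_live   = [96.8, 97.3, 96.5]   # dynamic incubation, viable SMS
static_live    = [7.9, 8.4, 7.6]      # static incubation, viable SMS
dynamic_killed = [1.2, 0.8, 1.5]      # dynamic incubation, autoclaved SMS

f_stat, p_value = stats.f_oneway(dynamic_live, static_live, dynamic_killed)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.2e}")
# A post-hoc multiple-comparison test (Duncan's test in the original work) would then
# separate the means; it would have to come from another package.
```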
Discoloration of the Chromo-Baths
In order to evaluate the potential of the P. ostreatus SMS to discolor the black chromo-reactive mix (BRM), two different dilutions of the latter were incubated in the presence of the SMS. The incubation of the 10-fold diluted BRM was carried out in both static and dynamic conditions. Discoloration of the 10-fold diluted BRM was evident to the naked eye: 97% of discoloration occurred within 24 hrs (Figure 1(a)). The process was observed only in the presence of non-autoclaved SMS and under dynamic incubation; in static conditions a maximum of 8% of discoloration was recorded. Given the high percentage of discoloration observed under dynamic incubation with the 10-fold dilution, and the evidence that oxygenation was mandatory for discoloration, a 5-fold dilution of BRM was inoculated with the SMS under dynamic incubation only. After 24 hrs, 65% of discoloration was observed; 80% was recorded after 110 hrs of incubation (Figure 1(b)).
Enzymatic Activities and Native Gel Electrophoresis

Enzymatic activities of lignin-degrading enzymes, measured before the addition of the chromo-baths to the discoloring sets of flasks, revealed that the lignin-degrading battery of enzymes was limited to laccase. In fact, no lignin peroxidase activity was detected, and the recorded laccase activity was predominant over the manganese and versatile peroxidases (0.294 ± 0.65 U/mg for laccase; 0.039 ± 0.03 U/mg and 0.034 ± 0.017 U/mg for manganese and versatile peroxidase, respectively). The same enzymatic activity profile was recorded after the chromo-bath amendment and at the end of the discolouring process (Table 2). An increase of laccase specific activity, either in the presence or in the absence of chromo-baths, was recorded during the discolouring process (Table 2). No enzymatic activities were detected in the supernatants of autoclaved SMS. The laccase recovered from the supernatant of SMS incubated in the 5-fold diluted BRaB was collected and compared by native gel electrophoresis to 1) the laccase recovered from the control set of flasks and 2) the supernatant of actively growing mycelium of P. ostreatus in rich medium. The results showed a very similar profile for the laccases recovered from the three supernatants (Figure 2(d)). The stability of the laccase recovered from the control set of flasks was monitored with respect to pH and temperature, and the effect exerted by ionic strength was also evaluated (Figures 2(a)-(c)). The enzyme, whose optimum pH was between 4 and 5 (results not shown), showed significant stability in a pH range between 5 and 10 and after 30 min of incubation at temperatures up to 60 °C. The laccase activity was 50% inhibited by 200 mM NaCl.
Germination and Cytotoxicity Test
The germination tests showed that all the chromo-baths were extremely toxic: seed germination was significantly inhibited and was nearly absent after incubation in the chromo-reactive baths for all three plant species analyzed (Figures 3(a)-(d)). The acid baths were also extremely toxic (GI% < 50%), even though a certain percentage of germination (20%) was observed, with the exception of Lactuca sativa, whose germination was strongly inhibited by the Red chromo-acid bath (Figures 3(a)-(d)). The discoloration of all the chromo-baths led to a loss of toxicity of the discolored baths, with recovery of the GI% to the values (nearly 100%) communicated for the selected plant seeds by the USDA-ARS North Central Regional Plant Introduction Station (Figures 3(a)-(d)).
A human amnion epithelial cell line (WISH) was used to determine the effect exerted by the chromo-baths on cell viability. WISH cells incubated with the chromo-baths showed a significant decrease in the number of adhering (healthy) cells (Figure 3(e)) and a significant increase in the percentage of dead cells (stained by trypan blue) (Figure 3(f)) compared to control cells incubated in PBS 5-fold diluted in RPMI medium (control medium). The discoloration of the chromo-baths by SMS incubation removed their toxic effect: the number of adhering cells and the percentage of dead cells were not significantly different from the control (Figures 3(e)-(f)).
Discussion
Both azo and anthraquinonic dyes are associated with high ecotoxicity and high recalcitrance to discoloration [12,29]. Our results demonstrate that, in aerated (oxidative) conditions, a low-cost organic substrate, the SMS from P. ostreatus cultivation, discolored industrial chromo-baths containing monoazo, diazo and anthraquinonic dyes, either as single dyes or as a mix. Since we were interested in exploiting the SMS to discolour textile wastewaters, the capacity of the organic substrate was tested on industrial chromo-baths rather than on aqueous solutions of single synthetic dyes. The former are more similar to real industrial wastewaters, since they contain the auxiliaries of the dyeing process (mainly surfactants, anti-foams and salts), which are also released in real wastewaters and can potentially interfere with the discolouring process. The dilutions of the chromo-baths tested here for discoloration contained the different chemicals at concentrations markedly higher than those found in real wastewaters. In fact, they contained dyes and auxiliaries at concentrations up to 20% of the initial chromo-bath mass used in the dyeing process, while it is estimated that the amount of textile dyes released in wastewaters during the industrial process accounts for 10% of the total used [30]. Thus, the SMS was capable of discolouring different chromo-baths at concentrations similar to or higher than those found in real textile wastewaters. The efficiency of the SMS differed among the chromo-baths. Reactive baths proved more recalcitrant than acid ones, even though a comparison is both questionable, because of the differences in chemical components among the chromo-baths, and difficult, since the actual structures of the different dyes and chromo-bath auxiliaries, which would indicate their putative recalcitrance to biodegradation, were not provided by the manufacturers. However, within 24 hrs the discoloration of every chromo-bath tested accounted for 70-90% of the total. Abiotic discoloration and adsorption were excluded by the persistence of colour in the sets of flasks not incubated with the SMS and in the sets of flasks in which all biological activity had been eliminated by autoclaving. Thus, the occurrence of a biotic process was explored by monitoring the ligninolytic activity of fungal extracellular enzymes associated with the growth of P. ostreatus on the SMS.

Laccase was both the only enzymatic activity measurable during the incubation of the SMS in saline solution and in the presence of the different chromo-baths, and the only enzymatic activity recovered during the time interval corresponding to the discoloration process. During discoloration, laccase activity increased both in the presence and in the absence of chromo-baths. The results obtained indicate the ligninolytic laccase as the enzymatic activity involved in the discolouring process. This hypothesis was supported by the near absence of discoloration in static conditions, in accordance with the known catalytic mechanism of the enzyme, which uses molecular oxygen as the electron acceptor and consequently catalyses reactions in oxidative (aerated) conditions. The laccase recovered in the supernatant of the discolouring baths showed peculiar traits. Laccases from basidiomycetes are generally considered acidic enzymes that are not active at basic pH, a characteristic that is not convenient for the exploitation of the enzyme in the treatment of textile wastewaters, which are essentially associated with neutral to basic pH. However, the laccase produced by P. ostreatus growing on the SMS, although displaying an acidic pH optimum, showed activity and high stability at neutral to strongly basic pH. A second interesting trait of the enzyme investigated here was its capacity to catalyse discoloration at high ionic strength. This characteristic is also of interest because textile wastewaters are often associated with high ionic strengths. In fact, the laccase from the SMS discoloured the acid chromo-baths in incubation media containing two different salts, 150 mM NaCl and 227 mM ammonium sulfate. The enzyme was 50% inhibited by 200 mM NaCl, yet the discoloration of the acid baths occurred in only 24 hrs. The enzyme also showed significant stability with respect to the incubation temperature. All the characteristics described above make the laccase from P. ostreatus SMS extremely attractive for industrial application, suggesting the exploitation of the SMS not only as a low-cost substrate with oxidative capacity, but also as a source of robust laccase. In this context, the induction of laccase production could be advantageous for enzyme-purification purposes, even though, under the experimental conditions explored here, no induction effect of the chromo-baths on the ligninolytic battery of enzymes associated with the SMS was observed. Induction of oxidative enzymes, including laccase, by dyes has been observed in fungi [31]; however, this cannot be described as a generic effect of dyes on fungal metabolism [32]. On the other hand, the comparison of the laccase isoforms from the different supernatants by native electrophoresis showed that P. ostreatus was able to produce the enzyme (with a similar profile) under very different growth conditions, ranging from growth on an easily available carbon source (rich broth, in which the fungus grows in axenic culture) to growth on a more complex carbon source that may not sustain the production of fruiting bodies, and in the presence of chromo-baths (on the SMS).

Laccases from basidiomycetes have been described as capable of quite efficiently decolorizing azo dyes and, with respect to anthraquinonic dyes, these enzymes are reported to be more efficient than other oxidases [33]. However, to our knowledge, this is the first report on the putative capacity of laccase to discolour azo and anthraquinonic dyes in complex mixtures of chemicals such as chromo-baths, containing high concentrations of different surfactants, anti-foams and salts. Under the experimental conditions adopted, the extremely versatile laccase oxidative capacity observed here can be interpreted as related to the co-presence of the enzyme and, possibly, of chemical redox mediators deriving from the ligninolytic activity of the basidiomycete. Mediators act as electron shuttles between the enzyme and very different substrates characterised by a high redox state [34,35]. They can also accelerate oxidative processes in terms of reaction kinetics [36,37]. Intermediates of the lignin degradation process, defined as natural mediators, are reported to be the best mediators to use for dye discoloration by laccase [38]. With reference to the discolouring incubations described here, it is reasonable to assume that the ligninolytic activity of P. ostreatus, growing on a lignin-containing substrate such as the SMS, released these or analogous intermediates into the discolouring supernatants. Thus, the occurrence of natural mediators might explain the peculiar capacity of the P. ostreatus SMS to discolour a plethora of different chromo-baths, with different chemical compositions, with the quite efficient reaction kinetics observed here. Moreover, Annex I of the Dangerous Substances Directive of the European Union (EU) describes some dyes as toxic or mutagenic and others as skin sensitizers for consumers [Council Directive 67/548/EEC on the approximation of laws, regulations, and administrative provisions relating to the classification, packaging, and labeling of dangerous substances. Official Journal of the European Communities, June 27, 1967; Vol. 196, p. 1]. At the same time, some discolouring processes are associated with the production of toxic intermediates of the dye oxidation process [4,6]. Thus, any approach to the treatment of textile wastewaters must focus on both discolouring and detoxifying processes. Accordingly, the opportunity to exploit the SMS from P. ostreatus in the treatment of textile wastewaters was verified by a toxicological assessment of the process, based on a cytotoxicity test on a human amnion epithelial cell line (WISH) and a phytotoxicity test on germinating seeds of Lactuca sativa, Cucumis sativus and Sorghum bicolor. The results showed a strong toxicity of the chromo-baths, related both to the cytotoxicity exerted on cultured human amnion epithelial cells (WISH) and to the phytotoxicity exerted on seeds of the different plant species. The discoloration of the chromo-baths by the SMS was associated with the loss of toxicity towards both WISH cells and plant seeds.
In conclusion, the SMS from P. ostreatus proved capable of both discolouring and detoxifying a range of different chromo-baths containing complex mixtures of chemicals with recognised toxic effects on the environment. The versatility of the low-cost organic substrate is related at least to the laccase activity derived from P. ostreatus, even though other elements favouring oxidation cannot be excluded. Overall, the development of processes for the treatment of textile wastewaters based on the exploitation of this substrate, either as a source of robust enzymes or as a versatile low-cost organic substrate with oxidative capacity, appears sustainable in terms of costs and potentially profitable for the design of an integrated management scheme for the disposal of SMS as an organic waste.
Cognitive bias toward the Internet: The causes of adolescents’ Internet addiction under parents’ self-affirmation consciousness
The Internet plays a crucial part in adolescent life. However, as a product of modernization, the Internet has brought a lifestyle different from that of parents, who tend to regard excessive exposure to the Internet as a manifestation of adolescent Internet addiction. A cognitive bias against the Internet seems to have arisen among parents. Under the theoretical framework of self-efficacy and empathy, this study adopts PLS-SEM to analyze the contributing factors of adolescent Internet addiction from the perspective of parents' self-affirmation consciousness. The results demonstrate that self-affirmation consciousness has a significant positive effect on the empathy process; the empathy process and self-affirmation have a significant positive effect on cognitive bias; and the empathy process acts as a mediator between self-affirmation and cognitive bias. In sum, through an investigation of the causes of adolescent Internet addiction, this study explores the formation process of parents' cognitive bias toward the Internet under the influence of self-affirmation consciousness, verifying the practical effects of empathy in promoting parents' rational thinking about the Internet and adolescent Internet use, while also promoting, to a certain extent, the harmonious development of parent-child relationships.
Introduction

In recent years, with the innovative development of the Internet and information technology, information products such as digital media and social platforms have been constantly emerging and rapidly upgraded, under an international trend in which countries around the world are committed to technological development, with information at its core, to enhance national competitiveness. The improvement of Internet infrastructure and the further popularization of mobile devices are of great importance in strengthening the penetration of digital communication into public society and increasing the power of digital culture (Dunne-Howrie, 2022; Geng et al., 2022). The fast development of the Internet has changed the course of human civilization and promoted social revolution, and the virtual private space that the Internet provides for human beings promotes, to a large extent, the development of real modern society (Gong et al., 2019; Wu et al., 2020). Meanwhile, it provides infinite possibilities for the growth of human collective consciousness. Human society has been continuously presented to people through the Internet in ever more varied forms, which has greatly enriched human civilization (Yi et al., 2021, 2022). The Internet has evolved over the past 25 years, during which social media, characterized by massive information interaction, has rapidly risen to occupy most of people's social interactions. The Internet is getting increasingly close to people's lives, and adolescents, as an indispensable user group, are constantly integrating with the Internet world in new ways (Zhou and Fang, 2015).
Currently, the Internet is highly integrated into all aspects of adolescent growth. Accompanying their study, daily life, entertainment and social interaction, it has become the main channel through which adolescents understand the world and a source of the knowledge from which they form their values. However, with the continuous popularization of the Internet and of social tools such as computers, tablets, and smart phones, the shift of adolescents' focus of life to the Internet has become a trend, and it has also brought thorny problems, Internet addiction being one of them (Li et al., 2021). As a typical label for excessive use of the Internet, Internet addiction was first proposed and named by the American scholar Kimberly Young in 1996, with reference to the substance dependence criteria of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) (Young, 1996). Griffiths also published case studies on male and female Internet addiction in 1996, and these authors are the pioneers of Internet addiction research (Griffiths, 1996). Owing to the prominent social function of the Internet, Internet addiction has also been defined as an instrumental form of social interaction (Cerniglia et al., 2017; Gong et al., 2020). Contemporary adolescents are the greatest beneficiaries of Internet popularization, and the prevalence of Internet addiction among adolescents has gradually emerged. Consequently, the characteristics of adolescent Internet use and the effects of widely used social media on adolescent development have become valuable research topics.
In terms of the external influences on adolescent Internet addiction, scholars currently focus on psychological factors, such as personality traits (Young and Rogers, 1998); demographic factors, such as gender (Ko et al., 2005); and social factors, such as social support (Hardie and Tee, 2007) and family economic status (Hur, 2006). In addition, a great number of studies have shown that poor parent-child relationships have a critically important impact on adolescent Internet addiction; harmonious parent-child relationships can effectively reduce the frequency of adolescents' Internet use, while parent-child conflict increases the likelihood of Internet addiction (Wei et al., 2020; Huang et al., 2021). How to help adolescents become masters of cyberspace rather than captives of the Internet is an educational theme that every parent has to face in an era of instant information. However, owing to gaps in information reception, knowledge level, and awareness, parents show different degrees of cognitive bias toward the Internet and their children's use of it. The root of this solidified thinking is the catalysis of their self-awareness: when their children's behavior conflicts with their own perceptions, parents naturally act to block it because of their cognitive biases, so it can be said that parents' cognitive bias toward the Internet is the result of self-affirmation. Available research has explored many factors influencing adolescent Internet addiction and its development, but little applied research has been conducted in conjunction with psychology. Parental cognitive bias toward the Internet is a product of self-affirmation, and the causes of adolescent Internet addiction need to be explored from this perspective through empathy. In this study, we explore how self-affirmation influences the formation of parental cognitive bias toward the Internet and verify whether this process is mediated by empathy, in order to systematically understand the mechanism of adolescent Internet addiction.
Literature review

General self-efficacy and self-affirmation

Proposed by the psychologist Bandura, self-efficacy is defined as the extent or strength of one's belief in one's own ability to complete a task in a specific situation (Bandura, 1977). As a core variable in the individual self-belief system, self-efficacy is an important means by which individuals deal with trauma (Feeney and Collins, 2014). Self-efficacy plays a major role in the self-regulatory system. Individuals' perception of their own abilities affects their choice of activities, and such perception can, in many cases, also be a conscious manifestation of self-affirmation. First proposed by Steele, self-affirmation means that when individuals encounter external threats, they reevaluate their self-worth to maintain their integrity and security; such self-affirmation aims to alleviate the negative effect of threats on individuals (Steele, 1988). Generally, the effects of self-affirmation are constrained by two conditions: self-affirmation is effective only for individuals who are themselves threatened, and it has no effect on defense mechanisms that are already functioning (Easterbrook et al., 2021; Zhang et al., 2021). Self-affirmation theory holds that when an event or information that threatens self-integrity appears, the individual activates a self-defense mechanism and reallocates cognitive resources to processing the event. Limited by cognitive resources, a sense of self-affirmation hinders individuals' objective and unbiased perceptions of people or events (Sherman and Cohen, 2006; Khalid et al., 2021). This effect is particularly evident in the elder generation's perception of the Internet. Under their traditional self-knowledge system, the elder generation have certain prejudices about the Internet; when adolescents use the Internet, parents resort to criticism or confiscation of devices to restrict their use, and even expect such actions to forcibly sever the connection between their children and cyberspace, yet it is precisely this kind of cognitive bias that strains the parent-child relationship (Luqman et al., 2017; Hsieh et al., 2020).
From a macro perspective, the scope of research on and application of self-affirmation theory is constantly expanding, while from a micro perspective, self-affirmation theory can also be seen as an extension of research on empathy theory. Existing studies have explored the impact of self-affirmation theory on interpersonal relationships from the perspective of self-cognitive value (Hurst et al., 2020); in addition, researchers have verified a positive relationship between self-affirmation and pro-environmental behavior (Graham-Rowe et al., 2019). Studies on self-affirmation and empathy have shown that empathic processing is actually a concern for oneself, and that the effect of self-affirmation on empathy leads people to show more positive emotions and strength (Exline and Zell, 2009); another study on empathy showed that self-affirmation affects the likelihood of paying attention to and helping others out of empathy (Ministero et al., 2018); emotional and cognitive errors arising in multicultural contexts shape people's risk prevention and control behavior, which is guided by empathic factors to make choices out of self-protection and self-affirmation (Li and Katsumata, 2020; Yi et al., 2022). Based on the above background analysis, and to clarify the research direction concerning the formation of adolescent Internet addiction, this study introduces empathy theory as a potential theoretical support in order to explore the inner relationship between self-affirmation theory, the empathy process and cognitive bias, so as to systematically analyze the formation mechanism of adolescent Internet addiction.
There are varied opinions on the effects of self-affirmation and empathy. Empathy is an investigative tool used by individuals to gather emotional information through cognitive processes and emotion simulation (Davis, 1983). Current research on empathy focuses on three dimensions: emotional empathy, cognitive empathy, and affective bias. Specifically, emotional empathy refers to the ability to generate spontaneous vicarious emotional experiences of others' emotional feelings; cognitive empathy refers to the ability to recognize others' emotions and understand others' perspectives; and affective bias refers to the deviation of cognitive outcomes from objective reality, due to the influence of internal or external factors, when perceiving oneself, others or the environment (Vachon and Lynam, 2016). First, research on emotion regulation and positive behavior change through self-affirmation suggests that self-affirmation can help individuals reduce social anxiety symptoms and inspire them to achieve more positive emotions (Lakuta, 2020); a case study on smoking found that, compared with the control group, smokers with affirmed self-value gained more positive emotions and became more receptive to relevant information (Crocker et al., 2008). This demonstrates that self-affirmation affects individual emotions and moods. From a cognitive perspective, self-affirmation improves the efficiency of self-control; in other words, self-affirmation shifts the focus of individual attention to the value of the information itself in the process of changing self-recognition (Davis et al., 2016; Wang et al., 2020a). For instance, in a survey of how caffeine causes health problems, participants with self-affirmation had weaker defensive responses to the threat information than the control group, and they were more efficient in information collection and extraction, which facilitated their integration of the information (DiBello et al., 2015). In this era of information explosion, emotional fluctuation is an important driving force in the evolution of online public opinion. Parents can easily be influenced by objective factors such as opinion leaders, or by subjective factors such as herd mentality, owing to differences in self-affirmation and in awareness of social phenomena, resulting in affective bias in the reception of and feedback on information (Peters et al., 2014; Duan et al., 2022). In addition, parents' self-affirmation often drives a series of psychological activities, such as emotional empathy and cognitive biases, in the parent-child relationship (Huang et al., 2021; Masood et al., 2022). Accordingly, this study proposes the following research hypotheses:

H1a: The greater the sense of parental self-affirmation, the greater the occurrence of their emotional empathy.

H1b: The greater the sense of parental self-affirmation, the greater the occurrence of their cognitive empathy.

H1c: The greater the parental self-affirmation, the greater the occurrence of their affective bias.
Parent-child empathy relationship
Blind trust derived from family affection

The word empathy was first translated from German by the psychologist Edward Titchener in 1909. With the rapid development of information transmission, the concept of empathy has become a hot academic topic, and it is widely used in disciplines such as medicine, psychology, journalism and communication, and tourism (Kupetz, 2020; Yi et al., 2022; Wang et al., 2022). Transference has been widely applied in research related to intergenerational relationships between the elder and younger generations. Positive parent-child relationships and effective communication contribute not only to the normal functioning of the family but also to the development of adolescents, alleviating their discomfort during socialization (Tamarit et al., 2021; Uzun et al., 2021). Most existing studies have explored the kinship between parents and children through the phenomenon of left-behind children, focusing on the impact of intimate relationships on children's sense of security, well-being and psychological health (Niu et al., 2020; Li et al., 2022). With the continuous development of the Internet, communication between parents and children has gradually shifted from face-to-face interaction to the virtual world, which brings a trust crisis to the parent-child relationship; the conflict between parents' resistance to the Internet and children's use of it destroys family harmony, and adolescents' Internet addiction is thus aggravated (Wang et al., 2020b; Qiu et al., 2022). This negative impact from the crisis of trust, in turn, deepens parents' cognitive bias toward the Internet (Steinberg et al., 2021). Blind trust between parents and children arising from intimacy may also pose a potential threat to the intimate relationship between the two. When emotional empathy occurs, parents may show varying degrees of cognitive bias toward the Internet because of their adolescent's Internet addiction (Tian, 2016). Based on the above, the following hypothesis is proposed.
H2a: Parental emotional empathy impacts the emergence of cognitive biases.
Dialectical rational thinking
The Core Information and Interpretation of the Chinese Adolescent Health Education (2018 edition) defines Internet addiction (IA) as an uncontrolled urge to use the Internet in the absence of any addictive substance, which manifests as excessive exposure to the Internet leading to obvious impairment of academic and occupational performance and of social functioning. Addiction is a mental illness. Addiction in the traditional sense is very similar to excessive smartphone use, but the two can also be distinctly different: excessive cell phone use is a behavior, whereas addiction is a psychiatric disorder that is detrimental to physical and mental health. Extant research does not find a clear basis for treating smartphone dependence as an addiction, and although this behavior resembles a psychiatric disorder, the two cannot be fully equated (Panova and Carbonell, 2018). Strictly speaking, Internet addiction may only be an exaggerated expression of excessive Internet use. From an objective point of view, Internet socialization and video games are necessary for adolescents' daily interactions; the Internet is not only an assistant for adolescents' learning but also an important link with the outside world, and it can, to a certain extent and for short periods, help adolescents escape from the stress and pain brought by the real world (Tamarit et al., 2021). However, propelled by public opinion, Internet addiction has gradually become a concept constructed by parents to control adolescents' excessive exposure to the Internet and to justify parental intervention in their Internet use. In studies on cyber instruction, rational communication between parents and adolescents moderates the sensitive relationship between cyber instruction and adolescent Internet addiction: on the one hand, rational communication effectively reduces adolescents' susceptibility to Internet addiction; on the other hand, irrational communication leads to cognitive and emotional biases in adolescents (Xin et al., 2021). A survey of parents and children showed that although more than half of the parents believed that the content of the Internet was positive, they still classified the Internet as a forbidden place on account of its effect on learning, because the Internet was consciously viewed as harmful, and such a perception has been continuously solidified (Kuss and Griffiths, 2017; Wan et al., 2021). In conclusion, parents' unscientific perceptions continue to solidify under the group effect, giving rise to a bias against the Internet that can only be addressed by rational thinking from a dialectical perspective. Based on this, the study proposes the following hypothesis.
H2b: Parental cognitive empathy impacts the emergence of cognitive biases.
Egoistic habit
Egoism was first mentioned in Plato's The Republic; the word originates from the Latin ego and, through the Renaissance and the Enlightenment, underwent a long development, gradually coming to denote man's desire for and pursuit of humanity and human rights (Knez, 2016). Stirner, in The Ego and His Own, describes the development of the individual as ending with the rationalization of egoism, meaning that egoism is the ultimate and fundamental purpose of human behavior (Clulow, 2016). Furthermore, egoism regards the pursuit of self-interest as the nature of human beings. Thomas Hobbes once argued that individual behaviors are driven by personal interests, which take precedence over all attitudes to life and codes of conduct (Machan, 2016). Generally, there are two types of explanation of egoism in academia. One is psychological egoism, meaning that individuals always do what is most congruent with their own interests; the other is ethical egoism, implying that individuals place themselves on the commanding heights of moral life even when acting unjustly (Vries et al., 2010). Today, patterns of thought are being shaped by the market economy, and refined egoism, the satisfaction of personal interests in a more sophisticated and hidden way, has become a new trend of thought. With the increasingly close relationship between adolescents and the Internet, parents' egoistic thinking has gradually emerged in their views on the Internet and on their children's education. On the one hand, adolescent Internet addiction is closely related to the parents' educational approach. Following the above logic, it makes sense that people will always maximize their own interests.
For parents, reducing their children's exposure to the Internet so that the children conform to their expectations is based on their own cognition of the Internet (Setiawati et al., 2021). Parents will always stand on the moral high ground to control their children's thoughts and restrict their behavior from the perspective of maximizing their interests, which is inertial egoism at work. Moreover, under the condition of egoistic inertia, parents develop cognitive and emotional biases toward their adolescents' behavior on the Internet, which means that the parents' emotional investment changes (Avirbach et al., 2019), thus affecting the elder generation's perception of the Internet. When adolescents become overly dependent on the Internet and reduce communication with their parents, this series of acts leads parents to subjectively and emotionally categorize the Internet as an undesirable factor, which results in their cognitive bias toward the Internet. Based on this, the following hypothesis is proposed.
H2c: Parental affective bias impacts the emergence of cognitive bias.
Cognitive biases in the psychological empathy perspective
The term cognitive bias originated in psychology and was later incorporated into behavioral economics. Scholars have mostly adopted a bounded rationality perspective when exploring cognitive bias; Kahneman, a Nobel laureate, and Tversky held the view that thinking may be affected by a variety of unconscious biases owing to the bounded rationality of individuals, namely cognitive biases (Tversky and Kahneman, 1974). Behavioral psychologists have found, through extensive experimental investigations, that the actual decisions made by humans under conditions of uncertainty deviate from the behavior predicted by expected utility theory, and that people in decision making tend to exhibit bounded rationality, which is known as cognitive bias (Hertel and Mathews, 2011). There are various human cognitive biases: some are transient and can trigger more efficient actions in the moment; others may cause perceptual biases owing to individuals' limited information-processing ability and the incompleteness of their information, which may affect individuals' decision-making (Norman, 2014; Landucci and Lamperti, 2021; Xu et al., 2021). Cognitive bias is a subjective feeling of individuals affected by information. In the extant literature, studies on irrational biases such as optimism, jealousy, and narcissism are favored by scholars in various disciplines (Marshall et al., 2013). It follows that cognitive biases are actually generated by a combination of human perception and psychological drives.
As psychology continues to be applied in research related to intimate relationships, empathy has emerged as an important lens for exploring the characteristics of parent-child relationships and as a key factor in triggering certain attitudes and behaviors (Tillery et al., 2020). It has been shown that empathy is an important component in the generation of identity and attachment, which helps humans understand others' perspectives, needs, and intentions, and also enhances mutual trust (Yi et al., 2021). The formation of cognitive bias toward the Internet in the elder generation is inseparable from the individual's identification with group concepts, and in the Internet era the individual's sense of self-affirmation is reinforced through the reception and understanding of online information, resulting in prejudice toward the Internet (Balakrishnan and Griffiths, 2018). In the early stages of the formation of this cognitive bias, the stimulus response generated by psychological empathy occupies a pivotal position: the individual develops an imbalanced perception of the psychological factors absent from the existing situation and thus seeks self-regulatory recovery (Velvet, 2020). In family life, self-affirmation is inseparable from parents' empathic understanding of the Internet itself and of their children's use of it. Effective psychological empathy can correct parents' one-sided perceptions of the Internet, which raises the question of the mechanism by which empathy acts as a psychological transition variable. Meanwhile, it has been argued that when parents feel that the information they have is more valuable than the facts, their behavior is influenced by a sense of self-affirmation and affects their own value judgments about adolescent Internet addiction through mediating factors such as emotion, cognition, and mood (Kumar and Goyal, 2015; Wang and Yi, 2020; Zhang et al., 2021). As addressed above, empathy is a mediating mechanism in the process of driving behavioral choices and is also a mediating factor between the sense of self-affirmation and parental cognitive bias toward the Internet. Therefore, the following research hypotheses are proposed.
H3a: Parental cognitive bias toward the Internet is mediated by emotional empathy.

H3b: Parental cognitive bias toward the Internet is mediated by cognitive empathy.

H3c: Parental cognitive bias toward the Internet is mediated by affective bias.
Parent-child relationship and affective interaction
The ecosystem theory proposed by Bronfenbrenner (1986) argues that family is an integral part of the ecosystem and plays a key role in the development of the individual. Family is the first communication environment that an individual has
to deal with, and a sound parent-child relationship is a vital foundation for the individual's development and is conducive to the improvement of personal resilience. Previous studies on the parent-child relationship and traditional bullying have found that parental affect can predict whether children will be bullied, and that parent-child conflict is positively correlated with being bullied (Boniel-Nissim and Sasson, 2018). As the Internet has become indispensable today, the impact of the parent-child relationship on adolescent behavior has also migrated from the real world to the virtual world. Studies have shown that individuals with a harmonious parent-child relationship have better Internet literacy, and that sound parent-child attachment can, to a certain degree, inhibit the formation of Internet addiction in adolescents (Yusuf et al., 2020). Previous studies in the academic community have long understood the parent-child relationship as a one-way relationship of influence of parents on their children (Yoo et al., 2014), but in fact the parent-child relationship is a model of friendly interaction between parents and children. It includes not only the care and love of the parents for their children, but also the gratitude and respect of the children for their parents; moreover, it is a concentrated expression of the harmonious and warm atmosphere of the family (Trumello et al., 2021). A sound parent-child relationship is the manifestation of deep emotional interaction between the elder and younger generations. Parents can negotiate with children about the time they spend on the Internet to prevent them from becoming addicted to it. In addition, a sound parent-child relationship can implicitly shape correct values and ideological qualities in young people, so that they can stay alert and handle harms calmly when exposed to Internet abuse. Based on the above considerations, the following research model is formed (as shown in Figure 1, the research hypotheses model).
Research design
Respondents and procedure

In the information age, the relationship between adolescents and the Internet has become increasingly close. Against the background of the rapid development of the Internet in China, this study investigates the close connection between adolescents and the Internet. Developments in China show that the Internet has gradually become integrated into all aspects of adolescents' education, life, socialization, and entertainment. The Internet has become a major channel for adolescents to understand the world and a source of knowledge for their value formation. In contrast, parents mostly hold negative attitudes toward their children's Internet use. To further explore the causes of Internet addiction among domestic adolescents and parents' attitudes toward adolescents' Internet use, we conducted a pre-study and a formal study with domestic parents as survey respondents.
Before the formal research, a pre-survey of parents was conducted through field surveys and online distribution of questionnaires; 121 valid questionnaires were collected in the pre-survey stage after invalid ones were screened out. The PLS-SEM model was applied to analyze the pre-study samples. For the reliability and validity of the model, the current study adopted standardized results for analysis, as shown in Table 1. For self-affirmation, Cronbach's Alpha was 0.862, rho_A was 0.864, composite reliability (CR) was 0.901, and the average variance extracted (AVE) was 0.644. In the model testing process, the R² value of cognitive bias was 0.663 and the adjusted R² value was 0.656, indicating that the latent variables had strong explanatory power for cognitive bias. The Cronbach's Alpha coefficient of each latent variable in the study was greater than 0.8, indicating high reliability of each latent variable; the CR was greater than 0.7, further proving the high reliability of the model; and the AVE of each latent variable was greater than 0.5. In addition, the GOF value of the pre-study model, obtained from the GOF formula, was 0.46 (greater than 0.36), which proves the goodness of fit of the model. The above analysis showed that the overall goodness of fit of the model was excellent, the internal latent relationships had significant explanatory power, the estimation effect was acceptable, and the reliability of each variable was consistent with the construct validity. The formal survey was carried out on the basis of the pre-survey (see Figure 2). The formal survey included online and offline questionnaires. A total of 459 questionnaires were collected, including 287 online and 172 offline questionnaires. To ensure the validity of the questionnaire sources, invalid questionnaires, such as those completed within 1 minute, those with too much missing data, and those with inconsistent options, were eliminated. Finally, 407 valid questionnaires were obtained, an effective rate of 88.6%, which met the research requirements; the basic information of the samples is shown in Table 2 below. The demographic information of the interviewed parents revealed that, in terms of gender, the respondents included 177 males and 230 females, respectively accounting for 43.5 and 56.5% of the total, so the gender ratio was roughly balanced; in terms of age, respondents aged 30-39 years accounted for the largest proportion, at 43.5%; in terms of occupation, the self-employed and related practitioners accounted for the largest proportion; in terms of education, respondents with a bachelor's degree or a high school background accounted for the largest proportion, reaching 80.6%. The study applied the PLS-SEM model to verify the research hypotheses from the perspective of prediction.
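The reliability indices reported above (Cronbach's Alpha, CR, AVE) were obtained from the PLS-SEM software. The following sketch shows, with hypothetical item scores and loadings rather than the study data, how these standard quantities are defined:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

def composite_reliability(loadings):
    """CR from the standardized outer loadings of a reflective construct."""
    loadings = np.asarray(loadings, dtype=float)
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + (1.0 - loadings ** 2).sum())

def average_variance_extracted(loadings):
    """AVE: mean squared standardized loading."""
    loadings = np.asarray(loadings, dtype=float)
    return (loadings ** 2).mean()

# Hypothetical 5-point Likert responses to the five self-affirmation items (ZK1-ZK5).
scores = np.array([[4, 5, 4, 4, 5],
                   [3, 3, 4, 3, 3],
                   [5, 4, 5, 5, 4],
                   [2, 3, 2, 3, 3],
                   [4, 4, 5, 4, 4]])
loadings = [0.81, 0.78, 0.84, 0.79, 0.76]   # hypothetical standardized loadings
print(f"alpha = {cronbach_alpha(scores):.3f}")
print(f"CR = {composite_reliability(loadings):.3f}, AVE = {average_variance_extracted(loadings):.3f}")
```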
Scale selection
According to the research objectives, the parent-child empathy relationship involved in this study can be measured by the Affective and Cognitive Measure of Empathy (ACME) (Vachon and Lynam, 2016), which is superior to other empathy measures in terms of measurement accuracy; in addition, the current study selected the General Self-Efficacy Scale (GSES) as a reference on account of the essential properties of self-efficacy. The scale items related to "cognitive bias" in the questionnaire were designed and adapted from the cognitive bias scale of Gaag et al. (2013). The scales used for the empirical analysis in this study are shown in Table 3.
Empirical research
In past research, scholars at home and abroad have commonly constructed structural equation models by using the covariance structure of the observed variables as the modeling logic, i.e., covariance-based structural equation models (CB-SEM) over the latent variable constructs. The CB-SEM technique enables the intrinsic structural relationships between theoretical concepts to be detected, but it is extremely demanding in terms of model fit, so it is more suitable for testing purely theoretical models (Gefen et al., 2011). A study by Hair and colleagues confirmed that when the sample size is small and the data do not satisfy a normal distribution, the CB-SEM technique cannot test the relationships between variables from a prediction perspective, and it is difficult to form a complete structural equation model. To address the limitations of CB-SEM when testing models that are not purely theoretical, the partial least squares structural equation model (PLS-SEM), which focuses on the main structure of the variables rather than on overall model construction, is used instead; it is based on the least squares estimation method. This model mainly constructs the relationships between observed and latent variables through a system of linear equations, essentially a generalized linear model. It has been widely used by domestic and foreign scholars in recent years because of its ability to effectively address the covariance problem between observed variables and to reduce the impact of noise on the regression (Sarstedt et al., 2020). Therefore, SmartPLS is applied for data analysis in this study.
Reliability and validity analysis of structural plane
The R² values of the four variables cognitive empathy, affective empathy, affective bias, and cognitive bias in Table 4 are 0.521, 0.572, 0.531, and 0.698, all greater than 0.5, indicating that each latent variable is well explained. Meanwhile, the Cronbach's Alpha coefficient of each latent variable was greater than 0.7, indicating good reliability and internal consistency. The composite reliability (CR) of each latent variable met the requirement of being greater than 0.7, further supporting the reliability of the model. The average variance extracted (AVE) and rho_A of each latent variable are close to or greater than 0.5, meeting the evaluation criteria for PLS-SEM modeling proposed by Hair's team (Hair et al., 2019). In addition, Q² is a statistic that assesses the influence of exogenous variables on endogenous variables. The Q² values of cognitive empathy, affective empathy, affective bias, and cognitive bias are all close to 0.25, which meets the criterion and indicates that the exogenous variables have a substantial influence on the endogenous variables, that is, the model has strong predictive relevance. The Q² value of cognitive bias in Table 4 is 0.285, greater than 0.25, indicating that the exogenous variables have strong predictive relevance for this endogenous variable and that the predictive capability of the PLS model as a whole is strong. The GOF value, obtained from the formula GOF = √(communality × R²) with communality and R² averaged across constructs, is 0.43, indicating that the model has a strong goodness of fit.
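As a rough illustration of how a global GOF value is obtained, the sketch below applies the square-root formula to block communalities (AVEs) and R² values; the communality values plugged in are placeholders, not the study's per-construct statistics.

```python
import numpy as np

# Hypothetical per-construct communalities (AVEs) and the R2 values quoted above.
communalities = np.array([0.55, 0.60, 0.52, 0.58])
r_squared = np.array([0.521, 0.572, 0.531, 0.698])

gof = np.sqrt(communalities.mean() * r_squared.mean())
print(round(gof, 2))  # a value in the 0.5+ range with these placeholder communalities
```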
Study scale | Items' tags | Measurement question items
Self-affirmation | ZK1 | I can solve the problem if I try my best.
Self-affirmation | ZK2 | It is easy for me to stick to my ideal and reach my goal.
Self-affirmation | ZK3 | I am confident that I can deal effectively with anything unexpected.
Self-affirmation | ZK4 | I trust I can handle the problem and face the difficulty calmly.
Self-affirmation | ZK5 | When faced with problems, I can usually find several solutions.
Affective empathy | QG1 | I'm happy when my child needs my help.
Affective empathy | QG2 | When I think what I do is beneficial to my child, I will not worry much whether the relationship between my child and me will be affected.
Affective empathy | QG3 | When I think what I do is good for my child, I will not be affected by the opinions of others.
Affective empathy | QG4 | I think in most cases I can understand what my child thinks and does.
Affective empathy | QG5 | When I notice that what I do causes negative emotion in my child, I will stop immediately.
Affective empathy | QG6 | When my children are perplexed and upset, I often try my best to make them feel better.
Cognitive empathy | RG1 | I can figure out the circumstances when the child is scared.
Cognitive empathy | RG2 | I can tell at a glance when the child pretends to be happy.
Cognitive empathy | RG3 | I can generally understand why the child thinks this way about something.
Cognitive empathy | RG4 | I think it hard to figure out how the child feels about something.
Cognitive empathy | RG5 | I can generally predict the feelings of the child when something happens.
Cognitive empathy | RG6 | When the child gets angry, I can usually guess the reasons.
Cognitive empathy | RG7 | When the children are sad, I can notice the sadness from their faces.
Affective bias | QP1 | I am delighted with a sense of accomplishment when the child succeeds after following my arrangement and instruction.
Affective bias | QP2 | I am frustrated when my child does not follow my instruction.
Affective bias | QP3 | I hope the children can understand and recognize my care and efforts to them.
Affective bias | QP4 | I hope my child can behave as I expect.
Affective bias | QP5 | Since I'd not like to be thought irrational by the children, I will stop my action that they are against.
Cognitive bias | AP1 | I find that the children use the Internet more for entertainment (like games).
Cognitive bias | AP2 | I don't understand why the children are so dependent on the Internet.
Cognitive bias | AP3 | When the children use the Internet, I can't help paying attention to what they are doing.
Cognitive bias | AP4 | In order to ensure children have a safer environment for their growing up, I will be wary of their exposure to the Internet.
Loads analysis and collinearity analysis of model factor
To further explore the factor loadings and collinearity characteristics of the model and to ensure a reasonable internal and external structure of the formative measurement model, the study calculated the factor loadings and the collinearity VIF values. As can be seen from Table 5, all 20 indicators exhibit high factor loadings after orthogonal rotation, which indicates that the data structure is consistent with the model expectations. Because PLS-SEM is essentially a generalized linear model, the collinearity VIF value of each indicator needs to be checked; the higher the VIF value, the higher the level of collinearity in the model. If the VIF value is less than 3, the collinearity among the indicators of the formative measurement model is very low and at an ideal level, and the analysis can proceed. The VIF values of all indicators of the PLS-SEM model constructed in this study are less than 3, which meets the collinearity criterion proposed by Hair's team, so the path coefficient analysis of the PLS-SEM can be performed.
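The VIF screening can be reproduced outside SmartPLS by regressing each indicator on the remaining indicators of its block; in the sketch below the data matrix is randomly generated, so it only illustrates the computation, not the study's result.

```python
import numpy as np

def vif(block):
    """VIF for each column of an (n, k) indicator matrix."""
    X = np.asarray(block, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    n, k = X.shape
    vifs = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1 - resid.var() / y.var()
        vifs.append(1.0 / (1.0 - r2))
    return np.array(vifs)

rng = np.random.default_rng(0)
block = rng.normal(size=(200, 5))   # placeholder indicator scores
print(np.all(vif(block) < 3))       # the VIF < 3 criterion used in the text
```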
Path coefficient analysis
The Bootstrapping method was adopted to calculate the T statistic of each path coefficient; the specific parameters are shown in Table 6, and the significance of each path coefficient estimate was tested (two-tailed test). If 1.96 < T < 2.58, the path coefficient is significant at the 0.05 level; if 2.58 < T < 3.29, at the 0.01 level; and if T > 3.29, at the 0.001 level. The bootstrap T statistics of the structural equation model showed that all path coefficients had high T values and that the P-value of each path was less than 0.05, indicating that each path coefficient passed the test at the corresponding significance level and that the model structure is highly stable (Streukens and Leroi-Werelds, 2016).
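The logic of the bootstrap T test can be illustrated with a simple stand-in estimator: an OLS slope replaces the PLS path coefficient and the data are simulated, so the sketch below only mirrors the procedure SmartPLS runs with 5,000 resamples.

```python
import numpy as np

def boot_t(x, y, n_boot=5000, seed=1):
    """T statistic of a slope estimate: original estimate / bootstrap SD."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope = np.polyfit(x, y, 1)[0]
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))
        boots.append(np.polyfit(x[idx], y[idx], 1)[0])
    return slope / np.std(boots, ddof=1)

rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = 0.4 * x + rng.normal(scale=0.9, size=300)
t = boot_t(x, y)
print(t, t > 3.29)  # T > 3.29 corresponds to significance at the 0.001 level
```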
From the model results, hypotheses H1a, H1b, H1c, H2a, H2b, and H2c were supported in this study. For H1a, parents' self-affirmation has a positive influence on affective empathy with a path coefficient of 0.757, indicating that under the influence of self-affirmation parents develop affective empathy toward the Internet and adolescents' online activities. For H1b, parents' self-affirmation has a significant positive influence on cognitive empathy with a path coefficient of 0.722, indicating that parents' self-affirmation shapes their views and cognition of the Internet. For H1c, self-affirmation has a significant positive influence on affective bias with a path coefficient of 0.656, indicating that, driven by self-affirmation, parents tend to develop biased emotional responses to the Internet and to adolescents' online behavior, which contributes to a degree of prejudice in their cognition of the Internet; this effect on affective bias is, however, weaker than the effects on affective and cognitive empathy. For H2a, parental affective empathy influences the generation of cognitive bias with a path coefficient of 0.311. For H2b, parental cognitive empathy influences the generation of cognitive bias with a path coefficient of 0.206. For H2c, parental affective bias influences the generation of cognitive bias with a path coefficient of 0.396. Among these three effects on cognitive bias, affective bias has the most pronounced effect, indicating that when parents react impatiently to the Internet and to adolescents' Internet use, their cognitive bias toward the Internet deepens and their prejudice against the Internet becomes strongest.
Analysis of specific indirect effects
To verify the hypothesized mediating relationships, the study used the Bootstrapping algorithm with 5,000 random resamples to examine the specific indirect effects. For "Self-affirmation → Affective empathy → Cognitive bias," the original sample (O) coefficient was 0.235, the sample mean (M) coefficient was 0.232, the standard deviation (STDEV) was 0.055, the T statistic was 4.262, and the P-value was 0.000. For "Self-affirmation → Affective bias → Cognitive bias," the original sample (O) coefficient was 0.260, the sample mean (M) coefficient was 0.260, the standard deviation (STDEV) was 0.039, the T statistic was 6.728, and the P-value was 0.000. For "Self-affirmation → Cognitive empathy → Cognitive bias," the original sample (O) coefficient was 0.148, the sample mean (M) coefficient was 0.154, the standard deviation (STDEV) was 0.059, the T statistic was 2.525, and the P-value was 0.012. The results in Table 7 show that all three mediated paths have P-values below 0.05 and 95% confidence intervals that do not include zero, indicating that the mediation hypotheses hold. Hypotheses H3a, H3b, and H3c were therefore supported: parents' cognitive bias toward the Internet under self-affirmation is transmitted through the three mediators of affective empathy, cognitive empathy, and affective bias, with the indirect effect through affective bias being the largest and that through cognitive empathy the smallest.
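A specific indirect effect is the product of the two path coefficients along the mediated route, and its significance is usually judged from a percentile bootstrap interval; the sketch below shows this with simulated data and OLS stand-ins (the b path is not adjusted for the predictor here), so the numbers are illustrative only.

```python
import numpy as np

def indirect_effect_ci(x, m, y, n_boot=5000, seed=2):
    """Percentile 95% CI for the indirect effect a*b of x -> m -> y."""
    rng = np.random.default_rng(seed)
    x, m, y = (np.asarray(v, float) for v in (x, m, y))
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))
        a = np.polyfit(x[idx], m[idx], 1)[0]   # x -> m path
        b = np.polyfit(m[idx], y[idx], 1)[0]   # m -> y path (simplified, not adjusted for x)
        boots.append(a * b)
    return np.percentile(boots, [2.5, 97.5])

rng = np.random.default_rng(0)
x = rng.normal(size=300)
m = 0.6 * x + rng.normal(scale=0.8, size=300)
y = 0.5 * m + rng.normal(scale=0.8, size=300)
lo, hi = indirect_effect_ci(x, m, y)
print(lo, hi, not (lo <= 0 <= hi))  # a CI excluding zero indicates mediation
```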
Conclusion
Based on an exploration of the causes of adolescent Internet addiction and a systematic review of theories of self-affirmation, empathy, and cognitive bias, this paper analyzes the mechanisms underlying the generation of cognitive biases in parents' perceptions of the Internet and adolescent Internet behavior from a modern Internet perspective. The study applies SmartPLS to validate the theoretical model proposed in this paper, and the results show that parents' self-affirmation has a positive influence on the occurrence of empathy. Driven by a sense of self-affirmation, parents develop subjective affective and cognitive tendencies toward the Internet and their adolescents' access to the Internet. At the same time, the occurrence of empathic processes also enriches parents' knowledge about the Internet and adolescents' Internet access. In addition, the study verified that parents' self-affirming consciousness is an antecedent of the generation of cognitive biases and that this process is influenced by the mediation of empathy, suggesting that when parents' sense of self-affirmation is stimulated, the mediation of empathy intensifies their cognitive biases.
In the context of the rapid development of the Internet, the trend toward convenient Internet use is inevitable, but parents must go through rational analysis and judgment in their perception of the relationship between the Internet and their adolescents' access to the Internet and establish a good parentchild relationship in order to promote parents' and adolescents' understanding of each other.
Theoretical significance
The rapid development of the Internet has shifted the connection between adolescents and others from the real world to the virtual world, which also affects parent-child relationships. It is essential to explore the causes of adolescents' Internet addiction and promote the stable development of parent-child relationships. Theoretically, the current study is of considerable significance. On the one hand, while reviewing theories of self-affirmation, empathy, the parent-child relationship, and other related constructs, this paper innovatively applies the idea of cognitive bias and explores the causes of adolescent Internet addiction from this perspective. The study introduces the concept of empathy from psychology, which departs from previous studies focusing on family and social factors (Chen X. et al., 2020), and points out parental habits of blind self-trust, a lack of dialectical and rational thinking, and excessive self-interest in families under the impact of Internet information, thereby broadening the research horizon. On the other hand, this study examines how self-affirmation influences parents' Internet cognitive bias with empathy as a mediating factor, and tests the mediating utility of the empathy process, thus validating previous perspectives (Amiri and Jamali, 2019). It both enriches the research related to empathy and the causes of adolescent Internet addiction and provides a new perspective for other scholars to conduct related research in the future.
Practical significance
While enjoying the convenience of the Internet, we should also pay attention to its impact on the healthy growth of adolescents. This study explores the causes of adolescent Internet addiction from the perspective of parental self-affirmation, with the following practical significance. First, the findings suggest that increased parental self-affirmation deepens parents' cognitive bias toward the Internet and their adolescents' access to the Internet; to mitigate the effects of such cognitive bias, parents should actively change their inherent thinking about the Internet and view it from a more comprehensive and rational perspective. Second, the study shows that the empathic relationship between parents and children affects the degree of parents' cognitive bias, and that a good parent-child relationship can alleviate this bias, which also reveals, to a certain extent, the necessity of maintaining a harmonious parent-child relationship. Furthermore, the research offers guidance for the family education of adolescents who are addicted to the Internet: only by helping both parties to build bridges of family communication and enhance their mutual understanding can a good family education atmosphere be created.
Limitations
This study has enriched the research on the causes of adolescent Internet addiction, but there are still deficiencies that call for further exploration and improvement. First, the data collected in this study vary in quality. Parents' cognition of the Internet and their attitudes toward adolescents' access to the Internet are heterogeneous under the influence of factors such as family background, educational level, and ideological concepts. Future research can therefore attempt to collect more extensive, targeted, and higher-quality data to further explore and verify the causes of adolescent Internet addiction under parents' self-affirmation consciousness. Second, this study used a questionnaire survey to verify the research hypotheses one by one; however, as the data analyzed are cross-sectional, it is difficult to observe the dynamic process by which cognitive bias against the Internet forms under parents' self-affirmation consciousness. Longitudinal studies can therefore be conducted in the future to further explore the formation patterns and characteristics of parents' cognitive bias, so as to provide feasible suggestions for parents to establish a correct cognition in the Internet era and to create a sound parent-child relationship for the healthy growth of adolescents.
Data availability statement
The original contributions presented in this study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
Ethics statement
The studies involving human participants were reviewed and approved by Nanchang Institute of Technology, China. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
Author contributions
MZ contributed to the empirical work, the analysis of the results, and the writing of the first draft. JZ and HZ supported MZ's work. ZZ contributed to overall quality control and supervised the literature organization. GJ contributed to developing the research hypotheses and revised the overall manuscript. All authors discussed the results and commented on the manuscript.
Funding
This study was supported by the funding of Science and Technology Research Project of Jiangxi Provincial Department of Education (GJJ201918).
Self-organization of modular network architecture by activity-dependent neuronal migration and outgrowth
The spatial distribution of neurons and activity-dependent neurite outgrowth shape long-range interaction, recurrent local connectivity and the modularity in neuronal networks. We investigated how this mesoscale architecture develops by interaction of neurite outgrowth, cell migration and activity in cultured networks of rat cortical neurons and show that simple rules can explain variations of network modularity. In contrast to theoretical studies on activity-dependent outgrowth but consistent with predictions for modular networks, spontaneous activity and the rate of synchronized bursts increased with clustering, whereas peak firing rates in bursts increased in highly interconnected homogeneous networks. As Ca2+ influx increased exponentially with increasing network recruitment during bursts, its modulation was highly correlated to peak firing rates. During network maturation, long-term estimates of Ca2+ influx showed convergence, even for highly different mesoscale architectures, neurite extent, connectivity, modularity and average activity levels, indicating homeostatic regulation towards a common set-point of Ca2+ influx.
Introduction
Modularity is a fundamental design principle of neuronal systems and exists at the scale of cellular compartments, local circuits or interconnected brain areas. From a structural perspective, modularity can arise from inhomogeneities in the physical substrate that facilitate connectivity within a group of functional entities versus connectivity between such groups.
At the mesoscale level of local circuits, the cerebral cortex is organized in local clusters of tightly interconnected neurons (Feldman and Peters, 1974;Skoglund et al., 2004) that share common inputs and targets (Bosking et al., 1997;Voges et al., 2010), have similar functional properties (Ringach et al., 2016) and are thought to constitute a basic computational module (Buxhoeveden and Casanova, 2002;Casanova and Casanova, 2019;Mountcastle, 1997).
Although cortical architecture is largely genetically predefined at this level, blocking electrical activity during development disturbed the characteristic clustering of connections, suggesting that activity-dependent self-organization influences network modularity (Durack and Katz, 1996;Ruthazer and Stryker, 1996;Thompson, 1997). Intriguingly, computational models predict that modular connectivity, in turn, promotes spontaneous activity (Kaiser and Hilgetag, 2010;Klinshov et al., 2014;Mazzucato et al., 2015). Modularization and spontaneous activity may thus co-evolve in a self-enhancing process.
In early postnatal development, neuronal migration and neurite outgrowth are regulated by activity-dependent changes of the intracellular Ca 2+ concentration [Ca 2+ ] i (Kater and Mills, 1991;Komuro and Kumada, 2005;Spitzer, 2006;Zheng and Poo, 2007), suggesting that morphodevelopmental processes contribute to cellular Ca 2+ homeostasis (Zündorf and Reiser, 2011). Put simply, neurons would grow to increase neurite field overlap and the corresponding synaptic connectivity (Kossio et al., 2018;Shepherd et al., 2005;Stepanyants et al., 2002;Tetzlaff et al., 2010;van Ooyen et al., 1995) to establish the level of spike activity necessary to achieve some target value of [Ca 2+ ] i . As inter-neuron distance strongly affects the overlap of neurite fields and thus connectivity (Barral and D Reyes, 2016;Schnepel et al., 2015;Seeman et al., 2018), spatial clustering of neurons may play an important role in shaping modularity (Hernández-Navarro et al., 2017).
In the current study, we focus on the developmental self-organization that leads to different network architectures. In a simple computational model, varying the ratio of activity-dependent homeostatic growth versus migration was sufficient to modify neuronal clustering, mesoscale network organization, and the degree of modularity. Since controlled manipulation of network architecture and simultaneous activity monitoring is impractical in vivo, we tested this developmental interaction by modifying growth and migration in networks of cortical neurons in cell culture. These networks recapitulate major developmental processes such as cell migration and neurite outgrowth (Guan et al., 2007;van Huizen et al., 1987;van Pelt et al., 2004), develop varying degrees of clustering (Kriegstein and Dichter, 1983;Okujeni et al., 2017;Soriano et al., 2008;Teller et al., 2014) and produce a rich repertoire of spontaneous bursting events (SBE) (Kamioka et al., 1996;Okujeni et al., 2017;Wagenaar et al., 2006), similar to the developing cortex (Golshani et al., 2009;Minlebaev et al., 2007).
Increasing PKC activity in cultured networks amplified cell body clustering and local neurite entanglement at the expense of long-range connections, promoting local burst initiation and average firing rate (AFR) but reducing network recruitment during SBEs (Okujeni et al., 2017;Okujeni and Egert, 2019). This supports the theoretical predictions for modular networks mentioned above and is consistent with results from clustered networks created by mechanical constraints or modified growth substrates (Bisio et al., 2014;Tibau Martorell et al., 2018;Yamamoto et al., 2018).
Irrespective of network architecture, activity stabilized after approximately 21 days in vitro (DIV), suggesting that the target of homeostatic network development had been achieved. Different AFRs at this stage, however, conflict with previous studies assuming that AFR development reflects the homeostatic regulation of [Ca 2+ ] i (Abbott and Rohrkemper, 2007;Kossio et al., 2018;van Ooyen et al., 1995). Ca 2+ -influx, however, exponentially increases with membrane depolarization (Mazzanti et al., 1992) and thus depends on the temporal structure of spike activity. Our findings suggest that because of this non-linearity and specific differences in network-wide peak firing rates (PFR), long-term average Ca 2+ influx converges despite different AFRs and connectivity. Migration and neurite growth thus interact in a homeostatic process that defines the mesoscale architecture of neuronal networks.
Results
The connectivity between neurons depends on the overlap of their neurite fields and on their spatial distribution in the network. Like neurite growth, however, this distribution is dynamic because neurons migrate even in postnatal development. In a recurrent network, the input a neuron receives then depends on its embedding as well as the network's overall connectivity and activity structure. Here, we investigated how activity-dependent neurite growth and migration interact to establish connectivity and activity in neuronal networks.
Simulating activity-dependent neurite growth and migration
To gain insights into interdependencies between neurite growth and neuronal migration during the activity-dependent network self-organization, we extended a network growth model introduced by van Ooyen et al. (1995) that reproduces the outgrowth and subsequent pruning of neurites reported for developing neuronal networks (van Huizen et al., 1987;van Pelt et al., 2004). Following this, neurons were initially randomly seeded on a torus and their interconnectivity was modeled as degree of overlap between their circular neurite fields (no distinction was made between axons and dendrites). Input to neurons was calculated as the product of presynaptic firing rates and respective connectivity. A sigmoidal transfer function governed the relation between input-dependent membrane potential depolarization and firing rate ( Figure 1A). A growth process superimposed onto this framework allowed neurons to adjust their input by growing or shrinking their neurite fields, and thus the overlap with other fields, to establish a defined target firing rate ( Figure 1B,C). In addition to neurite growth, the final phase of neuronal migration observed in postnatal development is modulated by network activity and thus interacts with the formation of neurite fields and the regulation of connectivity. We therefore extended the original framework of the model by adding activity-dependent migration, where neuron somata migrated in the direction of the strongest input and gradually slowed down as their firing rates converged to the target level ( Figure 1B,D). In contrast to the bidirectional modulation of neurite fields, neurons were not repelled, however, if the activity level was above target. Prior to the formation of first contacts, migration was determined by erratic movements only. Neurons could thus increase their input by extending neurites and by migration to increase the overlap of neurite fields. The relative contribution of migration in network formation herein depended on its rate in relation to the net rate of neurite extension or pruning.
Migration and neurite outgrowth shape network architecture
Initially, neurite outgrowth ( Figure 2A) and migration ( Figure 2B) did not depend on activity. Once neurite fields began to overlap, directed migration towards areas that provided more input amplified statistical variations in the local cell density and led to clustering, indicated by decreasing clustering index (CI, Figure 2C). CI was calculated as the ratio between the average nearest neighbor distance in a network and the expected average nearest neighbor distance for random networks. CI above one indicates grid-like cell body arrangements and CI below one indicates clustering. Increasing clustering promoted connectivity buildup ( Figure 2D) and thus input to a neuron ( Figure 2E), which advanced the onset of spontaneous network activity ( Figure 2F). Migration and clustering of neurons ceased with the steep onset of network activity ( Figure 2B,C,F). In homogeneous networks, neurite fields had to grow larger than in clustered networks to establish the same degree of overlap and thus connectivity (Figure 2A,D). As a result, the size of neurites in mature networks correlated negatively with the degree of neuronal clustering (Figure 2-figure supplement 1). Connectivity, input activity and firing rates eventually converged to the same levels for different migration conditions ( Figure 2D-F).
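The clustering index used here corresponds to the Clark-Evans ratio: the observed mean nearest-neighbour distance divided by the value expected for a random (Poisson) arrangement of the same density, 1/(2√ρ). A minimal sketch, with uniformly random points standing in for neuron positions and boundary effects ignored:

```python
import numpy as np
from scipy.spatial import cKDTree

def clustering_index(positions, area):
    """Mean nearest-neighbour distance relative to the random expectation."""
    pts = np.asarray(positions, dtype=float)
    tree = cKDTree(pts)
    d, _ = tree.query(pts, k=2)          # k=2: the first neighbour is the point itself
    observed = d[:, 1].mean()
    density = len(pts) / area
    expected = 1.0 / (2.0 * np.sqrt(density))
    return observed / expected           # <1 clustered, ~1 random, >1 grid-like

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1000, size=(500, 2))     # 500 "neurons" on a 1 mm^2 field (um)
print(clustering_index(pts, area=1000 * 1000))
```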
Varying the rate of migration crucially impacted on the overall architecture of developing networks ( Figure 2I, Figure 2-videos 1-4). Without migration, networks developed the most homogeneous neurite field diameters and neurite coverage ( Figure 2G). Clustering led to more variable neurite field diameters as more isolated neurons required large fields to receive sufficient input, whereas within dense clusters, strongly overlapping neurite fields remained small.
The evolution of the largest connected subnetwork, that is the giant component, suggested that full network connectivity was established along the same developmental time line, irrespective of the degree of clustering ( Figure 2H, inset). In clustered networks, however, individual neurons played an important role in bridging subnetworks ( Figure 2I, arrows in the bottom panel). To quantify the tendency for modularity with different architectures, we calculated the giant component remaining after removing increasing subsets of randomly selected neurons in mature networks ( Figure 2H). In clustered networks, the giant component shrunk faster with an increasing fraction of neurons removed, demonstrating that individual neurons became critical bottlenecks in connectivity. Increasing activity-dependent migration relative to neurite growth thus increased the modularity (Q, Figure 2I) of the network.
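The random-deletion analysis of the giant component can be sketched as follows; a toy geometric graph stands in for the model's overlap-based connectivity, so the curve only illustrates the procedure, not the reported values.

```python
import numpy as np
import networkx as nx

def giant_fraction(G):
    """Fraction of nodes in the largest connected component."""
    if G.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

def robustness_curve(G, fractions, n_rep=20, seed=0):
    """Mean giant-component fraction after random removal of node subsets."""
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes)
    curve = []
    for f in fractions:
        vals = []
        for _ in range(n_rep):
            keep = rng.choice(nodes, size=int(round((1 - f) * len(nodes))), replace=False)
            vals.append(giant_fraction(G.subgraph(keep)))
        curve.append(np.mean(vals))
    return curve

# Toy geometric network: nodes connect when closer than a threshold distance.
G = nx.random_geometric_graph(300, radius=0.08, seed=1)
print(robustness_curve(G, fractions=[0.0, 0.2, 0.4, 0.6]))
```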
Mesoscale network architecture in vitro
The growth model suggested that spatial clustering of neurons during development could play a crucial role in the formation of network connectivity by influencing the probability of neurites to overlap during outgrowth. We assessed this dependence experimentally by chronic activation or inhibition of PKC (PKC + and PKC − respectively), a regulator of neuronal migration, in developing networks of cortical neurons in cell culture. As described previously (Okujeni et al., 2017), PKC manipulation significantly altered the mesoscale architecture of networks with 600-800 neurons/mm² (Figure 3A), with striking similarity to mature networks generated with the growth model. Under control conditions (PKC N networks), networks appeared as inhomogeneous density landscapes with both clustered and sparse regions (Figure 3A, center panel). In particular in clustered areas, neurites formed tangles, which would increase the probability of local connections. Axons spanning several millimeters indicated monosynaptic connections between distant network regions. In comparison, PKC − networks with diminished migration had a more homogeneous distribution of cell bodies and coverage with dendrites and axons (Figure 3A, left panel). Reduced fasciculation of neurites and a high density of long-range axons suggested a more isotropic embedding of neurons and more random-like connectivity. In turn, PKC + networks with enhanced migration had well delineated clusters of about 30-60 neurons with dense tangles of neurites that rarely reached into neighboring clusters (Figure 3A, right panel), indicating high local connectivity and reduced inter-cluster connectivity.
Figure 1. Model of activity-dependent network development. Neuronal wiring strategies may involve expansion of neurite fields and migration towards other neurons to increase connectivity, modeled as neurite field overlap. (A) Transfer function of membrane depolarization between resting and maximal potential to firing rates. Dotted line: target firing rate. (B) Neurite growth (orange) and migration (green) were modulated as a function of [Ca2+]i that corresponded to average firing rates. Neurites grew while the firing rate (corresponding to long-term average Ca2+ influx) was below target and were pruned when above it. Migration rate decreased as neurons approached the target firing rate (dotted line). (C) The area of neurite field overlap, corresponding to connectivity in the model, can be increased by neurite outgrowth and by neuronal migration towards neighboring neurons (D). DOI: https://doi.org/10.7554/eLife.47996.002
Figure 2. Model of activity-dependent growth and migration. (A) Activity-dependent growth produced a characteristic overshoot and subsequent pruning of neurite fields. The overall size of developing neurites decreased with increasing migration rates and clustering. (B) Mean migration distance of neurons after seeding (smoothed by 1 hr sliding average). (C) Migration promoted clustering of neurons, which saturated with the onset of network activity and neurite pruning (curves smoothed by 1 hr sliding average). All networks were initialized with the same spatial cell body distribution with CI close to 1. Note that the fluctuations for zero migration result from the random jittering of neuron positions by half the cell body radius (6 µm). (D) Average connectivity increased more rapidly with stronger migration and clustering. (E) Input increased faster with increasing migration rate because clustering initially promoted connectivity. Input levels eventually converged. (F) Firing rates increased sharply once critical input levels were attained. Migration and clustering accelerated the onset of activity. With increasing migration, steps arise because of incremental integration and activation of clusters within the larger network. Note that clustering reduced the developmental overshoot of firing rates. (G) Moderate migration and clustering produced the highest variability of neurite field size across neurons in mature networks. (H) High migration rates increased modularity in mature networks. With increasing migration rate, the giant component decreased more rapidly in clustered networks when a certain fraction of neurons was randomly deleted, indicating that these networks break into disconnected subnets. Inset: the fraction of neurons in the giant cluster, that is the largest connected subnetwork, evolved similarly in different migration conditions. (I) Migration rates crucially determined the mesoscale architecture and modularity (increasing Q indicates stronger modularity) of developing networks. While average neurite fields were small in clustered networks, more isolated neurons generated larger fields (arrows) and formed bottlenecks for activity propagation by connecting otherwise unconnected or weakly connected subnetworks. DOI: https://doi.org/10.7554/eLife.47996.003
Cell migration promotes neuronal clustering
To quantify the structural development, we seeded networks at lower densities of about 300 neurons per mm² that were more suitable for morphometric analyses (Figure 3-figure supplement 1). Within the first day of random seeding of neurons, rapid neurite outgrowth resulted in overlapping neurite fields between neighboring neurons. Simultaneously, neuronal cell bodies migrated across the substrate. Neuronal migration with concurrent outgrowth of neurites gradually increased neuron clustering within about three weeks in vitro (Figure 3B). Chronic manipulation of PKC activity differentially modulated neuronal clustering during development (Table 1). At 22 DIV, clustering was moderate in PKC N networks (CI = 0.75 ± 0.03) but significantly increased in the PKC + networks (CI = 0.67 ± 0.02, p=3.3×10⁻²) and significantly reduced in the PKC − networks (CI = 0.88 ± 0.01, p=4.4×10⁻⁴). CI did not change significantly after 22 DIV, indicating cessation of neuronal migration.
Note that the spatial patterning of somata depended on neuron density. Clusters in dense networks (~700 neurons/mm² at >22 DIV) typically contained 30-60 neurons (Okujeni et al., 2017).
Clustering diminishes dendrite outgrowth
To address the interaction of neurite field extension, migration and clustering, we analyzed the average size of dendrites at several time points during development (Figure 3C). Dendrite size was quantified as the ratio between the total length of detected dendrite stretches and the number of neurons within regions of interest (Table 1). The measure estimates the average contribution of each neuron to the dendritic mesh. Chronic manipulation of PKC activity had little impact on dendrite size up to 8 DIV but significantly modulated dendrite outgrowth during subsequent development. At 22 DIV, dendrite size was significantly increased in the more homogeneous PKC − networks but significantly reduced in the more strongly clustered PKC + networks (PKC N: 1021 ± 41 µm; PKC −: 1413 ± 64 µm, p=7.9×10⁻⁵; PKC +: 816 ± 24 µm, p=3.6×10⁻⁴). In all conditions, dendrite size did not change significantly between 22 and 29 DIV, indicating stabilization of the dendritic network after the third week in vitro. As in the model (Figure 2-figure supplement 1), dendrite size in mature networks was negatively correlated with the degree of cell body clustering and, thus, the distance between neurons (Figure 3D).
Dendrite outgrowth promotes synaptic connectivity
Network connectivity requires neurite overlap but further depends on the probability by which synapses are realized at axo-dendritic intersections. To assess how synaptic connectivity evolved in the different PKC conditions, we stained and detected presynaptic boutons (Figure 3-figure supplement 2) and determined the synapse density as the average number of presynaptic boutons per neuron (Figure 3E, Table 1) and the dendritic occupancy as the number of synapses per unit dendrite length (Figure 3F, Table 1). Manipulating PKC activity had no significant influence on early synaptogenesis up to 8 DIV, consistent with the comparable dendrite density in different PKC conditions at this stage. Paralleling dendritic outgrowth, synapse density increased significantly with increasing dendritic occupancy between 8 and 22 DIV in all conditions. Synapse densities and dendritic occupancy subsequently decreased between 22 and 29 DIV; this reduction was not significant in PKC − networks, however. Developmental manipulation of PKC activity profoundly affected mature synapse densities (PKC N: 1114 ± 56; PKC −: 2019 ± 110, p=4.5×10⁻⁸; PKC +: 669 ± 21, p=5.7×10⁻⁸) and dendritic occupancy (PKC N: 1.19 ± 0.04 µm⁻¹; PKC −: 1.47 ± 0.04 µm⁻¹, p=3.0×10⁻⁵; PKC +: 0.9 ± 0.04 µm⁻¹, p=2.3×10⁻⁵) at 29 DIV, both of which were significantly increased in the PKC − and reduced in the PKC + condition. Similar to dendrite densities, synapse densities were thus negatively correlated with the degree of clustering. Across PKC conditions and developmental stages, synapse densities scaled approximately quadratically with the dendrite size (Figure 3G), which could result from similarly modulated axonal densities (Okujeni et al., 2017) and the corresponding multiplicative increase in intersection probability.
Figure 3 (caption, partial). In late development, dendrite size scaled inversely with the degree of clustering. For visualization, the CI axis was inverted, so the degree of clustering increases from left to right. (E) The synapse density increased concurrently with dendrite growth. After 22 DIV synapse densities decreased in PKC N and PKC + networks, indicating synaptic pruning. (F) Dendritic occupancy with synapses differed slightly between conditions and decreased after 22 DIV. (G) The number of synapses per neuron increased with the dendrite size. Gray lines connect networks of the same age. The blue line illustrates a proposed quadratic scaling rule between dendrite size and synapse densities. (H) Neuron density declined with DIV to about one third of the seeding density. (I) Estimated upper bounds for connectivity based on the synapse density and the total number of neurons (on 113 mm² cover slips). PKC − at least doubled average connectivity. (J) In mature networks, maximum connectivity scaled inversely with clustering. All parameters are presented as mean ± SEM. Data from 4 to 24 images (Table 1, area 3.5 mm²) taken in each of 2 networks per condition and age. Asterisks indicate p-values 0.05 (*), 0.01 (**) and 0.001 (***) tested against PKC N. DOI: https://doi.org/10.7554/eLife.47996.010
Table 1 (excerpt). Morphometric analysis of network development under different PKC conditions. Results are presented as mean ± standard error of mean (SEM). Significance was determined against PKC N, or between specified developmental time windows, using independent Student's t-test. N specifies the number of analyzed images taken from two networks per PKC condition and age. Neuron density (#/mm²): 8 DIV: 255 ± 6 (p=9.6×10⁻⁷), 185 ± 11, 168 ± 7 (p=2.0×10⁻¹); 15 DIV: 214 ± 9 (p=5.9×10⁻⁴), 131 ± 17, 158 ± 12 (p=2.0×10⁻¹); 22 DIV: 107 ± 8 (p=1.9×10⁻¹), 123 ± 6, 85 ± 6 (p=5.7×10⁻⁴); 29 DIV: 87 ± 5 (p=3.0×10⁻¹), 96 ± 7, 77 ± 4 (p=2.6×10⁻²).
Clustering reduces maximum global connectivity
Network connectivity is limited by the number of synapses per neuron and the overall number of neurons in a network since neurons obviously cannot have more partners than they have synapses.
The ratio between the number of synapses per neuron and the total number of neurons in the network defines an upper bound of connectivity for a network (maximum connectivity). The degree of connectivity realized, however, could be lower because of multiple structural synapses between neuron pairs. Although the density of neurons decreased during early development (Figure 3H, Table 1), maximum connectivity increased significantly in all conditions between 8 and 22 DIV (Figure 3I) and saturated between 22-29 DIV. At the same time, maximum connectivity almost doubled in PKC − networks compared to PKC N networks but was significantly reduced in PKC + networks (PKC N: 0.12 ± 0.01; PKC −: 0.23 ± 0.03, p=9.2×10⁻⁴; PKC +: 0.08 ± 0.01, p=3.8×10⁻²) and thus was negatively correlated with the degree of clustering (Figure 3J).
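To make the upper-bound estimate concrete, the short calculation below divides the number of boutons per neuron by the total number of neurons on a 113 mm² cover slip; the plugged-in values are rounded, order-of-magnitude numbers in the range reported here, not exact measurements.

```python
# Rough, illustrative estimate of the connectivity upper bound.
synapses_per_neuron = 1100        # approx. boutons per neuron (control-like network)
neuron_density = 90               # approx. neurons per mm^2 in mature networks
coverslip_area = 113              # mm^2
total_neurons = neuron_density * coverslip_area
max_connectivity = synapses_per_neuron / total_neurons
print(round(max_connectivity, 2))  # ~0.11, i.e. roughly 10% of all possible partners
```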
Clustering decreases PFR and depolarization during SBEs
The hypothetical set-point of the homeostatic process, however, is not the firing rate per se but the associated [Ca2+]i (Mattson and Kater, 1987), which is linked to molecular processes involved in growth and migration. Ca2+ influx increases supra-linearly with increasing membrane depolarization (Mazzanti et al., 1992). This suggests that the long-term Ca2+ gain is not a linear function of AFR but depends on the depolarization of the membrane potential and thus on the temporal structure of activity. Depolarization depends on the number and synchronization of excitatory synaptic inputs, which become maximal during the peak phase of SBEs. Simultaneous intracellular and extracellular recordings showed that higher SBE strength was indeed associated with stronger depolarization during SBEs (Okujeni et al., 2017). In PKC − networks, membrane depolarization far above spiking threshold frequently led to a depolarization block that outlasted the spike burst (Figure 4D, top trace). The fraction of time spent above threshold (−40 mV, Figure 4E) was significantly larger in neurons of PKC − networks (5.2 ± 0.7%, p=1.7×10⁻⁴, N = 30 neurons; mean ± SEM, independent Student's t-test) than in PKC N (1.7 ± 0.5%, N = 24) and PKC + (1.2 ± 0.7%, p=1.2×10⁻³, N = 24) networks (14-23 DIV). Depolarization was therefore not necessarily correlated with the individual firing rate of a neuron or the AFR in the network but rather reflected the network PFR during SBEs.
Homeostatic regulation of growth by long-term Ca 2+ influx
To assess how Ca2+ influx depends on PFR, we determined the amplitude of Ca2+ transients in excitatory neurons expressing GCaMP under the CAMKII promotor while simultaneously recording SBEs with MEAs (Figure 5A). Most neurons indeed showed an exponential relation between PFR and the amplitude of Ca2+ transients (Figure 5B). PKC − networks realized much higher PFRs and had somewhat smaller exponents than PKC N (PKC N 0.12 ± 0.02, PKC − 0.11 ± 0.01, p=3.2×10⁻¹⁸; Figure 5C,D,E).
In all network types, PFR increased steeply in early development and later declined concurrently with SBE strength. Throughout development, however, PFRs were highest in homogeneous networks and lowest in clustered networks (Figure 5F, Table 2). Networks with low AFR thus had high PFR.
Knowing the relationship between PFR and Ca2+ influx allowed us to estimate Ca2+ levels during development based on MEA recordings. We approximated the development of the average Ca2+ influx per SBE (Figure 5G) from the respective PFRs and the exponential Ca2+ gain function with the average exponent of 0.11. Because of the higher PFRs, Ca2+ influx per SBE was highest in the more homogeneous PKC − networks and lowest in clustered PKC + networks. Yet, in combination with the systematic increase of SBE rate with clustering, long-term Ca2+ influx converged during late development for different PKC conditions, network architectures and AFR (Figure 5H).
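A minimal sketch of this estimate: Ca2+ influx per burst is taken to grow exponentially with the network peak firing rate (exponent ≈ 0.11, as reported above), and long-term influx is the per-burst influx times the burst rate. The rate values for the "clustered" and "homogeneous" cases are invented for illustration only.

```python
import numpy as np

def longterm_ca_influx(sbe_rate_per_min, peak_firing_rate, k=0.11):
    """Relative long-term Ca2+ influx: burst rate times exponential per-burst gain."""
    return sbe_rate_per_min * np.exp(k * peak_firing_rate)

# Invented illustrative values: clustered nets burst often with low PFR,
# homogeneous nets burst rarely with high PFR.
clustered = longterm_ca_influx(sbe_rate_per_min=6.0, peak_firing_rate=20.0)
homogeneous = longterm_ca_influx(sbe_rate_per_min=1.5, peak_firing_rate=35.0)
print(clustered, homogeneous)  # similar orders of magnitude despite different burst statistics
```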
Differences in PFR reflect variations of network recruitment during SBEs
The predominately short-range connectivity observed in clustered PKC + networks could impair network-wide recruitment (Okujeni et al., 2017) and synchronization of activity. This would decorrelate inputs, explaining lower PFR and weaker membrane depolarization during SBEs. To test this, we determined network synchrony as the average spike correlation between all electrode pairs (Figure 6A). Consistent with the rapid buildup of connectivity, network synchrony increased steeply between 3-15 DIV and reached stable levels already between 15-21 DIV, even though activity levels, connectivity and inhibition continued to develop. In line with connectivity estimates, synchronization was highest in PKC − networks (0.53 ± 0.04, p=4.6×10⁻⁵ compared to PKC N), intermediate in PKC N networks (0.35 ± 0.02) and lowest in PKC + networks (0.16 ± 0.03, p=1.7×10⁻⁵ compared to PKC N); that is, network synchrony indeed decreased with the degree of clustering.
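Network synchrony of this kind is commonly computed as the mean pairwise Pearson correlation of binned spike counts across electrodes. The sketch below does this on surrogate spike counts with a shared bursting component; bin size and rates are arbitrary choices, not the analysis parameters used here.

```python
import numpy as np

def network_synchrony(spike_counts):
    """Mean pairwise Pearson correlation over an (n_electrodes, n_bins) matrix."""
    c = np.corrcoef(spike_counts)
    n = c.shape[0]
    upper = c[np.triu_indices(n, k=1)]
    return np.nanmean(upper)

rng = np.random.default_rng(0)
n_electrodes, n_bins = 60, 5000
common_drive = rng.poisson(0.5, size=n_bins)             # shared bursting component
counts = rng.poisson(0.2, size=(n_electrodes, n_bins)) + common_drive
print(network_synchrony(counts))                          # > 0 due to the shared drive
```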
Discussion
Neuronal network architecture is not based on a genetic blueprint alone but is shaped by predefined rules of activity-dependent self-organization (Spitzer, 2006). Herein, neuronal migration (Komuro and Kumada, 2005; Zheng and Poo, 2007) and neurite outgrowth are regulated by activity-related changes of [Ca2+]i. Indeed, cell motility and growth are optimal within a narrow [Ca2+] range and diminished otherwise, which led to the hypothesis that network connectivity and activity evolve under homeostatic control with [Ca2+]i as the set-point parameter (Kater and Mills, 1991). However, basal cytosolic [Ca2+] is very low due to efficient Ca2+ buffering and extrusion (Kater and Mills, 1991; Zündorf and Reiser, 2011) and remains relatively constant during development (Maravall et al., 2000). Free Ca2+ for the regulation of growth is thus essentially determined by transient [Ca2+]i elevations induced by synaptic input and spike activity. Accordingly, the developmentally attained spike rate was proposed to reflect the Ca2+ set-point of growth (van Ooyen et al., 1995).
The overall capacity for neurite growth ultimately relies on gene expression for cytoskeletal building blocks, which crucially depends on nuclear [Ca2+] (Berridge et al., 2000). Somatic membrane depolarization increases Ca2+ influx close to the nucleus (Greer and Greenberg, 2008). In this context, intracellular stores like the endoplasmic reticulum can accumulate Ca2+ over longer periods of time and then considerably amplify Ca2+ signals by additional Ca2+-triggered release (Berridge et al., 2000; Pivovarova et al., 2002). This effectively acts as a low-pass filter and amplifier for Ca2+ signaling to the nucleus, modulating the expression of cytoskeletal proteins. Supporting this link, neurite tree morphology and size in different neuron types appear to depend on the expression of specific Ca2+-binding proteins that determine nuclear Ca2+ buffering capacity (Mauceri et al., 2015). In contrast to nuclear Ca2+ levels, local Ca2+ transients in neurites direct migration and growth towards target neurons (Guan et al., 2007; Henley and Poo, 2004; Hutchins and Kalil, 2008), which promotes neurite overlap and synaptic connectivity (Shepherd et al., 2005; Stepanyants et al., 2002). Though local Ca2+ influx activating PKC modulates cytoskeletal turnover involved in guided outgrowth and migration (Fogh et al., 2014; Kabir et al., 2001; Larsson, 2006), PKC may not be essential for constitutive neurite outgrowth (Flynn, 2013; Letourneau et al., 1987). We therefore speculate that local Ca2+ transients and PKC activity regulate cytoskeletal motility to direct growth processes, whereas long-term accumulation of Ca2+ in intracellular stores modulates signaling to the nucleus, transcription levels and thus the overall availability of cytoskeletal building blocks. This predicts a cessation of growth at a target long-term average Ca2+ influx that is independent of PKC activity.
Migration contributes to homeostatic network development
Extending on growth models for homeostatic network formation based on activity-dependent neurite outgrowth, neuronal migration could likewise contribute to the regulation of connectivity and activity in developing networks. Eglen et al. (2000) already added migration, implemented as repulsion between neurons, to the neurite growth model by van Ooyen et al. (1995) to generate regular neuronal arrangements as observed in dense retinal cell mosaics. We showed that activity-dependent attraction between migrating neurons leads to different degrees of modularity by the interaction of clustering with homeostatic regulation of neurite growth. While it is plausible to assume that neurons with small neurite fields and little connectivity may move, this seems less realistic once they are enmeshed in the network. In line with this, cell migration relies on localized Ca2+ transients in leading neurites and the resulting Ca2+ gradients across the cell (Guan et al., 2007) but ceases with increasing neuronal activity and frequency of Ca2+ transients (Bando et al., 2016). We approximated this in the model by allowing attraction while input was below the set-point but omitted repulsion when input was above the set-point. In consequence, cell migration ended during the rapid increase of activity during development, similar to peaks in PFR and Ca2+ influx, and the cessation of clustering around 10-15 DIV (Figure 3B, Figure 5F,H). Moreover, with rapid transitions to high network activity once neurite fields in the network overlapped sufficiently, the model showed a transient overshoot of connectivity. A more gradual build-up of activity diminished the average overshoot and pruning when the slope of the sigmoid mapping input to firing rate was reduced (Figure 2-figure supplement 2), in agreement with reports of varying degrees of growth overshoot or even saturating growth during development in vitro (Ito et al., 2013; Kondo et al., 2017; van Pelt et al., 2004). Neurons that connected to the network early in development, however, still showed an overshoot of connectivity, in agreement with Kossio et al. (2018).
Average Ca 2+ influx converges for different network architectures
Homeostatic regulation of growth processes by Ca 2+ was proposed to guide network development towards target firing rates (van Ooyen et al., 1995), which implies a quasi-linear relationship between Ca 2+ influx and AFR. In our model, connectivity, input activity and firing rates eventually converged to the same levels for different migration conditions and network architectures. In apparent conflict with the simulation, we found that different network architectures stabilized in vitro after about 3 weeks but with different AFR. Consistent with theoretical studies predicting that network modularity promotes spontaneous activity (Kaiser and Hilgetag, 2010;Klinshov et al., 2014;Mazzucato et al., 2015), SBE rates and AFRs increased with the level of clustering. Clustering, however, reduced network synchronization, lowered PFRs and weakened depolarization during SBEs. This strongly affected Ca 2+ -transients: Ca 2+ peak amplitude increased exponentially with PFR during SBEs, in agreement with reports of Ca 2+ currents through voltage-gated Ca 2+ channels increasing exponentially with depolarization (Mayer et al., 1987;Mazzanti et al., 1992). Because of the opposite modulation of SBE rates and PFRs with clustering, however, the estimated long-term Ca 2+ -gain converged for different network architectures during development, despite different AFR. The low spike rates during inter-burst intervals had negligible influence on Ca 2+ influx.
To account for the supra-linear increase of Ca 2+ with PFR we would need to use spiking neurons in our model. In addition, Ca 2+ influx would need to depend on the membrane potential, instead of on the average spike rate of a neuron as in extensions of the growth model with spiking dynamics (Abbott and Rohrkemper, 2007;Kossio et al., 2018). To accelerate the simulation of several weeks of network development, these studies initially increased the neurite growth rate and thus effectively decreased the temporal resolution until the networks approached the equilibrium state. The mesoscale structures forming in our networks, however, crucially depended on the continuous feedback between migration and neurite growth and activity. Low temporal resolution in the simulation would amount to a large decrease of the feedback speed, which leads to a random walk of neurons and more homogeneous network structures without clustering.
Interaction between growth and migration shapes network modularity
Increasing the rate of activity-dependent migration in the model promoted clustering, decreased neurite fields and accelerated the development of spontaneous activity by more rapidly increasing neurite overlap and connectivity. This resulted in network architectures covering a continuous gradient from homogeneous via partially clustered with scattered neurons to fully clustered networks with corresponding degrees of modularity. This was remarkably similar to the development in vitro, where PKC activity promoted clustering and SBE rates, and decreased neurite density. The model suggests that different network architectures can arise spontaneously based on simple rules regulating connectivity to achieve a target level of [Ca 2+ ] i .
Among the grand average developmental time courses of the most relevant aspects across all conditions, long-term Ca 2+ influx was the first property to peak while the impact of inhibition on network activity only started to increase when Ca 2+ influx stabilized ( Figure 6C).
Growth and migration shape the framework for synaptic connectivity
In our networks, synapse densities scaled approximately quadratically with the average dendrite size and thus negatively with the degree of clustering. This could be explained by the co-modulation of axonal and dendritic densities in the same direction (Okujeni et al., 2017), which multiplicatively increases the number of axo-dendritic contact sites, rather than their modulation in opposite directions as used in Tetzlaff et al. (2010). Such potential synapses realize into functional synapses with approximately constant probability in vivo (Stepanyants et al., 2002). The consistent relation of synapse density and dendrite size across developmental stages and PKC conditions ( Figure 3G) suggests that PKC manipulation did not critically impair synaptogenesis. Our estimates of maximum connectivity suggest a saturation of connectivity towards 10% in clustered and 20% in homogeneous networks, in the range of values reported for cultured (Marom and Shahaf, 2002) and native cortical networks (Feldmeyer, 2012).
The mesoscale network architecture formed early thus appears to determine the probabilistic framework for connectivity. PKC activity additionally influences synaptic plasticity, yet without general directionality towards LTP or LTD (Chung et al., 2000; Ferreira et al., 2011; Lan et al., 2001; Boehm et al., 2006; Scott et al., 2007). Our model indirectly accommodates this influence. For example, synaptic depression, corresponding to reducing the synaptic weight factor s, would extend the outgrowth phase to increase connectivity and input necessary to reach the target level of [Ca2+]i. Conceptually, this would be the inverse of the homeostatic scaling of synaptic weights with the level of connectivity (Barral and D Reyes, 2016; Okujeni et al., 2017; Wilson et al., 2007). This contribution of synaptic plasticity to the activity-dependent fine-tuning of connectivity likely gains importance with increasing developmental age and structural complexity of a network.
Conclusion
Based on our findings, we propose that interactions between neurite growth and neuronal migration affect the balance between local and global connectivity, thereby shaping network modularity. Cell migration defects were also proposed as a pathogenic mechanism involved in several neurological conditions associated with altered size and spacing of mini-columns in the cortex, aberrant neurite growth and hyper-or hypo-connectivity (Catts et al., 2013;Courchesne and Pierce, 2005;Di Rosa et al., 2009;Donovan and Basson, 2017;Fan et al., 2013;McKavanagh et al., 2015), suggesting that the mesoscale network organization could be a critical factor. The associated degree of modularity thus appears to have crucial impact on activity generation, propagation and perpetuation, neural synchronization as well as network function and dysfunction.
Network growth model
We adopted and modified the model of activity-dependent network growth introduced by van Ooyen et al. (1995). All simulations were carried out with Matlab (version 2017a, Mathworks, Natick, MA, USA; code available at doi 10.5281/zenodo.3459678). Networks were initialized by randomly seeding 500 neurons onto a torus surface of 1 mm 2 to avoid boundary effects. Newly introduced neurons conflicting with the minimal neuron distance of 12 µm, approximately the size of cell bodies, were discarded and the procedure continued until the required neuron density was obtained.
Neurite fields were modeled as circular fields centered at the cell bodies and were initiated with a radius of 12 µm. Connectivity between neurons, W, was asymmetric and defined as the area A of neurite field overlap normalized by the area of the presynaptic neuron's field and scaled by a gain s (W_ki = s · A_ik / A_k), which reflected the probability that dendrites of neuron i overlapped with the axons of presynaptic neuron k.
The gain s = 0.1 was chosen such that it produced networks with an intermediate degree of neurite field overlap (for s = 1, neurons would only connect to one or a few other neurons). Instead of simulating network growth with dimensionless equations (van Ooyen et al., 1995), we adjusted the time steps such that we could compare the dynamics to realistic developmental timescales. We estimated the loop-time across which activity is integrated based on the time constants for the accumulation of Ca 2+ in intracellular stores to be in the order of minutes (Pivovarova et al., 2002) and therefore set the temporal resolution of the simulation to 1 min.
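As an illustration, the overlap rule and the gain s described above can be written as a short function. The following is a minimal sketch in Python rather than the published Matlab code; the planar, non-toroidal geometry, the function names and the example values are ours.

```python
import math

def circle_overlap_area(d, r1, r2):
    """Area of intersection of two circles with center distance d and radii r1, r2."""
    if d >= r1 + r2:
        return 0.0                         # disjoint neurite fields: no potential contacts
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2  # smaller field lies fully inside the larger one
    a1 = r1 ** 2 * math.acos((d ** 2 + r1 ** 2 - r2 ** 2) / (2 * d * r1))
    a2 = r2 ** 2 * math.acos((d ** 2 + r2 ** 2 - r1 ** 2) / (2 * d * r2))
    a3 = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2) *
                         (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 + a3

def connection_weight(d, r_post, r_pre, s=0.1):
    """W_ki: overlap area normalized by the presynaptic field area, scaled by the gain s."""
    overlap = circle_overlap_area(d, r_post, r_pre)
    return s * overlap / (math.pi * r_pre ** 2)

# Example: two neurons 15 um apart, both with 12 um neurite-field radii
print(connection_weight(d=15.0, r_post=12.0, r_pre=12.0))
```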
Since inhibition is not explicitly relevant to the questions addressed here, we adapted the model for excitatory networks only. Long-term integration of activity in neurons was described by their state variable x_i (ranging between 0 and 1), which increased with input from presynaptic neurons contributing with their firing rate f(x_k) times the synaptic strength W_ki:

x_i(t + dt) = x_i(t) + (dt/τ) · [−x_i(t) + (1 − x_i(t)) · Σ_k W_ki f(x_k)]

where dt = τ = 1 min was the time resolution of the simulation, corresponding to the time constant of long-term integration of activity. A sigmoidal transfer function for the depolarization state x_i determined the firing rate f(x).
f(x) = 1 / (1 + e^((θ − x)/α)), where θ = 0.5 reflected the firing threshold and α = 0.12 determined the steepness of the function, which crucially impacted the developmental overshoot of connectivity and the subsequent pruning of neurites. We chose a slightly shallower function than the original model by Van Ooyen (α = 0.1) to accommodate the degree of overshoot and pruning for cultured networks in recent reports (Ito et al., 2013;Kondo et al., 2017;van Pelt et al., 2004).
As in the original model by Van Ooyen, neurons were modeled to grow neurites and thereby increase input activity and firing rate to reach a target [Ca 2+ ] i . If this Ca 2+ level was surpassed, neurites were pruned in turn. These bidirectional changes in the radius R of the circular neurite fields were determined by a sigmoidal function of the firing rate of a neuron multiplied with a fixed growth rate ρ_growth:

dR_i/dt = ρ_growth · (1 − 2 / (1 + e^((ε − f(x_i))/β)))

where ε = 0.6 defined the target level for activity or [Ca 2+ ] i , β = 0.1 determined the steepness of the sigmoidal function and ρ_growth was the constant factor for the growth rate of neurite fields. We assumed that connectivity is mainly determined by the density of neurites rather than their maximal length. Given the homogeneous density of the neurite field used in the model, however, its radial expansion must be considerably slower than the average elongation rates of individual dendrites, which were reported to be 12 µm/day for isolated neurons in the first week in vitro. We therefore set ρ_growth = 4 µm per day.
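Combining the activity integration, transfer function and outgrowth rule above, one simulation step can be sketched as follows. This is a Python sketch, not the published Matlab implementation; the shunting form of the state update and the clipping of x to [0, 1] are our assumptions, as are the variable names.

```python
import numpy as np

THETA, ALPHA = 0.5, 0.12       # firing threshold and steepness of f(x)
EPS, BETA = 0.6, 0.1           # target Ca2+/activity level and steepness of the growth rule
RHO_GROWTH = 4.0 / (24 * 60)   # 4 um/day expressed per 1-min time step

def firing_rate(x):
    """Sigmoidal transfer function of the depolarization state x."""
    return 1.0 / (1.0 + np.exp((THETA - x) / ALPHA))

def update_state(x, W, dt_over_tau=1.0):
    """Long-term activity integration; W[k, i] is the weight from presynaptic k to i."""
    inp = W.T @ firing_rate(x)
    x_new = x + dt_over_tau * (-x + (1.0 - x) * inp)
    return np.clip(x_new, 0.0, 1.0)        # keep the state variable within [0, 1]

def update_radius(R, x):
    """Grow neurite fields below, and prune them above, the target activity level."""
    g = 1.0 - 2.0 / (1.0 + np.exp((EPS - firing_rate(x)) / BETA))
    return np.maximum(R + RHO_GROWTH * g, 0.0)
```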
In our model, neurons additionally migrated in the direction of presynaptic inputs, thus mimicking the guidance of migration by leading processes (Flynn, 2013;Guan et al., 2007) and consistent with the positive correlation between the rate of soma translocation and the amplitude and frequency of Ca 2+ transients (Komuro and Kumada, 2005;Zheng and Poo, 2007). We assumed synaptic activity in leading neurites to be an important source of input but did not preclude contact-mediated Ca 2+ signaling (Sheng et al., 2013), which may contribute to regulating migration early in development when activity levels are low. Changes in the spatial position of neuronal cell bodies S were caused by migration impulses that depended on [Ca 2+ ] i and, thus, on the firing rate f and a variable factor for the maximal migration rate ρ_migration.
where ρ_migration ranged 0-300 µm/day and m = −15 determined how strongly migration impulses were diminished as neurons reached their target Ca 2+ level. We chose m to result in a negligible migration impulse at the target [Ca 2+ ] i . This mimicked a realistic migration process in which neurons are guided by local Ca 2+ transients in leading neurites and the resulting Ca 2+ gradients across the cell (Guan et al., 2007), but at the same time cease migrating when spiking-based Ca 2+ transients start to dominate (Bando et al., 2016). The migration speed of postnatal neurons in vitro indeed decays approximately exponentially during development, from 0.7 µm/min (1008 µm/day) at 0 DIV to ~0.05 µm/min (72 µm/day) at 12 DIV on Matrigel-coated substrates, and with slower initial migration speeds of 0.1 µm/min (144 µm/day) on PEI-coated substrates (Sun et al., 2011), as used in this study. In the model we varied migration rates within this range.
The direction of movement combined a directed movement component and a random movement component to match the erratic movements observed in time-lapse videos. The direction of the directed component was given by the vector sum v_dir of direction vectors v_ik that pointed to presynaptic neurons and were weighted by their input.
To obtain the final direction vector V, the directed and the random component (updated every 10 min) were weighted (p = 0.9) and summed. The random directional component was necessary to mimic the erratic movement patterns observed in in vitro time-lapse studies.
New neuronal cell body positions P were determined by multiplying the normalized final direction vector with the migration impulse.
In addition, neurons were set to jitter randomly around their current position by at most their cell body radius to allow neurons to pass each other in the 2D simulation, which prevented unrealistic chains of neurons. This positional jitter decreased according to the exponential decay function modulating migration as a function of [Ca 2+ ] i , such that neurons stopped moving when reaching the target value. It was reset after each time step. Movements violating the minimal possible intersoma distance (12 µm) were discarded.
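A sketch of one migration step, as we read it from the description above, is given below (Python). The exponential decay factor e^(m·f) used for the migration impulse and the way the random component is mixed in are our interpretation of the text, not a copy of the published code.

```python
import numpy as np

RHO_MIGRATION = 100.0 / (24 * 60)  # e.g. 100 um/day per 1-min step (varied between 0 and 300 um/day)
M = -15.0                          # decay of the migration impulse with the firing rate
P_DIRECTED = 0.9                   # weight of the directed vs. the random component

def migration_step(pos, i, W, rate, rng):
    """Move neuron i toward its presynaptic inputs, scaled by its activity state.

    pos:  (N, 2) array of cell body positions
    W:    (N, N) connectivity, W[k, i] = input weight from presynaptic neuron k to i
    rate: (N,) firing rates f(x)
    """
    vecs = pos - pos[i]                              # direction vectors to all other neurons
    weights = W[:, i] * rate                         # inputs weighted by presynaptic activity
    v_dir = (vecs * weights[:, None]).sum(axis=0)
    if np.linalg.norm(v_dir) > 0:
        v_dir = v_dir / np.linalg.norm(v_dir)
    v_rand = rng.standard_normal(2)
    v_rand = v_rand / np.linalg.norm(v_rand)
    v = P_DIRECTED * v_dir + (1 - P_DIRECTED) * v_rand
    v = v / max(np.linalg.norm(v), 1e-12)
    impulse = RHO_MIGRATION * np.exp(M * rate[i])    # negligible once the target level is reached
    return pos[i] + impulse * v
```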
To assess the modularity of a network, we calculated the size of the largest subnetwork (the giant component) remaining after removing defined fractions of randomly selected neurons from the network as its fraction in the remaining total population. For each network, the results were averaged across 1000 repetitions of the procedure. We quantified the degree of modularity Q in the final networks based on the connectivity matrix using the Louvain method (Blondel et al., 2008) implemented for MATLAB by Mika Rubinov with gamma = 1 (Rubinov and Sporns, 2010). Q increases towards one with increasing modularity. Random networks yield Q = 0.
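The robustness analysis can be reproduced in outline as follows (Python with networkx as a stand-in for the Matlab implementation). Treating the connectivity as undirected and thresholding it at zero are simplifications on our part, and the Louvain step is only indicated because the original analysis used the Brain Connectivity Toolbox.

```python
import numpy as np
import networkx as nx

def giant_component_fraction(W, remove_frac, rng):
    """Fraction of the remaining neurons contained in the largest connected subnetwork
    after randomly deleting a fraction of the neurons."""
    n = W.shape[0]
    keep = rng.choice(n, size=int(round(n * (1 - remove_frac))), replace=False)
    sub = W[np.ix_(keep, keep)] > 0
    adj = np.logical_or(sub, sub.T)                  # treat connections as undirected
    G = nx.from_numpy_array(adj.astype(int))
    if G.number_of_nodes() == 0:
        return 0.0
    giant = max(nx.connected_components(G), key=len)
    return len(giant) / G.number_of_nodes()

rng = np.random.default_rng(0)
W = (rng.random((500, 500)) < 0.05).astype(float)    # toy connectivity matrix
vals = [giant_component_fraction(W, 0.5, rng) for _ in range(1000)]
print(np.mean(vals))
# Modularity Q would then be computed on the full connectivity matrix,
# e.g. with the Louvain implementation of the Brain Connectivity Toolbox (gamma = 1).
```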
PKC modulation and disinhibition
The PKC inhibitor Gödecke 6976 (Gö 6976, 1 µM; Tocris Bioscience, Bristol, UK) and the PKC agonist phorbol-12-myristate-13-acetate (PMA, 1 µM; Sigma-Aldrich) were dissolved in dimethyl sulfoxide (DMSO, Sigma-Aldrich) and added to the culture medium directly after cell preparation. The maximal concentration of DMSO in the growth medium was 0.1%. GABAergic transmission was probed by acute application of the non-competitive GABA-A receptor antagonist picrotoxin (PTX; 10 µM; Tocris Bioscience) during electrophysiological recordings. Recordings of spontaneous activity were started 10 min after application of PTX and lasted 1 hr at different DIV. Changes in spike activity were calculated as the mean burst strength across 1 hr with PTX vs. the 1 hr baseline recording before application. Networks exposed to PTX were discarded.
Morphological analyses
The development of neuronal clustering, dendrite outgrowth and synapse densities was analyzed in sparse networks of ~100 neurons/mm 2 that were more accessible for quantitative morphological analysis. Clustering of neuronal cell bodies was analyzed based on immunocytochemical staining of neuronal nuclei (NeuN; Rabbit-anti-NeuN, 1:500; Abcam, Cambridge, UK, RRID:AB_2744676) and of all cellular nuclei (DAPI; Sigma-Aldrich). Neuronal nuclei were detected based on NeuN and DAPI colocalization and evaluated for their degree of clustering using a modified Clark-Evans clustering index (CI) that accounts for cell body diameter as minimal possible inter-neuron distance (Clark and Evans, 1954;Galli-Resta et al., 1999;Okujeni et al., 2017). CI was calculated as the ratio between the average nearest neighbor distance in a network and the expected average nearest neighbor distance for random networks. Note that the degree of clustering increases with decreasing CIs below 1. CIs above 1 indicate grid-like cell body arrangements. Dendrite morphology was examined by immunocytochemical staining of microtubule-associated protein 2 (MAP2, Chicken-anti-MAP2; 1:500; Abcam, RRID:AB_2138147). To quantify the total length of dendrites, MAP2 images taken at 20-fold magnification (0.323 µm/pixel) were processed by median filtering (3 × 3 kernel), background subtraction (lowest value in 7 × 7 pixel field), contrast adjustment (saturation at highest and lowest 10%), thresholding and skeletonization of the resulting binary image, similarly to Pani et al. (2014). Synapses were detected based on an immunohistological staining of the presynaptic protein synapsin (Mouse-anti-Synapsin; 1:200; Synaptic Systems GmbH, Göttingen, Germany, RRID:AB_887805). Synaptic punctae were then determined by local maximum detection in high-pass filtered and contrast-enhanced images. We analyzed two networks per condition and age taken from images covering approximately 3.5 mm 2 . In each image, we typically analyzed 10-20 regions of interest with varying size (could overlap) and including dense and sparse network regions. The following measures were determined as the slope of the linear regression through data pairs from all regions of interest: Dendrite size: total length of dendrite stretches relative to the number of neurons; Synapse density: average number of synapses relative to the number of neurons; Dendritic occupancy: average number of synapses relative to the total length of dendrite stretches; Neuron density: average number of neurons per area; Maximum connectivity: ratio between the number of synapses per neuron and the total number of neurons in the network (extrapolated for the entire network area of ~1.1 cm 2 given the image neuron density). All morphometric analyses were done with Matlab (versions 2014a-2017a). Results are presented as mean ± standard error of the mean (SEM) and significance was assessed with a two-tailed independent Student's t-test. Network architectures of dense networks (600-800 neurons/mm 2 ) were characterized qualitatively at 22 DIV with antibodies against MAP2 and phosphorylated neurofilament 200 kD (Rabbit-anti-neurofilament; 1:10; Abcam, RRID:AB_448148) to visualize dendritic and axonal compartments, respectively.
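The per-ROI measures listed above are slopes of linear regressions through data pairs pooled over regions of interest; a minimal sketch of that calculation is given below (Python; the example values are placeholders, not data from the study).

```python
import numpy as np

def slope_through_rois(x, y):
    """Slope of the least-squares regression of y on x across regions of interest."""
    return np.polyfit(np.asarray(x, float), np.asarray(y, float), 1)[0]

# Hypothetical per-ROI counts
neurons_per_roi = [12, 25, 40, 58, 80]
dendrite_length_um = [1500, 3200, 5100, 7600, 10500]
synapses_per_roi = [900, 2100, 3500, 5200, 7400]

dendrite_size = slope_through_rois(neurons_per_roi, dendrite_length_um)        # um of dendrite per neuron
synapse_density = slope_through_rois(neurons_per_roi, synapses_per_roi)        # synapses per neuron
dendritic_occupancy = slope_through_rois(dendrite_length_um, synapses_per_roi) # synapses per um of dendrite
print(dendrite_size, synapse_density, dendritic_occupancy)
```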
Extracellular recording and analyses
MEA recordings (MEA1060-BC and USB-MEA256-Systems; MCS, 25 kHz sampling frequency, 12 bit AD-conversion; MCRack software versions 3.3-4.5, RRID:SCR_014955) of multi-unit spike activity from individual networks were performed under culture conditions (37°C, 5% CO 2 ) and lasted at least 1 hr. Action potentials were detected with a threshold set to −5 standard deviations of the high-pass filtered baseline signal (Butterworth 2nd-order high-pass filter, 200 Hz cut-off; detection dead time 2 ms).
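The spike detection described here (high-pass filtering, threshold at −5 SD, 2 ms dead time) could be implemented roughly as follows (Python/scipy sketch; estimating the baseline SD from the whole filtered trace is a simplification of ours):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 25000            # Hz, sampling rate
DEAD_TIME = 0.002     # s, detection dead time

def detect_spikes(raw, fs=FS):
    """Return spike times (s) detected on one electrode's raw voltage trace."""
    b, a = butter(2, 200 / (fs / 2), btype="highpass")   # 2nd-order Butterworth, 200 Hz cut-off
    x = filtfilt(b, a, raw)
    thr = -5.0 * np.std(x)                               # threshold at -5 SD of the filtered signal
    crossings = np.where((x[1:] < thr) & (x[:-1] >= thr))[0] / fs
    spikes, last = [], -np.inf
    for t in crossings:
        if t - last >= DEAD_TIME:                        # enforce the detection dead time
            spikes.append(t)
            last = t
    return np.array(spikes)
```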
Raw data from MEA recordings were imported into Matlab using MEA-Tools (Egert et al., 2002) and the FIND toolbox (Meier et al., 2008). Spontaneous SBEs were detected as follows: series of spikes with consecutive inter-spike intervals smaller than a threshold value (100 ms) were detected as bursts. SBEs were defined from periods in which a predefined fraction of electrodes showed simultaneous bursts (10% of all sites detecting spikes, but minimally 3 and maximally 20 sites, to keep criteria comparable between small and large MEAs). To account for buildup and fading phases of SBEs, spikes within a time window of 25 ms prior to and following this SBE core were included in the SBE. Network activity was characterized by the following parameters: SBE rate during the recording period; SBE strength, the average number of APs per SBE divided by the number of electrodes with spikes at any time during the recording session (active sites); and AFR, the grand average firing rate per active site during the recording session. PFR was calculated per SBE as the peak of the network-wide firing rate profile (box car filter applied to the global spike train; 0.2 s kernel width) divided by the number of active sites. Network synchrony was determined as the average spike train correlation (30 ms bin width) between pairs of active sites.
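The burst and SBE criteria above translate into a straightforward procedure; a rough sketch is given below (Python; the binning and data structures are ours, and the 25 ms buildup/fading padding is only noted in a comment).

```python
import numpy as np

ISI_MAX = 0.100   # s, maximal inter-spike interval within a burst
PAD = 0.025       # s, buildup/fading window around the SBE core

def detect_bursts(spike_times, isi_max=ISI_MAX):
    """Split a sorted spike train into bursts of consecutive ISIs < isi_max; returns (start, end) pairs."""
    if len(spike_times) == 0:
        return []
    bursts, current = [], [spike_times[0]]
    for t_prev, t in zip(spike_times[:-1], spike_times[1:]):
        if t - t_prev < isi_max:
            current.append(t)
        else:
            bursts.append((current[0], current[-1]))
            current = [t]
    bursts.append((current[0], current[-1]))
    return bursts

def sbe_core_timeline(bursts_per_site, n_active_sites, t_end, dt=0.001):
    """Boolean timeline of SBE cores: bins in which enough sites burst simultaneously."""
    need = min(max(3, int(np.ceil(0.1 * n_active_sites))), 20)
    count = np.zeros(int(np.ceil(t_end / dt)) + 1, dtype=int)
    for bursts in bursts_per_site:
        for t0, t1 in bursts:
            count[int(t0 / dt):int(t1 / dt) + 1] += 1
    return count >= need   # spikes within PAD of a True period would be assigned to the SBE
```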
For the developmental analysis of network activity, recordings from many networks were pooled within time windows of increasing width to account for the slowing development of activity dynamics as networks matured ( Table 2). Numerical results are presented as mean ± SEM and significance was assessed with a two-tailed independent Student's t-test.
For acute experiments with PTX, we defined as control period the last 1 hr section before application of PTX and excluded the first 10 min after application from the analysis to avoid transients due to handling. To determine the time course of the maturation of inhibition, changes in SBE strength following PTX application were quantified relative to the control period for different DIV. For visualization, trend lines were calculated with a sliding average (±7 DIV).
Data sets of at least 20 min were analyzed with Matlab. Membrane potential distributions for neurons with resting potentials within −64 ± 4 mV were determined for the entire recording period and averaged across neurons of the same PKC condition.
Calcium measurements and analyses
To assess neuronal Ca 2+ dynamics, cultures were transfected with AAV (adeno-associated virus) vectors coding for GCaMP6s (AAV9.CAG.GCaMP6s.WPRE.SV40, titer: ~10 11 ; Penn Vector Core, School of Medicine Gene Therapy Program, University of Pennsylvania) under control of the CAG promotor after 10-14 days in vitro. Ca 2+ dynamics were imaged at 20x magnification and 25 Hz frame rate (Examiner Z1 microscope, Zen software 2015, Carl Zeiss, Jena, Germany). Somatic regions were delineated by threshold detection in maximum projections of the Ca 2+ movie with ImageJ (Schneider et al., 2012). The resulting regions of interest were corrected manually. Changes in the Ca 2+ signal ΔF/F were calculated as the relative change to baseline following Jia et al. (2011). For each SBE, the peak of the Ca 2+ signal (ΔF/F) within 200 ms after onset was related to the PFR determined from simultaneous MEA recordings. The exponential scaling between ΔF/F and PFR was assessed by fitting with the function ΔF/F = e^(k·PFR) − 1 using the Matlab function fminsearch. Ca 2+ data were derived from five PKC N and four PKC − networks at 19-20 DIV in recordings of ~30 min and analyzed with Matlab. Ca 2+ influx during SBEs was estimated as e^(0.11·PFR) − 1 to match the scaling found experimentally. Long-term Ca 2+ influx was approximated as the Ca 2+ influx integrated over all SBEs per hour. All results are presented as mean ± SEM. Significance was tested with a two-tailed independent Student's t-test.
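The exponential scaling fit and the derived long-term Ca 2+ influx estimate can be reproduced with a simple least-squares minimization analogous to Matlab's fminsearch (Python/scipy sketch; variable names are ours):

```python
import numpy as np
from scipy.optimize import minimize

def fit_dff_scaling(pfr, dff):
    """Fit dF/F = exp(k * PFR) - 1 by minimizing the summed squared error over k."""
    pfr, dff = np.asarray(pfr, float), np.asarray(dff, float)
    sse = lambda p: np.sum((dff - (np.exp(p[0] * pfr) - 1.0)) ** 2)
    return minimize(sse, x0=[0.1], method="Nelder-Mead").x[0]

def longterm_ca_influx(pfr_per_sbe, k=0.11):
    """Per-SBE Ca2+ influx estimate, e^(k*PFR) - 1, summed over all SBEs in an hour."""
    return np.sum(np.exp(k * np.asarray(pfr_per_sbe, float)) - 1.0)
```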
German Research Foundation (DFG) and the University of Freiburg in the funding programme Open Access Publishing. The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Animal experimentation: Animal handling and tissue preparation were done in accordance with the guidelines for animal research at the University of Freiburg and approved by the Regierungspräsidium Freiburg (permits X-12/08D, X-16/07A, X-15/01H, X-18/04K).
Data availability
Matlab code and source data files are provided for Figures 3-6. Data preprocessing is described in the methods. As the unprocessed data are very large (over 1 TB), the raw data and analysis tools will be provided upon request. Code for the model simulation is available at https://dx.doi.org/10.5281/zenodo.3459678.
The following dataset was generated: Author (
Chlorophyll treatment combined with photostimulation increases glycolysis and decreases oxidative stress in the liver of type 1 diabetic rats
Photodynamic therapy (PDT) promotes cell death, and it has been successfully employed as a treatment resource for neuropathic complications of diabetes mellitus (T1DM) and hepatocellular carcinoma. The liver is the major organ involved in the regulation of energy homeostasis, and in pathological conditions such as T1DM, changes in liver metabolic pathways result in hyperglycemia, which is associated with multiple organic dysfunctions. In this context, it has been suggested that chlorophyll-a and its derivatives have anti-diabetic actions, such as reducing hyperglycemia, hyperinsulinemia, and hypertriglyceridemia, but these effects have not yet been proven. Thus, the biological action of PDT with chlorophyll-a on hepatic parameters related to energy metabolism and oxidative stress in T1DM Wistar rats was investigated. Evaluation of the acute effects of this pigment was performed by incubation of isolated hepatocytes with chlorophyll-a and the chronic effects were evaluated by oral treatment with chlorophyll-based extract, with post-analysis of the intact liver by in situ perfusion. In both experimental protocols, chlorophyll-a decreased hepatic glucose release and glycogenolysis rate and stimulated the glycolytic pathway in DM/PDT. In addition, there was a reduction in hepatic oxidative stress, noticeable by decreased lipoperoxidation, reactive oxygen species, and carbonylated proteins in livers of chlorophyll-treated T1DM rats. These are indicators of the potential capacity of chlorophyll-a in improving the status of the diabetic liver.
Introduction
Diabetes mellitus is a serious chronic metabolic disorder that affects more than 415 million people worldwide, being one of the leading causes of death in Brazil in 2015 (1). In type 1 diabetes mellitus (T1DM), hyperglycemia, the most prominent feature of this pathology, is a result of both the impaired glucose uptake by insulin-dependent tissues and the hyperactive production of glucose by the liver, due to the lack of insulin release by pancreatic beta cells. This impaired glucose homeostasis is associated with multiple organic dysfunctions and metabolic abnormalities in the liver pathways of blood glucose regulation, such as reduction of the enzymatic activity of the glycolytic and glycogen-synthesizing pathways and increase of the gluconeogenic enzymes (2).
In addition to indispensable insulin therapy, there are several non-medication interventions that can be used as adjuvants in the treatment of T1DM (3), but they do not hinder the progression of chronic complications and, therefore, the search for new therapeutic strategies is necessary. Among different interventions, photodynamic therapy (PDT) has been prominent in the treatment of neoplastic diseases (4)(5)(6) and neuropathic complications of T1DM (7). This therapy consists of topical or systemic administration of a photosensitizing agent (PS) and its accumulation in the diseased tissue followed by local irradiation with light at a wavelength within the PS absorption spectrum (8). With photoexcitation, the PS can react with oxygen and generate singlet oxygen ( 1 O 2 ) and reactive oxygen species (ROS) (9), provoking cytotoxicity and photodamage, that lead affected cells to death (6).
Due to economic and environmental considerations, PSs obtained from abundant raw materials attract greater interest than those prepared by complex chemical procedures (10). Among raw materials, chlorophyll (Chl) is the most abundant plant photopigment in nature, with chlorophyll-a (Chl-a) corresponding to approximately 75% of the green pigments found in plants (11). Structurally, Chl-a is a fully unsaturated asymmetric macrocyclic molecule with a hydrophobic character (11) that gives it low solubilization in hydrophilic solutions. Therefore, its incorporation into micellar copolymers, such as P123, which has been proven to be biocompatible, is necessary for in vivo and in vitro analyses, since they guarantee the monomerization of the hydrophobic PS and the maintenance of its photophysical properties, which are indispensable for PDT (12).
Chl-a metabolites are retinoid X receptor (RXR) agonists (13). Research has shown that these agonists are capable of decreasing hyperglycemia, hyperinsulinemia, and hypertriglyceridemia in type 2 diabetic mice (T2DM) (14). Therefore, it has been suggested that Chl-a metabolites could also exert such anti-diabetic actions (15). However, the physiological potential of this pigment and its role in the prevention of chronic complications of diabetes have been neglected (16).
Evidence of the accumulation of Chl-a and its metabolites in organs such as intestine and liver (17,18) suggests that these organs might be affected by these compounds. Considering that liver functioning is compromised in the absence of endogenous insulin in T1DM, that hyperglycemia and oxidative stress are relevant in this disease, and that Chl-a may have beneficial effects, the hypothesis formulated was that Chl-a and/or its metabolites could have important anti-diabetic effects on this model. Thus, the purpose of this study was to evaluate the biological action of PDT, applied to Chl-a, on the hepatic parameters related to energy metabolism and oxidative stress in T1DM rats.
Material and Methods
Chlorophyll-based extract and chlorophyll-a incorporated into P123 micellar nanostructured system
The chlorophyll-based extract (CBE) and purified chlorophyll-a were obtained through the methodology proposed by Campanholi et al. (19). The concentrations of CBE (22 mg/L) and Chl-a (1.25 mg/L) in the micellar copolymer P123 (2% m/v) were determined by the solid dispersion method (20), which consists of co-solubilizing the drug and P123 in ethanol, followed by evaporation of the solvent and formation of a thin film. This solid matrix was maintained under vacuum for 12 h and then hydrated with Krebs/Henseleit-bicarbonate (KH) buffer solution, pH 7.6, at 60°C, under vigorous stirring. The same procedure was performed without the addition of the active principle (CBE or Chl-a) in order to evaluate the individual effect of the P123 copolymer. The doses of Chl-a and CBE administered in the experimental protocols were determined by the Research Group in Photodynamic Systems (NUFESP) of the Department of Chemistry (State University of Maringá, Brazil), responsible for the development and production of the compounds (21). The lighting system used in the studies was composed of surface-mounted light-emitting diodes that emitted red light with a maximum wavelength of 636 nm. The photonic standardization of the light source was made with an Ocean Optics USB 2000+ photo-radiometer (USA), and the measured irradiance was 2.57 × 10 3 mW/cm 2 . Its emission spectrum ensured overlap with the absorption spectrum of the photosensitizers (Figure 1). The lighting system was maintained at a fixed distance of 10 cm in all tests.
Animals and induction of type 1 diabetes mellitus
Adult male Wistar rats (40 animals, 90 days old, 200 g body weight) were kept in the animal house under controlled temperature (23-25°C) and photoperiod (12 h light/12 h dark), and they were given standard rodent chow (Nuvital®, Nuvilab, Brazil) and water ad libitum. All the procedures were approved by the Ethics Commission on Animal Use of the State University of Maringá (CEUA 2019130116).
Diabetes induction consisted of intravenous injection of streptozotocin (60 mg/kg body weight; Sigma, USA) dissolved in citrate buffer (10 mM, pH 4.5, final volume 0.1 mL/100 g bw) after overnight fasting (control animals received citrate buffer, 10 mM, pH 4.5, final volume 0.1 mL/100 g bw). The confirmation of the diabetic state was made one week later by checking the fasting glycemia. Animals with glycemia ≥300 mg/dL were considered diabetic.
Experimental design
Control and diabetic animals were randomly allocated to one of the following groups: CG, control rats without light irradiation; CG/PDT, control rats intended for treatment with light irradiation; DM, type 1 diabetic rats without light irradiation; and DM/PDT, diabetic rats with light irradiation. The following procedures were carried out (5 animals per group per procedure): hepatocyte incubation, in vivo biodistribution, and in situ liver perfusion. For all protocols, the animals were in a fed state, to allow the evaluation of glycogenolytic and glycolytic pathways.
Incubation of hepatocytes
To evaluate the acute effect of PDT with Chl-a, hepatocytes were isolated through the previously described 3% collagenase perfusion technique (22). Hepatocytes with viability above 75% (10 6 cells/mL) were incubated in oxygenated (O 2 /CO 2 95/5%) KH buffer pH 7.4, 37°C, under constant stirring for 1 h in the absence (pure KH or empty P123 micellar copolymer) or presence of Chl-a in the P123 micellar copolymer at the concentration of 1.25 mg/L. The hepatocytes of CG and DM groups were incubated in the dark, while those from CG/PDT and DM/PDT groups were incubated with red light irradiation. After this period, the samples were centrifuged three times at 426 g for 4 min at 4°C, and the soluble fraction collected for determination of glucose, L-lactate, and pyruvate. The results are reported as µmol · 10 6 cells -1 · h -1 .
In vivo biodistribution
To evaluate the biodistribution of Chl-a, five animals from CG and DM groups were treated orally, by gavage, with CBE for 14 consecutive days at a concentration of 22 mg/L (corresponding to the profile and intensity of electronic absorption of Chl-a used in the incubation medium of the isolated hepatocytes; Figure 1) in a volume of 0.4 mL per animal. During the treatment period, food and water intake, fasting glycemia, and body weight were recorded.
Just after the CBE treatment, animals were evaluated for the in vivo biodistribution of the pigments contained in the CBE. They were anesthetized (thiopental 40 mg/kg bw and lidocaine 5 mg/kg bw, ip, 0.1 mL/100 g bw), had their abdominal fur removed, and then evaluated using MS FX PRO (Carestream Molecular Imaging, Carestream Health, USA). Fluorescence images (excitation=650 nm; emission= 700 nm) were obtained with a CCD camera (Kodak Image Station, Canada) 1, 2, and 24 h after gavage with CBE. The images were acquired by Carestream Molecular Imaging 5.0.
In situ liver perfusion
Based on the results of the hepatocyte incubation protocol, five rats from the CG, DM, and DM/PDT groups were treated orally by gavage with CBE 22 mg/L, 0.4 mL per animal for 14 consecutive days and then submitted to in situ liver perfusion in order to evaluate the chronic effect of Chl-a PDT.
After 14 days, the fed animals were anesthetized (thiopental 40 mg/kg bw and lidocaine 5 mg/kg bw, ip, 0.1 mL/100 g bw) and had the liver perfused with KH buffer, oxygenated by carbogenic mixture (O 2 /CO 2 95/5%), pH 7.4, 37°C, at a flow rate calculated from the body weight and adjusted to values that allowed adequate oxygenation (4 mL · min -1 · g liver -1 ). The organ was completely exsanguinated and euthanasia occurred by hypovolemic shock.
The KH entered the liver through the portal vein and exited through the cava vein in an open and nonrecirculating perfusion system. Perfused samples were collected every 5 min for 1 h for biochemical determinations. The first 15 min were considered baseline perfusion and the next 45 min stimulated perfusion, during which the liver was kept in the dark (CG and DM) or irradiated with red light (DM/PDT). Thus, all groups had the initial 15 min to release glycogen stores without any stimulation.
[Figure 1. Schematic representation of the encapsulation of chlorophyll-a (Chl-a) into a P123 micelle and its absorption spectra. A, Chemical structure of (a) Chl-a in inclusion into (b) P123 triblock copolymer (x=20 and y=70). B, UV-Vis absorption spectra for Chl-a (1.25 mg/L) and chlorophyll-based extract (22 mg/L) incorporated into the 2% (m/v) P123 micellar system. C, Normalized UV-Vis absorption spectrum (Chl-a in P123) superimposed on the red light-emitting diode (LED) emission spectrum.]
Blood and biochemical parameters
The fluid collected during the perfusion was used to evaluate the concentration of glucose, L-lactate, and pyruvate. The data are reported as area under the curve (AUC, µmol/g liver). In addition, the hepatic uptake of the CBE was evaluated by spectrophotometric measurements of electronic absorption and fluorescence emission for each time point of the perfusion.
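The AUC of each perfusate metabolite (samples every 5 min over 1 h) can be obtained by simple trapezoidal integration; in the sketch below (Python), the conversion from effluent concentration to release per gram of liver through the flow rate is our assumption about how the reported units were derived, and the values are placeholders.

```python
import numpy as np

def perfusion_auc(times_min, conc_umol_per_ml, flow_ml_per_min_per_g):
    """AUC of hepatic release during perfusion, in umol per g liver.

    conc_umol_per_ml:       metabolite concentration in the effluent at each sample time
    flow_ml_per_min_per_g:  perfusion flow normalized to liver weight (4 mL/min/g here)
    """
    release_rate = np.asarray(conc_umol_per_ml, float) * flow_ml_per_min_per_g  # umol/min/g liver
    return np.trapz(release_rate, x=np.asarray(times_min, float))               # umol/g liver

times = np.arange(0, 65, 5)              # 0-60 min, one sample every 5 min
conc = np.full(times.shape, 0.25)        # hypothetical effluent glucose, umol/mL
print(perfusion_auc(times, conc, flow_ml_per_min_per_g=4.0))
```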
Blood samples were collected by cardiac puncture immediately after anesthesia of the animals used for liver perfusion. Hepatic injury was assessed by the plasma concentrations of alanine aminotransferase (ALT) and aspartate aminotransferase (AST). The kits for biochemical analysis were purchased from Gold Analisa® Diagnóstica Ltda. (Brazil).
Preparation of liver homogenate and analytical procedures
At the end of the perfusion, still in the metabolic steady state, the liver was removed, rapidly clamped, and stored in liquid nitrogen. Later, a sample of the organ was homogenized in a Potter-Elvehjem homogenizer (Sigma Chemical Co., USA) with 10 volumes of 0.1 M potassium phosphate buffer (pH 7.4). An aliquot of this homogenate was used to determine the following parameters: levels of carbonylated proteins, lipoperoxides, and content of reduced and oxidized glutathione. The remaining homogenate was centrifuged at 20,000 g for 20 min at 4°C and the supernatant used to determine the activity of the antioxidant enzymes catalase and superoxide dismutase, as well as the content of ROS. Protein content of total homogenate and supernatant were determined as previously described (23).
Lipoperoxides
The levels of lipid peroxidation were evaluated by thiobarbituric acid reactive substances (TBARS) (25). The amount of TBARS was calculated from the standard curve prepared with 1,1′,3,3′-tetraethoxypropane and the values are reported as nmol/mg protein.
Reduced (GSH) and oxidized glutathione (GSSG)
The contents of GSH and GSSG were determined by spectrofluorometry (excitation at 350 nm and emission at 420 nm) by the o-phthalaldehyde (OPT) assay (26). Fluorescence was estimated as GSH. For the GSSG assay, the sample was pre-incubated with 10 mM n-ethylmaleimide (NEM) and then with a solution containing 1M NaOH and OPT. The results were calculated using a standard curve prepared with GSH or GSSG and values are reported as nmol/mg protein.
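Both the TBARS and the glutathione determinations rely on reading sample signals off a standard curve and normalizing to protein content; a generic sketch of that step is shown below (Python; all numbers are placeholders, not data from the study).

```python
import numpy as np

def concentration_from_standard_curve(signal, std_signal, std_conc_nmol):
    """Fit a linear standard curve and convert sample readings to nmol."""
    slope, intercept = np.polyfit(np.asarray(std_signal, float),
                                  np.asarray(std_conc_nmol, float), 1)
    return slope * np.asarray(signal, float) + intercept

# Placeholder standard curve (absorbance or fluorescence vs. nmol of standard)
std_signal = np.array([0.02, 0.10, 0.21, 0.40, 0.81])
std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])        # nmol

sample_signal = [0.15, 0.33]
protein_mg = np.array([0.8, 1.1])
nmol_per_mg = concentration_from_standard_curve(sample_signal, std_signal, std_conc) / protein_mg
print(nmol_per_mg)                                    # nmol/mg protein
```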
Reactive oxygen species
ROS levels were quantified by spectrofluorometry via 2′,7′-dichlorofluorescein diacetate (DCFH-DA) as described previously (27). This method quantifies the conversion of DCFH-DA to the oxidized and fluorescent molecule 2′,7′-dichlorofluorescein (DCF) in the presence of esterases and ROS. The results are reported as nmol/mg protein using a standard curve prepared with DCF.
Activity of antioxidant enzymes: catalase and superoxide dismutase (SOD)
Catalase activity was estimated by measuring changes in absorbance at 240 nm using H 2 O 2 as the substrate (28), which are reported as mmol · min -1 · mg protein -1 . The activity of SOD was estimated by its ability to inhibit the auto-oxidation of pyrogallol in alkaline medium, which was determined by spectrophotometry at 420 nm (29). A SOD unit is considered the amount of enzyme that is capable of causing 50% inhibition, and the results are reported as U/mg protein.
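Since one SOD unit is defined as the amount of enzyme producing 50% inhibition of pyrogallol autoxidation, the conversion from measured rates to U/mg protein is a short calculation; in the sketch below (Python), the assumption that inhibition scales proportionally with units is ours.

```python
def sod_units_per_mg(rate_blank, rate_sample, protein_mg):
    """Estimate SOD activity from pyrogallol autoxidation rates (dA420/min).

    rate_blank:  autoxidation rate without sample
    rate_sample: autoxidation rate in the presence of the supernatant
    """
    inhibition = 100.0 * (rate_blank - rate_sample) / rate_blank  # % inhibition
    units = inhibition / 50.0          # 1 U causes 50% inhibition (proportionality assumed)
    return units / protein_mg          # U/mg protein

print(sod_units_per_mg(rate_blank=0.020, rate_sample=0.012, protein_mg=0.5))
```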
Statistical analysis
Data were submitted to the Kolmogorov-Smirnov test to verify normality. One-way analysis of variance (ANOVA) with the Tukey post hoc test was used to compare the groups, and the significance level was set at 95% (P<0.05), using Prism version 5.0 (GraphPad, USA). The results are reported as means ± SD.
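The same workflow (normality check, one-way ANOVA, Tukey post hoc test at P<0.05) can be reproduced outside Prism, for instance with scipy and statsmodels; the group labels and values below are placeholders.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
groups = {"CG": rng.normal(10, 2, 5), "DM": rng.normal(6, 2, 5), "DM_PDT": rng.normal(9, 2, 5)}

# Kolmogorov-Smirnov test of normality per group (on standardized values)
for name, vals in groups.items():
    z = (vals - vals.mean()) / vals.std(ddof=1)
    print(name, stats.kstest(z, "norm").pvalue)

# One-way ANOVA followed by Tukey's post hoc test at alpha = 0.05
f, p = stats.f_oneway(*groups.values())
print("ANOVA p =", p)
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
values = np.concatenate(list(groups.values()))
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```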
Isolated hepatocytes
Hepatocytes were incubated in the presence of Chl-a and controls were run in parallel, in which the hepatocytes were incubated only with KH buffer or with empty P123 micellar copolymers. Glucose and L-lactate release and rates of glycogenolysis and glycolysis of liver cells are presented in Figure 2.
The results showed the absence of biological response to the empty micellar copolymers, given the equivalence with the results obtained from the incubation of hepatocytes in the presence of KH buffer only. It was also observed that, in general, all metabolic parameters presented in Figure 2 were decreased in DM compared to CG.
In the dark, Chl-a increased the rate of glycolysis and L-lactate production from CG hepatocytes, without altering glucose release and the glycogenolysis rate. In DM, glucose release, glycogenolysis, and glycolysis were stimulated by Chl-a. In the PDT condition, Chl-a also increased glucose release and rates of glycogenolysis and glycolysis in CG/PDT (vs CG), whereas in DM/PDT hepatocytes there was stimulation of glycolysis only, accompanied by reduction of glucose release and glycogenolysis (vs DM). The concentration of pyruvate in the supernatant of the isolated hepatocytes was below the lower limits of detection of the method; therefore, the cytosolic NADH/NAD + ratio could not be calculated.
Parameters related to the treatment with chlorophyll-based extract
The goal of the second part of the experiments was to evaluate the effect of Chl-a in natura on plasmatic parameters, hepatic metabolism, and oxidative stress in the intact liver. For this, CBE was administered orally (gavage) for 14 days. During the treatment period, the classic symptoms of diabetes were observed in the DM groups, represented by reduced body mass and increased fasting glycemia compared to CG, both before (not shown) and after (Table 1) gavage with CBE. It was also observed, by comparing the groups before and after the gavage period, that these parameters were not affected by the treatment. There was also no change in water and food intake (data not shown) promoted by the treatment beyond those changes caused by T1DM itself (polydipsia and polyphagia).
The biodistribution analysis of the photosensitive components present in the CBE, shown in Figure 3, revealed the absence of fluorescence emission (referring to CBE components) in any body region of non-diabetic animals after 14 days of gavage treatment. On the other hand, fluorescent emission in the intestinal loops of diabetic animals was observed 1 h after its administration by gavage. The fluorescence intensity in the intestine was reduced after 2 h and absent 24 h after administration of the CBE.
The evaluation of hepatic injury by the quantification of AST and ALT showed a significant increase in both diabetic groups (DM and DM/PDT) compared to non-diabetic animals. In order to investigate whether hepatic lesion was a result of disease or treatment with CBE, non-diabetic and diabetic animals were submitted to the same oral treatment protocol, but with saline administration. The results confirmed the increased AST and ALT in diabetic animals, regardless of the oral treatment (Table 1).
In situ liver perfusion
Figure 4 shows AUC values of liver glucose release and the hepatic rates of glycogenolysis and glycolysis. The results indicated reduction in all of these parameters in the diabetic groups (DM and DM/PDT) compared to CG, in the order of 50-60% for liver glucose release, 60-70% for glycogenolysis, and 60-80% for glycolysis. In addition, the reduction in liver glucose release and glycogenolysis rate did not differ between the diabetic groups. But with irradiation, DM/PDT had a higher glycolysis rate than DM.
Similar to previous results, production of L-lactate and pyruvate during liver perfusion ( Table 2) was lower in both diabetic groups compared to CG. However, DM/PDT animals had higher L-lactate production accompanied by lower production of pyruvate than DM, which agreed with the glycolytic stimulation in this group. The cytoplasmic NADH/NAD + ratio also differed between the diabetic groups, and DM/PDT had a higher ratio compared to the other groups.
The spectrophotometric measurements of electronic absorption and fluorescence emission showed the absence of extract in the effluent fluid collected during liver perfusion, suggesting that it was completely taken up by the organ.
Hepatic oxidative stress
The levels of TBARS, carbonylated proteins, and ROS in non-diabetic (CG) and diabetic (DM and DM/PDT) animals treated with CBE for 14 days and submitted to the infusion protocol with KH for 1 h are shown in Figure 5. These parameters of oxidative stress were elevated in DM compared to CG, suggesting that the oxidative characteristic of T1DM was not altered by oral treatment with CBE alone. On the other hand, PDT in the liver of diabetic animals (DM/PDT) reduced the hepatic TBARS levels by 46%, carbonylated proteins by 36%, and ROS by 25% compared to DM, statistically matching the values of DM/PDT to CG. These results suggested that PDT, during the in situ liver perfusion, was able to alter these parameters of hepatic oxidative stress of diabetic animals. Given this alteration, the antioxidant system was also evaluated in the perfused liver through determinations of GSH and GSSG levels and the activity of the catalase and SOD enzymes (Table 3).
In the dark, livers of diabetic animals had high levels of GSH compared to CG, while PDT raised GSSG levels by approximately 70% compared to other groups. High GSH levels in DM and GSSG in DM/PDT caused approximately 40% increase in total glutathione levels in both groups relative to CG. The GSH/GSSG ratio was 30% higher in DM (relative to CG) and 38% lower in DM/PDT compared to DM.
Catalase activity in DM was only half that of CG and increased by 56% in the presence of the light stimulus. In both diabetic groups (DM and DM/PDT), SOD enzyme activity was lower compared to CG.
Discussion
Despite many existing studies in the field of PDT (9), its applicability and/or effect on internal organs in in vivo models remain undetermined and, therefore, no important descriptive patterns were found in the literature to compare the findings of this study.
This research tested the hepatic effects of PDT with the PS Chl-a in untreated T1DM rats. The results showed that T1DM affected both the cellular metabolism and the response of hepatocytes to PS. The results suggested that in CG, Chl-a increased the intracellular consumption of glucose by stimulating glycolysis, which could then have been converted to L-lactate, whose production was high. In CG/PDT, the action of Chl-a stimulated the release of glucose, glycogenolysis, and L-lactate production, but did not affect glycolysis. In the diabetic condition, in the dark (DM), unlike what was observed in hepatocytes of healthy animals, Chl-a altered the metabolic response of liver cells, with glucose release, glycogenolysis, and glycolysis being stimulated. With photoexcitation (DM/PDT), Chl-a decreased the release of glucose and glycogenolysis and maintained the stimulation of the glycolytic pathway. Thus, in the model of interest (DM/PDT), PDT with Chl-a induced an adequate response to the pathological picture.
Metabolic results obtained from in situ liver perfusion showed that liver glucose release, glycogenolysis, and glycolysis were decreased in both diabetic groups (illuminated or not) relative to CG. This is because in T1DM the absence of insulin decreases the activity of the glycolytic and glycogen-synthesizing enzymes (2) and, therefore, glucose utilization and metabolism become reduced (30). However, there was no difference between the results obtained through in situ liver perfusion between the non-illuminated (DM) and the illuminated (DM/PDT) diabetic groups, except for the glycolysis rate, showing that they appear to be independent of the photo-excitatory condition.
Comparison of the results obtained with isolated hepatocytes (acute Chl-a) and with the intact organ (Chl-a chronic effect) of CG animals revealed that in isolated cells there was stimulation only of the glycolytic pathway, while in the intact liver glucose release, glycogenolysis and glycolysis rates were elevated. It is possible that the wider effect of the PS in the experiment with the intact organ is due to the action of Chl-a and its metabolites produced after gastric digestion, since the degradation of Chl-a by gastric acid and the absorption of its metabolites as micelles by the enterocytes have already been demonstrated (11,16). However, considering the absence of fluorescence (Figure 3) after 14 days of treatment with CBE, it is likely that the pigments were not incorporated by the intestine of the healthy organism and thus the liver of the non-diabetic animal remained at the expected physiological condition for the fed state.
Chan et al. (31) observed that PDT with Chl-a metabolites completely inhibited the growth of hepatocellular carcinoma, while it did not affect normal hepatocytes. This observation is important because it agrees with the principle of PDT, according to which PSs selectively accumulate in the diseased tissue and neighboring vasculature, while their affinity for healthy cells should be low enough for them to be cleared from the healthy tissue (32).
It is believed that in healthy tissues the triplet state of the PS is eliminated by the defense mechanisms of the cell, which can, therefore, decrease the effects of the PDT (33,34).
In DM/PDT, the results obtained with the intact organ were similar to the acute results obtained in isolated cells, in which the Chl-a illumination reduced glucose release (vs controls) and glycogenolysis rate (vs DM) and stimulated glycolysis. The action exerted by Chl-a in both protocols with diabetic animalsreducing liver glucose metabolismindicated a potential capacity of this PS in aiding the correction of glycemia. This, together with the high fluorescence observed inside those animals treated with the extract, indicated the possibility of increased permeation of the PS in the intestine and hepatic action similar to the direct action of the PS on the isolated hepatocytes. It is important to note that Chl-a PDT stimulated glycolysis in both experimental protocols with T1DM animals, a pathway known to be depressed in the diabetic condition in the absence of insulin (2). Koch and Deo (35) were the first to relate Chl as a nutritional supplement in the control of hyperglycemic conditions in T2DM. Their study, conducted in vitro, reported the inhibitory effect of Chl on the intestinal enzyme alpha-glucosidase, responsible for the digestion and absorption of dietary glucose. The authors suggested that inhibition of this enzyme could reduce the glucose content absorbed in such a way as to prevent postprandial peak glycemia and contribute to the maintenance of glucose homeostasis. However, the results presented in Table 1 showed that treatment with CBE as a source of Chl-a in the absence of illumination did not affect postprandial glycemia nor the weight of the animals beyond the alteration caused by the diabetic condition itself. Therefore, the effects observed in vitro in T2DM (35) were not reproduced in vivo in experimental T1DM animals. In research conducted with Caenorhabditis elegans treated with Pheid, a metabolite of Chl-a, and light exposure, a reduction in fat mass was observed (36). However, this effect was only perceptible under illumination, which would explain the fact that in this study there was no change in body weight induced by treatment with CBE in the dark. Likewise, in the absence of photostimulation, the treatment did not cause changes in AST and ALT levels of the diabetic animals. These results imply the need of photoirradiation on PSs to stimulate their biological activity.
Important changes were also found in the evaluation of oxidative stress. The results showed that the treatment of diabetic animals with CBE alone was not enough to reduce oxidative stress in the liver. This was observed by the higher levels of carbonylated proteins and ROS, together with less activity of the antioxidant enzymes catalase and SOD in the homogenate of perfused livers in the absence of light stimulus. However, in livers perfused under photostimulation (DM/PDT), hepatic oxidative stress was reduced, as observed by lower levels of TBARS, carbonylated proteins, and ROS. Thus, PDT was beneficial in reducing the oxidative stress of the intact liver.
The principle of PDT states that, under photostimulation in the presence of oxygen, the PS can participate in electron or energy transfer processes, forming free radicals or ROS (9). However, evaluation of ROS in this study revealed that light did not increase oxidative stress. In addition, levels of carbonylated proteins (which appear to be more adequate than TBARS to assess tissue oxidative damage, since they are more sensitive to both diabetes and the light stimulus) were reduced by 36% in DM/PDT compared to DM. This result corroborates those observed by Zhang et al. (36) in Caenorhabditis elegans treated with Pheid and exposed to illumination, in which an 80% reduction in carbonylated protein levels was detected.
Furthermore, the livers of DM animals presented higher levels of GSH than the livers of non-diabetic animals also treated with CBE and in the dark (CG), a result that apparently suggested that the treatment with the extract was only enough to raise GSH levels in diabetes. However, this increase was not sufficient to improve hepatic oxidative damage, which only occurred when the light stimulus was applied.
The results of this study are still insufficient to determine the mechanism by which liver photostimulation triggered an improvement in oxidative stress. However, it seems to be due to the stimulation of a part of the antioxidant system, that is, by the increase in catalase activity. Although there are reports that the direct antioxidant activity of Chl-a is low due to its high chemical instability (37), the results of this study indicated a promising antioxidant capacity of CBE stimulated by PDT.
The PDT applied to Chl-a, directly on hepatic cells and indirectly in the liver by oral administration of the Chl-a-rich extract, improved the general condition of the liver and of hepatocytes isolated from T1DM rats and, in this way, it is promising as an alternative and/or complementary treatment for T1DM. Unraveling the Chl-a mechanisms of action on the liver, possibly mediated by RXR agonism (13), will improve the knowledge of these anti-diabetic effects of Chl-a (15).
Yellow fever vaccine protects mice against Zika virus infection
Zika virus (ZIKV) emerged as an important infectious disease agent in Brazil in 2016. Infection usually leads to mild symptoms, but severe congenital neurological disorders and Guillain-Barré syndrome have been reported following ZIKV exposure. Creating an effective vaccine against ZIKV is a public health priority. We describe the protective effect of an already licensed attenuated yellow fever vaccine (YFV, 17DD) in type-I interferon receptor knockout mice (A129) and immunocompetent BALB/c and SV-129 (A129 background) mice infected with ZIKV. YFV vaccination provided protection against ZIKV, with decreased mortality in A129 mice, a reduction in the cerebral viral load in all mice, and weight loss prevention in BALB/c mice. The A129 mice that were challenged two and three weeks after the first dose of the vaccine were fully protected, whereas partial protection was observed five weeks after vaccination. In all cases, the YFV vaccine provoked a substantial decrease in the cerebral viral load. YFV immunization also prevented hippocampal synapse loss and microgliosis in ZIKV-infected mice. Our vaccine model is T cell-dependent, with AG129 mice being unable to tolerate immunization (vaccination is lethal in this mouse model), indicating the importance of IFN-γ in immunogenicity. To confirm the role of T cells, we immunized nude mice that we demonstrated to be very susceptible to infection. Immunization with YFV and challenge 7 days after booster did not protect nude mice in terms of weight loss and showed partial protection in the survival curve. When we evaluated the humoral response, the vaccine elicited significant antibody titers against ZIKV; however, it showed no neutralizing activity in vitro and in vivo. The data indicate that a cell-mediated response promotes protection against cerebral infection, which is crucial to vaccine protection, and it appears to not necessarily require a humoral response. This protective effect can also be attributed to innate factors, but more studies are needed to strengthen this hypothesis. Our findings open the way to using an available and inexpensive vaccine for large-scale immunization in the event of a ZIKV outbreak.
Introduction
Zika virus (ZIKV) probably emerged in the early 1900s and remained undetected for several years [1]. This virus was first isolated in 1947 from a sentinel rhesus monkey (Macaca mulatta) presenting with a febrile illness in the Zika Forest of Uganda [2]. The first case of ZIKV in humans was reported in 1952 [3], and ZIKV was historically regarded as a self-limiting disease. However, the scenario began to change in 2013, when a large outbreak in French Polynesia was associated with cases of Guillain-Barré syndrome [4]; during an outbreak in Brazil (2014)(2015), authorities reported an increased number of children born with microcephaly [1,5]. ZIKV infection is known to be associated with congenital malformations and other neurological complications, such as Guillain-Barré syndrome [6,7]. Over the years, the epidemiological scenario of ZIKV has expanded quickly and has been considered endemic not only in Latin America but also in Caribbean regions and in parts of Africa and Asia [8].
Different vaccine models, including inactivated and attenuated models, have been tested in preclinical studies [1,9,10]. Some of these models have shown success in mice, and some of them have advanced to the clinical stage [1,9,10]. Infectious agents may lead to protection against other distinct but similar infectious agents [11]. This mechanism is known as cross-protection and was, for example, the basis of the first vaccine to be developed, which led to the global eradication of smallpox [12]. Members of the Flaviviridae family are similar, and some members of this family are the targets of currently available vaccines, such as the attenuated yellow fever virus (YFV) vaccine [13]. It was previously suggested that the low coverage of the YFV vaccine, especially in the Northeast Region of Brazil, might be related to the high number of Zika cases and of microcephaly [7]. On the other hand, the number of Zika cases was apparently reduced under increased YFV vaccine coverage after an outbreak of yellow fever in the southwest. Based on the cross-protection observed for different vaccines, we hypothesized that YFV vaccination could induce such protection against ZIKV infection. YFV and ZIKV are flaviviruses and share several T-cell epitopes that could underlie a cross-protective mechanism. T cells primed against a shared epitope during YFV vaccination could be rapidly recruited and perform effector functions after ZIKV challenge [14]. Furthermore, mechanisms of trained immunity, another type of non-specific cross-protection, can contribute to controlling the infection through epigenetic reprogramming [15]. Thus, it would be extremely interesting to use an already licensed vaccine with efficacy potential against ZIKV infection that could be deployed quickly in cases of ZIKV outbreaks.
Here, we evaluated whether a vaccine for YFV, a flavivirus that is similar to ZIKV, could prevent or at least decrease the severity of disease caused by ZIKV via a cross-protection mechanism and performed a follow-up of the survival, behavioral, and neuropathological consequences of infection. We used the attenuated YFV 17DD vaccine because it is a vaccine model that has long been used in humans with well-established tolerability. YFV vaccines have the advantage of already being licensed, and they can be safely used in humans. Despite the short-term protection observed, our results suggest a positive modulation against Zika infection promoted by YFV 17DD vaccination, raising the possibility of using this already commercialized vaccine.
YFV vaccine is safe for use in A129 and BALB/c mice
Based on a hypothesized cross-reaction between the YFV vaccine and ZIKV, we evaluated the tolerability of the attenuated YFV 17DD vaccine in A129 mice, monitoring both weight loss and mortality after two immunization doses of the YFV vaccine. We tested three different doses of the YFV vaccine, namely, 10 5 , 10 4 and 10 3 plaque-forming units (PFU), and in parallel, we carried out the challenge of mice only with a lethal 10 6 PFU dose of ZIKV (S1 Fig) as a control group (Fig 1). In animals immunized with YFV, although there was no difference in the weight change (Fig 1A), we observed 35% death of mice at the 10 5 YFV dose (Fig 1B), while the immunized animals with 10 4 and 10 3 PFU doses of YFV remained asymptomatic; in contrast, nonimmunized animals challenged only with ZIKV lost weight ( Fig 1A) and died ( Fig 1B). Thus, we chose a YFV dose of 10 4 PFU, which had no apparent effects, and adopted this dose for subsequent experiments in mice.
YFV vaccine induces protection against ZIKV infection in A129 mice
The susceptibility of the A129 strain to ZIKV infection was demonstrated previously [16] (Fig 1), making the A129 mouse a useful model to study ZIKV infection. The mortality in A129 mice from ZIKV infection declines with age [16], and we adopted a short vaccine protocol period to challenge mice at an age at which they are more susceptible. We immunized the A129 mice twice with a 10 4 PFU dose of YFV vaccine, with seven days between doses. Following immunization, the mice were challenged with ZIKV (7x10 3 viral particles) via the intracerebral route (IC) at different intervals after immunization (7, 15, and 35 days after the second dose of YFV vaccine) (Fig 2). The attenuated YFV vaccine was effective at protecting susceptible animals, especially 7 (Fig 2B and 2C) and 15 days (Fig 2D and 2E) after immunization. The vaccinated mouse group gained more weight (Fig 2B and 2D) and presented much lower mortality (Fig 2C and 2E) than the saline-treated mouse group. The difference in mortality was more evident than the difference in weight loss because many of the unvaccinated mice rapidly lost weight and died within 10 days. Some of the mice that died after the tenth day ( Fig 2C) lost less weight. At 35 days following the second dose (Fig 2F and 2G), the protection decreased but was still present. No difference was observed in the weight losses, but the mortality was statistically lower.
To evaluate the protective effect of immunization with 10 4 PFU of YFV 17DD against the microglial phenomenon in the central nervous system induced by infection with ZIKV (7x10 3 viral particles), we performed an immunohistochemical assay in the brain tissue of A129 mice vaccinated and challenged intracranially with ZIKV. Therefore, we found that YFV 17DD prevented the ZIKV-induced increase in hippocampal Iba-1 immunoreactivity in mice (Fig 2H-2J). On the other hand, synapse loss is a common feature of different neurodegenerative conditions. Thus, to evaluate whether immunized mice present protection against synapse loss induced by ZIKV infection, we quantified the colocalization between synaptophysin (SYP, a presynaptic protein) and Homer-1 (a postsynaptic protein) immunoreactive puncta, a measure of functional synapses, in the hippocampus of mice. In Fig 2K-2M, we demonstrated that ZIKV-infected animals immunized with YFV presented an increased number of synaptic puncta compared with nonimmunized mice (Fig 2K-2M). Altogether, these findings suggest that YFV protects mice against brain damage induced by ZIKV infection.
YFV decreases viral load in ZIKV-infected SV129 mice
To confirm the protection of the YFV 17DD vaccine in an immunocompetent animal model, we evaluated the effects of a ZIKV challenge in SV129 mice (background of immunocompromised A129 animals) 35 days after vaccination. ZIKV viral loads were markedly lower, indicating that vaccination promotes a response against viral spread in the brain in wild-type mice (S2 Fig).
YFV vaccine induces protection against ZIKV infection in BALB/c mice
We also tested the YFV vaccine in immunocompetent BALB/c mice. These BALB/c mice were immunized twice, and after 7 days, they were IC-challenged with ZIKV (following the same protocol used for the A129 mice described in Fig 2A). We observed that the vaccinated group presented no weight loss, while the saline group did (Fig 3A). The cerebral viral load was significantly different between the groups (Fig 3B), indicating that the prevention of clinical signs was correlated with lower viral propagation in the vaccinated mice.
YFV vaccine protects BALB/c mice against neurological signs
We observed different neurological disturbances, such as spinning when suspended by the tail, shaking, hunched posture, ruffled fur and paralysis, following ZIKV infections in the BALB/c mice. We evaluated these manifestations in the vaccinated and saline groups after the challenge. All extremely recognizable clinical neurological signs were present in the saline group and completely absent in the vaccinated group (Table 1).
In the saline group, 3 of the 5 animals presented an unsteady gait, which was marked by paralysis in at least one of the segments. In the vaccine group, no animals presented with this clinical sign. In 2 of the 3 symptomatic mice, unsteady gait was established as a permanent sequela (observed from 5 days after infection onwards). All the mice in the saline group exhibited agitation and touch sensitivity, but all the animals recovered from these behaviors. In the saline group, 3 animals showed spinning behavior during tail suspension. In 2 of these 3 animals, this behavior remained a sequela (observed from 5 days after infection onwards). In the vaccine group, no mice exhibited this behavior. These results indicate that the protective mechanism is efficient at controlling viral replication and brain damage, guaranteeing physiological homeostasis.

Fig 2 legend (continued): Iba-1 immunoreactivity in the hippocampus of mice. Bars represent mean ± SEM. Symbols represent individual mice. Unpaired t-test, *p < 0.05. (K-L) Representative images of synaptophysin presynaptic marker (SYP, green) and Homer-1 postsynaptic marker (red) colocalized puncta (yellow) in the hippocampus of mice. (M) Synaptic puncta quantification in the hippocampus of animals. Bars represent mean ± SEM. Symbols represent individual mice. Unpaired t-test, #p < 0.075. https://doi.org/10.1371/journal.pntd.0009907.g002
YFV 17DD vaccine killed AG129 mice
We immunized AG129 mice (S3 Fig), in which both type 1 and type 2 interferon receptors are knocked out. We tested the 10^4 and 10^2 PFU doses, at which the A129 mice had remained asymptomatic. The AG129 mice were highly susceptible to the YFV vaccine (all mice died after vaccination). Unfortunately, we were therefore unable to evaluate the YFV vaccine against Zika infection in AG129 mice; however, this result suggests that IFN-γ signalling, which is absent in AG129 mice, is necessary to control YFV and could also be necessary to confer cross-protection against a ZIKV challenge after YFV vaccination in A129 mice.
YFV 17DD vaccine did not induce protection in nude (NU/J) mice against ZIKV infection
Because of the importance of T cells in producing IFN-γ, we evaluated YFV 17DD in nude (NU/J) mice, which are deficient in T cells. We first analyzed ZIKV pathogenicity through the IC infection of mice of different ages (1, 3, 4, 5 and 6 months) (Fig 4A and 4B), demonstrating a relation with the immaturity of the immune system, as the mice are very susceptible at 1 and 3 months and partially susceptible at 4, 5 and 6 months, a phenotype that is very similar to that observed in A129 mice. We vaccinated the nude mice and then challenged them. No protection against death was detected (Fig 4E), but a delay in the survival curve was observed. In addition, vaccination did not promote any decrease in viral replication in the cerebral tissue ( Fig 4F), which indicates the importance of T cell-mediated immunity for protection.
Vaccination elicits nonneutralizing antibody production
We also evaluated the ability of the antibodies produced against YFV to cross-react with ZIKV. We observed that immunizing the BALB/c mice induced the production of specific IgG antibodies against both the heterologous (ZIKV) and homologous (YFV) antigens (Fig 5A and 5B), which could be detected 7 days after the booster immunization. This result indicated that the vaccine antigen (YFV), heterologous with respect to ZIKV, could elicit the production of antibodies that bind to ZIKV. We also evaluated the capacity of the antibodies produced against YFV to neutralize ZIKV infection in Vero cells. Our results demonstrated that serum from the vaccinated mice did not neutralize ZIKV infection (Fig 5C), whereas serum from mice infected with ZIKV did, suggesting that the protective mechanism induced by YFV may not depend on the humoral immune response.
We also evaluated in vivo the possible protective effects of the antibodies produced by immunized animals. AG129 mice lack both type 1 and type 2 interferon receptors and are highly susceptible to ZIKV infection. Inoculating these mice with ZIKV that had been premixed with serum from immunized mice did not protect them from infection (Fig 5D).
Table 1. Neurological signs evaluated: spinning during tail suspension; shaking, curved body and ruffled hair; paralysis. Saline group (N = 5). The mice were evaluated for the presence or absence of neurological signs by two independent observers. The signs were evaluated daily from the first day after infection. *This animal recovered paw movement on the right side 15 days post infection. https://doi.org/10.1371/journal.pntd.0009907.t001
Breastfeeding by immunized females is unable to protect infected pups from developing brain disorders
Female Swiss mice were divided into two groups: vaccinated and control. The first group received two doses of 10^6 PFU of YFV 17DD, and the second group received two doses of saline. Seven days after the second dose, the mice were set up for mating. Pregnant mice were then separated into four groups: saline without challenge, saline + ZIKV challenge, vaccine without challenge and vaccine + ZIKV challenge. Three days after birth, the Swiss mouse pups were subcutaneously challenged with 10^6 PFU of ZIKV. After 35 days, the animals were euthanized and their brains were weighed (Fig 6A). The infected animals had smaller brains, and vaccination was unable to prevent this phenotype. The lightest brain in the unchallenged saline group weighed 0.38 g, and we used this value as a cutoff point. We then compared, across all groups, the proportion of brains lighter than this cutoff (Fig 6B). Both infected groups had a higher proportion of lighter brains, with the vaccinated group showing a lower proportion. This result supports the idea that YFV vaccination is effective at protecting adult mice but has low efficacy in promoting protection in pups through breastfeeding.
Discussion
Vaccines against ZIKV have been studied since the outbreak in 2015. Different approaches, including formalin-inactivated virus, subunit vaccines and DNA vaccines, have been tested [1,9,10,17]. In this study, we immunized mice with the attenuated YFV 17DD vaccine and challenged them with ZIKV by the IC route. IC infection, in which ZIKV is inoculated directly into the CNS, is a highly neurovirulent and pathogenic route and is considered a severe model of infection [18]; protecting the brain against it may require a strong immune response, which can probably be achieved only with live vaccines. YFV is one of the strongest immunogens ever developed because it confers long-lasting protection with a single dose [13]. As a first step, we standardized the YFV dose in A129 mice using immunization via the subcutaneous (SC) route. Recently, a similar result was observed by another group using a chimeric attenuated vaccine (ChimeriVax-Zika) based on YFV with ZIKV epitopes (the premembrane and envelope genes of YFV were replaced by those of ZIKV) [19], which demonstrated that a dose of 10^5 PFU resulted in a low mortality rate. This result is not surprising because attenuated vaccines, despite being safe, require some precautions for their use. When the tolerability of YFV (17D) and ChimeriVax-Zika (CYZ) was analyzed in mice, CYZ was safer because it induced fewer deaths [19]. However, that comparison involved injecting 5-day-old mice via the IC route to evaluate tolerability. Although this evaluation method is important, it does not reflect real conditions, because vaccination does not occur via this route and is not performed in neonates. In practice, the YFV vaccine is recommended for people aged 9 months or older and has been used in pregnant women without any apparent adverse effects on the fetuses.
The A129 model has already been characterized as presenting viscerotropic disease and fatality after subcutaneous inoculation, as in wild-type YFV infection, despite the use of an attenuated vaccine strain [20]. Vaccination with 10^4 PFU of YFV was nonetheless shown here to induce protection in A129 mice. The Giel-Moloney study that used ChimeriVax-Zika showed a reduction in the viral load of vaccinated A129 mice; however, no survival results were reported [19]. The protective efficacy of a live attenuated ZIKV vaccine with mutations in the NS1 gene and the 3'UTR of the ZIKV genome was evaluated only in pregnant mice, which does not allow a direct comparison with our results [21]. In our work, YFV provided protection to immunocompromised mice infected by the IC route; this protection was demonstrated by a reduction in the viral load in the brain and by increased survival, but it was time dependent. For SV129 mice, vaccination proved effective in controlling viral propagation in cerebral tissue even with a challenge occurring 35 days after the second vaccine dose (S2 Fig).
We also evaluated immunocompetent BALB/c mice. Recently, BALB/c mice were found to die after IC infection using 10 3 or 10 4 PFU, and some mice infected with 10 2 PFU of ZIKV strain MR766 also died (Uganda, 1947) [22]. Other immunocompetent mice also died when infected at the neonatal stage, such as Swiss mice [23]. We observed that BALB/c mice did not die after an IC challenge with ZIKV, and our model allowed us to study neurological disorders as represented by easily recognizable clinical signs. Vaccination prevented the BALB/c mice from developing neurological disorders. Vaccination efficiently blocked viral propagation, which positively correlated with the clinical signs found in the BALB/c mice. Protection against an IC challenge requires a potent immune response not only because this route causes more severe disease but also because the CNS presents a level of isolation from the rest of the body (as an immune privileged site).
The mechanism of YFV vaccination that protects against YFV infection also involves neutralizing antibodies [24]. CYZ has been shown to elicit antibodies in mice and to reduce the viral load in a vaccinated group [19]. We detected antibody production against ZIKV (Fig 5A), but these antibodies did not have the capacity to neutralize ZIKV infection in Vero cells ( Fig 5C). This finding has also been observed in AG129 mice, which are highly susceptible to infection. The mixture of the serum and the virus was not able to mitigate the infection (Fig 5D). The lack of protection in challenged pups whose mothers were previously vaccinated also supports this idea (Fig 6A and 6B). A study using a live-attenuated ZIKV vaccine showed protection through breastfeeding by antibodies present in milk [21]. In our study, if some significant amount of ZIKV neutralizing antibodies were present in the milk of the vaccinated mice, some level of protection would be expected. We observed that the YFV vaccine did not prevent microcephaly with breastfeeding in ZIKV-challenged pups.
As described above, the mechanism of protection may not depend on the neutralizing activity of the antibodies. The protection observed 35 days post booster in A129 and SV129 mice indicated short-term memory protection. To investigate this issue, we started with AG129 mice (S3 Fig). In our earlier experiment, we observed that A129 mice are protected by YFV vaccination despite being sensitive to high doses of YFV. However, when we tested the 10^4 or 10^2 PFU doses in AG129 mice (doses at which the A129 mice were asymptomatic), the animals were highly sensitive to YFV; at these doses, all the mice died. The induction of IFN-γ by YFV was demonstrated previously [25]. The YFV-17D vaccine induces a robust cellular immune response through the activation of a mixed Th1 and Th2 response, cytotoxic CD8+ T cells and a neutralizing antibody response [26]. These mixed responses are elicited by the activation of Toll-like receptors (TLRs), such as TLR2, TLR3, TLR7, TLR8 and TLR9, on dendritic cells [27]. Several CD4+ and CD8+ T cell epitopes have been characterized and are related to the protection induced by YFV vaccines [28,29], which suggests that future studies should assess possible cross-reactive T cell epitopes between YFV and Zika.
The relative importance of NK and CD8+ cells in controlling early infection is known to vary between mouse strains, with T cells being more important in BALB/c mice [4]. Furthermore, the lack of this response in nude mice was positively correlated with the lack of protection. Although vaccination appeared to cause some delay in mouse death (Fig 4E), the total mortality in both groups was similar. This modest protective effect may be attributed to innate factors. We cannot rule out a potential role of the innate immune system, as cross-protection induced by BCG vaccination has been observed; that mechanism has been linked to heterologous effects of adaptive immunity but also to potentiation of innate immunity through epigenetic mechanisms [30]. Like the BCG vaccine, the YFV vaccine is a live attenuated vaccine, so similar protection mechanisms may be elicited in both cases. Thus, the protection conferred by YFV vaccination may partially involve trained immunity. YFV-17DD vaccination has been shown to engage a complex network of cytokines in the innate immune compartment, including IFN-γ produced by NK cells [24]. Against Zika infection, YFV could therefore induce protection through a combination of adaptive immunity and trained immunity. The decreased protection observed when the challenge occurred 35 days after vaccination is an indication that the protection observed is at least partially dependent on innate immunity.
Although we have shown that the protection against ZIKV conferred by YFV 17DD vaccination does not come from the production of neutralizing antibodies, our study did not establish the mechanism of this cross-protection; we hypothesize a role for the T cell response or even for trained immunity. Further studies probing the importance of T cells using CD4−/− and CD8−/− mice, and investigating trained immunity using NLRP3−/− and CASP1/11−/− mice as well as the epigenetic signature of innate cells, should be performed to better understand this cross-protection mechanism.
Many ZIKV vaccine candidates are in the preclinical phase, and some are in clinical phases I and II. Different technologies, such as live attenuated vaccines, recombinant vector vaccines, subunit vaccines, whole inactivated vaccines, mRNA vaccines and DNA vaccines, have been tested [1,9,10,17]. The study and development of new vaccines is undoubtedly important, as it offers the opportunity to produce more efficient and safer models. Some of these vaccine candidates may turn out to be highly effective and some may not, but it will still take time to make them available. This time gap could be filled by the YFV vaccine, which has been used successfully for decades in the human population and is readily available. It is possible that the YFV vaccine may be effective at protecting humans against ZIKV, especially against neurological disease in adults. Cross-protection between flaviviruses has been hypothesized: a recent epidemiological study reported that preexisting infection with dengue virus (as determined by high antibody titers) was associated with a reduced risk of ZIKV infection [31]. However, no experimental evidence has been provided for this hypothesis.
Concerning epidemiological data on the YFV vaccination of mothers of CZS infants, there are no systematic studies. However, a descriptive study indicated that Northeast Brazil had the lowest YFV vaccination coverage and was the region with the highest incidence of CZS between October 2015 and March 2016 [7]. If YFV truly protects against ZIKV in humans, it could provide a safe, quick and inexpensive vaccination model because its pros and cons in clinical practice are already well known. In addition, the YFV vaccine would be capable of protecting against two distinct pathogens simultaneously. Substantial time and resource savings could be accrued by using an already licensed vaccine. We believe that more studies on cross protection between flaviviruses are needed and that the use of the YFV vaccine during an outbreak of Zika may be strategic until a specific Zika vaccine is available.
Ethics statement
All animal use involved in this work was approved by the Ethics Committee on the Use of Animals (CEUA) in Scientific Experimentation of the Health Sciences Center of the Federal University of Rio de Janeiro, registered with the National Council for the Control of Animal Experimentation (CONCEA) in accordance with Brazilian regulations, under case number 01200.001568/2013-87.
Cells
Vero (African green monkey kidney) cells (CCL 81) were obtained from the American Type Culture Collection (ATCC), Manassas, VA, USA, and cultured in high-glucose Dulbecco's modified Eagle's medium (Gibco DMEM; Thermo Fisher Scientific-Manassas, VA, USA). The culture medium was supplemented with 10% fetal bovine serum (FBS; Vitrocell Embriolife, Campinas, SP, Brazil) and 100 μg/mL streptomycin, and the cells were maintained at 37˚C in a 5% CO 2 atmosphere.
Mice
We used different mouse strains in this study, namely, the immunocompetent BALB/c and SV129 strains and the immunocompromised A129 (IFNAR1−/−), AG129 (IFN-α/β/γ R−/−) and nude (NU/J) strains. In all experiments, four- to five-week-old mice were used at the time of the first vaccination. All animals were obtained from the UFRJ central animal facility (Rio de Janeiro/RJ, Brazil). All procedures were performed in accordance with the guidelines established by the Ethics Committee for Animal Use of UFRJ (CEUA 131/19).
ZIKV and YFV
The ZIKV strain used in this study was ZIKV PE243 (GenBank ref. number KX197192), which was isolated from a febrile case in the state of Pernambuco, Brazil, and was kindly provided by Dr. Ernesto T.A. Marques Jr. (Centro de Pesquisas Aggeu Magalhães, FIOCRUZ, PE, Brazil). The YFV was the YFV 17DD vaccine strain, kindly provided by LATEV, Bio-Manguinhos/Fundação Oswaldo Cruz (Rio de Janeiro/RJ, Brazil). The viruses were propagated as described previously [23,32], and the viral titers were determined in Vero cells using a standard plaque assay with crystal violet staining (Merck Millipore) at day 5 postinfection. The viral titers were determined in aliquots of harvested medium, and virus stocks were stored at −80°C.
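The conversion from plaque counts to a stock titre is a short calculation; the sketch below illustrates it with hypothetical well counts, dilution factor and inoculum volume, which are not data from this study.

```python
# Minimal sketch of a plaque-assay titre calculation (illustrative numbers only).
# titre (PFU/mL) = mean plaque count / (dilution factor x inoculum volume in mL)

def plaque_titre(counts, dilution, volume_ml):
    """Return the titre in PFU/mL for replicate plaque counts at one dilution."""
    mean_count = sum(counts) / len(counts)
    return mean_count / (dilution * volume_ml)

# Hypothetical example: duplicate wells with 42 and 38 plaques,
# inoculated with 0.1 mL of a 1e-5 dilution of the stock.
print(f"{plaque_titre([42, 38], dilution=1e-5, volume_ml=0.1):.2e} PFU/mL")  # -> 4.00e+07 PFU/mL
```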
Safety study
For the safety study, we injected the YFV vaccine at 10^3, 10^4 and 10^5 PFU doses via the SC route into A129 mice and, as a control, we challenged mice only with 10^6 PFU of ZIKV.
Vaccination and challenge
We performed two immunizations with attenuated YFV by the SC route using a dose of 10^4 PFU, with a 7-day interval between the doses. The mice were challenged with ZIKV by inoculating 5 μL of ZIKV (7×10^3 viral particles) via the IC route using a 0.5 mL Hamilton syringe and 27 G × ¼" needles. The control mice were treated with phosphate-buffered saline (PBS) instead of YFV. The challenged mice were observed for 4 weeks to evaluate their clinical signs, including ruffled fur, vocalization, shaking, hunched posture, spinning during tail suspension, paralysis and death. Moribund animals were euthanized humanely. The protocol is summarized in Fig 2. Under an alternative protocol, the mice received only one dose and were challenged seven days later.
Assessment of neurological signs
After immunization with YFV 17DD and IC challenge with ZIKV (the protocol is summarized in Fig 2), the BALB/c mice were observed daily and analyzed for 60 min for clinical signs of infection by comparing the vaccinated infected and control groups with healthy mice. The animals underwent tail suspension for a maximum of 60 seconds to evaluate their neurological alterations. For this examination, the animals were tested twice daily with a minimum interval of 5 min between analyses.
Immunohistochemistry
Mice were deeply anesthetized with xylazine (10 mg/kg, i.p.) and ketamine (100 mg/kg, i.p.) and perfused transcardially with 50 mL per animal of 0.1 M PBS, pH 7.4, followed by ice-cold 4% paraformaldehyde. Brains were removed, postfixed for 24 h in the same solution, processed and embedded in paraffin. Slides containing coronal brain sections (5-8 μm) were subjected to antigen retrieval by treatment with 0.01 M citrate buffer for 40 min at 95-98°C. Slides with mouse hippocampal tissue were incubated overnight with primary antibodies (rabbit anti-Iba1, 1:500, WAKO #019-1941; mouse anti-synaptophysin, 1:200, Vector Laboratories #S285; rabbit anti-Homer-1, 1:100, Abcam #184955) diluted in PBS containing 3% BSA. Sections were then incubated with Alexa 594- or 488-conjugated secondary antibodies (1:750; Invitrogen) for 1 h at room temperature, washed in PBS, and mounted in Prolong Gold Antifade with DAPI (Invitrogen). Synaptic puncta and microglial immunolabeling were imaged on a confocal microscope (Nikon) at ×630 magnification. Independent images of the hippocampus were used for the analyses. For Iba-1 quantification, the total pixel intensity was determined for each image, and the data are expressed as integrated optical density. In the synaptic puncta analysis, each image was a z-stack of 12-16 sections (0.33 μm depth). We then used the Puncta Analyzer plugin in ImageJ 1.29 (NIH; RRID: SCR_003070) to count the number of colocalized, pre- (synaptophysin) or postsynaptic (Homer-1) puncta.
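The Puncta Analyzer plugin itself is not reproduced here, but the underlying colocalization count can be sketched in a few lines: threshold each channel, label connected puncta, and count presynaptic puncta that overlap a postsynaptic punctum. The thresholds and minimum puncta size below are illustrative assumptions and are not the plugin's actual parameters.

```python
import numpy as np
from scipy import ndimage

def count_colocalized_puncta(pre, post, thresh_pre, thresh_post, min_px=4):
    """Count pre-, post- and colocalized synaptic puncta in two registered
    single-channel images (2D numpy arrays). Thresholds and the minimum
    puncta size are illustrative choices, not the Puncta Analyzer defaults."""
    pre_mask, post_mask = pre > thresh_pre, post > thresh_post
    pre_lbl, n_pre = ndimage.label(pre_mask)
    post_lbl, n_post = ndimage.label(post_mask)
    # Discard specks smaller than min_px pixels.
    pre_ok = {i for i in range(1, n_pre + 1) if (pre_lbl == i).sum() >= min_px}
    post_ok = {j for j in range(1, n_post + 1) if (post_lbl == j).sum() >= min_px}
    # A presynaptic punctum counts as colocalized if it overlaps any valid postsynaptic punctum.
    coloc = sum(1 for i in pre_ok
                if set(np.unique(post_lbl[pre_lbl == i])) & post_ok)
    return len(pre_ok), len(post_ok), coloc
```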
Viral load determinations by qRT-PCR
Seven days after the booster immunization with the attenuated YFV vaccine, the animals were challenged by the IC route. The viral load was measured in the brain tissue of the mice at day 7 post challenge (the peak of viremia in these models) by qRT-PCR using primers/probes specific for the ZIKV E gene, as previously described [23]. The cycle threshold (Ct) values were converted to log10 PFU equivalents per mg of tissue using a standard curve built from serial 10-fold dilutions of a ZIKV stock sample.
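As an illustration of the standard-curve conversion, the sketch below maps Ct values to log10 PFU equivalents per mg of tissue from a linear fit to serially diluted standards; the slope, intercept, Ct values and tissue mass used here are hypothetical and are not the calibration of this study.

```python
import numpy as np

# Hypothetical standard curve: Ct measured for 10-fold dilutions of a ZIKV stock
# of known titre (log10 PFU per reaction). Real calibration values will differ.
log10_pfu_std = np.array([6, 5, 4, 3, 2, 1])
ct_std = np.array([16.1, 19.5, 22.8, 26.3, 29.7, 33.2])

slope, intercept = np.polyfit(ct_std, log10_pfu_std, 1)   # linear map Ct -> log10 PFU per reaction

def log10_pfu_per_mg(ct, tissue_mg):
    """Convert a sample Ct to log10 PFU-equivalents per mg of tissue."""
    log10_pfu_reaction = slope * ct + intercept
    return log10_pfu_reaction - np.log10(tissue_mg)

print(round(log10_pfu_per_mg(ct=24.0, tissue_mg=20.0), 2))   # hypothetical sample
```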
Enzyme-linked immunosorbent assay (ELISA) evaluation of anti-mouse IgG levels in the serum of immunized immunocompetent mice
Polystyrene microplates (Corning, New York, NY, USA) were coated overnight at 4°C with 10^5 ZIKV or YFV viral particles. After blocking for 2 h with PBS containing 1% bovine serum albumin (BSA) (LGC Biotecnologia, Cotia, SP), serum from mice vaccinated with YFV was added to the wells at different dilutions and incubated overnight at 4°C. Peroxidase-conjugated goat anti-mouse IgG antiserum (1:4,000; Southern Biotech) was then added to the wells, and the plate was incubated for an additional 1 h. Peroxidase activity was revealed using hydrogen peroxide and tetramethylbenzidine (TMB). The reaction was stopped with H₂SO₄ (2.5 N), and the optical density (OD) at 450 nm was determined with a spectrophotometer using SOFTmax PRO 4.0 software (Life Sciences Edition; Molecular Devices Corporation, Sunnyvale, CA).
Microneutralization in vitro and in vivo
For the microneutralization assay, the serum samples were initially diluted 1:10 and then serially diluted in 2-fold steps. The dilutions were then mixed at a 1:1 volume ratio with approximately 150 PFU of ZIKV, and the mixtures were incubated for 30 min at 37°C. They were then added to Vero cells at 60-70% confluence in 24-well culture plates and incubated for 1 h at 37°C and 5% CO2. Next, each well was overlaid with 1 mL of high-glucose DMEM containing 1% FBS, 1% of a penicillin/streptomycin mixed solution (100 μg/mL each; LGC Biotecnologia, Cotia, SP) and 1.5% carboxymethylcellulose (CMC; Sigma-Aldrich Co, Missouri, USA). The plates were incubated at 37°C and 5% CO2 for 4 days. The cells were fixed by adding 1 mL of 4% formaldehyde for 30 min. Each plate was washed and stained with a crystal violet solution (1% crystal violet, 20% ethanol). The number of plaques in each well was counted to determine the neutralizing effect of the serum on ZIKV.
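One common way to summarise such a dilution series is to compute the percent neutralization relative to a virus-only control and interpolate the dilution giving 50% plaque reduction. The sketch below illustrates this with invented plaque counts; the study itself reports only whether neutralization was observed, not this particular titre calculation.

```python
import numpy as np

def nt50(reciprocal_dilutions, plaques, virus_only_plaques):
    """Interpolate the reciprocal serum dilution giving 50% plaque reduction.
    Dilutions and counts are illustrative; returns None if 50% is never reached."""
    d = np.log10(np.asarray(reciprocal_dilutions, float))           # 10, 20, 40, ... -> log10
    neut = 100.0 * (1.0 - np.asarray(plaques, float) / virus_only_plaques)
    if neut[0] < 50.0:
        return None                        # even the least-diluted serum fails to reach 50%
    for i in range(len(d) - 1):
        if neut[i] >= 50.0 > neut[i + 1]:
            frac = (neut[i] - 50.0) / (neut[i] - neut[i + 1])
            return round(10 ** (d[i] + frac * (d[i + 1] - d[i])))
    return round(10 ** d[-1])              # neutralisation >= 50% at every dilution tested

# Hypothetical series: 1:10 to 1:160 serum, ~150 PFU of ZIKV per well in the control.
print(nt50([10, 20, 40, 80, 160], [20, 45, 90, 120, 140], virus_only_plaques=150))  # ~32
```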
For the in vivo microneutralization, sera at a 1:4 dilution were preincubated with 10^4 ZIKV at 25°C for 30 min. The AG129 mice were then challenged by the intraperitoneal route with a total volume of 300 μL.
Evaluation of protection from clinical signs and cerebral atrophy induced by ZIKV replication in the brains of neonatal mice by breastfeeding
Persistent weight loss has been associated with the severity of ZIKV infection in mice. Therefore, on postnatal day 3 (P3), the Swiss mouse pups were subcutaneously infected with 10^6 PFU of ZIKV. The virus-exposed pups were then weighed and observed until 30 days post infection (dpi) and compared to the uninfected control group to assess clinical signs of disease and the mortality profile. After 35 days of infection, the animals were euthanized, and their brains were removed and weighed to assess tissue mass.
The Raman dressed spin-1 spin-orbit coupled quantum gas
The recently realized spin-orbit coupled quantum gases (Y.-J. Lin et al., Nature 471, 83-86 (2011); P. Wang et al., PRL 109, 095301 (2012); L. W. Cheuk et al., PRL 109, 095302 (2012)) mark a breakthrough in the cold atom community. In these experiments, two hyperfine states are selected from a hyperfine manifold to mimic a pseudospin-1/2 spin-orbit coupled system by the method of Raman dressing, which is applicable to both bosonic and fermionic gases. In this work, we show that the method used in these experiments can be generalized to create any large-pseudospin spin-orbit coupled gas if more hyperfine states are coupled equally by the Raman lasers. As an example, we study in detail a quantum gas with three hyperfine states coupled by the Raman lasers, and show that when the state-dependent energy shifts of the three states are comparable, triple-degenerate minima will appear at the bottom of the band dispersions, thus realizing a spin-1 spin-orbit coupled quantum gas. A novel feature of this three-minima regime is that there can be two different kinds of stripe phases with different wavelengths, which has an interesting connection to the ferromagnetic and polar phases of spin-1 spinor BECs without spin-orbit coupling.
I. INTRODUCTION
Cold atoms have proven to be ideal platforms for simulating a variety of phenomena ranging from condensed matter to nuclear physics, owing to their unprecedented controllability [1,2]. In recent years, a great amount of theoretical and experimental effort has been dedicated to the engineering of gauge potentials for neutral atoms [3,4]. This interest arises not only because such potentials provide a platform to simulate magnetic fields and spin-orbit couplings (a special form of non-Abelian gauge potentials) and related phenomena such as quantum spin Hall effects or topological phases [5-7] in condensed matter physics, but also because of the dramatic impact the gauge potentials have on the system dynamics. For instance, spin-orbit effects with bosons have no counterpart in the electronic properties of solids. Even at the single-particle level, the introduction of gauge potentials modifies the particle dispersions, leading to exotic properties such as negative reflection [8] or multirefringence [9,10]. These modified particle dispersions have dramatic effects on the few-body or many-body physics when interactions are present. Indeed, the density of states enhanced by the gauge potentials leads to two-body bound states even on the BCS (Bardeen-Cooper-Schrieffer) side of the resonance [11]. Moreover, the broken Galilean invariance due to the presence of the gauge potentials could result in finite-momentum Cooper pairs [12]. As a consequence, the possibility of a BEC (Bose-Einstein condensate) to BCS crossover induced by increasing the strength of the gauge coupling [13-15], and the possible realization of the long-sought FFLO (Fulde-Ferrell-Larkin-Ovchinnikov) superfluidity [16,17] at the many-body level, have attracted great interest.
To date, synthetic spin-orbit couplings have been realized experimentally for both BECs and degenerate Fermi gases (DFG) [4]. The key idea behind these achievements, i.e., Raman dressing, was first demonstrated in a series of experiments by Lin et al. [18-21] for a 87Rb BEC and later for fermionic 40K [22] and 6Li [23]. The elegance of this method lies in its simplicity: only one pair of lasers and an external magnetic field are used. Both the Abelian and non-Abelian regimes can be reached by tuning the laser power. For example, in the first three experiments at NIST, only a single minimum of the lowest energy dispersion, of the form ħ²(k_x − A_x)²/2m, was created, where a constant A_x, a space-dependent A_x and a time-dependent A_x lead to, respectively, a uniform vector potential with zero magnetic field, a nonzero magnetic field B_z = −∂_y A_x ≠ 0, and a nonzero electric field E_x = −∂_t A_x ≠ 0. Conversely, two minima in the energy dispersion are interpreted as two dressed spin states responsible for synthetic spin-orbit coupling with equal Rashba and Dresselhaus strengths. We note that the most recent experimental [24,25] and theoretical [26-29] studies using this Raman scheme are mostly concerned with the two-minima regime. It is, however, not an insurmountable task to experimentally control all three Zeeman levels [30] and by doing so obtain a spin-1 scenario. Magnetically generated spin-orbit coupling [31] may also provide a viable route to larger spin systems.
In this work, we show that a three-minima regime in the energy dispersion of the NIST setup can be reached. We show in detail how this regime emerges as a function of the Raman strength Ω_R and the quadratic Zeeman energy ε when the detuning δ is zero and the contributions of the three Zeeman states of the underlying manifold are comparable. This is in contrast to the extensively studied phase diagram in the Ω_R-δ plane [21,24,25]. For a special configuration of the parameters, three degenerate minima can appear at the bottom of the spectrum, thus realizing a spin-1 spin-orbit coupled quantum gas. Our work shows that the method of Raman dressing can readily be used to synthesize large-pseudospin spin-orbit couplings for neutral atoms if more hyperfine states are coupled equally.
II. THE THREE-MINIMA REGIME
We follow the NIST setup shown in Fig. 1(a), where two counter-propagating Raman lasers along x̂, with frequency difference Δω_L and momentum difference 2ħk_r, couple the three hyperfine states of an ultracold atomic cloud. At the single-particle level, the setup is applicable to both bosonic and fermionic gases as long as suitable hyperfine states can be selected. For simplicity, we will denote the three states as an F = 1 manifold, |F, m_F⟩ = |1, −1⟩, |1, 0⟩, |1, +1⟩, though in principle they can be three hyperfine states of a much higher manifold. Meanwhile, a magnetic field along ŷ produces the linear and quadratic Zeeman shifts ħω_Z and ħε for the three states. The lasers induce a Raman transition in the atom, transferring linear momentum 2ħk_r x̂ to the atom while increasing its spin angular momentum by ħ at the same time (Fig. 1(b)). The Hamiltonian of the system dressed by the Raman lasers contains the kinetic energy ħ²k²/2m, the Raman coupling term and the Zeeman terms, where F is the spin-1 operator, Ω_R the Raman frequency associated with the Raman process, and ħω_Z and ħε the linear and quadratic Zeeman energies of the three levels. In the rotating wave approximation, in the frame in spin space rotating about ŷ with frequency Δω_L, the Hamiltonian becomes static. We can then apply a further position-dependent rotation in spin space about ŷ with angle 2k_r x, and in the resulting basis of momentum-shifted Zeeman states the Hamiltonian takes the form H(k_x) of Eq. (1), where δ, defined for this Raman coupling scheme as in Ref. [18], is the detuning from Raman resonance.

Figure 1: (color online) (a) Schematic of the experimental setup at NIST. Two counter-propagating Raman lasers with frequencies ω_L and ω_L + Δω_L along x̂ impinge on the atomic cloud. A bias field B_0 along ŷ produces the Zeeman effects. (b) Level diagram of the Raman coupling scheme within the F = 1 manifold. The linear and quadratic Zeeman shifts are ω_Z and ε, while δ is the detuning from Raman resonance, which is set to zero in this study. Atoms excited by the Raman lasers change their spin projection along the magnetic field by 1 while increasing their linear momentum by 2ħk_r. (c) The spectrum of H(k_x) with ħΩ_R = 2E_r and ħε = −0.23E_r. The triple-degenerate minima at the bottom of the spectrum serve as a spin-1 system. (d) The single-particle phase diagram in the Ω_R-ε plane. The three phases meet at a tricritical point, and the red line shows the regime where the three minima are degenerate in energy.

It is to be noted that the parameters δ and ε simply shift the three bare branches up and down. While the experiments [21,24,25] use δ to select two out of the three Zeeman states as a spin-1/2 system, here we set δ = 0 and leave only ε as a free parameter to balance the contributions of the three bare branches; ε can in principle be controlled by state-dependent trapping potentials, i.e., both positive and negative ε can be realized in this way. Alternatively, a negative quadratic Zeeman energy can also be realized experimentally by the technique of microwave dressing (e.g., see [32]). In the following, we define E_r = ħ²k_r²/2m as the recoil energy, which will be used as the energy scale, and k_r as the momentum scale. Fig. 1(d) shows the single-particle phase diagram of Hamiltonian (1) in the plane of the Raman coupling (Ω_R) and the energy shift of the middle branch (ε). The three phases, characterized by one minimum, two minima and three minima in the lowest energy dispersion, meet at a tricritical point beyond which the three-minima regime no longer exists. The red line in Fig. 1(d) shows the regime where the three minima are exactly degenerate in energy.
We show in Fig. 1(c) one example of the triple-degenerate minima regime, with parameters ħΩ_R = 2E_r and ħε = −0.23E_r. In this case, the three degenerate minima at the bottom of the spectrum serve as a spin-1 manifold and the atomic gas is spin-orbit coupled with an enlarged pseudospin of 1. We note that our phase diagram in the Ω_R-ε plane is very different from the phase diagram in the Ω_R-δ plane (e.g., see [24]). The phase diagram in the Ω_R-δ plane shows only two phases, a two-local-minima regime and a one-minimum regime, and when the Raman coupling is strong enough only a single minimum can exist. We also note that tricriticality and similar phase diagrams in spin-orbit coupled BECs were discussed recently by Li et al. [28], but in a two-minima regime at the many-particle level, which is different from our three-minima regime at the single-particle level.
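The lowest band and the counting of its minima can be reproduced numerically by diagonalising a 3×3 dressed Hamiltonian on a grid of quasimomenta. The sketch below uses a commonly adopted form of H(k_x) in the basis of the three momentum-shifted Zeeman states, with the Raman coupling entering the off-diagonal elements as ħΩ_R/2 and the quadratic Zeeman shift acting on the m_F = ±1 branches; since Eq. (1) is not reproduced in the text above, this matrix convention should be read as an illustrative assumption rather than the exact Hamiltonian of this work.

```python
import numpy as np

def lowest_band(k, omega_r, eps, delta=0.0):
    """Lowest eigenvalue of an assumed 3x3 Raman-dressed Hamiltonian.
    Units: energies in E_r, momenta in k_r. The omega_r/2 off-diagonal
    coupling is an assumed convention, not necessarily Eq. (1) of the paper."""
    h = np.array([[(k - 2.0)**2 + eps - delta, omega_r / 2.0, 0.0],
                  [omega_r / 2.0,              k**2,          omega_r / 2.0],
                  [0.0,                        omega_r / 2.0, (k + 2.0)**2 + eps + delta]])
    return np.linalg.eigvalsh(h)[0]

def count_minima(omega_r, eps, kmax=3.5, n=2001):
    """Count local minima of the lowest band on a symmetric momentum grid."""
    ks = np.linspace(-kmax, kmax, n)
    e = np.array([lowest_band(k, omega_r, eps) for k in ks])
    interior = (e[1:-1] < e[:-2]) & (e[1:-1] < e[2:])
    return int(np.count_nonzero(interior))

# Near the parameters quoted for Fig. 1(c), the assumed model gives three
# nearly degenerate minima; scanning (omega_r, eps) maps out the phase regions.
print(count_minima(omega_r=2.0, eps=-0.23))
```

With these assumptions, the parameters quoted for Fig. 1(c) indeed produce three nearly degenerate minima, and scanning (Ω_R, ε) with count_minima reproduces the qualitative one-, two- and three-minima structure of Fig. 1(d).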
The physical reason for the phase diagram in Fig. 1(d) can be understood as follows. At (Ω_R, ε) = (0, 0), the energy dispersions are the three bare parabolas located at k_x^min/k_r = −2, 0, 2. With increasing Ω_R, gaps open at the anticrossing points, while ε shifts the middle branch up (negative ε) or down (positive ε). When the middle branch shifts up (decreasing ε), the minimum located at k_x = 0 merges with the two neighbouring maxima into a single maximum, and the system enters the two-minima regime. Conversely, when the middle branch shifts down (increasing ε), the two minima located at k_x^min/k_r = −2, 2 merge with their neighbouring maxima, leaving only the minimum at k_x = 0, and the system enters the single-minimum regime. When the Raman coupling is strong enough, the minimum located at k_x = 0 is destroyed by the anticrossing between the dispersion curves, and as a result the three-minima regime no longer exists.

Figure 2: (color online) Momentum-resolved radio-frequency (rf) spectroscopy for reconstructing the band dispersions of Fig. 1(c) for a spin-orbit coupled Fermi gas. Plots (a), (b) and (c) show the momentum-resolved rf spectroscopies for the three Zeeman states, respectively, while (d) shows the band dispersions reconstructed by combining (a), (b) and (c). Parameters used: ħΩ_R = 2E_r, ħε = −0.23E_r, chemical potential μ = 3E_r and temperature T = 0.6μ, taking into account the energy resolution of the spectroscopy γ ~ 0.1E_r. We have replaced the δ function for energy conservation by δ(x) = (γ/π)/[x² + γ²] [23,33]. Note that when the chemical potential is increased, the transfer strength becomes stronger and the higher branches also become occupied, since there are more and more atoms in the system.
III. MOMENTUM-RESOLVED RADIO-FREQUENCY (RF) SPECTROSCOPY
By using the same method of Raman dressing as in the NIST experiments, spin-orbit coupled Fermi gases have also been realized experimentally at Shanxi University (40K) [22] and at the Massachusetts Institute of Technology (MIT) (6Li) [23], where the band dispersions were studied by momentum-resolved rf spectroscopy and spin-injection spectroscopy, respectively. Spin-injection spectroscopy uses an rf field to inject free atoms from a reservoir state into the empty spin-orbit coupled system, after which the momentum and spin of the injected atoms are mapped out using time of flight and spin-resolved detection; momentum-resolved rf spectroscopy instead uses an rf field to transfer atoms from one of the hyperfine states constituting the spin-orbit coupled system into an empty reservoir state. For a non-interacting system, momentum-resolved rf spectroscopy yields information equivalent to that of spin-injection spectroscopy.
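For a non-interacting gas, a toy version of such a momentum-resolved rf map can be generated by weighting each dressed band at every k_x by the bare-state content of the probed Zeeman state and by a Fermi factor, and broadening the energy-conservation delta function into a Lorentzian. The sketch below does this with the same assumed 3×3 Hamiltonian convention as in the previous sketch and with the μ, T and γ values quoted in the caption of Fig. 2; it illustrates the procedure rather than reproducing the calculation of Ref. [33].

```python
import numpy as np

def rf_map(m_index, omega_r=2.0, eps=-0.23, mu=3.0, T=1.8, gamma=0.1,
           ks=np.linspace(-3.5, 3.5, 241), ws=np.linspace(-1.0, 9.0, 301)):
    """Toy momentum-resolved rf intensity I(w, k) for Zeeman state m_index (0, 1, 2),
    using an assumed 3x3 dressed Hamiltonian (energies in E_r, momenta in k_r)."""
    img = np.zeros((len(ws), len(ks)))
    for j, k in enumerate(ks):
        h = np.array([[(k - 2.0)**2 + eps, omega_r / 2, 0.0],
                      [omega_r / 2, k**2, omega_r / 2],
                      [0.0, omega_r / 2, (k + 2.0)**2 + eps]])
        evals, evecs = np.linalg.eigh(h)
        for band in range(3):
            weight = abs(evecs[m_index, band])**2                    # bare-state content of the band
            fermi = 1.0 / (np.exp((evals[band] - mu) / T) + 1.0)     # occupation of the band
            img[:, j] += weight * fermi * (gamma / np.pi) / ((ws - evals[band])**2 + gamma**2)
    return img

# Summing the three Zeeman-state maps reconstructs the full band dispersions (cf. Fig. 2(d)).
total = sum(rf_map(m) for m in range(3))
```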
We have calculated the momentum-resolved rf spectroscopy in order to show that it is experimentally possible to observe the three-minima band structure shown in Fig. 1(c) (see [33] for details of the calculation and [34] for a recent experiment). We present the results in Fig. 2, where plots (a), (b) and (c) show the momentum-resolved rf spectroscopy for the three Zeeman states that are selected to synthesize the spin-orbit coupling, and (d) shows the band structure reconstructed by combining plots (a), (b) and (c). The reason for reconstructing the band dispersions of Fig. 1(c) from the rf spectroscopies of all three Zeeman states is that each branch of the band dispersions is a mixture of the bare dispersions of the three Zeeman states. From Fig. 2(d), we see that the qualitative features of the band structure in Fig. 1(c) are clearly visible. The rf spectrum also reveals an important feature of the system, namely that the weights of the three dressed spin states are largely determined by the three bare branches, which is the essential point for the emergence of the two different stripe phases discussed later.

Figure 3: (color online) Stripe phases obtained from the band dispersions of Fig. 1(c) for a BEC with k_0 = 1.88k_r when collisional interactions are present. Shown are the density distributions of the m = 0 Zeeman state with (a) g_0/g_2 = 1 and (b) g_0/g_2 = −1, where g_0 > 0 for both cases (a harmonic trap is accounted for in the numerics by the local density approximation). Typical parameters realized experimentally as in [21] are E_r/ħ = 1.1 × 10^4 Hz, g_0 = 7.79 × 10^−12 Hz cm³ and number of atoms N = 1.8 × 10^5.
IV. STRIPE PHASES FROM THE THREE MINIMA
There are two possible phases for the BEC in the two-minima regime when interactions are present: a plane-wave phase and a stripe phase, depending on whether a single minimum or two minima are occupied [26]. For our three-minima case, at the single-particle level the ground state for a BEC is triple degenerate and is described by a superposition ψ_m(x) = A_+ χ^{p_+}_m(x) + A_0 χ^{p_0}_m(x) + A_- χ^{p_-}_m(x), where A_{±,0} are complex amplitudes and χ^{p_{±,0}}_m(x) = e^{i p_{±,0} x} times the spinor of the lowest eigenstate of the single-particle Hamiltonian (1) at the three minima k_x = ±k_0, 0. At the many-particle level, the interaction selects which minimum or minima the system condenses into by minimizing the interaction energy. For example, a single non-zero component of A_{±,0} corresponds to a plane-wave phase, while two non-zero components of A_{±,0} create a standing-wave phase [26,35]. One interesting consequence of the triple-degenerate minima regime is that there are two different kinds of stripe phases with different wavelengths. When the BEC occupies the two minima at k_x = ±k_0, the resulting stripe phase has half the wavelength of the one obtained when the BEC occupies the two minima at k_x = k_0 and k_x = 0, or at k_x = −k_0 and k_x = 0.
The interaction Hamiltonian for a three-component BEC is given by Ĥ_int = ∫ d³r [g_0 n̂²(r) + g_2 F̂²(r)] [35,36], where n̂ = Σ_m n̂_m is the total population of the three Zeeman states and F̂ = φ†_α F_αβ φ_β is the spin density, with F the spin-1 generalization of the Pauli matrices. The complex amplitudes A_{±,0} are determined by minimizing the Gross-Pitaevskii (GP) functional of the single-particle Hamiltonian plus the interaction Hamiltonian. In the numerical investigations, apart from the plane-wave phase discussed before [26,28,35], which results from the occupation of a single minimum, we also find two kinds of stripe phases with different wavelengths. Fig. 3 shows a typical example of the density of the m = 0 Zeeman state for two different ratios of g_0 and g_2, where the difference of a factor of two in wavelength is clearly seen. Note that since these two kinds of stripe phases are obtained with the same laser parameters, i.e., they stem from the same single-particle dispersion, the change of wavelength with laser parameters discussed in [26] cannot explain their origin.
To better understand the nature of the two different stripe phases, we write the density of one of the Zeeman states (e.g., m = 0) as n_0 = φ_0 φ*_0, with φ_0 = A_- e^{-ik_0 x} a_- + A_0 a_0 + A_+ e^{ik_0 x} a_+, where a_{±,0} (taken as real) is the m = 0 component of the spinor wavefunction at each minimum and A_{±,0} is the corresponding complex amplitude. Because of the nonzero overlap between the spinor parts of the wavefunctions, the density of each Zeeman state develops a stripe structure. A straightforward calculation gives n_0 = C + (C_1 e^{ik_0 x} + c.c.) + (C_2 e^{2ik_0 x} + c.c.), where C = |A_-|²|a_-|² + |A_0|²|a_0|² + |A_+|²|a_+|², C_1 = a_0 (A*_- a_- A_0 + A_+ a_+ A*_0), and C_2 = A*_- a_- A_+ a_+. For g_0 > 0, we find that when g_0/g_2 = 1, the values of A_{±,0} that minimize the interaction energy always have vanishing A_0, which means C_1 = 0; the wavelength of the stripe phase is thus π/k_0 in this case. For g_0/g_2 = −1, the A_{±,0} are all nonvanishing, but since a_0 is dominant in this case (see for instance Fig. 2(b)), the term e^{ik_0 x} dominates over e^{2ik_0 x}, which means the wavelength of the stripe in this case is 2π/k_0. The two different stripe phases therefore originate from the separation between the three minima. Further physical insight may be gained from the fact that the weights of the three dressed branches are largely determined by the three Zeeman states themselves (e.g., see Fig. 2), i.e., the minima at ±k_0 and 0 are dominated by the Zeeman states m = ∓1 and 0, respectively. So when g_2 > 0 (g_2 < 0), the system favours zero (large) spin to minimize the interaction energy, and consequently m = ±1 (m = 1, 0 or m = −1, 0) are occupied. This shows that the two different kinds of stripe phases have an interesting connection with the ferromagnetic and polar phases of a spin-1 BEC [36], which is the true manifestation of spin-orbit coupling in this system: the structure in pseudospin space (ferromagnetic or polar) is transferred to a structure in orbital space (short- or long-wavelength stripes), since the three dressed spin states are represented by the three minima with different momenta. It is certainly tempting to conjecture that other types of stripe phases will emerge when more Zeeman states are included. Since the two different kinds of stripe phases originate from the sign of g_2 when the g_2 term dominates over the g_0 term (assumed positive), one could tune g_0/g_2 from +1 to −1 to observe the transition of the wavelength from π/k_0 to 2π/k_0. The dynamics of this transition would depend on experimental details such as non-adiabatic effects. The interaction parameters required to observe these stripe phases can be reached experimentally by optical Feshbach resonance [37] (see also the recent experiment [38] on Raman-induced Feshbach resonance in this setup), and the different stripe structures can be probed by Bragg light scattering [39] or detected by measuring the displacement of the atomic cloud after expansion when the trap is turned off [26].
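The factor-of-two difference in stripe wavelength follows directly from the expression for n_0 above. The sketch below simply evaluates n_0(x) for the two occupation patterns; the amplitudes and spinor components are made-up numbers chosen only to exhibit the e^{ik_0x} versus e^{2ik_0x} modulation, and are not taken from the Gross-Pitaevskii minimisation.

```python
import numpy as np

k0 = 1.88                       # in units of k_r, as quoted for Fig. 3
x = np.linspace(0.0, 10.0, 2000)

def n0(A_minus, A_zero, A_plus, a_minus, a_zero, a_plus):
    """Density of the m = 0 component for condensation amplitudes A and
    (real) m = 0 spinor components a at the three minima -k0, 0, +k0."""
    phi0 = (A_minus * np.exp(-1j * k0 * x) * a_minus
            + A_zero * a_zero
            + A_plus * np.exp(1j * k0 * x) * a_plus)
    return np.abs(phi0)**2

# Illustrative (not GP-optimised) parameters:
# g0/g2 = +1 type: only the k = +/- k0 minima occupied -> wavelength pi/k0.
n_short = n0(1 / np.sqrt(2), 0.0, 1 / np.sqrt(2), a_minus=0.3, a_zero=0.9, a_plus=0.3)
# g0/g2 = -1 type: all three minima occupied with dominant a_zero -> wavelength 2*pi/k0.
n_long = n0(0.5, 1 / np.sqrt(2), 0.5, a_minus=0.3, a_zero=0.9, a_plus=0.3)

for n in (n_short, n_long):
    spectrum = np.abs(np.fft.rfft(n - n.mean()))
    freq = np.fft.rfftfreq(x.size, d=x[1] - x[0])       # cycles per unit length
    print(2 * np.pi * freq[spectrum.argmax()])           # dominant wavevector: ~2*k0, then ~k0
```

The printed dominant wavevectors come out close to 2k_0 and k_0, i.e., stripe wavelengths π/k_0 and 2π/k_0, in agreement with the argument above.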
V. CONCLUSIONS AND DISCUSSIONS
Motivated by the recent experiments on synthetic spin-orbit coupled quantum gases [21-25] in the two-minima regime, we investigated and showed how a three-minima regime can be obtained in this setup. We found that when the contributions of the three Zeeman states are comparable, triple-degenerate minima appear at the bottom of the band dispersions, which translates into a spin-orbit coupled spin-1 quantum gas. We further found that there are two different kinds of stripe phases in this setup, which have their roots in the ferromagnetic and polar phases of the spin-1 BEC; that is, due to spin-orbit coupling, the structure in pseudospin space is manifested as a structure in orbital space. The scenario can be generalised to create a spin-orbit coupled high-spin quantum gas by including more Zeeman states. We note that recently a different experimental technique, creating two minima in momentum space by shaking an optical lattice, has been demonstrated, together with the in situ observation of ferromagnetic domains [40]. These techniques could also be applied to the three-minima regime studied in this work.
Quantum correlations of two-qubit states with one maximally mixed marginal
We investigate the entanglement, CHSH nonlocality, fully entangled fraction and symmetric extendibility of two-qubit states that have a single maximally mixed marginal. Within this set of states, the steering ellipsoid formalism has recently highlighted an interesting family of so-called 'maximally obese' states. These are found to have extremal quantum correlation properties that are significant in the steering ellipsoid picture and for the study of two-qubit states in general.
I. INTRODUCTION
Quantum steering ellipsoids provide a faithful and intuitive representation of two-qubit states [1][2][3][4]. If Alice and Bob each hold a qubit of a non-product state then Alice's Bloch vector is 'steered' when Bob performs a local measurement. Given all possible measurements by Bob, the set of Bloch vectors to which Alice can be steered forms her steering ellipsoid E inside the Bloch sphere. E is described by its centre c and a real, symmetric 3 × 3 matrix Q. The eigenvalues of Q give the squares of the ellipsoid semiaxes s i and the eigenvectors give the orientation of these axes. Not every E inside the Bloch sphere describes a physical two-qubit state; the necessary and sufficient conditions for physicality have recently been given in Ref. [5].
In the steering ellipsoid formalism, the set of canonical states is of particular importance. These correspond to two-qubit states in which Bob's marginal is maximally mixed. A general two-qubit state ρ is transformed to its canonical state ρ̃ by the local filtering operation [2] ρ̃ = (𝟙 ⊗ (2ρ_B)^{-1/2}) ρ (𝟙 ⊗ (2ρ_B)^{-1/2}), where ρ_B = tr_A ρ. Since E is invariant under this transformation, only canonical states are needed to describe all possible physical steering ellipsoids. For a canonical state, Alice's Bloch vector coincides with the ellipsoid centre c. The ellipsoid matrix of a general state ρ is defined using its canonical state by Q = T T^T, where T is the correlation matrix of ρ̃. The ellipsoid semiaxes are therefore given by s_i = |t_i|, where t_i are the signed singular values of T. Without loss of generality we will say that the semiaxes are ordered such that s_1 ≥ s_2 ≥ s_3. The chirality of E is defined as χ = sign(det T) = sign(t_1 t_2 t_3) and relates to the separability of the quantum state [5]; any entangled state must have χ = −1. In Ref. [5] we investigated extremal states lying on the physical-unphysical boundary by finding the largest-volume physical E for any given c. For c = (0, 0, c), the maximal-volume ellipsoid E_c^max has major semiaxes s_1 = s_2 = √(1 − c) and minor semiaxis s_3 = 1 − c (see Fig. 1). Since E is invariant under Bob's local filtering operations, the same E_c^max describes a whole manifold of states in which Bob's Bloch vector can take any value. However, by choosing the canonical state, which has Bob's marginal maximally mixed, we can associate with any given E_c^max a unique two-qubit state ρ_c^max. This is the so-called 'maximally obese' state, which forms a family parametrised by 0 ≤ c ≤ 1:

ρ_c^max = ½ (|ψ_c⟩⟨ψ_c| + c |01⟩⟨01|),   |ψ_c⟩ = |00⟩ + √(1 − c) |11⟩ (unnormalised),   (2)

which is Choi-isomorphic to the trace-preserving single-qubit amplitude-damping channel with decay probability c.
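The canonical-state construction and the ellipsoid data (c, s_i, χ) are straightforward to evaluate numerically. The short sketch below implements the filtering operation quoted above and reads off the centre and semiaxes, taking T_ij = tr(ρ̃ σ_i ⊗ σ_j) for the canonical state as the assumed definition of the correlation matrix entering Q = T T^T; the example state is the maximally obese state in the explicit form given in Eq. (2).

```python
import numpy as np

I2 = np.eye(2)
PAULI = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

def steering_ellipsoid(rho):
    """Return (centre c, semiaxes s, chirality chi) of Alice's steering ellipsoid,
    via the canonical state rho_t = (I (x) (2 rho_B)^{-1/2}) rho (I (x) (2 rho_B)^{-1/2})."""
    rho_b = np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)      # Bob's marginal
    vals, vecs = np.linalg.eigh(2 * rho_b)
    filt = np.kron(I2, vecs @ np.diag(vals**-0.5) @ vecs.conj().T)   # I (x) (2 rho_B)^{-1/2}
    rho_t = filt @ rho @ filt.conj().T
    c = np.real([np.trace(rho_t @ np.kron(s, I2)) for s in PAULI])   # Alice's Bloch vector = centre
    T = np.real([[np.trace(rho_t @ np.kron(si, sj)) for sj in PAULI] for si in PAULI])
    s = np.sort(np.linalg.svd(T, compute_uv=False))[::-1]            # semiaxes s1 >= s2 >= s3
    return c, s, np.sign(np.linalg.det(T))

# Example: the maximally obese state at c = 0.5, in the form quoted in Eq. (2).
c = 0.5
psi = np.array([1, 0, 0, np.sqrt(1 - c)])                            # |00> + sqrt(1-c)|11>, unnormalised
rho_max = 0.5 * (np.outer(psi, psi) + c * np.diag([0, 1, 0, 0]))     # + (c/2)|01><01|
print(steering_ellipsoid(rho_max))   # centre ~(0,0,0.5), semiaxes ~(0.707, 0.707, 0.5), chi = -1
```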
With the exception of c = 1, the maximally obese states are entangled; moreover, ρ max c is the state that maximises concurrence over the set of all two-qubit states that have steering ellipsoid centred at c [5].
Here we further investigate the set of canonical twoqubit states, with a particular focus on the family of maximally obese states. We find that ρ max c maximises three more measures of quantum correlation -CHSH violation, fully entangled fraction, and negativity -over the set of canonical two-qubit states with a given c. We show that any maximally obese state must be either CHSH nonlocal or symmetrically extendible. Furthermore, in the context of steering ellipsoids, the entanglement properties of canonical states are found to correspond directly to simple geometric features of E. Finally, we place necessary bounds on c for a general two-qubit state (i.e., one without any restriction on Bob's marginal) to be CHSH violating or useful for quantum teleportation.
II. CHSH VIOLATION AND SYMMETRIC EXTENDIBILITY
Consider the Clauser-Horne-Shimony-Holt (CHSH) scenario [6] with Alice and Bob sharing a canonical two-qubit state ρ̃ of the form (1). Alice can measure her qubit in one of the two directions α or α', and Bob can measure his qubit in β or β'. Define the operator

B = α·σ ⊗ (β + β')·σ + α'·σ ⊗ (β − β')·σ.

The maximal CHSH violation gives a measure of Bell nonlocality and is given by β(ρ̃) = max_B |tr(ρ̃ B)|, where the maximisation is performed over all directions α, α', β, β'. This gives [7]

β(ρ̃) = 2 √(s_1² + s_2²).   (3)

In the steering ellipsoid picture, the entanglement of a state depends on the centre vector c, the size of E and its skew c^T Q c [3]. In contrast to this, the CHSH nonlocality of a canonical state has a remarkably simple geometric interpretation: it depends on only the two longest semiaxes of E and not on the position or orientation of E inside the Bloch sphere.

Theorem 1. From the set of all canonical states with a given c = (0, 0, c), the state with maximal CHSH violation is the maximally obese ρ_c^max, achieving β(ρ̃) = 2√(2(1 − c)).

Proof. According to Eq. (3), we need to bound s_1² + s_2². The most CHSH nonlocal state will be entangled and so has χ = −1. From the conditions for physicality given in Theorem 1 of Ref. [5] we have s_1² + s_2² ≤ 1 − c² + 2 s_1 s_2 s_3 − s_3². As described in the appendix of Ref. [5], we can use the Karush-Kuhn-Tucker conditions to show that the maximal-volume E_c^max also maximises 2 s_1 s_2 s_3 − s_3² for a given c = (0, 0, c). This E_c^max has s_1 = s_2 = √(1 − c) and s_3 = 1 − c, so that s_1² + s_2² ≤ 2(1 − c), with equality attained by ρ_c^max; hence β(ρ̃) ≤ 2√(2(1 − c)).

Let us also consider the symmetric extendibility of maximally obese states. A bipartite quantum state ρ_AB is symmetrically extendible with respect to Alice if there exists a tripartite state ρ_AA'B for which tr_A(ρ_AA'B) = tr_A'(ρ_AA'B) [8]. Originally introduced as a test for entanglement [9], symmetric extendibility has a number of operational interpretations. For example, a symmetrically extendible state cannot be used for one-way entanglement distillation [10] or one-way secret key distillation [11]. The relationship between symmetric extendibility and Bell nonlocality has also been studied; the results of Ref. [12] show that a two-qubit state cannot be both symmetrically extendible and CHSH nonlocal. Although there exist (necessarily entangled) two-qubit states that are neither symmetrically extendible nor CHSH nonlocal, a maximally obese state must have one of these properties.
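Eq. (3) makes the CHSH quantity of a canonical state a one-line computation from the two largest singular values of its correlation matrix. The sketch below evaluates it for the family ρ_c^max, in the explicit form quoted in Eq. (2), and confirms numerically that the value crosses the classical bound of 2 exactly at c = 1/2, consistent with the partition just described.

```python
import numpy as np

PAULI = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

def chsh_max(rho):
    """Maximal CHSH value 2*sqrt(s1^2 + s2^2) from the two largest singular
    values of the correlation matrix T_ij = tr(rho sigma_i (x) sigma_j)."""
    T = np.real([[np.trace(rho @ np.kron(si, sj)) for sj in PAULI] for si in PAULI])
    s = np.sort(np.linalg.svd(T, compute_uv=False))[::-1]
    return 2.0 * np.sqrt(s[0]**2 + s[1]**2)

def rho_obese(c):
    """Maximally obese state in the form quoted in Eq. (2)."""
    psi = np.array([1, 0, 0, np.sqrt(1 - c)])
    return 0.5 * (np.outer(psi, psi) + c * np.diag([0, 1, 0, 0]))

for c in (0.25, 0.5, 0.75):
    print(c, round(chsh_max(rho_obese(c)), 4), round(2 * np.sqrt(2 * (1 - c)), 4))
# CHSH violation (value > 2) occurs only for c < 1/2.
```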
Theorem 2. The family of maximally obese states ρ max c is partitioned into states that are symmetrically extendible and states that are CHSH nonlocal. ρ max c is symmetrically extendible for 1/2 ≤ c ≤ 1 and CHSH nonlocal for 0 ≤ c < 1/2.
Proof. The necessary and sufficient condition for a two-qubit state ρ_AB to be symmetrically extendible with respect to Alice is tr(ρ_A²) ≥ tr(ρ_AB²) − 4√(det ρ_AB). Evaluating this condition for ρ_c^max shows that it is satisfied precisely when 1/2 ≤ c ≤ 1. For 0 ≤ c < 1/2, Theorem 1 gives β(ρ_c^max) = 2√(2(1 − c)) > 2, so these states are CHSH nonlocal.
III. FULLY ENTANGLED FRACTION
The fully entangled fraction of a bipartite state ρ is defined by f(ρ) = max_{|φ⟩} ⟨φ|ρ|φ⟩, where the maximum is taken over all maximally entangled states |φ⟩ [13]. f(ρ) is an important quantity in entanglement distillation protocols [14] and relates directly to the fidelity of quantum teleportation [15].
For a canonical state ρ̃ of the form (1), the fully entangled fraction is [16]

f(ρ̃) = ¼ (1 + s_1 + s_2 − χ s_3),

where we recall the ordering s_1 ≥ s_2 ≥ s_3. An entangled state must have χ = −1; in this case f(ρ̃) depends only on the sum of the steering ellipsoid semiaxes Σ_i s_i = tr √Q.
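The fully entangled fraction can also be evaluated directly from its definition by maximising ⟨φ|ρ|φ⟩ over maximally entangled states |φ⟩ = (𝟙 ⊗ U)|Φ+⟩ with U a single-qubit unitary, which gives a convention-independent numerical check on the semiaxis expression quoted above. The coarse grid search below is an illustrative sketch rather than an optimised routine.

```python
import numpy as np

PHI_PLUS = np.array([1, 0, 0, 1]) / np.sqrt(2)        # |Phi+> in the |ab> basis

def fully_entangled_fraction(rho, n=36):
    """Brute-force f(rho) = max <phi|rho|phi> over maximally entangled
    |phi> = (I (x) U)|Phi+>, with U sampled on a coarse angular grid (illustrative only)."""
    best = 0.0
    for a in np.linspace(0, np.pi, n):
        for b in np.linspace(0, 2 * np.pi, n, endpoint=False):
            for g in np.linspace(0, 2 * np.pi, n, endpoint=False):
                u = np.array([[np.cos(a / 2), -np.exp(1j * g) * np.sin(a / 2)],
                              [np.exp(1j * b) * np.sin(a / 2),
                               np.exp(1j * (b + g)) * np.cos(a / 2)]])
                phi = np.kron(np.eye(2), u) @ PHI_PLUS
                best = max(best, float(np.real(phi.conj() @ rho @ phi)))
    return best

# Quick check on a Werner state p|Phi+><Phi+| + (1 - p) I/4, for which f = p + (1 - p)/4.
p = 0.6
rho_w = p * np.outer(PHI_PLUS, PHI_PLUS) + (1 - p) * np.eye(4) / 4
print(fully_entangled_fraction(rho_w))   # ~0.70
```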
Similar to CHSH nonlocality, the fully entangled fraction of a canonical state depends only on the size of E and not on its position or orientation.
Theorem 3. From the set of all canonical states with a given c, the state with the highest fully entangled fraction is the maximally obese ρ max c , as given in Eq. (2).
We can also consider this result in the Choi-isomorphic setting, using the fact that ρ_c^max is isomorphic to the amplitude-damping channel. Let us say that Alice prepares the Bell state |ψ+⟩ and sends one qubit of it to Bob through a trace-preserving quantum channel Φ, intending the resulting shared state to act as a resource for teleportation. From the set of all non-unital maps Φ for which Φ(𝟙/2) = (𝟙 + c·σ)/2 with c = (0, 0, c), the one that maximises teleportation fidelity is the amplitude-damping channel.
These results complement previous studies of teleportation, which have shown that passing a resource state through a dissipative channel can enhance the average teleportation fidelity [16,18] as well as identifying the filtering operations that achieve optimal fidelity for a given resource state [19,20].
IV. CONCURRENCE AND NEGATIVITY
We now consider two entanglement monotones, both of which range from 0 for a separable state to 1 for a maximally entangled state.
For a two-qubit state ρ, define the spin-flipped state as ρ̂ = (σ_y ⊗ σ_y) ρ* (σ_y ⊗ σ_y) and let λ_1, ..., λ_4 be the square roots of the eigenvalues of ρρ̂ in non-increasing order. The concurrence is then given by C(ρ) = max(0, λ_1 − λ_2 − λ_3 − λ_4). Negativity is an entanglement measure based on the Peres-Horodecki criterion [21,22]. Let μ_min be the smallest eigenvalue of ρ^{T_B}; the negativity is then given by N(ρ) = max(0, −2μ_min) [23,24]. In Ref. [5] we bounded the concurrence of any two-qubit state in terms of the volume of its steering ellipsoid. This gave us the bound C(ρ̃) ≤ √(1 − c) for a canonical state ρ̃ of the form (1). The bound is saturated by maximally obese states ρ_c^max. Our results on CHSH violation allow us to derive another bound that is neither stronger nor weaker than this one. Ref. [25] gives the bound 2√2 C(ρ̃) ≤ β(ρ̃). From Eq. (3) and the ordering s_1 ≥ s_2 ≥ s_3 we then obtain C(ρ̃) ≤ s_1. Although this bound is distinct from C(ρ̃) ≤ √(1 − c), it too is saturated by maximally obese states.
Numerical results show that the negativity of a canonical state is bounded as N(ρ̃) ≤ s_3. As discussed in Theorem 3 we have s_3 ≤ 1 − c, and so N(ρ̃) ≤ 1 − c. Again, the bound is saturated by ρ_c^max.
We therefore see that in the steering ellipsoid picture, the concurrence of a canonical state is upper bounded by the length of the major semiaxis while the negativity is upper bounded by the length of the minor semiaxis [26]. For maximally obese states, these entanglement measures are in fact equal to the lengths of these semiaxes and can thus be directly obtained from a geometric visualisation of E_c^max.
As discussed in Ref. [5], the maximally obese states form a special single-parameter class of the generalised Horodecki state (see, for example, Refs. [23,27,28]). Other classes of the generalised Horodecki state have been studied before and were seen to have certain extremal properties. For example, the Verstraete-Verschelde states [29] minimise the fully entangled fraction for a given concurrence and negativity; these states obey C = [N + N (4 + 5N )]/2. Our maximally obese states ρ max c maximise concurrence for a given CHSH nonlocality and obey C = √ N .
V. BOUNDS FOR CHSH NONLOCALITY AND TELEPORTATION FOR GENERAL STATES
CHSH nonlocality and fully entangled fraction are measures that do not transform straightforwardly under local filtering operations. The bounds given in Theorems 1 and 3 for canonical states cannot therefore be used to analytically derive bounds for β(ρ) and f(ρ) for a general (i.e., not necessarily canonical) two-qubit state ρ. However, numerical investigations lead us to conjecture remarkably simple expressions for these bounds (see Fig. 2).

Conjecture 1. Let ρ be a general two-qubit state with E centred at c. The CHSH nonlocality is tightly bounded by a function of c alone, with β(ρ) ≤ 2 whenever c ≥ 1/2. This allows us to place a necessary bound on the steering ellipsoid for a general two-qubit state ρ to be CHSH violating: to violate the CHSH inequality we need β(ρ) > 2 and so c < 1/2. We therefore see that a two-qubit state whose E is centred too close to the surface of the Bloch sphere cannot exhibit CHSH nonlocality.

Conjecture 2. Let ρ be a general two-qubit state with E centred at c. The fully entangled fraction is tightly bounded as f(ρ) ≤ 1 − c/2.
Recall that teleportation fidelity is related to fully entangled fraction by F (ρ) = [2f (ρ) + 1]/3. Using only state estimation and classical communication, it is possible to achieve a teleportation fidelity of 2/3 [15]. To beat this classical limit we require f (ρ) > 1/2, and so we see that for all c < 1 there exists E describing a state that achieves truly quantum teleportation. An optimal universal cloning machine achieves a fidelity of 5/6 [30,31]. To beat this limit we require f (ρ) > 3/4 and hence c < 1/2, which is the same bound as we obtained as a necessary condition for E to be CHSH violating.
VI. OUTLOOK
The steering ellipsoid centre c provides a natural parametrisation of two-qubit states and leads to geometric interpretations and simple bounds for several measures of quantum correlation. Whether these results can be easily extended to higher dimensional quantum systems remains to be seen. In particular, what would be the analogous family of maximally obese states in higher dimensions? It seems likely that the set of states Choi-isomorphic to higher dimensional amplitude-damping channels [32] will also have interesting maximal quantum correlation properties.
|
2014-08-29T11:47:46.000Z
|
2014-04-15T00:00:00.000
|
{
"year": 2014,
"sha1": "c6055cca8307fad397976f1e5b762756ed10a6db",
"oa_license": null,
"oa_url": "http://eprints.whiterose.ac.uk/139398/1/PhysRevA.90.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "673b9b9ef5cf786041536dd714f8719eec42ed7a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
256291255
|
pes2o/s2orc
|
v3-fos-license
|
Layer-specific correlates of detected and undetected auditory targets during attention
In everyday life, the processing of acoustic information allows us to react to subtle changes in the auditory scene. Yet even when closely attending to sounds in the context of a task, we occasionally miss task-relevant features. The neural computations that underlie our ability to detect behaviorally relevant sound changes are thought to be grounded in both feedforward and feedback processes within the auditory hierarchy. Here, we assessed the role of feedforward and feedback contributions in primary and non-primary auditory areas during behavioral detection of target sounds using submillimeter spatial resolution functional magnetic resonance imaging (fMRI) at high field (7 T) in humans. We demonstrate that the successful detection of subtle temporal shifts in target sounds leads to a selective increase of activation in superficial layers of primary auditory cortex (PAC). These results indicate that feedback signals reaching as far back as PAC may be relevant to the detection of targets in the auditory scene.
Introduction
In a movie with a bank robbery scene, a criminal tries cracking a safe by carefully listening to any sounds while turning the wheels of the locking mechanism. By listening carefully, the robber detects when notches align, permitting the removal of a locking bar and opening of the safe. This highlights the auditory system's remarkable ability to process subtle acoustic information, upon which we base decisions and actions. Notably though, despite the robber keenly attending the sounds, an alternative scenario can be imagined, in which the same acoustic change is not detected, delaying or preventing the robbery. Here, we investigated why changes in the soundscape close to detection threshold, with the same behavioral relevance are sometimes detected and sometimes missed (despite being physically identical). We hypothesized that these changes in behavior are due to momentary fluctuations in attention and reflected in modulations of feedback signals.
In line with the known segregation of feedforward and feedback processes within the laminar organization of the cortex (Douglas et al., 1989;Douglas and Martin, 2004), previous studies in primary visual and auditory cortices demonstrated enhanced activity in superficial layers with attention (Lawrence et al., 2019;Liu et al., 2021;De Martino et al., 2015;Gau et al., 2020). In particular, electrophysiological research in animals has investigated the neural correlates of attention and highlighted changes in cortical oscillations in superficial layers (Lakatos et al., 2013;O'Connell et al., 2014). In humans, the modulation of cortical layers by attention in both vision and audition has been probed non-invasively using high-field functional magnetic resonance imaging (fMRI) (Lawrence et al., 2019;Liu et al., 2021;De Martino et al., 2015;Gau et al., 2020;Klein et al., 2018). In these studies, attentional modulation was probed by either drawing attention towards or away from the relevant stimulus (or stimulus feature). In particular, in the auditory domain, attending to an auditory stimulus (compared to a concurrently presented visual stimulus) has highlighted changes in frequency tuning (i.e. tuning width) (De Martino et al., 2015) and an increase in activation in superficial layers of the (primary) auditory cortex (Gau et al., 2020). In the visual domain, within-modality attentional manipulations (spatial or feature based attention) have been used in layer-specific studies, which demonstrated activity modulations in superficial layers (Lawrence et al., 2019;Liu et al., 2021) as well as changes in population receptive fields in deep layers of primary visual cortex (V1) (Klein et al., 2018). Altogether, these data indicate that the presence or absence of attention to stimuli modulates activity in superficial (and in some cases deep) layers. Here we asked where in the auditory cortex and in which cortical layers, neural activity variations would be present that could explain why identical auditory stimuli presented under identical attentional instructions would be detected in some trials, and not in others. In line with literature ascribing a role of superficial layers in receiving attentional feedback, we hypothesized increased activity in superficial layers of auditory cortex may be related to fluctuations in attention thus variations in the perception (detection) of physically identical stimuli.
Apart from the segregation of feedforward and feedback signals across cortical depths, the auditory cortex has a tonotopic organization (Merzenich and Brugge, 1973;Formisano et al., 2003). Attention to frequency specific targets gain-modulates frequency-specific (tonotopic) regions in a layer dependent manner (O'Connell et al., 2014). Task-dependent changes in the receptive fields have been shown using invasive electrophysiology in animals in superficial layers of the auditory cortex (Francis et al., 2018) and have been suggested as neural correlates of selective attention. Similarly, frequency-specific effects of attention have been shown non-invasively in humans at a macroscopic level (De Martino et al., 2015;Riecke et al., 2018;Da Costa et al., 2013). We presented narrowband stimuli at two distinct frequencies (high and low), to understand whether the topographic organization of human auditory cortical areas interacts with laminar processing when detecting a relevant sound.
In particular, we asked human listeners to perform an auditory temporal detection task while concurrently acquiring layer-specific fMRI data. We hypothesized that, the change in soundscape (presence of a target) may be reflected in a modulation of middle cortical layers (for both detected and undetected targets). By comparing responses to (acoustically identical) perceptually detected and undetected targets, we localized responses related to the detection of sounds under constant, demanding attentional conditions. We hypothesized that, the behavioral relevance of attention is reflected in the change of population level activity in superficial cortical layers of the primary auditory cortex (Fig. 1C) and further hypothesized that this effect may be tonotopic.
Participants
Ten healthy participants (4 females, 6 males; median age 28.5) were recruited. All participants were students or employees of Maastricht University. The study was approved by the research ethics committee of the Faculty of Psychology and Neuroscience at Maastricht University. For every participant we acquired 1 run of the tonotopic localizer, between 3 and 9 runs of the target detection experiment (median number of runs = 6) and a high-resolution anatomical scan. Each volunteer participated in either one or two sessions depending on the number of functional runs collected in the main experiment (for a total of 14 sessions across all participants). Most participants had previous experience with high-resolution fMRI studies.
Experimental design and stimuli
All stimulus presentation scripts were written in Matlab (The MathWorks Inc., Natick, MA, USA), using the Psychophysics toolbox (Brainard, 1997) and custom code. Participants underwent a training session (~20 min) followed by a scanning session (~2 ½ hours). Participants 01, 02, 03 and 08 underwent two scan sessions, to acquire additional functional runs and reach the target of at least six functional runs of the main experiment. Prior to each scan session the sound intensity of the stimuli was adjusted individually to (perceptually) equalize the loudness of the experimental stimuli in the two stimulus conditions, and between listening outside the scanner (training session) and inside the scanner (main experiment).
Fig. 1. Task and Hypotheses. (A) Stimuli were periodic sequences of narrowband quintets repeating at 2 Hz. Two narrowband frequency ranges around 200 Hz and 1100 Hz were used to create low and high pitch sounds. Five sounds repeating at 50 Hz (10 ms ISI) formed a quintet (inset 1). 75% of the stimuli contained a target. Target sounds (TS; inset 2) had a different temporal structure: the third sound in a quintet was temporally shifted between 1.5 and 7 ms, depending on participants' perceptual (detection) threshold. Figure S1.1 shows behavioral detection rates per participant. (B) Target trials were sorted based on the behavioral response. (C) Expected laminar BOLD response profile. Both detected and undetected TS would entail a feedforward BOLD increase in middle layers, but a detected TS (magenta line) additionally increases the BOLD response in superficial layers compared to undetected TS (blue line). (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)

Target detection experiment. Participants were asked to detect a target constructed by temporally shifting (TS) one of the narrowband sounds forming the quintets (Fig. 1A). Narrowband sounds were centered around carrier frequencies of 200 Hz or 1100 Hz. The passbands around the carriers were constructed using equivalent rectangular bandwidths (ERBs = 4; Moore, 2003). Each passband consisted of a summation of 21 sinusoids with amplitude normalized to 1 and a random onset phase. A quintet consisted of five 10 ms narrowband sounds, each separated by 10 ms (see Fig. 1A, inset 1). Targets were constructed by shifting in time the third sound in a quintet (see Fig. 1A, inset 2). During a training session, participants' 70% detection threshold was determined by means of a 2-down-1-up staircase, in which the size of the temporal shift (TS; ranging between 1 and 9 ms) determined the difficulty of the detection task. The 70% detection threshold obtained outside the scanner was used as a starting threshold during the scanning session. The more challenging (i.e., louder) scan environment led to a (desirable) detection accuracy of ~50% during scanning. Maintaining task difficulty to achieve a detection rate of 50% required the experimenters to adjust the TS individually after every run, to ensure an approximately equal number of detected and undetected trials per participant, to be contrasted later in the analysis. Supplementary Fig. S1 displays the behavioral detection rates per participant for high and low sounds separately. All sounds were presented in silent intervals between acquisitions. After the sound finished, participants were cued by a green fixation cross to respond whether they had detected a target or not and instructed to press 1 or 2 on the button box. The cue for a button press was randomly jittered on each trial in the interval [0-200 ms] after the sound offset. Each run consisted of a total of 30 trials, 15 per carrier frequency, of which 3 trials per carrier were without a target and 12 contained a target. In addition, 3 silent trials per run were randomly interspersed, functioning as baseline for sound vs. silence contrasting.
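A simplified sketch of the stimulus construction described above is given below (narrowband bursts as a sum of 21 random-phase sinusoids around a carrier, five 10 ms bursts separated by 10 ms gaps, with an optional temporal shift of the third burst). The sampling rate, passband width and lack of onset ramps are placeholder simplifications, not the exact parameters used in the study.

```python
import numpy as np

fs = 44100  # sampling rate in Hz (assumed; not specified in the text)

def narrowband_burst(carrier, bandwidth, dur=0.010, n_sin=21, rng=None):
    """Sum of 21 sinusoids with random onset phase spread over a passband
    around the carrier (bandwidth is a stand-in for the ERB-based limits)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    t = np.arange(int(dur * fs)) / fs
    freqs = np.linspace(carrier - bandwidth / 2, carrier + bandwidth / 2, n_sin)
    phases = rng.uniform(0, 2 * np.pi, n_sin)
    burst = sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))
    return burst / np.max(np.abs(burst))

def quintet(carrier, bandwidth, shift_ms=0.0):
    """Five 10 ms bursts separated by 10 ms; the third burst can be delayed
    by shift_ms to create a target."""
    gap = np.zeros(int(0.010 * fs))
    parts = []
    for i in range(5):
        if i == 2 and shift_ms != 0.0:
            parts.append(np.zeros(int(abs(shift_ms) / 1000 * fs)))  # delay 3rd burst
        parts.append(narrowband_burst(carrier, bandwidth))
        parts.append(gap)
    return np.concatenate(parts)

standard = quintet(1100, 100)             # no target
target = quintet(1100, 100, shift_ms=4)   # third burst delayed by 4 ms
```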
Tonotopic localizer. To map tonotopic organization in the AC, a frequency localizer was performed (Formisano et al., 2003). We presented 7 center frequencies (130 Hz, 200 Hz, 306 Hz, 721 Hz, 1100Hz, 1700 Hz and 4000 Hz) in blocks. Each block consisted of three amplitude modulated tones centered on one of the center frequencies (center frequencies ± 0.1 octaves). Five center frequencies were log-spaced between 130 Hz and 4000 Hz, and two additional center frequencies were inserted (200 and 1100 Hz, the carrier frequencies employed in the target detection experiment). Tones were amplitude modulated (8 Hz, modulation depth of 1) and presented for 800 ms. During the localizer, participants were asked to fixate and passively listen to the sounds. The duration of the localizer was 7 ½ min.
MRI acquisition
Data acquisition was performed on a whole-body Magnetom scanner (nominal field strength 7 T; Siemens Medical Systems, Erlangen, Germany) at the Maastricht Brain Imaging Center, The Netherlands. All images were acquired using a 32-channel head coil (Nova Medical Inc., Wilmington, MA, USA).
Behavioral data analysis
Behavioral data were analyzed in Matlab (The MathWorks Inc., Natick, MA, USA). For every participant we determined the number of detected and undetected trials for the low and high carrier sounds separately (see Fig. S1.1 for behavioral performance per participant). Reaction times were not analyzed, as participants were cued to respond.
The second inversion image of the MP2RAGE was subjected to the automated segmentation in SPM to obtain tissue-probability maps. The non-brain tissue-probability maps (C3, C4, C5) were manually thresholded and combined with a brain mask, obtained from the second inversion image using FSL BET. By combining these, we obtain a brainmask that allows removing non-brain tissue and large veins (for a stepwise procedure see (Kashyap et al., 2019)). This anatomical pre-processing workflow was developed particularly to work well for MP2RAGE data (https://github.com/srikash/presurfer). The resulting mask was inspected, had the cerebellum manually removed and was further manually polished using ITK SNAP in combination with a graphics tablet (Intuos Art; Wacom Co.). The resulting mask was applied to the T1w image (UNI) of the MP2RAGE. We then used BrainVoyager's intensity inhomogeneity correction and up-sampled the image to a resolution of 0.4 mm isotropic, using the spatial transformation option in BrainVoyager's 3D Volume tools. Lastly, the image was transformed (only translation and rotations, no scaling) from native space into a space in which the anterior and posterior commissure were in the same plane (ACPC space). We refer to this space as the voxel space.
Segmentation. The resulting image was input to BrainVoyager's advanced segmentation routine to obtain a white matter (WM) mask. This initial WM mask was inspected and manually polished in ITK SNAP, where emphasis was placed on corrections in the region of interest (bilateral auditory cortex [AC]). The polished WM mask was input to the subsequent step of the advanced segmentation routine in BrainVoyager to obtain a GM mask. This GM mask tended to be too inclusive, containing blood vessels, posing a challenge especially around the strongly vascularized AC. Therefore, we manually polished the GM definition and GM/CSF boundary in ITK SNAP. As a last step the obtained GM/WM segmentation was manually split into two hemispheres. Cortical depth sampling. Using the GM/WM segmentation at 0.4 mm isotropic resolution, we measure the cortical thickness of individual segmented cortical hemispheres in volume space. Based on the cortical thickness we can perform whole-mesh cortical depth sampling, where we create surface meshes at equivolume cortical depth levels between the WM/GM boundary and the GM/CSF boundary (Waehnert et al., 2014). The created set of meshes at different cortical depth were then used to sample the functional data using trilinear interpolation. Surface visualizations are always based on the mid GM surface reconstruction.
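As a rough illustration of the depth-sampling step, the sketch below interpolates vertex coordinates between matched WM and CSF surfaces and samples a functional volume at each depth with trilinear interpolation. For brevity it uses equally spaced depth fractions, whereas the study uses the equivolume model of Waehnert et al. (2014); the array shapes are assumptions for the example.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_depths(volume, wm_vertices, csf_vertices, n_depths=11):
    """Sample `volume` (3-D array in voxel space) at n_depths surfaces
    obtained by linear interpolation between matched WM and CSF vertex
    coordinates (arrays of shape [n_vertices, 3], voxel coordinates)."""
    fractions = np.linspace(0.0, 1.0, n_depths)   # 0 = WM/GM, 1 = GM/CSF
    profiles = np.zeros((n_depths, wm_vertices.shape[0]))
    for d, frac in enumerate(fractions):
        coords = (1 - frac) * wm_vertices + frac * csf_vertices
        # trilinear interpolation of the volume at the vertex positions
        profiles[d] = map_coordinates(volume, coords.T, order=1)
    return profiles   # depth x vertex matrix of sampled values
```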
Anatomical ROI selection. Based on macro-anatomical landmarks (sulci and gyri) and following the definition reported in (Kim et al., 2000), the temporal lobe of each participant was divided into three anatomical ROIs in each hemisphere: Heschl's gyrus (HG), planum temporale (PT), planum polare (PP), drawn onto the inflated hemispheric surfaces, see Fig. 2B.
Functional data processing -tonotopic localizer
Preprocessing. Preprocessing of the localizer data was performed in BrainVoyager 21.4, the NeuroElf toolbox in Matlab, as well as custom code in Matlab R2017a (The MathWorks Inc). Where not specified otherwise, default settings were used. Slice-scan-time correction, 3D motion correction (with sinc interpolation), linear trend removal and high-pass filtering (7 cycles) was performed. BrainVoyager's COPE plugin was used to correct EPI geometric distortions using a pair of opposite-phase encoded data.
Statistical analysis. All statistical computations were performed at the level of single participants, by fitting a general linear model with a predictor for each center frequency to the data of the tonotopic localizer, obtaining a beta (response-strength) estimate for every predictor and computing a statistical activation map (FMap) of all predictors combined (contrast: sound > no sound). Fig. S3 shows the overall response to sounds in the localizer at a statistical significance threshold of qFDR < 0.05 for every participant. Tonotopic maps. Tonotopic maps were derived following the standard procedure of z-scoring the response of voxels on the temporal lobe per frequency predictor, thereby removing a response bias towards low frequencies, and then color coding each voxel according to the frequency to which it best responded (i.e. its preferred frequency, indicated by the beta value; Formisano et al., 2003).
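The best-frequency labeling step reduces to a few lines; a minimal sketch is shown below (each frequency predictor's betas are z-scored across voxels and each voxel is labeled with the predictor it responds to most; the beta array here is a random placeholder).

```python
import numpy as np

def best_frequency_map(betas, center_freqs):
    """betas: voxels x frequencies array of GLM betas from the localizer.
    Returns the preferred center frequency per voxel after z-scoring each
    frequency predictor across voxels."""
    z = (betas - betas.mean(axis=0)) / betas.std(axis=0)
    return np.asarray(center_freqs)[np.argmax(z, axis=1)]

freqs = [130, 200, 306, 721, 1100, 1700, 4000]   # Hz, as in the localizer
betas = np.random.rand(5000, 7)                   # placeholder beta estimates
bf = best_frequency_map(betas, freqs)
```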
Functional ROI definition. In addition to dividing the human auditory cortex in terms of its major anatomical landmarks, we define primary auditory cortex functionally using the main tonotopic gradient obtained in the localizer (Moerel et al., 2014), as the auditory cortex in humans displays large macro-anatomical variability (Heynckes, 2022;Heschl, 1878).
The statistical activation map (FMap) in response to sounds, and the tonotopic map derived from the localizer were up-sampled from their native resolution at 1.2 mm isotropic by linearly interpolating to 0.8 mm isotropic to match the high-resolution functional data of the main experiment. The obtained up-sampled tonotopic map was then projected on the individual's reconstruction of the inflated mid-GM surface for each hemisphere, which allowed locating the main tonotopic gradient. The most likely position of the primary auditory cortex was localized using the tonotopic gradient of high frequency (posteromedial HG) to low frequency (medial portion HG) and back to high frequency. Supplementary figure S2.3 displays the tonotopic maps of all ten participants.
Functional data processing -target detection experiment
Functional data were processed using BrainVoyager 21.4, the Neu-roElf toolbox in Matlab, as well as custom code in Matlab R2017a (The MathWorks Inc). Where not specified otherwise, default settings were used.
Preprocessing. Preprocessing for all high-resolution functional data was performed in the default order in BrainVoyager (slice-scan-time correction, 3D motion correction [with sinc interpolation and across runs], linear trend removal and high-pass filtering [7 cycles]). We corrected all functional images for EPI geometric distortions using BrainVoyager's COPE plugin based on the AP/PA images.
Co-registration of functional to anatomical images. The functional data of the first run were registered to the pre-processed anatomical data in native space using BrainVoyager's FMR-VMR co-registration. The positional information provided in the header is used for an initial alignment followed by fine-tuning co-registration using boundary-based registration. The result for the first run was visually inspected by overlaying the functional and anatomical images acquired in the same session and manually improved where necessary. The obtained initial alignment and fine-tuning alignment transformation files were used for the remaining runs within a session in combination with an ACPC transformation file to create a volume timecourse per run in the voxel space, using sinc interpolation. When a second session was acquired, co-registration of functional images was performed to in-session MPRAGE anatomical data. In a second step, between session anatomical data were then aligned using BrainVoyager's vmr-vmr co-registration and the resulting transformation matrix applied when creating volume time courses.
Fig. 2. Analysis approach. (A) Interleaved anatomical image and functional volume, highlighting correspondence between datasets; anatomical images are segmented (and manually corrected around the ROI) to identify white and gray matter. See Figure S2.5 for enlarged view. (B) The segmentation is used to reconstruct cortical surfaces (inflated view, with cortical curvature; light gray, gyrus; dark gray, sulcus). Anatomical ROIs (planum temporale (PT), Heschl's gyrus (HG) and planum polare (PP)) are defined based on major anatomical landmarks (Kim et al., 2000).
Functional data -statistical analysis
Statistical analysis per ROI. We computed a GLM with a separate predictor for every trial, classified as either being low detected, low undetected, low no Target or high detected, high undetected or high no Target, where high and low refers to the carrier frequency of the sound. Fig. 2A shows the overall response to sounds compared to baseline silence, corrected at qFDR <0.05 for an exemplary participant. (See Fig. S2.2 for all participants).
In a second step we sampled these single trial beta maps on 11 reconstructed depth dependent surfaces and averaged across trials of the same perceptual condition (Fig. 2D-G). To obtain laminar profiles multiple inclusion criteria guided the selection of vertices for sampling the mean beta surface maps (see S3.1). Vertices had to be within a particular ROI (PAC, HG, PP, PT). Their statistical F-value in response to sounds in the localizer needed to exceed F > 2 and statistical F-value in response to sounds in the main experiment exceeded F > 0.1, thereby ensuring that voxels with an (average) positive BOLD response to sounds in the main experiment were included, independent of depth. For each participant we extracted the mean (beta) across these vertices, per perceptual condition per depth. The perceptual conditions depended on the behavior of the participant and could lead to unequal condition size (see Fig.S1.1). Therefore, we bootstrapped a 95% confidence interval of the mean of trials per perceptual condition per depth (by bootstrapping 100 times the mean percent signal per perceptual condition).
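The bootstrap step amounts to resampling trials of one perceptual condition with replacement and recomputing the mean per depth; a minimal sketch is given below (the number of iterations follows the text, the array shape is an assumption for the example).

```python
import numpy as np

def bootstrap_mean_ci(trial_betas, n_boot=100, ci=95, seed=0):
    """trial_betas: trials x depths array for one perceptual condition.
    Returns the bootstrapped mean and percentile CI of the mean per depth."""
    rng = np.random.default_rng(seed)
    n_trials = trial_betas.shape[0]
    boot_means = np.array([
        trial_betas[rng.integers(0, n_trials, n_trials)].mean(axis=0)
        for _ in range(n_boot)
    ])
    half = (100 - ci) / 2
    lo, hi = np.percentile(boot_means, [half, 100 - half], axis=0)
    return boot_means.mean(axis=0), lo, hi
```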
Second-level group statistics for each ROI (n participants = 10) were carried out on the mean differences of the bootstrapped betas between the perceptual conditions extracted from each participant (detected minus no Target and undetected minus no Target; Figs. S3.4 and S3.5). By subtracting the response to the no target condition from both the responses to detected and undetected sounds we aimed to control for the layer dependent signal increase towards the superficial gray matter elicited by the draining vasculature (Turner, 2002;Polimeni et al., 2010). We binned the data across 11 depth levels as follows: deep - depths 1:3, middle - depths 5:7, superficial - depths 9:11, thereby ensuring equal sized depth bins. We used three predictors (depth [deep; middle; superficial], condition [detected minus no target; undetected minus no target] and their interaction) in a separate generalized linear mixed effects (GLME) model for each ROI. Model fits were compared between a fixed effect and a random effect model using likelihood ratio tests.
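A mixed-effects model of this form could be fit, for example, with participant as the grouping (random) factor; the sketch below uses statsmodels in Python purely as an illustration (the original analysis was run in Matlab, and the file and column names here are placeholders).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder columns: 'diff' = beta difference (condition minus no target),
# 'depth' = deep/middle/superficial, 'condition' = detected/undetected,
# 'subject' = participant ID used as grouping (random) factor.
df = pd.read_csv("roi_depth_differences.csv")   # hypothetical file
model = smf.mixedlm("diff ~ C(depth) * C(condition)", data=df,
                    groups=df["subject"])
result = model.fit()
print(result.summary())
```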
Assessing the frequency selectivity of the effect of detection in PAC. We expected voxels to retain their frequency preference (high vs low frequency) across the localizer and main experiment. To test this, we selected voxels whose time courses were modulated in response to sounds, exceeding a statistical threshold of F > 2 in the tonotopic localizer. In the localizer we performed best frequency (BF) mapping (i. e. tonotopic mapping). Using a GLM with only two predictors (200 Hz and 1100 Hz), we defined voxels as preferring low or high frequency. This allowed us to directly compare the preference from the tonotopy to the main experiment. For these groups of voxels (i.e., labeled as preferring low or high frequency in the localizer), we then plotted the response (after z-scoring as customary in tonotopic mapping) to the high and low preferring sounds (separately) in the main experiment (see Fig. S4.1).
Tonotopic analysis of main experiment. For the tonotopic analysis of the data we selected vertices in PAC as outlined in the previous section. In particular, we extracted the mean (beta) across vertices per perceptual condition (detected, undetected, no target), per depth (11 levels), in low-and high-preferring groups of voxels within PAC, for low and high presented sounds in the main experiment.
Second-level group statistics (n = 10) were carried out on the differences between perceptual conditions extracted from each participant (detected minus no Target and undetected minus no Target, Fig S4.2). We binned the data from 11 depth levels into three depth bins: deep - depths 1:3, middle - depths 5:7, superficial - depths 9:11, thereby ensuring equal sized depth bins. For the tonotopic analysis of PAC we used the predictors (depth [deep; middle; superficial], condition [detected minus no target; undetected minus no target], BFandSound [highSoundHighBF, highSoundLowBF, lowSoundHighBF, lowSoundLowBF] and their interactions; where BF stands for best frequency and was defined on the basis of the localizer [see above]) in a generalized linear mixed effects model (GLME).
Results
We examined the laminar response profile of human auditory cortex (AC), using 2-D gradient echo (GE) blood oxygen level dependent (BOLD) fMRI at 7 T, with sub-millimeter resolution, during perceptual detection of temporally shifted target sounds (TS) embedded in rhythmic sound sequences (Fig. 1A). Specifically, we contrasted different percepts of acoustically identical sound sequences containing a target (Fig. 1B).
Based on participants' responses, we labeled trials as detected (i.e., target present and detected), undetected (target present and not detected) and no target (target not present). In individual participants' data we estimated responses for every condition, focusing on primary and non-primary areas of human auditory cortex ( Fig. 2B-C) and sampled them onto 11 equivolume depth surfaces (Waehnert et al., 2014) (Fig. 2D). Figure S3.2 shows the laminar profile per participant ROI and condition.
Second-level group statistics (n = 10) were carried out on the differences of the mean (across participants) betas between the perceptual conditions across cortical depths (Fig. 3D-F). Specifically, we statistically assessed whether detected and undetected sounds differentially modulated the response across depths. By subtracting the response to the no target condition from both the responses to detected and undetected sounds we aimed to control for the layer dependent signal increase towards the superficial gray matter elicited by the draining vasculature (Turner, 2002;Polimeni et al., 2010).
No significant frequency-specific modulation of the layer-dependent detection effect
The best frequency definition within PAC was stable between the localizer and the main experiment (Fig. S4.1). To assess the frequency specificity of the detection effect we fitted a generalized linear mixed effects (GLME) model, splitting each ROI according to best frequency (BF) as defined in the localizer, with four predictors (depth [categorical], condition [detected minus no target; undetected minus no target], BFandSound [highSoundHighBF, highSoundLowBF, lowSoundHighBF, lowSoundLowBF] and their interactions). We did not detect a significant 3-way interaction between condition, depth and frequency (Depth:Condition:BFandSound; F(6,696) = 0.17, p > 0.05), indicating that the detection effect was not significantly different for high and low preferring targets in high and low preferring sub-regions of PAC. In follow-up analyses we collapsed data across frequency-preferring voxel populations and the frequency carrier of the sounds.
Detection of a target selectively increases activation in superficial layers of PAC
In PAC all three perceptual conditions show an increase in response from deep to superficial layers (Fig. 3 A). When subtracting the no Target condition, the additional modulation induced by detection of a target is apparent as an increase from deep to superficial layers (Fig. 3D red line), while non detected targets do not result in a significant change in response compared to no target trials (Fig. 3D -black line). We fitted a generalized linear mixed effects (GLME) model for each ROI with three predictors (depth [categorical, 3 levels], condition [detected minus no target; undetected minus no target] and their interaction) to test for the non-frequency-specific effect of target detection and its interaction with cortical depth. We detected a significant interaction between depth and condition in PAC (F (2,174) = 6.63, p < 0.01). Follow-up ANOVAs per detection condition showed a significant effect of depth for the condition represented by the subtraction of detected and no Target trials (F (1, 87) = 17.72, p < 0.001) and no significant effect of depth for the condition represented by the subtraction of undetected and no target trials (F (1,87) = 0.22, p = 0.8). The significant main effect of depth for the condition represented by the subtraction of detected and no target trials warranted the comparison in activation across the three depth levels. After correcting for the multiple comparisons (Bonferroni, across cortical depths) we observed significant differences between the deep and superficial (t (1,174) = 22.71, p < 0.01), as well as middle and superficial (t (1,174) = 6.8, p < 0.05) depths. The difference between deep and middle depth was not significant (t (1,174) = 4.65, p > 0.05). This result is indicative of feedback related signals affecting the processing of superficial layers of PAC in relation to detected targets. When considering the whole of Heschl's gyrus as region of interest (F (1,216) = 6.06, p < 0.05; Fig. S3.6), only the comparison between the deep and superficial depth bin (t (1,174) = 9.53, p < 0.01) was significant, but not between middle and superficial depth (t (1,174) = 3.19, p > 0.05) or between deep and middle depth (t (1,174) = 1.69, p > 0.05).
Layer-specific effects of detection in PAC
Previous (human) fMRI research has highlighted the modulation of superficial cortical layers of (primary) auditory cortex when attending and responding to auditory stimuli (and ignoring visual ones) (De Martino et al., 2015;Gau et al., 2020). Here, we aimed to understand whether feedback mechanisms can explain why physically identical stimuli presented under identical attentional instructions are detected in some trials and not in others. To do so, we measured laminar fMRI responses from human auditory cortex, while participants performed a temporal target detection task at perceptual threshold. This allowed us to contrast the response to detected and undetected targets, while the bottom-up acoustic information remained identical. We showed that detected targets elicited a comparatively stronger response in superficial layers of the primary auditory cortex, indicating the relevance of feedback processing. In non-primary regions (PP & PT) detecting a target resulted in a stronger response (compared to non-detected targets), yet this differential response did not vary across layers.
We have reported our results after subtracting the response to no target trials from the responses to detected and undetected sounds. By doing so, we were able to control for offset effects induced by local vascular contributions to the BOLD signal, which should be consistent across the experimental conditions. This permitted highlighting the modulation induced by the detection of a target, despite the overall increase of the GE-EPI signal towards the pial surface (Uludag and Blinder, 2018). Acquisition techniques such as 3D-GRASE (Oshio and Feinberg, 1991) and VASO (Huber et al., 2017) which are not (or less) affected by vascular draining exist, nevertheless, GE-EPI offers increased sensitivity (compared to both 3D-GRASE and VASO), coverage (compared to 3D GRASE) and temporal efficiency (compared to VASO) all of which were essential to our study (Moerel et al., 2021).
Fig. 3. Layer-specific BOLD response for the different perceptual conditions per ROI. A. BOLD response to detected (magenta), undetected (blue) and no Target (green) sounds in the different layers of PAC, averaged over trials and participants. B. Same as A but in planum polare (PP). C. Same as A but in planum temporale (PT). Laminar profiles in all ROIs show an increase towards the cortical surface (closer to CSF). D. Difference in BOLD response between detected and no Target sounds (red; detected - no Target) and undetected and no Target sounds (black; undetected - no Target) shows a modulation of the BOLD response towards superficial layers of PAC driven by detection. Dashed line depicts the difference between the red and black lines. E. Same as in D but for PP. No significant differences between BOLD responses across depth to detected and undetected targets are observed in area PP. F. Same as in D but for PT. No significant differences between BOLD responses across depth to detected and undetected targets are observed in area PT. Shading indicates the standard error of the mean across participants. Figures S3.1-S3.6 show single participant plots and the results for the HG ROI (not depicted here). (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)

It is conceivable that the increase in response we observed in superficial layers of PAC could have been the result of a fluctuation of attentional sampling, which is known to modulate long-latency sensory responses (Snyder et al., 2012). Multiple recent laminar fMRI studies located top-down effects of attention in superficial layers of human visual and auditory cortex either by attending to different modalities (auditory and visual) or by studying feature-based attention within a modality (Lawrence et al., 2019;Liu et al., 2021;De Martino et al., 2015;Gau et al., 2020), but see (van Mourik et al., 2021). Invasive electrophysiological studies have also related changes in sensory gain of superficial layers to fluctuations of attention (Lakatos et al., 2013;O'Connell et al., 2014;Henry and Herrmann, 2014). Our results are thus consistent with the idea that attention modulates superficial layers of (auditory) cortical areas and provide first evidence that these small fluctuations can make the difference between detecting or not detecting an otherwise identical stimulus.
Our results identify PAC as a target of such feedback signals. Previous MEG research has also suggested that feedback to PAC (a unique longlatency response ranging between 50 ms and 300 ms) may be relevant to the detection of target sounds (Giani et al., 2015;Gutschalk et al., 2008). At the macroscopic level, increased fMRI BOLD responses in PAC in response to detected targets have been suggested to be the result of feedback signals (Wiegand and Gutschalk, 2012), potentially originating in parietal areas (Giani et al., 2015;Cusack, 2005). The plausible involvement of feedback to primary (auditory) cortical areas in determining the detectability of a stimulus is also corroborated by studies on bistable perception, or auditory streaming. These studies reported variations in (primary) auditory cortex responses to changes in percept evoked by identical physical stimuli (Micheyl et al., 2005). Using fMRI, for example, responses in regions adjoining PAC have been associated with the perceptual interpretation of acoustically identical sounds (Kilian-Hütten et al., 2011) as well as to perceptual streaming (Hill et al., 2011).
Responses to detected targets are not modulated by frequency of the sounds
Contrary to previous invasive electrophysiology studies and noninvasive human studies (Lakatos et al., 2013;O'Connell et al., 2014;Riecke et al., 2018), we did not find a significant effect when considering separately cortical regions whose preference was maximal for the carrier frequency of the sounds (e.g. high vs. low frequency). While the absence of evidence is not evidence of the absence, a possible explanation for such inconsistency may stem from the nature of the task we employed. In previous research reporting frequency specific effects in auditory cortical regions, the task entailed focusing attention to the spectral content of the sounds (Lakatos et al., 2013;O'Connell et al., 2014;Riecke et al., 2018). In our task, the carrier frequency of the sounds was not the target of attention as participants were instructed to detect temporal shifts embedded in the stream of sounds. This line of reasoning, and our results are in line with previous investigations showing an attentional enhancement in layer 2/3, independent of the preferred frequency of the recording site when sound frequency was not task-relevant (Francis et al., 2018).
In conclusion, the current study shows that when detecting a temporally shifted target, the response of neural populations in superficial layers of primary auditory cortex increases (in a non-frequency-specific manner). This modulation is compatible with feedback signals targeting the primary auditory cortex. Future studies may be directed at identifying the source of the feedback signal we identified in auditory cortex by assessing laminar resolved functional connectivity after data acquisition with larger brain coverage. Invasive investigations in animals (with microstimulation or optogenetics used to modulate activity in layer-specific sources of feedback) could also be directed to the investigation of the causal relationships among feedback sources, superficial layer responses, and behavior.
Declaration of competing interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Miriam Heynckes, Elia Formisano and Peter De Weerd report financial support provided by the Dutch Research Council. Federico De Martino reports financial support provided by the European Research Council.
Data availability
Data will be made available on request.
|
2023-01-27T16:12:20.860Z
|
2023-01-01T00:00:00.000
|
{
"year": 2023,
"sha1": "d7ad9fa82ded14bc5daf6b81ec25236c8907f420",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.crneur.2023.100075",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb445a3d6aa304426ebf12d8d65cacdfab4d8597",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
245537903
|
pes2o/s2orc
|
v3-fos-license
|
Tracking Down the Route to the SM with Inflation and Gravitational Waves
We explore supersymmetric SO(10) models predicting observable proton decay and various topological defects which produce different shapes and strengths of gravitational wave backgrounds depending on the scales of intermediate symmetry breaking and inflation as well. We compare these to their non-supersymmetric counterparts. By identifying the scales at which gravitational wave signals appear, we would be able to track down a particular breaking chain and discern if it has a supersymmetric origin or not. It would also be useful to observe gravitational waves from more than one source among all possible topological defects and first order phase transitions for a realistic breaking chain. For these purposes, we work out specific examples in which the grand unification and relevant intermediate scales are calculable explicitly. It turns out that examples with gravitational waves from different sources are quite difficult to obtain, and the predicted gravitational wave profiles from domain walls and first order phase transitions obtained in some examples will require detectors in the kHz to MHz region.
Introduction
The first ever direct detection of a gravitational wave (GW) signal in 2015 [1] opened the era of gravitational wave astronomy and revived interest in cosmological GW signals. The study of GW signals of cosmological origin, from either topological defects or first order phase transitions (FOPT), is a long standing subject, but only recently have current and future experiments gained the potential to detect these kinds of signals. During the next years, KAGRA [2] and LIGO-India will be integrated in the LIGO [3,4,5,6] and Virgo [7] interferometer network (LVC). Thanks to the improved sensitivity, a huge number of GW events will be detected. The detection could be so frequent that the associated signals would be too entangled to be identified individually. These events will then appear as a Stochastic Gravitational Wave Background (SGWB), a signal continuous in time, loud in a very broad frequency band, and coming from the whole sky dome. In fact, these are also the characteristics of GW signals created at the earliest moments of the universe. Such signals, called SGWB of Cosmological Origin (SGWBoCO), exist but we do not know their strength and the frequencies at which we could detect them. It is then crucial to design SGWB searches suitable for scenarios where the SGWBoCO and SGWB of astrophysical origin (SGWBoAO) are both present at some level. This kind of search will need to be implemented by all current and projected experiments, such as TianQin [9], Taiji [10], LISA [11], Einstein Telescope [12,13], Cosmic Explorer [14], BBO [15] and DECIGO [16], as well as Pulsar Timing Array (PTA) experiments SKA [17], EPTA [18], PPTA [19], IPTA [20]. In fact, in September 2020 the NANOGrav Collaboration announced their PTA analysis [21] showing that we are likely on the verge of a SGWB detection. In particular, it has hinted at the detection of a cosmic string in the range Gµ ∈ [2 × 10^-11, 3 × 10^-10] [22]. A firm SGWB observation may require only some extra years of measurements, and it is encouraging to have the recent positive development of the PPTA analysis [23] and the collaborative effort of IPTA [24]. If confirmed, the NANOGrav SGWB signal in the range Gµ ∈ [2 × 10^-11, 3 × 10^-10] would be compatible with the SGWBoCO produced by (Nambu-Goto) cosmic strings coming from a phase transition that occurred between 10^-22 and 10^-18 seconds after the Big Bang.
On the theoretical side, model building within the context of Grand Unified Theories (GUT) needs to provide a compelling science case for scenarios where the cosmological signal can be clearly identified. In this respect, there has been a large number of recent studies reviving the study of GW coming from topological defects 2 . These include non-supersymmetric GUT theories in connection to proton decay and unification [31,32,33] and different interconnections to B − L [34], neutrino masses and leptogenesis [35], as well as in connection to addressing the NANOGrav hint for cosmic strings [36,37] and combined effects with FOPT and GW [38].
In this paper, we address the question if different breaking chains of the supersymmetric SO(10) GUT predicting a certain combination of topological defects and first order phase transition (FOPT) can be identified through the detection of SGWB. This task of course can be done taking into account low energy phenomenological constraints (proton decay, masses, flavor effects, etc.) and assumptions about how inflation should take place.
The classification of breaking routes of SO(10) down to the Standard Model (SM), or the Minimal Supersymmetric SM (MSSM), producing topological defects started a long time ago [27]. As is well known, non-supersymmetric models can yield gauge coupling unification with different intermediate scales [39] and therefore it is appealing to study the GW panorama in this context. Making a catalogue of surviving breaking chains coming from SO(10) models as in [32,33] is considerably more challenging in supersymmetry, not only because the number of constraints and parameters is considerably larger, but also because of the assumptions about the way supersymmetry is broken and the way it is UV completed. For these reasons we present in this work only a few typical examples, rather than a comprehensive catalogue.
Supersymmetric models can be compared with their non-supersymmetric counterparts because they differ in their breaking scales and proton decay channels and this can be contrasted in the future with both GW and proton decay experiments. We provide a study for this comparison by considering examples based on the SO(10) breaking routes containing the SU(5) and SU(3) C × SU(2) L × SU(2) R × U(1) B−L sub-groups. These models are representatives of different subgroups and low energy phenomenology and therefore can shed light on discerning routes down to the SM. We envision plots for frequency vs. density showing all possible combination of topological defects and FOPT for a particular breaking chain. Unfortunately, this task is quite model-dependent and will require future developments of simulations, in particular, for hybrid topological defects since the relative tensions of the topological defects is crucial to the evolution of the networks. Nevertheless, it is worth to establish possible breaking routes where hybrid topological defects can leave a distinct GW imprint. Recently, these observations have been also revived in [42] with a similar motivation to ours.
The paper is organized as follows. In §2 we put into context the appearance of topological defects according to the different breaking routes down to the SM. In §3 we mention how do we obtain the scale of breaking and how do we compute the proton decay ratios. In §4 we analyze our model examples and compare to the non-supersymmetric counterparts. In §5 we conclude. For the sake of completeness, we provide the information that we use to generate the GW signals in the appendices.
GUT breaking and topological defects
In this section, we make a brief summary of topological defects appearing in a GUT breaking chain which we use and refer to. Then, we introduce typical SO(10) GUT breaking patterns for which we study various phenomenological features and GW signals from relevant topological defects and possible phase transitions.
Theory of topological defects in breaking chains
A schematic breaking chain of a GUT model down to the SM can be depicted as G_GUT → H_1 → ⋯ → H_n → SM, where the letters p_i, q_i and r_i attached to the successive breaking steps represent the topological defects produced at each step. The conditions for the formation, evolution and stability of topological defects have been thoroughly studied. The k-homotopy group, π_k(G/H), of the vacuum manifold M = G/H determines the appearance of topological defects, since π_k(G/H) classifies the distinct topological spaces of G/H that appear after the breaking of G → H. They correspond to k = 3 for textures, k = 2 for monopoles, k = 1 for cosmic strings (CS) and k = 0 for domain walls (DW), although formally π_0 is not a group: it represents merely the number of connected components of the manifold. If the homotopy groups are trivial, that is, π_k(G/H) = I, then there are no associated defects. The general features of topological defects from the breaking sequence of Eq. (1) can be summarized as follows.
corrections to achieve gauge coupling unification and acceptable proton decay rates, and also to consider or not multiple parameters controlling topological defects [40,41].
1. The topological defects will be stable up to the H_n group if the k-homotopy group of the manifold G_GUT/H_n is not trivial, that is if π_k(G_GUT/H_n) ≠ I; and unstable up to the H_n group if the k-homotopy group of the manifold G_GUT/H_n is trivial, that is if π_k(G_GUT/H_n) = I. The same applies for the manifold G_GUT/(SU(3)_C × U(1)_Y), which will determine stable or unstable topological defects all the way down to the EW scale.
2. There can appear metastable defects that decay quantum mechanically with a decay rate ∝ e^(−A/ℏ), where A is the "tunneling action" of the particular defect [43]. For strings decaying via a pair of monopole/anti-monopole, the decay rate, which is specifically the tunneling rate per unit string length ℓ, is given by

Γ_d ≃ (µ/2π) e^(−πk),  with  k = m²/µ,   (2)

where µ is the CS tension and m is the scale at which the monopoles are created. This means that the creation of strings should not take place much below the formation of monopoles, as only decays for values of k = O(1) render a non-negligible value of Γ, since the exponential in Eq. (2) decays rapidly. For values much greater than 1, the decaying cosmic strings become indistinguishable from stable strings. In certain cases, the probability per unit area of a domain wall decaying by the nucleation of strings goes like [44,43]

dP/dA ∝ e^(−16πµ³/(3σ²)),   (3)

where P is the probability of nucleation and A is the unit area, and σ and µ are respectively the tensions of the domain wall and the cosmic string. Hence, depending on the tunneling process or the decay time, the metastable defects could appear as (almost) stable or disappear quickly and hence leave different GW background imprints. (A numerical illustration of these suppression factors is given after this list.)
3. Hybrid topological defects appear when a defect associated with the homotopy π k (H i /H j ) is produced at a breaking stage and at a subsequent breaking stage a defect with a homotopy π k−1 (H j /H ℓ ) is generated. Then, the topological defects associated to π k (H i /H j ) interact with those associated with π k−1 (H j /H ℓ ) [45]. Examples of these phenomena include the well known fact that monopoles could be the seeds of monopole-antimonopole decay of a string via the Schwinger mechanism and also the fact that strings could become the boundaries of the DW produced at a later stage [45,44,46]. The interaction and evolution of these defects depend on the tensions and sizes of the interacting defects and on when inflation and reheating take place (see for example [41] for a comprehensive review).
4. If monopoles and cosmic strings are produced at the same breaking step or without any sizable gap, they are unstable [47], but could leave observable GW spectra [48,40].
5. If domain walls appear together with cosmic strings after inflation, they become unstable due to the attachment of strings and thus leave GW signals without causing a cosmological problem. Furthermore, a discrete symmetry associated with a domain wall could be lifted by Planck-mass-suppressed terms [49] and thus the domain wall decays to leave an observable GW signature while its cosmological evolution does not conflict with the standard cosmology [50].
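To get a feel for how sharply the exponential in Eq. (2) cuts off, the short evaluation below prints the suppression factor e^(−πk) with k ≃ (M_monopole/M_string)² for a few ratios of scales; prefactors and model details are deliberately ignored in this rough sketch.

```python
import numpy as np

# Suppression factor of the string decay rate in Eq. (2): Gamma ~ exp(-pi*k),
# with k ~ (M_monopole / M_string)^2 for a string tension mu ~ M_string^2.
for ratio in [1, 2, 3, 5, 10]:
    k = ratio**2
    print(f"M_mono/M_string = {ratio:>2}:  exp(-pi*k) = {np.exp(-np.pi * k):.3e}")
```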
Due to the recent developments in GW detection, there have been extensive studies of the signatures from GUT breaking. Distinct GW signals from cosmic strings produced in association with/without monopoles, or around/away from the end of inflation, could be detected to provide an important clue about both inflation [51,52] and GUT theories. A typical U(1) sub-group of SO(10) GUT is U(1)_B−L, whose breaking at different scales would be tracked by distinguishing the resulting SGWB [53,54]. On the other hand, the formation of domain walls and the GW production could be a consequence of the spontaneous breaking of discrete R symmetries [55,56]. There have been some arguments against the possibility of finding R symmetries [57] in 4D GUT theories. Lastly, the breaking of supersymmetry itself can lead to FOPT [58] that can generate GW [59].
This makes the study of GW in the context of GUT a fascinating arena that can probe some of the fundamental aspects and predictions of GUTs. As mentioned before, the study of topological defect formation, evolution and production of GW signals has a long history, however more elaborated simulations and treatment of signals are taking place recently given the opportunity for current and future experiments. 5
Typical examples of SO(10) breaking
Here we present possible breaking chains having more than one source of GW signals, which we will analyze in §4. As is well known, uncertainties in determining the parameters involved in the processes of GW production are yet to be greatly reduced. Nevertheless, we can still make qualitative analyses revealing important features. Let us now depict the distinct breaking chains, Eqs. (4)-(6), for which we determine the intermediate and unification scales and estimate proton decay ratios. The numbers (2,1,0) in parentheses denote, respectively, topological defects (monopole, string, domain wall), and (pt) means phase transition. The subscripts of the subgroups in Eqs. (4)-(6) are as follows: C for color, L for left, R for right, V = 4 I_3R − 3(B − L) and Z = −I_3R + 1/2 (B − L), where I_3R is the third component of the right-handed isospin and B − L the baryon minus the lepton number. The notation D represents the D parity, also known as Z_2^C. Note that Z_2, which is used in supersymmetry as R-parity, is never broken. Here M_GUT, M_R and M_B−L refer respectively to the GUT scale, the scale where SU(2)_R is broken and the scale where B − L is broken. The forthcoming results of the JUNO [61], DUNE [62] and Hyper-Kamiokande [63] experiments will be able to tell us which models are going to be ruled out, and therefore will be able to exclude the models as sources of GW at a particular scale.
One may consider the breaking route of Eq. (5) or Eq. (6). It is indeed interesting to look for breaking chains where combined sources of gravitational waves could appear. It turns out, however, that this does not occur generically. For supersymmetric models with one intermediate step of breaking, it is difficult to separate the intermediate scale far from the unification scale without incurring proton decay problems. With more than one intermediate scale, one intermediate scale could be lowered to, e.g., between 10^10 GeV and 10^13 GeV. Then a further splitting of scales can be achieved and combined effects can take place. This is the case for the example in Eq. (6). For non-supersymmetric models, the separation of the GUT scale M_GUT from any other intermediate scales, M_I, M_II, is more feasible, basically because the number of particles in the theory, which sets the coefficients of the beta functions, is much larger than in the supersymmetric counterparts, and therefore one can allow more running of the gauge couplings between the intermediate scales. However, it is precisely this hierarchy, i.e. M_GUT ≫ M_I (or any other intermediate scale M_II), that erases potential combined effects. Specifically, the decay probabilities of Eqs. (2) and (3) decrease very rapidly because the arguments of the exponentials go like −1 times ratios of powers of the masses involved in the breaking chains. For example, for a string decaying at a scale ∼ M_I, the factor k in Eq. (2) goes like the ratio M_GUT^2/M_I^2, assuming that monopoles are created at M_GUT. We also note that the example in §2.2 allows signatures both from cosmic strings and from a FOPT. However, the signals from domain walls and the FOPT turn out to be away from the current sensitivity, as will be shown in §4.
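To get a feel for how quickly the hierarchy kills these combined effects, the short sketch below evaluates the exponential suppression, taking the decay rate of Eq. (2) to scale like exp(−πk) with k ≃ M_GUT^2/M_I^2; the prefactors of Eq. (2) are not reproduced here, and the numbers are purely illustrative.

```python
import math

def suppression_exponent(m_gut, m_i):
    """Exponent pi*k with k ~ M_GUT^2 / M_I^2, the schematic suppression
    of the metastable-string decay rate discussed in the text."""
    k = (m_gut / m_i) ** 2
    return math.pi * k

for m_i in (1e15, 1e14, 1e13):               # intermediate scale in GeV
    exponent = suppression_exponent(1e16, m_i)
    print(f"M_I = {m_i:.0e} GeV  ->  k = {(1e16 / m_i) ** 2:.1e}, "
          f"decay rate ~ exp(-{exponent:.1e})")
```

Already for M_I one order of magnitude below M_GUT the exponent is of order a few hundred, so the decay probability is completely negligible, which is the point made above.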
Phenomenological considerations
The minimal setup for studying GUTs and their different breaking chains calls for the computation of the unification scales and the proton decay rates, depending on the choices of matter content and intermediate scales.
Detailed aspects for this are well studied in the context of non-supersymmetric models. When supersymmetry is considered, one needs to make more assumptions about the spectra and boundary conditions. We will take no-scale supergravity boundary conditions following the treatment in [64].
Gauge coupling unification Except for the cases studied previously in the literature, we compute the beta functions following [65] and express them in the usual form, µ dg_a/dµ = β_a, in terms of the one- and two-loop coefficients, where we keep the top Yukawa coupling, y_t, since, owing to its size relative to the gauge couplings, it cannot be neglected.
We assume the "survival hypothesis" implying that only the SM particles contribute to the running below the intermediate symmetry breaking scale and all other particles have masses around either M GUT or the intermediate scales.
Proton decay For non-supersymmetric theories the most sensitive channel is the dimension-6, gauge-boson-mediated decay p → π^0 e^+. We estimate the proton decay rate for this channel following [66], where m_p and m_{π^0} are the proton and the neutral pion masses, respectively. The amplitude at the weak scale is computed from the Wilson coefficients C_RL((ud)_R u_L) and C_LR((ud)_L u_R), which correspond to C_1 and C_2 of [66], respectively. For numerical values we use the inputs given in Tab. 1. The most constraining channel for supersymmetric theories is p → K^+ ν̄, mediated by dimension-5 operators, for which the decay amplitude is given in terms of the Wilson coefficients effective at the hadronic scale (2 GeV), and m_K is the mass of the kaon. It is useful to remember that all of the Wilson coefficients in Eq. (10) are suppressed by the color-triplet Higgs mass and the supersymmetry breaking scale. The hadronic matrix elements are given in Tab. 1 and the evolution of the Wilson coefficients from M_GUT is given in [64].
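As a rough cross-check of the orders of magnitude involved, the sketch below evaluates the naive dimensional estimate τ(p → π^0 e^+) ∼ M_X^4/(α_GUT^2 m_p^5). This is not the full expression used in the text (no Wilson-coefficient running, hadronic matrix elements or chiral-Lagrangian factors), but it reproduces the familiar sensitivity to the heavy gauge-boson mass M_X; all inputs are illustrative.

```python
# Rough dimensional estimate of the gauge-mediated proton lifetime,
# tau ~ M_X^4 / (alpha_GUT^2 * m_p^5); illustrative only.
HBAR_GEV_S = 6.582e-25          # hbar in GeV*s
SECONDS_PER_YEAR = 3.154e7

def proton_lifetime_years(m_x_gev, alpha_gut=1.0 / 40.0, m_p=0.938):
    gamma = alpha_gut ** 2 * m_p ** 5 / m_x_gev ** 4   # decay width in GeV
    return HBAR_GEV_S / gamma / SECONDS_PER_YEAR

for m_x in (1e15, 3e15, 1e16):
    print(f"M_X = {m_x:.0e} GeV  ->  tau ~ {proton_lifetime_years(m_x):.1e} yrs")
```

The steep M_X^4 dependence is what forces the breaking scales close to 10^16 GeV in the non-supersymmetric SU(5)-like routes discussed below.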
The forthcoming experiments DUNE [62], JUNO [61] and Hyper-Kamiokande [63] are expected to improve the current sensitivity by an order of magnitude (see Tab. 1).
SU(5) routes
Non-supersymmetric models via SU(5) routes are quite restricted by the proton decay bound in the channel p → π^0 e^+, since the Wilson coefficients are suppressed only by the masses of the heavy gauge fields, C_i ∝ 1/M_X^2 (for a review see, for example, [69]). Thus, the discussion below is specific to supersymmetric SO(10). The following routes are candidates for the appearance of combined effects. The first one, Eq. (11), produces cosmic strings at the first step of breaking and monopoles at the second stage. As the monopoles need to be inflated away, the only GW signal would be due to a possible breaking of R-parity (Z_2) or a FOPT. The second possibility, Eq. (12), produces monopoles at the first stage of breaking and cosmic strings at the second stage. However, the proton decay bound requires both breaking scales to be close to 10^16 GeV; thus, inflation needs to take place after that, leaving no imprint from GW other than the possible Z_2 breaking. Note, however, that the situation can be different in flipped models: the breaking of SU(5)_F × U(1)_V (and thus of one U(1) factor) can occur at a scale much lower than 10^16 GeV if we introduce a pair of vector-like particles [70]. For the third route, Eq. (13), monopoles are produced in the first two breaking steps, while cosmic strings are produced in the last one.
Here unification also requires all three scales to be close to 10^16 GeV. However, just as in the case of the flipped SU(5), a pair of vector-like fields could be introduced to split the scales [70].
In supersymmetric models, the dimension-5 operator in the channel p → K^+ ν̄ is mediated by the exchange of the colour-triplet Higgs and suppressed by the scale of the supersymmetric particles, M_S, so effectively the operators mediating proton decay are proportional to 1/(M_GUT M_S). This means that we can push up the supersymmetry breaking scale to help overcome the present bounds [71,64]. To get an idea, we can write the rate as in [69], where M_S is the supersymmetry breaking scale, M_HC is the colour-triplet Higgs mass, A_R is a hadronic parameter and β is the usual angle determined by the ratio of the vacuum expectation values of the two Higgs doublets. For M_S = 10 TeV, we get a suppression of O(10^{-2}) in Eq. (14), making the proton decay just compatible with the current bound τ(p → K^+ ν̄) > 6.6 × 10^33 yrs and thus accessible by the upcoming experiments [61,62,63]. However, achieving the measured Higgs mass m_h = 125 GeV is nontrivial and quite model dependent, which puts additional constraints on the model. When U(1)_V and U(1)_Z are broken, there can appear B − L violating proton decay modes through, for example, the operator d^c d^c u^c L H_u in supersymmetry [72]; however, the resulting proton decay rate depends on the flavour structure of the model and can be sufficiently suppressed. For our analysis, we adopt the results of Fig. 3 of [64], showing the regions where the proton decay rate is compatible with the bound in the channel p → K^+ ν̄. These regions correspond generally to the mass range M_{1/2} ≥ 5 TeV and M_0 ≥ 7 TeV. Taking (M_{1/2}, M_0, tan β, µ) = (5.5 TeV, 7.0 TeV, 5, > 0), we obtain the corresponding GUT scale and proton decay rate. This would require breaking all the chains of Eqs. (11)-(13) directly to G_SM × Z_2 at the GUT scale. The intermediate scales can be separated from the GUT scale by altering the unification scale and introducing an effective intermediate scale through the addition of multiplets which slightly affect the running but do not have a great impact on the proton decay rate. This would allow detectable hybrid topological defects to appear in the SU(5) route. As an example, we take the chain of Eq. (13) and consider the following two cases. Depending on when inflation takes place, we may observe different gravitational wave signals. If the reheating temperature T_RH is higher than M_{B−L}, the usual signal of undiluted stable cosmic strings appears. In the opposite case, T_RH < M_{B−L}, all the defects are diluted away, but sizable string regrowth may occur depending on the conditions, which are basically determined by choosing the parameters that satisfy Eq. (7) of [73]; a brief summary of this is provided in Appendix D. We basically assume a number of long strings with the parameter choice z̃ = 10^4, which is used in the plot of Fig. 1. Another interesting phenomenon is the appearance of observable signals of decaying cosmic strings attached to monopole-antimonopole pairs. The second breaking route, Eq. (17), can realize this possibility, as M_R is slightly above M_{B−L} and inflation can take place in between, M_R > T_RH > M_{B−L}. We follow [74] in order to obtain the profile of the decaying cosmic string. All of these features are illustrated in Fig. 1. Our breaking chain also allows the possibility of T_RH > M_R > M_{B−L}, which is known to produce undetectable gravitational signals; for this, we refer the reader to the recent study in [42].
For the input values of the quark Yukawa couplings we use the inputs of Tab. 1. We then employ the well-known procedure of running and matching the Wilson coefficients of the relevant Lagrangian at each step. The coefficients C_{5L,5R} are run from M_GUT down to the supersymmetry breaking scale, with β^{Yuk}_{L,R} the corresponding Yukawa beta functions; below that scale we have the usual running of the Minimal Supersymmetric Standard Model (MSSM) and use the procedure employed in [64]. This means that the coefficients C^{*331i}_{5R}(µ) and C^{jj1k}_{5L}(µ) are evolved using Eq. (24) from the GUT scale to the supersymmetry breaking scale M_S. At M_S, the sfermions are integrated out via wino- or Higgsino-exchange one-loop diagrams to obtain the effective Lagrangian L_eff.
where the coefficients C_{H_i} and C_{W_{jk}} are proportional to C^{*331i}_{5R}(M_S) and C^{jj1k}_{5L}(M_S), respectively. (The dimension-5 effective Lagrangian mediating proton decay is built from the supersymmetric multiplets Q, D, E, U and L integrated over the Grassmann variable θ.) Finally, the coefficients of Eq. (10) at the hadronic scale, C_RL(usdν_τ), C_RL(udsν_τ), C_LL(usdν_k) and C_LL(udsν_k), are proportional to C_{H_2}, C_{H_1}, C_{W_{jk}} and C_{W_{jk}}, respectively (as given in Eq. 38 of [64]). In order to make a definite evaluation of the proton decay rate in the K^+ ν̄ channel we impose no-scale supergravity boundary conditions at M_GUT. Then we look for values of the supersymmetric particle masses that satisfy the current limits. Taking M_0 = 1.6 × 10^4 GeV, M_{1/2} = 2 × 10^4 GeV and tan β = 3, we get τ(p → K^+ ν̄) = 8.6 × 10^33 yrs,
consistent with the values in Tab. 1.
First breaking stage This breaking happens at M_GUT, given in Eq. (18). Monopoles are produced at this scale, but no cosmic strings.
Second breaking stage At this scale SU(2)_R and U(1)_{B−L} are broken, and this can happen at the end of inflation. In this case both decaying CS and diluted CS can occur. In the top plot of Fig. 2 we depict the breaking chain which predicts metastable strings after inflation at 10^13 GeV. The scale at which monopoles are produced is ∼ 10^16 GeV, and thus the string decay rate is exponentially suppressed by the huge value of k = v^2/µ ∼ 10^6, where v = O(10^16 GeV) and µ ∼ (10^13 GeV)^2 [see Eq. (2)]. Therefore the strings are almost stable, and their signature is plotted in the bottom-left panel as an orange band corresponding to (2-4) × 10^13 GeV. It may also happen that the CS are produced during inflation and thus lead to the GW profile of diluted CS. This is shown in the bottom-left panel as a dashed purple line for a redshift of z̃ = 2 × 10^4 that satisfies Eq. (7) of [73].
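The numbers quoted above can be reproduced with a few lines; the identifications µ ∼ v_s^2 for strings formed at the scale v_s, Gµ = µ/M_Pl^2 with the non-reduced Planck mass, and k = v^2/µ are the schematic ones used in the text, with order-one factors ignored.

```python
# Check of the numbers quoted above: string tension, G*mu and the
# metastable-string parameter k = v^2 / mu.  Order-one factors ignored.
M_PLANCK = 1.22e19        # GeV, non-reduced Planck mass (assumption)

v_monopole = 1e16         # GeV, scale where the monopoles form
v_string   = 1e13         # GeV, scale where the strings form (band: 2-4 x 10^13)

mu    = v_string ** 2                 # string tension ~ (breaking scale)^2
G_mu  = mu / M_PLANCK ** 2            # dimensionless tension
k     = v_monopole ** 2 / mu          # suppression parameter of Eq. (2)

print(f"sqrt(mu) = {mu ** 0.5:.1e} GeV,  G*mu = {G_mu:.1e},  k = {k:.1e}")
```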
First Order Phase Transitions Phase transitions have been analyzed in the context of the present breaking chain and in the context of no-scale supergravity [76]; although they are difficult to realize, there is a possibility and therefore we consider this case. Coleman-Weinberg inflation could also be adopted to achieve a FOPT, but this has been explicitly carried out only in the context of non-supersymmetric theories [37].
In the bottom-left panel of Fig. 2 we plot the profile of the GW from the FOPT itself at 3 × 10^13 GeV. Note that the predicted GW profile is far outside the present experimental sensitivities. It would require detectors capable of accessing a frequency band in the kHz to MHz region. In fact, some proposals exploring up to the kHz range are starting to take shape, in particular interferometers [77] and optically levitated dielectric sensors spanning a wide frequency band from a few kHz to ∼ 300 kHz [78,79]. For the MHz region there are also incipient proposals [80,81,82] for frequencies up to 100 MHz and in the range 1-250 MHz [83].
The rest of the parameters used to produce the energy density profiles are given in Tab. 2 and the relevant formulas are given in Appendix C.
Tab. 2 lists the parameters used for the FOPT profiles (α, β/H, g_*, ε). For Fig. 3 we use an annihilation temperature also equal to the breaking scale of 3 × 10^14 GeV.
Inflation framework As mentioned before, this breaking pattern has been analyzed in [84,85], and a Starobinsky-like inflation scenario has been constructed successfully at the corresponding inflation energy scale. It is also shown in [32] that the proton lifetime in the channel p → π^0 e^+ is above the current bound of 1.6 × 10^34 yr even without threshold corrections. Future experiments, JUNO [61], DUNE [62] and Hyper-Kamiokande [63], have the potential to exclude it. Note however that threshold corrections or additional singlets added to the theory can push up the limits [32]. In the right plot of Fig. 2 we show the GW signal from a CS (orange curve in the lower-right corner) that would occur at 9.5 × 10^9 GeV, corresponding to a tension of Gµ = 7.6 × 10^{-20}. In this non-supersymmetric scenario inflation can take place before that, and so the CS will behave like stable strings, which are in the reach of BBO.
where we have included uncertainties from the electroweak parameters M_Z, α_S and m_t quoted in Tab. 1. We remark that M_D and M_GUT tend to be closer to each other than M_R and M_D, mainly because of the values of the beta function coefficients. At this scale, however, the dimension-six operators leading to p → π^0 e^+ become relevant. To calculate proton decay we proceed as in the previous section, §4.2, with two matching scales, M_D and M_R. For the dimension-six operator, we use the procedure of [64] to obtain a proton lifetime of 2 × 10^34 yrs, which is just above the current bound (Tab. 1). For the dimension-five operator leading to p → K^+ ν̄, we obtain a proton lifetime of 9 × 10^33 yrs with the sparticle masses at 100 TeV, which falls within the reach of the next generation of proton decay experiments. This, in turn, sets an upper bound on the scale of this kind of inflation and hence serves as a guide for identifying the scales at which topological defects are not completely erased by inflation and reheating.
It is known that the domain wall network can overclose the universe if it is generated after inflation. This can be overcome in various ways (see Appendix E). One possibility is that the
formation of cosmic strings takes place just before inflation and the domain wall formation takes place after inflation. Then the strings can nucleate on the walls, which makes the domain walls decay through a quantum tunneling process [44,43]. This process is controlled by the ratio µ/σ between the string tension µ and the wall tension σ. If the radius R of a circular string satisfies R > µ/σ, the rate per unit area for nucleating a loop of string is given by Eq. (3).
In the present case, where the D parity breaking produces Z_2 walls, we can use the results of the existing simulations. These simulations assume that all damping forces that could be present in the cosmological evolution of the string-wall network are negligible, and hence the wall tension goes like σ ∼ M_D^3 [41]. We can then use the formula of Eq. (55) to describe the GW signal left by the domain walls. Note that in this case an order of magnitude of hierarchy between M_D and M_GUT is enough to produce a large value of µ^3/σ^2, which makes the DW evolution indistinguishable from the evolution of DW without the presence of CS.
In this model, the scale M_D = 3.8 × 10^14 GeV is of the order of the bound on the Hubble parameter of single-field inflation, as mentioned at the beginning of this section. So in this case the domain walls could have been produced at the end of inflation, leaving their GW signal as given in Eq. (55). In the left plot of Fig. 3 we show the DW signal corresponding to this scale, with the rest of the parameters as in Tab. 2.
We recall that our motivation is to look for signals of multiple GW sources coming from topological defects or from FOPT in GUTs. So we ask the following question: suppose that a signal corresponding to the shape and frequency of a DW is reconstructed and has a peak frequency of about 10^5 Hz; would this then be indicative of a DW produced by the breaking of a subgroup of a GUT, preceded by a breaking producing CS? To put it in other words, we are asking whether such a reconstructed signal would be a clear signal of a DW in the presence of CS. To answer the question we recall the following. It is known that DW could also be produced when a symmetry is lifted in the vacuum after the breaking of a discrete parity, such that the effective bias term drives the annihilation of the DW [87,88,89]. A specific way to do this is to consider a potential of the form V = (1/4) λ φ^4 + φ^6/Λ^2, where Λ is a cutoff scale for the dimension-6 operator and φ represents the Higgs field breaking the D symmetry. In this case, even in the absence of CS, such a term could originate from the 16 of SO(10), which under the group SU(3)_C × SU(2)_L × SU(2)_R × U(1)_{B−L} × D contains a singlet, and hence both the φ^4 and φ^6 terms would be allowed. Then, the tension of the wall can be estimated from the potential, where φ_f represents the value of the Higgs field at which the two minima are quasi-degenerate. At the scale 10^13 GeV the typical values of V and φ will be respectively (10^13 GeV)^4 and 10^13 GeV, giving as a result a tension σ of the order of (10^13 GeV)^3. In this way, the DW can collapse and part of their energy will be converted to GW. We can take the annihilation temperature to coincide with the scale of the breaking, 10^13 GeV, assuming of course that inflation takes place above that scale. The resulting signal will be indistinguishable from the DW signal illustrated in Fig. 3, where we present a DW profile using the parametric formula of Eq. (55) with the parameters appearing in Tab. 2.
Comparison to Non-Supersymmetric Case
which are to be compared with the supersymmetric case, Eq. (15). The big differences in the predicted breaking scales are maintained even after including the full uncertainties. In this non-supersymmetric case, domain walls would be formed above 10^15 GeV and hence would not be observable. Cosmic strings can be produced after inflation at a scale of 2 × 10^12 GeV, corresponding approximately to a tension Gµ = 3 × 10^{-15}. This is plotted in Fig. 3 with a solid orange line. We also checked that the proton lifetime in the p → π^0 e^+ channel is 1.0 × 10^35 yrs, which is above the reach of the next generation of experiments (Tab. 1).
Conclusions
We studied the possibility of tracking down a route from SO(10) to the SM in supersymmetric SO(10) models using GW signatures from the topological defects involved in the various stages of breaking, which may differ from their non-supersymmetric counterparts. Non-supersymmetric models have been studied widely, as they require various intermediate steps of breaking to achieve gauge unification. Although more limited, supersymmetric models may also allow some intermediate breaking scales, which could improve gauge coupling unification. The shapes and strengths of the gravitational wave signals depend on the scales of intermediate breaking, the scale of inflation, as well as on whether or not hybrid topological defects appear. We explored such features considering the breaking routes through SU(5) and the other chains presented in §4. In the supersymmetric SU(5) example, the only GW signatures that we could identify are from cosmic strings appearing at the intermediate scale around 10^14 GeV. These could decay due to the monopoles predicted in a previous breaking step, or become diluted, depending on the scale of inflation; therefore, different GW signals are predicted. The remaining breaking routes of SO(10) were analyzed in the other examples of §4. We conclude by commenting on the reasons why it is difficult to find examples where GW signatures from different sources can appear. Considering only topological defects, we mainly need the breaking of the GUT group down to the SM in at least three steps. The separation of these steps needs to be controlled to ensure that the topological defects producing observable GW signals occur after inflation and reheating. These features are illustrated by our breaking chain of §4.3. In this case, the appearance of cosmic strings and DW is separated by roughly an order of magnitude, and hence the DW decay via the nucleation of strings on the walls, avoiding the overclosure of the universe and producing GW at the same time. Thus, two separate GW signals arise: one from DW at a higher scale and the other from CS at a lower scale.
On the other hand, a FOPT could occur at any scale, and thus two GW signals can arise at scales below inflation, as shown in §4.2. Since monopoles are typically produced at the GUT scale, the scale where GW from a FOPT and another source could appear has to be different from the GUT scale, and so effectively at least two scales are needed.

This work was supported in part by the Ministry of Education through the Center for Quantum Space Time (CQUeST) with Grant No. 2020R1A6A1A03047877.
A Confrontation with GW experiments
The GW signals from stochastic backgrounds are by convention expressed in terms of a GW energy density spectrum Ω_signal(f) h^2 as a function of the GW frequency f, while the instantaneous sensitivity of a GW experiment is expressed as a noise spectrum Ω_noise(f) h^2. To evaluate whether a predicted signal would be detectable, one can use one of the following procedures. I. Compute the associated signal-to-noise ratio (SNR) by integrating over the experiment's total observation time and accessible frequency band [91,92,93], where n_det = 1 for an auto-correlation measurement and n_det = 2 for a cross-correlation measurement. If the SNR is bigger than a threshold value, it is assumed that the associated GW experiment will be able to detect the predicted GW signal. II. Construct the power-law-integrated sensitivity curve (PLISC), Ω_PLISC [5], based on Ω_signal(f). If the signal and the PLISC intersect in such a way that Ω_signal(f) > Ω_PLISC for a given frequency, then it is assumed that the experiment will be able to detect the signal. However, since the PLISCs are constructed from spectra based on pure power laws, in realistic situations where the signal is expected to have a structure different from a pure power law, PLISCs can be used only as a qualitative visual aid. In cases where the shape of the predicted GW signal is fairly model independent, the SNR needs to be computed only once. This fact was exploited in [94], where new sensitivity curves for the SGWB predicted by FOPT were proposed. The shape of these curves is fairly model independent (the computation depends on the contributions from sound waves and turbulence and their associated parameters). The idea is that, using a fit function for the peak-integrated sensitivity curve (PISC) of a particular experiment, it is not necessary to perform a frequency integration on a parameter-point-by-point basis; one simply plots a numerical fit for the PISC against the predicted SGWB, updating it only when the spectral shape functions, the noise spectrum or the functional forms of the peak amplitude change. Unfortunately, PISCs have been obtained only for a few experiments, in particular LISA, DECIGO and BBO, and only for the SGWB from FOPT. We compare our examples in §4 to the profiles of these last experiments based on the information of [94]; for NANOGrav, PPTA and SKA we use smoothed profiles for the sensitivity curves based respectively on [22,21], [17] and [19], while for LIGO we use the PLISC approach [5]. As a comparison, in our plots we show both the LISA PLISC and PISC approaches. In all of our plots for LISA, BBO and DECIGO we use t_obs = 1 year. For SKA, we present observation times corresponding to one, two and five years; these are shown in the plots appearing in Fig. 1-Fig. 3, respectively, as the small, medium and big triangular regions delimited by green dashed lines.

For the scalars, the relevant component of the 126 is (1, 3, 2, 1) and that of the 10 is (2, 2, 0, 1) ⊃ (2, 1, ±1/2), this last decomposition being the one under the SM group. For the matter, all fermions and their corresponding sparticles in the 16 are taken into account: (2, 1, −1/3, 3), (2, 1, 1, 1), (1, 2, 1/3, 3) and (1, 2, −1, 1). The beta function coefficients at one and two loops then follow, for both the supersymmetric and the non-supersymmetric cases; for the latter our results agree with those of [66,32]. The matching conditions at the scale M_R are as those in Eq. (37).
The non-supersymmetric beta function coefficients of this example follow analogously; again, for the non-supersymmetric case we reproduce the results of [66,32].
The matching of the couplings at the left-right (LR) stage involves a parameter z, a real number that is determined through the constraint of unification at M_GUT. All of the discussion above could of course change if we consider models other than the minimal ones and add matter content at any step, which can affect the values of the beta function coefficients without changing the scalars that define the different breaking patterns.
C Production of GW from First Order Phase Transitions
Cosmological FOPTs originate from a discontinuity in the entropy when there exists a metastable vacuum that eventually decays into the true vacuum of the theory. This occurs through bubbling, that is, the process in which regions of space reach the true vacuum first by overcoming the barrier separating the two vacua of the theory. Bubble dynamics produces GW through two basic mechanisms: (i) bubble collisions, generating sound waves, and (ii) turbulence, both of which release energy into GW. Turbulence arises when the bubble expansion causes macroscopic motion in the surrounding plasma. The total stochastic GW background from a FOPT is the sum of the contributions from bubble collisions and turbulent motions. However, not all such SGWBs are detectable: weak production proceeds via vacuum tunnelling and thermal fluctuations, while strong production happens when bubbles are nucleated purely via quantum tunnelling, and only the latter case is detectable.
Given the breaking of one group into another, and hence the vacua of the effective theory, the effective potential relevant for a phase transition is V_eff(φ_i, T) = V_0(φ_i) + V_CW(φ_i) + V_T^{(1)}(φ_i, T), where the fields φ_i are all the fields necessary to parameterize the breaking phase, V_0(φ_i) is the tree-level potential, V_CW(φ_i) is the (one-loop) Coleman-Weinberg contribution and V_T^{(1)}(φ_i, T) is the finite-temperature correction.
The parameters characterising the FOPT are as follows. The "vacuum-to-thermal energy ratio", α, is roughly the ratio of the false vacuum energy density to the thermal energy density, and hence measures the amount of energy released during a FOPT (only for strong production of the SGWB is α ∼ O(1)); it is defined as [95] α = ∆ρ/ρ_R, where ∆ρ is the latent heat released in the phase transition and ρ_R is the energy density of the plasma background. The "change in nucleation rate", β, is a measure of the bubble nucleation rate per unit volume; its inverse is approximately the duration of the nucleation from the metastable vacuum to the true vacuum and hence it characterises the duration of the FOPT. The other parameters characterising the SGWB are the "nucleation temperature", T_n (or T_*), and the velocity of the bubble walls, v_b. The parameter β is obtained from the Euclidean action S_3 of the O(3)-invariant bounce solution. In terms of the effective potential, the α parameter can be written using ∆V_eff(T_n), the potential energy difference between the true and the false vacuum at T_n. As mentioned above, the gravitational waves from the FOPT mainly include two sources: the bubble collisions, producing sound waves, and the MHD (magnetohydrodynamic) turbulence, with the total energy given by the sum of the two contributions [95]. The efficiency factors κ_sw and κ_turb characterise the fractions of the released vacuum energy that are converted into the energy of scalar-field gradients, for sound waves and turbulence, respectively.
The bubble wall velocity v_b is a function of α [96], although this should be taken just as a lower bound, since in phase transitions there exists a larger class of detonation solutions [97]. The energy density of the sound waves created in the plasma involves the factor τ_sw = min(1/H_*, R_*/Ū_f), the time scale of the duration of this phase: it can be either 1/H_* or R_*/Ū_f, where H_* R_* = v_b (8π)^{1/3} (β/H)^{−1} according to [98]. The root-mean-square (RMS) fluid velocity is given roughly by [99,8,100] Ū_f^2 ≈ (3/4) κ_ν α/(1+α), where κ_ν is the fraction of the latent heat transferred into the kinetic energy of the plasma [101]. (In the limits of small and large v_b, κ_ν can be approximated analytically; for v_b ∼ 1, κ_ν ≈ α (0.73 + 0.083 √α + α)^{−1}.) The corresponding peak frequency, expressed in Hz, depends on β/H, v_b, T_* and g_*.
The energy density of the MHD turbulence in the plasma involves the ratio of scale factors a_*/a_0 and an efficiency factor ε ≈ 0.1. The present-day Hubble parameter enters through the usual redshifting of the frequency. Finally, the peak frequency for GW produced by MHD turbulence is given by [102] f_turb = 2.7 × 10^{-5} (β/H) (1/v_b) (T_*/100 GeV) (g_*/100)^{1/6} Hz.
In our study we have considered just the contributions from bubble dynamics (leading) and turbulence (sub-leading).
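For orientation, the sketch below evaluates some of the quantities defined in this appendix: the efficiency factor κ_ν in the v_b ≃ 1 limit, the RMS fluid velocity Ū_f, and the peak frequency of the turbulence contribution. The numerical values of α, β/H and T_* are illustrative placeholders, not the exact entries of Tab. 2.

```python
import math

# Illustrative FOPT parameters (placeholders, not the exact Tab. 2 values)
alpha  = 0.3            # vacuum-to-thermal energy ratio
beta_H = 10.0           # beta / H_*
T_star = 3e13           # GeV, transition temperature
g_star = 100.0          # relativistic degrees of freedom
v_b    = 1.0            # bubble wall velocity (detonation-like limit)

# Efficiency factor for v_b ~ 1 (fit quoted in the text)
kappa_nu = alpha / (0.73 + 0.083 * math.sqrt(alpha) + alpha)

# RMS fluid velocity, Ubar_f^2 ~ (3/4) * kappa_nu * alpha / (1 + alpha)
U_f = math.sqrt(0.75 * kappa_nu * alpha / (1.0 + alpha))

# Peak frequency of the MHD-turbulence contribution
f_turb = 2.7e-5 * beta_H * (1.0 / v_b) * (T_star / 100.0) * (g_star / 100.0) ** (1.0 / 6.0)

print(f"kappa_nu = {kappa_nu:.2f}, U_f = {U_f:.2f}, f_turb ~ {f_turb:.1e} Hz")
```

With these placeholder values the peak lands in the tens-of-MHz range, illustrating why the FOPT signal of §4.2 calls for the high-frequency detector proposals mentioned there.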
D Production of GW from Cosmic Strings
Stable Cosmic Strings GUTs based on simple gauge groups lead to the formation of topologically stable monopoles whose density is about 10^18 times greater than the experimental limit [86], thus dominating the energy density of our universe and closing it. While this kind of topological defect therefore demands a dilution mechanism such as inflation, not all monopoles need to be completely washed out [87]. Cosmic strings can also carry enormous energy. In the simplest case, which we consider here, the canonical type of string (Nambu-Goto), the energy per unit length, µ, and the string tension are equal. It is expected that for strings produced at a phase transition at T_c, µ ∼ T_c^2 [103], where µ is the tension of the string and characterizes the strength of the gravitational interaction of strings. A grand unified string of length equal to the solar diameter would be as massive as the sun, while such a length of string formed at the electroweak scale would weigh only 10 mg. The gravitational effects of the latter are essentially negligible, though such strings may still be of great interest because of other types of interactions. The Nambu-Goto cosmic strings are characterized only by the string tension µ [41]. Using the Kibble mechanism, the string tension can be estimated as [104,105] Gµ ≈ 10^{-15} (T_p/10^{12} GeV)^2, where G is Newton's constant. A simple approximation is to assume T_p ≈ T_n. When this scale is taken to be the GUT scale, roughly T_p = T_n = 10^{16} GeV, we recover the familiar result Gµ ∼ 10^{-7} for CS produced at the GUT scale. Another way to express the tension of the CS is in terms of the function B(x) [106,107], where M_CS is the scale of the cosmic string and M_P the reduced Planck scale, 2.4 × 10^{18} GeV; here B(x) = 1.04 x^{0.195} for 10^{-2} ≪ x ≪ 10^2 and B(x) = 2.4/ln(2/x) for x ≲ 0.01. After the formation of strings, the string loops lose energy dominantly through the emission of gravitational waves. We compute the relic GW energy density spectrum from cosmic string networks following [108,109], as a sum over harmonic modes Ω^(k)_GW of Eq. (52); the critical density is ρ_c = 3H_0^2/(8πG), and k labels the mode contributing at a frequency f. The gravitational loop-emission efficiency factor is Γ ≈ 50 [110], with its Fourier modes for cusps [111] given in [112,110]. The factor F_α characterizes the fraction of the energy released by long strings. We use F_α = 0.1 and α = 0.1 in order to take into account the length of the string loops, rendering a monochromatic loop distribution. Θ is the Heaviside step function and a(t) is the cosmological scale factor at a given time t. The loop production efficiency C_eff is obtained after solving the Velocity-dependent One-Scale (VOS) equations, with C_eff = 5.4 (0.39) in a radiation (matter) dominated universe [104]. In the VOS equations, k(v) = (2√2/π)(1 − 8v^6)/(1 + 8v^6) is the momentum parameter and c̃ ≈ 0.23 describes loop formation; L is the correlation length parameter (such that the energy density of the long strings is given by ρ_∞ = µ/L^2 [41]) and H is the corresponding Hubble parameter. The scaling regime is reached after three or four orders of magnitude of change in the energy scale of the universe, where we have a stable value of C_eff, see e.g. Fig. 3 of [104]. The loop formation time of the k mode is a function of the GW emission time t. Assuming the small-scale structure of loops is dominated by cusps, the high modes in Eq.
(51) are given in terms of Ω_GW(f/k). The low and high frequencies of the GW spectrum from cosmic strings are dominated by emissions in the matter and radiation dominated eras, respectively.
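The Kibble-type estimate above is easy to evaluate; the sketch below assumes the scaling Gµ ≈ 10^{-15} (T_p/10^{12} GeV)^2 as reconstructed from the text and checks it against the tensions quoted earlier.

```python
def G_mu_kibble(T_p_gev):
    """Kibble-type estimate of the dimensionless string tension,
    G*mu ~ 1e-15 * (T_p / 1e12 GeV)^2 (schematic scaling used in the text)."""
    return 1e-15 * (T_p_gev / 1e12) ** 2

for T_p in (1e12, 9.5e9, 1e16):
    print(f"T_p = {T_p:.1e} GeV  ->  G*mu ~ {G_mu_kibble(T_p):.1e}")
```

For T_p = 9.5 × 10^9 GeV this gives Gµ ≈ 9 × 10^{-20}, close to the value 7.6 × 10^{-20} quoted earlier, and for T_p = 10^16 GeV it returns the familiar Gµ ∼ 10^{-7}.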
Diluted Cosmic Strings A standard expectation of primordial cosmological inflation is that it dilutes all relics to unobservable levels. But this need not be the case: counter-examples were presented, for example, in [73,113]. These correspond to networks of cosmic strings diluted by inflation that can regrow to a level potentially observable today in gravitational waves. In contrast to undiluted cosmic strings (which produce a stochastic GW background), the leading signal from a diluted cosmic string network can be distinctive bursts of GW. In [73] the VOS model was used together with a simplified picture of inflation and reheating to estimate the dilution of cosmic strings. The starting point is to consider the same VOS equations as in Eq. (53). One then takes the correlation length L_F(t_F) at the time t_F, the later of the beginning of inflation or the formation of the network. After t_F, the string network parameters reach an attractor solution during inflation, given by L(t) = L_F e^{H_I (t − t_F)} and v(t) = (2√2/π)/(H_I L(t)) [73]. The solution corresponds to having a long string network with HL ≫ 1 and v ≪ 1. The conditions under which there is enough regrowth are given in Eq. (7) of [73]. These are written in terms of the redshift z̃ [73] at which the condition HL → 1 is achieved. The condition can be satisfied for ∆N = 0, where ∆N represents the number of e-foldings between t_F and t_I, the time at which the attractor solution is reached. ∆N = 0 corresponds to the strings forming at the start of inflation, in which case only the number of long strings accounts for satisfying the condition of Eq. (7) of [73]. We assume this last possibility and use Eq. (19) of [73] to produce the GW profile.
E Production of GW from Domain Walls
Domain walls are sheet-like topological defects, which might be created in the early universe when a discrete symmetry is spontaneously broken. Since discrete symmetries are ubiquitous in high energy physics beyond the Standard Model (SM), many new physics models predict the formation of domain walls in the early universe. By considering their cosmological evolution, it is possible to deduce several constraints on such models even if their energy scales are much higher than those probed in the laboratory experiments.
In the context of GUT theories they can appear via the D-parity symmetry, denoted also as Z_2^C. They cannot be allowed to appear after inflation has taken place, unless the discrete symmetry is broken by an explicit parameter or by gravitational effects, or the walls decay due to the attachment of CS produced at a previous breaking stage. For DW to which CS do not attach, there is also the possibility that the discrete symmetry is broken and is never restored at high temperature [114]. Another possibility is to accompany the GUT theory with an extra PQ symmetry that lifts the Z_2^C symmetry, so that it acts only as an approximate symmetry (see for example [115] and references therein).
For some ranges of parameters there exists an alternative solution to the domain wall problem [116,117], based on the idea of symmetry non-restoration [114], which does not require any explicit breaking of the discrete symmetry: the discrete symmetry is simply never restored at high temperature, so the domain walls never form. As shown in [116], this can rescue some of the models, such as those with spontaneously broken CP and Peccei-Quinn symmetries. Nevertheless, this mechanism is incompatible with renormalizable supersymmetric theories [118]. One way to make the bias compatible with quantum breaking [119,120] and with the de Sitter Swampland program [121,122] would be to make the bias time-dependent in such a way that it disappears after the walls disappear [117]. Nevertheless, we think that the identification, or lack, of signals of domain walls in the expected regions would either constrain or rule out the parameter space of bias parameters.
We assume that a bias parameter, coming either from Planck-suppressed terms [49,50] or from a soft breaking, breaks the symmetry and hence allows D parity to break below the inflation scale, leaving a GW signal described by Eq. (55), where we have used the parametric form of the spectrum below and above the peak frequency and the assumptions of [115] based on the simulations of [123]. Here A is a parameter that can be extracted from lattice simulations, ε_GW is a parameter based on numerical simulations for the energy density of the GW, with the value ε_GW = 0.7 ± 0.4 [115], g_{*s}(T_ann) are the degrees of freedom at the annihilation temperature T_ann, and σ is the tension of the DW. Note that a DW network which develops after inflation, during the radiation dominated era, with the cosmic strings formed before inflation, will have the same parametric behaviour as Eq. (55), that is Ω_DW h^2(f) = Ω_{GW, max} (f/f_Peak)^p, with different powers p below and above the peak, where the peak frequency is determined by the Hubble parameter at the decay time, f_Peak ∼ (a_{t_dec}/a_{t_0}) H_{t_dec} [123].
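For illustration, the sketch below implements the parametric shape of Eq. (55) as a broken power law around the peak, assuming the slopes Ω ∝ f^3 below the peak and Ω ∝ 1/f above it that are typically extracted from the simulations of [123]; the peak amplitude and the peak frequency are left as free, illustrative inputs.

```python
def omega_dw(f, f_peak, omega_max):
    """Broken power-law shape of the domain-wall spectrum around its peak:
    ~ f^3 below the peak and ~ 1/f above it (slopes assumed from [123])."""
    x = f / f_peak
    return omega_max * (x ** 3 if x < 1.0 else 1.0 / x)

# Example: peak at 1e-2 Hz with a peak amplitude of 1e-12 (illustrative numbers)
for f in (1e-4, 1e-3, 1e-2, 1e-1, 1.0):
    print(f"f = {f:.0e} Hz  ->  Omega_DW h^2 ~ {omega_dw(f, 1e-2, 1e-12):.1e}")
```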
Imaging particle collision data for event classification using machine learning
We propose a method to organize experimental data from particle collision experiments in a general format which can enable a simple visualisation and effective classification of collision data using machine learning techniques. The method is based on sparse fixed-size matrices with single- and two-particle variables containing information on identified particles and jets. We illustrate this method using an example of searches for new physics at the LHC experiments.
I. INTRODUCTION
Machine learning is successfully used for the classification of experimental data in particle collision experiments (for a review see [1]). This technique becomes increasingly important in searches for new physics in the billions of events collected by the LHC experiments. Studies of interesting physics channels using machine learning usually include an identification of the most relevant ("influential") input variables, data reduction, data rescaling (the range [0, 1] is a popular choice), dimensionality reduction, data normalization (to avoid cases when some input values outweigh others) and so on.
Among various supervised-learning techniques, neural networks (NN) [2,3] are widely used for high-accuracy image identification and classification. In the case of large volumes of multi-dimensional data, such as data with information on the final-state particles and jets produced at particle accelerators, the usage of NNs is more challenging. A procedure has to be established to find influential input variables from the variable-size lists with characteristics of particles and jets. Examples of analysis-specific input variables for supervised machine learning designed for the reconstruction of heavy particles produced in e+e− and pp experiments can be found in [4,5].
The usage of machine learning in particle physics can be simplified if experimental data are transformed into image-like data structures that capture the most important kinematic event characteristics. In this case, the identification of influential variables for specific physics signatures and the further preparation of such variables (rescaling, normalization, decorrelation, etc.) for machine learning can be minimized. At the same time, one can leverage a wide range of algorithms for image classification developed by leading industries. Similar to pixelated images of jets [6] used for machine learning (for a review see [3]), a pixelated representation of the kinematics of produced particles and jets, in combination with machine learning techniques, may shed light on new phenomena in particle collisions. This paper proposes a mapping of particle records from collision experiments to matrices that cover a wide range of properties of the final state. Such 2D arrays have predefined fixed sizes and fixed ranges of their values, unlike the original data records that have a varying number of particles represented by four-momenta or other (typically unbounded) kinematic variables. In this context, the word "imaging" used in this paper refers to a pixelated representation of the kinematic characteristics of particles and jets in the form of such matrices. We will show that these matrices can easily be visualized and can be conveniently interpreted by popular machine learning algorithms.
II. RAPIDITY-MASS MATRIX (RMM)
Imaging event records with final-state particles means transforming such data into a fixed-size grid of values in a given range. In the simplest case, this can be a 2D matrix comprised of columns and rows of values that carry useful features of events to be used in event classification. We propose to construct a square matrix of a fixed width 1 + Σ_{i=1}^{T} N_i, where T is the total number of object types (jets, identified particles, etc.) to be considered and N_i is the expected maximum multiplicity of an object of type i in all events. The values T and N_i should be defined based on expectations for the types of reconstructed particles and on the technical capabilities of the available computational resources for machine learning.
This matrix will contain values of single-and double-particle characteristics/properties of all reconstructed objects.
To be more specific, let us consider a dimensionless matrix with T = 2. The two objects to be considered are jets (j) and muons (µ). We assume that the maximum number for each particle type is fixed to a constant N, i.e. N_i = N for i = 1, 2. Then we define the following rapidity-mass matrix (RMM) for a given pp collision. The first element, at position (1,1), contains the scaled missing transverse energy e_T^miss = E_T^miss/√s, where E_T^miss is defined as the projection of the negative vector sum of all the reconstructed particle momenta onto the plane perpendicular to the direction of the colliding beams. Other diagonal cells contain the ratio e_T(i_1) = E_T(i_1)/√s, where E_T(i_1) is the transverse energy of the leading-in-E_T object of type i (a jet or µ), and the transverse energy imbalances δe_T(i_n) for the remaining objects of a given type i. All objects inside the RMM are strictly ordered in transverse energy, i.e. E_T(i_{n−1}) > E_T(i_n); therefore, δe_T(i_n) always has positive values. The non-diagonal upper-right values are m(i_n, j_k) = M_{i,n,j,k}/√s, where M_{i,n,j,k} are two-particle invariant masses. The first row contains the transverse masses M_T(i_n) of objects i_n for two-body decays with undetected particles, scaled by 1/√s, i.e. m_T(i_n) = M_T(i_n)/√s. The transverse mass M_T is defined using the missing transverse momentum vector E_T^miss, the transverse energy E_T and the transverse momentum vector p_T of the observed particle/jet at the position i_n. According to this definition, m_T(i_n) = 0 when e_T^miss = 0 for massless particles.
The first column vector is h_L(i_n) = C(cosh(y) − 1), where y is the rapidity of the particle i_n and C is a constant. The rapidity of a particle/jet is defined in terms of its energy and longitudinal momentum components E and p_z as y = 0.5 ln((E + p_z)/(E − p_z)). The variable h_L(i_n) is proportional to the Lorentz factor cosh(y), thus it reflects the longitudinal direction. The scaling factor C is defined such that the average values of h_L(i_n) are similar to those of m(i_n, j_k) and m_T(i), which is important for certain algorithms that require input values to have similar weights.
This constant, which is found to be 0.15 from Monte Carlo simulation studies of QCD multijet events, is sufficient to ensure similar orders of magnitude for the average values of h_L(i_n) and the scaled masses. The value 0.15 is also sufficient to make sure that h_L(i_n) belongs to the interval [0, 1] for a typical rapidity range [−2.5, 2.5] of reconstructed particles and jets at collider experiments. For other experiments with a different rapidity range, the constant C needs to be recalculated.
The value h(i_n, j_k) = C(cosh(∆y/2) − 1) is constructed from the rapidity differences ∆y = y_{i_k} − y_{j_n} between the objects i and j, using the same cosh() transformation. For massless particles, M_T can be approximated with √(2 E_T E_T^miss (1 − cos(∆φ))), where ∆φ is the opening angle between p_T and E_T^miss; we use this massless approximation in this paper. Let us discuss the properties of the RMM relevant to physics signatures in particle experiments:

• The first cell, at position (1,1), contains E_T^miss, which is a crucial characteristic of events in many physics analyses at the LHC. This variable is important for searches for new physics in events with undetected particles, but also for the reconstruction of Standard Model particles.
• The first row of values is also sensitive to the missing transverse energy. These values reflect the masses of particles whose decays include invisible particles. The most popular example is the W → lν decay, for which the m_T(i) cells carry information on the W mass.
Transverse masses are used in many searches to separate the signal from backgrounds since they contain information on correlations of E miss T with other objects in an event.
For example, searches for SUSY (for example, see [7] and references therein) and dark matter particles (for a review see [8]) will benefit from analysis of the first row.
• The first column of the RMM reflects longitudinal characteristics of events. It can be used to separate forward production from centrally produced objects. For example, if jets are produced preferentially in the forward region, then h_L(i) has non-zero values. This is important for the identification of events with hadronic activity in the forward region. For example, the production of the Higgs (H) boson in the Vector Boson Fusion mechanism usually has at least one jet in the forward direction [9].
• The diagonal elements with e T (i) and δe T (i) values can be used for calculations of transverse energies of all objects and, therefore, the total transverse energy of events.
The transverse energy imbalance, δe_T, is sensitive to the interactions of partons in the medium of heavy-ion collisions (see, for example, [10]). These cells can also be used for a separation of multijet QCD production from more complex processes. Note that the energy E(i) of an object i can be reconstructed from e_T(i) and h_L(i) as E(i) = E_T(i) cosh(y) = √s e_T(i) (1 + h_L(i)/C).

• The non-diagonal top-right cells capture two-particle invariant masses. For two-particle decays, m(i, j) are proportional to the masses of the decaying particles. For example, for a resonance production, such as Z bosons decaying to muons, the cells at m(µ_1, µ_2) will be filled with the nominal mass of the Z boson (scaled by 1/√s).
• The RMM contains the information on rapidity differences via h(i, j) (the lower left part of the RMM). Collimated particles will have values of h(i, j) close to zero. The rapidity difference between jets is often used in searches for heavy resonances [11], and is sensitive to parton dynamics beyond collinear factorization [12].
The matrix of Eq. 1 does not contain the complete information on the four-momentum of each particle (or other kinematic variables, such as the azimuthal angle φ). In many cases, additional single-particle kinematic variables, such as φ, are featureless due to the rotational symmetry around the beam direction. Nevertheless, the RMMs themselves can be used for object selection and a basic data analysis: when interesting candidate events are identified using the RMM, one can refine the search using the RMM itself. For example, in the case of searches for new resonances decaying to two leading jets, one can histogram the values of the RMM cell at position (3,2) over events in order to obtain distributions of two-jet invariant masses. Such an event-by-event analysis of RMMs may require a smaller data volume compared to the complete information on each produced particle, since there are many techniques for the effective storage of sparse matrices.
The matrix of Eq. 1 can be extended to electrons, photons, b-tagged jets and reconstructed τ leptons. The RMM can also be generalized to a 3D space by adding three-particle kinematic variables, such as three-particle invariant masses.
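To make the construction of this section concrete, a minimal Python/NumPy sketch is given below. The value of √s, the assumption that the missing transverse momentum points along φ = 0, and the explicit ratio used for the imbalance δe_T are illustrative assumptions, not part of the definition above; the function expects per-type lists of four-momenta already ordered in decreasing E_T.

```python
import numpy as np

C = 0.15            # scaling constant for h_L and h(i,j), as quoted in the text
SQRT_S = 13000.0    # collision energy in GeV (assumed value, e.g. 13 TeV)

def rapidity(E, pz):
    return 0.5 * np.log((E + pz) / (E - pz))

def build_rmm(objects_by_type, met, n_per_type):
    """Build a rapidity-mass matrix (RMM) of width 1 + T * n_per_type.

    objects_by_type : list with one entry per object type; each entry is a list
                      of four-momenta (E, px, py, pz) ordered in decreasing E_T.
    met             : missing transverse energy E_T^miss in GeV.
    n_per_type      : maximum multiplicity N kept for every type.
    """
    T = len(objects_by_type)
    dim = 1 + T * n_per_type
    rmm = np.zeros((dim, dim))
    rmm[0, 0] = met / SQRT_S                      # cell (1,1): e_T^miss

    # fixed slots: pad each type with None up to n_per_type objects
    slots = []
    for objs in objects_by_type:
        kept = list(objs[:n_per_type])
        slots.extend(kept + [None] * (n_per_type - len(kept)))

    def et(p):
        return np.hypot(p[1], p[2])

    def y(p):
        return rapidity(p[0], p[3])

    for a, pa in enumerate(slots, start=1):
        if pa is None:
            continue
        # first row: transverse mass (massless approx.), assuming MET along phi = 0
        dphi = np.arctan2(pa[2], pa[1])
        mt2 = 2.0 * et(pa) * met * (1.0 - np.cos(dphi))
        rmm[0, a] = np.sqrt(max(mt2, 0.0)) / SQRT_S
        # first column: h_L = C (cosh(y) - 1)
        rmm[a, 0] = C * (np.cosh(y(pa)) - 1.0)
        # diagonal: e_T for the leading object of each type, imbalance otherwise
        if (a - 1) % n_per_type == 0:
            rmm[a, a] = et(pa) / SQRT_S
        else:
            prev = slots[a - 2]                   # previous object of the same type
            if prev is not None:
                # assumed form of the imbalance delta e_T (not spelled out above)
                rmm[a, a] = (et(prev) - et(pa)) / (et(prev) + et(pa))
        for b, pb in enumerate(slots, start=1):
            if pb is None or b == a:
                continue
            if b > a:                             # upper-right: invariant mass / sqrt(s)
                E = pa[0] + pb[0]
                px, py, pz = (pa[i] + pb[i] for i in (1, 2, 3))
                m2 = max(E * E - px * px - py * py - pz * pz, 0.0)
                rmm[a, b] = np.sqrt(m2) / SQRT_S
            else:                                 # lower-left: C (cosh(dy/2) - 1)
                rmm[a, b] = C * (np.cosh(0.5 * (y(pa) - y(pb))) - 1.0)
    return rmm
```

Because most cells remain zero, the resulting arrays can be stored efficiently with standard sparse-matrix containers, in line with the remark above about effective storage of sparse matrices.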
III. VISUALISING COLLISION EVENTS
Let us give an example of how the RMM approach can be used for the visualisation and identification of different event types. We will construct an RMM with four types (T = 4) of reconstructed objects: jets (j), muons (µ), electrons (e) and photons (γ). Three particles per type (N_n = 3, n = 1, . . . , 4) will be considered. This leads to a matrix (to be denoted as T4N3) of size 13 × 13 (one extra row and column contain m_T(i) and h_L(i)).
We used the Pythia8 [13,14] Monte Carlo generator. In addition to the Standard Model processes, events with a charged Higgs boson (H^+) were generated using the diagram bg → H^+ t, an attractive exotic process [18] arising in models with two (or more) Higgs doublets. This process was also simulated with Pythia8, assuming a mass of 600 GeV for the H^+ boson, which decays to W^+ and H.
In order to make the identification of this process more challenging for our later discussion, we will consider H decaying to two b-jets. In this case, the event signatures (and the RMM values) are rather similar to those from tt production.
The software and Monte Carlo settings were taken from the HepSim project [19]. Stable particles with a lifetime larger than 3 · 10 −10 seconds were considered, while neutrinos were excluded from consideration. The jets were reconstructed with the anti-k T algorithm [20] as implemented in the FastJet package [21] using a distance parameter of R = 0.6. The minimum transverse energy of jets was 50 GeV, and the pseudorapidity range was |η| < 3.
Muons, electrons and photons were reconstructed from the truth-level record, making sure that they are isolated from jets. The minimum transverse momentum of leptons and photons was 25 GeV. The missing transverse energy is recorded only if it is above 50 GeV.
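The object selection described above can be written as a simple filter; the dictionary-based container used here is purely illustrative and not the format of the actual analysis code.

```python
def select_objects(jets, leptons_photons, met):
    """Selection used in the text: jets with E_T > 50 GeV and |eta| < 3,
    leptons/photons with p_T > 25 GeV, and MET recorded only above 50 GeV.
    Objects are dicts with 'et', 'pt' and 'eta' keys (illustrative format)."""
    jets_sel = [j for j in jets if j["et"] > 50.0 and abs(j["eta"]) < 3.0]
    lp_sel = [p for p in leptons_photons if p["pt"] > 25.0]
    met_sel = met if met > 50.0 else 0.0
    return jets_sel, lp_sel, met_sel
```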
For a better understanding of the RMM formalism, Fig. 1 shows two tt events from the Pythia8 generator. Fig. 1(a) shows an event where the t and t̄ quarks decay to six jets, while Fig. 1(b) shows an event where one top quark decays to bW^+ with W^+ → e^+ ν_e; in the latter case the undetected neutrino leads to a non-zero value in the E_T^miss cell at position (1,1). Note that the simple T4N3 configuration used in this paper to illustrate the RMM concept is not appropriate for real-life cases since it cannot accommodate all reconstructed jets, nor b-jets. How to determine the RMM configuration will be explained at the end of Sect. V.
The RMM matrices can also be used for visualizing groups of many events. Figure 2 shows the average values of the T4N3 RMM cells for the Monte Carlo events described above.
As in the case of single events, the figures show rather distinct patterns for these four processes. The QCD multijet events do not have large values of E_T^miss or of the (e_1, e_2) cells. If one considers the (γ_1, γ_2) cells only, the Higgs mass can be reconstructed from the m(γ_1, γ_2) cells shown in Fig. 2(b), after multiplying their values by √s. The tt events have a large missing transverse energy at (1,1) and a significant jet activity. The largest similarity between the RMMs was found for the H^+ and tt events shown in Figs. 2(c) and (d).
If a single decay channel is considered for H + → W + H, such as H decaying to two photons, the RMM for H + will be significantly different from the other processes.
IV. RMM FOR NEURAL NETWORKS
An identification of the Standard Model processes, such as multijet QCD, Higgs and tt, does not represent any challenge for the NN classification, since even the visualized RMMs show different patterns for these processes. However, the separation of H^+ events from tt is more difficult due to the similarity of their RMMs. The main distinguishing feature of these two processes is the RMM cell values, i.e. the color patterns shown in Fig. 2, not the numbers of non-zero cells in events. Therefore, the H^+ and tt events were used to verify the classification capabilities of the RMM.
As a simple test of the RMM concept for event classification using machine learning, we created 10,000 RMMs for H^+ and tt events, which are then used as the input for a shallow backpropagation NN with the sigmoid activation function. The NN was implemented using the FANN package [22]. No rescaling of the input values was applied, since the value range [0, 1] is fixed by the definition of the RMM. The NN had 169 input nodes, which were mapped to a 1D array obtained from the T4N3 RMM with a size of 13 × 13. A single hidden layer had 120 nodes, while the output layer had a single node. During the NN training, the output node value was set to 0 for tt events and to 1 for H^+ events. This value corresponds to the probability that a given collision event belongs to the H^+ category. The NN was trained using 2,000 epochs; the training was terminated using an independent ("cross-validation") sample. To understand the performance of the NN, the default activation function was changed to a linear activation function in the FANN package [22], the number of nodes in the hidden layer was varied in the range 50-400, and the number of hidden layers was increased to two. No significant changes in the NN performance were found. In addition, 2D arrays were constructed in which all non-zero RMM values were set to 1 (converting the RMMs to "black-and-white" images). Such 2D arrays, called RMM-BW, were constructed to check the sensitivity of the NN output to the amplitude of the cell values.
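A minimal NumPy stand-in for the shallow network described above (169 inputs from the flattened 13 × 13 RMM, one hidden layer with 120 sigmoid units, one sigmoid output trained with a mean-squared-error loss) is sketched below. It is not a reproduction of the FANN-based implementation; the learning rate and initialisation are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ShallowNet:
    """169 -> 120 -> 1 fully connected network with sigmoid activations."""
    def __init__(self, n_in=169, n_hidden=120):
        self.W1 = rng.normal(0.0, 0.1, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, size=(n_hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)
        return sigmoid(self.h @ self.W2 + self.b2)

    def train_step(self, X, t, lr=0.1):
        y = self.forward(X)                      # predictions, shape (n, 1)
        # gradients of the mean-squared error through the sigmoids
        d_out = (y - t) * y * (1.0 - y)
        d_hid = (d_out @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 -= lr * self.h.T @ d_out / len(X)
        self.b2 -= lr * d_out.mean(axis=0)
        self.W1 -= lr * X.T @ d_hid / len(X)
        self.b1 -= lr * d_hid.mean(axis=0)
        return float(np.mean((y - t) ** 2))      # MSE, monitored per epoch

# X: flattened RMMs with shape (n_events, 169); t: 0 for ttbar, 1 for H+ events
# net = ShallowNet(); mse = [net.train_step(X, t) for _ in range(2000)]
```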
According to Fig. 3(a), the NN with the RMM-BW inputs cannot effectively be trained, since the MSE values are independent of the epoch number. Therefore, for the given example, the number of cells containing zero values in the RMM (and thus the multiplicities of objects) is not an important factor for the NN training.
To verify the performance of the trained NN, a sample of RMMs was constructed from tt and H^+ events which were not used during the training procedure. Then the trained NN was applied to predict the output node value. The success of the NN training was evaluated in terms of the purity of the reconstructed H^+ events as a function of the reconstruction efficiency. The efficiency of the identification of H^+ events was defined as the fraction of true H^+ events with the NN output above some value. This value can be varied between 0 and 1, with 1 corresponding to the highest likelihood that the event belongs to the H^+ process. We also calculated the purity of the reconstructed H^+ events as the ratio of the number of H^+ events that met a requirement on the NN output value, divided by the number of accepted events (irrespective of the origin of these events). Both the efficiency and the purity depend on the value of the output node. According to this figure, the standard RMM can be used to identify H^+ events, while the RMM-BW input for the NN fails for the event classification due to the lack of influential features.
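The efficiency and purity defined above can be obtained from the trained network output with a short scan over the cut value; the arrays of per-event scores and true labels (1 for H^+, 0 for tt) are assumed to be available.

```python
import numpy as np

def efficiency_purity(scores, labels, thresholds=np.linspace(0.0, 1.0, 101)):
    """Efficiency and purity of the H+ selection versus the NN-output cut.
    scores: NN output per event; labels: 1 for true H+, 0 for ttbar."""
    results = []
    n_signal = np.sum(labels == 1)
    for thr in thresholds:
        accepted = scores > thr
        n_acc = accepted.sum()
        n_acc_sig = np.sum(accepted & (labels == 1))
        eff = n_acc_sig / n_signal if n_signal else 0.0
        pur = n_acc_sig / n_acc if n_acc else 1.0
        results.append((thr, eff, pur))
    return results
```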
V. EVENT CLASSIFICATION FOR SEARCHES
The RMMs can project complex collision events into a fixed-size phase space, such that the mapping of the RMM cells to a NN with a fixed number of input nodes is unique, independent of how many objects are produced. A well-defined input space is a requirement for many machine learning algorithms. At the same time, the RMM covers almost every aspect of particle/jet production in terms of the major (weakly correlated) kinematic variables. Therefore, studies of the most influential variables used for the classification of events can be reduced or even avoided. However, in some situations, in order to reduce biases that can distort background distributions towards signal features, the RMM cells with variables to be used for physics measurements need to be identified and disregarded during machine learning.
Let us consider a simple example. We will separate the H+ events ("signal") from tt ("background") using the NN based on the RMMs. As a measure of our success in the event classification, we will use the invariant mass of the two jets leading in E_T, which corresponds to the cell position (3,2). This variable will also be used to perform physics studies, such as the measurement of the H+ mass, so its distribution should (preferably) be unbiased by the NN training. Therefore, we will disable the e_T(i) and m(i, j) inputs at the RMM positions (2,2) and (3,2) during the NN training. In the dijet invariant-mass distribution of the signal, a sharp peak at 125 GeV is due to the decay H → bb, when each b-quark forms a jet, while a broad peak near 500 GeV is due to the decays of H+ to W(→ qq) and H(→ bb), when the decay products of the W and H are merged by the jet algorithm to form two jets.
The peak position of the latter is shifted from the generated H + mass of 600 GeV. This is due to the fact that the jet algorithm has a size of 0.6, which is not large enough to collect all hadronic activity from boosted W and H. The tt events generated as explained above represent a background which masks the H + signal. Note that, in real-life scenarios, the rate of the background tt events is significantly larger than that given in this toy example.
The NN with the two disabled input links described above was trained using 10,000 events with tt and H+. The trained NN was applied to classify an event sample with 50,000 events per event category. We accept H+ events when the value of the output node is above 0.5. After applying the NN selection, one can successfully reduce the tt background, as shown in Fig. 4(b). The background distribution is somewhat shifted towards the signal peak, indicating that some residual effect from the NN training is still present, but this bias cannot change our main conclusion that the signal can reliably be identified. The NN selection decreases the background in the 500 GeV mass region by a factor of 3, while the H+ signal rate is only reduced by 30%. Note that the background contribution to the H+ events depends on the realistic event rates of the tt and H+ processes, which are not considered in this example.
A comment on the evaluation of systematic uncertainties when using the RMM is in order. As in the case of the usual cut-and-reject methods, the RMMs should be recalculated using the varied conditions for all reconstructed objects. For calculations of statistical limits, the histograms obtained after applying the RMM selections should be passed to dedicated programs for limit setting.
We believe that the event classification capabilities of this approach can be increased significantly by considering RMMs with more than three jets, leptons with different charges, b-jets, reconstructed τ leptons, or by using multi-dimensional matrices with three or more particle correlations. To improve the classification, care should be taken to avoid a "saturation" effect when multiplicities of particles/jets are larger than the chosen N_i that define the RMM size; otherwise the loss of useful information may prevent an effective usage of the RMM. Analyses using the RMM approach were also considered in Ref. [23].
VI. CONCLUSION
We propose a method to transform events from collider experiments into a representation widely used by machine learning algorithms, i.e. fixed-size sparse matrices. In addition, the RMM transformation can be viewed as an effective mapping of complex collision data to a pixelated representation useful for visual study. The method does not exclude the use of different types of neural networks (deep or recurrent) or other machine learning techniques.
By construction, groups of RMM cells that belong to certain types of objects are connected by proximity due to a well-defined hierarchy of the kinematic variables. Therefore, the usage of the RMM in particle physics may leverage widely used algorithms developed for image identification that exploit the local connectivity of pixels (cells).
Our tests indicate that the proposed approach of imaging collision data for event classification can be useful for preparing a feature space for machine learning. The RMM method is sufficiently general and, typically, does not require detailed studies of influential variables sensitive to background events. But care should be taken to avoid using NN inputs that may bias the shapes of observables which will be used later in searches for new physics. The C++ library that transforms the event records to the RMM is available [24].

[Table: correlation coefficients between RMM cells, calculated from the Pythia8 Monte Carlo simulations described in Sect. III. The first column shows the correlation coefficient, while the second and third columns show the positions of the two correlated cells. In the RMM notation, the largest correlation (92%) is observed for the transverse masses calculated using the leading and sub-leading jets.]
|
2018-12-30T21:24:30.353Z
|
2018-05-29T00:00:00.000
|
{
"year": 2018,
"sha1": "ccb8f62b5a0468d0358a5cd0bd1e119d662ad95c",
"oa_license": null,
"oa_url": "https://www.sciencedirect.com/science/article/am/pii/S0168900219304796",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "ccb8f62b5a0468d0358a5cd0bd1e119d662ad95c",
"s2fieldsofstudy": [
"Physics",
"Computer Science"
],
"extfieldsofstudy": [
"Physics"
]
}
|
118361027
|
pes2o/s2orc
|
v3-fos-license
|
Structure of odd Ge isotopes with $40<N<50$
We have interpreted recently measured experimental data for $^{77}$Ge, and also for the $^{73,75,79,81}$Ge isotopes, in terms of state-of-the-art shell model calculations. Excitation energies, B(E2) values, quadrupole moments and magnetic moments are compared with experimental data when available. The calculations have been performed with the recently derived interactions JUN45 and jj44b for the $f_{5/2}pg_{9/2}$ space. We have also performed calculations for the $fpg_{9/2}$ valence space using an $fpg$ effective interaction with a $^{48}$Ca core, imposing a truncation to study the importance of the proton excitations across the Z=28 shell in this region. The results predicted by the jj44b interaction are in good agreement with the experimental data.
INTRODUCTION
Nuclear structure in the 40 ≤ N ≤ 50 region is a topic of current research devoted to the investigation of single-particle versus collective phenomena. For fp-shell nuclei, collective phenomena have been measured in different laboratories worldwide. In nuclei where the π0f_{7/2} shell is not completely filled and N ∼ 40, deformation appears due to the interaction of neutrons excited into the sdg shell with protons in the fp shell. Experimental indications of a new region of deformation for fp-shell nuclei have been reported for the Fe [1], Cr [2] and Co [3] isotopes.
The importance of including intruder orbitals from the sdg shell in the model space for fp-shell nuclei has been reported in the literature [4][5][6][7][8][9][10].
The nuclei around the Ni region, particularly the Ga and Ge isotopes, are interesting from both the experimental and theoretical points of view. Sudden structural changes between N = 40 and N = 50 for Ga isotopes have been observed at ISOLDE, CERN [11]. The even Ge isotopes attracted much experimental and theoretical interest due to a shape transition in the vicinity of N = 40 [12][13][14]. The low-lying excitation energies of Ge isotopes show unexpected behavior: as we move from 70Ge to 76Ge, the E(0^+_2) for 72Ge drops below E(2^+_1) and then starts rising again from 74Ge onwards. In the pioneering work of Padilla-Rodal et al. [12], the evolution of collectivity in Ge isotopes was established through B(E2) measurements.
It was shown that the N = 40 shell closure collapses in 72Ge. However, the N = 50 shell closure persists in the Ge isotopes. Collectivity at N = 50 for 82Ge and 84Se was measured using intermediate-energy Coulomb excitation [15]. In Fig. 1 we show the experimental E(2^+_1) for even-even nuclei. The rapid decrease of E(2^+_1) for the Ge and Se isotopes around N = 40 reveals the collapse of this shell closure.
In the present paper we consider neutron-rich odd Ge isotopes. Shell model calculations in the f_{5/2}pg_{9/2} space for Ge isotopes have been reported in the literature using pairing plus quadrupole-quadrupole interactions [16], and with the JUN45 interaction for 73Ge and 75Ge in [17]. In this work we report shell model calculations in the f_{5/2}pg_{9/2} space with the recently devised effective interactions, and we also report, for the first time, shell model results for odd Ge isotopes including the f_{7/2} orbital in the model space, in order to study the importance of proton excitations across the Z = 28 shell as suggested in [11]. The electromagnetic moments of Ga isotopes were explained in our recent investigation [18] by including the f_{7/2} orbital in the f_{5/2}pg_{9/2} model space.
Section 2 gives details of the shell model (SM) calculations. We will discuss in this section the model space and the effective interactions used in the investigation. Section 3 includes results on the spectra of 73−81 Ge odd isotopes and configuration mixing in these nuclei. In Section 4, SM calculations on E2 transition probabilities, quadrupole moments and magnetic moments are presented. Finally, concluding remarks are given in Section 5.
DETAILS OF MODEL SPACES AND INTERACTIONS
In the present study we have performed calculations in two different shell-model spaces.
The first set of calculations has been performed with two recently derived effective shell model interactions, JUN45 and jj44b, that have been proposed for the 1p_{3/2}, 0f_{5/2}, 1p_{1/2} and 0g_{9/2} single-particle orbits. The JUN45 interaction, recently developed by Honma et al. [17], is a realistic interaction based on the Bonn-C potential, fitted to 400 experimental binding and excitation energies with mass numbers A = 63-96. The jj44b interaction was developed by Brown and Lisetskiy [19] by fitting 600 binding and excitation energies with Z = 28-30 and N = 48-50. Instead of 45 as in JUN45, here 30 linear combinations of good J-T two-body matrix elements (TBME) were varied, with an rms deviation of about 250 keV from experiment. Shell model results based on the jj44b interaction have recently been reported in the literature [11,18,20,21]. The second set of calculations has been performed in the fpg_{9/2} valence space with a 48Ca core, where eight neutrons are frozen in the νf_{7/2} orbital. This interaction was reported by Sorlin et al. [22]. As the dimensions of the matrices become very large here, a truncation has been imposed: we allowed up to a total of four particle excitations, from the f_{7/2} orbital to the upper fp orbitals for protons and from the upper fp orbitals to the g_{9/2} orbital for neutrons. The fpg interaction was built using fp two-body matrix elements (TBME) from [23] and rg TBME (p_{3/2}, f_{5/2}, p_{1/2} and g_{9/2} orbits) from [24]. For the common active orbitals in these subspaces, matrix elements were taken from [24]. As the latter interaction (rg) was defined for a 56Ni core, a scaling factor of A^{-1/3} was applied to take into account the change of radius between the 40Ca and 56Ni cores. The remaining f_{7/2}g_{9/2} TBME are taken from [25]. For the JUN45 and jj44b interactions the single-particle energies are based on those of 57Ni. In the second set of calculations, for the fpg interaction with the 40Ca core, the single-particle energies are based on those of 41Ca.
In the present calculations the JUN45 and jj44b interactions provide binding energies of nuclei, whereas the fpg interaction gives energies relative to the f_{7/2} orbital rather than binding energies. For both sets of calculations the single-particle energies are the same for protons and neutrons.
The single-particle energies for both protons and neutrons for the three interactions are given in Table 1. In Fig. 2 we compare the effective single-particle energies of the proton orbits for Cu isotopes in the JUN45 and jj44b interactions. Both interactions show a rapid decrease of the f_{5/2} proton single-particle energy relative to p_{3/2} as neutrons start filling the g_{9/2} orbit, and it becomes lower than p_{3/2} for N > 48. The dimensions of the matrices for the odd 73−81Ge isotopes in the m-scheme for the f_{5/2}pg_{9/2} and fpg_{9/2} spaces are shown in Table 2. In the case of 73Ge the computing time is ∼15 days for both parities. Obviously, the maximal dimension of ∼10^8 is reached for 73Ge when using the f_{5/2}pg_{9/2} space with a 56Ni core, since the neutron number is furthest from the closed shell for this nucleus among the Ge isotopes considered in this work.
All calculations in the present paper are carried out at the DGCTIC-UNAM computational facility KanBalam using the shell model code ANTOINE [26].
SPECTRA
As discussed in Section 1, nuclei with N < 40 and N > 40 have different structural properties. This can be interpreted as a transition from a spherical (or oblate) to a prolate shape; it may also point to a coexistence of these phases. Such a change can be seen in various experimental observables like nucleon transfer cross sections, B(E2) values and their ratios for low-lying states [17]. The measured systematics show a narrowing of the N = 40 shell gap toward Z = 32 [27], while the persistence of the N = 50 shell closure is suggested in 80Ge based on the new B(E2) data [28]. The shell model structure of Ge isotopes was discussed in [17] using the JUN45 interaction. We shall start from 73Ge, which has 41 neutrons and is the closest neighbor of 72Ge with its particular structure. The structural change can be observed by increasing the neutron number from 41 (73Ge) to 49 (81Ge). The results for the three interactions used in the calculations are presented with respect to the experiment.
73 Ge
A comparison of the calculated energy levels of 73Ge with the experimental data is shown in Fig. 3. All three interactions fail to predict the ground state correctly.
The difference in the triplet of levels mentioned in [17] is the same in the fpg calculation too, i.e. the 5/2^+_1 level is high in the calculation as compared to the 7/2^+_1 and 9/2^+_1 levels, while in the experiment these three levels lie very close together. Also, the 1/2^+_1 level is too high compared to the experiment with the jj44b and fpg interactions. This was the case also for 69,71Ge [17], where it was supposed that this could be because of the closed fp neutron shell and that d_{5/2} might play an important role. However, it can be noted that the 5/2^+_1 and 1/2^+_1 levels are very close to the other two levels in the jj44b calculation. The lower negative-parity levels are described well by the JUN45 and jj44b calculations. The negative-parity levels are too high in the fpg calculation, though their arrangement is similar to that in the other two calculations. For the 9/2^+_1 level the JUN45 and jj44b interactions predict ν(g^3_{9/2}) (probability ∼12%) and ν(g^5_{9/2}) (probability ∼5%) configurations, respectively.

75 Ge

Figure 4 shows the experimental and calculated positive- and negative-parity levels of 75Ge using the JUN45, jj44b and fpg interactions. As is seen from Fig. 4, only the jj44b calculation describes these levels reasonably well.
The first positive-parity 7/2^+ level, at 140 keV in the experiment, is predicted at 273 keV by the jj44b calculation.
77 Ge
A comparison of the calculated positive- and negative-parity levels with the recent experimental data from the ATLAS facility [30] is shown in Fig. 5. The 7/2^+ ground state is now correctly predicted not only by jj44b but also by the fpg interaction. The 9/2^+_1 level is much closer to the ground state in the fpg calculation than in the experiment and in the jj44b calculation.
The JUN45 calculation still gives a 9/2^+ ground state while the 7/2^+ level comes close to the ground state; this ordering flips in 79Ge. The 5/2^+_1 level is located only 6 keV higher than in the experiment in the JUN45 calculation. The jj44b result for this level is higher by 181 keV, while the fpg calculation predicts it 161 keV lower than in the experiment. The next experimental level is 5/2^+_2. For this level the JUN45 result is 233 keV higher and appears in the same sequence as in the experiment. In both the jj44b and fpg calculations this level is located very high compared to the experiment, and the ordering of the levels differs from the experiment. The experimental 3/2^+_1 level at 619 keV is predicted by the jj44b calculation 31 keV higher than in the experiment; JUN45 predicts it 155 keV higher, while in fpg it is 215 keV lower than in the experiment. The experimental 7/2^+_2 level at 761 keV is reproduced by JUN45 with a difference of only 3 keV; the jj44b and fpg predictions for this level are 971 and 1079 keV, respectively. In all the calculations the 5/2^+_3, 5/2^+_4 and 5/2^+_5 levels are much higher than in the experiment.
79 Ge
For this isotope only a few positive- and negative-parity levels are available. As is seen from Fig. 6, all three interactions give the correct ground state for this nucleus.

The location of the 5/2^-_1 and 5/2^-_2 levels with respect to the experimental ones is good in JUN45; in jj44b these levels are lower than in JUN45. The fpg interaction predicts the 5/2^-_1 level similarly to JUN45, but the 3/2^-_1 level lies above it by only 3 keV, while in jj44b these levels are lower than in JUN45 and fpg.

The JUN45 and fpg calculations predict the reverse sequence of the measured (7/2^+) and (9/2^+) levels; they are lower in the JUN45 calculation and higher in the fpg calculation as compared to the experiment.

The sequence of these levels is the same as in the experiment in the jj44b calculation; however, they are located very close to each other. For the 9/2^+_1 level the JUN45 and jj44b interactions predict a ν(g^7_{9/2}) configuration with probability 34% and 35%, respectively. For the 1/2^-_1 level the JUN45 and jj44b interactions predict a ν(p^{-1}_{1/2}) configuration with probability 27% and 28%, respectively.
81 Ge
Three tentative positive-parity levels and one negative-parity level are available for this isotope. Again, all calculations predict the same parity and spin as in the experiment for the ground state. The other experimental positive-parity levels are predicted higher than in the experiment by all the calculations. The only measured negative-parity level, 1/2^-, is low in JUN45 and jj44b. In the fpg calculation the negative-parity levels are very high (> 4 MeV); thus we have not included them in Fig. 7.
The structure of the wave functions is given for some levels of the 73−81Ge isotopes, together with the yrast levels, in Table 3. In the table, the sum of the contributions (intensities) from particle partitions having a contribution greater than 1% is denoted by S, the maximum contribution from a single partition by M, and the total number of partitions contributing to S by N. The deviation of S from 100% is due to high configuration mixing; an increase in N is also a signature of larger configuration mixing. The extent of configuration mixing is high in 73Ge and low in 79Ge; indeed, the configuration mixing is large in the isotopes far from the closed shell. For the 9/2^+ ground state of 81Ge, all three interactions predict a ν(g^9_{9/2}) configuration, with probability 42% (JUN45), 36% (jj44b) and 80% (fpg).
|
2012-10-22T01:22:00.000Z
|
2012-10-22T00:00:00.000
|
{
"year": 2012,
"sha1": "9c171c43df9f013fd406b10fbf6a0847804d0791",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1210.5790",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9c171c43df9f013fd406b10fbf6a0847804d0791",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
220294365
|
pes2o/s2orc
|
v3-fos-license
|
Development of intelligent healthcare system based on ambulatory blood pressure measuring device
Currently, the market for blood pressure monitors, both domestic and overseas, is gradually increasing due to the growing number of hypertension patients resulting from the aging population. In addition, the necessity of developing systems and devices for the healthcare of hypertension patients is also increasing. Determining whether measurements are normal is possible for the management of hypertension patients, but it is essential to incorporate preventive healthcare as well; thus, further studies on deep learning-based prediction technology using previous data are needed. This paper proposes the development of an intelligent healthcare management system that can help to manage the health of hypertensive patients. The system includes a wrist-worn ambulatory blood pressure monitoring device and can analyze the normality of the measured blood pressures. The performance evaluation results of the proposed system verified the reliability of data acquisition compared with existing equipment, as well as the efficiency of the intelligent healthcare system.
Introduction
In the USA, the world's largest medical device market, the blood pressure monitor market is continuously growing as the need for efficient and accurate blood pressure measurements increases with the growing number of hypertension patients caused by the aging population. The market size amounted to US$ 973 million, an increase of 10.8% over the previous year [1][2][3].
The occurrence of hypertension is expected to increase due to limited exercise activities and the aging of baby boomers [4]. In addition, the implementation of programs for the prevention and management of adult diseases at the government level will also increase the demand for blood pressure monitors. The primary consumers will be hypertension patients, baby boomers, and medical service institutions (i.e., hospitals, government facilities, nursing homes, etc.). The introduction of devices that are easy to use, minimize discomfort, and measure blood pressure efficiently and accurately stimulates consumer demand.
The reliability of a professional blood pressure monitor used in medical institutions is essential; thus, higher measurement accuracy has a significant influence on purchase decisions. With the development of technology, simple and small designs allow easy, portable use at home, creating new demand; in fact, many automatic blood pressure monitors are widely available, resulting in a large-scale blood pressure monitor market. Ambulatory blood pressure monitoring, as opposed to fragmentary blood pressure measurement, has more advantages in managing hypertension since it provides information on daily blood pressure fluctuation patterns; however, such devices remain limited in the market due to their inconvenience and low portability. This minimal market for ambulatory blood pressure monitoring devices leads to meager investment in device development. Ambulatory blood pressure monitoring devices have not yet become universal and are mostly used for research purposes. If such devices are developed to provide higher efficiency and accuracy, minimize discomfort and increase convenience, the paradigm of blood pressure management can be changed from the existing concept of hypertension care. This can also reduce costs and improve the efficiency of national healthcare.
Ambulatory blood pressure monitoring, which measures blood pressure several times during normal activities, has several advantages over fractional blood pressure measurement [5]. First, it can distinguish white coat hypertension, in which an average person is diagnosed with hypertension only in the clinic. Ambulatory blood pressure monitoring is useful in terms of cost-effectiveness because it can reduce unnecessary medical expenses by identifying white coat hypertension. Second, the amount of time the patient was exposed to high blood pressure can be determined. Much clinical evidence has been reported that target-organ damage is more severe when the 24-hour mean blood pressure is higher than the blood pressure measured in the hospital, and the longer the blood pressure stays above a constant level (i.e., the blood pressure load). Third, the fluctuation pattern of blood pressure can be determined, which can be used to treat hypertension more efficiently. It is known that daytime blood pressure alone cannot predict the degree of the nighttime blood pressure drop, and non-dipping is observed in about 25% of hypertensive patients. Non-dipper hypertension has been reported to lead to more complications, such as left ventricular hypertrophy and asymptomatic stroke, than dipper hypertension. Fourth, information about blood pressure variability can be provided: daily changes in blood pressure can be more harmful than a high average blood pressure, and target-organ damage is critical in people with sharp fluctuations in blood pressure regardless of the average blood pressure. Lastly, the evaluation of new blood pressure drugs can benefit. When evaluating new hypertension drugs, the effectiveness can be assessed in fewer patients through new indicators such as T/P ratios and effects on fluctuation patterns. Information on blood pressure variability is also provided to further characterize the new hypertension drugs.
Active use of ambulatory blood pressure monitoring should be undertaken, but the development of devices to perform such a function has been very slow. The current and widely used blood pressure meters measure blood pressure by winding a cuff around the upper arm and inflating it; such devices are too large and cumbersome to carry and cause significant pain whenever the pressure is applied. In addition, the pain can interfere with the patient's sleep when blood pressure is measured while the patient is asleep, specifically if the measurement is required every hour. Thereby, the reliability of blood pressure measured during sleep is lowered, as is patient compliance. Moreover, if the patient inevitably loosens the blood pressure cuff to take a shower or change clothes and then wears it again, the reliability of the measured blood pressure can be lowered since it may not be worn again in an accurate manner. Thus, an ambulatory blood pressure monitoring device with high portability, minimal patient discomfort, and high reliability is essential. This paper aims to develop an efficient healthcare management system and design a device that can efficiently measure and manage ambulatory blood pressure.
Related works
The reliability of professional blood pressure monitors used in medical institutions is considered the most important factor influencing the acquisition decision. With the development of technology, small, simple, and convenient devices for home use are regularly being released, which creates new demand. Various automatic blood pressure monitors have been distributed, creating a large-scale market for blood pressure monitors [6][7][8][9][10]. Ambulatory blood pressure monitoring devices can provide more advantages in that they measure not only fragmentary blood pressures but also support the management of high blood pressure through daily blood pressure fluctuation patterns. However, these devices are inconvenient to use and not very portable, so their market is very limited, leading to very low investment in their development. The previously developed ambulatory blood pressure monitoring devices are listed in Table 1.
Many manufacturers are evaluating various measurement and analysis methods for the purpose of commercialization. However, most of the measuring devices are inconvenient to use, and even the good-quality devices are insufficient in terms of utilizing the various information obtained by measuring ambulatory blood pressure. In addition, only a few devices support classification into the TOAST classes, which is the basis for the actual blood pressure analysis. Thus, this paper uses data from more than 300 patients to learn and provide clinical evidence for the TOAST groups, uses the extracted TOAST group criteria as a guide, and incorporates an evaluation process that distinguishes normal from abnormal blood pressure measurements, in order to develop an intelligent healthcare system based on an ambulatory blood pressure monitoring device.
Proposed intelligent healthcare management system
The proposed intelligent healthcare management system based on an ambulatory blood pressure monitor is organized as shown in Fig. 1.
The hardware represents the actual manufactured ambulatory blood pressure monitoring device, which the user can manage through interfacing with the PC-based analysis program using the Bluetooth module. Many intelligent analytic monitoring systems are being developed and embedded into wearable devices linked to back-end systems [11,12].
The proposed ambulatory blood pressure monitoring system requires the following conditions. First, the measuring device should be miniaturized so that it can be worn easily (e.g., on a wrist) and should be made of a material such as silicone for comfortable wearing. The system also includes an algorithm that converts the gathered sensor signal into a measured blood pressure. In addition, blood pressure measurement is performed by both a control circuit module and an active pressure generator. The control circuit module controls the pressure so that it matches the intravascular pressure by keeping the pressure sensor in contact with the blood pressure measuring area, while the active pressure generator changes the pressure continuously for blood pressure measurement. The portable ambulatory blood pressure monitoring device also contains a communication module that delivers the measured information and a battery module that powers its components. The device is ultimately easy to use and provides reliable and continuous blood pressure monitoring without pain. Moreover, in order to minimize the noise caused by external shocks, a fixed silicone structure that closely adheres the pressure sensor to the wrist was developed, and the noise caused by motion artifacts is compensated for through a feedback algorithm based on motor control.
Ultimately, a prototype was developed for a tonometer-type wrist-worn ambulatory blood pressure monitoring device that is highly portable, easy to wear, and improves the reliability of the measurement signal. Furthermore, this paper proposes the development of an intelligent healthcare system that can continuously and conveniently manage the ambulatory blood pressure of users by mounting a TOAST program that analyzes the measured blood pressures.
Development of a prototype for blood pressure monitoring device
The ambulatory blood pressure monitor is designed as a wrist-band type for high portability and convenient wearing. The ring-shaped band is worn on the user's wrist, with an air pocket inside the annular band that applies pressure.
The ambulatory blood pressure of the user is measured by a pressure sensor that detects the air pressure inside the air pocket and measures the blood pressure waveform. The detailed components of the proposed ambulatory blood pressure monitoring device are depicted in Fig. 2.
The measurement of blood pressure can be represented as a five-step process. First, the blood pressure gauge is placed near the user while keeping the first-part air pocket of the band in contact with the user's wrist. Next, air pressure is moved from the second-part air pocket to the first-part air pocket, increasing the internal pressure of the first-part air pocket attached to the user's wrist and enabling the pressure sensor to measure the bio-signal. Then, the measured bio-signal waveforms are analyzed to derive an optimal value of the internal pressure of the first-part air pocket. After that, the internal pressure of the first-part air pocket is set to this optimum value. Finally, the blood pressure of the user is measured as a continuous waveform. A pressure sensor driving circuit was developed to implement the wrist (radial artery) tonometric method using a tonometric pressure measurement module. The Arduino Nano v3.0 model was used as the main driver module for sensor data processing and wireless communication control. The Arduino Nano uses the ATmega328 as its main processor and supports eight analog input and output ports and 22 digital input and output ports. The pressure sensor driving circuit takes into account the ADC resolution and includes a motor driver module for close contact between the pressure sensor and the blood vessel. A syringe is connected to a driver motor through a rack gear in order to inject air pressure into, or discharge it from, the cuff (i.e., the band air pocket). With the cuff positioned over the radial artery on the user's wrist, the band air pocket is inflated to expand the cuff, compressing the radial artery, and the pressure change on the cuff caused by the beating of the radial artery is measured.
Development of an automatic feedback algorithm for motor control
An error in the analysis increases as the cuff pressure changes due to the patient's movement; thus, an algorithm for estimating the optimal cuff pressure was developed to automatically control the driver motor so that the cuff pressure remains constant. The data measured by the pressure sensor are separated into pulse waves and cuff pressure using a digital filter, and the optimum cuff pressure is estimated by calculating the signal-to-noise ratio of the pulse waves as a function of the cuff pressure. The driver motor rotates to inject more air when the measured cuff pressure is lower than the estimated value; otherwise, it rotates in reverse to release cuff pressure.
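A schematic version of one iteration of this feedback loop is sketched below; the helper functions, the window length and the tolerance value are illustrative assumptions, not the firmware running on the prototype:

```python
import numpy as np

def moving_average(signal, window=50):
    """Crude low-pass filter: extracts the slowly varying cuff-pressure component."""
    return np.convolve(signal, np.ones(window) / window, mode="same")

def feedback_step(raw_signal, estimate_optimal, rotate_motor, tolerance=1.0):
    """One iteration of the cuff-pressure feedback loop (illustrative sketch).

    raw_signal:       recent pressure-sensor samples (1D array)
    estimate_optimal: callable returning the optimal cuff pressure estimated
                      from the SNR of the pulse waves (assumed helper)
    rotate_motor:     callable; positive argument injects air, negative releases it
    """
    cuff_pressure = moving_average(raw_signal)[-1]   # current cuff pressure
    error = estimate_optimal() - cuff_pressure
    if abs(error) > tolerance:
        # Inject air if below the optimum, release air if above it,
        # so that the cuff pressure stays approximately constant.
        rotate_motor(+1 if error > 0 else -1)
```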
Development of wireless data transmission
The Arduino Nano board is used as the main controller module in the prototype, while the HC-06 Bluetooth module handles wireless data transmission. The HC-06 Bluetooth module has two separate pins used for receiving the signals measured by the pressure sensor and for their wireless transmission to the PC. Both low-pass and high-pass digital filters were applied to the measured bio-signal to distinguish the cuff pressure from the pulse waves. In addition, the systolic, diastolic, and mean blood pressure, and other cardiovascular parameters in the pulse wave, were calculated. Moreover, the radial artery beats were extracted during the filtering process to improve the accuracy of the blood pressure estimation algorithm using the wrist-worn pulse wave data, as shown in Fig. 3. A slope filtering mask and regression analysis were applied by comparing the extracted beats with the finger pressure waveform to increase the measurement accuracy.
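The separation of the raw sensor signal into cuff pressure and pulse waves can be sketched with standard SciPy filters; the sampling rate and cut-off frequency below are illustrative assumptions, not the values used in the prototype:

```python
from scipy.signal import butter, filtfilt

def separate_components(raw, fs=100.0, cutoff=0.5):
    """Split the raw sensor signal into cuff pressure and pulse waves.

    raw:    1D array of pressure-sensor samples
    fs:     sampling rate in Hz (assumed value)
    cutoff: cut-off frequency in Hz separating the slow cuff pressure
            from the faster pulse waves (assumed value)
    """
    # Low-pass keeps the slowly varying cuff pressure; the complementary
    # high-pass keeps the pulse waves riding on top of it.
    b_lo, a_lo = butter(4, cutoff / (fs / 2), btype="low")
    b_hi, a_hi = butter(4, cutoff / (fs / 2), btype="high")
    cuff_pressure = filtfilt(b_lo, a_lo, raw)
    pulse_waves = filtfilt(b_hi, a_hi, raw)
    return cuff_pressure, pulse_waves
```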
Development of prototype for ambulatory blood pressure monitoring device

A 5 × 3 cm cuff was designed using a wrist band fixing mechanism so that it can be easily worn on the user's wrist. The tube attached to the cuff is connected to a syringe, which acts as a chamber, and to the pressure sensor. The cuff is inflated and contracted through the syringe pump activated by the driver motor, and the pressure sensor measures the changes in pressure. The battery has a 3.7 V, 750 mAh output and can be charged using an 8-pin USB connector (battery model: MP952238P). The prototype's main processor, the Arduino Nano, is connected to the PC through a USB port. The final prototype is shown below. Figure 4 depicts the actual prototype of the ambulatory blood pressure monitoring device worn by a user. The driver motor and gearbox are attached to the push rod on the base of the syringe. The driver motor and the pressure sensor are connected to the Arduino Nano main processor. In addition, the syringe and pressure sensor are connected to the wristband by a Teflon tube to control the cuff and measure the bio-signal from the radial artery.
Time series ambulatory blood pressure data analysis model
Previous studies on bio-signal data analysis have used many machine learning-based clustering algorithms, and there are continuing efforts to apply deep-learning technologies [13][14][15][16][17][18][19][20]. This paper proposes a feature extraction and similarity analysis method that is lightweight and accurate enough for real-time processing. The aim is to develop an ambulatory blood pressure data analysis model based on supervised and unsupervised learning, and to design a pattern analysis model for 24-hour time series ambulatory blood pressure data. Existing machine learning models appropriate to the data type were applied in order to analyze the ambulatory blood pressure data pattern of the user. The normal and abnormal blood pressure classification based on TOAST was defined through logistic regression analysis, and only preliminary work on the accuracy analysis of the existing five levels (LAA, SVO, CE, UD, OD) was performed. Although the five levels allow a detailed determination of blood pressure status, the normality or abnormality under TOAST is the primary factor for determining the user's blood pressure status; thus, this paper determines only the normality or abnormality of blood pressure.
Moreover, a data mining model that combines logistic regression analysis and dynamic time warping (DTW) was developed, suitable for calculating the similarity used to classify measurements into classes 0 and 1 for the TOAST classification. The ambulatory blood pressure data analysis process for TOAST classification is depicted in Fig. 5.
For TOAST classification, the blood pressure values (contraction and relaxation, i.e. systolic and diastolic) based on the RR-interval serve as input, and their patterns are analyzed through regression analysis. The similarity between the analyzed pattern and the reference value is calculated, and the weights of the measured contraction and relaxation values are applied to develop a model that assigns the final TOAST class.
Ambulatory blood pressure data analysis modeling
Logistic regression was applied to extract criterion curves for the pattern analysis of classes 0 and 1. This method predicts the probability of occurrence of an event using a linear combination of independent variables. It is similar to regression and discriminant analysis in that a linear combination of independent variables describes the dependent variable, which here is a nominal, binary measure.
There are n independent variables (continuous or non-continuous) and one dependent variable (a discrete, binary variable) that takes the values 0 (normal) and 1 (abnormal); the class pattern analysis is carried out using these. The independent variables (X) determine the value of Z, a linear combination of the inputs, which acts as the exponent in the logistic function giving the probability of event occurrence, Prob(Event) = 1/(1 + e^{-Z}). In this way one can find which factors (the independent variables, i.e., the contraction/relaxation values of the ambulatory blood pressure) are risk factors for the disease (the dependent variable, class 0 (normal) or 1 (abnormal)) and how strongly they affect it (odds ratio).
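A minimal sketch of this logistic-regression step with scikit-learn is shown below; the function and variable names are assumptions for illustration, not the actual implementation used in this work:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_toast_model(systolic, diastolic, labels):
    """Fit the class-0/class-1 criterion model on systolic (contraction) and
    diastolic (relaxation) values, with clinician labels 0 = normal, 1 = abnormal."""
    X = np.column_stack([systolic, diastolic])
    model = LogisticRegression()
    model.fit(X, labels)
    return model

def prob_abnormal(model, systolic, diastolic):
    """Probability of an abnormal measurement: Prob = 1 / (1 + exp(-Z))."""
    X = np.column_stack([systolic, diastolic])
    Z = model.decision_function(X)      # linear combination of the inputs
    return 1.0 / (1.0 + np.exp(-Z))     # equals model.predict_proba(X)[:, 1]
```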
Calculation of similarity of TOAST class patterns
The dynamic time warping (DTW) technique [21][22][23][24][25] was used to analyze the similarity with the criterion line of the previously analyzed pattern. DTW moves through the two time series in a direction that minimizes the distance between them, matches them to each other, calculates the cumulative distance from each template, and assigns the class with the minimum distance. W defines the mapping between the elements of the time series X and Y; w_k denotes the kth element of W, and the corresponding (warping) path must satisfy the three conditions of boundary, continuity, and monotonicity. Finding the corresponding path that minimizes the sum over W gives the DTW similarity, which can be expressed by the following equation:

DTW(X, Y) = min_W { sqrt( Σ_{k=1}^{K} w_k ) / K },

where K is used to compensate for corresponding paths having different lengths. The cumulative distance D indicates the final similarity, starting from 0 and following the shortest path in the DTW. In this similarity calculation it is necessary to control spikes (impulse-like outliers) arising from the characteristics of ambulatory blood pressure measurement. When such a singular value occurs in the measurement of the corresponding path, this paper resolves the problem through an exponential smoothing operation in the DTW similarity calculation. The final class value is calculated by applying weights to the final contraction and relaxation values and comparing the similarity between the previously analyzed class learning pattern and the reference value (D1: similarity of the contraction data, D2: similarity of the relaxation data, alpha: weight, g: group).
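For reference, a compact dynamic-programming implementation of the DTW distance and the resulting two-class decision might look as follows; this is a sketch only, and the weighting and exponential smoothing described above are omitted:

```python
import numpy as np

def dtw_distance(x, y):
    """DTW distance between two 1D sequences via dynamic programming."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Boundary, continuity and monotonicity are enforced by allowing
            # only these three moves through the cumulative-distance matrix.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(pattern, ref_class0, ref_class1):
    """Assign the input pattern to the TOAST class whose reference curve
    gives the smaller DTW distance."""
    d0 = dtw_distance(pattern, ref_class0)
    d1 = dtw_distance(pattern, ref_class1)
    return 0 if d0 <= d1 else 1
```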
4 Performance evaluation
Evaluation of measurement accuracy of ambulatory blood pressure monitoring prototype
In order to evaluate the reliability of the prototype for ambulatory blood pressure measurement, a comparison with an existing blood pressure measurement device was performed. The Finometer Model-1 was used as the existing continuous blood pressure reference device; it measures continuous blood pressure through finger cuff pressure. Figure 6 shows the concept for evaluating the reliability of the blood pressure measurement, and Table 3 shows the sample measurements. The bio-signal values of the Finometer Model-1 were measured by a professional clinician and assumed to be accurate. Comparative matching with the blood pressure measurements obtained with the ABPM prototype was performed using these reference values. As a result, an error of ±2.29% was found, showing that the prototype performs similarly to the Finometer Model-1.
Ambulatory blood pressure data analysis model results
The ambulatory blood pressure data analysis model was applied to the blood pressure information collected through the ambulatory blood pressure measurement device to determine abnormalities in the user's blood pressure. First, low-pass and shock filters were used to preprocess the ambulatory blood pressure data and extract the features. Next, pattern analysis was performed through logistic regression based on the initial clinician classification of the training data. Then, the final TOAST class was assigned by measuring the similarity (DTW plus weighting). Figure 7 shows a comparison of the pattern results for the input contraction and relaxation blood pressure data together with the input blood pressure data; the numerical results are plotted to illustrate sample results of the proposed method.
The graph in Fig. 7 shows the division into the upper group (TOAST 0) and the lower group (TOAST 1), where blue represents contraction and red represents relaxation. The green lines indicate newly input patient data, for which the DTW distance to each criterion line is calculated for the similarity analysis. The similarity (distance) value, adjusted by the final weight, determines the class: the input is assigned to the class with the smaller distance. For example, in Fig. 7 the distance of the input signal (green line) to the TOAST 0 reference is 9896, while the distance to the TOAST 1 reference is 30,868; since the distance to TOAST 0 is shorter, the input is classified as TOAST 0. The resulting TOAST 0 assignment can be confirmed by the relative position of the normal curves of the blood pressure contraction and relaxation. As a sample, six TOAST matching results are shown. Table 4 shows the similarity comparison results for 20 samples using the existing classification methods K-nearest neighbor (KNN) [26] and support vector machine (SVM) [27].
The results in Table 4 show the accuracy obtained by classifying the TOAST results (normal/abnormal) with each method for 20 samples (50 cases in total). Distance represents the difference in similarity with respect to the TOAST classification result; that is, the smaller the distance value, the higher the similarity. The distance value lies between 0 and 1: the closer to 0, the higher the accuracy, while values near 0.5 may indicate an incorrect result. Based on these results, the proposed method shows higher accuracy than the comparison methods KNN and SVM.
The KNN method classifies on the basis of simple distance differences, which yields relatively inaccurate results because it is difficult to reflect information on variance and bias. The SVM method is more accurate than KNN, whose results degrade at the cluster boundary when data are mixed, but it is still less accurate than the proposed method. The proposed method does not generate boundary information for an explicit cluster classification; instead, it generates a representative major line for the data distribution and uses the DTW distance to this line to determine the TOAST class. Thus, it can be verified that the proposed method yields relatively high accuracy.
For further verification of the proposed analytical model, the confusion matrix was calculated, and the specificity, sensitivity, and accuracy were derived and compared with those of the existing classification methods. The performance evaluation verified the robustness of the proposed method. The comparison was again performed against KNN and SVM.
These two methods represent different points of view, namely distance between classes and hyperplane calculation, and the results are shown in Table 5 and depicted in Fig. 8.
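For completeness, the reported quantities follow directly from the confusion-matrix counts, as in this generic sketch (not tied to the specific values in Table 5):

```python
def confusion_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # overall fraction correct
    return sensitivity, specificity, accuracy
```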
The confusion-matrix values were calculated over the 50 cases. The existing SVM shows a high dependency on data quality and quantity, owing to problems such as decreasing accuracy as the amount of data increases and a computational complexity that induces high multidimensional access costs. The KNN method produced the worst result, which appears to be because only the information about distance differences is reflected, without additional information such as bias. In contrast, the proposed method yielded relatively high performance. The proposed ambulatory blood pressure analysis model provides a learning-based algorithm that minimizes user intervention. Among the sample results, errors in the similarity calculation for TOAST recognition occurred primarily when the information on the normalization curve was incorrect for the contraction and relaxation data. This is caused by impulse noise in preprocessing and is expected to be resolved by additional filtering.
Conclusion
Ambulatory blood pressure monitoring measures and records blood pressure several times during normal activity and has several advantages over fractional blood pressure measured in the hospital. With the development of technology, small and simple devices for home use have created new demand, and various automatic blood pressure monitors have been sold and distributed, so that the blood pressure monitor market is very wide. Ambulatory blood pressure monitoring devices, which go beyond fragmentary blood pressure measurement, have more advantages in managing hypertension since they can provide information on blood pressure fluctuation patterns, even though existing devices are inconvenient to use and less portable. This paper dealt with the development of an ambulatory blood pressure device and an intelligent healthcare management system that can continuously manage the ambulatory blood pressure of users. A prototype of a tonometric wrist-worn ambulatory blood pressure monitoring device was developed which is highly portable, easy to wear, and capable of improving the reliability of the measured bio-signals. The measured blood pressure values were compared with existing popular equipment in order to verify the reliability of the developed prototype, and the proposed analysis method was found to be more effective through a comparison with the existing classification methods. Finally, the health status of hypertension patients can be evaluated over a limited time. In addition, when there was a lot of impulse noise in the contraction and relaxation data, inaccuracies in the generation of the normalization curve could be identified. This caused a problem with the accuracy of the TOAST class judgment, but we expect it to improve once additional filtering is included in the preprocessing. The filtering can employ a variety of signal improvement filters, such as the median filter [28] or the low-pass filter [29], which eliminate impulse-based noise. In the future, as the concept of preventive healthcare becomes essential, an additional study using deep learning-based prediction technology [30,31] will be carried out.

Funding No fund/grant support relevant to this article was reported.
Data availability
The sensing data used to support the findings of this study are included within the article.
Compliance with ethical standards
Conflict of interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/.
|
2020-07-02T15:46:16.716Z
|
2020-07-02T00:00:00.000
|
{
"year": 2020,
"sha1": "5cda2a2f6cdd8d274262b8f86a39cbad4a062d54",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00521-020-05114-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "5cda2a2f6cdd8d274262b8f86a39cbad4a062d54",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
}
|
252519317
|
pes2o/s2orc
|
v3-fos-license
|
A binary tree approach to template placement for searches for gravitational waves from compact binary mergers
We demonstrate a new geometric method for fast template placement for searches for gravitational waves from the inspiral, merger and ringdown of compact binaries. The method is based on a binary tree decomposition of the template bank parameter space into non-overlapping hypercubes. We use a numerical approximation of the signal overlap metric at the center of each hypercube to estimate the number of templates required to cover the hypercube and determine whether to further split the hypercube. As long as the expected number of templates in a given cube is greater than a given threshold, we split the cube along its longest edge according to the metric. When the expected number of templates in a given hypercube drops below this threshold, the splitting stops and a template is placed at the center of the hypercube. Using this method, we generate aligned-spin template banks covering the mass range suitable for a search of Advanced LIGO data. The aligned-spin bank required ~24 CPU-hours and produced 2 million templates. In general, we find that other methods, namely stochastic placement, produce a more strictly bounded loss in match between waveforms; achieving the same minimal match between waveforms requires about twice as many templates with our proposed algorithm. We note, however, that the average match is higher, which would lead to a higher detection efficiency. Our primary motivation is not to strictly minimize the number of templates with this algorithm, but rather to produce a bank with useful geometric properties in the physical parameter space coordinates. Such properties are useful for population modeling and parameter estimation.
I. INTRODUCTION
Banks of template gravitational-wave signals are central tools in the matched-filter detection of gravitational-wave signals from compact binary coalescence [1][2][3]. The general compact binary gravitational-wave signal depends on at least fifteen parameters: two mass parameters, six spin parameters, distance, time, and five angles defining the binary orientation with respect to the gravitational-wave antenna. The parameter space can be even larger if, for instance, matter or eccentricity effects are included. Since we do not know the source parameters a priori, we must search the data over all possible source parameters.
We are often able to quickly maximize the signal-to-noise ratio (SNR) over a subset of the parameters, either analytically or by efficient numerical techniques. For instance, some parameters enter only into the overall amplitude of the signal (for non-precessing binaries; precessing binaries have a time-dependent inclination, leading to modulation in the waveform phase and amplitude), which is normalized away by the matched-filter definition of SNR. The coalescence time enters into the waveform as a frequency-dependent phase shift which can be searched over efficiently using widely available fast Fourier transform routines. Considering only the dominant (ℓ, |m|) = (2, 2) modes of gravitational-wave signals, the coalescence phase can also be maximized over analytically.
Given the approximations, assumptions and techniques described above, a subset of parameters, λ, the template bank parameters, are generally relevant for template placement. We search over these parameters by laying down a discrete set of points in the parameter space and repeating the matched-filter calculation for each template. The set of points must be chosen as a compromise between optimal SNR recovery and available computational resources. Placing templates finely in the template parameter space leads to high SNR recovery, but can quickly make the search prohibitively expensive. In particular, the number of templates required to cover a D-dimensional parameter space such that no more than a fraction M of the SNR is lost to any potential signal scales as M^{-D/2} [2].
In the case of non-spinning binaries, lattice placement strategies based on an approximate analytic expression for the signal space "distance" between two nearby templates have been shown to be effective for covering the template parameter space [4,5]. To guarantee efficiency of the placement, these methods require that the metric g(λ), which defines the distance between nearby templates, is very nearly constant throughout the parameter space. For waveforms involving spin, in which a metric is either unavailable or varies rapidly throughout the parameter space, stochastic template placement has proven to be effective in covering the parameter space [6][7][8][9][10]. The stochastic placement technique works by randomly selecting a large number of points in parameter space and keeping only those points which fall sufficiently far away from points which have already been accepted into the bank. This technique, while robust, is computationally inefficient, although recent implementations have made significant strides towards optimization [9,11,12].
Geometric techniques have also been applied to generate aligned-spin template banks [13,14]. In Ref. [13], the authors demonstrate a geometric template bank for neutron-star-black-hole binaries. The authors find satisfactory coverage for this parameter space by stacking two two-dimensional lattices, taking advantage of the fact that the parameter space is "thin" in the third dimension. This placement strategy was used in conjunction with ordinary stochastic placement [11] to cover the full compact binary parameter space searched in the recent LIGO-Virgo searches [15,16]. In Ref. [14], the authors consider an interesting extension of this technique which starts with a true three-dimensional lattice and falls back to the stochastic approach when the lattice approach breaks down. In Ref. [12], the authors also consider a hybrid stochastic-geometric technique, similar to the algorithm we propose here; however, the notion of lattice adjacency the authors use is Cartesian, whereas we incorporate the intrinsic geometry of the parameter manifold.
These solutions continue to rely at least partially on stochastic placement methods, which scale poorly with the number of templates. The number of templates required to cover a parameter space at a given minimal match threshold increases dramatically with the bandwidth of the interferometer and the dimension of the target signal space, both of which are ever-increasing in ground-based gravitational wave searches [11,17,18]. Currently used aligned-spin template banks have four template parameters (two masses and two spins) and over 1 million templates at maximal mismatches between 1-3% [19]. Precessional effects add five more parameters (four spin components and the binary inclination at some reference frequency) and an additional order of magnitude in templates [18]. At high mass ratios, sub-dominant modes may also be important for detection, which can only further increase the template bank size. Presently, template bank generation with stochastic methods may be computationally slow, and future, larger banks will require more computing resources to generate as gravitational wave detector sensitivity improves. This can be problematic if banks are generated often.
Here, we demonstrate a new method for template placement based on a binary tree decomposition of the parameter space which is purely geometric, originally explored in [20]. The algorithm relies on a numerical estimation of the parameter space metric and uses this metric to determine how to grow the binary tree. This algorithm requires O(2^n D^2) overlap calculations, where n is the bifurcation number of the parameter space, i.e., how many times a characteristic cell is split, and D is the dimension of the resulting template bank. We demonstrate this method by constructing a bank suitable for Advanced LIGO and Advanced Virgo data analysis.
II. MOTIVATION
Beyond general interest in pursuing novel template placement algorithms, our motivation for pursuing this work is three-fold based on experiences analyzing LIGO and Virgo data during the third observing run. First, in order to apply a population model to gravitational wave detection, it is important to account for template placement [21] in a way that may account for the coordinate volume that a template occupies [22][23][24][25]. The binary tree approach that we have taken guarantees that each template ends up in a hyperrectangle in the physical coordinates making coordinate volume calculations easy. Second, in order to ensure a high availability of service for online compact binary searches we run searches at two different data centers. The goal is to split the parameter space in a way that if one site goes down the other is still efficient at detecting a broad class of binary signals. The binary tree approach allows us to use a bank derived from the "right" and "left" splits separately. Finally, having a bank that is grid-like in physical coordinates is generally useful for template interpolation [26] and rapid parameter estimation [27] problems and we are interested in exploring this as future work.
III. METHODS
Our method, whose implementation we refer to as treebank, relies on having an accurate approximation of the template space metric g(λ), which gives a measure of the "distance" between nearby templates. For our work λ ≡ {t_c, log m_1, log m_2, χ_eff}, where χ_eff ≡ (m_1 a_1z + m_2 a_2z) / (m_1 + m_2) and a is the dimensionless spin [28]. We define the mismatch δ² between two nearby gravitational-wave templates, h(λ) and h(λ + Δλ), according to

δ² = 1 − |⟨h(λ), h(λ + Δλ)⟩| / sqrt(⟨h(λ), h(λ)⟩ ⟨h(λ + Δλ), h(λ + Δλ)⟩),   ⟨a, b⟩ = 4 Re ∫_0^{f_N} ã(f) b̃*(f) / S_n(f) df,   (2)

where the template a or b is taken to be complex valued, containing both the sine and cosine phases, thereby maximizing over phase, and f_N is the Nyquist frequency. δ² can be expressed in terms of a metric tensor g on the template signal manifold as

δ² = g_ij Δλ^i Δλ^j.   (3)

From the metric, we can also compute a local volume element and thereby estimate the number of templates required to fill a given hypercube cell in the binary tree decomposition [2]:

N_C = (1 / V_T) ∫_cell sqrt(det g) dλ,   (4)

where V_T is the volume of a template in mismatch space. We use the definition by Owen [2] for the metric components in terms of the mismatch,

g_ij = −(1/2) ∂²⟨h(λ), h(λ + Δλ)⟩ / ∂Δλ^i ∂Δλ^j, evaluated at Δλ = 0.   (5)

We have implemented two numerical schemes for estimating the metric component values, which we call the iterative and deterministic methods. The iterative method is a standard convergence scheme for numerical differentiation leveraging the Python package numdifftools. The deterministic method uses the definition of the metric components as partial derivatives of the mismatch to compute the preliminary metric γ_μν in a single step.
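As an illustration of the deterministic scheme, the sketch below estimates the match of Eq. (2) and the metric components of Eq. (5) by symmetric finite differences. This is not the treebank implementation: the waveform generator, the PSD array, the frequency resolution and the step sizes are hypothetical placeholders, and λ is treated as a plain NumPy array.

```python
# Minimal sketch (not the authors' code) of estimating the match of Eq. (2) and
# the metric components of Eq. (5) by finite differences. `waveform(lam)` is a
# hypothetical placeholder returning a complex frequency series (sine and cosine
# phases combined) on the same frequency grid as `psd`, up to Nyquist.
import numpy as np

def match(a, b, psd, df):
    """Noise-weighted, normalised overlap of two complex templates,
    maximised over the overall phase via the complex inner product."""
    inner = lambda x, y: 4.0 * df * np.sum(x * np.conj(y) / psd)
    norm = np.sqrt(np.abs(inner(a, a)) * np.abs(inner(b, b)))
    return np.abs(inner(a, b)) / norm

def mismatch(lam, dlam, waveform, psd, df):
    """delta^2 between h(lam) and h(lam + dlam), as in Eq. (2)."""
    return 1.0 - match(waveform(lam), waveform(lam + dlam), psd, df)

def metric(lam, waveform, psd, df, steps):
    """Deterministic estimate of g_ij = -1/2 d^2(match)/dlam_i dlam_j at dlam=0,
    using symmetric finite differences with per-coordinate step sizes."""
    dim = len(lam)
    g = np.zeros((dim, dim))
    h0 = waveform(lam)
    for i in range(dim):
        for j in range(dim):
            acc = 0.0
            for si in (+1, -1):
                for sj in (+1, -1):
                    d = np.zeros(dim)
                    d[i] += si * steps[i]
                    d[j] += sj * steps[j]
                    acc += si * sj * match(h0, waveform(lam + d), psd, df)
            g[i, j] = -0.5 * acc / (4.0 * steps[i] * steps[j])
    return g
```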
Once the preliminary metric has been estimated using either method, we post-process the metric in two steps. First, we minimize γ_μν Δλ^μ Δλ^ν with respect to the time lag between signals, Δλ^0, by projecting out the time component of the metric estimate. This results in the adjusted, spatial metric components

g_ij = γ_ij − γ_0i γ_0j / γ_00,   (6)

where we use the term spatial above to mean non-temporal, as in the familiar 3 + 1 decomposition. Second, we use an eigenvalue decomposition to check for numerical stability and validity of the estimated metric. If a negative eigenvalue is found, which would incorrectly imply a negative spatial signature, we attempt a re-evaluation of the metric with a coarser set of intrinsic parameters λ = Coarse(λ).
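A minimal sketch of these two post-processing steps, under the same assumptions as above (γ is a NumPy array with the time coordinate in index 0), might look as follows.

```python
# Minimal sketch of the post-processing: project out the time coordinate to get
# the spatial metric of Eq. (6), then check positive-definiteness by eigenvalues.
import numpy as np

def project_out_time(gamma):
    """Minimise gamma_{mu nu} dlam^mu dlam^nu over the time lag (index 0):
    g_ij = gamma_ij - gamma_0i gamma_0j / gamma_00."""
    return gamma[1:, 1:] - np.outer(gamma[0, 1:], gamma[0, 1:]) / gamma[0, 0]

def is_valid_spatial_metric(g):
    """A sensible spatial metric must have strictly positive eigenvalues."""
    return bool(np.all(np.linalg.eigvalsh(g) > 0.0))

# If the check fails, the metric would be re-evaluated with a coarser set of
# intrinsic parameters, i.e. lam = Coarse(lam) in the notation of the text.
```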
The template-bank algorithm then works as follows:

1. Initialize a hyper-rectangle bounding the parameter space one wishes to cover, e.g., a bounding box in component masses.

2. Compute the metric g(λ) numerically at the center of the hyper-rectangle. Alternatively, skip this step if the metric is sufficiently constant. We determine this by defining ε ≡ 1 − |g|_{i−2} / |g|_{i−1} and setting a threshold on ε: if the volume element of the previous two iterations (i − 2, i − 1) is sufficiently unchanged, the user may decide to skip this step. Setting ε to 0 forces the metric to be recomputed.

3. From the metric, estimate the number of templates N_C needed to cover this hyper-rectangle via Eq. (4).

4. If N_C is greater than the user-supplied threshold N*_C, compute the side lengths of the hyper-rectangle according to the metric and split the cell along its largest side into two children cells A and B. Call the algorithm recursively on A and B.

5. If N_C < N*_C, place a template at the center of the cell and stop splitting. (Note that a single template is added to the bank even though N_C is an estimate of the number of templates needed to cover a hyper-rectangle and N*_C can be greater than 1; in such a case, N*_C acts as a coarse-graining parameter. We usually set N*_C ≤ 1.)

The splitting stops when all rectangles have N_C < N*_C or, alternatively, if the user specifies a minimum coordinate volume. In Fig. 1, we illustrate the decomposition.
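The recursion in steps 1-5 can be summarised in a short sketch. The code below is an illustrative outline rather than the released treebank implementation: estimate_metric, the template volume V_T and the coordinate bounds are hypothetical placeholders, and the ε-based caching of the metric and the minimum-coordinate-volume stopping rule are omitted for brevity.

```python
# Minimal sketch of the recursive binary tree decomposition (steps 1-5).
# `bounds` is a list of (low, high) pairs defining the hyper-rectangle and
# `estimate_metric(center)` is a hypothetical callable returning g at a point.
import numpy as np

def n_templates(g, bounds, V_T):
    """Estimate of Eq. (4) for a small cell: coordinate volume times sqrt(det g),
    divided by the template volume in mismatch space."""
    coord_volume = np.prod([hi - lo for lo, hi in bounds])
    return coord_volume * np.sqrt(np.linalg.det(g)) / V_T

def split_cell(bounds, estimate_metric, V_T, N_C_star, templates):
    center = np.array([(lo + hi) / 2.0 for lo, hi in bounds])
    g = estimate_metric(center)                      # step 2
    if n_templates(g, bounds, V_T) <= N_C_star:      # steps 3 and 5
        templates.append(center)
        return
    # Step 4: split along the side that is longest as measured by the metric.
    proper_lengths = [np.sqrt(g[i, i]) * (hi - lo) for i, (lo, hi) in enumerate(bounds)]
    i = int(np.argmax(proper_lengths))
    lo, hi = bounds[i]
    mid = (lo + hi) / 2.0
    split_cell(bounds[:i] + [(lo, mid)] + bounds[i + 1:], estimate_metric, V_T, N_C_star, templates)
    split_cell(bounds[:i] + [(mid, hi)] + bounds[i + 1:], estimate_metric, V_T, N_C_star, templates)

# Usage: templates = []; split_cell(bounds, estimate_metric, V_T, 1.0, templates)
```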
Other than waveform generation, the most computationally costly step of this process is the evaluation of the mismatch between two templates, Eq. (2), which is needed to evaluate the metric coefficients, Eq. (5). In the case where the template parameter space is bifurcated n times, there will be at most 2^n hyper-rectangles. If ε = 0, then the metric will be evaluated for every cell and each hyper-rectangle will contain one template, which means that a well-balanced tree will contain a bank of N_B = 2^n templates. Thus, the number of match calculations per waveform in the template bank is

(number of match evaluations) / (number of templates N_B) = O(D²).

The above gives a worst-case scenario. Under normal circumstances ε > 0 and the metric is found to be sufficiently constant that it does not need to be evaluated at the final tree depth. This leads to typical scaling where there are fewer match calculations than there are templates in the bank, N_B. By definition, the matches between waveforms used in the metric calculation are extremely high, approaching 1 minus floating point epsilon. Therefore, the integrand of the match as a function of frequency is extremely smooth, and we evaluate waveforms and matches with extremely coarse frequency spacing, typically 1 Hz.
IV. RESULTS
FIG. 2. Example template bank. This is a projection of the three dimensional bank in coordinates {log m1, log m2, χ eff } into the {log m1, log m2} plane. The templates that appear to be outside of the region of interest have hyperrectangles that overlap with the region. Note that the naive template density is directly related to local volume element magnitude, and varies accordingly.
We used the algorithm described in the previous section to generate an Advanced LIGO template bank using projected O4 sensitivity estimates. We used a chirp mass range from 0.87 to 174 M_⊙, a minimum secondary mass of 0.98 M_⊙, a maximum mass ratio of 20 and a maximum total mass of 400 M_⊙. We specified an effective spin range, χ_eff, from −0.99 to 0.99, but limited the spin of objects below 3 M_⊙ to be less than 0.05. We allowed the template low frequency to go down to 10 Hz, but specified a maximum duration of 128 s. We requested a maximum mismatch of 3%, but also set the minimum coordinate volume (Δ log m_1 × Δ log m_2 × Δχ) to be greater than 0.0001. This resulted in 2,083,547 templates, as shown in Fig. 2.
We validated the template bank by injecting 16,000 simulated signals in the parameter space. We find that the bank achieves the requested 97% match better than 99% of the time.
V. CONCLUSION
We have described here a new method for fast template bank placement, and shown that the method works in 3 dimensions relevant to dominant-mode aligned-spin template searches. The treebank method is computationally efficient and we expect this method will scale to higher dimensional template placement, such as precessing or sub-dominant mode templates, but we leave this for future work. It should also have applications in producing high density banks for use in rapid parameter estimation [27].
A tarball containing the source code necessary to reproduce the results in this paper can be found at https://pypi.org/project/gwsci-manifold.
FIG. 3. Template bank validation. The bank achieves the requested 97% match 99% of the time and a better than 98% match 90% of the time. The large-sample evaluation method used here is likely to be conservative since it does not check the match against all templates in the bank; the true performance may be better than this. The color bar indicates the mismatch of the simulated signal and the nearest template. The injected signals were created using uniform distributions of the individual parameters {log m1, log m2, χ_eff}. The bank simulation maximizes the match only over nearby templates, because the maximum match cannot decrease by including more templates. This trades accuracy for computational speed, but preserves the acceptance criteria.
The Effect of Different Dietary Levels of Thyme Essential Oil on Serum Biochemical Indices in Mahua Broiler Chickens
A 42-day trial was undertaken to study the effect of different dietary levels of thyme essential oil (TEO) on serum biochemical indices of broiler chickens. Seven hundred and sixty-eight selected one-day-old Mahua broilers were divided into 8 dietary treatment groups with an addition of 0.00, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30 and 0.35 mg/kg of thyme essential oil respectively, with 4 replicate pens per treatment group (24 birds each). The feeding programme included a starter diet until day 21 and a finisher diet from day 22 until day 42. The results suggested that TEO markedly increased serum total proteins and globulins on day 21, significantly decreased alanine aminotransferase activity (P≤0.05), the albumin-to-globulin ratio, and serum urea on day 21 and 42, and in particular it improved high-density lipoproteins on day 21 and 42 (P≤0.05). In conclusion, TEO can promote protein metabolism, enhance lipolysis and strengthen the immune function. Furthermore, after a comprehensive analysis, the ideal range of the essential oil addition to the broiler feed proved to be between 0.1 and 0.25 mg/kg.
Introduction
In recent years, because of the increasing use and abuse of antibiotics over the long term, the normal animal microflora has been altered, giving rise to drug-resistant strains and drug residues, polluting the environment and seriously threatening human health. Phytogenic feed additives, however, are characterised by natural origin, multiple functions, minimal side effects, no resistance and no residues, and are therefore expected to become new substitutes for antibiotics (Jin, 2010). Thyme belongs to the Labiatae family and is a small, low-growing perennial shrub. It is native to the Mediterranean coast. In China it grows mainly north of the Yangtze River, in a region with wide arid and semi-arid deserts, grasslands, river banks and hilly areas. Thyme can warm the spleen and the stomach, relieve pain and cough, and reduce sputum. It is also diuretic and capable of stimulating the menstrual flow. It can promote digestion and perspiration, and stop vomiting. Besides many other beneficial effects, it is also antiseptic, and can be used as an insect killer or to treat dysentery, beriberi and other diseases (Quan, 2008; Fan and Li, 2001).
Current national and international studies on thyme pharmacology and efficacy are mainly focused on its ability to preserve food, its microbial antagonism in vitro, its anti-oxidant and anti-aging effect on mammals, etc. However, research on its application in animals in vivo and its mechanisms of action is scanty. The main aims of our study are to investigate the effect of different dietary levels of thyme essential oil (TEO) on serum biochemical indices in Mahua broiler chickens, to explore in more depth its mechanisms of action in animals, to provide part of the theoretical basis for its use as a new growth-promoting agent and antibiotic substitute in feed.
Experimental design
Our experiments were conducted according to protocols approved by the Shihezi University Animal Care and Use Committee and are based on a single-factor experimental design. We chose 768 healthy one-day-old Mahua broilers with a similar weight and divided them into 8 dietary treatment groups with 4 replicate pens per treatment group (24 birds each). Half were males and half were females. The feeding programme included a starter diet until day 21 and a finisher diet from day 22 until day 42. In control group 1, TEO was not added to the basal feed, whereas, in treatment groups 2 through 8, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30 and 0.35 mg/kg of TEO were added to the basal feed.
Experimental feed and feeding management
For the purpose of these experiments we formulated, based on the basal feed, a starter diet fed from day 1 through day 21 and a finisher diet fed from day 22 through day 42, following the Chinese standard (1986) and National Research Council (1994) nutritional requirements. The experimental feed consisted of the basal feed (Table 1) and fresh pure TEO, containing carvacrol, thymol, borneol, coriander oleyl alcohol, caryophyllene, nopinene, etc.
The broilers were kept free-ranging on netting only. Every replicate feeding net area was 100 cm × 150 cm. We administered vaccines against various diseases, such as Marek's disease, Newcastle disease, infectious bursal disease and infectious bronchitis, by standard intranasal administration and intramuscular injection. We controlled light, temperature and humidity, regularly removed chicken faeces, and allowed the broilers to eat and drink water ad libitum. Blood was sampled on day 21 and day 42: we collected 3-5 ml of blood by heart puncture from each bird. Subsequently, after letting the blood settle for 30 minutes, we extracted the serum by centrifugation (3,000 rpm for 15 min). We kept the serum at −30 °C to measure alanine aminotransferase (ALT), aspartate transaminase (AST), alkaline phosphatase (ALP), total proteins (TP), the albumin/globulin (A/G) ratio, triglycerides (TG), total cholesterol (TC), low-density lipoproteins (LDL), high-density lipoproteins (HDL), blood urea nitrogen (BUN), uric acid (UA) and glucose (GLU). We used the Olympus AU2700 (Olympus) automatic biochemical analyser to assess serum biochemical parameters.
Statistical analysis
We initially processed the data with Excel and analysed them statistically by single-factor analysis of variance in SPSS 18.0. The results of the experiment are reported as averages ± standard deviation. When significant differences emerged, we conducted a Duncan's multiple range test for multiple comparisons, with P ≤ 0.05 as the cut-off for statistical significance.

As shown in Table 3, ALT activity in the serum of Mahua broilers in the treatment groups was significantly decreased (P ≤ 0.05) on day 21 and 42 compared with the control group. The AST activity in the serum of Mahua broilers tended to increase on day 21 and 42, but no statistically significant difference was identified (P > 0.05). The ALP activity in the serum of Mahua broilers tended to increase on day 21, but no statistically significant difference was identified (P > 0.05) on day 42.
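To make the statistical procedure described above concrete, the sketch below runs a one-way (single-factor) ANOVA across the eight dietary groups using SciPy. It is illustrative only: the serum values are hypothetical placeholders rather than the study data, and Duncan's multiple range test, used in the paper as the post-hoc comparison, is not part of SciPy and would need a dedicated post-hoc package or a manual implementation.

```python
# Minimal sketch of a single-factor ANOVA across 8 treatment groups (4 replicates
# each), as in the statistical analysis described above. Values are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_means = (25, 23, 22, 22, 21, 22, 23, 24)   # hypothetical serum ALT (U/L)
groups = [rng.normal(loc=mu, scale=2.0, size=4) for mu in group_means]

f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, P = {p_value:.3f}")    # differences flagged when P <= 0.05
```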
As reported in Table 4, total proteins and globulins in the serum of Mahua broilers were significantly improved (P ≤ 0.05) on day 21 in the treatment groups fed with an additional 0.10, 0.15, 0.20, 0.25 and 0.35 mg/kg of TEO compared with the control group, but no statistically significant difference was identified on day 42 (P > 0.05). In the treatment groups fed with an additional 0.10 and 0.15 mg/kg of TEO, HDL on day 42 showed a statistically significant improvement (P ≤ 0.05) of 12.08 and 13.91%, respectively.
As reported in Table 6, BUN in the serum of Mahua broilers showed a statistically significant decrease (P ≤ 0.05) on day 21 in all treatment groups compared with the control group. Serum BUN of Mahua broilers showed a statistically significant decrease (P ≤ 0.05) on day 42 in the treatment groups fed with an additional 0.05, 0.10, 0.15, 0.30 and 0.35 mg/kg of TEO. No statistically significant effect (P > 0.05) of TEO was recorded on UA in the serum of Mahua broilers. Serum GLU in Mahua broilers showed a statistically significant decrease (P ≤ 0.05) on day 21 in the treatment groups fed with an additional 0.10, 0.15 and 0.20 mg/kg of TEO compared with the control group. No statistically significant effect (P > 0.05) of TEO was recorded on GLU in the serum of Mahua broilers on day 42.
Discussion
Serum biochemical indices directly express the status of metabolism, nutrition and health of the animals. Therefore, these indices can be used to assess the effect of TEO on the growth conditions, feed metabolism, immunity and underlying mechanisms of broilers.
The effect of thyme essential oil on serum enzyme activity in Mahua broilers
Alanine aminotransferase is mainly distributed in the liver, followed by skeletal muscles, kidney, myocardial tissue, etc. Alanine aminotransferase can catalyse the transamination between alanine and α-oxoglutarate, and between oxaloacetic acid and glutamic acid, and can thus influence the intermediate metabolism of glucose and amino acids. Aspartate transaminase is mainly distributed in myocardial cells, liver tissue, etc. Serum ALT and AST are very low under normal conditions. When liver damage or an increase in the permeability of liver cells is present, ALT and AST are released into the blood and their activity increases. Therefore, their serum levels are sensitive indicators of liver cell damage (Zhang, 2011). This experiment shows that TEO can significantly decrease the ALT level, but it has no significant effect on AST. Therefore, the addition of TEO will not damage liver cells, probably because it is derived from a natural herbaceous plant and is therefore beneficial to animals. Sertel et al. (2011) found that the cytotoxic activity of TEO can influence the transcription of the genes that control the cell cycle, cell death and cancer. The main regulation pathways of TEO are the interferon signalling pathway, the N-oligosaccharide synthesis pathway and the ERK5 signalling pathway. Alkaline phosphatase can catalyse the hydrolysis of some compounds, such as phosphomonoesters and nucleoside phosphates, and its activity is an important indicator of bone metabolism (Xu et al., 2013). Our experiment demonstrated that, as a result of the addition of TEO, ALP levels in the serum of Mahua broilers tended to show a statistically significant increase (P ≤ 0.05) on day 21. Consequently, it can be inferred that TEO can improve bone metabolism in broilers and therefore their growth. However, Sharma and Gangwar (1986) suggested that, as broilers grow and age, the ALP activity tends to decrease, as shown in our experiment; in particular, ALP in the serum of Mahua broilers in the treatment groups tended to decrease significantly (P ≤ 0.05) on day 42 compared with the control group.
The effect of thyme essential oil on total serum proteins in Mahua broilers
Serum total proteins consist of albumin and globulin. Their content can effectively reflect the protein metabolism, feed condition and growth of animals. γ-globulins are responsible for humoral immunity. The A/G ratio can reflect spleen function and, in part, the immune and physiological status of animals (Xie et al., 2010).
In the treatment groups, in the earlier stages total proteins and globulins in the serum of broilers increased significantly (P ≤ 0.05), and at all stages the A/G ratio significantly decreased (P ≤ 0.05). This shows that TEO can improve protein metabolism and immune function in broilers. This effect can probably be explained by two reasons. Firstly, TEO can promote protein deposition in broilers in vivo, keep the colloid osmotic pressure stable, improve the transportation of metabolic products in vivo, improve the feed conversion rate and promote growth. Mathlouthi et al. (2012), Weber et al. (2012), García et al. (2007) and other authors proved that TEO can improve the feed conversion rate and promote growth. Amad et al. (2011) added 0, 150, 750 and 1500 mg of plant extracts and thyme and anise essential oils to the basal ration to feed male Cobb broilers and found a statistically significant increase (P ≤ 0.05) in the efficiency of use of crude proteins. Hernández et al. (2004) added 5000 µg/mL of an essential oil blend consisting of sage, thyme and rosemary from the Labiatae family and found a significant increase in the efficiency of nutrient use in the intestine and in the apparent digestibility of dry matter and crude proteins on day 22-42. Secondly, thyme can have antimicrobial, anti-inflammatory and antiviral actions; at the same time, it can also improve resistance to disease and promote the immune response. Research has demonstrated that the major component of TEO is carvacrol, which can lower the expression of COX-2 mRNA and proteins induced by lipopolysaccharides. Because it is a regulator of PPAR-α and PPAR-γ and an inhibitor of COX-2, it has various effects, including an anti-inflammatory action (Hotta et al., 2010). Burt et al. (2007) suggested that the components of TEO can induce bacterial HSP60 and inhibit the synthesis of flagellin in Escherichia coli O157:H7. Ausra et al. (2006) showed that p-cymene, thymol and carvacrol in the water-soluble thyme extract and in TEO significantly suppress the proliferation of aspergillus, ten bacteria and eight yeast strains. Zu et al. (2010) investigated ten essential oils and suggested that thyme, cinnamon and rose essential oils display the best antibacterial activity against Propionibacterium acnes, with inhibition zone diameters of 40, 33.5 and 16.5 mm, respectively. Schnitzler et al. (2007) indicated that plant essential oils like TEO have a more powerful antiviral activity against the acyclovir-sensitive strain KOS and acyclovir-resistant herpes simplex virus. The maximum concentration of TEO with no cytotoxicity is 0.005%.
The effect of thyme essential oil on blood fat content in the serum of Mahua broilers

Cholesterol is an important part of animal cell membranes and of the nerve myelin sheath, and is a precursor of the synthesis of biliary acids, steroid hormones, adrenal hormones and vitamin D3; it therefore has some important physiological functions. Low-density lipoproteins carry endogenous cholesterol to every cell in the body and promote cholesterol deposition. High-density lipoproteins transport cholesterol from the surrounding tissue back to the liver by reverse transportation, where it is metabolised and decomposed into bile acid. Total cholesterol in blood reflects the degree of lipid absorption and metabolism. Low- and high-density lipoproteins reflect the state of lipid transportation in vivo.
In our experiment, HDL of Mahua broilers was significantly (P ≤ 0.05) improved on day 42 in the groups fed with an addition of TEO, thus showing that thyme can enhance lipid catabolism and reduce fat deposition. This is probably due to an anti-oxidant effect of TEO and its ability to regulate and control the levels of some hormones in vivo, inhibit the activities of some lipases in vivo, promote protein deposition in the body and decrease fat deposition. Rana and Soni (2008) fed daily rations with 0.5% of thyme extract to rats with oxidative stress induced by N-nitrosamine and found that it can influence the levels of superoxide dismutases, peroxidase and catalase and improve their anti-oxidant power, thus contributing to oxidative stress prevention. Berkan et al. (2010) studied 18 wrestling athletes, who drank tea with thyme leaves three times a day. After 35 days, athletes in the treatment group showed a significant improvement of total anti-oxidant capacity, a very significant decrease of malondialdehyde and a significant decrease of the thiol group (RSH) compared with the control group.
The effect of thyme essential oil on blood urea nitrogen, uric acid and glucose in the serum of Mahua broilers

The serum urea level is an index which reflects the status of protein metabolism, renal function and nutrition of the body. Uric acid is mainly generated by protein and nucleic acid degradation and is the main form of ammonia excretion in chickens. Serum uric acid directly reflects the level of protein catabolism in animals. Glucose is a substance that in animals directly oxidises to provide energy; about seventy percent of the energy in the body comes from the decomposition of glucose. In our experiment, the BUN level in the serum of Mahua broilers significantly decreased (P ≤ 0.05) in the treatment groups, thus showing that thyme improves protein synthesis in broilers, decreases the speed of protein decomposition and increases the efficiency of nitrogen use. This is probably due to the ability of thyme to promote the growth of animals, improve feed conversion and therefore increase the absorption and utilisation of feed proteins and increase protein deposits.
Conclusions
Thyme essential oil can promote protein metabolism in Mahua broilers, enhance lipolysis and improve the immune status. On the basis of the comprehensive analysis of the effect of TEO on serum biochemical indices of Mahua broilers, the optimal range of TEO addition proved to be between 0.10 and 0.25 mg/kg. Thyme essential oil can be considered a potential growth enhancer for broilers, because it can meet the demand of producers for increased broiler performance and of consumers for more environmentally friendly farming conditions. Thyme essential oil is also cost-effective when one takes into account the costs of antibiotics and other commercially available products on the market (Alcicek et al., 2003).
Sampling uncertainty of UK design flood estimation
The UK standard for estimating flood frequencies is outlined by the flood estimation handbook (FEH) and associated updates. Estimates inevitably come with uncertainty due to sampling error as well as model and measurement error. Using resampling approaches adapted to the FEH methods, this paper quantifies the sampling uncertainty for single site, pooled (ungauged), enhanced single site (gauged pooling) and across catchment types. This study builds upon previous progress regarding easily applicable quantifications of FEH-based uncertainty estimation. Where these previous studies have provided simple analytical expressions for quantifying uncertainty for single site and ungauged design flow estimates, this study provides an easy-to-use method for quantifying uncertainty for enhanced single site estimates.
INTRODUCTION
The Flood Estimation Handbook (FEH) volume 3 (Robson & Reed 1999) and the more recent update (Environment Agency 2008) outline statistical methods to estimate flood frequency in the UK. The 2008 update will henceforth be referred to as FEH08, and FEH will be used where methods apply to both. The FEH method uses annual maximum flow data (AMAX) with the application of regional frequency analysis (RFA) based on L-moments (Hosking & Wallis 1997). RFA entails the grouping of catchments that are (hydrologically) similar to the site of interest, thereby providing more data spatially where they are lacking temporally. For the grouping, which is known as 'pooling' in the FEH, the region of influence approach is used, as detailed by Burn (1990). RFA assumes that sites in the pooling group have the same AMAX distribution except for a scaling factor, known as the index flood. Therefore, if extreme flow estimates are undertaken on the pooled and standardised AMAX, the estimates can be unstandardised for the site of interest if the index flood is known for that site. In the FEH, the index flood is the median annual maximum flow (QMED), and for sites with no data QMED is estimated using a multiple regression equation. L-moment ratios, L-CV and L-SKEW (Hosking & Wallis 1997), are calculated for each site in a pooling group and then weighted averages of these are used as parameters for estimating a growth curve, which is multiplied by the index flood for the final quantile estimates. For example, the FEH recommended, as a default, the generalised logistic distribution (GLO) for AMAX-based estimation of extremes, and the associated GLO growth curve estimator is

z_T = 1 + (β/k)[1 − (T − 1)^(−k)],   (1)

where β and k are derived from the pooled L-CV and L-SKEW and T is the return period. The FEH08 procedure differentiates between gauged and ungauged pooling, assigning greater weight to at-site data in the gauged case. The weights are based on the sample size and similarity with the subject catchment. The weights for the ungauged case are relatively uniform when compared with the weights for the gauged case, which vary considerably due to the subject site's sample size. Methods to quantify the uncertainty for estimates of the index flood were provided within the original FEH, but no formal framework for estimating uncertainty was developed at the time for the pooled estimates of longer return periods. RFA using the index flood method can be, and has been, implemented in many ways. For example, the UK, USA, Ireland and Australia all have similar 'official' RFA methods (Environment Agency 2008; Murphy et al. 2014; Ball et al. 2019; England et al. 2019, respectively), but they all differ in significant ways when considering the quantification of uncertainty. As Rosbjerg & Madsen (1995) found when applying an analytical approach to five different RFA procedures, the results of uncertainty analysis differ significantly in line with the assumptions of the method, particularly when violation of the assumptions is not accounted for. Different approaches are also necessary within a set of RFA procedures for the single site (SS), ungauged (UG) and gauged cases (FEH08 gauged analysis will be denoted ESS). For the FEH statistical method specifically, Burn (2003) appears to be the first published study to quantify uncertainty. A balanced resampling approach (Reed 1999) was used for the single site and gauged cases, and vector bootstrapping (GREHYS 1996) was used to preserve the spatial correlation structure of the data in the pooling group (to maintain intersite dependence if it is present). Uncertainty for the ungauged case was not attempted.
Kjeldsen & Jones (2006) advanced their previous analytical solution (using Taylor approximations) for the variance of the GLO-based SS quantiles (Kjeldsen & Jones 2004) to the RFA FEH case. Intersite correlation was considered but the possibility of heterogeneity was not. Kjeldsen (2021) noted that the methods outlined in the 2004 paper (and by extension the 2006 paper) are somewhat complex and are therefore unlikely to be used in practice. As noted by Kjeldsen (2015), there has been a growing call for uncertainty to be at the forefront of flood risk management, which has recently culminated in an associated book (Bevan & Hall 2014) funded by the UK Flood Risk Management Research Consortium. If quantification of uncertainty is to be applied by flood frequency analysis practitioners, easily applicable methods are necessary. To this end, Kjeldsen (2015) developed a simple and practical approach for quantifying uncertainty for FEH08 UG estimates and provided the factorial standard error (fse) (Robson & Reed 1999) for a range of return periods (2, 5, 30 and 100). The method used to derive fse was further applied by Dixon et al. (2017), and quadratic equations were developed for continuous estimation of fse across return periods up to 2000 years. Kjeldsen (2021) has added to the simplicity of quantifying uncertainty with an easy-to-use equation for calculating the variance of a SS GLO estimated design flow. To date, therefore, there is a simple method for estimating FEH uncertainty in the SS and UG cases. No simple ESS method has been developed and, as noted by Dixon et al. (2017), further work is required for uncertainty in the enhanced single site (ESS) method to be evaluated. Hosking & Wallis (1997) advocate a Monte Carlo simulation approach to quantifying uncertainty for RFA, arguing that the analytical approaches assume a single distribution (and we cannot be sure that the 'correct' model was used), while the simulation approach they recommend would apply the best fitting distribution for each site in the pooling group. However, distributions are still specified, as they are with a bootstrap approach, although the building of the sampling distribution in the bootstrap case is based on resampling the data with replacement, as opposed to simulating with the assumption of a distribution. The final estimates that make up the sampling distribution are still based on a distributional assumption, but the approach is seen as being the least parametric, with the fewest assumptions of the three approaches (analytical, simulation and bootstrapping). Therefore, like Burn (2003), this paper applies a bootstrap approach; in this case, however, it is bespoke to quantifying the uncertainty in the FEH08 methods for SS, UG and ESS. Furthermore, a simple expression, building on Kjeldsen (2021), is derived for estimating uncertainty for the ESS case. The aims of the paper are as follows: (1) describe bootstrap methods for estimating fse for the FEH08 SS, UG and ESS cases; (2) use the methods of point one to apply and compare the uncertainty across these cases at sites considered suitable for pooling (National River Flow Archive); (3) compare the uncertainty across different catchment types; and (4) develop a simple expression for quantifying uncertainty for the ESS estimates.
The paper has four main parts: firstly, a summary of the data used in the study; secondly, a section describing the fse and the bootstrap methods applied to estimate it for the SS, UG and ESS cases. This method section also details the approach to compare fse across catchment types and the analytical expression for calculating fse in the ESS case. Thirdly, a summary of all the results is provided, and lastly, some concluding remarks.
STUDY AREA AND DATA
The annual maximum gauged flow data used for this study is from the National River Flow Archive (NRFA) peak flow dataset version 9. The NRFA collates, quality controls, and archives hydrometric data across the UK, including networks operated by the main UK Measuring Authorities; the Environment Agency (England), Natural Resources Wales, the Scottish Environment Protection Agency and for Northern Ireland, the Department for Infrastructure -Rivers. The locations of the 545 gauging stations used in the study are shown in Figure 1. They are all the sites in the NRFA that are considered suitable for pooling. Figure 2 provides a histogram of the AMAX sample sizes (record lengths in years) with some accompanying statistics. Table 1 provides a summary of the frequency curve parameters across all the sites used in the study.
The analysis undertaken for this study, using the NRFA data, was undertaken using Base R (R Core Team 2019) and the UKFE R package (Hammond 2020).
QUANTIFYING UNCERTAINTY OF DESIGN FLOWS
To quantify the uncertainty of a flood frequency estimate (design flow), it is common to construct confidence intervals using the standard deviation of the design flow sampling distribution (i.e. the standard error). Assuming the sampling distribution is normal, the 68 and 95% confidence intervals for the design flow can be calculated by subtracting and adding one and two standard deviations, respectively. The fse is the exponential of the standard error on the log scale and can be used to estimate the multiplicative error (the ratio between the estimated and true value). Confidence intervals are proportional to the estimated value, and the fse can be used to calculate them as

68% CI = (q / fse, q × fse),   95% CI = (q / fse², q × fse²),   (2)

where q is the design flow estimate. The fse is useful as a standardised measure of uncertainty across the different FEH cases (SS, UG and ESS), and what follows details the methods to calculate fse for the three FEH estimation cases.
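For illustration, Equation (2) can be evaluated directly; the function below is a minimal sketch, and with q = 700 m³/s and fse = 1.08 it reproduces the 95% interval of roughly 600-816 m³/s quoted in the worked example below.

```python
# Minimal sketch of Equation (2): confidence intervals from a design flow q and
# its factorial standard error (fse).
def fse_intervals(q, fse):
    return {"68%": (q / fse, q * fse), "95%": (q / fse**2, q * fse**2)}

print(fse_intervals(700.0, 1.08))   # 95% interval is approximately (600, 816) m^3/s
```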
BOOTSTRAPPING TO APPROXIMATE THE SAMPLING DISTRIBUTION
Bootstrapping, introduced by Efron (1979), and often applied in flood frequency analysis since (Zucchini & Adamson 1989;Faulkner & Jones 1999;Burn 2003;Hall et al. 2004;Burn et al. 2007;Bomers et al. 2019), is a class of resampling in which multiple additional samples of the same size are derived by drawing randomly with replacement from an original sample and estimating the statistic on each (or the model parameters), thus approximating the sampling distribution. Bootstrapping is used throughout this study as a non-parametric method for estimating uncertainty from an approximated sampling distribution. The fse can be calculated with bootstrapping as follows (example for the single site).
1. Randomly select from the sample, with replacement, N times (where N is the sample size).
2. Repeat step 1 M times to create M bootstrapped samples.
3. Calculate the design flow for each of the M samples to approximate the sampling distribution of the design flow.
4. Create log-transformed residuals by subtracting the log mean of the sampling distribution from each of the log-transformed design flows.
5. The fse is then the exponent of the standard deviation of the log-transformed residuals (Equation (3)).
fse = exp( sqrt( Σ_{i=1}^{n} (ln Q_T,i − mean(ln Q_T))² / (n − 1) ) ),   (3)

where n is the number of bootstrapped samples, mean(ln Q_T) is the mean of the log-transformed sampling distribution and ln Q_T,i is the ith log-transformed design flow estimate within the sampling distribution. An example of approximating the sampling distribution and fse, using the bootstrap approach, for the 100-year flow at NRFA site 39001 (Kingston upon Thames) is provided in Figure 3. The 100-year estimates were derived using the GLO with parameters estimated using L-moments from the observed annual maximum sample (136 years). In this case, the fse is 1.08 and, given that the mean of the sampling distribution is 700 m³/s, Equation (2) provides 95% intervals of 600-816 m³/s. Faulkner & Jones (1999) and Burn (2003) applied a balanced resampling approach where, in place of steps 1 and 2 above, the AMAX series is repeated M times, permuted and split into M samples. The balanced resampling ensures that each value in the AMAX series appears an equal number of times in the union of resampled datasets. For this study, balanced resampling was compared with random resampling by calculating the single site QMED fse, applying both methods to each of the 545 AMAX series twice (providing four distributions of fse). A Kruskal-Wallis test found no significant difference between the four fse samples (p-value = 0.86). To maintain intersite correlation within the pooled group, Burn (2003) also applied vector resampling, whereby each year across the AMAX series in a pooling group is resampled together. Intersite dependence within a pooling group increases the uncertainty in comparison to the same pooling group with no dependence. This is because there is less information when deriving the L-moment ratios. Conversely, calculations of uncertainty would appear to decrease. To see why, we can take an extreme example where every site in a pooling group is perfectly correlated. The L-CV and L-SKEW would be the same for each site (there would be less variance), but the sample size would remain the same. The standard error is a function of variance and sample size and is decreased by a reduction in variance or a larger sample size. Therefore, a pooling group with perfectly correlated sites provides no more confidence in our growth curve estimation than one of the sites alone, but an estimation of uncertainty would appear to decrease in comparison. Maintaining intersite dependence within the bootstrapping procedure underestimates the uncertainty for groups with dependence, as does non-vectorised bootstrapping. Fundamentally, intersite dependence undermines the benefits of RFA, hence Hosking & Wallis (1997, p. 8) list independence at different sites as an assumption of index-flood-based RFA. Thankfully, no bias is caused by a violation of this independence assumption, and no two AMAX samples within a pooling group will be perfectly correlated. Therefore, the addition of a further site (assumed to have the same scaled distribution as the subject site) to a pooling group will always provide useful information. However, it would be sensible to consider highly dependent catchments when forming the pooling group, because replacing a catchment with high dependence with an independent site will reduce the uncertainty. In this study, random resampling has been applied and the estimation of fse assumes intersite independence.
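A minimal sketch of steps 1-5 is given below. It is illustrative rather than the study code (which used Base R and the UKFE package): the AMAX series is hypothetical, QMED (the median annual maximum) is used as the design flow so that no GLO fit is required, and the sample standard deviation (ddof = 1) is assumed in Equation (3).

```python
# Minimal sketch of the single-site bootstrap fse (steps 1-5 / Equation (3)).
import numpy as np

def bootstrap_fse(amax, statistic=np.median, M=500, seed=1):
    rng = np.random.default_rng(seed)
    n = len(amax)
    estimates = np.array([statistic(rng.choice(amax, size=n, replace=True))
                          for _ in range(M)])                       # steps 1-3
    log_resid = np.log(estimates) - np.mean(np.log(estimates))      # step 4
    return float(np.exp(np.std(log_resid, ddof=1)))                 # step 5

# Hypothetical annual maximum flow series (m^3/s).
amax = np.array([120.0, 95.0, 210.0, 160.0, 130.0, 180.0, 90.0, 140.0,
                 175.0, 155.0, 110.0, 200.0, 125.0, 145.0, 165.0])
print(bootstrap_fse(amax))
```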
THE METHOD FOR UNGAUGED CATCHMENTS
The general approach described by Kjeldsen (2015) for calculating fse (Equation (4)) has been applied here for ungauged QMED estimates, the only difference being that the variance subtracted to account for the single site error in Equation (4) was calculated by bootstrapping rather than Monte Carlo simulation. Therefore, within Equation (4) for this study, m is the number of catchments, Q̂_T is the regression (ungauged) QMED estimate, Q_T is the SS observed QMED, and var{ε²} is the variance of the at-site estimate, which was calculated by bootstrapping the AMAX samples 500 times.
The sampling distribution for an ungauged QMED estimate can be approximated by assuming a normally distributed sampling distribution on the log scale, defined by the log-transformed QMED estimate as the mean and the log-transformed fse as the standard deviation. The exponential of this distribution provides the sampling distribution (see Equation (5)) and will provide the same confidence intervals as derived by Equation (2):

x ~ exp[N(ln(QMED), ln(fse_QMED))].   (5)

For example, for catchment 58006 (chosen at random), the ungauged QMED estimate from the QMED regression equation, together with its fse, defines this sampling distribution, and confidence intervals follow from Equation (2). Considering the sampling distribution of Equation (5), confidence intervals can also be derived as in Equation (6):

CI = exp[ln(QMED) ± 2 ln(fse_QMED)].   (6)

This approach to approximating the ungauged QMED estimate sampling distribution is used as part of the process for calculating fse for UG pooling analysis. For calculating the fse for a UG pooled estimate, the following steps were applied (where N equals 500):

1. Bootstrap each site of the pooling group individually to create N new samples of the same size.
2. Create N new pooling groups from the bootstrapped samples of each site.
3. Undertake a weighted (based on pooling group weightings) random selection of a single site from each of the N pooling groups and calculate the growth factor for each.
4. Sample from the QMED sampling distribution (Equation (5)) N times.
5. Multiply the results of step 4 by the results of step 3 to derive N estimates, which approximate the sampling distribution.
6. Apply Equation (3) to the results of step 5 to derive the fse.
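As an illustration of Equations (5) and (6), which feed step 4 of the procedure above, the sketch below draws from the lognormal sampling distribution of an ungauged QMED estimate and reads off 95% intervals. The QMED and fse values are hypothetical placeholders, and z = 2 is assumed for the 95% interval, consistent with Equation (2).

```python
# Minimal sketch of Equations (5) and (6) for an ungauged QMED estimate.
import numpy as np

def ungauged_qmed_samples(qmed, fse, N=500, seed=2):
    """Draw N values from exp[N(ln(QMED), ln(fse))], as in Equation (5)."""
    rng = np.random.default_rng(seed)
    return np.exp(rng.normal(np.log(qmed), np.log(fse), size=N))

def ungauged_qmed_interval(qmed, fse, z=2.0):
    """Approximate confidence interval, as in Equation (6)."""
    return (np.exp(np.log(qmed) - z * np.log(fse)),
            np.exp(np.log(qmed) + z * np.log(fse)))

samples = ungauged_qmed_samples(55.0, 1.43)    # hypothetical QMED and fse
print(ungauged_qmed_interval(55.0, 1.43))
```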
Step 3 is of note because it is used rather than taking the weighted average growth curve for each pooling group. If this was done, due to the more homogenous weighting in the UG case (because there is no at-site data) there would be less variance in the growth curve estimate between the bootstrapped UG pooling groups than in the ESS case. We have more confidence when there is at-site data, hence the individual growth curves make up the sampling distribution in this method as opposed to the sampling distribution of the weighted mean growth curve. It is considered that this approach takes a better account of the wide range of possible growth curves that could be considered the 'true growth curve' (see Figure 4) in the UG case.
For the ungauged pooled estimates (of the 545 catchments detailed above, assumed ungauged), the default FEH08 pooling groups were used and, if the site was urban, an urban adjustment was applied to the ungauged QMED estimate and growth curve (Wallingford HydroSolutions 2016). Where a donor or two donors were applied, the closest rural sites (Bayliss et al. 2006) were used. The single donor method was that of Environment Agency (2008) and the two-donor method was that of Kjeldsen (2019). The GLO distribution was used for estimating the growth curve (Equation (1)) with the L-moments method and UG weighted L-CV and L-SKEW (for more detail, see Environment Agency 2008).
METHOD FOR GAUGED CATCHMENTS
The approach for the ESS case is more straightforward and the following steps were applied (where N equals 500).
1. Bootstrap each site of the pooling group individually to create N new samples of the same size.
2. Create N new pooling groups from the bootstrapped samples of each site.
3. Estimate N growth curves, one for each pooling group, using the ESS weighting method.
4. Bootstrap the at-site sample N times and calculate the median for each sample to approximate the gauged QMED sampling distribution.
5. Sample from the QMED sampling distribution N times.
6. Multiply the results of step 5 by the results of step 3 to derive N design flow estimates.
7. Apply Equation (3) to the results of step 6 to derive the fse.
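Steps 5-7 amount to an element-wise combination of the two bootstrapped sampling distributions; a minimal sketch is given below, with the growth-factor and QMED sample arrays assumed to come from steps 1-4.

```python
# Minimal sketch of steps 5-7 of the ESS procedure: combine growth-factor and
# QMED sampling distributions and evaluate the fse via Equation (3).
import numpy as np

def ess_fse(growth_factors, qmed_samples):
    design_flows = np.asarray(growth_factors) * np.asarray(qmed_samples)  # step 6
    log_resid = np.log(design_flows) - np.mean(np.log(design_flows))
    return float(np.exp(np.std(log_resid, ddof=1)))                       # step 7
```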
For the pooled ESS estimates (of the 545 catchments detailed above), the default FEH08 pooling groups were used. If the gauged catchment was urban, it was included in the pooling group and deurbanised before an urban adjustment was made to the final growth curve (Wallingford HydroSolutions 2016). The GLO distribution was used for estimating the growth curve (Equation (1)) with the L-moments method and ESS weighted L-CV and L-SKEW (for more detail, see Environment Agency 2008). Kjeldsen (2021) showed that the variance of the T-year event, as derived from a GLO distribution using the FEH procedure, can be approximated as:
AN EASY-TO-USE EQUATION FOR ESS VARIANCE
where N is the sample size, β is a GLO growth curve parameter derived as a function of L-CV and L-SKEW (Robson & Reed 1999), y l ¼ ln(T À 1) and T is the return period. The constants α 0 , α 1 , α 2 and α 3 were calibrated by simulating variance for 10,000 samples of size 50, for a range of return periods and differing values of L-SKEW. The first L-moment was held at 1, the L-CV at 0.2, and the simulations were undertaken using the GLO distribution for a range of L-SKEWs (between À0.45 and 0.45). A similar approach was attempted here by binning the results of the ESS variance by the L-SKEW of the 545 sites used in the study. The variance was bootstrapped in the same way as the fse (detailed in the 'method for gauged catchments'), but the variance was calculated on the bootstrapped sampling distribution instead of the fse. Equation (7) was fitted, minimising the sum of squared residuals (between the square roots of bootstrapped and modelled variance), to each set of results representing a binned value of L-SKEW. The variance estimates from the resulting equation were compared with the SS variance estimates using the Kjeldsen (2021) equation to determine whether there is the suitable reduction across sites and return periods.
COMPARISON OF RESULTS ACROSS CATCHMENT TYPES
A comparison of uncertainty across small catchments (<25 km²) (Faulkner et al. 2012), permeable catchments (BFIHOST > 0.65) (Faulkner & Barber 2009) and urban catchments (URBEXT2000 > 0.03) (Bayliss et al. 2006) was undertaken to determine whether there are significant differences between them and catchments which are larger, primarily rural, and not considered permeable. To ensure that the sample size did not influence the comparison, the following steps were taken to approximate the sampling distribution of the median fse for comparison (using catchment size as an example):

1. Split the gauges into small and large catchments.
2. List the sample sizes for the small catchments.
3. Find, for each sample size in step 2, the large catchments with the same sample size.
4. Create a sample of large catchments, the same size as the small catchments sample (N), by randomly sampling from the results of step 3. List the fses from the sample of large catchments.
5. Repeat step 4 500 times and derive a median fse for each of the 500 samples (providing the sampling distribution for the large catchments).
6. Resample, with replacement, N fses from the small catchment fses 500 times and calculate the median for each of the 500 samples (providing the sampling distribution for the small catchments).
7. Compare the two distributions.
These steps were undertaken to compare the UG, SS and ESS 100-year fse across the different catchment types.
RESULTS AND DISCUSSION
Results for ungauged uncertainty

Table 2 provides the resulting fse for ungauged QMED estimates and the mean fse for the UG pooled estimates of longer return periods.
Using the approach of Dixon et al. (2017), three quadratics were calibrated, with the results of Table 2, to derive fse continuously across return periods for the cases of 0, 1 and 2 donors (Equations (8)-(10)):

fse_0 = 1.4665 − 0.0135y + 0.0096y²   (8)
fse_1 = 1.427 − 0.0134y + 0.0098y²   (9)
fse_2 = 1.4149 − 0.0163y + 0.0102y²   (10)

where y is the Gumbel reduced variate −ln(−ln(1 − 1/T)) and T is the return period. The fse results for longer return periods are significantly larger than those reported by Dixon et al. (2017). This is due to the differing methods to derive fse for return periods greater than two. Given that the method in this study specifically considers the uncertainty within each pooling group and did not use single site estimates as a proxy for the 'true' design flow, it is recommended that Equations (8)-(10) are used for estimating fse for ungauged pooling analysis (return periods 2 to 1,000) for donor options 0, 1 and 2.

Results for gauged uncertainty

Table 3 provides six-number summaries of the ESS fse estimated across the 545 sites suitable for pooling for a range of return periods (T). Figure 5 provides a comparison between the mean single site fse and the mean ESS fse across a range of return periods. As can be seen in Figure 5, the benefits of ESS analysis over SS increase significantly after the 20-year return period. That is on average; sites with very few years of data would benefit at shorter return periods. Figure 6 provides a boxplot comparison of fse across the 545 sites for the two-donor UG case, the SS case and the ESS case. The mean 100-year fse for SS, ESS and UG with 0, 1 and 2 donors are 1.16, 1.12, 1.61, 1.57 and 1.55, respectively.
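Returning to the ungauged case, Equations (8)-(10) above can be evaluated directly. The sketch below is illustrative only; for the 100-year flow with two donors it returns approximately 1.56, broadly consistent with the mean two-donor 100-year value reported above.

```python
# Minimal sketch evaluating the calibrated quadratics of Equations (8)-(10):
# ungauged pooled fse as a function of return period T and the number of donors.
import numpy as np

COEFFS = {0: (1.4665, -0.0135, 0.0096),
          1: (1.4270, -0.0134, 0.0098),
          2: (1.4149, -0.0163, 0.0102)}

def ungauged_fse(T, donors=2):
    y = -np.log(-np.log(1.0 - 1.0 / T))   # Gumbel reduced variate
    a, b, c = COEFFS[donors]
    return a + b * y + c * y**2

print(ungauged_fse(100, donors=2))   # approximately 1.56
```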
Result and example of use for an easy-to-use equation for ESS variance
The binning of ESS variance results by L-SKEW values, calibrating parameters on each set, did not provide significantly better results than calibrating on all values, primarily because the location and L-CV also varied for the ESS variance (as opposed to the approach by Kjeldsen (2021)). For comparison, the calibration was also applied using L-CV in place of the GLO β parameter, because L-CV has significant weighting within the ESS procedure and is highly correlated with β. For both, the pooled and single site inputs for calibration were compared (i.e. pooled L-CV/β or single site L-CV/β), and single site values provided significantly better results. L-CV provided the better fit when compared with β, and Equation (11) is the resulting estimate of the design flow variance for the ESS case. A quadratic provided better results than the third-order polynomial of Equation (7).
The inputs to Equation (11) are estimated from the single site case (QMED and L-CV). The ESS bootstrapped variance and the modelled variance (Equation (11)) were compared with the SS variance equation provided by Kjeldsen (2021) across the 545 gauging stations considered. Figure 7 provides a summary of the results. It was found that sites with higher variance for modelled and bootstrapped ESS than for SS tended to have negative L-SKEW. Figure 8 provides a scatter plot of the square root of the modelled and bootstrapped variance (i.e. the standard error), for which the R² value is 0.9. Unfortunately, the results show some heteroscedasticity, which suggests that the estimate of ESS standard error is more varied for larger flows, although the proportional error is not correlated with the magnitude of the flow estimate (R² = 1.38 × 10⁻⁵). The results plotted in Figure 8 are across all return periods modelled (5, 10, 20, 50, 100, 200, 500 and 1,000).
Gauging station 58006 (chosen at random), on the River Mellte at Pontneddfechan, is used to provide an example of use. The gauge has 49 years of AMAX data from the NRFA, the QMED is 87.4 m³/s, and the L-CV is 0.18. The 100-year ESS estimate, applying the default pooling group and the GLO distribution, is 191 m³/s. The variance of the design flow estimate is then estimated from Equation (11) using these inputs. As a comparison, the single site estimate of variance (Kjeldsen 2021) for the same AMAX (the L-SKEW is 0.1429, and β is 0.18133) is 933. ESS bootstrapped intervals were derived 30 times and the mean upper and lower 95% intervals were 156 and 217 m³/s. When using Equation (11), more scrutiny should be applied where the AMAX statistics fall outside the range of those in Table 1. Although the author considers Equation (11) to be readily applicable, given the dispersion of the estimates as flows increase, it is acknowledged that the model could be improved. Consideration of weights in the pooling groups and the individual variance of estimates from pooling group members may be fruitful in this regard. Such investigation may also provide a variance estimator for individual ungauged pooled design flows as a function of pooled parameter estimates.

Results of uncertainty across catchment type

There are a few results that may be surprising, such as the observed QMED uncertainty in non-permeable catchments being greater than that of permeable catchments (Figure 12, left plot). This is probably due to the lack of variance around the 2-year flow, where many permeable catchments see a relatively uniform distribution except for a few outliers. For example, at gauging station 42010 on the River Itchen at Highbridge & Allbrook Total, the baseflow index is 0.96; the AMAX sample is shown in Figure 14. The median can be relatively stable in such catchments. Conversely, the estimation of QMED using the QMEDcd equation in permeable catchments has a greater variance in the proportional error (Figure 13, left plot). This is probably because the calibration for the QMED equation was undertaken on catchments that were mostly non-permeable.

The increased uncertainty in rural catchments when compared with urban catchments in the ESS case (Figure 11, right plot) appears to be due to the fse of the observed QMED in rural catchments being greater (Figure 12, right plot). Why this is the case is not clear. It may be due to control structures, flood management schemes and abstractions for urban areas that reduce the variance in flow even at the 2-year level.
CONCLUSION
Bootstrap methods, bespoke to the FEH08 pooling procedures, have been detailed and applied to the SS, UG and gauged pooling (ESS) cases, to derive fses for design flow estimates. The resulting fse estimates have been compared across the FEH08 cases (SS, UG and ESS) and across catchment types for all sites considered suitable for pooling (while accounting for sample size). Building on the work of previous studies, it was established that there is a need and a trend for providing easy-to-use equations for estimating uncertainty for design flow estimates. Given the complexity of the FEH procedures, such equations have been a long time coming since the FEH of 1999. From the initial work of Kjeldsen (2015), Dixon et al. (2017) established quadratic expressions for fse estimates in the case of UG pooling groups. Here, updated versions have been established applying an approach that specifically considered the uncertainty associated with each pooling group. Kjeldsen (2021) established an easy-to-use equation for the SS case, assuming a GLO distribution. This SS variance equation was adapted here for use with design flow estimates in the case of gauged pooling groups (ESS). For the quantification of uncertainty, there is now an analytical and easy to apply set of equations for single site, ungauged and gauged FEH-based design flow estimation. Quantification of uncertainty, and importantly, simple to apply quantification of uncertainty, for estimated design flows, will enable flood risk management authorities and engineers to make better decisions when considering flood risk management infrastructure.
Adolescent Bullying Involvement and Psychosocial Aspects of Family and School Life: A Cross-Sectional Study from Guangdong Province in China
Background School bullying is an emerging problem in China. The present study aimed to measure the prevalence of bullying behaviors among Chinese adolescents and to examine the association of bullying and being bullied with family factors, school factors and indicators of psychosocial adjustment. Methods A cross-sectional study was conducted. A total of 8,342 middle school students were surveyed in four cities in the Guangdong Province. Self-reports on bullying involvement and information regarding family factors, school factors and psychosocial adjustment were collected. Descriptive statistics and multi-level logistic regression analysis were used to evaluate the prevalence of school bullying and explore potentially influential factors. Results Of the total sample, 20.83% (1,738) reported being involved in bullying behaviors. Of the respondents, 18.99% were victims of bullying, 8.60% were bullies and 6.74% were both bullied and bullied others (bully-victims). Factors that were determined to be correlated with bullying behaviors included grade, parental caring, consideration of suicide, running away from home, time spent online per day and being in a physical fight. Conclusion Bullying was determined to be prevalent among Chinese adolescents. Given the concurrent psychosocial adjustment, family and school factors associated with bullying, as well as the potential long-term negative outcomes for these youth, this issue merits serious attention, both for future research and preventive intervention.
Introduction
Since Olweus published the book ''Aggression in the Schools'' in 1993, there has been a growing interest in the area of school bullying. The book stated that ''a student is being bullied or victimized when he or she is exposed repeatedly and over time to negative action on the part of one or more students'' [1] and that bullying was characterized by an imbalance of power, aggressive behaviors and repetition over time. Data from the recent large-scale Health Behavior in School-aged Children survey (HBSC) conducted among 40 countries suggested that the prevalence of bullying (bullying others, being bullied, and being both a bully and a victim) may range from 8.6% to 45.2% among boys, with a median of 23.4%, and 4.8% to 35.8% among girls, with a median of 15.8% [2]. Another cross-national study, the Global School-based Student Health Survey (GSHS), carried out among middle school students in 19 low- or middle-income countries, showed that the prevalence of bullying ranged from 7.8% in Tajikistan to 60.9% in Zambia [3].
Adolescence is a period of immense behavioral, psychological and social changes and challenges [4]. Previous research has indicated that both bullies and victims have an increased rate of submissive and withdrawing behavior. Victims have shown more peer relational difficulties than participants uninvolved in bullying [5], and they were more likely to have behavioral problems such as substance use, weapon carrying, and even school shootings [5,6]. There is also increasing evidence suggesting that exposure to violent behavior during childhood can affect individuals into their adulthood and that bullying involvement can act as a precursor to both physical and psychological problems [7]. In Bond's two-year cohort study, a history of victimization among school-aged students was a strong predictor for the onset of self-reported symptoms of anxiety or depression. Being victimized has a significant impact on future emotional well-being, especially for girls [6].
Given the long-term consequences of bullying, there is an urgent need to address this universal problem and to increase the understanding of the larger proximal development mechanisms that may promote or inhibit school bullying. From a review of the literature, we found that the following variables had been identified to be associated with school bullying: 1) Demographic characteristics: Previous studies have indicated that male students report involvement in significantly larger numbers of violent incidents than female students [8,9]. Additionally, a number of studies have indicated that school bullying declines with increasing age, whereby the younger the students were, the more likely they were to report frequent victimization [10,11].
2) Family factors: It has been reported that children involved in bullying were more likely to have problems with poor family functioning and an insecure attachment with their parents [12,13]. Adolescents who lived in intact families and either reported higher involvement in schools or communicated with parents often were less likely to be engaged in bullying [14,15]. Lower parental support was also reported to be an important predictor for school bullying [16]. In addition, students who lived in a conflictive family environment were also reported to be more likely to bully others than those who have harmonious family relations [17]. In a study by Chen, however, in which student's pocket money was used as an indicator of Family SES (socioeconomic status), the results did not show any association between family SES and school bullying, which was attributed to the equal family income distribution in Taiwan [8].
3) School factors: The school environment is important for understanding the origins of bully/victim problems and for seeking further avenues for change and prevention [9]. A number of studies have found that poor classmate relations predicted a high level of aggressive behaviors [10]. Teachers play a crucial role in children's wellbeing and development. Care and support from teachers can reduce the aggression and delinquency of their students. In a study by Wei and colleagues, the researchers showed that less support and more maltreatment by a teacher were factors likely to result in higher levels of engagement in adolescent bullying [11]. Other previous studies have indicated that victims showed decreased rates of academic success, measured by lower grades, compared with those not involved in bullying [12,13]. Glew hypothesized that bullying impaired concentration and subsequent academic achievement in victims [14]. Conversely, in a study by Woods, high academic achievement was an important predictor for relational bullying [15]. 4) Psychosocial adjustment: Recently, many researchers have identified the association between psychosocial factors and school bullying. For example, Brunstein found that students who have a history of bullying or being bullied have a higher risk of committing suicide [18]. Those who often felt lonely were more likely to report being a target or aggressor of bullying [19]. In a study by Haynie, it was concluded that children who were involved in bullying were more likely to run away from home [20]. Students who engaged in physical fighting also showed a higher probability for involvement in school bullying [20].
Although we concluded that bullying is a universal phenomenon, it is clear that there are cultural variations in its prevalence and the way that bullying or victimization relates to other factors. Most previous studies, however, have been carried out in Western or developed countries, and only a handful of studies have been conducted in low-or middle-income countries. There is also a paucity of studies on family status (parental communication, family economic status), school dynamics (classmate relations, studentteacher relations) and the personal psychosocial adjustment of students (feeling lonely, attempting suicide) in the Chinese cultural context. Therefore, we carried out this large-scale cross-sectional study among middle school students in the Southern Province of China. The two main purposes of our research were: 1) To examine the prevalence of school bullying, including bullying others, being bullied and being a bully-victim. We have adopted the increasingly common term bully-victim to indicate those students who participate in both bullying and victimization. In view of its dual involvement in bullying and victimization, this emerging group may experience a higher level of psychosocial risks or life events than either bullies or victims.
2) To explore the factors that may contribute to the occurrence of school bullying. The variables highlighted in this study were highly correlated with students' everyday lives, which is consistent with what has been widely reported from previous research.
Study Design and Participants
A cross-sectional study was conducted to investigate the prevalence of school bullying and to examine the relationship between potentially influential factors and involvement in school bullying. Participants were middle school students recruited from four cities in the Guangdong Province (Guangzhou, Shenzhen, Chaozhou, and Dongguan). Guangzhou and Chaozhou represent the traditional Yue culture. Shenzhen and Dongguan, however, are known as immigrant cities, with more than half of the population migrating from other provinces. The schools in Guangdong were divided into three categories, based on teaching quality: key senior high school, regular senior high school and vocational school. A stratified cluster, random sampling method was used to randomly select participants among the three types of schools. First, two key senior high schools, two regular senior high schools and two vocational schools were selected in Guangzhou, Shenzhen, and Chaozhou; in Dongguan, however, one regular senior high school and two regular junior high schools were selected. Next, two classes were randomly selected from each grade in these schools. All students (a total of 8,342) in the selected classes were invited to participate in this study and provided usable information. The participation rate was 99.7%, and those who asked for sick leave were not included in the study.
Data Collection
To protect the privacy of the students, anonymous questionnaires were administered by trained interviewers in the absence of the teachers (to avoid any potential information bias). Students were required to fill out the questionnaires during class time. All data were collected between 2009 and 2011.
Ethical Statement
The study received approval from the Sun Yat-Sen University, School of Public Health Institutional Review Board. Participants were fully informed of the purpose of the study and were invited to participate voluntarily. Written consent letters were obtained from the school, each participating student and either of the student's parents.
Measures
Independent variables. Socio-demographic variables: Age, grade, gender, student's pocket money (students were asked how much pocket money on average they received per month from their parents; the rating choices for this item were 1) lower than 100 Yuan, 2) 100-199 Yuan, and 3) 200 Yuan or more).
Social friends: ''Do you have friends who have dropped/are dropping out of school?''.
Family factors: Living arrangement, family economic status, family communications, parental caring. Living arrangement was assessed by asking who lived in the student's primary home.
Family economic status was measured by asking the student's perception of their family's current economic status (rated from good to bad). Family communication was assessed by asking the student how often they communicate with their parents on the issues of everyday life (coded on a 3-point scale from often to scarce). Parental caring was assessed by asking, ''Are you satisfied with the care or love you receive from your parents?'', rated on a 3-point scale from satisfaction to dissatisfaction.
School factors: Classmate relations and teacher-classmate relations were also assessed based on the student's self-rating about their relationship with classmates and teachers, from good to bad. Academic achievements were captured by a single item asking for a personal appraisal of students' performances relative to that of their classmates (responses were coded as ''above average,'' ''average,'' and ''below average'').
Psychosocial adjustment: Feeling lonely was assessed by asking, ''During the past 12 months, how often did you feel lonely per week?'' Response options ranged from 1 (never) to 4 (over 4 days). Suicide attempts were assessed by asking, ''During the past 12 months, did you ever seriously consider attempting suicide?'' Responses were categorized into 4 groups: Never, Considered, Planned, and Attempted. Running away from home was assessed by asking, ''During the past 12 months, did you run away from home without your parents' permission for more than 24 hours?'' Response options were 1) never, 2) considered, 3) attempted, or 4) have run away from home one time or more. Physical fighting was assessed by asking, ''During the past 12 months, how many times did you fight with others?'' Time spent online per day was assessed by asking, ''How much time do you spend online per day?''.
Dependent variables. The questions about bullying and victimization consisted of 12 parts, with the answers given on a 3-point scale as follows: 1-never, 2-sometimes or rarely (one or two times) or 3-often (more than three times).
Students reporting at least one bullying behavior with a frequency of ''often'' in the past year were classified as bullies [21]. Victims were those who reported at least one victimization experience in the past year with a frequency of ''often.'' Bully-victims met the criteria for being both a bully and a victim. All other students were labeled as non-bullies/non-victims and served as the comparison group.
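As a minimal illustration of this classification rule (not the authors' code), the sketch below derives the four groups from hypothetical item columns; the split of the 12 items into six bullying and six victimization items, and the column names, are assumptions made for the example.

```python
import pandas as pd

# Hypothetical layout: b1..b6 are bullying items, v1..v6 victimization items,
# each coded 1 = never, 2 = sometimes/rarely, 3 = often.
bully_items = [f"b{i}" for i in range(1, 7)]
victim_items = [f"v{i}" for i in range(1, 7)]

def classify(row):
    bully = (row[bully_items] == 3).any()    # "often" on any bullying item
    victim = (row[victim_items] == 3).any()  # "often" on any victimization item
    if bully and victim:
        return "bully-victim"
    if bully:
        return "bully"
    if victim:
        return "victim"
    return "non-bully/non-victim"

# toy data: two students
df = pd.DataFrame(
    [[1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1],   # bullies others "often"
     [1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1]],  # only "sometimes" victimized
    columns=bully_items + victim_items,
)
df["group"] = df.apply(classify, axis=1)
print(df["group"].tolist())  # ['bully', 'non-bully/non-victim']
```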
Statistical Analysis
All statistical analyses were conducted using SAS 9.1. Descriptive analyses were used to describe demographic characteristics and the prevalence of school bullying. All factors that were statistically significant in the univariate analysis and that have been widely reported in the literature were further analyzed by multivariate analysis. In the multivariate analysis, a student's grade, rather than age, was adjusted for in the total sample because grade was a strong predictor for adolescent bullying. Three multi-level logistic regression models were fitted, one for each type of involvement in school bullying. Adjusted odds ratios (OR) were obtained with 95% confidence intervals (CI). Because individuals were grouped into schools, and therefore not independent, a multi-level analysis was carried out to select possible factors that may influence school bullying. The GLIMMIX procedure in SAS was used to fit the two-level logistic regression mixed models in which schools were treated as clusters.

Table 1 and Table 2 provide basic demographic information for the sample. The final sample included 8,342 middle-school students: 4,196 boys (50.3%) and 4,146 girls (49.7%). The students ranged in age from 10 to 22 years old, and the mean age was 16.4 (±1.63). Overall, 20.83% of the total participants reported being involved in school bullying during the past 12 months, with 18.99% of the students reporting being bullied and 8.6% admitting to bullying others. A subset of students (6.74%) was involved in both victimization and bullying. A total of 27.84% (2,322) were from junior high schools and 72.16% (6,020) were from senior high schools. A total of 65.39% (5,455) students lived with both biological parents, whereas 24.51% (2,045) lived in single-parent families. Regarding academic achievement, 5,961 (71.46%) students appraised themselves as average and 1,361 (16.32%) as below average. A total of 4,277 (51.27%) students reported poor relations with classmates, and 36.98% of the participants had poor relations with their teachers. Regarding the psychosocial factors, 0.79% (66) of the students had attempted suicide, 15.5% (1,293) felt lonely over 4 days in a week and 1.87% of the total sample had run away from home more than once.
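For readers who want to reproduce the flavour of the cluster-aware modelling described above, a hedged Python analogue is sketched below. It is not the authors' SAS GLIMMIX random-intercept model: it uses a marginal GEE logistic regression with exchangeable correlation within schools, and the data frame, predictors and coefficients are synthetic placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per student, clustered by school_id.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "school_id": rng.integers(0, 8, n),        # 8 schools as clusters
    "grade": rng.integers(1, 4, n),
    "parental_caring": rng.integers(1, 4, n),  # 1 satisfied .. 3 dissatisfied
    "ran_away": rng.integers(0, 2, n),
    "fight": rng.integers(0, 2, n),
    "time_online": rng.uniform(0, 6, n),       # hours per day
})
logit = -2.0 + 0.3 * df["parental_caring"] + 0.2 * df["time_online"]
df["bully"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Marginal logistic model with an exchangeable working correlation within schools.
model = smf.gee(
    "bully ~ C(grade) + C(parental_caring) + C(ran_away) + C(fight) + time_online",
    groups="school_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()

# Adjusted odds ratios with 95% confidence intervals.
or_table = np.exp(pd.concat([result.params, result.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```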
Univariate Analysis for Bully, Victim and Bully-victim Groups
As shown in Table 3 and Table 4, without adjustment for other variables, the bully, victim and bully-victim groups were correlated with pocket money, parental caring, communication with parents, feeling lonely, suicide attempts or ideation, running away from home, being in a physical fight and time spent online. Economic status was only significantly correlated with being a victim or bully-victim. There were no significant differences between gender and bully, victim and bully-victim status.
Multilevel Logistic Regression Analysis: Bully
The final logistic regression model for bullying is presented in Table 5 and Table 6. Six of the original variables remained in the final model: grade, parental caring, considered suicide, running away from home, being in a physical fight and time spent online per day. Students who were dissatisfied with parental caring were 1.7 times more likely to be bullies than those who were satisfied with parental caring. Students who spend more time online were also at a higher risk of being bullies. Finally, students who had considered suicide (OR = 1.26, 95% CI = 1.02-1.56) or attempted to run away from home (OR = 1.89, 95% CI = 1.22-2.83) had a higher probability of being bullies.
Multilevel Logistic Regression Analysis: Victim
The final model for victimization in Table 5 and Table 6 showed many correlations. Good economic status appeared to protect students from being bullied. Students who were dissatisfied with their parental caring or scarcely communicated with their parents were at a higher risk to be bullied. Adolescents who reported that they had attempted to run away from home were 62% more likely to be bullied. Those who spent more than 4 hours/day online also had a higher probability to be bullied (OR = 1.85, 95% CI = 1.43-2.38).
Multilevel Logistic Regression Analysis: Bully-victim
Eight independent variables out of the original factors remained in the final model. With these reference categories, it was determined that senior students were less likely to be in the bully-victim group. Students who were dissatisfied with their parental caring, scarcely communicated with their parents, had attempted suicide or tried to run away from home, however, were more likely to be bullied and to bully others.
Discussion
In this study, we found that school bullying was not rare in China, and many risk factors for bullying exist throughout school and family life. Similar studies have already been reported. A cross-sectional self-report survey of 11-15-year-old school children in 27 countries revealed substantial cross-national differences, with a low prevalence of involvement in bullying in Sweden (14.6% and 15.4% of children reporting victimization and bullying, respectively) and a high prevalence in Lithuania (56.3% and 54.9% of children reporting victimization and bullying) [22]. In one of the few studies of school bullying in China, Chen reported that 68% of the middle school students studied had been bullied at least once during the previous year [8]. A possible explanation for the variance in prevalence could be differences in study design and the nature of the samples. In addition, socioeconomic diversity and various cultural definitions and understandings of bullying behaviors may have contributed to the variance of the bullying rate [23,24,25]. Contrary to our expectation, we found that senior students reported a higher level of both bullying and being bullied than junior students, which was consistent with the findings in a study by O'Connell, notably, that older boys were more likely to actively bully than were younger boys [26]. Mrug also reported that higher levels of aggressive fantasies, delinquency and overt aggression were well-predicted by older age [27]. Previous studies also suggested that boys were more likely to report being a target or aggressor of bullying than girls [17,28], whereas in the present study, there was no significant gender difference. This finding is consistent with some previous studies [12,15,29] arguing that the rate of bullying among boys and girls may be similar. The difference may be reflected in the forms of bullying. Boys usually practice physical and direct bullying (e.g. kicking someone), while girls may engage in psychological bullying (e.g. spreading rumors) [30,31].
Family function can also contribute to the establishment and maintenance of a bully-victim relationship. Previous research suggested that both avoidant attachment and preoccupied attachment have been found to predict aggressive behaviors and victimization concurrently and over time [31]. Our research further supported the notion that the students who were dissatisfied with their parental care were more likely to engage in bullying or be a victim of bullying. It is also worth noting that good communication with parents reduced the probability of victimization as well. Through communication, students confide their problems to their parents and seek better ways to handle them. A study by Reese suggested that a lack of emotional support was negatively correlated with school bullying [32]. The results of the present study showed that good economic status may act as a protective factor for being bullied, which was consistent with a study by Qiao [12]. The same relations that were previously noted among bullies, however, were not identified here. A study by Kim suggested that students with a high family socioeconomic status were more likely to become persecutors [33]. Thus, further study is needed to explore the different impacts that family factors have on bullying and victimization. Victimized adolescents may be widely disliked or not well-liked by their peers, and adolescents were likely to seek out targets who have been isolated by their peer group without receiving any negative evaluation [23]. Victimized adolescents often experience peer rejection and deviant affiliation, leaving them more vulnerable to aggressive peers. Asian children are, perhaps, more likely to be influenced by their peers and to mimic their behaviors than non-Asian children [8]. In our results, victims tended to have poor relationships with their classmates. Bullies were also disliked amongst classmates but were less socially isolated than victims, primarily due to their popularity with other aggressive and deviant adolescents [24]. Contrary to our expectations, students with poor relationships with teachers did not show a higher probability of school bullying and victimization. When confronted with peer abuse or peer rejection, adolescents mostly turned to their parents or peers rather than teachers for help [25], which may suggest that emotional support from peers and parents was more important than that from teachers to these students. Our results also indicated that victimization was significantly correlated with suicidal attempts. A prior study has already indicated that the peer rejection and abuse inherent in school bullying may have a direct effect on the genesis of suicidal ideation. We presumed that the correlations may be caused by the same influential factors shared by both suicide attempts and victimization. For instance, lack of social support and deviant peer associations are consistently and highly-correlated with school bullying exposure and are significantly associated with suicide attempts [13]. Running away from home occurred equally among those who were bullied and those who were bullies, which was consistent with a study by Haynie [18]. Some have hypothesized that running away from home may be used as an adaptation to a stressful structural circumstance [18,34]. Contrary to our expectation, students who had run away from home or tried to commit suicide did not display a higher risk for school bullying. We presumed that the smaller sample of this category may have caused the false negative. 
Additionally, due to the nature of cross-sectional studies, a causal relationship cannot be determined. Students who have tried to commit suicide or run away from home more than once may have gotten more attention from both parents and teachers, preventing them from engaging in future risky behaviors.
Our results clearly showed an association between bullying and being bullied and engaging in a physical fight. Liang has already reported that experiencing bullying puts younger adolescents at a higher risk for physical fighting [35], which reiterates the importance of addressing these serious public health issues among adolescents. Another interesting finding in our study was that students who spent more time on the internet had a higher risk of engaging in bullying or being a victim of bullying. One possible explanation is the increased access to various formats of violence and bullying. Adolescent problem behaviors should be seen as socially-learned adaptations in a multi-level ecological context. The internet has provided adolescents with convenient access to a culture of violence. In addition, young people are socially connected with others through the internet, and it has become a new medium for bullying behavior [36]. The direction of the link between bullying behaviors and internet use, however, is difficult to determine due to the nature of this cross-sectional study; they might mutually reinforce each other, thereby formulating a vicious circle.
The nature of the sample of students in this study needs to be considered when interpreting these results. We reported results from data among adolescents in Guangdong, and the findings may only be representative of the adolescents in this area, and not the rest of the country. The findings in the present study are based on classroom samples and will not be representative of adolescents outside of school settings, who may be at the highest risk for involvement in bullying. The data were based only on self-reporting, so the possibility of biased reporting motivated by a desire to provide socially desirable responses must be recognized. Recall bias may have also affected the reporting of violent behavior, which spanned a 12-month period prior to the survey.
As a random sampling method was used with a cross-sectional design, the sample in this study appears to represent the population well; however, its limitations must also be considered. First, being a cross-sectional study, causal inferences regarding relational factors and involvement in bullying cannot be made. Thus, while it appears that victims of bullying are more likely to internalize problems, it is also possible that students who internalize their problems are more likely to be targeted by bullies. Longitudinal designs should be utilized to address this shortcoming. Future research should focus on a broader spectrum of predictors over time to identify causal determinants of violence in this population. Second, self-reported data may lead to under- or over-reporting. Future studies should collect information from multiple sources, such as teachers or parents. Third, the study did not provide information regarding all potential or likely risk factors, such as self-esteem and violent behavior of friends or parents. This paper provides some information about the prevalence of bullying and being bullied and the possible influential factors associated with bullying and being bullied. With regard to future research, programs may benefit from studies that include more nuanced measures of context, especially family- and school-related factors (e.g., parental monitoring, sibling relationships, class size and gender ratio). Furthermore, research examining mediator models could illuminate the mechanisms by which the psychosocial risk factors studied herein may give rise to involvement in bullying and victimization.
Conclusion
In conclusion, this study investigated the prevalence of school bullying among Chinese middle school students by utilizing a large-scale survey sample. It also further examined the effects of potentially influential factors on adolescent bullying and victimization. We found that 20.83% of the participants were involved in school bullying, and a series of factors were proven to be significantly correlated with school bullying. Previous research has indicated that school bullying can be prevented. Due to the frequency of bullying episodes, schools are the target of the most intervention efforts. The prevalence of bullying observed in Chinese middle school students in this study suggests the importance of preventive intervention research to target bullying behaviors. Effective preventive measures require full consideration of the social and environmental factors that would inhibit bullying behaviors among Chinese adolescents. School-wide interventions, such as the Olweus Bullying Prevention Program (BPP), have been recognized to be some of the most effective strategies for bullying behaviors. The BPP utilizes a multi-pronged approach, incorporating school-wide (e.g., formation of a bullying prevention coordinating committee), classroom-level (e.g., class meetings with parents) and individual activities (e.g., direct interventions with identified bullies, victims and their parents) [29]. In light of the extent of school bullying, and its contribution to the development of other youth problems, concerted efforts to implement preventive measures are necessary.
Peanut-specific T cell responses in patients with different clinical reactivity
Whole extract or allergen-specific IgE testing has become increasingly popular in the diagnosis of peanut allergy. However, much less is known about T cell responses in peanut allergy and how they relate to different clinical phenotypes. CD4+ T cells play a major role in the pathophysiology of peanut allergy as well as tolerance induction during oral desensitization regimens. We set out to characterize and phenotype the T cell responses and their targets in peanut sensitized patients. Using PBMC from peanut-allergic and non-allergic patients, we mapped T cell epitopes for three major peanut allergens, Ara h 1, 2 and 3 (27 from Ara h 1, 4 from Ara h 2 and 43 from Ara h 3) associated with release of IFNγ (representative Th1 cytokine) and IL5 (representative Th2 cytokine). A pool containing 19 immunodominant peptides, selected to account for 60% of the total Ara h 1-3-specific T cell response in allergics, but only 20% in non-allergics, was shown to discriminate T cell responses in peanut-sensitized, symptomatic vs non-symptomatic individuals more effectively than peanut extract. This pool elicited positive T cell responses above a defined threshold in 12/15 sensitized, symptomatic patients, whereas in the sensitized but non-symptomatic cohort only 4/14 reacted. The reactivity against this peptide pool in symptomatic patients was dominated by IL-10, IL-17 and to a lesser extent IL-5. For four distinct epitopes, HLA class II restrictions were determined, enabling production of tetrameric reagents. Tetramer staining in four donors (2 symptomatic, 2 non-symptomatic) revealed a trend for increased numbers of peanut epitope-specific T cells in symptomatic patients compared to non-symptomatic patients, which was associated with elevated CRTh2 expression, whereas cells from non-symptomatic patients exhibited higher levels of Integrin β7 expression. Our results demonstrate differences in T cell response magnitude, epitope specificity and phenotype between symptomatic and non-symptomatic peanut-sensitized patients. In addition to IgE reactivity, analysis of peanut-specific T cells may be useful to improve our understanding of different clinical manifestations in peanut allergy.
Introduction Peanut allergy (PA) is among the most common food allergies and its prevalence has increased over time [1]. In developed countries, PA has been reported to affect up to 1% of children and 0.6% of adults [2]. In contrast to milk and egg allergy, PA is not commonly outgrown [3] and is associated with severe, potentially fatal anaphylactic reactions [4]. Due to this high risk of adverse reactivity, management of the disease usually consists of strict peanut avoidance. However, this is logistically difficult to achieve and patients are at a constant risk of accidental exposure to the allergen. To minimize the risk of serious allergic reactions following accidental peanut ingestion, patients are often advised to carry self-injectable epinephrine. The burden of constant food avoidance and fear of accidental ingestion can have a significant impact on the quality of life of the patients [5].
Extensive studies over the last decades have significantly improved our knowledge of IgE reactivity against peanut and its individual components [6][7][8][9]. Indeed, common clinical diagnostic tests are based on measuring peanut-specific IgE titers or skin test reactivity, which provide evidence of allergic sensitization and are usually indicative of clinical reactivity. Compared to antibodies, much less is known about the peanut-specific allergic T cell response and its association with clinical symptoms. T cell epitopes have been identified for the major allergens Ara h 1 [10][11][12] (7S vicillin-like globulin) and Ara h 2 [13][14][15](2S albumin) but the molecular targets for other peanut allergens remain unknown.
The presence of peanut-specific IgE antibodies is not always associated with clinical peanut allergy. In 2010, Flinterman et al. examined peanut-specific T cell responses in peanut sensitized, allergic and non-allergic individuals, reporting readily detectable responses in both cohorts [16]. Little is known about potential differences in the T cell epitope repertoire or the phenotype of peanut-specific T cells in symptomatic versus non-symptomatic patients.
With the present study, we sought to identify T cell epitopes derived from peanut allergens Ara h 1, 2 and 3, as it has been reported that clinical symptoms were mostly associated with IgE reactivity to Ara h 1, 2 and 3 [9] in American patients. Moreover, cytokine polarization (IL-5 vs IFNγ production) of peanut-specific T cell responses was analyzed in peanut-sensitized allergic and non-sensitized individuals. In a second line of analysis, our objective was to determine if we could detect quantitative and/or qualitative differences in peanut-specific T cell responses between peanut-sensitized and symptomatic versus peanut-sensitized nonsymptomatic patients. Lastly, HLA restrictions of antigenic peptides were used to facilitate the design of tetramer reagents to characterize the phenotype of peanut-specific T cells in symptomatic versus non-symptomatic patients. Tetramer reagents represent a valuable tool for monitoring the surface phenotype, function and frequency of allergen-specific T cells at the single cell level [17]. Accordingly we sought to leverage our epitope identification studies to enable development of tetramer staining reagents. These reagents may contribute significant insights into the differences of a non-symptomatic, tolerant immune response versus the adverse, potentially fatal allergic reaction associated with clinical peanut allergy.
Study population and PBMC isolation
A cohort of 50 patients was recruited from Stanford and San Diego, CA, following Institutional Review Board approval by the La Jolla Institute's Institutional Review Board and Stanford University's Institutional Review Board (IRB protocols: IRB-8629, VD-112-0217). Patients 18 or older enrolled in this study provided written consent. Patients below the age of 18 provided oral assent and their parent or guardian provided written consent. Demographic and clinical information is summarized in Table 1. The non-allergic cohort is 70% female with a median age of 37, the symptomatic cohort is 58% female with a median age of 23 and the non-symptomatic cohort is 29% female with a median age of 33. The differences in age between the symptomatic versus non-allergic cohort and symptomatic versus non-symptomatic cohorts are statistically significant (p = 0.003, p = 0.02, respectively) as determined by two-tailed Mann-Whitney test. Peanut-specific IgE titers were determined from plasma using the ImmunoCAP (Thermo Fisher, Uppsala, Sweden). Clinical allergy was determined either by oral food challenge or using a questionnaire to determine a clinical history consistent with peanut allergy. Peanut-sensitized but non-symptomatic donors were categorized based on positive IgE titers (>0.3 kU/L) and lack of clinical reactivity to peanut after peanut ingestion ( Table 1). PBMCs were isolated from whole blood by density gradient centrifugation according to the manufacturer's instructions (Ficoll-Hypaque, Amersham Biosciences, Uppsala, Sweden) and cryopreserved for subsequent further analysis.
Peanut extract, peptide selection and synthesis
Peanut extract was obtained from Greer (Lenoir, NC). A total of 5 peanut (Arachis hypogaea) protein sequences, corresponding to Ara h 1.0101, Ara h 2.0101, Ara h 2.0201, Ara h 3.0101 and Ara h 3.0201 were considered. The sequences were aligned and grouped into 3 different clusters, corresponding to Ara h 1, 2 and 3. Next, 15-mer peptides, overlapping by 10 residues, were generated for all sequences. Redundant 15-mers were removed, leaving a set of 386 unique peptides. An additional 14 peptides were added to cover gap regions formed by the alignment. This resulted in a final set of 400 15-mer peptides. This set was synthesized by Synthetic Biomolecules (San Diego, CA) as crude material on a 1 mg scale (purity of >70%), and utilized for subsequent experiments. Representative peptides from this set were randomly selected and tested for quality control by HPLC and mass spectrometric analysis to confirm purity and sequence identity. Peptides were reconstituted at 40 mg/ml in DMSO. Reconstituted peptides were stored at -20˚C. In the assays performed, the DMSO concentration added to the culture did not exceed 0.25%.
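The 15-mer/10-residue-overlap design described above can be reproduced with a short sliding-window routine; the sketch below is illustrative only. The sequences shown are placeholders rather than the actual Ara h 1-3 records, and the 14 alignment-gap peptides the authors added are not reproduced.

```python
def overlapping_peptides(seq, length=15, overlap=10):
    """Slide a window of `length` residues along `seq` with the given overlap."""
    step = length - overlap
    peptides = [seq[i:i + length] for i in range(0, len(seq) - length + 1, step)]
    # add a final C-terminal peptide if the last window does not reach the end
    if len(seq) >= length and (len(seq) - length) % step != 0:
        peptides.append(seq[-length:])
    return peptides

# placeholder sequences standing in for the Ara h 1/2/3 variants
sequences = {
    "Ara_h_example_1": "MAKLTILVALALFLLAAHASARQQWELQGDRRCQSQLER",
    "Ara_h_example_2": "MAKLTILVALALFLLAAHASARQQWELQGDRRCQSQLERANLRPCEQHLMQ",
}

# pool all windows and drop redundant 15-mers, as described above
unique_15mers = sorted({p for s in sequences.values() for p in overlapping_peptides(s)})
print(len(unique_15mers))
```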
T cell in vitro culture expansion
For in vitro expansions, PBMCs were thawed and cultured in RPMI 1640 (Omega Scientific, Tarzana, CA) supplemented with 5% human AB serum (Gemini Bio-Products, West Sacramento, CA) at a density of 2 × 10 6 cells per mL in 24-well plates (Corning, San Diego, CA) and stimulated with peanut extract (10 μg/ml) or peptide pools at 5 μg/mL. Cells were kept at 37˚C, 5% CO 2 , and additional IL-2 (10 U/mL; ThermoFisher, San Diego, CA) was added every 3 days after initial antigenic stimulation. On day 14, cells were harvested and reactivity against peanut extract (10 μg/ml), peanut specific peptide pools (5 μg/ml) and individual peptides (10 μg/ml) was assessed.
Statistical analysis
Statistical analysis of the data was performed using GraphPAD Prism software (La Jolla, CA). Statistical comparisons were performed by Mann-Whitney test, unpaired, non-parametric, or
Identification of peanut allergen-derived T cell epitopes in allergic and non-allergic patients
T cell epitope mapping was performed to determine epitopes from the major peanut allergens Ara h 1, 2 and 3, recognized by peanut allergic (n = 11) and non-allergic (n = 10) donors. As these studies were performed with samples obtained from pediatric patients, available blood sample volumes were limiting. To overcome this limitation, small pools of predicted peptides matched to the HLA molecules expressed in each donor were tested. For each donor, class II binding affinity predictions were performed using algorithms available through the IEDB (www.iedb.org) [18], to identify the top 6 predicted binders for each of the alleles expressed at the four HLA class II loci (DRB1, DRB3/4/5, DQB1, and DPB1). Accordingly, the screening load for each donor was reduced from 400 peptides to a maximum of 48 peptides (6 peptides x 8 class II molecules, less for homozygous donors). The six top predicted peptides for each allele were used to generate allele-specific pools, which were in turn used to expand peanut-specific cells in vitro. After 14 days of culture, ELISPOT assays were used to measure reactivity upon restimulation with the respective pool and each individual peptide they contained.
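The per-donor pooling step described above amounts to ranking predicted binders for each allele a donor expresses and keeping the top six; a hedged sketch is shown below. The column names, allele strings and IC50 values are hypothetical, and real predictions would come from the IEDB class II tools cited in the text.

```python
import pandas as pd

def donor_pools(preds, donor_alleles, top_n=6):
    """Select the top_n predicted binders (lowest IC50) for each allele a donor expresses."""
    pools = {}
    for allele in donor_alleles:
        ranked = (preds[preds["allele"] == allele]
                  .sort_values("ic50")
                  .head(top_n))
        pools[allele] = ranked["peptide"].tolist()
    return pools

# toy prediction table: one row per (allele, peptide) with a predicted IC50 (nM)
preds = pd.DataFrame({
    "allele":  ["DRB1*03:01"] * 3 + ["DQB1*02:01"] * 3,
    "peptide": ["PEPTIDEAAAAAAA1", "PEPTIDEAAAAAAA2", "PEPTIDEAAAAAAA3"] * 2,
    "ic50":    [12.0, 250.0, 3100.0, 45.0, 90.0, 5200.0],
})
pools = donor_pools(preds, ["DRB1*03:01", "DQB1*02:01"], top_n=2)
print(pools)
```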
In total, 74 out of 400 tested peptides elicited positive T cell reactivity in at least 1 donor (T cell reactivity ≥ 20 spot-forming cells (SFC) for either IL-5 or IFNγ, p-value <0.05, stimulation index ≥ 2). A summary of the magnitude and response frequency of each individual donor/peptide combination tested is shown in Fig 1A. The data were also deposited in the IEDB (submission ID 1000755, URL: http://www.iedb.org/subid/1000755). Interestingly, a cohort-specific analysis of cytokine production revealed no difference in IL-5 levels in response to the peanut peptides in non-allergic compared to allergic patients. However, IFNγ production was increased significantly in non-allergic vs allergic patients (p<0.0001) (Fig 1B). A similar pattern is seen when directly comparing only the subset of peptides that is recognized in both cohorts (S1 Fig). Given this trend, we set out to investigate the overall cytokine environment of the peanut-specific T cell response in the allergic and non-allergic cohort by calculating the IL-5/IFNγ ratio. T cell responses in allergic donors had a significantly higher IL-5/IFNγ ratio compared to non-allergic donors (p = 0.0048), indicating that the response in allergic individuals is significantly more Th2-dominated compared to non-allergics (Fig 1C). Based on an interim, informal analysis of peptide-induced IL-5 production in allergic and non-allergic donors, we selected and pooled a subset of 19 positive peptides (P19 pool) that was designed to capture a majority of the responses in allergic donors and fewer responses in non-allergic donors. Based on the complete data, this P19 pool accounts for 60% of the response in allergics and only 20% of the response in non-allergic donors. Peptides included in the P19 are highlighted in the first column of S1 Table. Analysis of the cytokine response polarization of the P19 pool revealed an even more significant Th2 dominance in allergics compared to non-allergics (p = 0.0006) (Fig 1D) than was observed for the complete set of positive peptides.
It should be noted that, due to the HLA-matched design of our epitope mapping approach, each peptide was tested in a variable number of donors and the tested peptide set was selected based on predicted binding to the HLA expressed in our cohort. A full summary of the 74 positive peptides in terms of number of donors tested, response magnitude and response frequencies is given in S1 Table.

(Fig 1 caption) Each dot represents a single peptide that elicited T cell reactivity in one or more donors. B) Magnitude of IL-5 and IFNγ production in allergic (n = 11; open circles) and non-allergic (n = 10; closed circles) patients. The response polarization, expressed as the IL-5/IFNγ ratio, is shown for C) all peptides that tested positive in any given donor and D) a set of 19 selected peptides that account for 30% of the total response (60% in allergics, 20% in non-allergics, respectively). Each data point represents a single donor/peptide combination. Statistical comparison by Mann-Whitney test, two-tailed. ** p<0.01, *** p<0.001, **** p<0.0001.
Differences in T cell responses between symptomatic and non-symptomatic peanut sensitized patients
Next, we were interested to assess whether T cell reactivity to the P19 pool could be used to detect differences in the T cell response between peanut-sensitized patients with and without clinically symptomatic peanut allergy. Using a new adult cohort, PBMC from patients with positive peanut-specific IgE titers (>0.3 kUA/L), who are either symptomatic (patients with a clinical history consistent with peanut allergy, n = 15) or non-symptomatic (patients who regularly ingest peanuts without experiencing any symptoms, n = 14), were cultured in vitro with either peanut extract or the P19 pool for 14 days. After culture, total T cell reactivity (defined here as the sum of IL-5, IL-10, IL-17 and IFNγ) in response to restimulation with peanut extract or the P19 pool was determined by ELISPOT. The P19 pool was selected based on its lower reactivity in non-allergic donors; therefore, it was of particular interest to investigate whether it would also be less reactive in sensitized but non-symptomatic individuals or whether they would exhibit responses similar to allergic, symptomatic patients.
A comparison of T cell responses against peanut extract did not show any differences in peanut-sensitized symptomatic patients (median 1673 SFC) compared to non-symptomatic patients (1460 SFC) (Fig 2A). Moreover, analysis of the cytokine polarization of peanut extract-specific T cell responses showed no significant differences between the two cohorts (Fig 2B). In contrast to peanut extract, T cell reactivity against the P19 pool was significantly higher (p = 0.043) in peanut-sensitized symptomatic donors (median 127 SFC) compared to sensitized, yet non-symptomatic donors (median 37 SFC) (Fig 2C). Again, no significant difference in cytokine polarization was detected between the two cohorts (Fig 2D). A more detailed analysis of the patterns of cytokine production in symptomatic and non-symptomatic donors revealed that the increased T cell response in symptomatic donors is mostly accounted for by IL-10 (p = 0.02) and IL-17 (p = 0.03). IL-5 production was modestly increased in symptomatic patients (p = 0.25) and only very little IFNγ (p = 0.5) was observed in both cohorts (S2 Fig). Thus, while the use of whole peanut extract fails to reveal differences between peanut-sensitized cohorts with different clinical manifestations, differences in T cell reactivity can be detected at the epitope level. A threshold level of 70 SFC/10⁶ was artificially determined to distinguish responses in symptomatic and non-symptomatic donors. At this threshold, pool-specific T cell reactivity is detected in 80% (12/15) of the symptomatic donors. In the non-symptomatic cohort, 71% (10/14) are negative for pool-specific T cell reactivity (Fig 2C). Moreover, an analysis of peanut extract-specific IgE titers (listed in Table 1) and T cell cytokine production revealed a significant correlation for IL-5 (p<0.0001), IL-17 (p<0.0001) and IL-10 (p = 0.03) production in response to the P19 pool, whereas the correlation between sIgE and extract-specific T cell responses only reached borderline significance for IL-5 (p = 0.045) and IL-17 (p = 0.043).
Characterization of peanut-specific T cell phenotypes in symptomatic and non-symptomatic donors using tetramer reagents
In the next series of experiments, we were interested in investigating whether differences could be detected in the phenotypes of peanut-specific T cells from symptomatic and non-symptomatic donors using tetramer reagents. The use of tetramers allows detection of peanut epitopespecific T cells on a single cell level. Using MHC class II binding assays, we determined peptide binding to MHC class II molecules that were expressed in our donor cohorts and for which tetramer reagents could be manufactured. A summary of tetramer reagents used is shown in Table 3. Based on donor-specific HLA expression, cell sample availability and tetramer allele reagent availability, we performed tetramer staining in 2 non-symptomatic and 2 symptomatic donors ( Table 3). A representative tetramer staining with tetramer #2 and tetramer #1 as an HLA-mismatch control of donor 2377 is shown in Fig 3A. Quantification of tetramer-positive cells after in vitro expansion with single peptides revealed a trend for higher frequency of tetramer positive cells in symptomatic compared to non-symptomatic donors (Fig 3B).
Next, we hypothesized that peanut-specific T cell responses in symptomatic donors are associated with a pathological type 2 phenotype, as previously reported [10,19], whereas a more tolerogenic phenotype may be observed in non-symptomatic donors. To investigate this hypothesis, we performed co-staining of tetramer and surface markers that are associated with either a strong Th2 phenotype (CRTh2) or gut-homing (Integrin β7), which may be associated with a more tolerogenic response, as it has been shown that expression of Integrin β7 plays a role in the regulation of gut-residential regulatory T cells [20] and immunoregulation of innate responses [21]. Analysis of peanut tetramer-positive T cells for their expression of the phenotype markers Integrin β7 (gut-homing marker) and CRTh2 (highly expressed on Th2 cells) revealed mild trends indicating different T cell phenotypes for peanut-allergic, clinically symptomatic vs non-symptomatic patients. Expression of the gut-homing factor Integrin β7 was 2-fold lower in tetramer-positive T cells from symptomatic patients compared to non-symptomatic patients (median 26.5% and 50%, respectively) (Fig 3C). In contrast, expression of CRTh2, a molecule associated with pathological type 2 T cell responses in allergy [19], exhibited the opposite pattern, with relatively high expression in symptomatic patients (median 6.5%) compared to non-symptomatics (median 0.4%) (Fig 3D). To assess if there were any differences in the level of expression, median fluorescence intensity was also assessed. No differences were observed (S4 Fig). While the data are preliminary and the sample size is very limited, they may suggest that differences exist in the phenotypes of peanut-specific T cells in symptomatic and non-symptomatic donors, which may be associated with the observed differences in clinical reactivity between these two cohorts.

Fig 3. Tetramer staining and phenotypic surface marker expression of T cells in peanut-sensitized donors after in vitro culture. A) A representative plot showing staining with tetramer and an HLA-mismatch control. B) Quantification of tetramer-positive cells in peanut-sensitized, symptomatic (Sym) (n = 2) or non-symptomatic (Non-Sym) (n = 2) donors. In the non-symptomatic cohort, donors were tested with multiple tetramers; therefore, a total of 4 data points are shown in this cohort. Median with interquartile range is shown. C) Integrin β7 expression and D) CRTh2 expression in tetramer-positive cells from peanut-sensitized, symptomatic and non-symptomatic patients. Left panels show representative FACS plots. Right panels show graphs quantifying Integrin β7 and CRTh2 expression in tetramer+ cells from all samples tested. No statistical analysis was performed due to low sample size.
Discussion
Peanut allergy is a common food allergy and can be associated with serious, sometimes even fatal adverse reactions. Here, we performed an analysis of the molecular targets and phenotype of the peanut-specific T cell response to the allergens Ara h 1, Ara h 2 and Ara h 3. T cell epitope mapping in peanut allergic and non-allergic patients with HLA-matched peptide pools identified 74 T cell-reactive regions, 27 from Ara h 1, 4 from Ara h 2 and 43 from Ara h 3. Epitopic regions identified from Ara h 1 and 2 have been reported before [10][11][12][13][14][15]. However, to the best of our knowledge, this is the first report of Ara h 3-derived T cell-reactive epitopes in peanut-sensitized and non-sensitized patients. A major caveat of the HLA-matched epitope mapping approach performed is that it biases towards peptide binding of the HLA types expressed in the selected cohort. Consequently, it is important to highlight that the low number of epitopes identified in Ara h 2 is most likely due to the chosen approach rather than a reflection of reduced allergenicity compared to Ara h 1 and 3.
Overall, T cell responses from peanut allergics showed a higher IL-5:IFNγ ratio compared to non-allergics, consistent with studies in other allergy systems [22,23]. Nevertheless, non-allergic donors also exhibited strong peanut-specific IL-5 production, which is surprising and may be due to the in vitro culture. The reduced IL-5:IFNγ ratio observed in non-allergics was mostly due to high levels of IFNγ rather than decreased IL-5, resulting in an overall more balanced Th1/Th2 response. It should further be noted that the two cohorts differ drastically in peanut exposure, which could further contribute to differences in peanut-specific T cell reactivity and phenotype.
Peanut sensitization, as defined by peanut-specific IgE titers >0.3 kU/L or positive skin prick test reactivity (diameter ≥ 3 mm), is sometimes detected in patients who do not exhibit any clinical symptoms upon peanut ingestion. To learn more about the immunological reactivity at the T cell level in peanut-sensitized, symptomatic and non-symptomatic patients, we compared the peanut-specific T cell response in these two patient groups, with specific focus on response magnitude and polarization, epitope specificity and the T cell phenotype. While the use of peanut extract failed to detect any differences in peanut-specific T cell reactivity between the two cohorts, a significantly higher response in symptomatic patients (compared with non-symptomatic patients) was detected in response to a defined epitope pool composed of 19 peptides. In addition, a strong correlation between sIgE titers and P19 pool-specific T cell responses was observed, which was much less pronounced when compared to T cell reactivity against whole extract. Interestingly, we did not observe a difference in cytokine polarization between the two cohorts, indicating that the difference may be related to cell quantity rather than the functional response. Of note, the symptomatic cohort has a lower median age (23 years) compared to the non-symptomatic cohort (33 years), which may also be a factor contributing to the differences observed.
A previous study, which compared peanut allergic and non-allergic (non-sensitized) patients [10], reported a difference in magnitude between the two cohorts. Our data suggests that when defined epitopes are used to measure peanut-specific responses, non-symptomatic patients show a lower frequency of cytokine-producing, peanut-specific T cells compared to symptomatic patients, similar to what has been reported for non-sensitized donors. Interestingly, this difference in frequency between symptomatic and non-symptomatic donors was mostly accounted for by IL-10 and IL-17 production, and only a modest difference for IL-5 was observed. A recent study on single cell profiling of peanut-specific T cells reported the detection of multi-functional Th2 responses rather than a deficit in regulatory T cells among peanut-specific T cells in symptomatic donors [24]. In this context, the data may suggest that clinical reactivity is dictated by a potent, multi-functional T cell response, which may include a role for IL-17 as well as type 2 cytokines, rather than a dysfunctional regulatory response. Further studies are required to further elucidate the role of IL-17 and regulatory T cells in peanut allergy. Of note, this analysis was limited to IL-5, IL-10, IL-17 and IFNγ as representative cytokines for the major T cell subsets Th2, Tr1/Treg, Th17 and Th1. Further studies will have to be performed to determine if the difference in magnitude of the four cytokines extends to other cytokines or if a different read-out such as T cell activation or proliferation will return similar results.
Interrogation of the phenotype of tetramer positive T cells revealed that peanut-specific T cells from symptomatic patients tended to have higher CRTh2 expression compared to nonsymptomatic donors. CRTh2 is associated with a Th2 phenotype implicated in allergic disease [10,19]. In contrast, tetramer positive cells in non-symptomatic donors showed a trend for increased level of Integrin β7, a gut-homing factor that plays a role in the regulation of gut-residential regulatory T cells [20] and immune-regulation of innate responses [21]. Due to limited sample availability, tetramer experiments were performed in only four donors and therefore, no statistical analysis could be performed. Furthermore, the analysis performed does not compare T cells stained with the exact same tetramer, therefore the exact epitope specificity is different. Nevertheless, the identified trends are consistent with other studies that reported increased CRTh2 and decreased Integrin β7 expression in allergen-tetramer stained cells from allergic patients [10,25]. Future studies in more expansive cohorts are required to confirm these trends.
Our findings show several notable differences in the peanut-specific T cell response between symptomatic and non-symptomatic donors, despite comparable peanut sensitization. These data highlight that allergen component-specific reactivity is not limited to IgE, and a better understanding of the T cell response may provide additional insights into the different clinical manifestations observed in peanut allergy.
S1 Table. A summary of peanut allergen-derived T cell reactive peptides, number of donors tested and responding, and magnitude of T cell response (IL-5 and IFNγ producing cells).
(XLSX)
Surfactant-free fabrication of pNIPAAm microgels in microfluidic devices
Conventional pNIPAAm microgel synthesis utilizes surfactants to suspend pre-gel droplets in the immiscible continuous phase because of the slow polymerization required when synthesizing pNIPAAm in an aqueous solvent. To improve the fabrication process and to eliminate the effects of surfactant on microgel quality, a surfactant-free and water-free method was developed. Rapid polymerization of high-quality microgels was achieved in a single-channel microfluidic device, which helps maintain the integrity of gel particles without the addition of surfactants. The droplet generation mechanism and the effect of the flow rates of the two incoming immiscible fluids on the geometry of the produced microgels were studied. The produced microgels have low polydispersity, with a dispersity index of 6.4%. pNIPAAm hydrogels fabricated in DMSO solvent have a smaller pore size and a more uniform microstructure than those synthesized in water. The fabricated pNIPAAm microgels show a sharp volume phase transition at ~32 °C and a high deswelling/swelling rate.
I. INTRODUCTION
Responsive microgels are crosslinked micron/submicron-sized polymeric particles that can swell or deswell in response to external stimuli. 1 The osmotic nature of the hydrogel volume-change mechanism makes microgels extremely advantageous for achieving fast (de)swelling rates, owing to their reduced diffusion distance and increased surface area compared to bulk gels. Due to their unique thermally responsive behavior, poly(N-isopropylacrylamide) (pNIPAAm) microgels hold exceptional interest in broad research areas such as the environmental, biomedical, and energy fields. pNIPAAm hydrogels undergo a reversible volume phase transition at their lower critical solution temperature (LCST) around 32 °C. 2,3 Compared with other types of stimuli-responsive hydrogels, such as pH-responsive gels, the sharp transition of pNIPAAm from a hydrophilic to a hydrophobic phase is beneficial for achieving precise control and switching of the gel volume. 4 The thermally responsive property of pNIPAAm hydrogels also allows remote and localized control of deformation by incorporating photo-absorbers that convert photonic energy into local heat in the material, a capability not easily achieved with other types of stimuli-responsive hydrogels. 5,6 These unique properties of pNIPAAm microgels have attracted great attention for applications in drug delivery, [7][8][9][10][11][12][13] sensing, 6,14-17 separation, 18 and purification. 19 In biomedical fields, hydrogels are commonly used as cell-culturing scaffolds and have received increasing attention in applications such as tissue-engineered organoids (organ mimics). [20][21][22] Hydrogels have an elastic modulus similar to that of tissues and are biocompatible, which facilitates cell attachment and growth. Their porous microstructure also allows nutrient and waste diffusion into and out of the gel, which is essential for cell viability. Engineered tissue models are highly advantageous for gaining insight into disease pathophysiology and identifying new therapies. 20 Large microgels with diameters on the order of tens or hundreds of microns can be especially useful in tissue engineering for their close resemblance to the geometry of vascular, bone marrow, or alveoli-like tissues. 20,[23][24][25] Fabricating large pNIPAAm microgels with diameters of a few hundred microns could therefore provide new possibilities for organ mimics.
Conventional synthesis methods such as emulsion polymerization and precipitation polymerization can produce gel particles on the order of nanometers or submicrometers with low polydispersity. 26,27 Fabricating pNIPAAm microgels with diameters on the order of tens or hundreds of microns by emulsion synthesis can be challenging because larger droplets of pre-gel solution cannot be well suspended in the continuous phase. The templating method was developed to overcome this obstacle, where sacrificial mesoporous templates of the desired size are used to hold the pre-gel solution during emulsion polymerization. 28 Templating polymerization requires first trapping the pre-gel solution within the template particles and then removing the template after polymerization, which is associated with higher cost and extra steps. More advanced methods using microfluidic devices can produce large microgels on the order of tens or hundreds of microns in a single step, where droplets of pre-gel solution are generated in series inside a continuous phase, followed by polymerization of the monomers in these droplets. [29][30][31][32] Such microfluidic devices often require a capillary orifice that allows the continuous phase to hydrodynamically focus the pre-gel solution to create droplets. The glass capillaries are often drawn by hand, and it is therefore difficult to fabricate more than one reproducibly at a time. Microfluidic devices constructed using soft lithography or direct molding of the silicone elastomer poly(dimethylsiloxane) (PDMS) are amenable to parallel production and can thus be replicated in large quantities.
Water is often used as the solvent for microfluidic synthesis of pNIPAAm microgels because the aqueous N-isopropylacrylamide (NIPAAm) monomer solution is immiscible with a wide selection of organic solvents. 26,31,33,34 Aqueous droplets containing the monomers generated inside the microfluidic device are crosslinked using the thermally activated initiator ammonium persulfate (APS). Additional infusion of an accelerator, N,N,N′,N′-tetramethylethylenediamine (TEMED), is needed after droplet generation to initiate the polymerization at room temperature. Water-based synthesis of pNIPAAm microgels therefore requires multiple inlets and is associated with more complex channel structures. As NIPAAm becomes less miscible in water at elevated temperatures near or above 32 °C, the polymerization must be controlled to proceed slowly to avoid precipitation of pNIPAAm from the aqueous phase induced by polymerization heat. To prevent merging between droplets prior to full polymerization, surfactants are often added as particle stabilizers. However, the presence of surfactants has been reported to interfere with the polymerization process, increase the LCST of the produced microgels, and affect cell viability. [35][36][37][38] Therefore, laborious sequential washing and centrifugation steps have been used to purify and sediment the gel particles. A surfactant-free method to produce microgels is needed to resolve the trade-off between simplifying fabrication and preserving microgel quality.
Here we present a surfactant-free method for fabricating pNIPAAm microgels based on water-free rapid polymerization in a PDMS microfluidic device. The produced microgels are easily collected and purified owing to the surfactant-free process. The microfluidic device is also simplified to a single straight channel thanks to the single-step infusion of all reactants. Detailed characterization of the synthesized pNIPAAm hydrogels was conducted to investigate the influence of solvent and flow rate on their microstructure, geometry, and thermal properties.
B. Fabrication of microfluidic device
The microfluidic channel pattern was cut in a 300 μm thick double-sided adhesive sheet using a laser cutter. The channel has dimensions of 14 mm (L) × 800 μm (W) × 300 μm (H) with two inlet holes and one outlet hole. The PDMS sealing layer was fabricated by mixing SYLGARD 184 elastomer base and catalyst at a 10:1 ratio; the mixture was degassed and molded into a 3 mm thick PDMS sheet. Inlet and outlet holes in the PDMS sealing layer were punched using a 0.8 mm diameter biopsy punch before device assembly. The sandwich microfluidic device was fabricated by placing the laser-cut double-sided adhesive sheet on a glass substrate and then stacking the PDMS sealing layer on top of the double-sided adhesive [Fig. 1(a)]. All three components were oxygen-plasma treated for 2 min before assembly to improve adhesion between layers.
C. Synthesis of microgels
The pre-gel solution containing 20 wt% NIPAAm, 1 wt% BIS, and 1 wt% Daracure 1173 in DMSO and the continuous phase n-octane were separately injected into the assembled microfluidic channel using syringe pumps (Harvard Apparatus Ph.D. Ultra & Harvard Apparatus 55-1111). These two immiscible fluids form a biphasic laminar flow through the microfluidic channel, and at the outlet the pre-gel solution forms droplets in the continuous n-octane phase, as discussed in detail in Sec. III. The flow rates of the two inlets were controlled separately, allowing the size and geometry of the produced pre-gel droplets to be precisely controlled and tuned. The pre-gel droplets were carried downstream through the outlet tubing and passed under a 40 W UV fiber lamp; the monomers in the pre-gel droplets were thus photopolymerized and UV-crosslinked into solid hydrogel droplets. The produced microgels were sedimented by gravity in an ethanol bath for collection. The control sample of pNIPAAm microgels synthesized in water was fabricated following the same procedure, with a pre-gel solution containing 20 wt% NIPAAm, 1 wt% BIS, and 1 wt% Daracure 1173 in DI water.
D. Microgel purification
The supernatant was removed from the ethanol bath to remove n-octane, DMSO, and unreacted monomers dissolved in ethanol. The microgels were then placed inside a 50°C oven overnight to completely evaporate the residual n-octane, DMSO, and ethanol. The dried microgel samples were rehydrated with water for further characterization.
E. Synthesis of pNIPAAm hydrogels in water and DMSO
Bulk pNIPAAm hydrogels were synthesized in water and DMSO to study the solvent effect on hydrogel microstructures. 20% NIPAAm + 1% BIS + 1% APS in water and 20% NIPAAm + 1% BIS + 1% Daracure 1173 in DMSO were used as the pre-gel solutions. 500 μL of each pre-gel solution was pipetted into a 2 mL glass vial. For polymerizing the pNIPAAm hydrogel in the water solvent, 10 μL of TEMED was injected into the vial followed by vortexing for 2 s; the gelation process finished in around 10 s. For polymerizing the pNIPAAm hydrogel in DMSO, the glass vial containing the pre-gel solution was exposed to a 40 W UV fiber lamp for 10 s to obtain a crosslinked gel.
F. Characterization of pNIPAAm hydrogels
The pNIPAAm hydrogels synthesized using water/ DMSO as the solvent were immersed in deionized water for 48 h to remove unreacted monomers. The hydrogels were cut open to expose internal structures and freeze-dried to preserve the 3D structure of the polymer networks. The dried samples were mounted onto scanning electron microscopy stubs using conductive tape and imaged using ZEISS Supra 40VP SEM (Carl Zeiss AG, Toronto, Canada).
The morphology of pNIPAAm microgels was imaged using a Leica DMI 6000B (Leica Microsystems, Buffalo Grove, Illinois) optical microscope. Microgel solution was put inside a petri dish for characterization using transmission bright field mode of the microscope. The focal plane was carefully adjusted to show the largest cross-sectional area of the microgels.
G. Measurement of swelling ratio
The microgels synthesized using water/DMSO as the solvent were immersed in deionized water inside a petri dish for at least 48 h to reach equilibrium prior to measurement. A PID-controlled heater and a temperature sensor were fixed inside the petri dish, so the water temperature could be accurately maintained at the set point through automated on/off switching of the heater. The microgel size was recorded using an optical microscope at different bath temperatures, and the volume of the microgel was calculated by assuming a spherical geometry. The swelling ratio by volume was calculated as V/V_0, where V_0 is the equilibrium volume at room temperature and V is the volume recorded at each temperature. For the LCST measurement, the bath temperature was kept at each point for 20 min to allow the gel to reach its equilibrium volume. For the deswelling/swelling rate measurement, the pNIPAAm microgels were restricted to a small area inside the empty petri dish by a spacer prior to measurement. Water at ~50 °C was poured into the petri dish to initiate deswelling. The cross-sectional size of the microgels was captured using the optical microscope at 2-s frame intervals, and the swelling ratios were calculated using the above formula. The reswelling rate of the pNIPAAm microgels was measured by first extracting the hot water from the petri dish and refilling it immediately with ice water. The recording process and volume conversion were the same as above.
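For readers reproducing the volume conversion, a minimal Python sketch of this calculation is given below; the function name and the example diameters are illustrative only and assume the spherical-geometry approximation described above.

```python
import numpy as np

def swelling_ratio(diameters_um, d0_um):
    """Convert measured cross-sectional diameters to a volume swelling ratio V/V0,
    assuming the microgel stays spherical so that V scales with d**3."""
    d = np.asarray(diameters_um, dtype=float)
    return (d / d0_um) ** 3

# Illustrative values only: diameters (in micrometers) read off the optical
# micrographs at increasing bath temperatures, with d0 measured at room temperature.
d0 = 304.0
measured = [304.0, 295.0, 270.0, 215.0]
print(swelling_ratio(measured, d0))  # -> approximately [1.00, 0.91, 0.70, 0.35]
```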
A. Pre-gel droplet formation
In conventional synthesis of microgels, surfactants are used as particle stabilizers to prevent merging between colliding pre-gel droplets. To eliminate the use of surfactants, rapid polymerization of the pre-gel droplets must be achieved, so that the droplets are fully polymerized before collisions can happen, ensuring the integrity of each particle. However, rapid polymerization may not be desirable when water is used as the solvent because of the insolubility of pNIPAAm at elevated temperature. In this study, we used DMSO as the solvent for polymerizing pNIPAAm microgels because pNIPAAm does not exhibit the volume phase transition in DMSO. Rapid polymerization of pNIPAAm microgels could thus be achieved through water-free synthesis.
pNIPAAm microgels were fabricated in the microfluidic device using DMSO as the pre-gel solution solvent. The immiscible liquid n-octane was used as the continuous phase. A photo-initiator and UV flooding were used to realize the rapid polymerization chemistry. Owing to the one-step infusion of all reactants, the microfluidic device is simplified to a single channel, in contrast to conventional designs where multiple channels are needed for step-wise infusion of reactants. The device simplification also benefits from the water-free chemistry. The pre-gel droplet formation mechanism is discussed below.
The Reynolds number is an important dimensionless quantity that predicts the flow pattern of liquids inside microfluidic channels. For a channel with a rectangular cross section, it can be estimated from the hydraulic diameter as Re = 4ρQ/(μP), where Q is the volumetric flow rate of the liquid, ρ is its density, P is its wetted perimeter, and μ is its dynamic viscosity. The pre-gel solution using DMSO as the solvent has a Reynolds number between 0.11 and 0.63 when the flow rate changes from 2 to 20 μL/min, while the continuous phase has a corresponding Reynolds number between 1.84 and 2.35. The low Reynolds numbers of the flowing liquids indicate laminar flow inside the microfluidic channel. This was confirmed by adding Rhodamine B as a fluorescent dye to the pre-gel solution for visual observation of the droplet generation process (Rhodamine B was not included in the actual microgel synthesis and was used here for imaging purposes only). Side-by-side biphasic laminar flow of the pre-gel solution and the continuous phase formed between the two immiscible liquids inside the channel [Fig. 1(b)] and was maintained along the full length of the microfluidic channel. The interface between the two liquids decreased significantly as the biphasic laminar flow entered the narrower outlet tubing, and the laminar flow became less stable with the decreasing wetted perimeter. The pre-gel stream was hydrodynamically focused by the continuous-phase stream at the narrower outlet and broke into monodisperse droplets as the two liquids entered the outlet tubing. The formed pre-gel droplets were carried downstream by the continuous phase and photo-polymerized before collisions could occur between them.
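A small Python sketch of this Reynolds-number estimate is given below. The fluid properties are typical literature values rather than figures taken from this work, and the full channel perimeter is used for both phases; in the actual biphasic flow each liquid wets only part of the channel, so the per-phase values quoted above differ somewhat from this simple estimate.

```python
# Minimal sketch of the hydraulic-diameter Reynolds-number estimate, Re = 4*rho*Q / (mu*P).
# Fluid properties below are assumed literature values, not numbers reported in this paper.

def reynolds(Q_uL_per_min, rho_kg_m3, mu_Pa_s, wetted_perimeter_m):
    Q = Q_uL_per_min * 1e-9 / 60.0            # convert uL/min to m^3/s
    return 4.0 * rho_kg_m3 * Q / (mu_Pa_s * wetted_perimeter_m)

P_channel = 2 * (800e-6 + 300e-6)             # full perimeter of the 800 um x 300 um channel

# Continuous phase: n-octane at 50 uL/min (rho ~703 kg/m^3, mu ~0.54 mPa*s assumed)
print(reynolds(50, 703, 0.54e-3, P_channel))  # ~2, well within the laminar regime

# Dispersed phase: DMSO pre-gel solution at 2-20 uL/min (rho ~1100 kg/m^3, mu ~2 mPa*s assumed);
# the narrower per-phase wetted perimeter in the real biphasic flow raises these values.
for q in (2, 20):
    print(q, reynolds(q, 1100, 2e-3, P_channel))
```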
B. Size and geometry control

pNIPAAm microgels with well-controlled sizes and geometries were successfully produced by varying the pre-gel solution (DMSO solvent) flow rate from 2 to 20 μL/min while the continuous-phase flow rate was kept at 50 μL/min. Higher flow rates were attempted but tended to cause failure at the bonded PDMS–double-sided adhesive interface due to high internal pressure. The morphologies of microgels synthesized in DMSO solvent at different pre-gel solution flow rates are shown in Fig. 2. The produced microgels were spherical when the pre-gel solution flow rate was no higher than 3.5 μL/min [Figs. 2(a) and 2(b)], with lower pre-gel solution flow rates producing smaller particle sizes. This result is consistent with the higher local shear created at the outlet between the two phases at a higher continuous-phase-to-pre-gel-solution flow rate ratio. The microgels' shape changed from spherical to ellipsoidal and then to capsule-shaped with increasing pre-gel solution flow rate [Figs. 2(c)-2(e)]. This resulted from the size restriction of the outlet tubing, where larger droplets were forced to elongate along the tubing direction. The axis perpendicular to the tubing was maintained at ~400 μm for microgels produced at pre-gel solution flow rates higher than 3.5 μL/min. Nonuniform beads composed of mixtures of microspheres, microcapsules, and long rods were produced when the pre-gel solution flow rate exceeded 15 μL/min [Fig. 2(f)]. With more pre-gel solution flowing through the channel, a larger number of pre-gel droplets were generated at higher density and therefore had a higher probability of merging with each other before being fully polymerized. The ability to easily manipulate the microgels' geometry by simply tuning the flow rates and their ratio has potential for fabricating microgels with complex structures for broad applications.
The produced microgels showed low polydispersity. The size distribution histogram of the spherical microgels synthesized at a pre-gel solution flow rate of 2 μL/min is shown in Fig. 3. Two hundred microgel particles were randomly chosen and their diameters were measured using ImageJ. The microgels have a mean particle diameter of 304 μm and a standard deviation of 19 μm. The dispersity index was as low as 6.4%, compared with the relatively higher particle size variation of 10% reported for other methods (Ref. 31), which shows that this simply structured microfluidic device is capable of stably producing uniform microgels with low polydispersity.
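Assuming the dispersity index is the relative standard deviation (coefficient of variation) of the measured diameters, which is consistent with the numbers quoted above (19/304 ≈ 6.3%), the calculation can be sketched as follows; the diameter array here is synthetic and stands in for the 200 ImageJ measurements.

```python
import numpy as np

# Synthetic stand-in for the 200 measured diameters (micrometers); the dispersity
# index is taken here as std/mean of the diameters, an assumption of this sketch.
rng = np.random.default_rng(0)
diameters = rng.normal(loc=304.0, scale=19.0, size=200)

mean_d = diameters.mean()
std_d = diameters.std(ddof=1)
dispersity_index = std_d / mean_d * 100
print(f"mean = {mean_d:.0f} um, std = {std_d:.0f} um, dispersity = {dispersity_index:.1f}%")
```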
C. Solvent effect on the microstructure of pNIPAAm hydrogels

Since pNIPAAm undergoes its unique volume phase transition around 32 °C in an aqueous environment, localized polymerization heat may lead to insolubility or even precipitation of NIPAAm monomers out of the aqueous phase during polymerization of a pNIPAAm hydrogel. Figure 4 compares the color and microstructure of pNIPAAm hydrogels synthesized using water [Fig. 4(a)] and DMSO [Fig. 4(b)] as the solvent. The amount of initiator was tuned so that both samples polymerized within the same short period of 10 s. The pNIPAAm hydrogel polymerized in water turned white, while the hydrogel polymerized in DMSO stayed transparent. The white gel did not become transparent again even after immersion in water for over 24 h. The permanent white color of the pNIPAAm hydrogel synthesized in water suggests that the formed polymer network has a microstructure that cannot rehydrate in water, as verified by the SEM images [Fig. 4(a)].
Microscopic analysis under SEM shows that the pNIPAAm hydrogels synthesized in water have an agglomerated polymer network with pore sizes of around 1-5 μm. By contrast, pNIPAAm hydrogels synthesized in DMSO show a more uniform microstructure with submicron pores. The agglomerates in the pNIPAAm hydrogels synthesized in water may result from hydrophobic aggregation of the pNIPAAm networks when the polymerization heat is generated fast enough to raise the sample temperature above the LCST of pNIPAAm. The formed agglomerates were difficult to rehydrate in water, which corresponds to the macroscopically unrecoverable white gel. Therefore, synthesizing pNIPAAm hydrogels in water tends to produce agglomerated polymers that have minimal interaction with water, which may lead to low thermosensitivity, a reduced swelling ratio, and a more gradual volume phase transition. Using an organic solvent such as DMSO in place of water is more advantageous because it not only accommodates the rapid polymerization that allows for a surfactant-free synthesis of pNIPAAm hydrogels but also maintains or even enhances hydrogel quality.
D. Thermally responsive behaviors of rapidly polymerized pNIPAAm microgels
The swelling ratios of the pNIPAAm microgels synthesized in water and DMSO are shown in Fig. 5. The insets in Fig. 5 are the corresponding optical images of microgels at 26 and 38 °C. Here, we use the microgels fabricated at a pre-gel solution flow rate of 2 μL/min because the spherical microgels deform isotropically in all directions, so the particle volume can be accurately converted from the cross-sectional area captured under the optical microscope. The microgels produced in DMSO show a large volume reduction of 64% when the temperature increases from 25 to 38 °C. By contrast, the microgels produced in water show a much smaller volume reduction of only 28%, which confirms that with an aggregated polymer network the swelling ratio of pNIPAAm microgels synthesized in water decreases significantly, defeating the intrinsic thermoresponsive function of pNIPAAm. Additionally, 90% of the total volume reduction took place between 30 and 33 °C for the microgels produced in DMSO. The LCST of the microgels produced in DMSO is the transition mid-point, 31.5 °C, in accordance with the known volume phase transition temperature of around 32 °C for pNIPAAm. The LCST of the produced microgels was thus not affected by using DMSO as the solvent or by the rapid synthesis. By contrast, the 90% volume-reduction temperature range expanded to between 28 and 33 °C for the microgels produced in water. This result suggests that the known sharp volume phase transition behavior becomes more gradual when pNIPAAm is rapidly polymerized in water.
The response rate of the pNIPAAm microgels was characterized by plotting the swelling ratio versus time. Figure 6(a) shows the deswelling rate of the microgels. The volume of the microgels decreased exponentially over 70 s after immersion in 50 °C water. The microgels produced in DMSO show a steeper slope than the microgels produced in water. To shrink to 80% of the original volume, microgels produced in DMSO took 8 s while microgels produced in water took 65 s, a more than 8-fold faster deswelling rate. The reswelling of the same particles is shown in Fig. 6(b); a faster swelling rate was again observed for the microgels produced in DMSO, in comparison with the relatively slower reswelling and the hysteresis observed for the microgels produced in water. This further indicates the high quality and good performance of the microgels produced by our method. The total volume reduction captured during the 70-s rapid deswelling was smaller than the volume reduction at equilibrium. This could be explained by the continuously decreasing pore size during shrinking, which slowed down water diffusion out of the gel. The reswelling rates were 8-10 times slower than deswelling, and the microgel particles recovered their original volume after 200 s. The slower reswelling process is in accordance with the trend seen in most osmotically driven gels. Overall, the pNIPAAm microgels synthesized in DMSO maintained the desirable thermal response rate, comparable to the standard swelling/deswelling kinetics of pNIPAAm. 39,40 The fast thermal responsiveness and large volume-change ratio exhibited by our pNIPAAm microgels eliminate the compromised thermal volume-change behavior associated with conventional fabrication methods.
IV. CONCLUSION
In this work, we presented a surfactant-free method for synthesizing pNIPAAm microgels with low polydispersity based on water-free rapid polymerization. This method allows for device simplification and streamlined experimental procedures without sacrificing microgel quality. Easy manipulation of microgel size and geometry was achieved by tuning the flow rate of the pre-gel solution. Microgels produced with our proposed method showed a low dispersity index of ~6.4% and exhibited sharp volume phase transition behavior as well as ~8 times faster response rates in comparison with microgels produced in water. The proposed fabrication method could have broad application in sensing, separation, and biomedical research that involves the production of large quantities of microgels with good thermal responsiveness. The chemistry and analysis presented could also provide insight into emulsion fabrication and fluid dynamics.
VetTag: improving automated veterinary diagnosis coding via large-scale language modeling
Unlike human medical records, most veterinary records are free text without standard diagnosis coding. The lack of systematic coding is a major barrier to the growing interest in leveraging veterinary records for public health and translational research. Recent machine learning efforts have been limited to predicting 42 top-level diagnosis categories from veterinary notes. Here we develop a large-scale algorithm to automatically predict all 4577 standard veterinary diagnosis codes from free text. We train our algorithm on a curated dataset of over 100 K expert-labeled veterinary notes and over one million unlabeled notes. Our algorithm is based on an adapted Transformer architecture, and we demonstrate that large-scale language modeling on the unlabeled notes, via pretraining and as an auxiliary objective during supervised learning, greatly improves performance. We systematically evaluate the performance of the model and several baselines in challenging settings where algorithms trained on one hospital are evaluated in a different hospital with substantial domain shift. In addition, we show that hierarchical training can address severe data imbalances for fine-grained diagnoses with few training cases, and we provide interpretation of what is learned by the deep network. Our algorithm addresses an important challenge in veterinary medicine, and our model and experiments add insights into the power of unsupervised learning for clinical natural language processing.
Dataset Details
We provide additional descriptive statistics of the dataset below. The training and evaluation CSU dataset, the external evaluation PP dataset and the unsupervised learning PSVG dataset are different due to the nature of the clinics. This additional information allows us to quantify the domain mismatch between CSU, PP and PSVG.
Length Distribution
We plot a histogram to show the proportion of records of a given length in each dataset in Supplementary Figure 1. Noticeably, CSU and PP follow the SOAP format ("Subjective, Objective, Assessment, Plan"), but the PSVG data, due to an API limitation, do not strictly follow this format. PSVG notes contain the "History, Plan, Physical Exam" sections of the electronic medical record data. As a result, PSVG notes are much shorter than CSU and PP notes.
Number of Labels Per Document Distribution
We plot a histogram to show the proportion of records with a given number of labels in each labeled dataset in Supplementary Figure 1. We do not have any labeled notes from PSVG, and hence it is not included in the plot.
Species Distribution
We plot pie charts to show the proportion of species in each labeled dataset in Supplementary Figure 2. The CSU dataset contains a fair number of notes across different species (e.g. equine, bovine, etc.), while PP, being a suburban private clinic, is much more focused on house pets.
Model Details
We formulate the problem of veterinary diagnosis coding as a multi-label classification problem. Given a veterinary record X, which contains a detailed description of the diagnosis, we try to infer a subset of diagnoses y ⊆ Y, given a pre-defined set of diagnoses Y. The problem of inferring a subset of diagnosis codes can be viewed as a series of independent binary prediction problems 1 . Each binary classifier learns to predict whether a diagnosis code y_i is present or not, for i = 1, ..., m, where m = |Y| = 4577.
Our learning system has two components: a text encoder module and diagnosis code prediction module. In our work, we evaluated three text encoder modules: the convolutional neural network (CNN), the long short-term memory network (LSTM), which has demonstrated its effectiveness in learning implicit language patterns from the text 2 , and the Transformer network, recently developed and proposed by Vaswani et al. 3 . Our diagnosis code prediction module consists of binary classifiers that are parameterized independently for each diagnosis.
Text Encoder
CNN The convolutional neural network (CNN) has been demonstrated to be effective for many NLP tasks 4 . Given a sequence of word embeddings x_1, ..., x_T, we apply a convolution operation with a window size of h (words) and a max-over-time pooling operation to obtain the summary vector c. The computation is described in Eq 1, where ⊕ and ⊗ indicate the concatenation operator and the convolution operator, and tanh is the hyperbolic tangent function.
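A minimal PyTorch sketch of such a CNN encoder, using the hyperparameters listed later (768-dimensional embeddings, 384 kernels of width 4), is shown below; it is an illustrative reconstruction, not the authors' code.

```python
import torch
import torch.nn as nn

# Sketch of the CNN text encoder: a 1-D convolution over word embeddings with
# window size h, followed by max-over-time pooling to produce the summary vector c.
class CNNEncoder(nn.Module):
    def __init__(self, embed_dim=768, num_kernels=384, window=4):
        super().__init__()
        self.conv = nn.Conv1d(embed_dim, num_kernels, kernel_size=window)

    def forward(self, x):                              # x: (batch, T, embed_dim)
        h = torch.tanh(self.conv(x.transpose(1, 2)))   # (batch, num_kernels, T - window + 1)
        c, _ = h.max(dim=-1)                           # max-over-time pooling
        return c                                       # (batch, num_kernels)

enc = CNNEncoder()
print(enc(torch.randn(2, 600, 768)).shape)             # torch.Size([2, 384])
```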
LSTM The long short-term memory network (LSTM) is a recurrent neural network with a long short-term memory cell 5 . A common LSTM network is composed of a hidden state h_t, a cell state c_t, an input gate i_t, an output gate o_t and a forget gate f_t. It maintains semantic gating functions specifically designed to capture long-term dependencies between words. Given a sequence of word embeddings x_1, ..., x_T, the recurrent computation of the LSTM network at time step t is described in Eq 2, where σ is the sigmoid function σ(x) = 1/(1 + e^{−x}) and ⊙ indicates the Hadamard product.
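The corresponding LSTM encoder can be sketched with PyTorch's built-in LSTM, using the final hidden state(s) as the summary vector; again this is an assumed implementation, not the released code.

```python
import torch
import torch.nn as nn

# Sketch of the LSTM text encoder: run an LSTM over the word embeddings and use
# the final hidden state (concatenated over directions) as the summary vector c.
class LSTMEncoder(nn.Module):
    def __init__(self, embed_dim=768, hidden=768, bidirectional=False):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                            bidirectional=bidirectional)

    def forward(self, x):                              # x: (batch, T, embed_dim)
        _, (h_n, _) = self.lstm(x)                     # h_n: (num_directions, batch, hidden)
        return torch.cat([h for h in h_n], dim=-1)     # (batch, hidden * num_directions)

enc = LSTMEncoder(bidirectional=True)
print(enc(torch.randn(2, 600, 768)).shape)             # torch.Size([2, 1536])
```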
Transformer The Transformer network was proposed by Vaswani et al. as a machine translation architecture 3 . We use a multi-layer Transformer setup similar to the one in Radford et al. 6 . The Transformer network is a feed-forward network that starts at the word embedding level. Given the word embeddings of a sequence x_1, ..., x_T ∈ R^d, we add a positional embedding to the sequence so that the model knows the location of each word. We define this positional embedding PE ∈ R^{T×d}, where T is an arbitrarily set maximum sequence length (usually much longer than the longest sequence in our training dataset); for notational convenience, we let it equal the sequence length T. Since PE is generated as a cyclical sine-cosine wave and never updated during training, we can easily generate PE for sequences longer than T. For i = 1, ..., d/2, the elements of the PE matrix are defined in Eq 3 (the symbols inside parentheses indicate the coordinates of the element):
PE(t, 2i) = sin(t/10000^{2i/d}), PE(t, 2i + 1) = cos(t/10000^{2i/d}). We define the first input to the Transformer network as H_0 = X + PE, where X = {x_1, ..., x_T} and PE is defined above; note that H_0 ∈ R^{T×d}. Then, for a given layer l, l > 0, we define a feed-forward transformer block as in Eq 4. We let W_v have dimensions (d/n) × d, so the resulting H^{(i)} has dimension T × (d/n). We additionally apply a mask M over the attention so that the model only looks at steps < t when it generates the token at step t. For the same layer l, we repeat the above computation n times. This is referred to as the multi-head attention computation, and n indicates the number of heads.
After the multi-head attention computation described above, we concatenate the n H^{(i)} matrices to obtain H̃_l = Concat(H^{(1)}, H^{(2)}, ..., H^{(n)}) ∈ R^{T×d}. We then apply a fully connected layer with a ReLU activation function to this matrix to obtain the final hidden representation of the sequence for layer l, H_l, as described in Eq 5. The matrix multiplications by W_{o1} ∈ R^{D×d} and W_{o2} ∈ R^{d×D} are referred to as a bottleneck computation, where D is much larger than d. The Transformer network repeats the above computation to construct a multi-layer network.
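The following sketch illustrates the two ingredients described above: the fixed sine-cosine positional embedding and one masked multi-head self-attention block, here built from PyTorch's nn.MultiheadAttention. Residual connections and layer normalization are omitted for brevity, and the class and variable names are illustrative rather than taken from the released implementation.

```python
import math
import torch
import torch.nn as nn

# Fixed sinusoidal positional embedding (Eq 3): sine on even coordinates, cosine on odd ones.
def positional_embedding(T, d):
    pe = torch.zeros(T, d)
    pos = torch.arange(T, dtype=torch.float).unsqueeze(1)
    div = torch.exp(torch.arange(0, d, 2).float() * (-math.log(10000.0) / d))
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe                                           # (T, d)

# One simplified transformer block: masked multi-head self-attention plus the
# bottleneck feed-forward layer (d -> D -> d) with ReLU.
class TransformerBlock(nn.Module):
    def __init__(self, d=768, n_heads=8, D=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d, D), nn.ReLU(), nn.Linear(D, d))

    def forward(self, h):                               # h: (batch, T, d)
        T = h.size(1)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        a, _ = self.attn(h, h, h, attn_mask=causal)     # each token attends only to steps < t
        return self.ff(a)

x = torch.randn(2, 600, 768) + positional_embedding(600, 768)
print(TransformerBlock()(x).shape)                      # torch.Size([2, 600, 768])
```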
Diagnosis Code Prediction
We define a binary classifier for each of the 4577 diagnosis codes in our pre-defined set. Each binary classifier takes in a summary vector c that represents the veterinary record and outputs a sufficient statistic for the Bernoulli probability distribution indicating the probability that the corresponding diagnosis should be predicted (Eq 6).
Flat Training We use binary cross-entropy loss averaged across all labels as the training loss for flat training. Given the binary predictions from the model ŷ ∈ [0, 1]^m and the correct binary labels y ∈ {0, 1}^m, the binary cross-entropy loss is L(ŷ, y) = −(1/m) Σ_{i=1}^{m} [y_i log ŷ_i + (1 − y_i) log(1 − ŷ_i)] (Eq 7). The decision boundary in our model is set to 0.5.
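A minimal sketch of the prediction module and the flat training loss, with placeholder inputs, could look as follows.

```python
import torch
import torch.nn as nn

# Sketch of the diagnosis-code prediction module: one independent binary classifier
# per code, trained with binary cross entropy averaged over all 4577 labels.
m = 4577
classifier = nn.Sequential(nn.Linear(768, m), nn.Sigmoid())
bce = nn.BCELoss()                        # averages over labels and examples

c = torch.randn(8, 768)                   # summary vectors from the text encoder (placeholder)
y = torch.randint(0, 2, (8, m)).float()   # gold multi-hot diagnosis labels (placeholder)
y_hat = classifier(c)

loss = bce(y_hat, y)
predictions = (y_hat > 0.5).int()         # decision boundary at 0.5
print(loss.item(), predictions.shape)
```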
Hierarchical Training In this setting, we first define an adjacency matrix M, where M_{ij} = 1 if diagnosis code i is a child of diagnosis code j in the SNOMED-CT hierarchy, and M_{ij} = 0 otherwise. During training, we generate a mask b ∈ [0, 1]^m based on a recursive definition (Eq 8):
b_i = 1 if y_j = 1 for the parent j of code i (i.e., M_{ij} = 1), and b_i = 0 otherwise. We can then apply this mask to both the ground-truth labels y and the predictions ŷ. This masking vector allows us to penalize the prediction of a diagnosis code only when its parent is present, thus greatly reducing the number of negative examples for rare diagnoses. We compute the hierarchical binary cross-entropy loss accordingly (Eq 9).
At inference time, we generate a masking vector b̃ by setting b̃_i = 1 when all the ancestors of diagnosis code i are predicted as true, and produce the final model prediction as ŷ · b̃.
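The hierarchical masking can be sketched as below; the adjacency matrix, helper names, and the tiny three-code example are hypothetical and simplified (e.g. root-level codes would need to be unmasked by default), but they illustrate how masking removes negative examples whose parent code is absent.

```python
import torch

# M[i, j] = 1 if code i is a child of code j in the SNOMED-CT hierarchy.
def training_mask(y, M):
    """b_i = 1 only if the parent of code i is present in the gold labels y."""
    return ((M @ y.unsqueeze(-1)).squeeze(-1) > 0).float()

def hierarchical_bce(y_hat, y, M, eps=1e-7):
    b = training_mask(y, M)
    per_label = -(y * torch.log(y_hat + eps) + (1 - y) * torch.log(1 - y_hat + eps))
    return (per_label * b).sum() / b.sum().clamp(min=1)   # average over unmasked labels only

# Toy example with 3 codes: code 1 is a child of code 0, code 2 a child of code 1
# (root codes are handled separately in a full implementation).
M = torch.tensor([[0., 0., 0.],
                  [1., 0., 0.],
                  [0., 1., 0.]])
y = torch.tensor([1., 1., 0.])
y_hat = torch.tensor([0.9, 0.8, 0.2])
print(training_mask(y, M))           # tensor([0., 1., 1.]) -> only children of present codes are scored
print(hierarchical_bce(y_hat, y, M))
```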
Experimental Setup
We describe our experimental setup in the following section. We truncate all documents to no more than 600 tokens, padded with start and end of sentence tokens. This step is helpful in reducing computational requirement.
Neural Network Architecture In order to have a fair comparison among the encoders (CNN, LSTM and Transformer), we set all latent dimensions to 768. For the CNN, we use 384 convolution kernels with a kernel size of 4. For the LSTM, we compare the performance of unidirectional and bidirectional LSTMs. For the Transformer, we stack 6 transformer blocks, with 8 heads for the multi-head attention on each layer, and set the feedforward dimension to 2048.
Pretraining All pretraining is conducted on the PSVG dataset. We investigate the effect of pretraining the word embedding (+W) and pretraining the encoder with language modeling objective (+P). In the word embedding pretraining, we use the Word2Vec algorithm 7 on the PSVG dataset. The word embedding dimension is set to 768. For the pretraining language modeling objective, we initialize word embeddings with Xavier initialization 8 and directly optimize − log P (X).
Training We implement our model in PyTorch. We use Noam Optimizer 3 with 8000 warm up steps. The dropout rate is set to 0.1 during training to reduce overfitting. All models are trained for 10 epochs. We use a batch size of 5 for each model, which is the maximum allowed to train VetTag on a single GPU.
MetaMap Baseline We use the popular MetaMap, a program developed by the National Library of Medicine (NLM) 9 , as a baseline. MetaMap processes a document and outputs a list of matched medically relevant keywords with their frequencies in the given document. We use MetaMap as a feature extractor, mapping each document into a frequency-encoded bag-of-words vector. The final feature vector size is 57,235. We perform the multi-label classification task with an SVM and an MLP on these feature vectors.
Comparing VetTag and DeepTag
DeepTag is designed to make predictions on the 42 top-level diagnosis categories 10 . We restrict VetTag's predictions to these top-level categories, except for clinical finding (the spurious category), in order to compare its performance head-to-head with DeepTag. Note that VetTag is optimized to predict not just these categories but all 4577 categories; hence the comparison is more favorable to DeepTag. We report the comparison in Supplementary Table 1. On the PP test data, VetTag substantially outperforms DeepTag for both F1 and exact match (EM). On the CSU test data, VetTag achieved a better EM score and a comparable F1 score to DeepTag. Supplementary Figure 3 provides the comparison of VetTag and DeepTag for the 20 most frequent categories, demonstrating the superior performance of VetTag.
Interpretation Details
We compute the standard saliency map for each input text; this is defined as the input vector multiplied by the gradient of the predicted probability with respect to the input. The saliency of each word quantifies the influence of that word on VetTag's predictions. For each of the 41 top-level diagnosis categories, we select the top 50 words with the highest saliency for that diagnosis, defined as the words with a saliency score ≥ 0.2 in the largest number of clinical notes labeled with that diagnosis. We then intersect the 50 most salient words with the MetaMap expert-curated dictionary in order to select the most medically relevant words. These words are shown in Supplementary Table 2.
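A sketch of this input-times-gradient saliency computation is given below; `model`, `diag_idx`, `tokens`, and `note_embeddings` are placeholders, and the real pipeline would batch over notes and diagnosis categories.

```python
import torch

# Sketch of input-times-gradient saliency. `model` maps word embeddings (T, d)
# to a per-diagnosis probability vector; `diag_idx` selects one diagnosis category.
def saliency(model, embeddings, diag_idx):
    x = embeddings.clone().requires_grad_(True)     # (T, d) word embeddings of one note
    prob = model(x)[diag_idx]                       # predicted probability for one diagnosis
    prob.backward()
    word_scores = (x * x.grad).sum(dim=-1)          # saliency of each word: input * gradient
    return word_scores.detach()

# Usage (placeholders): rank the words of a note by saliency for one category.
# scores = saliency(vettag_model, note_embeddings, diag_idx=category_index)
# top_words = [tokens[i] for i in scores.topk(50).indices]
```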
ELRA in the heart of a cooperative HLT world
This paper aims at giving an overview of ELRA's recent activities. The first part elaborates on ELRA's means of boosting the sharing of Language Resources (LRs) within the HLT community through its catalogues and the LRE-Map initiative, as well as its work towards the integration of its LRs within the META-SHARE open infrastructure. The second part shows how ELRA helps in the development and evaluation of HLT, in particular through its numerous participations in collaborative projects for the production of resources and of platforms that facilitate their production and exploitation. A third part focuses on ELRA's work on clearing IPR issues in an HLT-oriented context, one of its latest initiatives being its involvement in a Fair Research Act proposal to promote easy access to LRs for the widest community. Finally, the last part elaborates on recent actions for disseminating information and promoting cooperation in the field, e.g. the Language Library being launched at LREC 2012 and the creation of an International Standard LR Number, a unique LR identifier that enables the accurate identification of LRs. Among the other messages ELRA will be conveying to attendees are the announcement of a set of freely available resources, the establishment of an LR and Evaluation forum, etc.
Introduction
Over the last few years, ELRA has been involved in a number of international initiatives which are in line with its "original" mission and vision while accounting for new trends and the community's new expectations. These activities focus on the axes that have been ELRA's main concerns since its creation in 1995: developing the means for sharing Language Resources (LRs) within the widest community, helping in the development and evaluation of Human Language Technologies, disseminating information and promoting cooperation in the field, and, as a cross-cutting concern, understanding and clearing Intellectual Property Rights in an HLT-oriented context. ELRA, and its operational body ELDA, have been active players in such initiatives from the very beginning, through their well-established middleman role in the field as well as by contributing their expertise to the setting up of LR repositories, the evaluation of technologies, the production of LRs, and the establishment of standards, best practices and metadata schemas, among others. This paper aims at giving an overview of ELRA/ELDA's recent activities along these dimensions.
The ELRA catalogues
For 20 years now, the HLT community has seen an ever-growing supply of and demand for LRs. Among the organisations that have played a crucial role in developing the idea of gathering and sharing LRs in that field, ELRA (European Language Resources Association) 1 has been one of the precursors, in particular thanks to the setting up of its large catalogues of LRs, making them not only visible but also available to the community. The ELRA Catalogue 2 now offers nearly 1100 LRs available for various types of HLT applications in more than 60 different languages. A continuous and considerable effort has been made to publish quality LRs adapted to new requirements and the latest trends. For instance, ELRA recently enlarged its catalogue to offer LRs for sign language, text-to-speech, and audio-visual applications. In addition to our Catalogue of ready-for-distribution LRs, the Universal Catalogue 3 offers a compilation of world-wide LRs which have been identified by the ELRA/ELDA team. This catalogue represents an antechamber to the ELRA Catalogue, as well as a shop window of existing LRs for the community. Through the Universal Catalogue, users may discover existing but not-yet-available LRs that ELRA may help them gain access to. As a complement to this service, and with the continuing mission to support the identification of LRs, ELRA launched the LRE-Map 4 at LREC 2010. It is a mechanism intended to monitor the use and creation of LRs, implemented by collecting information on both existing and newly-created resources during the paper/abstract submission process [Calzolari et al. 2010]. It thus provides a portrait of the resources behind the community, of their uses and usability. Nearly 2,000 LR forms were filled in in 2010. The feature has been so successful that it has been implemented by other conferences such as COLING 2010 (International Conference on Computational Linguistics) and EMNLP 2010 (Conference on Empirical Methods in Natural Language Processing) and is very likely to be adopted by other major conferences in the future. Identifying the gaps in LRs has also been one of ELRA's ongoing tasks. ELRA continued the work initiated through the implementation and feeding of the BLARK (Basic LAnguage Resource Kit), which gives access to matrices 5 that highlight the needed resources and help identify the gaps in the LRs required for specific applications and for as many languages as possible. Such information is compiled and shared with the community; for example, in November 2011, ELRA co-organised the 2nd Less-Resourced Languages Workshop (a joint LTC-ELRA-FLaReNet-META_NET event) on Addressing the Gaps in Language Resources and Technologies 6 . Alongside these actions, ELRA is involved in the creation of a distributed repository network, whose work is highlighted in the following section.
Towards the Integration of LR Repositories
As a step forward building on the work expressed through its catalogues, ELRA is sharing its knowledge and experience by being involved in the very latest initiatives in Europe, which have converged on large cooperation networks. Since 2009, ELRA, through ELDA, has been part of the META-NET 7 (Multilingual Europe Technology Alliance) Network of Excellence and is involved in the development of an Open Resource Infrastructure, the META-SHARE action. This infrastructure aims at providing an "open, distributed, secure, and interoperable infrastructure for the Language Technology domain" 8 . It consists of a network of repositories/data centres accessible through a common interface. Two main actions have taken place towards this sharing:
• Technical implementation of a META-SHARE network of repositories, which has imported all LRs from the ELRA Catalogue and other players, and has thus enriched the infrastructure with over 1,000 LRs.
• Work towards the "specification of metadata-based descriptions for language resources and technologies" [Gavrilidou et al. 2011].
ELRA's work, combined with its regular internal effort to improve the description of the ELRA Catalogue, has aimed to emphasize the need to review currently existing LR metadata schemas and to design a standardised and interoperable schema for the current needs of the community. In particular, a large effort has been invested in describing new types of LRs that include modalities such as video or image (e.g. sign language LRs, multi-sensor and multi-modal data, etc.). These actions support the META-SHARE objective and spirit towards the sharing of resources within the community, as well as the harmonization of licenses and transaction modes.
Sharing Technologies
With the extensive experience gained through over 17 years of distributing LRs to the HLT field, ELRA has enlarged its activity to the sharing of LR-oriented technologies. One good example is ELDA's involvement in the PANACEA FP7 project 9 (Platform for Automatic, Normalized Annotation and Cost-Effective Acquisition of Language Resources for Human Language Technologies). Work has started towards building a factory of LRs that progressively automates the stages required for the acquisition, production, updating and maintenance of LRs. ELDA is leading the dissemination and exploitation work as well as the validation of the platform. Furthermore, we are actively involved in the setting up of the platform and in the evaluation of the crawling and MT technology developed within the project. In June 2011, ELDA also organised the first PANACEA Users' Workshop 10 , which aimed at gathering users of LRs for the development of their own business and technologies. The event was held in conjunction with the META-FORUM organised by META-NET, which again shows the interest in correlating actions around LRs and LR-oriented technologies.
Evaluation of Technologies
ELRA's dedication to activities in the evaluation of technologies started about 10 years ago. Since then, ELRA has been paving the way towards a more standardised way of evaluating HLT by offering an evaluation infrastructure and evaluation packages to the HLT community. Thanks to this expertise, ELRA has managed to be a core player in innovative evaluation projects across various technology areas. For the past couple of years, ELRA has been involved in evaluation campaigns in the following fields:
• Cross-language information retrieval and filtering
• Machine translation
• Multimedia and multilingual information systems
• Speech technologies, including speech recognition for spontaneous speech
• Topic, opinion and sentiment detection
• Spoken language understanding
• Named entity recognition
• Parsing
These evaluation campaigns were supported by online automatic evaluation interfaces provided by ELRA. Among recent large-scale projects, we can cite ELRA/ELDA's participation in the PROMISE project 11 (Participative Research labOratory for Multimedia and Multilingual Information Systems Evaluation), an EU Network of Excellence that started in September 2010. PROMISE is carrying on the work initiated through the Cross-Language Evaluation Forum (CLEF) since 2000 12 . ELDA is responsible for data acquisition, packaging, and IPR. Through this project, ELRA continues to play a role in experimental evaluations in the field of complex multimedia and multilingual information systems. In support of the development of HLT evaluation activities, ELRA created a web portal dedicated to this topic: the HLT Evaluation Portal 13 (http://www.hlt-evaluation.org) gathers information on a large number of technologies. Facing the need to keep such valuable information alive for the HLT community, the portal is now supported by the META-NET project. Interested parties are welcome to contact us to enrich the portal with information on evaluation tasks that are not yet covered.
Production of LRs
In line with its regular activities, ELRA is very active in producing, or commissioning the production of, LRs. This takes place both within the framework of European and international projects and in support of companies or institutions. So far ELRA has compiled LRs in more than 30 languages, being involved at every stage of production, from the establishment and definition of specifications and guidelines (e.g. multimodal annotation of videos for person identification 14 , http://www.defi-repere.fr) to the final quality-control details (e.g. quick quality checks or validation reports). ELRA is thus a privileged partner for innovative projects that require ambitious resources in terms of size and type of linguistic information as well as quality of the end result. All these have been main objectives for ELRA, which has produced through ELDA a large number of LRs for a wide variety of languages: English, "Indian" English, German, US Spanish, Catalan, Brazilian Portuguese, French, Canadian French, Moroccan French, Colloquial Arabic(s), Kazakh, Romanian, Czech, Turkish, Hindi, Korean, Chinese... Some of ELDA's recent achievements comprise: (i) broadcast news speech corpora, (ii) newspaper text corpora, (iii) aligned corpora for machine translation and speech translation, (iv) rich text annotation (named entities, opinions, feelings, etc.), (v) single and multi-modal annotations (e.g. audio, image, audiovisual), and (vi) specific data collections (e.g. SMS, written documents, Wizard-of-Oz-based recordings for dialogue systems, etc.). Among these achievements, we can mention ELRA's participation in cooperative projects: for instance, we have been involved in the creation of manually translated parallel corpora in different domains, ranging from medical to transcription data, for language directions such as Arabic-to-French, Chinese-to-English, English-to-German, French-to-German, German-to-English and German-to-French. These production activities have also contributed to enriching the ELRA Catalogue with new LRs.
Enlightening the HLT field to IPR Issues
One of ELRA's background tasks and creeds since its creation has been taking care of legal issues related to the exploitation and distribution of LRs. Throughout the years, ELRA has put forward the importance of clearing the legal issues that have to be dealt with at each step of the Language Resource lifecycle [Arranz et al. 2008]: specifications, production, validation, distribution, and maintenance. To improve HLT players' knowledge of the topic, ELRA organised a half-day workshop on "Legal Issues for Sharing Language Resources: Constraints and Best Practices" 15 [Mapelli 2010], as a satellite event to LREC 2010. In order to consider an international context from both academic and commercial horizons, the organising committee was composed of representatives from the following institutions: Linguistic Data Consortium, USA, Institut für Deutsche Sprache, Germany, and ELRA/ELDA, France. The workshop aimed at showing new lines of work in the field as well as possible new cooperation topics, and was a good opportunity for participants to share their views on the subject. Among the many interesting issues raised during the talks and the following panel discussion, the contribution of lawyers was of great value for understanding current international legal systems in comparison with the LR field's requirements and for presenting the variety of existing licenses (Creative Commons, GNU, etc.). Earlier in 2010, ELRA and FLaReNet had already initiated discussions on this issue by organising a special session on "Sharing or Not Sharing: Availability and Legal Issues" at the 2nd European Language Resources and Technologies Forum, on 11 February 2010 in Barcelona, Spain 16 . This session focussed in particular on legal stumbling blocks and possible solutions towards a sustainable sharing of LRs.
Fair Research Act
Since its creation, ELRA has been promoting the need to give easy access to LRs to the widest community, in particular for research activities. ELRA has been in discussion with a great number of major research institutions in order to identify the legal issues that could block the advances of the field. From these discussions, it became clear that copyrighted resources and intellectual property rights as defined in the Berne Convention or the European database directive (Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases) had to be reconsidered. Recently, ELRA and a number of partners have been advocating for the establishment of a specific exception dedicated to "the fair use of a copyrighted work for research purposes". Such an exception should consider that no rights are infringed as long as the LRs are used exclusively for research purposes. A number of actions have been undertaken to seek harmonization of such a fair use across countries.
Clarifying IPR
ELRA has not only been organising awareness events but is also actively involved in international actions to support the clarification of legal questions in the field. In particular, it is worth mentioning the valuable work carried out within META-SHARE. A detailed document was produced to consider all aspects of legal issues: analysis of currently existing licenses; consideration, clarification and comparison of IPR; and adaptation of licenses to the new HLT requirements. In particular, ELRA focused on the analysis of similarities between its licenses and the ones promoted by Creative Commons, with the intention of harmonizing such licenses. Moreover, ELRA has been carrying out its activities in the identification and negotiation of rights for the use of LRs within cooperative projects. Focusing on medical information analysis and retrieval, the KHRESMOI project 17 (Knowledge Helper for Medical and Other Information users) addresses both challenges of trustworthiness and complexity levels in online health information. This FP7 project started in September 2010. In the first year of the project, ELDA worked on clarifying the IPR framework of the resources being exploited in the project, both by identifying the respective rights and by defining lines of action to clear them throughout the 48-month project activity. This has been done in particular through the drafting of adapted licenses and the negotiation with the due owners. It is worth noting that some areas are even more challenging in terms of IPR negotiation: the medical area is one of them, mainly for privacy and confidentiality reasons; another is TV-radio broadcast, where multi-layer rights have to be considered in depth (several interlinked players/areas have to be considered: production, distribution, broadcast). As far as the latter is concerned, the French ANR-funded REPERE project (person recognition in TV shows) 18 , which started in March 2011, has been another good IPR-clearing challenge for ELDA. For this project, ELDA has faced a multiplicity of IPR ownership interlinked between producers, distributors and broadcasters. Recently, ELRA has extended its legal support beyond its membership base to assist producers and users of LRs: the Legal Support Helpdesk 19 is meant to provide support to those who have to deal with IPR issues while using, producing, sharing or distributing LRs.
Promoting Cooperation around Language Resources
Promotion and Information Dissemination
A recent initiative from ELRA towards mass collaboration around LRs is the Language Library, a new feature of LREC 2012. The rationale behind this initiative is that accumulation of massive amounts of multi-dimensional data about language is the key to foster advancement in our knowledge about language and its mechanisms. The objective is to gather and share part of the linguistic knowledge the field is able to produce, starting a movement aimed at collecting all possible annotations/encodings at all possible levels. Since its foundation, ELRA has been active not only at promoting the idea of sharing LRs, but also at gathering common expertise through its participation in multiple networks focusing on LRs.
Creation of a LR Unique Identifier
ELRA and a large number of Language Technology organizations have been debating the harmonisation of the identification of LRs. A consensus seems to be emerging regarding the set-up of a small executive committee, steered by a commission representing all key players in the field, data centers (ELRA, LDC, ALAGIN/GSK, C-LDC, …) and stakeholders (ACL, IAMT, ISCA, …), to assign each LR an International Standard Language Resource Number (ISLRN) [Park et al. 2012], independently of whether the LR is accessible on the Internet or an intranet, available or not, and whether it has a DOI, a local PId, etc. Such an ISLRN should guarantee that all LRs usable within our field get a unique identifier that can be used to distinguish them from others. The aim behind this proposal is to ensure the sustainability of LRs by providing them with a unique identification scheme using a standardised nomenclature. This will guarantee that LRs are recognised as proper references in the different activities within Human Language Technologies and within documents and scientific papers. For instance, this will allow resources that are sometimes named differently to be correctly identified as unique resources, and it will help catalogues (requiring a unique identification format) to manage data correctly, regardless of the LR type and physical location. From a management point of view, such an initiative requires the setting up of an ISLRN attribution body to manage the attribution, storage and consistency of the identifiers. It is thus planned that ISLRN attribution will be made by a small group of organisations involved in all LR distribution and sharing issues. This attribution body will set up an ISLRN server which will enable the attribution and validation of ISLRNs, based on a minimal metadata set that describes the LRs. The ISLRN will be assigned free of charge.
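To make the intended workflow concrete, the sketch below shows how a resource provider might submit such a minimal metadata set to an attribution server and receive an identifier in return. It is purely illustrative: the field names, endpoint URL and response format are assumptions and do not reflect the official ISLRN specification.

# Purely illustrative sketch of requesting an identifier for a resource from an
# attribution server; metadata fields, endpoint and response format are hypothetical.
from dataclasses import dataclass, asdict
import json
import urllib.request

@dataclass
class ResourceMetadata:
    title: str
    resource_type: str      # e.g. "speech corpus", "lexicon"
    languages: list
    producer: str
    version: str

def request_islrn(meta: ResourceMetadata, server="https://example.org/islrn") -> str:
    """Send the minimal metadata set and return the identifier assigned by the server."""
    payload = json.dumps(asdict(meta)).encode("utf-8")
    req = urllib.request.Request(server, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["islrn"]

# Example usage against the hypothetical server:
# islrn = request_islrn(ResourceMetadata("My Speech Corpus", "speech corpus",
#                                        ["fra"], "ELDA", "1.0"))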
Conclusions
As highlighted in this paper, ELRA continues to focus on its regular activities by keeping a close watch on the evolution of HLT and by taking part in new large-scale international cooperative projects, building on its long-lasting expertise.
Latest trends towards open infrastructures have led ELRA to renovate its policy and activities along the following five main axes: • Easy and free access to LRs: Anticipating users' expectations, ELRA has decided to offer a large number of resources free of charge to the research community. Such an offer will consist of several sets of speech, text and multimodal databases that will be released for free on a regular basis, with particular attention to an easy way of licensing them.
• Fostering the use of Public Sector Information: Being among the first to recognize the importance of public sector information, the Association joins forces with all stakeholders in the effort to reinforce and extend the free use of public sector information for research, technology and application development.
• Supporting META-NET / META-SHARE actions: As a founding member of META-NET and an important player of META-SHARE, ELRA will be involved in the management and standardisation of meta-data schema being built in META-SHARE. It will also support the legal helpdesk and will focus on the commonalities between ELRA licenses and the ones promoted by Creative Commons, with the intention to harmonize such licenses.
• Promoting collaborative and crowd-sourcing-based methods for building LRs: This task focuses on the new LR production paradigm based on crowd-sourcing. During the last year, the Association has been investigating the set-up of a dedicated platform for crowd-sourcing-based LR building.
• Establishing the Language Resources and Evaluation Forum (LRE-F): The Forum is open (but not limited) to scientists, students or professors involved in research activities in universities, small and medium companies or international groups; decision-makers or project managers in large public institutions; etc. It was established at LREC 2012, where participants were invited to join when registering for the conference. Among the services that members of the LRE-F will be offered, we can quote the following: free downloading of many resources from the ELRA Catalogue and the META-SHARE repository, access to the legal helpdesk, access to the LRE Map, the LR Library, access to the LRE Wiki, etc. Members of the community will also be encouraged to join so as to upload resources to the ELRA and/or ELRA META-SHARE repository and share them with other colleagues.
Good outcome in refractory cervical dystonia with combination of Deep Brain Stimulation of the globus pallidus internus nucleus and delayed selective peripheral rhizotomy, with five-year follow-up
Cervical dystonia is a common presentation of segmental dystonia and typically manifests as involuntary sustained contractions of the cervical musculature with abnormal head movements or postures, visible and often palpable, which become harmful and disabling. Medical treatment of cervical dystonia often renders limited benefit, and side effects are common. Dystonia can be treated by means of medication, botulinum toxin injections, and physiatry.1 In refractory cases, or in those that worsen over time despite treatment, surgery can be a valid option.2 Surgical treatment for refractory torticollis has been continuously developed with the aim of both improving outcomes and reducing complications. Most destructive procedures have become less used because of their complications, while others have been increasingly employed because they are considered safer and more effective.3‒6
Introduction
Cervical dystonia is a common presentation of segmental dystonia and typically manifests as involuntary sustained contractions of the cervical musculature with abnormal head movements or postures, visible and often palpable, which become harmful and disabling. Medical treatment of cervical dystonia often renders limited benefit, and side effects are common. Dystonia can be treated by means of medication, botulinum toxin injections, and physiatry. 1 In refractory cases, or in those that worsen over time despite treatment, surgery can be a valid option. 2 Surgical treatment for refractory torticollis has been continuously developed with the aim of both improving outcomes and reducing complications. Most destructive procedures have become less used because of their complications, while others have been increasingly employed because they are considered safer and more effective. [3][4][5][6]

Case report

As far as we know, from a bibliographical search in Medline, PubMed and other current international sources, this was the first publication reporting the effect of GPi-DBS in combination with delayed bilateral rhizotomy in cervical dystonia, and we have published it in a previous paper. 7 The present report is a continuation of the patient's follow-up.
We present a 58-year-old woman, currently without a hereditary history of dystonia; at the age of 52 she developed progressive contracture of the neck muscles and a feeling that her head was moving backwards. The symptoms gradually worsened until she adopted a permanent retrocollis posture. Treatment with levodopa and botulinum toxin injections (up to 500 IU) was performed without satisfactory results. Secondary causes of dystonia were ruled out. As she rapidly developed disabling clinical manifestations, none responding to medical therapy, a surgical approach was suggested, targeting the globus pallidus internus (GPi) nucleus.
Using a microregistration-guided technique, a quadripolar electrode (model 7428, Medtronic) was stereotactically implanted bilaterally in the GPi and fixed with the Stimloc system. The magnetic resonance images were processed with the WinNeus program to identify the coordinates. In a subsequent step, the electrodes were connected to a pulse generator (Kinetra; Medtronic). We currently employ this technique for the treatment of refractory cases in several movement disorders. 8 In the postoperative assessment we registered a clear improvement in the dystonic signs. Six months later, there was a 69% mean improvement in the BFMDRS total movement score (P<0.031, Wilcoxon signed-rank test). The mean BFMDRS disability score clearly improved (P<0.06). The total TWSTRS score improved 58% (P<0.044). There were no adverse events following the surgical implantation procedure. 7 Five years later, the dystonic symptoms dramatically returned despite adjustment of the stimulation parameters, so a selective peripheral bilateral rhizotomy was indicated, using the Bertrand technique 9 on the left side and the Taira technique 10 on the right, in order to reduce the risk of complications affecting voluntary neck muscle movements.
In the immediate postoperative control, one month after surgery, the results returned to values very close to those obtained after DBS implantation. Over five years of monthly follow-up, her clinical improvement has not significantly changed.
Discussion
Pallidal stimulation by DBS implantation is a current treatment for spasmodic torticollis, 11 but it has not solved all problems related to the disease, and occasionally results are incomplete or worsen with time. 12 Efforts are being made to improve results by means of technological advances 13 and target selection. 14 It is common practice in refractory craniocervical dystonia to perform DBS after peripheral surgery has failed, 15 but not the converse, as in our case.
Conclusion
We conclude that more studies and long-term monitoring are needed to determine more adequate indications for the beneficial combination of these two procedures in this sequential order, as suggested by the successful long-term outcome in this case.
The opening dynamics of the lateral gate regulates the activity of rhomboid proteases
Rhomboid proteases hydrolyze substrate helices within the lipid bilayer to release soluble domains from the membrane. Here, we investigate the mechanism of activity regulation for this unique but widespread protein family. In the model rhomboid GlpG, a lateral gate formed by transmembrane helices TM2 and TM5 was previously proposed to allow access of the hydrophobic substrate to the shielded hydrophilic active site. In our study, we modified the gate region and either immobilized the gate by introducing a maleimide-maleimide (M2M) crosslink or weakened the TM2/TM5 interaction network through mutations. We used solid-state nuclear magnetic resonance (NMR), molecular dynamics (MD) simulations, and molecular docking to investigate the resulting effects on structure and dynamics on the atomic level. We find that variants with increased dynamics at TM5 also exhibit enhanced activity, whereas introduction of a crosslink close to the active site strongly reduces activity. Our study therefore establishes a strong link between the opening dynamics of the lateral gate in rhomboid proteases and their enzymatic activity.
INTRODUCTION
Intramembrane proteases process transmembrane substrates directly within the lipid bilayer and are involved in numerous physiological and pathological processes. Notably, they play important roles in a wide range of diseases including neurodegenerative disorders, diabetes, and malaria (1-4). The bacterial rhomboid protease GlpG is the de facto model system for this class of intramembrane proteases, and its structure, dynamics, function, and inhibition have been studied in detail (5-9).
Unfortunately, little is known about the functional role of bacterial rhomboids, and until recently, only one bacterial substrate was studied in detail: the twin arginine receptor [twin arginine transporter A (TatA)], the natural substrate of AarA in Providencia stuartii (10). In 2020, the physiological substrate HybA of GlpG in Shigella sonnei was found, which has 99% sequence identity with Escherichia coli (E. coli) GlpG (11). It is involved in membrane protein quality control by specifically targeting components of respiratory complexes (11).
In order for GlpG to hydrolyze transmembrane substrates, they need access to the active site within the membrane bilayer. The active site consists of a catalytic dyad, formed by residues S201 and H254 in transmembrane helices (TM) 4 and 6, respectively. It is hydrophilic and therefore shielded from the hydrophobic acyl chains of the lipids, particularly by TM2 and TM5 (see Fig. 1A) and the L5 loop. The exact mechanism by which the substrate gains access to the active site has not yet been determined. Urban, Shi, and colleagues (5, 12) proposed that TM2 and TM5 form a lateral gate that is able to open and allows the substrate to enter (see Fig. 1C). This mechanism was inspired by crystallographic work in which TM5 and L5 can adopt either an "open" or a "closed" conformation (5). The extreme bending of TM5 was, however, questioned, because it might be an artifact of crystal packing. In contrast, Xue and Ha (13) suggested that the substrate enters from the top rather than laterally. In that case, the substrate does not enter the enzyme but bends over to reach into the active site. Consequently, only a slight displacement of TM5 is needed, while the L5 loop mostly influences substrate processing by acting as a cap (see Fig. 1C). In addition to these theories, substrate processing likely involves exosite binding, where the substrate binds to residues of TM2 and TM5 in order to be recognized and positioned for helix unwinding and active site binding (14).
The two different propositions have previously been probed by engineering disulfide bonds connecting TM2 and TM5, with the aim to understand whether lateral gate opening is required for access to the active site, and whether crosslinking of TM2 and TM5 would abolish the activity of GlpG. Specifically, the following three variants were investigated: L229C + Y160C (bottom), F232C + W157C (middle), and W236C + F153C (top). Note that the active site is closest to the top position (Fig. 1A). The interaction between TM5 and TM2 is mainly stabilized by hydrophobic and π-stacking interactions. Consequently, mutating the aforementioned residue pairs results in a more open conformation and in an increase of activity (12), while disulfide bonds will reestablish these connections.
Urban and colleagues (12) reported that the middle and bottom crosslinks could be formed by regular disulfide bridges, resulting in a loss of GlpG activity. However, they were not able to establish the crosslink at the top, adjacent to the active site. This limitation may have been a result of improper formation of the crosslink due to geometric constraints, and it therefore did not have the anticipated effect. Ha and colleagues (13) succeeded in properly establishing the top crosslink with the help of the linker 1,2-bis(methylsulfonylsulfanyl)ethane (M2M) (see Fig. 1B). However, they found that GlpG still maintained its activity in the crosslinked variant. This was presented as an argument against the "lateral gate" theory and in favor of the "top-down" theory of substrate access, because it is difficult to imagine how the substrate could enter laterally with the top crosslink established.
Since previous structural studies on GlpG were all based on detergent-solubilized proteins, we recently set out to study GlpG in liposomes by solid-state nuclear magnetic resonance (NMR) (15). Solid-state NMR is an ideal structural biology tool for this aim, as it has the unique ability to probe functional dynamics of membrane proteins in native-like lipid bilayers at physiological conditions and temperatures (16-20). In support of the lateral gate model, we observed irregular features of TM5 such as a kink at W236. Relaxation dispersion data further indicated the presence of a major and a minor state of TM5 that are in conformational exchange on a time scale of ~40 μs. The importance of investigations in the lipid bilayer is further highlighted by a very recent solid-state NMR study on GlpG, which shows that hydrophobic membrane thickness influences GlpG activity (21). Possibly, the thickness of the membrane could influence the conformational equilibrium of TM5 through lateral pressure and thereby the opening dynamics of the lateral gate.
To investigate the role of the lateral gate in the regulation of the activity of GlpG, the current work focuses on variants of GlpG that are either crosslinked, so that the motion of TM5 relative to TM2 is restricted, or carry alanine mutations that weaken the interaction between TM5 and TM2. The latter GlpG variants are expected to increase the dynamics of the gate, as it was previously reported that these variants entail an enhancement of activity (12,22). We systematically analyze these variants in a lipid bilayer environment using an integrated biophysical approach comprising solid-state NMR, molecular dynamics (MD) simulations, and a range of functional assays. Our data reveal a clear relationship between the opening dynamics of the TM5 gate and protease activity, supporting the lateral gating mechanism for substrate binding in the rhomboid proteases.
Correlation between gate opening dynamics and protease activity
According to the proposed lateral gate cleavage mechanism, the interactions between TM2 and TM5 are essential for the activity of GlpG. Therefore, by mutating residues involved in these interactions, altered activity should be achieved as described before (12). We evaluated this on individual F153A and W236A single mutants, as well as on the F153A + W236A double mutant. These variants potentially weaken the interaction between the TM2 and TM5 helices at the top and maybe with the surrounding lipids. Note that here and for all following results we studied a truncated version of GlpG, missing the cytosolic domain: GlpGΔN, i.e., GlpG 87-276 [see the Supplementary Materials; for simplicity, GlpG 87-276 is henceforth referred to as GlpG or wild-type (WT) GlpG].
A fluorescence-based proteolysis activity assay (23, 24) on all alanine mutants showed an increase in activity compared to WT GlpG (fig. S1). The single mutants F153A and W236A both showed approximately a 2.5-fold activity enhancement, while the double mutant (F153A + W236A) showed a synergistic effect with fivefold enhancement compared to WT GlpG. This is in line with previous results showing similar enhancement of the enzymatic activity in this double mutant (12). Our results indicate that enzymatic activity is quantitatively correlated with the opening dynamics of the lateral gate.
With the aim to characterize the opening dynamics of these mutants at the atomistic level, we performed solid-state NMR experiments. Intriguingly, the resulting spectra of F153A + W236A showed increased conformational heterogeneity compared to WT GlpG. Furthermore, the weakened interactions between TM5 and TM2 did not only affect the mutated region but the entire protein. The increased conformational heterogeneity also resulted in a marked decrease in sensitivity, preventing unambiguous assignments of the alanine mutants [see fig. S2 for a comparison of 1H-15N two-dimensional (2D) spectra].

Activity of gate-closing crosslinked variants

Next, we investigated whether preventing the opening of the TM5 helix by adding a crosslink between TM2 and TM5 decreases activity. To crosslink TM2 and TM5 at several positions, we generated three double mutant pairs: F153C + W236C, W157C + F232C, and Y160C + L229C (Fig. 2A). Compared to WT GlpG, the introduced cysteine mutations at the top of the TM2 and TM5 helices showed a similar behavior as the alanine mutations at the same position (Fig. 2B and fig. S3), increasing the activity threefold. In contrast, the double mutation at the middle part of TM2/TM5 (W157C + F232C) resulted in only a twofold increase in activity, while the double mutation at the bottom section of TM2/TM5 (Y160C + L229C) retained an activity similar to that of the WT. These results suggest that weakening the interaction of the top part of TM2 and TM5 is more efficient for enhancing protease activity. Crosslinking all three double cysteine mutants (F153C + W236C, W157C + F232C, and Y160C + L229C) via M2M strongly decreased the activity five- to eightfold, which, importantly, could be recovered to almost 100% by reversing the link with dithiothreitol (DTT) (Fig. 2B). Compared to WT GlpG, the activity of the mutants at the top and at the middle part of the TM5 helix decreased about two- to threefold. The activity was not completely abolished in the crosslinked variants, possibly due to the inefficiency of the crosslinking approach.
Active site not directly influenced by crosslinking
To probe the influence of the crosslinks on the active site, we conducted fluorophosphonate-based experiments (Fig. 3) (25). Fluorophosphonate probes specifically label the serine of enzymatically active serine hydrolases and, being coupled to tetramethylrhodamine (TAMRA), enable in-gel fluorescence detection (Fig. 3A). Here, we found that the fluorophosphonate was able to bind to the active site in all of the investigated mutants to a similar degree, including the crosslinked and uncrosslinked variants (Fig. 3B). This indicates that the structure and function of the active site are not perturbed by the presence of the crosslinks, and the observed changes in the rate of TatA processing are solely due to changes in site accessibility through the lateral gate rather than perturbation of the catalytic dyad.
We again complemented our functional data by solid-state NMR experiments to obtain structural insights at the atomic level. For this, we focused on the mutant with the most pronounced effect of the M2M crosslink, namely, F153C + W236C. Figure 4 shows a comparison between WT GlpG and the crosslinked GlpG F153C + W236C mutant. While a number of peaks match in the two spectra, there are some considerable differences (see Fig. 4A for an overlay of 2D N-Cα projections from 3D hCαNH spectra). The most noticeable difference is the complete absence of peaks originating from residues in TM5 in the spectrum of the crosslinked cysteine mutant. In addition to TM5, peaks corresponding to residues in TM1 are also absent in the crosslinked mutant (blue in Fig. 4B). There are also previously unknown peaks arising in the crosslinked double cysteine mutant (red in Fig. 4B) that could not be observed in WT GlpG: specifically, several residues in loop L1 (124 to 125, 138 to 140, and 142 to 144), the lower part of TM6 (264 to 267), and a few residues in other parts of the protein (97 in TM1, 165 in TM2, and 187 in TM3). Furthermore, among the peaks that could be observed in both samples (dark gray in Fig. 4B), we found a number of residues with considerable chemical shift differences in loop L1, around the active site (L3/TM4), and in loop L4 (Fig. 4C).
Note that because our solid-state NMR experiments were conducted on perdeuterated samples that were subsequently back-exchanged in water-based buffers, we could only extract information for solvent-exposed residues. Therefore, we propose two potential explanations for why the visibility of the peaks differs between the two samples: (i) the protein dynamics is considerably different between WT GlpG and the mutant; and (ii) hydrogen/deuterium (H/D) exchange differs due to variation of the protein-lipid interactions. Moreover, the fact that residues in TM5 cannot be observed in the crosslinked sample (H/D exchange should have taken place before the crosslink was established) suggests that the crosslinked GlpG variant is not in a rigid closed conformation as we previously anticipated, but rather shows substantial structural heterogeneity in TM5.
The relatively large chemical shift changes observed for several residues in L1 suggest an allosteric coupling between the TM2/TM5 dynamics and the conformation of loop L1. A comparison between spectra of the F153C + W236C mutant with and without crosslink unexpectedly did not reveal any noticeable chemical shift differences ( fig. S2), but rather a strong decline in sensitivity for the mutant without crosslink ( fig. S4). This suggests that crosslinking TM2 and TM5 does not lead to a considerable change in conformation. However, because we cannot detect any residues in TM5 for the cysteine mutants, regardless of whether they are crosslinked or not, we cannot conclude if the conformation of TM5 is affected by the crosslink. The considerable increase in sensitivity after the establishment of the crosslink suggests that the overall dynamics of the protein is strongly affected by the crosslink and that the crosslink leads to a more rigid protein conformation. This observation is further supported by a comparison of bulk 15 N R 1 and R 1ρ relaxation rates between the samples (fig. S4). The R 1 rates, dominated by fast ps-ns motion, are slightly lower in the cysteine mutants (0.035 s −1 without and 0.029 s −1 with crosslink) compared to WT (0.040 s −1 ). R 1ρ , dominated by slower (high ns-ms) motion, is increased in the cysteine mutant without crosslink (30.5 s −1 ) and decreased in the cysteine mutant with crosslink (21.2 s −1 ) compared to WT (25.7 s −1 ). While the differences in bulk relaxation rates are not very strong, they show a clear trend toward increased slow motion in the double cysteine mutant that is reversed when the crosslink is added. Calculation of the minimum distance between X153 and X236 (where X stands for the amino acid variation in the simulated models) showed that the distance between X153 and X236 remained large in all variants during the simulations of both the open and closed conformations (0.6 to 1.0 nm). This finding suggests a weak interaction between these two residues throughout all of the simulations. In strong contrast, WT GlpG shows a large difference in the distance of the two gating residues between closed (0. S8). Here, the entire TM5 helix becomes relatively mobile, most probably due to the diminishment of the TM2/TM5 interaction.
Docking of substrate TatA in the lateral gate
Last, we investigated whether the cavity formed by the opening of the TM5 helix allows binding of TatA, a well-characterized substrate of GlpG (10). For this, we performed protein-protein docking. In the top three ranked clusters, one cluster showed TatA binding to the interface of TM2 and TM5. In this cluster, we could identify poses where the cleavage site, the peptide bond between Ala8 and Ala9, is adjacent to the catalytic center (Fig. 6, A and B). In contrast, for the closed state, no docking pose could be predicted in the same region. It should be noted that here we only energy-minimized the docking pose of the substrate-bound GlpG, which was not further relaxed by MD simulations. Therefore, the docking poses shown in Fig. 6 should be considered only as an approximate model for the GlpG-TatA complex and do not show the unwinding of the helix that was previously predicted (27,28). However, these poses are sufficient to correlate substrate binding with the cavity between TM2 and TM5 because the exact time of the helix unwinding remains unknown.

As discussed above, for the double mutants F153A + W236A and F153C + W236C, the distance between TM5 and TM2 remains relatively large during both open and closed simulations. Consequently, we chose one final snapshot from the open simulations of F153A + W236A, where the extreme bending of the top section of TM5 was not observed. Here, we repeated the docking to investigate whether bending of TM5 away from TM2 is indeed necessary for substrate binding in this variant. Similar docking poses compared to open WT GlpG could be predicted for the F153A + W236A mutant, where the cleavage site of TatA is in proximity to the active center (Fig. 6, C and D). This result suggests that bending of the top part of TM5, as revealed in the open-state x-ray structure, is not required for substrate processing in these variants; without steric hindrance between TM2 and TM5, substrates such as TatA are ready to bind at the intramembrane exosite of GlpG. It should, however, be noted that the L5 cap remains open in the F153A + W236A and F153C + W236C double mutants.
As discussed above, for the double mutants F153A + W236A and F153C + W236C, the distance between TM5 and TM2 remains relatively large during both open and closed simulations. Consequently, we chose one final snapshot from the open simulations of F153A + F236A, where the extreme bending of the top section of TM5 was not observed. Here, we repeated the docking to investigate whether bending of TM5 away from TM2 is indeed necessary for the substrate binding in this variant. Similar docking poses compared to open WT GlpG could be predicted for the F153A + W236A mutant, where the cleavage site of TatA is in proximity to the active center (Fig. 6, C and D). This result suggests that bending of the top part of TM5 as revealed in the open-state x-ray structure is not required for substrate processing in these variants, while without steric hindrance between TM2 and TM5, substrates such as TatA are ready to bind at the intramembrane exosite of GlpG. It should, however, be noted that the L5 cap remains open in the F153A + W236A and F153C + W236C double mutants. 13 Cα projections from 3D hCαNH spectra of WT GlpG (blue) and crosslinked GlpG F153C + W236C (red). Examples of residues for which peaks are only present in one of the samples and residues for which peaks exhibit considerable chemical shift changes between the samples are indicated. ppm, parts per million. (B) Structure of GlpG with residues for which peaks are only present in WT GlpG indicated in blue and residues for which peaks are only present in crosslinked GlpG F153C + W236C indicated in red, residues for which peaks are present in both samples are indicated in dark gray, and residues for which no information is available are shown in light gray. (C) Absolute chemical shift differences for 15 N (dark gray) and 13 Cα (orange) between WT GlpG and crosslinked GlpG F153C + W236C. The spectra were recorded on 600-MHz (WT GlpG) and 900-MHz (crosslinked mutant) spectrometers using 1.9-mm probes operating at 40-kHz magic angle spinning and at a sample temperature of ca. 20°C.
DISCUSSION
Conflicting theories have been proposed regarding how the substrate accesses the catalytic center of GlpG. In this study, we used a combination of functional assays including crosslinking experiments, solid-state NMR, MD simulations, and protein-protein docking to study GlpG in its native lipid bilayer environment. We found that introducing mutations that diminish the interaction between TM2 and TM5 helices promotes substrate access through the lateral gate and thereby increases protease activity. We were also able to show that stable M2M crosslinks between TM2 and TM5 block most of the substrate processing. Our results are therefore more in line with the lateral gate theory proposed by Urban and colleagues (12). However, because our data show that the bottom and middle TM2/TM5 interactions are less important than the top part, a model is conceivable where the substrate binds to TM2 and TM5 in the lower part and bends over in the top part close to the active site.
Available x-ray data show both open and closed conformations of WT GlpG, where the L5 loop adopts different conformational states, and the position of the TM5 helix varies between apo- (5, 6, 29) and inhibitor-bound (8, 9, 30-32) structures. In one of our earlier studies, we observed a kink in TM5 at W236, which indicates that TM5 does not adopt a fully helical structure when situated in native-like liposomes, and thus GlpG in liposomes has a slightly different conformation than reported in the closed-state structure by x-ray crystallography on detergent-solubilized protein (15). In another previous study of GlpG with inhibitors bound in the active site (33), we observed that the peaks corresponding to TM5 were missing in solid-state NMR spectra of the bound state, indicating that ligand binding may also affect the stability of TM5. In the present study, our NMR data show disappearing peaks for TM5 upon mutation, suggesting that weakened interactions between TM2 and TM5 lead to increased dynamics. Unexpectedly, the same observation also occurred when GlpG variants were crosslinked with M2M. The comparison of the RMSF for different GlpG variants in the MD simulations revealed TM5 to be nearly as dynamic as some of the loop regions, e.g., the L5 cap. Furthermore, AlphaFold2 predictions suggested a lower confidence level (74 to 85 per-residue confidence score) for TM5 compared to the other transmembrane helices (see fig. S7) (34, 35). All of these observations support TM5 as a dynamic hotspot in GlpG.
We generated three different cysteine double mutants at different positions of TM2/TM5 and observed the strongest increase of substrate processing in the F153C + W236C variant. The increase is smaller when the mutations are introduced in the middle part of the helices (W157C + F232C) and weakest for the mutation pair in the bottom section of the helices (Y160C + L229C). This result indicates that weakening the interaction at the top TM2/TM5 region is essential for substrate processing. Previous MD simulations and our docking results suggest that substrates interact with the exosite (mostly residues of TM2/TM5) of the protease before being unwound and processed (14). It is possible that the mutated residues are also important for recognizing and binding the substrate and that M2M impedes this action through steric hindrance. This could also explain why crosslinking residues far away from the active site, at the bottom of the substrate gate (Y160 + L229), also leads to reduced activity.
The opening of the TM2/TM5 gate in WT GlpG is necessary for substrate entry, as the interaction between F153 and W236 appears to stabilize the closed conformation that is inaccessible for the substrate. In contrast, for the F153C + W236C and F153A + W236A double mutants, MD simulations suggested an open conformation in which bending of the top of TM5 is not stable; rather, the interaction between TM2 and TM5 becomes weak and consequently the entire TM5 helix is relatively mobile. This finding is in line with solid-state NMR data showing a high conformational heterogeneity for the double mutants. The weakened interaction of TM2 and TM5 allows easier access for substrate processing, and consequently a substantially higher protease activity was observed for these two mutants. For both WT GlpG and F153A + W236A in their open states, docking could predict reasonable binding poses for the substrate TatA. In conclusion, we propose that extreme bending of the top section of the TM5 helix, as observed in the original open x-ray structure (PDB ID: 2NRF, chain A), is not required for substrate processing.
On the other hand, when the F153C + W236C mutant is crosslinked with M2M, MD data suggest different dynamics not only for TM5 but also for the L5 loop. The L5 loop is connected to TM5 and caps the active site, which might play an important role in substrate positioning during proteolysis (31). M2M binding might also impede this interaction, and therefore the activity is reduced. Unfortunately, we cannot observe residues of the L5 loop in our solid-state NMR spectra to complement these observations. Furthermore, it was shown previously that the cleavage site of the substrate shifts when it is cleaved by a "gate-open" mutant, most likely because the positioning of the substrate in the enzyme changes (36). The cleavage site also differs depending on whether GlpG cleaves substrates in detergent or in membranes, highlighting the role of lipid interactions in the substrate-protease complex (36).
Furthermore, a previous study has suggested that the L1 loop is involved in the positioning of the enzyme in the membrane (37).
Our current and previous NMR data and MD simulations show structural changes of this loop, especially upon binding of inhibitors (14,33). When crosslinked, we observe previously invisible and unassigned residues in our NMR spectra, and MD simulations (of the closed form) show that fluctuations in this region decline, suggesting a more rigid L1 loop. Note that mutations (e.g., R137A) in L1 result in activity decline (12). Changing the loop might result in a better accessibility of the core to lipids, which would disturb the overall processing of the substrate (38). In addition, the L1 loop is deeply submerged in the lipid bilayer, even more so when the cytosolic domain is present and it likely facilitates lipid distortion and therefore faster lateral diffusion through the E. coli membrane (39).
The solid-state NMR spectra of WT and crosslinked GlpG show chemical shift changes, in particular around TM5, the L1 loop, and the active site. For both the WT and the crosslinked variant, only a single set of resonances is observed (15), indicating one dominant, unperturbed conformation in each case. We therefore propose that the WT GlpG conformation is more open than that of the crosslinked variant. Although the conformational dynamics changed, the active site remains accessible and reactive to fluorophosphonate. The binding is still possible because fluorophosphonate reaches the active center from the aqueous phase, in contrast to regular substrates such as TatA, which bind laterally through the TM5 gate (24). Small inhibitors and peptides might take a different path to the active site (31). It should also be considered that unwinding of the substrate helix needs space, which might be blocked by crosslinking. Last, MD simulations show that the introduction of the crosslinker does not lead to major changes in the overall structure or to a perturbation of the geometry of the active site, which might otherwise have resulted in deactivation.
In conclusion, with the current study we could confirm that substrates access the active center of the rhomboid protease GlpG through the lateral gate. A stable conformation is the most commonly populated state for WT GlpG, whereas for mutants with diminished TM2/TM5 interaction the dynamics of TM5 increases and thereby facilitates substrate processing. Crosslinking TM2/TM5 renders the cavity between TM2 and TM5 inaccessible for the substrate and therefore impedes substrate processing.
Cells were grown in M9 medium at 37°C until an optical density of 0.8 was reached, and the protein was overexpressed after induction with 500 μM isopropyl-β-D-thiogalactopyranoside for 15 hours at 25°C. Cells were harvested, resuspended in lysis buffer, and lysed using an LM10 microfluidizer (Microfluidics, USA) at a working pressure of 15,000 psi. Insoluble parts were removed by centrifugation, and the supernatant was incubated with 2% (w/v) n-decyl-β-maltoside (DM; Glycon, Germany) for 2 hours at 4°C. The solubilized protein was purified via cobalt-based affinity chromatography on an ÄKTA Pure 25 System (GE Healthcare, Germany).
The same procedure was carried out for mutants of GlpGΔN. To allow for the formation of cysteine bridges, mutations between TM2 and TM5 were introduced at the following positions: F153C + W236C, W157C + F232C, or Y160C + L229C. In addition, the following alanine mutations were studied: F153A, W236A, and F153A + W236A.
Reconstitution
The rhomboid proteases were reconstituted into E. coli total lipid extract (Avanti Polar Lipids, USA) liposomes. For this purpose, E. coli total lipid extract in 3% DM detergent buffer was added to the purified GlpGΔN sample at a lipid/protein ratio of 30:1 (mol/mol). Detergent was removed by dialysis at 100 times dilution against dialysis buffer with additional Bio-Beads SM-2 resin (Bio-Rad) over the course of 10 days and buffer exchanges every 2 days until the sample was completely turbid.
Crosslinking
GlpGΔN mutants were diluted to 0.2 mg/ml. M2M (Santa Cruz Technology, USA) was diluted in dimethyl sulfoxide to a 20 mM stock solution. M2M was added to the GlpGΔN mutants in DM to a final concentration of 50 μM for the purpose of activity assessment. The sample was left for 30 min at room temperature while the crosslink was formed. If required for certain experiments, then the crosslink was subsequently dissolved by the addition of 50 mM DTT at 37°C for 2 hours. All samples were then diluted to 4 μM.
The same procedure was carried out for the NMR samples. A total of 50 μM M2M was added to the [ 2 H, 13 C, 15 N]-labeled GlpGΔN F153C + W236C mutant, and the sample was incubated for 1 hour at room temperature. Afterward, it was reconstituted into E. coli total lipid extract and dialysed over the course of 10 days. Subsequently, the samples were collected by ultracentrifugation (2 hours at 300,000g) and transferred into NMR rotors while a few crystals of sodium trimethylsilylpropanesulfonate (DSS) were added.
Activity assessment
TatA, the natural substrate of AarA in P. stuartii, was used as the substrate for the GlpG activity assay as described before (23). Briefly, a fluorescein isothiocyanate (FITC)-labeled peptide comprising the first 33 amino acids of TatA and a β-alanine linker was produced (FITC-βA-MESTIATAAFGSPWQLIIIALLIILIFGTKKLR). The peptide was dissolved in 50 mM tris, 150 mM NaCl, 0.2% (w/v) DM, and 0.2% (w/v) sarcosine to a final concentration of 400 μM.
E. coli total lipid extract was dissolved in 50 mM tris and 150 mM NaCl in a concentration of 10 mg/ml. Using 400-nm Nuclepore Track-Etched Polycarbonate filters (Whatman, USA), the lipids were extruded 31 times with an Avanti mini extruder (Avanti Polar Lipids, USA) to generate liposomes with a defined size.
A total of 4 pmol of DM-solubilized GlpGΔN or its respective mutants (native, crosslinked, or DTT-uncrosslinked) was incubated with a 20-fold excess of FITC-labeled TatA peptide in a solution of E. coli total liposomes (1 mg/ml) in 50 mM NaOAc (pH 4), 150 mM NaCl, and 0.2% (w/v) DM. The mixture was incubated for at least 20 min at room temperature. It was then diluted 20-fold with 12.5 mM NaOAc (pH 4.0) and 37.5 mM NaCl to reduce the detergent below its critical micelle concentration. After incubation for 10 min, the proteoliposomes were separated from the detergent by ultracentrifugation at 186,000g for 1 hour at room temperature. The samples were dissolved in neutral pH buffer (50 mM tris and 150 mM NaCl) to start the reaction and quickly transferred into a white small-volume 384-well plate (784075, Greiner Bio-One, Germany). Fluorescence was read out every 5 min over the course of 120 min, exciting at 490 nm and detecting emission at 525 nm, at 25°C with a Tecan Infinite M Plex Microplate Reader (Tecan, Switzerland). The intensity of the fluorescent signal was measured in relative fluorescence units (RFU). All 120 min were used for linear regression analysis because no plateau in fluorescence was reached (fig. S3). The slope and the SE of the slope were calculated with Prism GraphPad.
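The same slope-and-error estimate can be obtained with a few lines of Python instead of Prism. The sketch below is only an illustration of this analysis step: the file name is a placeholder and the sampling grid simply mirrors the 5-min read-out interval described above.

# Minimal sketch of the slope estimation described above (not the authors' code).
# Assumes fluorescence was read every 5 min over 120 min; the data file is hypothetical.
import numpy as np
from scipy import stats

time_min = np.arange(0, 125, 5)                # 0, 5, ..., 120 min (25 read-outs)
rfu = np.loadtxt("tatA_cleavage_rfu.txt")      # placeholder file, one RFU value per time point

# Linear regression over the full time course (no plateau was reached),
# giving the initial cleavage rate as slope with its standard error.
fit = stats.linregress(time_min, rfu)
print(f"rate = {fit.slope:.2f} +/- {fit.stderr:.2f} RFU/min, R^2 = {fit.rvalue**2:.3f}")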
TAMRA-fluorophosphonate assay
For the labeling of the active site of GlpG and its respective mutants with M2M crosslinks, ActivX TAMRA-Fluorophosphonate Serine Hydrolase Probe (Thermo Fisher Scientific, Germany) was used as described before (25). Briefly, 0.5 μg of purified GlpGΔN was mixed with the reactive probe to a final concentration of 0.5 μM. Subsequently, it was incubated for 1 hour at 37°C protected from light. The reaction was stopped with 4x Laemmli buffer and subjected to SDS-polyacrylamide gel electrophoresis (SDS-PAGE). The gel was visualized with ultraviolet light and afterward stained with Coomassie brilliant blue dye.
Solid-state NMR spectroscopy and analysis
Proton-detected solid-state NMR experiments were performed on Bruker 700 and 900 MHz ( 1 H Larmor frequency) spectrometers equipped with four-channel ( 1 H, 2 H, 13 C, and 15 N) 1.9-mm probes, operating at magic angle spinning (MAS) rates of 38 to 40 kHz. The sample temperature was kept around 20°C, estimated from the chemical shift of water relative to the DSS peak. D 2 O was used for locking. 2D hNH spectra were recorded for all samples, and, in addition, a 3D hCαNH spectrum was recorded for the crosslinked F153C + W236C mutant using standard sampling on a 900-MHz spectrometer at 40-kHz MAS. Subsequently, four 3D experiments (hCαNH, hCONH, hCαcoNH, and hCOcαNH) were recorded for chemical shift assignments of the F153C + W236C mutant crosslinked with the M2M linker, and two 3D experiments were recorded for transfer of the assignments to the F153C + W236C mutant without crosslink. These assignment spectra were recorded on a 700-MHz spectrometer at 38-kHz MAS using 35% nonuniform sampling (NUS) with sampling schedules generated from http://gwagner.med.harvard.edu/intranet/hmsIST/ (40,41). NUS spectra were reconstructed using the iterative reweighted least squares algorithm (20 iterations) in the qMDD software (42)(43)(44) and processed using nmrPipe (45). Assignments were performed using CcpNmr AnalysisAssign Version 3 (46). Spectra of WT GlpG (recorded on a 600-MHz spectrometer) were taken from our previously published study (33).
Bulk 15 N R 1 and R 1ρ relaxation rates were recorded for GlpG WT, F153C + W236C without crosslink, and F153C + W236C crosslinked with the M2M linker using 1 H detected 1D hnH experiments with varying relaxation delays (0.01, 0.1, 0.5, 1.5, 4, 7, 14, and 24 s for R 1 ) or spin-lock pulse lengths (2, 4.5, 7.5, 12, 18, 35, and 50 ms at a nutation frequency of 12 kHz for R 1ρ ). The relaxation experiments were recorded on a 900-MHz spectrometer equipped with a 1.9-mm probe operating at 40-kHz MAS and at a sample temperature of 21°C. R 1 and R 1ρ relaxation rates were extracted by fitting the intensities from the 1D spectra as a function of relaxation time to a monoexponential decaying function. Monte Carlo simulations based on the noise level of the spectra were used to estimate fit errors. The fits were repeated 1000 times with random noise (a random number between 0 and 1 was generated and multiplied with the average noise level of the spectra) added to the input data. Two times the SD was used for error bars in fig. S4.
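As an illustration of the fitting procedure described here, the following sketch performs the monoexponential fit and the Monte Carlo error estimate with SciPy. The intensity file, the noise estimate and the starting values are placeholders rather than the values used in the original analysis.

# Illustrative sketch of the monoexponential fit with Monte Carlo error estimation
# described above; the data file, noise level and starting values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def monoexp(t, i0, rate):
    return i0 * np.exp(-rate * t)

delays = np.array([0.01, 0.1, 0.5, 1.5, 4, 7, 14, 24])   # s, R1 relaxation delays from the text
intensities = np.loadtxt("r1_intensities.txt")             # placeholder bulk 1D peak intensities
noise = 0.02 * intensities.max()                            # placeholder spectral noise level

popt, _ = curve_fit(monoexp, delays, intensities, p0=(intensities[0], 0.04))

# Monte Carlo: refit 1000 times with random noise (uniform in [0, 1) scaled by the
# noise level) added to the input intensities, as described in the text.
rng = np.random.default_rng(0)
rates = []
for _ in range(1000):
    perturbed = intensities + rng.random(intensities.size) * noise
    p, _ = curve_fit(monoexp, delays, perturbed, p0=popt)
    rates.append(p[1])
print(f"R1 = {popt[1]:.3f} s^-1 +/- {2 * np.std(rates):.3f} (2 sigma)")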
Protein modeling
For each of the open (PDB ID: 2NRF, chain A) and closed (PDB ID: 2IC8) structures of GlpG, four models were generated: (i) the WT, (ii) the F153A + W236A, (iii) the F153C + W236C, and (iv) the F153C + W236C crosslinked variants. All protein models were parameterized using the Amber99SB-ILDN force field (47) and inserted into a pre-equilibrated and solvated 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) membrane using the gmx_membed routine (48). The POPC lipid membrane was simulated with the parameters derived by Berger et al. (49). Water was modeled with the SPC/E potential (50) and ions with the Joung et al. parameters (51). A summary of the resulting models can be found in table S3.
Protein-linker modeling
For the crosslinked structure, the methylsulfonyl groups of the 3D structure of M2M were removed to solely represent the linker structure ( fig. S5). Two cysteines were connected by a disulfide bridge in the trimmed linker. Both cysteine residues were capped with an acetyl (ACE) group and N-methyl amide (NME) capping group. The NME group was added to the C═O, and the ACE was added to the N side of the cysteines. The geometry of the linker was optimized, and the electrostatic potential was calculated at the Hartree-Fock/6-31G* level using Gaussian 16 (52). The generalized amber force field (53) topology of the linker was generated with the Antechamber software (54), incorporating partial charges from the preceding quantum mechanics calculations based on the restrained electrostatic potential approach (55). After geometry optimization, the cysteine and cap sections of the linker were removed and only the linker was manually inserted between cysteine 153 and cysteine 236 of GlpG. The disulfide bridges were generated at a distance of 2.05 Å.
MD simulations
All-atom MD simulations of the four models of the open and closed states were carried out with Gromacs 2019.3 (56). First, the models were energy minimized and equilibrated for 10 ns with position restraints using a force constant of 1000 kJ mol −1 nm −2 on the backbone atoms of the protein. Subsequently, 20 ns without position restraints was performed. During the complete equilibration, the isothermal-isobaric (NPT) ensemble was used. After the equilibration process, all models were simulated for 500 ns as a production run in an NPT ensemble. In total, five replicas of production runs were performed for each system.
A time step of 2 fs was enabled by constraining all bonds to hydrogen atoms with the LINCS algorithm (57). Short-ranged electrostatics and van der Waals interactions were truncated at 1.0 nm. Long-ranged electrostatics were calculated with the particle-mesh Ewald summation (58). Temperature and pressure coupling were treated with the V-rescale scheme (59) and the Parrinello-Rahman barostat (60), respectively. The temperature was set to 300 K, and the pressure was set to 1 bar. Fluctuations of the periodic cell were only allowed in the z direction, normal to the membrane surface, keeping the density of the membrane unchanged.

Protein-protein docking

An angular step size of 6° was used, resulting in 54,000 docked poses. After unfavorable poses were removed, a conformational clustering was performed. During the analysis, only the top three clusters were considered and analyzed by manual visualization. The selected docking poses shown in Fig. 6 were energy-minimized using the CHARMM36 force field in DiscoveryStudio using the "Smart Minimizer" algorithm with a maximum of 1000 steps. No implicit solvent model was used.
Data analysis
The analysis was done on the basis of the MD production runs only. All MD simulations were analyzed by root mean square deviation (RMSD) of the protein backbone and per-residue root mean square fluctuation (RMSF) using the routines g_rms and g_rmsf integrated in Gromacs 2019.3 (56), respectively. For the RMSD analysis, the corresponding x-ray structures were selected as a reference.
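For readers who script their trajectory analysis in Python, the sketch below performs an equivalent backbone RMSD and C-alpha RMSF calculation with MDAnalysis instead of the Gromacs command-line tools. The file names are placeholders, and the trajectory is assumed to be already centered and aligned on the protein.

# Hypothetical MDAnalysis equivalent of the RMSD/RMSF analysis described above.
import MDAnalysis as mda
from MDAnalysis.analysis import rms

u = mda.Universe("glpg_popc.tpr", "production.xtc")   # placeholder topology and trajectory
ref = mda.Universe("glpg_xray.pdb")                    # x-ray structure used as reference

# Backbone RMSD of the protein relative to the reference structure.
rmsd = rms.RMSD(u, ref, select="protein and backbone").run()
print(rmsd.results.rmsd[:5])   # columns: frame, time (ps), RMSD (Angstrom)

# Per-residue RMSF of the C-alpha atoms over the production run
# (trajectory assumed to be pre-aligned, as noted above).
calphas = u.select_atoms("protein and name CA")
rmsf = rms.RMSF(calphas).run()
for resid, value in zip(calphas.resids, rmsf.results.rmsf):
    print(resid, round(float(value), 2))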
Graphical images depicting the structures were generated in ChimeraX (62,63). Molecular graphics and analyses performed with UCSF ChimeraX, developed by the Resource for Biocomputing, Visualization, and Informatics at the University of California, San Francisco, with support from National Institutes of Health R01-GM129325 and the Office of Cyber Infrastructure and Computational Biology, National Institute of Allergy and Infectious Diseases.
Supplementary Materials
This PDF file includes:
Figs. S1 to S8
Tables S1 to S3
View/request a protocol for this paper from Bio-protocol.
OSN Dashboard Tool For Sentiment Analysis
The amount of opinionated data on the internet is rapidly increasing. More and more people are sharing their ideas and opinions in reviews, discussion forums, microblogs and general social media. As opinions are central in all human activities, sentiment analysis has been applied to gain insights into this type of data. Several approaches to sentiment classification have been proposed. The major drawback is the lack of standardized solutions for classification and high-level visualization. In this study, a sentiment analyzer dashboard for online social networking analysis is proposed, to enable people to gain insights into topics of interest to them. The tool allows users to run the desired sentiment analysis algorithm in the dashboard. In addition to providing several visualization types, the dashboard provides the raw results from the sentiment classification, which can be downloaded for further analysis.
Commonly used visualization types are line charts, bar charts, pie charts and scatter plots. However, certain types of visualization have pre-attentive features, meaning humans can interpret them before consciously paying attention. These kinds of visualization include comparing the length of elements (bar charts) and their positions in two-dimensional space (line charts) [25].
In this article, the main objective is to create a sentiment analyzer dashboard for online social networking analysis. The study focuses on collecting social media posts from Twitter, as it is one of the most widely used social platforms, with over 365 million users [26]. In addition, the publicly available Twitter API allows for easy fetching of posts. The OSN dashboard will fetch Twitter posts based on input criteria given by the user and perform document-level sentiment analysis on them. The dashboard will provide an overview of the sentiment polarity in those tweets and the ability to identify trends in how people across the world feel about a topic of interest.
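To make this flow concrete, the sketch below fetches recent tweets for a user-supplied query and scores each one with an off-the-shelf classifier. It is only an illustration of the intended pipeline: tweepy and vaderSentiment are assumed library choices, the bearer token is a placeholder, and the sketch does not reproduce the dashboard's actual implementation.

# Illustrative sketch of the dashboard's core flow: fetch recent tweets matching a
# user-supplied query and score their polarity. Library choices and token are assumptions.
import tweepy
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")   # placeholder credential
analyzer = SentimentIntensityAnalyzer()

def classify_topic(query: str, max_results: int = 100):
    """Return (tweet text, polarity label) pairs for recent tweets about the topic."""
    response = client.search_recent_tweets(query=query, max_results=max_results)
    results = []
    for tweet in response.data or []:
        score = analyzer.polarity_scores(tweet.text)["compound"]
        label = "positive" if score >= 0.05 else "negative" if score <= -0.05 else "neutral"
        results.append((tweet.text, label))
    return results

# Example: overview of sentiment towards a topic of interest.
# for text, label in classify_topic("climate change -is:retweet lang:en"):
#     print(label, text[:80])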
The contributions of this research work are the following:
• A sentiment analyzer dashboard for Twitter data.
• Performing sentiment analysis on various topics.
• Applying different sentiment analysis algorithms on Twitter data.
• Monitoring interesting trends across the world.
• Accessing raw data for further analysis.
The rest of the article is structured as follows. Section 2 covers related work regarding sentiment analysis and dashboards. The method, including architecture, design and implementation, is presented in Section 3. In Section 4, we present results and discussions, followed by the conclusion and future work in Section 5.
2 Related Work
Sentiment Analysis
Sentiment analysis has become a popular research field in natural language processing (NLP) [27]. In particular, performing sentiment polarity assessment on Twitter data has been applied in various application domains including forecasting stock prices [28][29][30], analysing students' feedback [11] and predicting presidential elections [31]. There is a great amount of literature on the subject. For example, [9] provide a systematic mapping study reviewing the literature on sentiment analysis in the context of the education domain. The results show 92 relevant studies conducted between 2015 and 2020, supporting the view that the field is extensively researched and rapidly growing. Several approaches to automatic sentiment categorization have been proposed. Methods can be lexicon-based [32] and dictionary-based [33]. However, recent literature shows a shift from pure NLP techniques to machine learning and deep learning approaches [34]. The major drawback of the existing methods for sentiment analysis is the lack of standardized solutions. Current solutions are programming-language dependent and perform only certain tasks [9]. Some existing sentiment analysis algorithms are TextBlob, VADER, Flair and Stanza. Table 1 shows an overview of their approaches and the accuracy achieved by them on the Sentiment140 dataset. The authors in [35] address the issue that off-the-shelf sentiment analysis tools are primarily trained on general social media data. Therefore, a classifier trained to support sentiment analysis in developers' communication channels is proposed to accommodate jargon within technical domains. The model exploited a series of lexicon-based, keyword-based [36,37], and semantic-based features [38][39][40]. It was trained on a dataset consisting of over 4000 posts on Stack Overflow. With respect to an off-the-shelf sentiment analysis baseline, SentiStrength, results show that the proposed model achieves a 19% improvement in precision for the negative class and a 25% improvement in recall for the neutral class.
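Since the proposed dashboard lets users pick the algorithm to apply, off-the-shelf tools such as those listed above need to sit behind a common interface. The sketch below shows one way to wrap two of them (TextBlob and VADER); it is illustrative only, and the polarity threshold of 0.05 is a common default rather than a value prescribed by either tool.

# Sketch of wrapping two off-the-shelf tools behind a common, swappable interface.
from textblob import TextBlob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

def textblob_polarity(text: str) -> float:
    return TextBlob(text).sentiment.polarity             # score in [-1, 1]

_vader = SentimentIntensityAnalyzer()
def vader_polarity(text: str) -> float:
    return _vader.polarity_scores(text)["compound"]       # score in [-1, 1]

ALGORITHMS = {"textblob": textblob_polarity, "vader": vader_polarity}

def label(text: str, algorithm: str = "vader", threshold: float = 0.05) -> str:
    """Map a polarity score to a class label; threshold is an assumed default."""
    score = ALGORITHMS[algorithm](text)
    if score >= threshold:
        return "positive"
    if score <= -threshold:
        return "negative"
    return "neutral"

print(label("I love this new phone!", algorithm="textblob"))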
Research conducted by Imran et al. [17] proposes cross-cultural polarity and emotion detection in the context of COVID-19 tweets. Deep learning models were used to classify positive and negative sentiments and corresponding emotions in tweets. The trained models achieved state-of-the-art accuracy on the Sentiment140 dataset.
Because many people express feelings in their native languages, the literature has attempted to develop approaches for multilingual sentiment analysis [41,42]. Zhu et al. propose a semi-supervised method based on bootstrapping and an SVM classifier to predict sentiment and polarity classification on microblog data [43]. Various combinations of features, such as words and parts of speech, were used to improve the performance. A problem with multilingual sentiment analysis is the lack of lexical resources [44,45]. This can be addressed by utilizing translation systems [46] and synthetic data [47], which can, however, also affect the validity of the original resource. To that end, the authors in [48] propose a concept-level knowledge base for multilingual sentiment analysis which is available in 40 languages.
Dashboards
Mahadzir et al. [49] developed a sentiment analysis dashboard using real-time Twitter data, with the objective of understanding the imbalance between supply and demand in the property industry of Malaysia. Data was extracted by limiting the Twitter posts to a specific keyword, and posts were retrieved in both Malay and English. The authors performed sentence-based and aspect-based classification using a Naïve Bayes classifier to predict the polarity of the overall tweet and the polarity of the aspects within the post. The dashboard provided an overview of the overall sentiment and aspect-based analysis, real-time Twitter monitoring statistics and the details of tweets based on their features and polarity. There are some limitations to this study. First, it is domain specific and only lets users get an overview of the property industry in Malaysia. Second, only 745 tweets related to the domain were collected; utilizing other social media platforms might have yielded more data. Lastly, the data was classified using a single machine learning algorithm; allowing users to choose the sentiment analysis approach used in the dashboard could change the results.
Most existing sentiment analysis dashboards are largely based on customer reviews. Different organizations use these dashboards to retrieve feedback from customers related to specific products and services. The solutions mine opinions from websites such as Amazon Reviews [50], Google Reviews and social media including YouTube, Twitter, Twitch and TikTok. Brandwatch 1 and Repustate 2 are examples of such customer intelligence platforms. The former uses a hybrid approach of manual and automated NLP techniques when assessing sentiments, structured in three steps. In the first step, a sentence is processed through a knowledge-based rule and is classified as positive, neutral or negative; these rules are based on common language and do not deal with domain-specific language. The second step deals with sentences that the knowledge-based system is unable to classify: here, the sentence is processed through a machine learning classifier that is taught to understand technical jargon. Lastly, the third step enables users to customize domain-specific rules, which enhances the accuracy of the model. Repustate uses an extensive multilingual sentiment analysis approach which classifies sentiments in 23 languages. This can be helpful for international brands to perceive different opinions across regions and cultures.
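The three-step hybrid strategy described above can be sketched as a simple fallback chain. The rule lists, toy training data and scikit-learn pipeline below are illustrative assumptions only, not Brandwatch's actual implementation.

```python
# Toy sketch of a rule-first, machine-learning-fallback sentiment classifier
# in the spirit of the three-step hybrid approach described above. The rules,
# training data and pipeline are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 3 (user-defined, domain-specific overrides) is checked first so that
# it can correct both the general rules and the learned model.
CUSTOM_RULES = {"killed the process": "neutral"}          # technical jargon
# Step 1: knowledge-based rules for common, unambiguous language.
GENERAL_RULES = {"love": "positive", "awesome": "positive", "terrible": "negative"}

# Step 2: a classifier for sentences the rules cannot resolve (in practice it
# would be trained on much more, domain-specific, labelled data).
train_texts = ["build failed again", "works perfectly now", "docs are unclear",
               "great release", "this bug is annoying", "nice fix"]
train_labels = ["negative", "positive", "negative",
                "positive", "negative", "positive"]
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

def classify(sentence: str) -> str:
    lowered = sentence.lower()
    for phrase, label in CUSTOM_RULES.items():
        if phrase in lowered:
            return label
    for word, label in GENERAL_RULES.items():
        if word in lowered:
            return label
    return model.predict([sentence])[0]                   # ML fallback

print(classify("I love this dashboard"))
print(classify("The deploy script killed the process as expected"))
```

The point of this design is that cheap, predictable rules handle the common cases, while the learned model only sees the harder, jargon-heavy sentences.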
Methodology
This section describes the architecture of the application and the functional requirements for the tool.
Architecture
The high-level architecture for the dashboard is a microservices architecture, as shown in Figure 1. The main separation in the architecture is between the client side and the backend. The client side is the content displayed on end devices such as PCs and smartphones, while the backend is where the data is gathered and processed. In more detail: on the client side is a frontend service implemented as a web application. In the backend, an "API Gateway" handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, which are exposed through the "Application API" service. The API calls gather data from the respective microservice that handles that part of the process. For the most demanding tasks, such as collecting large numbers of tweets in real time, there is a caching service. Connected to the polarity microservice is a "Polarity Processing Pipeline" that handles big data in real time with regard to social media posts. The Twitter API is added as a third-party service connected to the backend.
A microservices architecture offers several benefits. Since the functionality is decoupled into separate services, it becomes possible to deploy and make changes to each service independently of the rest of the application, whereas in a monolithic application adding new features requires the entire application to be redeployed, and changing one feature might also result in unwanted side effects in other parts of the application [51]. For example, with a microservices architecture, we are able to easily swap out the microservice that is responsible for the sentiment analysis. Another benefit of a microservices architecture is that each microservice is easier to maintain. A monolithic application can become quite complex as it grows, with many different features that depend on each other; with a microservices architecture, each microservice has only one responsibility [51].
Having an application split into separate microservices also allows each microservice to be written in different programming languages. This allows the programmers to choose the language that is best suited for a microservice's task [51].
There are, however, some drawbacks that need to be taken into account when using a microservices architecture. Microservices increase the complexity of the application and introduce more points of failure, since microservices need to communicate with each other. The application needs error handling to take care of cases where a microservice is unavailable, and if a microservice fails, it is important that data between the microservices stays consistent [51].
An application with a microservices architecture might also perform slower due to network latency when microservices communicate with each other, while a monolithic application can call functions directly [51].
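As a concrete illustration of the error-handling concern above, the sketch below shows one service calling another over HTTP with an explicit timeout and a safe fallback; the service URL, endpoint and response shape are hypothetical and not part of the described system.

```python
# Sketch of defensive inter-service communication: a dashboard service calling
# a hypothetical polarity microservice with an explicit timeout and a fallback
# so that a failing dependency does not crash the whole application.
import requests

POLARITY_SERVICE_URL = "http://polarity-service:5001/polarity"  # hypothetical

def fetch_polarity(texts: list[str]) -> list[str]:
    """Return one polarity label per text, or 'unknown' for all on failure."""
    try:
        response = requests.post(POLARITY_SERVICE_URL,
                                 json={"texts": texts},
                                 timeout=2.0)        # bound the network latency
        response.raise_for_status()
        return response.json()["labels"]
    except (requests.RequestException, KeyError, ValueError):
        # The caller still receives a consistent shape if the service is down.
        return ["unknown"] * len(texts)

print(fetch_polarity(["great product", "worst update ever"]))
```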
Functional requirements
As introduced in Section 1, the tool fetches Twitter posts and provides a sentiment analysis overview of the given topics. To reach the end goals, we formulated functional requirements which describe the functionality of the system: • The system must allow users to search Twitter posts based on keywords, hashtags and usernames.
• The system must allow users to perform an extended search on Twitter posts, including language, time frame, origin and the number of posts.
• The system must allow users to perform sentiment analysis on the Twitter posts.
• The system must allow users to choose the desired sentiment analysis algorithm to use in the dashboard.
• The system must provide appropriate data visualization such as plots, word cloud and map.
• The system must allow users to monitor the raw data provided by the dashboard.
• The system must allow users to download the raw data in a .csv format.
The following non-functional requirements are also created: • The user must be able to perform a simple search within max two steps.
• The application should handle 100 requests simultaneously when requesting less than 1000 Tweets.
• When requesting x Tweets each page must load within x × 0.05 second(s).
• The system must meet Web Content Accessibility Guidelines WCAG 2.1.
Implementation
The backend consists of a Flask 1 API framework and Redis 2 as a database cache. Redis is connected to the API calls that fetch data from the Twitter API. This is done to make the API calls faster and less vulnerable to DDoS attacks.
The cache time will be set to 60 seconds, because the user expects the latest Tweets to be returned.
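A minimal sketch of this caching layer is shown below, assuming a local Redis instance and a hypothetical `search_twitter` helper standing in for the real Twitter API call; the endpoint name and cache-key scheme are illustrative, not the tool's actual code.

```python
# Minimal sketch of the Flask + Redis caching idea: identical search queries
# within 60 seconds are answered from the cache instead of hitting the Twitter
# API. `search_twitter` is a hypothetical stand-in for the real API call.
import json
from flask import Flask, jsonify, request
import redis

app = Flask(__name__)
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
CACHE_TTL_SECONDS = 60  # users expect recent tweets, so keep the TTL short

def search_twitter(query: str) -> list[dict]:
    """Placeholder for the real Twitter API request."""
    return [{"text": f"example tweet about {query}", "lang": "en"}]

@app.route("/api/tweets")
def tweets():
    query = request.args.get("q", "")
    cache_key = f"tweets:{query}"
    cached = cache.get(cache_key)
    if cached is not None:                       # cache hit: no Twitter call
        return jsonify(json.loads(cached))
    result = search_twitter(query)               # cache miss: fetch and store
    cache.setex(cache_key, CACHE_TTL_SECONDS, json.dumps(result))
    return jsonify(result)

if __name__ == "__main__":
    app.run(port=5000)
```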
The API is a REST API with multiple endpoints. Each endpoint corresponds to a module in the frontend (pie chart, raw table data, etc.). The documentation is provided with Swagger 3, which also functions as a user interface for the API calls, as shown in Figure 2. Each call is designed to minimise large requests to the Twitter API, because these are demanding and time consuming. The frontend is built using the web framework React 4. Table 2 summarizes the major security concerns for the tool.
The work in 3 illustrates how a hacker can exploit the system. In this section, an abuse case description is created for one misuse case. This approach allows the test analyst to create test cases for the security requirements.
Design principles
The following design principles are considered in this study.
Least privilege: When utilizing the API, different users should have different privileges. Following the least privilege principle minimizes the occurrence of unintentional, unwanted, or improper uses of privileges.
Minimizing attack surface area: The minimizing attack surface principle is used to reduce the number of entry points to the application. Since the Twitter token has a limit on the amount of data that can be fetched, access must be restricted to authorized users only, and the application can only be used within an institution's network. Table 2 lists the resulting security requirements together with their rationale and priority:
• The system shall have authentication measures at all entry points and inbound network connections. Rationale: to avoid unauthorized access. Priority 1.
• The system shall support authentication based on an API token with the back-end. Rationale: improving security by using a unique token. Priority 1.
• The system shall only allow incoming network requests from within a network. Rationale: to avoid unauthorized access. Priority 2.
• Availability: there has to be a hard limit on how many Tweets can be fetched. Rationale: to avoid creating a huge overhead. Priority 3.
• Availability: the system shall apply caching of Twitter data. Rationale: to help minimize the impact of potential system failures. Priority 3.
• Auditing: the system shall keep historical records (logging) of events and processes executed in or by the application. Rationale: more specific security logging allows recreating a clear picture of security events.
• Authorization: the user token shall possess privileges within the application to perform its activities, but the privileges must be limited. Rationale: to avoid an unauthorized user executing activities as another user. Priority 1.
• Authorization: the system shall ensure system-level accounts have limited privileges. Rationale: to help prevent attackers from escalating user accounts to access administrator features. Priority 1.
• Authorization: access to the Twitter token shall be performed using parameterized stored procedures so that all access can be revoked. Rationale: apply security principles. Priority 1.
Economy of mechanism: This principle is covered by implementing simpler and smaller functions that are easy to maintain.
Fail securely: It is important that the application does not crash unexpectedly; therefore, the fail securely principle is applied by handling errors such as timeouts and faulty inputs.
Do NOT trust: This principle is implemented by restricting users' access and validating user input.
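To illustrate how the token-based authentication and input-validation requirements above could look in the Flask back-end, a hedged sketch follows; the token value, header name and validation pattern are assumptions made for illustration only.

```python
# Illustrative sketch of two of the design principles above: every request must
# present a known API token, and user input is validated before it is used.
# The token set, header name and query pattern are assumed values.
import re
from functools import wraps
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
VALID_API_TOKENS = {"example-institution-token"}    # assumption: issued per user
QUERY_PATTERN = re.compile(r"^[\w#@ ]{1,100}$")     # letters, digits, #, @, space

def require_token(view):
    """Reject requests without a recognised token (minimize attack surface)."""
    @wraps(view)
    def wrapper(*args, **kwargs):
        token = request.headers.get("X-API-Token", "")
        if token not in VALID_API_TOKENS:
            abort(401)
        return view(*args, **kwargs)
    return wrapper

@app.route("/api/search")
@require_token
def search():
    query = request.args.get("q", "")
    if not QUERY_PATTERN.fullmatch(query):          # do NOT trust user input
        abort(400, description="Invalid search query.")
    return jsonify({"query": query, "results": []})
```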
Results & Discussions
Following the design principles described in Section 3.4, the final product is created. Figure 5 illustrates the search page. Here, users can search for tweets based on keywords, usernames or hashtags. In addition, the user can choose a preferred sentiment analysis algorithm to use in the dashboard. The user can also extend the search by applying more criteria on the advanced search page.
Once a search is performed, the user arrives at the dashboard page (Figure 6), where a pie chart, line chart, tag cloud and map are visualized. At the top of the screen a new search can be performed.
Clicking the "Table" button in the side navigation bar directs the user to the raw data page depicted in Figure 7.
The OSN dashboard is also a responsive website: it scales so that it works well on smaller devices such as mobile phones and tablets. The dashboard uses a dark blue color scheme. The choice of color is based on how colors arouse emotions in users; research in color psychology associates the color blue with trust and stability [52].
To assess the novelty of the OSN dashboard, two models named Level of Inventiveness and Norwegian Research Council (NRC) Scale are used.
Level of Inventiveness: With respect to the Level of Inventiveness scale depicted in Figure 8 [53], we believe the study is at level three. The major improvements are that the OSN dashboard offers a more convenient way to perform sentiment analysis on online social media data. It is possible to search on various topics of interest and receive high-level insights. With visualizations such as the map, users can perceive how different regions and countries express different opinions about the same topic. Another improvement is that researchers can choose the desired sentiment analysis algorithm to use in the dashboard and compare the results. In addition, integrating sentiment analysis algorithms which support multilingual posts will provide more and better results on tweets from around the world. All of the above-mentioned improvements build on knowledge within the industry, but the solution differs from industry competitors in that searches do not need to be domain specific and it provides an intuitive way to perform sentiment analysis.
NRC Scale: With regard to the NRC scale shown in Figure 9 [54], the OSN dashboard is given a score of four. Recalling the improvements mentioned in Section 4.2.1, the tool is a substantial innovation as it provides a new combination of knowledge within the industry. A concern with assessing the OSN dashboard is how to evaluate whether it is at the level of the state of the art in the industry. The dashboard consists of aspects which are not directly countable in any measure. By utilizing existing sentiment analysis algorithms, it will naturally perform similarly to them; however, assessing how intuitive the dashboard is requires usability testing and users comparing the tool with state-of-the-art sentiment analysis dashboards in the industry.
Integration
There are several application domains for which the OSN dashboard is suited. First and foremost, the tool can be integrated with the Twitter application, where users can retrieve public opinions regarding news, products, politics etc. Second, the dashboard can have a large impact on any decision-making process and can be used with existing frameworks, for instance for getting students' opinions in education [55][56][57]. Another example is an organization such as Foodora 5, which could integrate the tool and utilize it as part of its market campaigns, because people's opinions influence how it will conduct its business in the future. A further example of decision making is in the financial domain. The finance world is becoming more and more quantitative; however, it has been shown that social media can affect the stock market to a great extent [58]. Therefore, the OSN dashboard can be integrated into portfolio management tools to collect qualitative data that supports investment decisions. The study, however, does not incorporate embedding models [59] or advanced deep learning algorithms for sentiment analysis [60,61], so support for these can be added to the tool in the future.
Deployment
The complete application is Dockerized 6 to ship all the applications with all the necessary functionality as one package with Docker Compose (multi-container). The Twitter API has both an app rate limit and a user rate limit controlled around requests per minute. Twitter offers a paid "Premium" API that allows many more requests per minute. As long as the application does not have too many users, the public API should be sufficient.
Conclusion & Future Work
The OSN analysis dashboard is developed to enable users to gain insights into various topics and thereby a perspective on the world. The tool can be utilized for several purposes such as decision making and research. Therefore, the dashboard can be integrated into several application domains including the Twitter website, product websites or portfolio management tools. In this article, an implementation of a sentiment analyzer dashboard was proposed. Sentiment analysis was performed on Twitter data based on keywords, hashtags or usernames given by the user. Different sentiment analysis algorithms were integrated to perform sentiment categorization. The analysis results are visualized in the form of a dashboard to provide at-a-glance information. The data on the dashboard is presented in appropriate plots and charts together with the raw data from the sentiment analysis results.
When it comes to future work, there is always room for improvement, as not every implementation goal was reached within the given time frame. Currently, the integrated sentiment analysis algorithms are lexicon-based. It would be interesting to add machine learning-based algorithms to compare the differences and to train them on domain-specific topics using ontologies [62,63] or concept vectors [64]. Another aspect of future work is multilingual compatibility [46]. TextBlob classifies non-English languages by using Google Translate; this works to some extent, but can affect the validity of the original resource [45]. In that case, an improvement of the proposed tool will be
|
2022-06-15T01:15:58.596Z
|
2022-06-14T00:00:00.000
|
{
"year": 2022,
"sha1": "cfa50c0f6bd7a5749f650037f8decd860200211a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "cfa50c0f6bd7a5749f650037f8decd860200211a",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
239922005
|
pes2o/s2orc
|
v3-fos-license
|
Common Symptoms Experienced by Cancer Patients Undergoing Chemotherapy
Introduction: Chemotherapy is one of the most commonly used treatments of choice for cancer and it leads to certain side effects. The oncology care team needs to identify the symptoms experienced by cancer patients who are under chemotherapy to resolve these side effects. Objective: To identify the most common side effects experienced by cancer patients undergoing chemotherapy. Methods: A cross-sectional survey is carried out in the Medical Oncology Outpatient department and Chemotherapy Units of Sri Ramachandra Hospital, Sri Ramachandra Institute of Higher Education, and Research (DU), Chennai, India, from February to March 2020. A convenient sampling technique was used to recruit a total of 150 cancer patients. Data collection was done through personal interviews and a review of their medical records. Approximately 15-20 minutes was required to obtain data from each participant. Results: Most (31%) of them had breast cancer, 21% of them had gynaecological tract cancer, around 20% of them had digestive tract cancer. Fatigue was the most (48%) commonly reported symptom, followed by vomiting (36%) and pain (29.3%). Other symptoms like itching, oral mucositis, urinary problems, dyspnoea were experienced by the cancer patients under chemotherapy. Conclusion: This study will help the oncology care team to understand the most common symptoms experienced by cancer patients who underwent chemotherapy. Identification and management of these symptoms will help the cancer patients to have adherence to the treatment.
INTRODUCTION
Every year more than 12 million new cancer cases are diagnosed across the globe. Different treatment modalities have been introduced and improved along with the increase in the prevalence of cancer. Chemotherapy is one of the most commonly used treatments of choice for cancer. 1 Chemotherapy damages cancer cells along with healthy cells, which results in the development of certain side effects. 2 The side effects of chemotherapy are common among cancer patients and may be life-threatening, and patients may experience these side effects even when they are at home. 3 The side effects result in manifestations like fatigue, loss of appetite, nausea and vomiting, diarrhoea, constipation, and insomnia, which have an impact on cancer patients' quality of life and may disturb the continuity of the treatment. 4 The oncology care team needs to identify the symptoms experienced by cancer patients who are under chemotherapy and resolve these side effects, so that cancer patients and the oncology care team can maintain continuity of treatment. 5

METHODS
A cross-sectional survey was carried out in the Medical Oncology Outpatient Department and Chemotherapy Units of Sri Ramachandra Hospital, Sri Ramachandra Institute of Higher Education and Research (DU), Chennai, India, from February to March 2020.
Sample
A convenient sampling technique was used to recruit a total of 150 cancer patients. The following criteria were used to include participants in this study: a) cancer patients receiving chemotherapy at any stage and any cycle; b) willingness to participate in the study; c) either gender, aged 18 years or older; d) ability to speak and understand English and/or Tamil.
Data Collection Tool
It consists of 8 items related to the cancer patient's personal and clinical information, i.e. age in years, gender, type of cancer, duration of treatment, stage of cancer, cycles of chemotherapy completed, type of chemotherapy, and side effects/symptoms experienced.
Data collection procedure
The participants who met the eligibility criteria were invited to participate in the study. Informed consent was obtained from the participants. The participant's personal and clinical information was obtained by the researcher through personal interviews and a review of their medical records. Approximately 15-20 minutes was required to obtain data from each participant.
Ethical considerations
Ethical approval was obtained from the Institutional Ethics Committee of Sri Ramachandra Institute of Higher Education Research (DU). Informed consent was obtained from the participants. The anonymity of the participants was maintained. (IEC-NI/19/JUL/70/45)
Statistical analysis
Data analysis was performed by using R-Studio Version 3.6.2. Descriptive statistics like frequency and percentages were used to represent the participant's characteristics, clinical information, and common symptoms experienced by them.
RESULTS
Out of 150 participants, the majority (53.5%) were ≤ 55 years of age, most (66%) were females, around 60.7% had a treatment duration of ≤ 4 months, the majority (59.3%) had completed ≤ 5 cycles of chemotherapy, around 58% were on curative chemotherapy, about 42.7% had stage IV cancer, and 21.3% had stage III cancer (Table 1).
Regarding types of cancer, the majority (31.3%) were diagnosed with breast cancer, around 14% had ovarian cancer, 8.6% had rectal cancer, about 6.7% each had lung cancer and stomach cancer, 4% each had cervical cancer and multiple myeloma, 3.3% each had oesophageal cancer and endometrial cancer, 2.7% had colon cancer, and 2% had prostate cancer (Table 2).
The majority (48%) experienced fatigue as a common symptom, 36% experienced vomiting, around 29% experienced pain, about 26.7% experienced loss of appetite, 23.3% experienced nausea, 12.7% complained of disturbed sleep, 9.3% experienced constipation, around 7.3% complained of diarrhoea, about 5.3% complained of itching, 4.7% experienced fever, 4% complained of oral mucositis, only 3.3% experienced urinary problems, and 1.3% complained of dyspnoea and stomach fullness (Table 3).
DISCUSSION
This study shows that 53.5% of the participants were in the age group of ≤ 55 years, 66% were females, 59.3% had completed ≤ 5 cycles of chemotherapy, and 42.7% were in stage IV of cancer, which is similar to findings reported in other studies performed by Pearce and Wochen. 2,3 Most (31%) had breast cancer, 21% had gynaecological tract cancer, around 20% had digestive tract cancer, and 6.7% had lung cancer, which is similar to the findings reported by Wochna and Nayak. 3,5 Fatigue was the most commonly reported symptom (48%), followed by vomiting (36%) and pain (29.3%), which is similar to other studies. 1,2,5 Loss of appetite (26.7%), nausea (23.3%), disturbed sleep (12.7%), constipation (9.3%), and diarrhoea (7.3%) were frequent symptoms experienced by the cancer patients under chemotherapy. 6,7,8 Other symptoms like itching, fever, oral mucositis, skin/nail discolouration, numbness, urinary problems, dyspnoea, and stomach fullness were reported by the chemotherapy cancer patients. 9,10

CONCLUSION
Chemotherapy is one of the most widely used treatment modalities for cancer and it leads to certain side effects. This study will help the oncology care team to understand the most common symptoms experienced by cancer patients who underwent chemotherapy. The symptoms like fatigue,
|
2021-10-27T15:15:29.133Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "ede201df2a8c7a7969390fdf464bf63c7c1d1083",
"oa_license": null,
"oa_url": "https://doi.org/10.31782/ijcrr.2021.132016",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c5a7cffbbd638a839a616c2e279c0f2fba9834d3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
30542570
|
pes2o/s2orc
|
v3-fos-license
|
Migration of a Hem-o-Lok Clip to the Ureter Following Laparoscopic Partial Nephrectomy Presenting With Lower Urinary Tract Symptoms
We report a case of ureteral migration of a surgical clip after partial nephrectomy in which the clip was misdiagnosed as a ureteral stone. A 37-year-old woman had undergone laparoscopic partial nephrectomy of right renal cell carcinoma at another hospital 2 years previously. Postoperatively, she had gradually acquired lower urinary tract symptoms. Then, she complained of sudden right flank pain for a week. A plain X-ray and enhanced abdominopelvic computed tomography scan were performed. A 0.5 cm×1.0 cm right upper ureteral opacity with borderline hydronephrosis was seen but could not be found on the X-ray. Ureteroscopy revealed a medium-sized Hem-o-Lok clip on the right upper ureter that was removed with a stone basket. We concluded that a Hem-o-Lok clip used for collecting system sealing had migrated to the ureter and had been misdiagnosed as a ureteral stone on a computed tomography scan.
Laparoscopic partial nephrectomy (LPN) has been performed for the treatment of small renal masses for nephron sparing in recent years and has results similar to those of open surgery in terms of outcome, such as positive surgical margin rates [1]. Although LPN results in enhanced perioperative patient safety compared with open partial nephrectomy (OPN) in the United States [2], it is a challenging and highly advanced laparoscopic procedure owing to the difficulty of securing the collecting system and suturing the renal defect. Furthermore, there is no gold standard single agent or combination of products that can be applied in all cases, but rather hemostatic agents such as glues, bolsters and argon laser are used either alone or in combination, as well as sutures [3].
In this report, we present a case of migration of a surgical clip (Hem-o-Lok clip; Weck, Teleflex Medical, Research Triangle Park, NC, USA) on the ureter after LPN in which the clip was initially misdiagnosed as a ureteral stone.
CASE REPORT
A 37-year-old woman presented to us complaining of sudden right flank pain and lower abdominal pain that had lasted for 1 week, as well as nausea and vomiting. Two years previously she had undergone a laparoscopic right partial nephrectomy at another hospital owing to a small right renal mass that was diagnosed as renal cell carcinoma (T1aN0M0). Postoperatively, she had gradually acquired lower urinary tract symptoms, such as urge incontinence, urgency, frequency, and nocturia. Three months earlier, she had been checked with abdominopelvic computed tomography (CT) for regular follow-up of the renal cell carcinoma and heard that there were no definite abnormal findings in the CT scan. In the physical examination, right costovertebral angle tenderness was observed. The results of a urine test showed 5 to 10 red blood cells per high power field, and plain X-ray of the kidney, ureter, and bladder revealed metallic surgical clips on the right upper abdomen and a possible right renal stone, but no abnormal density on the ureteral courses (Fig. 1). The abdominopelvic CT scan revealed a 0.5 cm×1.0 cm opacity on the right proximal ureter with borderline hydronephrosis and a tiny right renal stone (Fig. 2). Ureteroscopy with the patient under general anesthesia showed a white rectangular parallelepiped foreign body at the proximal right ureter (Fig. 3A). The foreign body was removed by use of a ureteroscopic stone basket device and was identified as a medium-sized surgical clip (Fig. 3B). There was no extravasation of the renal pelvis during contrast media instillation via the channel of the ureteroscope. A ureteral stent was placed for 1 week, and the patient had no more flank pain.
DISCUSSION
Currently, partial nephrectomy is considered a standard treatment for small renal tumors with the benefit of preserving renal function; improving overall survival, especially for patients younger than 65 years of age; and decreasing the overall mortality rate. For T1b tumors, more clinical data are required to establish the oncological and functional benefits of partial nephrectomy (PN). LPN has come to represent comparable perioperative and oncological outcomes in the recent era [4] and has gained popularity, although it remains a challenging and highly advanced laparoscopic procedure. The most demanding step during LPN is the repairing of the collecting system and renal defect because this repair requires advanced laparoscopic skills and is performed under time pressure to minimize the warm ischemia time.
Ureteral migration of suture material after PN is not a common complication. There is a report of the migration of absorbable Lapra-Ty suture clips in the collecting system after LPN [5], and Massoud [6] also reported the migration of a metal surgical clip into the ureter after OPN, all of which were passed spontaneously. Furthermore, intravesical migration and stone formation of a surgical clip after laparoscopic radical prostatectomy has been reported [7], but there have been no reports of ureter migration of a surgical clip after partial nephrectomy.
Msezane et al. [3] reviewed the different sealants and laparoscopic instruments that are available for achieving hemostasis of the renal parenchyma in LPN and determined that there is no gold standard single agent or combination that can be applied to all cases. The decision as to which technology to use and how to manage the hilum should be made on a case by case basis.
Hem-o-Lok or metal clips that are used to repair the collecting system and the renal defect in LPN can migrate postoperatively and cause secondary complications, such as urinary stones. Ureteral stones following LPN can be managed conservatively with hydration and narcotics, but if symptoms do not improve, surgeons may consider more aggressive ureteroscopic management. Furthermore, the surgeon must be aware of the possibility of clip migration.
|
2016-05-04T20:20:58.661Z
|
2013-06-01T00:00:00.000
|
{
"year": 2013,
"sha1": "7770b7a3a9f79f07262576f6be220cc77350059a",
"oa_license": "CCBYNC",
"oa_url": "http://www.einj.org/upload/pdf/inj-17-2-90-9.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7770b7a3a9f79f07262576f6be220cc77350059a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
14650800
|
pes2o/s2orc
|
v3-fos-license
|
Fractionated Marine Invertebrate Extract Libraries for Drug Discovery
The high-throughput screening and drug discovery paradigm has necessitated a change in preparation of natural product samples for screening programs. In an attempt to improve the quality of marine natural products samples for screening, several fractionation strategies were investigated. The final method used HP20SS as a solid support to effectively desalt extracts and fractionate the organic components. Additionally, methods to integrate an automated LCMS fractionation approach to shorten discovery time lines have been implemented.
Introduction
Marine invertebrates have been a rich source of chemical diversity and pharmaceutical leads. However, it has been estimated that only a small percentage of the total number of estimated species in the marine environment have been investigated [1]. Data extracted from the NCI preclinical antitumor drug discovery screen showed that sponges (phylum Porifera) exhibited more cytotoxic extracts compared to plants and other marine invertebrates [1]. Tunicates (phylum Chordata) and bryozoans (phylum Bryozoa) also yielded high numbers of cytotoxic extracts. Since then, investigation of these groups has led to the discovery of numerous pharmaceutical leads, particularly in the anticancer drug discovery area [2,3]. Tunicates in particular have been a prolific source of cytotoxic natural products. Last year, the first anticancer agent from a marine invertebrate, Trabectedin ® (ecteinascidin-743, ET743) [4,5], was approved in Europe for advanced soft tissue sarcoma and has been marketed by PharmaMar. ET743 was first isolated from the tunicate Ecteinascidia turbinata. Another tunicate-derived natural product, Aplidin ® (dehydrodidemnin B) [6,7], is currently undergoing phase II clinical trials, sponsored by PharmaMar. Numerous anticancer agents from marine invertebrates are in clinical trials [8] or are in preclinical development [2]. On the basis of the successes outlined above, our natural products discovery program has largely focused on the discovery of anticancer agents from marine sponges and tunicates [9].
Marine natural product extracts present several problems with respect to modern drug discovery programs. The first and foremost problem encountered with marine invertebrate extracts from sponges and tunicates is the presence of large quantities of inorganic salts. Additionally, the chemical diversity found in one sponge may represent several different classes of bioactive molecules that exhibit different and sometimes opposing pharmacological activities. In many cases, the presence of a major non-selective compound can mask the activity of minor selective compounds. Minor compounds in many cases are present in crude extracts at concentrations that are below detection thresholds. From a discovery standpoint, these problems can be addressed to a certain point through the use of prefractionation strategies [10][11][12][13][14].
From a screening standpoint, complex natural product mixtures can present numerous problems in high-throughput screens [10,[12][13][14][15][16]. As part of a National Cooperative Drug Discovery Group (NCDDG), we observed a low hit rate for natural product extracts in high-throughput screening (HTS). Two major problems were observed. First, active components were many times 0.1% and sometimes less than 0.01% of the crude extract, which resulted in many components being below screening thresholds. Second, active components were being masked by other major metabolites.
Results and Discussion
In order to develop methods that addressed the issues with screening mixtures in HTS, we explored a number of options. Our primary goal was to improve the quality of samples for HTS while minimizing the overall cost and required labor. First, we tested a solvent-solvent partitioning scheme with 216 extracts. Methanol extracts were filtered, dried and then partitioned between ethyl acetate and water. The three ethyl acetate layers were combined, dried, and subsequently partitioned between hexanes and methanol. In most cases, greater than 80% of the original extract remained in the aqueous layer. The aqueous layer was desalted by triturating with 1:1 ethyl acetate-MeOH, but the process was time consuming. In the end, the solvent-solvent partitioning protocol was labor intensive, time consuming, and used large quantities of solvents, but did not separate the organic components.
HPLC was considered too costly to use with crude extracts, not to mention the difficulty of obtaining sufficient material from an extract that is ~50-70% inorganic salt. Therefore, methods employing flash chromatography were explored. C 18 Sep-Paks were tested, but did not have a high capacity compared to Diaion HP20SS. Diaion HP20SS, a porous polystyrene based adsorbent, was chosen as the solid support. Since HP20SS has large pores and a large surface area, it has high capacity to adsorb organic compounds. Additionally, the chromatographic behavior of HP20SS differs from C 18 in that it also separates by size and has selectivity for aromatic compounds. Therefore, HP20SS represents an orthogonal chromatographic approach compared to C 18 . Initial studies indicated efficient desalting and separation of the organic components of each extract.
Prior to performing a large number of separations, tests were performed to standardize our extracts. Primarily, we hypothesized that there was little need to dry and weigh individual extracts prior to fractionation. Our hypothesis was based on the fact that the mixture of compounds present in each extract could not be predetermined; therefore, the concentration of individual components would be independent of the weight. Additionally, if the fractionation step concentrated organics, we would increase the hit rate above what we observed for pre-weighed extracts that were submitted to HTS.
We began by placing invertebrate samples (loosely packed) in a 120 mL polypropylene jar and covered with MeOH for a minimum of 24 hours. Then, 1 mL of extract was removed from 145 sponge and tunicate extracts. The solvent was removed, and the dried extract was weighed. We found that the average was 11.1 mg, and the standard deviation showed a narrow range (6.5 mg) with a minimum of 1.5 mg and a maximum of 36.9 mg. Additionally, we have observed that the majority of variation among extract weights was due to the amount of inorganic salts present in the crude extract. Overall, these results indicated that most samples could be processed without the need to weigh individual extracts.
Separation on HP20SS was investigated by varying the amount of HP20SS as well as the eluant used. Additionally, both wet-loading and dry-loading techniques were investigated. To test wet-loading, 10 mL of extract was filtered, and a slurry was prepared with 150 mg of HP20SS and loaded onto a plastic column. The solvent was forced through with a plastic syringe, the eluant diluted with 60 mL of H2O, and the column was reloaded with diluted sample to adsorb the organics onto the HP20SS. Subsequently, a 3-step elution (15 mL) was performed using 25% acetone/H2O, 50% acetone/H2O, and 100% acetone. The results were compared to a dry-loading procedure where the extract was placed on 150 mg of HP20SS, a slurry was prepared, and the solvent was removed using a centrifugal evaporator. The dried, "charged" resin was poured into a plastic column and subjected to a similar elution as the wet-loading method, but the first eluate (after optimization) was 15 mL of 100% H2O. In the end, the fastest method was to load an extract onto HP20SS (150 mg), dry the sample down in a centrifugal evaporator overnight, and then load the "charged" HP20SS into a column fitted with a frit. The extract was effectively desalted by washing the resin with 100% H2O. Subsequently, a four-step elution efficiently separated the organic components. The final optimized system utilized 100% H2O followed by 25% IPA/H2O, 50% IPA/H2O, 75% IPA/H2O, and 100% MeOH (see the Extraction and Chromatography section).

After the initial studies were performed, the second step was to develop a method that could be applied to a large number of samples. One of the goals was to prepare samples for screening without the need to weigh the extracts or the resulting fractions. Therefore, a pilot study was performed using 100 sponge extracts. The amount of material in each HP20SS fraction was subsequently weighed and the average calculated. Since the range from the sponge samples was tightly grouped, we could prepare all fractions in a similar manner without the need to obtain weights, providing an enormous time savings. To determine the effectiveness of the process, cytotoxicity for the crude extract was compared to the cytotoxicity of the four eluants (see Table 1) in HCT-116 cells using the MTT assay. In most cases, we observed increased cytotoxicity in the HP20SS fractions compared to the crude extracts. For example, the Corticium sp. crude extract showed no cytotoxicity at 1.5 μg/mL, but F1 and F2 were found to be cytotoxic. F1 resulted in 15% survival at 2.4 μg/mL and F2 resulted in 12% survival at 1.8 μg/mL. Although the crude extract from Theonella swinhoei was cytotoxic, we observed increased cytotoxicity in the HP20SS fractions (see Table 2). For the two Neopetrosia spp., we observed consistent distribution of the activity. Overall, the HP20SS prefractionation strategy provided a cost-effective method to effectively desalt marine invertebrate extracts while providing an effective concentration of the organic components. Once extracts were fractionated, the HP20SS fractions were stored in 96-well format (10 mg/mL in DMSO) and daughter plates were made for screening. For all HP20SS fractions a material archive was maintained and stored at -80 °C in 96-well formatted polypropylene tubes (Figure 1).
We have been able to minimize the cost of the process by running eight columns in parallel in conjunction with metered solvent delivery from a one liter bottle. In terms of time, one person can process 16 samples in three hours using eight columns. Additionally, HP20SS can be regenerated and reused to minimize the overall cost. The best method we found for regenerating the HP20SS was to use a Soxhlet extractor with 1:1 dichloromethane:MeOH as an extraction solvent. Our second goal was to expand our methodology to shorten discovery timelines by optimizing our dereplication strategy and increasing our throughput. Once hits were identified, we wanted to establish a rapid and automated approach to pursue hits. Therefore, we developed an automated LCMS fractionation protocol. We have utilized the same method to generate natural product libraries using a small number of marine invertebrates, and the potential of this approach for rapid drug discovery has been recently published [17]. By combining the HP20SS fractionation with an automated LCMS fractionation protocol, we generated high-purity libraries for screening and rapid drug discovery. The overall goal of the approach was to eliminate bioassay-guided isolation by identifying active compounds directly from the library using MS and NMR. This process was effectively demonstrated with a subset of the library in a BRCA2 phenotype-selective screen [17]. However, generating purified libraries with our entire annual collection was not a feasible option, but the method was used mainly to generate focused libraries for specific screening programs. Nonetheless, the method was designed to integrate with our HP20SS library.
Although purified libraries are attractive, there are distinct advantages to screening a prefractionated library compared to a purified natural products library [12,18]. The time line to structure identification is significantly shorter for purified natural products libraries, but screening a partially purified library allows more chemical diversity to be sampled and can limit the number of highly polar compounds that can interfere with assays. For most sponge and tunicate extracts, fewer highly lipophilic components are present, for example, compared to extracts from microbial fermentations. Nonetheless, rapid methods for identification and dereplication are necessary for many HTS programs. By using an automated LCMS fractionation protocol, components from the HP20SS library can be rapidly purified and characterized by accurate mass measurements to facilitate dereplication. Additionally, as screening capacity increases, partly through miniaturization of screening platforms, the potential to develop large high-purity natural product libraries becomes more attractive.
Once hits from the HP20SS library were identified, a 200 μL sample from the HP20SS material archive was subjected to the LCMS fractionation protocol. The LCMS fractionation utilized a Q-tof micro mass spectrometer equipped with lockspray to enable accurate mass measurements. The effluent from the HPLC was directed into a splitter where a portion of the sample was infused into the Q-tof while most was directed into a 96-well collection plate. The collection time for each well was mapped onto the chromatogram using FractionLynx. After fractionation the contents of the collection plate could be split, one plate for screening and one plate for a material archive and NMR. As previously noted [17], the key to this separation strategy was the use of a Phenomenex 3 mm × 100 mm Onyx TM C18 monolithic HPLC column. The monolithic column has higher capacity compared to a traditional HPLC column [19] and has easily handled injections of HP20SS fractions (2 mg) dissolved in 200 μL of DMSO. Although we have successfully used this approach on numerous hits, two examples will be presented here.
The approach was applied to a project where an HP20SS fraction from Pseudoceratina purpurea was identified as a hit in a luciferase assay. A sample of the active fraction from the archive was subjected to the automated LCMS fractionation protocol. The contents of the collection plate were split with a portion being submitted for assay and the remaining portion for a material archive. The most active well (~50 μg) was analyzed by NMR and allowed rapid identification of the active compound as psammaplin A. The details of the assay and additional work have been published elsewhere [20]. In the end, the identification of the principal active component was achieved in less than 24 hours after the assay results were obtained on the LCMS fractions. However, it should be noted that on such a small scale, NMR analysis required a cryo probe operating at 600 MHz. The example demonstrates the speed at which active compounds can be identified by integrating a partially purified library with automated LCMS fractionation. Additionally, we have utilized this methodology to rapidly identify non-selective inhibitors. An active HP20SS fraction was identified as a hit in a HT screen, but based on field collection notes most likely contained alkyl pyridinium polymers, which have commonly hit in HTS. Therefore, we subjected an HP20SS sample to LCMS fractionation in an attempt to separate the pyridinium polymers from other potentially active natural products. The chromatogram showed a broad poorly defined peak eluting between three minutes and nine minutes (Figure 2). The mass spectra in this region of the chromatogram showed prominent ions that are consistent with halitoxin-like fragments at m/z 379.3123 (calcd for C 26 [21]. Overall, we could utilize the monolithic column to effectively elute the pyridinium polymers early in the chromatography with the subsequent separation of other components (Figure 2). The activity was correlated to the wells that exhibited halitoxin-like fragments in the mass spectra, and the project was quickly dropped. With respect to creating high-purity natural product libraries, application of HPLC in conjunction with mass spectrometry and evaporative light scattering detection was first demonstrated using plant extracts [22,23]. Although the potential of natural products as a source of pharmaceutical leads has clearly been demonstrated [24,25], a number of technical concerns have surfaced regarding natural product libraries. Libraries containing only pure compounds are costly and take more time to produce, but are more amenable to the high throughput paradigm due to decreased assay interference and the potential rapid structure identification of assay hits. As previously stated, the drawback to purified compound libraries is that less chemical diversity is sampled. Prior to generating a high-purity natural products library, several technical concerns must be addressed. For example, the method of purification, in part, must be automated. Since the size of a library is likely to be large, library storage and sample handling is a concern. The quantity of material in the library is another concern. Since modern spectroscopic instrumentation has become increasingly sensitive and modern assays require only small amounts of material, a high-purity library can be constructed and only contain small quantities (50 μg to 1 mg) of material. However, quantitation becomes difficult with microgram quantities, but can be addressed using ELS detection [26,27], CA detection [27], or by NMR [28]. 
Finally, data management quickly becomes difficult without an appropriate database.
Our LCMS fractionated library was constructed by assuming equal distribution of material among 80 wells in a collection plate with each collection plate representing one marine invertebrate. The drawback to assuming equal distribution was that some wells contained little or no material while other wells contained larger quantities of material. Another issue that surfaced was data management. We decided to obtain accurate mass measurements on analytes as the purification proceeded. The MS data provided a mechanism to track the contents of the library, but the raw data files from a Q-tof were large (~250 mb per HPLC run). However, we were able to process the data "on-the-fly" providing lock mass corrected, centroided accurate mass data. Although some information was lost, the data files were reduced to ten to fifteen mb per run allowing easy automated backup of data without the need for a large data server. Obtaining the MS data during fractionation was advantageous since it did not require retrieval of plates from storage and subsequent LCMS analysis for prioritization of hits and dereplication (Figure 3), which would add significant complication and require more time. Additionally, having the MS data in conjunction with taxonomy provided a mechanism to remove compound redundancy and known non-selective inhibitors prior to secondary screening (Figure 3). Overall, this effectively streamlined the process of prioritizing hits from primary screening. Plates were only removed from storage once a hit had been confirmed and yielded an appropriate dose response curve. Then, samples were removed from storage and analyzed by NMR. Overall, the method provided libraries and methods that were compatible with HTS timelines.
Recently, we began attempting to scale up the procedure in an attempt to provide more material for screening and structure elucidation. Utilizing an HP20SS fraction from a Pipestela sp. that contained large quantities of milnamide A (3) and milnamide D (4), the capacity of a 4.6 × 100 mm monolithic C18 column was compared to that of the 3 × 100 mm column. Since flow rates were ultimately a function of our collection volume, a flow of 2 mL/min could not be exceeded even though the scaled flow from 3 mm ID (1.5 mL/min) to 4.6 mm ID would be 3.53 mL/min. Nonetheless, a 50% increase in loaded material for the 4.6 mm ID column, from 2 mg to 3 mg, yielded similar resolution and only a slight change in retention time (See Figure 4). Overall, the use of a 4.6 mm ID column will increase the final amount of material in the natural product library, without the need to change the collection strategy from 96-well format.
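For readers who want to check the quoted number, the scaling assumes that the volumetric flow is increased in proportion to the column cross-sectional area (i.e. constant linear velocity), which gives:

F(4.6 mm) = F(3.0 mm) × (4.6/3.0)^2 = 1.5 mL/min × 2.35 ≈ 3.5 mL/min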
Conclusions
In conclusion, marine invertebrate extracts can be rapidly separated on HP20SS. The method effectively separates the organic constituents from the inorganic salts and can concentrate the active principal components. By creating partially purified libraries, daughter plates can be easily generated for new screening programs. Additionally, identifying compounds directly from the LCMS fractionated material archive facilitates rapid dereplication and purification of active components as compared to returning to the source organism. Integrating the HP20SS library with our automated LCMS fractionation protocol has increased the number of projects that we can pursue as well as allowing rapid dereplication.
An additional outcome of our work on HP20SS fractionation has been that the method was widely applicable. The potential for variation in chemistry among sponge and tunicate species due to variations in microbial populations, seasonal, or environmental changes does exist, and the chemistry within one species is likely to show some variation. However, variability should increase the chemical diversity of the library, and variability can be easier to identify with partially purified extracts as compared to crude extracts.
General
All NMR data were obtained at 600 MHz on a Varian INOVA equipped with a cryogenically cooled 1H channel. For gCOSY, gHMBC, and gHSQC experiments, standard vendor-supplied pulse sequences were used. Samples were dissolved in 250 μL of DMSO-d6
Extraction and Chromatography
All invertebrates were extracted with MeOH. The MeOH extract (24 mL) was loaded onto HP20SS (300 mg) and subsequently dried using a centrifugal evaporator. After the sample was dried, the "loaded" HP20SS was transferred to a 2.5 × 9 cm polypropylene column fitted with a frit. Fifteen mL of each solvent (100% H2O, 25% IPA/H2O, 50% IPA/H2O, 75% IPA/H2O, and 100% MeOH) were pushed through the sample using a 60 mL syringe fitted to the top of the column to generate five eluates (FW, F1, F2, F3, F4, respectively). The solvents were removed from all samples using a centrifugal evaporator.
LCMS Fractionation
Experimental details are the same as those previously published [17].
LCMS Fractionation of Psammaplin A Containing Sample
The LCMS fractionation of the active HP20SS fraction (4WA7 = FJ04-4-36-F2) utilized a sample of archive material (200 μL, 5.0 mg/mL) that was subjected to the fractionation scheme. The active fraction was number six and eluted between eight and nine minutes.
|
2014-10-01T00:00:00.000Z
|
2008-06-01T00:00:00.000
|
{
"year": 2008,
"sha1": "f4043a98f2c13f9c10fb7fe243d406dc69542608",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/13/6/1372/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f4043a98f2c13f9c10fb7fe243d406dc69542608",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
265163583
|
pes2o/s2orc
|
v3-fos-license
|
Utilization of mango seed oil as a cocoa butter replacer for the development of innovative chocolate
ABSTRACT Chocolate is one of the most popular food types and flavors in the world. The key ingredient in many chocolate products is cocoa butter, due to its unique fatty acid profile. The expensive nature of cocoa butter has stimulated extensive research into fats that are cheaper and more easily available and can be used as cocoa butter substitutes. The mango (Mangifera indica) is known as the king of fruits due to its rich nutritive profile. The major parts of the mango fruit are peel, pulp, and seed. Mango seed is usually discarded as waste, although it is a source of edible oil (7–12%). The current study was designed to produce chocolate with mango seed oil as a cocoa butter replacer. Mango seed oil was extracted using a soxhlet apparatus and its physicochemical properties were evaluated. The extracted oil was used in chocolate preparation in different proportions (0%, 30%, 70%, and 100%). Furthermore, the chocolate was examined for the effect of storage (21 days) on product quality and sensory attributes at 7-day intervals. Current results show that mango seed oil has a valuable fat profile containing palmitic acid (C16:0) 26%, stearic acid (C18:0) 36%, and oleic acid (C18:1) 33%. Moreover, the innovative chocolate showed higher antioxidant activity compared to the control at different storage intervals. In addition, chocolate prepared with different proportions of mango seed oil showed higher sensory scores compared to the control sample. The findings suggest that mango seed oil can replace cocoa butter in chocolate and reduce/manage mango seed waste while improving the product's antioxidant activity and nutritional value.
Introduction
Mango is the dominant species of the genus Mangifera and belongs to the family Anacardiaceae. Mango fruits are widely produced in tropical regions and are considered important among the major fruits. Several types of foodstuffs can be made from mango: pickles, nectar, juice, jam, and leather along with slices are prepared from about 20% of the mangoes produced globally. [1] The mango, the "king of fruits", is the second most popular fruit in Pakistan. Mango is a well-known tropical and subtropical fresh fruit that is high in vitamins A, B, and C, with high water, protein, sugar, iron, fat, and fiber contents. [2] Each variety has its own shape, color, size, aroma, and taste. [3] Pakistan is the third-largest exporter of mangoes in the world and ranks fifth among mango-growing countries. [4] In Pakistan, around 85,000 tons of mango fruit are exported annually. The cultivated area of mango is about 4.25 million acres, with yields of about 1.77 million tons yearly. [5]

Mango has three main parts: pulp, peel, and seed. The pulp of the mango fruit is consumed fresh or processed into a variety of products such as dried mango, mango juice, mango leather, and mango chutney. During mango processing, 40-60% of the fruit is discarded as waste. The mango peel constitutes 12-15% and the seed 15-20% of the total waste. Mango fruits contain 10-25% seed, and within the seed the kernel constitutes 45-75%, corresponding to about 20% of the total mango fruit. However, about 1 million tons of fresh mango seeds are treated as waste. Depending on the variety, the mango seed kernel contains 6% protein, 77% carbohydrate, 11% fat, 2% crude fiber, and 2% ascorbic acid. [6] Mango seed has a valuable nutritional profile as a source of edible oil; the oil content of mango seed kernels is 7-15%. [7] Mango kernel oil is one of the few tropical fruit oils considered a cocoa butter alternative, due to its unique physical and chemical characteristics and a fatty acid profile similar to those of cocoa butter and shea butter. [8] Its major fatty acids are stearic (24-57%), oleic (34-56%), and palmitic acid (3-18%).

Cocoa butter is an essential ingredient for the manufacturing of chocolate and other confectionery products. It is the fatty phase of chocolate and is responsible for the texture and melting behavior of chocolate. Cocoa butter is a basic fat obtained by pressing ground, roasted, and decorticated cocoa beans; the fruit of the cocoa plant is known as the cocoa bean. [9] The fatty acid profile of cocoa butter shows that it contains 26% palmitic acid, 36% stearic acid, and 33% oleic acid. [10] The fatty acid profile of mango seed oil shows that it can be used as an alternative to cocoa butter in food products like chocolate. Cocoa butter alternatives are not chemically identical to cocoa butter, but they are compatible with it. [11] Mango seed oil/fat contains a number of essential fatty acids that are not present in conventional cocoa butter replacers. One of the main advantages of mango seed fat/oil is that it has no trans-fats, unlike conventional shortenings and cocoa butter replacers; these trans-fats are linked to numerous health complications. [8] Mango seed, being food industrial waste, can be effectively utilized to obtain cocoa butter replacers that are economical, healthier, and environmentally friendly.
The current study was conducted to produce chocolate with mango seed oil as a cocoa butter replacer. We first explored the physicochemical properties of the extracted mango seed oil (MSO). MSO was then used to develop chocolate samples with different proportions of MSO (0%, 30%, 70%, and 100%). In addition, the physicochemical, quality, and sensorial characteristics of the prepared chocolate were evaluated, and the chocolate was assessed for the effect of storage on product quality and sensorial properties.
Procurement of raw material
The raw material was purchased from the local market in Multan. Mango seeds were obtained from the mango processing units located in the industrial estate in Multan. All mango seeds came from fully ripe mangoes that were being processed at these units. All chemicals and reagents were of analytical grade and were provided by G.M. Scientific Supplies, Multan. Glassware was made available by the Department of Food Science and Technology and the Central Lab System of MNS-UAM.
Washing and drying of Mango seed kernel
The mango seeds were washed with clean running water. After washing, the seeds were dried using conventional solar drying. About 15–18 kg of mango seeds were dried in sunlight for 4–5 days, to a final moisture content of 9.5 ± 1% on average.
Preparation and storage of Mango seed kernel powder
Once the mango seeds were completely dried, they were opened with a specially designed cutter to obtain the seed kernels. The dried kernels were then manually cut into 5–6 mm pieces, which were converted into a fine, uniform powder using a grinding mill to facilitate oil extraction.
Dried mango seed kernels were ground using a universal grinding mill (Chenwei, China) at 2200 rpm with sieve no. 40 (500 µm). The collected powder was stored at 4°C in sealed packages until further use.
Extraction of oil
Following the procedures described by the Association of Official Analytical Chemists, the resulting powder was used for oil extraction by two different processes: solvent extraction and mechanical compression, the latter requiring light heating. [12] Oil from the mango seed kernel was extracted by both methods. For solvent extraction, the dried kernel powder was filled into paper thimbles, each containing 5 g of mango kernel powder. Hexane (450 mL) was used as the solvent; the round-bottom flask was filled with hexane, the Soxhlet apparatus was set to 75°C, and the thimbles were left in the apparatus for about 5 h of extraction. Afterward, the fat was recovered from the hexane using a rotary evaporator, and to ensure complete removal of hexane, the samples were placed in a hot-air oven at 65°C for 60–90 min.
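Although the extraction yield formula is not given explicitly, Soxhlet yields such as those reported later in Table 2 are conventionally calculated gravimetrically; the expression below is the standard relation and is not quoted from the paper:

$$\text{Oil yield (\%)} = \frac{m_{\text{oil recovered}}}{m_{\text{kernel powder}}} \times 100$$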
Fatty acid profile
Mango seed oil's fatty acid composition was assessed using GC-MS (AOAC 996.06) as described by Bigelow et al. [13] Methanol was used to form fatty acid methyl esters. A sample was introduced into the GC system fitted with an MS detector. Helium was used as the carrier gas, and the oven temperature was programmed from 70°C to 280°C. The injector and detector temperatures were set at 240°C and 250°C, respectively. Peaks on the chromatogram were identified using retention data from tested standard samples. Finally, total fatty acid contents were evaluated and expressed as percentages (%). [14]

Free fatty acids (FFA)

The free fatty acid percentage was determined by titration in neutralized ethanol (95%) against NaOH solution. [12] For the determination of free fatty acids in mango kernel fat, a sample of around 10 g was transferred into a clean, dry conical flask together with 25 mL of neutralized ethanol (95%) and mixed until the sample dissolved in the ethanol. Once the sample was completely dissolved, 2–3 drops of phenolphthalein were added as an indicator. The contents were then titrated against 0.1 N sodium hydroxide (NaOH) solution with continuous shaking until a light pink color persisted for at least 35 s. The free fatty acid percentage was calculated as:

Free fatty acids (%, as oleic acid) = (mL of alkali used × N × 28.2) / weight of sample (g),

where N is the normality of the alkali.
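As a quick illustration of how the titration result maps onto an FFA value, the figures below are hypothetical and were chosen only so that the result lands near the mean later reported for mango kernel fat:

$$\text{FFA (\% as oleic)} = \frac{10.5\ \text{mL} \times 0.1\ \text{N} \times 28.2}{10\ \text{g}} \approx 2.96\%$$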
Peroxide value
The peroxide value was determined in terms of the iodine liberated by the reaction between hydroperoxides and iodide ions, following the procedure of AOAC Method No. Cd 8b-90. [12] A 5 g sample was placed in a 250 mL Erlenmeyer flask, and 30 mL of a glacial acetic acid–chloroform solvent mixture was added. The flask was swirled for about 60 s to dissolve the oil completely in the solvent mixture. Freshly prepared potassium iodide (KI) solution (0.5 mL) was then added, and the liberated iodine was titrated against standardized sodium thiosulfate solution.
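The passage above breaks off before the calculation step; for completeness, the peroxide value in an iodometric titration of this kind is conventionally obtained from the standard relation below (not quoted from the paper), where S and B are the thiosulfate volumes consumed by the sample and the blank and N is the thiosulfate normality:

$$\text{PV (meq O}_2\text{/kg)} = \frac{(S - B) \times N \times 1000}{\text{weight of sample (g)}}$$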
Iodine value
The iodine value measures the degree of unsaturation; chemically, it is the mass of iodine in grams absorbed by one hundred grams of oil or fat, and it was determined as described by AOAC, [12] Method No. Cd 1-25. The fat extracted from the mango seed kernel was dissolved in 5 mL of carbon tetrachloride (CCl4) in a stoppered flask, and 25 mL of Wijs solution was added. The flask was then kept in a dark place for an hour, after which 20 mL of 15% potassium iodide solution and 25 mL of distilled water were added. The contents of the flask were titrated against sodium thiosulfate (Na2S2O3) solution using a freshly prepared starch indicator solution. A blank reading without the sample was also recorded. The volume of 0.1 N sodium thiosulfate used by the blank minus the volume used by the sample gives the sodium thiosulfate equivalent of the iodine value of the fat, calculated as:

Iodine value (g I2/100 g) = 12.69 × N × (B − S) / weight of sample (g),

where N is the normality of the Na2S2O3, S is the volume of Na2S2O3 used for the sample, and B is the volume of Na2S2O3 used for the blank.
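A hypothetical titration, with figures invented purely to illustrate the arithmetic and to fall near the mean iodine value reported in the results, would read:

$$\text{IV} = \frac{12.69 \times 0.1 \times (36.0 - 23.3)}{0.4} \approx 40.3\ \text{g I}_2/100\ \text{g}$$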
Specific gravity
Specific gravity was measured at ambient temperature using a pycnometer fitted with a capillary-bored stopper, following the procedures in AOAC. [12] Before measuring the specific gravity of the sample, the pycnometer was calibrated by filling it with pure water and weighing the net mass of water.
The specific gravity of the sample at ambient temperature was then calculated as:

Specific gravity = (C − A) / (B − A),

where A is the weight of the empty specific-gravity bottle, B is the weight of the bottle filled with water, and C is the weight of the bottle filled with fat.
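For example, with illustrative (not measured) weights of A = 20.00 g, B = 45.00 g, and C = 42.90 g, the calculation gives:

$$\text{Specific gravity} = \frac{42.90 - 20.00}{45.00 - 20.00} = 0.916$$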
Saponification value
The saponification value is the number of milligrams of potassium hydroxide required to fully saponify 1 g of oil or fat. The saponification value of mango kernel fat was determined following AOAC, [12] Method No. Cd 3a-94. A 2 g sample was placed in a conical flask and dissolved in 25 mL of 0.5 N alcoholic potassium hydroxide solution. The reaction mixture was then refluxed using a water condenser in a water bath for half an hour. The resulting solution was allowed to cool and titrated with 0.5 N HCl solution after addition of 1 mL of phenolphthalein indicator. A blank reading was measured separately.
Saponification value (mg KOH/g) = 56.1 × N × (B − S) / weight of sample (g),

where N is the normality of the HCl, S is the volume of HCl used for the sample, and B is the volume of HCl used for the blank.
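Plugging in hypothetical titration volumes, chosen only to illustrate the arithmetic, gives:

$$\text{SV} = \frac{56.1 \times 0.5 \times (38.6 - 25.05)}{2} \approx 190\ \text{mg KOH/g}$$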
Product development
Control chocolate and cocoa-butter-replaced chocolate (CBRC) containing mango kernel oil (MKO) were produced using laboratory-scale tools and development procedures. Cocoa powder, sugar, milk powder, and cocoa butter were pre-mixed using a mixer. After refining through a sieve, the pre-mix underwent an additional conching stage for at least 1 hour at 50°C. Once the temperature was low enough, the slurry was used to fill small molds (food-grade plastic material) with manual tempering. Each CBRC piece weighed about 12 g. The molded chocolate was stored wrapped in polypropylene sheets. [7] The treatment plan for chocolate preparation is presented in Table 1.
Moisture contents
Moisture content is an important constituent of any product. The dry matter of a food item depends on its moisture percentage, which directly affects the economics for both processor and consumer, and moisture content strongly influences the properties of food items. The prepared chocolate was evaluated for moisture percentage according to the method described by the Association of Official Analytical Chemists (AOAC), [12] method No. 44-15A. Moisture contents were determined with a drying oven (SLN-53-STD, POL-Eko-Apparatus). About 2 g of sample was placed in a dried, pre-weighed china dish, which was then dried in an oven for 6–12 hours at 105 ± 5°C.
Ash contents
The ash content of the prepared chocolate was evaluated according to the protocol of the Association of Official Analytical Chemists (AOAC), [12] method No. 08-01, using a muffle furnace (SNOL 39/1100, Utena, Lithuania). Before being placed in the muffle furnace, about 2 g of sample in a clean, accurately weighed crucible was charred over a spirit lamp. The muffle furnace was operated at 600°C. After completion of ashing, the furnace was allowed to cool, and the crucible was then placed in a desiccator for cooling before weighing.
Crude fat
The crude fat content of the manufactured chocolate was determined according to the Association of Official Analytical Chemists (AOAC), [12] method No. 30-25. The sample (roughly 30 g) was weighed into a thimble, and n-hexane was used as the solvent. Fat was extracted from the sample in the Soxhlet apparatus by adjusting the reflux rate to 4–5 drops of hexane per second for almost 2–3 hours. After 6–7 siphonings (washings) were completed, the thimble was removed from the apparatus, dried in an oven for 1 h at 105°C, and weighed. The petri plates were then cooled in a desiccator and weighed until no further decrease in weight was observed. Crude fat content was calculated as:

Crude fat (%) = (weight of extracted fat / weight of sample) × 100.
Crude fiber
The crude fiber content of the chocolate prepared with mango seed oil was evaluated according to AOAC method No. 32-10. [12] For this analysis, the sample (about 15 g) was taken in a 250 mL glass beaker with about 200 mL of 2.5% H2SO4 and heated at 90°C for 2 h for acid digestion. After digestion, the sample was filtered through filter paper and the residue was washed gradually with hot water until free of acid. The washed residue was transferred to a 250 mL glass beaker for digestion with an alkali solution, following the same procedure used for acid digestion. After both digestions were complete, the sample was transferred to a crucible, dried, and placed in a muffle furnace.
Crude protein
The protein content of the prepared chocolate was evaluated using the Kjeldahl apparatus according to the method recommended by the Association of Official Analytical Chemists (AOAC), [12] method No. 46-10. The Kjeldahl procedure comprises three steps: digestion, distillation, and titration. A 1 g sample of the prepared chocolate was placed in a digestion tube with 15 mL of H2SO4 and 1 g of digestion mixture, and the tube was placed in the digestion assembly. After digestion, the sample solution was diluted for distillation. During distillation, the liberated ammonia was collected in boric acid, which was then titrated against 0.1 N HCl.
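The conversion from titration volume to protein content is not spelled out in the text; the usual Kjeldahl arithmetic, with 6.25 assumed here as the generic nitrogen-to-protein factor, is:

$$\text{N (\%)} = \frac{V_{\text{HCl}} \times N_{\text{HCl}} \times 1.4007}{\text{weight of sample (g)}}, \qquad \text{Crude protein (\%)} = \text{N (\%)} \times 6.25$$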
Total phenolic content
The total phenolic content (TPC) of the chocolate was determined using the Folin–Ciocalteu reagent at the prescribed intervals (0, 7, 14, and 21 days) as explained by Chen et al. [15] with slight changes. One hundred microliters of diluted sample was added to 2.50 mL of (1:10) diluted Folin–Ciocalteu reagent, and 2 mL of saturated sodium carbonate solution (75 g per liter) was added after 4 min. After incubation at room temperature for 120 min, the absorbance of the mixture was recorded at 760 nm, with the respective solvent used as the blank. Gallic acid was used as the standard to obtain a calibration curve. Results are expressed as milligrams of gallic acid equivalents, i.e., mg GAE per 100 g wet weight of sample.
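To make the gallic acid calibration step concrete, a minimal sketch of the calculation is given below; the standard concentrations, absorbance readings, dilution factor, extract volume, and sample mass are hypothetical placeholders rather than values from the study:

```python
import numpy as np

# Hypothetical gallic acid standards (mg/L) and their absorbances at 760 nm.
std_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
std_abs = np.array([0.02, 0.11, 0.21, 0.40, 0.79])

# Linear calibration curve: absorbance = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def tpc_mg_gae_per_100g(absorbance, dilution_factor, extract_volume_ml, sample_mass_g):
    """Convert a sample absorbance into mg gallic acid equivalents per 100 g of sample."""
    conc = (absorbance - intercept) / slope        # mg GAE per litre of the diluted extract
    conc *= dilution_factor                        # undo the dilution
    mg_gae = conc * extract_volume_ml / 1000.0     # mg GAE present in the whole extract
    return mg_gae / sample_mass_g * 100.0          # scale to 100 g of chocolate

# Example call with placeholder numbers only.
print(round(tpc_mg_gae_per_100g(0.35, dilution_factor=10,
                                extract_volume_ml=10.0, sample_mass_g=2.0), 1))
```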
DPPH assay
Free radical scavenging activity was examined spectrophotometrically using the technique suggested by Kassim et al. [16] A 1.0 mL methanol (MeOH) solution was thoroughly mixed with 170 µL of 2,2-diphenyl-1-picrylhydrazyl (DPPH) solution and 1.0 mL of oil sample. The contents were mixed continually and then allowed to stand in a cool, dark place for 30 min at ambient temperature. After incubation, the color change of the reaction mixture was measured at 517 nm. DPPH values were expressed as percent inhibition. BHT, α-tocopherol, and ascorbic acid were employed as benchmarks.
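The percent-inhibition values reported later are conventionally derived from the absorbance of the DPPH blank and of the sample mixture at 517 nm; the relation below is the standard one and is not quoted from the paper:

$$\text{Inhibition (\%)} = \frac{A_{\text{control}} - A_{\text{sample}}}{A_{\text{control}}} \times 100$$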
pH determination

The pH of the chocolate sample was determined with a pH meter according to the method explained in AOAC. [12] A sample of the product was taken, the electrode of the pH meter was standardized with a buffer solution at pH 7, and the electrode was then dipped into the sample and the reading noted.
Sensory evaluation of products
The prepared chocolate was evaluated by a panel of eight judges at the Department of Food Science and Technology, MNS-University of Agriculture, Multan. The chocolate was assessed by the panelists on a 9-point hedonic scale, where 9 represented "like extremely" and 1 "dislike extremely". Sensory evaluation of the chocolate covered texture, appearance, taste, flavor, and overall acceptability, following the methodology described by Lawless and Heymann. [17]
Results and discussion
In the current research, mango seed oil was extracted by both solvent extraction and mechanical extraction. The solvent extraction method gave a higher yield than mechanical extraction. The Desi variety showed the highest yield (12.49 ± 0.05%), followed by the other varieties. The results for oil yield from the different mango seed varieties are shown in Table 2 and Figure 1.
Fatty acids profiling of mango seed oil
The fatty acid composition of mango seed kernel oil was analyzed by GC-MS, providing a comparison of its saturated and unsaturated fatty acid composition; the analysis followed the procedures outlined in Bigelow et al. [13] The compositional study showed that 53.30% of the fatty acids were saturated and 44.74% unsaturated, with 43.54% oleic acid, 36.18% stearic acid, 7.46% linolenic acid, 2.80% linoleic acid, and 6.23% palmitic acid. Consistent with published literature, [14] oleic acid and stearic acid, at 43.54% and 36.18%, were the major fractions in mango seed kernel oil. The fatty acids with the lowest values were palmitic, linoleic, and linolenic acids, at 10.06%, 6.00%, and 2.48%, respectively. These results are presented in Table 3.
Free fatty acids
The FFA content measures the extent to which the glycerides in an oil have been broken down by lipase activity, and it is a guide to the purity and freshness of the oil. A higher free fatty acid value darkens the color of oils and fats and promotes their decomposition and rancidity; antioxidants are therefore used to inhibit the chain reactions involved and to preserve oxidative stability and quality. A high free fatty acid value can also shorten the storage life of fats and oils. Mango seed fat was analyzed, and the findings in Table 3 show a mean FFA value of 2.96 ± 0.01%. According to the literature, free fatty acids in edible fats and oils range between 2.92% and 3.18%. [18] This also implies that mango seed oil/fat contains natural antioxidants that slow down the formation of FFAs.
Peroxide value
Peroxide value is a common measure of lipid oxidation. The results show a mean peroxide value of 2.60 ± 0.08 meq/kg for mango kernel oil (Table 4). The low peroxide value indicates that mango kernel fat contains a large proportion of saturated fats and can be used in the production of edible food products. These results are consistent with the study of Jahurul et al. [8]
Iodine value
Oxidation lowers the degree of unsaturation of fats; products with a higher level of unsaturation are more vulnerable to rancidity and show a higher iodine value. In the current research, the fats and oils had iodine values ranging from 40.28 to 42.69 g/100 g oil (Table 4). Mango seed kernel fat has been reported to have iodine values between 32.0 and 60.7 g/100 g; the mean value found here was 40.28 ± 0.6 g/100 g. A low iodine value indicates that the carbon–carbon double bonds in the fat sample are only slightly susceptible to oxidation, whereas a high iodine value indicates a high degree of unsaturation and oxidizability. These results are in accordance with those reported previously by Jahurul et al. [8]
Proximate analysis of chocolate (product)
The proximate analysis of the chocolate covered moisture, ash, fat, fiber, and protein. For moisture, the lowest value was noted in treatment T0 (5.23 ± 0.03), followed by T1 (5.48 ± 0.02), T2 (5.77 ± 0.07), and T3 (6.27 ± 0.01); the maximum moisture value was thus seen in T3 (6.27 ± 0.01). The highest ash content was measured in T1 (3.20 ± 0.04), followed by T2 (3.10 ± 0.02), T3 (3.00 ± 0.1), and T0 (2.96 ± 0.06); the lowest ash content was thus measured in T0. The highest crude fat content was present in T0 (33.09 ± 1.00), followed by T1 (31.27 ± 1.73), T2 (30.42 ± 0.96), and T3 (29.63 ± 0.99); the lowest crude fat value was found in T3 (29.63 ± 0.99). The highest fiber content was present in T0 (2.71 ± 0.16) and the lowest in T2 (2.19 ± 0.10); with increasing mango kernel oil, the crude fiber content of the chocolate tended to increase, the mean values for the remaining treatments being T1 (2.34 ± 0.10), T2 (2.19 ± 0.03), and T3 (2.61 ± 0.06). The highest protein content was present in T3 (9.30 ± 0.16) and the lowest in T1 (7.45 ± 0.23); the other mean values were T0 (8.45 ± 0.23) and T2 (8.32 ± 0.45). The proximate composition of the chocolate is shown in Table 5.
Total Phenolic Content
Statistical results for total phenolic content (TPC) indicated that the effects of treatment, storage, and their interaction on the TPC of chocolate were highly significant (Table 6). The effect of treatment (different percentages of MSO) showed the highest TPC with a mean value of 29.48 ± 0.8 mg GAE/100 g and the lowest with 28.26 ± 0.05 mg GAE/100 g. The interactive effect of treatment and storage time indicated that the significantly highest values were observed in T3 at 0, 7, 14, and 21 days of storage, with mean values of 29.79 ± 0.03, 29.54 ± 0.5, 29.41 ± 0.04, and 29.19 ± 0.5 mg GAE/100 g, respectively. The results show that the TPC of chocolate decreases with storage time. A similar trend was reported by Jacimovic et al., [19] with values ranging from 10.55 to 39.82 mg GAE/100 g. Phenolic compounds are sensitive to higher temperatures; the relatively high phenolic contents here reflect the fact that process temperatures did not exceed 70°C. With longer storage, the quantity of phenolic compounds decreases owing to oxidative agents.
DPPH Assay
The DPPH radical scavenging percentage of all samples was determined at 0, 7, 14, and 21 days. The influence of storage and treatment on the DPPH activity of the chocolate is shown in Table 5, which indicates that the maximum DPPH activity was observed at 0 days in T3 (49.75 ± 0.04%), whereas the minimum was observed in T1 (41.29 ± 0.05%) at 21 days of storage. Mean DPPH values in the control treatment T0 were 47.12 ± 0.05%, 45.10 ± 0.08%, 43.09 ± 0.05%, and 41.47 ± 0.07% at 0, 7, 14, and 21 days of storage, respectively; the lowest percentage was observed at 21 days and the highest at 0 days. In T1, the mean DPPH values were 47.57 ± 0.03%, 45.31 ± 0.04%, 43.80 ± 0.05%, and 41.29 ± 0.05% at 0, 7, 14, and 21 days, respectively, again highest at 0 days and lowest at 21 days. In T2, the mean values were 48.29 ± 0.06%, 46.27 ± 0.02%, 44.47 ± 0.10%, and 42.66 ± 0.03% at 0, 7, 14, and 21 days, respectively, with the highest DPPH at 0 days and the lowest at 21 days. In T3, DPPH values were 49.75 ± 0.04%, 47.62 ± 0.06%, 45.83 ± 0.03%, and 42.21 ± 0.11% at 0, 7, 14, and 21 days, respectively, with the maximum at 0 days and the minimum at 21 days. Momeny et al. [20] reported similar results, with radical scavenging potentials of 41% to 43%. The current study is also in line with Jacimovic et al., [19] whose antioxidant activity (expressed as percent inhibition of oxidation) ranged from 45% to 55%. Antioxidant capacity is directly linked to the amount of phenolic compounds in the product; as time passes, the phenolic compounds degrade owing to oxidative agents, reducing the percent inhibition.
pH
The pH of all samples was determined with a pH meter at intervals of 0, 7, 14, and 21 days. The influence of storage and treatment on the pH of the chocolate is shown in Figure 3. The maximum pH was observed at 21 days in T2 (6.60 ± 0.04), whereas the minimum pH was observed in T0 (6.19 ± 0.01) at 0 days of storage. Mean pH values in the control treatment (T0) were 6.19 ± 0.02, 6.29 ± 0.01, 6.33 ± 0.01, and 6.39 ± 0.01 at 0, 7, 14, and 21 days of storage, respectively; the lowest pH was observed at 0 days and the highest at 21 days. In T1, the mean pH values were 6.30 ± 0.01, 6.35 ± 0.01, 6.45 ± 0.01, and 6.47 ± 0.01 at 0, 7, 14, and 21 days, respectively, with the highest pH at 21 days and the lowest at 0 days. In T2, the mean values were 6.34 ± 0.02, 6.49 ± 0.03, 6.58 ± 0.01, and 6.60 ± 0.04 at 0, 7, 14, and 21 days, respectively; the maximum pH was observed at 21 days and the minimum at 0 days. In T3, the pH was 6.20 ± 0.01, 6.40 ± 0.03, 6.42 ± 0.04, and 6.45 ± 0.02 at 0, 7, 14, and 21 days, respectively, again maximal at 21 days and minimal at 0 days. The obtained results are also approximately equal to the values found by Rao and Tamber. [21] The increase in pH may be attributed to free fatty acids being released with increasing storage time.
Hardness (N)
Statistical results for the hardness of chocolate prepared with mango seed oil indicated that the effect of storage was highly significant, while the effect of treatment was non-significant. The effect of treatment on hardness is shown in Figure 2. The highest hardness was observed in the chocolate prepared with mango seed oil in T3 (12.36 ± 0.02 N), whereas the lowest hardness value observed was 11.68 ± 0.04 N. These results are in line with those reported by Mantihal and fellow researchers, [22] who found that the hardness of chocolate ranges from 10.25 to 13.64 N.
Breaking strength (N)
Statistical results for the breaking strength of chocolate prepared with mango seed oil showed a highly significant effect. The highest breaking strength was observed in chocolate prepared with mango seed oil in T3, with a mean value of 19.09 ± 0.05 N, whereas the lowest breaking strength was noticed in T0, with a mean value of 16.31 ± 0.17 N (Figure 6). The breaking strength of chocolate was obtained according to the protocol followed by Nurhayati et al. [16]

Cutting strength (N)

Statistical results for the cutting strength of chocolate prepared with mango seed oil also indicated a highly significant effect. The highest cutting strength was observed in chocolate prepared with mango seed oil in T3, with a mean value of 29.30 ± 0.17 N, whereas the lowest cutting strength was observed in T0, with a mean value of 28.30 ± 0.07 N (Figure 7). These results are similar to those of Lasta and colleagues, [23] who found that the cutting strength of chocolate ranged from 25 to 32 N. The effect of treatment on cutting strength is shown in Figure 4.
Sensory evaluation of chocolate
Sensory scores for each parameter of the product are presented in Table 7. The highest mean value for appearance was found in T3 at 0 days (7.63), and the lowest in T1 at 14 days (5.50). The highest mean value for taste was found in T2 at 0 days (7.75), and the lowest in T1 at 21 days (4.75). The highest mean value for flavor was found in T0 at 0 days (8.50), and the lowest in T0 at 14 days (6.25). The highest mean value for texture was found in T2 at 0 days (7.63), and the lowest in T0 at 21 days (6.50). The highest mean value for overall acceptability was found in T1 at 0 days (7.75), and the lowest in T0 at 14 days (6.13). [17]
Conclusion
In the present study, the solvent extraction method gave a higher yield of mango seed oil than mechanical extraction. Mango seed oil is a good source of fatty acids and is safe for edible uses, and the current findings show that it can be used in the production of chocolate. The proximate analysis of the chocolate prepared with mango seed oil gave moisture 6.27 ± 0.01%, ash 3.00 ± 0.10%, fat 29.63 ± 0.99%, fiber 2.61 ± 0.06%, protein 9.32 ± 0.16%, and NFE 49.16 ± 0.04%. For the chocolate made with mango seed oil, TPC was 29.79 ± 0.03, 29.54 ± 0.02, 29.41 ± 0.03, and 29.19 ± 0.05 mg GAE/100 g at 0, 7, 14, and 21 days, respectively. Antioxidants can delay, inhibit, or prevent the oxidation of oxidizable materials by scavenging free radicals and diminishing oxidative stress, and the antioxidant activity of the chocolate reflects the quality of the product. The obtained DPPH values were significant, at 48.18 ± 0.04%, 46.07 ± 0.05%, 44.30 ± 0.06%, and 41.91 ± 0.06% at 0, 7, 14, and 21 days, respectively; the lowest value was 41.29 ± 0.05% in T1 at 21 days, and the highest was 48.18 ± 0.04% in T3 at 0 days. The highest pH, 6.60 ± 0.02, was observed in T2 at 21 days, and the lowest, 6.20 ± 0.02, in T3 at 0 days. At 0 days, T3 had the highest mean appearance value (7.63), and T1 had the lowest (5.50) at 14 days. For taste, the highest mean value was 7.75 at 0 days and the lowest was 4.75 in T1 at 21 days. For flavor, T0 had the highest mean value (8.50) at 0 days and the lowest (6.25) at 14 days. For texture, T2 had the highest mean value (7.63) at 0 days, while T0 had the lowest (6.50).
Table 1 .
Treatment plan for chocolate.
Table 2 .
Oil yield (%) from mango seed of different varieties.
Figure 2. Effect of treatment on hardness.
Table 3 .
Fatty acid profile of mango seed oil.
Table 4 .
Mean values of physicochemical properties of Mango Kernel Oil.
Table 5 .
Proximate composition of the chocolate.
Table 6 .
Quality analysis of chocolate.
Table 7 .
Sensory evaluation of the product.
Severe musculoskeletal time-loss injuries and symptoms of common mental disorders in professional soccer: a longitudinal analysis of 12-month follow-up data
Purpose Psychological factors have been shown to be predictors of injury in professional football. However, this appears to be a two-way relationship, as severe musculoskeletal time-loss injuries have been shown to be associated with the onset of symptoms of common mental disorders (CMD). No longitudinal study has been performed to explore this interaction between symptoms of CMD and injuries. The purpose of this study was to explore the interaction between severe musculoskeletal time-loss injuries and symptoms of CMD in professional football players over a 12-month period. Methods Players were recruited by their national players’ unions in five European countries. Symptoms of CMD included in the study were related to distress, anxiety/depression, sleep disturbance and adverse alcohol use. Results A total of 384 professional football players were enrolled in the study, of whom 262 (68%) completed the 12-month follow-up period. The mean age of the participants at baseline was 27 ± 5 years, and they had played professional football for 8 ± 5 years on average. Symptoms of CMD at baseline were not associated with the onset of severe musculoskeletal time-loss injuries during the follow-up period, with relative risks (and 95% CI) ranging from 0.6 (0.3–1.0) to 1.0 (0.5–2.2). In contrast, severe musculoskeletal time-loss injuries reported at baseline were associated with the onset of symptoms of CMD during the follow-up period, with relative risks ranging from 1.8 (0.8–3.7) to 6.9 (4.0–11.9). Conclusion No relationship was found between symptoms of CMD and the onset of severe musculoskeletal time-loss injuries. However, professional football players who suffered from severe musculoskeletal time-loss injuries are likely to develop subsequent symptoms of CMD. This study emphasizes the need for an interdisciplinary medical approach, which focuses not only on the physical but also on the mental health of professional football players. An early identification of players at risk of symptoms of CMD, such as those suffering from severe musculoskeletal injuries, creates the opportunity for an interdisciplinary clinical medical team to treat the players in a timely and adequate manner. Level of evidence Prospective cohort study, Level II.
Introduction
The overall risk of injury in professional football is estimated to be 1000 times higher than in typical high-risk industrial occupations such as manufacturing, construction or the service sector [9]. In the UEFA Elite Club Injury Study during the seasons 2001-2008, a mean time-loss injury rate of 8.0 injuries per 1000 h was found, reaching up to 27.5 time-loss injuries per 1000 match hours. This study showed that typically a squad of 25 players could expect at least 50 injuries per season [12]. A 15-year epidemiological follow-up study among professional football players in Japan found that 2947 injuries occurred in 3984 matches, with a mean annual injury rate of 21.8 per 1000 player hours [2]. Another 5-year prospective cohort study among professional football players competing in the Australian A-League reported 58.9 to 109.7 time-loss injuries per squad of 25 players [19]. Time-loss injuries generally require medical treatment that can last from several days to several months, having a significant negative effect on the performance of the team [3,21]. In addition, time-loss injuries that result in a long period without training or competition are considered major adverse events for the career of a football player, in the worst case even leading to early retirement [15,31,34].
Several studies showed that not only physical but also psychological factors may influence the risk of a musculoskeletal injury [22][23][24][28]. Psychological factors such as trait anxiety, negative-events-stress and daily hassle were identified as predictors of injury in professional football [23]. While most studies have been directed towards the incidence of musculoskeletal injuries, more attention has recently been given to the occurrence of symptoms of distress, anxiety/depression, sleep disturbance and substance abuse, typically referred to as common mental disorders (CMD), among professional football players. The prevalence of symptoms of CMD among European professional football players was found to reach 32% for anxiety/depression, while the 12-month incidence ranged from 12% for distress to 37% for anxiety/depression [17]. Several studies showed that, among other factors (e.g. career dissatisfaction, surgeries), severe time-loss injuries and life events were potential risk factors for symptoms of CMD [15,16,18,20]. In 2015, cross-sectional analyses showed that professional football players who had sustained one or more severe musculoskeletal time-loss injuries during their career were two to nearly four times more likely to report symptoms of CMD than players who had not suffered a severe time-loss injury [15]. However, a longitudinal association between symptoms of CMD and severe time-loss injuries has not yet been established.
The present study aimed to explore the interaction between severe musculoskeletal time-loss injuries and symptoms of CMD in professional football players over a 12-month period. Two hypotheses were tested, namely that (I) professional football players reporting symptoms of CMD at baseline had an increased risk of severe musculoskeletal time-loss injury in the subsequent 12-month follow-up period and (II) professional football players suffering from severe musculoskeletal time-loss injuries at baseline were more likely to develop symptoms of CMD in the subsequent 12-month follow-up period.
Materials and methods
This study was conducted in line with the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) statement for cohort studies [33]. The present study was an observational prospective cohort study with three measurements during follow-up, at baseline, at 6 months and at 12 months by means of questionnaires [33].
Participants
The national players' unions from five European countries were asked by the World Players' Union (FIFPro) to assist in the recruitment of participants. The inclusion criteria were (I) being an active professional football player; (II) being 18 years or older; (III) being male; (IV) being a member of the national players' union from Finland, France, Norway, Spain, or Sweden, which means committing significant time to football training and competing at the highest or second highest professional football level; and (V) being able to read and comprehend texts fluently in English, French or Spanish.
Symptoms of CMD

The Distress Screener (three items scored on a 3-point scale), which is based on the 4-Dimensional Symptom Questionnaire (4DSQ) (e.g. "Did you recently suffer from worry?"), was used to measure distress in the previous 4 weeks (baseline) and in the previous 6 months (follow-up) [5,33]. The 4DSQ, that is, the Distress Screener, in English, French and Spanish has been validated for a recall period of up to several weeks [internal consistency: 0.6-0.7; test-retest coefficients: ≥0.9; criterion-related validity: area under receiver operating characteristic (ROC) curve ≥0.8] [5,33]. A total score ranging from 0 to 6 was obtained by adding up the answers on the 3 items, with a score of 4 or more indicating the presence of symptoms of distress [5,33].
To assess symptoms of anxiety/depression in the previous 4 weeks (baseline) and in the previous 6 months (follow-up), the 12-items General Health Questionnaire (GHQ-12) was used (e.g. "Have you recently felt under strain?") [14]. The GHQ-12 in English, French and Spanish has been validated for a recall period of up to several weeks (internal consistency: 0.7-0.9; criterion-related validity: sensitivity ≥0.7, specificity ≥0.7, area under ROC curve ≥0.8) [14,30]. Based on the traditional scoring system, a total score ranging from 0 to 12 was calculated by adding up the answers on the 12 items, with a score of 3 or more indicating the presence of symptoms of anxiety/depression (area under ROC curve = 0.9) [14,30].
Sleep disturbance in the previous 4 weeks (baseline) and in the previous 6 months (follow-up) was assessed through four single questions (e.g. "Have you recently had problems sleeping?") scored on a 5-point scale (from "not at all" to "very much") based on the Patient-Reported Outcomes Measurement Information System (PROMIS) [7,36]. The PROMIS in English, French and Spanish has been validated for a recall period of up to several weeks (internal consistency: >0.9; construct validity: product-moment correlations ≥0.9) (for detailed information, see www.nihpromis.org). A total score ranging from 1 to 20 is obtained by summing up the answers to the four questions, a score of 13 or more indicating the presence of symptoms of sleep disturbance [7,36].
To detect the level of alcohol consumption at the present time (baseline) and in the previous 6 months (follow-up), the 3-item alcohol use disorders identification test (AUDIT-C) was used (e.g. "How many standard drinks containing alcohol do you have on a typical day?") [8].
The AUDIT-C in English, French and Spanish has been validated for a recall period of up to several weeks (test-retest coefficients: 0.6-0.9; criterion-related validity: area under ROC curve = 0.7-0.9) [8,27]. A total score ranging from 0 to 12 was obtained by adding up the answers on the three items, a score of 5 or more indicating the presence of adverse alcohol use [8].
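All four instruments are scored by summing the item responses and applying a cut-off; a minimal sketch of that dichotomization is shown below, where only the cut-off values come from the descriptions above and the item responses are invented:

```python
# Cut-off scores taken from the instrument descriptions above; the item responses are invented.
CUTOFFS = {
    "distress": 4,             # Distress Screener, total score 0-6
    "anxiety_depression": 3,   # GHQ-12, total score 0-12
    "sleep_disturbance": 13,   # PROMIS sleep items, total score 1-20
    "adverse_alcohol_use": 5,  # AUDIT-C, total score 0-12
}

def symptom_present(scale, item_scores):
    """Sum the item scores and compare the total against the scale's cut-off."""
    return sum(item_scores) >= CUTOFFS[scale]

# Hypothetical player: three Distress Screener items scored 2, 1, 2 (total 5, cut-off 4).
print(symptom_present("distress", [2, 1, 2]))             # True
# Hypothetical AUDIT-C responses: 1, 2, 1 (total 4, below the cut-off of 5).
print(symptom_present("adverse_alcohol_use", [1, 2, 1]))  # False
```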
Severe musculoskeletal time-loss injuries
Football players were asked to report whether they had suffered from severe musculoskeletal time-loss injuries in the previous 4 weeks (baseline) and in the previous 6 months (follow-up). Severe musculoskeletal time-loss injury was defined as an injury that involved the musculoskeletal system, occurred during team activities (training or match), and led to either training or match absence for more than 28 days [13].
Procedures
Based on the aforementioned variables included in the study, a baseline and two follow-up electronic questionnaires were arranged in English, French and Spanish (FluidSurveys™). The following descriptive variables were added: age, body height, body weight, duration of professional football career, level of play and team position. As several studies have shown that life events were associated with symptoms of CMD as well as with musculoskeletal injuries, the number of life events in the previous 6 months was also explored at baseline and follow-up with the validated Social Athletic Readjustment Rating Scale [6,20,24]. Each questionnaire took about 15-20 min to complete. The national players' unions sent the information about the study per email to potential participants. Participants interested in the study gave their informed consent and were given access to the online questionnaire, which they were asked to complete within 2 weeks. At the end of the questionnaire, participants could leave their email address and give their informed consent for the follow-up online questionnaires. Follow-up questionnaires were sent per email 6 and 12 months later, with a request to complete them within 2 weeks. Reminders at baseline and follow-up were sent after 2 and 4 weeks. The responses to baseline and follow-up questionnaires were anonymized for reasons of privacy and confidentiality. Once completed, the electronic questionnaires were saved automatically on a secured electronic server that only the principal researcher could access. Players participated voluntarily in the study and did not receive any reward for their participation. This study is part of a larger research project involving 11 countries for which ethical approval was obtained from the board of St Marianna University School of Medicine (April 16, 2014; Kawasaki, Japan) [16]. The present study was conducted in accordance with the Declaration of Helsinki (2013).
Statistical analyses
The statistical software IBM SPSS 23.0 for Windows was used to perform all data analyses. Descriptive analyses (mean, standard deviation, frequency and range) were performed for all variables included in the study. An independent T test was used to explore whether loss to follow-up was selective by comparing baseline characteristics (all descriptive variables) of responders and non-responders [35].
In order to explore the interaction between independent (either symptoms of CMD or musculoskeletal time-loss injuries at baseline) and dependent (onset of either symptoms of CMD or musculoskeletal time-loss injuries during 12-month follow-up period) variables under study, three models were used: (1) unadjusted relative risk model, (2) relative risk model adjusted for age, and (3) relative risk model adjusted for age and life events, as both age and number of life events have been found to correlate with symptoms of CMD as well as with musculoskeletal injuries [1,4,20,24]. All relative risk models took into account any new injuries or symptoms of CMD reported at the 6-month follow-up. We also assessed the interaction between two or more symptoms of CMD (comorbidity) and severe musculoskeletal time-loss injuries using the same aforementioned relative risk models (1)(2)(3). For the unadjusted model, a contingency table was used to calculate relative risks (RR). For both adjusted models, the Mantel-Haenszel risk ratio method was used to calculate the adjusted risk ratios [25]. For all three models, 95% confidence interval (CI) was calculated. Under the assumption that at least one out of ten players might suffer from a health condition under study, sample size calculation indicated that at least 138 participants were needed (confidence interval of 95%; precision of 5%) [35]. Expecting a response rate of approximately 40% (based on previous similar studies in professional football) and a loss to follow-up at 20%, we intended to invite at least 440 players [16,18].
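As a sketch of the unadjusted and stratified (Mantel–Haenszel) relative-risk calculations described above, the counts and the age stratification below are invented for illustration and are not the study's data:

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Unadjusted relative risk from a 2x2 contingency table."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

def mantel_haenszel_rr(strata):
    """Mantel-Haenszel pooled relative risk across strata.

    Each stratum is a tuple (a, n1, c, n0): cases and totals among the exposed (a, n1)
    and among the unexposed (c, n0).
    """
    numerator = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in strata)
    denominator = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in strata)
    return numerator / denominator

# Invented counts: players injured at baseline who developed CMD symptoms vs. non-injured
# players, stratified by an illustrative age split (not the study's actual data).
strata = [
    (12, 30, 25, 150),   # younger stratum
    (10, 20, 20, 140),   # older stratum
]
print(round(relative_risk(22, 50, 45, 290), 2))   # crude RR over both strata combined
print(round(mantel_haenszel_rr(strata), 2))       # age-adjusted (Mantel-Haenszel) RR
```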
Results
Written informed consent to participate in the 12-month follow-up was given by 384 players (response rate of 65%). A total of 262 players completed the 12-month follow-up period (follow-up rate of 68%). The flowchart of the recruitment of the participants is presented in Fig. 1. The mean age of the 384 participants at baseline was 27 ± 5 years, and they had played professional football for 8 ± 5 years on average, of which 55% at the highest national level. From the 193 players that did not report any symptom of CMD at baseline, 37% reported a symptom of CMD in the subsequent 12 months. From the 336 players that did not report any severe musculoskeletal time-loss injury, 22% reported an injury in the subsequent 12 months. Main characteristics are presented in Table 1.
Interactions between symptoms of CMD and severe musculoskeletal time-loss injuries
Symptoms of CMD at baseline were not associated with the risk of severe musculoskeletal time-loss injury during the 12-month follow-up period, with relative risks ranging from 0.6 (0.3-1.0) to 1.0 (0.5-2.2) for sleep disturbance and distress, respectively. All relative risks between symptoms of CMD at baseline and the risk of severe musculoskeletal time-loss injury during the subsequent 12-month are presented in Table 2.
The prevalence of severe musculoskeletal time-loss injuries at baseline was associated with symptoms of CMD during the 12-month follow-up period, with relative risks ranging from 1.8 (0.8-3.7) to 6.9 (4.1-11.9) for adverse alcohol use and distress, respectively. These results show that professional football players who reported a severe time-loss injury at baseline are nearly 2-7 times more likely to develop symptoms of CMD in the subsequent 12 months by comparison with non-injured football players. All relative risks are presented in Table 3.
Discussion
The most important finding of the present study was that professional football players who suffered from severe musculoskeletal time-loss injuries are likely to develop subsequent symptoms of CMD. Contrary to our hypothesis, no relationship was found between symptoms of CMD and the onset of severe musculoskeletal time-loss injuries during a subsequent 12-month follow-up period among professional football players. We do acknowledge a potential power problem, with some of the 95% confidence intervals just barely overlapping the value 1.0. With regard to these results, we may assume that self-reported symptoms of CMD as assessed in our study might not be as severe as expected because they did not cause severe musculoskeletal time-loss injuries among participants. An assumption is that clinically diagnosed CMD, which is more severe than self-reported symptoms of CMD, might be more likely to induce severe time-loss injuries among football players. As previously mentioned, some studies found an association between psychological factors such as trait anxiety, negative-events-stress and daily hassle and the occurrence of injuries [23]. As life events have been shown to be predictors of injury, it is most likely that these life events cause stress that reduces attention and mental performance, which consequently modifies the reaction time of the athlete in situations with a possible risk of injury [10,11]. For instance, poor reaction time was found to be a predictor of injury in a previous study on amateur football players [11]. Also, there is evidence that athletes show a more pronounced readiness to take risks due to factors such as insufficient caution, adventurous spirit or higher outward expression of anger (more foul play) [10,24]. One might logically assume that if mental performance seems to be associated with injury, symptoms of CMD should also be significantly associated with these injuries. An explanation why this association was not present in our results is that we assessed the occurrence of severe musculoskeletal time-loss injuries that lead to a layoff period of more than 4 weeks. One might hypothesize that symptoms of CMD as reported in our study might lead to less severe musculoskeletal injuries. This should be subject to further investigations. Regardless of these results, the majority of current and retired professional football players report that symptoms of CMD influence football performances, which is, along with injuries, a major reason to monitor the occurrence of symptoms of CMD [32]. In contrast to the association between symptoms of CMD and the onset of severe musculoskeletal time-loss injuries, severely injured professional football players were found to be nearly 2-7 times more likely to develop symptoms of CMD in the subsequent 12 months by comparison with non-injured football players. This longitudinal association was significant for all symptoms of CMD under study, confirming previous cross-sectional analyses conducted with the same study population [15]. It is worth mentioning that less severe musculoskeletal time-loss injuries are logically expected to have less psychological impact and might not be associated with the onset of symptoms of CMD as strongly as severe musculoskeletal time-loss injuries. Also, severe musculoskeletal time-loss injuries are a major adverse life event for professional football players [13,15,31].
In addition, studies among other populations have proven that adverse life events have a causal relationship with symptoms of CMD [15,16,18,20]. With this study, severe musculoskeletal time-loss injuries can be considered as major adverse life events for professional football players that are likely to cause symptoms of CMD.
A potential limitation of the present study might be that the data were self-reported, as the questionnaires were answered by the professional football players themselves. Measurement through medical professionals might have led to less subjective information and to additional information with regard to the number of days until return to play. Another limitation could be the response and follow-up rates, namely 65 and 68%, respectively. Epidemiologists have suggested that follow-up rates of 50, 70 and 80% can be regarded as adequate, very good and required, respectively [26]. Although we strove to reach a follow-up rate of at least 80%, the 68% achieved in our study seems good compared with these suggested acceptable follow-up rates. Also, a monthly survey over the follow-up period might have given more valid data than the 6-month period used in this study. However, it is well known that professional athletes, especially football players, remain reluctant to complete surveys repeatedly. Although the scales used to measure symptoms of CMD were not administered in the native language of participants from Finland, Norway and Sweden, we feel that this has no negative effect on the validity of the collected data, because an inclusion criterion was that participants were able to read and comprehend texts fluently in English, French or Spanish, and secondly most members of the players' unions from Finland, Norway and Sweden are studying at an English academy arranged by their players' union. Also the baseline measurements vary between . It is worth mentioning that only male football players were analysed and outcomes could differ among female professional football players. An important strength is the longitudinal design of this study among nearly 400 professional football players concerning a sensitive topic like mental health, which remains taboo even today. This is, to the authors' knowledge, the first prospective cohort study exploring this interaction. This longitudinal study design allows a causal exploration of the relationship between symptoms of CMD and severe musculoskeletal time-loss injuries. Musculoskeletal injuries have a negative impact on the performance of a player and his team, and consequently the possible association between symptoms of CMD and the onset of less severe musculoskeletal time-loss injuries should be explored [3,21]. If the abovementioned association is present, it is important to acknowledge and recognize these symptoms of CMD and treat them in order to minimize the risk of, or prevent, a musculoskeletal time-loss injury. This study emphasizes the importance of the awareness, acknowledgement and recognition of symptoms of CMD. According to the results of this study, one can assume that a player who suffers from a severe musculoskeletal time-loss injury will be likely to develop symptoms of CMD. One can logically assume that these symptoms of CMD might have consequences for performance and quality of life, as mentioned in a previous study [32].
The clinical relevance of this study is that it emphasizes the need for an interdisciplinary medical approach, which not only focuses on the physical but also on the mental health of football players. Not only physical but also psychological readiness has shown to increase athletes' perceived likelihood of a successful return to play [29]. Consequently, an early identification of players at risk of symptoms of CMD creates the opportunity for an interdisciplinary medical team to recognize these symptoms timely and treat players in an early stage in order to prevent these symptoms getting worse and in order to remain or improve their performance and quality of life. One can logically assume that this may lead to a faster as well as a safer return to play.
Conclusion
No relationship was found between symptoms of CMD and the onset of severe musculoskeletal time-loss injuries during a subsequent 12-month follow-up period among professional football players. However, severely injured professional football players were found to be nearly 2-7 times more likely to develop symptoms of CMD in the subsequent 12 months by comparison with non-injured football players. An early identification of players at risk of symptoms of CMD, such as those suffering severe musculoskeletal injuries, creates the opportunity for an interdisciplinary medical team to treat the players timely and adequately.
Dichloroacetic Acid (DCA)-Induced Cytotoxicity in Human Breast Cancer Cells Accompanies Changes in Mitochondrial Membrane Permeability and Production of Reactive Oxygen Species
Cancer cells utilize cytosolic glycolysis for their energy production even in the presence of adequate levels of oxygen (Warburg effect) due to mitochondrial defects. Dichloroacetic acid (DCA) shifts cytosolic glucose metabolism to aerobic oxidation by inhibiting mitochondrial pyruvate dehydrogenase kinase (PDK) and increasing pyruvate uptake. Therefore, DCA has potential in reversing the glycolytic metabolism defect in cancerous cells. DCA is also known to induce apoptosis in a number of cancer cell lines, the mechanism of which is not well understood. In this study, an attempt has been made to investigate the effects of DCA on aggressive human breast cancer (MCF-7) cells as compared with less aggressive mouse osteoblastic (MC3T3) cells. Cell cytotoxicity was determined by MTT, crystal violet and Trypan blue exclusion assays. Western blot was used to detect any changes in the expression of apoptotic markers. Flow cytometry was used to measure apoptotic and necrotic effects of DCA. Mitochondrial integrity was determined by change in mitochondrial membrane potential (Δψm), whereas oxidative damage was determined by production of reactive oxygen species (ROS). DCA caused a concentration-dependent cytotoxicity in both MCF-7 and MC3T3 cell lines. MCF-7 cells were most affected. Flow cytometry results showed significantly higher apoptosis in MCF-7 even at lower concentrations of DCA. However, higher concentrations of DCA were necrotic. Western blotting showed an increased expression of Mn-SOD-1 upon DCA treatment. Further, DCA decreased Δψm and increased ROS production. The effects of DCA were more pronounced on MCF-7 cells as compared to MC3T3 cells. Our results suggest that DCA-induced cytotoxicity in cancerous cells is mediated via changes in Δψm and production of ROS.
Introduction
Normally, mammalian cells (non-cancerous) produce their energy by aerobic respiration or oxidative phosphorylation utilizing the electron transport chain in mitochondria. It has long been recognized that cancerous cells primarily utilize glycolysis even in the presence of adequate oxygen, a phenomenon termed aerobic glycolysis or the "Warburg effect" [1]. This change in cytosolic energy production in malignant cells is associated with reprogramming of mitochondrial function that limits pyruvate uptake for oxidative phosphorylation. This leads to an accumulation of large quantities of cytosolic lactic acid, causing lactic acidosis. Accumulating evidence suggests that the persistent activation of the aerobic glycolytic pathway in tumor cells plays a crucial role in carcinogenesis. Therefore, the inhibition of the increased glycolytic capacity of malignant cells may provide a key cancer treatment approach.
Dichloroacetic acid (DCA) is a small molecule known to shift cytosolic glucose metabolism to mitochondrial aerobic oxidation [2] [3] by inhibiting mitochondrial pyruvate dehydrogenase kinase (PDK). In recent years, reprogramming of mitochondrial function by DCA has received a great deal of attention as a cancer treatment strategy as a result of its effectiveness in killing certain types of tumor cells [4]. Studies have now established that DCA suppresses tumor growth via the inhibition of mitochondrial PDK.
Michelakis and his colleagues reported that DCA had caused cell death in certain types of cancerous cells in vitro by inducing apoptosis. Additionally, they showed that the tumor size in rats was significantly reduced by the administration of DCA [5]. In an independent study, Bonnet and co-workers showed that exposing rats to DCA in drinking water caused regression of their xenografted A549 lung carcinoma cells [2]. In an in vitro analysis, they further showed that DCA only killed cancerous cells; it did not affect normal somatic cells. These studies suggested that DCA could be used as a safer yet effective anticancer treatment agent. In a comprehensive study on cancerous cells, Heshe and colleagues reported that DCA was effective against a panel of 18 immortal tumor cell lines. They also reported that DCA-induced effects were mediated by decreasing mitochondrial membrane potential (Δψm), indicating a role for a mitochondria-mediated cell death process [3]. This study further showed that DCA also caused a significant induction of apoptosis in different types of human endometrial cancer cell lines. In other preclinical studies, DCA has been shown to inhibit cell proliferation and induce apoptosis in a number of cancer cell lines including prostate, breast, lung, endometrial and glioblastoma (GBM) cancer cell lines [2]. These studies clearly indicated that the treatments with DCA were linked with reduced rates of cellular proliferation and induction of apoptosis.
Since the major target for DCA was mitochondria, Wong and co-workers studied changes in mitochondrial membrane potential (Δψm) and found that the DCA-induced changes were linked to a reduced Δψm [6]. Recently, in an in vivo study, DCA has been shown to reduce the growth of lung cancer xenografts and significantly diminish lung metastasis in a rat mammary adenocarcinoma [7]. In addition, the growth of a pancreatic tumor xenograft was also found to be reduced, perhaps by reversal of the glycolytic phenotype. Thus, selective modulation of the glycolytic phenotype in cancer cells by DCA seems to be a promising approach for an effective and tolerable treatment of cancer in humans, although clinical trials in humans still await confirmation and outcome of the safety data.
DCA has recently been regarded as the magic bullet against cancer by the public press, further stimulating the attraction of DCA for cancer treatment [3]. Its potential as an anti-cancer agent is more attractive because it is easily available as a water-soluble small molecule, which is cell permeable and effective specifically in cancer cells with no or little toxicity to normal cells. It is also cost-effective. Further, DCA has been used in humans for over 30 years to treat lactic acidosis without any significant adverse side effect being reported. So DCA appears to be a safe drug for human use, although it still needs FDA approval for cancer treatment. The promise that researchers have found in preclinical studies against adult malignancies and the availability of the limited safety data in adults and children provide a strong rationale to pursue further research on DCA in an effort to understand its mechanism of action in malignant as well as non-malignant cells. Therefore, in this study, an attempt has been made to investigate the effects of DCA on cytotoxicity of cancerous cells and study its mechanism of action. Two representative cell lines, namely MC3T3 mouse osteoblastic cells as less aggressive and MCF-7 human breast cancer cells as aggressive cancerous cells, were used to compare the effects of DCA.
Cell Culture
Cells were grown following instructions from ATCC. A complete medium containing MEM or DMEM with 10% FBS was supplemented with the antibiotics penicillin (500 units/ml) and streptomycin (500 units/ml) as per manufacturer recommendation. All cells were grown in a humidified CO2 incubator set at 37˚C with a 5% CO2 atmosphere. For subculturing, cells grown to confluence were washed with PBS (11.9 mM phosphates, pH 7.4, 13.7 mM NaCl, 2.7 mM KCl) and then detached with Trypsin-EDTA (0.05% Trypsin/0.53 mM EDTA in HBSS without sodium bicarbonate, calcium and magnesium) by incubating for 5 minutes or until cells detached. After incubation, cells were collected by centrifugation at 500 g for 5 minutes. The cells were resuspended in 10 ml of the complete medium. Cell density was determined by counting the number of cells using a hemocytometer following staining with Trypan blue dye.
Cytotoxicity Assays
A number of methods including ethidium bromide/acridine orange (EB/AO) staining, MTT, crystal violet and Trypan blue exclusion assays were used to determine the cytotoxic effects of DCA. For DCA or other drug treatment, cells were seeded at a specific density depending upon the type of experiment and the cell culture plates used. After 24 hrs of culture, drugs were added as a bolus from stock solutions and mixed immediately to achieve the desired final concentrations. Further incubations were continued for additional time intervals as required (see figure legends). Cells were then used for various assays, stained using appropriate staining procedures or harvested for biochemical determinations. Any variations in experimental procedures are described under figure legends.
For screening the cytotoxicity of DCA on various cell lines, MTT 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT assay) or crystal violet (CV assay) reagents were used. The cells were seeded in 96-well plates (Costar, USA) at a density of 1.0-1.5 × 10^4 cells per well and grown for 24 hrs. Various concentrations of DCA or vehicle controls were then added to the wells and incubated for an additional 48 hrs at 37˚C with 5% CO2. In the MTT assay, mitochondrial dehydrogenase activity is used as an indicator of cell viability. It was assessed by the mitochondrial-dependent reduction of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) to formazan [8]. Briefly, the cells were washed twice with medium free of FBS. A 10 μl MTT solution (5 mg/ml) was added to 90 μl of cell culture medium free of FBS, and the cells were incubated for an additional 4 hrs in the dark. Thereafter, the medium was removed, and the cells were lysed with 100 μl of dimethylsulfoxide (DMSO) to dissolve the purple insoluble MTT formazan produced by mitochondrial succinate dehydrogenase. An automated microplate reader set at 570 nm wavelength was used to measure the conversion of MTT to formazan by metabolically viable cells. The crystal violet assay is based on the inability of dead cells to adhere to the cell culture plastic dish [9]. Cells were washed with PBS to remove dead non-adherent cells. The remaining adherent viable cells were fixed with methanol and stained with 0.1% crystal violet solution for 10 minutes. The plates were thoroughly washed with water, and crystal violet was dissolved in 33% glacial acetic acid. The absorbance of the dissolved dye, corresponding to the number of viable cells, was measured in an automated microplate reader at 570 nm [10]. The results of MTT and crystal violet assays were presented as % of the control values obtained from untreated cells. The cell viability was calculated as follows [11]: % of dead cells = [(A_sample − A_blank)/(A_control − A_blank)] × 100. All determinations were performed in triplicates, and each experiment was repeated at least three times for statistical analysis.
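To make the percent-of-control arithmetic above concrete, the short Python sketch below blank-corrects hypothetical A570 readings and expresses each treated well relative to the untreated control; the absorbance values, treatment labels, and function name are illustrative assumptions rather than data from this study.

```python
import numpy as np

def percent_of_control(a_sample, a_control, a_blank):
    """Blank-correct A570 readings and express each treated well relative to the untreated control."""
    return (np.asarray(a_sample, dtype=float) - a_blank) / (a_control - a_blank) * 100.0

# Hypothetical triplicate A570 readings from one MTT (or crystal violet) plate
a_blank = 0.06                    # wells with medium and reagent but no cells
a_control = 1.25                  # mean of untreated control wells
treatments = {"25 mM DCA": [0.92, 0.88, 0.95], "100 mM DCA": [0.41, 0.38, 0.44]}

for label, wells in treatments.items():
    pct = percent_of_control(wells, a_control, a_blank)
    print(f"{label}: {pct.mean():.1f} ± {pct.std(ddof=1):.1f} % of untreated control")
```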
Cell viability was also determined by the Trypan blue dye exclusion assay. Briefly, cells were detached by trypsinization and suspended in the medium as described for counting of the cells using a hemocytometer. An aliquot of 90 μl of the cell suspension was transferred to an Eppendorf tube and mixed with 10 μl of 0.4% (w/v) Trypan blue dye. An aliquot of 10 μl of this mixture was loaded on each of the two sides of the counting chamber underneath the cover slip of the hemocytometer for cell counting. Cells stained with Trypan blue were counted as dead cells. Cells without Trypan blue stain were counted as live cells. Approximately 200-300 cells per treatment were counted for statistical calculations. Each experiment was performed in triplicates and repeated independently at least three times.
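A Trypan blue count reduces to two numbers, percent dead and viable-cell density. The sketch below shows one way to compute both, assuming the standard hemocytometer factor of 10^4 cells/ml per large square and the 10/9 dilution introduced by mixing 90 μl of cell suspension with 10 μl of dye; the counts themselves are hypothetical.

```python
def hemocytometer_stats(live, dead, squares, dilution_factor=10 / 9, square_volume_factor=1e4):
    """Summarize a Trypan blue count: % dead cells and viable-cell density (cells/ml)."""
    pct_dead = 100.0 * dead / (live + dead)
    density = (live / squares) * dilution_factor * square_volume_factor
    return pct_dead, density

# Hypothetical counts pooled from both chamber sides (4 large squares per side, 8 total)
pct_dead, density = hemocytometer_stats(live=212, dead=38, squares=8)
print(f"{pct_dead:.1f}% Trypan blue-positive; {density:.2e} viable cells/ml")
```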
EB/AO staining was used as another criterion for cell viability following treatment with DCA. Cells were washed with PBS (10 mM, pH 7.4) and stained with a solution of 100 mg/ml acridine orange and 100 mg/ml ethidium bromide in PBS mixed together in a ratio of 1:1. Cells were then visualized immediately under UV light using a Nikon Labophot fluorescence microscope equipped with a digital camera. Photographs were taken using randomly selected fields. In order to determine the percentage of cells undergoing apoptosis or the cell death process, the photographs were used to count the number of live (green) and dead (red) cells. Acridine orange stains live cells green, whereas ethidium bromide stains fragmented nuclear DNA in dead cells red. Approximately 200-300 cells per treatment were counted for statistical calculations.
Flow Cytometry Analysis
For separation of apoptotic and necrotic cells, flow cytometry was performed as described earlier [12] following staining of the cells with YO-PRO-1 and PI dyes using the Vybrant apoptosis assay kit #4 (V-13243, Molecular Probes). Briefly, following treatment of cells with DCA to induce apoptosis, cells were harvested by Accutase and washed with PBS, pH 7.4. The cell density was adjusted to ≈6 × 10^5 cells/ml in PBS, pH 7.4. 1 μl of YO-PRO-1 stock solution (component A) and 1 μl of PI stock solution (component B) were mixed per ml of cell suspension. After 30 minutes of incubation at 4˚C, the cells were analyzed using a BD FACSCalibur flow cytometer to sort out cells labelled with each fluorescent probe from the total cell population. Fluorescence emissions were measured at 515-545 nm for FITC using a FL-1 PMT detector and 564-606 nm for PI using a FL-2 PMT detector. In a quadrant (Figure 5), live cells showed little or no fluorescence (lower left), necrotic cells showed red and green fluorescence (upper left and right), and apoptotic cells showed green fluorescence (lower right).
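As a rough illustration of the quadrant logic described above, the following sketch classifies events by their YO-PRO-1 (FL-1) and PI (FL-2) intensities; the thresholds and event values are made up for illustration and would in practice be set from unstained and single-stained controls.

```python
import numpy as np

def classify_events(fl1_yopro, fl2_pi, fl1_cut=1e2, fl2_cut=1e2):
    """Assign each event to a quadrant: live (both low), apoptotic (YO-PRO-1 only),
    necrotic/late apoptotic (PI-positive, with or without YO-PRO-1)."""
    fl1 = np.asarray(fl1_yopro) > fl1_cut
    fl2 = np.asarray(fl2_pi) > fl2_cut
    labels = np.full(fl1.shape, "live", dtype=object)
    labels[fl1 & ~fl2] = "apoptotic"
    labels[fl2] = "necrotic"          # PI positivity overrides
    return labels

# Hypothetical fluorescence intensities for six events
labels = classify_events([20, 350, 500, 15, 900, 40], [10, 30, 800, 5, 950, 20])
for lab in ("live", "apoptotic", "necrotic"):
    print(lab, f"{(labels == lab).mean() * 100:.0f}%")
```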
Determination of Mitochondrial Membrane Potential
For analysis of mitochondrial membrane potential (Δψm), MC3T3 or MCF-7 cells were seeded at a density of 2.5 × 10^4 cells in 96-well plates and incubated overnight. Cells were then treated with DCA or PBS (control) as described above and maintained in supplemented medium. After 48 hrs, cells were washed with PBS, pH 7.4 and incubated with medium containing 10 μl of 10 mg/ml JC-1 dye (5,5′,6,6′-tetrachloro-1,1′,3,3′-tetraethylbenzimidazolylcarbocyanine iodide) for 20 min at 37˚C. In normal cells, the dye concentrates in the mitochondrial matrix due to the electrochemical potential gradient, where it forms red fluorescent aggregates. A reaction that affects the mitochondrial membrane potential prevents the accumulation of the JC-1 dye in the mitochondria, and thus the dye is dispersed throughout the entire cell, leading to a shift from red (590 nm) to green (525 nm) fluorescence. Finally, the cells were washed and resuspended in 100 μl PBS for fluorescence measurements using a microplate reader (SYNERGY H4, BioTek, hybrid technology). Mitochondrial depolarization is indicated by a decrease in the red/green fluorescence ratio. All mitochondrial membrane potential analyses were performed in triplicates and each experiment was repeated independently at least three times.
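A minimal sketch of the red/green ratio calculation follows, expressing the DCA-treated ratio as a percentage of the untreated control as described above; the fluorescence values are hypothetical.

```python
def jc1_ratio(red_590, green_525):
    """Red (aggregate) to green (monomer) JC-1 fluorescence ratio."""
    return red_590 / green_525

def percent_of_untreated(treated_ratio, control_ratio):
    return 100.0 * treated_ratio / control_ratio

# Hypothetical plate-reader fluorescence values
control = jc1_ratio(red_590=5200, green_525=1800)
dca = jc1_ratio(red_590=2100, green_525=2600)
print(f"Δψm retained after DCA: {percent_of_untreated(dca, control):.0f}% of control")
```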
Determination of Reactive Oxygen Species (ROS)
For the assessment of the production of intracellular ROS, MC3T3 and MCF-7 cells were plated in black clear-bottom 96-well plates at a cell density of 1.0-1.5 × 10^4 cells per well and treated with DCA for 48 hrs as described above. A solution of 2′,7′-dichlorodihydrofluorescein diacetate (DCFH-DA) was added to the medium at a final concentration of 10 μM and the cells were allowed to stain for 90 min in the dark. After DCFH-DA staining, the cells were washed twice and resuspended in 100 μl of PBS. DCF fluorescence intensity was examined using a fluorescence microplate reader (SYNERGY H4, BioTek, hybrid technology) at 485/535 nm. The data presented are from a representative experiment performed independently at least three times in triplicate samples.
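The DCF readout is typically reported as a fold change over untreated cells; a small sketch of that calculation follows, with hypothetical triplicate readings and an assumed cell-free background well.

```python
import numpy as np

def ros_fold_change(treated_dcf, control_dcf, background=0.0):
    """Fold change in DCF fluorescence (485/535 nm) relative to untreated cells."""
    treated = np.asarray(treated_dcf, dtype=float) - background
    control = np.asarray(control_dcf, dtype=float) - background
    return treated.mean() / control.mean()

# Hypothetical triplicate plate-reader values
fold = ros_fold_change([5400, 5100, 5650], [1500, 1420, 1610], background=200)
print(f"ROS fold change vs. control: {fold:.2f}x")
```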
Preparation of Cell Lysate and Western Blotting
After appropriate treatments to induce cytotoxicity or apoptosis, the culture media were collected and centrifuged at 1100 g for 5 min to collect floating cells. Cell lysates were prepared by first washing the attached cells once with ice-cold PBS, pH 7.4 and then lysing them for 20 min with a lysis buffer (RIPA) containing a protease inhibitor cocktail, a 1 mM phosphatase inhibitor cocktail and 0.1 M PMSF. The floating cells collected were mixed with the corresponding cell lysates. Following cell lysis, cell debris was removed by centrifugation at 12,000 g (Eppendorf centrifuge) for 15 min at 4˚C. Protein concentration in cell lysates was determined by using a standard Coomassie Bradford protein assay according to the manufacturer's instructions (Bio-Rad, Hercules, CA). Western blotting procedures followed were as described by Agarwal et al., 2009. In order to prepare protein samples for SDS-PAGE, cell lysates were mixed with equal volumes of 2× sample buffer (0.5 M Tris-HCl, pH 6.8, 20% glycerol, 4% (w/v) SDS, 0.5% (w/v) bromophenol blue) and boiled for 5 min at 100˚C. Samples containing 100 μg of protein were separated by 10% SDS-PAGE. The proteins from the gels were electrophoretically transferred onto polyvinylidene fluoride (PVDF) membranes (Millipore, Billerica, MA). The membranes were blocked with 5% skimmed milk prepared in Tris-buffered saline (150 mM NaCl and 20 mM Tris-HCl, pH 7.2) containing 0.05% Tween-20 (TBS-T) for 1 hr at room temperature. The membranes were then incubated with diluted primary antibodies overnight at 4˚C with constant shaking. The primary antibody dilutions were prepared according to the manufacturers' recommendations. After primary antibody incubation, the membranes were washed three times for 10 min each with TBS-T and were incubated for 2 hrs with a 1:2000 dilution of an appropriate secondary antibody (anti-rabbit IgG for polyclonal or anti-mouse IgG for monoclonal antibodies) conjugated to horseradish peroxidase (HRP) in 2% skimmed milk in TBS-T (Sigma-Aldrich, St. Louis, MO). Signal detection was achieved with a Super Signal West Pico Chemiluminescence kit (Pierce Biotechnology, Rockford, IL) and high performance chemiluminescence film (Amersham Biosciences, Piscataway, NJ).
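Equal protein loading for SDS-PAGE depends on the Bradford estimate; the sketch below fits a linear BSA standard curve and interpolates unknown lysates. The standard concentrations and absorbances are made-up values for illustration only.

```python
import numpy as np

def bradford_concentration(sample_a595, standards_a595, standards_ug_ml):
    """Fit a linear BSA standard curve (A595 vs. µg/ml) and interpolate unknown samples."""
    slope, intercept = np.polyfit(standards_ug_ml, standards_a595, 1)
    return (np.asarray(sample_a595, dtype=float) - intercept) / slope

# Hypothetical BSA standards and lysate readings
standards_ug_ml = [0, 125, 250, 500, 1000]
standards_a595 = [0.05, 0.17, 0.29, 0.55, 1.02]
lysates_a595 = [0.42, 0.61]
print(bradford_concentration(lysates_a595, standards_a595, standards_ug_ml))  # µg/ml
```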
Statistical Analysis
The results are expressed as mean ± SD from three to four independent experiments or as described under figure legends. A one-way ANOVA test was used to evaluate statistical significance between control and experimental groups. A p-value of <0.05 was considered statistically significant.
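For reference, a one-way ANOVA of this kind can be run with scipy.stats.f_oneway; the group values below are hypothetical percent-viability replicates, not data from this study.

```python
from scipy import stats

# Hypothetical % viability from three independent experiments per group
control = [100.0, 97.5, 102.1]
dca_25mM = [71.2, 68.5, 74.0]
dca_100mM = [32.4, 35.1, 30.8]

f_stat, p_value = stats.f_oneway(control, dca_25mM, dca_100mM)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("difference between groups is statistically significant")
```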
DCA Induces Cytotoxicity in Cancerous Cells
MTT assays are widely used to evaluate the overall cytotoxicity of cell death-inducing drugs. In our experiments, the ability of DCA to promote cell death or cytotoxicity was evaluated by MTT assays in a number of cancer cell lines. The results are presented in Figure 1. The cytotoxic effects of DCA were observed on all cancer cell lines in a dose-dependent manner. The MCF-7 breast cancer cell line was the most sensitive to DCA-induced cell death. The MC3T3 osteoblastic cell line was the least sensitive to DCA-induced cytotoxicity. The MC3T3 cell line has been shown to behave like a less aggressive or non-aggressive cell line, whereas the MCF-7 cell line is known to behave as an aggressive cancer cell line. In order to compare the effects of DCA on aggressive and non-aggressive cancer cells, we selected MCF-7 and MC3T3 as aggressive and non-aggressive cell lines respectively for further studies.
Crystal violet (CV) assays and Trypan blue staining were used as additional criteria to determine DCA-induced cytotoxicity in these two selected cell lines. Similar to MTT assays, CV and Trypan blue staining showed that DCA caused more cytotoxicity in the MCF-7 breast cancer than the MC3T3 osteoblastic cell line (Figure 2). The differential effects of DCA on these two cell lines were determined in a dose-dependent manner (25 and 200 mM DCA) for 48 hrs of treatment.
Using a median dose of 50 mM DCA, we have also determined the cytotoxicity of DCA on MCF-7 and MC3T3 cells in a time-dependent manner (Figure 3). Our cytotoxicity assays indicated that DCA caused more cytotoxicity in an aggressive cancerous cell line such as MCF-7 as compared to a non-aggressive or less aggressive cell line such as MC3T3.
Cytotoxicity of DCA Is Mediated by Apoptosis
Our goal was to determine whether DCA-induced cytotoxicity was mediated via its apoptotic effects or merely due to non-specific cell death by necrosis. We chose MC3T3 as the non-aggressive cell line, which is more resistant to cytotoxicity by DCA, and MCF-7 as the aggressive cell line, which is more sensitive to DCA-induced cell death. Acridine orange and ethidium bromide staining was used as a criterion to determine apoptotic cell death. This method is frequently used to study the induction of apoptosis. Characteristic morphological changes due to apoptosis were assessed by fluorescence microscopy using acridine orange and ethidium bromide staining at 25 mM and 100 mM DCA at 48 hrs of treatment. These concentrations were selected because they were close to the IC50 values for the more sensitive MCF-7 and more resistant MC3T3 cell lines, respectively. The results showed a significantly higher amount of apoptotic cell death in MCF-7 cells than in MC3T3 cells (Figure 4), complementing our cytotoxicity data. Therefore, it is likely that the DCA-induced cytotoxicity observed earlier in our cytotoxicity assays could be at least partly related to apoptotic cell death.
To confirm whether DCA-induced cell death was due to apoptosis, we performed flow cytometry analysis. This method distinguishes cells undergoing apoptosis and necrosis from the normal population. Figure 5 shows that at 25 mM DCA, both MCF-7 and MC3T3 cells had a significantly higher population of apoptotic cells as compared with their respective controls. However, necrosis increased with increasing DCA concentration. These results suggest that higher concentrations of DCA cause necrosis, while lower concentrations of DCA induce apoptosis. The overall cell death was, however, greater in MCF-7 cells than in MC3T3 cells at all doses of DCA, confirming our previous findings on cytotoxicity assays.
DCA Depolarizes Mitochondrial Membrane in Cancerous Cells
Since mitochondria are the primary target for DCA, we examined whether DCA affects mitochondrial function by measuring changes in its membrane potential. Mitochondrial membrane potential is known to be affected by its oxidative status, which is likely to be impaired in cancerous cells. JC-1 dye staining to determine the change in the ratio of red/green fluorescence is a well-established method for determining mitochondrial membrane integrity. The ratio of red/green fluorescence was decreased drastically in aggressive MCF-7 breast cancer cells following dose-dependent DCA treatment (Figure 6), indicating that DCA depolarizes the mitochondrial membrane and decreases the membrane potential in cancerous cells. Such a decrease in mitochondrial membrane potential was not significant in non-aggressive MC3T3 osteoblastic cells.
DCA Produces Reactive Oxygen Species (ROS) in Cancerous Cells
Production of ROS is another criterion to evaluate oxidative cellular damage and thus cytotoxicity. We examined whether DCA-induced cytotoxicity was linked with ROS production and whether the level of ROS production was different in non-aggressive MC3T3 and aggressive MCF-7 cancer cell lines. Our results suggested that DCA significantly increased ROS production in MCF-7 cells in a dose-dependent manner (Figure 7). The production of ROS was not significant in the MC3T3 non-aggressive cell line, suggesting that the DCA-induced cytotoxicity observed in aggressive breast cancer cells is, at least in part, mediated by cellular damage due to ROS production.
Figure 6. Determination of mitochondrial membrane potential. MCF-7 and MC3T3 cells were grown as above. Cells were treated with DCA or PBS (control) for 48 hrs. During the last 30 min of incubation, cells were subjected to media containing JC-1 dye (10 mg/ml) for 20 min at 37˚C. The mitochondrial potential was measured using a fluorescence microplate reader. Data are expressed as the relative ratio of red to green fluorescence. The level of JC-1 retained by untreated cells was considered to be 100%. Exposure of MC3T3 and MCF-7 cells to DCA was associated with a significant reduction in Δψm compared to control in MCF-7 cells; data shown represent mean ± SD of three independent experiments. ** p < 0.01.
Manganese superoxide dismutase-1 (Mn-SOD-1), a marker for cytotoxic damage by oxygen free radicals, was also measured in response to DCA treatment. DCA induced the expression of Mn-SOD-1 in MCF-7 cells more than in MC3T3 cells (Figure 8). These results confirm our findings that DCA induces cytotoxicity by inducing oxidative stress in mitochondria due to the production of enhanced amounts of ROS in aggressive cancer cells.
Discussion
DCA is a small molecule, which has long been known to act on mitochondria as its primary subcellular target. In cancerous cells, it induces cell death and is therefore considered a molecule of interest in reducing cancer growth in both in vitro and in vivo studies [2] [4] [6] [13]- [16]. These beneficial effects of DCA occur without affecting noncancerous cells or causing systemic toxicity. DCA treatment is known to significantly increase glucose oxidation, which only takes place in functional mitochondria. Cancerous cells display enormously high amounts of cytosolic glycolysis, perhaps due to a defect in mitochondrial function. DCA treatment seems to correct this defect by restoring mitochondrial function.
One of the unique bioenergetic features in tumor cells is the observance of the "Warburg effect," which has received a great deal of attention as a potential therapeutic target in cancer therapy [17]. Most cancer cells exhibit increased cytosolic glycolysis as a primary metabolic pathway for ATP production, despite the availability of an oxygen source. This phenomenon is closely related to apoptotic resistance in cancerous cells. The reversal of this process from cytosolic glycolysis to mitochondrial oxidation may trigger apoptosis in cancerous cells [2] [5] [6] [18] [19]. The key regulator of glucose oxidation is pyruvate dehydrogenase (PDH), which is inhibited by pyruvate dehydrogenase kinase (PDK) in most tumor cells. Bonnet and co-workers, in 2007, found that DCA could significantly inhibit PDK activity in tumor cells. DCA was later found to promote apoptosis in lung, breast and glioblastoma cancer cells [20].
In this study, we have examined the effects of DCA on cell viability of the human breast (MCF-7) cancer cell line and compared them with the effects on the non-aggressive MC3T3 cell line. Our study demonstrated that not all cell lines were susceptible to DCA-induced cell death to the same degree of sensitivity. We have also confirmed that DCA induces apoptotic effects at least at lower doses, as previously reported by other researchers [6] [18] [21]. Our data demonstrated that the aggressive cancerous cell line MCF-7 was more sensitive to DCA than the less aggressive MC3T3. Our findings were also comparable with previous findings [18], where it was shown that the stimulation of apoptosis or cell death and changes in mitochondrial function were more obvious in highly aggressive and metastatic cancer cells like LoVo than in the less aggressive HT29 and SW480 cells. Our results are in conformity with earlier findings where Wong and co-workers have demonstrated that DCA reduces endometrial cancer cell viability in a dose-dependent manner by inducing apoptosis while having no effect on non-cancerous cells. The MCF-7 cell line, originally developed in the Michigan Cancer Foundation (from which it carries its name) in 1973 from a pleural effusion [22], is the most commonly used breast cancer cell line in in vitro research as an aggressive cancer cell line. It is considered aggressive because it is derived not from the primary breast tumor but rather from a tumor that had metastasized [23]. On the other hand, the MC3T3 cell line was originally isolated for varying degrees of osteogenic potential and has been widely used as a normal non-cancerous model cell line in bone biology. The MC3T3 cell line used in this study is a MC3T3-E1 sub-line, most frequently and conveniently used as a physiologically relevant system for transcriptional control studies [24]. Although it is a non-human cell line, it displays features of a non-cancerous cell line and is comparable to other less aggressive human cell lines, such as the colorectal cell line HCT116 used in this study. The MC3T3 cell line has also been used as a non-aggressive cancer cell line by other investigators.
We showed that DCA produced differential responses in aggressive and less-aggressive cancer cells, with an IC50 value close to 25 mM in MCF-7 cells. This is in agreement with other published studies [18] [25] demonstrating IC50 values for aggressive cancer cells ranging from 20 to 30 mM DCA. The basis for the differential effects of DCA on cancerous and non-cancerous cells may reside in its influence on mitochondrial function. DCA inhibits PDK and activates PDH. This leads to enhanced pyruvate uptake and energy production from mitochondrial oxidative respiration and induction of apoptosis by the intrinsic pathway involving caspase-3 and cytochrome c. Although these findings are comparable with other published studies [2] [6] [13], our results vary in certain respects. In our study, DCA caused cytotoxicity at higher doses; apoptosis was seen only at lower concentrations of DCA. Similar results are reported by other groups, where apoptosis in cancer cells was seen at doses as low as 0.5-10 mM [2] [6] [19]. Interestingly, some researchers in their study on breast cancer cells found that DCA inhibited cell proliferation but the induction of apoptosis by DCA was not clear [16]. Therefore, we can safely state that although DCA causes cell death and inhibits growth of some cancer cells, the essential mechanism may be cell-type dependent. Higher doses of DCA cause non-specific cell death or necrosis.
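For readers who want to reproduce an IC50 estimate of this kind from dose-response data, a simple log-dose interpolation is sketched below; the dose-viability pairs are illustrative, and a four-parameter logistic fit would normally be preferred for publication-quality estimates.

```python
import numpy as np

def estimate_ic50(doses_mM, viability_pct):
    """Estimate the IC50 by interpolating where viability crosses 50% on a log-dose axis."""
    doses = np.asarray(doses_mM, dtype=float)
    v = np.asarray(viability_pct, dtype=float)
    # np.interp needs increasing x, so interpolate log-dose as a function of viability
    order = np.argsort(v)
    return 10 ** np.interp(50.0, v[order], np.log10(doses[order]))

# Hypothetical MCF-7 dose-response after 48 hrs of DCA treatment
doses = [5, 10, 25, 50, 100, 200]
viability = [92, 78, 52, 34, 21, 12]
print(f"estimated IC50 ≈ {estimate_ic50(doses, viability):.0f} mM")
```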
Further to our investigation on the mechanism of action of DCA in inducing cytotoxicity, we looked at changes in mitochondrial membrane potential (Δψm). Treatment with DCA reduced the Δψm in aggressive MCF-7 cells, but there was no significant effect on Δψm in less aggressive MC3T3 cells (Figure 6). This suggests that DCA promotes mitochondrial respiration in aggressive cancerous cells, leading to depolarization of the mitochondrial membrane and induction of cell death by the proximal pathway of mitochondria as described in previous studies [2] [6] [13]. These results are also in agreement with other findings reported in the literature [18], where changes in mitochondrial function are evidently linked to the more invasive nature of LoVo cells compared with the less invasive HT29 and SW480 cells.
Our study has also provided a link that the cytotoxic effects of DCA are perhaps mediated due to the production of ROS. We have found that intracellular ROS production was significantly increased in MCF-7 cells treated with DCA (Figure 7). It is well known that cancer cells largely depend on cytosolic glycolysis for their ATP production. Our results indicated that DCA might shift the glucose metabolism of MCF-7 cells back to oxidative phosphorylation. This would lead to an enhanced glucose oxidation by the mitochondrial electron transport chain resulting in a sustained production of ROS, which, in turn, could inhibit the mitochondrial H+ efflux and ultimately decrease Δψm. These changes might also lead to the opening of mitochondrial membrane pores (MTP) by DCA, thus allowing the release of cytochrome c and induction of other apoptosis-inducing factors, with an ultimate result of enhanced cell death [20].
Most early stage tumors occur in a microenvironment and depend heavily on anaerobic glycolysis for energy needs due to mitochondrial defects [5] [26]. Dysfunctional mitochondria are also a continuous source of ROS in tumor cells. In response to high levels of ROS, cancer cells express high levels of manganese-dependent superoxide dismutase (Mn-SOD-1), which converts superoxide (O2−) into H2O2 and O2 [27]. Mn-SOD-1 is an important enzyme responsible for the detoxification of O2− and is considered a key antioxidant in aerobic cells. Deficiency in this enzyme or inhibition of its activity may cause an accumulation of increased amounts of O2− in the cells. This will result in the persistence of the oncogenic phenotype [28]. Consistent with this role of Mn-SOD-1, our results showed a significant increase in the expression of Mn-SOD-1 in MCF-7 compared to MC3T3 cells (Figure 8); only high concentrations of DCA increased Mn-SOD-1 expression in MC3T3 cells. This might be because MC3T3 cells have a less hypoxic environment. Similar to our results, Saed and coworkers [29] have found an increased expression of Mn-SOD-1 during treatment of epithelial ovarian cells (EOC) with DCA. However, in other studies using different cancer cell lines, it was reported that the levels of Mn-SOD-1 decreased. This decrease in Mn-SOD-1 was associated with an increase in apoptosis [30]- [32]. So any disturbance in levels of SOD would affect free radicals, and thus cytotoxicity. Whether ROS promote tumor cell survival or act as antitumorigenic agents depends on the cell and tissue type, the location of ROS production, and the concentration of individual ROS [33].
It has been shown that DCA increases PUMA transcripts in endometrial carcinoma cell lines with an apoptotic response, suggesting a p53-PUMA-mediated mechanism may be involved in DCA-induced apoptosis [6]. In our studies, we found that DCA increased Akt activation in MCF-7 but not in MC3T3 cells (results not shown). Previous studies [34] showed that DCA, when administered to mice, reduced the expression of PKB/Akt in liver, though they did not measure the levels of p-Akt. Therefore, it may be possible that alternative pathways are operative in DCA-induced apoptosis [35] [36].
Conclusion
DCA induces more cytotoxicity in aggressive cancerous cells than in non-aggressive cancerous cells. This was established by MTT, crystal violet, Trypan blue and acridine orange/ethidium bromide staining. Flow cytometry was used to confirm and differentiate apoptotic vs. necrotic effects of DCA. At lower concentrations, DCA brings about apoptotic changes, but at higher concentrations, most cytotoxic effects of DCA are related to necrosis. The most prominent effects of DCA were seen on changing mitochondrial membrane potential (Δψm) and the production of reactive oxygen species (ROS). The increased expression of Mn-SOD-1 is likely a consequence of the increased production of ROS. Taken together, our results suggest that DCA causes significantly higher cytotoxicity in aggressive cancerous cells than in non-aggressive cancerous cells and these effects are primarily mediated by its action on mitochondrial membrane potential and the production of ROS.
Figure 1 .
Figure 1. Effect of DCA on cytotoxicity. Indicated cell lines were grown and seeded at a density of 10-15 × 10^3 cells/well in 96-well plates as described in materials and methods. After 24 hrs, cells were treated with indicated concentrations of DCA. Cell culture continued for an additional 48 hrs. Cell viability was determined by MTT assays. The values shown are mean ± SD from at least three independent experiments each performed in triplicates. Experimental values with ** p < 0.01 were taken as significantly different as compared with their controls. One-way ANOVA determined statistical significance. * p < 0.05.
Figure 2 .
Figure 2. Comparison of the cytotoxic effect of DCA on MCF-7 and MC3T3 cells. MC3T3 and MCF-7 cells were seeded at a density of 10-15 × 10^3 cells/well in 96-well plates or 10 × 10^4 cells/plate in 6-well plates as described in materials and methods. After 24 hrs, cells were treated with indicated concentrations of DCA. Cell culture continued for an additional 48 hrs. Cell viability was determined by crystal violet (CV) assays (upper panel) or Trypan blue staining (lower panel). The values shown are mean ± SD from at least three independent experiments each performed in triplicates. Experimental values with ** p < 0.01 were taken as significantly different as compared with their controls. One-way ANOVA determined statistical significance.
Figure 3 .
Figure 3. Time-dependent effect of DCA. MC3T3 and MCF-7 cells were seeded at a density of 10-15 × 10^3 cells/well in 96-well plates and grown in normal MEM and DMEM medium respectively with 10% FBS. 50 mM DCA was added to induce apoptosis for 12, 24, 36, 48 and 60 hrs. The percentage of cells undergoing apoptosis was determined by MTT assays. Results shown are the mean values ± SD determined from triplicate samples of three independent experiments. ** p < 0.01.
Figure 4 .
Figure 4. Dose-dependent effect of DCA on apoptosis. MC3T3 and MCF-7 cells were seeded at a density of 10 × 10^4 cells/well in six-well plates and grown in normal MEM and DMEM with 10% FBS. Indicated concentrations of DCA were used to induce apoptosis for 48 hours and then cells were stained with acridine orange/ethidium bromide and visualized under UV light using a fluorescence microscope. The percentage apoptosis was determined by counting live (green) and dead (red) cells. At higher doses of treatment, dead cells were lost during staining. Values in the graph are mean ± SD from three experiments performed in triplicates. ** p < 0.01.
Figure 5 .
Figure 5. Flow cytometry analysis. MC3T3 and MCF-7 cells were cultured as described in materials and methods. Cells were then treated with indicated doses of DCA for 48 hours to induce apoptosis. Apoptosis and necrosis were determined by flow cytometry using the Vybrant apoptosis assay kit #4 under optimized conditions. Shown in a quadrant are live cells (lower left), necrotic cells (upper left, upper right), and apoptotic cells (lower right). Flow cytometry results are from a representative experiment repeated three times with similar results.
Figure 7 .
Figure 7. Determination of ROS generation. MCF-7 and MC3T3 cells were grown as above. Cells were exposed to different concentrations of DCA or PBS as indicated for 48 hrs. Cells were then incubated in the presence of 10 μM DCFH-DA for 30 min for cell staining, and fluorescence was measured using a microplate reader. Data shown represent mean ± SD of three independent experiments.
Figure 8 .
Figure 8. Effect of DCA-induced expression of Mn-SOD-1. MC3T3 and MCF-7 cells were cultured as described in materials and methods. Cells were treated with indicated concentrations of DCA for 48 hrs. 200 μM etoposide was used as a positive control for apoptosis-induced expression. After these treatments, cell lysates were prepared in lysis buffer and subjected to SDS-PAGE followed by Western blot analysis, as detailed in the materials and methods section. Membranes were probed by specific antibodies to SOD1 followed by horseradish peroxidase conjugated secondary antibody. Actin was used as an internal loading control to account for any variation in protein loading. The blots were visualized by chemiluminescence staining and autoradiography. Data are representative experiments for each antibody repeated multiple times with similar results.
|
2017-10-23T12:40:01.207Z
|
2014-11-06T00:00:00.000
|
{
"year": 2014,
"sha1": "2b8973dca90e762dc40eb8d2f2e8ea95afbb67a4",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=51445",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "2b8973dca90e762dc40eb8d2f2e8ea95afbb67a4",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
}
|
250196389
|
pes2o/s2orc
|
v3-fos-license
|
Crimmigrating Narratives: Examining Third-Party Observations of US Detained Immigration Court
Examining what we call “crimmigrating narratives,” we show that US immigration court criminalizes non-citizens, cements forms of social control, and dispenses punishment in a non-punitive legal setting. Building on theories of crimmigration and a sociology of narrative, we code, categorize, and describe third-party observations of detained immigration court hearings conducted in Fort Snelling, Minnesota, from July 2018 to June 2019. We identify and investigate structural factors of three key crimmigrating narratives in the courtroom: one based on threats (stories of the non-citizen’s criminal history and perceived danger to society), a second involving deservingness (stories of the non-citizen’s social ties, hardship, and belonging in the United States), and a third pertaining to their status as “impossible subjects” (stories rendering non-citizens “illegal,” categorically excludable, and contradictory to the law). Findings demonstrate that the courts’ prioritization of these three narratives disconnects detainees from their own socially organized experience and prevents them from fully engaging in the immigration court process. In closing, we discuss the potential implications of crimmigrating narratives for the US immigration legal system and non-citizen status.
INTRODUCTION
In the Fort Snelling, Minnesota, detained immigration court, it is common to see a detainee alone, wearing an orange jumpsuit with ankles shackled underneath the table. More than half of detainees sit alone without an attorney by their side, a common trend in immigration courts across the country (Transactional Records Access Clearinghouse [TRAC] 2017b). Unlike in criminal courts, immigration proceedings in federal courts across the United States and its territories do not provide the right to a public defender at the court's expense (Torrey 2015). This is especially troubling when people are detained, and the burden falls on them, not on the state, to prove they should be released on bond and granted relief (Cade 2015; Chan 2021). These civil proceedings afford none of the basic protections that accompany criminal legal proceedings, depriving migrants of their life, liberty, and/or property (Golash-Boza 2015). This deprivation constitutes two forms of legal banishment: detention and deportation or what immigration procedure calls "removal." This article builds on a "sociology of narrative" theorized by Patricia Ewick and Susan Silbey (1995). We put their work into conversation with crimmigration, an area of overlap between immigration and criminal law that defines non-citizens' experience of the country's legal system (Stumpf 2006; see also Cházaro 2016; Vázquez 2017; García Hernández 2018). Using this framework, we ask and answer the following questions: which narratives shape and define the immigration court process and what happens when the immigration court system privileges certain narratives over others? This research adds to the growing number of empirical studies of US immigration court in the past decade (Eagly 2015; Eagly and Shafer 2015, 2020a, 2020b; Ryo 2016, 2018, 2019a; Ryo and Peacock 2018; Asad 2019). Our questions also resonate with theoretical research covering both the criminalization of immigration enforcement and the "immigrationization" of penal systems in the global North (Brandariz 2021). To answer our questions, we use third-party immigration court observations to construct and apply a theoretical concept of what we term "crimmigrating narratives," defined as selective tellings that categorize individuals as criminals based on their non-citizen status. Given Linus Chan and Kathryn Burkart's (2019) argument that non-legal actors help shape, critique, and determine the court's legitimacy and the morality of the immigration law enforcement system itself, analyzing third-party perspectives can help readers and practitioners better understand how the public perceives immigration court in action.
We find that respondents' lack of full inclusion in the immigration court process is less of a shortcoming than a feature of immigration law as it currently exists. We identify three crimmigrating narratives-based in threat, deservingness, and impossibility-and analyze how courtroom interactions reflect, establish, and carry out power relations among legal actors. We also show how the court process reveals the relationship between these narratives and codified law. This informs a broader discussion about the law and how it creates subjects that do and do not belong within the nation-state's boundaries (Ngai 2004).
BACKGROUND AND THEORETICAL FRAMEWORK
A Court in Collapse
To the American Bar Association (ABA) (2019, 15), the US immigration court system is "irredeemably dysfunctional and on the brink of collapse." At root of the court's insolvency is its inability to promise fair and unbiased treatment for non-citizens, a lack of judicial independence that fails to safeguard against political interference in immigration law, an under-resourced and overworked staff, and a backlog of immigration-related cases surpassing 1.6 million in 2021 (TRAC 2022). This backlog has more than quadrupled in the past decade, signaling the court system's inability to keep pace with heightened immigration enforcement and maintain legitimacy in the realm of American jurisprudence (Esthimer 2019).
The ABA's report highlights two systemic issues that have affected immigration law's overall legitimacy for many years. First, judicial decision making and resulting case outcomes continue to depend on the assigned immigration judge (IJ) (Ramji-Nogales, Schoenholtz, and Schrag 2007; TRAC 2017a). Research demonstrates that judicial independence, training, and supervision create openings for implicit bias to influence decisions (Marouf 2011). In addition, IJ decision making unfolds against a complex and potentially fraught backdrop of individual, situational, and contextual factors (Eagly and Shafer 2015; Miller, Camp Keith, and Holmes 2015; Asad 2019; Ryo 2019a, 2019b). The second systemic issue is "public skepticism and a lack of respect for the immigration court process" itself (ABA 2019, 15). A Pew Research Center (2019) report found that a majority of self-identified Democrats and Republicans say it is important to increase the number of judges overseeing asylum cases, highlighting the overall public skepticism over the US immigration court system and its capacity to mete out justice. Socio-legal scholarship also suggests that increased cynicism toward immigration law revolves around beliefs "that the legal system is punitive despite its purported administrative function, that legal rules are inscrutable by design, and that legal outcomes are arbitrary" (Ryo 2019b, 108; also see Ryo 2017).
A key reason for cynicism is that the immigration court process is far from orderly. IJs are overworked (Lustig, Delucchi, and Tennakoon 2008), job turnover has risen (TRAC 2020), and the hiring pace for new IJs remains insufficient to keep up with the court system's burgeoning workload (TRAC 2019a). Past efforts to improve IJ quality and performance have fallen short, with immigration lawyers questioning whether the court's overseeing agency, the Executive Office for Immigration Review (EOIR), sufficiently addresses these concerns (American Immigration Lawyers Association 2013; Transactional Records Access Clearinghouse 2008). Technology and infrastructure often break down and impede the respondent's already-limited due process protections, delaying a trial's end well beyond the established date (Eagly 2015; ABA 2019). 1 Meanwhile, in the past thirty years, immigration enforcement, detention, and deportation have effectively sped up, leading to new forms of criminalization and banishment that are straining the court's resources (Golash-Boza 2012).
Another criticism is that detainees face major challenges in securing defense counsel. These include financial limitations, underfunded pro bono defense services, and private attorneys who are disincentivized from providing representation to those with time-consuming and difficult cases (Markowitz 2009). Such factors have contributed to a representation rate of roughly 30 percent between 2015 and 2017, which is much lower than the rate for those in non-detained proceedings (TRAC 2017b). The lack of representation is a concerning trend, given that access to legal services and representation is an important tenet of accessing justice (Sandefur 2015). For instance, retaining an immigration attorney increases the likelihood of judges granting bond (Ryo 2018). IJs also tend to view those with attorney representation as being less dangerous than those who are pro se (self-represented), although Emily Ryo (2019a) finds that IJs are more likely to consider Central American detainees a threat than others, regardless of their having counsel. In the language of our analysis, legal representation has the potential to play an important role in challenging crimmigrating narratives that exist in immigration court, with the increased potential to understand which stories are heard and why certain narratives are effective.
Crimmigration, Narratives, and Crimmigrating Narratives
Past legislation targeting non-citizens-as well as the overall genealogy of immigration restriction-has created what Juliet Stumpf (2006) refers to as "crimmigration," a form of law that merges the punitive powers of immigration and criminal law. US legislation such as the 1988 Anti-Drug Abuse Act and the 1996 Illegal Immigration Reform and Immigrant Responsibility Act (IIRAIRA) have expanded the range of deportable offenses to include more felony and misdemeanor crimes. 2 Recent "zero-tolerance" policies also have allowed immigration enforcement to prioritize the detention and prosecution of non-citizens whose only crime involves unauthorized entry (Hagan, Castro, and Rodriguez 2010; Department of Justice 2018).
A growing portion of crimmigration studies has focused on the structural vulnerabilities of immigration detention (Tsuchiya et al. 2021). In detention centers, many non-citizens remain behind bars for long periods, often for reasons tied to local/private prison contract incentives. Denise Gilman and Luis Romero (2018) find that when there are fewer immigrants in a detention center, Immigration and Customs Enforcement (ICE) tends to set higher bond amounts for each individual. Similarly, the immigration system falls short in considering non-citizens' financial ability to pay bond amounts, calling into question the constitutionality of immigration detention (Tan and Kaufman 2017). Particularly for detained respondents, crimmigration "legitimizes greater expenditures of state power to control the liberty of the non-citizen using criminal and immigration enforcement tools" (Stumpf 2013, 60).
Central to our analysis of crimmigration is Ewick and Silbey's (1995) theoretical framework of the "sociology of narrative." In their conceptualization, a narrative differs from a chronicle, such as a list of events that may have taken place in a respondent's life. Instead, a narrative is characterized by a selective telling of ordered events that are related to both one another and an overarching idea or structure. In other words, there is an explicit or implied meaning to the telling of events. Although they identify three features of narrative in social science research (that is, as an object, product, or method), our analysis focuses solely on narratives as an object of study. Specifically, we explore the development and effect of what we call "crimmigrating narratives," defined as selective tellings that categorize individuals as criminals based on their non-citizen status. This framework relies on a particular definition of narrative as well as on an understanding of the relationship between narrative, society, and power.
The emergence of narratives in the US immigration system is not random. They arise from the law itself and are intimately tied to their social, historical, and institutional setting. In legal settings, Ewick and Silbey argue that narratives support power imbalances that contribute to ongoing structural inequalities. To gain an understanding of these narratives, analysis must consider when certain types of narratives are demanded, what kind of narratives are deemed useful and appropriate, how narratives are expected to be shared, and why certain narratives are strategic or effective. By getting at these questions, we gain an understanding of the role that narratives play in immigration court specifically.
Prior research explores how legal actors employ narrative strategies to shape and mold perceptions of an individual's character. These strategies are important for transforming the legal process or justifying the law's outcomes. Focusing on the asylum process, legal actors construct immigrants' narratives of persecution in their asylum applications to appear logical, cohesive, and chronological in order to demonstrate their well-founded and rational fear of returning to their country of origin (Coutin 2003). For law enforcement, US Border Patrol agents use their own narratives to not only criminalize immigrants crossing the border but also to reaffirm the Border Patrol's moral prerogative to coerce and control as a means of compassion and care (Vega 2018).
Our study similarly examines how legal actors employ narrative strategies but within US immigration court. Because the court process largely ignores the respondent's relationship to a much broader system of non-citizen exclusion, the respondent must tell a particularized personal story to avoid deportation rather than a generalized one that speaks to broader structural concerns. This finding is reminiscent of John Conley and William O'Barr's (1990) assertion that courtroom narratives are stationed between relational orientation and rule orientation. On the one hand, legal parties use strategy to determine how much to reveal or omit in regard to the litigant's social status and position (relational orientation). On the other hand, they must decide how much to defer to legal rules and principles (rule orientation). In the context of a trial, the chosen narrative strategy can either subvert or legitimize the status quo (Ewick and Silbey 1995;see also Williams 1991;Hunter 2008;Englebrecht and Chavez 2014).
To illustrate when and how narratives emerge in immigration court, consider the various stages of the hearings process. Starting with a master calendar hearing (or hearings), the IJ confers with both sides to determine a schedule for submitting evidence as well as a date for the case's final merits hearing and testimony. During the master calendar hearing, which usually lasts fifteen minutes or less, the IJ asks the respondent to plead to each charge, correct any errors based on factual allegations in the charging document (also called the "Notice to Appear"), and decide which types of immigration relief to request, such as asylum or cancellation of removal. This process is quick and moves without much context, as the IJ does not consider any substantive arguments during these types of hearings. However, the pleadings process still puts the respondent at a disadvantage as their criminal history and potential "danger" become facets of the underlying courtroom story.
If eligible, a detained respondent may request to be released on bond. In this hearing (separate from the master calendar hearings), the IJ must determine whether the respondent is a "danger to society," a "flight risk," or a "threat to national security" (Department of Justice 2021a). To do so, the IJ asks the respondent or their attorney questions about the former's employment, criminal history, and ties to family in the United States. This stage often occurs to the respondent's detriment, raising concerns about individual liberty and due process (Chan 2021). Because the law puts the burden of proof on the respondent to prove bond eligibility, they can remain in custody for prolonged periods, sometimes six months or longer for so-called "criminal aliens" (Anello 2014). 3 Testimony and cross-examination occur later during the merits hearing, when the IJ also decides whether the respondent will be removed (deported) from the United States. This final stage can be the first time a respondent fully explains their story and identifies the personal equities of their case-that is, any characteristics of the respondent that an adjudicator may view as positive-which includes having old convictions, long duration of residence, and ties to family in the United States. Doing so helps ensure fair and proportional application of the law toward individual respondents, even when there are civil or minor criminal violations on the respondent's record. However, today's US immigration system does not adequately consider equities, even at the merits stage (Cade 2015), which is due to amendments to the US immigration code limiting the discretion of IJs in considering social and personal factors in their decision (Salyer 2020). It is also because the Department of Homeland Security (DHS) enforcement priorities push ICE prosecutors to focus on criminal records rather than on equities (Koh 2017).
Within these concerns about judicial and prosecutorial discretion resides the issue of the law itself. Crimmigration law broadened what counts as "criminal history" in an immigration context, as many removals since the IIRAIRA have been linked to immigration crimes, traffic offenses, or other misdemeanors (Jain 2015;DHS 2019). Today, it renders a non-citizen's criminal history "as an indiscriminate marker of undesirability" (Cade 2015, 661), while courts delay or even ignore substantive consideration of a case's other mitigating factors. Discussions of the respondent's criminal history can take place during any hearing, including master calendar and bond hearings. Thus, crimmigrating narratives often enter the legal space well before respondents' own narratives do. As they take hold, it becomes difficult for respondents to question why they are in removal proceedings to begin with. The only options to end proceedings before the merits hearing are to use legal technical arguments that do not explicitly address equities or request removal. Analyzing crimmigrating narratives in these early stages can help explain how a respondent's story begins to map onto statutory interpretation.
We argue that the court process prioritizes crimmigrating narratives that individuate the non-citizen and, like other hegemonic narratives, "reproduce, without exposing, the connections of the specific story and person to the structure of relations and institutions that make the story plausible" (Ewick and Silbey 1995, 214). These narratives are founded in rule-based orientations of the law and aim to criminalize individuals based on their immigration status. 4 Here, we place crimmigration in conversation with previous scholarship on immigrant narratives-namely, depictions of immigrants as "threats" (Chavez 2008), "deserving" (Willen 2012; Shiff 2021), and/or "impossible subjects" (Ngai 2004)-to show how they limit respondents' ability to traverse their perceived and assigned social roles in court.
Conceptualizing Threat, Deservingness, and Impossibility
In his discussion on threat narrative, Leo Chavez (2008) analyzes how media and political discourse constructs and maintains myths about Latinos in the United States, drawing attention to nativist fears of crime, demographic change, and the erosion of "mainstream" American culture as a result of Latino migration to the United States. Using this definition, threat narratives portray certain immigrants as criminals morally incompatible with the receiving state, its people, and its laws. These types of narratives had long existed in Congress, whose statutory provisions have historically defined certain people as per se or probable threats. In court, the IJ and prosecutor's scrutiny of criminal history result from and reinforce those statutory narratives. Governments typically label certain social groups as threats: since the September 11 attacks, immigration enforcement, detention, and deportation have sped up six-fold, with the dual intent to categorize Muslim-and Arab-Americans as "terrorists" and Latinos as "criminals" (Golash-Boza 2012). As Justin Pickett (2016, 103) argues, the ethnicity-coded issue of immigrant "threats" allows for "the veiled expression of broader anti-Latino sentiments" as well as increased support for immigration enforcement.
In court, threat narratives typically appear during immigration bond hearings. Empirical research confirms that IJs are less likely to grant bond to detainees with a criminal history (Ryo 2016), particularly Central Americans with felony and violent crime convictions (Ryo 2019a). Despite little proof that crime is linked to immigration, threat narratives demonstrate how detainees' criminal histories compound their immigration status (Jiang and Erez 2018). Some non-citizens may have committed immigration violations-such as an unauthorized entry into the United States-that are either civil violations that do not result in criminal sanctions or low-level crimes that result in little to no criminal penalties. They may be removable due to crimes that US immigration considers to be "aggravated felonies" and/or "crimes of moral turpitude." These crimes range from violent offenses (for example, murder, domestic abuse, aggravated assault) to smaller criminal offenses (for example, petty theft, public intoxication, drug possession). These expanded definitions of removability create openings for threat narratives to hold sway in immigration court proceedings (Stumpf 2006;García Hernández 2018). Criminalizing signs and indicia in the courtroom also reinforce threat narratives, where there are clearly visible security checkpoints, airlock entry systems, and armed guards, all of which "add up to a punitive atmosphere that does not differ from criminal prisons" (Solomon 2005, 9-10).
Another type of crimmigrating narrative concerns deservingness. Like threat narratives, deservingness relies on myth building and the criminalization of certain groups. However, it also focuses on who can be culturally, politically, and morally included in the United States in spite of their perceived transgressions of the law. To be portrayed as deserving, respondents' personal stories must conform to a moral sense of belonging that mitigates their perceived "danger to society." IJs ascertain a respondent's deservingness in a number of ways, including the respondent's strong family and community support, their fear of return, and their commitment to their own rehabilitation. Focusing on these characteristics, however, may also obscure more complicated narratives outside of the law that make up the respondent's social reality. Similar to past research on immigrant rehabilitation services and penal power, deservingness narratives in immigration court require non-citizens to "participate in public dramatizations of their criminal stigma"-to revisit their own criminal histories-in exchange for the IJ granting bond or relief from deportation (Guzman 2020, 681).
Deservingness in the immigration context has its roots in the US assimilation movements of the early twentieth century (Ngai 2004), asylum (Shiff 2021), and debates over entitlements to social services including health care (Willen 2012). In these contexts, deservingness does not simply rest only on the non-citizen's ability to tell their story. It also traces back to the law's conception of who belongs and how adjudicators identify and prevent undue hardship to the respondent and their family. For instance, research on US asylum officers shows that they routinely understand deservingness as something based on non-citizens' perceived traits, such as their gender, sexuality, or religious identity. These perceptions often bump up against codified definitions of asylum eligibility-moments of mismatch between personal belief and codified law that are called "encounters of ordinary discordance" (Shiff 2021, 339). These moments also happen in immigration court, as IJs grapple with their perceptions of who deserves to stay in the United States (Asad 2019). And while IJs may see an alignment between a respondent's story and legal notions of deservingness, their own stereotypes and biases may cause skepticism about that story's credibility (McKinnon 2009;Rempell 2010;Shiff 2021).
Drawing moral boundaries can exclude those who are not deserving, which is a central tenet of US immigration legal history (Salyer 1995;Law 2010;Kang 2017). Historians note that immigration law has always made "foreigners" (Parker 2015), "aliens" (Lew-Williams 2018), and "inmates" (Hernández 2017) by political design. This history underscores a third crimmigrating narrative of impossibility. Impossible narratives promote "illegality" as a marker of social difference, barring certain immigrants from US citizenship and foreclosing access to their legal rights. Impossible subjects, to use Mae Ngai's (2004, xxiii) term, comprise a caste group "categorically excluded from the national community" due to their unauthorized legal status. The history of US immigration demonstrates that illegality itself is fluid and that the law makes and unmakes so-called "illegal aliens" based on social categories such as country of origin, gender, and race. 5 Just as structural racism evolves with the law, the definition of impossibility is ever changing to maintain sovereign borders, produce new categories of racial difference, and naturalize social relations. While the line between "legal" and "illegal" remains blurred, immigrant exclusion based on impossibility continues in current codified law (Menjívar 2006;Menjívar and Abrego 2012). Impossible narratives emerge in the courtroom when the government sees the respondent as statutorily ungovernable and, thus, an insoluble social problem (De Genova 2002). Adjudicators equate illegality with criminality, as the law defines an "illegal alien" in one of two ways: either they are unlawfully present in the United States or they have committed a deportable crime. In either case, the only way to resolve a respondent's impossibility is through their legalization of status or expulsion. These options are narrow by design, which shows that impossibility is less of a shortcoming than a feature of immigration law itself. These narratives also arise when the court process-with its long delays, complex arguments, and insufficient interpreter services-assumes a non-citizen's inclusion as both impractical and impossible (Barak 2021). As a result, this process systematically shuts them out from fully engaging during proceedings.
5. In her analysis of US immigration restrictions from 1924 to 1965, Mae Ngai (2004, 4) writes that certain non-citizens' inclusion in the nation "was simultaneously a social reality and a legal impossibility." While many non-European immigrants were deported during that time, administrative discretion granted many "illegal" European immigrants the right to stay (De Genova 2002).
Crimmigrating Narratives from a Third-party Perspective
The US immigration system has long been "shrouded in secrecy and bureaucratic barriers" and vulnerable to political interference (Ryo 2019b, 98; Office of the Inspector General 2021). Barriers create methodological challenges for immigration court researchers, and the EOIR's data stewardship has recently been called into question (TRAC 2019b). Given the contradictory and clandestine nature of US immigration law, more creative, on-the-ground approaches to research are necessary. This need coincides with growing public distrust in the immigration court system (ABA 2019, 15), as the number of volunteer-led organizations observing and collecting data on immigration court has grown, especially since the first travel ban in 2017 (Wadhia 2019). 6 Researchers have begun to analyze third-party immigration court observations, identifying multiple and overlapping areas of perceived structural vulnerability among detained non-citizens, such as financial insecurity, discrimination, and mental health (Tsuchiya et al. 2021). This growth in immigration court research follows a long tradition of sociologists and legal geographers observing US courts (Atkinson and Drew 1979;Walenta 2020) and international human rights practices (Weissbrodt 1982;Jeffrey and Jakala 2014). Advocacy-centered initiatives such as these highlight perceived dilatory and unjust practices in court and detention, contributing to an action-based project rooted in human rights fact-finding (Orentlicher 1990) and moral responsibility (Tait 2011). Building on what immigration court observers have documented and witnessed using human rights models, this article develops a theoretical and empirical understanding of their observations. By witnessing stories that respondents tell in court, third-party observers relate how the court process unfolds in real time using an open-ended and contextual framework. We focus on how legal actors scrutinize, discern, and apply moralizing judgments toward the respondent's narrative and understanding of their social conditions. As a non-legal audience, observers take stock of moments where immigration court and the law imagine, and fail to imagine, the respondent's perspective during trial. They also test their own expectations of justice and fairness against what they observe. In doing so, they construct "social spaces which, though they reveal themselves only in the form of highly abstract, objective relations, [make] the whole reality of the social world" (Bourdieu and Wacquant 1992, 231; see also Jiang and Erez 2018).
DATA AND METHODS
This study uses open-ended form observations of the Fort Snelling, Minnesota, detained immigration court, with a primary focus on master calendar and bond hearings. The purpose of the study is to examine when, where, and how crimmigrating narratives arise based on the perceived threat, deservingness, and impossibility of non-citizen respondents. To analyze the observers' responses and how they relate narrative in a courtroom setting, we relied on a coding scheme that incorporates themes deductively from previous crimmigration research and also allows for the emergence of these three narratives based on our coded results.
The Human Rights Defender Project
The Advocates for Human Rights, the University of Minnesota Law School's James H. Binger Center for New Americans, and Robins Kaplan LLP launched the Human Rights Defender Project (HRDP) following the mass mobilization of volunteers at US airports precipitated by the presidential travel bans. 7 Designed to foster pro bono bond representation and bring the public into immigration court, the HRDP recruits, trains, deploys, and supports volunteers to monitor and document hearings; analyzes and reports on issues and trends surfaced through the monitoring and documentation; advocates with systems actors; and identifies and refers cases in need of representation to pro bono counsel (HRDP 2020). The HRDP engages trained volunteer observers to attend immigration court hearings and report on issues of concern, including, for example, the manner of arrest, access to counsel, the ability of individuals to raise defenses to deportation, family and community support, interpretation, as well as IJ and attorney engagement. To collect data, the observers used a two-page court observation form that captured both closed-and open-ended responses (see Appendix 2). 8 For our analysis, we chose to code all open-ended questions, including concerns about procedure (for example, "were there any technical issues with interpretation?"); courtroom interactions (for example, "did the detainee's attorney interact with the client?"); and criminal history (for example, "was a criminal history mentioned by either the court or lawyers?"). A full table of these questions is included in Appendix 1.
7. The HRDP website is accessible at https://perma.cc/K9ZW-3A8H. 8. The HRDP developed this form as the Fort Snelling court was transitioning out of a recent immigration judge (IJ) hiring process. As the court infrastructure evolved substantively and mechanically, observers indicated three options on the form: Options 1 and 2 were courtrooms with one IJ each, while Option 3 involved another courtroom where one IJ was in the process of retiring and another was in the onboarding phase.
The HRDP typically schedules two volunteers per observation shift, each lasting 1.5 hours in either the morning or afternoon, two to four days a week. Per the EOIR guidelines, public observation may take place only during master calendar and bond hearings and during final merits hearings with the IJ and respondent's consent. 9 Because master calendar and bond hearings are easier to publicly access, the HRDP observations almost exclusively focus on those two types. Observers are not the judge, counsel, or respondent, nor are they relations or even acquaintances of these legal actors. They are aware that hearings move quickly with limited information and that, in almost every case, they will not witness the final written or oral decision of the judge. Therefore, observers are not required to provide responses to each open-ended question but, instead, select questions that capture the relevant details of each hearing. A large number of observers first heard about the project via their church, synagogue, or other faith-based organization based in the Twin Cities. Additional volunteers learned about the project via the University of Minnesota or local human rights organizations. The authors of this article began collaborating with the HRDP partners at the beginning of 2018 to aid in data cleaning and collection.
The HRDP's unique data raise at least two important considerations of our work in this article. First, in the absence of random selection, third-party immigration court observers are likely highly selected (on education, political affiliation, support for immigration, geography, and proximity to immigration court, and so on). We recognize that observations have the potential for bias because of observers' pro-immigration stance as well as their alignment with the HRDP's advocacy goals. Second, and as a consequence, the resulting observational data and associated findings may not be generalizable to the broader US population or fully representative of other immigration courts in the United States. While these concerns are valid, they are also outweighed by the need for new knowledge and public awareness about the immigration court system and how it functions. Recalling Howard Becker's (1998, 87) critique of random sampling, it should be noted that third-party immigration court observers can be viewed as unusual cases that are "likely to upset your thinking" by providing a unique opportunity to glimpse firsthand perceptions of procedural fairness in immigration court and coproduce knowledge for future research.
Analyzing data from the Fort Snelling court is important for a number of reasons. First, the Fort Snelling court's jurisdiction is unique in that it handles cases from a number of states-North Dakota, South Dakota, and Minnesota-considered to be new and reemerging settlement gateways for recently arrived immigrants (Donato et al. 2008;Passel and Cohn 2016). 10 Previous work shows that unauthorized immigrants are more susceptible to ICE apprehension in new gateway states and mostly rural parts of the Midwest, where labor market conditions have worsened, the Latino population has increased, and conservative politics have strengthened (Moinester 2018;Ryo and Peacock 2020). Second, the Fort Snelling court is important in that, compared to other courts, it hears a proportionally large number of removal cases involving respondents from outside Mexico and Central America. This is partly due to Minnesota's status as a historically popular destination for refugees in the United States, particularly for the Hmong in the 1970s (Allen and Goetz 2010), Somalis in the 1990s (Boyle and Ali 2010;Abdi 2014), and Liberians today (Corrie and Randosevich 2013).
9. Also per EOIR guidelines, IJs have the authority to close an immigration hearing at any time and ask any court visitor to identify themselves and their purposes for attending trial.
10. While the "immigrant gateways" literature has primarily focused on the sheer numbers of foreign-born individuals living in a given state, we also pay attention to the growth rate of the foreign born.
Sampling Strategy
For this article, we focused our analysis at the hearing level. During July 2018 to June 2019, the HRDP conducted a total of 3,125 hearing observations among 168 volunteers. Given the large number of observations, we restricted our analysis to a random sample (n = 301) limited to 10 percent of the original dataset, stratified by IJ and question (see Appendix 1). These observations reveal details involving various legal actors -including the IJ, prosecutor, attorney, interpreter, and/or respondent-as well as the perspective of the third-party observer. Table 1 presents descriptive statistics indicating the distribution of hearing type, attorney representation, and interpreter usage. All hearings took place in person rather than over video. In total, ninety-six of the 168 volunteer observers were present in our sample. Of the three judge categories, there were fifty-three unique observers for Judge Option 1, fifty-two unique observers for Judge Option 2, and twenty-six unique observers for Judge Option 3. We stratified the sample in this way because court processes and outcomes often differ by IJ and because observers sometimes leave questions blank if they do not pertain to the hearing (Ramji-Nogales, Schoenholtz, and Schrag 2007). As such, we wished to capture a representative sample of court observations in the full data set. In order to keep track of each observation, we assigned each question response a corresponding case identification that concatenates the hearing date, form question, judge identifier, origin country code, and the last three digits of the respondent's alien registration number.
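To make the stratification and identifier construction concrete, the short sketch below shows one way such a stratified 10 percent draw and concatenated case identification could be produced. It is an illustration only, not the authors' code, and the column names (judge, question, hearing_date, country_code, a_number) are hypothetical stand-ins for the HRDP variables described above.

```python
# Illustrative sketch only (not the HRDP's code): a sample stratified by judge and
# form question, plus the concatenated case identifier described in the text.
# All column names here are hypothetical.
import pandas as pd

def draw_stratified_sample(df: pd.DataFrame, frac: float = 0.10, seed: int = 42) -> pd.DataFrame:
    """Sample the same fraction of observations within each judge-question stratum."""
    return (
        df.groupby(["judge", "question"], group_keys=False)
          .apply(lambda stratum: stratum.sample(frac=frac, random_state=seed))
    )

def make_case_id(row: pd.Series) -> str:
    """Concatenate hearing date, form question, judge, origin country code, and the
    last three digits of the alien registration number into one identifier."""
    return "-".join([
        row["hearing_date"],        # e.g., "2018-09-18"
        row["question"],            # e.g., "Q7"
        row["judge"],               # e.g., "IJ1"
        row["country_code"],        # e.g., "MEX"
        str(row["a_number"])[-3:],  # keep only the last three digits
    ])

# Toy example:
observations = pd.DataFrame({
    "hearing_date": ["2018-09-18", "2018-09-25"],
    "question": ["Q7", "Q3"],
    "judge": ["IJ1", "IJ2"],
    "country_code": ["MEX", "HND"],
    "a_number": ["123456789", "987654321"],
})
observations["case_id"] = observations.apply(make_case_id, axis=1)
sample = draw_stratified_sample(observations, frac=1.0)  # frac=1.0 only so the toy strata are non-empty
print(observations["case_id"].tolist())
```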
Data Analysis
We performed our data analysis using NVivo, data management software for qualitative data analysis. Text search queries were employed to retrieve key words and phrases to be included in the analysis. This process involved multiple rounds of searching for terms such as "crime," "aggravated felony," "moral turpitude," and other words and phrases used in immigration court that also refer back to the prior literature on crimmigration (Stumpf 2006;Vázquez 2017;García Hernández 2018). Relying on our experiences as lawyers, human rights advocates, and social scientists, we identified key crimmigration themes based on 119 text queries, allowing for an interdisciplinary application that coded words or phrases from the ground up with larger core categories related to crimmigration in mind. For example, searching for more specific terms such as "battery" and "assault" often fell into the category of criminalization, contributing to a larger theme based on threat perceptions, which finally led to selecting codes in the core category of "threat narrative." This follows the guidelines of axial coding-the process of constructing an inductive and deductive relationship between codes-to examine the relationships between categories and refine them by placing them within the context of three crimmigrating narratives (threats, deservingness, and impossibility) (Strauss and Corbin 1998). This approach bolsters our theoretical understanding of the relationship between court process and interactions from our initial codes and categories.
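Because NVivo is an interactive tool, the minimal sketch below only approximates, in code, what a keyword text query and its roll-up into the core narrative categories might look like. The term lists are invented examples for illustration and are not the authors' 119 queries.

```python
# Minimal sketch (assumes each observation note is a plain-text string) of keyword
# retrieval rolled up into the three core narrative categories. The term lists are
# illustrative examples, not the authors' actual query set.
import re
from collections import defaultdict

CORE_CATEGORIES = {
    "threat narrative": ["aggravated felony", "danger to society", "assault", "flight risk"],
    "deservingness narrative": ["letters of support", "family ties", "rehabilitation", "hardship"],
    "impossible narrative": ["illegal alien", "interpreter", "unable to pay bond", "pro se"],
}

def code_observation(text: str) -> dict:
    """Return, for each core category, the query terms found in one observation note."""
    hits = defaultdict(list)
    for category, terms in CORE_CATEGORIES.items():
        for term in terms:
            if re.search(r"\b" + re.escape(term) + r"\b", text, flags=re.IGNORECASE):
                hits[category].append(term)
    return dict(hits)

note = ("DHS attorney argued the detainee was a flight risk; "
        "more than thirty letters of support were submitted.")
print(code_observation(note))
# {'threat narrative': ['flight risk'], 'deservingness narrative': ['letters of support']}
```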
The co-authors met regularly to discuss any discrepancies and to establish consensus during the coding process. In doing so, we were able to integrate our own legal and lay perspectives on immigration courts to form the following categories: (1) familial and social ties; (2) legal status; (3) judicial decisions; (4) court processes and procedures; (5) language and interpretation; (6) legal representation; (7) perceptions and emotions; and (8) criminalization. By implementing an iterative coding design, this list of categories captured the tension between formal and informal lay understandings of the law, building upon previous studies that investigate lay jurisprudence and legal discourse (Conley and O'Barr 1990;Barton 2004).
FINDINGS
As in most trial formats, immigration court requires strategic storytelling in order to prove a case. For that reason, it is often difficult for those without legal experience to follow what occurs during a hearing. Given that legal perplexities can obscure one's understanding of a case, we first note the stark difference between those having attorney representation and those who are pro se-the "haves" and the "have nots" (Galanter 1974). While the "haves" often rely on counsel to develop a sound legal strategy (Eagly 2015;Ryo 2018), the "have nots" often lack the specific legal training to develop a narrative strategy that appears competent, coherent, and responsive to the judge. Lacking a strategist can lead to the further criminalization and subordination of the respondent during immigration proceedings, as the HRDP observation data make clear. For instance, during an observed master calendar hearing, an unrepresented Somali detainee explained that he previously "pleaded guilty to something he never should have" on the advice of his public defender, and he was now stuck in detention trying to find an immigration attorney to represent him. He asked the IJ for help appealing his prior conviction, and the IJ explained that they had no authority to do this. 11 Sometimes, the narratives that respondents employ without counsel are given full or even additional consideration by IJs-in another master calendar hearing involving an unrepresented Mexican respondent, an HRDP observer notes that "the judge said [they] 'like to give latitude to unrepresented people' while doing bond hearings." However, there are other instances when IJs are less open to a respondent's story. In a bond hearing involving another Mexican respondent, the same IJ "asked questions irrelevant to immigration, prejudicial to [the] criminal case, listed statements not taken under oath, seemed to have a bias from the start," and they also "asked questions that [an] attorney may/should have objected to." The lack of legal protections for non-citizens in US immigration proceedings, such as the right to an attorney at the government's expense, presents a roadblock for those navigating immigration proceedings on their own.
Threat Narratives
Court observations noted how the legal process prioritizes threat narratives, especially during bond hearings. Similar to criminal court, IJs in these hearings consider not only whether the respondent poses a "danger to society" but also whether they are a "flight risk." While they often rely on police reports as evidence of a non-citizen's criminal history, those police rarely testify in immigration court. As such, IJs rely on these hearsay documents to make life-altering decisions, regardless of whether a respondent is charged or convicted of a crime (Holper 2014). 12 Preliminary immigration hearings therefore involve an initial inventory of a non-citizen's charges and convictions without the same criminal protections.
In one observation, the IJ denied bond to a respondent with driving-under-the-influence (DUI) and traffic-related charges because the IJ deemed those charges to be "very serious." The IJ reached this conclusion despite the defense counsel's argument that the DUI was a first-time offense and that the respondent "has decent grades in school and would finish out his education this fall." These character assessments also take place during the master calendar phase: in one such hearing, the IJ made note of the respondent's "lengthy criminal record," saying that "a detailed description of the items on that record will need to be addressed in order to establish that the detainee is not a danger to the community." These examples show the indelible mark criminality can have on non-citizens, even at the beginning of a case. They demonstrate what Daniel Kanstroom (2007, 5) calls "post-entry social control," a mechanism that depicts non-citizens as criminal threats in a civil court. By extending the state's carceral power, threat narratives privilege an eternal probation model that treats non-citizens as forever guests in the United States (Beckett and Murakawa 2012). The seriousness of one's crimes remains paramount, and a respondent's overall criminal history helps shape and determine an IJ's decision. Therein lies a paradox: despite falling outside of the realm of criminal law, the state engages in a process that criminalizes non-citizens and often deprives them of their life, liberty, and/or property (Golash-Boza 2012).
11. This example highlights the recent US Supreme Court decision of Padilla v. Kentucky (559 U.S. 356 (2010)), where the petitioner claimed that his criminal attorney gave him bad advice upon entering a plea bargain for a drug-related charge. The court decided henceforth that criminal defense attorneys are required to advise non-citizen clients about the deportation risks that come with a guilty plea in criminal proceedings.
12. Rules of evidence also generally differ from the criminal court system, where both the prosecution and defense obtain copies of the evidence that the other side has gathered. In immigration court, prosecutors are not required to share their evidence; instead, they selectively choose what enters the record after the respondent makes their statements in a hearing (Heeren 2013).
Politicized debates over immigration law and policy also inform how threat narratives operate. Like accounts of criminal history, these debates rely on stigmas of criminality to rationalize immigration enforcement and removal (García Hernández 2019, 113). In one hearing, an IJ denied bond to a Mexican respondent despite his parents and siblings having legal status in the United States. This decision was a result of the fact that he had two recent DUIs on his record. From the observer's perspective, the ICE prosecutor was "dogged in pursuing bond denial and in countering detainee's arguments." He commented that plans for alcohol treatment "should have happened after the first DUI," with the observer noting how "he seemed on the edge of sarcasm in his tone during his rebuttal of the detainee's lawyer." At the time this hearing took place (November 2018), popular media-both via mass communication (for example, television, radio) and digital sources (for example, Twitter)-focused extensively on the so-called "wave of migrants" traveling in a "caravan" toward the United States' southern border (Semple and Malkin 2018). In between hearings, the observer commented on the prosecutor's behavior as being unprofessional, hearing him say "something along the lines of 'Bangladeshis' entering the country which I had the impression he was saying were part of the 'caravan' of immigrants in the news currently." The observation draws a link between the prosecutor's in-court argument, which focuses on the respondent's perceived threat, and myths regarding immigrant groups approaching and already within the United States (Chavez 2008;Dávila 2008;Massey and Pren 2012). It highlights how threat narratives attempt to link criminal stigma to a group's perceived traits, all while blaming an individual's crimes. Some observations note that IJs try to maintain neutrality and admonish prosecutors' attempts to bring more overtly political characterizations of the threat narrative into the fray of discussion, suggesting "a sense of balance in the discretion they have in following the law," as one observation notes. However, the same observer writes that IJs still "generally tip the balance against the detainees because the system is tilted that way."
Deservingness Narratives
Contrary to threat narratives, deservingness narratives are grounded in a moral sense of belonging rather than in a so-called danger to society. They focus on whether or not a person should be allowed to stay in the country despite their perceived violations of the law. Assessing deservingness involves multiple criteria in the IJ's decision. For bond hearings, familial and social ties as well as the length of stay in the United States, employment, and education are important factors in building a respondent's personal story and potentially leading to their release from detention. Defense attorneys often file letters of support that describe their clients' good moral character and confirm that their clients are not a danger to the community, regardless of their legal status. In one bond hearing, a Mexican respondent with a pending DUI charge submitted "more than thirty letters of support" corroborating the community ties he had made in the United States in the past twenty years. With the help of an attorney, the respondent described his alcoholism as a mental health issue that required professional help. His attorney reassured the IJ that a relative would connect the respondent to treatment and that many people pledged to give him rides during recovery. The IJ granted bond shortly thereafter.
This example demonstrates how a positive outpouring of support can be convincing evidence of a non-citizen's deservingness to stay in the United States. With a narrative strategy in place, this respondent argued not only that he was not a danger to society but also that his belonging in American society depended on his commitment to recovery. Despite having strong community ties, respondents must still publicly revisit their criminal history in a non-criminal court setting (Beckett and Murakawa 2012;Guzman 2020). While this action can recriminalize the respondent, having counsel to strategize and present these narratives still benefits his or her chances of receiving bond and/or relief from deportation.
However, because the majority of respondents lack counsel, the storytelling often falls upon the "have nots." In another bond hearing, an unrepresented Mexican respondent brought notes and reference letters as evidence of his good moral standing and ties to the community, specifically through his son. Here, the respondent's story appeared to fit into a deservingness narrative, but it still required legal documents as corroboration: the IJ asked for a birth certificate as proof of relationship "to establish that his removal would be a hardship to his U.S. citizen [child]-beyond the ordinary hardship of separation from parent." The IJ also requested documentation proving that the detainee had been present with the child in the year prior to detention. Because the respondent had no attorney to strategize for the case, the IJ remained skeptical of his story.
When respondents lack evidence for strong social ties, judges and prosecutors may question the respondent's ability to return to court after their custody release (Eagly and Shafer 2020a). For instance, during a bond hearing, a respondent's attorney stated that their client "has fears of returning to his country," while the DHS attorney argued that the respondent was a flight risk as "most of his family is in Mexico and he has lived in many [US] states ... as if moving in order to find work makes one irresponsible and unlikely to follow up in court." Here, the prosecution cited the respondent's limited ties within the United States as proof that they will disappear when released from detention. This faults the individual and not the broader structural risks of returning to one's so-called "home country." The respondent had no criminal convictions, and yet, in this bond hearing, the prosecution assumed that the respondent would be unable to follow the law. After it was clear that a bond decision would not be made without more documentation surrounding the respondent's arrest, their attorney asked for and was granted a continuance. What this observation highlights is that to be portrayed as deserving, respondents' personal stories must conform to a certain moral framing, but that framing still relates to the respondent's perceived social ties and criminal history. This obfuscates more complex, extralegal narratives that constitute the respondent's social world.
In other cases, the IJ may still choose to impart normative judgments after granting bond to the detainee. In one hearing, a prosecutor argued that a Mexican respondent was a danger to society "because of her drinking." Even post-ruling, after the IJ decided they were not a danger and granted them bond, they still "talked to her about NOT drinking and driving," bringing up the respondent's moral responsibility as "a mother with children." Even though positive bond outcomes can still lead to stern reprimands, this example shows that IJs do attempt to consider the complexity of a respondent's story during bond consideration. In a separate case, as one observation records, a judge "granted bond despite driving-while-intoxicated convictions since no one was hurt and no one was driving erratically. He has directions to [either] not drive or seek treatment. Not a flight risk-long time in United States, citizen children." 13 Additional observations detail how IJs make efforts to educate as well as cover basic procedural information during hearings without a defense attorney. In one bond hearing, the judge "seemed to look for options for the detainee" and "expressed concern regarding [the] detainee's mistreatment while in the United States." Once no apparent options were available, "[the judge] stated to the detainee, 'I wish you the best. Good luck.'" Here, one can see how judges are unable to use the little discretion they have to see beyond the law's interpretation of the respondent's narratives. The law often still binds respondents, however deserving, to processes of removal and exclusion by virtue of their non-citizen status.
Impossible Narratives
As it attempts to resolve the tension between deservingness and threat, immigration court can be both receptive and cold toward the respondents. Still, it often fails to separate the general from the particular of a respondent's story. As respondents open up about their lives in front of the bench, immigration judges and lawyers hear them in a criminalizing space where, as one IJ put it, they are "doing death penalty cases in a traffic court setting" (Drummond 2019). Detachment and estrangement from respondents' social worlds allow the state and its bureaucratic actors to render punishment in a non-punitive setting, and this is where impossible narratives reveal themselves. This issue is at the crux of the impossible narrative: being placed in a rules-oriented setting without knowledge of the rules can disorient the respondent and detract from their own story.
As noted earlier, the history of US immigration restriction tells us that the law does not stop at presuming that non-citizens are threatening and/or deserving. It continues by subjecting a class of people to "rules that would be unacceptable if applied to citizens," as the US Supreme Court decided in Demore v. Kim in 2003. 14 These are the rules that the HRDP observations often cite as procedurally unfair. They turn away from the relational-based understanding of the law and emphasize how a respondent's social inclusion is impossible because of their "illegal" status (Ngai 2004). However uncomplicated that may sound, the rules do not imply a straightforward and just legal process. An observation from a master calendar hearing demonstrates this issue:
[The respondent] had previously requested asylum, but had not filled out an application because [he] couldn't find an attorney to help him. He had also been granted bond at a previous hearing. He had not paid it, because he was unable to contact his uncle in AZ for financial help. He stated that he had been removed from his home in AZ without his phone, and so had no contact info to reach the uncle. Judge asked Govt [DHS attorney] how this kind of situation could be helped, he had no answer. Judge explained that they could not release him without payment. He then asked to be deported. Judge asked how, if he was afraid of return. He is the sole support for his family and he is more afraid of them starving than of his fear for his life. They reluctantly ordered removal.
13. Following a Department of Justice memo in October 2019, multiple driving-under-the-influence convictions are "strong evidence that an alien lacked good moral character during that time and is thus not eligible for cancellation of removal." See Matter of Castillo-Perez, 27 I&N Dec. 664 (A.G. 2019).
In this observation, a pro se respondent's inability to pay bond leads to their removal order. While the IJ was mindful of the respondent's fear of return, they still deferred to the court's rules and procedure. Their deference penalized the respondent by acknowledging their personal story but denying a broader social truth, which is that there are conditions of acceptance and inclusion the respondent is financially incapable of fulfilling. Shut out from fully engaging in the court process, the respondent also had no means to seek help from their family. Because the law does not give the IJ enough discretion to amend those conditions, the respondent was unable to seek justice in their own trial. The social, logistical, and financial blockages preventing pro se respondents from developing and refining their own narrative strategy until the merits hearing are a feature, not a flaw, of immigration court today. Another observation of an impossible narrative involves a pro se respondent from Burma. His family had legal permanent residency in the United States, though the respondent's legal status was unclear to the court:
The [respondent] came to the United States as a refugee with his family. Parents had fled from Burma to Thailand, where he was born. He stated that his parents have applied for US citizenship but does not know their status or their birth dates ... adjusted to LPR in August 2011. Government found a list of family members at their entry. These included his birth mother, who had died in Burma when he was five years old. So his "parents" were his father and his stepmother. Not known if he was adopted, so information on his entry was incorrect. Government couldn't determine the status of his parents. The underlying question is: is respondent a US citizen? There is much confusion about his family.
While the final outcome of this case is unknown, it shows how liminal legal status can be and how in immigration court the impossible subject becomes statutorily ungovernable. The proposed solution rests not on the narrative that the respondent offers to the court-which is often formed by their liminal status and its impact on their social experience-but, instead, depends on whether or not they can prove themselves to legally remain in the country. This refers back to how the law articulates territory and sovereignty and how bureaucracies often fail to see beyond a rule-based orientation of the law. In the aforementioned example, this failure occurred partly due to the DHS's own improper adherence to the rules, as the judge in this case was very frustrated that the prosecution's evidence was incomplete and that they "couldn't supply the needed information." Impossible narratives also emerge from the immigration court process itself. Court procedures can vacillate between abrupt adjudication and long, drawn-out exercises in rescheduled hearings, technological delays, and postponed decisions. Because hearings can move quickly, this can lead to little or no interaction between the respondent and the IJ. In one master calendar hearing where the defense attorney failed to show up, "[it] took so little time that the judge had very little contact with the detainee." As such, it became difficult or irrelevant in that miniscule fraction of time for any party to work through the respondent's circumstances and privilege a relational-based orientation of the law. A case can also slow down: sometimes, IJs grant continuances elongating the time allowed to find an attorney and gather pending documents, as our observations indicated. IJs would usually grant their request for a two-to-three-week extension, though this often came with a stern warning that they may not grant additional continuances in the future. In another master calendar hearing involving a Honduran respondent, an observation points out that the detainee had asked several times for an extension and had not been able to find an attorney. The judge commented she was aware of his physical limitations, without going into any detail, and has been asking that he be provided with an attorney who can represent him.
For those with representation, their attorneys would sometimes request an additional continuance from the judge in order to better prepare for trial. As one observation recalls, a lawyer who was hired earlier in the day immediately sought a rescheduled hearing for both bond and removal hearings so that they could "have time to learn the case." In another case, a respondent had not yet met their new attorney, and while they were ready to defend their client during the bond proceeding, the DHS attorney was missing important hearing documents due to a recent government shutdown. During the court process, both sides of counsel are often left short-handed in understanding the nature of the case, and often do not see or meet the respondent prior to trial. Meanwhile, the respondent must navigate the anachronistic nature of their trial, while still remaining at the ready to tell a story in a moral, chronological order suitable for the court's understanding.
While many of these hearings may be short, the length of detention can seem interminable, which can preclude respondents from fully engaging in the court process. Observers mentioned the effect that the rules of mandatory detention can have on a case's trajectory, leading to extended separation from families and severance from basic sources of social and economic support, including a potential loss of income. In some cases, detention can also lead to prolonged detachment from the communities and families who support and rely on them. In a few cases, detainees simply asked the judge to deport them on the spot. One Honduran respondent "repeatedly stated [their] desire to go home," as one observer puts it, but the judge "repeatedly explained the need to follow court procedure while reassuring the detainee that he'd be deported as soon as the process had been completed." Similarly, another bond hearing observation notes a Mexican respondent who emphatically repeated: "I want my deportation right now." The judge had to repeat and clarify questions and rights. The detainee finally understood that the judge just wanted his acknowledgment of his rights before issuing the order for removal.
As this example shows, there can be breaks in communication between the IJ and respondent as well as a break between rule-and relational-based orientations toward the law. Even when the respondent wants to be deported, the law requires an adherence to process as an articulation of an individual's rights, as limited as they may be. So while immigration court often appears inefficient in terms of time, scheduling, and pace, it also embraces bureaucratic state control and the rule of law to maintain a sense of order. Immigration court process relies on these rules in order to subject respondents to an "arbitrary, disproportionately harsh system" that privileges a non-citizen's exit over their stay (Kanstroom 2007, 15).
This lack of engagement also becomes clear when observing everyday interactions between bureaucratic state actors (such as the IJ or ICE prosecutor), non-state actors (such as an interpreter or defense attorney), and the respondent. With so many legal actors in one room, different points of view may obstruct or privilege certain narratives in court (Conley and O'Barr 1990). For instance, an IJ's demeanor can sometimes prevent them from seeing beyond the respondent's personal story and consider how court processes may limit a respondent's ability to engage during proceedings. One IJ was "rude and abrupt" to a Sudanese respondent who had just retained counsel the day before and asked for a continuance. During another master calendar hearing, a different IJ was "impatient with [the] respondent's lack of understanding of rights and legal processes." Another observation notes that, for a bond hearing, the same IJ was "frustrated that the detainee was not prepared with an attorney"-as the observation notes, the respondent had failed to tell his attorney to show up.
These frustrations can signal a broken process, and they can also create tension in the courtroom. Another observation involved a master calendar hearing for an unrepresented Cuban respondent who first wanted his case transferred closer to his family, who could not afford to travel to Minneapolis; he then "talked about wanting deportation." This observation pointed out that the judge was "abrupt," interrupting a pro se Cuban respondent who then became visibly upset as evidenced by his face becoming flushed, his voice cracking, and tears welling in his eyes. When the judge was talking to him, they were shuffling and stapling paperwork at the same time. These interactions show how impossible narratives can further entrench ideas about not only who belongs in the United States but also who is able to engage in the legal process to begin with. Moreover, they emphasize Conley and O'Barr's (1990, 176) claim that "the legal thinking of lay people is neither random nor illogical" but that such thinking is "structured according to different precepts than the rule-oriented thought process that dominates legal reasoning." For instance, in the absence of an attorney, an observer writes, a respondent in a master calendar hearing "was engaged, but arguing legal issues that didn't apply to this type of hearing." In another case, a respondent confused his master calendar hearing with his bond hearing when his attorney was absent. The respondent then asked for the hearing to be delayed so that he could gather more evidence. This example highlights the deleterious consequences of navigating legal status alone in court, which limits the respondent's ability to understand court procedure and tell their story in the process.
The IJ and ICE prosecutor's body language and tone can be abrupt, impatient, and frustrated following a respondent's testimony that they may perceive as extraneous. In these scenarios, one sees how legal authority can immediately dismiss any narrative that exists outside of legal precepts; in short, legal authority gets to decide what narratives are irrelevant and outside the purview of the rules. Disregarding the respondent's story as undeserving of due process assumes a "taken for grantedness" of the law that denies the humanity of a Hmong respondent, as one observation of a master calendar hearing makes clear: "No eye-contact from judge to detainee. Why? So dehumanizing. He is afraid to return to Laos because he doesn't have documents to live there nor land to farm or survive. He came to the U.S. in 1992 as a refugee. Has 6 kids. Judge asked for no talking from myself and [another] observer after I rustled a paper and he asked if I had a packet." This observation highlights the tension between rule- and relational-based orientations toward respondents' stories and even highlights the observer's own presence in the courtroom. While, in some cases, the judges were "kind," "compassionate," "respectful," "professional," and "fair," this relational-based understanding of court process shows how deservingness narratives, like threat narratives, develop from interactions between the law's facilitators and the non-citizen on trial, and that relational aspect can often overlook aspects of a story that non-legal actors may consider to be very important.
Legal interpreter services also contribute to impossible narratives. Observations note how IJs are selective in their attempts to bridge language barriers and work through interpreter issues. In many instances, judges try their best to assist respondents in this regard: one observer called the judge "respectful" and pointed out that they asked if the respondent wanted to request a Quiché interpreter for their bond hearing. In other situations, judges are less patient. Another observation points to a bond hearing where "a judge said it's too cumbersome to have everything translated so they did summaries and had those translated. Seemed really unfair to the detainee." In another hearing (also for bond determination), the respondent's attorney called into the trial via telephone, and a "slight language barrier" kept the detainee from fully understanding the proceedings. "Her Chinese interpreter would be in the office soon," the observer pointed out, "and they would call the detainee together to make sure he understood everything." Another example shows more generally how interpreter issues can lead to further delays and confusion:
In the September 18 initial removal hearing, five to ten minutes were spent on obtaining a phone Hmong interpreter. None were available from the two resources tried by [the judge]. She postponed his hearing to "tomorrow." I am not sure what happened with the scheduling of the hearing, but when I came for another observation shift on September 25, this detainee was having his hearing (with a Hmong phone interpreter); I don't know if this was a result of several more "tomorrow" postponements because of an unavailable interpreter, or if it was a reschedule based on something else.
Language and interpreter challenges remain important due process issues, and they also underscore how difficult it is for the respondent and other legal actors to translate narrative and promote understanding of the law. Some observations point out that some detainees did not have the proper language resources to help them file their paperwork, with one observer asking: "How can detainees complete [an] application when they are in jail and speak little to no English?" Without dependable interpretation services, unrepresented non-citizens, in particular, are immediately constrained in their ability to participate in a meaningful and engaging way. These bureaucratic failings-from time lapses to literal misinterpretations of respondents' stories and understandings of the law-create a legal space that renders respondents' engagement with the state both impractical and impossible. The court process leaves them excludable, unable to fully participate, and without means to stay in the United States.
DISCUSSION AND CONCLUSION
While US immigration court appears on the brink of collapse, we contend that it continues to function today due to the power and social control of crimmigrating narratives. Our analysis of courtroom observations points to three narratives (threat, deservingness, impossibility) rooted in statutory provisions that in turn reflect social and political discourse, both current and historical. And while these framings do not always align with lay understandings of the law, third-party observations highlight how they move the legal process forward and disassociate non-citizens from their social realities.
Building on Ewick and Silbey's (1995) sociology of narrative, we argue that these three dominant narratives privilege particular truths that the respondent and other legal actors can tell in immigration court. Meanwhile, these narratives discount broader, more structural truths about the US immigration system today, which are that it lacks due process and targets certain immigrant groups as a means of social control. These narratives encourage a "taken for grantedness" that ignores any connection the respondent's story may have to more generalized social experiences of being a non-citizen in the United States, belonging to a particular origin group, and being potentially without legal status. Taken together, these conditions demonstrate how the law in action often regenerates existing relations of power and criminalizes the respondent even further. The law's tight grip on discretion means that respondents' stories must appear personal, unique, and not only convincing enough for the IJ to grant relief but also within the bounds of specific criteria written in codified law. In doing so, crimmigrating narratives facilitate a connection between personal story and the relational structure that makes their story visible in court, but they also purposefully ignore how the conditions of their existence are formed in part by larger, more global power structures, such as the law, which remain rigid in an increasingly interconnected and migratory world.
A key contribution of this article is the finding that respondents' own stories often fail to fit into those narratives and that the institutional power of the law often rejects their stories by default. In this sense, courtroom observations can help to reveal how legal process can decontextualize, abridge, and even disregard personal narratives as untruthful and, by doing so, dispossess experiences of their meaning (Barthes 1975;Polkinghorne 1988;Bruner 2004). Our results demonstrate how court process effaces the connection between the respondent's personal story and a broader, more complex relationship to their social experience. With that in mind, another theoretical contribution of this article is to show how competing narratives in the immigration court context engrain the relational binary of the "haves" and "have nots," which point to the powers and privileges of citizenship as an institution.
Our findings also reveal the strategy behind crimmigrating narratives and who has access to producing countervailing narratives in court. During cross-examination in a removal trial, ICE prosecutors are strategic in locating the threat, deservingness, and impossibility of the respondent's ability to remain in the United States. If the respondent has an attorney-in other words, a "strategist" who helps them choose what story to tell-they can downplay such arguments, but there are many instances in our sample where pro se respondents must initiate a narrative strategy on their own. This can be difficult for the respondent, as past work demonstrates that having counsel in immigration court increases the likelihood of a successful outcome (Eagly and Shafer 2015;Ryo 2018, 2019a, 2019b). Speaking on why retaining counsel may lead to more successful case outcomes, Ryo (2019a, 246) writes that "one possibility is that lawyers advance personal or individuating information about their clients that makes it difficult for immigration judges to engage in simple heuristics or categorical thinking about detainees as dangerous criminals." Our results add to this idea by showing how easy it becomes for adjudicators to accept crimmigrating narratives at face value at the start of a trial. This often occurs without another narrative strategy to displace it, especially when pro se respondents have little to no guidance on how to prepare their case before a judge.
We note in our results how some elements of a respondent's story can overlap with logico-deductive application of the law-for instance, if they argue that their deportation would cause undue harm to their US citizen child left behind, then the court might interpret it to be a violation of that child's constitutional protections. But if their child is not a US citizen or a green card holder, then that nexus connecting social experience and personal story disappears, or, at the very least, the legal status of their relation makes the respondent lawfully ineligible for relief. Our results also show that, despite their core distinctions, the three crimmigrating narratives that we have outlined are bound to and reinforce one another; they are not separate entities vying for attention. Instead, we have found that, by taking these themes together, one can identify and analyze how civil procedure turns detention and removal into administrative, punitive, criminalizing processes. As such, deservingness narratives' moral logic for inclusion complements threat and impossible narratives' logic for exclusion. This observation resonates with Asad Asad's (2019) findings that IJs in the Dallas immigration court avoid the removal process for those they consider deserving of relief, while, in other cases, IJs rely on a "scripted approach" of well-rehearsed narratives for those they deem impossible to circumvent deportation. Our results add to this dialogue on courtroom narratives by showing how the law and legal processes limit discretion based on rules that repeat and affirm respondents' exclusion.
Another contribution is our emphasis that all three crimmigrating narratives place the burden of proof on the respondent detainee. Non-citizens' stories are often personal but contained in a legal space. Stories of respondents' dreams, commitments, and desires are powerful in the legal sense only if they are perceived as non-threatening, deserving, and/or "possible subjects." To borrow from Ngai (2004, 5) once more, respondents in court must fight to avoid perceptions of being "a person who cannot be and a problem that cannot be solved." This idea, on the one hand, not only tends to disengage the respondent from the process but also formulates a retelling of their story in an order perceived as cohesive, competent, and chronological. But, on the other hand, even with attorney representation to aid them in strategic storytelling, we find that the immigration court process often re-criminalizes respondents in civil proceedings. Crimmigrating narratives reference their criminal history to (1) prove their threat to society; (2) privilege moral justifications that determine their deservingness to belong in the United States; and/or (3) rely on the law as written to make those non-citizens "illegal." These forms of social control, which serve various gatekeeping functions, are on full display in the HRDP volunteers' observations of the Fort Snelling immigration court. Furthermore, our results highlight how important it is to understand that non-citizens today are formed and reformed via the structural conditions of narratives and that immigration court uses them to force individuals into certain patterned social positions (Emirbayer 1997;Bourdieu and Wacquant 1992).
The socio-legal scholar Susan Silbey (1991, 430) said that "ultimately law is a matter of force, but it is also a matter of words, words that enable the state-sanctioned use of force." In the context of immigration court and crimmigrating narratives, those words do not always comport with lay understandings of law and justice. Third-party observers' perspectives show the lack of resolution between a respondent's personal story and the structural power relations that constitute the US immigration system. We note that it is difficult for respondents to tell their story in a court process that ignores broader social experiences of immigration enforcement. Additionally, the court process's own dysfunction and delay can inhibit the respondent's ability to formulate a narrative over multiple hearings, leading to an asynchronous retelling of their lives without an appreciation of their social context.
Nearly forty years ago, the immigration legal scholar Peter Schuck (1984, 1) wrote that "immigration law remains the realm in which government authority is at the zenith and individual entitlement is at the nadir." To that end, not much has changed in the US immigration system. The historical legacy of immigration restriction, from slavery to the Chinese Exclusion Act to the targeting of so-called "illegal aliens" today, demonstrates that, while immigration courts may be on the brink of collapse, they have always been part of a larger system intent "to punish, stigmatize, and marginalize" (García Hernández 2019, 13). In that vein, this article speaks to how the current structures of immigration law use crimmigrating narratives in court to justify non-citizens' punishment in a non-punitive setting. More and more, it is becoming the job of researchers and public observers to work within and call attention to this realm and to support advocates and legal practitioners who have already begun to open pathways of understanding into the immigration court system and its procedures. Our hope is that future research explores third-party observations further, as they can elicit public understandings of detained and non-detained immigration court where crimmigration continues to expand. Such work can continue to investigate what stories are told in court and what stories get lost in the law.
Focus and Effects of Peer and Machine Feedback on Chinese University EFL Learners’ Revisions of English Argumentative Essays
The present mixed-method study examined the focus and effects of peer and machine feedback on the revisions of English argumentative essays. The study collected data from 127 Chinese university EFL learners, which included Draft 1, peer feedback (PF), PF-based Draft 2, machine feedback (MF), MF-based Draft 2, questionnaires, and interview recordings. The main findings were: (a) peer feedback was primarily concerned with content errors while machine feedback mainly involved language errors, (b) significant differences occurred in most types of errors between Draft 1, PF and PF-based Draft 2, and between Draft 1, MF, and MF-based Draft 2, (c) the uptake of ‘introducing a new topic in Conclusion’ was a powerful predictor of PF-based Draft 2 scores, and (d) the participants generally moderately considered peer and machine feedback to be useful. Based on the findings, some implications are discussed on how to better implement and enhance the quality of peer and machine feedback.
Introduction
As an essential component of students' academic development in a second/foreign language (SL/FL), writing requires a considerable amount of time and effort since it involves higher order thinking, which makes it very challenging for many SL/FL writers (Cope et al., 2011; Dikli & Bleyle, 2014). Consequently, feedback plays a critical role in enhancing the quality of students' compositions. Nevertheless, assessing writing and providing feedback are also time-consuming and challenging. This is why, although teacher feedback is more effective (Goldstein, 2004; Hattie & Timperley, 2007; Keh, 1990; Stern & Solomon, 2006; Vardi, 2009), machine and peer feedback have been developed and implemented in both classroom and other learning situations (Allen & Katayama, 2016; Shintani, 2015). Even though both peer review and machine feedback have proved to have positive effects on SL/FL learners' rewrites (Caulk, 1994; Hyland & Hyland, 2006; Rollinson, 1998, 2005; Topping, 1998; Yu & Lee, 2015), debates persist about their actual effects (Anson, 2006; Xie, Ke & Sharma, 2008). Few studies have examined peer and machine feedback simultaneously either. Moreover, considering that accuracy is both an important and frustrating issue in writing (Li, Link & Hegelheimer, 2015), it is worthwhile to analyze more specifically the impact of peer and machine feedback on the quality of SL/FL learners' rewrites. For these reasons, the present mixed-method study, targeting Chinese university EFL (English as a FL) learners, explored the focus and effects of peer and machine feedback on learners' rewrites of English argumentative essays.
Literature Review
Defined as the "information with which a learner can confirm, add to, overwrite, tune, or restructure information in memory, whether that information is domain knowledge, meta-cognitive knowledge, beliefs about self and tasks, or cognitive tactics and strategies" (Winne & Butler, 1994, pp. 5740), feedback has been long held to facilitate the learning of SLs/FLs (Ellis, 2011;Ferris, 2010;Hattie & Timperley, 2007).
A review of peer assessment (PA) indicated that PA was of adequate reliability and validity in a wide variety of applications and had positive formative effects on student achievement and attitudes. Ion et al.'s (2016) analyses of 637 feedback units showed that peer feedback helped students better develop the task in their writing.
In addition, trained PA can be more effective (Ellis, 2011;Kulkarni et al., 2015;Min, 2006). For example, Min (2006) examined the impact of trained responders' feedback on EFL college students' revisions in terms of revision types and quality. After a four-hour in-class demonstration of how to do peer review and a one-hour after-class reviewer-teacher conference with 18 students, the instructor-researcher collected students' first drafts and revisions, as well as reviewers' written feedback, and compared them with those produced prior to training. The results indicated that students incorporated a significantly higher number of reviewers' comments into revisions after the peer review training, and that the number of revisions with enhanced quality was significantly higher than that before the peer review training. The researcher thus concluded that trained peer review feedback could positively impact EFL students' revision types and quality of texts, supported by a subsequent study (Liu & Chai, 2009).
Moreover, peer feedback proves to be beneficial to students in other aspects (Ellis, 2011; Kurt & Atay, 2007; Lundstrom & Baker, 2009; Miao et al., 2006). Miao et al. (2006) examined peer and teacher feedback on essays on the same topic written by Chinese university EFL learners. Analyses of student texts, questionnaires, video recordings, and interview transcripts revealed that peer feedback improved student autonomy, though it was adopted less often in students' rewrites. Kurt and Atay's (2007) eight-week experimental study of 86 Turkish prospective teachers (PTs) of English showed that the peer feedback group experienced significantly less writing anxiety than the teacher feedback group at the end of the study. The study also revealed that the peer feedback process helped the PTs become aware of their mistakes and helped them look at their essays from a different perspective. Lundstrom and Baker (2009) did a study with 91 university students in nine writing classes at two proficiency levels to see which was more beneficial to improving student writing: giving or receiving peer feedback. The results indicated that the givers, who focused solely on reviewing peers' writing, made more significant gains in their own writing over the course of the semester than did the receivers, who focused solely on how to use peer feedback.
Machine Feedback
As technology develops, machine feedback becomes possible via computers and the internet. The technology often used for feedback on writing is Automated Writing Evaluation (AWE) software, which generates automated scores based on techniques such as artificial intelligence, natural language processing, and latent semantic analysis (Philips, 2007; Shermis & Burstein, 2003; Ullmann, 2019), and provides written feedback in the form of general comments, specific comments, and/or corrections (Stevenson & Phakiti, 2014). In recent years, the use of AWE to provide feedback in the writing classroom has steadily increased, with systems such as Project Essay Grader (PEG), e-rater, Intelligent Essay Assessor (IEA), and IntelliMetric (Stevenson & Phakiti, 2014). In China, the most widely used is www.pigai.org. While many scholars applaud AWE as a means of freeing instructors from marking assignments and enabling them to devote more time to writing instruction (Hyland & Hyland, 2006; Philips, 2007; Ullmann, 2019), others doubt whether AWE is capable of providing accurate and effective feedback (Anson, 2006).
For example, Li et al. (2015) used mixed methods to investigate how Criterion affected writing instruction and performance. Four ESL writing instructors and 70 non-native English-speaking students participated in the study. The results showed that Criterion led to increased revisions and that the corrective feedback from Criterion improved accuracy from a rough to a final draft. AbuSeileek and Abualsha'r (2014) investigated the effect of computer-mediated corrective feedback on 64 EFL learners' performance in writing over the course of eight weeks. The participants were randomly assigned to either a no-feedback control condition or a corrective feedback condition. The researchers found that students who received computer-mediated corrective feedback while writing achieved better results in their overall test scores than students in the control condition who did not receive feedback. Cheng (2017) employed a mixed method to investigate the impact of online automated feedback (OAF) on the quality of 138 university students' reflective journals in a 13-week EFL course. The findings showed that the experimental group outperformed the control group in the overall score of the final reflective journal and demonstrated a significant improvement in scores across reflective journals. The results of these two studies show that AWE has a positive impact on the quality of students' writing, supporting those of earlier studies (Chen & Cheng, 2008; Warschauer & Ware, 2006). Ullmann's (2019) study of 76 essays showed that the automated analysis was immediate, scalable, and on average only 10% less accurate than the manual analysis.
Even so, Stevenson and Phakiti's (2014) review found little evidence for positive effects of AWE on the quality of students' AWE-based rewrites. Stevenson and Phakiti (2014) attributed this to the small body of research, the heterogeneity of existing research, the mixed nature of research findings, and methodological issues. Other explanations are that computers do not possess human inferencing skills and background knowledge (Anson, 2006) and that AWE-generated comments primarily focus on grammar in writing (Hyland & Hyland, 2006). This may be why AWE-generated feedback is less acceptable to students than teacher feedback (Dikli & Bleyle, 2014). Dikli and Bleyle (2014) investigated the use of an AES system with 14 advanced students from various linguistic backgrounds in a college ESL writing classroom. The findings showed that the instructor provided more and better quality feedback than the AES system.
Rationale for the Study
As reviewed, there have been many studies on the results of peer and machine feedback in relation to grading and students' compositions (Bijami et al., 2013; Cho & Schunn, 2005; Gielen et al., 2010; Kulkarni et al., 2015; Lin & Yang, 2011; Rollinson, 1998, 2005; Topping, 1998; Xie et al., 2008). However, little has been said as to the focus of peer and machine feedback in educational designs (AbuSeileek & Abualsha'r, 2014). Few studies have simultaneously examined peer and machine feedback either. More insight into the nature of peer and machine feedback would indicate more clearly how technology and students could be more helpful in SL/FL writing and what kind of assistance teachers should preferably provide. For example, if technology and peers can provide useful feedback on grammar, teachers can direct their assistance more to textual coherence or content (AbuSeileek & Abualsha'r, 2014). Moreover, since writing accuracy is both an important and frustrating issue (Li et al., 2015), it is worthwhile to examine more specifically the focus and effects of peer and machine feedback on the quality of SL/FL learners' writing. For these reasons as well as the intent to make better use of peer and machine feedback, the present study adopted mixed methods to explore the focus and effects of peer and machine feedback on Chinese university EFL learners' rewrites of English argumentative essays. To achieve this purpose, the following research questions were formulated: (1) What is the respective focus of peer and machine feedback on students' English argumentative essays?
(2) How do peer and machine feedback impact students' rewrites of English argumentative essays?
Context
The present research was conducted in a highly accredited university in Beijing, where English reading and writing courses were compulsory for undergraduate non-English majors. Upon entering the university, all non-English majors took a standardized English placement test, the results of which put the students into three band levels (a higher band level meant higher English proficiency). Based on their band levels, the students registered in compulsory and optional English courses accordingly. The majority fell into band level 2 and were required to take the English Argumentative Reading and Writing course, which contextualized the present study. The respondents of this study were randomly selected from those registered in the course taught by the same instructor. The students, who were required to write three long argumentative essays (more than 400 words) as well as a few short ones (about 100 words) during the 16-week semester, met the instructor once a week for a 90-minute period. The instructor, who held a PhD in Applied Linguistics, had published widely in international journals and had been teaching the course for five years. In class, the students and the instructor discussed the techniques related to English argumentative essay reading and writing, such as text structure, statement of arguments, paragraph structure, argument-developing skills, use of evidence, cohesion and coherence, and use of references. Adopting the process approach to writing, the instructor stressed the importance of revision and encouraged students to revise their drafts of the same composition at least twice based on different sources: teacher feedback, peer comments, and machine feedback. Prior to writing, a 30-minute peer review training based on Kramer, Leggett and Mead's (1995) scheme was arranged in class, which covered both content and language errors with more focus on content errors, in that students had learned English grammar systematically but had not been trained in how to write English argumentative essays effectively in previous schooling. Then students practiced peer review for each subsequent assigned writing task. Once a writing assignment was finished, each student sent his/her writing to the instructor, a peer, and www.pigai.org independently. The instructor provided feedback electronically on each draft at sentence, paragraph, and text levels, then gave a 25-minute summary report of the feedback and held individual discussions about the feedback when requested by the students in the subsequent class; students assessed their peers' writing either electronically or on paper and had to finish it within two days of receiving the writing; www.pigai.org generated feedback in both Chinese and English (namely, the machine feedback in the present research) immediately upon receiving the submission. To avoid cross impact, students were required to revise their writing separately upon receiving each type of feedback.
Participants
A total of 127 (102 male and 25 female) students participated in the present study and answered the questionnaires related to their background information and perceptions of peer and machine feedback, of whom 64 were interviewed for their verbal perceptions of peer and machine feedback. Meanwhile, the first and second drafts of the same composition of 111 students, as well as the corresponding peer and machine feedback, were complete for analyses. With an age range of 16-27 and an average of 19.42, the participants were from various disciplines such as civil engineering, mathematics, chemistry, and architecture. Prior to the course, they had never taken an English Argumentative Writing course.
Instruments
The data collected in the present study included interview transcripts, peer feedback (PF), machine feedback (MF), student Draft 1, PF-based Draft 2, MF-based Draft 2, and writing scores, as detailed below.
Student texts. Draft 1, peer feedback, PF-based Draft 2, machine feedback, and MF-based Draft 2 of the course's second composition on global warming were collected. Based on student consent and the completeness of both drafts, 111 compositions of each draft as well as peer and machine feedback were finally collected for analyses.
Writing scores. The scores of each draft were collected; each draft was rated by the instructor on a scale of 1-15 in terms of text structure, power of argumentation, coherence, grammar, and use of words (Appendix I).
Perceptions of peer and machine feedback questionnaire. This 14-item Perceptions of Peer and Machine Feedback Questionnaire (PPMFQ) was self-developed to investigate students' attitudes towards peer and machine feedback in terms of their roles and usefulness in their composition revisions. The questionnaire involved such issues as grammar, use of words, expression of viewpoints, and use of evidence and references, which are crucial elements of argumentative essays (Wyrick, 2008). All the items were placed on a 7-point Likert scale, ranging from 'Strongly Disagree' to 'Strongly Agree', with values of 1-7 assigned to the alternatives respectively.
Informal semi-structured interview. The informal semi-structured interview guide covered questions concerning teacher feedback, peer and machine feedback, and their advantages, disadvantages, and effects on composition revisions.
The background questionnaire. The background questionnaire aimed to collect informants' personal information such as age, gender, and major.
Procedure
Data were collected during weeks 7-9 of the semester, when the second argumentative essay on global warming was assigned with the instructor's consent. To help students better understand the nature of argumentative essays, prompts on the task were provided, such as the effects of global warming on agriculture and the major causes of global warming. Draft 1 was finished and submitted to the instructor, peers, and www.pigai.org online (an account was created for the class beforehand) in week 7, followed by peer feedback within two days and immediate machine feedback, respectively. Based on the feedback, students revised their Draft 1 independently according to the peer and machine feedback they had received respectively, and then submitted the rewrites to the instructor. Piloted with two students who had taken the same course in the previous semester, the questionnaire was slightly modified and then distributed, together with a consent form, to the students, who answered it in about 10 minutes during week 9's class meeting. According to their consent forms, a total of 64 students were informally interviewed by two research assistants thereafter in week 9. Each time, two students were interviewed together; the interviews were mainly conducted in Chinese, were recorded, and lasted 15-20 minutes.
Data Analyses
Since a writer needs to utilize an established language system to organize and present ideas in a certain mode in writing, the present study analyzed student texts and feedback in terms of both grammar and content. For this purpose, this study categorized errors with reference to the revision scheme in Kramer et al. (1995). The scheme (see Appendix II) used in the present study covered four types of errors: content errors (nine aspects involving failure to show a controlling idea, improper topic sentence and failure to achieve paragraph coherence, etc.), mechanical errors (misspelling, punctuation, and capitalization errors), syntactical errors (errors involving tense, part of speech, article, verb, adjective/adverb degree, agreement, and case, etc.), and lexical errors (errors in word formation, word choice, collocation, and unclear expression). Draft 1, PF-based Draft 2 and MF-based Draft 2 were analyzed carefully according to the scheme to identify the errors students made in their writing. All the analyses were done by two research assistants with an overall inter-rater coefficient of .91. Then the number of each type of error was counted for each text. The results were then analyzed via SPSS 20 to explore the distribution of and differences in different types of errors between Draft 1, peer feedback, PF-based Draft 2, machine feedback and MF-based Draft 2. To explore the effects of peer feedback on student revisions, Draft 1 and PF-based Draft 2 were compared to count and compute the uptake of peer feedback in the corresponding rewrites, so were Draft 1 and MF-based Draft 2. Then, multiple regression analyses were run, with scores of PF-based and MF-based Draft 2s being the dependent variable and the uptake of peer and machine feedback of errors of different types being independent variables.
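As an illustration of this analysis pipeline, the sketch below shows how the paired comparisons between drafts and the regression of rewrite scores on feedback uptake could be reproduced with standard open-source libraries. It is a minimal sketch under assumed conventions: the file name error_counts.csv and the column names are hypothetical, and the study itself used SPSS 20, whose stepwise regression procedure differs from the plain OLS fit shown here.

```python
# Minimal sketch (assumed data layout) of the quantitative analyses described
# above: paired-samples t-tests between drafts/feedback and a multiple
# regression of rewrite scores on feedback uptake. Illustration only.
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Hypothetical file: one row per student, with error counts such as
# 'C3_draft1', 'C3_pf', 'C3_pf_draft2', uptake counts 'uptake_C1'..'uptake_C9',
# and the instructor's 1-15 rating of the PF-based rewrite ('score_pf_draft2').
df = pd.read_csv("error_counts.csv")

# Paired-samples t-test, e.g., C3 errors in Draft 1 vs. those flagged in PF.
t, p = stats.ttest_rel(df["C3_draft1"], df["C3_pf"])

# Cohen's d for paired samples: mean difference divided by the SD of the differences.
diff = df["C3_draft1"] - df["C3_pf"]
d = diff.mean() / diff.std(ddof=1)
print(f"C3, Draft 1 vs PF: t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")

# Multiple regression: PF-based Draft 2 scores on the uptake of each content-error type.
predictors = [f"uptake_C{i}" for i in range(1, 10)]
X = sm.add_constant(df[predictors])
model = sm.OLS(df["score_pf_draft2"], X).fit()
print(model.summary())
```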
The survey data were computed via SPSS 20. The mean and standard deviation of each survey item were computed to determine how students perceived peer and machine feedback respectively. The interview recordings were first transcribed, double-checked and then subjected to thematic content analyses by the two research assistants respectively with an inter-rater reliability of .932 (Charmaz, 2006). The themes were then generalized, counted, and supported with excerpts from the interviewees' comments. Example themes were strengths of peer feedback, weaknesses of machine feedback, benefits of peer and machine feedback. When reporting the comments, a number was used for each interviewee for the sake of privacy and convenience.
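A companion sketch can illustrate the self-report analyses. The paper reports the inter-rater coefficients (.91 and .932) without naming the statistic, so the use of Cohen's kappa below, like the hypothetical file and column names, is an assumption for demonstration only.

```python
# Illustrative check of inter-rater agreement on the coded data and of the
# questionnaire descriptives; the reliability statistic actually used in the
# study is not specified, and file/column names here are hypothetical.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# One row per coded segment, with each research assistant's category label.
codes = pd.read_csv("coded_segments.csv")
kappa = cohen_kappa_score(codes["rater_1"], codes["rater_2"])
print(f"Inter-rater agreement (Cohen's kappa): {kappa:.3f}")

# One row per respondent, PPMFQ items q1-q14 scored on the 1-7 Likert scale.
survey = pd.read_csv("ppmfq_responses.csv")
items = [f"q{i}" for i in range(1, 15)]
print(survey[items].agg(["mean", "std"]).T)  # per-item mean and SD
```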
Text Analyses Results
Distribution of errors. Preliminary analyses of peer feedback showed that students commented on content errors in specific places of their peers' writing but provided very general comments on language problems such as 'There are lots of grammatical errors in the essay' in the writing. By contrast, www.pigai.org generated fairly specific suggestions on language problems but offered no content-related suggestions in students' writing. Consequently, further analyses of PF and PF-based Draft 2 focused on content errors while those of MF and MF-based Draft 2 focused on language errors. The errors in Draft 1, PF, PF-based Draft 2, MF, and MF-based Draft 2, were coded and counted, which were then analyzed in terms of mean and standard deviation (see Table 1). As seen from Table 1, the errors with highest mean scores in Draft 1 were SE6 (article errors) (mean = 2.67), LE2 (word choice errors) (mean = 2.13), SE2 (tense errors) (mean = 1.68), SE7 (errors of plural or singular nouns) (mean = 1.49), LE3 (collocation errors) (mean = 1.25), LE4 (unclear expressions) (mean = 1.25), SE3 (agreement errors) (mean = 1.22), SE1 (errors in part of speech) (mean = 1.19), C3 (failure to provide adequate evidence) (mean = 1.19), and ME (mechanical errors) (mean = 1.07). Peer feedback predominantly focused on content errors, barely involving syntactic errors except for such comments as "there are many tense errors in the writing" or "grammatical errors are too many" (comments like these were not counted in the final analyses in the paper because they were not specific). The means of content errors ranged from 0 (C9-introducing a new topic in Conclusion) to 1.02 (C8-inconsistency between the conclusion and the main argument). On the other hand, machine feedback was solely concerned with mechanical, syntactic and lexical errors. The errors in MF ranged from 0 (SE12-illogical comparison or ill parallelism) to 1.79 (SE3), and errors with highest mean scores were SE3 (agreement errors) (mean = 1.79), LE3 (collocation errors) (mean = 1.44), ME (mechanical errors) (mean = .91), SE6 (article errors) (mean = .73), and SE7 (errors of plural or singular nouns) (mean = .524).
Since PF and MF focused on certain aspects of Draft 1, most of which were incorporated into respective rewrites, the analyses of Draft 2 focused on the type of feedback students received correspondingly. As reported in Table 1, the mean scores of content errors ranged from .02 (C9) to .54 (C3) in PF-based rewrites and from 0 (SE12) to 2.20 (SE6) in MF-based rewrites.
Comparison of mean scores of the errors across Draft 1, PF, and PF-based Draft 2 shows that all content errors scored the highest in Draft 1 and that most content errors scored higher in PF than in PF-based Draft 2. Paired samples t-test results (see Table 2) indicated that Draft 1 differed significantly from PF in all types of content errors except C2 (improper topic sentence/no controlling idea/no topic sentence), largely with a small or medium effect size. Namely, significantly more content errors of all types existed in Draft 1 than identified by peers. Table 2 also shows that PF differed significantly from PF-based Draft 2 in C2 (t = 3.97), C3 (failure to provide adequate evidence) (t = -2.50), C5 (lack of the power of the argument/weak arguments or evidence) (t = -2.65), C7 (fail to achieve paragraph coherence: poor organization/Lack or misuse of transitional markers) (t = 3.73), C8 (inconsistency between the conclusion and the main argument) (t = 4.66), and TotalC (t = 3.66). Alternatively, significantly more errors of C2, C7, C8, and TotalC (total content errors) were identified in PF than in PF-based Draft 2, but the latter had significantly more errors of C3 and C5 than in the former. Yet Draft 1 had significantly more errors of C1 (failure to show a controlling idea/More than one controlling idea) (t = 5.47), C2 (t = 3.16), C3 (t = 4.10), C7 (t = 2.31), C9 (introducing a new topic in Conclusion) (t = 2.78), and TotalC (t = 5.88) than in PF-based Draft 2.
A similar pattern was observed for Draft 1, MF, and MF-based Draft 2, as reported in Table 1. Mechanical errors and most syntactic and lexical errors scored the highest in Draft 1, and errors of some types scored higher in MF than in MF-based Draft 2 while the reverse held for errors of other types. Paired samples t-test results (see Table 3) demonstrated that Draft 1 differed significantly from MF in all syntactic errors except SE5 (adjective/adverb degree errors), SE12 (errors of illogical comparison or ill parallelism), SE13 (errors of sentence fragments/run-on sentences/dangling modifiers), SE14 (errors of mixed or confused expression and sentence structure), and SE15 (missing a part of the sentence), and in all lexical errors except LE1 (errors in word formation) and LE3 (errors in collocations). Namely, significantly more errors of most types were identified in Draft 1 than in MF, except SE3 (errors in agreement) and LE3. Table 3 also suggests that MF identified significantly more errors of SE3 but significantly fewer errors of SE1 (errors in part of speech), SE2 (tense errors), SE6 (article errors), SE10 (errors in word order), SE11 (errors in coordinating and subordinating conjunctions), SE16 (overuse of a part of the sentence), TotalSE (total syntactic errors), LE2 (errors in word choice), LE4 (unclear or incomplete expressions), TotalLE (total lexical errors), and TotalE (total errors) than were found in MF-based Draft 2. In addition, Draft 1 had significantly more errors in SE2 (tense errors), SE3 (errors in agreement), SE6 (article errors), SE7 (errors in the use of plural or singular forms/uncountable nouns), SE11 (errors in coordinating and subordinating conjunctions), SE15 (missing a part of the sentence), SE16 (overuse of a part of the sentence), TotalSE, LE2, LE3, LE4, TotalLE, and TotalE than MF-based Draft 2.
Effects of peer and machine feedback on students' rewrites. To explore the effects of peer and machine feedback on students' rewrites, multiple regression analyses were run, with PF-based and MF-based Draft 2 scores as the dependent variables and the uptake of errors of different types as the independent variables, respectively. The regression analyses yielded no model for MF-based Draft 2 scores and one model for PF-based Draft 2 scores, as shown in Table 4 (notes: df = degrees of freedom; Cohen's f² benchmarks: small = .02, medium = .15, large = .35; Cohen, 1988). As shown in Table 4, with the change in R² being .068, C9 (introducing a new topic in Conclusion) was the only predictor (β = .261, t = 2.11, f² = .012) that positively predicted the scores of students' rewrites based on peer feedback.
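For readers unfamiliar with the effect-size convention cited above, one commonly used definition of Cohen's f² for the contribution of a predictor (or block of predictors) added to a regression model is given below; this is offered as a general reminder of the formula, not as a recomputation of the specific values reported in Table 4.

```latex
f^{2} = \frac{R^{2}_{\text{full}} - R^{2}_{\text{reduced}}}{1 - R^{2}_{\text{full}}},
\qquad \text{with conventional benchmarks } f^{2} \approx .02 \text{ (small)},\ .15 \text{ (medium)},\ .35 \text{ (large)}.
```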
Self-reported Results
Survey results. The mean and standard deviation of each survey item concerning peer and machine feedback were computed (see Table 5). Among the peer feedback items, those with the highest means included the statement of [main] claims and supporting evidence (mean = 5.29), item 6 (logic of arguing) (mean = 5.26), and item 5 (statement of supporting arguments) (mean = 5.24), centering on content. The five machine feedback items with the highest means were items 1 (improved ability to use grammar) (mean = 5.56), 2 (improved ability to use vocabulary appropriately) (mean = 5.54), 14 (acceptability of machine feedback) (mean = 5.24), 13 (uptake of machine feedback) (mean = 5.33), and 9 (improved ability to use vocabulary formally) (mean = 5.08), centering on the use of expressions and grammar. These findings indicated that the students were generally moderately positive toward peer and machine feedback.
Interview results. Table 6 summarizes the interviewees' perceptions of the advantages and disadvantages of peer and machine feedback. As seen in Table 6, around 20% of the interviewees commented that peer feedback provided more communication (23.4%), more chances to learn from each other (21.3%), new perspectives (21.3%), and good advice on language use and sentence polishing (17%). According to the interviewees, peers "feel more at ease and communicate frequently when reviewing each other's writing. This helps us to understand each other's writing better" (No. 34) and could "identify problems in logic" (No. 22); peer review enabled "me to know others' views of my writing" (No. 46) and "me to be aware of similar mistakes in my own writing" (No. 51). Meanwhile, some interviewees observed that "we peers are at a similar English proficiency level, most peer comments are not much professional or appropriate" (No. 53) and that "it is difficult for us to offer specific suggestions" (No. 35).
As seen in Table 6, machine feedback could "identify language and grammar mistakes effectively" (No. 31), and "better the sentences and format in my writing" (No. 18). However, because it was a machine, it could not "identify logical problems" (No. 10) or offer any content-related suggestions on aspects like "paragraph structure, statements of main and supporting arguments, and use of evidence" (No. 25). Moreover, the machine frequently "misidentified mistakes" (No. 31).
Probably because of these reasons, 72.3% and 63.9% of the interviewees reported that peer and machine feedback was helpful to the revision of their writing, respectively. On the whole, 100% and 71.7% of the interviewees reported feeling satisfied with peer and machine feedback, respectively.
Table 6. Self-reported Perceptions of Peer and Machine Feedback (N = 64). Advantages reported for PF: (a) more communication (11/23.4%), (b) chances to learn from each other (10/21.3%), (c) new perspectives (10/21.3%), (d) good advice on language use and sentence polishing (8/17%), (e) suggestions being very specific (6/12.8%), (f) being friendly (4/8.5%), and (g) feeling at ease (3/6.4%).
Focus of Peer and Machine Feedback
Analyses of the data showed that peer feedback primarily focused on content errors in the present study. Although the interviewees were intermediate-advanced learners, they were not confident enough to pinpoint language problems for their peers. This was also evident in the number of content errors they identified in PF, which was significantly lower than that in Draft 1. Apart from that, this might be partly attributed to the time-consuming nature of reviewing a text, which made the participants unwilling to provide detailed and specific suggestions. Meanwhile, as discussed in Yu and Lee (2015), EFL students' group peer feedback activities are often driven and defined by their motives, which are shaped and mediated by the sociocultural context. The learning context where the instructor emphasized content more than linguistic forms of argumentative writing might be partially accountable for the participants' performance in their PF in the present study. The students thus focused more on content errors correspondingly, which, nevertheless, needs to be further explored.
The present study also revealed that machine feedback was predominantly concerned with language errors, as found in Hyland and Hyland (2006). This might be because the so-called machine, though modeled on human intelligence, still cannot detect human thinking well enough to provide useful comments on the content of an essay. In addition, though it offered timely and generally accurate feedback on language problems, it mistook correct uses of grammar and expressions for incorrect ones or provided wrong suggestions for "correctly pinpointed mistakes" "at a rather high rate" (No. 62). For example, www.pigai.org marked the part 'will in' in the sentence "It will in turn lead to the large scale release of the greenhouse gas into the atmosphere" (Writing 44, Draft 1) as wrong. This finding partially supports the view that AWE is incapable of providing accurate feedback in certain respects (Anson, 2006). Hence, it is necessary for both instructors and learners to be cautious when utilizing machine feedback. This is especially so for learners with lower proficiency in the SL/FL, who are less likely to be able to distinguish errors wrongly identified by machines. Moreover, to what extent, and for which types of language use, machines identify errors needs to be further researched.
Effects of Peer and Machine Feedback
Regression analyses indicated that the uptake of 'introducing a new topic in Conclusion' was a significant predictor of students' PF-based rewrite scores. This might be related to the culture of writing in Chinese, which tends to bring in something new in the concluding parts of an essay. This thus deserves attention in formal classroom teaching, and its effects need to be further researched as well. Analyses of the self-reported data showed that the participants were generally positive about peer feedback, as found in the current literature (Liu & Chai, 2009; Miao et al., 2006). Apart from positively affecting students' rewrites, peer feedback offered students chances to communicate with and learn from each other, to become (more) aware of their own mistakes, and to look at their own writing from a new perspective, as found in some existing studies (Miao et al., 2006; Wang, 2014). Miao et al.'s (2006) study indicated that peer feedback helped promote student autonomy, especially in cultures which look up to teachers as authority figures.
Self-reported data indicated that the participants were generally moderately positive towards machine feedback, commenting that it was good, specific, timely, clear, and convenient. This suggests that machine feedback did have positive effects on the polishing of sentences in students' rewrites, consistent with the findings of many existing studies (Cheng, 2017; Hyland & Hyland, 2006; Li et al., 2015; Philips, 2007). On the other hand, machine feedback was sometimes wrong, which frustrated the participants in the present research. Because of this, students are advised not to rely solely on machine feedback and to consult peers and/or the instructor when unsure of the comments. These findings suggest that developers of such platforms/software have to enhance their reliability and validity and pay more attention to providing content-related feedback, which is of central importance to an essay. They also indicate that EFL learners, especially low or low-intermediate learners, have to be cautious when using machine feedback. Writing instructors had better remind their students of this limitation of machine feedback. Otherwise, some feedback would be misleading and the uptake of such feedback would lead to (even worse) mistakes.
As illustrated in the present research, peer and machine feedback had positive effects on students' rewrites; at the same time, they were not satisfactory in certain aspects. For example, peer feedback was sometimes unprofessional, inappropriate, or superficial, as found in the present study. Thus, it is important to improve the quality of peer and machine feedback. As found in Yu and Lee (2015), student motives can have a direct influence on students' participation in group peer feedback activities and their subsequent revisions. It is necessary to foster positive and constructive motives towards peer and machine feedback in students prior to revising the first drafts. Meanwhile, if peer feedback can be done anonymously, students may feel more comfortable providing more and better feedback on different aspects of their peers' writing, as found in Lu and Bol (2007). If students become more proficient in the target language, they will be able to provide better feedback as well, as will they if trained to provide peer feedback and to write (more) effectively. Integrating technology into the peer review process may also be beneficial to providing better and timely feedback (Ellis, 2011; Lin et al., 2011; Nobles & Paganucci, 2015). Nobles and Paganucci's (2015) mixed-method study of 18 high school students in a hybrid freshman English class at an independent school revealed that students perceived their writing to be of higher quality when writing with digital tools and that writing in online environments enhanced writing skill development. Kulkarni et al.'s (2015) study showed that students' final grades improved when feedback was delivered quickly, but not if delayed by 24 hours. In addition, it is equally important to train students to do peer review (Gielen et al., 2010; Liu & Carless, 2006; Rollinson, 1998). It is better for writing instructors to familiarize students with the peer review criteria and their expectations. As put in Stanley (1992, p. 230), "it is not fair to expect that students will be able to perform these demanding tasks [peer feedback] without first having been organized practice with and discussion of the skills involved." Strategies such as engaging students with criteria and embedding peer involvement within normal course processes may help promote peer feedback (Liu & Carless, 2006). Lastly, as found in Wang's (2014) investigation of 53 Chinese EFL learners' perceptions of peer feedback on their EFL writing over time, various factors affect students' perceived usefulness of peer feedback, such as their knowledge of assigned essay topics, proficiency in the target language, attitudes, time constraints, and classroom environment. It is necessary for writing instructors to consider these factors when implementing peer feedback.
Conclusions
The present mixed-method study examined the focus and effects of peer and machine feedback on the rewrites of Chinese university EFL learners' English argumentative essays. The main findings were: (1) peer feedback was primarily concerned with content errors, while machine feedback mainly involved language errors; (2) significant differences occurred in errors of most types between Draft 1, PF, and PF-based Draft 2, and between Draft 1, MF, and MF-based Draft 2; (3) the uptake of 'introducing a new topic in Conclusion' was a powerful predictor of PF-based Draft 2 scores; and (4) the participants generally considered peer and machine feedback to be moderately useful.
Although the present study yielded insightful findings, given that the participants were intermediate-advanced learners and the instructor was experienced in academic English writing, it is worth doing further research on different types of SL/FL learners and instructors to explore more about the focus and effects of peer and machine feedback. For example, lower proficient SL/FL learners may not be able to identify all language problems and/or distinguish correctly and incorrectly identified errors by machine; SL/FL learners with no/little training in argumentative writing may not be able to identify content errors. All these may not only lower the quality of peer feedback but also mislead learners to blindly depend on peer and machine feedback. More research on these issues with different SL/FL learner populations helps both learners and instructors to have a better understanding of peer and machine feedback. Then accordingly, peer and machine feedback may be better implemented to complement teacher feedback to improve the quality of SL/FL learners' writing as well as to alleviate writing teachers' workload.
Conflict of interest statement
On behalf of all authors, the corresponding author states that there is no conflict of interest.
An Insight into a Whole School Experience: The Implementation of Teaching Teams to Support Learning and Teaching
This paper presents some of the emerging outcomes from the experiences of a Maltese school that decided to embrace the philosophy of inclusion using a whole school approach based on the social model of disability. This was a qualitative study based on focus groups. A thematic analysis was used within an interpretative approach of hermeneutic phenomenology. Most schools in Malta now include ‘inclusive’ settings. This entails the use of a class Learning Support Assistant who is assigned to one or more classes where there are one or more children statemented as having learning difficulties. It is the usual practice for most Learning Support Assistants (LSAs) to follow the same child/children exclusively. All too frequently, teachers work individually. The outcome of the teachers' work has little or no effect on and is not affected by the actions of other educators. Teachers do their own work with their class and LSAs do their own work with the disabled student/s in class. The aims of the research were to generally explore the whole experience of one school in including disabled learners. The specific research questions for this part of the study were the following: 1. How can teaching teams reduce the barriers to education for all learners? 2. What practices within this model support or hinder the inclusion and education of disabled learners in a mainstream environment? Finally, there will be an attempt to expose the idealized notions of the fundamental principle of "schools for all". Social justice, disability, equality and human rights issues that underpin the social model of disability are being responded to within the "Special" Education discourse, creating exclusionary practice and inequalities within education.
Introduction
The main drive behind this paper is to positively present an account of research practice from an insider perspective on how disabled children and students can be supported in their mainstream classrooms. The biggest barriers to inclusive education can be the adults (Oliver, 1990; Titchkosky, 2003; Goodley, 2010) and less than appropriate support systems (Lindqvist, Nilholm, Almqvist, & Westo, 2011). Support does matter, and seeking the balance between teacher and Learning Support Assistant (LSA) involvement is key to good educational practice. To effectively research the LSAs' influence on inclusive practice, this study explored learning and teaching together with social inclusion. The research arose in the context of the emphasis placed on the inclusion of disabled children and students in mainstream schools in Malta. In Malta, one of the most remarkable developments has been the Maltese Ministry of Education's incremental phasing-in of an inclusive education policy in 1994. The result is that the majority of disabled children and students receive their education in mainstream schools. The research was carried out for the completion of a PhD (Agius Ferrante, 2012). The researcher was herself situated in the school, and the school staff, parents, and students were clear that they themselves valued the prospect of the study that would focus on this particular school's inclusive educational journey. From the outset, it is important to characterise this study's understanding of the discourse of disabled children and students within inclusive education. Children and students, whether with physical, learning, sensorial or other impairments, are identified as learners within a community of learners. All the learners are placed in mixed ability mainstream classrooms supported by a teacher and two LSAs, where they are authentically engaged and learn together with the required supports.
The teaching teams attempt to meet the needs of all learners at the onset of instruction by building supports and scaffolds into their lesson plans. The teaching teams create a culture in which disability is accepted and embraced (Bernacchio & Mullen, 2007). The predominant finding in this study was the need to coordinate LSA support through the creation of teaching teams. Participants reported that ongoing and conjoint processes of planning together when working in a team are key to the inclusion not only of the disabled learner but of all learners and educators.
Inclusive Education
The concept of inclusive education is viewed as a process located within the culture, policies and practices of the whole school (Salamanca Statement, UNESCO, 1994). Whilst education is often regarded as a decisive constituent in the development and progress of society, the teaching and learning of disabled children and students in inclusive classrooms in many countries continues to be provided by learning support assistants (LSAs). The historic evidence that disabled children and students do not receive the same kind of schooling as their non-disabled peers and experience social exclusion is overwhelming and not in dispute (Oliver, 1996). Children learn when they are together, encapsulated in the same experiences, interacting together (Salend, 2008). Despite the benefits, there are several key implications for disabled children when they learn alongside their non-disabled peers in terms of accessing the learning and teaching in the classroom. Previous research indicates that teachers are hesitant to accept responsibility for disabled students (Pijl, 2010). Inclusive education is not about having a partial view of inclusion, but lies in the search for processes of equity where excellence and choice are turned into positive influences rather than negative essentials of schooling. Inclusion is about education getting it right for all learners.
Learning Support Assistants (LSAs)
As the title suggests, the role is around support, which has proved to be problematic when directed entirely towards the disabled learner without a thought for the teacher, the class as a whole, active learning and teaching participation, and independence (Agius Ferrante & Falzon, 2011; Agius Ferrante, 2012; Blatchford, Russell, & Webster, 2012). Given the increasing evidence that LSAs are seen to be essential for the implementation of inclusive practice and indeed essential for a disabled learner's daily attendance within the classroom (Webster, Blatchford, Bassett, Martin, & Russell, 2011), it is important to understand this complex and shifting role and the influence of LSAs on students' learning. LSAs not only assist the teacher in developing the disabled learners' academic abilities; they also enable the student to nurture their social skills and to progress with confidence throughout their school journey.
Teaching Teams
Whilst the nature of the relationship between teachers and LSAs is constantly changing, it is very much left up to individual school leaders and, in most schools, to the classroom or subject teachers and the LSA to evolve the relationship between them. The creation of teaching teams in the school studied came about through training and experience. Organisationally, there must be structures to develop new approaches to the role of the LSA and towards the organisation of teaching teams, allowing them to collaborate with one another in order to facilitate the learning and teaching of all the learners in their classrooms (Agius Ferrante, 2008). Best practice is associated with the teacher and the LSA working as partners in the classroom (Agius Ferrante, 2012). One of the successes of in-class support is the quality of joint planning of the work between the class/subject teacher and the class/subject LSA (Agius Ferrante & Falzon, 2011). Collaborative teamwork comes about when all members of the team have common goals and a shared understanding (O'Brien & Garner, 2001).
Developing a shared framework helps identify the common denominators that exist among team members who often hold diverse opinions. If a group does not work to clarify a shared framework on an ongoing basis, it will perpetually interfere with their work and they are unlikely to become a real team. Team members are constantly struggling with redefining roles, relationships and responsibilities in order to collaborate more effectively. In schools, the instructional strategies associated with each discipline are among the most significant contributions team members make in the collaborative teamwork process. The incorporation of different perspectives increases the effectiveness of the educational experience for all learners.
Disabled Learner
In the light of inclusive education, the core values of disability culture that underlie political struggles include an acceptance of human difference, recognition of human interdependence, and recognition of the ability to construct complex learning journeys. Early Intervention has been defined as the provision of support to families through a programme used with infants and young children who have, or are at risk of having, an impairment or learning disability. Parents are supported by and work together with early interventionists to support their child's development through the use of different activities and experiences. Early Intervention affects the child, the parents, and the way in which the family functions. Dunst (2003) mentions the importance of Early Intervention as being empowering, thus utilising capabilities for the development of new competencies.
The proportion of disabled learners attending the school studied was above the national intake. Disabled learners in this study can be identified as learners with particular labels attached to them. All the learners were between the ages of 5 and 16 years and had attended the school from their first year of primary school. All the learners had received early intervention and had an educational statement for in-class support. Because they had a range of impairments (sensorial, physical, intellectual, and multiple impairments), they brought different contrasts and emphases into the fabric of the school.
Case Study
This study employed case-study methods (Yin, 2014). It was a qualitative study based on four focus groups. Using a case study method ensured the capacity to explore this school in an in-depth, meaningful way (Luck, Jackson, & Usher, 2006; Batstone, Waghorn, & Tobias, 2016). The school was located in the central region of Malta. Grade levels within the school ranged from Grade 1 to Form 5. School enrolment was 1042 students, of whom 114 had an educational statement. A thematic analysis was used within an interpretative approach of hermeneutic phenomenology, as the aim of the study was to develop a greater understanding of the experiences and perspectives of inclusive practice through the consciousness of the individual (Husserl, 1970). This allowed an exploration of a specific situation through the description and interpretation of a lived experience around inclusive education (Mayoh & Onwuegbuzie, 2013).
The data from the four focus groups is discussed in relation to the theory of inclusion in order to illuminate the nature of the processes, which led to and supported the development of teaching teams as a model of inclusive education.
Focus Groups
The aim of the focus groups was to explore how the teacher-LSA teams were supporting the learning of a diverse population of students and the implications of this. Prior to meeting the focus groups, and following the analysis of the teacher/LSA questionnaires and the interviews with the heads of the primary and secondary schools, I wrote nine questions from the themes developed from data analysis of the interview transcripts and questionnaires. To generate richer data, Barbour (2014) suggests combining interviews and focus groups to elicit information from participants both privately and publicly whilst addressing the most prominent themes previously highlighted by them in the interviews (Hennink, Hutter, & Bailey, 2011). The four focus group participants worked in and across teaching teams throughout the school. Their participation was on a voluntary basis. Each focus group consisted of key members of the teaching staff, including assistant head teachers and class or subject teachers together with their class or subject LSA, and a critical friend (Bassey, 1995) with whom I had a discussion on the outcomes of the focus group immediately after each session. The critical friend came from an education and disability studies background. All four focus groups were observed in practice by the critical friend and then evaluated and discussed with me. All focus groups were audio recorded, transcribed verbatim, and reviewed by the critical friend, confirming the group dynamics and the collaborative practices of the different teaching teams.
Thematic Analysis
I read and reread all the transcripts and coded them individually, initially using codes from the research questions and key words. I continually reviewed and revised the codes as needed and searched for patterns in the coding that yielded themes. I analysed emerging patterns across the four focus groups. The main thematic categories generated from the four focus groups will be discussed in the results section.
Focus Group Findings
Discussion in the focus groups centred on the following issues and debates, which arose from the research questions about the creation of teaching teams and the implementation of teamwork. The main themes elicited were diversity in the classroom, teaching teams, pedagogy and positive practice, and staff development and training. The themes and subthemes are presented in Table 1 below.
Diversity in the Classroom
Diversity of the learner group was frequently noted: "What is a 'typical' classroom today?" (Focus Group 4). Teachers and LSAs take on their respective roles in the knowledge that every class includes learners with different abilities and characteristics. Some students struggle, some students race ahead, some take learning in their stride, and all have different life experiences, personal learning preferences and their own different interests. "When we go round and monitor it is really difficult, when some students are way ahead and they finish, and the others have not even got out their file yet or opened their book on the right page" (Focus Group 2).
Promoting the principle that all students are equal and avoiding selection whilst respecting the natural variability in children and students, the teaching teams are responsible for delivering the mainstream National Minimum curriculum to a class of 26 students with a vast range of abilities, twenty-three non-disabled learners and three disabled learners with support entitlements. "Size, I mean classes should be more reachable" (Focus Group 3).
The majority of teaching teams within the four focus groups tended to put smaller class size at the top of their wish list: "Fewer students in a class; even 25 is too many sometimes. It depends on the class" (Focus Group 2). Interestingly, it was not the disabled learners who were considered as the 'problem', but the class size and the range of abilities within the class. "I have brilliant students who get 90 and 95 and I am speaking about physics and then we have students who get hardly 10, and that is an enormous challenge. So what we are saying is, it is the mainstream subjects sometimes that are causing the problem, rather than inclusion itself" (Focus Group 1). In Focus Group 1 the participants suggested that they would like to adopt different pedagogical strategies within the core lessons.
Teaching Teams
Teaching teams are synonymous with this school and are very much a part of the practice and learning support provision of the whole school. The general feeling from the participants of all four focus groups was that establishing the teaching teams has resulted in more students being reached on an individual level. Also, having the roles of the teacher and class/subject LSA clearly defined and understood by everyone is seen as important to both the teachers and the LSAs: "Having clearly defined roles helps prevent the misinterpretation of roles" (Focus group 4). Having knowledge of the LSA's role supports effective and inclusive practice and all the students gain from the LSA's presence in class. "… we can get closer to all the children, all the class, we both go around checking and helping the students" (Focus group 4).
Support Structures
In this school, support given by the LSA is differentiated at different school levels. Support is seen as being the right fit for the student when he can work alongside his peers as part of the class. "He followed the lesson, participated by answering questions, stayed quiet, but wanted to have something to do during the instruction; it was perfect" (Focus Group 3). Participants felt that the continuity of LSA support is important both for the students and for the teaching team. "It is important for the LSA to be continually available so we keep a constant situation in every lesson" (Focus Group 3). This ties in with the support the teachers feel they need from the LSA in order to reach the whole class. In Focus Group 1 teachers were upset with LSAs' lack of consistency with being present for lessons. "5 Green there were students that needed the support of an LSA, but there were no LSAs" (Focus Group 1). In Focus Group 4 it was noted that LSAs in class encourage student participation: "the LSAs' support enhances their participation" (Focus Group 4). The LSAs are also supportive during group work (Focus Group 1).
Instructional support is led by the class/subject teacher and supported by the LSA. Throughout the primary school and Forms 1 and 2 in the secondary school, all support is given in class by the teaching teams. It is interesting to note that creating independent learning opportunities to give the disabled learner autonomy and reduce dependency and labeling is seen as very important. "This can be achieved only if the teacher, along with the LSAs, truly envisage an inclusive classroom and makes sure that the LSA is not all the time next to the disabled student" (Focus Group 4). The English subject LSA also speaks about the need to empower disabled learners by getting the support levels right. "We are sometimes giving too much help, and as a result we don't help the students become independent. I believe that more tangible help in class to help access the curriculum is sometimes better" (Focus Group 4).
In the higher Forms, from Form 3, there is flexibility and individual support is given in the form of pre and post tasks out of class. There was concern in Focus Group 1 that providing disabled learners with one-to-one instruction or a different programme, results in leaving classes without LSA support.
The participants in the focus groups showed both knowledge and an understanding of the purpose behind curricular modifications. In Focus groups 1, 2 and 3 the participants spoke about adapted notes. Adapted notes were seen as helping a number of students especially with revision; the structure of the notes helps in memorisation (Focus Groups 1, 2, 3 & 4).
One of the biggest resources the school has in order to support the disabled learner are non-disabled peers, and gains are seen for both; "… students are willing to help out. Over a period of time students show an improvement in their interactive styles towards the disabled student" (Focus Group 4).
The use of strategies such as peer tutoring, co-operative group learning and team projects benefits all students and prevents social isolation. "…disabled learners are shunned by their peers" (Focus Group 4). "We make sure that the boys are working well, and not fighting or arguing. We do our best to involve all students during group work. Students may contribute verbally through discussion, creatively by producing a drawing or by acting" (Focus Group 4). Peer tutoring working in pairs is used successfully. "It comes naturally within our system. Finding someone with whom they can stay and work with is the key" (Focus Group 1).
Pedagogy and Positive Practice
Schools that are best for all non-disabled students are also best for disabled students (Focus Groups 1, 2, 3 & 4). "Academically we do check that decisions taken are put into practice and not left on paper" (Focus Group 4). All students are members of a class, year or form irrespective of ability or impairment (Focus Groups 1, 2, 3, & 4). "As for the student, through experience, I have observed that each child likes to belong. Therefore, participation from the child's end is usually very positive" (Focus Group 4).
All the participants were knowledgeable of pedagogical approaches and the principles of Universal Design for Learning (UDL) (Rose, Meyer, & Hitchcock, 2005), together with its implementation at the multiple means of representation stage. "…basically, we use different strategies like the use of visuals, PowerPoints, project work, videos, computer programmes, technology, games, drama and music" (Focus Groups 1, 2, 3, & 4). Teachers and LSAs in all four focus groups found that scaffolding questions and lessons provided them with a means of engaging all learners in their classrooms.
Student engagement both in school activities and in class is greatly influenced by the school's and the adults' expectations. "All teaching teams try and engage all the students in everything; this is a priority" (Focus Group 4). A participant in the same focus group noted: "the LSA's support also enhances their participation" (Focus Group 4).
Staff Development and Training
The participants viewed professional development in a broader context than inclusion. "…but staff development is not just about inclusion" (Focus Group 2). Another participant again mentions a broader context. "Different types of seminars on various topics that are related to problems encountered in the classroom, and teaching strategies, as well as other seminars, related to the formation of oneself" (Focus Group 4). Professional development in UDL strengthens the teaching teams' capacity to meet the needs of a wider range of learners in a mainstream classroom (Richmond-McGhie & Sung, 2013). Evaluating one's own practice was seen as an important part of professional development. "We evaluate the previous week and also the adaptations that were given during the previous week. We see what worked and what didn't and try to improve on that" (Focus Group 4).
Conclusion
Now the path for the future has to be laid, and whilst there is no one formula that can be applied for successful inclusive education, there are possibilities for inclusive education processes that support disabled learners to access quality teaching and learning. The purpose of this research was to address the questions: How can teaching teams reduce the barriers to education for all learners? What practices within this model support or hinder the inclusion and education of disabled learners in a mainstream environment? This small study suggests that it is essential to coordinate LSA support in inclusive schools.
Teaching teams were created in the school studied to assist all students and reduce the labeling effect on disabled learners, whilst increasing their autonomy without reducing their entitlement for support. Through the planning of the teaching teams, every student is ensured access to the learning of the class, and therefore the teaching teams felt secure in the sense that the disabled learners were participating members of the class community. The teaching teams' collective voices yielded much broader results than expected. Whereas I expected these participants to go into more detail with regard to disabled students, they shared opinions and concerns that are much broader and central to inclusive education. Focusing on learners and avoiding a disability narrative is a direct result of this school's inclusive education journey. The teaching teams' contribution reflected commitment and vocation to the teaching profession. Furthermore, what they are proposing is more akin to universal design for learning and they gave importance to collaborative teamwork between the teacher and the LSA.
The main conclusions of the Focus Groups are that inclusive education has many facets which are challenging to implement when it comes to providing a student-centered pedagogy capable of meeting the needs of all the students in the classroom; this links to the need to consider reducing class size from 26 learners to 20 learners in every class throughout the grade levels (Biddle & Berliner, 2008). This is a primary implication for practice and should be seen as a top priority for the policy makers.
Within the practice of inclusive education there is a disconnection between the need to respond to the different ranges of learning styles of the students within the classroom and the one-size-fits-all curriculum. UDL was noted as a positive teaching strategy to address different learning styles, thereby encouraging all the students to be learners.
Teachers and LSAs were viewed as professional teams and this was seen as crucial, as key to addressing the individual needs of all the students in the classroom, hence the need for continuous professional development. Further, the provision of extensive and improved opportunities for pre-service and in-service training as teaching teams should be seen as another priority for policy makers. Working in a team is key to the inclusion of the disabled learner, together with conveying high expectations and providing intellectual challenge. Here the primary implication for practice would be carefully planned processes that are well supported, together with flexible allocation of resources based on the needs of each class and teaching team. The supporting data from this case study indicate that while the teaching teams feel positive about their practice regarding shared decision-making, they want more time for collaboration and sharing of ideas.
At present, there is no international consensus about the extent to which LSAs should be utilised, circumstances that warrant their involvement, the duties they should appropriately perform, or what constitutes adequate training and supervision. Since most countries are still quite far from equitably including disabled students in mainstream classes, the opportunity is ripe for national, and international, dialogue on this issue. This school has altered the practice of support available for disabled students across the island. This research introduces the struggle for inclusive education from a different perspective, as a platform for further development within the practice of inclusive education.
|
2018-10-24T15:31:08.778Z
|
2017-10-31T00:00:00.000
|
{
"year": 2017,
"sha1": "823d0c5784379eb67f28c92648a5c978b7673696",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.30958/aje.4-4-3",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "bd86b2a9da70ef54b71981efd245db69de8fdeb7",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
}
|
199381942
|
pes2o/s2orc
|
v3-fos-license
|
Prevention of Adult Colitis by Oral Ferric Iron in Juvenile Mice Is Associated with the Inhibition of the Tbet Promoter Hypomethylation and Gene Overexpression
Iron is an essential nutrient needed for physiological functions, particularly during the developmental period of the early childhood of at-risk populations. The purpose of this study was to investigate, in an experimental colitis, the consequences of daily oral iron ingestion in the early period on the inflammatory response, the spleen T helper (Th) profiles and the associated molecular mechanisms. Juvenile mice orally received microencapsulated ferric iron or water for 6 weeks. On adult mice, we induced a sham or experimental trinitrobenzene sulfonic acid (TNBS) moderate colitis during the last week of the experiment before sacrificing the animals 7 days later. The severity of the gut inflammation was assessed by macroscopic damage scores (MDS) and the myeloperoxidase activity (MPO). Th profiles were evaluated by the examination of the splenic gene expression of key transcription factors of the Th differentiation (Tbet, Gata3, Foxp3 and RORγ) and the methylation of their respective promoter. While TNBS-induced colitis was associated with a change of the Th profile (notably an increase in the Tbet/Gata3 ratio in the spleen), the colitis-inhibition induced by ferric iron was associated with a limitation of the splenic Th profiles perturbation. The inhibition of the splenic Tbet gene overexpression was associated with an inhibition of promoter hypomethylation. In summary, mice treated by long-term oral ferric iron in the early period of life exhibited an inhibition of colitis associated with the inhibition of the splenic Tbet promoter hypomethylation and gene overexpression.
Introduction
Early iron fortification of food is generalised in Western Countries to prevent any risk of anaemia. In general, the forms of supplementation are ferrous (Fe 2+ ) but they are associated with frequent gastrointestinal side effects leading to poor compliance. In a previous study, we evidenced that the early administration of a new form of iron fortifier, lecithin bead microencapsulated ferric pyrophosphate (Fe 3+ ), is more efficient in preventing colitis in adult mice than its ferrous counterpart [1]. One of the explanations of such an effect is that crucial events during the perinatal stage drive the modelling of the memory mechanisms of immunity response maturation to reach full functionality. The immune system is mostly defined by an organised collection of cells interestingly concurring with their physiological function and development [2]. The high specificity of these immune cells in defending responses and the vast variability of their phenotypes were for a long time explained by a remarkable programming process that happens during the early life stages [3][4][5]. Among the numerous cells belonging to the immune system, lymphocytes participate in the maturation of the organism and its adaptation, from the maternal environment to the external environment rich in multiple pathogenic substances. Indeed, during the prenatal period, the foetal immune system profile remains naïve except for a small number of T helper (Th) lymphocytes that are polarised into a T helper type 2 (Th2) profile, a situation required to protect the foetus from rejection [4]. After birth, depending on the types of exposures to environmental stimulations that promote progressive immune system expansion and the differentiation of the other profiles, the future immune profile is determined [3,4]. Under these conditions, specific factors operate to activate naive Th cells and determine the polarisation of the Th subsets. Specific antigen recognition and local circulating cytokine factors initiate the process of differentiation and progressive development of the other sub-populations of Th cells. The maintenance and the amplification of the expression of specific Th cytokine genes that qualify each Th pattern are preserved and regulated by transcription factors. The key transcription factors for the cytokine signatures of Th1, Th2, Th17 and Treg subpopulations are, respectively, Tbet, Gata3, RORγ and Foxp3 [6]. Positive or negative interactions between these transcription factors have been shown, thus, giving the lead to one of the Th profiles or modulating the inflammatory response. For example, it was shown that T-bet is a key modulator of IL-23-driven colitogenic responses in the intestine [7]. Various transversal signalling pathways are also in direct connection with the promoter and the control locus regions of these transcription factors inducing the repression or the activation of gene expression as well as epigenetic modifications. Furthermore, studies have demonstrated epigenetic control of the expression of cytokines and key transcription factor genes involved in Th cells development [8][9][10]. In fact, these epigenetic regulations are also described as a bridge between the genotype and phenotype through changes influenced by the environment [11,12]. This refers to an adaptive response of cells towards various changes and events enounced during life (stress, specialisation and differentiation, dependency, etc.) 
[11,13], which happen through various processes such as DNA CpG islands methylation, histone modification, and small regulatory RNAs, as cellular responses to signals from the environment [14]. A relevant example that induced epigenetic changes in response to environmental signals is the differentiation of multipotent naive Th lymphocytes into distinct subpopulations [8,15]. In fact, during the perinatal period, epigenetic processes mediate massively temporary or permanent specific gene activation or repression depending on the environmental context and, therefore, preset the future Th profile [10,15]. Consequently, the exposition to abnormal environmental conditions may induce some modifications of the epigenetic profile and leads to a Th imbalance and a pathogenic immune profile observed during several inflammatory immune-related diseases [16,17].
Among the environmental factors, diet exposure considerably contributes to the modulation of the orientation of the immune profile and, more particularly, during the early postnatal period, which is a critical time window for epigenetic dysregulation. The literature has described the ability of nutrition, in the very early postnatal period, to induce epigenetic regulations leading to the beneficial or deleterious profile [18,19]. Some of these molecular mechanisms have been described. For instance, it has previously been determined that in mammalians, the one-carbon metabolic pathway depends mainly on the influence of the nutrient substrate (methyl donors) on the CpG islands methylation [20,21]. Other studies have pointed out a direct association between some micronutrients and CpG islands methylation linked to chronic diseases development [22,23]. However, the incidence of iron ingestion on the gene methylation processes has not yet been investigated.
In a previous study, we demonstrated that the early ingestion of microencapsulated ferric iron prevents microbiota dysbiosis and colitis in adult mice [1]. To further understand this effect, the current study aimed to provide evidence for the role of the immune system and, notably, its splenic Th profile modulation after iron ingestion during the perinatal period. Thus, the current study aimed to provide evidence whether the inhibition of colitis by early ingestion of this ferric iron is associated with modulation of (1) the spleen expression of transcription factors related to the Th profile and (2) the methylation on CpG islands present into promotor regions of these transcription factors. Investigations were carried out on mice supplemented from the juvenile period to adulthood with the microencapsulated ferric iron formulation and submitted to the Th1 model of inflammatory digestive pathologies at adulthood. This mice model is widely recognised for reproducing the immune and inflammatory reactions observed in humans. The impact of iron ingestion on the inflammatory reaction and on the genetic and epigenetic regulations of transcription factors involved in the immune orientation was investigated at adulthood.
Animals
Four-week-old male BALB/c mice (n = 54) were obtained from HARLAN Laboratories, Gannat (France). All animals were housed in stainless steel cages (4-5 mice/cage) under a controlled temperature (21 ± 1 °C) and 12 h light-dark cycles in compliance with the current legislation and recommendations. They had free access to food (A04, SAFE, Epinay sur Orge, France) and water throughout the study. Experimental protocols were approved by the Ethics Committee (No. CEEA116). Animal care, handling and experimentation complied with the EU guide for use of laboratory animals and the current French legislation.
Experimental Procedure
Three groups of 18 mice were used. They received ferric iron (75 or 150 mg/kg/day po - Lipofer®) or water daily during the 6 weeks. Each group was split into 2 sub-groups (n = 9 mice each). For each iron treatment, the first group was submitted to the TNBS-induced colitis during the last week of the experiment and the second group only received the vehicle (water in 50% ethanol). Weight variations of all animals were recorded during the procedure (Figure 1). At the end of the experiment, the mice were anaesthetised with sodium pentobarbital (60 mg/kg). Blood was drawn from the abdominal vein for serum immunoassays. Sera were collected by centrifugation (15 min, 7000 g, 4 °C) and stored at -20 °C until processed. Then, the mice were sacrificed and the colon and spleen tissues were harvested, snap-frozen in liquid nitrogen and stored at -80 °C until further determination.
Figure 1. The experimental design of the 6-week-long study (W1 to W6). Animals received 200 µL/day of either ferric iron (75 or 150 mg/kg/day po) or water during the 6 weeks. Animals received trinitrobenzene sulfonic acid (TNBS) (100 mg/kg) or its vehicle during week 6 and were kept for seven days before sacrifice.
Experimental Inflammation
Animals were fasted overnight prior to colitis induction to offer better contact with the colonic lumen, but were allowed free access to water. Briefly, mice were anaesthetised with a mixture of ketamine/xylazine (12.5 mg/kg) and instilled with trinitrobenzene sulfonic acid (TNBS) diluted in 50% of ethanol (v/v) (100 mg/kg-25 µL), as previously described [1]. The control mice were instilled with the vehicle, i.e., water and ethanol (50% v/v).
Macroscopic Lesions
After sacrifice, the colon was removed immediately and the severity of the colonic mucosal alteration was determined according to a modified scale by Wallace et al., [24]. Briefly, the determination of the inflammatory damages was based on the presence of mucosal hyperaemia and bowel wall thickening, the presence and extent of ulceration and necrosis, and the event of adhesions and diarrhoea. Scores were established from normal appearance (0) to severe damage (10).
Myeloperoxidase (MPO) Assay
MPO activity, a marker of neutrophils tissular infiltration, was measured in the pieces of the colon adjacent to the instillation point as previously described [1] and according to Bradley et al., [25]. Briefly, sample homogenisation was followed by protein isolation and the measurement of MPO activity. Human MPO from purified neutrophils was used as a standard. The absorbance was measured after 10 min of incubation with H2O2 and chloride at 450 nm. The total protein content was assessed from the supernatants based on Lowry's method (Bio Rad DC Protein Assay, Marnes-la-Coquette, France).
Splenic Th Transcription Factors Gene Expression
RNA extraction: The total RNAs were extracted using the RNeasy Mini plus Kit (Qiagen, Courtaboeuf, France). The spleen was homogenised in the manufacturer's RLT buffer using a Tissue Lyser apparatus (Qiagen, Courtaboeuf, France). Then, the RNA was automatically extracted from the homogenate using a Qiacube machine (Qiagen, Courtaboeuf, France). Concentrations were determined based on the NanoDrop technology (Hellma TrayCell) using a Biophotometer apparatus (Eppendorf, Montesson, France). Absorbance ratios at 260/280 nm and at 260/230 nm were used to quantify and assess the purity of the DNA samples.
Reverse transcription: RNA samples were converted to cDNA using the QuantiTect Reverse Transcription Kit (Qiagen, Courtaboeuf, France). Briefly, samples were incubated with a genomic DNA wipe-out buffer at 42 °C for the effective elimination of any genomic DNA. RNA samples were submitted to reverse transcription using a master mix prepared with the Quantiscript RT buffer and the RT Primer Mix (QuantiTect Reverse Transcription Kit, Qiagen, Courtaboeuf, France). Reverse transcription was activated 20 min at 42 °C and then inactivated 3 min at 95 °C.
Quantitative PCR (qPCR): Primers for Tbet, Gata3, Foxp3, RORγ and for the housekeeping gene Gapdh were designed using the Primer Express Software (PE-V2.0-Applied Biosystems, Illkirch, France) based on the RNA messenger transcripts published in the database of the NCBI GenBank (Table 1).
qPCR was performed on the ABI Prism 7300 sequence detector system (Applied Biosystems) using a Sybr Green PCR master mix. A total of 5 µL of each cDNA (20 ng/µL) was amplified in 20 µL of PCR mixture containing a 2X SYBR Green master mix and 300 nM of each primer. After the activation cycle at 95 °C for 3 min, forty amplification cycles at 95 °C for 3 sec and 61 °C for 30 sec were run. The absence of non-specific amplification was confirmed by the analysis of the melting curve. Quantification was based on the threshold cycle number (Ct). The relative quantification of gene expression was calculated by the comparative Ct method (2^−∆∆Ct) and was normalised to the Gapdh reference gene according to the method of Livak and collaborators [26], where ∆Ct = Ct(target gene) − Ct(reference gene), and ∆∆Ct = ∆Ct(treated condition) − ∆Ct(control condition). Table 1. The characteristics of the primers used for qPCR. Gapdh was used as the housekeeping gene; sequences are given from the 5' to the 3' end and were designed using Primer Express 3.
Gene     Sequence (5'→3')                           Accession number
Tbet     For: GAC CCC TTC TAC TTG CGT TTT TC        NM_008091
         Rev: ACA TTT TGC TTT CTG CCT TCA AA
RORγ     For: GCT CTG CCC CCA GTG ACA               NM_011281
         Rev: TGC AAC CTC AAG GAA GAG ATT G
Foxp3    For: CCT CTA GCA GTC CAC TTC ACC AA        NM_001199347
         Rev: TCA ATA CCT CTC TGC CAC TTT CG
Gapdh    For: CTG CCA AGT ATG ATG ACA TCA AGA       NM_008084
         Rev: GCC CAG GAT GCC CTT TAG T
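As a purely illustrative sketch of the comparative Ct calculation described above, the fold change of a target gene relative to the Gapdh reference and the control condition can be computed as follows. The Ct values in this example are hypothetical and are not data from this study.

```cpp
#include <cmath>
#include <cstdio>

// Relative expression by the Livak 2^(-ddCt) method.
// dCt = Ct(target) - Ct(reference); ddCt = dCt(treated) - dCt(control).
double relative_expression(double ct_target_treated, double ct_ref_treated,
                           double ct_target_control, double ct_ref_control) {
    double dct_treated = ct_target_treated - ct_ref_treated;
    double dct_control = ct_target_control - ct_ref_control;
    double ddct = dct_treated - dct_control;
    return std::pow(2.0, -ddct);  // fold change relative to the control condition
}

int main() {
    // Purely illustrative Ct values (not measured values).
    double fold = relative_expression(24.0, 18.0, 27.0, 18.0);
    std::printf("Fold change = %.1f\n", fold);  // 2^-((24-18)-(27-18)) = 2^3 = 8
    return 0;
}
```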
Methylation Status of CpG Islands
DNA extractions: DNA isolations from the spleens were performed using the QIAGEN Blood & Tissue Kit (Qiagen, France). Briefly, samples weighing less than 10 mg were placed in 2 mL microcentrifuge tubes with 180 µL of the manufacturer's ATL buffer and then homogenized in the Tissue Lyser. Twenty microlitres of proteinase K were added before incubation (3 h, 56 °C). Then all of the homogenates were automatically extracted using a Qiacube.
Methylation profile analysis: the evaluation of the methylation status of specific CpG islands of the Tbet and Gata3 promoters ( Figure 2) were realised with the help of the Methyl-Profiler™ DNA Methylation Enzyme Kit (Sabioscience, France) that contains all necessary components for the cleavage of methylated and unmethylated DNA according to the manufacturer's recommendations. This is a classical method based on the principle of amplification of different DNA regions corresponding to CpG island regions involved in the regulation of the initiation of genes transcription. These regions have been submitted before amplification to digestion by specific restriction enzymes that are or are not sensitive to the presence of methyl groups. Briefly, DNA was digested in four equal reactions: (1) Mock Digest (Mo) in which no enzyme was added to the reaction buffer. This condition is a negative background. (2) Methylation Sensitive Digest (Ms), revealing a single cleavage with a methylation-sensitive enzyme in order to digest unmethylated and partially methylated DNA.
(3) Methylation Dependent Digest (Md), revealing a single cleavage with a methylation-dependent enzyme aiming at preferentially digesting methylated DNA. The remaining unmethylated DNA was detected by qPCR. (4) Double Digest (Msd), in which the two enzymes were added in the double digest, and in which all DNA molecules (both methylated and unmethylated) were digested. This last condition corresponds to a positive background. Real-time PCR was performed according to the abovementioned conditions and a total volume of reaction of 27.5 µL was used. For 5 µL of each digest product, we added 1.1 µL of specific primers delivered with the kit, 13.5 µL of SYBR ® Green qPCR Master Mix (Qiagen, France), and adjusted the volume with RNase free water.
The comparison between the presence or absence of different amplification products allowed us to identify the region with the methyl groups. Indeed, in nonsensitive conditions, these regions were cut by the restriction enzyme nonsensitive to the methylation and were not amplified by qPCR, whereas they were not cut by the sensitive restriction enzyme and consequently were not amplified by qPCR.
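The comparison logic described above can be illustrated with a small calculation. The sketch below is only an illustration of the principle, not the kit's exact algorithm: the DNA fraction surviving each digest is estimated relative to the mock digest as 2^-(Ct_digest - Ct_mock); the fraction resisting the methylation-sensitive digest (Ms) is read as methylated DNA and the fraction resisting the methylation-dependent digest (Md) as unmethylated DNA. The function name and the Ct values are hypothetical.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative estimate of methylated/unmethylated fractions of a CpG region
// from the Ct values of the mock (Mo), methylation-sensitive (Ms) and
// methylation-dependent (Md) digests.
void methylation_fractions(double ct_mo, double ct_ms, double ct_md,
                           double* methylated, double* unmethylated) {
    // Fraction surviving a digest, relative to the mock digest: 2^-(Ct_digest - Ct_mock).
    *methylated   = std::pow(2.0, -(ct_ms - ct_mo));  // DNA resistant to the methylation-sensitive enzyme
    *unmethylated = std::pow(2.0, -(ct_md - ct_mo));  // DNA resistant to the methylation-dependent enzyme
}

int main() {
    double methylated = 0.0, unmethylated = 0.0;
    // Purely illustrative Ct values (not measured values).
    methylation_fractions(25.0, 27.7, 25.3, &methylated, &unmethylated);
    std::printf("methylated ~ %.0f%%, unmethylated ~ %.0f%%\n",
                100.0 * methylated, 100.0 * unmethylated);
    return 0;
}
```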
Statistical Analysis
The results are presented as Mean ± SEM or Tukey box-plot. Statistical analysis was performed using the Graph Pad PRISM Software (V5.0). Macroscopic damage and anaphylactic response scores were compared using the Mann-Whitney test for non-parametric data. For all of the other parameters studied, the data were submitted to an ANOVA test followed by the post-test of Tukey for unpaired data. A value of p < 0.05 was considered to be significant.
Phenotypic Parameters of Colitis After Microencapsulated Ferric Iron Supplementation
Colitic mice groups presented the general symptoms visualised during colitis [27]. Temporal evaluation of the corporal weight evolution showed significant weight loss consecutively to TNBS instillation (Table 2) associated with diarrhoea and moderate hyperaemic ulceration which are classical criteria used for the macroscopic assessment of lesions. MPO activity from colonic tissue was significantly increased (P < 0.001) compared to the controls, suggesting the increase of the inflammatory status (Table 2). Mice submitted to colitis and orally treated with ferric iron (Iron75-TNBS and Iron150-TNBS) exhibited a significant limitation of their pathologic symptoms at both the two doses of iron administered. This effect was more marked with a dose of 150 mg/kg/day. Weight gains were maintained even during the inflammatory periods. MPO activity and macroscopic damages were nearly normalised ( Table 2).
Beneficial Effect of Iron in the Pathologic Th1 Immune Orientation:
Examination of the Tbet and Gata3 gene expressions in the spleen, directly related to naive T-cells development toward the Th pathway, confirmed the selective orientation according to the immune profile. Mice that were submitted to TNBS (Veh-TNBS) showed a higher value of Tbet/Gata3 ratio in the spleen (158.9 ± 52.8 vs. 4.8 ± 1.2 in controls) (Figure 3). This enhanced ratio was due to a strong increase of Tbet expression and a non-significant slight decrease of Gata3 in the spleen (103.2 ± 16.9 vs. 3.6 ± 1.6 and 0.4 ± 0.2 vs. 0.7 ± 0.2, respectively, by comparison to controls) (Figure 4a,b). By contrast, this ratio was dose-dependently lowered in animals submitted to iron supplementation during the juvenile period and to the experimental colitis at adulthood (54.1 ± 14.3 and 7.3 ± 3.3 in 75 and 150 mg/kg/day po colitic mice respectively) (Figure 3). At the two doses of iron used, this lowered ratio comes mainly from a strong reduced Tbet gene expression in the spleen (Figure 4a). At 150 mg/kg/day, this was reinforced by the increased Gata3 gene expression in the spleen (Figure 4b). In non-colitic mice, ferric iron, whichever dose tested, did not change the levels of expression of Tbet or Gata3 as compared to vehicle-treated mice (Figure 3).
Examination of RORγ and Foxp3 gene expressions in the spleen was undertaken to evaluate the modulation of the immune orientation by the Th17 and Treg immune cells. Induction of TNBS colitis resulted in a significantly (p < 0.001 and p < 0.01 respectively) increased splenic gene expression of both transcription factors, RORγ and Foxp3 (4 ± 0.5 vs 1.1 ± 0.4 and 4.8 ± 0.1 vs 1.7 ± 0.4, respectively, by comparison to controls) (Figure 4c,d). Furthermore, these results were closely associated with the elevated Tbet profile observed. When compared to the veh-TNBS group, both groups of mice treated by the ferric iron supplementations (Iron75-TNBS and Iron150-TNBS) presented a significantly lower splenic gene expression of RORγ (0.5 ± 0.1 and 0.8 ± 0.2% in 75 and 150 mg/kg/day po colitic mice, respectively) and Foxp3 (1.7 ± 0.4 and 2 ± 0.5 in 75 and 150 mg/kg/day po colitic mice, respectively) (Figure 4c,d).
Evaluation of Locus Specific Methylation in Tbet and Gata3 Gene Promoters
In colitic mice (Veh-TNBS), we evidenced a significant (p < 0.001) hypomethylation of the Tbet gene promoter region (-18.7 ± 10.5 vs. 15.4 ± 1.2% in controls) (Figure 5a) associated with the non-significant hypermethylation of the Gata3 gene promoter region (Figure 5b) in the spleen. By contrast, we observed a significant (p < 0.01) limitation of the hypomethylation of the Tbet gene promoter region in Iron150-TNBS mice (15.7 ± 2.7%) (Figure 5a). In non-colitic animals, ferric microencapsulated iron did not change the methylation profile of either the Tbet (Figure 5a) or Gata3 (Figure 5b) promoter regions.
Figure 5. White columns: vehicle-treated mice; squared columns: ferric iron (150 mg/kg/day po) treated mice. TNBS instillation resulted in a significant reduction (p < 0.001) of the Tbet promoter CpG methylation status (a), which was reversed by iron treatment. By contrast, no modification of the methylation status of the promoter region of Gata3 (b) was observed, whichever treatment was considered.
Discussion
In addition to our first works on iron oral supplementation [1], the present study clearly illustrates the impact of daily oral iron ingestion during the early period of life on the splenic Th profile and colitis symptoms at adulthood. Other studies have described that the immune profile of adults is in direct correlation with its early programming [3][4][5]. Our results are in accordance with the view of an intimate relationship between T-helper development and epigenetic regulation [8][9][10][28]. The study was aimed at identifying the effect of oral ferric iron supplementation in a model of Th1-colitis on the immune profile by studying the splenic expression of Th transcription factors and the accessibility of their promoters.
In order to address this point, we first confirmed the pathological status of the animal model. Mice submitted to the TNBS developed colitis, as evidenced by the analysis of the macroscopic lesions and MPO activity that are classical inflammatory parameters assessed to confirm inflammatory reactions. In fact, similar reports have been previously described in the literature and are in agreement with an increase of the inflammatory response associated with the TNBS-induced colitis model [29,30]. Significant weight variations and the macroscopic damage score of Wallace are two relevant parameters that easily reveal the presence of the intestinal physiological perturbations [23]. MPO activity, which reflects the level of neutrophil infiltration, is also usually used for determining colitic severity. In the presence of ferric iron, we confirmed the protective and dose-dependent effect of both doses of ferric pyrophosphate (75 and 150 mg/kg/day po) on the moderate colitis.
We also analysed the incidence of iron supplementation on the modulation of the Th profile induced by a moderate TNBS colitis. Indeed, the literature has described that this experimental colitis response was being driven by a Th1 immune response (for review, see Reference [31]). In our experiment, seven days after inducing a moderate inflammatory response, the analysis of Tbet and Gata3 expression in the spleen confirmed the Th1 profile, but also the main role of Tbet in the colitis, as illustrated by the strong increase of the Tbet expression (Figure 4a). The high Tbet/Gata3 ratio also reflected the dominance of the Th1 lymphocytes polarisation in the colitic mice as already described [32,33]. Additionally, although the Tbet and Gata3 transcription factors remain the two major factors evaluated for the Th profile, the other subpopulations of the Th cells also play a non-negligible role in the determination of the immune profile [34][35][36]. Thus, a massive Th17 immune response stimulation had previously been described mostly during the inflammatory responses, particularly reinforcing the Th1 immune response [37,38]. It was shown that colitis increases the expression of RORγ and Foxp3 transcriptions factors [37,39]. Our results obtained with the TNBS group thus conform to the previous literature. However, these results were obtained from the whole splenic tissue. Splenic gene expressions of key transcription factors of Th differentiation (Tbet, Gata3, Foxp3 and RORγ) are a reflection of the Th profile, but Th cell differentiation is regulated by the complex transcriptional network [40]. Consequently, further experiments will be needed to confirm these results by the flow cytometry analysis of Th subsets and single-cell RNA sequencing.
We evaluated the consequences of a repeated low dose supplementation in the juvenile period of a ferric iron form used as a food additive. Under these conditions, we evidenced a remarkable effect of this formulation (ferric micro-encapsulated iron) on the modulation of intestinal inflammation associated with the re-equilibration of the splenic gene expression of key transcription factors of Th differentiation. Our results corroborate other studies evidencing the physiological role of iron in relation to some inflammatory processes [41,42]. A preliminary study (same experimental setup) aiming at assessing the iron vehicle did not show any difference between water and lecithin bead microcapsules without ferric pyrophosphate. Both groups presented the same weight loss, i.e., -24% and -23% from the original weight in TNBS-treated mice receiving either water or lecithin bead microcapsules without ferric pyrophosphate, respectively, versus no weight loss in the control mice (Control: 23.6 ± 0.5 at D0-W6 vs. 23.9 ± 0.6 at D7-W6; TNBS+water: 22.1 ± 0.4 at D0-W6 vs. 16.9 ± 0.6 at D7-W6; TNBS+vehicle: 23.1 ± 0.3 at D0-W6 vs. 17.8 ± 0.4 at D7-W6). Moreover, animals presented similar significant macroscopic lesions since we scored them at 5.8 ± 0.7 and 6 ± 0.1 in TNBS-treated mice receiving either water or lecithin bead microcapsules without ferric pyrophosphate, respectively, versus 0 in the control mice ( Figure S1). These results were not surprising since lecithin only counts for 0.12% of the total dry matter of the vehicle of Lipofer. Consequently, we evidenced that lecithin bead microcapsules alone do not have any effect on TNBS-induced colitis and, in turn, attributed all the effects to the iron form. The analysis of the transcription factors profile also showed the maintenance of the Th balance, notably due to a significant limitation of the substantial splenic expression of the Tbet gene and a non-significant slight activation of the splenic expression of the Gata3 gene. Mice treated by early ferric iron supplementation, before inducing the colitis at adulthood, presented a normalisation of the Th17 and the Treg profiles. One hypothesis is that the Treg profile could be dependent on iron supplementation and that it may be explained, according to the study by Zeng et al., by the relationship between iron absorption and the phenomenon of anergy [43]. One other important result is the absence of a per se effect of early ferric iron supplementation in the animals, which plays in favour of the use of this microencapsulated form of ferric iron. Our results contrast with previous studies evidencing a negative effect [44][45][46]. The microencapsulation of iron may explain such a difference. Indeed, its kinetics of release and availability will necessarily be different from the usual iron supplementation.
Our results show that colitic mice treated with iron exhibited the modulation of splenic Th transcription factors gene expression. Since the modulation of the Tbet and Gata3 expressions could be due to epigenetics, we chose to evaluate the incidence of daily oral iron ingestion on the methylation events of the promoter regions of Tbet and Gata3. In fact, it has already been established that some dietary components contribute to the epigenetic process modulating the immune system development [12] and condition and, at least in part, the establishment of intestinal homeostasis. From our results, chronic iron ingestion did not significantly affect the methylation level of the GATA3 promoter CpG island studied. This could be in concordance with the slight modification of GATA3 expression. However, as there are 3 putative CpG islands in the gene promoter, it will be interesting in the future to test methylation status of the remaining islands of this promoter. Another possibility would be to study the chromatin conformational modulation induced by histone (H3, H4) modifications as methylation or acetylation. Regarding the Tbet gene, after the induction of colitis, its promoter presented a decreased methylation (-18.7%) in the TNBS groups, supporting the hypothesis of an enhanced accessibility of the promoter for transcription factors and confirming the dominant expression of Tbet in this Th1 model observed above. Then, in mice supplemented with ferric iron, the epigenetic profile of the Tbet promoter was re-established since it presented a percentage of methylation comparable to one of the control groups (15.7% vs. 15.4%). One hypothesis, in adequation with others works that described the potential effect of particular nutrients modulating the epigenetic profile, could be that the iron may prevent the expansion of the Tbet gene expression observed in colitic mice [20]. Indeed, iron ingestion implicates a variation in the pool of free iron in the organism. Iron has the capacity to affect the intracellular redox state leading to a change in the activity of many enzymes including epigenetic enzymes. Additionally, as iron is a cofactor of many epigenetic enzymes, the variation of its concentration is able to modulate their activities [47]. Other hypotheses regarding the effects of chronic iron ingestion may be issued. Indeed, it was reported that iron is involved in the growth and function of immune cells and could modulate the Th1/Th2 ratio by a ROS-mediated mechanism [48][49][50], but contradictory results were found in different studies, possibly due to the form of iron [50,51].
The impact of the iron supplementation could also be indirect. In our previous study, we demonstrated that early ingestion of microencapsulated ferric iron prevents microbiota dysbiosis in adult mice [1]. Genetics, gut microbiota and the immune system are involved in the pathogenesis of IBD [52]. Intestinal microbiota plays a central role in the inflammation/tolerance balance by its influence on the Th profile, and gut microbiota dysbiosis could be a potential contributor to the inflammatory process [53,54]. Furthermore, it was shown that probiotics such as B. infantis could limit TNBS-induced colitis by inhibiting the Th1 and Th17 responses in mesenteric lymph nodes [55]. In our study, the inhibition of the Th1 and Th17 responses observed in the spleen could be the reflection of the modulation of the local intestinal Th response. We may hypothesise that the inflammation inhibition induced by iron is based on a double interconnected mechanism (gene expression and microbiota). Furthermore, it is known that innate immunity is also involved in the induction of inflammation. Here again, the impact of iron could be in two different ways, with the direct effect of iron on innate cells or the indirect effect by the modulation of microbiota. Finally, it was reported that iron could modulate gut barrier (epithelium and/or beneficial barrier commensal gut microbiota) and, thus, have an impact on colic inflammation, but this impact seems to be variable [56]. Confirmation of these hypotheses will require further investigations.
In conclusion, juvenile mice treated by daily oral ingestion of ferric iron presented a weaker alteration of the splenic gene expression of key transcription factors of Th differentiations in a Th1 colitis model associated with the inhibition of the Tbet promoter hypomethylation and the inhibition of adult colitis symptoms. If confirmed in other experimental models of inflammation, this supplementation could be of particular interest on the population prone to develop chronic inflammatory and autoimmune diseases during adulthood.
|
2019-08-03T13:03:22.867Z
|
2019-07-31T00:00:00.000
|
{
"year": 2019,
"sha1": "bc5643bafe9d9e134c93ae2397768b089bcb5480",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/11/8/1758/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f2a2d491149ffe2d4810ec2da5de4e21682e8860",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
225737681
|
pes2o/s2orc
|
v3-fos-license
|
METHOD OF FAST MATRIX MULTIPLICATION UNDER ARM ARCHITECTURE USING SIMD INSTRUCTIONS
and Eigen 3 libraries. Testing was done using the vipmed utility for running and measuring features developed for enterprise use at VIT. Conclusions. The proposed matrix multiplication method gives the expected acceleration of matrix multiplication operations, has passed evaluation test for use and meets the target requirements. For further work, it is necessary to study in more detail the influence of the cache at different levels and compare with other existing libraries.
Introduction
In everyday life, matrices are used much more widely than people tend to think. In fact, we face them every day.
Graphics software, such as Adobe Photoshop, uses image matrices for image processing. A square matrix can represent a linear transformation of a geometric object. Matrices and inverse matrices are used in programming to encode and encrypt messages. The message is generated as a sequence of numbers in binary format for communication, and coding theory is used to decode it.
Many IT companies also use matrices as data structures to track user information, perform search queries, and manage databases. In terms of information security, many systems have been designed to manage matrices. Matrix multiplication is widely used when working with neural networks. Matrices of neuron values are multiplied at the transition between the network layers [1].
Matrices are broadly used in physics, electrodynamics, electronics, radio engineering. Even a cursory survey of the bibliography on this subject reveals its huge volume. The theory of matrix methods is sufficiently developed, but the practical implementation of these methods has not exhausted its potential.
The aim of this research is to explain what factors affect matrix multiplication and how to use them to improve performance. First, we describe the main problems of implementing matrix multiplication effectively. Then we present some of the existing solutions and describe our own. Finally, we compare our implementation with the existing ones described above.
Leaving aside the application of the method described below in solving practical problems for the future, we will now turn to a detailed description of the method itself.
Problem statement
The aim of our research is to implement an efficient matrix multiplication method.
Overview of the existing solutions
The problem with matrix multiplication is that it carries the burden of performing a great number of operations. For example, let A be an M×K matrix with elements a_ik and B a K×N matrix with elements b_kj, and assume that the result is the M×N matrix C. The matrix multiplication formula is then c_ij = sum over k = 1..K of a_ik * b_kj [2], which amounts to roughly M·K·N multiply-add operations. It needs no elaboration that this cubic complexity is a bad property.
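For reference, here is a minimal sketch of the naive triple-loop implementation of this formula (plain C++, row-major storage; the function name and signature are illustrative and not taken from the article):

// Naive O(M*K*N) matrix multiplication: C = A * B.
// A is M x K, B is K x N, C is M x N; all stored row-major and densely packed.
void gemm_naive(const float* A, const float* B, float* C, int M, int K, int N) {
    for (int i = 0; i < M; ++i) {
        for (int j = 0; j < N; ++j) {
            float acc = 0.0f;
            for (int k = 0; k < K; ++k) {
                // Walking down a column of B is cache-unfriendly in row-major storage.
                acc += A[i * K + k] * B[k * N + j];
            }
            C[i * N + j] = acc;
        }
    }
}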
The existing open-source solutions are not efficient enough (this will be shown in the results).
There are many algorithms for fast matrix multiplication that reduce the complexity of the operation. The best known, and the one used in practice, is the Strassen algorithm, which reduces the complexity to O(n^(log_2 7)), approximately O(n^2.81) [3]. All the other algorithms are of mainly theoretical interest, so they are practically not used [4].
These algorithms are purely mathematical and do not take into account such an important point as the placement of the matrices in memory. They only reduce the number of multiplications, so in practical programming this is not enough.
Considering the chosen processor architecture, memory management is very important. To speed up this work, we use memory preloading into the cache. The automatic prefetcher of x86 processors, unlike ARM, works quite well, so it makes little sense to do this job manually there. On the ARM architecture, well-timed use of memory preloading can speed up the operation many-fold. However, if one makes a mistake, there is a significant drop in performance.
Therefore, we propose a new method of the matrix multiplication considering above information.
There are many libraries, including open source code, that implement matrix multiplication. In the specification of basic linear algebra subroutines (BLAS), this operation has a more enhanced interface and is called gemm. There are many implementations of this specification and most of them implement matrix multiplication using SIMD technology [5].
OpenCV is a library of software functions primarily focused on real-time computer vision algorithms. This library is a very powerful tool: it has many useful features, is cross-platform and implemented in several programming languages. It is distributed as BSD-licensed open source code software. OpenCV is not BLAS compatible, but implements a similar gemm function [6].
Eigen is a template library; it provides a simple and widely used C++98 template interface for matrix/vector operations and related algorithms. This library, like OpenCV, contains implementations that leverage vector operations for optimization. Importantly, there are some (more optimal) implementations for certain fixed sizes. The main feature of this library is that it is implemented entirely in headers, so one only needs to download these files to use it [7].
These libraries state that they have optimized algorithms for matrix operations, so we chose them for comparison with ours.
It is pointless to describe in detail how the matrix multiplication algorithm is implemented in the above libraries. The results of their work and the comparison with the proposed method are shown below. Their main disadvantage is insufficient performance, so the purpose of this article is to develop a faster implementation of matrix multiplication.
Description of the proposed method
The matrix multiplication of an M×K matrix A and a K×N matrix B results in an M×N matrix C. Each element of matrix C can be considered a scalar product of the corresponding row of matrix A and column of matrix B.
It is possible to implement the whole matrix multiplication using a primitive scalar product, but such an implementation would be far from efficient. In a scalar product, we load two elements for each multiply-add operation, so on modern processors this implementation is limited by memory or cache bandwidth instead of the computing power of the multiply-add units. Nevertheless, a minor modification, calculating dot products from several rows of A and several columns of B at a time, improves performance significantly.
The modified primitive takes MR rows of elements of A and NR columns of elements of B and performs the multiplications with accumulation into MR×NR accumulators. The number of registers and other details of the processor architecture limit the maximum MR and NR values, but on most modern systems they are large enough to make the operation compute-limited, and all high-performance implementations of matrix-matrix multiplication are built on this primitive micro-kernel, commonly called PDOT (panel dot product). In the method stated below, MR = NR = 4 was selected; such a small number is caused by the limitations of the chosen architecture.
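As an illustration, a minimal scalar sketch of such a 4×4 micro-kernel is given below (plain C++; the fixed block size 4 follows the choice described above, while the function name, the strides in elements, and all other details are illustrative):

// 4x4 PDOT micro-kernel: C[0..3][0..3] += A_panel * B_panel,
// where A_panel holds 4 rows of A (length K) and B_panel holds 4 columns of B.
void pdot_4x4(const float* A, int lda,   // lda: stride between rows of A, in elements
              const float* B, int ldb,   // ldb: stride between rows of B, in elements
              float* C, int ldc,         // ldc: stride between rows of C, in elements
              int K) {
    float acc[4][4] = {};                // 16 accumulators, ideally kept in registers
    for (int k = 0; k < K; ++k) {
        for (int i = 0; i < 4; ++i) {
            const float a = A[i * lda + k];
            for (int j = 0; j < 4; ++j) {
                acc[i][j] += a * B[k * ldb + j];   // each loaded element is reused 4 times
            }
        }
    }
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            C[i * ldc + j] += acc[i][j];
}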
This paper considers the matrix multiplication method for the ARMv7 architecture. It is 32-bit and has some limitations on the number of registers. With the exception of Armv6-M, Armv7-M, Armv8-M.baseline and Armv8-M.mainline based processors, there are 33 32-bit general-purpose registers, including the banked SP and LR registers. Fifteen general-purpose registers are visible at any time, depending on the current processor mode. These are R0-R12, SP and LR. PC (R15) is not considered a general-purpose register [8].
SP (or R13) is the stack pointer. C and C++ compilers always use SP as the stack pointer. ARM deprecates most uses of SP as a general-purpose register. In the T32 state, SP is strictly defined as a stack pointer. The official ARM documentation describes when SP and PC can be used.
In user mode, LR (or R14) is used as the link register to store the return address when a subroutine is called. It can also be used as a general-purpose register if the return address is stored on the stack.
In exception handling modes, the LR stores the return address for the exception or the return address of the subroutine, if subroutine calls are made within the exception. The LR can be used as a general-purpose register if the return address is stored in the stack.
From the above, it is clear that only 13 regular registers are available for general use without any restrictions.
The algorithm requires many more regular registers, since, as noted above, it moves over four rows of the first matrix and four rows of the second at once. Besides the matrix row pointers (4 for the first matrix + 4 for the second), pointers to the rows of the resulting matrix are needed, as well as registers to store the sizes and iterators for each of them. Since the matrices may actually be only parts of larger matrices, the notion of a step (stride) between rows is introduced. This step is defined in bytes and equals the actual width of the whole image multiplied by the size of one element. The caller, who understands the memory layout being operated on, should pass this data. These steps, one for each of the three matrices, also require registers. So, even without counting the extra registers that may be needed for the calculation itself, the required number already reaches twenty. Therefore, to keep all the necessary values, additional temporary memory has to be allocated. To use these data, they are loaded into free registers and stored back when the register has to be freed up (constant values, such as matrix sizes and row strides, do not need to be saved back each time).
The main practical problem in calculating the product of matrices is the inefficient traversal of the second matrix, because each element of the result matrix is the product of a row of the first matrix and a column of the second. Traversing a matrix by columns is quite inefficient in row-major layout (when each row is contiguous in memory). To solve this problem, the results are accumulated in a temporary buffer while moving linearly over the second matrix. With this approach, the number of passes over the second matrix increases: in fact, for each row of the first matrix, the second one is read out completely. However, non-sequential reading of the data slows the program down to such an extent that multiple linear readings of the matrix are still faster than a single non-sequential one. Considering that the traversal is performed over four rows of the first matrix at once, the number of passes over the second matrix is reduced by a factor of four (it is fully read out once per every four rows of the first matrix).
Of course, the disadvantage of this approach is the considerable amount of additional dedicated memory (4 × the width of the second matrix). The resulting calculation is thus divided into two parts: first, the results are accumulated in the temporary buffer, and only at the last iteration are they written to the resulting matrix. The last iteration is the calculation on the last rows of the second matrix. The size of this so-called matrix tail equals the remainder of dividing the height of the second matrix (or the width of the first, since they are equal) by the number of rows traversed within one iteration (in this case, four). If there is no remainder, one iteration fewer is performed in the main cycle and, when processing the last four rows, the result is written directly to the resulting matrix.
As a result, we have the following general algorithm for multiplying matrices A (M×K) and B (K×N) into matrix C (M×N) by blocks m×R and R×n, with a tail of size t (a simplified sketch of this traversal in code is given after the list):
1. Allocate the necessary additional memory (for variables and accumulation), initialize the data (including resetting the memory into which the result will be accumulated) and store it.
2. Read R elements from m rows of the first matrix.
3. Read n elements from R rows of the second matrix.
4. Read n elements from m rows of the temporary buffer with intermediate results.
5. Scalar-multiply the m×R and R×n blocks and accumulate onto the data read from the temporary buffer.
6. Write the intermediate results back to the temporary buffer.
7. Perform items 3-6 N/n times.
8. Upon traversing the entire width of the second matrix, move to the next R rows of that matrix and the next R elements of the first one. The temporary buffer pointers are moved back to the beginning and accumulation continues in it.
9. Perform items 2-8 (K/R - t) times.
10. At the tail iteration, read the last t elements from the m rows of the first matrix.
11. Read n elements from the last t rows of the second matrix.
12. Same as item 4.
13. Scalar-multiply the m×t and t×n blocks and accumulate onto the data read from the temporary buffer.
14. Reset the temporary buffer.
15. Write m rows of n elements into the resulting matrix.
16. Perform items 11-15 N/n times.
17. Move to the next m rows of the first and the resulting matrices.
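For readability, a compact sketch of this traversal is shown below, ignoring remainders, register allocation and preloading; strides are given in elements rather than bytes, the tail handling that merges the final pass with the write-out is simplified, and all names are illustrative:

#include <algorithm>
#include <vector>

// Blocked multiplication C = A * B with a temporary accumulation buffer.
// A is M x K, B is K x N, C is M x N; block sizes m = n = R = 4.
// Assumes M, N and K are multiples of the block sizes (no remainder handling).
void block_gemm(const float* A, int lda, const float* B, int ldb,
                float* C, int ldc, int M, int N, int K) {
    const int m = 4, n = 4, R = 4;
    std::vector<float> buf(static_cast<size_t>(m) * N);           // item 1
    for (int i = 0; i < M; i += m) {                               // item 17
        std::fill(buf.begin(), buf.end(), 0.0f);
        for (int k = 0; k < K; k += R) {                           // items 2, 8, 9
            for (int j = 0; j < N; j += n) {                       // items 3-7
                for (int kk = k; kk < k + R; ++kk)
                    for (int ii = 0; ii < m; ++ii)
                        for (int jj = 0; jj < n; ++jj)             // items 4-6
                            buf[ii * N + j + jj] +=
                                A[(i + ii) * lda + kk] * B[kk * ldb + j + jj];
            }
        }
        for (int ii = 0; ii < m; ++ii)                             // items 14-15
            for (int j = 0; j < N; ++j)
                C[(i + ii) * ldc + j] = buf[ii * N + j];
    }
}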
Implementation using SIMD instructions
In practical application, this algorithm is good enough. It is worth noting that sizes which are not exact multiples of the block sizes are not taken into account here; that is, the remainders have to be processed additionally, but we decided to describe the algorithm without such fairly obvious details.
Of course, fast computation requires more than just an efficient algorithm. One also has to look for optimization methods (both algorithmic and architecture-related). As algorithmic optimization, one can point to the items related to tail processing. In a simple version of the algorithm, no separate items are dedicated to this: first, the result is accumulated completely in the temporary buffer (i.e., items 2-8 are performed K/R times), then the result is copied into matrix C, and the last step is resetting the temporary buffer. In the algorithm above, all these operations happen in one pass.
As to lower-level optimization, one should start with vector instructions. Such instructions perform an operation on several values held in vector registers at once.
SIMD is a class of parallel programming, which is based on such operations. Most modern processors are designed to support SIMD instructions to enhance performance. This class is particularly popular in signal processing, where, as a rule, a large number of identical data is processed with similar operations. SIMD also allows processing several similar data types with the same instruction (as indicated in the name -Single instruction, multiple data; which is rendered as: one instruction for lots of data).
In ARM architecture processors, SIMD is represented as NEON (Advanced SIMD) extension. The Registry Bank of this extension is a collection of registers that can be accessed both as 64-bit and 128-bit vector registers. Advanced SIMD and VFP (floating-point values operations) use the same registers and differ from the main ARM register bank.
128-bit registers are called Q-registers, and 64-bit ones D-registers. Each Q-register corresponds to two D-registers; they overlap. The mapping between the registers is as follows: D2n maps to the least significant half of Qn, and D2n+1 maps to the most significant half of Qn.
For example, one can access the least significant half of the vector elements in Q6 by referring to D12, and the most significant half by referring to D13. Therefore, in general, the register bank can be viewed as: sixteen 128-bit registers Q0-Q15; thirty-two 64-bit registers D0-D31; or a combination of D and Q registers. The SIMD extension treats each register as containing 1, 2, 4, 8, or 16 elements of the same size and type (the number depends on the register size and the element size, respectively). Individual elements can also be accessed as scalars.
Let us consider using this technique in the proposed algorithm.
We omit the moments with reading and writing, we assume that the matrix elements have already been read into the vector registers and will be written from them.
Let us have a more detailed look at item 5, using the example of the 32-bit float type. In the developed method, for m = 4 and n = 4, R is also taken as four, due to the limited number of registers. Thus, 4×4 elements of the first matrix have been read in item 2. This corresponds to 64 bytes, so four Q registers are required. In items 3 and 4, the same number of Q registers was read out from the second matrix and from the temporary buffer, respectively.
According to the matrix multiplication algorithm, it is necessary to multiply the first row of the second matrix by the first element of each row of the first one, the second row by every second element, and so on. Moreover, the products obtained from the i-th row of the first matrix and the j-th column of the second matrix correspond to the element (i, j) of the temporary buffer.
Given the specifics of the actions described, a vector-to-vector operation is not appropriate, and as stated above, NEON allows getting access to an individual element. Therefore, some instructions allow performing vector-scalar operations. VMLA is one such instruction [9].
VMLA (Vector Multiply Accumulate) multiplies the corresponding elements of two vectors and adds the products to the corresponding elements of the result register. In the vector-scalar case, each element of the vector is multiplied by a scalar. The general syntax of the instruction is VMLA{cond}.datatype Qd, Qn, Dm[x]. Let us consider each element of this structure: {cond} is an optional parameter; NEON allows conditional execution of instructions, and cond specifies the condition under which the instruction will be executed. datatype is the vector register element type, in our case F32, denoting a 32-bit floating-point number. Qd, Qn, Qm are the registers of the operation; conceptually, Qd += Qn * Qm is performed. Dm[x] is a scalar: for the vector-scalar operation it is important that the Q-registers are used as vectors, while the scalar is taken from a D-register, and x is the index of the desired element of the vector Dm. Therefore, we assume that vectors Q4-Q7 were read from the first matrix, vectors Q8-Q11 from the second matrix, and Q0-Q3 from the temporary buffer. The set of commands required to get the result then accumulates, into the temporary buffer, each row of the second matrix multiplied by the corresponding element of each row of the first matrix; for example, for the row held in Q11:
VMLA.F32 Q0, Q11, D9[1]
VMLA.F32 Q1, Q11, D11[1]
VMLA.F32 Q2, Q11, D13[1]
VMLA.F32 Q3, Q11, D15[1]
The remaining rows of the second matrix (Q8-Q10) are handled analogously, so we obtain 16 vector VMLA operations in total. A similar solution by the conventional method would require 64 multiplication and the same number of addition operations.
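For readers who prefer C over assembly, the same kind of accumulation can be expressed with NEON intrinsics roughly as follows (a sketch using the standard arm_neon.h API; register allocation is then left to the compiler, and all function and variable names are illustrative):

#include <arm_neon.h>

// One pass of item 5 for a 4x4 block: acc[i] += (row k of B) * A[i][k] for k = 0..3,
// where a0..a3 hold the four rows of A and b0..b3 the four rows of B.
// vmlaq_lane_f32(acc, v, d, lane) computes acc + v * d[lane], i.e. the
// VMLA.F32 Qd, Qn, Dm[x] form discussed above.
static inline void block_4x4_fma(float32x4_t a0, float32x4_t a1,
                                 float32x4_t a2, float32x4_t a3,
                                 float32x4_t b0, float32x4_t b1,
                                 float32x4_t b2, float32x4_t b3,
                                 float32x4_t acc[4]) {
    acc[0] = vmlaq_lane_f32(acc[0], b0, vget_low_f32(a0), 0);
    acc[0] = vmlaq_lane_f32(acc[0], b1, vget_low_f32(a0), 1);
    acc[0] = vmlaq_lane_f32(acc[0], b2, vget_high_f32(a0), 0);
    acc[0] = vmlaq_lane_f32(acc[0], b3, vget_high_f32(a0), 1);
    acc[1] = vmlaq_lane_f32(acc[1], b0, vget_low_f32(a1), 0);
    acc[1] = vmlaq_lane_f32(acc[1], b1, vget_low_f32(a1), 1);
    acc[1] = vmlaq_lane_f32(acc[1], b2, vget_high_f32(a1), 0);
    acc[1] = vmlaq_lane_f32(acc[1], b3, vget_high_f32(a1), 1);
    acc[2] = vmlaq_lane_f32(acc[2], b0, vget_low_f32(a2), 0);
    acc[2] = vmlaq_lane_f32(acc[2], b1, vget_low_f32(a2), 1);
    acc[2] = vmlaq_lane_f32(acc[2], b2, vget_high_f32(a2), 0);
    acc[2] = vmlaq_lane_f32(acc[2], b3, vget_high_f32(a2), 1);
    acc[3] = vmlaq_lane_f32(acc[3], b0, vget_low_f32(a3), 0);
    acc[3] = vmlaq_lane_f32(acc[3], b1, vget_low_f32(a3), 1);
    acc[3] = vmlaq_lane_f32(acc[3], b2, vget_high_f32(a3), 0);
    acc[3] = vmlaq_lane_f32(acc[3], b3, vget_high_f32(a3), 1);
}
// These 16 intrinsic calls correspond to the 16 VMLA instructions mentioned above.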
Another way to optimize is to read and write the temporary buffer linearly. In the simple version, for convenience, the temporary buffer corresponds to four rows of the resulting matrix. Given that this buffer is only a temporary one, it is possible to read and write it linearly. At first glance this is just an elementary change, but when it comes to vector registers, and when the further processing of remainders is considered, it in fact leads to serious complications. These details will not be described in this publication. The main thing is to understand correctly where the corresponding elements for accumulation are located, and which registers the sums should be written into and where. It should be noted here that NEON allows reading/writing several vector registers efficiently with one instruction. However, there are two limitations: one instruction can write at most two Q-registers at a time, and the registers must be consecutive. That is, reading/writing the Q1, Q3 registers with one instruction is impossible. These limitations are one of the reasons for the difficulties in switching from reading the temporary buffer by four rows to reading it sequentially.
Given that the second matrix is traversed a large number of times, the overall execution time is greatly affected by the padding size. This is the value obtained by subtracting the matrix width from the step (stride) between rows; briefly, it is the size of the region of the whole matrix that does not take part in the multiplication (when the multiplication is performed on a part of a larger matrix). As the ratio between the width on which the multiplication is performed and the width on which it is not performed gets worse, execution speed drops. Sometimes the degradation is such that it is quicker to copy the sub-matrix into new memory, where the padding is removed, and to perform the operation without it. For these reasons one more optimization appeared: from the above data, as well as from the height of the first matrix (the number of full read-outs of the second matrix depends on it), a conversion factor is calculated which determines whether it is advantageous to first make a copy into extra memory and only then multiply the matrices on that memory.
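A minimal sketch of such a copy step, packing a padded sub-matrix into contiguous memory before multiplying, could look like this (plain C++; the decision heuristic and its thresholds are purely illustrative and are not the factor used in the library):

#include <cstring>
#include <vector>

// Copy a rows x cols sub-matrix with a row stride of 'stride' elements
// into a contiguous buffer, removing the padding between rows.
std::vector<float> pack_contiguous(const float* src, int rows, int cols, int stride) {
    std::vector<float> dst(static_cast<size_t>(rows) * cols);
    for (int r = 0; r < rows; ++r)
        std::memcpy(dst.data() + static_cast<size_t>(r) * cols,
                    src + static_cast<size_t>(r) * stride,
                    sizeof(float) * static_cast<size_t>(cols));
    return dst;
}

// Illustrative heuristic: pack when the padding is large relative to the useful
// width and the matrix will be re-read many times (reuse_count grows with M).
bool worth_packing(int cols, int stride, int reuse_count) {
    const int padding = stride - cols;
    return padding > cols && reuse_count > 4;   // thresholds chosen arbitrarily
}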
For the ARM architecture, the proper placement of memory preloads is very important. As practice shows, x86 architectures have a good automatic prefetcher, unlike ARM. In this processor family, correct and timely memory preloading can result in a huge acceleration; on the contrary, if the programmer makes a mistake, the performance can drop significantly. The official documentation does not give any flexible advice; it only recommends preloading with a 128-byte offset.
The PLD instruction performs a 64-byte preload of the memory at the given pointer plus some preset offset. As mentioned above, the official documentation recommends an offset of 128 bytes ahead. However, practical use shows that this is only a minimum of the real capabilities of this instruction. In different situations, acceleration is produced by single preloads before the start of the main cycle, or by preloads inside the cycle itself (not always with a 128-byte offset, and sometimes more than one is needed). The PLD instruction preloads 64 bytes at once, while the design above advances by only 16 bytes per pass, so issuing the preload at every iteration would be redundant. The simplest solution to this problem is to unroll the loop to 64 bytes: one such iteration then simply contains four blocks of operations of 16 bytes each, but this approach makes it possible to perform the preloading much more efficiently.
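In C, the same idea can be sketched with the compiler's prefetch builtin (__builtin_prefetch in GCC/Clang, which typically lowers to PLD on ARMv7); the unroll factor and prefetch distance below are illustrative, not the tuned values from the article:

// Loop unrolled by 64 bytes (16 floats): one prefetch per unrolled iteration
// instead of one per 16-byte step.
void accumulate_row(const float* b_row, float* buf, int n) {
    for (int j = 0; j + 16 <= n; j += 16) {
        // Prefetch 128 bytes (32 floats) ahead, for reading, with low temporal locality.
        __builtin_prefetch(b_row + j + 32, 0, 0);
        for (int u = 0; u < 16; ++u)        // four 16-byte (4-float) blocks
            buf[j + u] += b_row[j + u];
    }
}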
Finally, it can be noted that, due to the different behaviour when traversing the matrices (the first, the second and the temporary buffer), their preload schemes also differ.
Similarly, not only matrix multiplication on floating-point numbers is implemented, but an 8-bit integer version with scaling as well. Multiplication of a matrix by a transposed matrix and by a list of vectors is also implemented (there is a single actual implementation: the transposed matrix is passed as a list of vectors). This variant allows reading both matrices sequentially without an additional memory buffer and using the vector-vector form of the VMLA instruction.
The results of the comparison of the proposed method with others
The OpenCV and Eigen libraries described above were selected for the comparison. The matrix multiplication functions in all libraries (including the developed one) return the same result, so we assume that it is accurate. The data was verified by the vipmed application, designed specifically to test and verify image and matrix processing functions. This software allows running various mathematical functions of different libraries with the necessary parameters, comparing their results, and measuring the execution time to microsecond accuracy using the C library function clock.
All measurements were made on a NVIDIA Tegra K1 chip with ARM Cortex-A15 processors that support ARMv7 and NEON.
For illustration purposes, the tables show the measured run times and, in the last columns, the run time of the existing OpenCV and Eigen methods as a percentage of the one developed for the Vipm library.
Table 1 represents multiplying an N×N matrix by an N×N matrix using 8-bit unsigned integers without padding.
Table 2 represents multiplying an N×N matrix by an N×N matrix using 8-bit unsigned integers with 4,000 bytes of padding:
N     Vipm    OpenCV   Eigen    OpenCV, %   Eigen, %
1000  216     2341     1067     1085        494
2000  2336    18709    8470     801         363
4000  20957   151314   67651    722         323
Table 3 represents multiplying an N×N matrix by an N×N matrix using 32-bit floating-point numbers without padding:
N     Vipm    OpenCV   Eigen    OpenCV, %   Eigen, %
-     248     2302     318      926         128
2000  2160    18563    2475     860         115
4000  17938   150348   19131    838         107
Table 4 represents multiplying an N×N matrix by an N×N matrix using 32-bit floating-point numbers with 4,000 bytes of padding.
Table 5 represents multiplying an N×N matrix by a transposed N×N matrix using 8-bit unsigned integers without padding:
N     Vipm    OpenCV   Eigen    OpenCV, %   Eigen, %
1000  246     2154     1069     877         435
2000  2035    17105    8473     841         416
4000  16002   136829   67651    855         423
Table 6 represents multiplying an N×N matrix by a transposed N×N matrix using 8-bit unsigned integers with 4,000 bytes of padding:
N     Vipm    OpenCV   Eigen    OpenCV, %   Eigen, %
1000  247     2181     1069     885         434
2000  2044    17106    8475     837         415
4000  16405   136841   67651    834         412
Table 7 represents multiplying an N×N matrix by a transposed N×N matrix using 32-bit floating-point numbers without padding.
Table 8 represents multiplying an N×N matrix by a transposed N×N matrix using 32-bit floating-point numbers with 4,000 bytes of padding.
The results show that OpenCV behaves in about the same way in all cases (the time only increases with matrix size, which is quite logical), while the floating-point variants are implemented much more efficiently in Eigen. The developed algorithm is also executed at about the same rate in all conditions, but generally runs much faster than OpenCV, sometimes ten-fold. Eigen is also much more efficient than OpenCV, but on 8-bit unsigned numbers it is still significantly inferior to the developed algorithm. It can be assumed that this library has no direct implementation for this type of values, so the operation is performed through additional conversions to floating-point numbers, which causes a delay. When using floating-point numbers, the developed algorithm is still faster for the given sizes; for larger sizes, the efficiency of the Eigen library approaches that of the developed algorithm.
Testing was performed on various sizes and types using vipmed software (designed for in-house use by VIT).
Conclusions
The proposed matrix multiplication method gives the expected speed of matrix multiplication operations (not slower than the existing analogues) and has passed evaluation testing for use.
The efficiency of the method is tested by practical application.
The method is used in the corporate library of one of the leading companies in Ukraine.
The article can help the reader understand how prefetchers and caches work and how to use them in operations such as matrix multiplication.
This algorithm does not take into account that the block sizes should depend on the cache size. Therefore, the speed on machines with a different cache size can be unpredictable.
For future work, we need to explore more existing matrix multiplication libraries. This will help us understand in which direction to move next.
|
2020-07-09T09:09:22.097Z
|
2020-06-09T00:00:00.000
|
{
"year": 2020,
"sha1": "28d7c37a03e3a1ea96629fff947e32575cb0655f",
"oa_license": "CCBY",
"oa_url": "http://scinews.kpi.ua/article/download/205115/pdf_60",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "39fc6c3849fdf337855f140e0c900dbb0443ef69",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
215737126
|
pes2o/s2orc
|
v3-fos-license
|
Two asymptotic distributions related to Rényi-type continued fraction expansions
We attempt to investigate a two-dimensional Gauss-Kuzmin theorem for Rényi-type continued fraction expansions. More precisely, our focus is to obtain specific lower and upper bounds for the error term considered, which imply the convergence rate of the distribution function involved to its limit. To achieve our goal, we exploit the significant properties of the Perron-Frobenius operator of the Rényi-type map under its invariant measure on the Banach space of functions of bounded variation. Finally, we give some numerical calculations to conclude the paper.
Introduction
The subject of Rényi-type continued fractions is linked with u-backward continued fractions studied by Gröchenig and Haas [2].
As is known, in 1957 Rényi [7] showed that every irrational number x ∈ [0, 1) has an infinite continued fraction expansion of the form
x = 1 − 1/(n_1 − 1/(n_2 − 1/(n_3 − · · · ))),   (1.1)
where each n_i is an integer greater than 1. The expansion in (1.1) is called the backward continued fraction expansion of x. The underlying dynamical system is the Rényi map R defined from [0, 1) to [0, 1) by R(x) = 1/(1 − x) − ⌊1/(1 − x)⌋, where ⌊·⌋ stands for the integer part. Rényi showed that the infinite measure dx/x is invariant for R. This map does not possess a finite absolutely continuous invariant measure, and the usual trick to investigate its thermodynamic formalism does not work. Starting from the expansion (1.1) and the Rényi transformation R, Gröchenig and Haas [2] define the family of maps T_u, where the integers n_i = 1 + a_i are ≥ 2 and the coefficient of n_i is 1 or u, depending on the parity of i. In the particular case u = 1/N, for a positive integer N ≥ 2, they have identified a finite absolutely continuous invariant measure for T_u, namely dx/(x + N − 1). For u = 1/N, where N ≥ 2 is an integer, we will call T_u the Rényi-type continued fraction transformation and denote it by R_N. The present paper continues and completes our series of papers dedicated to Rényi-type continued fraction expansions [3,4,5].
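For orientation, normalising the stated invariant density 1/(x + N − 1) over [0, 1] gives the invariant probability measure below (this is a short computation we add for the reader, under the assumption that ρ_N used later denotes this normalised measure):
\[
\int_0^1 \frac{dx}{x+N-1} \;=\; \log\frac{N}{N-1},
\qquad\text{so}\qquad
\rho_N(A) \;=\; \frac{1}{\log\frac{N}{N-1}} \int_A \frac{dx}{x+N-1}
\quad\text{for Borel sets } A \subseteq [0,1].
\]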
In [3] we started an approach to the metrical theory of the Reńyi-type continued fraction expansions via dependence with complete connections. We obtained a version of the Gauss-Kuzmin theorem for these expansions by applying the theory of random systems with complete connections, due to Iosifescu [1]. Briefly, we showed that the associated random systems with complete connections are with contraction and their transition operators are regular with respect to the Banach space of Lipschitz functions.
In [5], using a Wirsing-type approach [9], we obtained upper and lower bounds of the error which provide a refined estimate of the convergence rate. For example, in the case N = 100, the upper and lower bounds of the convergence rate are respectively O(w_100^n) and O(v_100^n) as n → ∞, with v_100 > 0.00503350150708559 and w_100 < 0.00503358526129032. The strategy in that paper was to restrict the domain of the Perron-Frobenius operator of R_N under its invariant measure ρ_N to the Banach space of functions which have a continuous derivative on (0, 1).
Recently, in [4] using the method of Szüsz [8], we obtained more information on the convergence rate involved. The main novelty was the explicit expression in terms of Hurwitz zeta functions on η N that appears in [4,Theorem 3.1]. Finally, to enable direct comparisons of the results obtained in the last two methods (Wirsing and Szüsz) we give upper and lower bounds of η N for N = 100: 0.00505050495049505 < η N < 0.0050753806723955975.
The aim of this paper is to contribute a solution to two-dimensional Gauss-Kuzmin theorem for Reńyi-type continued fraction expansions.
The framework of this paper is arranged as follows. In Section 2 we gather prerequisites needed to prove our results in Section 3 and 4. In Section 3 we treat the Perron-Frobenius operator of R N under its invariant measure on the Banach space of functions of bounded variation and study the significant properties of this operator. Section 4 is devoted to the two-dimensional Gauss-Kuzmin theorem concerning the natural extension of corresponding interval maps R N , N ≥ 2. Here the specific lower and upper bounds for the error term considered are approached via the characteristic properties of the associated transfer operator in Section 3. Finally, we give some remarks and numerical calculations to conclude the paper.
Prerequisites
In this section we briefly present known results about Rényi-type continued fractions (see e.g. [3]).
Extended random variables
Define the projection E : Remark that a l (x, y) in (2.17) is also well-defined for l ≤ 0 because R N is invertible. By (2.12) and (2.15) we have a n (x, y) = a n (x), a 0 (x, y) = a 1 (y), a −n (x, y) = a n+1 (y), (2.18) for any n ∈ N + and (x, y) ∈ [0, 1] 2 . Since the measure ρ N is preserved by R N , the doubly infinite sequence (a l (x, y)) l∈Z is strictly stationary (i.e., its distribution is invariant under a shift of the indices) under ρ N . The stochastic property of (a l (x, y)) l∈Z follows from the fact that .
The strict stationarity of (a l ) l∈Z , under ρ N implies that Let a n 's be as in (2.4). For any t ∈ [0, 1] put (2.24) Note that by the very definition of s t N,n , we have These facts lead us to the random system with complete connections [3] for any s ∈ I, where E ρ t N stands for the mean-value operator with respect to the probability measure ρ t N , whatever t ∈ [0, 1], and U N is the Perron-Frobenius operator of ([0, 1], B [0,1] , ρ N , R N ) defined as in (3.1).
Note that for any t ∈ [0, 1] and n ∈ N + we have ρ t N (A|a 1 , . . . , a n ) = ρ L 1 ([0, 1], ρ N ) such that the following holds [3]: where P i N and u i N are as in (2.21) where the supremum being taken over y 1 < · · · < y k , y i ∈ A, i = 1, . . . , k and k ≥ 2. We write simply varf for var and where the constant K N is as in (3.3) and because we took into account that Proof. Note that for any f ∈ BV ([0, 1]) and u ∈ [0, 1], since which leads to (3.6). Next, (3.7) follows from (3.9) and (3.6).
A two-dimensional Gauss-Kuzmin theorem
In this section we shall deliver an estimate of the error term below for any t ∈ [0, 1], x, y ∈ [0, 1] and n ∈ N + .
In which provide an estimate of the convergence rate involved. First, we obtain a lower bound for the error, which suggests the convergence rate of ρ t N s t N,n ∈ [0, y] to ρ N ([0, y]) as n → ∞ for all t ∈ [0, 1]. Theorem 4.1. For any t ∈ [0, 1] and n ∈ N + we have Proof. First, the continuity of the function y → ρ N ([0, y]), y ∈ [0, 1], and the equations lim hց0 ρ t N s t N,n ≤ y − h = ρ t N s t N,n < y and lim for all t ∈ [0, 1] and n ∈ N. Second, whatever s ∈ [0, 1] we have .
It is easy to see that P N (n) N (·) is a decreasing function. Therefore for all t ∈ [0, 1].
By the recurrence relation (2.7) with a n = i n for all n ∈ N, we obtain It should be noted that Theorem 4.2 in connection with the limit In what follows we exploit the characteristic properties of the transition operator associated with the random system with complete connections underlying Rényi-type continued fraction. By restricting this operator to the Banach space of functions of bounded variation on [0, 1], we derive an explicit upper bound for the supremum (4.1).
for all n ∈ N, where K N is as in (3.3). Hence where K N is as in (3.3), t, x, y ∈ [0, 1], n ∈ N.
Combining Theorem 4.2 with Theorem 4.4 we obtain Theorem 4.5. Actually, Theorem 4.5 implies that the convergence rate is O(α n ), with .
For example, we have
|
2020-04-13T01:00:25.155Z
|
2020-04-10T00:00:00.000
|
{
"year": 2022,
"sha1": "aaa9e1fb4dfa609f4a5f91fa8845436f1528bdf7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "aaa9e1fb4dfa609f4a5f91fa8845436f1528bdf7",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
}
|
67764261
|
pes2o/s2orc
|
v3-fos-license
|
A spectrally accurate direct solution technique for frequency-domain scattering problems with variable media
This paper presents a direct solution technique for the scattering of time-harmonic waves from a bounded region of the plane in which the wavenumber varies smoothly in space. The method constructs the interior Dirichlet-to-Neumann (DtN) map for the bounded region via bottom-up recursive merges of (discretizations of) certain boundary operators on a quadtree of boxes. These operators take the form of impedance-to-impedance (ItI) maps. Since ItI maps are unitary, this formulation is inherently numerically stable, and is immune to problems of artificial internal resonances. The ItI maps on the smallest (leaf) boxes are built by spectral collocation on tensor-product grids of Chebyshev nodes. At the top level the DtN map is recovered from the ItI map and coupled to a boundary integral formulation of the free space exterior problem, to give a provably second kind equation. Numerical results indicate that the scheme can solve challenging problems 70 wavelengths on a side to 9-digit accuracy with 4 million unknowns, in under 5 minutes on a desktop workstation. Each additional solve corresponding to a different incident wave (right-hand side) then requires only 0.04 seconds.
INTRODUCTION
1.1. Problem formulation. Consider time-harmonic waves propagating in a medium where the wave speed varies smoothly, but is constant outside of a bounded domain Ω ⊂ R^2. This manuscript presents a technique for numerically solving the scattering problem in such a medium. Specifically, we seek to compute the scattered wave u^s that results when a given incident wave u^i (which satisfies the free space Helmholtz equation) impinges upon the region with variable wave speed, as in Figure 1. Mathematically, the scattered field u^s satisfies the variable coefficient Helmholtz equation
(1)  ∆u^s(x) + κ^2 (1 − b(x)) u^s(x) = κ^2 b(x) u^i(x),   x ∈ R^2,
and the outgoing Sommerfeld radiation condition
(2)  ∂u^s/∂r − iκ u^s = o(r^{−1/2}),   r := |x| → ∞,
uniformly in angle. The real number κ in (1) and (2) is the free space wavenumber (or frequency), and the so-called "scattering potential" b = b(x) is a given smooth function that specifies, via (3), how the wave speed (phase velocity) v(x) at the point x ∈ R^2 deviates from the free space wave speed v_free. One may interpret √(1 − b) as a spatially-varying refractive index. Observe that b is identically zero outside Ω. Together, equations (1) and (2) completely specify the problem. When 1 − b(x) is real and positive for all x, the problem is known to have a unique solution for each positive κ [10, Thm. 8.7].
The transmission problem (1)-(2), and its generalizations, have applications in acoustics, electromagnetics, optics, and quantum mechanics. Some specific applications include underwater acoustics [3], ultrasound and microwave tomography [14,35], wave propagation in metamaterials and photonic crystals, and seismology [36]. The solution technique in this paper is high-order accurate, robust and computationally highly efficient. It is based on a direct (as opposed to iterative) solver, and thus is particularly effective when the response of a given potential b to multiple incident waves u^i is desired, as arises in optical device characterization, or computing radar scattering cross-sections. The complexity of the method is O(N^{3/2}), where N is the number of discretization points in Ω. Additional solves with the same scattering potential b and wavenumber κ require only O(N) operations. (Further reductions in asymptotic complexity can sometimes be attained; see section 4.2.) For simplicity of presentation, the solution technique is presented in R^2; however, the method can be directly extended to R^3.
Figure 1: b is a given, smooth, compactly supported "scattering potential." An incident field u^i hits the scattering potential and induces the scattered field u^s. The dashed line marks the artificial domain Ω which encloses the support of the scattering potential.
Remark 1.1. Equation (1) is derived by requiring that the total field u = u^s + u^i satisfy the variable coefficient Helmholtz equation
(4)  ∆u(x) + κ^2 (1 − b(x)) u(x) = 0,   x ∈ R^2.
Plugging the condition that the incident field u^i satisfies the free space equation (∆ + κ^2)u^i = 0 inside Ω, and the definition of the scattering potential (3), into (4) results in (1).
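For completeness, the algebra behind this remark can be written out as follows (our restatement in the notation above):
\[
\Delta u^s + \kappa^2(1-b)\,u^s
= \bigl[\Delta u + \kappa^2(1-b)\,u\bigr] - \bigl[\Delta u^i + \kappa^2 u^i\bigr] + \kappa^2 b\, u^i
= \kappa^2 b\, u^i ,
\]
since the first bracket vanishes by (4) and the second vanishes because u^i solves the free space equation; this is exactly (1).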
Outline of proposed method.
We solve (1)-(2) by splitting the problem into two parts, namely a variable-coefficient problem on the bounded domain Ω, and a constant coefficient problem on the exterior domain Ω^c := R^2 \ Ω. For each of the two domains, a solution operator in the form of a Dirichlet-to-Neumann (DtN) map on the boundary ∂Ω is constructed. These operators are then "glued" together on ∂Ω to form a solution operator for the full problem. The end result is a discretized boundary integral operator that takes as input the restriction of the incoming field u^i (and its normal derivative) to ∂Ω, and constructs the restriction of the scattered field u^s on ∂Ω (and its normal derivative). Once these quantities are known, the total field can rapidly be computed at any point x ∈ R^2.
Solution technique for the variable-coefficient problem on Ω.
For the interior domain Ω, we construct a solution operator for the following homogeneous variable-coefficient boundary value problem, in which the unknown total field u = u^s + u^i satisfies
(5)  ∆u(x) + κ^2 (1 − b(x)) u(x) = 0,   x ∈ Ω,
(6)  u(x) = h(x),   x ∈ ∂Ω.
Note that, for now, we specify the Dirichlet data h; when we consider the full problem, h will become an unknown that will also be solved for. It is known that, for all but a countable set of wavenumbers, the interior Dirichlet BVP (5)-(6) has a unique solution u [26, Thm. 4.10]. The values {κ_j}, j = 1, 2, . . ., at which the solution is not unique are (the square roots of) the interior Dirichlet eigenvalues of the penetrable domain Ω; we will call them resonant wavenumbers. Definition 1.1 (interior Dirichlet-to-Neumann map). Suppose that κ > 0 is not a resonant wavenumber of Ω. The interior DtN operator T_int : H^1(∂Ω) → L^2(∂Ω) is defined by T_int h = u_n, where u is the unique solution to the interior Dirichlet BVP (5)-(6) and u_n denotes its outward normal derivative on ∂Ω.
Remark 1.2. The operator T int is a pseudo-differential operator of order +1 [15]; that is, in the limit of high-frequency boundary data it behaves like a differentiation operator on ∂ Ω. The boundedness as a map T int : H 1 (∂ Ω) → L 2 (∂ Ω) holds for Ω any bounded Lipschitz domain since the PDE is strongly elliptic [26,Thm. 4.25].
In this paper, a discrete approximation to T int is constructed via a variation of recent composite spectral methods in [16,25] (which are also similar to [9]). These methods partition Ω into a collection of small "leaf" boxes and construct approximate DtN operators for each box via a brute force calculation on a local spectral grid. The DtN operator for Ω is then constructed via a hierarchical merge process. Unfortunately, at any given κ, each of the many leaves and merging subdomains may hit a resonance as described above, causing its DtN to fail to exist. As κ approaches any such resonance the norm of the DtN grows without bound. Thus a technique based on the DtN alone is not robust.
Remark 1.3.
We remind the reader that any such "box" resonance is purely artificial and is caused by the introduction of the solution regions. It is important to distinguish these from resonances that the physical scattering problem (1)-(2) itself might possess (e.g. due to nearly trapped rays), whose effect of course cannot be avoided in any accurate numerical method.
One contribution of the present work is to present a robust improvement to the methods of [16,25], built upon hierarchical merges of impedance-to-impedance (ItI) rather than DtN operators; see section 2. The idea of using impedance coupling builds upon the work of Kirsch-Monk [22]. ItI operators are inherently stable, with condition number O(1), and thus exclude the possibility of inverting arbitrarily ill-conditioned matrices as in the original DtN formulation. For instance, in the lens experiment of section 5, the DtN method has condition numbers as large as 2 × 10 5 , while for the new ItI method the condition number never grows larger than 20. The DtN of the whole domain Ω is still needed; however, if Ω has a resonance, the size of Ω can be changed slightly to avoid the resonance (see remark 2.6).
The discretization methods in [16,25], and in this paper are related to earlier work on spectral collocation methods on composite ("multi-domain") grids, such as, e.g., [23,39], and in particular Pfeiffer et al [29]. For a detailed review of the similarities and differences, see [25].
1.2.2. Solution technique for the constant-coefficient problem on Ω^c. For the exterior domain we consider the Dirichlet BVP (8)-(10): the constant-coefficient Helmholtz equation ∆u^s + κ^2 u^s = 0 in Ω^c, the Dirichlet condition u^s = s on ∂Ω, and the radiation condition (2), obtained by restricting (1)-(2) to Ω^c. (Again, the Dirichlet data s will later become an unknown that is solved for.) It is known that (8)-(10) has a unique solution for every wavenumber κ [10, Ch. 3]. This means that the following DtN for the exterior domain is always well-defined. Definition 1.2 (exterior Dirichlet-to-Neumann map). Suppose that κ > 0. The exterior DtN operator T_ext : H^1(∂Ω) → L^2(∂Ω) is defined by
(11)  T_ext s = u^s_n
for u^s the unique solution to the exterior Dirichlet BVP (8)-(10).
Numerically, we construct an approximation to T ext by reformulating (8)-(10) as a boundary integral equation (BIE), as described in section 3.1, and then discretizing it using a Nyström method based on a high order Gaussian composite quadrature [19].
1.2.3.
Combining the two solution operators. Once the DtN operators T_int and T_ext have been determined (as described in sections 1.2.1 and 1.2.2), and the restriction of the incident field to ∂Ω is given, it is possible to determine the scattered field on ∂Ω as follows. First observe that the total field u = u^s + u^i must satisfy
(12)  T_int (u^i + u^s)|_∂Ω = u^i_n + u^s_n.
We also know that the scattered field u^s satisfies
(13)  T_ext u^s|_∂Ω = u^s_n.
Combining (12) and (13) gives
(14)  (T_int − T_ext) u^s|_∂Ω = u^i_n − T_int u^i|_∂Ω.
As discussed in Remark 1.2, both T_int and T_ext have order +1. Lamentably, this behavior adds rather than cancels in (14), so that (T_int − T_ext) also has order +1, and is therefore unbounded. This makes any numerical discretization of (14) ill-conditioned, with condition number growing linearly in the number of boundary nodes. To remedy this, we present in section 3 a new method for combining T_int and T_ext to give a provably second kind integral equation, which thus gives a well-conditioned linear system.
Once the scattered field is known on the boundary, the field at any exterior point may be found via Green's representation formula; see section 3.3. The interior transmitted wave u may be reconstructed anywhere in Ω by applying solution operators which were built as part of the composite spectral method.
1.3. Prior work. Perhaps the most common technique for solving the scattering problem stated in Section 1.1 is to discretize the variable coefficient PDE (1) via a finite element or finite difference method, while approximating the radiation condition in one of many ways, including perfectly matched layers (PML) [20], absorbing boundary conditions (ABC) [12], separation of variables or their perturbations [27], local impedance conditions [7], or a Nyström method [22] (as in the present work). However, the accuracy of finite element and finite difference schemes for the Helmholtz equation is limited by so-called "pollution" (dispersion) error [2,4], demanding an increasing number of degrees of freedom per wavelength in order to maintain fixed accuracy as wavenumber κ grows. In addition, while the resulting linear system is sparse, it is also large and is often ill-conditioned in such a way that standard pre-conditioning techniques fail, although hybrid direct-iterative solvers such as [13] have proven effective in certain environments. While there do exist fast direct solvers for such linear systems (for low wavenumbers κ) [32,31,37,24], the accuracy of the solution is limited by the discretization. The performance of the solver worsens when increasing the order of the discretization-thus it is not feasible to use a high order discretization that would overcome the above-mentioned pollution error.
Scattering problems on infinite domains are also commonly handled by rewriting them as volume integral equations (e.g. the Lippmann-Schwinger equation) defined on a domain (such as Ω) that contains the support of the scattering potential [1,8]. This approach is appealing in that the Sommerfeld condition (2) is enforced analytically, and in that high-order discretizations can be implemented without loss of stability [11]. Principal drawbacks are that the resulting linear systems have dense coefficient matrices, and tend to be challenging to solve using iterative solvers [11].
1.4. Outline. Section 2 describes in detail the stable hierarchical procedure for constructing an approximation to the DtN map T int for the interior problem (5)- (6). Section 3 describes how boundary integral equation techniques are used to approximate the DtN map T ext for the exterior problem (8)- (10), how to couple the DtN maps T int and T ext to solve the full problem (1)- (2), and the proof (Theorem 3.1) that the formulation is second kind. Section 4 details the computational cost of the method and explains the reduced cost for multiple incident waves. Finally, section 5 illustrates the performance of the method in several challenging scattering potential configurations.
CONSTRUCTING AND MERGING IMPEDANCE-TO-IMPEDANCE MAPS
This section describes a technique for building a discrete approximation to the Dirichlet-to-Neumann (DtN) operator for the interior variable coefficient BVP (5)-(6) on a square domain Ω. It relies on the hierarchical construction of impedance-to-impedance (ItI) maps; these are defined in section 2.1. Section 2.2 defines a hierarchical tree on the domain Ω. Section 2.3 explains how the ItI maps are built on the (small) leaf boxes in the tree. Section 2.4 describes the merge procedure whereby the global ItI map is built, and then how the global DtN map is recovered from the global ItI map.
2.1. The impedance-to-impedance map. We start by defining the ItI map on a general Lipschitz domain, and giving some of its properties. (In this section only, Ω will refer to such a general domain.)
Proposition 2.1.
Let Ω ⊂ R^2 be a bounded Lipschitz domain, and b(x) be real. Let η ∈ C, with Re η ≠ 0. Then the interior Robin BVP
(15)  ∆u(x) + κ^2 (1 − b(x)) u(x) = 0,   x ∈ Ω,
(16)  u_n(x) + iη u(x) = f(x),   x ∈ ∂Ω,
has a unique solution u for all real κ > 0.
Proof. We first prove uniqueness. Consider u a solution to the homogeneous problem f ≡ 0. Then using Green's 1st identity and (15), (16),
∫_Ω (|∇u|^2 − κ^2 (1 − b)|u|^2) dx = ∫_∂Ω ū u_n ds = −iη ∫_∂Ω |u|^2 ds.
Taking the imaginary part shows that u, and hence u_n, vanishes on ∂Ω, hence u ≡ 0 in Ω by unique continuation of the Cauchy data. Existence of u ∈ H^1(Ω) now follows for data f ∈ H^{−1/2}(∂Ω) from the Fredholm alternative, as explained in this context by McLean [26, Thm. 4.11]. Definition 2.1 (interior impedance-to-impedance map). Fix η ∈ C, with Re η ≠ 0. Let
(17)  f := u_n + iη u|_∂Ω,
(18)  g := u_n − iη u|_∂Ω
be Robin traces of u. We refer to f and g as the "incoming" and "outgoing" (respectively) impedance data. For any κ > 0, the interior ItI operator R : L^2(∂Ω) → L^2(∂Ω) is defined by
(19)  R f = g,
for f and g the Robin traces of u the solution of (15)-(16), for all f ∈ L^2(∂Ω).
We choose the impedance parameter η (on dimensional grounds) to be η = κ. Numerically, in what follows, we observe very little sensitivity to the exact value or sign of η.
For the following, we need the result that the DtN map T_int is self-adjoint for real κ and b(x). This holds since, for any functions u and v satisfying (15),
∫_∂Ω u_n v ds = ∫_∂Ω u v_n ds
by Green's second identity. This allows us to prove the following property of the ItI map that will be the key to the numerical stability of the method.
Figure 2: These are then gathered into a binary tree of successively larger boxes as described in Section 2. One possible enumeration of the boxes in the tree is shown, but note that the only restriction is that if box τ is the parent of box σ, then τ < σ.
Proposition 2.2.
Let Ω be a bounded Lipschitz domain, let b(x) be real, and let η ∈ C with Re η ≠ 0. Then the ItI map R for Ω exists for all real frequencies κ, and is unitary whenever η is also real.
Proof. Existence of R for all real κ follows from Proposition 2.1. To prove R is unitary, we insert the definitions of f and g into (19) and use the definition of the DtN to rewrite u_n = T_int u|_∂Ω, giving
(20)  R (T_int + iη) u|_∂Ω = (T_int − iη) u|_∂Ω,
which holds for any data u|_∂Ω ∈ H^1(∂Ω). Thus the ItI map is given in operator form by
(21)  R = (T_int − iη)(T_int + iη)^{−1}.
Since T_int is self-adjoint and η is real, this formula shows that R is unitary.
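Solving (20) for T_int, a short algebraic step we spell out here for convenience, gives the conversion formula that reappears as (31) in Section 2.4:
\[
R\,(T_{\rm int} + i\eta I) = T_{\rm int} - i\eta I
\;\Longrightarrow\;
(R - I)\,T_{\rm int} = -i\eta\,(R + I)
\;\Longrightarrow\;
T_{\rm int} = -i\eta\,(R - I)^{-1}(R + I).
\]
Conversely, when T_int is self-adjoint and η is real, (21) is a Cayley-type transform of T_int, which is the structural reason for the unitarity claimed in the proposition.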
As a unitary operator, R has unit operator L 2 -norm, pseudo-differential order 0, and eigenvalues lying on the unit circle. From (21) and the pseudo-differential order of T int one may see that the eigenvalues of R accumulate only at +1.
Partitioning of Ω into hierarchical tree of boxes.
Recall that Ω is the square domain containing the support of b. We partition Ω into a collection of 4^M equally-sized square boxes called leaf boxes, where M sets the number of levels; see Figure 2. We place q Gauss-Legendre interpolation nodes on each edge of each leaf, which will serve to discretize all interactions of this leaf with its neighbors; see Figure 3(a). The size of the leaf boxes, and the parameter q, should be chosen so that any potential transmitted wave u, as well as its first derivatives, can be accurately interpolated on each box edge from their values at these nodes.
Next we construct a binary tree over the collection of leaf boxes. This is achieved by forming the union of adjacent pairs of boxes (forming rectangular boxes), then forming the pairwise union of the rectangular boxes. The result is a collection of squares with twice the side length of a leaf box. The process is continued until the only box is Ω itself, as in Figure 2. The boxes should be ordered so that if τ is a parent of a box σ, then τ < σ. We also assume that the root of the tree (i.e. the full box Ω) has index τ = 1. Let Ω_τ denote the domain associated with box τ. Remark 2.1. The method easily generalizes to rectangular boxes, and to more complicated domains Ω in the same manner as [25].
Spectral approximation of the ItI map on a leaf box.
Let Ω_τ denote a single leaf box, and let f = f_τ and g = g_τ be a pair of vectors of associated incoming and outgoing impedance data, sampled at the 4q Gauss-Legendre boundary nodes, with entries ordered in a counter-clockwise fashion starting from the leftmost node of the bottom edge of the box, as in Figure 3(a). In this section, we describe a technique for constructing a matrix approximation to the ItI operator on this leaf box. Namely, we build a 4q × 4q matrix R such that g ≈ Rf holds to high-order accuracy, for all incoming data vectors f ∈ R^{4q} corresponding to smooth transmitted wave solutions u. First, we discretize the PDE (15) on the square leaf box Ω_τ using a spectral method on a p × p tensor-product Chebyshev grid filling the box, comprised of the nodes whose coordinates in each direction are the p Chebyshev points on [−1, 1], scaled to the box edges, as in Figure 3(b). We label the Chebyshev node locations x_j ∈ R^2, for j = 1, . . . , p^2. For notational purposes, we order these nodes in the following fashion: the indices J_b = {1, 2, . . . , 4(p − 1)} correspond to the Chebyshev nodes lying on the boundary of Ω_τ, ordered counter-clockwise starting from the node located at the south-west corner (a, c). The remaining (p − 2)^2 interior nodes have indices J_i = {4(p − 1) + 1, . . . , p^2} and may be ordered arbitrarily (a Cartesian ordering is convenient). Let D^(1), D^(2) ∈ R^{p^2 × p^2} be the standard spectral differentiation matrices constructed on the full set of Chebyshev nodes, which approximate the ∂/∂x_1 (horizontal) and ∂/∂x_2 (vertical) derivative operators, respectively. As explained in [33, Ch. 7], these are constructed from Kronecker products of the p × p identity matrix and the one-dimensional Chebyshev differentiation matrix, which may in turn be built from the vector of barycentric weights for the Chebyshev nodes (see [33, Ch. 6] and [30, Eqn. (8)]). Let the matrix A ∈ R^{p^2 × p^2} be the spectral discretization of the operator ∆ + κ^2 (1 − b(x)) on the product Chebyshev grid, namely A = (D^(1))^2 + (D^(2))^2 + κ^2 diag{1 − b(x_j)}, where "diag S" indicates the diagonal matrix whose entries are the elements of the ordered set S. Remark 2.2. The matrices D^(1), D^(2), and A must have rows and columns ordered as explained above (i.e. boundary then interior) for the Chebyshev nodes; this requires permuting rows and columns of the matrices constructed by Kronecker products. After this permutation, A has a natural 2 × 2 block structure with respect to the boundary index set J_b and the interior index set J_i. We now break the 4(p − 1) boundary Chebyshev nodes into four sets J_b = [J_s, J_e, J_n, J_w], denoting the south, east, north, and west edges, as in Figure 3(b). Note that J_s includes the south-western corner J_s(1) but not the south-eastern corner (which in turn is the first element of J_e), etc.
We are now ready to derive the linear system required for constructing the approximate ItI operator. We first build a matrix N ∈ R^{4(p−1) × p^2} which maps values of u at all Chebyshev nodes to the outgoing normal derivatives at the boundary Chebyshev nodes; here and below (as is standard in MATLAB) the notation A(S, :) denotes the matrix formed from the subset of rows of a matrix A given by the index set S. Then, recalling (16), the matrix F ∈ R^{4(p−1) × p^2} which maps the values of u at all Chebyshev nodes to incoming impedance data on the boundary Chebyshev nodes is F = N + iη I_{p^2}(J_b, :), where I_{p^2} denotes the identity matrix of size p^2. Using u ∈ R^{p^2} for the vector of u values at all Chebyshev nodes, the linear system for the unknown u imposes the spectral discretization of the PDE at all interior nodes and the incoming impedance data f_c ∈ R^{4(p−1)} at the boundary Chebyshev nodes; its right-hand side contains f_c together with an appropriate column vector of zeros, and its square size-p^2 system matrix is denoted B. One explicit way to assemble N and B is sketched below.
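The following display is our reconstruction of that assembly from the surrounding definitions; the outward normals are taken to be −x_2, +x_1, +x_2, −x_1 on the south, east, north and west edges, and the impedance rows are assumed to be ordered before the interior PDE rows, so the exact signs and row ordering in the original may differ:
\[
N = \begin{bmatrix} -D^{(2)}(J_s,:) \\[2pt] \phantom{-}D^{(1)}(J_e,:) \\[2pt] \phantom{-}D^{(2)}(J_n,:) \\[2pt] -D^{(1)}(J_w,:) \end{bmatrix},
\qquad
F = N + i\eta\, I_{p^2}(J_b,:),
\qquad
B = \begin{bmatrix} F \\ A(J_i,:) \end{bmatrix},
\qquad
B\,\mathbf{u} = \begin{bmatrix} f_c \\ 0 \end{bmatrix}.
\]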
Remark 2.3.
At each of the four corner nodes, only one boundary condition is imposed, namely the one associated with the edge lying in the counter-clockwise direction. This results in a square linear system, which we observe is around twice as fast to solve as a similar-sized rectangular one.
To construct the p^2 × 4(p − 1) "solution matrix" X for the linear system, we solve (24) with each unit vector in R^{4(p−1)} taken in turn as the boundary data f_c; the resulting solution vectors form the columns of X, as recorded in (25). In practice, X is found using the backwards-stable solver available via MATLAB's mldivide command. If desired, the tabulated solution u can now be found at all the Chebyshev nodes by applying X to the right hand side of (24).
Recall that the goal is to construct matrices that act on boundary data on Gauss (as opposed to Chebyshev) nodes. With this in mind, let P be the matrix which performs Lagrange polynomial interpolation [34,Ch. 12] from q Gauss to p Chebyshev points on a single edge, and let Q be the matrix from Chebyshev to Gauss points. Let P 0 ∈ R (p−1)×q be P with the last row omitted. For example, P 0 maps from Gauss points on the south edge to the Chebyshev points J s .
Then the solution matrix which takes incoming impedance data on Gauss nodes to the values u at all Chebyshev nodes is obtained by composing X with edge-by-edge interpolation from Gauss to Chebyshev nodes (a sketch of this assembly is given after this paragraph). Finally, we must extract outgoing impedance data on Gauss nodes from the vector u, to construct an approximation R to the full ItI map on the Gauss nodes. This is done by extracting (as in (22)) the relevant rows of the spectral differentiation matrices, then interpolating back to Gauss points. Let J'_s := [J_s, J_e(1)] be the indices of all p Chebyshev nodes on the south edge, and correspondingly for the other three edges. Then the index set J'_b := [J'_s, J'_e, J'_n, J'_w] counts each corner twice.² Then let G ∈ R^{4p × p^2} be the matrix mapping values of u to the outgoing impedance data with respect to each edge; composing G with the solution matrix above and interpolating back to the Gauss nodes gives, in terms of (25), the desired spectral approximation R to the ItI map on the leaf box. The computation time is dominated by the solution step for X, which takes effort O(p^6). We observe empirically that one must choose p > q + 1 in order that R not acquire a spurious numerical null space. We typically choose q = 14 and p = 16.
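With P_0 and Q as above, one plausible way to write this assembly (our sketch, reconstructed from the surrounding definitions; the original signs and ordering may differ slightly) is:
\[
Y := X \,\mathrm{blkdiag}(P_0,P_0,P_0,P_0),
\qquad
G := \begin{bmatrix} -D^{(2)}(J'_s,:) \\[2pt] \phantom{-}D^{(1)}(J'_e,:) \\[2pt] \phantom{-}D^{(2)}(J'_n,:) \\[2pt] -D^{(1)}(J'_w,:) \end{bmatrix} - i\eta\, I_{p^2}(J'_b,:),
\qquad
R := \mathrm{blkdiag}(Q,Q,Q,Q)\; G\, Y,
\]
so that Y maps incoming Gauss-node impedance data to u at all Chebyshev nodes, G forms the outgoing impedance data u_n − iη u on the (corner-duplicated) Chebyshev edge nodes, and the block-diagonal Q interpolates each edge back to its q Gauss nodes.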
Merging ItI maps.
Once the approximate ItI maps are constructed on the boundary Gauss nodes on the leaf boxes, the ItI map defined on Ω is constructed by merging two boxes at a time, moving up the binary tree as described in section 2.2. This section demonstrates the purely local construction of an ItI operator for a box from the ItI operators of its children.
We begin by introducing some notation. Let Ω_τ denote a box with children Ω_α and Ω_β where Ω_τ = Ω_α ∪ Ω_β. For concreteness, consider the case where Ω_α and Ω_β share a vertical edge. As shown in Figure 4, the Gauss points on ∂Ω_α and ∂Ω_β are partitioned into three sets: J_1: boundary nodes of Ω_α that are not boundary nodes of Ω_β; J_2: boundary nodes of Ω_β that are not boundary nodes of Ω_α; J_3: boundary nodes of both Ω_α and Ω_β that are not boundary nodes of the union box Ω_τ. Define the exterior outgoing data g_e as the outgoing impedance data on J_1 and J_2, and the interior outgoing data g_i as the outgoing impedance data on J_3; the incoming vectors f_i and f_e are defined similarly. The goal is to obtain an equation mapping f_e to g_e. Since the ItI operators for Ω_α and Ω_β have previously been constructed, we have the two block equations (26): one relating (f_1, f^α_3) to (g_1, g^α_3) through the blocks R^α_11, R^α_13, R^α_31, R^α_33 of R^α, and one relating (f_2, f^β_3) to (g_2, g^β_3) through the blocks R^β_22, R^β_23, R^β_32, R^β_33 of R^β.
Since the normals of the two leaf boxes are opposed on the interior "3" edge, g^α_3 = −f^β_3 and f^α_3 = −g^β_3 (using the definitions (17)–(18)). This allows the bottom-row equations of (26) to be rewritten using only the α-data on the interior edge: the bottom row for Ω_α reads (27) g^α_3 = R^α_31 f_1 + R^α_33 f^α_3, while the bottom row for Ω_β, expressed in terms of the α-data on the interior edge, becomes (28) −f^α_3 = R^β_32 f_2 − R^β_33 g^α_3. Plugging (27) into (28) and collecting like terms, we may solve for f^α_3, obtaining f^α_3 = W R^β_33 R^α_31 f_1 − W R^β_32 f_2, where W := (I − R^β_33 R^α_33)^{−1}. Note that the matrix S^α := [W R^β_33 R^α_31, −W R^β_32] maps the incoming impedance data on Ω_τ to the incoming (with respect to α) impedance data on the interior edge. The outgoing impedance data g^α_3 is then found by plugging f^α_3 into equation (27). Now the top-row equations of (26) can be rewritten without reference to the interior edge. The top-row equation of (26) from Ω_α is now g_1 = R^α_11 f_1 + R^α_13 f^α_3, and the top-row equation of (26) from Ω_β is g_2 = R^β_22 f_2 − R^β_23 g^α_3. Writing these equations as a single block system (30) acting on [f_1; f_2], the block matrix appearing in (30) is R^τ, the ItI operator for Ω_τ.
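The following numpy sketch carries out one such merge. The block partitioning of R^α and R^β and the expression W = (I − R^β_33 R^α_33)^{−1} are reconstructions chosen to be consistent with the definition of S^α quoted above; they should be checked against (26)–(30) before being relied upon.

```python
import numpy as np

def merge_iti(Ra, Rb, n1, n2, n3):
    """Merge ItI maps of two boxes sharing an interior edge.

    Ra acts on [f1; f3] -> [g1; g3] (blocks 11,13,31,33); Rb acts on [f2; f3] -> [g2; g3]
    (blocks 22,23,32,33); n1, n2, n3 are the sizes of the index sets J1, J2, J3.
    Returns R_tau acting on [f1; f2] -> [g1; g2], and S_alpha for the interior data.
    """
    Ra11, Ra13 = Ra[:n1, :n1], Ra[:n1, n1:]
    Ra31, Ra33 = Ra[n1:, :n1], Ra[n1:, n1:]
    Rb22, Rb23 = Rb[:n2, :n2], Rb[:n2, n2:]
    Rb32, Rb33 = Rb[n2:, :n2], Rb[n2:, n2:]
    W = np.linalg.inv(np.eye(n3) - Rb33 @ Ra33)
    # S_alpha maps exterior incoming data [f1; f2] to the incoming data f3 seen by box alpha.
    S_alpha = np.hstack([W @ Rb33 @ Ra31, -W @ Rb32])
    f3_from_f1, f3_from_f2 = S_alpha[:, :n1], S_alpha[:, n1:]
    # Outgoing data g3 with respect to alpha, then the exterior outgoing data.
    g3_from_f1 = Ra31 + Ra33 @ f3_from_f1
    g3_from_f2 = Ra33 @ f3_from_f2
    R11 = Ra11 + Ra13 @ f3_from_f1
    R12 = Ra13 @ f3_from_f2
    R21 = -Rb23 @ g3_from_f1            # uses f3(beta) = -g3(alpha)
    R22 = Rb22 - Rb23 @ g3_from_f2
    R_tau = np.block([[R11, R12], [R21, R22]])
    return R_tau, S_alpha
```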
Remark 2.4.
In practice, the matrix products WR β 33 R α 31 and WR β 32 should be computed once per merge. Remark 2.5. Note that the formula (30) is quite different from that for merging DtN maps appearing in prior work [16,25]. The root cause is the way the equivalence of incoming and outgoing data on the interior edge differs from the case of Dirichlet and Neumann data. Algorithm 1 outlines the implementation of the hierarchical construction of the impedance operators, by repeated application of the above merge operation. Algorithm 2 illustrates the downwards sweep to construct from incoming impedance data f the solution at all Chebyshev discretization points in Ω. Note that the latter requires the solution matrices S at each level, and Y for each of the leaf boxes, that were precomputed by Algorithm 1.
The resulting approximation to the top-level ItI operator R = R_1 is a square matrix which acts on incoming impedance data living on the total of 4q·2^M composite Gauss nodes on ∂Ω. An approximation T_int to the interior DtN map on these same nodes now comes from inverting equation (20) for T_int, to give (31) T_int = −iη (R − I)^{−1} (R + I), where I is the identity matrix of size 4q·2^M. The need for conversion from the ItI to the DtN for the domain Ω will become clear in the next section. Remark 2.6. Due to the pseudo-differential order +1 of T_int, we expect the norm of T_int to grow linearly in the number of boundary nodes. However, it is also possible that κ falls close enough to a resonant wavenumber of Ω that the norm of T_int is actually much larger, resulting in a loss of accuracy due to the inversion of the nearly-singular matrix R − I in (31). In our extensive numerical experiments, this latter problem has never happened. However, it is important to include a condition number check when formula (31) is evaluated. Should there be a problem, it would be a simple matter to modify slightly the domain to avoid a resonance. For instance, one can add a column of leaf boxes to the side of Ω, and then inexpensively update the computed ItI operator for Ω to the enlarged domain.
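In code, the conversion (31) together with the condition-number safeguard suggested in Remark 2.6 might look as follows; the tolerance is an arbitrary illustrative choice.

```python
import numpy as np

def iti_to_dtn(R, eta, cond_tol=1e12):
    """Convert the top-level ItI matrix R to the interior DtN matrix via (31)."""
    n = R.shape[0]
    M = R - np.eye(n)
    cond = np.linalg.cond(M)
    if cond > cond_tol:
        # Near-resonance: the text suggests slightly enlarging the domain and updating R.
        raise RuntimeError(f"R - I is nearly singular (cond ~ {cond:.2e}); adjust the domain")
    return -1j * eta * np.linalg.solve(M, R + np.eye(n))
```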
WELL-CONDITIONED BOUNDARY INTEGRAL FORMULATION FOR SCATTERING
In this section, we present an improved boundary integral equation alternative to the scattering formulation (14) from the introduction.
3.1. Formula for the exterior DtN operator T_ext. We first construct the exterior DtN operator T_ext via potential theory. The starting point is Green's exterior representation formula [10, Thm. 2.5], which states that any radiative solution u^s to the Helmholtz equation in Ω^c may be written (32) u^s(x) = (D u^s|_∂Ω)(x) − (S u^s_n)(x) for x ∈ Ω^c, where (Dφ)(x) = (i/4) ∫_∂Ω ∂H^(1)_0(κ|x − y|)/∂n_y φ(y) ds_y and (Sφ)(x) = (i/4) ∫_∂Ω H^(1)_0(κ|x − y|) φ(y) ds_y are respectively the frequency-κ Helmholtz double- and single-layer potentials with density φ, with H^(1)_0 the outgoing Hankel function of order zero. The vector n_y denotes the outward unit normal at y ∈ ∂Ω. Letting x approach ∂Ω in (32), one finds via the jump relations (33) (D − ½I) u^s|_∂Ω = S u^s_n, where D and S are the double- and single-layer boundary integral operators on ∂Ω. See [10, Ch. 3.1] for an introduction to these representations and operators. Rearranging (33) gives u^s_n = S^{−1}(D − ½I) u^s|_∂Ω, thus the exterior DtN operator is given in terms of the operators of potential theory by (34) T_ext = S^{−1}(D − ½I).
ALGORITHM 1 (hierarchical construction of the impedance operators). For τ a leaf box, the algorithm builds the solution matrix Y^τ that maps impedance data at Gauss nodes to the solution at the interior Chebyshev nodes. For a non-leaf box Ω_τ, it builds the matrix S^τ required for constructing the incoming impedance data on the interior Gauss nodes, splitting J^α_b and J^β_b into the vectors J_1, J_2, and J_3 as shown in Figure 4. It is assumed that if box Ω_τ is a parent of box Ω_σ, then τ < σ.
ALGORITHM 2 (solve BVP (15)–(16) once solution matrices have been built). This program constructs an approximation ũ to the solution u of (15)–(16). It assumes that all matrices S^τ, Y^τ have already been constructed, and that if box Ω_τ is a parent of box Ω_σ, then τ < σ. We call J_τ the indices of nodes in box Ω_τ.
The new integral formulation.
We apply from the left the single layer integral operator S to both sides of (14), and use (34), to obtain a linear equation (35) for u^s|_∂Ω, the restriction of the scattered wave to the domain boundary ∂Ω. Let (36) A := ½I − D + S T_int be the boundary integral operator appearing in the above formulation. In the trivial case b ≡ 0 (no scattering potential) it is easy to check that A = I, by using T_int = S^{−1}(D + ½I), which can be derived in this case similarly to (34). Now we prove that introducing a general scattering potential perturbs A only compactly, that is, our left-regularization of the original ill-conditioned (14) has produced a well-conditioned equation.
Theorem. The operator A defined in (36) differs from the identity by a compact operator. Proof. Let u satisfy (5) in Ω; then by Green's interior representation formula (third identity) [10, Eq. (2.4)], (37) u(x) = (S u_n)(x) − (D u|_∂Ω)(x) − κ^2 (V(Bu))(x) for x ∈ Ω, where (Vφ)(x) = (i/4) ∫_Ω H^(1)_0(κ|x − y|) φ(y) dy denotes the Helmholtz volume potential [10, Sec. 8.2] acting on a function φ with support in Ω, and B denotes the operator that multiplies a function pointwise by b(y). Defining P to be the solution operator for the interior Dirichlet problem (5)–(6), i.e. u(y) = (P u|_∂Ω)(y) for y ∈ Ω, we can write the last term in (37) as −κ^2 V BP u|_∂Ω. Taking x to ∂Ω from inside in (37), the jump relations give (38) (½I + D) u|_∂Ω = S u_n − κ^2 V̄ BP u|_∂Ω, where V̄ is V restricted to evaluation on ∂Ω. Recall u_n = T_int u|_∂Ω. Plugging this definition into (38), we find S T_int u|_∂Ω = (½I + D) u|_∂Ω + κ^2 V̄ BP u|_∂Ω. When this is substituted into (36), the D terms cancel, giving A = I + κ^2 V̄ BP. Since V̄ has a weakly singular kernel it is compact, while B and P are bounded, so A − I is compact, which completes the proof. Note that D in (36) is not compact when ∂Ω has corners [10, Sec. 3.5], yet the theorem holds with corners since D is canceled in the proof. Figure 5 compares the spectrum of the unregularized (14) and regularized (35) operators, in a simple computational example. The improvement in the eigenvalue distribution is dramatic: the spectrum of (14) has small eigenvalues but extends to large eigenvalues of order 10^5, while the spectrum of (35) is tightly clustered around +1 with no eigenvalues of magnitude larger than 2.
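Given compatible Nyström matrices for S and D and the interpolated T_int on the same nodes, assembling the discrete regularized operator (36) and summarizing its spectrum (the kind of diagnostic behind Figure 5) is straightforward; the helper below is only an illustration.

```python
import numpy as np

def regularized_operator(S, D, T_int):
    """Assemble the discrete regularized operator A = I/2 - D + S @ T_int of (36).
    S, D, T_int are assumed to be square matrices on the same boundary nodes."""
    n = S.shape[0]
    return 0.5 * np.eye(n) - D + S @ T_int

def spectrum_summary(A):
    """Min/max real part and largest magnitude of the eigenvalues of A."""
    ev = np.linalg.eigvals(A)
    return ev.real.min(), ev.real.max(), np.abs(ev).max()
```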
3.3.
Reconstructing the scattered field on the exterior. Once equation (35) is solved for the scattered wave on ∂Ω, the scattered wave can be found at any point in Ω^c via the representation (32). All that is needed is the normal derivative of u^s on ∂Ω, which can be recovered from equation (12). For evaluation of (32), the native Nyström quadrature on ∂Ω is sufficient for 10-digit accuracy for all points further away from Ω than the size of one leaf box; however, as with any boundary integral method, for highly accurate evaluation very close to ∂Ω a modified quadrature would be needed.
Numerical discretization of the boundary integral equation.
We discretize the BIE (35) on ∂Ω via a Nyström method with composite (panel-based) quadrature with n nodes in total. The panels on ∂Ω coincide with the edges of the leaf boxes from the interior discretization, apart from the eight panels touching corners, where six levels of dyadic panel refinement are used on each to achieve around 10-digit accuracy. 3 On each of these panels, a 10-point Gaussian rule is used. For building n × n matrix approximations to the operators S and D in (35), the plain Nyström method is used for matrix elements corresponding to non-neighboring panels, while generalized Gaussian quadrature is used for matrix elements corresponding to the self- or neighbor-interaction of each panel [19]. The matrix T_int computed by (31) must also be interpolated from the 4q·2^M Gauss nodes on ∂Ω to the n new nodes; since the panels mostly coincide, this is a local operation analogous to the use of P and Q matrices in section 2.3.
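A sketch of the panel layout on one side of the square is given below: leaf-edge panels with a 10-point Gauss rule, and the panel adjacent to each corner split dyadically six times toward the corner. The exact refinement bookkeeping in the paper may differ; this is only meant to illustrate the construction.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def panel_nodes(breakpoints, q=10):
    """Gauss-Legendre nodes and weights on the panels defined by the breakpoints."""
    x0, w0 = leggauss(q)
    xs, ws = [], []
    for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
        xs.append(0.5 * (hi - lo) * x0 + 0.5 * (hi + lo))
        ws.append(0.5 * (hi - lo) * w0)
    return np.concatenate(xs), np.concatenate(ws)

def side_breakpoints(a, b, n_leaf, n_dyadic=6):
    """Panel breakpoints on one side: leaf-edge panels, with the panel touching each
    corner split dyadically n_dyadic times toward that corner."""
    edges = np.linspace(a, b, n_leaf + 1)
    h = (b - a) / n_leaf
    left = [a + h / 2 ** k for k in range(n_dyadic, 0, -1)]    # refine toward corner a
    right = [b - h / 2 ** k for k in range(1, n_dyadic + 1)]   # refine toward corner b
    return np.array([a] + left + list(edges[1:-1]) + right + [b])

bk = side_breakpoints(-0.5, 0.5, n_leaf=16)   # illustrative leaf count per side
x, w = panel_nodes(bk, q=10)
```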
COMPUTATIONAL COMPLEXITY
The computational cost of the solution technique is determined by the cost of constructing the approximate DtN operator T_int and the cost of solving the boundary integral equation (35). Let N denote the total number of discretization points in Ω required for constructing R. As there are p^2 Chebyshev points for each leaf box, the total number of discretization points is roughly 4^M p^2 (to be precise, since points are shared on leaf box edges, it is N = 4^M (p − 1)^2 + 2^{M+1}(p − 1) + 1). Recall that n is the number of points on ∂Ω required to solve the integral equation. Note that n ∼ √N.
4.1.
Using dense linear algebra. Using dense linear algebra, the cost of constructing R via the technique in section 2 is dominated by the cost at the top level where a matrix of size √ N × √ N is inverted. Thus the computational cost is O (N 3/2 ). The cost of approximating the DtN operator T int is also O(N 3/2 ). However, the computational cost of applying T int is O (N). If the solution in the interior of Ω is desired, the computation cost of the solve is O(N) as well.
The cost of inverting the linear system resulting from the (e.g., Nyström) discretization of (35) is O(n^3). It is possible to accelerate the solve by using iterative methods such as GMRES, which, given the second-kind nature of (35), would converge in O(1) iterations.
When there are multiple incident waves at the same wavenumber κ, the solution technique should be separated into two steps: precomputation and solve. The precomputation step consists of constructing the approximate ItI and DtN operators R and T_int, respectively. Also included in the precomputation should be the discretization and inversion of the BIE (35). The solve step then consists of applying the inverse of the system in (35). The precomputation need only be done once per wavenumber with a computational cost O(N^{3/2}). The cost of each solve (one for each incident wave) is simply the cost of applying an n × n dense matrix, which is O(N).
4.2.
Using fast algorithms. The matrices R τ in Algorithm 1 that approximate ItI operators, as well as the matrices T int and T ext approximating DtN operators, all have internal structure that could be exploited to accelerate the matrix algebra. Specifically, the off-diagonal blocks of these matrices tend to have low numerical ranks, which means that they can be represented efficiently in so called "data-sparse" formats such as, e.g., H or H 2 -matrices [18,6,5], or, even better, the Hierarchically Block Separable (HBS) format [17,21] (which is closely related to the "HSS" format [38]). If the wavenumber κ is kept fixed as N increases, it turns out to be possible to accelerate all computations in the build stage to optimal O(N) asymptotic complexity, and the solve stage to optimal O(N 1/2 ) complexity, see [16]. However, the scaling constants suppressed by the big-O notation depend on κ in such a way that the use of accelerated matrix algebra is worthwhile primarily for problems of only moderate size (say a few dozen wavelengths across). Moreover, for high-order methods such as ours, it is common to keep the number of discretization nodes per wavelength fixed as N increases (so that κ ∼ N 1/2 ), and in this environment, the scaling of the "accelerated" methods revert to O(N 3/2 ) and O(N) for the build and the solve stages, respectively.
This section reports on the performance of the new solution technique for several choices of potential b(x),
where the (numerical) support of b is contained in Ω = (−0.5, 0.5)^2. The incident wave is a plane wave u^inc(x) = e^{iκ x·w} travelling in the unit direction w. Firstly, in section 5.1 the method is applied to problems where b(x) is a single Gaussian "bump." In this case the radial symmetry allows for an independent semi-analytic solution, which we use to verify the accuracy of the method. Then section 5.2 reports on the performance of the method when applied to more complicated problems. Finally, section 5.3 illustrates the computational cost in practice.
For all the experiments, for the composite spectral method described in section 2 we use on each leaf a p × p Chebyshev tensor product grid with p = 16, and the number of Gaussian nodes per side of a leaf is q = 14.
We implemented the methods based on dense matrix algebra with O(N^{3/2}) asymptotic complexity described in section 4.1. (We do not use the O(N) accelerated techniques of section 4.2 since we are primarily interested in scatterers that are large in comparison to the wavelength.) All experiments were executed on a desktop workstation with two quad-core Intel Xeon E5-2643 processors and 128 GB of RAM. All computations were done in MATLAB (version 2012b), apart from the evaluation of Hankel functions in the Nyström and scattered wave calculations, which use Fortran. We expect that careful implementation of the whole scheme in a compiled language can improve execution times substantially. 5.1. Accuracy of the method. In this section, we consider problems where the scattering potential b(x) is given by a Gaussian bump. Since b has radial symmetry, we may compute an accurate reference scattering solution by solving a series of ODEs, as explained in Appendix A. With κ = 40 (so that the square Ω is around six free-space wavelengths on a side), and w = (1, 0), we consider two problems, Bump 1 and Bump 2. For Bump 1, the bump region has an increased refractive index, varying from 1 to around 1.58, which can be interpreted as an attractive potential. For Bump 2, the potential is repulsive, causing the waves to become slightly evanescent near the origin (here the refractive index decreases to zero then becomes purely imaginary, but note that this does not correspond to absorption). Figure 6 illustrates the geometry and the resulting real part of the total field for each experiment. Let ũ denote the approximate total field constructed via the proposed method, and u denote the reference total field computed as in Appendix A. Table 1 reports N, the number of discretization points used by the composite spectral method in Ω; n, the number of discretization points used for discretizing the BIE; and the error in ũ relative to the reference solution u. The more complicated problems considered in section 5.2 are the following. Lens: a vertically-graded lens (Figure 7(a)), at wavenumber κ = 300. The maximum refractive index is around 2.1. Random bumps: the sum of 200 wide Gaussian bumps randomly placed in Ω, rolled off to zero (see Figure 7(b)), giving a smooth random potential at wavenumber κ = 160. The maximum refractive index is around 4.3. Photonic crystal: a 20 × 20 square array of small Gaussian bumps (with peak refractive index 6.7) with a "waveguide" channel removed (Figure 7(c)). The wavenumber κ = 77.1 is chosen carefully to lie in the first complete bandgap of the crystal.
For the first two cases, Ω is around 70 wavelengths on a side, measured using the typical wavelength occurring in the medium (for the lens case, it is 100 wavelengths on a side at the minimum wavelength). This is quite a high frequency for a variable-medium problem at the accuracies we achieve. In these two cases the waves mostly propagate; in contrast, in the third case the waves mostly resonate within each small bump, in such a way that large-scale propagation through the crystal is impossible (hence evanescent), except in the channel. 4 For each choice of varying wave speed, the incident wave is in the direction w = (1, 0) (we remind the reader that the method works for arbitrary incident direction). For the photonic crystal, we also consider the incident wave direction w = (− √ 2/2, √ 2/2). There are no reference solutions available for these problems, hence we study convergence.
In addition to the numbers of discretization points N and n, Table 2 reports a convergence measure for each problem. Table 2 shows that typically 9-digit accuracy is reached when N ≈ 3.7 × 10^6 (M = 7), which corresponds to 1921 Chebyshev nodes in each direction, or around 20 nodes per wavelength at the shortest wavelengths in each medium.
Scaling of the method.
Recall that in the case of multiple incident waves, the solution technique should be broken into two steps: precomputation and solve. Since a direct solver is used, the timing results are independent of the particular scattering potential. For each choice of N and n, Table 3 reports: T_build, the time in seconds to build the approximate ItI and DtN operators; T_solve, the time in seconds to discretize and invert the BIE (35); T_apply, the time in seconds to apply the inverse A^{−1} of the discretized integral equation; R_build, the memory in MB required to store the ItI and solution operators in the hierarchical scheme; and R_solve, the memory in MB required to store the discretized inverse A^{−1}. Figure 9 plots the timings against the problem size N. (The total precomputation time is the sum of T_build and T_solve.) The results show that even at the largest N tested, the precomputation time has not reached its asymptotic O(N^{3/2}); the large dense linear algebra has not yet started to dominate T_build (this may be due to MATLAB overheads). However, the costs for solving (35) and for applying the inverse scale closer to expectations.
The memory usage scales as the expected O(N log N). We are not able to test beyond 15 million unknowns (M = 8) since by that point the memory usage approaches 100 GB. However, note that if all that is needed is the far-field solution for arbitrary incident waves at one wavenumber, the S^τ and Y^τ solution matrices need not be stored, reducing memory significantly, and the final solution matrix only requires 2 GB. We note that, extrapolating from the convergence study, this N should be sufficient for 9-digit accuracy for problems up to 200 wavelengths on a side.
TABLE 3. T_build and R_build report the time in seconds and memory in MB, respectively, required for building the interior ItI operator and constructing the discretized integral equation (35). T_solve reports the time in seconds required to invert the discretized system, while R_solve reports the memory in MB to store the inverse. T_apply reports the time in seconds required to apply the inverse to the incident-wave-dependent data. This table is independent of the choice of potential or wavenumber.
6. CONCLUDING REMARKS
This paper presents a robust, high-accuracy direct method for solving scattering problems involving smoothly varying media. Numerical results show that the method converges to high order, as expected, for choices of refractive index that are representative of challenging problems that occur in applications. Namely, for problems dominated either by propagation (lenses) or by resonances (a bandgap photonic crystal), of order 100 wavelengths on a side, the method converges to around 9-digit accuracy with 3.7 million unknowns. The method is ideal for problems where the far-field scattering is desired for multiple incident waves, since each additional incident wave requires merely applying a dense matrix to its boundary data. For example, a problem involving 14 million unknowns requires 21 minutes of precomputation (to build the necessary operators), but each additional solve takes approximately 0.1 seconds. As discussed in section 4.2, for low-frequency problems these timings, and their asymptotic behavior, could be improved by replacing the dense linear algebra by faster algorithms exploiting compressed representations. One remaining open question is the existence of a convenient second-kind formulation which involves the ItI map (and not the DtN map) of the domain Ω.
ACKNOWLEDGMENTS
We are grateful for a helpful discussion with Michael Weinstein. The work of AHB is supported by NSF grant DMS-1216656; the work of PGM is supported by NSF grants DMS-0748488 and DMS-0941476.
APPENDIX A. REFERENCE SOLUTION FOR PLANE WAVE SCATTERING FROM RADIAL POTENTIALS
In this appendix we describe how we generate reference solutions with around 13 digits of accuracy for the scattering problem from smooth radially-symmetric potentials such as the Gaussian bumps (40) of section 5.1. The incident plane wave is first expanded in polar coordinates as a sum over angular orders l of Bessel functions J_l(κr) multiplied by angular harmonics e^{ilθ}. We write J_l(z) = (H^(1)_l(z) + H^(2)_l(z))/2, and then notice that the effect of the potential b on this field is to modify only the outgoing scattering coefficients. Thus, restricting to a maximum order L, the full field becomes the corresponding truncated sum (41), in which the outgoing Hankel part of each term is multiplied by a coefficient a_l. The coefficients {a_l} are known as scattering phases; by flux conservation they lie on the unit circle if b(r) is a real-valued function. Convergence with respect to L is exponential, once L exceeds κR. For the case of (40) we choose R = 0.5 and L = 30. The phases are found in the following way. For each l = 0, . . . , L we solve the homogeneous radial ODE u″_l + (1/r) u′_l + (−l^2/r^2 + (1 − b(r)) κ^2) u_l = 0, 0 < r < R, with initial conditions that correspond to a regular solution of the form u_l(r) ∼ c r^l as r → 0+ (we implement the initial condition by restricting the domain to [r_0, R] for some small number r_0 > 0 chosen such that the solution growing with increasing r dominates sufficiently over the decaying one). For the numerical solution we use MATLAB's ode45 with machine precision requested for absolute and relative tolerances. (We note that the standard transformation u(r) = r^l U(r) which mollifies the behavior at r = 0 resulted in no improvement in accuracy.) After extracting each interior solution's Robin constant β_l := u′_l(R)/u_l(R), and matching value and derivative to (41) at r = R, we get after simplification an explicit formula for each a_l in terms of β_l and Hankel functions evaluated at κR, which completes the recipe for the phases. The computation time required is a few seconds, due to the large number of steps required by ode45. A simple accuracy test is independence of the phases with respect to variation in R. Values of u(r, θ) for r ≥ R may then be found by evaluating the sum in (41), and for r < R by summation of the interior solutions {u_l(r)}.
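The recipe can be summarized in a short scipy sketch. The ODE, the regular initial condition, and the Robin constant β_l follow the text directly; the final matching formula for a_l is a reconstruction from the standard partial-wave ansatz (half outgoing, half incoming Hankel functions) and should be verified against the paper's equations before use.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import hankel1, hankel2, h1vp, h2vp

def scattering_phases(b, kappa, R=0.5, L=30, r0=1e-4):
    """Scattering phases a_l for a radial potential b(r), following Appendix A.

    Assumes each exterior partial wave has radial part (a_l H1_l + H2_l)/2, so that
    matching the log-derivative beta_l at r = R gives the formula below (an assumption).
    """
    a = np.zeros(L + 1, dtype=complex)
    for l in range(L + 1):
        def rhs(r, y):
            u, du = y
            return [du, -du / r + (l**2 / r**2 - (1.0 - b(r)) * kappa**2) * u]
        # Regular solution u_l ~ c r^l near the origin, started at small r0 > 0.
        y0 = [r0**l, l * r0 ** (l - 1) if l > 0 else 0.0]
        sol = solve_ivp(rhs, (r0, R), y0, rtol=1e-13, atol=1e-15)
        uR, duR = sol.y[0, -1], sol.y[1, -1]
        beta = duR / uR                                   # Robin constant at r = R
        num = kappa * h2vp(l, kappa * R) - beta * hankel2(l, kappa * R)
        den = kappa * h1vp(l, kappa * R) - beta * hankel1(l, kappa * R)
        a[l] = -num / den
    return a

# Example with a stand-in Gaussian bump potential (not the paper's exact choice).
phases = scattering_phases(lambda r: 0.5 * np.exp(-80.0 * r**2), kappa=40.0)
print(np.abs(phases))     # should be ~1 for a real-valued potential (flux conservation)
```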
The Uplifton
Almost all proposals to construct de Sitter vacua with a small cosmological constant involve flux compactifications with stabilized moduli. These give AdS vacua, which are uplifted to de Sitter by adding antibranes in certain regions of the compactification manifold. However, antibranes are charged, singular and interact nontrivially with other ingredients of the compactification; this can invalidate the de Sitter construction. In this Letter, we construct a new ingredient for uplifting AdS solutions to de Sitter, which is neutral, smooth and horizonless, and therefore bypasses some of the problems of antibrane uplift.
INTRODUCTION
The accelerated expansion of our Universe points to the existence of a positive vacuum energy. However, String Theory appears rather reluctant to provide fourdimensional solutions with a positive vacuum energy. Even the simplest solution with positive vacuum energy -de Sitter space -is very hard to construct. There are no-go theorems preventing the direct realization of such a space via a compactification with common ingredients [1]. The most popular scenario to bypass these theorems and construct de Sitter spaces with a small cosmological constant, proposed by Kachru, Kallosh, Linde and Trivedi [2], involves a rather intricate sequence of steps: one first stabilizes the complex-structure moduli by turning on topologically-nontrivial fluxes on the compactification manifold and then one stabilizes the Kähler moduli via nonperturbative effects. This results in an Anti de Sitter solution (with a negative cosmological constant) which should be "uplifted" to a de Sitter solution (with a positive cosmological constant) by placing D3 branes with negative charge (antibranes) in a region of large warping inside the compactification manifold.
This last step has been under a lot of intense scrutiny over the past fifteen years, because the D3 branes interact non-trivially with the fluxes used to stabilize the moduli [3]. This can result in tachyons, unexpected massless modes and runaways [4,5]. Furthermore, the interaction of the four-form field sourced by the antibrane with the other fields of the compactification can give rise to new flux components, that can affect the regime of validity of Kähler moduli stabilization [6].
The purpose of this Letter is to construct a new ingredient for uplifting the cosmological constant -an uplifton -which is neutral, smooth and horizonless, thereby avoiding some of the problems of antibranes. Furthermore, unlike antibranes [7], the uplifton cannot move around the compactification manifold.
At first glance, neutral solutions which have mass but no charge do not appear to be optimal ingredients for use in flux compactifications, because they naïvely source a metric that does not preserve Lorentz invariance in the spacetime directions. The best example is perhaps a nonextremal black D3 brane, which does have more mass than charge, but whose metric breaks the Lorentz symmetry along the spacetime direction. 1 However, as we will see, the non-extremal solutions we construct preserve this Lorentz invariance through a novel mechanism which involves the shrinking of certain compact directions.
To construct these solutions, we use the formalism that has been developed over the past few years by one of the authors and Bah [8][9][10][11][12][13]: the equations governing certain supergravity solutions with D − 2 commuting Killing vectors and suitable fluxes decompose into a set of Ernst equations, thereby admitting an integrable structure. This formalism has made it possible to obtain a plethora of solutions, describing both bound states of bubbles and black holes, as well as smooth horizonless solutions with multiple bubbles and topologically non-trivial fluxes. Some of these solutions are non-extremal and charged, but it is also possible to construct neutral solutions with opposite fluxes wrapping different cycles [12].
At first glance, the simplest way to construct an uplifton appears to be using smooth bubbles with D3-brane and anti-D3-brane charges. However, the mechanism by which the solutions of [12] carry charges involves topologically nontrivial cycles formed by the shrinking of at least one direction inside the brane worldvolume. Hence, bubbling solutions whose bubbles have D3 and anti-D3 charges break the SO(3, 1) Lorentz invariance.
Since we are looking for upliftons that one can add to Type IIB flux compactifications, the obvious step to bypass this problem is to use the technique of [12] to construct neutral solutions with D5 and D5 bubbles that preserve the SO(3, 1) invariance.
In this Letter we present the simplest of these solutions and their use in flux compactifications, leaving the details of their construction to a companion paper. Even if the upliftons are neutral, they have a nontrivial magnetic three-form field strength profile which is both positive and negative, so locally they have D5 and D5 charge corresponding to branes extending along (t, x 1 , x 2 , x 3 , x 4 , x 5 ), where x 5 must be compact. The orthogonal space is a U (1) fibration of a compact coordinate, y, over a three-dimensional base given in spherical coordinates (r, θ, φ). In a flux compactification, both x 4 and x 5 , together with y, r, θ, φ will be part of the compactification manifold. As we will show, regularity will impose certain periodicity constraints on the x 5 and y coordinates, so, unlike D3 branes, the uplifton will not be able to move inside the compactification manifold.
THE UPLIFTON SOLUTION
The six-dimensional spacetime transverse to the directions of the uplifton, (t, x 1 , x 2 , x 3 ), is made from three compact circles fibered over a three-dimensional space that becomes asymptotically R 3 . Using the technology of [12] we can build upliftons with an arbitrary number of bubbles, but for simplicity we will present here the simplest upliftons. They have two identical bubbles carrying opposite D5 charges, located at the opposite ends of an uncharged bubble (see Fig. 1). The solutions are determined in terms of three parameters (ℓ, m, q): ℓ and m are related to the mass, or energy, induced by the sources, while q is related to the amplitude of the D5 charges carried by the outermost bubbles.
Being a bound state of three sources, we introduce three sets of local spherical coordinates, centered around each bubble: , where we have defined 2σ and − 2σ to be the size of the outermost bubbles and the middle bubble respectively, and the distance (r These solutions exist when the parameters satisfy [12]: When the second bound is saturated, the outer bubbles degenerate into two singular five-brane sources of opposite charges. Hence, in this extreme regime, the solution can be thought of point-like D5 and D5 branes at the opposite ends of a bolt.
The string-frame type IIB uplifton solution is given by where the definitions of r 1 , r 2 , r 3 are in equation (1), and we have introduced the following warp factors and gauge potentials .
The spacetime is smooth and terminates at r = + 2σ. At this locus, either r 1 or r 2 or r 3 is zero, depending on the value of θ in terms of three intervals. These intervals are determined by the critical angle, θ c , defined as: For 0 ≤ θ ≤ θ c and π − θ c ≤ θ ≤ π, r 3 = 0 and r 1 = 0 respectively, such that x 5 degenerates at the origin. For θ c ≤ θ ≤ π −θ c , r 2 = 0 and the y coordinate degenerates. As we will see in the next section, these coordinate degeneracies correspond to smooth bolts only if x 5 and y are compact. 2 Thus, the maximal Lorentz invariance our solutions can preserve is SO(4, 1), when (x 1 , x 2 , x 3 , x 4 ) are infinite. However, we can also compactify one of these directions (x 4 for example) to obtain a more general solution which only preserves SO(3, 1) and which can be embedded in a flux compactification.
THE GLUING PROPERTIES
To analyse the regularity of the uplifton, it is useful to denote the periodicities of (y, x 4 , x 5 ) as R y , R x4 and R x5 : y = y + 2πR y , x 4 = x 4 + 2πR x4 , x 5 = x 5 + 2πR x5 .
(7) As we explained above, the shrinking of the y and x 5 coordinates at the origin (r = + 2σ) depends on the three θ-intervals 0 ≤ θ c ≤ π − θ c ≤ π. The local coordinates adapted for the first, second and third interval respectively, are (r 3 , θ 3 ), (r 2 , θ 2 ) and (r 1 , θ 1 ) (1). They allow us to write the constant-time slices of the metrics when each r i → 0 at the origin as where C i are constants that depends on ( , m, q). The periodic coordinate, X stands for x 5 when i = 1, 3 and for y when i = 2, while K 7 describes a smooth orthogonal space of topology S 3 ×S 1 ×R 3 when i = 1, 3 and S 2 ×T 2 × R 3 when i = 2. The bolt structure is explicit in terms of the radial coordinates ρ 2 i ≡ 4r i and the R 2 has no conical singularity if R 2 X = C i . This requires: Moreover, the three-form field strength is regular everywhere, and its integrals on the first and third bolts are equal and opposite. They give the D5 and D5 quantized charges carried by these bolts: where g s is the string coupling and l s is the string length. One can absorb the √ g s l s coefficient by expressing all length scales in units of √ g s l s . This is done by rescaling ( , m, q, σ, R X ) ≡ √ g s l s × (¯ ,m,q,σ,R X ).
Our solutions have three parameters and two regularity constraints; it is natural to choose the free parameter to be the quantized charge of the bubbles, N = N_D5 = N_anti-D5 = 2R̄_y q̄. The parameters of the regular solutions are therefore completely determined by the periodicities of y and x 5 at infinity and by the D5 quantized charge, and the validity bound (3) translates into a corresponding bound on N. The solutions have two simple limits. When N = 0, the flux of the solution is strictly zero, and the solution becomes a pure-gravity solution describing three collinear vacuum bolts. When N = R̄²_x5 the size of the outer bubbles becomes zero (σ = 0) and these bubbles degenerate to singular locally-supersymmetric D5 and anti-D5 branes on a vacuum bubble.
UPLIFTING WITH THE UPLIFTON
In order to use the uplifton for uplifting we have to embed it in flux compactifications, and compare its energy with that of other uplifting ingredients, such as fivebranes or anti-D3 branes. Our solution has three compact circles and one can consider adding it to a region of the compactification manifold where the geometry looks locally like a U (1) 3 fibration over R 3 . The mass of this solution is then completely determined by the size of the three U (1)'s in this region and by the quantized fivebrane and anti-five-brane charges of the bubbles.
To compute the mass, we reduce the uplifton along x 1,2,3,4,5 , y, and obtain a geometry with R 3,1 asymptotics: 3 where ds 2 3 is the three-dimensional base in the bracket of (4). The ADM mass per unit of spacetime volume (parameterized by x 1 , x 2 and x 3 ) is 4 This mass formula is very illustrative. First, we can see that the mass remains finite when N = 0. So the uplifton can be thought of as a topological soliton of mass to which one adds fluxes corresponding to D5 charges.
Remembering the factors of g s in the definition of R x4 ,R x5 andR y (11), we can see that the mass of this soliton is proportional to g −2 s , exactly as one expects for a gravitational soliton. Furthermore, since in this soliton 3 It is also possible to construct upliftons in which y is fibered over the R 3 base and the asymptotics is R 4,1 . 4 The relation between the four-dimensional and tendimensional Newton constants is G 10 = 8π 6 g 2 s l 8 the x 4 direction is not fibered, but x 5 and y are, the dependence of the mass onR x4 ,R x5 andR y when the other radii are kept fixed is (R x4 ) 1 , (R x5 ) 2 and (R y ) 2 , again as one expects. Furthermore, the second square root of (15) looks exactly like the mass of a bound state of a topological soliton with 2N objects of mass proportional to g −1 s . Hence, one may naïvely conclude that the side bubbles of the uplifton are bound states of D5 branes and topological solitons. However, this is not what happens: the ADM mass of 2N D5 branes wrapping x 1,2,3,4,5 is while the N R x5Ry limit 5 of the second square root in (15) gives a mass contribution proportional to NR x4Ry instead, Given that in this regime of parameters the growth of the soliton mass with N is linear, one may ask whether it is possible to have an uplifton with M flux > M BPS , which could lower its energy by tunneling emission of a D5 and a D5 brane. Using Equation (13), this does not happen, neither in the regime where the mass of the uplifton grows linearly with N , nor in any other regime of parameters.
Since our purpose is to use the uplifton to uplift the cosmological constant of a flux compactification in which supergravity can be trusted, the parametersR y ,R x4 and R x5 have to be large (they correspond to the extradimension sizes in units of √ g s l s ). One can check that in this regime of parameters the mass of the lightest uplifton (with N = 1) is necessarily heavier than the mass of two BPS D5 branes. However, as N increases, the mass of the uplifton can become smaller than the mass of 2N D5 branes. This happens because the binding energy of the brane and antibrane regions becomes of the same order as the energy of the branes
DISCUSSION
We have constructed the simplest example of an uplifton: a smooth solution that has three topologicallynontrivial cycles: a neutral one in the middle and two external ones with fluxes corresponding to D5 and D5 charges. The dependence of the uplifton mass on the charges and the size of the compact directions is exactly what one expects from a topologically-nontrivial solution with fluxes.
It is remarkable that our uplifton has exactly the same structure as the solution one might expect from the geometrical transition of D5 and anti-D5 branes studied in [14] (depicted in Figure 2 in that paper): the two-cycle wrapped by the branes shrinks at the two locations of the branes, giving rise to a flux-less topologically-nontrivial three-cycle between the branes. Furthermore, the three-cycles with positive and negative flux that, before the geometric transition, were shrinking at the position of the five-branes, now become large. Hence, the configuration of [14] should backreact into a three-bubble solution, with a neutral bubble in the middle and two equal and oppositely-charged ones on the sides. The only difference is that in our uplifton the three-cycles have an S 1 that is trivially fibered over an S 2 , while in a more general solution one may expect a more exotic fibration. As we mentioned in the introduction, the smoothness and neutrality of the uplifton make it a more controlled uplift ingredient than anti-D3 branes. However, in the regime of parameters where we have supergravity control (R̄_y, R̄_x4, R̄_x5 > 1) the uplifton is heavier than anti-D3 branes. Hence, in order to use it for uplifting one has to place it in a high-warp region of the compactification manifold. It would be interesting to establish whether this can be done using for example the Klebanov-Strassler throat [15].
The most important question that our analysis does not answer is whether the uplifton is perturbatively stable. The Kaluza-Klein bubbles that compose the uplifton are known to be unstable in vacuum [16], so this is a nontrivial possibility. However, as shown in [17][18][19], when these bubbles are wrapped by electromagnetic flux this instability can disappear. Furthermore, in the absence of fluxes it is possible that our bubbles can annihilate each other as it can happen for bound states of black holes and bubbles in vacuum [20]. It would be very interesting to explore these possibilities in future projects.
Comparative Study of Upper Limb Load Assessment and Occurrence of Musculoskeletal Disorders at Repetitive Task Workstations
This study explored the relationship between subjectively assessed complaints of pain in the arm, forearm and hand, and musculoskeletal load caused by repetitive tasks. Workers (n=942) were divided into 22 subgroups, according to the type of their workstations. They answered questions on perceived musculoskeletal pain of upper limbs. Basic and aggregate indices from a questionnaire on the prevalence, intensity and frequency of pain were compared with an upper limb load indicator (repetitive task index, RTI) calculated with the recently developed Upper Limb Risk Assessment (ULRA). There was relatively strong correlation of RTI and general intensity and frequency of pain in the arm, and general intensity and frequency of pain in the arm and forearm or prevalence of pain in the arm. Frequency and intensity of pain in the arm were weakly correlated. An aggregate indicator of evaluation of MSDs, which was calculated on the basis of the prevalence, intensity and frequency of pain, was to a higher degree associated with the musculoskeletal load of a task than basic evaluative parameters. Thus, such an aggregate indicator can be an alternative in comparing subjectively assessed MSDs with task-related musculoskeletal load and in establishing limit levels for that load.
Introduction
Musculoskeletal disorders (MSDs) are an important issue with increasing personal and socio-economic impact. Prevalence of long-lasting neck/shoulder pain in the general population has been reported to be between 14% and 25%, whereas of short-term pain even as high as 43% [1][2][3][4] . The back, neck, the shoulders and the upper limbs are the parts of the body most affected by MSDs 4,5) . There is an established association between prevalence of pain and work, with noticeable diversification by occupation 6) .
In explaining the relationship between work and MSDs, biomechanical load assigned to work tasks performed at workstations is meaningful 7,8) . An epidemiological study found increased risk of MSDs in high-intensity work [9][10][11] . Manual tasks, operating a machine and working on a production line in a factory are examples of high-intensity work. Such work imposes an increased number of highly repetitive movements, sometimes requiring significant force, too. According to many researchers, repetitive work poses a significant hazard and leads to the development of MSDs 12,13) . A repetitive task and the resulting workload can be defined with parameters related to posture, exertion of external forces and time sequences. Excessive workload causes the development of MSDs 14) .
Upper limbs are involved in most tasks, so this part of the worker's body is especially exposed to overload associated with the risk of developing MSDs 11,15) and it is mostly upper limbs that are affected by similar, repeatedly performed work tasks. MSDs, if not treated properly, could lead to work-related overload symptoms, a significant one of which is carpal tunnel syndrome (CTS). CTS can be observed especially in employees whose work requires repetitive actions, exertion of significant force and awkward posture of the wrist and hand 9,13) .
To avoid MSDs, work-related musculoskeletal load must be at an acceptable level, i.e., a level of load, which should not be exceeded. It can be determined with appropriate methods, which also determine the risk of a work task causing MSDs. Therefore, an established relationship between MSDs and upper limb load for a given type of repetitive task might be crucial in protecting workers against MSDs. A quantitative expression of both musculoskeletal load indicators and MSDs is crucial in comparing them and in establishing criteria for load indicators related to the risk of developing MSDs. MSDs are usually described with a few indicators, e.g., prevalence, intensity and frequency of pain in the past 12 months assessed on a visual scale. However, which of those three indicators is best related to musculoskeletal load at a workstation and can be used in determining risk criteria has to be established. In that respect, it is also important to determine how both musculoskeletal load and the occurrence of MSDs are assessed.
MSDs are mostly evaluated with questionnaires. The Dutch Musculoskeletal Questionnaire is one of the best and most recognized tools 16) . Musculoskeletal symptoms can also be well documented with the Standardized Nordic Questionnaire 6,17,18) or the Questionnaire for Subjective Symptoms of Fatigue 19) . Many studies use those questionnaires, either in full or in part, to assess the occurrence of MSDs in specific body parts 20) .
Musculoskeletal load associated with a workstation can be assessed with checklists 16,21,22) . However, very often are used methods that rely on describing the work process with parameters related to body posture, exerted forces and time sequences. Those methods focus on the work process without considering individual features of workers. The recently developed Upper Limb Risk Assessment (ULRA) method is an example 23) .
Exploring the relationship between work-related load and symptoms of MSDs, makes it possible to define that relationship for individual basic indicators (prevalence, intensity and frequency of pain). However, combined prevalence, intensity and frequency of pain may express better actual musculoskeletal load at the workstation and MSDs related to that load. Thus, they will constitute a good general indicator of MSDs. Multiplication or averaging those three indicators of MSDs, i.e., intensity, frequency and prevalence of pain, can create aggregate indicators. Such aggregate indicators could be used to explore the relationship between task-related musculoskeletal load and MSDs and help in establishing limit levels for work load.
The aim of this study was to (1) evaluate prevalence, intensity and frequency of upper limb pain complaints at various types of repetitive task workstations; (2) explore the relationship between subjectively assessed complaints of pain in the arm, forearm and hand with musculoskeletal load evaluated on repetitive task workstations and (3) propose aggregate indicators of the prevalence, frequency and intensity of pain as alternative indicators of MSD symptoms, which could serve as global MSD indicators in comparing MSDs and task-related musculoskeletal load.
Types of examined workstations
Analysis covered 22 types of repetitive task workstations characterized by cycle time (CT), the number of cycle phases (k), duration of cycle phases as well as upper limb posture and forces present during each cycle phase. CT was exactly the same for both limbs, whereas the number of phases of cycle could be different for the limbs, which was the case at 11 workstations. For each workstation upper limb load was assessed with ULRA. ULRA evaluates musculoskeletal load and risk of developing MSDs of the upper limbs with the value of the repetitive task indicator (RTI), which is a function of parameters related to cycle time (CT), number of phases (k) and integrated cycle load (ICL; i.e., the sum of k products of relative cycle phase forces multiplied by the duration of cycle phase and divided by cycle time). This method is described in detail in Roman-Liu 23) , whereas analysis of the workstations can be found in Roman-Liu et al 24) .
The 22 workstations can be grouped into assembly tasks, sewing, packing and surveillance, control and packing, installing and using comparably heavy mechanical tools (Table 1). Table 1 also presents parameters describing work load characteristics obtained with ULRA. It shows values of the integrated cycle load (ICL) and the repetitive task indi-cator (RTI) for the left and the right upper limbs.
Subjects
A total of 942 workers participated in the study. They were divided into 22 subgroups according to the type of their workstation (Table 2). Participation in the study was voluntary. Participants signed informed consent. The Commission of Scientific Research Ethics at the Central Institute for Labour Protection approved the protocol and methods of the study.
Analyses
The workers answered questions on perceived musculoskeletal pain of the upper limbs. Questions were related to subjective assessment of pain of the upper limbs in the area of arms, forearms and wrists/hands. The questionnaire consisted of two parts: one related to the general diagnosis of upper limb MSDs in arms and forearms, the other to the specific diagnosis of symptoms of CTS.
The part of the questionnaire on MSDs in the arms and forearms consisted of four instructions, with the subjects marking their answers on a 100-mm visual analogue scale (VAS): Please use the scale to indicate the intensity of pain in your arms in the past 12 months. Please use the scale to indicate the frequency of pain in your arms in the past 12 months. Please use the scale to indicate the intensity of pain in your forearms in the past 12 months. Please use the scale to indicate the frequency of pain in your forearms in the past 12 months.
From this part of the questionnaire six basic indicators were obtained: percentage of workers experiencing pain in the arm (Pa), percentage of workers experiencing pain in the forearm (Pf), intensity and frequency of pain in the arm and forearm in the population of workers who experienced pain (Ia, Fa, If, Ff). To simplify and standardize measures of individual indicator, prevalence was expressed as a decimal fraction. Intensity and frequency measures, which correspond to values on VAS, were accepted as dimensionless.
To analyse global complaints, aggregate indicators which express basic indicators jointly, were analysed, too (Fig. 1a). Aggregate indicators of prevalence of pain (Paf), its intensity (Iaf) and frequency (Faf) were obtained after averaging those indicators for the arm (a) and the forearm (f). Moreover, average intensity and frequency of pain were calculated for the arm (IFa) and the forearm (IFf). Those indicators produced information on the population of only those workers who reported upper limb pain. To obtain data related to the general worker population, those indices were multiplied by the prevalence of pain, expressed as a fraction. In this way, three more aggregate indices were obtained, i.e., severity of pain in the arm (PIFa), in the forearm (PIFf) and in the whole upper limb (PIF). Figure 1b presents sample calculations of aggregate indicators.
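As an illustration of how the aggregate indicators are formed from the basic ones, the Python sketch below follows the description above. The study does not spell out the exact formula for PIF, so computing it as the mean of PIFa and PIFf is an assumption, and the numbers in the example are invented.

```python
def aggregate_indicators(Pa, Pf, Ia, Fa, If_, Ff):
    """Aggregate MSD indicators from the basic ones (cf. Figure 1a).

    Pa, Pf: prevalence of arm/forearm pain (fractions); Ia/Fa and If_/Ff: mean
    intensity and frequency of pain (VAS) among workers reporting pain.
    """
    Paf = (Pa + Pf) / 2          # prevalence, arm and forearm averaged
    Iaf = (Ia + If_) / 2         # intensity, averaged
    Faf = (Fa + Ff) / 2          # frequency, averaged
    IFa = (Ia + Fa) / 2          # intensity-frequency for the arm
    IFf = (If_ + Ff) / 2         # intensity-frequency for the forearm
    PIFa = Pa * IFa              # severity of pain in the arm, whole population
    PIFf = Pf * IFf              # severity of pain in the forearm
    PIF = (PIFa + PIFf) / 2      # global upper-limb severity (assumed form)
    return dict(Paf=Paf, Iaf=Iaf, Faf=Faf, IFa=IFa, IFf=IFf,
                PIFa=PIFa, PIFf=PIFf, PIF=PIF)

# Example: a workstation with 35% arm pain (intensity 42, frequency 30 on the VAS)
# and 40% forearm pain (intensity 38, frequency 33); values are illustrative only.
print(aggregate_indicators(Pa=0.35, Pf=0.40, Ia=42, Fa=30, If_=38, Ff=33))
```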
The other part of the questionnaire assessed symptoms of CTS. CTS is a specific MSD. In medical examinations, CTS can be diagnosed by the occurrence of increased pain and numbness at night. The workers were asked about pain in the wrist/hand and about increase in pain and numbness at night. There were also questions on relief brought by a change in the position of the wrist. Therefore, the part of the questionnaire on CTS consisted of the following questions: Do you experience increased pain at night? Does a change in the position of the hand/wrist position decrease pain? Do you experience increased numbness at night? Does numbness decrease after you change the position of your wrist?
Indices which relate to CTS symptoms express the percentage of workers who experience an increase in pain at night (IP) and the percentage of workers who found that a change in position decreased pain (CP). Moreover, in the case of numbness there were questions and indices of increased numbness at night (IN) and decreased numbness after a change in the position of the body (CN). The average of those four measures gave an indicator of CTS. Even though those basic indicators express the percentage of workers who answered positively, they are treated as dimensionless.
Results
Figure 2 shows prevalence of pain in the arm and forearm in the workers divided according to their workstations. There was quite a strong variation in the results for workers at individual workstations, from 0.02 (Programming option 2) to 0.56 (Sewing-leather) complaints of pain in the arm, and from 0.03 (Controller) to 0.68 (Scanning) of pain in the forearm. At most workstations the prevalence of pain was 0.30-0.45. Table 3 shows the intensity and frequency of pain in the arm and forearm experienced in the past 12 months. It also shows significant differences at individual workstations, both in intensity and frequency of pain. Table 3 presents mean values only of those workers who answered positively to the question regarding pain, i.e., who had experienced pain. The mean values multiplied by the percentage of workers with arm or forearm pain reflect the global indicator related to both the percentage of workers with musculoskeletal problems, and the intensity and frequency of those problems, for the arm (PIFa) and for the forearm (PIFf). Figure 3 presents the percentage of workers who answered positively to the questions on pain in the wrist/hand and numbness increasing at night, and on whether a change in the position of the wrist decreased pain and numbness in the hand/wrist. Those results, too, indicate a diversity among the workstations: at some workstations the percentage was lower than 1, at others it even exceeded 16. Generally, pain was less frequent than numbness. Symptoms of CTS related to workstations were more frequent at those workstations at which MSDs were more frequent, too. Figure 4 illustrates values of the total index (PIF), CT and average RTI for the left and right upper limbs for all types of workstations. To make comparison possible, PIF and CTS indicators were multiplied by a constant value. Figure 4 gives a general overview of the relationship between indicators of musculoskeletal load and MSDs. Generally, for lower RTI, PIF and CTS were lower, too.
Correlation coefficients were calculated for values of RTI and MSD indicators for the 22 workstations (Table 4). The correlations were statistically significant. The strongest correlation of RTI was for general intensity and frequency of pain in the arm (PIFa), in forearm (PIFf), as well as for general intensity and frequency of pain in the arm and forearm (PIF). There was weak correlation in case of frequency of pain in the arm (Fa).
The differences between PIF and the other indicators of MSDs were tested with Friedman ANOVA and post hoc tests. The results showed there were no differences between PIF and PIFa, PIFf, Paf or Pf. There were differences between PIF and all the other indicators.
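A schematic of this statistical comparison is given below with synthetic data. The study does not state which correlation coefficient was used, so Pearson correlation is shown only for illustration, and the arrays stand in for the per-workstation indicator values.

```python
import numpy as np
from scipy.stats import pearsonr, friedmanchisquare

# Hypothetical arrays, one value per workstation type (22 in the study).
rng = np.random.default_rng(0)
RTI  = rng.uniform(1, 10, 22)                  # upper limb load indicator per workstation
PIF  = 0.5 * RTI + rng.normal(0, 1, 22)        # aggregate MSD indicator (illustrative)
PIFa = 0.4 * RTI + rng.normal(0, 1, 22)
Fa   = 0.2 * RTI + rng.normal(0, 2, 22)

# Correlation of the load indicator with each MSD indicator across workstations.
for name, vals in [("PIF", PIF), ("PIFa", PIFa), ("Fa", Fa)]:
    r, p = pearsonr(RTI, vals)
    print(f"RTI vs {name}: r = {r:.2f}, p = {p:.3f}")

# Friedman test for differences between PIF and the other (related) indicators.
stat, p = friedmanchisquare(PIF, PIFa, Fa)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
```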
Fig. 2. Prevalence of pain in the forearm (Pf) and pain in the arm (Pa). Prevalence was expressed as a decimal fraction.
Discussion
The results of this study showed that the prevalence of MSDs according to the type of repetitive task workstation ranged broadly from 3 to 61% for pain in the arm and from 3 to 68% for pain in the forearm. Other study on repetitive industrial work reported the prevalence of symptoms among industrial workers of 50% in the neck/ shoulders and 22% in the elbows/hands 25) . Assessment of MSDs of supermarket cashiers showed the prevalence of neck and shoulder disorders of 60-70% 26) . In this case the average number of registered items was 0.15 moves per second with the average force of 0.85 kG. This is close to the characteristics of the tasks of the left upper limb at Ironworker, Welder and Packer TVs workstations in the present study. Prevalence of pain at those workstations was about 45% for the first two and about 35% when Packer TVs is considered, too. This is lower than for cashiers in Rissén et al. 26) study. The discrepancies suggest that it is not enough to consider repetition of movements and averaged load only to fully characterize work tasks; more detailed parameters are necessary for a full assessment. Work posture as well as the duration and load of each cycle phase are also important for musculoskeletal load. More advanced assessment methods could also consider parameters related to personal characteristics (e.g., gender and age).
Differences among the examined workstations were also noticed in the CTS indicator. Hand and wrist symptoms and signs were prevalent in a car factory where Zetterberg and Öfverholm 27) studied subjective complaints in 564 car assembly workers: 57% of the females and 37% of the males reported them. Assembly work was repetitive, with a cycle time of up to two minutes. Sewing − leather and Sewing headrest in the present study had a similar cycle time. CTS symptoms at those workstations were at the level of about 12%, which was more similar to Leclerc et al. 13) results. According to Leclerc et al. in different companies with repetitive industrial work (assembly line, clothing and shoe industry, food industry) 11.8% of workers had CTS associated with repetitive work. The differences in the prevalence of MSDs in various studies can result from the fact that the repetitive tasks at the workstations varied in time sequences, forces and upper limb postures. Quintana and Hernandez-Masser proved the relationship between MSDs and those factors 14) .
The present study showed a statistically significant correlation between workload and both the basic and the aggregate indicators of MSDs. When the aggregate indicators of pain in the arm (PIFa), forearm (PIFf), and arm and forearm combined (PIF) were compared with RTI, the correlation coefficients were higher than when the basic indicators were compared. The basic parameters referred to prevalence, intensity and frequency separately. Since the assessment was subjective, it could have been associated with inaccuracies; the responses were subjectively biased. However, the assessment became less sensitive to this bias when a combination of all three measures was used, and the differences between workstations were amplified when the aggregate indicators were considered. That may be why the correlation between RTI and the aggregate PIF indicator was the best, with the greatest correlation coefficient. This suggests that the approach used in this study can be useful in comparing task-related musculoskeletal load with MSDs and, in this way, can help in establishing limit levels of load.
However, it is important to note that all correlation coefficients were between 0.40 and 0.65 and all of them were significant. This means that the differences in correlation coefficients between the basic and aggregate indicators are not large, which does not strongly support the supposition that the aggregate indicators proposed in this paper, especially the general intensity and frequency of pain in the arm and forearm (PIF), give a much better overview of the severity of upper limb load associated with work tasks. Even if the correlation coefficient for PIF is not much higher than for the basic indicators, it is a single global indicator, which can be recommended as suitable for comparing work-related musculoskeletal load with the risk of developing MSDs.
The main outcome of the study is the demonstrated correlation between the occurrence of MSDs and the upper limb load indicator (RTI). This means that the occurrence of MSDs can be predicted by assessing the musculoskeletal load involved in performing work tasks, and that assessing musculoskeletal load on the basis of the characteristics of work tasks with a method like ULRA can help prevent injury. If the assessed musculoskeletal load carries a high risk of MSDs, the workstation, the work process or both can be modified to decrease that risk.
However, there was not full agreement between the MSD indicators and RTI in all cases. The differences between RTI and MSDs, as well as the differences between various studies on the development of MSDs at repetitive task workstations, may have several causes.
Firstly, MSDs were subjectively assessed; therefore, the results were strongly influenced by the subjective component. The study by Engstrom et al. 28) showed that, in general, self-reported physical exposure had only a few significant associations with musculoskeletal symptoms. Questionnaires show about 50% higher prevalence of pain in various areas of the body than physical examinations 15).
Secondly, MSDs are multifactorial and although in explaining the relationship between work and MSDs biomechanical load has been assigned the main role 7,8) , other factors are meaningful, too. Thus, there is no full convergence between RTI and MSD indicators. For example, numerous studies have shown psychosocial aspects to be risk factors for the development of MSDs 29-31) with a poor psychosocial situation resulting in higher reports of MSDs 26,32,33) .
Psychosocial factors are subjective; their assessment is not included in methods that rely on parameters that define tasks. Monitoring musculoskeletal load during work tasks is possible with so-called external and internal load assessment methods. The external load assessment method used in the present study relies on parameters related to body posture, exerted forces and time sequences. It considers the work process only, not factors such as age, gender or psychosocial characteristics. Mental demands as well as individual factors are reflected in internal load assessment methods, which register the reaction of the worker's body to external load. Heart rate, blood pressure or muscle activity registration (electromyography) register not only work-related load, but also individual characteristics of a worker's body and mental load [34][35][36] . Psychosocial and individual factors could have also influenced the occurrence of MSDs at the workstations examined in the present study. However, workers' internal load, which would take into account also those factors, was not assessed in the present study. Therefore, only strictly biomechanical parameters the same for each worker, were considered in assessing musculoskeletal load. This may account for the differences obtained when comparing MSDs and musculoskeletal load at the workstations.
Even when considering external load assessment methods only, the selection of the method for assessing upper limb load is important. In the present study, ULRA was chosen. SI (Strain Index) 37) and OCRA (Occupational Repetitive Actions) 38) are two other methods for assessing upper limb musculoskeletal load. OCRA is the best known method for evaluating the musculoskeletal load of the upper limbs caused by repetitive tasks, and the risk of developing MSDs. It only considers movements of the arms below shoulder level and focuses on movements of the forearms without differentiating the load caused by the position of the arms. Both SI and OCRA describe repetitive tasks with codes, which assign a single value to a whole range of values of a given measure. This means the load indicator changes in steps. ULRA gives analogue results, which allow better comparisons than methods that assess musculoskeletal load with digitalized codes. The reliability of the assessment of upper limb musculoskeletal load presented in this study has been confirmed by another study that reported convergence of OCRA and ULRA 24).
External load assessment methods do not distinguish musculoskeletal load and risk of developing MSDs in relation to the gender or age of the working population. Because gender and age influence the development of MSDs, that fact could have influenced the results presented in this paper. Eighty-six percent of the population covered by the study were female, which means that the obtained relationship refers mostly to women.
Even though convergence between MSDs and RTI has been documented and the proposed MSD indicator proves the best convergence with upper limb load, this study has some limitations: in assessing upper limb load only bio-mechanical factors are considered, not psychosocial and individual ones. However, if RTI considered individual factors, e.g., gender and age, this method would analyse not only external but also to some extent internal load, which could result in better convergence between upper limb load and MSD indicators. Another limitation can result from the questionnaire, which considered MSD and CTS symptoms averaged for both upper limbs, and not the left and the right ones individually. However, as repetitive work usually engages both upper limbs and as a comparison of RTI for the left and right upper limbs showed very small differences only, this limitation can probably be disregarded as not very relevant to study results.
Conclusion
This study demonstrated a relationship between upper limb musculoskeletal load, assessed on the basis of factors describing repetitive tasks with parameters related to upper limb posture, force and time sequences, and the occurrence of MSDs. A relationship was found both when the prevalence of pain in the arm and forearm was considered and when the intensity and frequency of pain were analysed. The aggregate indicators proposed in this study, especially the general intensity and frequency of pain in the arm and forearm (PIF), may provide a good overview of existing MSDs. As global indicators, they can be a good alternative for comparing subjectively assessed MSDs with task-related musculoskeletal load and for establishing limit levels of that load.
Immune response to SARS‐CoV‐2 in children: A review of the current knowledge
ABSTRACT Host immune responses to severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2), especially in children, are still under investigation. Children with coronavirus disease 2019 (COVID‐19) constitute a significant study group of immune responses as they rarely present with severe clinical manifestations, require hospitalization, or develop complications such as multisystem inflammatory syndrome in children (MIS‐C) associated with SARS‐CoV‐2 infection. The deciphering of children’s immune responses during COVID‐19 infection will provide information about the protective mechanisms, while new potential targets for future therapies are likely to be revealed. Despite the limited immunological studies in children with COVID‐19, this review compares data between adults and children in terms of innate and adaptive immunity to SARS‐CoV‐2, discusses the possible reasons why children are mostly asymptomatic, and highlights unanswered or unclear immunological issues. Current evidence suggests that the activity of innate immunity seems to be crucial to the early phases of SARS‐CoV‐2 infection and adaptive memory immunity is vital to prevent reinfection.
Filippos Filippatos, Elizabeth-Barbara Tatsi, Athanasios Michos

…compared to the broader community.10 The incidence rate of pediatric COVID-19 cases ranges between 1%-5% of all COVID-19 cases worldwide. However, this rate is likely to be underestimated, given the high proportion of underdiagnosed mildly symptomatic and asymptomatic cases.11 Although most pediatric COVID-19 cases are asymptomatic or mild,12 infants and children with underlying medical conditions often require hospitalization to prevent life-threatening complications.13,14 Severe postinfection clinical manifestations in children include the multisystem inflammatory syndrome in children (MIS-C), which resembles Kawasaki disease (KD) or toxic shock syndrome (TSS) and develops weeks or months after the onset of COVID-19 symptoms.15 To date, limited data regarding SARS-CoV-2 immune responses in children have been published, mainly due to their milder phenotype or the asymptomatic presentation of undiagnosed cases. The aim of this review is to present current evidence regarding innate, humoral, and cellular immune responses to SARS-CoV-2 infection in children, including the novel MIS-C associated with COVID-19, and to compare it with immunological data in adults.
Mucosal immune response
As SARS-CoV-2 initially infects the upper respiratory tract, the mucosal immune response of the nasopharynx, including tonsils and adenoids, is activated. 16 IgA antibodies play a safeguarding role in mucosal immunity of the upper and lower respiratory tract by eliminating viral replication and reducing the risk of reinfection. 17,18 There are three heterogenous molecular forms of IgA immunoglobulin: secretory, monomeric, and polymeric. 16 Secretory IgA is dimeric, whereas circulating IgA is monomeric. 16 Secretory IgA significance in lymphoid tissues cannot be ignored. In a recent study of 173 participants, including patients with COVID-19 and a healthy group, approximately 15%-20% of seronegative patients with mild disease had detectable IgA antibodies with neutralizing activity in various mucosal sites, including tears, nasal fluid and saliva. 19 A statistically significant negative correlation between IgA titers and age was detected. 19 Large seroepidemiological studies have shown that IgA levels increase with age, with the highest levels being encountered in adolescence. 20 It has already been established that the elevation of circulating IgA antibody levels in hospitalized adult COVID-19 patients have been associated with worse prognosis and increased fatality rates. 19,21 However, Gruber et al 22 highlighted the crucial role of mucosal immunity in SARS-CoV-2 by investigating its role in 9 MIS-C patients. Interestingly, in MIS-C patients, IgA antibody titers remained elevated in the convalescent phase of the disease with comparable levels to the acute phase. 22 There was also a notable accordance between gastrointestinal clinical manifestations, mucosal immune dysregulation via IL-17A stimulation, and mucosal chemotaxis via CCL20 and CCL28 activation. 22 Children and adolescents are characterized by increased bronchus-associated lymphoid tissue (BALT), which is often activated by infections, but it is rarely encountered in adults. 23 Since children with COVID-19 usually have mild clinical manifestations, without excluding severe complications from the respiratory tract, the role of BALT in SARS-CoV-2 infection and disease progress remains elusive and requires further investigation. 16
Innate immunity
The first line of defense against pathogens is the innate immunity. The cells of the innate immune system are able to identify specific molecular patterns (pathogen-associated molecular patterns, PAMPs) of microorganisms through various proteins named as pattern recognition receptors (PRRs) [ Figure 1, (1)]. 24 These PAMPs, such as viral RNA and proteins are recognized by the transmembrane PRRs, toll-like receptors (TLRs), which are expressed in the innate immune system cells, such as monocytes, macrophages, epithelial cells, neutrophils and dendritic cells [ Figure 1, (1)]. [25][26][27] There are many TLRs that play an important role in COVID-19 pathogenesis, including TLR2, TLR3, TLR4, TLR6, TLR7, TLR8, and TLR9, mainly activated by the stimulation of the proinflammatory cytokines IL-1, IL-6, and TNF-α. 28 Recent in silico studies showed that SARS-CoV-2 spike (S) protein, after being bound to angiotensinconverting enzyme 2 (ACE2) receptor, activates a signaling inflammatory pathway through stimulating TLRs, especially TLR1, TLR4, and TLR6. 29 PRR activation stimulates the secretion of cytokines, such as the following interleukins IL-1, IL-6, and IL-18 [ Figure 1, (2)]. Type I/III interferons (IFNs) induced by PRRs are considered to play a crucial role in early-onset antiviral defense in SARS-CoV-2 and other viral infections [ Figure 1, (3)-(4)- (5)]. Interestingly, certain type I IFN subtypes, IFN-β1a and IFN-β1b were proved to interfere effectively in SARS-CoV in vivo inhibition and SARS-CoV-2 in vitro confrontation. 30,31 However, SARS-CoV-2 and other coronaviruses have invented several evasion mechanisms by inhibiting PRR stimulation and IFN signaling. 32 Previous studies have found the IFN production blockade via antagonism of type I and type III IFN responses in several SARS-CoV patients [33][34][35][36] and comparable escape strategies have also been described in SARS-CoV-2 infection. 37 Patients with severe COVID-19 are characterized by suppressed type I and III IFNs. 38,39 In vitro, SARS-CoV-2 infection presents an increased susceptibility to type I IFN compared to SARS-CoV. 37 Studies have focused on the importance of the IFN pathway in relation to the clinical severity and treatment options of COVID-19. 38,39 Zhang et al 40 investigated the contribution of genetic variants of genes encoding molecules involved in the IFN pathway and found that patients with life-threatening COVID-19 carried rare variants in 13 loci that lead to loss-of function and therefore impaired IFN pathway. Bastard et al 41 also found an impaired Type I IFN signaling pathway in 10% of adult patients with life-threatening SARS-CoV-2 infection as they had autoantibodies against Type I IFNs, and especially against IFN-α and ΙFN-ω, which lead to the inability of the immune system to fight the infection.
Stimulation of peripheral proinflammatory cytokines, neutrophils, and monocytes/macrophages in the lower respiratory tract has been reported in symptomatic COVID-19 patients, but the role of dendritic cells in COVID-19 remains elusive. 42,43 Strong innate immune responses are in a positive feedback loop with proinflammatory cytokines. This cytokine cascade is associated with increased severity rates in adults not only in SARS-CoV-2, but also in SARS and MERS. 44 Despite the limited spread of SARS and MERS compared to SARS-CoV-2, only a few cases of severe clinical manifestations in children and adolescents have been reported. 45 SARS is a relatively mild respiratory illness in young children, and the most common clinical manifestations included cough, chills, fatigue, vomiting, diarrhea, rhinorrhea, and respiratory distress. 45 There have been 14 pediatric patients with confirmed MERS-CoV infection, which accounts for 2% of total reported MERS-CoV cases. 45 However, only two of them died: a 2-year-old child with cystic fibrosis 46 and a 9-month-old with congenital nephrotic syndrome. 47 In a large-scale study from China, a correlation between disease severity and elements of innate immunity was investigated. For this purpose, 182 children (≤16 years old) positive for SARS-CoV-2 were recruited. An increase in serum procalcitonin (PCT), IL-2, IL-4, IL-6, IL-10, TNF-α as well as a decrease in complement C3 was revealed in children with SARS-CoV-2 pneumonia. 48,49 Pediatric patients are characterized by robust innate immune responses due to the high production of NK cells compared to older patients. 50 In children, NK cells produce IL-17A in the early stages of the infection, a cytokine which plays an immunological protective role in the development of lung disease. 51 Gruber et al 22 investigated the immune profiles of 9 MIS-C patients and found reduced numbers of NK lymphocyte subsets in peripheral blood, mainly due to extravasation and chemotaxis processes to affected tissues. NK cell recruitment was stimulated by many cytokines and chemokines, including CCL19, CXCL10, and CDCP1. 22
Humoral immunity
After SARS-CoV-2 infection, the majority of patients develop detectable serum antibodies against the receptor-binding domain (RBD) of the viral S protein with neutralizing and non-neutralizing activity. 52,53 Antibodies produced by B- and T-cell immune mechanisms recognize SARS-CoV-2 antigen epitopes beginning 4-8 days after the onset of COVID-19 symptoms. These responses consist of an initial immunoglobulin M (IgM) antibody production, followed by an increase of IgA and IgG in the circulation. 54 Until recently, most seroepidemiological studies have referred to adult patients, mainly in the acute phase of the infection. For this reason, the exact duration of the SARS-CoV-2 antibody response in children requires further investigation. This duration probably depends on the severity of the infection and the viral load to which the individual was exposed. 59 There are indications that support the presence of humoral responses 3-5 months after the onset of symptoms. [60][61][62] Recent studies showed that less than 50% of adult patients developed neutralizing antibodies (NAbs) in the acute phase of infection, with a median time of 36 days, but all of them had detectable antibodies in the convalescent phase. There are studies indicating that NAb production is largely dependent on the exposure to viral load and the severity of the disease. 63 The duration of NAbs was investigated in mildly symptomatic adult patients and was estimated to be at least 3 months after the onset of symptoms. 64 Children often experience robust antibody production within the first 3 weeks post infection, with an estimated seroconversion time to IgG antibodies in the first week. Additionally, there is an increase in IgG-specific B-cell rates in children with SARS-CoV-2, indicating a rapid and effective humoral immune response. 65,66 In a seroprevalence pediatric study (n = 208), children aged 10-16 years had 2-3 times higher positive antibody titers than children under 10 years old. 67 Possible explanations for the enhanced innate immune response and humoral activity are a) the decreased expression of the ACE2 receptor in the upper respiratory epithelium of younger children and b) antibody cross-reactivity with other common cold CoVs, to which older children are more frequently exposed. 67 In a large population study involving approximately 2000 children and adolescents (5-21 years), a seropositivity of 1.35% was detected. Notably, 46.2% of the seropositive children were asymptomatic. 68 It was found that the titer of antibodies was low compared to that of adults, even though 92.3% of children produced antibodies with neutralizing capacity, which may protect them from reinfection. However, the duration of these neutralizing antibodies remains unknown. 68
Cellular immunity
T-cell responses play an important role in the elimination of viral infections either by directly neutralizing infected cells or by instrumenting immunological memory.
It has been shown that immunological memory against other coronaviruses, like SARS, is so strong that it can even last for several years. 69 Despite the decreased serum antibody levels in patients who recovered from SARS or MERS, cell-mediated immunity was still present at least 10 years after infection. 70 SARS-CoV-2 specific CD4+ and CD8+ T-cells were detected in 100% and 70%, respectively, of COVID-19 convalescent adult patients against various viral epitopes, including spike (S), nucleocapsid (N) and membrane (M) proteins. 71 Detection of SARS-CoV-2 reactive CD4+ T-cell responses in unexposed individuals suggests a cross-reactive recognition of common antigen epitopes between SARS-CoV-2 and other CoVs. 72,73 SARS-CoV-2 causes lymphopenia not only by directly inducing apoptosis of lymphocytes, thymus suppression, and bone marrow impairment, but also through the redistribution of T lymphocytes from the peripheral blood circulation into affected organs. 74 In patients with moderate and severe COVID-19, migration of dendritic cells, monocytes, and lymphocytes from peripheral tissues to the lower respiratory system was associated with increased inflammatory markers, augmented intensity of radiographic findings in the lungs and poor clinical outcome of the disease. 75,76 In contrast to other viruses, the SARS-CoV-2 virus suppresses T-cell activity via induction of severe lymphopenia and exhaustion of T cells, suggesting an important impairment in the immunoregulatory arm of the adaptive immune response. 77 Adults with severe SARS-CoV-2 infection usually present with decreased lymphocyte counts, including total numbers of CD4+, CD8+, and T regulatory cells, which is indicative of a poor disease outcome. 78,79 In a cohort study of 452 patients with laboratory-confirmed COVID-19 in Wuhan, of whom 50% presented with severe disease, Qin et al reported a statistically significant (P-value 0.04) decrease in (CD3+, CD4+, CD25+, CD127low+) T regulatory cells (3.7/μl vs. 4.5/μl) in severe adult COVID-19 cases. 78 Chen et al 80 showed that the CD45RA+ T-regulatory cell frequencies were 0.5% in severe and 1.1% in moderate COVID-19 cases with a P-value of 0.02, but the total number of Treg cells did not show statistically significant differences (4.7% vs. 3.9%; P-value 0.92). Consequently, the correlation between the number of Treg cells and COVID-19 requires further investigation.
Lymphopenia is a marker of severe disease in children with COVID-19 as well, even though it is rarely encountered. Kosmeri et al 81 suggest that lymphopenia is infrequently documented in children possibly due to the immature immune system and decreased ACE2 expression compared to adults. In a recent meta-analysis, lymphocytosis and leukopenia were the most common laboratory abnormalities, encountered in 22% and 21% of hospitalized pediatric patients respectively. 82 In contrast to adults who mainly express Th1, CD25+ and IFN-γ inflammatory response, children are characterized by Th2 and Th17 immune responses. 51,83 Although children are frequently exposed to CoVs, immunological memory and neutralizing antibody production against SARS-CoV-2 are limited due to their reduced lifetime exposure to antigens. 84
Multisystem inflammatory syndrome in children
In April 2020, several reports from Europe, Canada, and the USA described a rare, novel clinical phenotype that shared similar clinical characteristics with incomplete Kawasaki syndrome (KS) or TSS. This condition was named MIS-C and was regarded as a severe complication of COVID-19 in children. 15 Even though many of these patients meet the criteria for complete or incomplete KD, there are many differences among epidemiological aspects, including age of onset and ethnicity. 85 Notably, many affected children did not have positive reverse transcription polymerase chain reaction (RT-PCR) for SARS-CoV-2 at the moment of MIS-C diagnosis, but they had positive seroimmunology testing. 86 These findings support the hypothesis of a late-onset abnormal immune response that occurs days or weeks after acute infection. Comparable deregulated immunophenotype and immune responses have previously been described in KD, macrophage activation syndrome (MAS), and cytokine release syndrome. 87 Immune responses that occur in MIS-C differ from KD and MAS, but the exact mechanisms stimulated by SARS-CoV-2 infection remain unknown. 88,89 Notwithstanding, recent data support that an increase in IL-17 as well as T-cell activation can distinguish patients with KD from those with MIS-C. 90 A recent study described the immunological differences among the pediatric population with MIS-C and SARS-CoV-2 positive adults presenting with mild disease and acute respiratory distress syndrome (ARDS). 90 In contrast to adults, children with MIS-C produce predominantly anti-S IgG but low amounts of anti-N IgG antibodies. 90 Compared to adults with ARDS, children with MIS-C expressed a lower number of antibodies accompanied by less neutralizing activity. 90 Consequently, significantly lower titers of anti-N IgG and neutralizing activity were identified in children compared to adults regardless of the severity, such as MIS-C. 90 There was also a positive correlation between anti-S-RBD IgG and the onset of clinical manifestations and a negative correlation between age and neutralizing activity in children without MIS-C. [90][91][92] In a study, which included 127 children with pneumonia, an increase of IL-10 and decreased levels of CD4+ CD25+ T lymphocytes, NK and CD4+/CD8+ T cell ratios were detected. 93 Patients with MIS-C had elevated levels of IL-6, TNF-α, and IFN gamma-induced protein 10 (IP-10) in serum as well as enhanced antibody-dependent cellular phagocytosis (ADCP) activity. 51 The increase of IL-17A, produced by CD4+ T cells, CD8+ T cells, gammadelta T cells, invariant NK T-cells, innate lymphoid cells and neutrophils, implies a possible protective role in the development of lung disease. 51 Peripheral immunophenotyping performed in children with MIS-C showed significant neutrophilia, lymphopenia with T-cell exhaustion, and elevation of cytokines, including IL-1β, IL-6, IL-8, IL-10, IL-17 and IFN-γ, which were present only in the acute phase of MIS-C. 94 Additionally, compared to children with severe COVID-19, children with MIS-C have a higher proportion of TNF-α and IL-10. 95
Why is COVID-19 less severe in children?
The rapid progression of COVID-19 pandemic has attracted the interest of the scientific community worldwide. The majority of patients that require hospitalization are elderly with comorbidities and fatal COVID-19 case rates are remarkably lower in pediatric populations. Several hypotheses have been proposed to explain those age-related differences in disease severity.
SARS-CoV-2 enters the human body mainly through ACE2 receptor and transmembrane protease serine 2 (TMPRSS2) in the nasopharyngeal cells. 96,97 Disease severity, as well as COVID-19 specific symptoms such as loss of taste and smell, probably depends on ACE2 and TMPRSS2 quantitative expression in the respiratory tract, renal, gastrointestinal and cardiovascular systems. 96,97 This age-dependent expression is significantly lower in children compared to adults (Table 1). 98 Pediatric patients have less travelling, hospitalization and workplace exposure rates, as well as fewer pulmonary and extrapulmonary comorbidities that are associated with severe COVID-19 in adulthood. 11,99 Comorbidities include chronic obstructive pulmonary disease, endothelial injury, hypercoagulopathy, heart failure, hypertension, diabetes, obesity, malignancy, chronic kidney disease or medications that may increase the risk of the severity of illness, such as ACE inhibitors or angiotensin II receptor blockers (ARBs). [100][101][102] Older adults are characterized by reduced innate and adaptive immune responses that result in decreased viral clearance. 72,103 Previous exposure to other HCoVs, mainly those that cause the common cold, leads to preexisting cross-reactive specific viral T-cell responses, even in individuals who have never been exposed to SARS-CoV-2 before. 71,72,104 The higher proportion of memory cells in adults and the absence of naive T cells, which are abundant in young children, may potentially contribute to the massive T cell-derived cytokine release mainly observed in adults with ARDS. Although children are frequently exposed to human coronaviruses, their T-cell responses and neutralizing activity are lower compared to adults. 105 There are many mechanisms that are usually encountered in adults and affect the immune responses of the host, including antibody-dependent enhancement (ADE), macrophage hyperstimulation, and cytokine storm. ADE is a mechanism usually promoted by viral infections or vaccination, 106 such as dengue virus, SARS-CoV, and MERS-CoV vaccination efforts, 107,108 in which neutralizing antibody production is stimulated by previously circulating viral particles of serotypes. 109,110 Korber et al 111 described a specific mutation in the RBD of SARS-CoV-2 S1 subunit (D614G), which is rapidly progressing worldwide. Beretta et al 112 suggested that a possible effect of this mutation is a potential inducer of ADE, since it shares a common linear epitope of the SARS-CoV spike which is located close to RBD and might influence the interaction between RBD and ACE2 receptor. Even though Zhou et al 113 described the presence of anti-RBD antibodies that induce ADE of viral entry in Raji cells through the Fcγ receptor-dependent mechanism, there are also some RBD epitopes that were associated with only neutralizing activity in the absence of ADE effect. These data suggest that there are different, nonoverlapping RBD epitopes regarding neutralization and ADE. 113 Notably, a recent case report of a 25 year-old man with symptomatic SARS-CoV-2 reinfection might be attributed to ADE mechanism. 
114 An explanation for ADE after exposure to SARS-CoV-2 was hypothesized by Zimmermann et al, proposing a possible lifetime exposure to HCoVs makes the elderly vulnerable to ADE, due to the high levels of nonneutralizing cross-reactive antibodies, resulting in severe clinical manifestations and poor outcome of the disease, compared to children that are characterized by a lower level of pre-existing non-neutralizing antibodies. 115 Until recently, there are no specific data that demonstrate ADE in children. However, a model of MIS-C has been proposed by Ricke et al in which ADE activation of mast cells in children with COVID-19 is hypothesized. 116 Degranulation of mast cells with Fc receptor-bound SARS-CoV-2 antibodies leads to an hyperinflammatory response, which results in increased histamine levels, upregulated prostaglandin E2 (PGE2), leading to increased risks of coronary artery aneurysms. 116 In conjunction with immune complex formation by binding to IgG Fcγ receptors, 117 macrophages are hyperactivated and clinical manifestations of the disease become more severe as a result of inflammatory response exacerbation. Monocytes differentiate to macrophages at the site of inflammation after monocyte chemoattractant protein-1 (MCP-1) exposure. 118 It seems possible that adults have decreased L-selectin and increased CD11b, 119,120 both of which result in irregular macrophage migration.
Cytokine storm is defined as the abnormal systemic immune reaction associated with increased levels of inflammatory cytokines and activation of T lymphocytes and macrophages. This deregulated cytokine release and uncontrolled systemic inflammatory response result in elevated IL-1β, IL-2, IL-6, IL-7, IL-8, IL-10, granulocyte colony stimulating factor (GCSF), MCP-1 and TNF-α, with multiple organ dysfunction, such as in MIS-C (Table 1). 80,121,122 It has recently become clear that cells of the innate immune system can be trained by past infections, vaccinations, or microbial components to enhance immune responses to future triggers. 123,124 This effect is known as "trained immunity", which has been shown after bacille Calmette-Guérin (BCG), measles and oral polio vaccination or certain infections. 123,125 Given that there are children who have never been immunized against tuberculosis, trained immunity may not be explained by BCG vaccination. Zimmermann et al 115 hypothesize that, since the majority of vaccinations are implemented early in childhood, trained immunity mechanisms may contribute to a more efficient inhibition of viral replication and explain the age-related differences across the COVID-19 clinical spectrum. However, their role in SARS-CoV-2 infection still requires further investigation, especially after the first vaccination. 126 Deficiencies of certain micronutrients, such as vitamin D, zinc, and selenium, have been associated with more severe disease in observational studies. 127,128 Vitamin D has a significant role in the prevention of viral infections by suppressing their replication via various immunoregulatory pathways. 129 Even though there is growing interest in the role of vitamin D in the immune response during SARS-CoV-2 infection, no clear evidence that vitamin D supplementation reduces the risk or severity of COVID-19 has been established. Vitamin D levels are usually lower in the elderly, due to inadequate supplementation. 130 However, vitamin D is often routinely supplemented to infants younger than 1 year in many countries. 115 In a randomized trial from Brazil, 244 patients with moderate COVID-19 disease were divided into two groups, evaluating the effect of a single high dose of vitamin D3 versus placebo. 131 Even though admission to the intensive care unit (ICU) and need for mechanical ventilation were both decreased (16.0% vs. 21.2% and 7.6% vs. 14.4%, respectively), these differences were not statistically significant (P-values 0.3 and 0.09, respectively). 131 However, it is notable that Lau … A retrospective study showed that children with COVID-19 had lower vitamin D titers compared to children without COVID-19. 133 Among children with SARS-CoV-2 infection, there was a negative correlation between vitamin D levels and fever as well. 133 A recent review suggests that vitamin D levels could be used as a predictive biomarker of MIS-C and suggests that correction of abnormal vitamin D levels in severe MIS-C may favorably affect the outcome of the disease. 134
Questions to be answered
The paramount interest of scientific investigation focuses on the severe forms of COVID-19 disease, most of which are predominantly occurring in the elderly with certain underlying medical conditions. However, the surprisingly effective immune responses against SARS-CoV-2 in children raise reasonable questions for future research in both innate and adaptive arms of immunity. Notably, there are significant differences in clinical severity depending on gender, age, ethnicity, or comorbidities, but the exact mechanisms are only partially understood.
The role of the innate immune response in effective viral clearance in the early stages of the disease in children requires further investigation to identify measurable immunological biomarkers that are predictive of disease severity, response to treatment and convalescence. The different clinical manifestations of the disease do not reflect the exact degree to which humoral and cellular immune responses contribute, nor the effective elimination of infection, nor the protection from reinfection, a condition already described in adult patients.
Since the beginning of the SARS-CoV-2 pandemic, now more than 12 months ago, limited data have been published regarding the exact duration of antibody production, antibody kinetics, T-cell memory, and the factors that contribute to T-cell activation or exhaustion in children. Although non-neutralizing antibody production has been well described in children with COVID-19 across a wide range of clinical manifestations, the exact proportion and duration of neutralizing antibodies remain unclear. The immunological mechanisms that contribute to MIS-C also need to be clarified, including the key role of T-regulatory cells, which assist in the establishment of protective immunity and immunological homeostasis.
Conclusion
This review summarizes the latest knowledge of the innate and adaptive immune responses against SARS-CoV-2 in both children and adults. Deciphering the immune mechanisms activated by SARS-CoV-2 infection, as well as detecting prognostic immuno-biomarkers related to disease severity, age and sex, will contribute to confronting or preventing severe clinical manifestations and reinfection. Children are an important group for understanding these mechanisms as the majority of them are asymptomatic or mildly symptomatic. So far, however, there is limited knowledge regarding the immune responses in children and further research is required.
Evaluation of the phytochemical content, antimicrobial and antioxidant activity of Cocos nucifera liquid smoke, Garcinia mangostana pericarp, Syzygium aromaticum leaf, and Phyllanthus niruri L. extracts
Abstract Background and Aim: Many plants contain bioactive substances with antibacterial and antifungal properties. The aim of this study was to evaluate the antibacterial and antifungal activity of Cocos nucifera shell liquid smoke (CSL), clove leaf extract (CLE), and mangosteen pericarp extract (MPE) alone and in combination against Escherichia coli and Candida utilis. The antioxidant activity and the phenol, saponin, and tannin contents of CSL, CLE, MPE, and Phyllanthus niruri L. extract were also measured. Materials and Methods: The agar well-diffusion method was used to determine the antibacterial and antifungal activities of CSL, methanolic MPE, and CLE and their combination CSL+MPE+CLE (COMBI) against the bacterium E. coli and the fungus C. utilis. Antioxidant activity was measured by the diphenylpicrylhydrazyl method. Total phenol and total tannin were measured by the Folin–Ciocalteu method and total saponin was measured by the vanillin-sulphate method. Results: The results indicated that phenolic and tannin levels were greater in MPE than in CLE, whereas the saponin content was higher in CLE compared with MPE. Undiluted (100%) MPE exhibited lower antibacterial activity (p<0.05) than chloramphenicol against E. coli, whereas undiluted CLE and COMBI showed activity similar to chloramphenicol against E. coli. COMBI caused significantly (p<0.05) higher inhibition compared with virginiamycin against E. coli. CSL, MPE, and COMBI exhibited significantly lower antifungal activity (p<0.05) than ketoconazole against C. utilis. In contrast, CLE showed greater antifungal activity (p<0.05) than ketoconazole. Conclusion: Cocos nucifera liquid smoke, Garcinia mangostana pericarp extract, and Syzygium aromaticum leaf extract, either alone or in combination, have the potential to be used as antibacterial and antifungal agents.
Introduction
Agricultural feed additives are materials that do not contain nutrients but are intended to increase productivity, the quality of livestock products (meat, eggs, milk, and fur), feed efficiency, and the immunity of livestock against disease. In some countries, feed additives that are widely used in the livestock industry belong to a class of antibiotics known as antibiotic growth promoters (AGPs). AGPs are added to minimize the population of pathogenic microbes in the digestive tract. In general, the provision of AGPs increases the growth of chickens by approximately 3.9% and feed efficiency by 2.9% [1]. Since January 2018, however, the use of AGPs has been banned in Indonesia to avoid detrimental health effects to consumers, such as allergies and an increasing number of drug-resistant microorganisms. European countries have banned the use of AGPs since 2006 [2]. Likewise, other countries, such as South Korea and the United States, have begun to limit AGP use. In 2019, more than 2.8 million people in the US suffered from infections with antibiotic-resistant bacteria, and 35,000 people died from them [3]. There is a concern that this situation will worsen if the use of AGPs is not controlled. In Indonesia, Noor and Poeloengan [4] reported that the use of antibiotics in animals increased the resistance of Campylobacter and Salmonella bacteria to fluoroquinolone antibiotics and third-generation cephalosporins. Antibiotics are generally given at a low dose (subtherapeutic level) of approximately 10-50 ppm [1]. Before AGPs were banned in 2018, feed production in Indonesia was 18.2 million tons and the use of AGPs was calculated to be around 182-910 tons/year. This amount would increase as feed production grows every year, and with it the danger associated with antibiotic resistance. Since the use of AGPs is now banned in Indonesia, an alternative to AGPs is needed that is safer for livestock and the consumers of livestock products. In Europe, plant extracts have been used as a substitute for AGPs [5]. Indonesia, as a tropical country, has a variety of plants containing bioactive compounds which may potentially serve as antibacterial and antifungal agents, such as Psidium guajava roots and leaves [6], cashew shell extract [7], mangosteen extract [8], Plumeria rubra flower and leaf extract [9], Phyllanthus niruri L. extract [10], and clove (Syzygium aromaticum) leaf extract [11].
We previously evaluated bioactive compounds from 12 plants and found three plant extracts that exhibit antibacterial, antifungal, and antioxidant properties (i.e., cashew nut shell liquid smoke, clove leaf extract [CLE], and mangosteen pericarp extract [MPE]) [12]. The combination of P. niruri L. and clove leaf extracts had the same ability to inhibit the growth of E. coli as the AGP Zn bacitracin [13]. Although P. niruri extract exhibited the highest antioxidant activity, it is difficult to obtain in sufficient quantity as it is a wild, non-cultivated plant. MPE also exhibited high antioxidant activity [12] and is easier to obtain in large quantities. Other reports have shown that Cocos nucifera shell liquid smoke (CSL) also possesses antibacterial activity [14]. Since coconut shell is abundant in Indonesia, the production of its liquid smoke is easier and cheaper than that of Anacardium occidentale shell liquid smoke. Therefore, a series of experiments was designed to produce a feed additive with antibacterial, antifungal, and antioxidant properties, consisting of the combination of CSL, MPE, and CLE (COMBI), as an alternative to AGPs.
The aim of this study was to evaluate the phytochemical content and the antibacterial and antifungal activity of CSL, mangosteen (G. mangostana) pericarp extract, clove (S. aromaticum) leaf extract, and P. niruri L. extract (PNE), and their combinations, against Escherichia coli and Candida utilis.
Ethical approval
Ethical approval was not required for this study. All the experiments were performed in vitro.
Study period and location
The study was conducted from October 2020 to March 2021 in Microbiology Laboratory of Indonesian Research Institute for Animal Production, Bogor, West Java, Indonesia.
Preparation of extract and liquid smoke
Mangosteen (G. mangostana) pericarp was collected from Purworejo, Central Java, and clove leaves (S. aromaticum) and P. niruri L. were collected from Bogor, West Java. These plants are very common in Indonesia and the public can identify them easily on the basis of gross characteristics; the plants were identified by the authors. The rind of the mangosteen was washed with water, drained, then cut into small pieces of approximately 0.5-1 cm². The mangosteen pericarp, clove leaves, and P. niruri L. were dried in an oven at 60°C for 4-5 days, ground using a hammer mill, then screened using laboratory sieve No. 50 to obtain a powder with a particle size of 300 microns.
One hundred and sixty grams of mangosteen pericarp, clove leaf, and/or P. niruri L. powder were mixed with 1440 mL of absolute methanol, placed on a shaker for 4 h, left to stand overnight at 26°C, and centrifuged at 10,000 rpm for 10 min at 4°C. The solution was filtered through filter paper and evaporated using a rotary evaporator at 40°C. This process produced the MPE, CLE, and PNE. Cocos nucifera shell liquid smoke (CSL) was produced according to the procedure described by Pasaribu et al. [14].
A mixture of the three compounds, that is, CSL: MPE:CLE was made at a 1:1:1 ratio and this 100% stock mixture was designated an undiluted combination stock (COMBI).
Antioxidant assay
The antioxidant activity of two ingredients, MPE and PNE, was measured. The antioxidant capacity was determined based on the percentage of diphenylpicrylhydrazyl (DPPH) radical inhibition and Vitamin C was used as the standard. DPPH was measured according to the method described by Rusmana et al. [15] with minor modification. Briefly, 1 mL of extract solution was diluted with sterile distilled water at various dilutions (8000; 12,000; 16,000; and 20,000 times) and mixed with 2 mL DPPH methanolic solution. The mixture was vortexed and left to stand in the dark for 30 min and the absorbance was measured at 517 nm. The radical scavenging activity was calculated by the following formula: Scavenging % = (Ac−As)/Ac×100 Where, Ac is the negative control absorbance (without sample) and As is the sample absorbance.
The results were presented as IC 50 values (sample concentration required to inhibit 50% radicals). The lower the IC 50 value, the higher the antioxidant activity of the sample [16].
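The scavenging formula and the IC50 interpolation described above can be expressed in a few lines of code; the following sketch uses invented absorbance and concentration values for illustration only.

```python
# A small sketch of the DPPH calculation described above: scavenging % at
# several dilutions and an IC50 estimated by linear interpolation.
# All absorbance and concentration values are invented for illustration.
import numpy as np

A_control = 0.820                                  # DPPH without sample (Ac)
A_sample = np.array([0.610, 0.480, 0.350, 0.230])  # As at increasing concentration
conc = np.array([0.02, 0.04, 0.06, 0.08])          # sample concentration, uL/mL (hypothetical)

scavenging = (A_control - A_sample) / A_control * 100
print("Scavenging %:", np.round(scavenging, 1))

# IC50: concentration at which scavenging reaches 50% (linear interpolation)
ic50 = np.interp(50.0, scavenging, conc)
print(f"IC50 ~ {ic50:.3f} uL/mL")
```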
Phytochemical determination
Phytochemical (total phenol and saponin) content was measured for CSL, MPE, CLE, and PNE.
Determination of total phenol
Total phenolic content was determined in CSL, MPE, CLE, and PNE using the Folin-Ciocalteu method [17]. Briefly, 0.2 g of each sample was diluted into 10 mL of acetone (70%) in a test tube, vortexed, and placed into an ultrasonic bath cleaner at −5°C for 20 min. The sample was diluted up to 50 times, 0.25 mL of Folin-Ciocalteu reagent and 1.25 mL of Na2CO3 (20%) were added, mixed by vortexing, and allowed to stand for 40 min. The absorbance of the solution was read on a spectrophotometer at a wavelength of 725 nm.
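A typical way to convert the measured absorbance into a total phenolic value is via a calibration curve of a reference standard; the text does not name the standard, so gallic acid is assumed in the sketch below and all numbers are illustrative.

```python
# Sketch of converting Folin-Ciocalteu absorbance to total phenolic content
# via a standard curve. The reference standard is not named in the text;
# gallic acid is assumed here, and all numbers are hypothetical.
import numpy as np

std_conc = np.array([0, 25, 50, 100, 200])        # standard concentrations, mg/L
std_abs = np.array([0.02, 0.15, 0.29, 0.57, 1.12])  # absorbance at 725 nm
slope, intercept = np.polyfit(std_conc, std_abs, 1)  # linear calibration curve

sample_abs = 0.44                                  # measured sample absorbance
dilution_factor = 50                               # sample diluted up to 50 times
sample_mg_per_L = (sample_abs - intercept) / slope * dilution_factor
print(f"Total phenolics ~ {sample_mg_per_L:.0f} mg/L (as standard equivalents)")
```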
Saponin determination
Saponin content was measured in CSL, MPE, CLE, and PNE according to Hiai et al. [18]. Vanillin reagent (1.6 g) was added to 20 mL of absolute ethanol in a test tube and stirred. Plant extract (0.25 mL) was pipetted into a test tube, and 0.25 mL of fresh vanillin reagent and 2.5 mL of 72% H2SO4 (kept cold) were added. The mixture was heated in a water bath for 10 min at 60°C and cooled. The absorbance of the mixture was measured at a wavelength of 544 nm. Diosgenin saponin was used as the reference standard.
Preparation of E. coli and C. utilis
E. coli was inoculated onto nutrient agar (NA) media and incubated overnight for the antibacterial tests. C. utilis was inoculated onto potato dextrose agar (PDA) media and incubated for 5 days for the antifungal test. The turbidity of the E. coli and C. utilis suspensions was measured using a spectrophotometer at a wavelength of 620 nm and adjusted to match a 0.5 McFarland standard.
Antibacterial assay
The antibacterial test was performed by the agar well-diffusion method as described by Das et al. [19]. MPE, CLE, and COMBI were tested on NA media to measure the zone of inhibition against E. coli. E. coli was inoculated by spreading approximately 1 mL of suspension with a cell count of 10⁸ cells/mL onto NA media, then leaving it for 4-5 min. The remaining liquid was pipetted off and discarded. Four wells of 6 mm diameter were perforated into the agar medium with a sterile cork borer (6 mm) and each well was filled with 60 µL of plant extract (MPE/CLE/COMBI) using a micropipette under aseptic conditions. MPE and CLE were diluted with sterile distilled water to different concentrations, that is, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, and 100% (no dilution). An antibiotic (chloramphenicol, 18 ppm) was used as a positive control and sterile distilled water was used as a negative control. COMBI was also diluted to a range of concentrations (6.25%, 12.50%, 25%, 50%, 75%, and 100%). Two positive controls were established (i.e., virginiamycin 40 ppm [K+1] and chloramphenicol 30 ppm [K+2]).
Antifungal assay
The antifungal test was carried out by the agar well-diffusion method as described by Ubulom et al. [20]. CSL, MPE, CLE, and COMBI were tested on PDA media to measure the zone of inhibition against C. utilis. C. utilis was inoculated by spreading approximately 1 mL of suspension with a cell count of 10⁸ cells/mL onto PDA media, then leaving it for 4-5 min. The remaining liquid was pipetted off and discarded. Four wells of 6 mm diameter were perforated into the agar medium with a sterile cork borer (6 mm) and each well was filled with 30 µL of plant extract (MPE/CLE/COMBI) using a micropipette under aseptic conditions. MPE and CLE were diluted with sterile distilled water to establish a range of concentrations (10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, and 100%). The antifungal ketoconazole (30 mg/mL) was used as a positive control and sterile distilled water was used as a negative control. COMBI was also diluted to a range of concentrations (10%, 20%, 40%, 60%, 80%, and 100%).
Overall, the properties measured in this study included antioxidant activity; phytochemical analysis (total phenols, saponins, and tannins) of MPE, CLE, PNE, and COMBI; antibacterial activity of MPE, CLE, and COMBI against E. coli; and antifungal activity of CSL, MPE, CLE, and COMBI against C. utilis.
Statistical analysis
All data were statistically analyzed using a one-way analysis of variance (ANOVA). Duncan tests were performed if the ANOVA showed significant differences (p<0.05). Assays were replicated 4 times and the values are expressed as the mean±standard error.
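As an illustration of this workflow, the sketch below runs a one-way ANOVA on invented inhibition-zone values (n = 4 per treatment). The study used Duncan's post hoc test, which is not available in the common Python statistics libraries, so Tukey's HSD is shown only as a stand-in.

```python
# Sketch of the statistical workflow described above: one-way ANOVA followed
# by a post hoc comparison. Inhibition-zone values are invented for illustration.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

zones = {
    "MPE":   [11.5, 12.0, 11.8, 11.7],
    "CLE":   [13.2, 12.8, 13.1, 12.9],
    "COMBI": [19.6, 19.4, 19.3, 19.7],
}

f_stat, p_val = stats.f_oneway(*zones.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

if p_val < 0.05:
    # Post hoc pairwise comparison (Tukey's HSD as a stand-in for Duncan's test)
    values = np.concatenate(list(zones.values()))
    groups = np.repeat(list(zones.keys()), 4)
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```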
Antioxidant activity
Antioxidants are substances that can prevent or slow down damage to cells of organisms resulting from free radicals. A commonly used antioxidant is ascorbic acid (Vitamin C). IC 50 values are commonly used as an indicator of antioxidant activity, which is the concentration of a substance that reduces DPPH free radicals by 50%. The lower the IC 50 value, the higher the antioxidant activity [16].
The results showed that the IC 50 value of G. mangostana pericarp extract was lower (0.05 µL/mL) compared with that of PNE (0.10 µL/mL) and the phenol content of G. mangostana pericarp extract was higher (12.09%) than that of PNE (3.78%) ( Table-1). However, our previous results [12] indicated that the antioxidant capacity of PNE was slightly higher than that of MPE. The phenol content of MPE (12.09%) was higher than PNE (3.78%). This indicates that the phenol content exhibits a positive correlation with antioxidant activity. Plants that are rich in phenolic compounds are known to be good sources of natural antioxidants [21].
Phenol, saponins, and tannins in COMBI at different dilution ratios were also measured to determine whether dilution had an effect on the phytochemical compounds. The phytochemical compounds of COMBI at a range of 10-100% showed that the higher the concentration, the higher the content of total phenols, saponins, and tannins (Table-3). Saponin content was higher compared with that of phenols and tannins in the COMBI. At all COMBI concentrations, the highest phytochemical content was saponins followed by phenols and tannins. This is in accordance with the levels in CSL, MPE, and CLE individually (Table-2) in which saponin levels were higher than phenols and tannins.
Antibacterial activity of MPE and CLE
The antibacterial activity of MPE and CLE individually at different dilutions against E. coli was determined (Table-4) and illustrated in Figures-1 and 2. The inhibition zone resulting from MPE and CLE is shown by the clear areas. The results indicated that the dilution ratio significantly (p<0.01) affected the zone of inhibition against E. coli. There was no clear zone when MPE was diluted from 1:9 to 4:6, but a clear zone appeared when MPE was present at higher concentrations (dilution ratios of 5:5 to undiluted). The zone of inhibition of undiluted MPE was significantly lower (p<0.05) than that of chloramphenicol (18 ppm), with clear zones of 11.75 mm and 13.75 mm, respectively (Table-4). The zone of inhibition of MPE against E. coli was not significant (p>0.05) when MPE was diluted with distilled water (Table-4). This indicates that MPE at a concentration of 50-100% was able to inhibit the growth of E. coli, with an inhibition zone of 9.75-11.75 mm. The results of this study are similar to those of Jacob et al. [30], who found the inhibition zone of G. mangostana pericarp extract to be 11 mm. However, Permata et al. [31] reported a higher inhibition zone of MPE at 15.8 mm.
The inhibition zone for CLE at a dilution ratio of 7:3 to undiluted was not significantly different (p>0.05) from that of chloramphenicol (18 ppm) against E. coli, with clear zones of 11.8-13.0 mm versus 13.5 mm, respectively (Table-4). Unlike MPE, CLE exhibited growth inhibition of E. coli at lower concentrations, from a dilution ratio of 1:9 to undiluted, with an inhibition zone of 8-13 mm. This result indicates that CLE had similar activity to 18 ppm chloramphenicol. The inhibition zone in this study was lower than that reported by Ramadhani et al. [27] for the ethanol fraction of CLE (16.07 mm against E. coli) but similar to that of the n-hexane fraction (13.61 mm). CLE at a concentration of 10-100% produced an inhibition zone against E. coli of 6.3-15.8 mm [11]. CLE at a concentration of 25% has also been reported to inhibit the growth of Salmonella typhi with an inhibition zone of approximately 16.90 mm [32].
At 100% concentration, the zone of inhibition of MPE against E. coli was approximately 11.75 mm, whereas that of CLE was around 13 mm (Table-4) and that of CSL was around 22.88 mm (in press). This indicates that CLE is stronger than MPE, but CSL was still more active than CLE against E. coli.
Bioactive substances (α-mangostin, eugenol, and acetic acid) kill bacteria by damaging the cell membrane as a result of the reaction between phenolic compounds and cell wall phospholipids. As a result, the permeability of the cell membrane is disrupted, which inhibits mRNA function and bacterial development [33]. The acetic acid in CSL, which is protonated at low pH, permeates the lipid bilayer of the cell wall and releases protons into the intracellular environment [34]. Thus, more damage to the cell walls occurs compared with the eugenol of CLE and the α-mangostin of MPE. This indicates that acetic acid has a greater ability to inhibit the growth of pathogenic E. coli bacteria.
Antibacterial activity of COMBI
The antimicrobial activities of COMBI against E. coli at different concentrations are shown in Table-5 and Figure-3. The higher the COMBI concentration, the greater the zone of inhibition. Increased concentrations of plant extracts cause greater cell membrane damage, so the inhibition zone is wider [35]. The wider zone of inhibition for 100% COMBI reflects a greater destructive power because of the higher amount of bioactive substances penetrating into E. coli cells.
COMBI at a concentration as low as 6.25% was still able to inhibit the growth of E. coli, with an inhibition zone of 6.13 mm. COMBI at 100% concentration equaled the inhibition zone of 30 ppm chloramphenicol against E. coli, with values of 19.50 mm and 19.00 mm, respectively. COMBI was also more effective (19.50 mm) than 40 ppm virginiamycin (6.25 mm) against E. coli. This indicates that a mixture of CSL, MPE, and CLE is able to inhibit the growth of pathogenic bacteria. The CSL inhibition zone at 100% concentration was approximately 22.88 mm (in press), whereas it was 11.75 mm and 13 mm for MPE and CLE, respectively. This shows that the inhibition zone of CSL used alone was still greater than that of COMBI, whereas MPE and CLE used individually exhibited inhibition zones that were smaller than that of COMBI.
The reduced effectiveness of COMBI (the combination of the three plants) occurred because MPE at a concentration of 10-40% could not inhibit the growth of E. coli, whereas CSL and CLE at a concentration of 10% inhibited the growth of E. coli. The main chemical component of CSL is acetic acid, whereas it is α-mangostin in MPE and eugenol in CLE, which indicates that the combination could still inhibit E. coli growth. These findings also indicate that the bioactive substances in COMBI, which predominantly damage the cell membrane of E. coli, are acetic acid and eugenol.
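The minimum effective concentrations quoted above (for example, 6.25% for COMBI and 10% for CLE against E. coli) follow from scanning each dilution series for the lowest level that still produces a measurable clear zone. A minimal R sketch of that bookkeeping is shown below; the table, its values, and the rule that any measurable clear zone counts as inhibition are illustrative assumptions, not the study's raw data.

```r
# Hypothetical inhibition-zone table: extract, tested concentration (%), and
# mean clear-zone diameter (mm). Values are illustrative placeholders only.
zones <- data.frame(
  extract       = rep(c("CSL", "MPE", "CLE", "COMBI"), each = 3),
  concentration = rep(c(10, 50, 100), times = 4),
  zone_mm       = c(8.0, 15.0, 22.9,   # CSL
                    0.0,  9.8, 11.8,   # MPE
                    8.0, 11.0, 13.0,   # CLE
                    6.1, 14.0, 19.5)   # COMBI
)

# Treat any measurable clear zone as growth inhibition and report the lowest
# tested concentration per extract that still inhibits E. coli.
min_effective <- aggregate(concentration ~ extract,
                           data = subset(zones, zone_mm > 0),
                           FUN  = min)
print(min_effective)
```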
Antifungal activity of CSL, MPE, and CLE
The antifungal activity of CSL, MPE, and CLE on C. utilis is shown in Table-6. CSL at a concentration of 100% was significantly (p<0.05) less effective than ketoconazole at inhibiting the growth of C. utilis (7.50 mm vs. 16.75 mm) ( Table-6). However, CSL was still able to inhibit the growth of C. utilis at a concentration of 80-100%, yielding a zone of inhibition of 6.50-7.50 mm. The predominant component, acetic acid, from CSL damages the cell walls of the C. utilis fungus so that its growth is disrupted. Using the microplate reader method, CSL inhibited the growth of C. utilis by up to 86.8% [14]. In contrast, there was no antifungal effect of CSL on Candida spp. [36]. This indicates that the antimicrobial activity of CSL was more effective on E. coli than C. utilis.
Similarly, MPE at a concentration of 100% was significantly (p<0.05) less effective than ketoconazole at inhibiting the growth of C. utilis (12.50 vs. 14.75 mm) ( Table-6). However, MPE at a concentration of 20-100% was able to inhibit the growth of C. utilis, with an inhibition zone of approximately 8.0-12.50 mm. The inhibition zone for MPE against C. utilis was not significantly (p>0.05) different at concentrations ranging from 40% to 90%. The effect of MPE on the inhibition zone of C. utilis was minimal at a concentration of 20%, which indicates that if the concentration was below 20%, MPE would be ineffective at inhibiting the growth of C. utilis. According to Geetha et al. [8], the inhibition zone of mangosteen fruit extract against C. utilis was 9 mm. Data on the effect of MPE on C. utilis have not yet become available.
The inhibition zone for CLE at 100% concentration was significantly (p<0.05) greater than that of ketoconazole (21.25 mm and 16.0 mm, respectively) (Table-6). CLE at a concentration of 10% was able to inhibit the growth of C. utilis with an inhibition zone of approximately 6.38 mm. CLE has been reported to inhibit the growth of Candida albicans, and clove oil exhibits robust antifungal activity [37,38]. Data on the effect of CLE on C. utilis are still limited.
The results showed that CLE exhibited the highest growth inhibitory effect against C. utilis when compared with MPE and CSL. The bioactive substance, eugenol, in CLE was more effective than α-mangostin from MPE and acetic acid from CSL to damage the cell walls and disrupt the growth of C. utilis fungi.
Antifungal activity of the CSL, MPE, and CLE combination
The COMBI inhibition zone at 100% concentration was significantly lower (p<0.05) than that of ketoconazole (Table-6); however, COMBI was able to inhibit C. utilis (9.5 mm) at a minimum concentration of 40%. At a concentration of 80-100%, the COMBI inhibition zone (11.5-11.75 mm) was significantly (p<0.05) greater than at a concentration of 40%. When each extract was tested against C. utilis individually, the minimum concentration that produced an inhibition zone was 80% for CSL, 20% for MPE, and 10% for CLE. This indicates that the effectiveness of CSL, MPE, and CLE was greater individually than in combination against C. utilis. Furthermore, α-mangostin from MPE and eugenol from CLE had an important role in inhibiting the growth of C. utilis in the COMBI treatment. This indicates that, for inhibiting C. utilis growth, the plant extracts are more active when used individually than as a mixture.
Apart from being antibacterial and antifungal, phenols exhibit other properties, including antioxidant activity. The mechanism through which bioactive substances kill bacteria is similar, namely, by damaging cytoplasmic cell walls and nucleotides and disrupting the cell membrane to inhibit DNA and protein synthesis [39,40]. These results indicate that the combination of MPE, CLE, and CSL could be used as an antifungal treatment to inhibit the growth of C. utilis.
Conclusion
The antioxidant activity of MPE was greater compared with that of CLE. Phenolic and tannin compounds were higher in MPE than in CLE, whereas the saponin compound was higher in CLE. Undiluted (100%) MPE exhibited a significantly lower antibacterial activity than chloramphenicol against E. coli; however, undiluted CLE and COMBI exhibited antibacterial activity similar to that of chloramphenicol. COMBI at a low concentration (6.25%) showed antibacterial activity similar to that of 40 ppm virginiamycin; however, the antifungal activity of COMBI occurred at a concentration >40%. In summary, C. nucifera liquid smoke, G. mangostana pericarp extract, and S. aromaticum leaf extract, either alone or in combination, have the potential to be used as antibacterial and antifungal treatments.
|
2021-12-01T16:09:11.593Z
|
2021-11-01T00:00:00.000
|
{
"year": 2021,
"sha1": "1cdf4716aeb0e9c8d82a87ac3c99f6a892958dbd",
"oa_license": "CCBY",
"oa_url": "http://www.veterinaryworld.org/Vol.14/November-2021/27.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "90acf170129713e6e36b82b0877ff8d6eeac2957",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
1724041
|
pes2o/s2orc
|
v3-fos-license
|
An omniphobic lubricant-infused coating produced by chemical vapor deposition of hydrophobic organosilanes attenuates clotting on catheter surfaces
Abstract Catheter-associated thrombosis is an ongoing problem. Omniphobic coatings based on tethering biocompatible liquid lubricants on self-assembled monolayers of hydrophobic organosilanes attenuate clotting on surfaces. Herein we report an efficient, non-invasive and robust process for coating catheters with an antithrombotic, omniphobic lubricant-infused coating produced using chemical vapor deposition (CVD) of hydrophobic fluorine-based organosilanes. Compared with uncoated catheters, CVD coated catheters significantly attenuated thrombosis via the contact pathway of coagulation. When compared with the commonly used technique of liquid phase deposition (LPD) of fluorine-based organosilanes, the CVD method was more efficient and reproducible, resulted in less disruption of the outer polymeric layer of the catheters and produced greater antithrombotic activity. Therefore, omniphobic coating of catheters using the CVD method is a simple, straightforward and non-invasive procedure. This method has the potential to not only prevent catheter thrombosis, but also to prevent thrombosis on other blood-contacting medical devices.
liquid onto the surface, thereby creating a highly stable, omniphobic lubricant-infused coating 29 . These surfaces outperform heparin-coated surfaces, as well as a range of hydrophilic coatings 19 developed to resist blood clot formation. Furthermore, lubricant-infused omniphobic coatings have been more effective than PEG or albumin for blocking non-specific adhesion of cells and bacteria 31,32 . In addition to increasing blood-compatibility, these surfaces are stable and durable when exposed to physiological shear stress in vitro 33,34 . Therefore, lubricant-based omniphobic coating of biomedical devices is a promising method for preventing thrombus formation.
Lubricant-based omniphobic coatings are produced by applying SAMs of hydrophobic organosilane (e.g. tridecafluoro-1,1,2,2-tetrahydrooctyl trichlorosilane) onto the surface. Liquid phase deposition (LPD) is the main technique reported in the literature for producing SAMs of fluorine-based silanes in order to obtain lubricant-infused omniphobic coatings 32 . However, the LPD method has several limitations. First, the high volumes of solvent waste produced during the procedure are harmful to the environment, which restricts the industrial viability of the process 35 . Second, self-polymerization of silanes in the solution phase may impair the formation of homogenous silane layers on the surface 36 . Third, and most important, surfaces treated by LPD are exposed to the impurities and side products produced during the treatment process, which may compromise the material and alter the bulk properties of its surface 35 . Such alterations are particularly problematic for materials used for biomedical applications.
To overcome the limitations of LPD, we set out to develop a more robust, simplified and clinically relevant chemical vapor deposition (CVD) method for creating a lubricant-infused omniphobic coating on FDA-approved catheters. The surface properties, chemical composition and antithrombotic activity of catheters coated in this manner were compared with those of uncoated catheters and catheters coated using the LPD method. We show that the CVD method has less of an effect on the surface topography of catheters than the LPD method and endows them with greater antithrombotic activity.
Results
Producing omniphobic lubricant-infused catheters. Omniphobic coatings on coronary catheters, composed of a soft polyether amide block on the outer layer, were produced using two different chemical modification techniques: 1) the LPD technique, which is the most commonly used method to create omniphobic slippery surfaces 28 , and 2) our developed CVD method, which is a more efficient, non-invasive and robust process for creating anti-thrombogenic coatings on catheters (Fig. 1a). Catheter segments were oxygen plasma treated and silanized with trichloro (1H, 1H, 2H, 2H-perfluorooctyl) silane (TPFS) using one of the techniques mentioned above (Fig. 1b). In the final step, a biocompatible, FDA approved liquid lubricant such as perfluorodecalin (PFD) or perfluoroperhydrophenanthrene (PFPP) was added to complete the modification process.
Assessment of surface chemical composition.
To examine the changes in the chemical composition of the catheters after oxygen plasma treatment and after CVD or LPD surface modification, X-ray photoelectron spectroscopy (XPS) was performed (Fig. 2). Oxygen plasma treated and silanized catheters showed a significant difference in chemical composition compared with controls. As expected, after oxygen plasma treatment, a high percentage of oxygen (about 50 atom %) was detected on the surface of the catheters, indicating the presence of hydroxyl (OH) groups and consistent with initial activation of the catheter surfaces.
Although fluorine (F) was detected after silanization with both the CVD and LPD method, the fluorine surface concentration was significantly higher with CVD treatment than with LPD treatment (about 45 atom % and 15 atom %, respectively).
Bismuth, the filling used in catheters to render them radiopaque 37 , was detected on the surface of catheters subjected to LPD treatment (>15 atom %). In contrast, bismuth was not detected on the surface of catheters coated using the CVD method. LPD-treated catheters also exhibited chlorine (>10 atom %) on their surface, which was not present on the surface of CVD-treated catheters.
Contact and sliding angle measurements. To investigate the relative hydrophobicity/hydrophilicity of the control and treated catheters, contact and sliding angle measurements were performed using a 5 µL droplet of deionized water. The sliding angle was defined as the minimum tilting angle required for the droplet to start moving along the catheter surface. A sliding angle of 90 degrees was assigned to droplets that failed to slide at angles of 90 degrees or higher. The static contact angle measurements of the control and treated surfaces before adding the lubricant layer are shown in Fig. 3a and b. Control catheters exhibited a relatively high contact angle (θ st = 107 ± 4°) indicative of their hydrophobicity. After CVD treatment and before lubricant addition, the water contact angle increased to 121 ± 2°. After the addition of PFD or PFPP lubricant layers to the CVD treated catheters, the water contact angles were lower (104.7 ± 2° and 104.8 ± 2°, respectively).
In contrast to CVD treated surfaces, LPD surfaces had a lower static contact angle (θ st = 83.6 ± 7°) compared with control surfaces. After adding the lubricants onto these surfaces, the water contact angles remained low. Although the PFPP lubricant increased the contact angle by about 7°, the difference was not significant and the wettability of the surfaces remained high.
Sliding angle measurements of the treated and control surfaces are shown in Fig. 3c. The 5 µL water droplet did not slide on lubricated LPD catheters even with tilting angles higher than 90°, suggesting that these surfaces do not have slippery properties, which is a major characteristic of omniphobic lubricant-infused surfaces. In contrast, with CVD treatment there was a significant increase in liquid repellency compared with the control or LPD catheters as demonstrated by sliding angles as low as 3°. When sliding angle measurements on control and coated catheters were repeated four months later, the results were similar to those obtained on initial measurement (results not shown).
Sliding angle measurements with whole blood. To investigate catheter-blood interactions and the stability of the coatings, sliding angle measurements were performed with whole blood on catheters that had been treated four months earlier. As seen in Supporting videos 1-3, similar to the results obtained with water, whole blood sliding angles on control and LPD-PFPP treated catheters were greater than 90°. In contrast, CVD-PFPP treated catheters exhibited excellent blood repellency as evidenced by sliding angles less than 3° and immediate sliding of the blood droplet off the catheter surface.
Figure 2. The chemical composition (reported as percentage atomic concentrations) of the catheter surfaces at different stages of surface modification determined by XPS. Following oxygen plasma treatment, an increase in the oxygen surface concentration was observed and catheters contained up to 50 atom % oxygen. Surfaces treated with LPD had a significantly lower amount of fluorine (about 15 atom %) compared with CVD treated catheters (about 45 atom %). In addition, LPD treated samples showed a large surface concentration of bismuth (>15 atom %), which indicates that this treatment method modified the bulk properties of the catheters. In addition to bismuth, LPD treated catheters had up to 15 atom % of chlorine on their surfaces, an impurity that was not seen on CVD treated catheters. Three samples from each group were analyzed and measurements were performed on four spots on each sample. *Significant difference between the fluorine atom percent when comparing the CVD and LPD treated catheters (P < 0.001). The results are presented as means ± S.D.
Effect of catheter modification on plasma clotting times. Clotting assays were performed to compare the antithrombotic activities of the various coatings. After gently flattening the catheter segments with a plastic roller, they were shaped into rings, placed around the inner walls of the wells of a 96-well plate and saturated with 150 µL of PFPP or PFD lubricant for about 1 min. Empty wells and wells with only lubricant were used as controls. Excess lubricant was removed from the wells and 100 µL aliquots of citrated human plasma were added to wells that did or did not contain catheter segments. The clotting assay was performed as explained in the methods section. As seen in Fig. 4, the average clotting time in wells without a catheter and with no lubricant was 1258 ± 168 s. Empty wells containing PFD or PFPP lubricant had average clotting times of 1239 ± 250 s and 1199 ± 216 s, respectively. Control catheters with no surface modification significantly shortened the clotting time by 2-fold to 577 ± 67 s. Catheters silanized using the CVD method significantly (P < 0.001) prolonged the clotting time compared with non-coated catheters, to 935 ± 115 s and 1031 ± 123 s using PFD or PFPP lubricants, respectively. These catheters had the longest clotting times of all the experimental groups (Fig. 4). Both LPD-PFD and LPD-PFPP catheters slightly prolonged the clotting time (689 ± 119 s and 636 ± 87 s, respectively) compared with control catheters, but the differences were not statistically significant. When comparing the results with CVD and LPD catheters, clotting times were significantly (P < 0.002) longer with the CVD modification method than with the LPD method (Fig. 4).
Figure 4. Catheters were rolled and placed in 96-well plates. After incubating the catheters with the citrated plasma at 37 °C for 5-7 minutes, clotting was initiated by adding 100 µL of CaCl2 solution. Absorbance was measured over time and the clotting time was determined as the time to half-maximal absorbance. The bars represent the means of at least nine repeats from each group. *Significant difference between control catheters vs. CVD treated catheters (P < 0.05). **, ***Significant difference when comparing the results from the two different treatment types of CVD and LPD (P < 0.05). The results are presented as means ± S.D.
Identification of the coagulation pathway activated by modified and unmodified catheters.
To identify the coagulation pathway involved in catheter-induced clotting, and to determine the effect of the various coatings on such clotting, results from clotting assays performed in control plasma were compared with those in plasma depleted of FXI or FXII, key components of the contact pathway, or FVII, which is the critical component of the extrinsic or tissue factor pathway of the coagulation cascade. Whereas control and modified catheters shortened the clotting time in control or FVII depleted plasma (Fig. 5b), they did not do so in plasma depleted of FXII or FXI (Fig. 5a). This suggests that the procoagulant activity of catheters is dependent on FXII and FXI, but not FVII. Similar to the results in normal plasma, CVD treated catheters shortened the clotting time less than LPD catheters in FVII depleted plasma.
Protein adhesion and clot formation on the catheter surfaces. After coating the catheter surfaces with TPFS using either the CVD or LPD method and after performing the clotting assay in normal plasma, catheter segments were subjected to scanning electron microscopy (SEM) to examine the effect of treatment on the catheter surface topography and to investigate their protein repellency properties. As seen in Fig. 6b, catheters treated with the CVD method had a smooth silane layer on their surface and the surface morphology and roughness were similar to those of control catheters. In contrast, with LPD treatment, there was no evidence of a silane layer and roughness of the surface with etching and exposure of inner layers in some areas was seen under higher magnification.
In addition, as illustrated in Fig. 6a and b, a highly dense protein layer was formed on control and lubricated LPD-treated catheters. In contrast, lubricated CVD-treated catheters exhibited significantly less protein deposition on their surfaces, consistent with the normal clotting assay results.
Figure 5. Comparison between the clotting times in normal and FVII, FXII or FXI depleted plasma. Similar to the whole plasma clotting assay, catheters were rolled and placed in 96-well plates. After incubating the catheters with depleted plasma at 37 °C for 5-7 minutes, clotting was initiated by adding 100 µL of CaCl2 solution. Absorbance was measured over time and the clotting time was determined as the time to half-maximal absorbance. (a) Clotting assay in FXI and FXII depleted plasma. Both control and treated catheters significantly prolonged the clotting time in FXI or FXII depleted plasma compared with normal plasma. (b) Clotting assay in FVII depleted plasma. There was no significant difference between the clotting times in FVII depleted plasma and normal plasma. The bars represent the means of at least nine repeats from each group. The results are presented as means ± S.D.
Protein deposition and platelet adhesion to catheters in whole blood. To assess the stability of the omniphobic slippery coating and the capacity of the coated catheters to resist protein deposition and platelet adhesion, catheter segments that had been treated four months earlier were incubated with whole human blood. Since the PFPP lubricant was superior to the PFD lubricant in the clotting assays, CVD and LPD catheters were only lubricated with PFPP in the whole blood experiments. As seen in Fig. 7, after immersing catheter segments in whole blood for 15 s, clot formation was evident on control and LPD-PFPP treated catheters. In contrast, no clot formation was observed on CVD-PFPP treated catheter segments. To further investigate the catheter-blood interaction, blood treated catheter segments were fixed in 4% formaldehyde and submitted for SEM imaging. As seen in the SEM images presented in Fig. 7, a highly dense protein layer was formed on both control and LPD-PFPP catheters, whereas CVD-PFPP catheters showed no protein on their surface. Platelet adhesion was also evident on the control and LPD-PFPP treated catheters, but not on CVD-PFPP catheters.
Discussion and Conclusions
Thrombosis on blood-contacting medical devices is an ongoing problem. Therefore, there remains a need for surface modification techniques that render such devices more biocompatible 4 . Although LPD is a well described method for producing SAMs of fluorine-based silanes 38 and is the most widely used technique for producing omniphobic coatings on biomaterials 28,32 , the results of this work show that the CVD method is more efficient and effective than the LPD method for rendering medical grade polymeric catheters less thrombogenic.
A major drawback of the LPD method is the direct exposure of the treated surfaces to the side products produced and released in the liquid solution 38 . Hydrochloric acid, which is the main side product generated during the hydrolysis step of TPFS (Fig. 1b), may damage the polymeric surface of the catheters. Such damage is evident from the XPS and SEM results. With LPD treatment a high atomic concentration of bismuth (>15 atom %) is evident on the surface of the catheters while with CVD treatment no bismuth was detected. Bismuth is introduced to render the catheters radiopaque so that they can be visualized on x-rays during and after insertion 37 . It is likely that hydrochloric acid produced during the liquid treatment process partially degraded the outer polymeric layer of the catheter, thus exposing the bismuth on the surface. This concept is supported by the SEM images, which reveal surface roughness under higher magnification along with etching and exposure of inner layers in some areas in LPD treated but not in CVD treated catheters (Fig. 6b). Although CVD treated catheters were incubated with TPFS for a longer period than LPD catheters (5 h and 1 h, respectively), this did not negatively affect the surface properties of CVD treated catheters. In addition to bismuth, chlorine (>10 atom %) was also present on the surfaces of LPD treated catheters; an impurity not seen on CVD treated surfaces. The presence of chlorine on these catheters could be due to the partial hydrolysis of the Si-Cl bonds and the unsuccessful formation of inner covalent bonds between the silane molecules 39 , suggesting that the LPD method is not as efficient as the CVD method. After treatment with TPFS, the presence of fluorine (F) is expected as a result of formation of the fluorosilane SAM on the catheter surfaces. Although fluorine was detected on both LPD and CVD treated catheters, the fluorine atom concentration on CVD treated samples was significantly higher than that on LPD treated samples (about 45 atom % and 15 atom %, respectively), indicating that the CVD method is a more efficient technique for producing SAM layers of the organosilane. This is further supported by the lower oxygen content on CVD treated samples compared to LPD treated ones, suggesting that with the CVD method, more of the active OH groups were coated with fluorosilane.
Figure 6. Scanning electron microscopy images of catheters before and after silanization, and after the plasma clotting assay. Control (a), and LPD and CVD treated catheters (b) before and after the clotting assay are shown. A uniform, smooth silane layer is formed on the catheter surfaces after CVD treatment. In contrast, LPD treated catheters have a rough surface compared with the controls. Both control (a) and LPD treated catheters (b) form a dense protein layer on their surfaces after the clotting assay, something that is not evident on CVD treated catheters (b). The magnification bars are 10 µm on the small images and 1 µm on the larger images.
Water repellency was greater with the CVD method than with the LPD method as evidenced by lower sliding angles (θ ≤ 5° and θ > 100° respectively). The CVD silanization step transformed the hydrophobic surface of the control catheters to a more hydrophobic surface by increasing the static water contact angle from 107 ± 4° to 121 ± 2°, thereby confirming the presence of the hydrophobic silane coating. In contrast, LPD treated surfaces had a lower contact angle (θ st = 83.6 ± 7°) compared with the control and CVD treated catheters, confirming the fact that the catheter surfaces were not efficiently coated with a hydrophobic silane layer. In addition, the hydrophobic surface properties were disrupted with the LPD method due to the surface degradation caused by the side products produced during the LPD modification step.
Although the static contact angles decreased in the CVD treated catheters after adding the PFD or PFPP lubricant layer, they were highly water and blood repellent. In contrast, sliding angles were significantly higher with the LPD method (θ > 90°), indicating less omniphobicity and lower water and blood repellency (Fig. 3c, Supporting videos 1-3). This could be due to the etching of the catheter surface and the roughness of the outer layer that occurs with the LPD method. In addition, there is less efficient formation of a SAM layer with the LPD method, which may limit the capacity of the lubricant to completely wet and cover the catheter surface. Therefore, LPD treated catheters showed poorer water and blood repellency compared with CVD treated catheters.
Both lubricants (PFD or PFPP) increased the antithrombotic activity of CVD treated catheters as evidenced by significantly longer clotting times compared with control or LPD treated catheters. The enhanced antithrombotic activity of catheters coated using the CVD method is due to reduced activation of the contact system because this activity is evident in plasma depleted of FVII, which is essential for the extrinsic pathway of coagulation, but not in plasma depleted of FXI or FXII, key components of the contact system. Thus, the findings from these experiments suggest that modified catheters, similar to unmodified ones, initiate coagulation through the contact pathway and have minimal effect in activating the tissue factor pathway. In both the normal and FVII depleted plasma assays, clotting times were longest with CVD-PFPP catheter segments. PFPP has been shown to be more stable than PFD 40 and has a lower vapor pressure and greater viscosity. Although immobilized liquid layers modified with PFPP are more durable in open-air environments, this is unlikely to have influenced our results because the lubricated samples were immediately covered with plasma and were maintained in a closed space.
SEM analysis of catheters incubated in plasma or whole blood reveals differences between the CVD and LPD catheters. Due to the omniphobic slippery properties of the CVD treated catheters, no clot formation or platelet adhesion was seen after incubation with whole blood. In contrast, protein deposition and platelet adhesion were observed on the control and LPD treated catheters.
Figure 7. SEM images of catheters incubated with whole blood. Silanized catheters were stored at room temperature and, four months after the surface modification procedure, the blood-catheter interaction was investigated. Control, CVD-PFPP and LPD-PFPP catheters were submerged in whole blood for 15 s and images were taken afterwards. They were then washed with PBS, fixed in 4% formaldehyde for 20 mins, and submitted for SEM imaging. Blood clots formed on control and LPD-PFPP treated catheters immediately after being in contact with blood. However, no blood clot formation was seen on CVD-PFPP treated catheters. In addition, platelet adhesion (shown with white arrows) was evident on control and LPD-PFPP treated catheters, while no platelets or protein adhesion was seen on CVD-PFPP treated catheters. The magnification bars are 50 µm.
In summary, we reported a simple and biocompatible method for successful production of omniphobic lubricant-infused polymeric medical catheter coatings using CVD of hydrophobic organosilanes. Catheters modified in this manner are less thrombogenic than uncoated catheters and catheters modified using the LPD method.
Materials and Methods
Materials. Trichloro (1H, 1H, 2H, 2H-perfluorooctyl) silane (TPFS), perfluoroperhydrophenanthrene (PFPP) and perfluorodecalin (PFD) were purchased from Sigma-Aldrich (Oakville, Canada). Human plasma depleted of FVII, FXI, or FXII was purchased from Affinity Biologicals (Ancaster, Canada). Coronary catheters (Medtronic, Minneapolis, USA) composed of a soft hydrophobic polyether amide block on the outer layer and a thin walled polytetrafluoroethylene (PTFE) tube on the luminal side 41 were generously provided by S. Gracie. Whole blood and pooled citrated plasma were generated from blood samples collected from healthy donors as previously described 42 . All donors provided signed written consent. All procedures were approved by the McMaster University Research Ethics Board.
Oxygen plasma treatment of catheter segments. Prior to silanizing the catheters, they were cut into 1.7 cm segments, a length chosen to enable placement in the wells of 96-well polystyrene plates (Evergreen Scientific). Segments were then vertically fixed on plastic petri dishes, placed in an oxygen plasma cleaner (Harrick Plasma Cleaner, PDC-002, 230 V) and exposed to high-pressure oxygen plasma for 2 minutes to functionalize their surfaces and to enable reaction with TPFS.
Preparation of silanized catheters using CVD. After removing the oxygen plasma-treated catheters from the plasma cleaner, they were immediately placed in a desiccator connected to a vacuum pump and two droplets (200 µL) of TPFS were added in a separate petri dish on the side of the catheters. The vacuum pump was turned on and once a pressure of −0.08 MPa was achieved, the exit valve was closed and CVD of the silane onto the catheters was initiated. The silanization reaction was carried out for 5 hours at room temperature. After the CVD step, catheters were removed from the desiccator and placed in an oven at 60 °C for a minimum of 12 h in order to complete the reaction. After removing the catheters from the oven, CVD-modified catheters were placed under vacuum for 30 mins with an open exit valve to ensure removal of non-bonded silanes from the surface.
Preparation of silanized catheters using LPD. Catheters were oxygen plasma treated as described above and then immediately incubated in a 20 mL glass vial containing TPFS in anhydrous ethanol solution (5% (v/v)). The solution was stirred with a small magnetic stir bar for 1 h at room temperature. LPD-treated catheters were removed from the solution and then washed with 100% anhydrous ethanol followed by deionized water and ultimately with 70% ethanol. Washed catheters were left to dry at room temperature and then placed in the oven at 60 °C overnight. Similar to CVD treated catheters, after removing the LPD treated catheters from the oven, they were placed under vacuum for 30 mins with an open exit valve to ensure removal of non-bonded silanes from the surface.
Applying fluorinated lubricants on silanized catheters. As a final step, and before performing the different measurements, CVD and LPD treated catheters were submerged into fluorinated lubricants in order to complete the surface modification. Two types of fluorinated lubricants were used: perfluoroperhydrophenanthrene (PFPP) and perfluorodecalin (PFD).
X-ray photoelectron spectroscopy (XPS). XPS was used to assess the surface chemical composition of the catheters before and after each treatment step. For each condition, three catheter segments were subjected to XPS analysis, measurements were taken from four distinct sites on each segment, and means were determined. XPS spectra were recorded using a Physical Electronics (PHI) Quantera II spectrometer equipped with an Al anode source for X-ray generation, and a quartz crystal monochromator was used to focus the generated X-rays (BioInterface Institute, McMaster University). The monochromatic Al Kα X-ray (1486.7 eV) source was operated at 50 W and 15 kV with a system base pressure no higher than 1.0 × 10−9 Torr and an operating pressure that did not exceed 2.0 × 10−8 Torr. A pass energy of 280 eV was used to obtain survey spectra, and spectra were obtained at 45° take-off angles using a dual beam charge compensation system for neutralization. The raw data were analyzed using the instrument software and the atom percentages of carbon, oxygen, fluorine, bismuth, silicon and chlorine were calculated.
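The atom percentages reported above come from the instrument software; the calculation commonly behind such numbers divides each element's peak area by its relative sensitivity factor and normalizes over all detected elements. The R sketch below shows that arithmetic with made-up peak areas and sensitivity factors; it is not the PHI software's actual routine.

```r
# Illustrative XPS quantification: atomic percent from RSF-corrected peak areas.
# Peak areas and relative sensitivity factors (RSFs) below are placeholders.
peak_area <- c(C1s = 12000, O1s = 9000, F1s = 30000, Bi4f = 0, Si2p = 1500, Cl2p = 0)
rsf       <- c(C1s = 1.0,   O1s = 2.9,  F1s = 4.4,   Bi4f = 22,  Si2p = 0.8,  Cl2p = 2.3)

corrected      <- peak_area / rsf             # intensity corrected for sensitivity
atomic_percent <- 100 * corrected / sum(corrected)
round(atomic_percent, 1)
```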
Contact and sliding angle measurements. Contact and sliding angles of the treated and non-treated catheters were measured using a droplet of deionized water (5 µL). Water sessile drop contact angle measurements were performed at room temperature before and after each modification step using a Future Digital Scientific OCA20 goniometer (Garden City, NY), which was calibrated prior to each measurement. Sliding angles were measured using a custom-made goniometer. Immediately prior to testing, silanized samples coated with PFPP or PFD were placed on the calibrated goniometer. A droplet of deionized water (5 µL) was placed on the catheter surface and the sample was gently tilted until the droplet started to move. The sliding angle was defined as the minimum tilting angle required for droplet movement. A sliding angle of 90 degrees was assigned to droplets that failed to slide at angles of 90 degrees or higher. Measurements were made in triplicate on three different catheter segments and means were determined.
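Summarizing the tilting measurements only involves capping non-sliding droplets at 90 degrees and averaging the triplicate readings; a minimal R sketch of that rule is shown below with placeholder values rather than the measured angles.

```r
# Hypothetical triplicate sliding-angle readings per surface (degrees).
raw_angles <- list(
  CVD_PFPP = c(3, 2, 4),
  LPD_PFPP = c(95, 120, 90),   # droplets that never slid
  Control  = c(110, 100, 95)
)

# Readings of 90 degrees or more are recorded as 90, then averaged.
capped <- lapply(raw_angles, pmin, 90)
sapply(capped, function(a) c(mean = mean(a), sd = sd(a)))
```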
Antithrombotic activity of modified catheters. Clotting assays were performed to compare the antithrombotic activities of the various coatings. After flattening the catheter segments with a plastic roller, they were shaped into rings, placed around the inner walls of the wells of a 96-well plate and saturated with 150 µL of PFPP or PFD lubricant for about 1 min. Excess lubricant was removed and 100 µL aliquots of citrated human plasma were added to wells that did or did not contain catheter segments. After incubating the plate for 5-7 minutes at 37 °C, clotting was initiated by adding HEPES (100 µL of 20 mM, pH 7.4) containing CaCl 2 (1 M) to each well, yielding a final CaCl 2 concentration of 25 mM 7,43 . Clot formation was assessed by monitoring absorbance at 405 nm at 10-sec intervals for 60 min in kinetic mode using a SPECTRAmax plate reader (Molecular Devices). Clotting times were defined as the time to reach half-maximal absorbance as calculated by the instrument software from plots of absorbance versus time. The same procedure was repeated in FVII, FXI, or FXII depleted plasma, except that absorbance was monitored over 3 hours to account for the longer clotting times.
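Taking the clotting time as the time to half-maximal absorbance is a simple operation on each well's turbidity trace. The R sketch below applies it to a simulated sigmoidal absorbance curve; the plate-reader software performs the equivalent calculation on the real kinetic data.

```r
# Simulated 405 nm turbidity trace for one well (10-s sampling for 60 min).
time_s <- seq(0, 3600, by = 10)
abs405 <- 0.08 + 0.9 / (1 + exp(-(time_s - 900) / 60))

# Clotting time = first time point at which absorbance reaches half of its
# maximal increase above baseline.
half_max  <- min(abs405) + (max(abs405) - min(abs405)) / 2
clot_time <- time_s[min(which(abs405 >= half_max))]
clot_time   # seconds
```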
Whole human blood experiments. Treated catheters were stored at room temperature and four months later, the stability of their coating was investigated by performing sliding angle measurements and catheter-blood interaction experiments using whole human blood. Sliding angle measurements with blood were performed according to the procedure described above (contact and sliding measurements).
To investigate catheter-blood interactions, control and treated catheters were submerged in whole human blood for 15 s. Catheters were then washed with PBS, fixed in 4% formaldehyde for 20 min and stored at room temperature in PBS until SEM analysis. Using SEM, the extent of clot formation and platelet adhesion on the catheter surfaces was evaluated.
Scanning Electron Microscopy (SEM). Catheter segments were washed three times, fixed in 4% formaldehyde in PBS for 2 hours, washed with PBS (0.1 M) and sputter-coated with a 4 nm thick platinum coating. SEM imaging (JSM-7000 F) was performed in secondary electron image (SEI) mode with voltages of 1.0 kV at 10,000x magnification or 2.0 kV at 1000x magnification.
Statistical Analysis. Data are presented as means ± S.D. In the control and depleted plasma clotting assays, each experimental condition was repeated at least nine times. For all other studies, experiments were repeated at least three times. One-way analysis of variance (ANOVA) followed by post hoc analysis using Tukey's test was performed to assess statistical significance. For all comparisons, P values less than 0.05 were considered statistically significant.
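As a concrete illustration of the analysis described above, the R sketch below runs a one-way ANOVA followed by Tukey's post hoc test on simulated clotting times; the group means are taken from values reported earlier, but the replicate-level data are invented for the example.

```r
# One-way ANOVA with Tukey's post hoc test on simulated clotting times (s).
set.seed(1)
clot <- data.frame(
  treatment = rep(c("Control", "CVD_PFPP", "LPD_PFPP"), each = 9),
  time_s    = c(rnorm(9, 577, 67), rnorm(9, 1031, 123), rnorm(9, 636, 87))
)

fit <- aov(time_s ~ treatment, data = clot)
summary(fit)    # overall F test
TukeyHSD(fit)   # pairwise comparisons with adjusted P values
```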
Data availability statement. The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
|
2018-04-03T00:41:57.964Z
|
2017-09-14T00:00:00.000
|
{
"year": 2017,
"sha1": "7879f5ba1cf25f146054b3322379d26b43bdbbc6",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-12149-1.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7879f5ba1cf25f146054b3322379d26b43bdbbc6",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
}
|
212675773
|
pes2o/s2orc
|
v3-fos-license
|
Transcriptomic and Functional Screens Reveal MicroRNAs That Modulate Prostate Cancer Metastasis
Identifying new mechanisms that underlie the complex process of metastasis is vital to combat this fatal step in prostate cancer (PCa) progression. Small non-coding RNAs are emerging as important regulators of tumor cell biology. Here we take an integrative approach to elucidate the contribution of microRNAs to metastatic progression, combining transcriptomic analysis with functional screens for migration and morphology. We developed high-content microscopy, high-throughput functional screens for migration and morphology in PCa cells using a microRNA library. RNA-Seq analysis of paired epithelial and mesenchymal PCa cells identified differential expression of 200 microRNAs. Data integration identified two microRNAs that inhibited migration, induced an epithelial-like morphology and were increased in epithelial PCa cells. An overrepresentation of the AAGUGC seed sequence was detected in all three datasets. Analysis of published datasets of patients with PCa identified microRNAs of clinical relevance. The integration of high-throughput functional and expression analyses identifies microRNAs with clinical significance that modulate metastatic behavior in PCa.
INTRODUCTION
Metastasis is responsible for >90% of human cancer related-deaths, and a comprehensive understanding of the cellular and molecular control of metastatic spread is imperative in order to develop new approaches to combat this fatal stage (1). The metastatic cascade is a multistep process that begins with tumor cells at the primary site undergoing morphological changes, facilitating their migration to distant sites for subsequent colonization. In advanced prostate cancer, malignant epithelial cells escape the prostate capsule, and can seed distant tissues like lymph nodes, bone marrow and adrenal glands (2). They do this by migrating and invading through several barriers, including the basement membrane, connective tissue of the prostatic capsule and blood vessel walls. Whereas, localized prostate cancer has a good prognosis, once prostate cancer has metastasized to distant sites, the disease is ultimately fatal and treatment is largely palliative.
MicroRNAs are short non-coding RNAs, 20-23 nucleotides in length, that cause translational repression or mRNA degradation by binding to cognate regions in the 3′ untranslated regions (UTRs) of messenger RNA. A 6-8 nucleotide sequence at the 5′ end of the microRNA, called the "seed" region, is crucial for the majority of miRNA:mRNA interactions. MicroRNAs have emerged as key regulatory molecules in multiple facets of tumor cell behavior, and migration is no exception. A number of studies have underlined the importance of microRNAs in prostate cancer pathogenesis, including the control of cellular morphology and migration in prostate cancer cells (3,4). However, these previous studies utilize a candidate-based approach to investigate the role of microRNAs, which could potentially miss crucial players. Hence, we aimed to employ unbiased high-throughput functional screening techniques, assessing migration and morphology, and combine them with transcriptomic analysis of prostate cancer cell lines and in silico analysis of patient datasets.
As it is technically challenging to study cellular migration in vivo, particularly in a high-throughput fashion, several in vitro models have been established to mimic this (5,6). Among these, the "wound healing" or "scratch" assay is the most commonly used technique (7), owing to the simplicity and low cost of its set-up. There have been previous reports in which the scratch assay was scaled up to 96- or 384-well plates for use in high-throughput screening for migration (8), using pin tools attached to robots (9). Alternative approaches have also been reported, including the use of exclusion zone technology to create cell-free regions for subsequent analysis of cell movement (10). It has been reported that a spindle-like morphology is associated with an epithelial-mesenchymal transition (EMT) gene signature (11), and that a change in morphology, due to alterations in cell-cell adhesion interactions and cellular protrusions, is an important parameter associated with directed cell migration in vitro (12,13). Here we employ a 96-pin scratch tool for the migration screen, and concurrently perform high content imaging to analyze morphological changes indicative of epithelial or mesenchymal morphology. Utilizing a microRNA mimic library, we have identified a number of microRNAs that control both migration and morphological changes. Transcriptomic analysis, and integration of functional and expression data with analysis of clinical datasets, have enabled the identification of microRNAs and a microRNA seed sequence that are strongly linked to metastatic behavior and prostate cancer progression.
MATERIALS AND METHODS
Cell Culture
PC3-EGFP cells were a gift from Yolanda Calle (Kings College London), and were cultured in RPMI 1640 medium with L-glutamine, sodium pyruvate, MEM non-essential amino acids, MEM vitamins, 10% fetal bovine serum, and penicillin-streptomycin. ARCaPE and ARCaPM cells were purchased from Novicure, Inc., USA, and were cultured in MCaP medium with 5% fetal bovine serum and penicillin-streptomycin as described previously (14).
Transfection and Cell Seeding for High-Throughput Screens
Lipofectamine RNAiMax reagent was used for transfection, according to the manufacturer's recommendations. Briefly, RNAiMax reagent was diluted in Opti-MEM and mixed with microRNA mimics, and was aliquoted manually into tissue culture-treated 96-well plates (Perkin-Elmer). Cells were then seeded into these wells using an automated liquid handling system at 20,000 cells per well, resulting in a final concentration of 25 nM of the microRNA mimic or controls. For the morphology screen, cells were seeded at a density of 7,500 cells per well, and transfected as above.
Scratch Assay
Twenty-four hours post-transfection, confluent monolayers of cells were scratched uniformly using a 96-pin scratch tool called WoundMaker (IncuCyte® Cell Migration Kit, Cat No 4493, Essen Bioscience), and washed twice with phosphate buffered saline using the automated liquid handling system to remove floating cells. The wells were then replenished with cell culture medium.
High-Content Imaging
All high-content imaging was performed using the InCell Analyser 6000 Cell Imaging System (GE Healthcare Life Sciences). Images for the migration screen were obtained at 0 h (i.e., immediately after the scratch was performed), 6, 12, 18, and 24 h, at 4X magnification in both bright-field and green fluorescent channels. For the morphology screen, images were obtained 24 and 48 h post-transfection in the green fluorescent channel at 10X magnification.
Migration Analysis
The area of the scratch was extracted using the InCell Analysis software for each well and for each time point. The area at 0 h was subtracted from that of all subsequent time points to yield the migration of the cells over the corresponding duration. Data from non-targeting control-transfected wells (negative controls) were used for per-plate normalization, to reduce plate and batch effects (Supplementary Figure 2), using the CellHTS2 package (15) (version 2.40.0) in R/Bioconductor.
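The per-plate normalization was done with CellHTS2; the sketch below is not that package's code but a simplified stand-in showing the underlying idea of scaling each well to the negative-control wells on its own plate. The object and column names are hypothetical.

```r
# Simplified per-plate normalization: scale each well's migrated area to the
# median of the negative-control wells on the same plate.
set.seed(2)
screen <- data.frame(
  plate         = rep(1:2, each = 6),
  well_type     = rep(c("negctrl", "sample", "sample"), times = 4),
  migrated_area = runif(12, 0.5, 1.5)
)

neg_median <- with(subset(screen, well_type == "negctrl"),
                   tapply(migrated_area, plate, median))
screen$normalized <- screen$migrated_area / neg_median[as.character(screen$plate)]
head(screen)
```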
Morphology Analysis
The images were segmented and cell outlines ("objects") extracted using CellProfiler software (16). These objects were further filtered based on size to eliminate cell debris and imaging artifacts. Following this, CellProfiler was used to extract features describing the shape of the objects. Eccentricity was selected for single feature analysis, using the CellHTS2 package. As above, negative controls were used for per-plate normalization (Supplementary Figure 7). For multi-feature analysis, data from all control wells (∼3,000 wells) were divided equally into a training set and test set. For the training set, nontargeting controls and mock-transfected wells were classified as mesenchymal, miR-373 wells as epithelial (based on a visible change to epithelial morphology and it coming up as a candidate in single feature analysis based on Eccentricity alone), and siPTK6 and miCon-transfected wells as intermediate morphologies. From the training set, secondary features (i.e., features derived from some combination of primary features like radius, diameter, major axis length, etc.) were used to build a linear discriminant analysis model. The secondary features used were Area, Compactness, Eccentricity, EulerNumber, Extent, FormFactor, and Solidity. The linear discriminant model was then applied on the test set to determine the accuracy of the model. Finally, the model was applied to the unknown samples to classify them into epithelial, intermediate, and mesenchymal morphologies.
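A minimal R analogue of the multi-feature classification step is shown below, using MASS::lda on a handful of CellProfiler-style shape features. The feature values, class labels and train/test split are simulated stand-ins for the real per-well data.

```r
# Linear discriminant analysis on simulated shape features, mirroring the
# epithelial / intermediate / mesenchymal classification described above.
library(MASS)

set.seed(3)
train <- data.frame(
  class        = factor(rep(c("epithelial", "intermediate", "mesenchymal"), each = 20)),
  Eccentricity = c(rnorm(20, 0.55, 0.05), rnorm(20, 0.70, 0.05), rnorm(20, 0.85, 0.05)),
  FormFactor   = c(rnorm(20, 0.80, 0.05), rnorm(20, 0.65, 0.05), rnorm(20, 0.50, 0.05)),
  Solidity     = c(rnorm(20, 0.95, 0.02), rnorm(20, 0.90, 0.02), rnorm(20, 0.85, 0.02))
)
test <- train[sample(nrow(train), 15), ]          # stand-in test set

model      <- lda(class ~ Eccentricity + FormFactor + Solidity, data = train)
prediction <- predict(model, newdata = test)$class
mean(prediction != test$class)                    # mis-classification rate
```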
Identification of Hits
Z-scores were calculated for the normalized migration and morphology data using the CellHTS2 package. A Z-score cut-off of −1 was used for microRNAs that inhibit migration, and a cut-off of +1 for those that promote migration. Similarly, Z-score cut-offs of −1 and +1 were used for rounded and spindle shapes in the morphology (eccentricity) screen.
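Hit calling itself reduces to a Z-score and a cut-off. The sketch below uses a robust Z-score (median/MAD) on a placeholder vector of normalized well values; CellHTS2's scoring differs in detail but follows the same logic.

```r
# Placeholder vector of per-miRNA normalized screen values (one per library miRNA).
set.seed(4)
normalized <- rnorm(1253)

z          <- (normalized - median(normalized)) / mad(normalized)
inhibitors <- which(z < -1)   # candidate migration-inhibiting miRNAs
promoters  <- which(z > 1)    # candidate migration-promoting miRNAs
c(inhibitors = length(inhibitors), promoters = length(promoters))
```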
Annotation
The microRNA mimic library, RNA-seq data, and microRNA expression data from public datasets used different formats for microRNA annotation. Hence, these disparate formats were reconciled using the microRNA sequences, and were matched to the latest Mirbase (version 21) nomenclature (17,18). All microRNA names in this study are referred to in this format.
MicroRNA Sequencing and Analysis
Total RNA was extracted from near-confluent ARCaPE and ARCaPM cells in triplicates, size-selected for small RNAs (<200 bases), adapter-ligated and sequenced on the Illumina HiSeq2000 platform, at the Wellcome Trust Centre for Human Genetics, University of Oxford. Sequencing data thus obtained were checked for quality and correlation between replicates (Supplementary Figure 11) and microRNA counts were obtained using Chimira (version 1.0) (19). Differential expression analysis was performed using DESeq2 (20).
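A minimal DESeq2 sketch of the ARCaPE versus ARCaPM comparison is given below. The count matrix is simulated; in the actual analysis the counts came from Chimira, with the design comparing the two cell lines across their three replicates.

```r
# Differential microRNA expression between ARCaPE (E) and ARCaPM (M) with DESeq2.
library(DESeq2)

set.seed(5)
counts <- matrix(rnbinom(600 * 6, mu = 100, size = 1), nrow = 600,
                 dimnames = list(paste0("miR_", 1:600),
                                 c("ARCaPE_1", "ARCaPE_2", "ARCaPE_3",
                                   "ARCaPM_1", "ARCaPM_2", "ARCaPM_3")))
coldata <- data.frame(condition = factor(rep(c("E", "M"), each = 3)),
                      row.names = colnames(counts))

dds <- DESeqDataSetFromMatrix(countData = counts, colData = coldata, design = ~ condition)
dds <- DESeq(dds)
res <- results(dds, contrast = c("condition", "M", "E"))  # ARCaPM relative to ARCaPE
head(res[order(res$padj), ])
```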
Data Mining
MicroRNA expression and clinical data in the Taylor dataset (GSE21036) were downloaded from cBioportal and analyzed in R statistical software (21). Viability data for the PC3 cell line were downloaded from the Lethal MicroRNA Database (http://microrna.garvan.unsw.edu.au/mtp/database/index).
Target Analysis
Experimentally validated targets were downloaded from miRTarBase Release 7.0 (22). The database was filtered for microRNAs of clinical interest and the target genes were sorted by the number of microRNAs targeting them.
Seed Analysis
A 6-nucleotide sequence at positions 2-7 from the 5′ end of the microRNA was considered the seed sequence.
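Extracting the seed and testing for over-representation of a particular seed (such as the AAGUGC sequence highlighted in the Abstract) is straightforward; the R sketch below uses toy sequences and an invented 2x2 contingency table purely to show the shape of the calculation.

```r
# Seed extraction: positions 2-7 from the 5' end of each mature sequence.
# The sequences and counts below are illustrative placeholders.
mirna_seq <- c(miR_A = "UAAGUGCUUCCAUGUUUUGGUGA",
               miR_B = "UGAGGUAGUAGGUUGUAUAGUU",
               miR_C = "AAAGUGCUUACAGUGCAGGUAG")
seeds <- substr(mirna_seq, 2, 7)
seeds

# 2x2 table: AAGUGC seed vs other seeds, among screen hits vs the rest of the
# library, then Fisher's exact test for over-representation.
tab <- matrix(c(12, 82, 25, 1134), nrow = 2,
              dimnames = list(seed  = c("AAGUGC", "other"),
                              group = c("hit", "non_hit")))
fisher.test(tab)
```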
Network Analysis
All microRNA: known target interactions were downloaded directly from DIANA Tarbase with approval (23). A subset of the data containing four microRNAs was used for this analysis.
Statistical Analysis
All statistical analyses were performed in R (version 3.4) and Bioconductor (version 3.5). Correlation analyses were performed using Pearson method. For survival analyses, Cox Proportional Hazard (univariate) model was used, with Bonferroni correction for multiple testing. Over-representation of the seed sequence was analyzed using Fisher's exact test. Student's t-test (unpaired, two-tailed) was used for comparison of two groups (qRT-PCR). For all statistical analysis, the significance level, α, was set at 0.05. For the phenotypic screening experiments, three technical replicates (cells seeded and microRNA mimics transfected on the same day for all replicates) were used for each microRNA mimic library plate. All plates processed on the same day were defined as a batch. For small RNA sequencing, three technical replicates (RNA extracted on the same day from cells of the same passage number, seeded in 3 wells each) were used for each cell line.
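For the survival part of the analysis, the sketch below fits a univariate Cox proportional-hazards model per microRNA and applies a Bonferroni correction. The clinical table, its column names and the expression values are simulated stand-ins; the real analysis used the Taylor (GSE21036) data.

```r
# Univariate Cox proportional-hazards model per microRNA with Bonferroni correction.
library(survival)

set.seed(6)
clin <- data.frame(
  time_months = rexp(100, rate = 0.02),   # follow-up time (placeholder)
  relapse     = rbinom(100, 1, 0.4),      # event indicator (placeholder)
  miR_145_5p  = rnorm(100),               # expression values (placeholders)
  miR_221_5p  = rnorm(100)
)

mirs  <- c("miR_145_5p", "miR_221_5p")
pvals <- sapply(mirs, function(m) {
  fit <- coxph(as.formula(paste("Surv(time_months, relapse) ~", m)), data = clin)
  summary(fit)$coefficients[, "Pr(>|z|)"]
})
p.adjust(pvals, method = "bonferroni")
```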
Functional Screening for miRNAs Regulating Prostate Cancer Cell Migration
Increased migration is a key characteristic of metastasis. In order to systematically identify microRNAs that regulate migration in prostate cancer cells, we chose a 2D migration model, commonly known as a wound healing assay or scratch assay, which was scalable and cost-effective to develop into a large high-throughput screen. We performed a high-content, fluorescence-based, high-throughput screen in PC3-EGFP prostate cancer cells using a library of 1,253 microRNAs (Mirbase version 16). Scratches were uniformly generated in PC3-EGFP cells using a WoundMaker following transfection with the library of miRNA mimics, and image analysis was performed to quantify migration (Figure 1). Non-targeting siRNA- and mock-transfected cells were used as negative controls, and miR-373-transfected cells were used as a positive control (24). There was limited overlap between the kernel density estimate curves of negative and positive controls (Figure 1C), and the mean normalized Z-score was −1.11 for the positive controls and +1.04 for the negative controls. The screening was performed in triplicate and the replicates showed good reproducibility, seen by a pairwise correlation of >0.85 among all replicates (Supplementary Figure 4). No edge effects were detected in any library plates, with the distinct pattern in plate 1 reflecting the distribution of a specific set of miRNAs in this plate (Figure 1D). A Z-score cut-off of −1 and +1 was used to identify "hits", i.e., microRNAs that inhibit migration and promote migration, respectively (Figure 1D), and a strong positive correlation (Pearson's r = 0.80, p < 0.001) was found between time points across the entire library (Figure 1E). The screening identified 239 miRNAs with a Z-score < −1 and 192 miRNAs with a Z-score > +1 (Figure 1D). Z-score cut-offs were designed to be less stringent than those commonly applied, since PC3-EGFP cells are highly migratory and subsequent analysis was only performed with those microRNAs that inhibit migration (i.e., limited to the left of the distribution curve).
To determine whether these microRNAs were also detectable in clinical samples and associated with advanced disease, we performed in silico differential expression analysis of the Taylor dataset (25). This revealed 55 microRNAs that were significantly down-regulated (log FC > 1) in metastatic prostate cancer samples compared to primary tumor tissue (Supplementary Table 1). When these microRNAs were overlapped with microRNAs that inhibit migration in the screen, six microRNAs (miR-145-3p, -145-5p, -195-5p, -221-3p, -221-5p, -222-3p) were found to be common between the two datasets (Table 1).
Functional Screening for microRNAs Regulating Morphology
Epithelial-to-mesenchymal transition is considered to be a key component of metastatic progression. In vitro, the epithelial phenotype is characterized by a rounded morphology, whereas, mesenchymal cells tend to be spindle-shaped, a morphology change thought to promote invasion and migration. The shift between epithelial and mesenchymal states is increasingly being recognized as a dynamic process in cancer progression, and this plasticity could be regulated by microRNAs. We developed a second high-throughput screen to characterize the change in shape induced by overexpression of microRNAs as an indicator of epithelial-mesenchymal plasticity ( Figure 3A). PC3-EGFP cells were used for this screen, allowing for the use of GFP fluorescence in the morphological analysis. PC3 cells were transfected with a microRNA mimic library as described previously, and microscopy images were acquired 24 h following transfection. The microscopy images were segmented to identify individual cells as objects, with each cell represented by a distinct color so as to distinguish adjacent objects, and morphology features were extracted from these objects ( Figure 3B; Supplementary Figure 6). Eccentricity was chosen as a measure of mesenchymal morphology, from among a list of morphological parameters, due to its effectiveness in separating the positive and negative controls (Supplementary Figure 6). Per-plate normalization was performed to account for plateto-plate variation (Supplementary Figure 7). There was very high concordance between replicates (Pearson's r > 0.95) (Supplementary Figure 8) and a Q-Q plot of the data shows a left-skewed normal distribution (Supplementary Figure 9). Kernel density estimate curves of positive (mean Z-score ∼ −2) and negative (mean Z-score ∼ 0.7) controls were again well-separated with little overlap (Figure 3C). PC3 cells have a spindle-shaped mesenchymal-like morphology and the screening identified 243 miRNAs with a Z-score <-1, indicative of an epithelial transformation and 142 miRNAs with a Z-score >1 that induced a further elongated morphology indicative of a mesenchymal phenotype ( Figure 3D). However, similar to the migration screen, only miRs that induce a rounded morphology were analyzed further, as PC3-EGFP cells originally have a mesenchymal morphology. Data from the migration and morphology screens were combined to determine the degree of correlation between the two. A significant correlation was observed between migration and eccentricity for the controls alone (Pearson's r = 0.8, p < 0.001), and for all samples (Pearson's r = 0.36, p < 0.001), respectively ( Figure 3E), with 94 miRNAs found to both inhibit migration and induce a rounded morphology indicative of driving an epithelial phenotype ( Figure 3E). As for the migration data, microRNAs downregulated in metastatic samples in the MSKCC dataset were overlapped with microRNAs that induce a rounded morphology, identifying six microRNAs (hsa-let-7e-5p, hsa-miR-101-3p, hsa-miR-130a-3p, hsa-miR-148a-3p, hsa-miR-214-3p, hsa-miR-221-5p) as common between the two datasets ( Table 2).
FIGURE 2 | MicroRNAs identified as inhibitory in the migration screen are associated with an increase in disease-free survival. Following analysis of the Taylor dataset, those microRNAs that were significantly decreased in metastatic prostate cancer as compared to primary tumor were overlapped with microRNAs found to inhibit migration (Z-score < −1). The overlapping samples were then stratified into low- and high-expressing groups relative to the median for each microRNA. Kaplan-Meier survival curves for (A) miR-145-3p, (B) miR-221-5p, (C) miR-195-5p. p-values shown are corrected for multiple testing using the Bonferroni method.

To confirm that analysis of morphology using eccentricity alone is robust, we performed a multi-feature analysis with several morphology measures. A linear discriminant analysis model was trained with half of the control samples (training set) to classify them into epithelial, intermediate, or mesenchymal phenotypes (Supplementary Figure 10A). The model was then applied to a test set to calculate the mis-classification rate (Supplementary Figure 10B), and finally applied to unknown samples. This resulted in a much smaller set of samples classified as "epithelial" morphology, all of which, except one, were also identified as candidates using single-feature analysis (Eccentricity Z-score < −1) (Supplementary Figures 10C,D).
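The multi-feature linear discriminant analysis check described in the preceding paragraph can be sketched as follows with scikit-learn; the feature set, control counts and phenotype labels here are hypothetical placeholders rather than the study's actual measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical multi-feature morphology measurements for control wells.
# Columns: eccentricity, solidity, form factor (placeholder features).
n_ctrl = 120
X_ctrl = rng.normal(size=(n_ctrl, 3))
# Labels: 0 = epithelial, 1 = intermediate, 2 = mesenchymal control phenotype.
y_ctrl = rng.integers(0, 3, n_ctrl)

# Train on half of the controls, estimate mis-classification on the other half.
X_train, X_test, y_train, y_test = train_test_split(
    X_ctrl, y_ctrl, test_size=0.5, random_state=0)
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)
misclassification_rate = 1.0 - lda.score(X_test, y_test)
print(f"control mis-classification rate: {misclassification_rate:.2f}")

# Finally, classify the unknown (microRNA-transfected) wells.
X_unknown = rng.normal(size=(384, 3))
predicted_phenotype = lda.predict(X_unknown)
epithelial_candidates = np.where(predicted_phenotype == 0)[0]
print(len(epithelial_candidates), "wells classified as epithelial")
```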
Cell Viability Is an Important Confounder for Migration
Decreased cell viability caused by specific microRNAs (either by decreased proliferation or increased cell death) can result in apparently decreased migration as measured by the scratch assay. Further, dying cells may detach from the culture surface, appearing as rounded cells. Hence, we chose to consider the effect of microRNAs on the viability of prostate cancer cells, to identify microRNAs that strictly reduce only migration or eccentricity. PC3 cell viability data from Nikolic et al. (26) were used to account for any confounding of migration and morphology data. A moderate positive correlation (Pearson's r = 0.43, p < 0.001) was noted between migration and cell viability ( Figure 4A). On the other hand, there was a mild positive correlation (r = 0.23, p < 0.001) between eccentricity and cell viability (Figure 4B). In both cases, a lower viability cut-off of 0.8 and an upper cut-off of 1.2 were used to distinguish microRNAs that primarily affect migration or change in morphology without affecting cell viability (Figures 4A,B, Supplementary Tables 2, 3).
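A minimal sketch of how such a viability filter might be applied to the screen data is given below; the column names and simulated values are hypothetical, while the 0.8/1.2 viability cut-offs and the Z-score < −1 hit definition follow the text.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 1000  # number of microRNA mimics (hypothetical)

screen = pd.DataFrame({
    "mirna": [f"miR-{i}" for i in range(n)],
    "migration_z": rng.normal(0, 1, n),
    "eccentricity_z": rng.normal(0, 1, n),
    "viability": rng.normal(1.0, 0.15, n),   # relative viability, 1.0 = control
})

# Correlation between migration and viability (analogous to Figure 4A).
r, p = pearsonr(screen["migration_z"], screen["viability"])
print(f"migration vs. viability: r = {r:.2f}, p = {p:.3g}")

# Keep only microRNAs whose phenotype is not explained by a viability change:
# 0.8 < viability < 1.2, as in the text.
viable = screen["viability"].between(0.8, 1.2)
migration_hits = screen[(screen["migration_z"] < -1) & viable]
morphology_hits = screen[(screen["eccentricity_z"] < -1) & viable]
print(len(migration_hits), "viability-independent migration hits")
print(len(morphology_hits), "viability-independent morphology hits")
```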
Transcriptomic Analysis Reveals Distinct miRNA Profiles Associated With EMT in Prostate Cancer
Taken together, our screening strategies have identified a number of microRNAs that have functional effects to dysregulate migration and/or morphology. To further interrogate the role of microRNAs in prostate cancer metastasis, we have undertaken transcriptomic analysis in a cellular model of epithelial-mesenchymal transition. ARCaPE and ARCaPM human prostate cancer cell lines, derived from the parental line ARCaP, have been well-established as a model of epithelial-mesenchymal transition in prostate cancer (27)(28)(29)(30)(31). Importantly, they show remarkable differences in their ability to metastasise to bone and other organs in vivo, aligned with distinct phenotypic differences in vitro. Hence, we sought to identify microRNAs that are differentially expressed between these two cell lines, which may contribute to their functional metastatic differences. Small RNA (<200 bases) from both these cell lines was subjected to sequencing in triplicate (Supplementary Figure 11). Principal component analysis confirmed a distinct clustering of the two cell lines (Figure 5A). Unsupervised hierarchical clustering confirmed that microRNA expression profiles are distinctly different between the two cell lines, with 119 miRNAs significantly increased and 81 microRNAs significantly decreased in ARCaPM cells as compared to ARCaPE (Figures 5B,C, Supplementary Table 4). Further analysis revealed that many of the differentially expressed microRNAs belong to only a few microRNA families/clusters. Unsurprisingly, microRNAs in these clusters are co-expressed due to common promoters, which are known to be regulated by the Wnt signaling (32) and BMP signaling (33) pathways. The microRNAs that were most strikingly overexpressed in ARCaPE cells belong to the miR-372 and miR-302 clusters, which have been previously shown to be key regulators of EMT in embryonic stem cells (34) and induced pluripotent stem cells (35).
Integration of Functional Screening With Transcriptomic Profiling Identifies miRNAs and Seed Sequences Associated With Prostate Cancer Metastasis
When "miRs inhibiting migration" (migration Z-score < −1, viability > 0.8) and "miRs inducing rounded morphology, " (Eccentricity Z-score < −1, viability > 0.8) were combined with differentially expressed miRs from the RNA-seq data, there was minimal overlap between all the datasets, with only two microRNAs miR-373-3p and miR-302d-3p found to inhibit migration, induce a rounded morphology and exhibit higher expression in the epithelial ARCaPE cell line ( Figure 6A, Table 3). MiR-373 was confirmed to be expressed at higher levels in ARCaPE cells compared to ARCaPM cells (Figure 6B). Overexpression of miR-373 in ARCaPM cells (Figure 6C) resulted in significantly increased E-cadherin, and significantly decreased vimentin and ZEB1 mRNA expression (Figures 6D-F, p < 0.01). There is also a distinct shift in miR-373-overexpressing ARCaPM cells from mesenchymal to epithelial morphology (Figure 6G), supportive of epithelial plasticity. Further investigation revealed that the AAGUGC seed sequence was overrepresented (p < 0.01) in all three datasets FIGURE 6 | Integration of functional microRNA screening and differential microRNA expression. (A) MicroRNAs that were significantly highly expressed in ARCaPE prostate cancer cells ("miRs high in ARCaPE") were intersected with hits from the migration screen ("miRs inhibiting migration," Z-score < −1, viability > 0.8) and hits from the morphology screen ("miRs inducing rounded morphology," Eccentricity Z-score < −1, viability > 0.8) to identify two microRNAs that were common among the three datasets. (high in ARCaPE, inhibiting migration, and inducing rounded morphology, but not affecting viability), suggesting a common functional role of this sequence in regulating migration, morphology and EMT (Figures 7A,B). In support, seed analysis of samples classified as "epithelial" by the linear discrimination analysis model revealed an over-representation of the AAGUGC sequence (Supplementary Figure 10D). MicroRNAs that share this seed sequence belong to three main families, miR-372, miR-302 and miR-520 ( Figure 7C). In addition to the seed sequence, a homology can be noted in the positions 10, 13, 15, 20 for bases C, U, U, and G, respectively. On the other hand, microRNAs with the AAGUGC motif in positions 3-8 (belonging predominantly to microRNAs of the miR-17-92 cluster) have additional homology in positions 2, 15, 18, and 19 for A, U, A, and G, respectively (Supplementary Figure 12A). The mean Z-scores of miRs with Seed sequence-position 2-7. Migration Z-score < −1 and viability > 0.8. Eccentricity (morphology) Z-score < −1 and viability > 0.8. Log2FC (Fold Change) (ARCaPE vs. ARCaPM microRNA expression) > 1. AAGUGC at position 2-7 were −1.03 (migration) and −2.05 (morphology), as opposed to those with AAGUGC at position 3-8, which were 0.23 (migration) and −0.05 (morphology) (Supplementary Figure 12B). All microRNAs with the AAGUGC motif anywhere in their sequence is also shown for comparison (Supplementary Figure 12C).
DISCUSSION
Once prostate cancer has metastasized, the tumor becomes refractory to current treatment approaches and the malignancy is largely incurable. Understanding the molecular control of the metastatic process is critical in order to develop new effective approaches to combat this advanced disease. MicroRNAs cause changes in phenotype through the regulation of a network of target messenger RNAs. While there has been much focus on direct targets, functional characterization of the microRNAs deserves more attention, and is arguably more relevant for studying cancer cell behavior. Phenotypic screens have previously been used to identify key microRNA regulators of cell function, but here we developed a novel, integrative screening approach designed to identify key molecular regulators of metastatic progression by combining multiple functional analyses. As evidence is accumulating for the importance of microRNAs in prostate cancer, we used our integrative screening to identify key microRNAs that were associated with prostate cancer migration and EMT. The overall aim of this study was to use the functional screen to identify candidates that are of relevance in the clinical context, as well as in the context of epithelial-mesenchymal plasticity (in which migration and morphology are key functional readouts).
One of the most important aspects of the metastatic cascade is the ability of tumor cells to migrate. We have developed a high-throughput migration screen, measuring the movement of cells into the space made by a uniform scratch through a confluent cell layer. The migration screen demonstrated good reproducibility, plate uniformity and statistical validation metrics and enabled the high-throughput functional study of microRNAs on prostate cancer migration. Compared to previous examples of scratch assays performed in a high-throughput manner (8)(9)(10)(36), our assay shows similar or superior uniformity across wells. The very nature of the migration assay dictates that a migratory cell line is required, and PC3 prostate cancer cells are well-characterized for their representation of late-stage prostate cancer, migratory ability and metastatic behavior in vivo. However, it should be noted that, consequently, we predominantly identified microRNAs that inhibit migration and the assay was less sensitive in identifying microRNAs that promote migration.
Progression from localized prostate cancer to advanced disease is associated with a transition of prostate cancer cells from an epithelial phenotype to a more mesenchymal phenotype. The process of EMT is important to drive both local invasion and metastatic spread. One of the defining features of EMT is a change in morphology, from a rounded, epithelial shape to a more elongated, spindle-shaped morphology. This morphology change is well-documented in prostate cancer cells, with a change from rounded to spindle-shaped morphology associated with increased metastatic behavior and vice versa. The marked difference in shape renders this morphological change well-suited for high-content, high-throughput screening. In the current study, we have developed such a screen, taking advantage of the spindle-shaped morphology of PC3 prostate cancer cells that can be driven to a more rounded epithelial shape. Using automated microscopy, multi-parameter image processing and visualization tools, we have extracted quantitative data on multiple features associated with mesenchymal morphology, with eccentricity most representative of the morphologic changes. As with the migration screen, the morphology screen demonstrated good reproducibility, plate uniformity and statistical validation metrics. Simpson et al. studied the morphology of migrating cells at the leading edge of the scratch (8), whereas our use of a separate screen enabled a more in-depth analysis of changes in morphology. However, the left-skewed normal distribution of the data suggested that the assay is more sensitive for identifying microRNAs that induce an epithelial morphology compared to those that induce a mesenchymal morphology; this is unsurprising since PC3 cells have a spindle morphology in vitro. Correlation analysis of the two screens demonstrated a positive correlation between migration and morphology, providing support for the concept that a more mesenchymal morphology is important in driving migration. Previous high-throughput approaches to study the role of microRNAs in prostate cancer include identification of miRs regulating the expression of the androgen receptor (37,38), and miRs that regulate proliferation (39). While our screens are target-agnostic, and were not explicitly aimed at looking at the role of the androgen receptor, they are complementary to previous screens, and in combination with them, provide valuable insights into the functional role of microRNAs in prostate cancer.
Combining the morphology and migration functional screens with a microRNA mimic library enabled the high-throughput evaluation of the functional effect of microRNAs on these key aspects of metastatic behavior. Sixteen percent of miRNAs were found to inhibit prostate cancer cell migration and 19% were found to alter morphology, highlighting the importance of microRNAs in the regulation of these metastatic processes. A limitation of these functional screens is the necessity for overexpression of miRNAs. To address the question of basal levels of microRNAs driving metastasis, we performed transcriptomic analysis of a pair of prostate cancer cell lines known to differ in their epithelial and mesenchymal morphology, migratory behavior and metastasis in vivo, revealing distinct expression profiles in the metastatic, mesenchymal ARCaPM cells as compared to the non-metastatic, epithelial ARCaPE cells.
One of the challenges of prostate cancer research is the difficulty in isolating and working with primary cells, and as such, the integrative screen developed here takes advantage of well-characterized prostate cancer cell lines. To ensure the clinical relevance of our integrative screen approach, we have aligned our results with those from publicly available datasets of microRNA expression profiles from benign, primary prostate cancer or metastatic prostate cancer. This enables the further focusing of the hits identified from the screens, based upon their potential clinical relevance. Due to the paucity of large microRNA expression studies in men with advanced or metastatic prostate cancer, we were limited to only one dataset to study the clinical significance of selected microRNAs. Using this approach, six microRNAs (hsa-miR-145-3p, hsa-miR-145-5p, hsa-miR-195-5p, hsa-miR-221-3p, hsa-miR-221-5p, hsa-miR-222-3p) were found both to inhibit migration and to show reduced expression in metastatic prostate cancer (vs. primary tumors, Taylor dataset). Among these, low levels of three microRNAs (miR-145-3p, miR-221-5p, and miR-195-5p) identified in our migration screen were associated with a reduction in disease-free survival. Further, miR-221-5p also induced a rounded morphology in our screen. The significance of this microRNA is highlighted by a study by Kiener et al., where overexpression of the microRNA was shown to reduce migration, proliferation and colony formation in PC-3M-Pro4luc2 prostate cancer cells in vitro, and to inhibit extravasation in a zebrafish model in vivo (40). It is also interesting to note that the median fold change values for microRNAs that inhibit migration are higher than those for microRNAs that induce a rounded morphology (2.14 vs. 1.37, respectively) in the Taylor dataset, although the difference falls short of statistical significance (p-value = 0.052). This may suggest that migration is more important than morphology in the clinical context. We also analyzed miRTarBase (22), a manually curated database of experimentally validated targets, to identify the top 20 genes commonly targeted by the 11 clinically significant microRNAs (Supplementary Table 5). Interestingly, the microRNA processing genes AGO2 and DICER1 are targets of 7 and 6 of these microRNAs, respectively. Other common target genes include oncogenes such as MYC, CDK6 and CCND1, and the hormonal receptor gene ESR1.
While our individual screening approaches were successful in identifying multiple microRNAs with differential functional effects and/or expression profiles, the integration of the three screens proved effective in revealing those microRNAs that were common to all screens and therefore may have a greater contribution to the metastatic process. Further, the integration of viability data from Nikolic et al. (26) added stringency to the analysis. A moderate correlation was observed between viability and migration, suggesting that for a number of microRNAs, decreased migration may at least partly be due to decreased cell number. A mild correlation was also noted between viability and cell eccentricity. Hence, for further analysis, only microRNAs that do not alter viability were considered to strictly alter migration or morphology. The combination of those microRNAs with high expression in ARCaPE cells that inhibited migration and induced a rounded morphology (without reducing viability) identified two microRNAs: miR-373-3p and miR-302d-3p. Both of these microRNAs are known to regulate epithelial-mesenchymal transition as well as stem cell behavior by regulating the TGF-β signaling pathway (35,41,42). While the integrative approach we utilized in this study accounted for some known shortcomings (e.g., viability), thus yielding a small number of microRNAs as candidates for further study, hits from the individual screens may also be functionally important in their own right.
miR-373-3p has been previously associated with prostate cancer progression, providing strong support for the power of our integrative screening approach to identify key mediators of the metastatic process. miR-373-3p is known to induce mesenchymal-epithelial transition in prostate cancer cells by inducing the expression of E-cadherin (43) or inhibiting ZEB1 post-transcriptionally (24). In contrast, miR-373-3p has been shown to promote invasion and metastasis in breast and colon cancer cells (44), suggesting a changing role depending on tissue context. Interestingly, miR-373-3p was reported to be elevated in high-grade prostate cancer, which is counterintuitive to its functional role as an inhibitor of migration (45). The miR-302/367 cluster was recently shown by Guo et al. to be elevated in prostate cancer compared to normal prostate tissue, and shown to promote proliferation and androgen-independence by targeting the tumor suppressor gene LATS2 (46), suggesting that the role of these microRNAs may be a cumulative effect of several functional phenotypes. It should be noted, however, that Guo et al. transfected the entire miR-302/367 cluster into prostate cancer cells, whereas in our study, each microRNA was studied individually, highlighting the importance of miR-302d-3p.
MicroRNAs exhibit a sequence-specific function, and a 6-8 base region at their 5' end, the seed region, is important to this specificity (47). Analysis of the seed sequences revealed that the AAGUGC motif was over-represented in all three of the above datasets. A shared seed sequence, and a further homology in other positions of microRNAs belonging to four microRNA families (miR-372, miR-302, miR-520, and miR-519), may together account for a shared set of targets and, consequently, a shared function. Interestingly, Zhou et al. reported that miRs with the AAGUGC motif are oncogenic in non-small cell lung cancer cells, increasing their proliferation (48). While their definition of the AAGUGC motif included miRs with this sequence occurring anywhere in the seed region, we used a stricter definition for the AAGUGC seed, as those sharing the sequence in the 2-7 position appear to have a distinct function in our migration and morphology screens compared to those with this sequence in the 3-8 position. Sinkkonen et al. reported that microRNAs sharing the AAGUGC seed sequence are specific to mouse embryonic stem cells, and regulate DNA methylation in differentiating ES cells (49), and the miR-302 and miR-372 families are well-characterized as regulators of EMT in embryonic stem cells (34,50). In prostate cancer, in addition to the known role of miR-373 in inducing mesenchymal-epithelial transition, another microRNA, miR-371a-3p, which belongs to the same family and contains the AAGUGC sequence at position 1-6, is known to down-regulate the androgen receptor (37).
Taken together, we have developed an integrative screening approach, which combines functional screening with expression profiling and alignment with clinical data in order to narrow down the candidate microRNAs to those of greatest importance in prostate cancer progression. Using this screen, we have identified both novel microRNAs and a microRNA seed sequence that are strongly linked to metastatic behavior and prostate cancer progression. This approach provides the basis for developing new approaches to prevent disease progression, which could include targeting the specific microRNAs identified, or a detailed cellular and molecular investigation into their mechanisms of action. Further, seed analysis provides novel insights into the functional consequences of motifs and their position in the microRNAs. Thus, this new approach to identifying mechanisms that drive prostate cancer metastasis has implications for understanding cancer pathogenesis and the potential to reveal opportunities for developing innovative treatment approaches.
DATA AVAILABILITY STATEMENT
The datasets generated for this study can be found in the NCBI Gene Expression Omnibus (GSE145078) (51).
AUTHOR CONTRIBUTIONS
SR devised and performed experiments, analyzed data, and prepared manuscript. AH, PK, AS, and CY devised and performed experiments. DE contributed to experimental design. FH contributed to experimental design and concept. CE supervised this research, devised experiments, reviewed data, and prepared manuscript. All authors read and approved the final manuscript.
New constraints on the dark matter density profiles of dwarf galaxies from proper motions of globular cluster streams
The central density profiles in dwarf galaxy halos depend strongly on the nature of dark matter. Recently, in Malhan et al. (2021), we employed N-body simulations to show that the cuspy cold dark matter (CDM) subhalos predicted by cosmological simulations can be differentiated from cored subhalos using the properties of accreted globular cluster (GC) streams, since these GCs experience tidal stripping within their parent halos prior to accretion onto the Milky Way. We previously found that clusters that are accreted within cuspy subhalos produce streams with larger physical widths and higher dispersions in line-of-sight velocity and angular momentum than streams that are accreted within cored subhalos. Here, we use the same suite of simulations to demonstrate that the dispersion in the tangential velocities of streams ($\sigma_{v_\mathrm{Tan}}$) is also sensitive to the central DM density profiles of the parent dwarfs from which the GCs were accreted: cuspy subhalos produce streams with larger $\sigma_{v_\mathrm{Tan}}$ than those accreted inside cored subhalos. Using Gaia EDR3 observations of multiple GC streams we compare their $\sigma_{v_\mathrm{Tan}}$ values with simulations. The measured $\sigma_{v_\mathrm{Tan}}$ values are consistent with both an ``in situ'' origin and with accretion inside cored subhalos of $M\sim 10^{8-9}M_{\odot}$ (or very low-mass cuspy subhalos of mass $\sim 10^8M_{\odot}$). Despite the large current uncertainties in $\sigma_{v_\mathrm{Tan}}$, we find a low probability that any of the progenitor GCs were accreted from cuspy subhalos of $M_{\rm subhalo}\gtrsim 10^9 M_{\odot}$. The uncertainties on Gaia tangential velocity measurements are expected to decrease in the future and will allow for stronger constraints on subhalo DM density profiles.
INTRODUCTION
The true nature of dark matter (DM) is currently unknown (cf. Bertone et al. 2005) and our understanding of this mysterious particle is based primarily on theoretical predictions from cosmological simulations and observations of large scale structure (cf. Salucci 2019). While particle physicists have been working for decades to set limits on the mass of the putative DM particle, much is still unknown. For instance, the widely accepted cold dark matter (CDM) theory hypothesizes that the DM particle is non-relativistic ("cold"), collisionless and weakly interacting (White & Rees 1978; Blumenthal et al. 1984). The CDM framework predicts that galaxy halos (irrespective of their sizes) should possess cuspy DM distributions, with very steeply rising inner density profiles of the form ρ_DM ∝ r^−1 (Dubinski & Carlberg 1991; Navarro et al. 1997). Alternative theories differ from CDM in terms of the behaviour of their elementary particles (e.g. ultra-light DM, a.k.a. fuzzy DM, Hui et al. 2017), their interaction strength (e.g. self-interacting DM, Spergel & Steinhardt 2000; Elbert et al. 2015), etc. Interestingly, many of these alternative DM theories instead predict cored DM distributions on galactic/sub-galactic scales, where central densities are approximately constant. Therefore, measurements of the central DM densities in dwarf galaxies provide a possible avenue to constrain the fundamental properties of DM. It has also been previously suggested that the widths of dwarf galaxy streams can be a probe of their parent galaxy's DM density profiles (Errani et al. 2015).
Recently, in Malhan et al. (2021), we presented a new method of probing the central DM densities in dwarf galaxies using globular cluster (GC) stellar streams. Stellar streams are produced from the tidal stripping of a progenitor (e.g. a GC) as it orbits in the potential of the host galaxy. In the Milky Way (MW), nearly 100 streams have been detected to date (Mateu 2022). Among this set, some of the progenitor GCs of these streams are suspected to have been accreted; i.e., these GC streams originally evolved within their parent dwarf galaxies and only later merged with the MW (e.g., Malhan et al. 2019b,a; Bonaca et al. 2021; Malhan et al. 2022). Motivated by this scenario, we asked in Malhan et al. (2021): can the present-day physical properties of accreted GC streams inform us about the DM density profiles inside their parent dwarf galaxies? To explore this question, we ran several N-body simulations and showed that GCs that accrete within cuspy CDM subhalos produce streams that are substantially wider (physically) and dynamically hotter than those streams that accrete inside cored subhalos. This difference arises from the different dynamical evolution of GCs inside the two potential models (cuspy and cored), with the former causing larger tidal stripping of the GC (inside the parent subhalo) than the latter. This implies that the physical properties of accreted GC streams provide a means to probe the DM density profiles inside their parent dwarfs.
In Malhan et al. (2021), the physical properties of the streams were quantified in terms of their a) transverse physical widths (w), b) dispersion in the line-of-sight (los) velocities (σ_vlos), and c) dispersion in the z-component of angular momenta (σ_Lz). We found that these parameters differ between cuspy and cored halos (see Figure 7 of Malhan et al. 2021). In particular, the parameters σ_vlos and σ_Lz depend on the spectroscopic los velocities, which we lack for the majority of stream stars. However, with the ESA/Gaia mission (Gaia Collaboration et al. 2016), we now possess excellent proper motions and parallaxes for millions of halo stars, and these data can be used to measure the tangential velocities (v_Tan) of stream stars. Our aim in this work is to show that the intrinsic dispersion in the tangential velocities of stream stars (σ_vTan) can be used as an alternative parameter to differentiate between the cusp/core scenarios (at least for halos of mass ≳ 10^9 M⊙), and this provides a new means to probe the central DM density profiles inside the dwarf galaxies.
This article is arranged as follows. Section 2 details the computation of σ_vTan for the simulated stream models produced in cuspy vs. cored halos. Section 3 describes the procedure to measure σ_vTan of the observed streams of the MW using the Gaia EDR3 dataset (Lindegren, Lennart et al. 2020). Finally, in Section 4, we compare σ_vTan of the observations and the simulations and provide our conclusions.

N-BODY SIMULATIONS OF GC STREAMS

The N-body stream simulations in Malhan et al. (2021) were of two types: those that were produced by in situ GCs and those that were produced by accreted GCs.
The in situ GC streams arise from GCs that formed inside the MW, and whose evolution has been primarily determined by the MW potential. In Malhan et al. (2021), we simulated n = 5 in situ GC streams. The progenitor GCs were modeled by King profiles (King 1962) with masses ranging from M_GC = [3 − 10] × 10^4 M⊙, central potentials ranging from W = 1.5 − 3 and tidal radii from r_t = 0.05 − 0.2 kpc. This mass range was motivated by previous studies on clusters and streams of the Milky Way (e.g., Baumgardt 2016; Thomas et al. 2016). The star particles had individual masses of 5 M⊙ and softenings of 2 pc. To evolve these N-body GC models in a host Galactic potential (that mimics the MW), we used model #1 of Dehnen & Binney (1998). This is a static, axisymmetric potential comprising a thin disk, a thick disk, interstellar medium, bulge and DM halo. The simulations were evolved for T = 8 Gyr using the collisionless GyrfalcON integrator (Dehnen 2002) from the NEMO package (Teuben 1995).
To produce accreted GC streams, we used a total of 4 parent subhalo models constructed using the Dehnen model (Dehnen 1993). The Dehnen density profile is expressed as

ρ(r) = (3 − γ) M_0 r_0 / [4π r^γ (r + r_0)^(4−γ)] , (1)

where M_0, r_0, and −γ are the mass, scale radius, and the logarithmic slope of the inner density profile of the subhalo, respectively. Two of the subhalos possess cuspy (NFW-like) profiles and two possess cored density profiles. These subhalos are described as 1) SCu (small/cuspy) model: {M_0, r_0, γ} = {10^8 M⊙, 0.75 kpc, 1}; 2) SCo (small/cored) model: {M_0, r_0, γ} = {10^8 M⊙, 0.75 kpc, 0}; 3) LCu (large/cuspy) model: {M_0, r_0, γ} = {10^9 M⊙, 1.60 kpc, 1}; and 4) LCo (large/cored) model: {M_0, r_0, γ} = {10^9 M⊙, 1.60 kpc, 0}. This mass range was adopted because it is similar to the masses of some of the dwarf galaxies that host GCs (e.g., Forbes et al. 2018), and also similar to the mass of the (hypothesized) parent dwarf galaxy of the "GD-1" stream (Malhan et al. 2019a,b). The mass and softening parameters of the DM particles were 750 M⊙ and 20 pc, respectively^1. Each subhalo model was populated with one GC model, and this GC was placed at an off-centre location and was launched on an orbit inside the subhalo. At the same time, the subhalo was launched on an orbit inside the host Galactic potential. The integration time of every simulation was T = 8 Gyr and the GC spends ∼ 3 − 4 Gyr inside the parent subhalo before escaping into the host (see Malhan et al. 2021).
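For reference, the Dehnen (1993) profile of equation (1) can be evaluated directly to contrast the cuspy and cored subhalo models; the short sketch below compares the SCu (γ = 1) and SCo (γ = 0) models at small radii using the parameters listed above.

```python
import numpy as np

def dehnen_density(r, M0, r0, gamma):
    """Dehnen (1993) profile: rho(r) = (3-gamma) M0 r0 / (4 pi r^gamma (r+r0)^(4-gamma)).

    r and r0 in kpc, M0 in Msun -> density in Msun / kpc^3.
    """
    return (3.0 - gamma) * M0 * r0 / (4.0 * np.pi * r**gamma * (r + r0)**(4.0 - gamma))

r = np.logspace(-2, 0, 5)  # 0.01 to 1 kpc

rho_cuspy = dehnen_density(r, M0=1e8, r0=0.75, gamma=1.0)   # SCu model
rho_cored = dehnen_density(r, M0=1e8, r0=0.75, gamma=0.0)   # SCo model

for ri, rc, rk in zip(r, rho_cuspy, rho_cored):
    print(f"r = {ri:6.3f} kpc   rho_cuspy = {rc:10.3e}   rho_cored = {rk:10.3e}")
```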
We ran over 100 N-body simulations of accreted GC streams, including many different orbital configurations of the GCs inside the subhalo (see Table 1 of Malhan et al. 2021). The majority of orbits of the subhalos (hosting the GC) within the MW were circular (with galactocentric radius ∼ 60 kpc), and only a few were eccentric. Furthermore, while most of the simulations employed subhalos that lacked an extended population of stars, we did experiment with a few cases that included a stellar population (as expected for dwarf galaxies). However, we found that in both cases (with and without the stellar population), the final morphologies of the accreted GC streams were the same.
All of the GC stream models were transformed from the galactocentric Cartesian coordinates to the heliocentric equatorial coordinates from which we measure σ vTan of the simulated streams. This transformation provided for every star particle its position (α, δ), heliocentric distances (d ) and proper motions (µ * α ≡ µ α cosδ, µ δ ) 2 . Here, we use all of these quantities to measure σ vTan of streams. Note that these are the same quantities that are provided by the Gaia dataset, except for d (as Gaia provides only parallaxes of stars). In Section 3 we discuss how we use Gaia parallaxes to estimate the distances to stream stars.
Computing tangential velocity dispersion (σ vTan ) of simulated streams
To compute the dispersion in the tangential velocity of a given stream (σ_vTan), we first compute the tangential velocities of the individual member stars (v_Tan). The tangential velocity is defined as v_Tan = k × d⊙ × µ, where k = 4.7405 km s^−1 kpc^−1 (mas yr^−1)^−1 and µ = sqrt(µ*_α^2 + µ_δ^2). Instead of computing σ_vTan, one may be tempted to directly compute the dispersion in the proper motions, since it is the proper motions of stars that are provided by the Gaia dataset. However, proper motions are distance dependent; we therefore use the dispersion in tangential velocities, which is independent of distance.
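A minimal sketch of this tangential velocity computation (the example star is hypothetical):

```python
import numpy as np

K = 4.7405  # km/s per (kpc * mas/yr)

def tangential_velocity(d_kpc, pmra_cosdec, pmdec):
    """v_Tan = k * d * mu, with mu the total proper motion in mas/yr."""
    mu = np.hypot(pmra_cosdec, pmdec)
    return K * d_kpc * mu

# Hypothetical star: 10 kpc away with proper motion components of a few mas/yr.
print(tangential_velocity(10.0, 3.0, -4.0))  # -> 4.7405 * 10 * 5 ~ 237 km/s
```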
To compute σ_vTan of the simulated streams, we follow a similar approach to that used in Malhan et al. (2021) to measure other dynamical quantities. For a given stream, we first transform the positions of its member stars from the equatorial (α, δ) coordinate system to the (φ_1, φ_2) coordinate system, where φ_1 is the angle aligned with the stream and φ_2 is the angle perpendicular to it. Next, we consider small segments along φ_1 of length 30° and compute σ^s_vTan,i independently for each i-th segment (the reason for undertaking this "segment-wise" calculation is described below). To compute σ^s_vTan,i in a given segment, we first fit the v_Tan of the star particles using a smooth function of the form

v_Tan(φ_1) = a_1 φ_1^2 + b_1 φ_1 + c_1 , (2)

where a_1, b_1, c_1 are the fitting parameters, to obtain the systemic value of v_Tan(φ_1). After this, we subtract the fitted v_Tan function from the v_Tan of the star particles to obtain the residual distribution. The standard deviation of this distribution provides the tangential velocity dispersion for the i-th segment of the stream (i.e., σ^s_vTan,i). This procedure is iterated over all the segments in a given stream. Finally, the median and the standard deviation of the σ^s_vTan,i distribution provide the σ_vTan measurement for the entire stream and the dispersion on this measurement, respectively. We use this procedure to compute σ_vTan for all the N-body stream models. The reason we compute σ_vTan independently for each segment of a stream is that many accreted GC streams are long and highly complex in structure (see Figures 1, 5, 6 of Malhan et al. 2021); it is therefore difficult to approximate the entire stream with a single function. Nonetheless, our procedure to obtain σ_vTan also provides the dispersion on the σ_vTan measurements.

Figure 1 (upper panel) shows the σ_vTan values of the simulated streams as red stars (in situ), grey circles (cored) and black diamonds (cuspy); the dispersions on their σ_vTan are shown with error bars. A visual inspection of this figure already indicates that streams produced in different scenarios (i.e. in situ, cuspy and cored) possess quite different values of σ_vTan.

Figure 1. Using σ_vTan of GC streams to probe the DM density profiles inside their parent subhalos (or parent dwarf galaxies). Upper panel: each red/black/gray point represents the tangential velocity dispersion (σ_vTan) of a particular simulated stream, and the corresponding error bar reflects the dispersion in the σ_vTan measurement along that stream. The Y axis denotes different simulations. The red points correspond to the in situ GC stream models and the black/gray points correspond to streams that accreted inside cuspy/cored subhalos (where small/large markers correspond to cases where the subhalos had mass M = 10^8 M⊙/10^9 M⊙). The colored triangles are σ_vTan values we measure for 5 Milky Way streams, using Gaia EDR3 data. Lower panel: red/black/gray Gaussians correspond respectively to the distribution of simulated σ_vTan values from the in situ/cuspy/cored scenarios (including the scatter in the σ_vTan measurements). Gaussians with thin/thick borders correspond to cases where the subhalos had mass M = 10^8 M⊙/10^9 M⊙. In summary, in situ GC streams (red stars) possess extremely low values of σ_vTan, GC streams accreted inside cuspy CDM subhalos (black diamonds) possess very large values of σ_vTan, while streams accreted inside cored subhalos (gray circles) lie in between.
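The segment-wise dispersion estimate described above can be sketched as follows; the quadratic trend assumed here follows equation (2), and the stream data below are synthetic.

```python
import numpy as np

def segment_sigma_vtan(phi1, v_tan, segment_deg=30.0):
    """Median of per-segment tangential-velocity dispersions along a stream.

    For each segment of length `segment_deg` in phi_1, fit a quadratic trend
    to v_Tan(phi_1), subtract it, and take the standard deviation of the
    residuals; return the median and scatter of these per-segment values.
    """
    sigmas = []
    edges = np.arange(phi1.min(), phi1.max() + segment_deg, segment_deg)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (phi1 >= lo) & (phi1 < hi)
        if sel.sum() < 5:          # skip poorly populated segments
            continue
        coeffs = np.polyfit(phi1[sel], v_tan[sel], deg=2)
        residuals = v_tan[sel] - np.polyval(coeffs, phi1[sel])
        sigmas.append(residuals.std(ddof=1))
    sigmas = np.asarray(sigmas)
    return np.median(sigmas), sigmas.std(ddof=1)

# Synthetic stream: smooth trend plus ~2 km/s of intrinsic dispersion.
rng = np.random.default_rng(3)
phi1 = rng.uniform(-60, 60, 2000)
v_tan = 200 + 0.5 * phi1 + rng.normal(0, 2.0, phi1.size)
print(segment_sigma_vtan(phi1, v_tan))   # should recover roughly 2 km/s
```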
For a given in situ/cored/cuspy scenario, we quantify the spread in the σ_vTan distribution by modeling the σ_vTan measurements with a Gaussian function of mean x̄ and intrinsic dispersion σ_x. To this end, we use the MCMC sampler emcee (Foreman-Mackey et al. 2013) and define the log-likelihood function for every stream segment i as

ln L_i = −(1/2) [ ln(2π(σ_x^2 + δ_i^2)) + (x_i − x̄)^2 / (σ_x^2 + δ_i^2) ] , (3)

where x_i is the σ_vTan of the stream segment i and δ_i is the dispersion on σ_vTan of that segment of the stream.
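A sketch of this Gaussian-with-intrinsic-dispersion fit, using emcee and the log-likelihood of equation (3); the flat priors, walker setup and per-segment values below are illustrative assumptions rather than the actual configuration used in the study.

```python
import numpy as np
import emcee

def log_likelihood(theta, x, delta):
    """Gaussian model with mean xbar and intrinsic dispersion sigma_x,
    broadened by the per-segment measurement uncertainties delta (equation 3)."""
    xbar, sigma_x = theta
    var = sigma_x**2 + delta**2
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - xbar)**2 / var)

def log_posterior(theta, x, delta):
    xbar, sigma_x = theta
    if not (0.0 < sigma_x < 50.0) or not (-50.0 < xbar < 50.0):
        return -np.inf                     # simple flat priors (an assumption)
    return log_likelihood(theta, x, delta)

# Hypothetical per-segment sigma_vTan measurements and their uncertainties (km/s).
rng = np.random.default_rng(4)
x = rng.normal(2.0, 0.5, 20)
delta = np.full_like(x, 0.3)

nwalkers, ndim = 16, 2
p0 = np.column_stack([rng.normal(2.0, 0.1, nwalkers), rng.uniform(0.1, 1.0, nwalkers)])
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=(x, delta))
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)
print(f"mean = {samples[:, 0].mean():.2f} km/s, "
      f"intrinsic dispersion = {samples[:, 1].mean():.2f} km/s")
```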
For the in situ GC streams, we find σ_vTan = 0.5 ± 0.1 km s^−1, implying that these streams are dynamically very cold. Furthermore, the cuspy SCu (LCu) subhalo with mass M_0 = 10^8 (10^9) M⊙ produced GC streams with σ_vTan = 3.7 ± 0.2 (8.5 ± 0.4) km s^−1, implying that these streams are dynamically very hot. For the cored SCo (LCo) subhalo we infer σ_vTan = 1.6 ± 0.3 (2.1 ± 1.0) km s^−1. This difference in the σ_vTan measurement of streams produced in cuspy vs. cored subhalos implies that the present-day σ_vTan of streams is sensitive to the gravitational potential of their parent subhalos (i.e., σ_vTan ∝ M_0/r_0). Some degeneracy between subhalo mass and the presence of a cusp is apparent in Figure 1. This issue is discussed further in Section 4.

Figure 2 (caption fragment). Panel (d) shows the residuals of the v_Tan(obs) after the mean trend has been subtracted off, and the "blue band" represents the intrinsic dispersion. In panel (d), the quoted σ_vTan value represents the median and the corresponding uncertainties reflect the 16th to 84th percentile range of the distribution (see text). Specifically for "GD-1", we compare our distance fit with that of its orbit solution, only to ensure that our distance solutions are reliable (the orbit solution is taken from Malhan & Ibata 2019).
TANGENTIAL VELOCITY DISPERSIONS OF THE MILKY WAY STREAMS
There is now mounting evidence that some of the GC streams that orbit the MW halo were accreted from dwarf galaxies (e.g., Malhan et al. 2019b,a; Gialluca et al. 2021; Bonaca et al. 2021). This implies that the σ_vTan measurements of these streams provide an opportunity to test the prediction that we obtained above, and thus to understand whether the parent dwarfs of these streams possessed cuspy or cored DM distributions.
Here, we measure σ_vTan of n = 5 streams, namely "GD-1", "Phlegethon", "Fjörm", "Gjöll" and "Sylgr". The reason for choosing these particular streams is that (1) they are GC streams^3, (2) they have been hypothesized to be of accreted origin (e.g., Bonaca et al. 2021; Malhan et al. 2022), and (3) they are long streams that also possess high stellar densities, and are thus suitable for performing the intended analysis. All of these streams are quite metal poor (with their [Fe/H] lying below ∼ −2 dex, Malhan et al. 2022), and this further supports the accretion scenario. The member stars of these streams are taken from the Ibata et al. (2021) catalogue. The streams in this catalogue were detected in the Gaia EDR3 dataset using the STREAMFINDER algorithm (Ibata et al. 2019). In this catalogue, every star possesses a Gaia EDR3 based position (α, δ), parallax (ϖ), proper motions (µ*_α, µ_δ) and photometry (G, G_BP, G_RP), along with the associated uncertainties. The photometric information is used along with the parallaxes to improve the distance estimates (see below). The parallaxes are corrected for the global parallax zero-point in Gaia EDR3 (Lindegren, Lennart et al. 2020) and the photometry is corrected for extinction following Ibata et al. (2021). These streams are shown in Figures 2 and 3.
To measure σ_vTan of these streams, we follow a similar procedure to that described in Section 2, with slight modifications in order to account for the observational errors. First, we transform the positions of the stream stars from (α, δ) to the (φ_1, φ_2) coordinates aligned with each stream. This is shown in panels (a) of Figures 2 and 3. Next, we compute the distance of the stream as a function of φ_1 (which can then be multiplied by the proper motions to obtain v_Tan); we do not simply compute the average parallax of the stream, since this can bias the σ_vTan measurement for streams with distance gradients. Therefore, to properly account for possible distance gradients, we follow a pragmatic approach. In a given stream, we consider segments along φ_1 of length ≈ 10°. This length allows us to have a minimum of 15 stars in every segment. For each segment we use the stars to compute the uncertainty-weighted mean parallax value (along with the uncertainty on this mean parallax). A reliable estimate of the mean parallax value requires a sufficiently large number of stars in a given segment. Taking the inverse of this mean parallax provides the average heliocentric distance (d⊙) of that segment (along with the uncertainty on d⊙). This d⊙ value is computed for all the segments of the stream, which provides a means to constrain the distance gradient of the entire stream structure. These distance measurements are shown in panels (b) of Figures 2 and 3. The typical distance uncertainty (per segment) is ≈ 0.5 kpc.
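The per-segment distance estimate can be sketched as an inverse-variance weighted mean of the parallaxes, inverted to a distance with first-order error propagation; the numbers below are hypothetical.

```python
import numpy as np

def segment_distance(parallax_mas, parallax_err_mas):
    """Inverse-variance weighted mean parallax of the stars in one ~10 deg
    segment, inverted to a heliocentric distance in kpc (with its uncertainty)."""
    w = 1.0 / parallax_err_mas**2
    plx_mean = np.sum(w * parallax_mas) / np.sum(w)
    plx_mean_err = np.sqrt(1.0 / np.sum(w))
    d_kpc = 1.0 / plx_mean                      # parallax in mas -> distance in kpc
    d_err_kpc = plx_mean_err / plx_mean**2      # first-order error propagation
    return d_kpc, d_err_kpc

# Hypothetical segment with 15 stars at ~8 kpc (parallax ~0.125 mas).
rng = np.random.default_rng(5)
plx = rng.normal(0.125, 0.03, 15)
plx_err = np.full(15, 0.03)
print(segment_distance(plx, plx_err))
```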
Next, for a given stream, we fit these d⊙ measurements using a function similar to that described by equation 2 (except that this time we fit the entire stream at once, and not in individual segments). This fitting is performed using emcee and it takes into account the uncertainties in the d⊙ measurements. The posterior on the parameters a_1, b_1, c_1 provides the distance fit (as a function of φ_1) and the spread of the posterior provides the uncertainty on this distance fit. Effectively, this procedure allows us to estimate the distance (and the uncertainty) for every star using its φ_1 value. For a given star, we can now multiply its distance by its proper motion to obtain its v_Tan (as explained below).
In passing, we also note that the above distance fitting procedure is augmented with information on the color magnitude diagram (CMD) of the stars ([G_BP − G_RP, G], which comes from Gaia EDR3); the CMD information is used as a prior in our likelihood evaluation (see Appendix A). Since the scatter in the CMDs of all the streams is reduced after this distance correction step, it gives us confidence that the estimated distances are reliable. This is because streams, in general, have distance gradients; therefore, their observed CMDs are slightly smeared out in apparent magnitude. However, if the observed magnitude of each star is corrected by its "true" distance value, then the corrected CMD should have a reduced scatter. Here, we quantify the scatter in a stream's CMD using the k-nearest neighbors algorithm (implemented using the NearestNeighbors module in the sklearn package). For this, we set the parameters n_neighbors=10 and metric='euclidean'. In Figure 4, we compare the distance-corrected CMDs with the observed CMDs. Furthermore, we also note that our fitted distance solutions are compatible with the distance measurements of Bailer-Jones et al. (2021), as shown in Appendix B.
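A sketch of how such a k-nearest-neighbour scatter statistic might be computed with scikit-learn follows; the CMD values are synthetic, and the exact scatter metric used in the study may differ from the mean neighbour distance adopted here.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def cmd_scatter(bp_rp, g_mag, n_neighbors=10):
    """Mean distance to the 10 nearest neighbours in the (BP-RP, G) plane,
    used here as a simple scatter statistic for a stream's colour-magnitude diagram."""
    X = np.column_stack([bp_rp, g_mag])
    nn = NearestNeighbors(n_neighbors=n_neighbors + 1, metric="euclidean").fit(X)
    dist, _ = nn.kneighbors(X)
    return dist[:, 1:].mean()       # drop the zero self-distance

# Synthetic CMDs: distance-corrected magnitudes should show less scatter.
rng = np.random.default_rng(6)
bp_rp = rng.normal(0.8, 0.1, 200)
g_obs = 18.0 + 2.0 * bp_rp + rng.normal(0, 0.30, 200)   # smeared by distance gradient
g_corr = 18.0 + 2.0 * bp_rp + rng.normal(0, 0.10, 200)  # after distance correction
print(cmd_scatter(bp_rp, g_obs), ">", cmd_scatter(bp_rp, g_corr))
```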
In a given stream, to obtain the v_Tan measurements of the member stars, we multiply the above distance solutions by the Gaia proper motions. For a given star, the uncertainties on the distance solution and on the proper motions provide the uncertainty on the v_Tan measurement. Using these v_Tan measurements (along with their uncertainties), the stream is fitted using equation 2; here the entire stream structure is fitted at once, and not in segments. The best-fit solutions for v_Tan for all the streams are shown in panels (c) of Figures 2 and 3. We further note that our fitted v_Tan solutions appear consistent with independent estimates (see Appendix B).

Finally, to obtain the σ_vTan measurement of a given stream, we subtract off the above v_Tan fit, as the systemic velocity of the stream, from the measured v_Tan values. We then model the residuals with a Gaussian distribution, including the uncertainties on v_Tan, to derive the σ_vTan of the stream. These residuals are shown in panels (d) of Figures 2 and 3. For the resulting posterior distribution, the median and the 16th/84th percentiles provide the σ_vTan of the stream and the uncertainty on σ_vTan, respectively. These values are shown in panels (e) of Figures 2 and 3 and are also plotted in Figure 1.
In Appendix C we demonstrate that these σ_vTan measurements of the streams are robust. Table 1 provides the z-score (for a two-tailed hypothesis test) and the corresponding p-value for the null hypothesis that an observed σ_vTan measurement (with its associated uncertainty) is drawn from the Gaussian distribution for one of the 5 simulation scenarios shown in the lower panel of Figure 1. For a given stream s and a given scenario i, the z-score is computed as z_{s,i} = (σ_vTan,s − σ_vTan,i)/σ, where σ_vTan,s is the σ_vTan measurement of the observed stream s, σ_vTan,i corresponds to that of scenario i, and σ is the sum in quadrature of the uncertainties on these two quantities. A given p-value implies that the hypothesis that the observed stream was drawn from the population describing a given simulation scenario can be rejected with a confidence of (1 − p) × 100% (e.g. p = 0.01 implies the null hypothesis can be rejected at the 99% level).
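The z-score and two-tailed p-value computation can be sketched as follows; the example numbers compare a hypothetical observed stream against the LCu scenario value quoted in the text.

```python
import numpy as np
from scipy.stats import norm

def two_tailed_p(sigma_obs, err_obs, sigma_sim, err_sim):
    """z-score and two-tailed p-value for the hypothesis that an observed
    sigma_vTan is drawn from the Gaussian describing a simulation scenario."""
    z = (sigma_obs - sigma_sim) / np.hypot(err_obs, err_sim)
    p = 2.0 * norm.sf(abs(z))
    return z, p

# Hypothetical example: an observed stream with sigma_vTan = 1.5 +/- 1.0 km/s
# compared against the LCu (cuspy, 10^9 Msun) scenario, 8.5 +/- 0.4 km/s.
print(two_tailed_p(1.5, 1.0, 8.5, 0.4))
```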
CONCLUSION AND DISCUSSION
We draw our main conclusions by inspecting Figure 1 and Table 1. They compare the predicted values of σ vTan (that we obtained by analysing N-body GC stream models produced in different DM scenarios) with the observations (coming from the MW streams). The bottom panel of Figure 1 shows Gaussians that quantify the scatter in σ vTan measurements of the simulated streams produced in in situ/cored/cuspy scenarios. These Gaussians imply that: 1) in situ GC streams should possess σ vTan = 0.5 ± 0.2 km s −1 , 2) GC streams accreted inside the cuspy SCu (LCu) subhalo with mass M 0 = 10 8 (10 9 ) M should possess σ vTan = 3.7 ± 0.2 (8.5 ± 0.4) km s −1 , and 3) GC streams accreted inside the cored SCo (LCo) subhalo with mass M 0 = 10 8 (10 9 ) M should possess σ vTan = 1.6 ± 0.3 (2.1 ± 1.0) km s −1 . We summarize our main results below.
• N-body simulations of GC tidal streams accreted from dwarf galaxies with different central DM density profiles (cuspy vs. cored) show that there Note-Left most column provides the name of the observed stream; the next 4 columns give z-score (and corresponding pvalues) for the hypothesis that the observed stream is drawn from the Gaussian distribution describing simulated streams from: the in-situ scenario, the SCu dwarf galaxy scenario, the LCu scenario, the SCo scenario and the LCo scenario.
are significant and measurable differences in the observed σ vTan (the tangential velocity dispersion stars in the stream) that reflect the nature of the central density profiles of their parent dwarf galaxies.
• Current Gaia EDR3 proper motions and parallaxes are used to determine σ_vTan for the 5 GC streams ("GD-1", "Phlegethon", "Fjörm", "Gjöll", "Sylgr") studied in this work. It is not possible with current Gaia observational uncertainties to reject the hypothesis that these streams were formed in situ. Most of the observed GC streams in this study orbit at a galactocentric distance of ≈ 20 kpc, while the in situ stream models in Malhan et al. (2021) were simulated with orbital radii of 60 kpc. Therefore, for a fair comparison, we ran 5 additional N-body simulations of streams under the in situ framework, but this time adopting a GC orbital radius of ≈ 20 kpc. These additional streams can be seen as the top 5 red markers in Figure 1. These additional simulations do not alter our conclusion on this point.
• If, however, the progenitor GCs of the MW streams analysed here were indeed accreted, as previously argued (see below), our σ_vTan measurements enable us to reject with high confidence the hypothesis that their parent dwarf galaxies were cuspy with M_0 ≳ 10^9 M⊙. We can also reject higher-mass cuspy subhalos, since GC streams from such dwarfs are expected to be even hotter. Also, it is not possible that these MW streams would have originated from lower-mass cuspy subhalos, because dwarfs with M_0 ≲ 10^8 M⊙ are not expected to host any GC population (e.g., Forbes et al. 2018). In view of these arguments, our current analysis disfavours cuspy CDM subhalos.
• The Gaia uncertainties on proper motions and parallaxes are currently too large to definitively determine whether the parent subhalos of these streams were cored or cuspy with M 0 = 10 8 M .
• Additionally, we recompute σ vTan of 5 GC streams by incorporating the "systematic errors" present in Gaia EDR3's proper motions and parallaxes (for details on these errors, see Lindegren, Lennart et al. 2020). This analysis is performed to examine the impact of these errors on the σ vTan values that we measure in this work. As shown in Appendix D, the inclusion of these systematic errors only minutely change the σ vTan measurements, and they do not affect the final conclusion of this work.
Although we are unable to definitively rule out (based on the kinematic analysis done here) the possibility that the progenitors of these streams were in situ GCs, there are other lines of evidence that indicate most of them have an accreted origin. The in situ GC population is overall redder and more metal rich than the accreted GC population (e.g. Kruijssen et al. 2019). Furthermore, orbital action-space clustering of GCs and halo stars, and a comparison of the metallicities of GCs and those same halo stars, has been used to assign many accreted GCs, including GD-1, to previous merger events (Myeong et al. 2019; Massari et al. 2019; Kruijssen et al. 2020; Bonaca et al. 2021; Malhan et al. 2022). The metal-rich in situ GC population has a slight net prograde rotation, while the accreted GC population has no net rotation, but subsets associated with specific accretion events can be seen to be clustered in angular momentum (Massari et al. 2019). In addition to having a nearly circular and retrograde orbit, GD-1 is extremely metal poor, with a mean metallicity of −2.2 dex (Malhan & Ibata 2019), much closer to the metallicity of dwarf spheroidal satellites of the MW (Kirby et al. 2013) than to in situ GCs (Zinn 1985).
In addition to σ_vTan, the other two stream parameters that are also useful for probing the DM density profiles inside dwarfs are the transverse physical width (w) and the dispersion in the los velocity (σ_vlos). Malhan et al. (2021) showed that in situ GCs produce streams with (w, σ_vlos) = (45 ± 15 pc, 0.7 ± 0.2 km s^−1), GCs accreted in cuspy subhalos produce streams with (w, σ_vlos) ≳ (650 pc, 4 km s^−1), and somewhat smaller values, (w, σ_vlos) ∼ (90 − 500 pc, < 4 km s^−1), result when GCs accrete inside cored subhalos^4. A combination of multiple parameters could provide a stronger means to probe the DM density profile inside the parent dwarf. For instance, we can in principle compare the predicted w values with the recent w measurements of other MW streams (e.g., Bonaca et al. 2020b; Ferguson et al. 2022; Tavangar et al. 2022) to comment on their "accretion" origin. For GD-1, while its σ_vTan measurement appears to be more consistent with the in situ scenario (see Table 1), consideration of these additional parameters, w (= 130 +30/−20 pc, Malhan et al. 2019b) and σ_vlos (= 2.1 ± 0.3 km s^−1, Gialluca et al. 2021), suggests that GD-1 was likely accreted inside a cored subhalo.
In summary, our analysis indicates that 4 (out of 5) MW streams shown in Figure 1 favor cored DM subhalos over cuspy CDM subhalos. Although this inference is based on only two subhalo masses (i.e., M_0 = 10^8 M⊙ and 10^9 M⊙), we argue that it is unlikely that these streams could have accreted inside cuspy subhalos of higher mass since such streams would be even hotter. The origin of cored subhalos is still hotly debated. While cored subhalos are favored by alternative DM candidates, hydrodynamical simulations have shown that DM cores can result from the erasure of DM cusps if the dwarf galaxy had a sufficiently vigorous and episodic star formation phase (e.g. Pontzen & Governato 2012). Under such a scenario, the resulting cored subhalo would still be consistent with the CDM paradigm. Recent cosmological hydrodynamic simulations predict that subhalos with M_0 ≲ 10^10 M⊙ would have formed too few stars over their lifetimes, and the resulting baryonic feedback is too weak to unbind their DM cusps (e.g., Lazar et al. 2020). If a significant fraction of tidal streams from accreted GCs are found to be dynamically consistent with having originated from cored subhalos with M_0 ≲ 10^10 M⊙, then we may be forced to move to models beyond CDM. However, additional simulations with a greater variety of dwarf galaxy properties and orbital initial conditions are needed before firm conclusions can be drawn.

^4 These constraints are based on the subhalo models with masses M_0 = 10^8 M⊙ and 10^9 M⊙.
Cosmological hydrodynamical zoom-in simulations with different types of dark matter (CDM, WDM, SIDM and mixed DM, e.g. WDM with self-interaction; Fitts et al. 2019) show that the addition of baryons substantially decreases the differences between the simulations with different types of DM. However, baryons decrease the sizes of cores in SIDM and WDM+SIDM subhalos compared to SIDM-only simulations, but these subhalos still have significantly lower central densities than CDM-only halos. In the future, it will be interesting to simulate a wider variety of cored subhalo models (by varying their mass ranges, physical sizes, core sizes and inner density slopes).
In Malhan et al. (2021), we showed that three observationally determinable quantities for accreted GC streams (the physical width w, the line-of-sight velocity dispersion σ_vlos and the dispersion in the z-component of angular momentum, σ_Lz) were all sensitive probes of the degree of tidal heating experienced by a GC stream in its parent dwarf galaxy, and could enable us to set constraints on the DM profiles of dwarf galaxies. In this work we have shown, in addition, that σ_vTan is able to provide similar discrimination.
In the future, we will consider additional heating arising from passage of the stream through the disk or interactions with molecular clouds (Amorisco et al. 2016) or the bar (Pearson et al. 2017). Furthermore, we will assess whether all 6 phase-space coordinates, when combined, may yield stronger constraints on DM. In practice, however, stream membership is difficult to assess in the absence of spectroscopy and accurate Gaia proper motions, especially for distant streams. While radial velocities have the smallest uncertainties (e.g. 1 − 2 km s^−1 uncertainty at G ≈ 19 with current large multi-object spectrographs like DESI; DESI Collaboration et al. 2016a,b; Allende Prieto et al. 2020; Cooper et al. 2022), the fact that tidal streams generally extend over tens of degrees on the sky makes it extremely expensive observationally to obtain the large numbers of v_los measurements needed to reliably compute σ_vlos for many streams. While Gaia DR3 released v_los for over 30 million stars brighter than G = 14, Figure 4 shows that most of the stars of interest here are fainter than this magnitude limit.
The metric we study in this work, σ vTan , depends on accurate measurements of both the proper motions of stream stars and their distances. We obtained both quantities in this work from Gaia EDR3 observations. Future Gaia data releases are expected to decrease the uncertainties on both the measured proper motions and parallaxes by around 50% for each quantity relative to EDR3 uncertainties (see, Gaia Collaboration et al. 2021, and the Gaia-ESA website 5 ) resulting in a net decrease in the uncertainty on σ vTan of ∼ 60−65% for the streams we consider here. If both σ v los and σ vTan are available for a significant sample of stars, one might combine them to obtain a 3D velocity dispersion, but currently adequate numbers of v los measurements do not exist for the streams considered here. At the present time and for the foreseeable future, Gaia proper motions and parallaxes, being the most abundantly measured quantities, offer the best way to quantify the velocity dispersions of GC tidal streams.
A. COMPARING THE OBSERVED AND DISTANCE-CORRECTED CMDS OF STREAMS
In Section 3, we perform distance fitting to the streams as a function of their φ_1 coordinate. For this distance fitting, we use Gaia's parallaxes and also Gaia's photometry (G_BP − G_RP, G). The reason for using the photometry information can be explained as follows. Stellar streams generally possess distance gradients along their lengths, and therefore their observed CMDs are slightly smeared out in apparent magnitude (here, the G magnitude). However, if the photometry of each star is corrected by its "true" distance value, then the corrected CMD will have less scatter. Therefore, this additional information on the "CMD scatter" provides a means to better constrain the streams' distance solutions. For this, during our distance fitting procedure, we impose a (constant) prior condition in the likelihood evaluation that the resulting distance solution should produce a CMD with less scatter than the observed CMD.
The corresponding result is shown in Figure 4, which compares the "observed" and "distance corrected" CMDs of all the streams. The scatter in these CMDs is quantified using the NearestNeighbors module, and this confirms that the distance-corrected CMDs have less scatter than the observed CMDs. This result can also be discerned by visually inspecting Figure 4, and it implies that our distance solutions are reliable.

As a further check, we compare our fitted d and v Tan solutions with existing measurements for individual member stars; this comparison is shown in Figure 5c for GD-1. Based on this visual inspection, we conclude that our solutions are consistent with these measurements, although the uncertainties on the measurements of the individual stars are very large (∼ 550 km s −1). We repeated this exercise for other streams as well and found similar consistency, leading us to conclude that our fitted d and v Tan solutions are reliable.

For the Milky Way streams, we note that their member stars possess quite large observational uncertainties on v Tan (of the order of ∼ 20 km s −1, see panels 'd' in Figures 2 and 3). However, we constrain σ vTan to the order of ∼ 1 − 2 km s −1. In this appendix we demonstrate that, even though the v Tan uncertainties of individual stars are large, our method is sensitive to changes in these uncertainties and is able to measure intrinsic velocity dispersions that are much smaller than the current per-star uncertainties.
To illustrate this, we take the Phlegethon stream, artificially modify the v Tan uncertainties of individual stars, and recompute σ vTan, always keeping the v Tan measurements themselves unchanged (i.e. only modifying the uncertainties). In the first case, we set these uncertainties to 0 km s −1, which results in σ vTan = 17.19 km s −1 when applying the equivalent of Eq. 4 (see Section 3). This value is much larger than the value mentioned above for Phlegethon, but this is expected: with zero uncertainties, the uncertainty term in Eq. 4 vanishes and the entire spread in the "observed v Tan (obs) − v Tan (fit)" distribution (i.e., the residuals shown in panels 'd' of Figure 2) is attributed to the internal dispersion of the stream, σ vTan. In the second case, we decrease the v Tan uncertainties to half of their actual values and measure σ vTan = 12.25 km s −1. This value is smaller than the one computed in the first case because the spread in the residual distribution is now shared between the uncertainty terms (which are finite and non-zero) and the internal dispersion σ vTan. In the third case, we decrease the uncertainties to 80% of their actual values and measure σ vTan = 1.29 km s −1. As expected, σ vTan decreases further because the velocity uncertainties now absorb a larger share of the residual distribution. This explains both why the measured intrinsic dispersion is so much smaller than the observed dispersion and why we assert that the decrease in the uncertainties on v Tan expected from future Gaia data releases will improve these σ vTan measurements.
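A minimal sketch of this kind of deconvolution is given below: the residuals v Tan(obs) − v Tan(fit) are modelled as Gaussian with per-star variance sigma_int² + err_i², and the intrinsic dispersion sigma_int is obtained by maximum likelihood. This is only an illustration of the idea behind the "equivalent of Eq. 4"; the paper's actual estimator and priors may differ.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_intrinsic_dispersion(residuals, errors):
    """Maximum-likelihood intrinsic dispersion of residuals with per-star errors."""
    residuals = np.asarray(residuals, dtype=float)
    errors = np.asarray(errors, dtype=float)

    def neg_log_like(sigma_int):
        var = sigma_int ** 2 + errors ** 2
        return 0.5 * np.sum(np.log(2.0 * np.pi * var) + residuals ** 2 / var)

    res = minimize_scalar(neg_log_like, bounds=(0.0, residuals.std() + 1e-3), method="bounded")
    return res.x

# Inflating the per-star errors pushes the fitted sigma_int down; setting them to
# zero attributes the whole residual spread to sigma_int, as in the first test case.
```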
C.2. Determining the effects of correlations
We assess the effects of correlations between uncertainties in proper motions and parallax in the following way. First, we take the Phlegethon stream and shuffle the proper motion uncertainties of its stars while keeping the parallax uncertainties unchanged (i.e., we randomly reassign the proper motion uncertainty of star j to star k, that of star k to star i, and so on). We do this 10 times to examine whether the resulting σ vTan (on average) is the same as what we report above. Based on this, we find that σ vTan (on average) changes by only +1.5%. We repeat the above exercise, except this time we shuffle the parallax uncertainties between stars while keeping the proper motion uncertainties unchanged. In this case we find that the resulting σ vTan (on average) changes by only −2.7%. Finally, we repeat the above exercise with a few other streams and find similarly small changes in our estimated σ vTan measurements. This suggests that the correlations should have only minor effects on the reported σ vTan values of the streams.

Table 2. σ vTan of Milky Way streams (in mas yr −1) computed by including the systematic errors. The values in the brackets provide the p−values of these new σ vTan measurements being drawn from their counterpart streams whose σ vTan were measured without including the systematic errors.

We recompute σ vTan of 5 GC streams by incorporating the "systematic errors" present in Gaia EDR3's proper motions and parallaxes. These errors are provided in Section 5.6 of Lindegren et al. (2020) as 0.0108 mas in parallax, 0.0112 mas yr −1 in µ * α, and 0.0107 mas yr −1 in µ δ. These values essentially put a floor on the precision with which parallaxes and proper motions are measured.
To recompute σ vTan, we do the following. For a given stream, we consider the individual stars and, for each star, add the above systematic errors in quadrature to the observed Gaia uncertainties in parallaxes and proper motions. This essentially inflates the uncertainties of every star. Then, we compute σ vTan by following the same procedure as described in Section 3. The final σ vTan values are provided in Table 2. Table 2 also provides the p−values for the null hypothesis that these new σ vTan values (with their associated uncertainties) are drawn from the counterpart σ vTan measurements that we computed in Section 3 without the inclusion of systematic errors. To compute these p−values, we follow the same method described in Section 3. The fact that these p−values are ∼ 1 indicates that, for a given stream, the two types of σ vTan measurements are similar. Table 3 is similar to Table 1, except that it is produced using the new σ vTan measurements. The fact that the values in Table 3 are qualitatively similar to those in Table 1 suggests that the inclusion of systematic errors does not affect our final conclusion in regard to the cusp/core scenario of the parent subhalos.
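The quadrature inflation described above is simple to make explicit; the short sketch below shows it for the Lindegren et al. (2020) floors quoted in the text. Variable names and the example calls are illustrative only.

```python
import numpy as np

# Gaia EDR3 systematic floors quoted above (Lindegren et al. 2020, Section 5.6)
SYS_PLX_MAS = 0.0108      # parallax
SYS_PMRA_MASYR = 0.0112   # mu_alpha*
SYS_PMDEC_MASYR = 0.0107  # mu_delta

def inflate(stat_err, sys_err):
    """Add a systematic floor in quadrature to a statistical uncertainty."""
    return np.sqrt(np.asarray(stat_err, dtype=float) ** 2 + sys_err ** 2)

# e.g. plx_err_total  = inflate(plx_err_gaia, SYS_PLX_MAS)
#      pmra_err_total = inflate(pmra_err_gaia, SYS_PMRA_MASYR)
```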
Novel Zebrafish Patient-Derived Tumor Xenograft Methodology for Evaluating Efficacy of Immune-Stimulating BCG Therapy in Urinary Bladder Cancer
Background: Bacillus Calmette-Guérin (BCG) immunotherapy is the standard-of-care adjuvant therapy for non-muscle-invasive bladder cancer in patients at considerable risk of disease recurrence. Although its exact mechanism of action is unknown, BCG significantly reduces this risk in responding patients but is mainly associated with toxic side-effects in those facing treatment resistance. Methods that allow the identification of BCG responders are, therefore, urgently needed. Methods: Fluorescently labelled UM-UC-3 cells and dissociated patient tumor samples were used to establish zebrafish tumor xenograft (ZTX) models. Changes in the relative primary tumor size and cell dissemination to the tail were evaluated via fluorescence microscopy at three days post-implantation. The data were compared to the treatment outcomes of the corresponding patients. Toxicity was evaluated based on gross morphological evaluation of the treated zebrafish larvae. Results: BCG-induced toxicity was avoided by removing the water-soluble fraction of the BCG formulation prior to use. BCG treatment via co-injection with the tumor cells resulted in significant and dose-dependent primary tumor size regression. Heat-inactivation of BCG decreased this effect, while intravenous BCG injections were ineffective. ZTX models were successfully established for six of six patients based on TUR-B biopsies. In two of these models, significant tumor regression was observed, which, in both cases, corresponded to the treatment response in the patients. Conclusions: The observed BCG-related anti-tumor effect indicates that ZTX models might predict the BCG response and thereby improve treatment planning. More experiments and clinical studies are needed, however, to elucidate the BCG mechanism and estimate the predictive value.
Introduction
Cancer is recognized as a highly heterogeneous class of diseases, where specific disease characteristics including the efficacy of available medical or non-medical treatments differ greatly among patients [1]. Therefore, diagnostic methods that may identify disease characteristics associated with sensitivity to available therapies for each patient are becoming an increasingly important aspect of cancer medicine. Methods used for this purpose include genetic characterization of tumor material to find mutations or gene-expression profiles associated with a heightened response rate [2], but such molecular diagnostics suffer from poor accuracy as many patients do not respond as predicted [3]. This is particularly problematic within the growing field of immune-oncology, where up to two-thirds of the patients that test positive using such molecular tests (e.g., PD-L1 high tumors) turn out to be non-responders to the corresponding (e.g., anti-PD-1 or anti-PD-L1) therapy [4]. Compared to molecular tests, functional assessment of drug efficacy using, for example, patient-derived 3D-cell models in vitro (i.e., patient-derived organoids, PDO) or in vivo (i.e., patient-derived xenografts, PDX) have much higher accuracy and correctly predict the treatment outcome for~80-90% of the patients tested [5][6][7]. PDX models in mice are, however, too costly for routine clinical implementation and are not timebound within the clinical decision-making processes existing for most types of cancer [7]. PDO models are faster and cheaper and could potentially be implemented in the clinical routine. Current PDO technologies, however, are poorly suited for evaluating drugs that target complex interactions between malignant and non-malignant cells in the tumor microenvironment such as those used within immune-oncology. Therefore, fast, functional patient-derived tumor model technologies that allow accurate efficacy predictions of immune-oncology drugs are urgently needed to improve clinical treatment planning.
Urinary bladder cancer is the most prevalent urothelial malignancy and the seventhmost prevalent cancer overall [8]. Due to its high mutational frequency resulting in numerous different molecular drivers of the disease, it is a very heterogeneous disease, for which meaningful molecular sub-classifications have not been clinically established [9]. Instead, these cancers are classified based on medical imaging and pathological findings as low-to high-grade non-muscle-invasive bladder cancer (NMIBC) or muscle-invasive bladder cancer (MIBC) [9,10]. The standard-of-care primary adjuvant therapy for NMIBC (T-categories TaG3 and T1) is intravesical Bacillus Calmette-Guerin (BCG) treatment, which is one of the earliest and best-established immune-oncology therapies applied in cancer [11,12]. BCG is an attenuated strain of Mycobacterium bovis, which has been developed and used as a vaccine against Mycobacterium tuberculosis since 1921 [12]. The anti-tumor effect of BCG vaccination in malignant diseases was first reported in the 1950s when it was observed that BCG-infected mice show a higher resistance to tumor transplantation [13]. Supported by findings demonstrating a strong delayed hypersensitivity reaction to immunogenic antigens in the bladder of guinea pigs [14], Morales et al. reported in 1976 that intravesical BCG instillation lowered both recurrence and progression rates in nine cases of human bladder cancer [15]. This was further confirmed by Lamm et al. in a randomized prospective trial in 1980 [16], leading to the clinical application of BCG, which, until today, has remained the recommended first-line therapy in intermediate and high-risk NMIBC after transurethral tumor resection [11,17]. However, while reducing the risk of tumor recurrence and progression by up to 30% compared to tumor removal alone, recurrence is seen in approximately 30 to 40% of the patients, and up to 15% will progress to invasive and disseminated disease [18][19][20]. Given this considerable risk of disease recurrence after BCG therapy and frequent cases of unexplainable primary treatment failure, it is important but challenging to identify and manage both BCG-sensitive and BCG-resistant patient groups [10,11,17]. Even though several studies were conducted to investigate molecular signatures associated with BCG response, no biomarkers have been found that may predict the short or long-term treatment outcome [18,19,21]. On top of this unmet clinical problem, repeated periods of BCG shortage have plagued urology clinics over the past few years [20,22]. This persistent shortage has caused an increase in the number of immediate radical cystectomies instead of bladder-sparing BCG immunotherapy in high-risk patients [17,20,22]. There is therefore an urgent need for tests that may help to prioritize patients for BCG immunotherapy such that only those not likely to respond to such therapy could be considered for immediate radical cystectomy.
Embryonic zebrafish tumor xenograft models are gaining popularity as an experimental system that combines the benefits of PDO and PDX models by offering cheap and fast readouts while also allowing studies of the tumor microenvironment and tumor invasion or metastasis [23]. The tumor xenograft models can be based on either cancer cell lines or patient-derived xenografts (PDX) to better represent the heterogeneity of patient tumors, which then are microinjected into the zebrafish larvae where they create 3D microtumors that allow the investigation of drug-induced changes in tumor size and dissemination [6,24,25]. The generation of zebrafish patient-derived tumor xenograft (ZTX) models has been used in the past to predict treatment outcomes to commonly used chemotherapies in colorectal, gastric, and hematological cancers [26][27][28][29]. Bladder cancer has only recently been studied in this system [30], but whether these models sufficiently recapitulate the immune interactions required to study the efficacy of immune-oncological therapies, including BCG therapy, remains to be determined.
Here, we create ZTX models for urinary bladder cancer and establish the conditions by which such systems can be exploited to investigate BCG treatment efficacy. We identify the importance of physical contact between the BCG bacteria and the tumor cells as well as the viability of the bacteria, and we determine the dose-response relationship of this treatment in the ZTX model. Finally, we demonstrate that the ZTX models could be established from six of six NMIBC patients and correctly predicted two of two positive and one of two negative treatment outcomes to BCG treatment in the four patients that had finished the treatment.
Zebrafish Breeding and Maintenance
The transgenic Tg(fli1a:EGFP) strain [31] with eGFP-expressing vasculature was maintained in the zebrafish facility at Linköping University (Linköping, Sweden) and used for all experiments. After breeding, fertilized zebrafish eggs were collected in Petri dishes and incubated at 28 °C in an E3 embryo medium (pH 7.2) containing 0.29 g of NaCl, 0.082 g of MgSO4, 0.048 g of CaCl2, and 0.013 g of KCl per liter of purified water. The E3 embryo medium was supplemented with 0.2 mM PTU to inhibit pigmentation of the larvae.
Cell Culture and Fluorescent Labelling
The human muscle-invasive urinary bladder cancer cell line UM-UC-3 (ATCC, #CRL-1749) was cultured in T75-cell culture flasks in EMEM supplemented with 10% FBS and 1% Pen/Strep under standard cell culture conditions at 37 °C in a humidified incubator containing 5% CO2. The cell culture medium was changed every two to three days. Subculturing was performed following incubation of the cells in 1×Trypsin/EDTA at 37 °C for approximately 5 min, the addition of the cell culture medium, and centrifugation at 250× g for 5 min. At 70 to 80% confluency, tumor cells were fluorescently labelled using 1,1'-dioctadecyl-3,3,3',3'-tetramethylindocarbocyanine perchlorate (DiI). After washing once with PBS, 10 mL of pre-warmed PBS containing 2 µg/mL Fast-DiI™ dye was added. Following 30 min of incubation at 37 °C, cells detached from the culture flask and the suspension was centrifuged at 250× g for 5 min. The cell pellet was washed twice with 5 mL of PBS, and centrifugation was repeated. Labelled cells were resuspended in 1 mL of culture medium and filtered through a 40 µm cell strainer. Cell counts and cell viability were evaluated using trypan blue staining.
Clinical Tumor Tissue Samples
After written informed consent, tumor tissue from patients with presumed urinary bladder cancer was collected during TUR-B and placed in cryotubes containing 1 mL cryomedium. The tubes were placed in a CoolCell (Corning, Corning, NY, USA) cryopreservation box within 2 h and kept at −80 °C. The patients were pseudonymized by giving the incoming material of every donor patient a unique code, which was subsequently used to identify the corresponding ZTX model. Ethical approval for studies based on primary tumor biopsies was granted to A. Sherif by the Swedish Ethical Review Authority. Information on each patient's diagnosis, pathological findings, surgery, and BCG treatment outcome was collected from their medical records in accordance with the ethical approval.
Cryopreserved biopsies from six bladder cancer patients known to be treated with BCG were thawed and washed twice in 10 mL of RPMI-1640 medium supplemented with 10% FBS by gentle mixing. Using ethanol-disinfected surgical scissors, the tumor samples were then minced into small 1-2 mm³ pieces and 5 mL of the tissue dissociation enzyme mix was added. The tissue piece suspension was transferred to a gentleMACS™ C-tube (Miltenyi Biotec, Bergisch Gladbach, Germany, #130096334) and dissociated for 30-60 min on a gentleMACS™ Octo Dissociator set to 37 °C. The homogenous cell suspension was then filtered, and the C-tube and filters were washed with 5 mL of RPMI 1640 supplemented with 2% FBS (RPMI/2% FBS) before centrifuging the sample for 5 min at 300× g. The pellet was resuspended in 10 mL RPMI/2% FBS, re-filtered, and re-pelleted. For fluorescent labelling, cells were incubated at 37 °C in 5 mL of RPMI/2% FBS mixed with Fast-DiI™ dye (8 µg/mL). Following pelleting and washing with PBS, the cells were resuspended in 1 mL RPMI/2% FBS. The total number of cells and cell viability were calculated.
BCG Reconstitution, Toxicity Testing and Treatment in the Xenograft Model
For BCG immunotherapy, one ampule of BCG-medac powder for intravesical suspension (MTnr: 17493, Lot#210332C) was used. According to information from the supplier, the ampule contained 2 × 10 8 to 3 × 10 9 freeze-dried viable units of the BCG-RIVM strain, polygeline, glucose anhydrous, and polysorbate 80. All experiments were performed using one and the same ampule, thereby ensuring consistency of the bacterial concentration and other parameters that might differ between ampules. The ampule was stored according to the manufacturer's instructions and re-sealed after opening so as not to jeopardize the stability of the bacteria. The BCG dose in the ampule was assumed to be 1.6 × 10 9 viable units, i.e., the mean of the indicated interval, to allow further calculations of doses in the ZTX models. An appropriate amount of the powder was reconstituted in 50 mL of 0.9% NaCl, centrifuged at 4600× g for 20 min, and resuspended in PBS. In some experiments, BCG was used directly after reconstitution without prior centrifugation, as indicated in the text. To establish a dose-survival curve of BCG in zebrafish larvae, increasing BCG concentrations ranging from 1.88 × 10 6 to 2.4 × 10 8 viable units/mL were injected into 48 h old embryos in groups of 15 embryos. Successfully injected embryos were incubated in Petri dishes containing the E3 medium supplemented with PTU at 35.5 °C, a temperature at which Mycobacterium bovis bacteria are still viable [32] and able to induce immune activation [33]. Three days post injection (dpi), signs of toxicity, e.g., edema, and survival of the embryos in the different treatment groups were documented. BCG treatment of the zebrafish tumor xenografts was performed via co-injection of tumor cells and BCG or via intravenous injections of BCG. For this, the desired number of viable units was weighed and reconstituted in 0.9% NaCl. The BCG suspension was then centrifuged at 4600× g for 20 min and the pelleted bacteria were resuspended in PBS. Labelled tumor cells were resuspended in the BCG solution immediately before injections. Heat inactivation of BCG was conducted according to a previously validated protocol, shown to effectively kill the vast majority of the BCG bacteria [34]. Briefly, the reconstituted BCG bacteria were incubated on a thermo-shaker set to 80 °C for 20 min before co-injection with UM-UC-3 cells.
Microinjections into 48 h Old Zebrafish Larvae
Fluorescently labelled cells (with or without BCG) were subcutaneously injected into the PVS of 48 h old zebrafish embryos using a micromanipulator. For this, embryos were mechanically dechorionated using micro-surgical forceps. Stained cell suspensions were centrifuged for 5 min at 250× g and resuspended in PBS to reach a cell concentration of 300 cells/nl. Borosilicate glass capillaries (World-Precision instruments, Sarasota, FL, USA, #TW100-4) were pulled using a Narishige (Setagaya City, Tokyo, Japan) PC-10 micropipette puller and filled with 3-5 µL of cell suspension. At the needle tip, a needle opening was created using fine forceps and the droplet size was calibrated. Dechorionated embryos were put on pre-warmed 2% agarose plates and anesthetized with 1 mg/mL Tricaine before microinjecting the tumor cells. Embryos were transferred back to the E3 medium supplemented with PTU following injection.
Using fluorescence microscopy, successfully injected embryos, which (for PVS injections) showed tumor cells in the PVS but not in the yolk or in the bloodstream, were selected. The selected zebrafish tumor xenografts were incubated at 35.5 °C in 24-well plates containing the E3 medium supplemented with PTU until 3 dpi.
Analysis of Primary Tumor Size and Cell Dissemination
Fluorescent images of the zebrafish xenograft models were acquired using a Nikon (Konan, Tokyo, Japan) SMZ1500 fluorescence stereomicroscope. Images were taken at 0 dpi and 3 dpi at 30- and 80-fold magnification. Analysis of the images was conducted using the thresholding function of the open-source ImageJ software [35]. Relative tumor size in percent was determined by calculating the ratio of tumor sizes at 3 dpi and 0 dpi and multiplying it by 100 for each individual fish. Distant metastases were evaluated by counting the number of cells in the caudal hematopoietic plexus at 3 dpi.
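The readout described above is a simple ratio of thresholded fluorescent areas. A minimal Python analogue is sketched below for illustration; the study itself used ImageJ, and the Otsu threshold and file names here are assumptions.

```python
from skimage import io, filters

def tumor_area(path):
    """Number of above-threshold pixels in a fluorescence image."""
    img = io.imread(path, as_gray=True)
    return int((img > filters.threshold_otsu(img)).sum())

def relative_tumor_size(path_0dpi, path_3dpi):
    """Tumor size at 3 dpi as a percentage of the size at 0 dpi for one fish."""
    return 100.0 * tumor_area(path_3dpi) / tumor_area(path_0dpi)

# relative_tumor_size("fish01_0dpi.tif", "fish01_3dpi.tif")  # hypothetical file names
```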
Statistical Analysis
Graphical data visualization and statistical analysis were conducted using Graphpad Prism 8.0.2 (Graphpad Software Inc., San Diego, CA, USA). Outliers were identified and removed using the ROUT outlier test (Q = 1), and the normality of data was verified by applying the D'Agostino Pearson omnibus normality test. Two group comparisons were conducted using a two-tailed, unpaired Student's t-test, and three or more groups were compared by applying a multiple comparisons one-way ANOVA. In the case of unequal variances, the respective corrections were applied. The significance threshold α was set to 0.05. Data are shown as mean ± standard deviation, and n-values represent the number of zebrafish larvae analyzed per group.
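For readers who prefer an open-source route, the sketch below reproduces the core of this workflow with SciPy: a D'Agostino-Pearson normality check followed by a two-group t-test with a Welch correction when variances differ. The ROUT outlier test is specific to Prism and is omitted; the variance check via Levene's test is an assumption, not part of the original analysis.

```python
import numpy as np
from scipy import stats

def compare_two_groups(control, treated, alpha=0.05):
    control = np.asarray(control, dtype=float)
    treated = np.asarray(treated, dtype=float)

    # D'Agostino-Pearson omnibus normality test
    _, p_norm_control = stats.normaltest(control)
    _, p_norm_treated = stats.normaltest(treated)

    # Welch correction if the variances appear unequal
    _, p_equal_var = stats.levene(control, treated)
    t, p = stats.ttest_ind(control, treated, equal_var=(p_equal_var >= alpha))
    return {"normality_p": (p_norm_control, p_norm_treated), "t": t, "p": p}

# Three or more groups would instead use stats.f_oneway followed by a
# multiple-comparisons procedure.
```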
A Component of the BCG-Medac Drug Formulation Is Toxic to Zebrafish Larvae but Can Be Removed by Centrifugation
As BCG therapy has not been tested in zebrafish larvae in the past, we first investigated the tolerability and toxicity of the clinical BCG formulation in tumor-free zebrafish larvae. Intravenous injection of a BCG dose similar to that used in human intravesical immunotherapy (6 × 10 7 viable units/mL) in 2-day-old zebrafish larvae revealed a reduced survival rate at 3 dpi and clear signs of toxicity (e.g., edema) in the surviving larvae ( Figure 1A,B). To find the maximum tolerated dose for zebrafish larvae, we injected larvae at 2 dpf with a range of concentrations starting at 1.88 × 10 6 and reaching 2.4 × 10 8 viable units/mL and monitored survival until 3 dpi. Under these conditions, the LD50 of BCG-medac was 9.5 × 10 7 viable units/mL, but most embryos survived at doses between 1.88 × 10 6 and 9.5 × 10 7 viable units/mL ( Figure 1C). Doses beyond 1.5 × 10 7 viable units/mL led to similar mortality and toxicity profiles as seen using the clinically relevant dose of 6 × 10 7 viable units/mL ( Figure 1A). As the toxic phenotypes were also seen at low doses of the BCG-medac formulation and were not consistent with phenotypes expected from infection with tuberculosis bacteria in zebrafish larvae [36], we hypothesized that an adjuvant component of the formulation rather than the bacteria themselves might be responsible for the observed toxicity. To test this hypothesis, we centrifuged the reconstituted BCG-medac powder and resuspended the pellet containing BCG bacteria in PBS. We then repeated the toxicity dose-response experiment using the supernatant, containing the water-soluble fraction of the formulation, and the "cleaned" BCG fraction. While the injection of the bacteria-free supernatant led to a similar toxicity and mortality profile as the complete reconstituted formulation, injection of the cleaned bacteria alone was fully tolerated and led to no increase in mortality or signs of excessive toxicity even at the highest doses tested ( Figure 1D-F). Taken together, these results indicate that the observed toxicity of the complete reconstituted formulation was due to a water-soluble ingredient of the BCG-medac powder and not the BCG bacteria themselves. We therefore decided to continue further experiments using the cleaned bacterial component.
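The LD50 quoted above comes from a dose-survival experiment of this kind; a minimal way to extract such a value from grouped survival fractions is a log-logistic fit, sketched below. The dose grid and survival fractions in the example are purely illustrative and are not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def survival(dose, ld50, slope):
    """Two-parameter log-logistic survival curve."""
    return 1.0 / (1.0 + (dose / ld50) ** slope)

doses = np.array([1.88e6, 7.5e6, 1.5e7, 3.0e7, 6.0e7, 1.2e8, 2.4e8])  # viable units/mL (example grid)
surv = np.array([1.00, 0.95, 0.90, 0.80, 0.60, 0.40, 0.10])           # illustrative survival at 3 dpi

(ld50, slope), _ = curve_fit(survival, doses, surv, p0=[6e7, 1.0])
print(f"estimated LD50 ~ {ld50:.2e} viable units/mL")
```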
BCG Leads to Significant Regression of UM-UC-3 Xenografts
To investigate the effect of BCG on bladder cancer xenografts in the ZTX model, four different concentrations of BCG, ranging from 0.5× up to 2× the concentration used in human intravesical therapy, were co-injected with ~300 fluorescently labelled UM-UC-3 [37] cells/nL into the PVS of 48 h old embryos. The relative proportions of tumor cells to BCG bacteria for these experiments are indicated in Supplemental Table S1. Relative tumor regression compared to untreated control fish was documented at 3 dpi (Figure 2A). All four doses induced a significant regression in the xenograft sizes (ANOVA: p < 0.0001, Figure 2B,C), showing a dose-dependent increase in efficacy from a mean relative tumor size of 50.5% at 3 × 10 7 to 23.2% at 1.2 × 10 8 viable units/mL.
Bacterial Viability and Physical Contact with the Tumor Cells Are Both Required for Efficacy
We next investigated the mechanism leading to a therapeutic outcome from BCG treatment by testing whether heat-inactivated bacteria, which would still be able to activate part of, but likely not the complete, immune response [38], could provide a similar outcome as the viable bacteria used clinically. While the heat-inactivated BCG treatment significantly increased tumor regression compared to non-treated controls, this effect was greatly attenuated as compared to treatment with the viable BCG therapy (78% vs. 25%, respectively) ( Figure 3A-C). This suggests that bacterial viability is important for mounting the complete anti-tumor response in the ZTX models.
Next, we asked if intravenous administration of viable BCG bacteria, likely leading to systemic rather than tumor-localized immune activation, could also be effective. To investigate this important question, we first established subcutaneous UM-UC-3 xenografts and subsequently injected the BCG therapy intravenously. Interestingly, intravenous injection of BCG was completely ineffective and unable to induce regression of the tumors compared to non-treated controls (Figure 3D,E). Taken together, these findings clearly suggest that complete and localized activation of anti-cancer immunity in the tumor microenvironment is required for optimal efficacy of the BCG therapy.
BCG Therapy Does Not Impact on Bladder Cancer Invasiveness and Dissemination
Activation of the innate and adaptive immune system has been suggested to impact tumor cell dissemination and metastasis in a complex manner, either inducing or inhibiting metastasis depending on poorly understood, context-specific cues [39][40][41]. We therefore investigated if the complex immune activation caused by either heat-inactivated or viable BCG treatments and elicited either specifically in the tumor microenvironment or systemically in the zebrafish larvae could impact the dissemination of the UM-UC-3 cells (Figure 4A). Dissemination was investigated at the major metastatic site in the caudal hematopoietic plexus, at the caudoventral region of the body (Figure 4A). UM-UC-3 cells readily disseminated to this region under control conditions, and neither heat-inactivated nor viable BCG treatment had any impact on this dissemination phenotype (Figure 4B,C). Furthermore, i.v. injection of the BCG, which is expected to primarily activate the immune system at the caudal hematopoietic plexus where the majority of immune cells reside at these developmental stages [42], did not affect the dissemination and homing of tumor cells to this area (Figure 4B,C). These findings suggest that the type of immunity activated by BCG treatment within the ZTX system is neither pro- nor anti-metastatic.
ZTX Models Predicted Responses to BCG Therapy in NMIBC Patients
Next, we asked if the heterogeneity in responses to BCG treatment among bladder cancer patients could be recapitulated in the ZTX platform. To answer this, we obtained cryopreserved, viable tumor tissue samples from six non-muscle invasive bladder cancer patients, known to have been treated with BCG and in four of whom the treatment had been completed and the clinical treatment outcome was available. A summary of relevant clinical data related to these six patients is shown in Table 1. The tumor tissues were dissociated, fluorescently labelled, and injected into the PVS of 48 h old zebrafish larvae with or without 6 × 10 7 viable units/mL BCG (Figure 5A). All six tumors implanted effectively in the zebrafish larvae, exhibiting only minimal spontaneous regression to a mean relative tumor size of 80.8%, 57.1%, 77.1%, 51.6%, 31.1%, and 40.0%, respectively, in the non-treated control groups at 3 dpi (Figure 5B). BCG treatment led to significant additional regression compared to the control group in two of the six ZTX models (Figure 5D,E). For Patient B, the mean relative regression compared to the control was 82%, whereas the mean regression was 32% for Patient C. Both of these patients had responded to BCG treatment; Patient B only had small in situ lesions left at follow-up (which, in clinical practice, is regarded as an objective response), whereas Patient C was a complete responder. The ZTX models generated from Patients A and D, however, did not demonstrate significant regression in the BCG-treatment group, suggesting that these tumors would not respond to BCG treatment (Figure 5C,F). While Patient D was considered a clinical responder, Patient A, indeed, developed recurrent disease associated with clinical BCG treatment failure, confirming the ZTX model results for this patient. ZTX models generated from Patients E and F suggested that BCG treatment would not be effective for these patients as the relative tumor sizes of the treatment groups were significantly larger compared to the controls (Figure 5G,H). While Patient E has started the induction treatment and the outcome of that is still pending, Patient F had recurrence and re-resection of the tumor prior to starting the BCG treatment, which is therefore still pending. Taken together, these findings provide strong, preliminary evidence and proof-of-concept data for predicting BCG treatment outcomes in NMIBC patients using ZTX models.
Discussion
Functional experimental model systems such as 3D spheroid/organoid and xenograft models are considered the most accurate and reliable readouts of anti-proliferative, cell killing, or anti-migratory effects of anti-cancer compounds [3]. Because of this, such models have long been the gold standard for anti-cancer efficacy studies in cancer research and pre-clinical drug development [6]. Due to the slow growth of many patient tumor cells in vitro or after implantation in immunocompromised mice, 3D culture or xenograft models based on primary tumor cells take weeks or even months to generate results [7] and have therefore not been successfully implemented for precision diagnostics and individualized treatment planning in cancer patients. Furthermore, current PDO or PDX models do not readily recapitulate the complexity of the immune-tumor cell interactions required to evaluate the efficacy of immune-oncological treatments [43]. While human-like immunity can be established in immune-compromised mice to allow evaluation of some immuneoncology drugs such as checkpoint inhibitors, these models generally retain mouse innate immunity, which may interfere with the readout. Similarly, PDO models that contain T-cells or other immune cells to mirror part of the intratumoral immune system can be generated, but these generally do not allow studies on immune-cell homing to the tumor, tumor cell dissemination, and metastasis or off-tumor targets (such as targets in the lymph nodes), which have high clinical importance. Furthermore, simpler models analyzing tumor cells growing in 2D in vitro do not allow the evaluation of important but complex immune-oncological mechanisms, nor are they well suited for the evaluation of metastatic dissemination through the circulation to distal tissues.
Zebrafish xenograft models have recently been developed as an alternative to both 2D cell cultures and PDO/PDX models. These models are often constructed to include either specific aspects of human tumor-immune cell interactions such as macrophage-or neutrophil-induced metastasis [39,40,44], T-cell mediated cytotoxicity [45], or macrophageinduced resistance to immune-oncology treatments [46]; alternatively, they can be created to include the entire complexity of the cellular tumor microenvironment [24]. Here we take the zebrafish xenograft platform one step further by demonstrating that the efficacy of a classical immune-oncology therapy for bladder cancer, BCG, can be studied using both established cell lines (CDX models) and patient material (PDX models). While only six patient samples have been studied in this work, of which three were responders, one was a non-responder and two for whom the clinical outcome is still unknown, the ZTX models could be established from all six and correctly predicted the patient treatment outcome in three of four cases, providing a robust proof-of-principle for these models as a system that recapitulates the complex tumor-immune cell interactions of the patients needed to elicit efficacy of the BCG treatment (or the lack thereof). Since we found that direct contact between tumor cells and BCG bacteria was required for efficacy, it is tempting to speculate that the ability of the tumor cells to ingest the bacteria might be involved in mediating a positive treatment outcome. As BCG ingestion depends on the specific oncogenic signaling pathways of the tumor cells [47], this process could have been impaired in the non-responding patient sample. Further mechanistic studies using a larger patient cohort are needed to further address this important clinical question.
To establish the zebrafish xenograft platform for studies within immune-oncology in general, and for evaluating BCG treatment efficacy in particular, we thoroughly characterized the methodological requirements of the assay. Firstly, we found that co-injection of the clinical BCG-medac formulation led to high mortality of the injected zebrafish embryos. The BCG-medac powder is formulated to include anhydrous glucose, polygeline, and polysorbate 80, in addition to the BCG bacteria themselves. Polygeline, a urea polymer derived via the degradation of gelatin, is a commonly used drug carrier with properties similar to albumin and is held to be unproblematic due to its quick metabolism [48]. Furthermore, glucose has been evaluated in the zebrafish system and found to not cause any toxicity at the concentrations relevant to this study [49]. Polysorbate 80 is a non-ionic surfactant, which is applied as a solubilizing agent. Although it is commonly used in human drug formulations, polysorbate-80-containing injectables have been reported to occasionally cause severe anaphylactoid reactions and hemolysis [50]. Intraperitoneal injections of polysorbate 80 into zebrafish embryos similarly revealed anaphylactoid reactions and increased mortality exceeding 40% of the larvae [51]. In addition to polysorbate-80, the temperature used in this study (35.5 °C) may by itself cause transient toxic phenotypes including pericardial edema, but only if the embryos are exposed to such temperatures from early embryonic stages. Toxicity to this temperature is not seen if exposure is delayed until the larval stages as seen here. On the other hand, the temperature is not likely to have mediated the lack of toxicity of the pelleted and reconstituted bacteria, as the same temperature was used for both formulations. This strongly suggests that polysorbate-80 likely caused the observed toxicity. However, by removing this compound by pelleting the bacteria, the drug formulation could be used without overt toxicity.
Prior in vitro studies have demonstrated a direct BCG-related inhibition of proliferation or BCG-related cell death in certain bladder cancer cell lines, but only a few in vivo studies exist [11]. In the zebrafish xenograft model of UM-UC-3, we found that BCG treatment reduced the mean relative tumor size to a minimum of 23.2% at the highest BCG concentration. UM-UC-3 is a low-grade, muscle-invasive bladder cancer cell line, which is known to readily internalize BCG by macropinocytosis [47]. BCG immune cell priming alone does not enhance NK or T cell cytotoxicity against UM-UC-3 cancer cells in vitro [52]. Since the mechanism behind the effect of BCG in the ZTX models is completely unknown, some crucial requirements were tested that are known to be important for the efficacy of human BCG therapy. One of these is the direct contact between the BCG bacteria and the urothelium/bladder cancer cells [53]. As intravenous injection of BCG instead of co-injection failed to reduce the relative tumor size of the xenografts, direct BCG-cell contact was shown to also be important in the ZTX models. The second requirement pertains to using therapy containing viable BCG bacteria [11,54]. In this study, heat inactivation of BCG before co-injection significantly reduced the efficacy of the BCG therapy compared to viable BCG. Our heat-inactivation protocol has previously been shown to inactivate the vast majority of the Mycobacterium bovis BCG bacilli [34,55]. However, even though the effect was small, inactivated BCG was able to cause a slight tumor regression in the zebrafish CDX model. A probable explanation for this might be that heat-inactivated bacteria still induce a partial immune response, which in turn could eliminate some of the tumor cells. In this context, studies have also shown that both heat-inactivated BCG and sub-cellular BCG fractions stimulate peripheral blood mononuclear cells and induce a cytotoxic NK cell response, which recognizes and eliminates bladder cancer cells [55].
A major risk factor for the progression of NMIBC to more advanced stages is local or distal invasion/metastasis of the tumor cells. Here we found that both local and intravenous administration of viable or heat-inactivated BCG did not reduce the invasiveness and initial metastatic dissemination of the tumor cells in the zebrafish larvae. Previously, tumor dissemination in the ZTX models has been shown to accurately predict positive versus negative lymph node disease in non-small cell lung cancer [24], and radiation-induced invasiveness [56] or hypoxia-induced metastasis [25] of various cancers. The mechanisms underlying phenotypic switching of NMIBC to more advanced stages, as well as how this could be affected by BCG treatment, are currently not known. It is theoretically possible that the lack of an effect of BCG on this process in the ZTX models is due to the metastasized cells escaping the primary tumor environment quickly after implantation, prior to internalizing the bacteria. This might prevent immune-mediated tumor cell killing at the metastatic niche. It is also possible that mechanisms that, in a subset of the tumor cells, inhibit the internalization of the bacteria may also be involved in metastasis, implying that the BCG-resistant tumor cells are those that are also metastasizing. Alternatively, mechanisms involved in endowing tumor cells with invasive phenotypes could also lead to immune evasion and, as such, allow metastasized tumor cells with internalized bacteria to stay alive at the metastatic site even if the primary tumor is effectively regressed. These critical questions related to the progression of non-muscle invasive bladder cancer should be the subject of further study using the ZTX models in the future.
Conclusions
We have demonstrated that BCG co-injection in zebrafish xenograft models of urinary bladder cancer leads to significant tumor regression in both the UM-UC-3 cell line and two out of six clinical patient samples, in both cases recapitulating the clinical treatment outcome of the patients. This indicates that ZTX models might be suitable for predicting the BCG outcome and thereby for BCG treatment planning. However, further investigations on how this mechanism is mediated in the zebrafish model, as well as further clinical validation studies, are needed.
Author Contributions: S.K., Y.H., D.T., Z.A. and G.V.-R. planned and conducted experiments, retrieved data from patient journals, and analyzed data. A.E., A.F. and A.S. planned experiments and provided critical research infrastructure, Y.C. provided critical feedback on project planning and data analysis, and L.D.J. planned and led the study and analyzed data. S.K. and L.D.J. wrote the paper with important input from the other authors. All authors have read and agreed to the published version of the manuscript.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Swedish Ethics Authority (EPM) (Dnr: 2018_837-32, approval date: 20 April 2018).
Informed Consent Statement:
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to GDPR-regulations.
Conflicts of Interest: D.T., Z.A., G.V.-R., A.E., A.F. and L.J. have been/are employees or shareholders in BioReperia AB, a company that is developing ZTX models for the diagnosis and prognosis of cancer patients. The other authors declare no conflict of interest related to this work.
Ethics Statement:
All zebrafish experiments were conducted on larvae under the age where they develop the ability to feed freely, and therefore did not require animal research ethical permission. All patient samples were collected with informed consent from the patients and used in this study in accordance with the declaration of Helsinki and following approval from the Swedish Ethical Review Authority (Dnr: 2018_837-32).
A prominent role of Hepatitis D Virus in liver cancers documented in Central Africa
Background Hepatocellular Carcinoma (HCC) is one of the commonest cancers in Central Africa, a region with the unusual peculiarity of being hyperendemic for infections with Hepatitis B, C and D viruses. However, data estimating the respective proportions of HCC cases attributable to these viruses are still limited in this area. The current study was undertaken to determine the role of these viruses in HCC compared to non-HCC Cameroonian patients. Methods A case–control study was conducted in the Gastroenterology Unit of Central Hospital of Yaounde in collaboration with Centre Pasteur of Cameroon. Blood samples of all HCC cases (n = 88) and matched control individuals without known liver disease (n = 85) were tested for serological markers of Hepatitis B, C and D viral infections using commercially available enzyme immune-assay kits. Hepatitis B and C viral loads were quantified for positive patients by real-time PCR using commercial kits. Results The mean age was 46.0 ± 18 and 42.1 ± 16 years for HCC patients and controls, respectively, with a 2.3 Male/Female sex ratio. The prevalences of hepatitis B surface antigen, antibody to HCV and antibody to HDV were significantly higher in HCC patients (65.90, 20.26 and 26 %, respectively) than in control patients (9.23, 4.62 and 1 %) (P < 2.5 × 10−5). The risk factor analysis showed that both HBV and HCV infections were strongly associated with HCC development in Cameroon, with crude odds ratios of 15.98 (95 % CI 6.19-41.25) and 7.33 (95 % CI 2.09-25.77), respectively. Furthermore, the risk of developing HCC increased even more significantly in case of HBV and HDV co-infection, with an odds ratio of 29.3 (95 % CI 4.1-1231). HBV-DNA level was significantly higher in HBsAg-positive HCC patients than in HBsAg-positive controls (6.3 Log IU/mL and 5.7 Log IU/mL, respectively; P < 0.05). Conclusion HBV and HCV infections are the main factors of HCC development in Cameroon. Our results show that patients co-infected with HDV are at very high risk of developing HCC. An active surveillance program of patients and, foremost, easier access to antivirals and primary prevention measures are crucial steps to reduce the incidence of HCC in this country. Due to the lack of truly efficient antiviral therapy, the fate of HDV-infected patients remains, however, particularly worrying.
Background
Hepatocellular carcinoma (HCC) is the second most common cause of cancer death in the world and is particularly prevalent in Africa and Asia [1][2][3]. There is a strong correlation between the incidence of HCC and the prevalence of chronic hepatitis B and C, indicating that these two viral infections are some of the most important risk factors of HCC worldwide with a combined attributable fraction of at least 75 % of all HCC cases [4,5].
Growing evidence indicates that the high incidence areas of HCC correspond primarily to the zones where chronic hepatitis B is prevalent and exposure to aflatoxin B1 (AFB1), a mutagenic mycotoxin, is frequent [6]. In addition to viral hepatitis infections and dietary AFB1 exposure many other risk factors associated with HCC are well documented. They include heavy alcohol consumption, iron overload, type II diabetes and cigarette smoking [6].
In Cameroon, HCC has been reported as the country's commonest type of cancer [7]. Cameroon, like other countries of Central Africa (e.g., Burundi), is also a hyperendemic area for hepatitis B, C and D viral infections, with HBsAg prevalence ranging between 6 and 33 % [8][9][10][11], anti-HCV between 4 and 30 % [11,12] and anti-HDV between 13 and 62.5 % of HBV surface antigen (HBsAg) carriers [13], depending on the population and the area studied.
However, as in other developing countries of Central Africa, recent and first-hand data regarding the involvement of hepatitis B and C viruses in HCC are rather limited [14]. The few available studies carried out in Cameroon essentially describe the clinical, epidemiological and diagnostic aspects of HCC, based on cross-sectional analyses [15,16]. So far, the respective contributions of HBV and HCV to HCC remain unknown. The current study was undertaken to assess the risk associated with the three viruses in HCC cases compared to HCC-control (non-hepatic disease) Cameroonian patients using a case-control study.
Methods
Patients A case-control study was performed. Cases consisted of HCC patients consecutively enrolled in the Gastroenterology and Radiology Units of the Central Hospital, Yaoundé, between February 2013 and January 2014. They were individually 1:1 pair-matched by sex and age (±5 years) with control subjects consecutively selected among patients without clinical symptoms of liver disease attending the same medical departments during the same period.
The diagnosis of HCC was based on the presence of a liver mass at ultrasound and, when possible, histology of tissue samples, together with elevation of serum alpha-fetoprotein (AFP) levels (>400 ng/mL). Of the 88 HCC cases included, 61 % (n = 54) had AFP levels of 400 ng/mL or higher and 20.5 % (n = 18) had AFP levels below 400 ng/mL but above normal values (>16.5 ng/mL). In the remaining cases (n = 16), AFP was normal (<16.5 ng/mL). Inclusion criteria for control subjects without clinical evidence of liver disease were the absence of previously known hepatitis, a normal level of serum AFP (<16.5 ng/mL) and no liver mass at ultrasound examination. Written informed consent was obtained from all the patients (or from parents, in the case of children). The study protocol conformed to the ethical guidelines of the 1975 Declaration of Helsinki and was approved by the National Ethics Committee and the Ministry of Health of Cameroon. All the cases and controls were interviewed using a standard questionnaire that gathered information on demographic characteristics, past medical history, family history of liver disease, history of alcohol drinking, cigarette smoking, dietary history and history of blood transfusion. All the blood samples collected were aliquoted and stored at −80°C until analyzed.
Serological tests HCV serology
The presence of antibodies against HCV (anti-HCV) was checked by the use of a third-generation enzyme immunoassay (EIA, Monolisa anti-HCV Plus version 2; Bio-Rad, Marne-La-Coquette, France). The reactivity of samples was determined as described by Njouom in 2003 [17]. Briefly, a ratio was calculated for each sample by dividing its optical density by the cut-off value. A positive result for anti-HCV was defined as a Monolisa ratio greater than 6 whereas all samples with a <6 ratio were scored as negative.
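To make the decision rule above concrete, the following minimal Python sketch applies it; the function names and example values are illustrative and not part of the original study.

```python
def monolisa_ratio(optical_density: float, cutoff: float) -> float:
    """Ratio of a sample's optical density to the assay cut-off value."""
    return optical_density / cutoff

def anti_hcv_positive(optical_density: float, cutoff: float, threshold: float = 6.0) -> bool:
    """Score a sample anti-HCV positive when its Monolisa ratio exceeds the threshold."""
    return monolisa_ratio(optical_density, cutoff) > threshold

# Hypothetical example: an optical density of 2.4 with a cut-off of 0.35 gives a ratio of ~6.9.
print(anti_hcv_positive(2.4, 0.35))  # True
```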
HBV serology
Different serological markers of HBV were assessed using commercial kits: hepatitis B surface antigen (HBsAg), antibody to hepatitis B core antigen (anti-HBc), antibody to HBsAg (anti-HBs) and hepatitis B e antigen (HBeAg). The presence of HBsAg was tested by enzyme-linked immunosorbent assay (ELISA) using third-generation reagents (Murex AgHBs version 3; DiaSorin, SPA UK Branch), and the presence of anti-HBc and anti-HBs was detected by enzyme immunoassay (EIA) using the respective commercial kits (Monolisa; Bio-Rad, Marne-La-Coquette, France).
Participants positive for HBsAg were tested for hepatitis B e antigen (HBeAg), as a surrogate marker of active replication, using an enzyme immunoassay kit (Monolisa; Bio-Rad, Marne-La-Coquette, France). Reactivity was determined according to the manufacturer's instructions. HBV infection was defined as positive when HBsAg alone, or both HBsAg and HBeAg, were detected in the same patient.
HDV serology
The presence of antibodies against HDV (anti-HDV) was assessed only in HBV-positive patients by enzyme-linked immunosorbent assay (ELISA) using a commercial kit (ETI-AB-DELTAK-2 Anti-HDV; DiaSorin, P2808). The reactivity of samples was determined according to the manufacturer's instructions. Samples with absorbance values within ±10 % of the cut-off value were retested in order to confirm the initial result. Only repeatedly reactive samples were considered positive.
Molecular analysis
Occult hepatitis B, characterized by the presence of hepatitis B virus (HBV) DNA in the serum of patients in the absence of serological markers indicating active viral replication [18,19], was investigated in this study by quantification of HBV viral loads in HCC cases negative for HBsAg. In addition, we also searched for HCV RNA and quantified HCV viral loads in patients with anti-HCV antibodies to look for possible occult HCV infection. Plasma HCV-RNA and HBV-DNA levels were quantified using the Abbott RealTime assay (Abbott Molecular Inc, Des Plaines, IL 60018, USA) according to the manufacturer's instructions. The lower detection limit of the assay was 12 IU/mL for HCV RNA and 10 IU/mL for HBV DNA.
Statistical analyses
Data are presented as mean ± SD. The prevalences of HBV and HCV were compared between HCC cases and controls. Odds ratios (ORs) were calculated to assess the risk of HCC using conditional logistic regression analysis, and confidence intervals were determined. Viral loads were compared using the t-test or the Mann-Whitney U test, as appropriate, following log2 transformation of the values. Differences were considered statistically significant for P < 0.05. The analyses were performed using SPSS 16.0 and Prism 6.0 (GraphPad Inc, La Jolla, CA, USA) statistical software.
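As a rough sketch of the viral load comparison described above, the snippet below log2-transforms the values and applies either a t-test or a Mann-Whitney U test with SciPy. It is only an illustration with invented numbers; the study's individual-level data are not reproduced here.

```python
import numpy as np
from scipy import stats

def compare_viral_loads(group_a_iu_ml, group_b_iu_ml, assume_normal=True):
    """Compare viral loads between two groups after log2 transformation,
    using a t-test or a Mann-Whitney U test as appropriate."""
    a = np.log2(np.asarray(group_a_iu_ml, dtype=float))
    b = np.log2(np.asarray(group_b_iu_ml, dtype=float))
    if assume_normal:
        return stats.ttest_ind(a, b, equal_var=False)
    return stats.mannwhitneyu(a, b, alternative="two-sided")

# Invented example values in IU/mL.
hcc_cases = [2.0e6, 5.5e5, 8.0e6, 1.2e7, 3.1e5]
controls = [4.0e5, 6.2e4, 9.0e5, 2.5e5, 1.1e5]
print(compare_viral_loads(hcc_cases, controls, assume_normal=False))
```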
Baseline characteristics
In all, 88 HCC cases and 85 controls were included in this study. The mean age of the HCC cases was 46.0 ± 18.8 years (range 8-93 years). Among the HCC cases, 69.3 % (61/88) were men (45.8 ± 18.8 years) and 30.7 % (27/88) were women (46.5 ± 19.1 years). Of the 85 controls included, 21 had conditions managed in the division of cardiology, 52 in internal medicine and 12 in the division of surgery. Every control patient was confirmed to be without clinical symptoms of liver disease and without HCC. The mean age of the controls was 42.1 ± 16.4 years (range: 11-82 years). Among these, 67.1 % (57/85) were men with a mean age of 41.9 ± 15.1 years and 32.9 % (28/85) were women with a mean age of 46.5 ± 19.1 years. The mean age of HCC cases and controls was not significantly different. The basic demographic characteristics of the patients are shown in Table 1.
Serological prevalence of hepatitis viruses markers in HCC patients and controls
The prevalence of HBV, HCV and HDV markers is shown in Table 2. Hepatitis B surface antigen (HBsAg) was present in 65.9 % (58/88) of the cases and in 10.6 % (9/85) of the controls, a significant difference between the two groups (P < 0.0001). Anti-HBc positivity was more prevalent in HCC patients than in control patients (82.9 % vs 78.8 %; NS). The presence of both anti-HBc and anti-HBs, defined as resolved HBV infection, was significantly (P < 0.05) more frequent in control patients than in HCC cases (45.1 % vs 12.5 %, respectively). Of the 58 HBsAg-positive HCC cases, hepatitis Delta antibody (anti-HDV) co-infection was present in 41.4 % (24/58). Only one control was co-infected with HDV in our study. Regarding HCV infection, antibody against HCV (anti-HCV) was also more frequently detected in HCC cases than in controls (26.1 % (23/88) vs 3.5 % (3/85)), a significant difference between the two groups (P < 0.0001). Overall, the prevalence of the serological markers of the three viruses was much higher in HCC cases than in control patients. Only 16 % of HCC cases (n = 14) were free from markers of chronic hepatitis, whereas this proportion was 87 % in controls (P < 0.0001). Taken together, the OR associated with positivity for either HBsAg or anti-HCV was 35 (95 % CI: 15.0-81.9).
Dual HBV and HCV infection was observed in 5.7 % (5/88) of HCC cases and in 1.2 % (1/85) of controls, and triple infection was found in only one HCC patient (2.3 %).
A small subset of patients was positive for hepatitis B e antigen (HBeAg). These patients were much younger than the whole series (mean = 29.4 years). No control presented a positive serology for HBeAg. Due to the small number of positive subjects, the difference between cases and controls did not reach statistical significance (P = 0.059, OR = 11.2, 95 % CI = 0.6-207.1).
Hepatitis B and C viral loads in HCC patients and controls
As shown in Table 3, HBV and HCV genomes were found in 93 % (54/58) and 87 % (20/23) of serologically positive HCC cases, respectively, and in 67 % (6/9) and 100 % (3/3) of the corresponding controls. The HBV-DNA level was significantly higher in HBsAg-positive HCC cases than in HBsAg-positive controls (6.3 log IU/mL vs 5.7 log IU/mL, respectively; P < 0.05). HCV-RNA levels were also higher in anti-HCV-positive HCC cases (5.5 log10) than in anti-HCV-positive controls (5.2 log10), although the difference was not statistically significant.
We further analyzed viral loads in HBV-related HCC cases co-infected with hepatitis Delta virus compared with those without HDV infection. HBV DNA loads were lower in HDV-positive samples than in samples positive for HBsAg only (3.6 ± 1.7 vs 4.5 ± 2.1), but the difference was not statistically significant (P = 0.13).
Regarding occult hepatitis, 30.0 % (9/30) of HBsAg-negative patients were found positive by PCR, suggesting a high rate of occult B infection (OBI) among the HCC cases in this cohort. The mean age of the 9 OBI patients identified was significantly higher than that of patients with overt infection (59.9 ± 14.1 vs 39.04 ± 14.7 years), suggesting that the kinetics of tumor development can differ between overt and occult HBV infection. The overall proportion of patients with overt or occult B infection thus reached 76 % (n = 67/88). By contrast, no occult HCV infection was found among the HCC cases in this study.
Analysis of HBV, HCV and HDV as risk factors for HCC
The analysis of serological markers as risk factors confirmed that the risk of developing HCC was strongly associated with the three viruses in our country. The odds ratios (OR) and 95 % confidence interval (CI) for presence of HBV, HCV and HDV markers were 16.3 (7.2-37.1), 9.6 (2.8-33.6) and 29.3 (4.1-1231) respectively. Similarly, the adjusted OR with sex and age values for each infection was very high (Table 2) suggesting that, in Central Africa, the carcinogenic risk associated with HBsAg positivity is higher than the risk associated with anti-HCV sero-reactivity ( Table 2). The strongest hepatocarcinogenic viral factor in Cameroonian patients was, however, the presence of an infection with HDV.
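As a back-of-the-envelope cross-check, the crude odds ratios quoted above can be recovered from the 2×2 counts reported in the serology section (58/88 HBsAg-positive cases vs 9/85 controls; 23/88 anti-HCV-positive cases vs 3/85 controls) with simple Wald confidence intervals. This ignores the matching handled by the conditional logistic regression actually used in the study, so it is only an illustrative recalculation.

```python
import math

def crude_or_with_ci(exposed_cases, n_cases, exposed_controls, n_controls, z=1.96):
    """Crude odds ratio and Wald 95% confidence interval from a 2x2 table."""
    a, b = exposed_cases, n_cases - exposed_cases            # cases: exposed / unexposed
    c, d = exposed_controls, n_controls - exposed_controls   # controls: exposed / unexposed
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(odds_ratio) - z * se_log_or)
    hi = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lo, hi

print(crude_or_with_ci(58, 88, 9, 85))  # HBsAg: ~16.3 (7.2-37.1)
print(crude_or_with_ci(23, 88, 3, 85))  # anti-HCV: ~9.7 (2.8-33.6)
```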
In order to characterize a possible stratification of the viral markers within the series examined, the association between viral hepatitis, age range and sex was also analyzed. The sex ratios of HBV-, HDV- and HCV-positive cases were not significantly different (P > 0.05). By contrast, the age distribution was significantly different (P < 0.05) between anti-HCV-positive and HBsAg-positive patients (64.6 ± 15.5 years vs 39.04 ± 14.7 years). The same observation was also made in patients co-infected with HDV. The peak incidence of HBsAg and anti-HDV was located in the 20-39-year-old subset, and about 80 % of HCV-related HCC patients were in the age group ≥60 years. The liver tumor development process therefore appears drastically different between the two viruses in Cameroon, a situation plausibly due to the usual significant difference in age at contamination. The age distribution of the subjects analyzed is shown in Fig. 1a, b and c.
Discussion
This is the first reported case-control study of HCC in Cameroon and, to the best of our knowledge, the first in Central Africa (Cameroon, Central African Republic, Chad, Gabon, Congo, Democratic Republic of the Congo), a region that counts 136 million inhabitants. A close relationship between the development of HCC and older age in the setting of HBV or HCV infection has been stressed in various areas of the world [20][21][22][23][24].

In the current work, we observed an overall low mean age of patients (46 years) when compared with the situation prevailing in resource-rich countries, where the tumor mostly occurs at an age that ranges from the late fifties to the early seventies (approximate mean age 65 years) [20][21][22][23][24][25]. Our data are, however, similar to those reported in previous studies conducted on Cameroonian HCC cases and to other studies conducted in Central African countries such as Gabon [26]. Our findings are also in accordance with studies conducted in West African countries such as Nigeria [27], Gambia [28] and Ghana [29]. Our results contrast, however, with those of North African HCC patients [23]. It thus seems that the kinetics of tumor development has been stable for decades and is homogeneous throughout Sub-Saharan Africa.
The high male-to-female ratio (2.3) observed in this Cameroonian cohort of HCC cases is similar to those reported from the East (2.4), the West (2.2) and the South (2.3) of Africa [14] and in the literature. This may be due in part to the higher prevalence of HBV and HCV infections among men and/or to the fact that males more often have a history of alcohol drinking [30][31][32][33].
In the present study, all three viruses responsible for chronic hepatitis were strongly associated with HCC development in Cameroon. Our results show that HBV is the predominant risk factor associated with HCC development in our country; it is involved in 65 % of cases with overt infection and in 75 % of cases when both overt and occult B infections are combined. Furthermore, almost all of our patients had a serological marker of previous HBV infection. A high prevalence of HBV infection has been consistently reported in various groups of patients in Cameroon: 10.1 % in blood donors [12], 20.4 % in pregnant women [10], 23.6 % in health care workers [11] and 33 % in the Bantus enrolled in the Central, South, North West and East parts of the country [13]. Our findings regarding HBV prevalence in HCC are in keeping with data reported for decades in Gabon (40.5 %) and with many studies conducted in West African countries such as Niger (73 %), Senegal and Mali (63 %), Gambia (60 %) and Nigeria (61 %) [26,[34][35][36]. This observation confirms that HBV infection is still the most important risk factor of HCC development in Sub-Saharan Africa and that Central Africa does not differ from other sub-Saharan regions.
We observed the presence of anti-HCV in 26 % of the HCC cases. Similar levels were found in different studies conducted in Sub-Saharan Africa [34,36]. Our results contrast, however, with those of the study conducted in 2008 by Perret et al. on chronic hepatitis B and C patients from Gabon, where the prevalence of the two viruses was equal, and with many studies conducted in HCC cases from North Africa, where HCV infection is generally more prevalent. In Cameroon, as in many other countries, anti-HCV carriers represent a birth cohort (>50 years old) for which the disease-associated burden will progressively decrease as this generation passes away.
Much less is known about HDV in Sub-Saharan Africa. In this study, a high proportion of patients co-infected with hepatitis Delta virus was detected among HCC cases (41.1 %) compared with control individuals (1.1 %). The present result is consistent with previous studies conducted on non-HCC patients with liver diseases in Cameroon [9,13], in many Sub-Saharan countries and in the literature [29]. By contrast, our findings differ markedly from those reported in Nigerian HCC cases [37], where no HDV co-infection was found in either HCC cases or controls.
Active viral replication is known to be associated with an increased risk of HCC development [38,39]. In the present work, the mean HBV and HCV viral loads were significantly higher in HCC patients than in control patients. Regarding patients co-infected with HDV, the mean HBV viral load was lower in patients co-infected with HDV than in patients without HDV infection (15,328.4 IU/mL vs 261,264.1 IU/mL, respectively). This result is consistent with the inhibition of HBV replication by HDV infection reported in previous studies [9,40]. We observed approximately 16-fold, 10-fold and 29-fold increases in the odds of HCC for HBV, HCV and HDV infections, respectively. Our data indicate that HBV and HCV infections are involved in the vast majority of HCC cases observed in Cameroon: we detected at least one viral marker in around 85 % of cases. This result indicates that most Cameroonian HCC cases result from a hepatotropic virus-related chronic liver disease. Similarly to what was reported in Cameroon and elsewhere, we observed that tumor onset is drastically different (25 years apart) between HBV- and HCV-infected patients. This observation suggests either that it takes longer for HCV infection to result in HCC or that, because HCV infection occurs later in life (through contact with inappropriate health practices, for example) as compared with vertical or early horizontal HBV transmission, the kinetics of both tumorigenic processes are grossly similar. Overall, both vertical transmission and early horizontal infection for HBV [41,42], coupled with parenteral exposure in the 1980s for HCV [43][44][45][46], explain the spread of both viruses in Cameroon and provide the epidemiological basis of liver tumor development.
In this study, approximately 15 % of the HCC patients were negative for HBV and HCV markers. This finding suggests a significant involvement of non-viral factors in HCC development in Cameroon. In many African and Asian countries, HCC has been associated with chronic exposure to aflatoxin B1, a mycotoxin known to be present in Cameroonian staple foods such as maize [47]. There are some limitations in this study that should be considered. First, we used only anti-Delta antibodies to assess the association between HDV infection and HCC; second, HBV DNA and HCV RNA were not searched for in HBV- and HCV-negative controls. Molecular analyses would better assess the interplay with HDV infection and estimate the real contribution of HBV and HCV infections to the development of HCC.
Besides environmental factors, lifestyle risk factors, including prolonged abuse of alcohol and cigarette smoking, both highly prevalent in Cameroon, are known to increase liver cancer risk [1,5]. These factors were not evaluated in this study, and their contributory role as causative agents of HCC in Cameroon is still poorly documented. There is therefore an urgent need for further studies aiming to evaluate the possible involvement of non-viral co-carcinogenic factors, acting alone or in association with the viruses, in HCC development.
Conclusion
In summary, this study provides for the first time a landscape of the major viral risk factors associated with HCC development in Cameroon and in Central Africa. Our results show that patients co-infected with HDV or mono-infected with HBV are at very high risk of developing HCC. We consider that, in the absence of easy access to novel antiviral compounds, effective preventive measures aiming to control HBV transmission (vaccination at birth, better hygiene) are of paramount importance to reduce the pool of Cameroonian citizens at risk and to significantly curb the future HCC incidence in Central Africa, including Cameroon. Among all individuals at risk, HDV-infected patients represent the most worrying subpopulation, due to the almost complete lack of efficient antiviral compounds active against HDV and to the remarkably high relative risk of HCC associated with this viral infection. A national epidemiological survey aiming to identify groups at risk of HDV infection and a closer surveillance of hepatitis Delta among women of reproductive age should be undertaken now in Cameroon.
Will we reach the Sustainable Development Goals target for tuberculosis in the European Union/European Economic Area by 2030?
We assessed progress towards the Sustainable Development Goals target for tuberculosis in the European Union/European Economic Area using the latest tuberculosis (TB) surveillance and Eurostat data. Both the TB notification rate and the number of TB deaths were decreasing before 2015, and the TB notification rate further declined between 2015 and 2017. With the current average decline in the notification rate and the number of TB deaths, however, the EU/EEA will not reach the targets by 2030.
In 2015, all United Nations Member States adopted the 17 Sustainable Development Goals (SDGs) and their 169 targets [1]. The target for tuberculosis (TB) is to end the epidemic by 2030. The End TB Strategy provides three additional sub-targets that are used to measure progress towards the SDGs [2]. According to these sub-targets, the TB incidence should be 80% lower in 2030 compared with 2015; the number of TB deaths should be 90% lower and no family should face catastrophic costs due to TB. Here, we assess progress towards the first two sub-targets at European Union/ European Economic Area (EU/EEA) level. Information on catastrophic costs is not available at EU/EEA level.
Analysis
We used data obtained from the European tuberculosis surveillance network under the joint coordination of the European Centre for Disease Prevention and Control (ECDC) and World Health Organization (WHO) Regional Office for Europe [3] and data from Eurostat [4] for the years 2008-17. The TB case data were extracted from The European surveillance system (TESSy) [5] hosted by ECDC (as at 5 October 2018). Since Croatia did not report case-based TB data to TESSy for 2008-11, Croatia was excluded from the notification rate for those years. The population denominators for the notification rates were obtained from Eurostat (as at 20 April 2018), as were the data on cause of death due to TB (ICD 10 code A15-A19 and B90, as at 21 November 2018) [6]. Cause of death data were only available up to 2015 (last updated by EUROSTAT on 20 July 2018). The TB notification rates were used as proxy for TB incidence and reported TB deaths as a proxy for actual number of deaths due to TB.
Countries with missing annual data on deaths that reported 10 or fewer deaths per year in the remaining years were considered to have zero TB deaths for the years with missing data. Denmark did not report any data on TB deaths in 2010 but reported more than 10 deaths in the other years. We therefore estimated the number of TB deaths by calculating the average of the two preceding and following years and applying the ceiling function in STATA version 14.2 (StataCorp, College Station, Texas, United States).
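The imputation rule for the missing 2010 value can be written compactly. The sketch below is in Python rather than the Stata actually used, reads "the two preceding and following years" as the two years on each side of the gap, and uses invented numbers rather than Denmark's real figures.

```python
import math

def impute_missing_deaths(deaths_by_year: dict, missing_year: int, window: int = 2) -> int:
    """Impute a missing annual death count as the average of the surrounding years,
    rounded up with the ceiling function as described in the text."""
    neighbours = [deaths_by_year[year]
                  for year in range(missing_year - window, missing_year + window + 1)
                  if year != missing_year and year in deaths_by_year]
    return math.ceil(sum(neighbours) / len(neighbours))

# Invented series with 2010 missing: ceil((31 + 28 + 25 + 27) / 4) = 28.
series = {2008: 31, 2009: 28, 2011: 25, 2012: 27}
print(impute_missing_deaths(series, 2010))
```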
To assess whether the EU/EEA will reach the SDG target we used the average annual change in notification rate between 2008 and 2017 and the average annual change in number of deaths between 2008 and 2015 and assumed that the change will continue similarly in future.
For our analysis, we used STATA/SE 14.2.
Key findings
The total TB notification rate declined during the study period (Figure 1). In 2017, the notification rate was 10. If the average 5.3% annual decline in the number of TB deaths continued unchanged, the EU/EEA would reach 1,947 TB deaths per year in 2030. The average annual decline required to reach the target is 14.2%.
Discussion
The targets for TB incidence and the number of TB deaths set in the End TB Strategy translate to 2.4 TB cases per 100,000 population and 444 TB deaths for the EU/EEA in 2030. Both the annual TB notification rate and the number of TB deaths were decreasing before 2015 and the TB notification rate further declined by 10% between 2015 and 2017. If the average 4.8% annual decline of the TB notification rate continues in the EU/EEA we will not reach the target by 2030; the average 5.3% decline of TB deaths is also not sufficient to reach the target.
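The arithmetic behind these statements is a constant-rate projection over the 15 years from 2015 to 2030. The sketch below reproduces the required 14.2% annual decline for TB deaths and projects deaths forward under the observed 5.3% decline; the 2015 baseline of roughly 4,440 deaths is back-calculated here from the stated 2030 target of 444 deaths and is not a figure quoted directly in the text.

```python
def required_annual_decline(reduction_target: float, years: int) -> float:
    """Constant annual decline d such that (1 - d) ** years equals 1 - reduction_target."""
    return 1.0 - (1.0 - reduction_target) ** (1.0 / years)

def project_to_2030(value_2015: float, annual_decline: float, years: int = 15) -> float:
    """Project a 2015 value forward to 2030 assuming a constant annual decline."""
    return value_2015 * (1.0 - annual_decline) ** years

print(required_annual_decline(0.90, 15))  # ~0.142: the 14.2% needed for TB deaths
print(required_annual_decline(0.80, 15))  # ~0.102 for the notification rate (our own calculation)

# Assumed 2015 baseline of ~4,440 deaths; a constant 5.3% decline gives ~1,960 deaths in 2030,
# close to the 1,947 quoted in the text (the difference comes from rounding of the decline rate).
print(project_to_2030(4440, 0.053))
```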
Compared to other parts of the world, the observed TB notification rate and number of TB deaths in the EU/EEA are low [7]. Nonetheless, the SDG and End TB Strategy targets apply to the EU/EEA and our results show that there is little room for complacency.
Globally, the average annual decline of the TB incidence rate was 1.5% between 2000 and 2017, far less than what was observed in the EU/EEA [7]. The global number of TB deaths among HIV-negative people decreased by 5% since 2015 and by 29% between 2000 and 2017 [7]. Since 33% of the reported TB cases in the EU/EEA are diagnosed in individuals of foreign origin [8], a decrease in the global incidence of TB may impact the observed TB incidence in the EU/EEA. This would be more apparent in countries that diagnose a large proportion of their TB cases in individuals of foreign origin such as Malta, Norway and Sweden (> 85% of TB cases of foreign origin).
Within the EU/EEA, some countries are closer to ending TB than others: 24 countries reported a notification rate of less than 10 TB cases per 100,000 population [8]. In addition, there are substantial differences in the annual change in the TB notification rate across the EU/EEA [8]. In the period 2013-17, two countries observed an increasing notification rate of more than 5% per year, 17 had a decreasing notification rate of more than 5% and in 11, the annual change was between -5% and +5%. To reach the target of 2.4 TB cases per 100,000 population, different strategies may need to be applied within the EU/EEA countries, depending on the respective epidemiological situation. Similarly, the estimated TB deaths among HIV-negative persons in EU/EEA countries in 2017 ranged between zero and 920, with an annual percentage change ranging from -17.6% to 15.0% between 2013 and 2017 [8], further indicating that some EU/EEA countries are making more progress towards the target than others.
The End TB Strategy includes a package of interventions that are encouraged for use by countries to prevent and control TB and reach the targets [2]. The interventions are grouped under three pillars: (i) integrated, patient-centred care and prevention, (ii) bold policies and supportive systems, and (iii) intensified research and innovation. Countries are encouraged to adapt their strategy to the specifics of their TB epidemic. In 2017, 17 EU/EEA countries had a national TB control plan or strategy [9].
It is acknowledged that specific actions are needed in countries that are close to ending TB and aiming for TB elimination [10]. These countries will often have epidemics concentrated in hard-to-reach and vulnerable populations, e.g. migrants, prisoners and homeless people. Targeting hard-to-reach and vulnerable populations requires specific interventions and may need additional resources [11][12][13][14]. For example, screening and providing treatment for latent TB infection (LTBI) prevents new TB cases [15,16], and screening and treating prisoners and migrants for active TB may also contribute to a further decline [12,13]. To our knowledge, however, not all EU/EEA countries test hard-to-reach populations for LTBI. Mathematical modelling and cost-effectiveness studies show that programmatic management of LTBI can have an impact on TB burden [17,18]. In addition, migrants are not screened for TB in all EU/EEA countries [19]. The interventions suggested in the ECDC guidance on TB control in vulnerable and hard-to-reach populations are also not implemented in all countries in the EU/EEA [11].
Our results come with limitations. We used notification rate as a proxy for TB incidence. We believe this to be a valid approach since several studies have shown that completeness of TB surveillance data in EU/EEA countries is > 80% [20][21][22]. We relied on the completeness and accuracy of the cause of death register for the number of TB deaths. The quality of death registration systems has been assessed as good in most countries of the WHO European Region [23]. We therefore consider our results valid for assessing progress towards the targets on an EU/EEA level. However, accurately assigning cause of death is challenging [24] and improvements in cause of death registration may still be needed on country level [25]. We recognise that improvements in both TB surveillance and cause of death registration can affect the progress assessment if more complete data become available.
In conclusion, additional interventions need to be implemented to reach the targets for TB incidence and number of TB deaths in the EU/EEA, and thus the SDG, especially in countries that are currently facing stable or increasing trends.
Measurement of the electroweak production of W$\gamma$ in association with two jets in proton-proton collisions at $\sqrt{s}$ = 13 TeV
A measurement is presented for the electroweak production of a W boson, a photon ($\gamma$), and two jets (j) in proton-proton collisions. The leptonic decay of the W boson is selected by requiring one identified electron or muon and large missing transverse momentum. The two jets are required to have large invariant dijet mass and large separation in pseudorapidity. The measurement is performed with the data collected by the CMS detector at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 138 fb$^{-1}$. The cross section for the electroweak W$\gamma$jj production is 23.5 $^{+4.9}_{-4.7}$ fb, whereas the total cross section for W$\gamma$jj production is 113 $\pm$ 13 fb. Differential cross sections are also measured with the distributions unfolded to the particle level. All results are in agreement with the standard model expectations. Constraints are placed on anomalous quartic gauge couplings (aQGCs) in terms of dimension-8 effective field theory operators. These are the most stringent limits to date on the aQGCs parameters $f_\mathrm{M,2-5}$ $/$ $\Lambda^4$ and $f_\mathrm{T,6-7}$ $/$ $\Lambda^4$.
Introduction
The discovery of the Higgs boson at the CERN LHC [1][2][3] was made about ten years ago. Now, it is of great interest to examine in depth the mechanism of electroweak (EW) symmetry breaking using rare EW processes. Vector boson scattering (VBS) processes play an independent and complementary role in understanding the EW symmetry breaking. The nonabelian nature of gauge interactions in the standard model (SM) leads to a large variety of VBS processes with unique features and opportunities to probe new physics beyond the SM (BSM).
The center-of-mass energy of the proton-proton (pp) collisions and the integrated luminosity accumulated by the LHC experiments present an opportunity to measure many rare VBS processes. For example, the observed (expected) significance for the EW production of Wγ + 2 jets reported by CMS is 5.3 (4.8) standard deviations (SD), combining Run 1 data and Run 2 data collected in 2016 [4]. This paper presents a measurement of the EW Wγjj production at √s = 13 TeV based on the complete Run 2 data collected during 2016-2018, superseding the previous CMS result [4]. A complete set of tabulated results of this analysis is available in the HEPData database [5]. In addition to the increased integrated luminosity, our new results include: (i) an updated fiducial region requiring jets with p_T > 50 GeV; (ii) the removal of the missing transverse momentum requirement from the fiducial region definition; (iii) the treatment of the interference term between the EW- and quantum chromodynamics (QCD)-induced processes as a background component; and (iv) the treatment of the out-of-fiducial signal contribution as a background component.
The EW signal includes both VBS and non-VBS diagrams, such as the contributions depicted in Figs. 1(a)-1(c). The QCD-induced production of Wγjj, in which both jets originate from the QCD interaction, occurs at a much higher rate and is depicted in the rightmost diagram in Fig. 1(d). The interference among the VBS diagrams ensures the unitarity of the VBS cross section in the SM at high energy. An interference is also expected between the EW- and QCD-induced processes [6,7]. The interference is regarded as a background when measuring the EW process. The cross section for the EW Wγjj production and the total cross section for the Wγjj production that includes both the EW- and QCD-induced processes are determined in the same restricted fiducial region. The measurements are based on a two-dimensional fit in the invariant mass m_ℓγ of the lepton and the photon and the invariant mass m_jj of the two jets. Differential cross sections unfolded to the particle level are also measured.
In addition, anomalous couplings beyond the SM, such as anomalous triple and quartic gauge couplings (aTGCs and aQGCs) predicted in BSM theories [8], would affect the Wγjj production. The aTGCs are well constrained by processes such as Higgs boson and diboson production, whereas the aQGCs can be better constrained by VBS measurements. In this analysis, constraints are placed on aQGCs in terms of dimension-8 effective field theory operators.
The data set used in this analysis corresponds to an integrated luminosity of 138 fb−1 collected in Run 2 with the CMS detector [9] at the LHC. The final state is characterized by an isolated electron or muon with high transverse momentum (p_T), large missing transverse momentum (p_T^miss) from the leptonic decay of the W boson, a high-p_T isolated photon, and two jets. Exploiting the VBS Wγjj topology, the two jets are required to have a large invariant mass m_jj and a large separation in pseudorapidity |∆η_jj|. This selection effectively suppresses the contamination from the QCD-induced production of Wγjj, as well as the non-VBS EW contribution.
The CMS detector
The central feature of the CMS [9] apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections.Forward calorimeters extend the coverage provided by the barrel and endcap detectors up to a pseudorapidity of |η| = 5.Muons are detected in gas-ionization chambers embedded in the steel flux-return yoke outside the solenoid.Events of interest are selected using a twotiered trigger system [10,11].The first level, composed of specialized hardware processors, uses information from the calorimeters and muon detectors to select events at a rate of around 100 kHz within a fixed latency of about 4 µs.The second level, the high-level trigger (HLT), consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing that reduces the event rate to around 1 kHz before data storage.
A more detailed description of the CMS detector, together with a definition of the coordinate system and kinematic variables, is reported in Ref. [9].
Signal and background simulation
The signal and background processes are simulated using the MADGRAPH5_aMC@NLO (MG5) Monte Carlo (MC) generator [12]. The EW Wγjj signal is simulated at leading order (LO) using MG5 version 2.6.0. The dominant background from the QCD-induced production of Wγjj is simulated with up to one additional jet in the matrix element calculations at next-to-leading order (NLO) with MG5 version 2.4.2, using the FxFx scheme [13] to merge jets from matrix elements and from parton showering. The interference term between the EW- and QCD-induced processes, of order O(α^4 α_S) at tree level, is estimated with a full simulation and is treated as a part of the QCD-induced Wγjj contribution. The contribution of the interference is calculated as the difference between the total Wγjj production, which contains the interference term, and the sum of the individual EW- and QCD-induced Wγjj contributions as simulated by MG5. The interference term ranges from 1% to 3% of the expected EW signal in the signal region (defined in Section 5), varying with the m_jj bin.
The PYTHIA 8 generator with the CUETP8M1 [20,21] tune for 2016 and the CP5 [22] tune for 2017-2018 is used for parton showering, hadronization, and underlying-event simulation.The NNPDF 3.0 (3.1) set [23] is used for the parton distribution functions (PDFs) for the simu-lated samples of the 2016 (2017-2018) data-taking periods.All simulated events are processed with GEANT4 [24] for the CMS detector simulation.Correction factors evaluated with the tagand-probe method [25] are used to account for differences between data and simulation in the trigger, reconstruction, and identification (ID) efficiencies.Additional simulated pp interactions (pileup, PU) are superimposed over the hard scattering interaction with a distribution matching that obtained from the collision data.
Object reconstruction
The particle-flow (PF) algorithm [26] reconstructs and identifies individual particles in an event, through an optimized combination of information from the various components of the CMS detector.The energy of photons is obtained from the ECAL measurement.The energy of electrons is determined from a combination of the electron momentum at the primary interaction vertex from the tracker, the energy of the corresponding ECAL cluster, and the energy sum of all bremsstrahlung photons spatially compatible with originating from the electron track.
The energy of muons is obtained from the curvature of the corresponding tracks.The energy of charged hadrons is determined from a combination of their momentum measured in the tracker and the matching ECAL and HCAL energy depositions, corrected for the response of the calorimeters to hadronic showers.The energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energies.The PF candidates are used for a variety of purposes in this analysis, such as evaluating electron, muon, and photon isolation variables, reconstructing jets, and computing the p miss T in the event, as described below.
The reconstructed vertex with the largest value of summed physics-object p 2 T is taken as the primary pp interaction vertex [27].The jets are clustered using the anti-k T jet finding algorithm [28,29] using tracks assigned to candidate vertices as inputs and the distance parameter is set to 0.4.
Electron candidates must satisfy |η| < 2.5 and p T > 35 GeV, excluding the ECAL transition region 1.444 < |η| < 1.566.Electrons are also required to satisfy identification criteria [30]: a selection on the relative amount of energy deposited in the HCAL, a match of the trajectory in the tracker with the position of the ECAL cluster, requirements on the number of missing measurements in the tracker, the compatibility of the electron track and the primary vertex, and σ ηη , which quantifies the spread along η of the shower in the ECAL.Electrons identified as arising from photon conversions are removed [30,31].The CMS cut-based tight ID is used to define tight electrons from W decays, whereas the CMS cut-based veto ID is used to define loose electrons to suppress events that contain additional leptons.An isolation requirement is applied to electrons.The isolation variable is defined relative to the electron p T by summing the p T of charged hadrons and neutral particles within geometrical cones of ∆R = √ (∆η) 2 + (∆ϕ) 2 = 0.3 around the electron momentum direction.To minimize PU effects, only charged hadrons originating from the primary vertex are included.For the neutral-hadron and photon components, an estimate of the expected PU contribution (p PU T ) is subtracted [32].For the tight (loose) electrons, the isolation variable is required to be less than 0.0287 + 0.506 GeV/p T (0.198 + 0.506 GeV/p T ) if the pseudorapidity of the ECAL cluster (η SC ) satisfies |η SC | < 1.479, and less than 0.0445 + 0.963 GeV/p T (0.203 + 0.963 GeV/p T ) if 1.479 < |η SC | < 2.5.
Muon candidates are required to satisfy |η| < 2.4 and p T > 35 GeV.They must satisfy ID criteria based on the number of measurements in the muon system and the tracker, the number of matched muon detector planes, the quality of the combined fit to the track, and the compatibil-ity of the muon to originate from the primary vertex [33].The CMS cut-based tight ID is used.An isolation requirement is applied to muons.The isolation variable is defined relative to the muon p T by summing the p T of charged hadrons and neutral particles within geometrical cones of ∆R = 0.4.The PU suppression is performed in a similar way as done for electrons.The isolation variable is required to be < 0.15(0.25) to define tight (loose) muons.Tight muons are used to select signal events, whereas loose muons are used to veto events that feature additional leptons [33].
Photon candidates must satisfy |η| < 2.5 and p T > 25 GeV, excluding the ECAL transition region of 1.444 < |η| < 1.566.To minimize the contribution of jets misidentified as photons, photon candidates must satisfy [34] criteria based on the distribution of energy deposited in the ECAL and HCAL, and criteria based on the isolation variables constructed from the kinematic inputs of the charged hadrons, neutral hadrons, and other photons near the photon of interest.The CMS cut-based medium ID defines tight photons and is used to identify prompt photons (i.e., not originating from hadron decays) in the final state, and the CMS cut-based loose ID defines loose photons and is used to identify nonprompt photons, which are mainly products of neutral pion decays [34].An isolation requirement using a consistent definition as mentioned above for electrons and muons is applied with ∆R = 0.3 for the three components separately, i.e., the charged hadron isolation must be less than 1.141 (1.051), the neutral hadron isolation must be less than 1.189 ) and the photon isolation component must be less than 2.08 + 0.004017p T (3.867 + 0.0037p T ), for the tight photon candidates found in the barrel (endcap) region, whereas the charged hadron isolation must be less than 1.694 (2.089), the neutral hadron isolation must be less than 24.032 + 0.01512p T + 2.259 × 10 −5 p 2 T (19.722 + 0.0117p T + 2.3 × 10 −5 p 2 T ) and the photon isolation component must be less than 2.876 + 0.004017p T (4.162 + 0.0037p T ), for the loose photon candidates found in the barrel (endcap) region, where p T is measured in GeV.The PU suppression is performed in a similar way as for electrons.An additional veto is applied on electrons reconstructed as photons.
Jets are required to have |η| < 4.7 and p T > 50 GeV.To reduce the contamination from PU, charged PF candidates within the tracker acceptance are excluded from the jet clustering when they are associated with PU vertices [26].The contribution from neutral PU particles to the jet energy is corrected based on the projected area of the jet onto the front face of the calorimeter [35].A jet energy correction, similar to the one developed for 8 TeV collisions [36], is obtained from dedicated studies performed on both data and simulated events (typically involving dijet, γ+jet, Z+jet, and multijet production).Other residual corrections are applied to the data as functions of p T and η to correct for small differences between data and simulation.Additional quality criteria are applied to jet candidates to remove spurious jet-like features originating from isolated noise patterns in the calorimeters or in the tracker [37].
The missing transverse momentum p_T^miss is computed as the projection onto the plane perpendicular to the beam axis of the negative vector momentum sum of all PF candidates originating from the primary vertex in an event [38], and its magnitude is denoted as p_T^miss. The jet energy corrections are propagated to the p_T^miss. Data-to-simulation efficiency ratios are used as scale factors to correct the simulated event yields.

Event selection

A separation of ∆R > 0.5 is required between any two selected objects (photon, lepton, jets), as detailed in Section 9. In the electron channel, we additionally require the invariant mass m_ℓγ of the selected photon and electron to be inconsistent with the Z boson mass, |m_ℓγ − m_Z| > 10 GeV, to suppress the Z → e+e− background in which one electron is misidentified as a photon. Depending on the photon pseudorapidity, the electron and muon channels are each subdivided into a barrel region with |η_γ| < 1.444 and an endcap region with 1.566 < |η_γ| < 2.5. The nominal selection consists of all the above requirements.
The longitudinal component of the neutrino momentum is estimated by solving the quadratic equation that constrains the mass of the charged lepton and neutrino system to the worldaverage value of the W boson mass [39].As described in Ref. [40], when there are multiple solutions, the one with the smallest longitudinal neutrino momentum component is chosen; if there are only complex solutions, the real part is chosen as the longitudinal momentum.
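For illustration, the W mass constraint can be written out explicitly. In the massless-lepton approximation, requiring the lepton-neutrino invariant mass to equal m_W gives a quadratic equation in the longitudinal neutrino momentum, solved below with the same conventions as described above (smaller |p_z| solution, real part when the solutions are complex). This is a generic sketch, not the CMS analysis code, and the W mass value and example kinematics are assumptions.

```python
import math

M_W = 80.4  # GeV, approximate world-average W boson mass (assumed value)

def neutrino_pz(lep_px, lep_py, lep_pz, met_x, met_y, m_w=M_W):
    """Longitudinal neutrino momentum from the W-mass constraint, massless-lepton approximation.
    Returns the solution with the smaller |p_z|; if the solutions are complex, the real part."""
    lep_pt2 = lep_px**2 + lep_py**2
    lep_e = math.sqrt(lep_pt2 + lep_pz**2)
    met_pt2 = met_x**2 + met_y**2
    a = 0.5 * m_w**2 + lep_px * met_x + lep_py * met_y
    discriminant = a**2 - lep_pt2 * met_pt2
    if discriminant < 0:
        return a * lep_pz / lep_pt2          # real part of the complex solutions
    root = lep_e * math.sqrt(discriminant)
    solutions = ((a * lep_pz + root) / lep_pt2, (a * lep_pz - root) / lep_pt2)
    return min(solutions, key=abs)

# Hypothetical lepton momentum and missing transverse momentum components, in GeV.
print(neutrino_pz(40.0, 10.0, 25.0, 35.0, -5.0))
```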
The signal region (SR) is defined as the above nominal selection with the additional requirements of m jj > 500 GeV, |∆η jj | > 2.5, m Wγ > 100 GeV, |y Wγ − (y j1 + y j2 )/2| < 1.2 [41], and |ϕ Wγ − ϕ jj | > 2, where m Wγ , ϕ Wγ , and y Wγ are the invariant mass, azimuthal angle, and the rapidity of the Wγ system, respectively, ϕ jj is the azimuthal angle of the dijet system between the two p T -leading jets, and y j1 (2) is the rapidity of the p T -leading (subleading) jet.The requirements on |y Wγ − (y j1 + y j2 )/2| and on |ϕ Wγ − ϕ jj | are intended to ensure that the momentum of the Wγ system is balanced by that of the dijet system, which is expected in the absence of additional QCD radiation.The selection thresholds are determined by scanning the expected significance of the EW signal to give the maximum sensitivity.
A control region (CR) is defined to validate the modeling from simulation and perform a background estimation derived from data.The CR uses the nominal selection mentioned above with the additional requirements of 200 < m jj < 500 GeV.The contamination from signal events in the CR is less than 1%.
Background estimation
In Fig. 2 the p γ T distributions for the unfit data and the estimated backgrounds in the CR are presented for the barrel (left) and endcap (right).This region is used to constrain the QCD Wγjj background.The estimations of the backgrounds are described in this section.
Reconstructed photons or leptons that do not originate from the hard interaction are denoted as misidentified (misID) photons and leptons.This reducible background includes genuine photons or leptons, as well as photons or leptons of instrumental origin.Because of the variety of sources of these misID particles and the difficulty of modeling instrumental effects, their contribution is estimated using data in a signal free region.
The main backgrounds arise from W+jets and top quark processes where the jet constituents are misidentified as a photon.The method used to estimate this background involves measuring the fraction of jets misidentified as photons in data and applying a per-photon extrapolation factor from the region with loose photons to the signal region with tight photons.The factors are extracted as functions of the photon p T and η.The fraction of jets misidentified as photons is determined from a template fit to the photon σ ηη observable, which is the lateral extension of the shower, defined as the energy-weighted spread within the 5×5 crystal matrix centered on the crystal with the largest energy deposit in the supercluster.The prompt photons are more populated in the small σ ηη region, while the nonprompt photons are enriched in the large σ ηη region.The fit template for the prompt photons uses MC, while the fit template for the nonprompt photons uses data from a sideband of the photon isolation distribution in W+jets using the same method as used in Ref. [42].
The background from jets misidentified as leptons (nonprompt leptons) is estimated in a similar way.The lepton misidentification rate f ℓ is defined as the ratio of the number of misID leptons passing the tight lepton requirements to the number of leptons passing only the loose lepton requirements.To extrapolate from loose to tight requirements leptons, an extrapolation factor is defined as: f ℓ /(1 − f ℓ ).To suppress additional contamination from genuine leptons, the W+jets and Z+jets contributions are subtracted from both the numerator and denominator using MC simulation.The extrapolation factor is measured as a function of the η and p T of the lepton in a CR dominated by dijet events.This CR is defined by selecting one lepton, one jet well separated from the lepton, and p miss T < 30 GeV.More details are described in Ref. [43].
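The loose-to-tight extrapolation described above amounts to weighting each loose-not-tight lepton event by f_ℓ/(1 − f_ℓ). The following sketch illustrates the bookkeeping with hypothetical misidentification rates and event counts; in the actual analysis the rates are binned in lepton p_T and η and the genuine-lepton contamination is subtracted.

```python
def extrapolation_weight(misid_rate: float) -> float:
    """Weight for a loose-not-tight lepton event: f / (1 - f)."""
    return misid_rate / (1.0 - misid_rate)

def nonprompt_lepton_estimate(loose_not_tight_counts, misid_rates):
    """Predicted nonprompt-lepton yield in the tight selection, summed over (pT, eta) bins."""
    return sum(n * extrapolation_weight(f)
               for n, f in zip(loose_not_tight_counts, misid_rates))

# Hypothetical bins: event counts in the loose-not-tight sample and their misID rates.
print(nonprompt_lepton_estimate([120, 45, 10], [0.20, 0.15, 0.10]))  # ~39 events
```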
The double-misID background is defined as events containing both a misID photon and a misID lepton.Its yield is estimated using an event sample where both the photon and lepton are required to pass the loose lepton requirements and fail the tight lepton requirements.A weight is assigned to such events, equal to the product of the misID extrapolation factors of the photon and lepton.Double-misID events contaminate the single-misID background estimate since the second object is assumed to be genuine.Whenever a weight is added to the double-misID estimate, the same weight is subtracted from both the single-photon and -lepton estimates.In addition, events in which genuine photons and leptons pass the loose lepton requirements but fail the tight lepton requirements contaminate both the single-and double-misID estimates.This source of contamination is estimated and removed using simulation with reconstructed objects matched to generator-level objects.
Other background contributions that feature genuine photons and leptons in the final state, such as top quark, diboson and Zγ, are estimated from MC simulation and normalized to the integrated luminosity of the data set using their corresponding cross sections.
Systematic uncertainties
Systematic uncertainties that affect the measurements arising from experimental inputs, such as detector effects and methods, and theoretical inputs such as the choice of the renormalization (µ R ) and factorization (µ F ) scales and the choice of PDF sets, are included.Each source of systematic uncertainty is quantified by evaluating its effect on the yield and on the distributions of relevant kinematic variables in the signal and background categories.The uncertainties are calculated bin-by-bin and propagated to the final distributions.
The uncertainties in the jet energy scale (JES) and jet energy resolution (JER) are estimated by shifting or spreading the jet energies in the simulations up and down by one standard deviation; they are then propagated to all relevant variables, including the VBS jet kinematic observables and p_T^miss, and the impacts on the signal and background yields are evaluated. The uncertainties arising from the JES and JER, evaluated for the various processes and the various m_jj-m_ℓγ bins (the 2D distribution of m_jj vs. m_ℓγ), are in the ranges 0.1-34% and 1.8-33%, respectively. The uncertainties in the lepton trigger, reconstruction, and selection efficiencies, measured using a tag-and-probe technique, are 1.8-4.6% [30,33]. The uncertainties in the photon reconstruction and selection efficiencies are 1.9-4.3% [44]. The integrated luminosities have uncertainties in the 1.2-2.5% range [45][46][47], with an overall uncertainty for the 2016-2018 dataset of 1.6%.
The statistical uncertainties arising from the limited size of both the simulated and data samples used in our background and signal predictions are estimated assuming a Poisson distribution.The uncertainties related to the limited number of simulated events or to the limited number of events in the data control samples are 1.2-11% for the EW Wγjj signal, 2.1-48% for the QCDinduced Wγjj background, 4.9-77% for the nonprompt-lepton background, and 2.1-45% for the nonprompt-photon background.Some of these statistical uncertainties increase with increasing m jj and m ℓγ .The largest values typically come from bins where the specific process is less important, and do not significantly impact the signal sensitivity.All the statistical uncertainties are uncorrelated across various processes and bins of any single distribution.
An overall systematic uncertainty in the nonprompt-photon background estimate is defined as the quadratic sum of the systematic uncertainties from three distinct sources.The uncertainty arising from the choice of the isolation variable sideband is evaluated by estimating the nonprompt-photon fraction with alternative choices of the sideband [48].The statistical uncertainty in extracting the fake photon fraction is obtained from the template fits.The nonclosure uncertainty is defined by performing the nonprompt-photon fraction fits using simulated events and comparing the results with the predicted fractions from MC simulation.The nonclosure uncertainty in the endcap region is larger than in the barrel region and increases with the photon p T .The overall systematic uncertainty in the nonprompt-photon background ranges from 7.8% to 12%, dominated by the nonclosure contribution.
Similarly, the uncertainty in the nonprompt-lepton estimate comes from the nonclosure that is obtained using MC samples.The same misidentified lepton method used in the analysis is applied to MC γ+jets events, and the result is compared with the true number of γ+jets events falling into the SR.The difference of the two quantifies the nonclosure.The selection used is the same as in the nominal event selection, except that the m W T and p miss T requirements are removed to increase the size of the selected sample.The uncertainty associated with the nonprompt-lepton background is 30%.
The effects of the choice of µ R and µ F in the theoretical calculation for signal and background cross sections are estimated by independently changing µ R and µ F up and down by a factor of 2 from their nominal values in each event, satisfying 1/2 < µ R /µ F < 2. The uncertainties are defined as the maximal differences from the nominal values.The PDF uncertainties are evaluated according to the procedure described in Ref. [49] using the NNPDF set.For the signal, the scale uncertainty varies within 0.7-5.4% and the PDF uncertainty varies within 0.06-0.10% in the acceptance.The scale uncertainty in the QCD-induced Wγjj process corresponds to a 0.08-12% uncertainty in the acceptance.It is constrained by the simultaneous fit to the data in the CR.The PDF uncertainty in the acceptance of the QCD-induced Wγ production is 0.05-1.40%.
A correction factor is applied to the simulated events to account for the first level trigger timing drift in 2016 and 2017 data [11].This mistiming results in a loss of trigger efficiency in the data and is not modeled by the simulation.Uncertainties arising from these correction factors vary within 0.9-3.4%, and are treated as correlated across various processes and bins of the 2016 and 2017 data analysis.
Observation of EW Wγ production
The measurement of the total EW Wγ production rate is performed using a binned likelihood fit to the data of the two-dimensional (2D) distribution in m jj (four bins) and m ℓγ (three bins).Both m jj and m ℓγ are highly discriminating variables between the EW signal and the QCDinduced Wγjj background.Furthermore, the 2D analysis provides a larger expected significance than using either variable alone.Data in the SR and CR are both included in the fits to constrain the dominant background (QCD-induced Wγjj).Table 1 shows the signal and background yields after the fit, as well as the observed data yields.Figure 3 shows the observed and expected distributions of m jj -m ℓγ used in the total EW Wγjj cross section measurement.The expectation is given after the fit to data.
The signal significance is quantified using a profile likelihood test statistic [50].This test statistic involves the ratio of two Poisson likelihood functions, one in which the signal strength is fixed to zero and one in which the signal strength is allowed to have any positive value.The signal strength represents the ratio of observed to expected signal yields.Systematic uncertainties are included as nuisance parameters in the likelihood function that scale the relevant processes using log-normal probability density functions.The distribution of the test statistic is assumed to be in the asymptotic regime where there is a simple relationship between its value and the significance of the result [51].The observed (expected) significance is 6.0 (6.8) SD for the EW Wγ processes.
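The asymptotic relation between the profile likelihood test statistic and the significance can be illustrated with a single-bin Poisson counting toy. The sketch below ignores nuisance parameters and the multi-bin structure of the real m_jj-m_ℓγ fit, so the yields used are purely hypothetical.

```python
import math

def discovery_significance(n_observed: float, b_expected: float) -> float:
    """Asymptotic significance Z = sqrt(q0) for a single-bin counting experiment,
    where q0 is the profile-likelihood ratio test statistic for the background-only hypothesis."""
    if n_observed <= b_expected:
        return 0.0
    q0 = 2.0 * (n_observed * math.log(n_observed / b_expected) - (n_observed - b_expected))
    return math.sqrt(q0)

# Hypothetical yields: 260 events observed on an expected background of 200.
print(discovery_significance(260, 200))  # ~4 standard deviations
```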
Fiducial cross section measurement
The fiducial cross section measurement for the EW Wγ production at 13 TeV is extracted with the same 2D m jj -m ℓγ binning used for the signal significance. The fiducial region is defined based on the particle-level (for leptons, photons, jets) quantities: one lepton with p ℓ T > 35 GeV and |η ℓ | < 2.4, a photon with p γ T > 25 GeV, p miss T > 30 GeV, at least two jets with p T > 50 GeV and |η j | < 4.7, m jj > 500 GeV, ∆R jj > 0.5, ∆R jℓ > 0.5, ∆R jγ > 0.5, and |∆η jj | > 2.5. The leptons are reconstructed at the particle level with fully recovered final-state radiation. The acceptance is defined as the fraction of the signal events passing the fiducial region selection, and is estimated using MG5. The theoretical uncertainty in the extrapolation between the fiducial region and the SR is negligible (< 1%). We define the cross section as σ fid = σ g · μ · α gf , where σ g = 0.776 pb is the cross section for the signal events calculated with MG5 at LO in QCD [12], μ = 0.88 +0.19/−0.18 is the observed signal strength parameter, and α gf = 0.034 is the acceptance of the fiducial region. The measured fiducial cross section is
σ fid EW = 23.5 ± 2.8 (stat) +1.9/−1.7 (theo) +3.5/−3.4 (syst) fb = 23.5 +4.9/−4.7 fb. (1)
The observed signal strength is compatible with unity within one SD, and the measured fiducial cross section agrees well with the SM prediction.
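A quick numerical check of this definition, using the central values quoted above (the small difference from the quoted 23.5 fb reflects rounding of the inputs):

```python
# Numerical check of sigma_fid = sigma_g * mu * alpha_gf with the quoted central values.
sigma_g  = 0.776e3   # LO signal cross section from MG5, converted from pb to fb
mu_hat   = 0.88      # observed signal strength
alpha_gf = 0.034     # fiducial acceptance

sigma_fid = sigma_g * mu_hat * alpha_gf
print(f"sigma_fid = {sigma_fid:.1f} fb")   # ~23.2 fb, consistent with 23.5 fb
```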
The cross section for the sum of the EW and QCD-induced Wγjj contributions is also measured. The fiducial region definition is identical to that used for the EW Wγjj fiducial cross section measurement, and the cross section is defined analogously to Eq. (1). The inputs used for the fit are similar to the ones for EW Wγjj production, with the difference that the EW and QCD-induced Wγjj contributions are combined as signal. The cross section for QCD-induced Wγjj production is 192.3 pb calculated with MG5 at NLO in QCD [12], and α QCD gf is calculated to be 4.6 × 10 −4 . The measured signal strength for the EW+QCD Wγjj production is 0.98 +0.12/−0.11 . The observed signal strength is compatible with unity within one SD, and the measured fiducial cross section agrees well with the SM prediction.
Differential cross section measurements
The differential cross sections for the EW-only and for the EW+QCD Wγjj production processes are measured for several characteristic variables using the same SR as defined in the fiducial cross section measurement. For each unfolded variable, its generator-level values are mapped to the reconstruction-level ones in binned histograms that account for the detector resolution effects. The efficiencies for selecting events from the generator level to the reconstruction level are calculated using the same binning as used in the fiducial region measurements, in order to correct for the limited acceptance and selection efficiencies. Signal events outside the fiducial region are treated as background. Both the resolution and efficiency effects are evaluated using signal simulation. A bin-by-bin unfolding is performed to obtain differential distributions, in which the effects of detector resolution, limited acceptance, and selection efficiencies are corrected.
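A minimal sketch of the bin-by-bin procedure, with placeholder yields: each reconstructed bin is background-subtracted and multiplied by a simulation-derived correction factor before normalizing by luminosity and bin width.

```python
import numpy as np

# Placeholder yields per bin of some unfolded variable (not the analysis values).
data       = np.array([400., 180., 60., 15.])     # observed events
background = np.array([250., 100., 30.,  6.])     # estimated background
n_gen      = np.array([5000., 2100., 700., 160.]) # signal MC, generator level (fiducial)
n_reco     = np.array([3200., 1450., 510., 120.]) # signal MC, reconstruction level

lumi      = 138.0e3                               # integrated luminosity in pb^-1
bin_width = np.array([50., 50., 100., 250.])      # GeV

# Bin-by-bin correction factor and differential cross section (pb / GeV).
correction = n_gen / n_reco
dsigma_dx = (data - background) * correction / (lumi * bin_width)
print(dsigma_dx)
```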
The unfolded variables include the transverse momenta of the lepton, p ℓ T , and of the photon, p γ T ; the invariant mass of the lepton and the photon, m ℓγ ; the transverse momentum of the leading jet (p T ordered), p j1 T ; the invariant mass of the two jets, m jj ; and the separation in pseudorapidity of the two jets, ∆η jj . Since the ranges of some variables extend to infinity, the last bins accommodate all the events above the last bin boundaries, but the bin widths that are used in the denominator are finite and are (110, 400), (170, 200), (160, 1000), (250, 500), and (1500, 2000) GeV for p ℓ T , p γ T , m ℓγ , p j1 T , and m jj , respectively. The unfolded differential distributions are shown in Fig. 4 for the EW production and in Fig. 5 for the EW+QCD production. Comparisons are shown with the theoretical predictions from MG5. The predictions are in agreement with the unfolded data in general.
Limits on anomalous quartic gauge couplings
The effects of BSM physics can be parameterized in a generic way through a set of linearly independent higher-dimensional operators in an effective field theory [8]. As mentioned above, VBS is particularly suitable to constrain aQGCs. The lowest-dimension operators that modify quartic gauge couplings but do not exhibit two or three weak gauge boson vertices are of dimension eight. Reference [52] proposes nine independent charge-conjugation- and parity-conserving dimension-eight effective operators by assuming the SU(2)×U(1) symmetry of the EW gauge field. The model includes a Higgs-field doublet to incorporate the presence of the SM Higgs boson. The operators affecting the Wγjj channel can be divided into two categories. The operators L M,0 -L M,7 contain an SU(2) field strength, the U(1) field strength, and the covariant derivative of the Higgs doublet field. The operators L T,0 -L T,2 and L T,5 -L T,7 contain only the two field strengths. The coefficient of the operator L X,Y is denoted by f X,Y /Λ 4 , where Λ is the unknown scale of BSM physics.
A simulation is performed that includes the effects of aQGCs in addition to the SM EW Wγjj production, as well as the interference between the two contributions. Since a contribution from aQGCs would enhance the production of events with large Wγ mass, we use this observable to extract limits on the aQGC parameters. To obtain a continuous prediction for the signal as a function of each anomalous coupling, a quadratic fit is performed to the SM+aQGC yield as a function of the aQGC coefficient value, separately in each m Wγ bin. In addition to the selection described in Section 5, further requirements are applied to exploit the fact that the aQGC contributions arise from pure VBS diagrams, and are thus enhanced in the VBS phase space region, and that the anomalous operators lead to more energetic final-state particles. These requirements are optimized to enhance the aQGC sensitivity, based on simulation studies, and are: m jj > 800 GeV, |∆η jj | > 2.5, m Wγ > 150 GeV, and p γ T > 100 GeV. As an example, Fig. 6 (left) shows the resulting m Wγ distribution in the muon channel.
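A sketch of the quadratic parameterization in a single m Wγ bin, using a polynomial fit on placeholder yields; the fitted function can then be evaluated continuously while scanning the coupling in the limit-setting fit.

```python
import numpy as np

# Placeholder yields in one m_Wgamma bin for a few simulated coupling values f (TeV^-4).
f_points = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
yields   = np.array([30.5, 18.2, 12.0, 17.8, 29.9])

# N(f) = n_sm + a*f + b*f^2; the linear term encodes the interference with the SM.
b, a, n_sm = np.polyfit(f_points, yields, deg=2)
print(f"N(f) = {n_sm:.2f} + {a:.2f} f + {b:.2f} f^2")
print("predicted yield at f = 1.5:", n_sm + a * 1.5 + b * 1.5 ** 2)
```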
We set two-sided limits on the operator coefficients through a limit-setting procedure that involves first obtaining the global maximum of the profile likelihood function, and then the maximum of the profile likelihood function at fixed coefficient values, which are compared with the global maximum and converted to confidence level (CL) intervals. Figure 6 (right) shows the likelihood scan for the f M,2 /Λ 4 parameter in the calculation of the observed limits.
The observed and expected 95% CL limits on the aQGC coefficients are summarized in Table 2. These are the most stringent limits to date on the aQGC parameters f M,2-5 /Λ 4 and f T,6-7 /Λ 4 .
They are obtained by varying the coefficient of one operator at a time, with all others set to zero, i.e., the SM value. The yield of the EW signal in any bin is a quadratic function of the coefficient, whose minimum in general does not occur at a coefficient value of zero because of the interference with the SM operators. The constraints set on the aQGCs are compatible with the SM predictions of zero. The NLO EW corrections to VBS Wγ can be sizable and increase as a function of m jj , which may bias the aQGC measurement. Although there is no NLO EW calculation available yet for VBS Wγ production, we have checked, using the numbers from same-sign WW scattering [53,54], that the effect on the aQGC limits is negligible. The unitarity bound (U bound ) is defined as the scattering energy at which the aQGC coupling strength, when set equal to the observed limit, would result in a scattering amplitude that violates unitarity. The value of U bound is determined using the analytical formulas from Ref. [55].
Summary
Measurements of the electroweak (EW) production of a W boson, a photon, and two jets in proton-proton collisions at a center-of-mass energy of 13 TeV have been presented. The data correspond to an integrated luminosity of 138 fb −1 collected with the CMS detector in Run 2. Events are selected by requiring one isolated lepton (electron or muon) with high transverse momentum.

Table 2: Exclusion limits at the 95% CL for each aQGC coefficient, derived from the m Wγ distribution, assuming all other coefficients are set to zero. Unitarity bounds corresponding to each operator are also listed. All coupling parameter limits are in TeV −4 , while U bound values are in TeV.
Signal event candidates are collected with single-lepton triggers and are selected by requiring exactly one electron (muon) with p T > 35 GeV and m W T > 30 GeV, where m W T is the transverse mass of the W boson defined as √(2 p ℓ T p miss T [1 − cos (∆ϕ ℓ,p miss T )]), p ℓ T is the lepton p T , and ∆ϕ ℓ,p miss T is the azimuthal angle between the p ℓ T and the ⃗ p miss T directions. Events are required to contain a well-identified and isolated photon with p γ T > 25 GeV, p miss T > 30 GeV, and at least two jets, each with |η| < 4.7 and p T > 50 GeV.
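A small helper implementing this transverse-mass definition (the input values are illustrative only):

```python
import math

def w_transverse_mass(pt_lep, pt_miss, dphi):
    """m_T^W = sqrt(2 * pT(lepton) * pT(miss) * (1 - cos(delta phi)))."""
    return math.sqrt(2.0 * pt_lep * pt_miss * (1.0 - math.cos(dphi)))

# Example: a 40 GeV lepton and 45 GeV missing pT separated by 2.5 rad.
print(w_transverse_mass(40.0, 45.0, 2.5))   # ~80 GeV, well above the 30 GeV requirement
```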
Figure 2: The p T distributions for photons in the barrel (left) and in the endcaps (right) in the control region for data and from background estimations before the fit to the data. The misID backgrounds are derived from data, whereas the remaining backgrounds are estimated from simulation. All events with a photon p T > 200 GeV are included in the last bin. The hatched bands represent the combined statistical and systematic uncertainties on the predicted yields. The vertical bars on the data points represent the statistical uncertainties of data. The bottom panels show the ratios of the data to the predicted yields.
Figure 3: The 2D distributions used in the fit for the total EW Wγ cross section measurement. The hatched bands represent the combined statistical and systematic uncertainties in the predicted yields. The vertical bars on the data points represent the statistical uncertainties of data. The expectation is shown after the fit to the data. EW Wγ inside (outside) fiducial region stands for the events of EW Wγ falling inside (outside) the fiducial region defined in Section 9.
Figure 4: Differential cross sections for the EW Wγjj production as functions of p ℓ T , p γ T , p j1 T , m ℓγ , m jj , and ∆η jj . The highest bins in each plot have no upper bound and are normalized by the bin boundaries of (110, 400), (170, 200), (160, 1000), (250, 500), and (1500, 2000) GeV for p ℓ T , p γ T , m ℓγ , p j1 T , and m jj , respectively. The blue bands stand for the systematic uncertainties and the black bands represent the total uncertainties.
Figure 6: The m Wγ distribution for muon events satisfying the aQGC region selection and used to set constraints on the anomalous gauge coupling parameters (left). Electron events, not shown here, are also used. The gray line represents a nonzero f M,2 /Λ 4 setting. Events with m Wγ > 1500 GeV are included in the last bin. The hatched bands represent the combined statistical and systematic uncertainties on the predicted yields. The vertical bars on the data points represent the statistical uncertainties of data. Likelihood scan and the observed 95% CL interval for the aQGC parameter f M,2 /Λ 4 (right).
Table 1: Number of Wγ events from the fit to the data in the signal region. The signal predictions inside and outside the fiducial region defined in Section 9 are shown. The contributions of various backgrounds are also shown. Statistical and systematic uncertainties are added in quadrature.
Selective agonist of TRPML2 reveals direct role in chemokine release from innate immune cells
Cytokines and chemokines are produced and secreted by a broad range of immune cells including macrophages. Remarkably, little is known about how these inflammatory mediators are released from the various immune cells. Here, the endolysosomal cation channel TRPML2 is shown to play a direct role in chemokine trafficking and secretion from murine macrophages. To demonstrate acute and direct involvement of TRPML2 in these processes, the first isoform-selective TRPML2 channel agonist was generated, ML2-SA1. ML2-SA1 was not only found to directly stimulate release of the chemokine CCL2 from macrophages but also to stimulate macrophage migration, thus mimicking CCL2 function. Endogenous TRPML2 is expressed in early/recycling endosomes as demonstrated by endolysosomal patch-clamp experimentation and ML2-SA1 promotes trafficking through early/recycling endosomes, suggesting CCL2 being transported and secreted via this pathway. These data provide a direct link between TRPML2 activation, CCL2 release and stimulation of macrophage migration in the innate immune response.
Introduction
Cytokines/chemokines are released from a wide range of immune cells such as macrophages, B-and T-lymphocytes, neutrophils, mast cells and dendritic cells. They are essential for intercellular communication in both innate and adaptive immunity. Remarkably, our knowledge of the function of cytokines/chemokines in immunity is much more advanced than our knowledge about how they are packaged and secreted from immune cells. Understanding how innate immune cells release cytokines/chemokines is important, as these factors are indispensable for communication between immune but also with non-immune cells to coordinate inflammatory responses (Lacy and Stow, 2011). Importantly, secretion pathways vary between different cell types. Macrophages for example lack typical secretory granules (Lacy and Stow, 2011). Thus, macrophage cytokine/chemokine release is mediated either by direct transport to the cell surface from the trans-Golgi network (TGN) (e.g. IL-10), by transport via recycling endosomes (RE) to the cell surface (e.g. TNF-a, IL-6, IL-10) (Manderson et al., 2007;Murray and Stow, 2014), or via late endosomes/lysosomes (LE/LY), for example IL-1b (Andrei et al., 1999;Lopez-Castejon and Brough, 2011).
We show here that the endolysosomal calcium-permeable cation channel TRPML2 plays a direct role in chemokine secretion, thereby modulating the inflammatory response. Expression of TRPML2 in different immune cells and tissues has been demonstrated by several groups (Cuajungco et al., 2016;Valadez and Cuajungco, 2015;García-Añoveros and Wiwatpanit, 2014;Sun et al., 2015). On the subcellular level, TRPML2 has been shown to be expressed primarily on RE and LE/LY by immunocytochemistry experiments (Sun et al., 2015;Venkatachalam et al., 2006;Karacsonyi et al., 2007). However, functional expression of TRPML2 in different intracellular vesicles and organelles has not been confirmed yet by direct and selective patch-clamp analysis, that is patch-clamping of RE, EE (early endosomes), LE/LY, or other endolysosomal vesicles. Furthermore, it remains unclear whether direct and selective stimulation of TRPML2 leads to an increase in cytokine/ chemokine release from macrophages, and which intracellular trafficking pathways mediate the release of these cytokines/chemokines.
We demonstrate that ML2-SA1 activates TRPML2 in EE and LE/LY as well as in Rab11+ and Tf+/TfR+ (transferrin/transferrin receptor) vesicles. In macrophages, LPS (lipopolysaccharide) exposure leads to a strong upregulation of TRPML2 expression, while TRPML1 and TRPML3 expression levels remain unaffected by LPS (Sun et al., 2015). Importantly, activation by ML2-SA1 was not observed in macrophages without LPS treatment, which express TRPML2 only at very low levels, further confirming specificity of the compound. We also show that direct activation of TRPML2 by ML2-SA1 results in an increased release of the chemokine CCL2 from LPS-stimulated WT macrophages, while TRPML2−/− macrophages show no release increase, suggesting that TRPML2 channel activity is directly linked to CCL2 trafficking and secretion. We further provide evidence that CCL2 is released via the early/recycling endosomal pathway but not via LE/LY. Finally, we show that stimulation with ML2-SA1 promotes macrophage migration, one of the major physiological functions of the chemoattractant CCL2, one synonym of which is monocyte chemoattractant protein 1 (MCP-1).
Development of a potent isoform-selective TRPML2 channel agonist
With the aim to further improve the characteristics of existing TRPML channel agonists, we generated more than 80 novel derivatives of recently reported lead activators of TRPML channels which had been originally identified by random screening of the MLSMR small molecule library (Scripps Research Institute Molecular Screening Center). Here, novel derivatives of the lead compounds SN-2 and ML-SA1, an SF-51 analogue (Shen et al., 2012; Grimm, 2016; Chen et al., 2014), were evaluated for their efficacy, potency, and selectivity profiles, respectively.
We first synthesized and tested >50 chemically modified versions of the TRPML3 activator SN-2 ( Figure 1; Figure 1-figure supplement 1; Supplementary file 1). These modifications comprise systematic variations of the substitution pattern of the aryl ring, variations of the aliphatic norbornane ring system, aromatisation of the isoxazoline to an isoxazole fragment, introduction of polar substituents, as well as replacement of the isoxazol(in)e ring by other heterocycles. Crucial steps in these syntheses were Huisgen-type 1,3-dipolar cycloaddition reactions of norbornene (for the closer analogues) and other alkenes with nitrile oxides (Jawalekar et al., 2011;Huisgen, 1963) and related 1,3-dipoles. Related aromatic isoxazole analogues were prepared via cycloaddition of nitrile oxides with ketone enolates (Vitale and Scilimati, 2013) or enamines (Fos et al., 1992). General synthesis strategies for these modifications are shown in Figure 1A.
Following synthesis, we initially tested the compounds in HEK293 cells transiently transfected with human TRPML1, TRPML2, or TRPML3 (C-terminally fused to YFP) by using the fura-2 calcium imaging technique. When expressed in HEK293 cells, TRPML2 and TRPML3, but not TRPML1, substantially localize at the plasma membrane besides endolysosomes as described previously, enabling standard fura-2 calcium imaging experimentation. To evaluate effects on TRPML1, a plasma membrane variant with mutated lysosomal targeting sequences in the N- and C-termini (TRPML1(NC)) was used as reported previously.
The majority of the SN-2 and SF-51/ML-SA1 derivatives were either inactive, non-selective like ML-SA1, or selective for TRPML3 like SN-2 (Figure 1B; Figure 1-figure supplement 3B). A subset of molecules however displayed a strong preference for TRPML2: ML2-SA1 (=EVP-22), a derivative of SN-2, as well as derivatives of SF-51/ML-SA1. The latter three SF-51/ML-SA1 derivatives however showed lower efficacy compared to ML2-SA1 (Figure 1B).

TRPML2 activity is detectable in EE, LE/LY as well as Rab11+ and TfR+ organelles

In endolysosomal patch-clamp experiments using transiently transfected HEK293 cells, we investigated TRPML2 channel activity in wortmannin/latrunculin B (Wort./Lat.B)-enlarged EE (Chen et al., 2017a), in YM201636-enlarged LE/LY (Chen et al., 2017a), as well as in vacuolin-enlarged Rab11+ and TfR+ organelles. ML2-SA1 evoked TRPML2 activation while no or very little activation was detectable for TRPML1 and TRPML3. In contrast, the latter ones were robustly activated by ML-SA1 as a positive control (Figure 2C-E). The time course for activation of TRPML2 in LE/LY patch-clamp experiments and the relative Ca 2+ permeability are shown in Figure 2G and Figure 2-figure supplement 1B. In addition to LE/LY, TRPML2 channel activity was also detectable in EE after stimulation with ML2-SA1 (Figure 2H and K). In order to patch-clamp discrete populations of vesicles involved in early/recycling endosomal trafficking, cells were transfected with fluorophore-tagged Rab11 or TfR, and enlarged with vacuolin. ML2-SA1 elicited significant currents in Rab11+ and in TfR+ vesicles (Figure 2I-K).
Molecular modeling of ML2-SA1 binding
Several recent papers have provided in-depth information on the structures of TRPML1 and TRPML3 channels (Schmiege et al., 2017; Chen et al., 2017b; Hirschi et al., 2017). Schmiege et al. (2017) found that a hydrophobic cavity created by I468 and F465 of PH1 (pore helix 1), F428, C429, V432 and Y436 of S5, F505 and F513 of S6, and Y499 and Y507 of S6 in the neighboring subunit, tightly accommodates ML-SA1 (Figure 3A). In a molecular modeling approach using these recently published structures of TRPML1 and TRPML3 as a basis, we simulated the binding of ML-SA1 as well as ML2-SA1 to hTRPML1 and hTRPML2 (Figure 3; Figure 3-figure supplement 1). Complete 3D models of the open conformation of hTRPML1 and hTRPML2 were constructed and used for ligand docking analysis. Amino acids differing between hTRPML1 and hTRPML2 are colored green (Figure 3B-D). Based on this model, ML2-SA1 (both enantiomers are described, one in Figure 3-figure supplement 1) is predicted to bind to the same binding pocket as ML-SA1 as observed in the cryo-EM structure of hTRPML1 (Figure 3A-B). Six amino acids (A422, A424, G425, A453, V460, and I498) in this pocket are unique to hTRPML2 (highlighted in green; Figure 3C-D). The orientation of ML2-SA1 in the binding pocket of hTRPML2 with the highest docking score is shown in Figure 3C. The dichlorophenyl ring shows favorable π-stacking interaction with F502 whereas the polar isoxazole ring is located near the side chain OH-groups of Y428 and Y496. The hydrophobic norbornane ring is interacting with G425 and Y428. Other possible orientations of ML2-SA1 binding to hTRPML2 are shown in Figure 3-figure supplement 1C-D. We subsequently replaced each of the six amino acids that are unique to the predicted hTRPML2 binding pocket with the respective amino acids of hTRPML1. We analysed these mutant isoforms first in calcium imaging experiments where we found the strongest reduction of the ML2-SA1 effect in G425A (Figure 3E). In the next step, we performed endolysosomal patch-clamp experiments with this mutant. Mutation of G425 to alanine was found to selectively abrogate the effect of ML2-SA1, while ML-SA1 was still able to activate G425A to a degree not significantly different from WT (Figure 3E-F). G425 is close to the norbornane ring of ML2-SA1 (minimum distance 3.6 Å) docked to hTRPML2 and substitution to alanine is unfavorable for this binding mode (Figure 3C). The experimental data corroborate binding of ML2-SA1 to the ML-SA1 binding pocket and confirm a critical role of G425 in mediating ML2-SA1 selectivity.
Effect of ML2-SA1 on endogenous TRPML2 channel activity in organelles isolated from LPS-stimulated macrophages

In macrophages, significant TRPML2 channel expression is found only after stimulation with LPS, as demonstrated previously by qRT-PCR and western blot analysis (Sun et al., 2015). We confirmed this finding by qRT-PCR and endolysosomal patch-clamping, revealing that only after several hours of LPS treatment were robust endogenous TRPML2 channel expression and activity detectable. In LPS-stimulated bone marrow-derived macrophages (BMDMF), ML2-SA1-induced currents were detectable in Tf-Alexa555 loaded, vacuolin-enlarged vesicles, while no significant TRPML2 channel activity could be detected in Tf+ vesicles from non-LPS stimulated BMDMF (Figure 4A-B). Currents measured in BMDMF LE/LY with ML2-SA1 after LPS stimulation were smaller than currents measured in Tf+ loaded vesicles (Figure 4C-D). In contrast, in LE/LY isolated from alveolar macrophages (AMF), TRPML2 currents elicited with ML2-SA1 were larger on average than in BMDMF (Figure 4E-F). These data confirm that ML2-SA1 elicits robust TRPML2 currents in endogenously expressing cells.
Effect of selective TRPML2 activation on CCL2 secretion
To evaluate effects of the novel TRPML2 channel agonist on chemokine secretion from macrophages, we performed experiments based on the results recently provided by Sun et al. (2015) (Figure 5A). We found that incubation with ML2-SA1 significantly increased secretion of the chemokine CCL2 from BMDMF, both after 4 hr and 8 hr of LPS treatment (Figure 5A). Importantly, ML2-SA1 did not induce CCL2 secretion in unstimulated BMDMF. Furthermore, CCL2 secretion was severely reduced in TRPML2−/− BMDMF and ML2-SA1 showed no further increase of CCL2 secretion in the TRPML2−/− BMDMF, corroborating the specificity of the agonist (Figure 5A). To characterise the pathway of ML2-SA1-induced CCL2 secretion from macrophages, we performed lysosomal exocytosis and Tf trafficking experiments to distinguish between LE/LY and EE/RE as possible secretion routes. Lysosomal exocytosis experiments revealed no significant effect of ML2-SA1 on lysosomal enzyme (beta-hexosaminidase) release (Figure 5B). In accordance with this, ML2-SA1 application did not result in translocation of LAMP1 to the plasma membrane (Figure 5C), arguing against LE/LY being involved in CCL2 secretion in BMDMF. These findings are supported by the LE/LY environment being less favorable for TRPML2 activity as outlined above. More favorable conditions are found in EE/RE compartments (less acidic to neutral pH). In line with this, ML2-SA1 application resulted in a significant enhancement of Tf trafficking and recycling through EE/RE (Figure 5D-E). Taken together, these data argue for a TRPML2-dependent trafficking route of CCL2 from Golgi to EE/RE (Figure 5F).
ML2-SA1 promotes macrophage migration
To assess effects of ML2-SA1 on cell migration, we performed migration assays in a modified Boyden chamber setup (Figure 6-figure supplement 1). BMDMF in the presence or absence of LPS were seeded in the lower compartment of the chamber and exposed to different concentrations of ML2-SA1. LPS-stimulated, ML2-SA1 pre-treated BMDMF were able to significantly increase migration of untreated BMDMF through the transwell chamber, while LPS-stimulated BMDMF without ML2-SA1 pre-treatment (only DMSO) were not able to alter migration properties of untreated BMDMF (Figure 6A). This is in accordance with the enhanced release of CCL2 by ML2-SA1. Overall, these data suggest that ML2-SA1 is able to induce CCL2 secretion selectively in TRPML2-expressing macrophages, thus serving as a chemoattractant to recruit more macrophages.

Figure 2 (legend continued): (F) Dose-response curves obtained from fura-2 calcium imaging experiments with hTRPML1(NC), hTRPML2, and hTRPML3 expressed in HEK293 cells and elicited with ML2-SA1 at varying concentrations. The calculated EC 50 value for hTRPML2 is 1.24 ± 0.12 µM (mean ± SEM). (G) Time course of TRPML2 activation by ML2-SA1 taken from experiments as shown in B. Black and red arrows indicate time points for basal and ML2-SA1 induced TRPML2 activity that were used for the IV relationship. (H-J) Representative basal and ML2-SA1 (10 µM) elicited currents from Wort./Lat.B-enlarged EE, from vacuolin-enlarged Rab11+, or from TfR+ vesicles isolated from hTRPML2 expressing HEK293 cells. (K) Statistical summary of data as shown in G-I. * indicates p<0.05, ** indicates p<0.01, *** indicates p<0.001; Figure 2E, one-way ANOVA test followed by Tukey's post-hoc test; Figure 2J, paired t-test.
Discussion
We describe here a novel, isoform-selective activator of the TRPML2 channel and describe how TRPML2 activation enhances endosomal trafficking to induce inflammatory mediator release in LPS-stimulated macrophages. Until now, selective activators for TRPML2 had not been available. In an effort to identify such selective activators we synthesized >80 chemical compounds by systematic variation of the known lead structures SN-2 and SF-51/ML-SA1 (Shen et al., 2012), generating a library of analogues of sufficient size to deduce structure-activity relationships. In the ML-SA1 series, improved TRPML2 activation was achieved by modification of the length of the acyl spacer, but the resulting selective activators were of only intermediate efficacy and potency. By contrast, the activator ML2-SA1 from the series of norbornene-derived isoxazolines (based on SN-2) is characterized by high TRPML2 subtype selectivity as well as high efficacy and potency, rendering this new small molecule a valuable compound for future studies on this ion channel (Supplementary file 2). Molecular modeling data support specific binding of ML2-SA1 to the pore region of the channel, as observed for ML-SA1. The binding orientation of ML-SA1 at hTRPML2 was found to be similar to the experimentally observed binding to hTRPML1 (Schmiege et al., 2017), which is in agreement with nonselective activation. In contrast, the binding orientation of docked ML2-SA1 at hTRPML1 differs from that found for hTRPML2, suggesting a plausible rationale for its selectivity. In an experimental approach where we investigated the functional consequences of point mutations in hTRPML2 with the endolysosomal patch-clamp technique, we found that in mutant G425A activation by ML2-SA1 is selectively lost, while activation by ML-SA1 is preserved, indicating that this amino acid is highly critical for the selective effect of ML2-SA1 on TRPML2.

Sun et al. (2015) have recently shown that the levels of TRPML2 are strongly upregulated in macrophages upon TLR4 (toll-like receptor) activation (Supplementary file 2). Thus, treatment with LPS was found to lead to TRPML2 upregulation in, for example, microglia, peritoneal macrophages, bone marrow derived macrophages, or alveolar macrophages (Sun et al., 2015). The authors further found that the translation and secretion of several chemokines such as CCL2 was reduced in TRPML2−/− mice, and concluded that TRPML2 might play a role in the regulation of trafficking and/or secretion of these chemokines. However, it remained unclear whether TRPML2 is directly involved in these processes and whether activation of TRPML2 channel activity would show increased release.

Figure 3 (legend continued): Only residues within 5 Å of the ligand in one of the four identical binding pockets are displayed. The S6 helix of monomer A of hTRPML2 is colored petrol blue, the PH1 and S5 helices of monomer B are colored salmon. Amino acid residues that are different in hTRPML1 and hTRPML2 are colored green. (C) Binding mode of one ML2-SA1 enantiomer (cyan colored carbon atoms; 3aS, 4S, 7R, 7aS) at hTRPML2 as predicted by ligand docking; only residues within 5 Å of ML2-SA1 in one of the four identical binding pockets are displayed (same coloring and representation style as in Figure 3B). Binding of the other ML2-SA1 enantiomer (3aR, 4R, 7S, 7aR) resulted in a similar binding mode that is shown in Figure 3-figure supplement 1B. (D) Binding mode of one ML2-SA1 enantiomer (cyan colored carbon atoms; 3aS, 4S, 7R, 7aS) at hTRPML1 as predicted by ligand docking; only residues within 5 Å of ML2-SA1 in one of the four identical binding pockets are displayed (same coloring and representation style as in Figure 3A). (E) Fura-2 calcium imaging results showing the effect of ML2-SA1 (10 µM) on hTRPML2-YFP WT and mutant transfected HEK293 cells. Mean values normalized to basal (120 s after compound application) ± SEM of at least three independent experiments, each. * indicates p<0.05, one-way ANOVA followed by Dunnett's post-hoc test. (F) Representative ML2-SA1 or ML-SA1 (10 µM) elicited currents from YM201636-enlarged LE/LY isolated from hTRPML2(G425A) expressing HEK293 cells. (G) Statistical summary of data as shown in F as fold increase compared to the respective basal currents in LE/LY. Shown are mean values ± SEM at −100 mV of n independent experiments as indicated. * indicates p<0.05, unpaired t-test.
Here, we present data strongly supporting a direct involvement of TRPML2, as direct stimulation of TRPML2 with ML2-SA1 leads to an increase in CCL2 secretion from macrophages. Using the specific TRPML2 agonist and the TRPML2−/− knockout mouse model as control, we demonstrate a positive relationship between TRPML2 activity and CCL2 secretion. Using the endolysosomal patch-clamp technique we demonstrate that TRPML2 is present in LE/LY and EE as well as in Rab11+ and TfR+/Tf+ vesicles (Supplementary file 2). However, early endosomes including RE provide more favorable activation conditions for TRPML2 than LE/LY due to their less acidic/neutral luminal pH. In accordance with this, TRPML2 currents elicited with ML2-SA1 in LE/LY isolated from endogenously expressing BMDMF were smaller than currents in Tf+ vesicles. In addition, no evidence was found that ML2-SA1 can promote lysosomal exocytosis, while ionomycin or ML-SA1 were able to increase the release of beta-hexosaminidase as previously reported (Samie et al., 2013). The subcellular distribution of LAMP1 did also not change during ML2-SA1 treatment and no translocation to the PM was observed. In contrast, ML2-SA1 application was found to significantly promote Tf trafficking through the early/recycling endosomal compartment, arguing for a role of TRPML2 in CCL2 release via the early/recycling endosomal pathway (Figure 5F).
Loss of function mutations in the TRPML2-related channel TRPML1 result in lysosomal storage and endolysosomal trafficking defects underlying the neurodegenerative disease mucolipidosis type IV (Bach, 2001;Pryor et al., 2006;Chen et al., 2014). Mechanistically, it was postulated that loss of TRPML1 impairs lysosomal exocytosis (LaPlante et al., 2006). It was also suggested that TRPML1 is required for lysosomal pH regulation (Soyombo et al., 2006) and for vesicle fusion (Venkatachalam et al., 2013) while, very recently, data have been presented, supporting that TRPML1 may regulate lysosomal fission (Chen et al., 2017a). A further interesting finding has been presented by Park et al. (2016), suggesting that, in secretory cells, a major role for TRPML1 is to guard against unintended, pathological fusion of lysosomes with other intracellular organelles, for example secretory vesicles. TRPML1 has also been attributed to mediate lysosomal trafficking via Ca 2+ -dependent motor protein recruitment, its activity favoring retrograde lysosomal movement (Vergarajauregui et al., 2009;Li et al., 2016).
Like TRPML1, TRPML3 was also suggested to regulate membrane trafficking. In particular, it was found to regulate trafficking of early endosomes and to affect endocytosis (Kim et al., 2009). Lelouvier and Puertollano (2011) further presented data showing that TRPML3 is required for proper calcium homeostasis in the endosomal pathway and that impairment of TRPML3 function leads to defective endosomal acidification and defective membrane trafficking. Surprisingly, the authors found increased endosomal fusion after depletion of TRPML3. Recently, Miao et al. (2015) showed that TRPML3 activation, upon neutralization of lysosomal pH, mediates efflux of Ca 2+ ions from lysosomes, which in turn induces lysosome exocytosis. TRPML3 is normally inactive in highly acidic lysosomes, in contrast to early/recycling endosomes with more neutral pH, but when the pH in the lumen of the lysosome is neutralized, TRPML3 becomes activated, releases Ca 2+ into the cytosol, which in turn triggers spontaneous exocytosis of the lysosome and its contents.
TRPML2 has been suggested to play a role in the regulation of the Arf6-associated pathway and, more specifically, in the trafficking of GPI-APs (Karacsonyi et al., 2007). Arf6 has been implicated in the regulation of endocytosis as well as endocytic recycling and cytoskeleton remodeling. More recently, TRPML2 has been found to increase trafficking efficiency of endocytosed viruses (Rinkenberger and Schoggins, 2018). Furthermore, we are showing here that TRPML2 is, like its relatives TRPML1 and 3, Ca 2+ permeable (Figure 2-figure supplement 1C).
Taken together, these findings imply that all three TRPML channels can impact intracellular trafficking processes, while the mechanisms by which they affect trafficking might differ. While it is likely, based on the available data, that the effect of TRPML2 knockout/activation on CCL2 trafficking and release is occurring at the level of EE/RE, it remains to be further established where along this pathway the effect takes place. Possible scenarios might be fusion of Golgi vesicles with RE, fission from RE, or subsequent transport to the plasma membrane (Figure 5F).
Functionally, we found that ML2-SA1 promotes migration of untreated macrophages towards LPS-treated macrophages. This suggests that TRPML2-dependent CCL2 release is enhancing the inflammatory response by recruiting innate immune cells to the site of inflammation. This is in accordance with the results presented by Sun et al. (2015) who found that macrophage migration is impaired in vivo in the absence of TRPML2. CCL2 is known to be a key chemokine regulating migration and infiltration of monocytes/macrophages (Deshmane et al., 2009). Since CCL2 is implicated in the pathogenesis of diseases characterized by infiltrates containing macrophages, like psoriasis, rheumatoid arthritis, multiple sclerosis, and atherosclerosis (Deshmane et al., 2009; Xia and Sui, 2009; Daly and Rollins, 2003), we postulate that TRPML2 may be an attractive novel target for the treatment of such innate immunity-related inflammatory diseases.
Endolysosomal patch-clamp and calcium imaging experiments
Whole-LE/LY and whole-EE recordings have been described previously in detail (Chen et al., 2017a; Chen et al., 2017c). In brief, for whole-LE/LY manual patch-clamp recordings, cells were treated with YM201636 (HEK293 cells: 800 nM o/n; macrophages: 800 nM 1 hr). For whole-EE manual patch-clamp recordings, cells were treated with a combination of 200 nM wortmannin and 10 nM latrunculin B (HEK293 cells: 10-15 min). Cells were treated with compounds at 37˚C and 5% CO 2 . YM201636 was obtained from Chemdea (CD0181), wortmannin and latrunculin B from Sigma (W1628 and L5288), and vacuolin from Santa Cruz (sc-216045). Compounds were washed out before patch-clamp experimentation. For other organelle patch-clamp recordings, HEK293 cells were transfected with the markers Rab11-DsRed or TfR-mCherry, respectively, and treated with 1 µM vacuolin o/n. Since macrophages could not be transfected with standard transfection protocols or by electroporation, cells were loaded with transferrin-Alexa555 and simultaneously treated with vacuolin for 1 hr to enlarge and visualize vesicles for patch-clamp.
Isolation-micropipettes were used to open up the plasma membrane, and push the enlarged vesicle of interest out of the cell. Afterwards, electrode-micropipettes were applied to patch-clamp the isolated vesicles.
Macrophages were used for experiments within 2-10 days after isolation. Mean capacitance values for Rab11+ vesicles isolated from HEK293 cells were 0.7 ± 0.2 pF (n = 6), for TfR+ vesicles (n = 3) 1.4 ± 0.3 pF, for EE (n = 10) 0.4 ± 0.1 pF, and for LE/LY (n = 51) 1.0 ± 0.2 pF. For LE/LY isolated from primary macrophages it was 0.8 ± 0.1 pF (n = 41), and for Tf-loaded vesicles 1.3 ± 0.5 pF (n = 8). Currents were recorded using an EPC-10 patch-clamp amplifier (HEKA, Lambrecht, Germany) and PatchMaster acquisition software (HEKA). Data were digitized at 40 kHz and filtered at 2.8 kHz. Fast and slow capacitive transients were cancelled by the compensation circuit of the EPC-10 amplifier. Recording glass pipettes were polished and had a resistance of 4-8 MΩ. For all experiments, salt-agar bridges were used to connect the reference Ag-AgCl wire to the bath solution to minimize voltage offsets. Liquid junction potential was corrected. For the application of the lipids (A.G. Scientific) or small molecule agonists (ML2-SA1, ML-SA1), cytoplasmic solution was completely exchanged by cytoplasmic solution containing agonist. Unless otherwise stated, cytoplasmic solution contained 140 mM K-MSA, 5 mM KOH, 4 mM NaCl, 0.39 mM CaCl 2 , 1 mM EGTA and 10 mM HEPES (pH was adjusted with KOH to 7.2). Luminal solution contained 140 mM Na-MSA, 5 mM K-MSA, 2 mM Ca-MSA, 1 mM CaCl 2 , 10 mM HEPES and 10 mM MES (pH was adjusted with NaOH to 7.2). For optimal conditions of TRPML1, luminal pH was adjusted to 4.6 and Na-MSA was used in the luminal solution. For optimal conditions of TRPML2, luminal pH was adjusted to 7.2 and Na-MSA was used in the luminal solution. For optimal conditions of TRPML3, luminal pH was adjusted to 7.2 and K-MSA was applied to replace Na-MSA in the luminal solution. In all experiments, 500 ms voltage ramps from −100 to +100 mV were applied every 5 s, holding potential at 0 mV. The current amplitudes at −100 mV were extracted from individual ramp current recordings. All statistical analysis was done using Origin8 software.
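As a rough illustration of how such ramp recordings are typically reduced to the reported numbers, the sketch below extracts the current amplitude at −100 mV from synthetic ramp traces and expresses agonist-induced activation as a fold increase over basal; the traces and amplitudes are invented placeholders, not measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 20000                                  # 500 ms sampled at 40 kHz
voltage = np.linspace(-100.0, 100.0, n_samples)    # mV, linear ramp

# Toy traces (pA): small basal current, larger inward current after agonist.
basal   = 0.002 * voltage + rng.normal(0.0, 0.05, n_samples)
agonist = 0.050 * voltage + rng.normal(0.0, 0.05, n_samples)

def amplitude_at(trace, voltage, v_target=-100.0, window=100):
    """Mean current in a small window around the target ramp voltage."""
    idx = int(np.argmin(np.abs(voltage - v_target)))
    return float(trace[idx:idx + window].mean())

i_basal, i_agonist = amplitude_at(basal, voltage), amplitude_at(agonist, voltage)
print(f"I(-100 mV): basal {i_basal:.2f} pA, agonist {i_agonist:.2f} pA, "
      f"fold increase {i_agonist / i_basal:.1f}")
```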
Calcium imaging experiments were performed using fura-2 as described previously (Grimm et al., 2012a). Briefly, HEK293 cells were plated onto glass coverslips, grown overnight and transiently transfected with the respective cDNAs using TurboFect transfection reagent (Thermo Scientific). After 24-48 hr, cells were loaded for 1 hr with the fluorescent indicator fura-2-AM (4 µM; Invitrogen) in a standard bath solution (SBS) containing (in mM) 138 NaCl, 6 KCl, 2 MgCl 2 , 2 CaCl 2 , 10 HEPES, and 5.5 D-glucose (adjusted to pH 7.4 with NaOH). Cells were washed in SBS for 30 min before measurement. Calcium imaging was performed using a monochromator-based imaging system (Polychrome IV monochromator, TILL Photonics).
Computational methods
Analysis of electron density map. The electron density maps for the cryo-electron microscopy structures of hTRPML1 and hTRPML3 in open agonist-bound form (PDB IDs: 5WJ9 and 6AYF, respectively) were downloaded from the Protein Data Bank (PDB; www.rcsb.org) (Berman et al., 2000) and visualized in PyMOL (The PyMOL Molecular Graphics System, Version 1.7.4, Schrödinger, LLC). Homology modelling of TRPML2. The amino acid sequence of hTRPML2 was retrieved from UniProt (The UniProt Consortium, 2017; Accession number: Q8IZK6-1) and a Blast (Altschul et al., 1990) search using the BLOSUM62 matrix was performed against the PDB to find the closest homologues. Subsequently, sequence alignment of hTRPML2 to the top scored template, hTRPML3 (sequence identity 59%), was conducted in MOE2012.10 (Molecular Operating Environment (MOE), 2016.08; Chemical Computing Group Inc., 1010 Sherbooke St. West, Suite #910, Montreal, QC, Canada, H3A 2R7, 2016) and the alignment file was used to generate the homology model using MODELLER 9.11 (Webb and Sali, 2014). Ligand-bound homology models of hTRPML2 were finally built using the agonist-bound structure of hTRPML3 (PDB ID: 6AYF) and ranked according to their DOPE score (Shen and Sali, 2006). Molecular docking to hTRPML1 and -2. The ligands were prepared for docking using the LigPrep tool as implemented in Schrödinger's software (Schrödinger Release 2017-1: LigPrep, Schrödinger, LLC, New York, NY, 2017), where the two stereoisomers of ML2-SA1 were generated and energy minimized using the OPLS force field. Conformers of the prepared ligands were calculated with ConfGen using the default settings and allowing minimization of the output conformations. Protein preparation. The cryo-electron microscopy structure of the open conformation of hTRPML1 in complex with ML-SA1 (PDB ID: 5WJ9) and the generated hTRPML2 homology model were prepared with Schrödinger's Protein Preparation Wizard (Schrödinger Release 2017-1: Schrödinger Suite 2017-1 Protein Preparation Wizard; Epik, Schrödinger, LLC, New York, NY, 2016; Impact, Schrödinger, LLC, New York, NY, 2016; Prime, Schrödinger, LLC, New York, NY, 2017): Hydrogen atoms were added and the H-bond network was subsequently optimized. The protonation states at pH 7.0 were predicted using the PROPKA tool in Schrödinger. The structures were finally subjected to a restrained energy minimization step using the OPLS2005 force field (RMSD of the atom displacement for terminating the minimization was 0.3 Å).
The receptor grid preparation for the docking procedure was carried out by assigning the agonist as the centroid of the grid box. The generated ligand conformers were docked into the proteins using Glide.

Cell culture of primary macrophages isolated from knockout and WT mice

For preparation of primary alveolar macrophages (AMF), mice were deeply anesthetized and euthanized by exsanguination. Afterwards, the trachea was carefully exposed and cannulated by inserting a 20 gauge catheter (B. Braun, cat. no. 4252110B). AMF were harvested by eight consecutive lung lavages with 1 ml of DPBS each. After a centrifugation step, cells were immediately collected and cultured in RPMI 1640 medium supplemented with 10% fetal bovine serum and 1% antibiotics. AMF were directly seeded onto 12 mm glass cover slips and used for experiments within 5 days after preparation. Bone marrow-derived macrophages (BMDMF) were isolated from femurs and tibias of mice. Thus, bones were isolated and bone marrow was flushed with 10 ml PBS using a sterile 25 gauge needle. Cells were obtained by centrifugation, resuspended and subsequently cultured in 10 cm petri dishes in RPMI 1640 medium supplemented with 10% fetal bovine serum, 1% penicillin/streptomycin and 40 ng/mL murine M-CSF (Miltenyi Biotech). Cells were incubated for 5 days, before they were plated onto poly-L-lysine coated cover slips for experiments. All cells were maintained at 37˚C in a 5% CO 2 atmosphere. If necessary, cells were stimulated with 1 µg/mL LPS (Escherichia coli O26:B6, Sigma, L2762) prior to experiments for different time periods as stated in the text. Animals were used under approved animal protocols and in accordance with University of Munich (LMU) Institutional Animal Care Guidelines.
Measurement of CCL2 content in BMDMF culture supernatants by ELISA
Cell culture supernatants from WT or TRPML2−/− BMDMF were collected at 4 hr or 8 hr following LPS treatment in the presence or absence of the TRPML2 agonist (ML2-SA1), and CCL2 was measured using an ELISA kit (BioLegend, 432707), per the manufacturer's instructions. Cell culture supernatants were diluted ten times for the assay, and 50 µL of diluted supernatant was assessed.
Transferrin trafficking assay

RAW264.7 cells were seeded overnight with 0.1 µg/mL of lipopolysaccharide (LPS) (L4391, Sigma). Then, cells were loaded for 20 min at 37˚C with transferrin from human serum, Alexa Fluor 546-conjugated (T23364, ThermoFisher), at a concentration of 50 µg/mL in complete medium (DMEM 10% FBS). The analysis of recycling kinetics was performed by chasing for 5, 10, 15 and 20 min in complete media plus 50 µg/mL of unconjugated transferrin (T0665, Sigma) in the presence of either DMSO or ML2-SA1 (30 µM). Before fixation with 4% paraformaldehyde (PFA), non-internalized transferrin was acid-stripped (150 mM NaCl, 0.5% acetic acid in H 2 O) for 30 s. Images were acquired using a Zeiss LSM 800 with 63x magnification.
Lysosomal exocytosis assay (FACS)

RAW264.7 cells were seeded overnight with 0.1 µg/mL of lipopolysaccharide (LPS) (L4391, Sigma). Then, cells were treated with DMSO, the calcium ionophore A23187 (C7522, Sigma) or ML2-SA1 for 3 hr. After 3 hr, cells were collected and stained with LAMP1 antibody (SC-19992, Santa Cruz) in PBS (1% BSA) under agitation for 20 min (4˚C). Cells were then collected by centrifugation and resuspended in PBS (1% BSA) with goat anti-rat, Alexa488 (A-11006, ThermoFisher) under agitation for 1 hr (4˚C). Finally, cells were washed in PBS and left on ice until FACS analysis. Cells were loaded into the FACS machine using a nozzle of 100 µm and the LAMP1 fluorescence intensity was measured using a 488 nm excitation laser and a FITC (530/30 nm) emission filter. The threshold was set using DMSO-treated samples, and 1000 events were counted for each condition.
Lysosomal exocytosis assay (Hexosaminidase)
For measurement of lysosomal hexosaminidase enzyme release, bone marrow macrophages were treated with ML2-SA1, ML-SA1 or DMSO in serum-free RPMI medium, with concentrations and durations as indicated. Ionomycin was used as control. After treatment, supernatants were collected, centrifuged and incubated with sodium citrate buffer (pH 4.5) and 4-methylumbelliferyl N-acetyl-b-D-glucosaminide (M1233, Sigma, 1 mM final concentration) for 1.5 hr. Cells were lysed with Triton-X buffer and lysates were processed in parallel. The reaction was stopped by adding glycine buffer to the samples and the turnover of hexosaminidase substrate was detected as fluorescence (excitation: 365 nm; emission: 450 nm) using a plate reader (Spectramax ID3, Molecular Devices). The increase in substrate turnover was analyzed as the fluorescence increase in supernatants relative to lysates.
Macrophage migration experiments
ML2-SA1 effects on macrophage migration were assessed by a modified Boyden chamber setup (Figure 6-figure supplement 2). In the modified Boyden chamber setup, BMDMF were plated onto poly-L-lysine coated cover slips in a twenty-four well plate (lower compartment) in the presence or absence of 1 µg/ml LPS for 6 hr. After 6 hr, media was replaced with media containing 10 or 30 µM ML2-SA1 or DMSO. 1 × 10 5 BMDMF were placed on top of the transwell chamber (Corning) in media without any compound. Transwell chambers were placed into the twenty-four well plate and incubated for 3 hr at 37˚C in a 5% CO 2 atmosphere. In the classical Boyden chamber approach, a twenty-four well plate was filled with media containing either DMSO, 1 µg/ml LPS and DMSO, 1 µg/ml LPS and 30 µM ML2-SA1, or 10 ng/ml CCL2. Transwell chambers were equally prepared and incubated. Migrated cells were fixed and stained with crystal violet/methanol. The top of the transwell chamber was cleaned and images were taken. Cell covered area was determined with ImageJ (NIH, Bethesda, MD).

Rosa Puertollano http://orcid.org/0000-0002-1106-5489
Christian Grimm http://orcid.org/0000-0002-0177-5559

Ethics

Animal experimentation: This study was performed where applicable in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. This study was performed where applicable in strict accordance with the recommendations of the Bavarian Government (ROB; AZ 55.2-1-54-2532-27-2015).

Data availability

All data generated or analysed during this study are included in the manuscript and supporting files.
Pre-Service Teachers Achievement and Mastery Levels in Solid Geometry at E. P. College of Education, Bimbilla-Ghana
The purpose of the study was to determine Pre-Service Teachers' (PSTs) achievement levels and self-evaluation of their level of mastery in Solid Geometry. The study used a descriptive research design with a purely quantitative approach to collect data on Pre-Service Teachers at the Evangelical Presbyterian (E.P.) College of Education, Bimbilla-Ghana. The population was one hundred and ninety-two (192) level 200 Pre-Service Teachers majoring in mathematics, ICT or science. The sample used for the study was 140. Convenient, purposive and simple random sampling techniques were adopted. Two instruments were used, comprising an achievement test and a closed-ended questionnaire. The overall results from the achievement test indicate that the PSTs were at a good mastery level in solid geometry. Also, on the self-evaluation questionnaire, mastery levels for geometric properties, drawing of solid nets and finding surface areas were high, and finding volumes of solids was very high. Finally, mastery of the area and volume of composite solids was at a moderate level. It was recommended that College Mathematics Tutors should encourage PSTs to always draw nets of solid shapes and also use solid nets to form solid shapes.
Introduction
In Ghana, solid geometry is a compulsory component of the mathematics curriculum that is studied from kindergarten to the Colleges of Education level. Geometry is the longest-recognized main branch of mathematics. Malkevitch [1] asserted that geometry is a branch of mathematics that entails visual phenomena. The Royal Society [2] has also opined that the branches of geometry exploit students' visual intuition to remember, understand proof, inspire conjecture, perceive reality, and give global insight into mathematics. Three-dimensional (3D) geometry consists of three dimensions: length, width and height. Geometry concepts serve as prerequisite knowledge for application in pure science, engineering, architecture and astronomy. Van de Walle [3] also reported that geometry is a footing for study in other fields such as science, engineering, architecture, geology and astronomy. "The study of geometry contributes to helping students develop the skills of visualization, critical thinking, intuition, perspective, problem-solving, conjecturing, deductive reasoning, logical argument and proof" [4]. Geometry concepts are helpful to teachers when teaching fractions, functions, calculus, decimals and percentages.

In the new 4-year Bachelor of Education (B.Ed) curriculum, Pre-Service Teachers (PSTs) are supposed to be trained for effective delivery at one of the following levels: Lower Primary (Kindergarten to Basic 3), Upper Primary (Basic 4 to Basic 6) and Junior High School (JHS1 to JHS3). In the new programme, PSTs are also to pick a major and a minor course as their specialized courses of study. With the new curriculum, both mathematics content and methodology are taught together, unlike previously where they were taught separately. Throughout the course, there is a strong emphasis on recognizing the uses of mathematics in different local and global contexts as well as exploring learners' misconceptions and difficulties in geometry, as specified in the national teachers' standards. Specific attention is given to topic areas that have consistently been flagged up in chief examiners' reports for Senior High School core mathematics as difficult.

The geometry course is specifically designed to develop and consolidate the basic mathematical knowledge and skills of PSTs in geometry, taking into account the uses of geometry in different local contexts as well as exploring learners' misconceptions and difficulties in geometry. The goals of the geometry course are threefold: (1) to extend the mathematical knowledge and skills of PSTs to a level significantly beyond what they are likely to teach in the basic school mathematics curriculum; (2) to provide PSTs with a general understanding of the basic principles of teaching basic school mathematics; and (3) to support PSTs to develop appropriate practical approaches to teaching and assessment.
2 Literature Review on Solid Geometry Patkin and Sarfaty [5] study concluded that mathematics pre-service teachers' levels of geometric thinking can be promoted, entailing control on higher thinking levels as well as a more positive attitude towards this field. Their study aims were to determine whether the intervention programme, comprising task-oriented activities of solid geometry, enhance mathematics pre-service teachers' mastery of their geometric thinking levels as well as investigate their feelings towards this discipline prior and after the intervention. Also, Koester [6] revealed that pupils and their teachers at any level experience difficulties in solid geometry. Similarly, Patkin [7] investigated elementary school mathematics teachers' personal knowledge in solid geometry and found that the elementary school mathematics teachers showed lack of understanding and found solid geometry difficult. Meng and Idris [8] study findings revealed that the teaching intervention could enhance the students' geometric thinking and achievement in solid geometry. Similarly, Nduka and Ajoke [9] study compared the effectiveness of Design-Based Learning (DBL) and Problem-based Learning (PbL) models among senior secondary students' achievement in solid geometry. The participants were 59 Senior Secondary School I (SSSI) students in Nigeria. The instrument used was Solid Geometry Achievement Test (SGAT). Their finding was in favor of DBL model because it proved as a better strategy to Problem-based Learning (PbL) model solid geometry achievements. In a recent study by Nduka and Charles-Ogan [10] study investigated the effect of Teaching for Understanding (TfU) instructional model on the solid geometry. Their study used a quasi-experimental design where the experimental group were taught using TfU model whereas the students in the control group were taught using the Problem-based Learning (PbL) model. Their results showed a significant effect of the TfU model in solid geometry achievement among the senior secondary students. They recommended that mathematics teachers should adopt the TfU model in solid geometry.
Furthermore, the purpose of the study by Patkin and Barkai [11] was to determine whether there were differences in participants' mastery of triangles and quadrilaterals, circles and three-dimensional geometric figures according to van Hiele's theory. The findings were that all five participating groups demonstrated higher mastery of the three main geometry topics; further analysis, however, revealed that the participants failed to master (solid) three-dimensional geometric figures.
Finally, Lie and Harun [12] conducted a study titled "Malaysian Students' Achievement in Solid Geometry", whose objectives were for Form 4 students to self-evaluate their level of mastery of solid geometry and their satisfaction with their teachers' teaching strategies, and to identify the factors affecting their mastery level. Their findings indicated that the students' achievement in solid geometry was moderate at 62.47%. Again, the students' self-evaluation of their mastery level in solid geometry was high, with an overall mean of 3.80. Furthermore, the students' satisfaction with their teachers' teaching strategies was moderate, with an overall mean of 3.60.
Problem statement
In Ghana, solid geometry is a significant component of the pre-service teachers' mathematics curriculum. Since it is applicable and helpful in everyday life, it is not astonishing that most international and local examination bodies, such as the Trends in International Mathematics and Science Study (TIMSS), the Institute of Education, University of Cape Coast, and the West African Senior School Certificate Examination (WASSCE), always include some solid geometry questions for students to answer.
Notwithstanding its relevance, pre-service teachers in Colleges of Education have consistently performed poorly on the compulsory solid geometry questions in the Institute of Education, University of Cape Coast examinations from 2007 to date. In 2007, 56.8% of PSTs failed the geometry examination, while 31.8% failed in 2009. Again in 2013, 1,965 (26.4%) PSTs obtained weak grades (D+ and D), while 12.4% failed the geometry semester examination. The 2015 results were the worst: 42.3% of PSTs failed and had to re-sit the examination.
Also, the West African Examinations Council chief examiners' reports have consistently stated that Senior High School students performed poorly in solid geometry. It was revealed in [13] that Senior High candidates were unable to solve core mathematics questions involving three dimensions, which measured their spatial visualization and geometric reasoning. Furthermore, grade 8 Ghanaian basic school pupils' performance was among the lowest of the countries that participated in the 2003, 2007 and 2011 TIMSS studies [14,15]. The TIMSS rankings revealed a lack of solid geometric comprehension among Ghanaian basic school pupils. Notwithstanding the growing need for learning solid geometry as a topic in particular, several research studies have revealed an unsatisfactorily low performance level in plane geometry, with solid geometry being even worse, in the United States [16] and in Malaysia [8].
Finally, there are limited studies in Ghana's Colleges of Education that explore pre-service teachers' achievement and mastery levels in solid geometry. Hence, the motivation for this paper is to narrow this gap in the Ghanaian setting.
Purpose of the study
The purpose of the study was to determine pre-service teachers' achievement levels in solid geometry and their self-evaluation of their level of mastery in solid geometry.
Research questions
Based on the purpose of this study, the following research questions were formulated to guide the study.
1. What are the achievement levels of the Pre-Service Teachers in Solid Geometry?
2. What is the Pre-Service Teachers' self-evaluation of their level of mastery of the geometric properties of Solid Geometry?
3. What is the Pre-Service Teachers' self-evaluation of their level of mastery in drawing nets of Solid Geometry?
4. What is the Pre-Service Teachers' self-evaluation of their level of mastery in finding the surface area of Solid Geometry?
5. What is the Pre-Service Teachers' self-evaluation of their level of mastery in finding the volume of Solid Geometry?
6. What is the Pre-Service Teachers' self-evaluation of their level of mastery in finding composite solids' area and volume?
Research design
A descriptive research design with a purely quantitative approach was employed to collect data on the achievement levels in solid geometry and the self-evaluated mastery levels of solid geometry of pre-service teachers at E.P. College of Education, Bimbilla, Ghana.
Population
The study population consisted of one hundred and ninety-two (192) Level 200 mathematics pre-service teachers of E.P. College of Education, Bimbilla, Ghana. The mathematics PSTs comprised mathematics and science majors/minors and mathematics and ICT majors/minors. The population was made up of 61 mathematics major/science minor, 60 science major/mathematics minor, 54 mathematics major/ICT minor and 17 ICT major/mathematics minor PSTs. Of the population, 170 (88.5%) were male PSTs while 22 (11.5%) were female PSTs.
Sample size and sampling procedure
The sample used for the study was 140 mathematics and science major/minor and mathematics and ICT major/minor PSTs. The sample of one hundred and forty (140) represents 72.9% of the study population.
Out of the sample of one hundred and forty (140), 118 (84.3%) were male PSTs while twenty-two (22, 15.7%) were female PSTs. Convenience, purposive and simple random sampling techniques were adopted in selecting the College and the mathematics and science major/minor and mathematics and ICT major/minor PSTs. Convenience sampling was used because the researcher is a tutor at the College, hence organizing the PSTs for the study was easy. Purposive sampling was used because the concepts being investigated best suited PSTs offering mathematics as a major or minor. The simple random sampling technique was used to give all the mathematics PSTs an opportunity to participate in the study. A random number method developed in Microsoft Excel was used to generate the index numbers of the sample for the study, as sketched below.
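A minimal sketch of this sampling step, assuming the index numbers simply run from 1 to 192; the seed and variable names are illustrative and are not taken from the study's Excel workbook:

```python
import random

# Hypothetical sampling frame: index numbers of the 192 Level 200 mathematics PSTs
population_indices = list(range(1, 193))

random.seed(2019)  # illustrative seed, only so the draw is reproducible

# Simple random sample of 140 index numbers, drawn without replacement
sample_indices = random.sample(population_indices, k=140)
print(sorted(sample_indices)[:10])  # preview the first few selected index numbers
```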
Research instruments and pilot test
Two instruments were used: a closed-ended questionnaire adapted from Lie and Harun [12] and an achievement test. The self-evaluated mastery level questionnaire had 24 items covering the following concepts: cube and cuboid, prism, pyramid, cylinder, cone and sphere, measuring their properties, drawing nets, finding surface area and volume, and finding the surface area and volume of composite solids. The questionnaire used a 3-point Likert scale with Fair = 1, Good = 2 and Excellent = 3; the 3-point scale was used for easy data analysis and interpretation of results. The achievement test was divided into three sections, namely Sections A, B and C. Section A, an achievement test of (solid) three-dimensional geometric figures, was composed of 30 multiple-choice items, of which the first fifteen questions were developed by Patkin [17] and Patkin and Levenberg [18] and represented the first three levels of van Hiele's theory. The remaining fifteen were developed by the researcher after going through several solid geometry questions in the Colleges of Education mathematics education curriculum. Five answer options were provided for each question and the respondents had to choose the correct one. Section B had 7 items requiring the PSTs to fill in the number of flat surfaces, curved surfaces, sides, vertices and edges of the cube, cuboid, pentagonal prism, rectangular pyramid, cylinder, cone and sphere. Section C had 7 items measuring PSTs' skills in drawing the nets of the following solids: cube, cuboid, pentagonal prism, rectangular pyramid, cylinder, cone and sphere. The instruments were pre-tested with 35 Level 300 PSTs of E.P. College of Education who were on field practicum in Bimbilla, the capital of Nanumba North Municipality. The pilot test afforded the researcher the opportunity to refine the instruments for the main study, especially the achievement test. It also enabled the researcher to determine the reliability and confirm the relevance of the instruments.
Validity and reliability
Instrument validation was improved through expert judgement [19]. The two instruments were given to two experts in mathematics education and another colleague of the researcher at E.P. College of Education, Bimbilla, for scrutiny and vetting. Their recommendations were the basis for validating and administering the instruments. The reliability coefficient of the PSTs' Solid Geometry Achievement Test (SGAT) was calculated using the Kuder-Richardson formula 21 (KR-21), because the questions were scored dichotomously: zero (0) marks for any wrong answer and one (1) mark for a correct response. This formula determines the reliability of the instrument in a single administration, and the reliability test yielded 0.85 for the achievement test. The Cronbach's alpha method was used to determine the reliability coefficient of the self-evaluated mastery level questionnaire, which gave a value of 0.88.
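For readers unfamiliar with these coefficients, the sketch below shows how KR-21 (from total test scores) and Cronbach's alpha (from a respondents-by-items matrix) are computed; the data passed in are randomly generated stand-ins, not the study's pilot data:

```python
import numpy as np

def kr21(total_scores, n_items):
    """Kuder-Richardson formula 21, computed from total scores on dichotomously scored items."""
    m = np.mean(total_scores)
    var = np.var(total_scores, ddof=1)
    return (n_items / (n_items - 1)) * (1 - (m * (n_items - m)) / (n_items * var))

def cronbach_alpha(item_matrix):
    """Cronbach's alpha from a respondents-by-items matrix of Likert ratings."""
    item_matrix = np.asarray(item_matrix, dtype=float)
    k = item_matrix.shape[1]
    item_vars = item_matrix.var(axis=0, ddof=1)        # variance of each item
    total_var = item_matrix.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot-sized data: 35 total marks on a 30-item test, and 35 x 24 Likert responses
rng = np.random.default_rng(0)
print(kr21(rng.integers(5, 30, size=35), n_items=30))
print(cronbach_alpha(rng.integers(1, 4, size=(35, 24))))
```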
Data collection procedure
Having met the ethical requirements, the researcher made the instruments available to the pre-service teachers in the College. The questionnaire on pre-service teachers' self-evaluated mastery level in solid geometry was given to the pre-service teachers to complete on 25th November 2019 within 30 minutes. A week later, the achievement test covering Sections A and B was administered to the pre-service teachers with a duration of 1 hour 30 minutes. Two weeks later again, the Section C items were administered to the PSTs.
Data analysis
Data obtained from the completed instruments of the mathematics and science major/minor and mathematics and ICT major/minor PSTs were sorted, coded and entered into the Statistical Package for the Social Sciences (SPSS) software version 16.0 and Microsoft Excel 2013. To address research question 1, on the achievement levels of the Pre-Service Teachers in Solid Geometry, the data were analyzed using frequency counts and percentages to classify mastery levels as Excellent (80-100%), Good (70-79%), Fair (60-69%), Satisfactory (50-59%) and Fail (0-49%). Section A was scored out of 30 marks, and Sections B and C were scored out of 35 marks each, making a total of 100 marks for the test. To address research question 2, data from the 3-point Likert scale (Fair = 1, Good = 2 and Excellent = 3) were analyzed using frequency counts, percentages and means. To determine the PSTs' mastery levels in solid geometry, the following range of scales was used: 2.6-3.0 = very high, 2.1-2.5 = high, 1.6-2.0 = moderate, 1.1-1.5 = weak and 0.0-1.0 = very weak. A sketch of both grading rules is given below.
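A minimal sketch of the two classification rules described above, using the cut-offs stated in the text (function names are illustrative):

```python
def achievement_level(percent):
    """Map a test score in percent to the achievement band used for research question 1."""
    if percent >= 80:
        return "Excellent"
    if percent >= 70:
        return "Good"
    if percent >= 60:
        return "Fair"
    if percent >= 50:
        return "Satisfactory"
    return "Fail"

def likert_mastery_level(mean_score):
    """Map a 3-point Likert mean to the self-evaluated mastery level scale."""
    if mean_score >= 2.6:
        return "Very high"
    if mean_score >= 2.1:
        return "High"
    if mean_score >= 1.6:
        return "Moderate"
    if mean_score >= 1.1:
        return "Weak"
    return "Very weak"

print(achievement_level(75))         # -> Good
print(likert_mastery_level(2.43))    # -> High
```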
Results
The purpose of the study was to determine pre-service teachers' achievement levels in solid geometry and their self-evaluation of their level of mastery in solid geometry. The results of the study are organized by means of descriptive statistics and are presented according to the research questions.
Research Question 1: What are the achievement levels of the Pre-Service Teachers in Solid Geometry?
In order to answer research question one, the results were analyzed sequentially from Section A to Section B to Section C before the overall analysis.
Section A: Multiple choice items
Section B: Fill in the blank
The Table 2 analysis is done horizontally, taking questions 31 to 37 on the overall achievement for flat surfaces, curved surfaces, sides, vertices and edges of the rectangular pyramid, cuboid, cylinder, pentagonal prism, cube, sphere and cone. From Table 2, the students were able to identify or determine the number of flat surfaces, curved surfaces, sides, vertices and edges of the rectangular pyramid and cuboid very well, with 427 correct responses (61%) for the rectangular pyramid and 420 correct responses (60%) for the cuboid. Also, the PSTs performed well on the number of flat surfaces, curved surfaces, sides, vertices and edges of the cube, with a correct answer rate of 58% (406 correct responses). However, the PSTs had difficulty identifying the number of flat surfaces, curved surfaces, sides, vertices and edges of the cylinder, pentagonal prism and sphere, where their correct answer rates fell below 44%. The overall performance was just average, with a correct answer rate of 50.06%. The best performance was on the rectangular pyramid while the worst was on the sphere. Further analysis indicated that the minimum, maximum, mode and average marks recorded for this section were 2, 33, 17 and 17.29 respectively.
The Table 3 analysis is done vertically, taking questions 31 to 37 on the overall achievement for the rectangular pyramid, cuboid, cylinder, pentagonal prism, cube, sphere and cone in terms of flat surfaces, curved surfaces, sides, vertices and edges. From Table 3, the PSTs performed well in identifying the flat surfaces of the rectangular pyramid, cuboid, cylinder, pentagonal prism, cube, sphere and cone, with 62.55% correct answers. Their identification of the curved surfaces of these solids was 44.80%, while identification of sides was 54.39%. Furthermore, identification of vertices and edges was 52.14% and 37.04% respectively. Table 5 presents the overall analysis from Sections A, B and C, that is questions 1-44; the marks are grouped into mastery levels, and the table also contains the number of PSTs and their percentage achievements.

Research Question 2: What is the Pre-Service Teachers' self-evaluation of their level of mastery of the geometric properties of Solid Geometry?

Overall mastery level = High. Table 6 presents the PSTs' level of understanding of the geometric properties of the cube and cuboid, prism, pyramid, cylinder, cone and sphere. The analysis revealed that the PSTs' level of understanding of the properties of the cylinder was excellent, better than for any of the other solid geometric figures, with a mean score of 2.6 (very high mastery level) and 93 (66.4%) excellent responses. The next best understood properties indicated by the PSTs were those of the pyramid and cone, with 82 (58.6%) and 80 (57.1%) excellent responses respectively, signifying high mastery levels; the cone and pyramid had the same mean score of 2.5. The self-evaluated level for the geometric properties of the sphere was between good (42.9%) and excellent (45.7%), a high mastery level with a mean of 2.3. The overall self-evaluation responses for the geometric properties of the solid shapes were 453 (53.93%) excellent, 335 (39.88%) good and 52 (6.19%) fair, with an overall mean of 2.5 indicating a high mastery level. In conclusion, the PSTs' level of mastery of the geometric properties of the cube and cuboid, prism, pyramid, cone and sphere was high, while that of the cylinder was very high.
Research Question 3:
What is the Pre-Service Teachers' self-evaluation of their level of mastery in drawing nets of Solid Geometry? Overall mastery level = High. As shown in Table 7, the PSTs indicated the best mastery level in understanding the drawing of the cylinder net, with a mean score of 2.6 (very high mastery level); 89 (63.6%) of the 140 PSTs indicated an excellent level, with the remaining 36.4% indicating fair or good understanding. The next best self-evaluated understanding was for the nets of the cube and cuboid, and the cone. The PSTs' responses for the cube and cuboid and for the cone were 80 (57.1%) and 78 (55.7%) respectively for excellent understanding, each with a mean of 2.5, which is a high mastery level. Drawing the net of the sphere was difficult for the PSTs: 35 (25%) indicated fair understanding, while about 37% each indicated good and excellent understanding, resulting in a mean score of 2.1. The overall responses for the drawing of solid nets revealed a mean score of 2.4, which is high. The overall responses for drawing the nets of the solid shapes were 438 (52.14%) excellent, 310 (36.90%) good and 92 (10.95%) fair. In conclusion, the PSTs' overall level of mastery of drawing the nets of the cube and cuboid, prism, cylinder, pyramid, cone and sphere was high.
Research Question 4:
What is the Pre-Service Teachers' self-evaluation of their level of mastery in finding the surface area of Solid Geometry? The results in Table 8 revealed that the PSTs understood how to find the surface area of the cylinder more easily than that of any of the other solid shapes in the table, with a mean score of 2.6 indicating a very high mastery level and 94 (67.1%) indicating an excellent level of understanding; 4.3% and 28.6% of the PSTs indicated fair and good understanding respectively. The next best mastered concept was finding the surface area of the cone, with a mean of 2.6 and 87 of the 140 PSTs (62.1%) indicating excellent understanding; 5.7% and 32.1% of the PSTs indicated fair and good understanding of finding the surface area of the cone. The responses on finding the areas of the cube and cuboid were 60.7% excellent, 32.1% good and 7.1% fair. For finding the surface area of the sphere, 47.9% indicated excellent understanding, 37.1% good understanding and 15% fair understanding, with a mean score of 2.3 signifying a high mastery level. The overall mean for finding the surface areas of the cube, cuboid, prism, pyramid, cylinder, cone and sphere was 2.5, which is a high mastery level, and the overall responses were 472 (56.19%) excellent, 300 (35.71%) good and 68 (8.10%) fair. In conclusion, apart from the cylinder and cone, which were at a very high mastery level, the rest of the concepts on finding the surface area of solids were at a high level, with an overall mean of 2.5.
Research Question 5: What is the Pre-Service Teachers' self-evaluation of their level of mastery in finding the volume of Solid Geometry?

From Table 9, the findings showed that the most understood concept in the self-evaluation of finding the volume of solids was finding the volume of the cylinder, with a mean of 2.7 representing a very high mastery level. Of the 140 PSTs, 97 (69.3%) indicated they have an excellent idea of finding the volume of the cylinder, while 27.1% indicated good understanding and 3.6% indicated fair understanding. The next best understood concept was finding the volume of the cone: 87 responses (62.1%) indicated an excellent idea of finding the volume of the cone, with a mean of 2.6, which is a very high mastery level, while 33.6% of the PSTs indicated good understanding and 4.3% indicated fair understanding. Also, for finding the volume of the sphere, 52.9% of the PSTs indicated excellent understanding, while 11.4% and 35.7% indicated fair and good understanding respectively. Finding the volume of the sphere had the lowest mean score of 2.4, which is a high mastery level. The overall mean score for the volume of the solids was 2.5, and the overall responses for finding the volume of the solids were 492 (58.57%) excellent, 292 (34.76%) good and 56 (6.67%) fair.
In conclusion, the concepts of finding the volume of the cube and cuboid, prism, pyramid and sphere were at a high mastery level, with the exception of the cylinder and cone, which were at a very high level.
Research Question 6:
What is the Pre-Service Teachers' self-evaluation of their level of mastery in finding composite solids' area and volume? Table 10 shows the results of the analysis of finding the surface area and volume of composite solids. Fifty-three PSTs each (37.9%) indicated that they have fair and good understanding of how to find the surface area of composite solids, with a mean score of 1.86, which is a moderate level, while 24.3% said they have an excellent idea of finding the surface area and volume of composite solids. Also, 53 (37.9%) PSTs indicated that they have a good idea of finding the volume of composite solids, which yielded a mean score of 1.92, also a moderate mastery level. The overall responses for finding the surface area and volume of composite solids were 102 (36.43%) fair, 106 (37.86%) good and 72 (25.71%) excellent.
Finally, the overall mean for mastery of the properties, nets, surface area and volume of the geometric shapes and of composite solids was 2.43, which is high, with 1927 responses (52.94%) signifying an excellent grasp of solid geometry concepts. Also, 1343 (36.90%) and 370 (10.16%) responses indicated good and fair mastery levels respectively for the overall geometry concepts.
Discussion of Results
The study examined pre-service teachers' achievement levels and their self-evaluation of their level of mastery in solid geometry.
Achievement test
The Section A performance of the PSTs was encouraging because 44 (31.4%) scored at least 25 out of the 30 marks in the multiple-choice achievement test. The PSTs' achievement in the multiple-choice part of the solid geometry test was good: the minimum mark was 20, obtained by 32.9% of the PSTs, and the maximum mark was 28, obtained by 2.1%. The modal mark was also 20 (32.9%). A mean of 22.86 and a standard deviation of 2.5 were found, indicating good performance, with the majority of PSTs' marks ranging between 20.36 and 25.36. Some of the students had challenges in identifying regular polyhedrons and their properties and coplanar edges of a rectangular right prism. Also, some of the PSTs found it difficult to solve questions on a rectangular prism when the volume is given in quadratic form, the width as a linear expression and the length as a constant value.
From the Section B achievement test analysis, the PSTs exhibited a better understanding of the geometric properties of the pyramid, cuboid and cube than of the prism, cylinder, cone and sphere. The analysis revealed that PSTs understood the properties of the pyramid, cuboid and cube with correct answer rates of 61%, 60% and 58% respectively. This performance could be a result of PSTs using Maggi cubes, matchboxes and models of house roofs in their daily lives, and hence being familiar with their properties. The PSTs had difficulty identifying the number of flat surfaces, curved surfaces, sides, vertices and edges of the prism, cylinder, cone and sphere, with correct answer rates between 37.86% and 49.86%, as in Table 2. The analysis revealed that the properties and net of the sphere were tough for the PSTs; hence tutors should emphasize them by drawing the net of the sphere and taking time to explain its properties thoroughly. From the script analysis, a few of the PSTs were not clear about the difference between a vertex and an edge, because they exchanged their answers in columns 6 and 7 of questions 31-37 of the achievement test. The overall achievement in solid geometric properties was 2453 correct responses (50.06%).
In relation to the Section C achievement test, the PSTs exhibited far better performance in drawing the nets of solid shapes than in determining the number of flat surfaces, curved surfaces, sides, vertices and edges. Their best performance came in drawing the nets of the rectangular pyramid, cuboid, cylinder, cube and cone, with correct answer rates all above 64%. However, some of the PSTs faced challenges in drawing the nets of the pentagonal prism and sphere, with correct drawing rates of 35% and 17.14% respectively. This finding tallies with Patkin [17], who reported that some students found solid geometry concepts difficult. PSTs should be allowed to perform activities on making geometrical solids from their nets and vice versa during solid geometry lessons, which will improve their geometric thinking and reasoning. This is similar to the comment by Strutchens, Harris and Martin [21], who indicated that the teaching of solid geometry should include hands-on explorations, which will invariably lead to geometric thinking, reasoning and making conjectures. The PSTs' overall achievement in drawing the nets of the rectangular pyramid, cuboid, cylinder, pentagonal prism, cube, sphere and cone yielded a total correct drawing rate of 63.67% in this study.
Questionnaire
From the questionnaire analysis on the properties of solids, the PSTs indicated a good grasp of the number of flat surfaces, curved surfaces, sides, vertices and edges. The PSTs indicated that they have a thorough understanding of the geometric properties of the pyramid, cylinder and cone, with at least 57% indicating an excellent level of understanding in identifying the number of flat surfaces, curved surfaces, sides, vertices and edges. This present finding is consistent with an earlier study by Lie and Harun [12], which revealed that students best understood the geometric properties of the pyramid, cylinder and cone. However, some of the PSTs had challenges in identifying the number of flat surfaces, curved surfaces, sides, vertices and edges of the cylinder, pentagonal prism and sphere, with correct answer rates below 44%. The geometric properties best understood by the PSTs were those of the cylinder, while the least grasped were those of the sphere. This could be due to the way the sphere is taught in Ghanaian schools: most teachers draw the sphere on the chalkboard without taking time to present it as a teaching and learning material and to teach its properties to students. Most teachers also just give the formula for the sphere and proceed to work through examples. This study has indicated a similar result to that of Lie and Harun [12], which also recorded a lower understanding of the sphere.
Again, from the questionnaire analysis, the PSTs showed their best mastery level, with excellent understanding, in drawing the net of the cylinder, with a mean score of 2.6 signifying a very high level and 63.6% excellent responses. This finding corroborates that of Lie and Harun [12], who recorded a high mastery level percentage above 63.6%. The next high mastery levels in net drawing from the questionnaire responses were for the pyramid, cube and cuboid, and cone, all with a mean of 2.5, which is a high mastery level. The PSTs' excellent mastery rates for the pyramid, cube and cuboid, and cone were 53.6%, 57.1% and 55.7% respectively. This is also similar to the study by Lie and Harun [12], which had percentages for the pyramid, cube and cuboid, and cone above 50%. The PSTs indicated that drawing the net of the sphere was difficult, because a quarter (25%) indicated only a fair idea of drawing the sphere net. Teachers' inability to draw the net of the sphere for their students could be a reason why the PSTs are not able to draw it; some of the students left the sphere net blank because they had serious challenges in drawing it. This finding is in line with the study by Lie and Harun [12], which indicated that students confirmed that drawing the net of the sphere was a challenge.
The analysis of the questionnaire further revealed that the PSTs responded that they have excellent knowledge in finding the surface areas of solid shapes (cube, cuboid, prism, pyramid, cylinder, cone and sphere), with an overall mean score of 2.5, which is a high mastery level, and 472 responses (56.19%) indicating excellent understanding. The PSTs' percentage responses tally with those of Lie and Harun [12], who recorded a mastery level percentage above 50% in finding the surface areas of solid shapes. The PSTs agreed that they understood the concept of finding the surface area of the cylinder more easily than that of any other solid shape. However, for the concept of finding the surface area of the sphere, 15% indicated only fair understanding, and the mean score was 2.3.
Again, the findings from the questionnaire showed that the majority of the PSTs (69.3%), with a mean of 2.7 reflecting a very high level of mastery, understood the concept of finding the volume of the cylinder. Finding the volume of the cone was also well understood by the PSTs, because 87 out of 140 PSTs indicated they could solve problems involving the volume of the cone excellently, with the mastery level being very high. The lowest mastery level concept was finding the volume of the sphere, with 11.4% of PSTs indicating only a fair understanding of it. On the whole, 58.57% of responses indicated that the PSTs understood the concept of finding the volume of solids excellently, with the overall mastery level being high. This finding is in agreement with the results of the Lie and Harun [12] study, which documented above 50% with a high mastery level. Furthermore, the questionnaire analysis revealed that the PSTs' understanding of finding the surface area and volume of composite solids was at a moderate level, with means of 1.86 and 1.89 respectively.
Finally, from the questionnaire, the overall mastery level for the properties, nets, surface area and volume of the solid shapes and of composite solids was 2.43, which is a high level, with 89.94% of responses indicating that the PSTs understood solid geometry concepts. The present finding is consistent with an earlier study by Lie and Harun [12], which also indicated a high mastery level of all the solid geometry concepts.
Major findings
Based on the results of the present study the following findings were drawn:
Achievement test
The PSTs' achievement in the solid geometry test was at a good mastery level, with 103 (73.57%) achieving at least 60 out of the total 100 marks. The PSTs' understanding of solid geometric properties was slightly above average at 50%, with high identification of flat surfaces, sides and vertices. The majority of the PSTs drew the nets of solid shapes correctly at an excellent level.
Questionnaire
The majority of the PSTs' self-evaluated mastery level of solid geometric properties, drawing of solid nets, and finding surface area and volume was high. The majority of the PSTs' self-evaluated mastery level of finding the surface area and volume of composite solid shapes was moderate.
Conclusion
The pre-service teachers' achievement in solid geometry was good, and their self-evaluated mastery level of geometric properties, drawing of solid nets, and finding the surface area and volume of solids was high.
Recommendations
The following recommendations are considered relevant based on the study:
- College mathematics tutors should always give PSTs enough time to solve problems on their own before discussing them together in class.
- College mathematics tutors should always provide PSTs with real solid shapes to manipulate.
- College mathematics tutors should encourage PSTs to always draw the nets of solid shapes and also use the nets to form solid shapes, especially the sphere.
Consent and Ethical Approval
As per the university's standard guidelines, participant consent and ethical approval have been collected and preserved by the authors.
Competing Interests
Author has declared that no competing interests exist.
Validation of an Isothermal Amplification Platform for Microbial Identification and Antimicrobial Resistance Detection in Blood: A Prospective Study
Abstract Background: Recent advances in nucleic acid amplification technique (NAAT)-based identification of pathogens in blood stream infections (BSI) have revolutionized molecular diagnostics in comparison to traditional clinical microbiology practice of blood culture. Rapid pathogen detection with point-of-care diagnostic-applicable platform is prerequisite for efficient patient management. The aim of the study is to evaluate an in-house developed, lyophilized OmiX-AMP pathogen test for the detection of top six BSI-causing bacteria along with two major antimicrobial resistance (AMR) markers of carbapenem and compare it to the traditional blood culture-based detection. Materials and methods: One hundred forty-three patients admitted to the Medical Intensive Care Unit, Narayana Hrudayalaya, Bangalore, with either suspected or proven sepsis, of either gender, of age ≥18 years were enrolled for the study. Pathogen DNA extracted from blood culture sample using OmiX pReP method was amplified at isothermal conditions and analyzed in real time using OmiX Analysis software. Results: Among the processed 143 samples, 54 were true negative, 83 were true positive, 3 were false negative, and 2 were false positive as analyzed by OmiX READ software. Gram-negative bacteria (91.3%) and gram-positive bacteria (75%) were detected with 100% specificity and 95.6% sensitivity along with the AMR marker pattern with a turnaround time of 4 hours from sample collection to results. Conclusion: OmiX-AMP pathogen test detected pathogens with 96.5% concordance in comparison to traditional blood culture. Henceforth, OmiX-AMP pathogen test could be used as a readily deployable diagnostic kit even in low-resource settings. How to cite this article: Maheshwarappa HM, Guru P, Mundre RS, Lawrence N, Majumder S, Sigamani A, et al. Validation of an Isothermal Amplification Platform for Microbial Identification and Antimicrobial Resistance Detection in Blood: A Prospective Study. Indian J Crit Care Med 2021;25(3):299–304.
Introduction
Blood stream infections (BSI), ranging from mild bacteremia to potentially life-threatening septic shock, are posing a major healthcare burden worldwide. 1,2 A delay in appropriate treatment could lead to multiorgan failure and eventual death. 3 Overall mortality due to sepsis in developing countries like India is about 63%, of which 34% of deaths were from the intensive care unit (ICU) of hospitals. 4 Traditional blood culture (BC) takes 48-72 hours for pathogen detection with culture positivity rates of 10-25%. 5,6 Meanwhile, treating patients with high-end antibiotics like carbapenems and colistin has changed the epidemiology and susceptibility patterns of microorganisms, with a huge impact on antimicrobial stewardship. [7][8][9] The higher turnaround time (TAT) along with the lack of sensitivity and contamination issues associated with BC testing highlight the need for a more rapid and accurate method for pathogen detection and antibiotic susceptibility patterns. 10 Nucleic acid amplification technique (NAAT)-based molecular diagnostic methods enable rapid pathogen identification (ID) in 2-7 hours to complement or to confirm the BC results. 11,12 Recently, loop-mediated isothermal amplification (LAMP) has emerged as a point-of-care (POC) deployable technique with characteristics like better amplification efficiency, 2-3 pairs of sequence-specific primers, and the requirement of only a simple water bath/dry bath to maintain isothermal conditions. 13,14 The World Health Organization authorized the LAMP-based tuberculosis test Xpert MTB/RIF® (Xpert) (Cepheid, Sunnyvale, CA, USA) in 2013, but high consumable costs, the need for sophisticated instrumentation, and maintenance limit its usage as a POC test. 15 Reaction components of LAMP in lyophilized format have been reported for human African Trypanosomiasis and Coxiella burnetii, which require minimal technical expertise and an easy workflow. 16,17 However, there is no dried or lyophilized isothermal assay available commercially for the diagnosis of BSI in low resource settings.
In order to provide a cost-effective, easy-to-use diagnostic platform that can be POC deployable, OmiX Labs has developed an isothermal test for BSI called OmiX-AMP pathogen test to detect the gene signatures of top 6 bacterial pathogens and related antibiotic resistance based on Indian epidemiology. The bacterial pathogens in the panel include Escherichia coli, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, Enterococcus spp., and Staphylococcus aureus along with carbapenem antibiotic-resistant markers: NDM and OXA-48. In this study, we have clinically validated the OmiX-AMP pathogen test, and concordance was established in comparison to traditional BC test results of critically ill patients. As the first study using such a platform for isothermal tests for BSI, it was preferred to first validate in BCs and compare it to the existing tests in the market.
Study Design and Ethics Approval
The study was designed to recruit 150 patients with suspected BSI, from whom blood samples were collected for standard culture on the BD BACTEC FX™ system (USA). Patients admitted to the medical intensive care unit with suspected/proven sepsis, of either gender and ≥18 years of age, were enrolled; the study protocol was approved by the institutional review board of Narayana Hrudayalaya, Bangalore, India (protocol number NHH/MEC-S01/A-1-2019). The study has been registered with the Clinical Trials Registry-India, with the registration number CTRI/2019/04/018459.
BC for ID and Antibiotic Susceptibility Test (AST)
The enrolled 150 samples were 100 BC-positive and 50 BC-negative cases and the study design allowed for 90% power to detect the top 2 pathogens with 5% type I error. Blood samples were collected in BD BACTEC™ plus anaerobic and aerobic bottles separately and incubated for a period of 7 days. BC bottles that beeped positive were processed for Gram staining and subculturing in blood agar and MacConkey's agar medium for organism ID. Meanwhile, 2 ml of blood from BC-positive bottles and BC-negative bottles (no growth after 7 days of incubation) was sent to the OmiX-AMP pathogen test platform. The information of the pathogen identified by BC and AST were kept confidential and only made available at the end of the validation study with the OmiX-AMP pathogen test results.
DNA Extraction by OmiX pReP Method
For each BC sample received, a unique ID and barcode were generated and immediately processed for DNA extraction. To the 200 µL of BC sample, 100 µL of 2% red blood cell lysis buffer was added, incubated at 95°C for 2 minutes, and centrifuged at 8000 rpm for 5 minutes. Pellet was resuspended in 75 µL of ARCIS solution-I, from which 60 µL was transferred to a fresh tube containing 60 µL of ARCIS solution-II. The suspension was incubated at 95°C for 3 minutes and centrifuged at 8000 rpm for 3 minutes. The supernatant-containing DNA was collected into a fresh tube and utilized either immediately or stored at −20°C until the amplification was performed.
Lyophilization of Master Mix for OmiX-AMP Pathogen Test
OmiX-AMP pathogen test was prepared using LAMP master mix and dispensed into plasma-treated 0.2 ml clear tubes. The dispensed formulation tubes were freeze-dried using OmiX proprietary lyophilization program in Genesis SP Scientific pilot freeze dryer. After freeze-drying, lyophilized tubes were visually checked for "white cake" appearances, assembled in OmiX-AMP ID sepsis test kit format of eight unitized panels (in duplicates) along with a positive control and a negative control in the silver pouches with a desiccant and stored at room temperature. Tris-Cl (pH 8.8)-based reaction buffer was provided with each pouch for reagent reconstitution before testing reaction. The quality of each manufactured batch of the OmiX-AMP test kit was tested with the OmiX laboratory standard control DNA samples. Quality-approved batches were used for testing clinical samples.
OmiX Assay Using OmiX-AMP Pathogen Test
For each clinical sample, a unitized panel pouch was utilized. Reconstitution buffer (20 µL) was added to each tube to reconstitute the lyophilized reagents. Then, 5 µL of DNA was added to each of the 18 reaction tubes. The reaction tubes for 1 or 2 samples (18 or 36 tubes) were then placed in the Rotor-Gene Q-device (Qiagen) and heated to 65˚C for 1 hour, and a final denaturation of 2 minutes at 95˚C to inactivate the enzyme. No template control run was performed at regular intervals to ensure no amplicon contamination prevailed in the lab setup. At the end of the run, the real-time fluorescence data collected on the Rotor-Gene Q-device were exported to analyze the results using OmiX Analysis software.
Statistical Analysis
Sensitivity, specificity, positive and negative predictive values, and their 95% confidence intervals were computed using the epiR package in R software. Cohen's kappa was calculated as a measure of agreement between the organism identified by the OmiX-AMP pathogen test and the standard BC results. For this purpose, "negative" cases were of three types: (a) negative in culture; (b) positive in culture but negative for the six panel organisms, with a different organism identified; and (c) positive in culture but no organism identified and culture considered to have a contaminant or coagulase-negative Staphylococci (CoNS).
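As a rough illustration of these statistics (the study itself used epiR in R; the Python sketch below is an equivalent hand computation, with the 2x2 counts taken from the abstract and used purely as example inputs):

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table of test vs reference results."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa for agreement between the index test and the reference standard."""
    n = tp + fp + fn + tn
    observed = (tp + tn) / n
    # chance agreement computed from the marginal totals of the 2x2 table
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    return (observed - expected) / (1 - expected)

# Example inputs: 83 true positives, 2 false positives, 3 false negatives, 54 true negatives
print(diagnostic_stats(tp=83, fp=2, fn=3, tn=54))
print(cohens_kappa(tp=83, fp=2, fn=3, tn=54))
```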
Pathogen Identification
In the present study, a total of 100 positive and 50 negative BC samples were evaluated using the OmiX-AMP pathogen test from February 2019 to June 2019. Among the 150 BCs, 6 samples were used to standardize the process, 1 sample was not processed due to insufficient quantity, and 143 samples were considered for the study (Flowchart 1).
Of the 143 subjects, 86 were men (age 18-93 years, average 55.9 years) and 57 were women (age 23-88.9 years, average 57.1 years). The majority of the samples were from patients aged ≥60 years (n = 65), followed by those aged 31-59 years (n = 57); younger patients were the least represented (n = 21).
Of the 143 BC samples, 89 were culture positive and 54 were culture negative as per standard BC results. Of these 89 culture-positive cases, 50 were positive for the pathogens which are part of the OmiX-AMP pathogen test, 15 were CoNS, which are considered contaminants in BSI, and 24 samples were positive for off-panel organisms (Table S1). The rate of OmiX panel organisms was 73% [(50+15)/89] and 27% of cases were not part of the OmiX-AMP panel.
The OmiX-AMP pathogen test detected 45 of the 50 OmiX panel-related organisms, which included E. coli (16), K. pneumoniae (14), A. baumannii (10), P. aeruginosa (2), Enterococcus spp. (2), and S. aureus (1) (Table 1). The fifty-four no-growth cases in culture and fifteen cases of CoNS were reported as negative in the OmiX-AMP pathogen test. Figure 1 demonstrates the correlation between the NH BC results and the OmiX-AMP pathogen test results.
Among the 121 cases which were either of the OmiX panel, negative in BC, or CoNS in BC, the Cohen's kappa measure was 0.94 with a 95% confidence interval of 0.89-0.97. This is well above the 0.80 threshold that is considered a statistically relevant measure of a high level of agreement. In the 143 cases, the Cohen's kappa measure of agreement was 0.74 with a 95% confidence interval of 0.66-0.82. Table 2 illustrates the true prevalence, sensitivity, specificity, and positive and negative predictive values as computed from the 121 cases which include the OmiX panel organisms, negative-in-culture cases, and CoNS-identified-in-culture cases. There was nearly 100% specificity, positive predictive value, and negative predictive value for all organisms. However, due to the low prevalence of some organisms, sensitivity was easily affected.
Carbapenem Resistance Pattern Using NDM and OXA-48 as Genetic Markers
AST results from BC for the ertapenem and meropenem antibiotics identified 13 E. coli cases as sensitive, 1 as resistant to ertapenem, and the other 2 as resistant to both ertapenem and meropenem, while the OmiX-AMP test detected 12 of the 13 AST-sensitive samples as sensitive, with a sensitivity of 92.3%, and the other case as sensitive. Of the two E. coli cases resistant to both ertapenem and meropenem, one was detected as positive for the NDM marker and the other for the OXA-48 marker in the OmiX-AMP test.
For K. pneumoniae, the OmiX-AMP test detected 11 of the 12 resistant cases, as OXA-48 (10) and NDM (1), with a sensitivity of 91.6% (11/12). Among the four sensitive cases, three were detected as negative for the NDM and OXA-48 markers, with a specificity of 75%, while one K. pneumoniae case sensitive to ertapenem and meropenem was detected as positive for OXA-48 in the OmiX-AMP test.
A. baumannii and P. aeruginosa were the two main nonfermenting gram-negative bacteria (GNB) covered in the OmiX-AMP pathogen panel. Among the 12 A. baumannii positive cases, 8 were resistant to both meropenem and ertapenem, while the OmiX-AMP test showed that only 4 were positive for the NDM marker; the other 4 samples were detected as negative for the NDM and OXA-48 markers, resulting in false negatives. The two P. aeruginosa positive cases detected in both BC and the OmiX-AMP pathogen test were sensitive to the antibiotics, with 100% specificity.
The OmiX-AMP pathogen test detected two of the three monomicrobial Enterococcus spp. positive cases and two of the two polymicrobial infections (detected along with A. baumannii), with a sensitivity of 80%; the remaining case was detected as negative on the OmiX panel, resulting in one false-negative result, and the cases were found sensitive to antibiotics in both tests with 100% specificity. Only one S. aureus case was detected as positive in both BC and the OmiX-AMP pathogen test and was found sensitive to antibiotics, with 100% specificity. Figure 2 represents the overall carbapenem-resistant and -sensitive cases detected in the NH BC and the OmiX-AMP pathogen test. All 24 off-panel organisms identified in BC were detected as negative in the OmiX-AMP sepsis panel.

Flowchart 1: Flowchart of blood culture samples considered for the study
Discussion
This study reports the validation of an in-house developed, room temperature stable OmiX-AMP pathogen detection kit that detects bacterial pathogens in sepsis-related BSI with 95.69% sensitivity and 100% specificity, with 96.5% concordance, and was able to generate results in an easy-to-use format with a TAT of 4 hours using OmiX Analysis software. Commercially available molecular-based detection systems include SeptiFast (Roche), Magicplex Sepsis (Seegene), SeptiTest (Molzym), broad-range polymerase chain reaction and electrospray ionization mass spectrometry (PCR/ESI-MS) (IRIDICA), and the film array-based BIOFIRE, to mention a few. SeptiFast claims to detect 25 BSI pathogens with a sensitivity of 68-69% and specificity of 83-93% in 4.5-6 hours. 18 Magicplex technology could detect 27 organisms and the antimicrobial resistance (AMR) markers mecA, vanA, and vanB with a sensitivity of 11-65% and specificity of 77-92%. 19,20 SeptiTest, based on broad-range PCR and sequencing, detects more than 300 BSI-related pathogens with a sensitivity of 37-87% and specificity of 85.5-100%. 21,22 IRIDICA detects more than 700 pathogens along with the mecA, vanA, vanB, and Klebsiella pneumoniae carbapenemase AMR markers, with a sensitivity of 45-83% and a specificity range of 69-94%. 23,24 The Food and Drug Administration-approved BIOFIRE, based on nested multiplex PCR, has a sensitivity of 94.6% and specificity of 100% when only on-panel organisms (24 GNB, gram-positive bacteria, and yeast pathogens, as well as 3 AMR genes) are considered. 25 However, variability in results and the high cost of automation limit their utility as POC diagnostics, especially in emerging markets.
BC results for OmiX on-panel organisms revealed that the GNB members E. coli and K. pneumoniae were the most widespread pathogens (64%; 32 of 50). Seventy-five percent of the detected K. pneumoniae were found resistant to the carbapenem panel, OXA-48 in particular (88%), which correlates with previous studies emphasizing rapidly disseminating carbapenem-resistant E. coli and K. pneumoniae in the population. 26 BC results projected that 70% of the detected A. baumannii were resistant to both ertapenem and meropenem, whereas the OmiX panel detected resistance in only 57% (NDM) of the AMR culture-positive results. This could be due to the stabilized expression of the NDM-2 gene in A. baumannii rather than NDM-1 and OXA-48-mediated resistance. 27 The prevalence of resistance among the pathogens might differ based on the magnitude of the pathogens and the kind of antibiotics being consumed in a particular geographical area. 28 Of the 24 off-panel organisms mentioned in the study, one A. nosocomialis was detected as A. baumannii. This could be because the gene signatures of both organisms display sequence similarity at the genus level, as they both belong to the Acinetobacter calcoaceticus-baumannii complex, which causes nosocomial infections. 29 Apart from age and gender, this study did not include other clinical parameters and severity-of-illness characteristics because this is the initial clinical validation of the in-house developed OmiX-AMP pathogen test. It would be worthwhile to include less-prevalent ICU antibiogram organisms (Burkholderia, Streptococcus, Enterobacter, etc.) and fungal species like Candida in the OmiX-AMP panel.
The OmiX pReP method with a simplified DNA extraction protocol and the OmiX assay with minimal pipetting steps using lyophilized tubes further shortened the time from receipt of sample to result generation to 2 hours. Assay run raw data were analyzed using OmiX Analysis software, which generated data in the form of amplification plots and a self-populated Excel sheet with cycle threshold values and pathogen detection status. Assay run and report generation took around 90 minutes in comparison to the traditional analysis time of 2-3 hours for a normal PCR-based reaction setup. 30 Overall, the cost-effective OmiX-AMP detection platform has the potential for rapid detection of bacterial pathogens in clinical samples with minimal laboratory setup at conventional laboratories in healthcare centers.

Fig. 2: Correlation between resistance and sensitivity for carbapenem markers detected in NH blood culture and OmiX-AMP pathogen test
Conclusion
In conclusion, the developed OmiX-AMP pathogen detection test is a rapid and cost-effective method for identifying the top six BSI-causing bacterial pathogens. With a TAT of 4 hours along with high specificity and sensitivity in detection, the OmiX-AMP pathogen detection test would accelerate the pathogen detection process with minimal laboratory setup. A further challenge will be to detect BSI-related pathogens directly from whole blood, excluding the need for BC.
Brazilian foreign policy towards South America during the Lula administration: caught between South America and Mercosur
The aim of this article is to analyze Brazil’s foreign policy towards the South American region during President Lula’s administration. As such, the article intends to highlight two specific dimensions: the extent to which foreign policy during this period has differed from previous periods and the relative importance granted by Brazilian diplomacy to recent cooperation and integration efforts, more specifically the Unasur and Mercosur. The article argues that the Lula administration has behaved differently from its predecessors by prioritizing the building up of Brazilian leadership in South America on several different fronts, especially by strengthening multilateral institutions in the region.
both cases, these movements have gone hand in hand with efforts to use foreign policy to support national development.
The aim of this article is to analyze Brazil's foreign policy towards South American countries under the government of President Lula (2003-2010). As such, it intends to highlight two specific dimensions: the extent to which foreign policy during this period has differed from that of previous periods, and the relative importance granted by Brazilian diplomacy to recent cooperation and integration efforts, more specifically the Union of South American Nations (Unasur) and Mercosur. The article argues that the Lula administration has behaved differently from its predecessors by prioritizing the building up of Brazilian leadership in South America on several different fronts, especially by strengthening multilateral institutions in the region.
In order to fulfil this aim, the article first investigates continuities and discontinuities in Brazilian foreign policy, paying special attention to the Lula years. Next, it traces Brazil's historic behaviour towards its South American peers, in this case focusing more on the regional policy developed by the Fernando Henrique Cardoso administration. The third part analyzes Brazilian foreign policy as practised by the Lula administration in its relations with South America and especially with regard to Mercosur. Throughout the text, the ideas of foreign policymakers - linked with their interests - are considered as important tools for the analysis.

Continuity and discontinuity in Lula's foreign policy

Brazil's relationship with its neighbours and efforts to build up regional leadership have not been consistent over the last twenty years, with different strategies and priorities gaining precedence during this period. For many years, the overriding paradigm inside Itamaraty has been based on beliefs that would seem to indicate an increasing meeting of minds within Brazilian diplomatic circles and some important signs of continuity in the country's foreign policy. According to Vigevani et al (2008), autonomy and universalism are the two mainstays of Brazilian foreign policy. Here, universalism is meant to express the idea of receptiveness towards all countries, regardless of their geographical location, regime or economic policy, and could be equated with the idea of acting as a global player. Meanwhile, autonomy can be seen as the amount of manoeuvring space a country has in its dealings with other States and in international politics. Underlying both ideas is the belief - shared by Brazilian foreign policymakers over the years - that Brazil is destined to become a major power, allusions to which have been made since the early 1900s. It therefore follows that Brazil should have a special place on the international scene in political and strategic terms (Silva, 1998).
These beliefs are consistent with the presence of a structured diplomatic corps. The historically highly concentrated foreign policymaking process in Brazil, with the presence of Itamaraty as a specialized bureaucracy, has, from a perspective of historical institutionalism, contributed to more consistent behaviour founded on longer-term principles.
Meanwhile, these beliefs also contribute to initiatives towards the region that are inspired by realist assumptions. Lima (1990: 17) argues that countries like Brazil often adopt multifaceted international behavior, seeking to take advantage of what the international system has to offer, while simultaneously spearheading efforts to remodel it with the aim of benefitting southern hemisphere countries and adopting a stance of leadership in the region.
Nonetheless, continuity has to coexist with some discontinuities. The strategies inspired by Hobbes or Grotius, and the quest for greater autonomy in international relations or for leading initiatives representing southern nations are formulated according to: a) the international context; b) the national development strategy; and c) calculations made by foreign policymakers that vary according to their political preferences and perceptions as to what the "national interests" are and other more specific variables.
From the 1990s onwards, explains Lima (2000), as the foreign policy agenda started to gain space in the realm of public policies and attract the interest of different spheres of civil society, Itamaraty's monopoly in policymaking and what could be termed the country's "national interests" started to wane. The opening up of the economy was one factor behind the politicization of foreign policy as a function of the unequal distribution of its costs and gains, while the consolidation of democracy led to discussions in society and different opinions being voiced about what should be on the international agenda. These two processes made room not just for a consolidation of different schools of thought within Itamaraty (also identified with different political groups), but also for the inclusion of players from other state agencies in foreign policy making and implementation. When Lula came into power, the autonomist school of thought gained ground within Itamaraty, and since then it has become the main foreign policymaking group in Brazil. Above all, the autonomists defend a more self-directed and active projection for the country in the international arena. As part of this, these analysts and policymakers are in favour of a reform of international institutions so as to open up a broader international platform for Brazil. Adopting behaviour defined by Lima as soft revisionism, they have political and strategic concerns regarding north-south problems and forge links with other so-called emerging countries with similar traits to Brazil. The main goals are to build up regional leadership and be seen as a global power. The autonomists are largely an offshoot from economic developmentalism. They see integration as a way of gaining access to foreign markets, strengthening the country's bargaining position in international economic negotiations, and projecting Brazilian industry in the region.
This group now coexists with a more recently assembled community with its own foreign policy proposals, which has scant historic ties with the diplomatic classes but which, during the Lula administration and in the process of including new players in foreign policymaking, has set up an important dialogue with Itamaraty and has exerted some influence on foreign policy decisions.¹¹ This force comprises scholars and political leaders, mostly from the Workers' Party (PT). Indeed, when Lula came into office, he broke with the tradition of keeping foreign policymaking within the confines of Itamaraty by inviting Marco Aurélio Garcia, then the PT's Secretary for International Relations, to be his advisor. By so doing, he effectively opened up new spaces for this group to influence policymaking. This new point of view is also expressed in several government agencies.¹²

Based on the understanding that South America has its own identity, this group has prioritized regional integration, which it seeks to develop in the political and social spheres. In this sense, it supports initiatives taken by the region's anti-liberal governments that are designed to bolster their respective countries' development strategies and even their political regimes, and it proposes a kind of tacit solidarity with them. This group also argues that Brazil should be willing to take on a larger share of the costs of regional integration. As far as Mercosur is concerned, they are in favour of strengthening integration in the political, social and cultural spheres.
This position has been influential amongst Itamaraty's autonomists, as it has helped Brazil take a more proactive stance in its cooperation with its neighbours and in accepting the different political positions existing in the region. Nonetheless, when it comes to some topics, Mercosur being a case in point, the influence of one group outweighs the other, leading to results that are often incoherent, such as the weakening of the bloc just when the Mercosur Parliament was created. As Brazil's cooperation with other countries from the region has grown, certain agencies, such as the Ministries of Health, Science & Technology and Education, have become more involved in formulating the country's international cooperation policy, while the Brazilian Development Bank, BNDES, has started lending more abroad.
Unlike the Cardoso administration's foreign policy, autonomy-oriented diplomacy efforts under Lula have sought out more direct strategies for boosting the autonomy of Brazilian actions, while strengthening universalism through south-south cooperation initiatives and in multilateral forums, and strengthening Brazil's proactive role in international politics. With respect to South America, the Lula da Silva administration has demonstrated a political will to increase the level of coordination between the region's countries, with Brazil at the hub.¹³

¹³ LIMA, M.R.S. de - Are Regional Blocs leading from nation states to global governance? A skeptical vision from Latin America. Iberoamericana. Nordic Journal of Latin American and Caribbean Studies, vol. n. 1, 2007 - mentions the political will of the Lula administration to build up regional integration and notes Brazil's effective leadership in the region, while drawing attention to its limitations.

antecedents of brazil's behavior in the region

Until the 1950s, Brazil channelled most of its dealings with its neighbours through its participation in Panamerican multilateral forums. However, as the 1950s progressed, a new regional identity started to take shape thanks to the developmentalist ideas of the Economic Commission for Latin America and the Caribbean (ECLAC), which also put discussions about regional integration on the agenda. In 1961, Brazil inaugurated its Independent Foreign Policy with more explicit support for the new sub-regional integration initiative, the Latin American Free Trade Area (LAFTA), and a bid to forge closer ties with Argentina through what was called the spirit of Uruguaiana,¹⁴ even if this was never a top priority in Brazil's foreign policy. From 1964 until the end of the following decade, Brazil's approach towards its South American neighbours and regional integration was to give precedence to bilateral agreements and only formal support for joint initiatives.
The rise to power of João Figueiredo in 1979 saw a major shift in the country's foreign policy for the region. The government incorporated into its foreign agenda the idea of a Latin American identity for Brazil by drawing closer ties with the other countries in the continent, and also started to prioritize actions in multilateral forums. The heightened conflict between East and West, the weakening of the Third World on the international scene and the foreign debt crisis combined to bring Brazil closer to its regional peers. The Brazilian government took its first steps towards closer links with Argentina with the following measures: the signing of the Tripartite Agreement on Corpus and Itaipu; the visits by the presidents to their neighbours in 1980; the signing of a nuclear agreement between the two countries; and Brazil's position of partial neutrality during the Falklands War.
But it was in the second half of the 1980s that Brazil made its most notable shift in approach towards the rest of the continent, as the region's countries embarked on new democratization processes. Within this context, the Brazilian government took the important step of signing the Declaration of Iguaçu and launching the Programme for Integration and Economic Cooperation with Argentina. The same period also saw the creation of the Rio Group with the aim of aligning the region's international policies. At this time, Brazil's attitude towards the region was influenced by a combination of domestic factors and positions within the government apparatus, which were instrumental in the move towards integration with Argentina along heterodox economic lines. The mechanisms designed to address the economic crisis triggered by the foreign debt problem, the need to update the country's production sector, and the consolidation of democracy were drivers for this rapprochement.
The turn of the 1990s saw major changes in the international scenario and inside Brazil. The foreign policy of forging bonds and integration with its neighbours became a priority for Brazil, and since that period the Brazilian government has put forward a number of initiatives in this area, the most ambitious being Mercosur. The demise of the model of economic development based on import substitution and the financial problems brought about by the foreign debt crisis led the Brazilian government to set about redefining its development project. The fact that two liberal governments were in power concomitantly in Brazil and Argentina took the integration process, launched in 1985, down a more liberal path: the trade dimension of Mercosur gained force and the process took on the features of open regionalism.
From an economic perspective, Mercosur was seen by the government and government agencies as the first step towards a customs union, which was in line with its development strategy as it would help achieve economies of scale, with greater comparative advantages and efficiency in production. The government then started to negotiate the formation of a common external tariff. Meanwhile, Mercosur could also boost foreign trade and operate as a magnet for attracting foreign private investments, since it was an integration project that was nonetheless open to foreign trade. Politically speaking, Mercosur could also reinforce Brazil's bargaining position, lending it weight in the international arena. An intergovernmental institutional model was adopted in order to maintain autonomy in foreign and macroeconomic policy decision making.
The arrival in power of Itamar Franco put the brakes on the growth of liberalism in Brazil and opened up new space in Itamaraty for autonomist players. In terms of economic cooperation, his government gave greater priority to creating a future South American Free Trade Area than it gave to Mercosur. With South American integration under Brazilian leadership raised to the top of the agenda, the autonomists sought to expand the bloc by opening the doors to new countries and pushing for the formation of a free trade area across the whole continent. In the meantime, Mercosur could still serve to give Brazil some regional leverage and could be a helpful element in the formation of such a free trade area. However, this project failed to get off the drawing board, while the Mercosur integration project gained ground. Even so, it was during Franco's administration that Brazil started to conceive of South America as something different from Latin America in its foreign policy.
During the tenures of Fernando Henrique Cardoso, Brazilian diplomacy, which had until then been marked by the ideas of the pragmatic institutionalists, started to perceive the importance of South American partners for strengthening Brazil's position as a global player and negotiator in multilateral forums, and as a space for expanding Brazilian development. Diplomats started to review traditional attitudes towards the region based on the idea of nonintervention, and strove to establish leadership in the area by striking a balance between integration, regional security, democratic stability and infrastructure development (Villa, 2004). Brazilian diplomacy also started to take a stance whenever a democratic regime came under threat.
Meanwhile, the first steps were taken to build up a community of countries in the region. In 2000, with the weakening of Mercosur as a result of the exchange rate crisis of 1999, the first meeting of South American countries was held in Brasília with a view to forming the South American Community of Nations (SACN). The meeting's agenda was dominated by discussions about economic integration, infrastructure and the strengthening of democratic regimes. Brazil's energy system was reoriented towards the region and infrastructure integration projects were designed that signalled the way towards the Initiative for the Integration of Regional Infrastructure in South America (IIRSA).
As regards Mercosur, there was a growing movement within Itamaraty that defended its development based on an incomplete customs union, on limiting political integration and on a low institutional profile, which would bolster Brazil's international position while avoiding the strict commitments required for a common market or any supranational traits (VIGEVANI et al, 2008). From this time on, trade integration took on a key role within the framework of open regionalism, while institutionalizing the bloc was not deemed relevant. Politically speaking, Mercosur was seen as useful in strengthening Brazil's negotiating clout, lending it weight in the international arena. Despite some friction inside the bloc about the common external tariff, parallel trade negotiations were held with the EU and for the formation of the Free Trade Area of the Americas (FTAA) under the bloc's new legal personality, instituted at the end of 1994.
The prospect of an alliance with Argentina concerning the regional policies implemented by Itamaraty was halted by a consensus amongst diplomats and other sectors of Brazil's bureaucratic apparatus: Brazilian foreign policy would be an area of national sovereignty.¹⁵ To compound matters, Brazilian diplomacy started to see Argentina as a lesser partner, and its frequent changes of foreign policy only served to raise suspicions. It was neither clear what weight each country should have in the alliance nor to what extent Brazil would be an ally of Argentina's or would act as the bloc's paymaster.
In practice, however, efforts were made in the regional ambit to develop common positions with Argentina on topics concerning South America where they had previously held different positions. In Mercosur, the signing of the Ushuaia Protocol was an important step. In this process, Mercosur took a priority position in Brazil's foreign policy for the region, and integration with other South American countries was relegated to a complementary level, with Mercosur at the hub.
In 1999, Mercosur went through a serious crisis when Brazil devalued its currency, which had serious knock-on effects for the Argentine economy. Brazil considered the devaluation to be a matter of national sovereignty over economic policy and failed to consult the other members of the bloc in advance. The devaluation had a strong impact on Argentina's Convertibility Plan, and the Menem administration reacted by imposing customs barriers on Brazilian products. While Menem's successor, De la Rúa, was in power, Brazil again started to play up its relations with its South American neighbours, while putting Mercosur on the back burner in response to the perceived fragility and unpredictability of the Argentine administration.

¹⁵ Argentina's decision to align itself with the USA during the period also made further articulations in this area impossible.
Ultimately, it was the 2001 crisis in Argentina that gave the bloc a new lease of life. Brazil chose to give the country political support, aligning itself as an ally within the Mercosur framework. During the last year of the Cardoso administration, which overlapped with President Duhalde's term in Argentina, the countries drew closer again in response to the important role played by Brazil during the Argentine crisis. The Brazilian government restated and elucidated its support for its neighbour and for Mercosur trade negotiations, which helped bolster Argentina's position in the eyes of countries from outside the continent.
building up brazilian leadership during the lula administration

Brazil's foreign policy for South America underwent some changes during the Lula da Silva administration. The period was marked by the rise of the autonomists inside Itamaraty. But alongside the traditionally central role played by Itamaraty in foreign policymaking, this policy was also influenced by a more politically and academically inclined group which, as mentioned earlier, defended stronger political and social integration based on the perception of a certain compatibility between the countries' values, real mutual advantages to be reaped, and a relatively common identity across the continent.
The convergence, or in some instances the mere coexistence, of these two viewpoints meant that the region was perceived differently from how it had been during the previous administration, and also opened up space for a new attitude by Brazil's diplomats towards the building up of Brazilian leadership, by pursuing new forms of cooperation and integration with neighbouring countries, and also towards Mercosur (which in this case lost ground). This movement instigated by the Brazilian government incorporated both the Hobbesian and Grotian dimensions of realism.
The globalized international scenario, a more multipolar international system with the rise of new players after 9/11, and greater fragmentation as of the 2008 crisis paved the way for the rise of Brazil. New spaces became available for it to take a more proactive stance. In the US, the Bush government gave up once and for all on a Panamerican policy for Latin America after 2001, and there has been no specific policy for the region since Obama came to power. In South America, liberalism has lost ground since the early 2000s as new anti-liberal governments have been elected, reinforcing this overall trend. This external scenario has also been propitious for Brazil's revised approach to the region. Cervo (2008) identified the Lula administration's attitude towards South America during this period as being characteristic of a "logistic State", which takes on an important role in orienting and supporting the domestic economy and society in its dealings with the rest of the region. This pattern of behaviour in its interaction with its neighbours is propitious for South American integration.
The importance of the South American dimension
When Lula da Silva came to power, increased coordination between South American countries under Brazilian leadership started to be a political priority. Integration with its neighbours was seen as the surest route for Brazil to gain international standing, while also helping Brazil realize its potential and form a bloc that was strong enough to have more international clout. With this in mind, Brazilian diplomacy set about further developing an approach that had already begun under President Cardoso, while giving new weight to leadership building through a combination of soft power patterns, based on Grotian realism, which took the form of strengthened multilateralism in the region. Brazil reinstated and adjusted the principle of non-intervention in the form of "non-indifference",¹⁶ and included in its agenda a regional leadership construction programme by coordinating regional cooperation and integration efforts with an eye to boosting Brazilian development.
The strategy to consolidate the SACN was an important ingredient in this project. Once Lula was elected, Brazilian diplomacy focused more directly on its institutionalization, which was formalized in 2004. At the 1st Meeting of Presidents and Heads of Government of SACN countries in 2005, the group's agenda gave priority to addressing asymmetries, and also included talks on a broad range of topics, including political dialogue, physical integration, the environment, energy integration, South American financial mechanisms, asymmetries, the promotion of social cohesion, social inclusion and social justice, and telecommunications. This reflects the broadening scope of technical and financial cooperation initiatives with countries from the region.
In 2008 the SACN was succeeded by Unasur in response to pressure from Venezuela. The approach within Unasur is more one of cooperation than of traditional integration, but it has become increasingly consistent and has been important in responding to situations of crisis in the continent. For the Brazilian government, the organization has become its main channel for multilateral action. For one thing, it is strictly intergovernmental and has a very limited institutional framework, which assures Brazil a good level of autonomy from the other members and in its relations with countries outside the region. It is also an important mechanism that highlights the political dimension of Brazilian policy for the region and through which Brazil's diplomats have operated in their quest to build up common positions with its neighbours in response to situations of crisis, while striving to hold onto a leading position inside it. Economically speaking, as it has no specific regional integration commitments, it can accommodate different sub-regional initiatives like Mercosur and the Andean Community. In strategic terms, the South American Defence Council was recently formed on the initiative of the Brazilian government.

¹⁶ In the words of Celso Amorim - A política externa do governo Lula: os dois primeiros anos. Rio de Janeiro: Observatório de Política Sul-Americana/Iuperj. (Análise de conjuntura n. 4). [http://obsevatorio.iuperj.br/analises.php] Accessed: 01/03/2010 - "Brazil has always taken the stance of non-intervention in the domestic affairs of other States [...]. But non-intervention cannot mean a lack of interest. In other words, the precept of non-intervention should be seen in the light of another precept, based on solidarity: that of non-indifference."
The autonomists, who defend developmentalist thinking, see integration and cooperation with other countries in the region as a tool for gaining access to foreign markets, encouraging transformations in and enhanced efficiency of domestic production systems, and as an instrument that can strengthen the country in international economic negotiations. It also has the potential to open up new prospects for Brazil's industry, in that it can take advantage of any gaps in its neighbours' production systems. The National Defence Strategy presented by the Lula government puts particular weight on the development of Brazil's defence industry.
Under President Lula, Brazil has added a complex cooperation structure with other South American countries to its overall foreign policy agenda. While in its dealings with emerging countries from other parts of the world it has focused on technology exchange and joint actions at multilateral forums, in its dealings with its South American peers it has given priority to technical and financial cooperation, bilateralism, and "non-indifference".
Brazil's efforts to build up its leadership in South America have been particularly marked by this second form of cooperation. One important indicator of Brazil's regional position is its level of technical and financial cooperation with its neighbours. In South America, Brazil has funded infrastructure projects, engaged in technical cooperation initiatives, shown a preference for bilateral relations and relativized the concept of non-intervention. On the financial front, BNDES has started lending money for infrastructure projects in other countries in the continent that are being conducted by Brazilian enterprise. During the period the IIRSA has become increasingly important in raising funds for regional infrastructure.¹⁷ Technical cooperation in some sectors is starting to be introduced bilaterally via the countries' respective Ministries of Education, Science & Technology and Health. These initiatives effectively work as foreign policy tools, but their formulation is decentralized.
Nonetheless, Brazil's foreign policy stance in the region has not been free of tension. With the rise in nationalistic sentiments in some governments as they realign their domestic agendas, some of Brazil's neighbours have challenged its position and demanded economic concessions. The nationalization of oil and gas by the Bolivian government was a blow to the Brazilian government. The pressure exerted by Fernando Lugo's administration to reform the Itaipu Treaty is starting to bear fruit, even if only to some extent for the moment. There are widespread calls for Brazil to act as regional paymaster.
In response, Brazil has taken some major steps internally in order to obtain greater political support for its regional leadership project, which can be seen in the formation of a coalition that is more favourably disposed towards Brazil's taking on some of the costs of South American integration. The debate is now public and the association between Brazilian leadership and its costs is clear to members of government agencies. The country is slowly but surely becoming the region's de facto paymaster, despite facing some resistance at home. Thinkers from the group previously identified with academic and political arenas have also had some influence on this overall move, expounding the idea that cooperation is positive, encouraging efforts to build up a South American identity, and bolstering initiatives to bring the country closer to other governments that are also identified as being progressive.
Another significant yet little discussed element in the agenda is Venezuela and the Bolivarian Alliance for the Americas (ALBA). According to Marco Aurélio Garcia, President Chávez "is a sincere man of exceptional will who has grasped the problems of Venezuelan society"; he also defends close ties with the neighbouring country.¹⁸ Garcia goes on to argue that "there exists greater solidarity between Brazil and its neighbours. We do not want the country to be an island of prosperity in the midst of a world of paupers. We do have to help them. This is a pragmatic view." In the eyes of international players outside the region like the EU, Brazil could be seen as the "natural leader of South America" with the means to buffer the moves made by Chávez in Venezuela and bolster stability in the region (Gratius 2008, 116). But Brazil's autonomous foreign policy stance prevents it from playing such a role. While Venezuela's regional integration moves (ALBA) may be different from and compete with the integration model championed by Brazil, it is nonetheless important for it to be kept within the regional frameworks.
Finally, when it comes to the USA, Brazil has maintained its autonomy on the issues of the South American continent. There is no consensus between the two countries as to how to deal with these topics and no prospect of building up any coordinated action. The negotiations towards the formation of the FTAA were effectively blocked and ended in failure. Brazil's more autonomous involvement in international politics and its reformist trends have created new points of friction between the two countries, which are addressed with a low political profile.
The relative weakening of Mercosur
As regards Mercosur, the behaviour of the Lula administration is symptomatic of the coexistence of the two broad influences on the country's foreign policy. For their part, the autonomists aim to achieve South American integration under Brazilian leadership, for which purpose they are pushing for the expansion of Mercosur through the entry of new states or the formation of Unasur. Those that defend this position see Mercosur as capable of leveraging Brazil's regional standing and opening the way for the formation of a free trade area in the region.¹⁹ The signing of agreements with the Andean Community and the process of admitting Venezuela as a full member are indicative of this.
Meanwhile, the open regionalism and trade-oriented nature of Mercosur have their critics. In a publication from 2006, the then Secretary-General of Itamaraty, Samuel Pinheiro Guimarães, comments: "the shortsightedness of Brazil's strategy in abandoning the model of political cooperation between Brazil and Argentina and exchanging it for the neoliberal model of integration around trade extolled in the Treaty of Asuncion has been notable" (2006, 357 cited in Vigevani and Ramanzini, 2009, 24).²⁰ In the same work, Guimarães criticizes the waning importance being given to "development" in the bloc's framework. The current administration has striven to maintain an economic balance within Mercosur, giving precedence to Brazilian infrastructure development and industry projects.
Those players who are aligned with the PT are more likely to defend greater political and social integration. Although their influence in government is more limited, their presence is still felt and they have gained ground. To overcome the institutional deficit, the Permanent Review Tribunal came into effect and the Commission of Permanent Representatives was created, with a more technical bias for the bloc's Secretariat being discussed. Finally, in 2006, the Mercosur Parliament was created, albeit with no legislature. The creation of the Mercosur Fund for Structural Convergence (Focem) was a step towards Brazil's officially taking on the role as the bloc's paymaster. However, the Brazilian government is still strongly biased in favour of pursuing bilateral initiatives in the realm of cooperation, and these far outweigh any influence Focem might have when it comes to Brazil's relations with its neighbours and even with other members of Mercosur, such as Paraguay.²¹

However, the current scenario has not helped much in the way of strengthening this group's influence on Brazilian strategy making. Though some parts of the government defend an alliance with Argentina, the greater weight the Brazilian government has placed on South America does not make it likely.²² Meanwhile, the Brazil/Argentina axis, which is the political cornerstone of Mercosur, is facing some problems of its own. While one might have expected the election of Lula and Néstor Kirchner to have made way for a more robust political partnership between the two countries, it has actually been somewhat eroded by a combination of other factors.
Politically speaking, Brazilian government investments in South American integration and in pursuing its regional leadership agenda have been one priority in its foreign affairs. This has been received badly by the Argentine government, causing some sectors of the country's diplomacy close to former President Kirchner to turn to Venezuela in a bid to counterbalance this putative leadership. Meanwhile, it has been hard to discern any clear longer-term objectives for the region in the foreign policy developed first by Néstor Kirchner and then by Cristina Kirchner, which leaves little hope for any bolstering of the alliance.
When it comes to economic policy, Kirchner's strategy is neodevelopmentalist, with the aim of establishing a more active policy designed to reorganize the country's industry, but this has clashed directly with Brazil's consolidated industrial policy and the expansion prospects for Brazilian businesses in the region. The corollary of this is that Argentina has shifted in its attitude towards Mercosur, breaking some of the terms of the free trade area and the common external tariff. This change of behaviour has eroded the confidence Brazilian government agencies and export agents had in the Argentine market, and trade with the country has diminished in relative terms in the Brazilian trade balance.
The trade agreement prospects for Mercosur have also proved limited. Only one agreement was signed recently, between Mercosur and Israel. But if the possibility of joint economic negotiations with international partners was originally an important factor, Brazil's growth has not been matched by its Mercosur partners. According to some private economic players, Brazil's Mercosur partners do not have much of a say in these negotiations.²³ When it comes to the agreement between the EU and Mercosur, the negotiations are still underway but with negligible results thus far. A "strategic partnership" has been signed by Brazil and the EU outside the ambit of Mercosur, which implicitly undermines the interregional effort and consequently the agreement between the EU and Mercosur as the default forum for political dialogue and cooperation.
Finally, the strengthening of the Brazilian economy and the country's growing international presence have opened up new arenas for Brazilian diplomacy - the IBSA Dialogue Forum, the BRIC nations, etc. - while Argentina has been left behind. Brazil has been active in a number of multilateral forums without any kind of recourse to its southern neighbour. The countries' nuclear cooperation agreement is losing ground as Brazil sets its sights higher.
In general terms, it is the autonomists' view that has set the course for diplomacy in realist terms. The South American perspective, combined with the country's international projection, has gained precedence and is being pursued independently of Mercosur. Although not acknowledged by Brazilian diplomacy, the partnership between Brazil and Argentina has in practice ceased to be a priority for Brazil in its foreign policy.
Despite the diplomatic limitations, there are important gains that have been reached in terms of integration, partly within Mercosur but primarily between Brazil and Argentina. At the end of the government, a new Mercosur customs code was signed and double taxation came to an end as part of the customs union; these will be introduced during the next government. The Mercosur Fund for Structural Convergence has also seen some progress.
Above and beyond the Mercosur Parliament (with all its limitations), the degree of cooperation between different ministries working in the realms of education, culture, energy and labour on both sides of the border has grown during the Lula years. Integration is starting to make sense on a societal level thanks to initiatives taken by different government agents, expressing the incorporation of new players in foreign policymaking in the Lula government.
conclusion
In the current scenario, it is a priority to open up and consolidate room for cooperation and integration within South America, and there are some elements that are clearly beneficial for this process.
The Brazilian government has clearly set its sights on making Brazil a regional leader. The country's growing international presence has helped strengthen its regional standing, although growth in one sphere does not necessarily lead to growth in the other. As for Brazil's foreign policy for the region, the autonomist school of thought has gained precedence inside Itamaraty and other government agencies. The scenario in the continent has proved favourable in the sense that several progressive governments working within different frameworks and alliances have come to power, and certain inter- and intra-state crises have come to a head. The building up of this leadership and the model of cooperation and integration being pursued are in tune with the three pillars of Brazil's foreign policy: autonomy, universalism and growth for the country in the international sphere. This logistic State, as defined by Cervo (2008), has put its diplomats and government agencies at the service of its drive to draw closer ties with its neighbours both politically and economically and through technical and scientific cooperation.
Meanwhile, the scenario within Mercosur is far from propitious. Brazil's trade relations with Argentina have seen a number of setbacks, causing certain Brazilian sectors to speak out against the bloc and fuelling the position of those that prioritize South American integration in strategy formulation. It has proved harder to make progress inside the bloc than on a broader regional level. Brazil's belief in autonomy, universalism and its destiny as a global power has received such attention under the Lula administration that Argentina has reacted with some mistrust. Indeed, Brazil's newfound international standing, while drawing interest to the region, has ultimately eroded the partnership between the two countries.
Finally, this analysis of the Lula government's foreign policy towards South America would generally confirm that Brazil's attitude towards the rest of the region is underpinned by a strong belief in autonomy, universalism and the country's destiny as a global power. However, it also highlights a lack of continuity in the international and South American scenarios, in the political options available, and in the foreign policy strategies and their outcomes.
On the Zero-Bias Anomaly in Quantum Wires
Undoped GaAs/AlGaAs heterostructures have been used to fabricate quantum wires in which the average impurity separation is greater than the device size. We compare the behavior of the Zero-Bias Anomaly against predictions from Kondo and spin polarization models. Both theories display shortcomings, the most dramatic of which are the linear electron-density dependence of the Zero-Bias Anomaly spin-splitting at fixed magnetic field B and the suppression of the Zeeman effect at pinch-off.
Systematically studying the ZBA in modulation-doped 2DEGs has proven difficult because of the large variability of its characteristics from device to device [20,21], probably due to the randomly fluctuating background potential caused by the ionized dopants, significant even with the use of large (≥75 nm) spacer layers. This disorder is so pervasive that one can be led to wonder whether the ZBA always results from interactions between conduction electrons and a random localized state near the 1D channel. However, disorder can be dramatically reduced in undoped GaAs/AlGaAs heterostructures where an external electric field (via a voltage V_top on a metal top gate) electrostatically induces the 2DEG [22,23]. Figure 1(a) shows the advantages of this technique, particularly at low carrier densities (see also Fig. 3 in Ref. [22]), a regime most relevant for the ZBA. In this Letter, we report on the study of the ZBA in ten quantum wires fabricated in undoped GaAs/AlGaAs heterostructures. We demonstrate that an unsplit ZBA does not result from interactions between conduction electrons and a random localized state near the 1D channel: it is a fundamental property of 1D channels, in disagreement with spin polarization models. Another inconsistency is a suppression of the Zeeman effect at pinch-off. In disagreement with Kondo theory, we observe a non-monotonic increase of the Kondo temperature T_K with V_gate, and a linear peak-splitting of the ZBA with V_gate at a fixed B.
The two wafers primarily used in this study, T622 (T623) with a 317 (117) nm deep 2DEG, were grown by molecular beam epitaxy and consisted of: a 17 nm GaAs cap, 300 (100) nm of Al0.33Ga0.67As/GaAs, 1 µm of GaAs, and a 1 µm superlattice with a 5 nm Al0.33Ga0.67As/5 nm GaAs period. No layer was intentionally doped. For T622, n_2D = (0.275 V_top/V − 0.315) × 10¹¹ cm⁻². Figure 1(a) shows the mobility µ versus the 2D sheet carrier density n_2D for T622; wafer T623 has slightly higher mobilities, e.g. 1.7×10⁶ cm²/Vs versus 1.6×10⁶ cm²/Vs at 5×10¹⁰ cm⁻². Using Matthiessen's rule far from the localization regime, the experimental data is fit to standard models of scattering times, 1/τ_total = Σ_j 1/τ_j [24,25]. The dominant sources of scattering in our system (analyzed in Ref. [23]) are charged background impurities and interface roughness, from which we extracted the background impurity concentration N_B = 1.25 × 10¹⁴ cm⁻³. Intersecting the background impurity potential with a 2DEG wavefunction of width λ ≤ 20 nm yields a minimum average distance between scattering centers D = 0.6 µm in wafer T622. A similar number is found for wafer T623. Ten quantum wires, labeled (i)-(x) throughout this paper (seven from T622 and three from T623), were measured in two dilution refrigerators (with base electron temperature 60 mK and ∼150 mK), using standard lockin techniques and varying T, B, V_sd, and n_2D. Following a mesa etch, recessed ohmic contacts (Ni/AuGe/Ni/Ti/Pt) were deposited and annealed [26]. A voltage V_gate can be applied to surface Ti/Au split gates of length L = 400 nm with width W = 700 (400) nm on T622 (T623). Polyimide insulated the inducing Ti/Au top gate from other gates and ohmic contacts.
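As a rough illustration of how independent scattering mechanisms combine under Matthiessen's rule, the sketch below adds two partial mobility contributions in inverse; the specific partial values are illustrative placeholders, not the fitted parameters of this work.

```python
# Minimal sketch of a Matthiessen's-rule combination of scattering
# contributions. The partial mobilities below are illustrative placeholders.
E_CHARGE = 1.602e-19          # elementary charge, C
M_EFF = 0.067 * 9.109e-31     # GaAs effective mass, kg

def total_mobility(partial_mobilities_cm2_per_Vs):
    """1/mu_tot = sum_j 1/mu_j, equivalent to 1/tau_tot = sum_j 1/tau_j
    since mu = e*tau/m*."""
    return 1.0 / sum(1.0 / mu for mu in partial_mobilities_cm2_per_Vs)

# Hypothetical contributions limited by background impurities and
# interface roughness, respectively (cm^2/Vs).
mu_background = 2.5e6
mu_roughness = 4.5e6

mu_tot = total_mobility([mu_background, mu_roughness])
tau_tot = mu_tot * 1e-4 * M_EFF / E_CHARGE    # cm^2/Vs -> m^2/Vs, then tau = mu*m*/e
print(f"mu_tot ~ {mu_tot:.2e} cm^2/Vs, tau_tot ~ {tau_tot:.1e} s")
```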
Although the average distance between impurities is D ≥ 0.6 µm, their distribution is not uniform. In analogy to mean-free-path calculations, the probability P of finding an impurity within a 1D channel of length L is P = 1 − e^(−L/D) ∼ 50%. For G ≤ 0.8 G_0, an unsplit, symmetric ZBA was observed in all ten devices. Figure 2(a) shows the ZBA in eight of these. It is thus unlikely (of order ∏_{j=1}^{10} P_j ≪ 1%) that all such occurrences were the result of interactions between conduction electrons and some localized state near the 1D channel.
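A minimal numeric check of these impurity statistics, assuming P = 1 − exp(−L/D) and treating the ten devices as independent:

```python
import math

# Channel length and mean impurity spacing quoted in the text, in microns.
L = 0.4
D = 0.6

P_single = 1.0 - math.exp(-L / D)   # one device contains an impurity
P_all_ten = P_single ** 10          # all ten devices contain an impurity
print(f"P(single device) ~ {P_single:.2f}")    # ~0.49, i.e. ~50%
print(f"P(all ten devices) ~ {P_all_ten:.1e}") # ~8e-4, i.e. << 1%
```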
Defining G_max as the maximum conductance achieved at base T, V_sd = 0, and B = 0 for each value of V_gate, Fig. 2(b) shows that G_max increases monotonically with V_gate (as in all our devices). Defining ∆h_ZBA as G_max minus the average conductance of the local minima on the rhs and lhs of the ZBA, Fig. 2(c) shows that ∆h_ZBA decreases as T increases for all V_gate, as would be expected from Kondo physics. As T increases, a local minimum near G_max ≈ 0.75 G_0 becomes more pronounced. In a previous study on doped quantum wires (see Fig. 6 in Ref. [19]), similar plots of ∆h_ZBA also showed a local minimum near G_max ≈ 0.75 G_0. Figure 2(c) links its appearance to the formation of the 0.7 structure.
Varying n_2D affects the Fermi energy of electrons entering the 1D channel from the 2D leads, as well as the 1D confinement potential [e.g. increasing V_top from 4 V in Fig. 3(a) to 7 V in Fig. 3(b), the energy-level spacing between the first two 1D subbands increases from 0.6 to 0.8 meV]. Figure 3(c) shows no clear trend for ∆h_ZBA with increasing n_2D, but the minimum near G_max ≈ 0.75 G_0 remains present in all curves. In the Kondo formalism [Fig. 3(d)], a specific T_K is associated with each V_gate, and the full width at half maximum (fwhm) of the ZBA should scale linearly either with its T_K if T_K > T, or with T if T > T_K [16,27]. For G_max ≥ 0.9 G_0 in Fig. 3(f), we do not use the fwhm as it is difficult to distinguish the ZBA unambiguously from the bell-shape traces of G just below a plateau (see Fig. 6 in Ref. [28]). For G_max < 0.7 G_0 at V_top = 4 V, the fwhm remain essentially flat: T > T_K. For 0.5 G_0 < G_max < 0.7 G_0, increasing n_2D appears to increase T_K beyond T ≈ 150 mK. An upper limit of T_K < fwhm/k_B at each V_gate can be estimated [17]. In most devices, regardless of whether the 0.7 structure is visible or not, the fwhm has a local minimum near G_max ≈ 0.75 G_0. Identical minima are also observed in doped GaAs quantum wires (see Fig. 3 in Ref. [11]) and in GaN quantum wires (see Fig. 4 in Ref. [29]). Near G_max ≈ 0.75 G_0, we interpret the fwhm minimum to indicate a suppression of Kondo interactions, leading to a non-monotonic increase of T_K(V_gate) from pinch-off to 2e²/h, in direct contradiction to 1D Kondo theory [12]. Kondo theory also predicts that fwhm(T_K1) will increase more than fwhm(T_K2) as T increases [i.e. ∆1 > ∆2].

Figures 4(a)-(c) show how the ZBA spin-splits at low B. At a fixed B, the peak-to-peak separation ∆V_p-p increases almost linearly with V_gate [Fig. 4(g)]. In an in-plane B, the pinch-off voltage can change due to diamagnetic shift [30], making V_gate an unreliable marker. However, G(|V_sd| > 0.25 mV) is mostly insensitive to B, while the ZBA changes significantly. Thus, the linear relation ∆V_p-p = αB is fit to the red points in Fig. 4(f) [obtained by fitting asymmetric gaussians to Fig. 4(e)].
At finite B, the ZBA in quantum dots splits into two peaks [16], whose peak-to-peak separation e∆V_p-p = 2g*µ_B B is a defining characteristic of the Kondo effect [14], where µ_B is the Bohr magneton and g* the effective Landé g factor. Figure 4(d) illustrates three distinct regimes one would expect from the singlet Kondo effect at fixed B and T [31,32]. In the topmost traces, k_B T_K > g*µ_B B > k_B T: spin-splitting cannot be resolved. In the middle traces, g*µ_B B > k_B T_K > k_B T: the linewidth of each split peak is narrow enough to make the splitting visible. In the bottom traces, g*µ_B B > k_B T > k_B T_K: the split peaks shrink but their splitting should remain constant as long as they are still resolvable. However, in our quantum wires, this is clearly not the case. The variation of ∆V_p-p = αB with V_gate in Fig. 4(b)-(c) cannot be reconciled with singlet Kondo physics.
In quantum dots, the ZBA splitting can vary with V_gate for B ≥ 0 (Fig. 4 in Ref. [33], Fig. 3 in Ref. [34]) from the competition between the Kondo effect and the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction between two localized spins [35]. Although two such localized spins are predicted to form in quantum wires near pinch-off [10,13] and these could explain the behavior observed in Figs. 4(b)-(c), this scenario would also require the ZBA to be split at B = 0. This is not the case [Figs. 2(b), 3(a)-(b), 4(a), 5(c)]: the two-impurity Kondo model is not applicable.
In spin-polarization models [5,6,7,8,9,10], the energy difference between spin-up and spin-down electrons, ∆E_↑↓ = gµ_B B + E_ex(n_1D), includes E_ex, an exchange-enhanced spin splitting that could account for previous observations of an enhanced g factor above the value |g| = 0.44 of bulk GaAs [4]. Neglecting correlation effects, the bare exchange energy in 1D scales linearly with n_1D. Assuming n_1D ∝ V_gate, the almost linear splitting of the ZBA is consistent with a density-dependent spin polarization. However, this scenario would also require that the minimum value of eα be the bare Zeeman energy gµ_B = 25 µeV/T. This is not what we observe: eα < 16 µeV/T in Fig. 4(e). Instead, we find ∆E_↑↓ = g*(n_1D)µ_B B, where 0.27 < g*(n_1D) < 1.5 [Fig. 4(f)]. The Zeeman effect can be suppressed (g* ∼ 0.2) if a 2DEG significantly penetrates into the AlGaAs barriers [36], at high n_2D, or if the 2DEG is close to the surface. Neither situation applies to our devices. The suppression of the bare Zeeman effect at pinch-off in our quantum wires is not consistent with spin polarization models.
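For orientation, a quick numeric check of the Zeeman scales quoted above, using the bulk GaAs value |g| = 0.44; the field value chosen below is only an illustrative example.

```python
# Numeric check of the Zeeman energy scales.
MU_B_UEV_PER_T = 57.88     # Bohr magneton in ueV/T
g_bulk = 0.44              # bulk GaAs g factor

bare_zeeman_per_tesla = g_bulk * MU_B_UEV_PER_T     # ~25 ueV/T, as quoted
B = 5.0                                             # tesla, illustrative value
kondo_splitting = 2 * g_bulk * MU_B_UEV_PER_T * B   # e*dV_pp = 2 g* mu_B B
print(f"g*mu_B     ~ {bare_zeeman_per_tesla:.1f} ueV/T")
print(f"2 g*mu_B*B ~ {kondo_splitting:.0f} ueV at B = {B} T")
```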
Despite their exceptional device-to-device reproducibility (compared with doped wires), undoped quantum wires are not free from disorder [Fig. 5(b)]. The apparent splitting for G ≥ 0.8 G_0 in some of our devices [Fig. 5(c)] is not due to spontaneous spin-splitting or RKKY vs. Kondo interactions, but rather to resonant backscattering or length resonances [37]. By increasing the 2D density (and thus long-range screening), many disorder-related effects can be minimized.
In summary, we provide compelling evidence for the ZBA to be a fundamental property of quantum wires. Its continued presence from G ∼ 2e²/h down to G ∼ (2e²/h) × 10⁻⁵ suggests it is a different phenomenon to the 0.7 structure, as proposed in [18,19]. Both 1D Kondo physics and spin polarization models fall short of accurately predicting experimental observations. For 1D Kondo physics models, these are: (i) a non-monotonic increase of T_K with V_gate, (ii) the fwhm of the ZBA not scaling with max[T, T_K] as T increases, and (iii) a linear peak-splitting of the ZBA with V_gate at fixed B. Spin polarization models can account neither for the occurrence of the ZBA nor for the suppression of the bare Zeeman effect at pinch-off. It is hoped that further refinements in theory will account for these observations.
Building an e-Commerce Infrastructure in Jordan: Challenges and Requirements
—Many countries around the world are trying to build and enhance their Internet infrastructure and to utilize services related to the Internet, such as e-Commerce, information connectivity, accessibility, etc. However, studies have indicated that network and hardware requirements are not always the major barrier to progress towards these goals. In some cases, cultural, legal or environmental factors may dominate the barriers to the expansion of Internet-related services in many countries around the world. This paper presents challenges and requirements for the enhancement of e-Commerce services, in particular for Jordan.
INTRODUCTION
The revolution of the Internet, which connects millions of people as well as millions of computers, has played a significant role in commerce and business around the world. These types of communication are increasing day after day and are used in many sectors, especially in conducting business on the Internet, because the Internet reduces process costs, allows more work to be accomplished without increasing costs, and improves the quality of services. This does not mean that every online business will succeed: the Internet and new technology also encourage and help hackers and online criminals to attack and disrupt any kind of business, which raises security issues, and hackers might access and steal customers' personal and financial information, which raises privacy issues. In addition, because the Internet gives people the ability to access any website from anywhere, people can search for better products or services, and this increases the competition among companies. e-Commerce websites allow financial transactions to be executed over the Internet. This completes the full cycle of buying or selling an item (i.e. shopping, selecting, paying, shipping, and receiving). e-Commerce websites can work as virtual stores only or can augment physical retail operations. e-Commerce websites can also be built to sell services or items that do not require shipping, such as tickets, calling cards, stock trading, subscriptions, and memberships.
The ability for e-Commerce websites to complete the transaction online is very important. The ability to complete this transaction in a secure manner is even more important in order to raise customers' level of confidence in online shopping as a competing alternative to regular shopping. A secure e-Commerce website can provide businesses with powerful, user-driven advantages, including increased online retail sales, as well as streamlined application processes for products such as insurance or credit cards.
Digital identification is increasingly evolving in use and importance as a method to safely identify humans or entities, especially in online business transactions. Throughout history, several techniques have been used to uniquely identify individuals. To date, written signatures remain significant and are required to verify that the person filling in an application, for example, is the same person he or she claims to be. Biometric signatures such as fingerprints, iris patterns of the eyes, retina scans, DNA, voice prints, facial features, etc. are important and are able to uniquely identify an individual. However, none of those signatures can be conveniently implemented online to complete a transaction in a short time at a reasonable cost.
Public Key Infrastructure (PKI) is a set of technologies and security policies used to issue, revoke, and manage digital certificates and key pairs [1].
In digital certificates, users are identified by the information embedded on their machines and verified by mutually trusted third-party entities called Certification Authorities (CAs) (such as Thawte or VeriSign), which guarantee that the website is operated by the entity it claims to be [2].
A CA issues and manages digital certificates. CAs are trusted third-party entities that authenticate sellers to buyers, banks to customers, email servers to email users, etc. In general, users should not expose any personal or financial information on any website that does not have a valid certificate.
There are some requirements for any company or entity that wishes to become a certificate authority issuing certificates to clients. As a hardware requirement, digital certificates are usually created by certificate servers such as Cisco IOS, Microsoft certificate server, EverLink, etc. CAs should make sure that their certificate database is secured against being accessed or hacked by invaders.
There are several forms of digital certificates. In the first type, software companies send their keys (public keys) to their customers. Customers return a certificate that combines the software company's public key with their private key (which includes specific information taken from their computers, providing unique identifiers that distinguish a computer from all other computers). This information may include MAC addresses, IP address, CPU and hard drive unique identifiers, etc. The digital certificate is encrypted so that its information will not be readable if retrieved by unauthenticated users. It can be understood only by those who issued it. Figure 1 shows a sample of a digital certificate retrieved from a computer.
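As a purely illustrative sketch of the idea of binding an identity to machine information, the snippet below derives a fingerprint from the MAC address and computer name; it does not represent the format used by any particular certificate vendor.

```python
# Illustrative only: derive a machine-bound fingerprint from local
# identifiers of the kind mentioned above (MAC address and computer name).
# Real certificate schemes differ; this is not a specific vendor's format.
import hashlib
import platform
import uuid

mac_address = uuid.getnode()        # 48-bit MAC address as an integer
computer_name = platform.node()     # host/computer name

raw = f"{mac_address:012x}|{computer_name}".encode("utf-8")
machine_fingerprint = hashlib.sha256(raw).hexdigest()

print("Machine fingerprint:", machine_fingerprint)
```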
In a second type, users can also obtain self or individual certificates if they wish to uniquely identify themselves in online transactions. There are some companies that can provide such individual certificates for free.
The third type of digital certificate, which is our focus here, covers certificates obtained by websites that want to allow users to enter personal or financial information online. These websites want their users to trust them and feel secure entering their personal or financial information. Examples of such websites include banks, hotels, e-businesses (such as Amazon, eBay, etc.), and email servers (such as Yahoo and Gmail).
There are different models for CAs. In the traditional model or infrastructure for the Public Key Infrastructure (PKI), a company or entity submits its information and request to a certificate authority. The CA reviews the submitted information and decides whether or not to issue a certificate. The certificate is issued for a specific, limited time. The same process is repeated whenever the certificate is renewed. This type of certification is not dynamic or real time: the CA does not check the validity of the certificate upon each request. The entity possesses the certificate for the specified period.
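A minimal sketch of the applicant's side of this traditional model, generating a key pair and a certificate signing request (CSR) with the third-party Python cryptography package, is shown below; the subject fields are hypothetical placeholders. The CA would verify the submitted details before signing and returning a certificate.

```python
# Sketch of the request side of the traditional CA model: the applicant
# generates a key pair and a CSR to submit to the CA. The organization
# and hostname below are hypothetical placeholders.
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COUNTRY_NAME, u"JO"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, u"Example Bank"),
        x509.NameAttribute(NameOID.COMMON_NAME, u"www.example-bank.jo"),
    ]))
    .sign(key, hashes.SHA256())
)

# The PEM-encoded CSR is what gets submitted to the CA; the private key
# never leaves the applicant.
with open("request.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
with open("private_key.pem", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
```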
In another model, certificates are requested and authorized, or declined, upon each request. This process is expected to be more secure, but more complex, than the earlier one. In different flavors of the models, the transaction content will, or will not, be sent with the authorization request. As such, some entities will be authenticated in general, while others will be authenticated to perform specific transactions.
Digital certificates are trusted identifications in electronic formats that bind a public encryption key to an identity to achieve public trust in that identity. They are a major factor in giving users confidence in websites and their legitimacy.
Some digital certificates can be transferred from one machine to another. Others generate the individual private key using some of the machine's information, such as the MAC address, computer name, etc. This means that those certificates cannot be used on other machines without being reinitialized by the CA or the company that issued them.
Typically, two things distinguish a certified website: the letter "s" after http, and the certificate header on the right side of the address bar (Figures 2 and 3). The "s" in "https" means that you are logging onto a Secure Socket Layer (SSL) site.
If you view the certificate from the web browser, it will display three main pieces of information: issued to, issued by, and the validity period.
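The same three fields can also be read programmatically; a minimal sketch using Python's standard ssl module is shown below (the hostname is a hypothetical example).

```python
# Retrieve "issued to", "issued by" and the validity period of a site's
# certificate. The hostname is a hypothetical example.
import socket
import ssl

hostname = "www.example.jo"
context = ssl.create_default_context()

with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()   # verified against the system CA store

issued_to = dict(item[0] for item in cert["subject"])
issued_by = dict(item[0] for item in cert["issuer"])
print("Issued to :", issued_to.get("commonName"))
print("Issued by :", issued_by.get("organizationName"))
print("Valid from:", cert["notBefore"])
print("Valid to  :", cert["notAfter"])
```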
II. CERTIFICATE AUTHORITIES IN THE MIDDLE EAST
Some companies, such as Comtrust [3], offer digital certificates in the United Arab Emirates (UAE). They provide PKI technologies and authorize digital certificates for servers, companies, and individuals. Nevertheless, they still require individuals to come in person to verify their identity. In Egypt, digital certificates are issued through companies such as ITIDA (Root-CA), Trento Egypt and Gateway, LINKdotNET, etc. Egypt has seen research and proposals for e-Government services to allow citizens to process some papers online. In Saudi Arabia, the Government is in the process of authorizing certificates, secure emails and several other e-Government and e-Business solutions; some companies or agencies working toward this goal are the SAMA and Entrust partnership. Israel has several CA companies that issue certificates, such as ComSign, IUCC, StarCom, etc. Jordan started early in studying the possibilities of implementing online security. In 2002, a joint effort by Middle East Communications Corporation (MEC) and WISekey Switzerland was initiated to allow MEC to issue digital certificates in Jordan [4]. It was expected to be in use by the year 2004. In 2003, the Jordanian company ESKADENIA Software Solutions worked on a project to become a local dealer for UAE Comtrust and market e-business services in Jordan [5]. However, neither one of those projects apparently reached a deliverable goal.
In a document published on June 14th, 2006 by the Ministry of Information and Communications Technology (MoICT) [5], a plan for e-Government over the period 2006-2009 is envisioned. Several laws and regulations were issued to regulate online services. The document concludes with an assessment of critical success factors and risks. Table 1 shows the critical success factors and Table 2 shows the risk elements.
Tables 1 and 2 summarize some of the obstacles and barriers to implementing e-Government, including offering online identities, digital certificates, and several other related online services.
Ahmed and Hussein Al-Omari (2006) [7] similarly listed some obstacles and challenges, such as the readiness of organizations, the government, and customers. However, it is fair to say that some of those obstacles are not exclusive to Jordan or to any third-world country. This field is evolving so quickly that public and government entities find it difficult to keep up. Governments are not expected to take the major role in such a field; rather, the Government will be responsible for making sure that there is an infrastructure for handling all related activities.
Laws and regulations should be established to control online business and activities. The government should promote the use of digital certificates and online business through cooperation and acceptance. Internal and external investors should be invited and helped to build an infrastructure for such an environment. There is not much difference between this field and the wireless and cell phone communications fields: both require large investments to build a reliable infrastructure, and as a result investors need to see a commitment from the Government to cooperate. Using digital certificates can be one important step toward facilitating e-Business. Jordan does not suffer from technical problems in the telecommunications and industry fields; on the contrary, Jordan is a pioneer country in the Middle East in those fields and provides several other countries in the region with personnel support and experience.
Among the success factors and risk mitigations listed in Tables 1 and 2 are: recognition of e-Government as a priority in the National Agenda; recruiting staff with relevant skills; incentives for Government entities to invest in developing ICT expertise internally; outsourcing certain functions when the business case supports it; creating links with local universities to give on-the-job training to students; promoting retention of skilled professionals in cooperation with other programs (e.g., Reach); and, against the risk of resistance to change, increasing awareness among stakeholders, raising accountability, and enhancing change management.
However, users can inquire about those services online but not, for example, pay for them, and there is no use of digital certificates (not even on the e-Government websites themselves). According to the Income and Sales Tax Department's website (http://www.incometax.gov.jo/IncomeTax/Home/Login.aspx), users can inquire about and pay their taxes online. The website states that it is certified; however, it does not appear to be using digital certificates. The website does not allow users to create their own user name and password, which indicates that they may need to register in person first.
A survey of Jordanian banks shows that very few banks, such as the Arab Bank, the Arab Jordan Investment Bank, the Housing Bank, etc., use digital certificates and secure their online transactions (verified by VeriSign, a widely known international certificate authority); others do not have online access at all, and a third category offers online services without secure transactions, which may cause customers' information to be compromised. As an online user, before entering personal or financial information, the user should check the certificate to verify the identity of the website he or she is entering that information into. Without such a check, users could be giving their information to unknown individuals who may reuse it without their knowledge. However, after e-Businesses, banks are the second major category that will benefit from digital certificates. Figure 4 shows the certificate market share for CAs in Jordan [9]; the international certificate authority VeriSign holds the majority of the market. Table 3 shows the number of certified websites in selected countries as of 2006 [9]. Jordan has only 26 certified websites (mainly banks), which is a very small number compared with Jordan's readiness in resources and infrastructure. In most studies evaluating e-Government worldwide [10,11,12,13,14,15], Jordan scores relatively low in terms of citizens' participation. Jordanian citizens would be encouraged to visit e-Government websites if these could offer alternative ways to check and pay their taxes, to check and pay their electricity, water, and phone bills, or to check the status of any information or service they are requesting from a governmental entity. All of those services may not be possible without digital certificates.
As described earlier, there have been several unsuccessful attempts by local companies to establish certificate authorities in Jordan. They may have had problems obtaining the necessary authorization and trust from the Government and the private sector. As an alternative, the Jordanian Government, represented by an entity or ministry such as the Ministry of Information and Communications Technology (MoICT), could act as a certificate authority that authenticates certificates for all those who request them.
IV. GOALS AND APPROACHES
A successful e-Commerce or e-Business infrastructure in Jordan would benefit several public and private sectors. For example, electric power, water and telephone companies may use the e-Commerce infrastructure to let their customers check their accounts online for their current usage and pay online. This helps both service providers and customers: providers spend less effort, their employees can handle account checking and billing more efficiently, and the overhead of customer service tasks is reduced; for customers, it is more convenient, as they can track their account status and pay at a time that suits them without visiting the local, usually busy, agencies. Banks and hotels also have a large stake in a successful e-Commerce infrastructure. For hotels, customers can browse their websites and book online without mediators or agencies, who usually charge for being the middle man. e-Banking is convenient for both bank customers and employees, as it reduces the volume of customer service calls from customers who would otherwise enquire about account details or transactions that they can now check online.
Requirements for a successful e-Commerce infrastructure in Jordan are divided into three categories: legal, software and hardware perspectives. The following are the typical requirements for each category, along with what is missing and needed.
A. Legal perspectives: e-Commerce laws and regulations
In European Union countries, e-Commerce refers to the carrying out of business using electronic means, generally over the internet. From a legal perspective, however, the term is often used to include remote selling by telephone and email as well as online, and it is also frequently used to refer to legal issues relating generally to the Internet and online trading.
There are several types of contracts that need to exist when a business becomes involved in e-Commerce transactions. Laws should therefore regulate each of those contracts to ensure that online customers' transactions go smoothly and that a judge has clear regulations to rule by once an online dispute occurs. These include website development, content and hosting agreements: when a business wishes to set up a website, it needs to ensure that the design and content of the website do not infringe or violate any third-party rights.
For example, laws should regulate who is in charge of website content, the owner or the design company. Other related issues may arise from site performance and security, especially once a website starts handling heavy online transaction volumes.
Internet service provider agreements: The companies responsible for developing an e-Commerce website may be the same companies that provide the hosting service, or they may be separate. Like web design companies, web hosting companies should have clearly defined duties and responsibilities, which may include obligations concerning privacy, copyright, etc. The business also needs to be sure that its website is properly hosted and will not suffer from excessive downtime. These issues can be dealt with through website development, content, and hosting agreements. Disputes may occur between the companies that provide the e-services and the companies that support them or provide hosting and related tasks. Laws should specify when a supporting company is liable for performance, reliability, or security problems. In general, supporting companies should provide explicit agreements covering what they are liable to provide or control, and they will be compared with their rivals once their clients claim that they did not perform their expected duties.
Website usage and privacy policies: These may cover the privacy of both owners and customers. Website design and hosting companies are not supposed to expose their clients' information to their rivals, and they should have implicit or explicit agreements with their clients about who is in charge of content protection. In some cases, hackers may access an e-Commerce website to vandalize it. Clear boundaries should be drawn around what is considered "reasonable" protection of clients' information by web design or hosting companies; this is usually judged against the security and protection that competitor companies provide.
Website and telephone sales terms and conditions: In some countries, online or telephone sales are governed by the Consumer Protection (Distance Selling) Regulations 2000 and the Electronic Commerce Regulations 2002. Such sales are generally conducted in a manner where there is no scope for negotiating terms; accordingly, any business hoping to trade in this way will require standard terms and conditions of sale. There are special rules covering most sales to customers (subject to some exceptions) that give them more rights than they would have in a face-to-face purchase. These include: A: A right to receive certain information about the products and the identity of the seller. This ensures protection from scammers or fake e-Commerce companies.
B: A right to cancel goods without any penalty or fees. This can apply for up to a certain number of days after delivery, provided the necessary information has been given to the customers, together with the right of customers to return their products without charge if the products sold were not as described. Return policies can be cumbersome for both clients and sellers in the initial stages. On the one hand, sellers need to respect their buyers' right to undo a sale they are not happy with, within a certain amount of time; on the other hand, buyers need to learn that, despite having the right to return a product within a certain time, they should not do so without a logical, convincing reason. As mentioned earlier, this can be complex initially and requires time and cultural change.
C: A duty imposed on the seller to deliver goods within 30 days of the order (subject to an ability to extend or cancel). Because the relevant regulations are laid down by European law, these rights apply to all customers located in any European Union country.
B. Software perspective: How to build an infrastructure for trusted e-Commerce websites
This section focuses on the software and website requirements for implementing e-services or e-Business.
In most e-Commerce infrastructures, securing access to e-Commerce websites requires two basic components that allow users to perform online transactions securely: A: Digital certificates for web servers, providing guarantees of authentication, privacy and data integrity through encryption. Digital certificates can be issued by mediators called Certificate Authorities (CAs) to authenticate the seller to the buyer and vice versa. They can be generated through special programs or tools, and they contain unique information about the user's identity or machine that distinguishes them from others. Digital certificates are also used to protect software products from piracy: two users of a particular piece of software will have a problem using it once they both go online, as the software owner will discover that the key issued to a particular client is being used by more than one user.
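For illustration, the sketch below uses the third-party Python "cryptography" package (an assumption; the paper names no specific tool) to generate a key pair and a certificate signing request of the kind that would be submitted to a CA under the traditional model described earlier. The organization and host names are hypothetical.

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate an RSA key pair; the private key stays with the website owner.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build a certificate signing request (CSR) binding the public key to an
# identity; a CA would verify this identity before issuing a certificate.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COUNTRY_NAME, "JO"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Company"),
        x509.NameAttribute(NameOID.COMMON_NAME, "www.example.jo"),
    ]))
    .sign(key, hashes.SHA256())
)

# Serialize both artifacts: the CSR is sent to the CA, the key is kept secret.
with open("server.key", "wb") as f:
    f.write(key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.TraditionalOpenSSL,
        encryption_algorithm=serialization.NoEncryption(),
    ))
with open("server.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))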
Encryption is used to secure the information embedded in digital certificates. Encryption is the process of transforming information before communicating it to make it unintelligible to all but the intended recipient. Encryption uses mathematical formulas called cryptographic algorithms, or ciphers, and numbers called keys, to encrypt or decrypt information.
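The toy sketch below shows the idea of a cipher and a key in practice, using the Fernet recipe (symmetric encryption) from the third-party Python "cryptography" package; the package choice and the message are assumptions made purely for illustration.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # secret key shared by sender and recipient
cipher = Fernet(key)                 # the cipher is the mathematical formula

message = b"Account number: 1234-5678"
token = cipher.encrypt(message)      # unintelligible to anyone without the key
print(token)

recovered = cipher.decrypt(token)    # only the intended recipient can do this
assert recovered == message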
B: A secure e-payment system and management component, to allow e-commerce sites to securely and automatically accept, manage and process online payments. This is usually arranged with the owners' banks: websites are securely connected to the banks' payment systems, and once an online transaction is securely executed, the money should be transferred directly from the buyer's account to the seller's account. This process should be fast, reliable, and secure; those three elements (reliability, performance, and security) are vital to the success of any e-Commerce website.
Laws should regulate any dispute over online transactions. Users may deny that they actually performed a given transaction, claim that their cards were stolen, or claim that they were double charged or charged extra amounts. In some cases, insurance companies may provide services to cover such expenses.
C. Network and hardware perspective: Internet readiness
A closely related requirement to the software and website requirements is the existence of a network and hardware infrastructure. This may include routers, fiber optic or wireless communication channels, firewalls, etc.
Since both the software and hardware perspectives may include hardware and software elements, they are distinguished by location: this perspective covers any requirements outside the user's machine.
Generally speaking, Jordan has a good network and internet infrastructure relative to most Middle East countries. Internet services exist in different forms in Jordan and are accessible to most citizens at a reasonable cost. Internet service is currently provided by several companies and at different speeds, such as dial-up, ADSL, and wireless. Users can access the internet from their desktops, laptops, and PDAs.
D. Extra requirements for a successful e-Commerce business
Another major player in the e-Commerce world is the shipping companies. In order to compete with conventional shops and businesses, shipping should also be quick, reliable, and secure. Laws should regulate the terms of shipping, such as costs, types, and who is liable in case of product defects, which may be due to the buyer or to shipping issues.
Although an e-Business culture needs to exist in any country for such business to thrive, laws and regulations should always assume the worst and be able to handle dispute cases.
V. CONCLUSION AND FUTURE WORK
In order to build an e-Commerce infrastructure in any country, the technical, network, and hardware requirements are not always the only barriers. Readiness for such technologies or services may also require improving societal, cultural, or legal attitudes and regulations toward the new technologies.
Customers need to learn how to transact online: how to shop, how to post items honestly, and how to commit to paying and shipping on time. Businesses also need to be open to some necessary e-Business ethics, such as giving their buyers a certain period of time to return or exchange items.
Jordan's Internet infrastructure is strong and capable of supporting the construction of a trusted network. The private sector should take the lead in building the PKI and digital identity infrastructure. Once such an infrastructure is established, many related and dependent industries can emerge and provide an important economic contribution to national revenue.
In future work, the researchers will follow up with two research projects. They will create surveys to evaluate strengths and weaknesses in e-Government implementation in Jordan, and will then distribute the surveys among those who can provide the most useful information for the purpose of the study.
The second track will use web metrics to evaluate the traffic and usage of e-Government websites in Jordan. This stage is expected to give the researchers a better picture of the actual usage of those websites and may indicate weaknesses or strengths in particular websites. To do this, the researchers may need the assistance and cooperation of e-Government website administrators to obtain website logs or to add scripts that monitor those websites.
|
2018-01-23T22:38:23.035Z
|
2010-09-29T00:00:00.000
|
{
"year": 2010,
"sha1": "7209217cfa8d15662a6d9ec11bff7f0c1e1d34bb",
"oa_license": "CCBY",
"oa_url": "http://online-journals.org/index.php/i-jim/article/download/1425/1522",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "136ecc39a5cf43c8459ad85b07a2f1bed36943f0",
"s2fieldsofstudy": [
"Business",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Business"
]
}
|
233901573
|
pes2o/s2orc
|
v3-fos-license
|
Developing Critical Thinking Skills in Omani EFL Foundation Programme: Constraints and Possibilities
Educational institutions across the globe unanimously acknowledge the importance of incorporating critical thinking skills in their curricula, yet this objective has not always been met adequately or consistently across the board. In EFL settings, the obstacles to teaching critical thinking are not only genuine but also multifaceted, ranging from teachers’ and students’ training and attitudes, cultural influence and degree of support from the various stakeholders, which often results in a general perception that it is difficult to teach efficaciously. This article will report on the procedures and satisfactory outcomes of an action research that I have conducted with intermediate EFL foundation programme students at Sultan Qaboos University, Oman, using a mixed method approach. The scope of the study is to investigate the constraints to teaching critical thinking skills in this context (quite similar to other non-western ones, e.g. Asian cultures) and ultimately pilot a flexible middle-way approach that enables teachers to work around these restrictions to foster critical thinking skills in their students, without detracting from course content or sacrificing test scores. The trialled approach consists of adapting and extending activities from assigned English language course books/materials to build in more critical thinking awareness and practice, all within a learner-centred social constructivist environment, without the need for extra time or supplementary materials. In their post-course evaluation, most of the students have reported tangible improvement in information literacy, critical thinking abilities and even language proficiency. The article will close by providing practical guidelines on materials and methodology for teaching critical thinking skills in EFL contexts.
Defining Critical Thinking
Probably contributing to the lack of understanding of what CT is and what it actually entails in terms of teaching and learning is the absence of a precise and rigorous definition. Halanon (1995) states that "critical thinking scholarship is in a mystified state. No single definition of critical thinking is widely accepted" (p.75). Mayfield humorously notes that "there are as many definitions of critical thinking as there are writers on the subject" (2001, p.4). Hughes concedes that CT is "a term that often defies simple definition" (2014, p. 2). For instance, Paul (1985) defines CT as "learning how to ask and answer questions of analysis, synthesis, and evaluation" (p. 37) -a definition closely aligned with Bloom's upper three levels of educational objectives, which are often referred to as the practical application of CT in education. A study by Griggs et al summarised 25 definitions of CT abilities in the literature as ". . .a process of evaluating evidence for certain claims, determining whether presented conclusions logically follow from the evidence, and considering alternative explanations. " ( in Stapleton, 2011).
To demystify this concept, Riddell (2007) suggests that CT should not be defined but rather explained through its components, stages, characteristics and sub-tasks, for no single ability can capture the full scope of CT skills and dispositions. The list below summarises the skills and dispositions that many consider basic to the process of CT (based on Mayfield, 2001, and Buskist and Irons, 2008). In an article that sparked off a heated debate over the teaching of CT in EFL/ESL contexts, Atkinson (1997) advances arguments against teaching CT in TESOL: that CT is rather a "social practice" and "teaching [it] to nonnative speakers may be fraught with cultural problems" (p. 71); that it is "beyond the capability of most teachers to teach [it] in more than an anecdotal and hit-and-miss way" (p. 77); that CT is not universal and "many cultures endorse modes of thought and education that almost diametrically oppose it" (p. 72), which not only makes it challenging for "nonnative thinkers" (p. 79) but also represents a form of cultural imperialism; and that it is notoriously difficult to transfer to new contexts (p. 71).
In response, Davidson (1998) and Kubota (1999) criticise Atkinson's arguments on the ground that CT is not culture-specific but rather a universal skill that can be acquired equally easily by students from all cultures, and that arguments against teaching CT in non-Western cultures are based on unfounded generalisations and stereotypes of Asian cultures.
Davidson interestingly noted that his Japanese students showed more aptitude in some CT skills than western students (1998). Ennis (1996b) asserts that educators should not be discussing whether or not to teach CT, or whether it has value for people from other cultures. This is because not teaching critical thinking to some degree means creating generations of graduates who "believe everything that [they] read and hear" (Ennis, 1996a, p.1).
A no less significant question about the feasibility of teaching CT is whether or not all ELT teachers are equally equipped, trained and ready for such a task. Most of the literature on the topic seems to suggest that they are not. The absence of unanimity over a rigorous definition of CT seems to have resulted in a lack of understanding on the part of the teachers of the concept of CT and how to incorporate it in the classroom (Lauer 2005;Choy and Cheah, 2009;Stapleton, 2011). Alexander et al explain that "although lecturers claim to value critical thinking highly, they tend to recognize it mainly by its absence" (2008, p. 251). Other studies have revealed that teachers may have misrepresented and/or reductionist perceptions of CT skills, like equating it with rephrasing given facts in students' own words (Black, 2005), or merely with being opinionated (Long, 2004), or just being able to differentiate fact from opinion (Siegel, 1998;Fok, 2002).
For various reasons, Fok (2002) reports that some teachers think CT cannot be taught while others value it but feel they lack the ability and confidence to teach it effectively. Rana highlights some teachers' inertia and resistance "to change their stereotypical teaching techniques" (2012, p.54). Rafi maintains that "the teachers need to revamp their pedagogical views, and to adapt a more flexible attitude in the existing system of language education in order to exploit the metalinguistic abilities of the learners" (2009, p. 65).
Students in EFL contexts can be part of the problem, too. Indeed, those who come from a rote-learning culture usually tend to resist any change to the learning mode they are accustomed to. As Nisbet and Shucksmith observe, "most adults will avoid the need to learn if they can by clinging to familiar routines and will have difficulty dealing with unfamiliar tasks" (1990, in Williams and Burden, 1997, p. 146). Rana explains that many students are already struggling to improve their linguistic abilities, and so "developing critical thinking skills in the language classroom seems to be a by-product of teaching English… [and] a far-reaching goal" (2012, p. 53).
Cultural considerations can also have considerable bearing on how students perceive CT. This is particularly true of students coming from cultures where children and young adults are made to strictly obey and look up to authority figures, such as parents and teachers, who make decisions for them (Buskit and Irons, 2008). In a highly conservative and religious society like Oman, for example, foundation programme students rarely have opinions of their own on any social, economic or political issues.
The last type of constraint to teaching CT is the lack of support provision on the part of the main stakeholders: syllabus designers, material developers, examiners and institution managers. Overloaded syllabi and exam pressures often result in teachers and students racing against time to complete the usually imposed syllabus, which also induces teachers to switch to the lecturing mode (Astleitner, 2002;Fok, 2002;Petry, 2002;Duron et al, 2006;Choy and Cheah, 2009;Rana, 2012). Another challenge is the washback effect on teaching and learning. Teachers and students alike will often shy away from 'wasting' time on the rather time-consuming CT activities as these are almost never tested (Fok, 2002;Rana, 2012).
As far as materials are concerned, only lately have some recent editions of commercial textbooks started to include some CT activities. However, Lucantoni (2015) observes that these are mostly included as add-on activities, rather than providing a scaffolded approach that will progressively help students make their way through the steps of Bloom's Taxonomy of cognitive levels, or, as Alexander et al put it, that will guide the student from knowledge telling to knowledge transforming (2008).
Materials and Teaching Methodology
There is overwhelming unanimity among researchers and educationalists that challenging tasks not only help students improve their language proficiency but also trigger higher-order thinking skills and best motivate them to engage in critical thinking (Krashen, 1985; Turner, 1995; Ur, 1996). From this perspective, the focus of EFL classes should be on language tasks that require learners to use greater degrees of elaboration and criticality, such as exploring, comparing, evaluating, criticising, or advocating a variety of ideas, reasoning inductively and deductively, and inferring sound conclusions from ambiguous statements (Freeley & Steinberg, 2000). These are the types of tasks that materials should focus on most, and they draw in particular on premises inherent in the methods known as Task-based Learning (TBL) and Problem-based Learning (PBL).
Activities that best stimulate CT skills are debate and problem solving in speaking, which Benesch calls dialogical critical thinking (1999); issue-based and controversial topics in argumentative writing (Benesch, 1999;Ghokale, 1995); and critical reading which involves identifying and evaluating the writer's purpose, attitude, and validity of claims, arguments and evidence (Elder and Paul, 2004).
Whether CT should be taught as a standalone subject or integrated with subject-specific content is another controversial point. Cotton concludes that "neither infused thinking skills instruction nor separate curricula is inherently superior to the other; both can lead to improved student performance, and elements of both are often used together, with beneficial results" (1991, p. 10). In a more recent literature review, however, Lai reports that "stand-alone approaches to instruction in general critical thinking appear to be less successful than approaches in which critical thinking instruction is infused into discipline-specific courses alongside traditional academic content" (2011, p. 16).
Most literature on CT teaching strategies focuses on two main factors: the teacher's role and the learning environment. Many researchers have emphasised the need for teachers to be critical thinkers themselves, not only to be able to create the right materials as and when required, but also to be able to model CT and a critical attitude in their own teaching (Smith, 1990; Paul, 1992, in Lai, 2011; Facione, 2000). This could be accomplished by making their reasoning visible through "thinking aloud" and by using concrete examples of critical thinking at work, or the lack of it (Paul, 1985; Heyman, 2008, in Lai, 2011). Another greatly emphasised strategy is the use of teacher questions, particularly Socratic questioning, to probe for assumptions, rationale, evidence, viewpoints, perspectives and implications (Siegel, 1988; Feng, 2013).
This, in turn, entails that teachers should move away from the lecturing mode and take more of a facilitative role in a student-centered context (Paul, 1985; Bonk and Smith, 1998, in Lai, 2011). In fact, most research seems to place a premium on the social constructivist approach as the environment most conducive to effective CT learning, whereby students interact and collaborate by encouraging and respecting the contributions of others (Siegel, 1988; Cotton, 1991; Ghokale, 1995; Swain and Lapkin, 2002; Simina and Hamel, in Yang and Gamble, 2013). Genuine communication should be targeted in class, students' opinions should be heard and accepted (Bourdillon and Storey, 2002), and students should be "admitted into arguments, challenges and debates based on respect rather than power or exploitation" (Smith, 1990, p. 107).
Context and Rationale
The participants in the present research study are 38 Omani students, 18 to 19 years old, distributed in two groups of 19 students each. These students have just graduated from high school and are starting a 16-week intermediate English language foundation programme, after which they will proceed to their faculties. Omani students come to university foundation programmes from a background of a heavily teacher-centred teaching and learning tradition where they have had no or very little training in such skills as learning strategies, autonomy or CT. At best, they have been irregularly exposed to weak forms of CT, as in occasionally exchanging opinions in speaking activities or writing short opinion essays. And despite attempts to introduce some interactive teaching methods in textbooks, teacher-centredness and focus on exam-oriented input remains the prevalent teaching mode. The way I see it, the foundation programme should be the place to bridge the gap.
Instead of being regarded as a merely language-proficiency-building course, a foundation programme should be considered as a wide-angled EAP course, where additional focus is also placed on those survival academic study skills which students are terribly lacking, before they move to their faculties.
I teach these two groups Reading, Listening and Speaking (two 100-minute classes per week). The assigned materials consist of commercial textbooks as well as in-house materials. The research purpose and procedures were explained to the participants who provided informed consent.
Method
Given the aim of the study and the research questions, a mixed-method approach was used to collect quantitative and qualitative data through various means, to achieve complementarity and triangulation. To this end, teachers' perceptions and experiences were probed through an online questionnaire that combined open-ended and closed questions (Appendix 1). Follow-up interviews were scheduled with ten teachers who volunteered to discuss in more depth the constraints and possibilities of teaching CT to Omani EFL foundation programme students. A pre-course can-do checklist was completed by the students in the first week to see what they already knew or could and could not do in terms of CT skills. At the end of the course, the students completed the same can-do checklist again to see how much they had learned and improved. This was the main tool I used to measure the effectiveness of the approach being piloted (Table 2). Another tool I used for the same purpose was a post-course evaluation form, which I asked the students to complete in order to seek feedback on the extent to which they think they benefitted from CT practice, how important they think it is in the Omani context, and the capacity of Omani students to acquire CT skills given their linguistic, educational and cultural backgrounds.
As far as the students themselves are concerned, and as confirmed by 95 percent of the teachers, the data clearly shows that students coming from the Omani high school system hardly have any critical thinking skills or disposition. This fact serves as a good rationale for incorporating CT in the ELT curriculum as suggested by 87 percent of the teachers. Most of the reasons mentioned by the teachers to account for students' lack of CT competence, like rote learning culture, low language proficiency and lack of world knowledge, are indeed valid and genuine. But these are the very shortcomings that we need to address through the right materials and methodology rather than use as excuses to shy away from teaching CT or to justify poor results.
Omani EFL foundation programme institutions are not making things any easier either. Acquiring CT skills is only implicitly encouraged in some institutions and in some programmes, but not consistently across the board. Hence the need for these institutions to explicitly adopt CT as a learning outcome and to cope with whatever implications this might entail.
Therefore, based on the literature review and the findings from the collected data, it is the objective of this study to propose a flexible approach that will enable teachers to incorporate CT skills in the foundation programme curriculum without detracting from the linguistic goals of their courses, without having recourse to CT dedicated materials and without the need for extra time allocation.
Lesson Procedure
As established by the above literature review (Lai, 2011; Yang & Gamble, 2013) and the collected data, the best approach to teaching CT seems to be explicit teaching integrated with subject-specific content, in a social constructivist environment that encourages enquiry and the free airing of opinions, where teachers are called upon to adapt and develop course materials and activities, and where students interact and cooperate to create a meaningful learning experience. However, because the concept of CT was completely new to my students, I chose to delay using the assigned course materials and to start with mostly self-developed CT materials instead. The aim was to foster in the students explicit awareness of what CT is, its worth, its applications in education and life in general, and the various skills and attitudes required to become critical thinkers. Once I was confident this goal had been reasonably achieved, I started adapting and supplementing the course materials to create graded and supported CT activities. Therefore, the integration of CT skills in the course was effected in two phases:
Phase one: Standalone approach
For the first three weeks, I taught nothing but CT, using materials which I specifically designed to foster CT awareness. Students were first introduced to a list of revised Bloom's taxonomy process verbs, assessments, and questioning strategies, and to the concepts of lower-order and higher-order thinking skills. These were simplified and explained to the students, emphasising the skills and abilities they would need to acquire in order to be able to exercise higher-order thinking skills. This paved the way for the subsequent introduction of a CT definition and a list of qualities of critical thinkers. The purpose of these two steps was two-fold: familiarising the students with the concept of CT and its sub-skills, and motivating them to learn how they could apply these in their education and how CT functions.
The next step was to introduce students to information literacy skills and critical reading. The students were provided with a teacher-designed form for the critical evaluation of texts (Table 1). The students practised using the form by evaluating selected texts in terms of source credibility, bias, author intention, use of supporting data or figures, weak and strong arguments, and the validity of claims and drawn conclusions. The texts were downloaded from the internet and shared the characteristic of lending themselves easily to CT teaching: issue-based, lacking objectivity, and embedding inconsistencies, fallacies and conclusions based on questionable or insufficient evidence. To highlight these, I designed probing questions that gradually led the students to notice these shortcomings, to challenge the advanced arguments and to question the validity of the drawn conclusions. In some cases, I also provided students with other texts that dealt with the same issues but were more evidence-based and more objective, so that they could compare and contrast, and differentiate weak from strong arguments and biased from objective positions. In addition to choosing topics that closely touched on their real lives, I also actively involved the students in the tasks by assigning them important roles, such as evaluators, voters or decision makers, for example when students had to prepare, evaluate and vote for the best trip plan, or when they had to choose the right candidate for a job by matching a job description to candidates' profiles and CVs.
In speaking, a dialogic approach was adopted, in which various interaction patterns were encouraged, mainly group discussions and class debates. Topics were carefully selected in line with specific criteria: engaging and relating to students' real lives, controversial and posing conflicts of interest, e.g. boys vs girls, men vs women or locals vs expats, and which can have more than one possible or defensible solution. Students were also trained on using discussion strategies and expressions that enhance exchange of views and interaction.
The writing component of this initial training consisted of a 400-500 word research-based report. The aim was to introduce students to information literacy skills and to backing up arguments with in-text citations from authorities in the field and proper referencing. To ensure sustained content, this task was designed as an extension of a reading activity on the same topic, thus reinforcing the acquisition and use of related vocabulary and teaching students to suspend judgement until sufficient data had been sought and evaluated. Because the students were not familiar with longer research-based writing, their first attempt was far from the required level. I had the students redraft their reports three times, each time giving them individual as well as collective feedback on synthesising researched information to back up their arguments and on proper in-text referencing. After the second round of feedback, I provided a research-based model for discussion and analysis, which proved to be extremely helpful, as they ultimately produced reports with reasonable levels of critical analysis. At this point, I turned to the assigned course books and in-house materials. In order to continue promoting CT skills in my students, my role as a teacher consisted of carefully selecting the content, themes and lessons to be taught from the assigned materials, adapting and supplementing them, and creating a classroom environment that would be most conducive to CT development.
As far as the materials are concerned, I focused on the units, themes and lessons that are most suitable for CT teaching: motivating and relating to students' real life interests, but also addressing topics which, if looked at from different perspectives, could prove to be open to question and debate. This meant that merely factual, descriptive and/or narrative materials were systematically discarded.
The process of materials adaptation and supplementation was not much different from what I had done in the initial 3-week direct instruction period. The first and most important step was to make the students interested in the materials so as to actively engage them in the activities (Harmer, 2001;Dornyei, 2007). Secondly, I frequently supplemented the tasks accompanying the materials, as these usually only addressed comprehension and linguistic competence. After quickly checking overall comprehension, I often posed probing questions that would gradually lead the students to approach the information from different perspectives as well as to notice inconsistencies, inconclusiveness, weak and strong arguments and possibilities of other interpretations.
At this point, I usually gave students out-of-class supplementary activities that required research, often in the form of research-based reports, presentations or preparation for debates. As a rule, I made sure to retain the same themes and topics while selecting, adapting and supplementing the materials in all four skills (Davidson and Dunham, 1996; Yang & Gamble, 2013). The purpose was to give students as much exposure as possible to the same topic from various texts, auditory and visual, preferably offering different perspectives, in order to help students acquire and retain topic-related vocabulary and concepts. This was achieved by consistently making students listen to and read about the topic, then speak and/or write about it (sustained content). Yang and Gamble explain that "sustained content builds the vocabulary, conceptual knowledge, and resources necessary to think, speak and write critically" (2013, p. 409). The topics treated in this way included Family, among others. As far as methodology is concerned, my primary focus was on creating a safe environment for students to openly voice their opinions (Krashen, 1985; Scrivener, 2005; Cheng and Dörnyei, 2007), one that is free from peer pressure or teacher power and where all opinions are respected and reacted to in a polite manner. Students were also given enough time to think and prepare their contributions, mostly within groups of their own choice.
Group and whole class interaction were used the most, where students collaborated and helped each other, and where the teacher also mingled as a facilitator and scaffolder. In order to address the disparity between their level of thinking and their linguistic competence, and in order not to deprive them from an important resource already at their disposal, the students were allowed to code-switch to their L1 as and when required when discussing issues within their groups. Then they could ask for help from their more able peers or the teacher for the English equivalent words, or otherwise use the bilingual dictionaries on their mobile phones. This proved to be an effective way of acquiring new vocabulary that enabled students, especially the weaker ones, to effectually articulate their opinions.
Discussion
The effectiveness of the suggested approach of integrating CT skills in the Omani EFL foundation programme was evaluated through two main tools: a student post-course can-do checklist and a student post-course feedback form. The collected data from both tools suggest that the proposed approach was highly effective, not only in developing students' higher-order thinking skills but also in improving their language proficiency. Below are the findings from the two tools:
Student post-course can-do checklist
This is the same checklist that the students filled in at the very beginning of the course, prior to CT teaching. Filling in the same checklist after completing the course gives them the chance to see where they started from and to what extent their CT skills have improved. As can be seen in the table below (Table 2), all the figures accounting for students' initial CT competencies have risen by various degrees, some more dramatically than others.
The numbers are self-explanatory, and a simple comparison of the figures easily points to that effect. Among the other gains reported were the promotion of learner autonomy and the improvement of language proficiency.
Importance of teaching CT on the foundation programme
In response to the statement "It is important to learn CT skills on the foundation programme", 84 percent agreed. The reasons given for its importance fell into three categories: (1) the need to learn about it before moving on to the faculties, especially as it is completely new to them; (2) the need for a stronger foundation, not only in language but also in ways and levels of thinking; and (3) its promotion of learner autonomy. All three were deemed by the students to be very important conditions for success in their upcoming university education.
Most enjoyed CT activities
Most CT activities conducted during the course have been reported as more or less enjoyable by the students. However, two activities stood out as the most valued by the students: critical reading and debates (45% and 53% respectively). As previously established, the social constructivist approach is the most conducive environment to learning CT skills: greater teacher support and facilitation, learner collaboration in co-construction of knowledge and promotion of learner autonomy. In order for student cooperation to be fruitful, enough time and opportunity should be allowed for the students to discuss and collaborate to achieve the task and seek help from the teacher as and when required. It is also imperative that a safe and engaging classroom environment be created in order to encourage students to freely voice their opinions. This could be achieved by 1) granting students as much freedom as possible in choosing their partners and/or groups, in discussing with the teacher, in expressing their viewpoints and in disagreeing with others, and 2) ensuring that all opinions are respected and reacted to respectfully. Finally, teachers should model CT by posing probing questions, by thinking aloud and by getting involved in the discussions as the students' equal and not necessarily as the one who holds the truth.
Conclusion
The evaluation data, especially the student course feedback form, suggest that the proposed approach was largely successful. The areas of success can be summed up in the points confirmed by more than 80 percent of the students: all surveyed CT abilities increased markedly, some more than doubling or tripling; students were more satisfied with their English course this semester; they considered it important to study CT on the foundation programme; they greatly enjoyed the CT activities, especially critical reading and debate; they noticed an improvement in their linguistic proficiency and information literacy; and they greatly valued the adopted methodology and classroom environment. For CT to be effectively incorporated in the SQU foundation programme and other similar EFL contexts, it has to be fully endorsed by such institutions rather than merely paid lip service. This is a long-term aim that will take many years to materialize, for it will involve extensive on-the-job training for all teachers, revisiting of in-house materials, significant time reallocation and a substantial adjustment to the testing system in both summative and, especially, formative assessment. In the meantime, it is left to individual teachers to fill the gap through personal initiative and endeavour. Teachers who are willing to go the extra mile in order to foster CT abilities in their students should rely more on themselves by learning more about what CT is and how to teach it effectively, by adapting and extending activities from the assigned course materials, and by creating the right classroom environment.
It is my belief that the proposed approach of integrating CT skills in the Omani EFL foundation programme without having recourse to extra time allocation or many additional materials can be successfully implemented in any other EFL context due to its simplicity and flexibility. However, the study could benefit more from some kind of assessment-based evaluation, especially a formative one, to be able to gauge the degree of CT assimilation and transferability more tangibly.
|
2021-05-08T00:02:37.066Z
|
2021-02-27T00:00:00.000
|
{
"year": 2021,
"sha1": "72b6117eeab9f8e6b68cc830754c629af2c0122f",
"oa_license": "CCBY",
"oa_url": "https://ijohmn.com/index.php/ijohmn/article/download/217/610",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f8164db29dc6cbfd913c7c6240a5064e7a4ffb5d",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
119536824
|
pes2o/s2orc
|
v3-fos-license
|
Possible Problems with Four-Dimensionalism and a Possible Solution
Mark Heller, in “Temporal Parts of Four-Dimensional Objects,” argues for an ontology of objects with four dimensions. He thinks that by arguing for an ontology that incorporates the temporal dimension he is able to deny some distasteful propositions that the proponent of a three-dimensional ontology must choose between in order to avoid contradiction when developing an account of how objects are able to persist, as the same objects, through change. This is a view that I find attractive, but with some reservation. This paper will explain Heller’s four-dimensionalism by looking at an argument he brings up against three-dimensionalism. I will reply to his handling of the argument by raising two important areas of clarification that the four-dimensionalist must address. After this, I will bring up a possible solution to the areas needing clarification by suggesting an ontology that includes an essential part alongside temporal parts.
General Definitions
"A four-dimensional object is the material content of a filled region of space-time.A spatiotemporal part of such an object is the material content of a subregion of the spacetime occupied by the whole."(Ibid, 497) Heller points out that for any region that is not filled by space-time, then there is no physical object.For a region to be full, he says, is for it to contain no empty subregions (Ibid,496).Heller thinks that we should expect temporal characteristics to share some traits of spatial characteristics, such as precise boundaries."A spatiotemporal part is not a set or a process or a way something is at a place and time.It, like the object it is a part of, is a hunk of matter."(Ibid,497) This means that spatiotemporal parts are physical parts.(Ibid,497) He thinks that we can consistently accept these three aspects of objects: 1) There are four-dimensional objects and spatiotemporal parts of such objects.2) Not every filled region of space-time contains a physical object.
3) Even for a region of space-time that does contain a physical object, not every subregion contains a spatiotemporal part of that object (Ibid, 497).
His reasoning for (2) depends upon his understanding of what it means for space-time to be filled. A region of space-time might be filled with matter of a certain shape that allows that region of space-time to be divided into subregions. If the matter is the wrong shape, then there might not be a physical object; yet the subregion is still filled. In this way, not all space-time would be filled with matter, and whether there are physical objects would depend on the shape of the matter (ibid, 495). He suggests that (3) could be accepted for the same reasons as (2).
The Argument
Heller's argument is against the coincidence of two physical objects. He says that there are five distasteful alternatives among which the three-dimensionalist must choose in order to avoid the contradiction in the argument. Bear in mind that the three-dimensionalist wants to deny all of these alternatives, as well as have objects persist through time: a) there is no such physical object as my body, b) there is no physical object in the space that we would typically say is now exactly occupied by all of me other than my hand, c) no physical object can undergo a loss of parts, d) there can be distinct physical objects exactly occupying the same space at the same time, e) identity is not transitive.
The four-dimensionalist does not have to accept any of these alternatives. Here is a sketch of the argument: There are two spatially different objects, my body ('Body') and all of my body except my hand ('Body-minus'). At some time t, I lose my hand.
1) Before-t Body-minus = after-t Body-minus.
2) Before-t Body = after-t Body.
3) After-t Body = after-t Body-minus.
4) So, before-t Body-minus = before-t Body.
Before-t Body had more parts than before-t Body-minus, specifically my hand, which implies that before-t Body is not identical to before-t Body-minus. Thus, we have a contradiction.
However, if there are four dimensions, then spatially different objects could share a temporal part. Strictly speaking, Body and Body-minus are not both in the same space at the same time: only their shared temporal part is filling that region of space-time. The temporal part that the two objects share does not have two objects coinciding in it and is not being overcrowded by both objects; it is exactly filled by a part of Body and a part of Body-minus that are identical. Heller gives an example of a piece of gold which was shaped into a golden ring (ibid, 499). The ring undergoes a change of matter until it is silver. The gold and the ring overlap, and they both share a temporal part; they do not coincide. I think that this raises a question about what boundaries distinguish objects from one another, as well as how those boundaries are formed. The next section will raise such questions of clarification for the four-dimensionalist.
Questions for Clarification
What are the temporal boundaries around an object, and how are those boundaries formed?
Judith Jarvis Thomson brought up an objection to four-dimensionalism with an example of her holding a piece of chalk for an hour. The chalk, she says, has certain physical characteristics, such as being white, cylindrical, dusty, and so forth (ibid, 500). Yet the temporal parts of the chalk come into and out of existence without any sufficient cause. There is a piece of chalk from time-1 until time-2, another piece of chalk from time-2 until time-3, and so forth. There is no change in matter, no molecules added or subtracted, but a new piece of chalk keeps coming into existence.
Heller sees this critique as founded on the belief that objects are merely three-dimensional, and suggests that the question to be answered is what causes the chalk to have the lower temporal boundary (time-1, etc.) that it does. He thinks that the question of what causes this lower temporal boundary is similar to the question of what causes spatial boundaries. It is a mixture of "... causal mechanisms and material configurations of matter at any given time that affect which parts will exist at the next moment." (Ibid, 501) For Heller, the idea that all subregions of an object should contain a temporal part is not built into the theory of temporal parts (ibid, 500). This means that the number of temporal parts an object has might be smaller than the number of subregions it has. The piece of chalk might have only one temporal part for the duration of an hour, as there are no causal or material reasons why it should not. Still, the problem of how temporal boundaries are formed has not been solved. Heller gives us an example of how temporal boundaries might be formed. He says that we tend to think of a person as being one object from birth until death (ibid, 500). It seems natural, however, to distinguish the pre- and post-pubescent parts of the person as distinct, as puberty is a change that will have significant ramifications for the person (ibid, 500). This example shows how every subregion of the space-time that a person fills does not have to be filled with temporal parts, and that the boundaries of temporal parts have material and causal origins. Yet the formation of temporal boundaries seems arbitrary, especially if it only selects events that cause enough change to be significant. The argument about Body and Body-minus, from an earlier section, should be reconsidered:
1) Before-t Body-minus = after-t Body-minus.
2) Before-t Body = after-t Body.
3) After-t Body = after-t Body-minus.
4) So, before-t Body-minus = before-t Body.
If this argument is reconsidered with both temporal and spatial boundaries in mind, then we might notice that (2) is not true. In losing a hand, Body goes through a significant change in its subregions of space-time. The objects that are Body before-t and after-t have different spatiotemporal boundaries and do not share a temporal part, if it is possible that a new temporal part began when the hand was lost. If they have distinct spatiotemporal boundaries, then they are simply different objects. Heller thinks that (2) is true because the four-dimensionalist is able to accept that physical objects can undergo a loss of parts. That might be the case, but the four-dimensionalist would have to present a thorough account of what makes temporal boundaries. It might be that whatever ends and begins the temporal parts of objects is intimately tied to what ends and begins those objects' spatial parts, which would weaken four-dimensionalism's ability to let objects persist through change.
Is it possible for objects to fill new space-time regions?
Another area that requires clarification is the part-to-whole relationship that temporal parts have to the whole object. Either objects are determined and have firm spatiotemporal boundaries, or they are unfolding and have indeterminate spatiotemporal boundaries. Heller's views seem to fall into the former group (ibid, 502). If objects are able to expand and fill other regions of space-time, then it complicates where our spatiotemporal boundaries should begin and end for that object to remain the same object. A human object provides a good example of how this could complicate four-dimensionalism. A human object's spatiotemporal boundaries are continually expanding and diminishing because of causal and material factors.
If a human object eats a pie, then the four-dimensionalist should provide an account of how the pie's region of space-time was affected by the human object. It seems as though the pie's spatiotemporal boundaries were meaningfully altered in such a way that added to the human object's region of space-time. The human object's numerical value of space-time expanded. (I do not have an adequate account of how regions of space-time could be subsumed by other regions of space-time, as appears to happen when the human object interacts with the pie. That is something that the four-dimensionalist should also provide an answer for.) Because a human object's spatiotemporal boundaries are continually expanding and diminishing, we ought to consider what it is for an object to be a distinct object. Spatiotemporal boundaries only seem to be able to hold an object if those boundaries are determined and firm. If an object can expand and diminish its boundaries, however, then we should consider what would be required for an object to persist despite change.
Essential Part
I am tempted to accept an ontology with temporal parts, but it seems to me that there is something lacking if objects are only defined by their spatiotemporal boundaries. These boundaries are, if what we have seen is correct, changeable and without an account of why those boundaries ought to constitute that object. If a region of space-time is able to change its spatiotemporal boundaries, then it seems that that object changes with it. Body and Body-minus are an example of this. Heller thinks that they are two objects that share a temporal part, but I suggested above that the temporal part is divided exactly where they would share it. This means that the subregion of space-time that that temporal part is supposed to fill is divided further into two sub-subregions that are spatiotemporally distinct. If this is true, then Body and Body-minus are distinct spatiotemporal objects. It would mean that Body did not survive a change of its parts. This seems problematic, however, as we want to think that objects can persist despite a change of their parts. In this section I will accept four-dimensionalism and modify it so that a subregion of space-time that constitutes the object is an essential part of that object, a part that, if lost, would result in the object no longer existing. This will allow the boundaries of objects to be expanded or diminished, as well as allow objects to persist through time so long as their essential part has not been changed. When the essential part is altered, however, perhaps in a circumstance like puberty, the object itself is changed. My hope is that the following account of essential parts will be a useful tool in deciding the boundaries of spatiotemporal parts, which will show us when spatiotemporal parts begin and end.
An essential part is the part that an object cannot lose, or do without, without either ceasing to exist or being reshaped into a different object. There is a distinction to be made between animated objects and inanimate objects. An example of an animated object is a human; an example of an inanimate object is a pie. Inanimate objects have no essential part, as they are regions of space-time with specific shapes. They can lose these shapes, and the matter that fills the region of space-time could form a new, different shape. This forms a new object. These regions of space-time require causal and material factors in order to be reshaped. They are the same shape of space-time so long as their spatiotemporal parts are not altered or reshaped, and so do not persist through change, but also do not randomly come into and out of existence.
Animated objects, such as humans, have an essential part that, if altered or lost, alters or takes the animated object out of existence as that specific animated object. The part that animates is the essential part, such as consciousness or rationality. This is the self, the ego, the I that is the thing that can think, and so on. My goal here is not to define the essential part, but only to assert that we have one, in order to suggest a possible way that the four-dimensionalists might be able to get away from the problem of spatiotemporal boundaries.² I would like to divorce the essential part from spatial and temporal parts, as it provides better explanatory power. If the four-dimensionalist will accept the essential part, then that is the part that is shared by both Body and Body-minus. They would be two spatiotemporal parts of one object with an essential part. They would not have a shared temporal part, though they would have some shared spatial parts.
Conclusion
We began by looking at Mark Heller's four-dimensionalism as a way for objects to persist through change without having to accept some distasteful alternatives. We discussed how the changing of spatial and temporal boundaries might be intimately connected, so as to end and begin temporal parts alongside their object's spatial parts and make persistence through time for that object impossible. This led us to describe an essential part that would be required for an object to persist through change despite losing or gaining spatiotemporal parts. The three-dimensionalist could also include an essential part in their ontology in an effort to explain persisting through change, but I think that it would be an error to do without temporal parts. Without temporal parts, inanimate objects would have nothing that holds them together as the same inanimate object throughout time. I also suspect that the essential part of animate objects, since it is the part that animates the object, could interact with the spatial and temporal parts in different circumstances, or even continuously, in order to be animate.
1 'c) no physical object can undergo a loss of parts' from above, one of the five alternatives the three-dimensionalist must accept.
2 What the essential part is should be answered by both scientists and philosophers.
Signatures of the d*(2380) hexaquark in d(γ, p n⃗)
M. Bashkanov, D.P. Watts,∗ S.J.D. Kay, S. Abt, P. Achenbach, P. Adlarson, F. Afzal, Z. Ahmed, C.S. Akondi, J.R.M. Annand, H.J. Arends, R. Beck, M. Biroth, N. Borisov, A. Braghieri, W.J. Briscoe, F. Cividini, C. Collicott, S. Costanza, A. Denig, E.J. Downie, P. Drexler, S. Gardner, D. Ghosal, D.I. Glazier, I. Gorodnov, W. Gradl, M. Günther, D. Gurevich, L. Heijkenskjöld, D. Hornidge, G.M. Huber, A. Käser, V.L. Kashevarov, M. Korolija, B. Krusche, A. Lazarev, K. Livingston, S. Lutterer, I.J.D. MacGregor, D.M. Manley, P.P. Martel, R. Miskimen, E. Mornacchi, C. Mullen, A. Neganov, A. Neiser, M. Ostrick, P.B. Otte, D. Paudyal, P. Pedroni, A. Powell, S.N. Prakhov, G. Ron, A. Sarty, C. Sfienti, V. Sokhoyan, K. Spieker, O. Steffen, I.I. Strakovsky, T. Strub, I. Supek, A. Thiel, M. Thiel, A. Thomas, Yu.A. Usov, S. Wagner, N.K. Walford, D. Werthmüller, J. Wettig, M. Wolfes, N. Zachariou, and L.A. Zana
We report a measurement of the spin polarisation of the recoiling neutron in deuterium photodisintegration, utilising a new large acceptance polarimeter within the Crystal Ball at MAMI. The measured photon energy range of 300 -700 MeV provides the first measurement of recoil neutron polarisation at photon energies where the quark substructure of the deuteron plays a role, thereby providing important new constraints on photodisintegration mechanisms. A very high neutron polarisation in a narrow structure centred around Eγ ∼ 570 MeV is observed, which is inconsistent with current theoretical predictions employing nucleon resonance degrees of freedom. A Legendre polynomial decomposition suggests this behaviour could be related to the excitation of the d * (2380) hexaquark.
I. INTRODUCTION
∗ Electronic address: daniel.watts@york.ac.uk
The photodisintegration of the deuteron is one of the simplest reactions in nuclear physics, in which a well understood and clean electromagnetic probe leads to the breakup of a few-body nucleonic system. However, despite experimental measurements of deuteron photodisintegration spanning almost a century [1], many key experimental observables remain unmeasured. This is particularly evidenced at distance scales (photon energies) where the quark substructure of the deuteron can be excited. This limits a detailed assessment of the reaction mechanism, including the contributions of nucleon resonances and meson exchange currents as well as potential roles for more exotic QCD possibilities, such as the d*(2380) hexaquark recently evidenced in a range of nucleon-nucleon scattering reactions [2][3][4][5][6][7][8][9]. The d*(2380) has inferred quantum numbers I(J^P) = 0(3^+) and a mass ∼ 2380 MeV, which in photoreactions would correspond to a pole at Eγ ∼ 570 MeV. Constraints on the existence, properties and electromagnetic coupling of the d*(2380) would have important ramifications for the emerging field of non-standard multiquark states and our understanding of the dynamics of condensed matter systems such as neutron stars [10].
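As a quick consistency check (a worked number, not taken from the paper, assuming the standard deuteron mass m_d ≈ 1875.6 MeV), the photon energy at which the γd invariant mass reaches the d*(2380) mass follows from s = m_d^2 + 2 m_d Eγ:

Eγ = (M_{d*}^2 − m_d^2) / (2 m_d) ≈ (2380^2 − 1875.6^2) / (2 × 1875.6) MeV ≈ 572 MeV,

consistent with the quoted pole position of Eγ ∼ 570 MeV.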
Although cross sections for deuterium photodisintegration have been determined [11], polarisation observables provide different sensitivities to the underlying reaction processes and are indispensable in constraining the basic photoreaction amplitudes. Of all the single-polarisation variables, the ejected nucleon polarisation (P_y) is probably the most challenging experimentally, requiring the characterisation of a sufficient statistical quantity of events where the ejectile nucleon subsequently undergoes a (spin-dependent) nuclear scattering reaction in an analysing medium. Nucleon polarisation measurements of sufficient quality have therefore only recently become feasible with the availability of sufficiently intense photon beams. Efforts to date have focused on recoil proton polarisation (P_y^p) [12,13], exploiting proton polarimeters in the focal planes of (small acceptance) magnetic spectrometers. The data have good statistical accuracy but with a discrete and sparse coverage of incident photon energy and breakup kinematics [12,13], with most data restricted to a proton polar angle of Θ_p^CM ∼ 90° in the photon-deuteron centre-of-mass (CM) frame. However, these available P_y^p data do exhibit a distinct behaviour, reaching ∼ −1 (i.e. around −100% polarisation), in a narrow structure centred on Eγ ∼ 550 MeV. Due to the inability to describe this behaviour with theoretical calculations including only the established nucleon resonances, it was speculated [12,14] that it would be consistent with a then unknown 6-quark resonance, with inferred properties having a striking similarity to the d*(2380) hexaquark discovered later in NN scattering.
Clearly, measurement of the ejected neutron polarisation (P_y^n) would be important to establish a role for the d*(2380) in photodisintegration. In d*(2380) → pn decays, the spins of the proton and neutron would be expected to be aligned [37]. Therefore, if the P_y^p anomaly originates from a d*(2380) contribution, the neutron polarisation should mimic this anomalous behaviour. Measurements of P_y^n are even more challenging experimentally than P_y^p, due to the inability to track the uncharged neutron into the scattering medium, and have only been obtained below Eγ ∼ 30 MeV [16,17]. The experimental difficulties even led to attempts to extract P_y^n from studies of the inverse reaction n + p → d + γ, using detailed balance [18,19].
This new work provides the first measurement of P_y^n in deuterium photodisintegration for Eγ sensitive to the quark substructure of the deuteron, covering Eγ = 300 − 700 MeV and neutron breakup angles in the photon-deuteron CM frame of Θ_n^CM = 60 − 120°.
II. EXPERIMENTAL DETAILS
The measurement employed a new large acceptance neutron polarimeter [20] within the Crystal Ball detector at the A2@MAMI [22] facility during a 300 hour beamtime. A 1557 MeV longitudinally polarised electron beam impinged on either a thin amorphous (cobalt-iron alloy) or crystalline (diamond) radiator, producing circularly (alloy) or linearly (diamond) polarised bremsstrahlung photons. As the photon beam polarisation is not used to extract P_y^n, equal fluxes from the two linear/circular polarisation settings were combined to increase the unpolarised yield. The photons were energy-tagged (∆E ∼ 2 MeV) by the Glasgow-Mainz Tagger [23] and impinged on a 10 cm long liquid deuterium target cell. Reaction products were detected by the Crystal Ball (CB) [24], a highly segmented NaI(Tl) photon calorimeter covering nearly 96% of 4π steradians. For this experiment, a new bespoke 24 element, 7 cm diameter and 30 cm long plastic scintillator barrel (PID-POL) [25] surrounded the target; it had a smaller diameter than the earlier PID detector [25] but provided similar particle identification capabilities. A 2.6 cm thick cylinder of analysing material (graphite) for nucleon polarimetry was placed around PID-POL, covering polar angles 12° < θ < 150° and occupying the space between PID-POL and the Multi Wire Proportional Chamber (MWPC) [26]. The MWPC provided charged particle tracking for particles passing out of the graphite into the CB. At forward angles, an additional 2.6 cm thick graphite disc covered the range 2° < θ < 12° [25,28].
The d(γ, pn) events of interest consist of a primary proton track and a reconstructed neutron, which undergoes an (n, p) charge-exchange reaction in the graphite to produce a secondary proton which gives signals in the MWPC and CB. The primary proton was identified using the correlation between the energy deposits in the PID and CB using ∆E−E analysis [25], along with an associated charged track in the MWPC. The intercept of the primary proton track with the photon beamline allowed determination of the production vertex, and hence permitted the yield originating from the target cell windows to be removed. Neutron ¹²C(n, p) charge-exchange candidates required an absence of a PID-POL signal on the reconstructed neutron path, while having an associated track in the MWPC and a signal in the CB from the scattered secondary proton. The incident neutron angle (θn) was determined using Eγ and the production vertex coordinates. A distance-of-closest-approach condition was imposed to ensure a crossing of the (reconstructed) neutron track and the secondary proton candidate track (measured with the MWPC and CB). Once candidate proton and neutron tracks were identified, a kinematic fit was employed to increase the sample purity and improve the determination of the reaction kinematics [38], exploiting the fact that the disintegration can be constrained with measurements of two kinematic quantities while three (θp, Tp and θn) are measured in the experiment. A 10% cut on the probability function was used to select only events from the observed uniform probability region [35].
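A minimal sketch of an overconstrained two-body kinematic fit of this kind is given below. It is illustrative only and is not the collaboration's fitter: the assumed resolutions, the pseudo-data and the choice of a one-parameter χ² minimisation over the CM angle are assumptions, and the 10% probability cut is not implemented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Masses in MeV (approximate PDG values)
M_D, M_P, M_N = 1875.613, 938.272, 939.565

def predict_lab_kinematics(e_gamma, cos_theta_cm):
    """Predict lab-frame (theta_p, T_p, theta_n) for gamma d -> p n,
    given the photon energy and the proton CM angle (deuteron at rest)."""
    w2 = M_D**2 + 2.0 * M_D * e_gamma            # invariant mass squared
    w = np.sqrt(w2)
    e_p_cm = (w2 + M_P**2 - M_N**2) / (2.0 * w)  # proton CM energy
    e_n_cm = w - e_p_cm
    p_cm = np.sqrt(e_p_cm**2 - M_P**2)           # common CM momentum
    beta = e_gamma / (e_gamma + M_D)             # CM velocity in the lab
    gamma = (e_gamma + M_D) / w
    sin_cm = np.sqrt(1.0 - cos_theta_cm**2)

    def to_lab(e_cm, pz_cm, pt_cm, mass):
        pz = gamma * (pz_cm + beta * e_cm)
        e_lab = gamma * (e_cm + beta * pz_cm)
        return np.arctan2(pt_cm, pz), e_lab - mass   # polar angle, kinetic energy

    th_p, t_p = to_lab(e_p_cm, p_cm * cos_theta_cm, p_cm * sin_cm, M_P)
    th_n, _ = to_lab(e_n_cm, -p_cm * cos_theta_cm, p_cm * sin_cm, M_N)
    return th_p, t_p, th_n

def kinematic_fit(e_gamma, meas, sigma):
    """One-parameter chi-square fit of the CM angle to the three measured
    quantities (theta_p, T_p, theta_n); 3 measurements - 1 parameter = 2 d.o.f."""
    def chi2(cos_theta_cm):
        pred = predict_lab_kinematics(e_gamma, cos_theta_cm)
        return sum(((m - p) / s)**2 for m, p, s in zip(meas, pred, sigma))
    res = minimize_scalar(chi2, bounds=(-0.999, 0.999), method="bounded")
    return res.x, res.fun

# Pseudo-data generated at cos(theta*) = 0.2 with a small, arbitrary smearing
truth = predict_lab_kinematics(570.0, 0.2)
meas = (truth[0] + 0.01, truth[1] - 2.0, truth[2] - 0.02)
best, chi2_min = kinematic_fit(570.0, meas, sigma=(0.03, 10.0, 0.05))
print(best, chi2_min)
```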
III. DETERMINATION OF NEUTRON POLARISATION
The neutron polarisation was determined through analysis of the neutron-spin-dependent ¹²C(n, p) reactions occurring in the graphite polarimeter. The spin-orbit component of the nucleon-nucleon interaction results in a φ-anisotropy in the produced yield of secondary protons. For a fixed nucleon energy, the secondary proton yield as a function of polar (Θ) and azimuthal (φ) scattering angle can be expressed in the form of equation 1, where P_n is the neutron polarisation and A_y is the analysing power. A_y for free n−p scattering is established for the appropriate energy range in the SAID parameterisation [29]. Differences in the analysing power between the free (n, p) process and the in-medium ¹²C(n, p)X process were established by a direct measurement of A_y for ¹²C(n, p)X by JEDI@Juelich [27]. Above T_n = 300 MeV the measured A_y agreed with the SAID (n, p) parameterisation to within a few %. For lower energies, the influence of coherent nuclear processes, such as ¹²C(n, p)¹²N, resulted in an increased magnitude (around a factor of 2) but exhibiting a similar Θ dependence to the free reaction [28]. The (n, p) analysing power from SAID was corrected accordingly [39]. To reduce systematic dependencies in the simulation of the polarimeter, events were only retained if A_y(np) was above 0.1, the proton scattering angle (Θ) was in the range Θ_p^scat ∈ 15−45°, and Θ_n − Θ_p^scat > 27°, where Θ_p^scat is the polar angle of the scattered proton relative to the direction of the neutron. The latter cut reduced the contribution of secondary protons travelling parallel to the axis of the polarimeter. The scattered yields were corrected for small angle-dependent variations in detection efficiency from the MWPC, established using reconstructed charged particles in the data. The acceptance with the above cuts was determined using a GEANT4 [30] simulation of the apparatus. The yield of scattered events was then corrected for this efficiency and the polarisation extracted according to equation 1.
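The explicit form of equation 1 is not shown above. For a polarimeter of this type the azimuthal yield presumably takes the standard form below; this is an assumption based on the surrounding description (the φ-anisotropy generated by the spin-orbit interaction), not a quotation from the paper:

N(Θ, φ) = N_0(Θ) [ 1 + P_n A_y(Θ) cos φ ],

so that the neutron polarisation is obtained from the amplitude of the cos φ modulation of the secondary-proton yield divided by the effective analysing power.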
To quantify systematic errors in the P_y^n extraction, the analysis cuts were relaxed. This involved widening the cuts on the scattered proton angle and minimum energy (both of which change the MWPC efficiency), reducing the minimum analysing power cut, as well as varying the minimum probability in the kinematic fit up to 40% [28]. The systematic errors are extracted from the resulting variations in the extracted P_y^n, so include significant contributions from the achievable measurement statistics.
The main systematic error arose from variations in the φ-dependent detector efficiencies for the secondary protons in GEANT4, which had increasing influence for the lower nucleon energies. The extracted systematic error in P_y^n was typically around ±0.2 and is presented bin-by-bin with the results in the next section.
IV. RESULTS
The extracted P_y^n are presented as a function of photon energy for a fixed θ_n^CM ∼ 90° bin in Fig. 1. The P_y^n observable was extracted in both a binned (red filled circles) and an unbinned (black dashed line) ansatz. Both methods gave consistent results within the statistical accuracy of the data. At the lower photon energies, in the region of the ∆ resonance, P_y^n is negative in sign, small in magnitude and rather uniform. However, at higher photon energies the P_y^n data exhibit a pronounced and sharp structure reaching ∼ −1 around Eγ ∼ 550 MeV. The new data reveal a striking consistency between P_y^n and the previous P_y^p [12] measurements (blue open circles) in the region of the d*(2380).
The cyan (pink) dashed-dot curves show theoretical calculations of P_y^p (P_y^n) respectively. The model includes meson exchange currents (π, ρ, η, ω) and conventional nucleon resonance degrees of freedom [31]. These calculations reproduce the measured P_y^p and P_y^n in the ∆ region, but fail to describe the pronounced and narrow structure centred around Eγ ∼ 550 MeV for either observable. At the very highest photon energies the model reproduces the trend indicated in the data, towards smaller magnitudes of P_y^n compared to P_y^p, an effect attributed [31] to the N*(1520) resonance having opposite sign for photocoupling to the neutron and proton [34]. Clearly, data at higher Eγ would help to confirm this trend and better constrain the role of this resonance.
The blue (red) solid lines show a simple approximation to include an additional contribution to these theoretical predictions from the d*(2380) hexaquark in P_y^n (P_y^p), taking the established mass and width, and having a magnitude fitted to reproduce the P_y^p data alone. Previous observations of a lack of mixing of the d*(2380) with nucleon resonance backgrounds in the inverted reaction np → d* give some justification to this approximate ansatz [8,9].
The main features of the data in the d*(2380) region, specifically the position and width of the dip evident in both P_y^n and P_y^p, appear consistent with a d*(2380) contribution of a common magnitude for both channels. Such a common magnitude may be expected from a symmetric decay to pn from a particle which does not mix significantly with other (non-d*(2380)) background contributions. Clearly, more detailed theoretical calculations including the d*(2380) in a consistent framework within the model would be a valuable next step, and we hope our new results will encourage such efforts.
The new data set also has sufficient kinematic acceptance and statistical accuracy to provide a first measurement of the angular dependence of P_y^n. For a d*(2380) → pn decay, P_y^n would be expected to exhibit the angular behaviour of the associated Legendre function P_3^1 [40], reaching a maximum at θ_n^CM = 90° with zero crossings at θ_n^CM = 64° and 116°. In Fig. 2, P_y^n is presented as a function of θ_n^CM for two Eγ bins, one in the ∆ region and one in the region of the d*(2380). The P_y^n from the ∆ region show a broadly flat distribution with θ_n^CM. In the d*(2380) region, P_y^n is higher in magnitude and exhibits a distinct angular dependence, with a minimum of ∼ −1 reached at ∼ 90°. There are no previous angular data on P_y^n in this region. However, the sparse data on P_y^p (open points) appear consistent with the new P_y^n measurements in the ∆ region, as predicted by the theoretical model [31].

FIG. 1 (caption, partial): ... [12] (blue open circles) for CM angular bins centred on 90° as a function of photon energy. The result of an unbinned analysis of P_y^n is presented as a black dashed line with the error bars as a grey band. The dashed-dot lines show predictions from Ref. [31] for P_y^p (cyan) and P_y^n (pink). The solid lines show the result of the fit with an additional d*(2380) contribution (see text) for P_y^p (blue) and P_y^n (red). Systematic uncertainties for the P_y^n data are shown by the hatched area.
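The quoted turning point and zero crossings can be checked directly from the associated Legendre function (a consistency check using the Condon-Shortley convention, not a statement from the paper):

P_3^1(cos θ) = −(3/2)(5 cos²θ − 1)√(1 − cos²θ) = 0 for cos θ = ±1/√5, i.e. θ ≈ 63.4° and 116.6°,

with a local extremum at θ = 90°, matching the values given above.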
To quantify the dependence of P_y^n on photon energy and polar angle, we performed an expansion of our results into associated Legendre functions.
The result of this expansion can be seen in Fig. 2, which shows the fitted contributions from a_1 P_1^1 (green line), a_2 P_2^1 (blue line) and a_3 P_3^1 (red line), and the sum of all contributions (black line). The strongly varying angular behaviour in the d*(2380) region is consistent with a sizeable P_3^1 contribution [41]. Figure 3 shows the Eγ dependence of the fitted expansion coefficients. We employ the prescription adopted in Ref. [35] and use two fit methods: (i) a single-energy procedure in which the fit was performed using data from each photon energy bin in isolation (black data points) and (ii) an energy-dependent procedure where the expansion coefficients, a_l, were assumed to vary smoothly from bin to bin (dotted lines with the errors represented by bands). The a_1 coefficient did not show any particular energy-dependent variation, so it was fixed to the value a_1 = −0.3. The extracted coefficients are presented as a function of photon energy in Fig. 3. The energy dependence of the P_3^1 coefficient is consistent with the established mass and width of the d*(2380) hexaquark (M = 2380 ± 10 MeV and Γ = 70 ± 10 MeV), indicating the angular dependence of P_y^n is consistent with a sizeable J = 3 contribution having properties consistent with those of the d*(2380).

FIG. 2 (caption, partial): ... [12], pink circle [32], blue triangles [33] and green circle [13]. The curves are results of the Legendre decomposition (see text): a_1 P_1^1 (green), a_2 P_2^1 (blue), a_3 P_3^1 (red) and their sum (black). Systematic uncertainty is shown as the hatched area.
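A minimal sketch of how such an associated-Legendre decomposition could be carried out is given below. It is illustrative only: the angular bins, measurements and errors are placeholders rather than the published data, and only the option of fixing a_1 = −0.3 mirrors the procedure described above.

```python
import numpy as np
from scipy.special import lpmv   # associated Legendre function P_l^m(x)

def fit_legendre_coefficients(theta_deg, p_y, sigma, fix_a1=None):
    """Weighted least-squares fit of P_y(theta) = sum_l a_l * P_l^1(cos theta)
    for l = 1, 2, 3; optionally fix a_1 (as in the energy-dependent fit)."""
    x = np.cos(np.radians(theta_deg))
    basis = np.column_stack([lpmv(1, l, x) for l in (1, 2, 3)])
    y = np.asarray(p_y, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float)
    if fix_a1 is not None:
        y = y - fix_a1 * basis[:, 0]      # move the fixed a_1 term to the data side
        basis = basis[:, 1:]
    coeffs, *_ = np.linalg.lstsq(basis * w[:, None], y * w, rcond=None)
    return ([fix_a1] if fix_a1 is not None else []) + list(coeffs)

# Placeholder angular bins and measurements (not the published data)
theta = np.array([65.0, 75.0, 85.0, 95.0, 105.0, 115.0])
p_y_meas = np.array([-0.2, -0.5, -0.9, -0.95, -0.6, -0.25])
errors = np.full_like(p_y_meas, 0.15)
a1, a2, a3 = fit_legendre_coefficients(theta, p_y_meas, errors, fix_a1=-0.3)
print(a1, a2, a3)
```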
V. SUMMARY
The recoil neutron polarisation in deuteron photodisintegration has been measured for 300 < Eγ < 700 MeV and photon-deuteron centre-of-mass breakup angles for the proton of 60-120°, providing the first measurement of this fundamental observable at photon energies where the quark substructure of the deuteron can play a role in the mechanism. At lower photon energies, the data are well described by a reaction model which includes meson exchange currents and the known nucleon resonances. At higher photon energies, a narrow structure centred around Eγ ∼ 550 MeV is observed in which the neutrons reach a high polarisation. Such behaviour is not reproduced by the theoretical model and is consistent with the "anomalous" structure observed previously for the recoil proton polarisation [12]. In a simple ansatz the photon energy and angular dependencies of this "anomaly" are consistent with a contribution from the J^P = 3^+ d*(2380) hexaquark.
VI. ACKNOWLEDGEMENT
We are indebted to M. Zurek for providing us with data on n-¹²C analysing powers.
Primary glia expressing the G93A-SOD1 mutation present a neuroinflammatory phenotype and provide a cellular system for studies of glial inflammation
Detailed study of glial inflammation has been hindered by lack of cell culture systems that spontaneously demonstrate the "neuroinflammatory phenotype". Mice expressing a glycine → alanine substitution in cytosolic Cu, Zn-superoxide dismutase (G93A-SOD1) associated with familial amyotrophic lateral sclerosis (ALS) demonstrate age-dependent neuroinflammation associated with broad-spectrum cytokine, eicosanoid and oxidant production. In order to more precisely study the cellular mechanisms underlying glial activation in the G93A-SOD1 mouse, primary astrocytes were cultured from 7 day mouse neonates. At this age, G93A-SOD1 mice demonstrated no in vivo hallmarks of neuroinflammation. Nonetheless astrocytes cultured from G93A-SOD1 (but not wild-type human SOD1-expressing) transgenic mouse pups demonstrated a significant elevation in either the basal or the tumor necrosis factor alpha (TNFα)-stimulated levels of proinflammatory eicosanoids prostaglandin E2 (PGE2) and leukotriene B4 (LTB4); inducible nitric oxide synthase (iNOS) and •NO (indexed by nitrite release into the culture medium); and protein carbonyl products. Specific cytokine- and TNFα death-receptor-associated components were similarly upregulated in cultured G93A-SOD1 cells as assessed by multiprobe ribonuclease protection assays (RPAs) for their mRNA transcripts. Thus, endogenous glial expression of G93A-SOD1 produces a metastable condition in which glia are more prone to enter an activated neuroinflammatory state associated with broad-spectrum increased production of paracrine-acting substances. These findings support a role for active glial involvement in ALS and may provide a useful cell culture tool for the study of glial inflammation.
Introduction
Although the proximal cause of paralysis in amyotrophic lateral sclerosis (ALS) is the death of motor neurons, it is becoming widely accepted that motor neuron death in ALS is not cell autonomous but depends upon active and passive roles for ambient glial cells. The neuron-cell autonomy of ALS pathogenesis has been strongly questioned by a number of studies over the past several years. In work published during 2001, Rouleau et al. created a strain of transgenic mice that express mutant SOD1 specifically in neurons. These mice display no frank pathology even at 1.5 years of age [1]. Caroni's group subsequently reported similar findings [2]. Selective expression of mutant SOD1 only in astroglia, causes a type of astrogliosis but fails to produce motor neuron disease [3] in the absence of simultaneous mutant SOD1 expression in neurons. Nonetheless, Cleveland and colleagues recently showed that the rate of disease progression in mutant SOD1 chimeric mice depends on the extraneuronal expression of mutant SOD1 [4]. The survival of chimeric mice was dependent upon mutant SOD1 expression in neurons, but also highly dependent on the number of ambient mutant SOD1-expressing non-neuronal cells. These studies provide strong incentive to consider glial involvement in ALS.
With the advent of transgenic mouse models for familial amyotrophic lateral sclerosis (FALS), it has become more possible to study inflammatory and autoimmune features of the disease at distinct time points during the progression of the illness. Using the G93A-SOD1 mutant mouse model for ALS, Gurney et al. reported dramatically increased numbers of MHC-II+ microglia and concomitant astroglial activation beginning prior to onset of paralysis and increasing during the paralytic phase [5]. Several recent studies have built upon these early studies by documenting reproducible, age-dependent elaboration of pro-inflammatory cytokines during the onset and progression phases of disease in the G93A-SOD1 mouse [6][7][8][9][10][11]. Tumor necrosis factor-α (TNFα) and its principal receptor TNF-RI are particularly elevated at pre- and post-symptomatic stages of disease [6][7][8][9], suggesting a rationale for the application of this cytokine in cell culture studies of ALS-linked glial activation. The time-course of cytokine up-regulation closely mirrors the time-course of protein oxidative damage, and begins approximately two weeks prior to the point of actual motor neuron death [7,12]. In addition to cytokines and reactive oxygen species, eicosanoids such as PGE2 are elevated, and pharmacological antagonism of PGE2-synthesizing inducible cyclooxygenase (COX-II) improves prognosis in the murine model [13]. Likewise arachidonic acid 5-lipoxygenase (5LOX) is elevated in G93A-SOD1 spinal cords and the 5LOX antagonist nordihydroguaiaretic acid (NDGA) slows disease progression in the ALS mouse [14]. These findings suggest a robust, multi-faceted neuroinflammatory response, antagonism of which may slow the progression of ALS.
In order to better understand the contributions of astroglia to neuroinflammation in the ALS context, and to create a tool for the study of neuroinflammatory signal transduction, primary cortical astrocytes were cultured from neonatal mice bearing G93A-SOD1 mutations. The cells were characterized for their ability to synthesize salient biomolecules including cytokines, eicosanoids, and reactive oxygen species. G93A-SOD1 transgenic astroglia were found to synthesize higher-than-normal levels of TNFα, COX-II, 5LOX, and PGE2 even in the absence of deliberate stimulation. When challenged with TNFα alone or in combination with IFNγ, selective subsets of cytokines were further induced. Leukotriene B4, nitric oxide and protein oxidation increased more markedly in G93A-SOD1 glia challenged with TNFα or interferon-γ (IFNγ) than in similarly treated nontransgenic cells. Expression of high copy numbers of wild-type human SOD1 had no effect or slightly diminished the inflammatory indices. These findings suggest that SOD1 mutations fundamentally alter the phenotype of astrocytes, placing the cells in a metastable condition that is hypersensitive to certain types of ligand-induced activation.
Animals
Mice expressing high copy numbers of human mutant G93A-SOD1 were obtained from Jackson Laboratories (Bar Harbor ME; strain designation B6SJL-Tg(SOD1 G93A)1Gur/J; [15][16][17]). In some control experiments mice were used that express equivalent protein levels of wildtype human SOD1 (B6SJL-TgN-(SOD1 G93A)-2Gur; Jackson Laboratories). Transgenic mice were maintained in the hemizygous state by mating G93A males with B6SJL-TGN females. All animal procedures were approved by the OMRF Institutional Animal Care and Use Committee (IACUC).
Astrocyte culture
Primary mouse neocortical astrocytes were cultured by slight modifications of previously described methods [18] from G93A-SOD1 mice, matched nontransgenic littermates, or wildtype human SOD1-expressing mice. In all cases the cortex was used to maximize astroglial yield. Briefly, the neocortex was removed from 7 day old pups under aseptic conditions and large blood vessels carefully removed. Tissue was rinsed and triturated in cold Ca++/Mg++-free HBSS buffer, then centrifuged at 300 × g for five minutes. The resulting pellet was resuspended in 30 mL of 50% Dulbecco's Modified Essential Medium (DMEM) and 50% F12 media containing 10% heat-inactivated fetal bovine serum, 1% glutamine, and 1% streptomycin and penicillin. The 30 mL suspension was placed into a 75 cm² tissue culture flask. Cells obtained from individual mouse pups were plated in separate flasks. Media was replenished 7 days following the initial plating. Between 6-10 days after initial plating, glia became fully confluent, at which time they were subcultured at a 1:4 dilution into 6- or 24-well plates. In all cases, unless otherwise specified, astroglial cultures were not further subcultured. Furthermore each experiment compared genotype-specific cell responses between parallel cultures of identical passage number, obtained from paired neonatal pups (littermates in the case of G93A-SOD1 +/− and −/− mice). Paired cultures were prepared on the same day and subject to medium changes and manipulations in exactly parallel fashion so as to avoid artifacts arising from differences in medium self-conditioning, clonal selection, or other uncontrolled variables. Specific experiments were conducted deliberately on astroglia that had been subcultured intentionally for up to 5 passages. Statistically significant genotype-specific differences in cytokine-stimulated nitrite production and other variables, as indicated, were maintained at least to the fifth passage in these experiments. Purity of cultures was routinely assessed by immunocytochemistry using fluorescein-conjugated anti-OX-42 antibody (Chemicon, Temecula CA USA) to identify microglia, and rhodamine-conjugated rabbit anti-glial fibrillary acidic protein (GFAP) antibody (Chemicon) to identify astrocytes.
Cytokine treatments
In all experiments cell cultures were stimulated at full confluence (110,000 cells/cm 2 ). Cells were treated with recombinant murine TNFα and/or interferon gamma (IFNγ) (BD Pharmingen, San Diego CA USA) as indicated in specific experiments. Cytokines were predissolved in 4% fatty acid-free bovine serum albumin (BSA) in 0.9% saline at 100-fold working concentration. Vehicle control treatments used 4% BSA: saline only. Because TNFα activity varied somewhat from lot to lot, each lot was pretested to determine the concentration of applied cytokine that would yield a measurable effect within the linear range of cell response. For cytokine treatments, culture medium was replaced with fresh medium. After 2 hours equilibration, cytokines or vehicle were diluted 1:100 into cell culture medium. Viability was routinely assessed by means of tetrazolium reduction assays (Aqueous OneStep ® , Promega, Gaithersburg MD USA).
Ribonuclease protection assays
Multiprobe ribonuclease protection assays (RPAs) were performed as described [14,7,8]. Cells or brain cortices were lysed in TRIzol™ mRNA isolation reagent (Life Technologies, Gaithersburg MD). Total RNA was quantified spectrophotometrically at 260 nm. Panels of mRNA were detected using commercial RPA kits (Riboquant™, Pharmingen, San Diego, CA). Radiolabeled probes were synthesized from DNA templates containing a T7 RNA polymerase promoter (Pharmingen). Templates were transcribed in the presence of 100 µCi [γ-32 P]UTP to yield radioactive probes of defined size for each mRNA. Probes were hybridized with 5-10 µg total RNA, then treated with RNAse A and T1 to digest single-stranded RNA. Intact double-stranded RNA hybrids were resolved on 5% polyacrylamide/8 M urea gels. Dried gels were visualized using a phosphorimager (Molecular Dynamics, Sunnyvale CA) and bands quantified using instrument-resident densitometry software (ImageQuant™, Molecular Dynamics). Within each sample, the density of each apoptosis-associated mRNA band was normalized to the sum of the L32 + GAPDH bands.
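A minimal sketch of the band-density normalisation described above (the normalisation rule follows the text; the band names and densitometry values are hypothetical):

```python
def normalize_rpa_bands(band_densities):
    """Normalise each target band density to the sum of the L32 and GAPDH
    housekeeping bands within the same lane."""
    housekeeping = band_densities["L32"] + band_densities["GAPDH"]
    return {name: density / housekeeping
            for name, density in band_densities.items()
            if name not in ("L32", "GAPDH")}

# Hypothetical densitometry values for one lane
lane = {"TNFa": 1250.0, "IL6": 430.0, "L32": 5100.0, "GAPDH": 4800.0}
print(normalize_rpa_bands(lane))
```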
Eicosanoid assays
PGE2 and LTB4 were measured in cell culture medium using commercially available enzyme-linked immunosorbent assays (ELISAs; Cayman Chemical, San Diego CA USA).
Nitrite assay
Cell culture medium was assayed for NO2− by the Griess assay as described [8]. Samples were mixed 1:1 with a mixture of equal portions of sulfanilamide and naphthylethylenediamine reagents (LabChem, Gaithersburg MD USA). External standards were prepared in fresh cell culture medium. The diazo product was measured spectrophotometrically at 560 nm.
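A minimal sketch of the implied standard-curve calculation (a linear calibration is assumed; the absorbance and concentration values are hypothetical):

```python
import numpy as np

def nitrite_from_absorbance(std_conc_uM, std_a560, sample_a560):
    """Fit a linear standard curve (A560 vs nitrite concentration) and
    interpolate sample concentrations from their absorbance at 560 nm."""
    slope, intercept = np.polyfit(std_conc_uM, std_a560, deg=1)
    return (np.asarray(sample_a560) - intercept) / slope

# Hypothetical external standards prepared in fresh culture medium
standards_uM = [0.0, 5.0, 10.0, 25.0, 50.0]
standards_a560 = [0.02, 0.07, 0.12, 0.27, 0.52]
print(nitrite_from_absorbance(standards_uM, standards_a560, [0.15, 0.33]))
```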
Statistics
Data were evaluated by analysis of variance (ANOVA) followed by post-hoc comparisons to assess genotype-specific differences in particular endpoints amongst nontransgenic, G93A-SOD1+ and human wildtype-expressing glial cell cultures. All analyses were conducted using GraphPad Prism® software (GraphPad, San Diego CA USA).
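The original analyses were performed in GraphPad Prism; a rough open-source analogue of the stated workflow (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) could look like the sketch below, with hypothetical data:

```python
from itertools import combinations
from scipy import stats

def anova_with_bonferroni(groups):
    """One-way ANOVA across genotype groups, then Bonferroni-corrected
    pairwise t-tests, mirroring the described post-hoc strategy."""
    f_stat, p_overall = stats.f_oneway(*groups.values())
    n_comparisons = len(list(combinations(groups, 2)))
    pairwise = {}
    for a, b in combinations(groups, 2):
        t, p = stats.ttest_ind(groups[a], groups[b])
        pairwise[(a, b)] = min(p * n_comparisons, 1.0)   # Bonferroni correction
    return p_overall, pairwise

# Hypothetical nitrite readings (arbitrary units) per genotype
data = {
    "nonTg": [4.1, 3.8, 4.4, 4.0, 3.9, 4.2],
    "G93A-SOD1": [7.9, 8.4, 7.2, 8.8, 7.6, 8.1],
    "WT-hSOD1": [4.0, 3.7, 4.3, 4.1, 3.8, 4.2],
}
print(anova_with_bonferroni(data))
```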
Results
When fresh cortical tissue was excised from G93A-SOD1 and non-transgenic neonatal pups at 7 days of age, no genotype-dependent differences were observed with respect to PGE 2 concentration as measured by ELISA (NonTg = 281 ± 200 pg/mg protein; G93A-SOD1 = 336 ± 134 pg/mg protein; N = 6/group); COX-II expression or 5LOX expression as measured by immunoblot (not shown); or cytokine expression patterns assessed by RPAs (data not shown). Nonetheless cultured astroglia demonstrated clear genotype-dependent differences in these several parameters, as described below.
Primary astrocyte cultures from G93A-SOD1 or nontransgenic mice were almost exclusively astrocytic based on immunocytochemical staining with anti-glial fibrillary acidic protein (GFAP) (not illustrated). In initial cultures, microglia were occasionally evident; however, these cells were not retained throughout multiple serial passages. Both nontransgenic and transgenic cells displayed typical morphological attributes of cultured astrocytes. G93A-SOD1 cells tended to be slightly more elongated than nontransgenic cells though no formal attempt was made to quantify or statistically analyze this feature. There was no discernible difference in rates of tetrazolium reduction amongst the genotypes, under any of the conditions tested. Viability of cells treated with maximum concentrations of stimulatory cytokines (40 ng/mL TNFα plus 50 U/ mL IFNγ) did not differ significantly from that of untreated cells, based on tetrazolium reduction assays, at time points up to 48 hours post-stimulation.
Specific cytokine expression differences occur in G93A-SOD1 astrocytes
Multiprobe RPA methods were used to assess genotype-dependent differences in cytokine-stimulated cytokine expression between nontransgenic and G93A-SOD1 glia. Medium was replaced and cells were stimulated for 4 hours, which was found to represent the approximate peak for TNFα-stimulated cytokine mRNA transcription. A number of observations were evident in these experiments. First, G93A-SOD1 cells demonstrated lower levels of "housekeeping" messages L32 and GAPDH than did non-transgenic, matched cell cultures (Fig. 1). This may reflect a fundamental alteration in mRNA distribution, with an over-expression of "non-housekeeping" genes such that the ratio of L32 and GAPDH to total mRNA is fundamentally skewed in G93A-SOD1+ glial cultures. Thus, when equivalent amounts of message (based on UV absorption of RNA extracts) were loaded onto polyacrylamide gels, the cytokine:housekeeping message ratio could be noticeably affected.

Figure 1. Representative multiprobe ribonuclease protection assay results indicating selective increase in certain cytokine mRNAs in G93A-SOD1 astrocyte cultures, either in the absence of deliberate stimulation (basal condition) or after 4 hours treatment with recombinant murine IFNγ (50 U/mL), TNFα (40 ng/mL) or both. Each lane represents pooled mRNA from 6 wells of cells.
iNOS expression and nitric oxide synthesis are increased in G93A-SOD1 glia
Primary glia cultured from 7 day old G93A-SOD1 or nontransgenic pups were treated with increasing concentrations of TNFα plus or minus IFNγ. As an indicator of nitric oxide production, nitrite was measured in the cell culture medium 48 hours later. Measurable NO2− formation required IFNγ in both non-transgenic and G93A-SOD1 astrocytes, and abundant nitrite production was only observed 48 hours after cytokine stimulation (Fig. 3). Under combined IFNγ and TNFα stimulation, G93A-SOD1 glia produced significantly more NO2− than did nontransgenic cells (Fig. 3). The enhanced NO2− production by G93A-SOD1 glia was maintained through at least 5 serial passages of cell cultures (not illustrated). Elevated levels of iNOS protein could be detected in G93A-SOD1 astrocytes relative to nontransgenic cells (Fig. 3).
G93A-SOD1 astrocytes experience exacerbated protein carbonylation under cytokine challenge
Protein carbonyl accumulation is a well-accepted indicator of oxidative damage [7,20]. Recently biotin hydrazide and similar reagents have been adapted to monitor carbonylation in cell and tissue lysates [7]. The use of biotin hydrazide allows the sensitive detection of oxidized proteins by means of streptavidin conjugates, without resorting to antibody methods that are often hindered by low signal:noise and nonspecific binding artifacts. For these reasons, experiments were undertaken to assess genotype-related differences in glial protein carbonylation through means of the biotin labeling technique.

Figure 2. Comparison of basal and cytokine-stimulated PGE2 and LTB4 production by nontransgenic primary mouse astrocytes, G93A-SOD1 mouse astrocytes, or wild type human SOD1-expressing mouse astrocytes. Insets show western blot analysis of basal COX-II and 5-LOX protein expression. Data bars indicate mean ± SD of 6 wells of cells from a typical experiment. p < 0.05 overall by ANOVA; * indicates specific difference between nontransgenic and G93A-SOD1 cultures assessed by Bonferroni post-hoc tests.
Cells were stimulated with 40 ng/mL TNFα plus 50 U/mL IFNγ, or vehicle, for 48 hours and lysed for carbonyl assessment. The 48-hour timepoint was chosen as a duration of treatment sufficient to induce obvious increases in protein carbonylation within nontransgenic astrocytes. The inclusion of IFNγ was also necessary to ensure this effect. As shown in Fig. 4, G93A-SOD1 glia contained approximately 2-fold greater levels of protein carbonyl than did nontransgenic cells, in the absence of an applied cytokine challenge. After exposure to TNFα + IFNγ, protein carbonyl levels increased in both nontransgenic and G93A-SOD1 cells. Whereas the cytokine-stimulated increase in carbonylation was approximately 2-fold in nontransgenic cells, it was approximately 150-fold in G93A-SOD1 astroglia (Fig. 4; estimates for relative levels of carbonylation were made by repeated serial dilution of the labeled samples). Curiously, no major protein carbonylation band assignable to SOD1 was found in any G93A-SOD1 astrocyte lysates, whereas a major carbonylated protein identifiable as SOD1 was previously demonstrated in spinal cord extracts from symptomatic G93A-SOD1 mice [7,12].
Discussion
The role of astrocytes in paracrine inflammatory networks has become increasingly appreciated in recent years. In this capacity astrocytes likely respond to neural damage, infection, or tumorigenesis in such a way as to modulate necessary innate immune responses. Contrastingly, chronic unremitting neuroinflammation has been widely implicated in diverse neurodegenerative diseases. In murine models of ALS, neuroinflammation is robust, as indicated by broad-spectrum cytokine upregulation plus oxidative stress, astroglial morphological changes, and microglial proliferation [5][6][7][8][9][10][11][12][14]. Aberrations in eicosanoid production, largely mediated by inducible cyclooxygenase-II (COX-II) [13] but perhaps also by 5LOX [14], represent another major component of the neuroinflammatory phenotype that might be amenable to therapeutic intervention. Thus far it has been difficult to separate the cell type-dependent contributions to the neuroinflammatory phenomenon. This limitation has prevented detailed molecular dissection of relevant pathways that are perturbed by the insertion of mutant SOD1 transgenes, and has slowed the development of new therapeutic modalities. The ability to recapture certain aspects of neuroinflammation in primary astrocyte cultures will likely facilitate detailed studies of signal transduction pathways that are sensitive to mutant SOD1.
The findings from the present study corroborate recent reports of cytokine hyper-expression in the CNS of mutant SOD1 mice preceding motor neuron death [6][7][8]. In particular the new data suggest that astrocytes cultured from 7 day old neonates reside in a metastable state that is exquisitely prone to activation, resulting in elevated expression of specific cytokines, upregulation of eicosanoid biosynthetic pathways, and increased oxidant production. The act of plating and culturing the cells seemed sufficient to induce expression of TNFα, COX-II and to a lesser extent 5LOX and iNOS. None of these inflammatory correlates were detectably elevated in cortical tissue extracted directly from the same transgenic neonates. Nonetheless, cultured glia from the same animals showed clear evidence for activation of the respective gene inductive pathways. Thus, glial over-expression of mutant SOD1 (but not wild-type SOD1) elicits a fundamental influence upon multiple gene regulatory pathways.

Figure 3. iNOS protein expression and NO2− formation in cultured nontransgenic or G93A-SOD1+ astrocytes in the basal state and after 48 hours stimulation with recombinant murine TNFα (40 ng/mL) plus IFNγ (50 U/mL). Bars represent mean ± SD from 6 wells of cells in a typical experiment; * p < 0.05 for stimulated G93A-SOD1+ cells relative to correspondingly treated nontransgenic cells, by two-tailed t-test.

Figure 4. Basal and cytokine-stimulated protein carbonylation is increased in G93A-SOD1 astrocyte cultures. Cells were stimulated for 48 hours with 50 U/mL IFNγ plus 40 ng/mL TNFα, lysed in the presence or absence of biotin-LC-hydrazide (+ or − label as indicated), blotted onto a PVDF membrane and probed with streptavidin-conjugated horseradish peroxidase.
One of the most important, unaccomplished necessities in understanding ALS is to elucidate the toxic gain-of-function(s) inherent to SOD1 mutants. In this study we have demonstrated a cellular gain-of-function inasmuch as G93A-SOD1 fundamentally alters the astrocyte response to relevant pro-inflammatory cytokines such as TNFα. Efforts are currently underway to discern the molecular mechanism(s) by which G93A-SOD1 alters glial sensitivity. One likely mode of action is through accumulation of mutant SOD1 within the mitochondrial intermembrane space [21,22], which may facilitate electron transport chain deficits, either directly or indirectly [23][24][25][26]. We have previously documented that mitochondrial inhibitors such as antimycin-A, which disrupt electron transport, are sufficient to stimulate cytokine transcription in primary astrocyte cultures [27]. Thus factors including, but not restricted to, reactive oxygen species may be released from glial mitochondria secondary to accumulation of mutant SOD1. These mitochondria-derived oxidants, lipids and proteins can then act through redox-sensitive mitogen-activated protein kinases [27] or directly upon transcription factors [28] to facilitate gene expression events, thereby plausibly accounting for some of the hypersensitivity inherent to the G93A-SOD1 glial cultures. These concepts deserve closer scrutiny in future research and are under active investigation within our laboratory.
A major question that remains to be answered is whether or not increased cytokine and eicosanoid production in G93A-SOD1 central nervous system tissue actually endangers ambient neurons. Most cytokines, including TNFα and IL6, that we find upregulated in primary glial cultures or in vivo [7,8] exert pleiotropic effects and can be trophic to pure neurocultures. In the presence of microglia, however, these cytokines trigger production of diffusible oxidants and could dysregulate key metabolic pathways, such as the kynurenine pathway, leading to production of excitotoxins (e.g. quinolinic acid) and other paracrine factors that might injure nearby neurons [29]. Research underway in our laboratory aims to address this issue.
Rebound Inverts the Staphylococcus aureus Bacteremia Prevention Effect of Antibiotic Based Decontamination Interventions in ICU Cohorts with Prolonged Length of Stay
Could rebound explain the paradoxical lack of prevention effect against Staphylococcus aureus blood stream infections (BSIs) with antibiotic-based decontamination intervention (BDI) methods among studies of ICU patients within the literature? Two meta-regression models were applied, each versus the group mean length of stay (LOS). Firstly, the prevention effects against S. aureus BSI [and S. aureus VAP] among 136 studies of antibiotic-BDI versus other interventions were analyzed. Secondly, the S. aureus BSI [and S. aureus VAP] incidence in 268 control and intervention cohorts from studies of antibiotic-BDI versus that among 165 observational cohorts as a benchmark was modelled. In model one, the meta-regression line versus group mean LOS crossed the null, with the antibiotic-BDI prevention effect against S. aureus BSI at mean LOS day 7 (OR 0.45; 0.30 to 0.68) inverted at mean LOS day 20 (OR 1.7; 1.1 to 2.6). In model two, the meta-regression line versus group mean LOS crossed the benchmark line, and the predicted S. aureus BSI incidence for antibiotic-BDI groups was 0.47; 0.09-0.84 percentage points below versus 3.0; 0.12-5.9 above the benchmark in studies with 7 versus 20 days mean LOS, respectively. Rebound within the intervention groups attenuated and inverted the prevention effect of antibiotic-BDI against S. aureus VAP and BSI, respectively. This explains the paradoxical findings.
Introduction
Patient colonization [1,2], ICU colonization pressure [3,4] and length of stay (LOS) are each major risk factors for ICU-acquired infection with S. aureus. In European ICU cohorts, S. aureus colonization increases the risk of S. aureus pneumonia by up to 15-fold, with pneumonia onset generally within 14 days following ICU admission [1,2].
Reducing both patient and ICU colonization, through decontamination interventions, would seem logical to prevent S. aureus infections, whether occurring as pneumonia or blood stream infections (BSIs), and multiple potential candidate agents have been tested in various singleton and combination regimens [5,6]. However, the results of numerous studies of decontamination interventions conducted in ICU populations are unclear at four levels [7,8].
Thirdly, there is the unresolved issue of spillover effects to concurrent patients within the ICU not receiving decontamination, which might also 'drive' event rates [17][18][19]. This is especially concerning for high quality studies of decontamination interventions with concurrent controls [23].
Finally, decontamination interventions presumably mediate their prevention effects against S. aureus infections by targeted decolonization of S. aureus. Hence, modelling of S. aureus colonization would provide more accurate insights into the various 'drivers' within these studies, particularly where the prophylactic measures do not eradicate colonizing staphylococci completely.
There are four objectives here. Firstly, to recapitulate indicative prevention effect size estimates for antibiotic-BDI versus other interventions against both overall VAP and overall BSI, as well as S. aureus VAP and S. aureus BSI end points within the literature [9][10][11][12][13][24][25][26][27][28][29][30][31][32][33][34][35][36][37]. Secondly, to determine whether the study level prevention effect sizes vary with study LOS. The third objective is to detect rebound in the incidence of S. aureus VAP and S. aureus BSIs within these studies versus that within ICU patient cohorts and within study arms not exposed, either directly or indirectly, to decontamination interventions. To this end, meta-regression and structural equation modelling of S. aureus colonization are each applied to the data from studies in the broader literature, which serve, 'in toto', as a natural experiment of various exposures. The fourth objective is to triangulate the findings here with the previous effect size summaries for antiseptic- and antibiotic-BDIs within the literature.
Characteristics of the Studies
Of the 282 studies identified by the search, 61 of 139 intervention studies were found in ten Cochrane or other systematic reviews [9][10][11][12][13][24][25][26][27][28][29][30][31][32][33][34][35][36][37]. Most studies were published between 1990 and 2010 and most had a mean ICU-LOS exceeding ten days. Fourteen studies of infection prevention interventions had more than one type of intervention group and fourteen studies had either more than one or no control group. Most groups from observational studies had more than 150 patients per group versus less than 150 patients in the groups of the interventional studies. Most studies originated from either North American or European ICUs. Both S. aureus VAP and S. aureus BSI incidence data were available from 32 studies, while only S. aureus BSI incidence data were available from 34 studies and only S. aureus VAP incidence data from the remaining 215 studies (Table 1).
Study characteristics

Table 1 footnotes (study listings are given in Tables S1-S3): a. Note that several studies had more than one control and/or intervention group; hence, the number of groups does not equal the number of studies. b. Studies that were sourced from 16 systematic reviews (references in web-only supplementary). c. Study originating from an ICU in Canada or the United States of America. d. Studies for which less than 90% of patients were reported to receive >48 h of MV. e. Use of protocolized parenteral antibiotic prophylaxis (PPAP) for control group patients. f. NA = not applicable. g. Data are median and inter-quartile range (IQR). h. Effect sizes for total VAP and BSI prevention include data from all studies for which this effect size could be calculated, whether or not S. aureus VAP or BSI data were available.
Indicative Effect Sizes
The indicative study-specific and summary effect sizes for the three categories of intervention against S. aureus VAP and S. aureus BSI are presented as caterpillar plots (Figures S1-S3). Significant summary prevention effects against both overall VAP and against S. aureus VAP were evident for all three categories. A summary prevention effect against overall BSI was apparent for the antiseptic-BDI and antibiotic-BDI categories, but no category demonstrated a summary prevention effect against S. aureus BSI (Table 1). Of note, of the 25 studies of antibiotic-BDI against S. aureus BSI, the study-specific odds ratios were <0.5 for seven and >2.0 for six (Figure S3).
Prevention Effect Size Meta-Regression versus LOS
The relationships between S. aureus VAP prevention effect sizes and group mean LOS for the three categories of interventions are presented as meta-regressions (see Table S5; Figures S4–S11). In these models, all three categories of intervention demonstrated significant prevention effects against S. aureus VAP at the day 7 intercept, but in each case the effect attenuated towards the null in association with increasing group mean LOS.
The relationships between S. aureus BSI effect sizes and group mean LOS for the three categories of interventions are presented as meta-regressions (see Table S5 and Figures S4–S11). In these models, only the antibiotic-BDI category demonstrated significant prevention effects against S. aureus BSI at the day 7 intercept. For all three categories, there was attenuation of prevention effects against S. aureus BSI (manifested as a positive slope coefficient) in association with increasing group mean LOS which, in the case of the antiseptic-BDI (Figure 1a) and antibiotic-BDI (Figure 1b) categories, crossed the null. Consequently, the predicted antibiotic-BDI prevention effect against S. aureus BSI at day 7 (OR 0.45; 0.30 to 0.68) inverts at day 20 (OR 1.7; 1.1 to 2.6).
S. aureus VAP and BSI Incidence versus LOS
The incidence of S. aureus VAP and S. aureus BSI across all categories varied in each case by >100-fold. The S. aureus VAP and S. aureus BSI data versus group mean LOS, as derived from the meta-regression models for each category in comparison to the benchmark groups, are displayed in Tables 2 and S6 and Figures 2, S10 and S11.
The day 7 predicted S. aureus VAP per 100 patients for the antiseptic-BDI and antibiotic-BDI intervention groups were each approximately 1.5 percentage points below that predicted for benchmark groups, whereas the day 20 predicted S. aureus VAP for the antibiotic-BDI and antiseptic-BDI intervention groups were each 1.5 to 2.5 percentage points (p = NS) above that predicted for benchmark groups.By contrast, the predicted S. aureus VAP for antibiotic-BDI control groups was approximately 4 percentage points above that predicted for benchmark groups at both day 7 and day 20 (Table 2).
The day 7 predicted S. aureus BSI per 100 patients for the antibiotic-BDI intervention groups was 0.4 percentage points below that predicted for the benchmark groups whereas the day 20 predicted S. aureus BSI for the antibiotic-BDI intervention groups was >2 percentage points above that predicted for not only the benchmark groups (being double that of the benchmark groups) but also that predicted for all other control and intervention groups (Table 2).
GSEM Modelling of S. aureus Colonization
The GSEM model (Figure 3 and Table 3) is based on the postulated model of causation wherein S. aureus colonization, a latent variable in the model, 'drives' the count of S. aureus VAP and S. aureus BSI. S. aureus colonization in turn is 'driven' by increasing LOS, membership of a concurrent control group (in response to spillover), and origin from a trauma ICU as positive factors, and by exposure to antibiotic-BDI and antiseptic-BDI as negative factors. Adding the interaction terms between increasing LOS and exposure to antiseptic- and antibiotic-BDI, which represent rebound, lowers the Akaike Information Criterion (AIC), which indicates improvement to the model.
Discussion
In this analysis of the results from 282 studies of three broad categories of infection prevention interventions, there were four objectives. Firstly, in contrast to the non-decontamination interventions, both antiseptic-BDI and antibiotic-BDI have strong prevention activities against both overall and S. aureus VAP, and indeed overall BSI, but neither has activity in preventing S. aureus BSI in summary estimates. The indicative summary prevention effect size estimates here recapitulate previous estimates in the literature based on fewer studies (Table 4). Secondly, attenuation of prevention activity is generally apparent across all categories of intervention and for both S. aureus infection end points in association with increasing mean study LOS. However, antibiotic-BDI prevention activity against S. aureus BSI attenuates, crosses the null and inverts in studies with LOS > 14 days. This inversion could account for the otherwise paradoxical finding of an apparent lack of effect of antibiotic-BDI against S. aureus BSI in the summary effect size estimates derived across all studies as generally reported (Table 4).
Thirdly, the S. aureus BSI incidence rebounds among the antibiotic-BDI groups with LOS > 14 days to exceed that predicted for not only benchmark but also all other categories of the control and intervention groups (Figure 2).This rebound underlies the inversion of the antibiotic-BDI prevention effect against S. aureus BSI in studies with mean LOS > 14 days.There are too few studies of antiseptic-BDI to be able to determine whether or when rebound might occur and whether prevention effect inversion occurs.
The postulated prevention effect of antibiotic-BDI (appearing in the model as TAP) against S. aureus colonization is affirmed within the GSEM, together with the confounding effects of group mean LOS and rebound (appearing as an interaction term between LOS and antibiotic-BDI appearing as TAP).Rebound is a positive 'driver' in this model.
Finally, the findings here can account for rebound and spillover resulting from antibiotic-and antiseptic-BDI and yet the summary effect size estimates derived here are consistent with previous published effect size summaries (Table 4).This triangulation reconciles an outstanding paradox in the literature.
That antibiotic-BDIs should have strong prevention effects against overall VAP, overall BSI and S. aureus VAP in summary estimates, and yet not against S. aureus BSI, is paradoxical (Table 4; Figure S3). This paradox is also reflected in the wide range of study-specific effect sizes of three large individual studies included in this analysis, in which decreased S. aureus BSI was [38] or was not observed [39,40] where similar antibiotic-BDI regimens were used. Moreover, in one large study (>2000 patients per arm) of an antiseptic-BDI regimen [39], the S. aureus BSI incidence in the chlorhexidine arm (1.2%) was double that of the control arm (0.6%). Of further note, a narrow-spectrum anti-S. aureus monoclonal antibody failed to prevent S. aureus BSI or S. aureus VAP in a large RCT with a mean LOS ~20 days, contrary to the results of preclinical studies [41].
Moreover, antibiotic-BDI intervention groups have higher day 20 incidences of S. aureus BSI than control groups (Table 2) among studies which, overall, appear to indicate prevention of S. aureus VAP (Table 4).This observation is profoundly paradoxical and represents the effect of rebound which is more apparent for S. aureus BSI than for S. aureus VAP (Figures S8 and S9).
Rebound infection on premature withdrawal of antibiotic-BDI had been noted among patients neutropenic following cytotoxic chemotherapy in hematology units in the 1970s.These severe and occasionally fatal infections were observed in patients who had prematurely discontinued antibiotic-BDI due to its intolerable taste.Rebound sepsis has been noted following hospital discharge among patients exposed to antibiotic therapy considered high risk for causing microbiome disruption [42].
Rebound may be imperceptible without specific surveillance for colonization and infections on withdrawal of decontamination interventions [20–22]. Rebound following antibiotic-BDI discontinuation and ICU discharge manifests as a 50% (non-significant) increased risk of hospital-acquired infection [22]. Rebound of ceftazidime-resistant Gram-negative bacteria may occur as a 'whole of ICU' phenomenon not limited to the antibiotic-BDI recipients, persisting as an ecological effect for several months after antibiotic-BDI withdrawal [20].
The use of PPAP within some concurrent control groups may have modified the rebound and spillover effects from the intervention groups within some of the antibiotic-BDI studies.
Increased rates of S. aureus and other Gram-positive isolates were noted in the five years following the introduction of antibiotic-BDI into Dutch and Austrian ICUs [15,16]. Any ecological effect following antibiotic-BDI withdrawal will likely contribute to the spillover effect [43]. Whether reversible antibiotic tolerance induced in S. aureus by exposure to colistin, a common constituent of antibiotic-BDI regimens, contributes to this emergence of Gram-positive isolates is unclear. This tolerance will be undetectable by standard susceptibility testing [44].
The high S. aureus VAP and S. aureus BSI incidences within the concurrent control groups of antibiotic-BDI studies, noted here as a spillover effect and previously reported for several end points [8,17–19,23,43,45,46], would conflate the apparent prevention effect.
For antiseptic-BDI, the results of prevention studies may differ depending on the end point of interest, the patient risk category and LOS.For example, pneumonia prevention is noted among cardiac ICU patient populations, with short LOS, whereas possible harm occurs with medical ICU patient populations, in association with long LOS [13,14].
Limitations
Several limitations should be considered. In meeting objective one, the indicative summary estimates derived here are comparable with those elsewhere in the literature (Tables 1 and 4). Given the uncertain amount of spillover effect in these concurrent controlled decontamination studies, achieving unbiased patient-level effect estimates will be elusive, and the estimates here are therefore designated as indicative. In meeting the other objectives, group-level estimates were derived from broadly selected studies from the literature, not limited to those that were randomized, blinded or even controlled. Hence, the literature search has been opportunistic rather than systematic. By using existing systematic reviews as a starting point, the key interventions can be readily identified and classified.
In meeting objective two, there is considerable heterogeneity in the interventions, populations, modes of VAP diagnosis, study quality and study designs among studies published over several decades included in the analysis here with no ability to adjust for underlying patient risk.Hence, these effect estimates are also considered 'indicative' and primarily relate to the group level rather than the patient level of analysis.The definitions of VAP used in various studies vary and this further adds to the heterogeneity for endpoints related to VAP incidence.
In meeting objective three, the use of broad inclusion criteria was intentional as this objective required estimating the incidence of S. aureus infections in cohorts with various exposures to interventions, spillover and LOS.This requires a benchmark derived from non-interventional (observational) cohorts for comparison.Studies with a non-concurrent control (NCC) design are also of great interest as they will be free from any spill-over effect from any intervention concurrently under study in the ICU, in contrast to the case in studies where control and intervention groups are concurrent.NCC studies are usually excluded from Cochrane reviews which rate these studies with lower quality scores.However, the potential effects of spillover and rebound are not recognized in the scoring of study quality.
Of note, there were no obvious individual study result outliers driving the overall findings despite the heterogeneity among the results of studies included in accord with broad selection criteria.
Mean LOS and even median LOS are crude measures of group-level exposure of each group to the ICU context and exposure to the infection prevention interventions in the intervention groups.The analysis is ecological, and the estimates relate to the impacts of antiseptic-BDI and antibiotic-BDI on ICU patient cohorts.Of note, even cohorts with short mean LOS will contain patients with long LOS and vice versa.The associations for group-wide exposures may not equate to associations at the patient level of exposure.
This analysis is unable to determine either the duration of S. aureus spillover as a driver or the relationship between the timing of the S. aureus rebound and the cessation of the antibiotic- or antiseptic-BDI, which for many studies was unclear.
Many studies of decontamination interventions will have been underpowered to adequately assess key safety end points. Moreover, none were able to assess whatever microbiome interactions might underlie the bacteremia prevention effect [19].
The GSEM is a group-level modelling of the latent variable, Staphylococcus aureus colonization, within a postulated model of causation which can accommodate all studies and both infection end points.This latent variable and the coefficients derived in the GSEM are indicative only.They have no counterpart at the level of any one patient or study.They indicate the propensity for invasive infection arising, by whatever mechanism, from colonization as a latent construct rather than colonization measured by its presence and density.
Finally, there is the potential for publication and reporting bias as studies finding a significant effect size are more likely to be published.Whilst S. aureus infection incidences were rarely the primary end point, those that have found significant differences for S. aureus infection incidences may be more likely to report these findings as secondary end points than if the differences were neutral or negative.Whilst the possibility of competing risks in estimating S. aureus infections, such as early exclusion due to ICU discharge or mortality, would likely be similar for the studies regardless of study intervention, the development of VAP would be expected to prolong the ICU stay, and this is likely reflected in the higher LOS among studies of antibiotic-BDIs.
Conclusions
Antibiotic-BDIs appear to prevent S. aureus VAP and S. aureus BSI, although this effect varies with mean study LOS.The prevention effect of antibiotic-BDIs against S. aureus VAP attenuates in association with group mean LOS.The prevention effect of antibiotic-BDIs against S. aureus BSI inverts in studies with mean LOS > 14 days due to rebound within the intervention groups.This rebound would explain the paradoxical findings in the literature.
Materials and Methods
Being an analysis of published work, ethics committee review of this study was not required.
The inclusion criteria were cohorts of patients requiring prolonged (>24 h) ICU stay for which either or both incidence proportions of S. aureus VAP or S. aureus BSIs and either mean or median LOS were reported.Where possible, data were extracted for each identifiable sub-cohort representing different patient types or observation eras from the studies.
The studies were classified into four broad groups: non-decontamination interventions, antiseptic-BDI, antibiotic-BDI, and observational studies without an intervention under study. The fourth category served as the benchmark category in the meta-regression of S. aureus VAP or S. aureus BSI incidence. These observational studies were screened to exclude any in which an infection prevention intervention was under study.
Non-decontamination interventions were studies of various approaches to the control of upper gastro-intestinal tract colonization through various stress ulcer prevention or feeding approaches and various approaches to control airway colonization through airway management.
Antiseptic-BDI included use of agents such as chlorhexidine, povidone-iodine and iseganan.All antiseptic exposures were included regardless of whether the application was to the oropharynx, by toothbrushing or by body-wash.Antibiotic-BDI is the use of topical antibiotic prophylaxis (TAP) to the oropharynx or stomach without regard to the specific antibiotic constituents or whether protocolized parenteral antibiotic prophylaxis (PPAP) was used in addition as part of the antibiotic-BDI regimen.Note that mupirocin appears as a component within arms of one antiseptic-BDI and two antibiotic-BDI studies.The use of antibiotic therapy outside of that dictated by study protocols has not been factored into this study.
The inclusion criteria were deliberately broad without regard to the frequency or duration of intervention under study or any criteria of study quality.
Outcomes of Interest
The independent variable in the regression models was the mean length of ICU stay (LOS). If this was not available, the median LOS or the mean or median duration of mechanical ventilation (MV) was used. The S. aureus VAP, S. aureus BSI and LOS data were all derived from the original publications. These data required transformation. The first two, being incidence proportions, were logit transformed. The S. aureus VAP incidence proportion is the proportion with S. aureus VAP using the number receiving prolonged (>24 h) MV as the denominator. The S. aureus BSI incidence proportion is the proportion with S. aureus BSI using the number of patients with prolonged (>24 h) ICU stay as the denominator. The LOS data are positively skewed and were transformed for all analyses as follows: any LOS < 4 days was truncated to 4 days; the LOS was divided by 7 and then log transformed. With this transformation, all model intercepts equated to a group mean LOS of 7 days.
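For illustration, the sketch below applies these two transformations (the LOS truncation, division by 7 and log transform, and the logit transform of an incidence proportion) in Python; the function names and the example group are hypothetical and are not taken from the study data.

```python
import numpy as np

def transform_los(los_days):
    """Truncate LOS below 4 days to 4, divide by 7, then log-transform,
    so that a group mean LOS of 7 days maps to 0 (the model intercept)."""
    los = max(float(los_days), 4.0)
    return np.log(los / 7.0)

def logit_incidence(events, n):
    """Logit-transform an incidence proportion (events / n).
    Groups with zero events would need a continuity correction (not shown)."""
    p = events / n
    return np.log(p / (1.0 - p))

# Hypothetical example group: 6 S. aureus VAP events among 120 ventilated
# patients, with a group mean ICU LOS of 14 days.
x = transform_los(14)        # log(2), about 0.693
y = logit_incidence(6, 120)  # logit of 0.05, about -2.94
print(round(x, 3), round(y, 3))
```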
Summary Effect Size
Indicative summary prevention effect sizes, versus each of S. aureus VAP and S. aureus BSI, for each category were derived from all studies regardless of whether the intervention was randomly assigned and whether study blinding was achieved.The summary prevention effect sizes versus overall VAP and overall BSI were also derived from the same studies where available.
The study-specific and summary prevention effect sizes of each of the three broad categories of intervention toward the prevention of S. aureus VAP and S. aureus BSI incidence were generated by random effects using the meta-analysis command in Stata (Stata 18, College Station, TX, USA) [47].
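The pooling itself was done with Stata's meta-analysis commands. As an illustrative stand-in only, a DerSimonian–Laird random-effects pooling of study-level log odds ratios can be sketched in Python as below; the input effect sizes and variances are hypothetical.

```python
import numpy as np

def dersimonian_laird(log_or, var):
    """Random-effects pooled odds ratio from study log ORs and their variances."""
    log_or, var = np.asarray(log_or, float), np.asarray(var, float)
    w = 1.0 / var                                  # fixed-effect (inverse-variance) weights
    theta_fe = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - theta_fe) ** 2)       # Cochran's Q
    df = len(log_or) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_re = 1.0 / (var + tau2)                      # random-effects weights
    theta = np.sum(w_re * log_or) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return np.exp(theta), np.exp(theta - 1.96 * se), np.exp(theta + 1.96 * se)

# Hypothetical study-level log odds ratios and their variances
or_hat, lo, hi = dersimonian_laird([-0.7, -0.2, 0.1, -0.5], [0.09, 0.04, 0.12, 0.06])
print(f"pooled OR {or_hat:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```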
Meta-Regression Model 1: Study Effect Size versus LOS
Models of the relationship between study-specific prevention effect sizes versus log transformed LOS of each of the three broad categories of intervention toward the prevention of S. aureus VAP and S. aureus BSI incidence were generated by meta-regression.The estimated effect sizes for studies with mean LOS of 7 and 20 days were derived post estimation using the margins command in Stata.
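As a simplified illustration of this model, the sketch below fits an inverse-variance-weighted regression of study log odds ratios on the transformed LOS and then predicts the effect size at day 7 and day 20, analogous to Stata's margins post-estimation. It omits the between-study variance component of a true random-effects meta-regression, and all input values are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical study-level data: log OR, its variance, and group mean LOS (days)
log_or = np.array([-0.9, -0.6, -0.1, 0.3, 0.5])
var    = np.array([0.10, 0.08, 0.12, 0.09, 0.15])
los    = np.array([6.0, 8.0, 12.0, 18.0, 25.0])

x = np.log(np.maximum(los, 4.0) / 7.0)   # same LOS transform as in the text
X = sm.add_constant(x)                   # intercept = predicted effect at LOS 7 days

# Inverse-variance weighted regression as a simple stand-in for meta-regression
fit = sm.WLS(log_or, X, weights=1.0 / var).fit()
b0, b1 = fit.params
for day in (7, 20):                      # post-estimation predictions
    pred = b0 + b1 * np.log(day / 7.0)
    print(f"day {day}: predicted OR {np.exp(pred):.2f}")
```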
Meta-Regression Model 2: S. aureus VAP and S. aureus BSI Incidences versus LOS
Models of the relationship between logit transformed S. aureus VAP and S. aureus BSI incidence proportion versus log transformed LOS were generated using the meta-regress command in Stata.Scatter plots also were generated to facilitate a visual summary.In the scatter plots, the linear regression derived for each of the transformed S. aureus VAP and S. aureus BSI incidence versus the transformed LOS among observational cohorts were used as the respective benchmarks.The estimated S. aureus BSI and S. aureus VAP incidence for studies with a mean LOS of 7 and 20 days were derived post estimation using the margins command in Stata.
Generalized Structural Equation Modelling
Generalized structural equation modelling (GSEM) methods are an extension of SEM methods applied to count data.In the GSEM models, the VAP and BSI incidence proportion data serve as the measurement components, the group-level exposure parameters serve as the indicator variables and S. aureus colonization, being represented as a latent variable, links the indicator and measurement components in the model.In these models the antibiotic-BDI are factorized into TAP and PPAP components.The GSEM methodology used here is described in detail in previous publications [19,48].
Study identifiers were used in the models to enable the generation of robust variance covariance matrices of the coefficient estimate parameters of observations clustered by study.The 'GSEM' command in Stata (Stata 18, College Station, TX, USA) was used.
Availability of Data and Materials
All data generated or analyzed during this study are included in this published article and its supplementary information files (see ESM).
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/antibiotics13040316/s1. Table S1: Observational studies (benchmark groups); Table S2: Studies of non-decontamination-based methods of VAP prevention; Table S3: Studies of topical antiseptic-based methods of VAP prevention; Table S4: Studies of antibiotic-based methods of VAP prevention; Table S5: Meta-regression models of prevention effect size versus LOS; Table S6: Meta-regression models of S. aureus infection incidence versus LOS; references cited in the supplement (with duplicates) as S1–S283; Figure S1: S. aureus VAP prevention effect sizes, non-decontamination interventions; Figure S2: S. aureus VAP prevention effect sizes, decontamination interventions; Figure S3: S. aureus BSI prevention effect sizes, all studies; Figure S4: Meta-regression of S. aureus VAP prevention effect sizes, non-decontamination studies; Figure S5: Meta-regression of S. aureus BSI prevention effect sizes, non-decontamination studies; Figure S6: Meta-regression of S. aureus VAP prevention effect sizes, antiseptic studies; Figure S7: Meta-regression of S. aureus BSI prevention effect sizes, antiseptic studies; Figure S8: Meta-regression of S. aureus VAP prevention effect sizes, antibiotic studies; Figure S9: Meta-regression of S. aureus BSI prevention effect sizes, antibiotic studies; Figures S10 and S11: S. aureus VAP incidence among observational cohorts and NCC groups; Figure S12: GSEM of postulated model.
Figure 1. Meta-regression (red line with shaded 95% confidence intervals) of effect size (as log odds ratio, with individual studies appearing as blue circles with size proportional to the inverse variance) of antiseptic-BDI (a) and antibiotic-BDI (b) in preventing S. aureus BSI versus group mean LOS (logarithmic scale). Meta-regression plots for the non-decontamination category of interventions in preventing S. aureus BSI (Figure S5) and all three intervention categories in preventing S. aureus VAP are presented in the online supplement as figures (Figures S4, S6 and S8) and as a summary table (Table S5).
Figure 2. Meta-regression of S. aureus VAP (○, broken regression line) and BSI (▲, solid regression line) incidence among groups of ICU patients within studies of various prevention interventions versus group mean LOS (logarithmic scale). The dotted horizontal lines at 1 and 3 percent are the day 7 and day 20 intercepts for the S. aureus BSI (as derived in Figure S11) and the dotted horizontal lines at 3 and 7.6 percent are the day 7 and day 20 intercepts for the S. aureus VAP (as derived in Figure S10), each as derived in the meta-regression model for observational groups. The regression model coefficients are presented in Table S6. The NCC (non-concurrent control) category includes control groups that, being non-concurrent to antibiotic-BDI or antiseptic-BDI groups, were not exposed to spillover effects from these interventions. Note that the y-axis is a logit scale and the x-axis is the group mean LOS transformed to a logarithmic scale after dividing by 7 such that the model intercept values correspond to group mean day 7 estimates as derived in the meta-regression models.
Table 3 footnotes (see also Figure 3 and Figure S12). c. v_sr_n is the count of Staphylococcal VAP; b_sr_n is the count of Staphylococcal bacteremia; and Staph col is Staphylococcal colonization (a latent variable). d. PPAP is the group-wide use of protocolized parenteral antibiotic prophylaxis; non-D is a non-decontamination intervention; MVP90 is use of mechanical ventilation by more than 90% of the group for >24 h; TAP is topical antibiotic prophylaxis. e. The effect of concurrency to TAP use equates to spillover. f. Rebound equates to the interaction term between LOS and exposure to antibiotic- (1.tap#c.lnlos7) or antiseptic-BDI (1.a_S#c.lnlos7). g. LnLOS7 is length of stay transformed by dividing by 7 and logarithmic transformation. h. Trauma50 is an indicator variable for those groups with a majority of patients admitted for trauma.
Figure 3. GSEM of Staphylococcus colonization. Staphylococcus col (oval) is a latent variable representing Staphylococcus colonization. The variables in rectangles are binary predictor variables representing the group-level exposure to the following: trauma ICU setting (trauma50), mean or median length of ICU stay transformed by dividing by 7 and logarithmic transformation (lnLOS7), exposure to a topical antiseptic-BDI (a_S), exposure to an antibiotic-BDI (tap being topical antibiotic prophylaxis as a component of antibiotic-BDI), exposure to a non-decontamination-based prevention method (non-D), exposure to protocolized parenteral antibiotic prophylaxis (ppap) and more than 90% of the cohort receiving mechanical ventilation (mvp90). In this model, the effect of spillover equates to the effect of concurrency of a control group with an antiseptic- or antibiotic-based BDI intervention group (CC), and the effect of rebound equates to the interaction term between LOS and exposure to antibiotic- (1.tap#c.lnlos7) or antiseptic-BDI (1.a_S#c.lnlos7). Note that the model factorizes exposures from compound regimens (e.g., SDD and SOD, which combine topical antibiotic prophylaxis [TAP] with or without PPAP as an antibiotic-BDI) into singleton TAP and PPAP exposures. The circle represents the model error term. The three-part boxes represent the binomial pro-
Funding: This research has been supported by the Australian Government Department of Health and Ageing through the Rural Clinical Training and Support (RCTS) program.
Table 1. Characteristics of studies a.
Table 2. Predicted S. aureus infection incidence per 100 patients from meta-regression models versus LOS a.
a. Predicted S. aureus VAP and S. aureus BSI incidences derived from meta-regression models as presented in Figure 2 (as the broken and solid regression lines, respectively). Table S6 contains the slope and intercept coefficients of these models.
Table 4. Comparison with previous prevention effect size estimates.
Intensification of Diabetes Medications at Hospital Discharge and Clinical Outcomes in Older Adults in the Veterans Administration Health System
Key Points
Question: What is the association between intensification of outpatient diabetes medications at hospital discharge and clinical outcomes in older adults hospitalized for common medical conditions?
Findings: In this cohort study of 5296 propensity-matched older veterans with diabetes who were hospitalized for common medical conditions, discharge with intensified diabetes medication was associated with an increased risk of severe hypoglycemia within 30 days and was not associated with a reduction in severe hyperglycemia events or hemoglobin A1c level at 1 year.
Meaning: These findings indicate that short-term hospitalization may not be an effective time to intervene in long-term diabetes management.
Introduction
Modification of older adults' home medications during short-term hospitalization is common.
Changes to home medications may be temporary in response to acute illness or may reflect planned changes to management of chronic disease. [1][2][3][4][5] A particularly common scenario is adjustment of diabetes medications. 5,6 During hospitalization for acute illness, older adults with diabetes may experience fluctuating blood glucose control, driven by changes in eating patterns, medication exposures, and catecholamine surges. As a result, hospitalization is a high-risk time for serious hypoglycemia and hyperglycemia events. [7][8][9][10][11] Practice guidelines advise broader ranges for inpatient blood glucose levels than are recommended in the outpatient setting and recommend stopping the use of home oral agents and initiating the use of short-term insulin in many clinical scenarios. [12][13][14] Safe diabetes management for older adults requires careful balancing of the short-term risks of medication-induced hypoglycemia [15][16][17] with the long-term benefits of blood glucose control. 18,19 Transient elevations in blood glucose during hospitalization likely have little long-term significance, yet these increases commonly precipitate intensification of home regimens, including discharging patients with new insulin or oral agents. 5 For patients with uncontrolled diabetes, hospitalization could be an opportune time to address hyperglycemia and set patients on the path toward improved chronic disease control. However, the posthospitalization period is also a particularly high-risk time for adverse drug events and medication errors. Although clinical trials and guidelines have helped inform long-term diabetes treatment strategies for ambulatory older adults, this evidence does not reflect the perihospitalization period, during which time older adults have an increased susceptibility to adverse drug events 20,21 and may not be generalizable to hospitalized older adult populations, who face greater frailty and more limited life expectancy than clinical trial participants. 22,23 Because the clinical outcomes associated with diabetes medication intensifications made at hospital discharge are unknown, we conducted a retrospective cohort study of older adults with diabetes who were hospitalized in the national Veterans Health Administration (VHA) health system for common noncardiac conditions. We evaluated the association between intensification of home diabetes medications at hospital discharge and postdischarge outcomes, including severe hypoglycemia and hyperglycemia events, mortality, hemoglobin A1c (HbA1c) control at 1 year, and persistent use of discharge medications at 1 year after discharge.

The study included patients discharged to the community setting (eFigure 1 in the Supplement). Diabetes was defined by the presence of 2 outpatient diagnoses or any hospital discharge diagnosis of diabetes in the 2 years that preceded the index hospitalization using previously validated algorithms. 24,25 Because diagnosis-based algorithms may capture patients with a history of diabetes or currently being evaluated for diabetes, to enhance specificity we examined only patients who were taking a diabetes medication before hospitalization or had an HbA1c level greater than 6.5% (to convert to proportion of total hemoglobin, multiply by 0.01) in the year before hospitalization.
Conditions were identified by primary discharge diagnosis code grouped by Clinical Classification Software categories and included the following: acute coronary syndrome, arrhythmia, asthma, chest pain, chronic obstructive pulmonary disease, coronary artery disease, conduction disorders, heart failure, heart valve disorders, pneumonia, sepsis, skin infection, stroke, transient ischemic attack, and urinary tract infection. These conditions were chosen because they are among the most common medical discharge diagnoses for older adults and their short-term management does not typically require intensification of outpatient diabetes medications. Patients discharged with a secondary discharge diagnosis of diabetic ketoacidosis or nonketotic hyperglycemic-hyperosmolar coma were excluded because these conditions typically necessitate an immediate change in diabetes treatment.
To ensure accurate classification of medication use, we excluded patients likely to receive medications outside the VHA, including patients who received more than 20% of their outpatient care outside the VHA, patients admitted from skilled nursing facilities, and patients who had been hospitalized in the 30 days that preceded the index hospitalization. 26 Patients enrolled in hospice were excluded given differing goals of care. Because instructions to modify insulin dosing are infrequently accompanied by a new prescription, dosing changes cannot be accurately assessed using pharmacy databases; thus, we limited our study to patients not using insulin before hospitalization.
Exposure
We compared patients discharged with intensified diabetes medication regimens to those discharged without intensifications. Intensifications were defined as newly prescribed diabetes medications that were not being used before hospitalization and medications present on admission for which a discharge prescription was filled for a dose increase of more than 20%. Intensifications were ascertained based on VHA pharmacy dispensing data using previously published methods, which included medications filled within 2 days before to 2 days after discharge. 26,27 We examined all medication classes in use during the study period: biguanides, sulfonylureas, thiazolidinediones, α-glucosidase inhibitors, dipeptidyl peptidase 4 inhibitors, meglitinides, glucagon-like peptide 1 (GLP-1) agonists, sodium-glucose cotransporter 2 (SGLT2) inhibitors, and insulins.
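A minimal sketch of this classification rule (a newly prescribed medication class, or a discharge fill at a dose more than 20% above the preadmission dose) is shown below; the data frame layout, column names, and example values are illustrative assumptions rather than the study's actual pharmacy data structures.

```python
import pandas as pd

def classify_intensification(preadmit: pd.DataFrame, discharge: pd.DataFrame) -> pd.DataFrame:
    """Flag discharge fills as intensifications: a new diabetes medication,
    or a >20% dose increase of a medication in use before admission.
    Both frames are assumed to have columns: patient_id, drug, daily_dose."""
    merged = discharge.merge(
        preadmit.rename(columns={"daily_dose": "preadmit_dose"}),
        on=["patient_id", "drug"], how="left",
    )
    new_med = merged["preadmit_dose"].isna()                       # not used before admission
    dose_up = merged["daily_dose"] > 1.2 * merged["preadmit_dose"]  # >20% dose increase
    merged["intensified"] = new_med | dose_up.fillna(False)
    return merged

# Hypothetical example
pre = pd.DataFrame({"patient_id": [1, 2], "drug": ["metformin", "glipizide"],
                    "daily_dose": [1000, 10]})
post = pd.DataFrame({"patient_id": [1, 1, 2], "drug": ["metformin", "glargine", "glipizide"],
                     "daily_dose": [2000, 10, 10]})
print(classify_intensification(pre, post)[["patient_id", "drug", "intensified"]])
```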
Outcomes
The 2 primary outcomes were chosen a priori to assess possible benefits and harms of diabetes medication intensification: severe hyperglycemia events and severe hypoglycemia events. Primary outcomes were examined at 30 days to assess immediate outcomes and at 365 days to assess longer term outcomes. On the basis of prior studies, 15,28,29 primary outcomes were defined as a composite of emergency department (ED) visits, observation stays, and hospitalizations for severe hypoglycemia and severe hyperglycemia (eTable 1 in the Supplement). Secondary outcomes included all-cause readmissions at 30 and 365 days, mortality at 30 and 365 days, change in HbA 1c at 1 year, and persistent use of diabetes medication prescriptions filled at discharge at 1 year.
Covariates were selected according to clinical expertise 18 and included demographic characteristics, comorbidities, 31 prehospitalization and hospital vital signs, laboratory values, health care use, and medications (eTable 2 in the Supplement). Missing data were imputed using the fully conditional specification method and 20 imputation sets. One-to-one nearest neighbor matching without replacement was performed and covariate balance between groups was assessed using standardized mean differences. 32,33 Second, within propensity score-matched groups, survival analyses were conducted for hyperglycemia, hypoglycemia, mortality, and readmission outcomes using Cox proportional hazards regression models for mortality and Fine and Gray proportional subdistribution hazards models for all other outcomes to account for the competing risk of death. 34 For all models, SEs accounted for clustering of patients within hospitals. To aid in interpretation of subdistribution hazard models, unadjusted event rates are presented for each group.
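The first two steps can be illustrated with a minimal propensity-score sketch: estimate the probability of intensification from baseline covariates, then greedily match each intensified patient to the nearest non-intensified patient without replacement. The sketch below omits the multiple imputation, the full covariate set, and the balance diagnostics used in the actual analysis; the covariates and data are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_one_to_one(df: pd.DataFrame, covariates: list) -> pd.DataFrame:
    """Estimate propensity scores for 'intensified' and greedily match each
    treated patient to the nearest untreated patient without replacement."""
    ps = LogisticRegression(max_iter=1000).fit(df[covariates], df["intensified"])
    df = df.assign(pscore=ps.predict_proba(df[covariates])[:, 1])
    treated = df[df["intensified"] == 1].sort_values("pscore")
    controls = df[df["intensified"] == 0].copy()
    pairs = []
    for _, row in treated.iterrows():
        if controls.empty:
            break
        j = (controls["pscore"] - row["pscore"]).abs().idxmin()
        pairs.append((row.name, j))
        controls = controls.drop(index=j)          # without replacement
    idx = [i for pair in pairs for i in pair]
    return df.loc[idx]

# Hypothetical data: age and baseline HbA1c as the only covariates
rng = np.random.default_rng(0)
demo = pd.DataFrame({"age": rng.normal(75, 6, 200),
                     "hba1c": rng.normal(7.5, 1.2, 200),
                     "intensified": rng.integers(0, 2, 200)})
matched = match_one_to_one(demo, ["age", "hba1c"])
print(matched["intensified"].value_counts())
```

Greedy nearest-neighbor matching is order-dependent; the published analysis may have used a different matching implementation.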
Third, within propensity score-matched groups, the change in HbA 1c at 1 year after discharge was estimated using a difference-in-differences approach. Linear regression models were used to estimate the change in HbA 1c level associated with discharge with intensified diabetes medications after subtracting the background change among patients who did not receive medication intensifications. 35,36 Fourth, for the unmatched cohort, we examined persistence to diabetes medications filled at discharge during the subsequent year. We examined diabetes medication prescriptions filled at discharge, including new medication prescriptions and prescription fills of admission medications at higher, lower, or the same doses. For each diabetes medication filled at discharge, we calculated persistence as the number of days between the discharge fill and the last refill for the same or greater dose plus the days supplied by the latest refill. 37 We constructed Kaplan-Meier curves and used the log-rank test to examine differences in persistence by type of fill: continuation, dose increase, dose decrease, new oral medication, or new insulin.
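The persistence definition in the fourth step can be sketched as follows for a single discharge medication; the column names and the example refill history are hypothetical.

```python
import pandas as pd

def persistence_days(fills: pd.DataFrame, death_date=None) -> int:
    """Persistence for one discharge medication: days from the discharge fill to the
    last refill at the same or greater dose, plus the days supplied by that refill.
    `fills` needs columns: fill_date (datetime), daily_dose, days_supplied;
    the first row is the discharge fill."""
    fills = fills.sort_values("fill_date").reset_index(drop=True)
    discharge = fills.iloc[0]
    ok = fills[fills["daily_dose"] >= discharge["daily_dose"]]
    last = ok.iloc[-1]
    end = last["fill_date"] + pd.Timedelta(days=int(last["days_supplied"]))
    if death_date is not None:
        end = min(end, pd.Timestamp(death_date))   # truncate at date of death
    return (end - discharge["fill_date"]).days

# Hypothetical refill history for one discharge prescription
hx = pd.DataFrame({
    "fill_date": pd.to_datetime(["2018-01-10", "2018-02-08", "2018-03-12"]),
    "daily_dose": [10, 10, 5],          # final refill was a dose decrease
    "days_supplied": [30, 30, 30],
})
print(persistence_days(hx))  # counts through the 2018-02-08 fill plus its 30-day supply
```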
We conducted subgroup analyses to determine the differential impact of exposure to intensified diabetes medications by prehospitalization diabetes control. We classified patients as having controlled or elevated prehospitalization HbA1c levels using a threshold HbA1c of 7.5%, acknowledging the uncertainty surrounding exact HbA1c targets in older adults. 18,22,38 We then repeated propensity score matching and analyses for each baseline HbA1c group separately.
Analyses were conducted using Stata software, version 14.1 (StataCorp LLC). For all analyses, we determined statistical significance using 95% CIs. A 2-sided P < .05 was also considered statistically significant.
Results
The unmatched cohort included 28 198 older adults with diabetes admitted to 115 VHA hospitals. Patients discharged with intensified diabetes medications were matched to patients with a similar propensity score who did not receive intensifications (95.7% match rate).
Matched groups were well balanced on propensity score distribution and baseline characteristics (standardized mean differences for all covariates, <0.1) (Table 1; eTable 3 and eFigure 2 in the Supplement).
a. Selected covariates are presented; a full list of covariates included in the propensity score is given in eTable 2 in the Supplement.
c. Balance between the groups was assessed before and after matching by comparing SMDs for each variable, for which a difference of less than 0.10 was considered to indicate adequate balance.
Persistent Use of Diabetes Medications
Figure. Absolute change in HbA1c (%) by receipt of discharge intensification. Analysis includes the 4215 patients in the propensity-matched cohort who met the inclusion criteria for the HbA1c analysis (outpatient HbA1c measured between 6 and 18 months after the index hospitalization discharge). Postbaseline HbA1c level was assessed before censoring as the HbA1c level recorded at the closest date to 1 year after index hospitalization discharge, within the range of 6 to 18 months after discharge. Absolute change in HbA1c was calculated as the postdischarge HbA1c level minus the preadmission HbA1c level. Complete difference-in-differences results are given in eTable 4 in the Supplement. The horizontal lines in the center of each box indicate the median; the lower and upper bounds of each box, the 25th and 75th percentiles; and the lower and upper error bars, the most extreme values within 1.5 times the interquartile range below the 25th percentile and above the 75th percentile.
Prehospitalization Baseline Hemoglobin A1c Subgroup Analyses
Propensity score matching yielded a cohort of 2672 patients with controlled preadmission HbA1c levels (≤7.5%) and a cohort of 2524 patients with elevated preadmission HbA1c levels (>7.5%), each equally split between those who received intensifications and those who did not. Covariate balance between groups in each cohort was excellent except for differences in the regional distribution of patients (eTables 5 and 6 and eFigure 3 in the Supplement).
Among matched patients with controlled baseline HbA1c levels, the mean (SD) prehospitalization HbA1c level was 6.8% (0.5%) for the intensified and not intensified groups. No differences were found in severe hypoglycemia events, severe hyperglycemia events, or secondary clinical outcomes among patients with controlled baseline HbA1c levels who were discharged with or without diabetes medication intensifications (Table 3; eTable 7 in the Supplement).
Among matched patients with elevated baseline HbA1c levels, the mean (SD) prehospitalization HbA1c level was 9.1% (1.5%) for the intensified group and 9.1% (1.6%) for the not intensified group.
No differences were found in severe hypoglycemia or hyperglycemia events among patients with elevated baseline HbA1c levels who were discharged with or without diabetes medication intensifications.
Figure. Persistence of diabetes medications filled at discharge. The persistence analysis includes 18 455 patients who filled 1 or more diabetes medication prescriptions at discharge. A total of 24 085 unique medication prescriptions were included because patients could fill multiple diabetes medication prescriptions at discharge. For each unique medication prescribed at discharge, we calculated persistence as the number of days between the discharge prescription and the last refill for the same or greater dose plus the days supplied by the latest refill. If a patient died during the follow-up period, persistence was truncated at date of death. We reported persistence to 12 months after discharge by type of treatment, and to avoid undercounting because of transient nonadherence, we assessed refill history for 18 months.
Discussion
In this cohort study of older adults with diabetes who were hospitalized for common medical conditions, intensification of diabetes medications at hospital discharge was associated with increased short-term risk of severe hypoglycemia events without reduction in risk of severe hyperglycemia events or improvement in HbA1c control at 1 year. Moreover, nearly half of discharge intensifications were not continued at 1 year. Despite the lack of association with improved diabetes control, older adults receiving diabetes medication intensifications at discharge had a lower risk of mortality at 30 days but no difference in mortality at 1 year. These results suggest intensification of older adults' outpatient diabetes medications during unrelated hospitalizations should generally be avoided.

Prior studies were limited by factors that may confound the association between discharge intensification and discharge outcomes. 6 Our study builds on these prior studies 6,39 by examining postdischarge outcomes in the VHA, a large, national, integrated health system, which allows for the inclusion of richer clinical characteristics and complete identification of postdischarge events that occur inside and outside the VHA. In addition, we examined a more recent period than prior studies 6,39; thus, differences in findings may in part reflect differences in diabetes medication classes.
In a secondary analysis, we observed that older adults receiving diabetes intensifications had a substantially lower risk of death in the first 30 days of discharge. This unexpected finding was consistent in elevated but not controlled HbA1c subgroups and merits further examination in future studies. This finding contrasts with the Canadian study, 6 which found that discharge with new insulin was associated with an increased risk of death, and a prior trial 40 of intensive inpatient blood glucose control among critically ill patients, which found more intensive blood glucose control was associated with higher 90-day mortality. We anticipate this finding may be attributable to unmeasured confounding.

Readmission for hyperglycemia is rare, occurring in less than 0.5% of cohort patients with elevated baseline HbA1c levels. Adverse drug events are typically highest in the initial weeks of treatment, and this risk is likely to be multiplied in the postdischarge period, during which older adults are typically exposed to multiple medication changes and hospital-associated disability. 20,21,42 Additional research is needed to determine best management practices for older adults with diabetes whose primary reason for hospitalization requires short-term treatment with medications known to greatly increase blood glucose levels, which will be continued to the outpatient setting (eg, corticosteroids for respiratory or autoimmune disease flares), because these patients may benefit from short-term monitored intensifications.
Limitations
Our study has several limitations. First, it took place in the VHA health care system, a national integrated system that serves a predominately male population with greater multimorbidity that may not be generalizable to the entire US. Second, because we focused on older adults who are at higher risk of adverse drug events, our findings are not generalizable to younger populations. Third, although our study was strengthened by examining both VHA and Medicare data, we were not able to identify hyperglycemia and hypoglycemia events for which patients did not seek emergency care; thus, these events are likely underestimated. 43 Fourth, because of limitations of pharmacy data, we were unable to examine the impact of changes in insulin dosing and thus did not examine patients taking insulin before hospitalization. Pharmacy claims do not allow for the identification of discontinued medications; thus, medication classes started as substitutions for other classes were included in the study as intensifications. Fifth, as an observational study, there is a risk of unmeasured confounding by variables not included in the propensity score-matched analyses. Sixth, subgroup analyses were exploratory and may have been underpowered to demonstrate differential associations of diabetes medication intensification across baseline HbA 1c levels.
Conclusions
Among older adults hospitalized for common medical conditions, discharge with intensified diabetes medications was not associated with reduced severe hyperglycemia events or HbA 1c levels within 1 year but was associated with an increased risk of severe hypoglycemia events within 30 days. For
Unravelling the Time Scale of Conformational Plasticity and Allostery in Glycan Recognition by Human Galectin‐1
Abstract The interaction of human galectin‐1 with a variety of oligosaccharides, from di‐(N‐acetyllactosamine) to tetra‐saccharides (blood B type‐II antigen) has been scrutinized by using a combined approach of different NMR experiments, molecular dynamics (MD) simulations, and isothermal titration calorimetry. Ligand‐ and receptor‐based NMR experiments assisted by computational methods allowed proposing three‐dimensional structures for the different complexes, which explained the lack of enthalpy gain when increasing the chemical complexity of the glycan. Interestingly, and independently of the glycan ligand, the entropy term does not oppose the binding event, a rather unusual feature for protein‐sugar interactions. CLEANEX‐PM and relaxation dispersion experiments revealed that sugar binding affected residues far from the binding site and described significant changes in the dynamics of the protein. In particular, motions in the microsecond‐millisecond timescale in residues at the protein dimer interface were identified in the presence of high affinity ligands. The dynamic process was further explored by extensive MD simulations, which provided additional support for the existence of allostery in glycan recognition by human galectin‐1.
Introduction
Human galectins are β-galactoside (βGal) binding lectins that participate in the regulation of an extraordinary variety of biological phenomena, most of them related, but not only, to immunity. [1] At the same time, their connection with several diseases, such as cancer [2] or diabetes, has been established, increasing the interest in exploiting them in different therapeutic strategies, as well as in the development of disease biomarkers. These carbohydrate binding lectins are broadly distributed throughout the body, and while some of them are restricted to certain tissues or cells, others such as human galectin-1 (Gal-1) and human galectin-3 (Gal-3) are ubiquitous. [3] Gal-1 in particular has been proven to participate in B-cell development and signalling, [4] T-cell immunity, [5] and the regulation of different inflammatory responses. [6] Gal-1 has been recently shown to promote bacterial infections, [7] and to have a prominent relationship with certain types of cancers, [8] where its increased expression has been related to different processes in the disease progression. In fact, it has been pointed out as a key player for cancer immunotherapy resistance. [9] Galectins perform their biological functions through the recognition of specific βGal-containing epitopes present on glycoproteins and glycolipids. Their multimeric nature endows galectins with the ability to cross-link these glycoconjugates, which is at the heart of their regulatory mechanisms. These oligomerization phenomena strongly depend on the organization of their carbohydrate-recognition domains (CRD), according to which galectins are in fact classified. Thus, prototype galectins, as Gal-1, display two identical CRDs that dimerize in a non-covalent manner, while tandem-repeat galectins contain two distinct CRDs covalently linked through a peptide fragment. Finally, the chimera-type, the only member being Gal-3, displays a single CRD connected to a long tail domain at the N-terminus through which it oligomerizes. Although the quaternary organization of galectins is fundamental for their biological functions, in most cases it is not clear how it does influence ligand binding. A large part of our current knowledge about how galectins bind to their carbohydrate ligands has been obtained through X-ray crystallography, [10] although, for these particular systems however, this cannot account for dynamic effects that could have an impact in ligand binding, including conformational plasticity or allostery. [11] Gal-1 is a homodimer with a dimerization equilibrium constant in the low micromolar range. [12] This oligomeric architecture may be relevant for its biological activity, [13] and in fact Gal-1 mutants with altered dimerization properties have been shown to have altered biological functions. [14] Early studies [15] postulated that lactose binding to Gal-1 occurs with a negative cooperativity between the two lectin binding sites, which was related to a global increase of protein dynamics in the low frequency motion range (picoseconds timescale), with a concomitant increase in conformational entropy. More recently, [16] based on hydrogen-deuterium exchange experiments, lactose binding was found to increase the exchange rates of Gal-1 residues located on the opposite side of the ligand-binding site, strongly suggesting the existence of protein allostery, which seems difficult to reconcile with the very fast picoseconds timescale of motion. These studies used lactose (Lac) as a ligand, which binds Gal-1 ca.
twofold weaker than lactosamine (Galβ1-4GlcNAc, LacNAc, 1, Figure 1). Glycan binding preferences of Gal-1, in fact, point to extended glycan chains terminating in LacNAc, both on N- and O-glycans. [17] Opposite to other galectins, further chemical modifications of this simple disaccharide epitope do not improve binding affinity for Gal-1.
Herein, we provide further experimental and theoretical evidence of allosterism operating in Gal-1. Motivated by the unexpected positive entropy contribution measured for LacNAc binding, opposite to that observed for Gal-3/LacNAc recognition, [18] the changes in protein flexibility upon ligand binding have been scrutinized. Our results show that upon LacNAc binding, but not upon binding to other lower affinity LacNAc-containing glycans, such as the blood group antigen (4), local protein flexibility increases in the μs–ms time scale. This is a rather different time frame of dynamics to that previously reported in the ps timescale. [15] This transition to slow dynamic motions upon LacNAc binding influences the energy balance for the recognition process, and thus the affinity, through a favourable contribution to the binding entropy term. Remarkably, the combined experimental (relaxation dispersion NMR) and theoretical analysis (μs-MD) performed herein allowed identifying specific residues with a concerted dynamic behaviour that cluster at the dimerization interface, revealing a communication pathway between the two Gal-1 domains.
As mentioned above, individual galectins show a large variation in terms of affinity towards naturally occurring ligands, as extensively and consistently highlighted in several studies. [19] Indeed, galectins exhibit rather different recognition patterns for sialylated glycans, polyLacNAc structures, and blood group antigens among others, and this fine specificity has been related with differential ligand interactions at regions adjacent to the canonical β-Gal binding site. [20] However, the detailed understanding of galectin-binding specificities is still modest given the lack of structural details for galectin complexes involving glycan structures larger than trisaccharides, as well as details regarding dynamics and flexibility of both interacting partners. We have recently addressed the impact of glycan flexibility on the binding of the A and B blood group tetrasaccharide antigens to Gal-3. [18] Interestingly, it has also been reported that Gal-1 and Gal-3 display opposing affinities towards these antigens. [21]

Isothermal titration calorimetry experiments

To obtain accurate information on the binding affinity and thermodynamics, isothermal titration calorimetry (ITC) experiments were performed for the four glycans (1-4). Fitting of the ITC binding isotherms to a single-site model yielded the dissociation binding constants (KD) shown in Table 1. All of them are in the high-medium micromolar range, with the value for LacNAc (99 μM) in agreement with previously reported data. [17b] Data fitting to a sequential binding model (Table 2), as suggested by those previous studies, [15] was as good as or even better than the one-site model in terms of fitting quality (represented by χ2). In this case, the first binding event displays better energetics than the second one, indicating a negative cooperativity between the two binding sites. This difference is

Figure 1. Structure and symbol representation of the oligosaccharides whose interaction with Gal-1 is studied herein.

For either binding model, the affinity for the galili trisaccharide (2), which incorporates an additional Galα residue with respect to LacNAc 1, was very similar to that obtained for 1. Indeed, the binding enthalpy remained unaltered, strongly suggesting the absence of significant stabilizing intermolecular contacts provided by the Galα residue (see below in the NMR analysis). In contrast, the H type-II analogue (3), and especially the tetrasaccharide B type-II antigen (4), displayed somewhat lower binding affinities compared to LacNAc 1. In fact, the enthalpy contribution decreased for the fucosylated ligands, 3 and 4. Also, variations in the binding enthalpy and entropy terms are not correlated, deviating from the commonly observed enthalpy-entropy compensation paradigm. Intriguingly, independently of the binding model used, the thermodynamic analysis (Table 1 and Supporting Information) revealed a positive entropy contribution to the binding for all the four ligands. [22] Although this entropy gain is always moderate (below 0.5 kcal mol−1), it strongly contrasts with the loss of entropy (ca. 5 kcal mol−1) observed for LacNAc binding to Gal-3. [18] This highlights the different molecular recognition mechanisms operating in both lectins.
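For context, fitting a 1:1 (single-site) binding isotherm to ITC-style injection heats can be sketched as below. This is a deliberately simplified illustration using the exact expression for the bound fraction at each titration point; it ignores dilution heats and cell-volume displacement corrections, it is not the fitting procedure used for the data in Table 1, and all concentrations and parameters are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def fraction_bound(p_tot, l_tot, kd):
    """Fraction of protein sites occupied for 1:1 binding (exact quadratic solution)."""
    b = p_tot + l_tot + kd
    return (b - np.sqrt(b**2 - 4 * p_tot * l_tot)) / (2 * p_tot)

def injection_heats(l_tot, kd, dh, p_tot=0.05e-3, v_cell=1.4e-3):
    """Incremental heat (kcal) per injection for a titration series.
    l_tot: total ligand concentration in the cell after each injection (M)."""
    q = dh * v_cell * p_tot * fraction_bound(p_tot, l_tot, kd)  # cumulative heat
    return np.diff(np.concatenate(([0.0], q)))

# Hypothetical titration with Kd = 99 uM and dH = -6 kcal/mol, plus 3% noise
l_tot = np.linspace(0.02e-3, 0.6e-3, 20)
obs = injection_heats(l_tot, kd=99e-6, dh=-6.0)
obs *= 1 + 0.03 * np.random.default_rng(1).normal(size=20)

(kd_fit, dh_fit), _ = curve_fit(injection_heats, l_tot, obs, p0=(50e-6, -5.0),
                                bounds=([1e-7, -20.0], [1e-2, 0.0]))
print(f"K_D ~ {kd_fit*1e6:.0f} uM, dH ~ {dh_fit:.1f} kcal/mol")
```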
Generating the initial 3D models of the complexes

Initial 3D models of the ligand/Gal-1 complexes were built using the X-ray crystallographic structure reported for Gal-1:lactose, [23] by pair-fitting the binding residue(s) of each studied ligand to lactose, followed by MD simulations as described in the experimental section. The complex formed with LacNAc (1) (Figure 2) is basically identical to that described in the X-ray crystallographic structure with lactose. Briefly, the Gal residue stacks on top of the indole moiety of Trp68, establishing key CH–π interactions, [24] with additional hydrogen bonding interactions involving residues His44, Arg48, Asn61 and Glu71 of the lectin and atoms Gal O4, Gal O5 and GlcNAc O3 of the ligand. The loop L4, which connects strands S4 and S5, is folded towards the ligand and narrows the binding site cavity. This loop has been shown to exhibit high conformational flexibility in apo structures, populating open and closed conformations. [15,25] According to the X-ray crystallography data, His52, located at this loop, participates in hydrogen bonding with the Gal 2-OH of lactose. In our models for 1 and 2 (Figure 2), however, this is a transient interaction, occurring only ca. 25% along the 100 ns MD trajectories.
For 2 and 4, the models show that the Galα residue is fairly close to the protein surface, although it does not provide additional van der Waals and/or hydrogen bonding interactions. Alternatively, the Fuc moiety, present in 3 and 4, is close to the L4 loop, and His52 establishes transient (ca. 25%) hydrogen bonding interactions with Fuc O5 and/or Fuc 4-OH.
NMR experiments

STD-NMR: As an initial experimental validation of the proposed 3D complexes and to back up the experimental ITC results, information on the molecular basis of the interaction between Gal-1 and glycans 1–4 was obtained through NMR experiments, [26] starting with 1H-STD-NMR (STD = saturation transfer difference), [27] which allows obtaining information on the ligand binding epitope. For all the ligands, significant STD signals were detected for the protons of the central β-Gal unit, in particular for H4, H5 and H6, which is the typical pattern for the interaction of β-Gal-containing saccharides with galectins. [18,28] Additionally, ligands 2 and 4, which contain the Galα unit, showed evident STD signals for some protons of this residue (H1, H2 and H3), while the ligands containing Fuc (3 and 4) showed additional and significant STD effects for Fuc H1 (Figure 3 and Supporting Information). These data provide experimental evidence on the binding epitope of glycans 1–4, which involves primarily the β-Gal ring, with the Galα (in 2 and 4) and Fuc moieties (in 3 and 4) also in close proximity to the lectin surface, and can be satisfactorily explained by the recognition modes predicted by the MD simulations described above (Figure 2).

Table 2. Affinity constants obtained from ITC data fitting to one-site and sequential binding models. The quality of the fitting is provided by χ².
The presence of the loop L4 close to the binding site is a unique feature of Gal-1 [23] and permits explaining the STD NMR effects observed for the Fuc residue. In fact, irradiation at the aromatic region of the protein (δ 7.7 ppm) increased the relative STD intensities for Fuc H1 with respect to the aliphatic irradiation, corroborating its proximity to His52. This latter result is in sharp contrast with that reported for the interaction of Gal-3 with 4, [18] which demonstrated that the Fuc residue is exposed to the solvent and does not interact with that lectin.
Chemical shift perturbation analysis: 1H–15N heteronuclear single quantum coherence (HSQC) NMR spectroscopy experiments were employed to analyse the chemical shift perturbation (CSP) of the amide signals of the lectin upon ligand addition and to obtain additional structural information on the sugar–protein molecular recognition processes from the lectin perspective. [25,29] The addition of 0.5, 1, 3, 5 and 10 equivalents of LacNAc (1) provided a progressive perturbation of the signals of specific amino acids. Most of them are included in the region Asn46–Val76, in β-strands S4, S5, S6 and the L4 loop. This observation is again in agreement with the proposed binding mode described above (Figure 2) and reported by X-ray crystallography (PDB IDs 4Y1U, 4Q26 and 1W6P). Intriguingly, perturbations on several amino acids far beyond the binding site, especially on those located in the F3–F4 sheets and close to the dimer interface (S1 and the loop connecting S1–F2), were also detected (Figure 4A and Supporting Information). These results strongly suggest that the interaction with LacNAc induces changes on the whole structure of the protein. Similar observations have been described for the interaction with lactose [15,16] and lacto-N-neotetraose. [30] The chemical shift perturbation profile for galili 2 was very similar to that of LacNAc, indicating comparable binding modes (Figure 4A and Supporting Information). This fact is in agreement with the MD simulations, which show that the Galα residue establishes only short-lived interactions with the protein (see Supporting Information for details). Again, these observations contrast with our previous results for Gal-3, [18] where 2 displayed additional stabilizing contacts with several amino acids located at β-strand S3, thus impacting the measured CSP for these residues.
The CSP for the fucosylated glycans 3 and 4 (Figure 4B and C) were again similar to that of LacNAc. However, the perturbations corresponding to amino acids located at the L4 loop, in particular Ala51–Ala55, were markedly different (Figure 5). This fact indicates a different interaction of this loop region with the non-fucosylated and fucosylated glycans, as shown by the STD-NMR analysis and predicted by the MD simulations. Additional structural information on the role of the Fuc residue in the binding process was inferred from the behaviour of the histidine side chain signals (His44 and His52) upon binding. His44 is conserved among galectins, located at strand S4 and consistently involved as hydrogen bonding acceptor from Galβ 4-OH, while His52 is located at the L4 loop, unique for Gal-1. Thus, long-range (2J NH) 1H–15N HSQC experiments were acquired for Gal-1 apo and in the presence of 1 (without Fuc) and 3 (with Fuc) (Figure 6). In the apo form, only the signals corresponding to His52 were observed. Their pattern (Figure 6A, see Supporting Information for details) revealed the existence of an equilibrium among the Nε2-H and Nδ1-H tautomers and the protonated form. [31] Interestingly, addition of LacNAc 1 did not produce substantial changes in the shape of the His52 signals, suggesting no major changes in the equilibrium state (Figure 6B). In contrast, the signals for His44 were now detected, and their pattern pointed to the presence of a very major Nε2-H tautomer, as expected for its role as hydrogen bond acceptor. Upon addition of 3, the situation for His44 did not change with respect to the addition of 1. In contrast, the signals for His52 became broader, even displaying multiple peaks, evidencing the presence of multiple states in slow-medium exchange regime on the chemical shift timescale (Figure 6C). Thus, upon binding to ligand 3, the chemical equilibrium for His52 is kept, although its dynamics is clearly altered, probably reflecting that, instead of providing further contacts with the ligand, the loop L4 precludes a proper accommodation of the Fuc moiety.
In summary of this section, ligands 1–4 share a similar binding mode to Gal-1, as deduced from STD and HSQC NMR experiments. Although the Galα and Fuc epitopes are located close to the protein surface, in the so-called subsite B (strand S3) and close to the loop L4, respectively, MD simulations and NMR results support that the contacts of these moieties with the lectin are merely transient, with no clear stabilizing interactions taking place. This evidence is also in agreement with the ITC results described above, which show no enthalpy gain when the Galα and Fuc moieties are present. Moreover, the presence of the Fuc unit even decreases the enthalpy contribution. However, there is no clear explanation for the moderate entropy enhancement observed by ITC. Therefore, additional experiments and simulations focused on protein flexibility and dynamics were carried out.
Protein dynamics upon ligand binding: CLEANEX experiments
The long-range CSP observed in the Gal-1 HSQC titration experiments with ligands 1–4, together with the observed favourable binding entropy, are indicative of structural and dynamic changes in the whole structure of the protein upon ligand binding. Fast motions (in the ps timescale) of Gal-1 [15] have been previously investigated by NMR through standard R1 and R2 experiments and highlighted the conformational entropy of the protein as a favourable contribution to the free energy of binding. However, the effects mentioned above regarding long-range chemical shift perturbations strongly suggest the presence of conformational fluctuations on a much slower timescale. [16,32] In order to detect local structural fluctuations and their potential relationship with sugar recognition, phase-modulated CLEAN chemical exchange spectroscopy NMR experiments (CLEANEX-PM) [33] were performed for the apo and bound forms of Gal-1. CLEANEX experiments allow detecting NH protons with fast exchange rates with water (exchange lifetimes in the 5–500 ms range) and are employed to estimate changes in the hydrogen bond stability or solvent accessibility of the backbone amides. In particular, the changes in exchange rates upon addition of medium/high- and low-affinity ligands, such as LacNAc 1 and B type-II 4, were analysed. The CLEANEX spectrum of Gal-1 showed 13 amide NH cross-peaks out of the 135 total (10%). They belong to amino acids located at the dimer interface and the loops connecting S2–F5, S3–S4, S4–S5, S6–F3 and F3–F4, which correspond to solvent-exposed regions of the protein (Figure 7). Interestingly, they comprise residues directly involved in the binding as well as amino acids located far away from the binding site, rendering them suitable probes to monitor changes in the protein structure.
The obtained average exchange rates for Gal-1 were kex = 23 s−1 for the apo form, and 10 s−1 and 20 s−1 for the LacNAc (1)- and B type-II (4)-bound forms, respectively. High ligand:protein ratios were employed in order to assure complete saturation of the protein. Thus, binding to the higher-affinity ligand produced a global reduction in the exchange rates, while binding to the weaker-affinity ligand produced minor changes. Remarkably, the four residues at the L4 loop were differently affected in the presence of both ligands. These results clearly demonstrate a different dynamic behaviour of the S4–S5 connecting loop in the presence of the fucosylated and non-fucosylated ligands, as also described above in the HSQC NMR analysis of His52. In fact, both results likely indicate that, in the presence of fucosylated sugars, His52, and in turn the L4 loop, populates different conformations, which are on average less protected from water exchange than for non-fucosylated ligands.
As in the HSQC-based CSP experiments, significant changes in exchange rates were also detected for residues that are far away from the binding site. Particularly, the exchange rates of residues Ala1–Cys3 at the dimer interface, Ser38 at the S3–S4 loop, Ala94 at the F3–F4 loop, and Asn113 and Glu115 at the S2–F5 loop were reduced upon addition of 1. The effect due to the presence of 4 was less pronounced and did not follow a single trend. These results confirm that the whole structure of the protein is perturbed upon ligand binding and demonstrate that these effects are larger in the presence of the stronger binder. Similarly, previously reported NMR-HDX experiments indicated that lactose binding modulates HDX protection factors also for residues far beyond the binding site. [16]
Relaxation dispersion NMR experiments
To fully discern the conformational fluctuations of Gal-1 in the apo state and in the presence of the ligands, relaxation dispersion (RD) NMR experiments were acquired for the backbone amides. The analysis of these experiments allowed identifying a large number of Gal-1 residues (up to 34) showing µs–ms dynamics upon LacNAc binding, with line-broadenings ranging from 102 to 768 Hz. Since the individual fitting of the RD profiles showed that, for a number of residues, there was a high degree of consistency in the obtained parameters (homogeneous kex and pB values), a collective fitting procedure was employed. In the end, a group of 13 residues (Leu4, Ser7, Leu9, Arg18, Asp54, Ala55, Val76, Asp92, Ala121, Ala122, Asp123, Phe126, and Phe133) showed concerted dynamics at 380 s−1 (kex), with an excited state showing a population (pB) of about 1.5% (Figure 8C). Remarkably, this group of residues naturally clusters in the dimerization region of the protein, distal from the LacNAc binding site. The same RD experiment was performed for the Gal-1:4 complex, and for Gal-3 in its apo and LacNAc-bound forms, as control. For all these cases, only a limited number of residues (between 6 and 13) showed dispersion. Moreover, the RD dispersions failed to statistically cluster into collective motions, indicating that they can be attributed to residual thermal motion.
Hence, the RD experiments support the notion that there is a conformational entropy gain of the protein upon ligand binding, consistent with the previous report. [15] Yet, that previous study focused on fast librations in the ps–ns timescale, more prone to capture thermal motion and less associated with functional dynamics. Herein, the observed µs–ms dynamics associated with LacNAc binding provides the adequate experimental framework to support the idea of an allosteric transmission induced upon LacNAc binding.
Allosteric communication analysis through MD simulations
In order to further support the NMR findings and analyse in detail the existence of allosteric effects, microsecond molecular dynamics simulations (µs-MD) were carried out, paying attention to possible pathways for dynamic correlation between amino acids at the binding site and any other amino acids in the protein, as described in the experimental section. As in the previous section, Gal-3 was also included as control, since it lacks changes in internal dynamics in the µs timescale upon LacNAc binding.
The analysis showed that, for Gal-1, the motion of the residues at the binding site propagates throughout the whole protomer, even reaching the homodimeric interface (Figure 8A). Remarkably, the amino acids appearing at the highest frequency in the calculated pathways are concentrated in the internal β-strands, constituting the spine of the homodimer (Figure 8B). Fittingly, they match those determined experimentally to show concerted dynamics in the micro-to-millisecond time scale (Figure 8C). In contrast, for Gal-3, the correlated motions dissipated near the binding site (Figures S18–S20 in the Supporting Information). Accordingly, remarkable differences in the flexibility of the whole protein were also calculated for the apo and bound states of Gal-1 and Gal-3 with different ligands (Figure S21).
Conclusions
The interaction of human galectin-1 with N-acetyllactosamine (1), the blood B type-II antigen tetrasaccharide (4) and its two constituting trisaccharides (2 and 3) is favoured by entropy, in strong contrast with the observations for galectin-3 and most lectin–sugar interaction events. In fact, the smaller disaccharide displays the best affinity for the lectin. The addition of the Fuc and Gal moieties (from 1 to 2 and from 1 to 3 and 4) provides similar or weaker binding affinities, strongly suggesting that the Fuc and Gal residues do not establish stabilizing contacts with the lectin. Indeed, ligand-based NMR experiments indicate that these residues provide, if any, only minor contacts with galectin-1. Receptor-based HSQC chemical shift perturbation experiments, on the other hand, revealed important effects for amino acids far from the binding site, which have been further assessed by water-exchange CLEANEX-PM experiments. Interestingly, the magnitude of those effects correlated with ligand affinity, being very significant for the best-affinity ligand, LacNAc (1). Moreover, relaxation dispersion NMR experiments have shown that there is important motion, in the microsecond–millisecond time scale, for more than 30 amino acid residues upon LacNAc binding, many of them located distant from the binding site. More than ten of these residues cluster at the dimer interface. This behaviour is neither observed in the presence of the lowest-affinity ligand (4) nor for LacNAc binding to galectin-3. Molecular dynamics simulations also predict the existence of dynamic correlation between the binding site and distant amino acids, reaching the lectin dimer interface upon LacNAc binding. In fact, once the first glycan molecule is bound, the second one is bound with smaller affinity, as deduced from the ITC measurements. The results presented herein show that sugar recognition by galectins is an extremely complex process that depends on many factors. Motions in the proteins may take place at different timescales, the ligands display different flexibility, and the presentation of the epitopes on both partners is an important feature to consider, since it affects the experimental observations. Indeed, despite their similarity, the prototype galectin-1 and the chimera-type galectin-3 show rather distinct features in their molecular recognition events. For instance, Gal-1 shows a noticeable preference to bind terminal LacNAc structures in complex N-glycans, whereas Gal-3 preferentially recognizes internal LacNAc moieties. [17a] Their different conformational flexibility and protein architecture (dimer versus monomer) could underlie the observed features. In fact, their binding enthalpies, binding entropies, and motion features are drastically different, highlighting the difficulty of achieving full control of protein–sugar interactions. These findings shed light on structural and thermodynamic binding features of the analysed systems that, overall, can be used as clues for the rational design of compounds capable of selectively binding Gal-1.
Expression of unlabelled Gal-1. The gene encoding the carbohydrate recognition domain (CRD, 135 amino acids) of human galectin-1 was inserted into the pET21a expression vector. BL21 (DE3) E. coli competent cells were transformed with the expression vector by the heat shock method (42 °C for 90 s, then 5 min on ice). After overnight incubation on agar plates in the presence of ampicillin at 37 °C, a single colony harbouring the expression construct was inoculated into 200 mL Luria Broth (LB) medium containing 100 µg mL−1 ampicillin and cultured overnight at 37 °C with shaking. A precise amount of the culture was then added to 2 L of fresh LB medium containing ampicillin so as to achieve a final OD600 of 0.1. Cells were grown at 37 °C until OD600 reached 0.6–1.2 and subsequently induced with 1 mM isopropyl β-D-1-thiogalactopyranoside (IPTG). Growth continued for 3 h at 37 °C. The induced culture was harvested by centrifugation at 5500 rpm for 20 min. The pellet was then purified as explained in the Purification of Gal-1 section.
Expression of 15N-labelled Gal-1. The gene encoding the carbohydrate recognition domain (CRD) of human galectin-1 was inserted into the pET21a expression vector. BL21 (DE3) E. coli competent cells were transformed with the expression vector by the heat shock method (42 °C for 90 s, then 5 min on ice). After overnight incubation on agar plates in the presence of ampicillin at 37 °C, a single colony harbouring the expression construct was inoculated into 5 mL of LB medium containing 100 µg mL−1 ampicillin for 6 h at 37 °C with shaking. The culture was centrifuged at 4400 rpm for 5 min, and the pellet was re-suspended in 1 mL of M9 medium containing ampicillin, transferred to a flask with 200 mL of the same medium and then incubated overnight at 37 °C with shaking. A precise amount of the overnight culture was then added to 2 L of fresh labelled M9 medium (15N-NH4Cl as nitrogen source) containing ampicillin so as to achieve a final OD600 of 0.1. Cells were grown at 37 °C until OD600 reached 0.6–1.2, then induced with 1 mM IPTG and grown again for 3 h at 37 °C. The induced culture was harvested by centrifugation at 5500 rpm for 20 min. The pellet was purified as explained below.
Purification of Gal-1. The pellet obtained from the expression in BL21 E. coli cells was suspended in lysis buffer containing 22 mM Tris-HCl pH 7.5, 5 mM EDTA, 1 mM PMSF and 1 mM DTT (10 mL of lysis buffer were used per g of pellet) and left on ice for 30 min with shaking. The cell suspension was lysed by sonication on ice (60% amplitude, 12 × 20 s bursts, with 59 s intervals between each burst). The crude extract was clarified by ultracentrifugation at 35 000 rpm for 1 h at 4 °C. The soluble fraction was loaded onto 5 mL α-lactose–agarose resin (Sigma-Aldrich) previously equilibrated with equilibration buffer (50 mM TRIS pH 7.2, 150 mM NaCl). The loaded column was washed with 100 mL of equilibration buffer and the lectin was then eluted with 7 mL of elution buffer (150 mM α-lactose, pH 7.4, in PBS 1X). Gal-1 purity was checked by 4–12% SDS-PAGE and by LC-MS. To eliminate the lactose from the protein sample, a series of dialyses and washes with centrifuge filters (Sartorius Vivaspin 6, 5000 MWCO) using fresh buffer (50 mM sodium phosphate, 150 mM NaCl, 2 mM DTT, pH 7.4) were performed. The absence of lactose was checked by NMR. The addition of the reducing agent to the Gal-1 buffer is justified by the presence of solvent-exposed cysteine residues that could cause the formation of non-specific dimers or aggregates through intermolecular disulphide bonds.
NMR experiments. General information. The total volume of the NMR samples was 500 µL, in a precision NMR tube with 5 mm outer diameter (New Era Enterprises, Vineland, USA). The pH of the buffer was measured with a Crison Basic 20 pH meter (Crison Instruments SA, Barcelona, Spain) and adjusted with the required amount of NaOH and HCl or NaOD and DCl.
Saturation Transfer Difference (STD) NMR. All the STD experiments [27] were acquired on a Bruker AVANCE 600 MHz spectrometer equipped with a standard triple-channel probe. The samples were prepared in deuterated phosphate-buffered saline (50 mM sodium phosphate, 150 mM NaCl, pH 7.4) with 2 mM dithiothreitol-d10 (DTT-d10). The standard Gal-1:ligand ratio used was 1:50, with the concentration of (unlabelled) protein set at 50 µM. Experiments at higher equivalents of ligand (ratio 1:100) were performed to amplify the STD effect and to confirm the preliminary results. In the case of 3, the ratio used was 1:138. The temperature during acquisition was 298 K for all the experiments. The 1D STD sequence from the Bruker library with spoil and T2 filter (stddiff.3) was employed for the STD experiments. STD spectra were acquired with 1028 scans, 2 s of saturation time using a train of 50 ms Gaussian-shaped pulses, and 2 s of relaxation delay. The spin-lock filter used to remove the NMR signals of the macromolecule was set at 20 ms. The on- and off-resonance spectra were registered in an interleaved mode with the same number of scans. The on-resonance frequency was set for the aliphatic region between 0.55 and 0.85 ppm and for the aromatic region between 7.67 and 7.73 ppm, while the off-resonance frequency was set at 100 ppm. The STD NMR spectra were obtained by subtracting the on-resonance spectrum from the off-resonance spectrum. The STD amplification factor (STD-AF) and the percentage of STD (STD%) were calculated on the basis of the STD spectra. Reference experiments were acquired on samples containing only the protein, as well as only the ligands, under the same experimental conditions to verify the authenticity of the binding. No signals were detected in the blank STD NMR spectra of the ligands alone, except for the acetyl and methyl groups of the GlcNAc, GalNAc and Fuc moieties, respectively, as highlighted in Figures S3B and S4B, which displayed weak STD signals likely due to direct irradiation effects. The analysis of the spectra was carried out using the proton signal with the strongest STD effect as reference (100% STD effect). On this basis, the relative STD intensities for the other protons of the molecules were calculated.
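The STD-AF calculation described above is straightforward to script. The following is a minimal sketch (not the authors' code) of how the amplification factor and relative STD percentages can be derived from on/off-resonance intensities; all numerical values are hypothetical.

```python
import numpy as np

def std_af(i_off, i_on, ligand_excess):
    """STD amplification factor: fractional saturation transfer,
    (I_off - I_on)/I_off, scaled by the ligand excess over protein."""
    return (i_off - i_on) / i_off * ligand_excess

def relative_std(af):
    """Relative STD (%) normalised to the strongest effect (= 100%)."""
    af = np.asarray(af, dtype=float)
    return 100.0 * af / af.max()

# Hypothetical intensities for three ligand protons at a 1:50 Gal-1:ligand ratio
i_off = np.array([1.00, 0.95, 0.80])   # off-resonance (reference) intensities
i_on = np.array([0.92, 0.90, 0.78])    # on-resonance (saturated) intensities
print(relative_std(std_af(i_off, i_on, ligand_excess=50)))
```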
Chemical Shift Perturbation (CSP) Analysis. The 1H–15N HSQC experiments were acquired on a Bruker AVANCE 800 MHz spectrometer equipped with a cryoprobe. The samples were prepared using 15N uniformly labelled Gal-1 (…) and the results were plotted in graphs with the respective standard deviations.
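The exact averaging formula is not shown above; a commonly used weighted-average CSP expression, given here as an assumption rather than the authors' exact equation, is

$$\Delta\delta_{\mathrm{avg}} = \sqrt{\frac{(\Delta\delta_{\mathrm{H}})^{2} + \left(\Delta\delta_{\mathrm{N}}/5\right)^{2}}{2}}$$

where $\Delta\delta_{\mathrm{H}}$ and $\Delta\delta_{\mathrm{N}}$ are the 1H and 15N chemical shift changes and the factor of 5 rescales the 15N dimension.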
CLEAN Chemical Exchange (CLEANEX-PM). The samples were prepared using 15N uniformly labelled Gal-1 at 1 mM in 90% phosphate-buffered saline (50 mM sodium phosphate, 150 mM NaCl, pH 7.4) with 2 mM DTT and 10% D2O. The CLEANEX spectra [33] were acquired for Gal-1 alone, as well as for Gal-1 in the presence of ligands 1 (12 equivalents) and 4 (10 equivalents). CLEANEX-PM experiments were performed on a Bruker AVANCE 800 MHz spectrometer equipped with a cryoprobe. These experiments provide information about the exchange rates of the NH groups of the protein with the bulk water. The setup was optimized with a TD of 2048 (F3) × 128 (F1) points in the 1H and 15N dimensions, respectively, and 4 points in the F2 dimension, corresponding to 3 different mixing times (25, 50 and 75 ms) and the reference spectrum. The temperature during acquisition was set at 298 K for all the experiments. The ratio between the peak intensities in the CLEANEX spectra (Vi) and the reference HSQC spectra (V0) was taken as a measure of the exchange rate. The analysis was carried out using the CcpNmr Analysis software, and the Vi/V0 ratios were fitted as a function of the mixing time. Exchange rates, kex, were obtained from the fitting to Equation (1).
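A standard two-parameter CLEANEX-PM build-up expression, consistent with the parameters named below and given here as an assumed form rather than the authors' exact equation, is

$$\frac{V_i}{V_0} = \frac{k}{R_1 + k}\left[1 - e^{-(R_1 + k)\,\tau_m}\right] \qquad (1)$$

with $\tau_m$ the mixing time.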
Here k is kex and R1 is the effective NH relaxation rate during the CLEANEX mixing time.

15N CPMG Relaxation Dispersion. (…) in the 1H and 15N dimensions, [35] respectively, and 16 points in the F1 dimension. Different datasets were collected for the following samples: Gal-1:Apo, Gal-1:1, Gal-1:4, Gal-3:Apo and Gal-3:LacNAc. The samples were prepared using 15N uniformly labelled Gal-1 or Gal-3 at a concentration between 450 and 650 µM in 90% phosphate-buffered saline (50 mM sodium phosphate, 150 mM NaCl, pH 7.4) with 10% D2O and the addition of 2 mM DTT as reducing agent only in the case of Gal-1. The 15N CPMG relaxation dispersion experiments were acquired for Gal-1 and Gal-3 alone and in the presence of ligands 1 and 4 with a lectin:ligand ratio of 1:20. Dispersion data were fit to the Carver and Richards equation using in-house Matlab scripts, either for one field alone or simultaneously using data from two fields. Finally, a collective fitting was done using different residue clusters. Duplicate data were used to obtain an estimation of the error, and F-test statistics were used to validate the suitability of the different fitted models.
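The full Carver-Richards fit was done with in-house Matlab scripts that are not reproduced here. As an illustration of the collective-fitting idea (one shared kex across residues, with per-residue baselines and amplitudes), the following Python sketch uses the simpler Luz-Meiboom fast-exchange expression instead of the full Carver-Richards equation; all data and starting values are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def r2eff_luz_meiboom(nu_cpmg, r2_0, phi_ex, k_ex):
    """Fast-exchange (Luz-Meiboom) R2,eff; phi_ex = pA*pB*d_omega^2
    absorbs the populations and the chemical shift difference."""
    return r2_0 + (phi_ex / k_ex) * (
        1.0 - (4.0 * nu_cpmg / k_ex) * np.tanh(k_ex / (4.0 * nu_cpmg)))

def collective_residuals(params, nu_cpmg, r2eff):
    """Residuals for n residues sharing a single k_ex; each residue
    keeps its own baseline R2,0 and exchange amplitude phi_ex."""
    k_ex, rest = params[0], params[1:]
    res = [r2eff[i] - r2eff_luz_meiboom(nu_cpmg, rest[2*i], rest[2*i+1], k_ex)
           for i in range(r2eff.shape[0])]
    return np.concatenate(res)

rng = np.random.default_rng(0)
nu = np.array([50, 100, 200, 400, 600, 800, 1000], float)  # CPMG frequencies (Hz)
data = np.vstack([r2eff_luz_meiboom(nu, 12.0, 3.0e4, 380.0)
                  + rng.normal(0, 0.2, nu.size) for _ in range(3)])
x0 = [300.0] + [10.0, 1.0e4] * 3   # guesses: k_ex, then (R2_0, phi_ex) per residue
fit = least_squares(collective_residuals, x0, args=(nu, data))
print(f"shared k_ex = {fit.x[0]:.0f} s-1")   # should recover ~380 s-1
```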
Molecular modeling and MD simulations. General information about the 100 ns MD simulations of Gal-1/sugar complexes. The starting geometries for the initial modelling procedures were built based on the X-ray structure of Gal-1 complexed with N-acetyllactosamine (PDB ID: 1W6P). The structure of the sugar was superimposed in the binding site, employing the most populated conformation found for the free state (according to a standard NOE/molecular modelling approach). The complex structure was then submitted to a 100 ns molecular dynamics simulation. The MD simulations were performed using the Amber16 program with the ff14SB force field parameters for the protein and GLYCAM06j-1 for the oligosaccharides. [36] Thereafter, the starting 3D geometries were placed into a 12 Å octahedral box of explicit TIP3P waters, and counterions were added to maintain electroneutrality. Two consecutive minimizations were performed: 1) involving only the water molecules and ions, and 2) involving the whole system. The system was then heated and equilibrated in two steps: 1) 20 ps of MD heating the whole system from 0 to 300 K, followed by 2) equilibration of the entire system during 100 ps at 300 K. The equilibrated structures were the starting points for MD simulations (100 ns) at constant temperature (300 K) and pressure (1 atm). A detailed analysis of each MD trajectory (r.m.s.d., dihedral angles and hydrogen-bond evaluation) was accomplished using the cpptraj module included in the AmberTools 16 package.
Microsecond Molecular Dynamics (µs-MD) simulations. These simulations were carried out with the AMBER 18 package [37] implemented with the ff14SB [38] and GLYCAM06j-1 [36] force fields for the proteins and carbohydrate ligands, respectively. Binding histidine residues (H44 and H52 in Gal-1 and H158 in Gal-3) were modeled in their Nδ1-H tautomeric state (residue name HID in Amber). Protein complexes were immersed in a water box with a 10 Å buffer of TIP3P [39] water molecules and neutralized by adding explicit Na+ or Cl− counterions. A two-stage geometry optimization approach was performed. The first stage minimizes only the positions of solvent molecules and ions, and the second stage is an unrestrained minimization of all the atoms in the simulation cell. The systems were then heated by incrementing the temperature from 0 to 300 K under a constant pressure of 1 atm and periodic boundary conditions. Harmonic restraints of 10 kcal mol−1 were applied to the solute, and the Andersen temperature coupling scheme [40] was used to control and equalize the temperature. The time step was kept at 1 fs during the heating stages, allowing potential inhomogeneities to self-adjust. Water molecules were treated with the SHAKE algorithm, [41] such that the angle between the hydrogen atoms is kept fixed through the simulations. Long-range electrostatic effects were modelled using the particle mesh Ewald method. [42] An 8 Å cut-off was applied to Lennard-Jones interactions. Each system was equilibrated for 2 ns with a 2 fs time step at a constant volume and temperature of 300 K. Five independent production trajectories were then run for an additional 1.0 µs under the same simulation conditions, leading to accumulated simulation times of 5.0 µs for each system.

Allosteric communication analysis through MD simulations. The Weighted Implementation of Suboptimal Paths (WISP) [43] was used for the analysis of dynamical networks. First, a correlation matrix (Cij) is generated from 1,000 snapshots extracted every 1.0 ns from a converged µs-MD trajectory, by calculating the correlated motion among node–node pairs with Equation (2). In our model, nodes are defined by the whole-residue centre of mass, and two nodes are considered to be in contact if the mean distance between them along the MD simulation is 6 Å or less. The length of the edges connecting these nodes quantifies the degree of dynamic communication between pairs of connected nodes, as defined in Equation (3). This pathway length is inversely related to the correlated motion between nodes, meaning that shorter wij values indicate tightly correlated or anticorrelated nodes, whereas larger values indicate less correlated nodes.
$$C_{ij} \;=\; \frac{\left\langle \Delta \vec{r}_i(t) \cdot \Delta \vec{r}_j(t) \right\rangle}{\sqrt{\left\langle \Delta \vec{r}_i(t)^{2}\right\rangle \left\langle \Delta \vec{r}_j(t)^{2}\right\rangle}} \qquad (2)$$

Then, Dijkstra's algorithm is used to generate all source-node paths, finding the shortest (i.e., optimal) path. To identify not only the optimal but also suboptimal pathways, WISP employs a bidirectional search. Suboptimal pathways are defined as those closest in length to the optimal one, but not including it. The available code rapidly calculates both optimal and suboptimal communication pathways between two user-specified residues of a protein. For each galectin, 100 pathways were calculated between selected binding site residues (H44, R48, H52, W68, E71 for Gal-1; R144, H158, R162, W181, E184 for Gal-3) and all the other residues of the protein.
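The body of Equation (3) is not shown above; in the WISP formalism the edge length is typically defined as

$$w_{ij} = -\log\left(|C_{ij}|\right) \qquad (3)$$

which indeed shortens edges between highly (anti)correlated nodes; this form is offered as an assumption rather than the authors' exact equation. A minimal Python sketch of Equations (2) and (3), assuming per-residue centre-of-mass coordinates have already been extracted from the trajectory, could read:

```python
import numpy as np

def correlation_matrix(com):
    """Eq. (2): normalised covariance of residue fluctuations.
    com: array (n_frames, n_nodes, 3) of centre-of-mass coordinates."""
    dr = com - com.mean(axis=0)                        # fluctuations dr_i(t)
    num = np.einsum('tia,tja->ij', dr, dr) / len(com)  # <dr_i . dr_j>
    var = np.einsum('tia,tia->i', dr, dr) / len(com)   # <dr_i^2>
    return num / np.sqrt(np.outer(var, var))

com = np.random.default_rng(1).normal(size=(1000, 135, 3))  # hypothetical trajectory
C = correlation_matrix(com)
w = -np.log(np.abs(C))        # Eq. (3): WISP edge lengths
```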
These paths were recalculated for the apo and bound forms with two different ligands (LacNAc 1 and B type-II 4).
Isothermal titration calorimetry (ITC). Isothermal titration calorimetry experiments were performed using a MicroCal PEAQ-ITC calorimeter. Gal-1 and the ligand samples (1–4) were prepared in phosphate-buffered saline (50 mM sodium phosphate, 150 mM NaCl, pH 7.4) with 1 mM TCEP as reducing agent. The concentration of the protein solution was set between 100 and 200 µM and that of the sugar stock between 5 and 9 mM. During the automated experiment, small amounts of the sugar solution (2–5 µL) were titrated into a cell containing the protein solution and the heat released was detected. The analysis of the curves was accomplished using the MicroCal Origin 7 software. The association constants and the thermodynamic parameters obtained from the fit of the titration profiles to a single-site binding and to a sequential binding model are reported in Tables 1 and 2. Examples of titration profiles for each complex, with data fitting to the single-site binding model, are reported in Figure S22.
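To make the single-site analysis concrete, the sketch below fits synthetic cumulative ITC heats to a simplified one-site model (exact bound fraction from the binding quadratic). It ignores dilution heats and the differential per-injection form used by the instrument software, and the cell concentration and volume are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def bound_fraction(lt, mt, kd):
    """Fraction of protein sites occupied at total ligand lt and
    total protein mt, from the one-site binding quadratic."""
    b = lt + mt + kd
    return (b - np.sqrt(b**2 - 4.0 * lt * mt)) / (2.0 * mt)

def q_total(lt, kd, dh, mt=150e-6, v0=200e-6):
    """Cumulative heat (cal) for a single-site model; mt (M) and
    v0 (L) are an assumed cell concentration and volume."""
    return dh * mt * v0 * bound_fraction(lt, mt, kd)

lt = np.linspace(2e-5, 1.2e-3, 25)              # total ligand after each injection
rng = np.random.default_rng(2)
q_obs = q_total(lt, 99e-6, -5000.0) + rng.normal(0, 5e-7, lt.size)
(kd, dh), _ = curve_fit(q_total, lt, q_obs, p0=(1e-4, -4000.0))
print(f"KD = {kd*1e6:.0f} uM, dH = {dh:.0f} cal/mol")
```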
Nutritional status and fruit production of Carica papaya as a function of coated and conventional urea
Estado nutricional e produção do mamoeiro Formosa em função da aplicação de ureia protegida e convencional

Abstract (translated from the Portuguese resumo): As a strategy to minimize N losses in the soil, mineral N-fertilizer sources coated with polymers have been studied, with which it is possible to increase the synchronization between nutrient release from the fertilizer and its absorption by the plant. In this context, this study aimed to evaluate the macronutrient contents and the production of Formosa papaya as a function of sources and doses of N fertilizer applied as topdressing in the region of Bom Jesus, PI, Brazil. The treatments were arranged in a 2 × 4 factorial scheme, corresponding to nitrogen sources (coated urea and conventional urea) and nitrogen doses (350, 440, 530 and 620 g plant−1 of N), with four replicates and four plants per plot. The macronutrient contents in the leaf dry matter and the fruit production were evaluated. The sources and doses of top-dressed nitrogen fertilization increase the leaf macronutrient concentrations and the production of Formosa papaya, hybrid Caliman 01. Under the conditions in which the experiment was conducted, and considering the macronutrient contents regarded as adequate for crop nutrition associated with the maximum fruit production (8.08 kg plant−1), the supply of 525 g plant−1 of N in the form of coated urea is recommended.
Introduction
Formosa papaya is a plant that absorbs large amounts of nutrients and has continuous requirements, especially during its first year, reaching the maximum point twelve months after transplantation (Fontes et al., 2010). The intermittent harvest characteristic of papaya causes the plant to require water and nutrient supplies at frequent intervals, thus allowing the continuous flow of flower and fruit production (Brito Neto et al., 2011).
Given the nutritional requirement of the crop, it is well known that the correct quantity of fertilizer must be supplied in order to promote maximum plant yield (Santos et al., 2014). The recommendations of nitrogen (N) fertilization for the crop vary considerably from region to region, in both the applied amount of nutrients and the fertilization installments, owing to the different ecological systems that directly influence fertilization efficiency.
Incorrect fertilizer management with respect to dose, source and application frequency has a direct impact on plant nutrition, an essential factor for achieving economically viable production. Thus, a nutritional balance must be promoted, especially regarding N, since it has great importance in plant nutrition and participates as a constituent of proteins, nucleic acids and the chlorophyll molecule, besides directly acting on the processes of cell division and expansion (Marschner, 2005). On the other hand, N fertilization management is complex, because of the multiplicity of soil biochemical reactions, the dependence on edaphoclimatic conditions and the vulnerability to losses in the soil (Hu et al., 2012).
Excessive application of N fertilizers can cause nutritional imbalance in the plants and pollute the environment through contamination of the water table, which makes the practice of fertilization uneconomic, because there may be a higher concentration of soluble forms of N in the soil solution, which are more susceptible to losses (Lorenzini et al., 2012). Therefore, as a strategy to minimize the availability of nitrate (N-NO3−) through leaching in the soil profile and of ammonia (N-NH3) through volatilization, mineral sources with controlled N release, such as polymer-coated urea, have been used, with which it is possible to increase the synchronization between N release from the fertilizer and its absorption by plants, compared with the use of conventional urea (Azevedo et al., 2009). According to Morgan et al. (2009), urea granules are coated with three layers of polymers and the outermost layer comprises a low-solubility additive, which requires a greater volume of water to dissolve (10 to 20 mm), while the other layers remain in the solution along with ammonium (NH4+), thus compromising their recognition by nitrifying bacteria and reducing losses through leaching.
Currently, studies on the use of controlled-release fertilizers in the cultivation of fruit crops have been expanded in order to reduce the number of fertilizations per cycle and the final production cost (Kandil et al., 2010).
Therefore, this study aimed to evaluate the contents of macronutrients and the production of Formosa papaya as a function of sources and doses of N fertilizer applied as topdressing in the region of Bom Jesus-PI, Brazil.
Material and Methods
The experiment was carried out from November 2011 to February 2013, at the Fruticulture Experimental Area of the Campus Professora Cinobelina Elvas (CPCE), at the Federal University of Piauí (UFPI), in Bom Jesus-PI, Brazil (09º 04' 28" S; 44º 21' 31" W; 277 m). The municipality of Bom Jesus belongs to the semiarid region of Piauí and has a hot and humid climate, Cwa, according to Köppen's classification.
The seedlings were produced using certified seeds of Formosa papaya, hybrid Caliman 01, provided by the Capixaba Institute of Research, Technical Assistance and Rural Extension (INCAPER). The substrate consisted of sandy soil and cattle manure at a proportion of 3:1, respectively, mixed with 1.4 kg of single superphosphate (18% of P2O5) and 1.0 kg of potassium chloride (60% of K2O) per m³ of substrate, following the recommendations of Marin (2004). The seedlings were transplanted when they were approximately 20 cm high, 60 days after sowing (Marin, 2004).
The treatments were arranged in a 2 × 4 factorial, corresponding to two N sources [coated urea (Kimcoat N®), covered with polymer layers, and conventional urea, 45% of N] and four N doses (350, 440, 530 and 620 g plant−1 of N). The treatments were distributed in randomized blocks with 4 replicates and 6 hermaphrodite plants of Formosa papaya per plot (4 evaluated plants and 2 border plants), cultivated at double spacing (3.8 m between double rows × 1.8 m between rows × 2.0 m between plants), totaling 192 plants at a density of 1,785 plants ha−1.
The N doses corresponded to 80, 100, 120 and 140% of the recommended N fertilization, split monthly in a total of twelve applications, following the recommendations of Costa & Costa (2007).
Potassium chloride (60% of K2O) was used as K source, whereas single superphosphate (18% of P2O5) was used as P source. All the fertilizers (urea, single superphosphate and potassium chloride) were applied in a circle under the canopy projection, 20 cm away from the stem, and gradually incorporated into the soil.
The soil of the experimental area is classified as Quartzarenic Neosol (EMBRAPA, 2013) with sandy texture and showed the following physical and chemical characteristics before the experiment (Table 1).
The data referring to the climatic variables (temperature, relative air humidity and rainfall), collected at the weather station of the UFPI/CPCE during the experimental period, are shown in Figure 1.
Liming was performed 60 days before transplantation, based on the result of the soil chemical analysis for the experimental area, through the application of 1.22 t ha−1 of dolomitic limestone (RNV = 75%) over the total area. The pits were opened with dimensions of 40 × 40 × 40 cm, where 170 g of single superphosphate (18% of P2O5) were applied according to the recommendations of Costa & Costa (2007).
Seedling transplantation was performed using two plants per pit, and thinning was carried out 120 days later, at flowering, leaving only the most vigorous hermaphrodite plant, according to Marin (2004).
Plants were irrigated by a drip irrigation system using two emitters per plant, each with a flow rate of 3.74 L h−1. The applied water depths were determined daily according to the reference evapotranspiration (ETo), obtained from the multiplication of the Class A pan evaporation by the adopted Kp of 0.75. The obtained ETo values were multiplied by the crop coefficient of papaya at the respective development stages, according to Marin (2004). Whenever the amount of rain exceeded the evaporation of the pan, irrigation was suspended. Plants were subjected to the cultural practices recommended by Marin (2004).
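For illustration, the daily water budget described above can be expressed in a few lines; the Kc value and wetted area per plant below are hypothetical, while Kp = 0.75 and the two 3.74 L h−1 emitters come from the text.

```python
def daily_etc_mm(epan_mm, rain_mm, kc, kp=0.75):
    """ETo = Epan * Kp; ETc = ETo * Kc. Irrigation is suspended
    whenever rainfall exceeds the pan evaporation."""
    if rain_mm >= epan_mm:
        return 0.0
    return epan_mm * kp * kc

# Hypothetical day: 6.0 mm pan evaporation, no rain, assumed Kc = 0.9
etc = daily_etc_mm(6.0, 0.0, kc=0.9)            # mm, i.e. L per m2
litres_per_plant = etc * 2.0                     # assumed 2 m2 wetted area
run_time_h = litres_per_plant / (2 * 3.74)       # two 3.74 L/h drippers
print(f"{etc:.2f} mm -> {litres_per_plant:.1f} L -> {run_time_h:.2f} h")
```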
For the determination of the nutritional status of the papaya plants, "F" leaves (leaves with the first fully developed flower at their base) were collected 120 days after transplantation, when plants were in full flowering (Malavolta et al., 1997). The contents of macronutrients were then determined at the Plant Science laboratory of the UFPI/CPCE, according to the methodology described by Malavolta et al. (1997).
Harvest was performed from October 2012 to February 2013, along with the determination of fruit production per plant (kg plant−1), considering fruits collected weekly at maturation stage 3, when the yellow color covered only 25 to 50% of the peel surface (Marin, 2004). The fruits were then selected, counted and weighed on a precision scale (0.01 g) for the determination of the mass of fruits per plant.
The data were subjected to analysis of variance in order to identify significant effects of N sources and doses by the F test, and means of N sources were compared by the Tukey test. N doses were evaluated by simple polynomial regression analysis, using the Assistat and Sigmaplot programs.
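The "maximum estimated dose" figures quoted throughout the Results come from the vertex of the fitted quadratic. A minimal sketch of that calculation (with illustrative, not actual, data) is:

```python
import numpy as np

doses = np.array([350.0, 440.0, 530.0, 620.0])   # g plant-1 of N
prod = np.array([6.9, 7.8, 8.1, 7.6])            # illustrative kg plant-1 values

a, b, c = np.polyfit(doses, prod, 2)             # y = a*x**2 + b*x + c
x_max = -b / (2.0 * a)                           # vertex = maximum estimated dose
y_max = np.polyval([a, b, c], x_max)
print(f"max. estimated dose: {x_max:.0f} g plant-1 -> {y_max:.2f} kg plant-1")
```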
Results and Discussion
As observed in Table 2, there was an individual effect of N sources only on the leaf contents of N and K and on production, while N doses promoted significant differences for the leaf contents of N, K and Ca and for production (p < 0.01), and for the contents of P, Mg and S (p < 0.05). For the interaction between the studied sources and doses, there was a significant effect on the contents of N, K and Ca at the 0.01 probability level.

[Table 2 notes: CV - coefficient of variation; LSD - least significant difference; ns - not significant; ** - significant at the 0.01 probability level. Means followed by different letters in the columns differ statistically by the Tukey test (p < 0.01).]
There was an increment of 2.52% in N contents for plants fertilized with polymer-coated urea in comparison to conventional urea (Table 2). The superiority of coated urea occurred because polymer-coated granules have resins that allow a long solubilization time, thus releasing the nutrient gradually, by diffusion through the micropores, into the soil solution (Hu et al., 2012), with reduction of leaching losses, especially in sandy soils (Osman & El-Rahman, 2009) such as the soil of the experimental area (Table 1). These results agree with those observed for other fruit crops of economic importance, such as peach (Kandil et al., 2010) and guava (Osman & El-Rahman, 2009).
It should be pointed out that the texture of the soil cultivated with papaya, 920 g kg−1 of sand (Table 1), favors the efficiency of coated fertilizers, since N losses through leaching along the profile in the form of nitrate (N-NO3−) and through volatilization of ammonia (N-NH3) are intensified in sandy soils, a common phenomenon in orchards of fruit crops (Barlow et al., 2009). Leaf N contents were also significantly affected by the N doses applied to the soil. For the coated fertilizer, the maximum estimated dose of N was 533.63 g plant−1, which corresponds to the maximum N content of 45.38 g kg−1 in the leaf dry matter. For the N contents as a function of the application of conventional urea (Figure 2B), the maximum estimated value was 405.40 g of N plant−1, promoting a leaf N content of 42.56 g kg−1.
The lower leaf N contents observed with the increment in N doses beyond the maximum estimated doses for both sources can be attributed to the N sufficiency achieved by the plant, according to the range proposed by Malavolta et al. (1997). According to Morgan et al. (2009), at the highest applied N doses, dissolution reactions with a greater increase in the pH of the site are expected, favoring the formation and, consequently, the emission of N-NH3, decreasing the use by papaya plants of the N applied as mineral fertilizer.
The values of leaf N contents, from 40.0 to 53.1 g kg−1, are consistent with the 42.6, 44.9 and 45.0 g kg−1 reported by Falcão & Borges (2006), Almeida et al. (2002) and Campostrini et al. (2001), respectively. Malavolta et al. (1997) described the range of 40 to 50 g kg−1 as ideal for papaya at flowering. Therefore, the plants cultivated in the present study, regardless of the N source applied to the soil, had an adequate N supply.
The mean K content in papaya plants that received coated urea as N source was 13.32% higher compared with plants cultivated under conventional urea (Table 2). N application in the form of urea promoted higher K contents in the leaves, probably because this source has part of the N in the nitric form, which may have favored the absorption of the cation K+, as reported by Marschner (2005). The increment in leaf K contents with the application of polymer-coated N fertilizer was also observed by Kandil et al. (2010), with a superiority of 9.14% in comparison to the application of conventional urea.
The K contents of 35.07 g kg−1 observed in this study are higher than the 27.30 g kg−1 of K in the leaves of 'Baixinho de Santa Amália' papaya at 120 days after transplantation and higher than the range of 25.0 to 30.0 g kg−1 of K considered adequate to supply the plants with K, according to Malavolta et al. (1997), although no visual symptom of excess was identified in the plants.
The increment in the absorption of N and K due to the application of polymer-coated fertilizers may contribute to the reduction of the losses of these nutrients through leaching, thus decreasing the environmental pollution caused by nitrate, which is easily leached through the drainage of rainwater or irrigation, especially in sandy soils (Luna et al., 2013).
K contents increased up to 40.77 and 31.71 g kg−1 (Figures 2C and 2D), referring to the maximum estimated doses of 489.47 and 402.31 g plant−1 of N for the sources coated urea and conventional urea, respectively, with reductions in leaf K contents with the application of doses higher than the respective maximum estimated doses. Additionally, at the maximum estimated N doses, the application of coated urea resulted in an increment of 22.22% in leaf K contents compared with conventional urea, which represents an increase of 9.06 g kg−1 in leaf K content. Santos et al. (2014), evaluating N doses at different planting spacings of Formosa papaya, cv. Caliman, also observed increments in K contents with the increase in N fertilization.
The increase in leaf K contents as a function of N application can be explained by the fact that N is transported to plant shoots in the form of potassium nitrate (KNO 3 ) (Marschner, 2005) and the monthly application of K contributed to its ideal supply during crop development.
As to leaf Ca contents (Figure 2E), the maximum value for coated urea was 22.82 g kg−1, promoted by the N dose of 464.04 g plant−1, while for conventional urea (Figure 2F) the maximum N dose was 490.91 g plant−1, with a leaf Ca content of 21.24 g kg−1. These values are within the range adopted as sufficient for papaya (Malavolta et al., 1997) and are similar to the 22.82 g kg−1 obtained by Falcão & Borges (2006).
For P contents in the shoot dry matter of Formosa papaya (Figure 3A), the increment in N doses increased P contents up to the maximum estimated dose of 483.78 g of N plant−1, corresponding to the maximum content of 7.25 g kg−1 of P, which is within the range considered sufficient for papaya (Malavolta et al., 1997) and higher than the range of 5.0 to 7.0 g kg−1 of P recommended by Costa & Costa (2007) for plants aged 120 to 140 days after transplantation.
The increment of leaf P contents with the increase in N doses in the soil was probably due to the synergism between N and P contents in the plants (Marschner, 2005), since N fertilization has a positive effect on leaf P contents.
Regardless of the studied N source, Mg contents (Figure 3B) in the leaf dry matter of Formosa papaya increased with the increment in N dose up to the maximum estimated value of 464.57 g plant−1 of N, which promoted a Mg content of 6.37 g kg−1. These maximum results are lower than the 10.0 g kg−1 described by Malavolta et al. (1997) as sufficient for the nutrition of papaya plants. The fact that leaf Mg contents are below the range considered ideal for the crop while, in contrast, the K contents are above it is related to the inhibition of Mg2+ absorption caused by the high K+ contents (Marschner, 2005).
Sulfur (S) contents in the leaf dry matter of Formosa papaya (Figure 3C) also increased as a function of the N doses applied to the soil, and the maximum estimated dose was 485.58 g plant−1, which corresponds to an S content of 4.29 g kg−1. This is therefore superior to the maximum mean of 3.86 g kg−1 of S reported by Santos et al. (2014), who evaluated the nutritional status of Formosa papaya (hybrid Caliman 01) as a function of N fertilization. The maximum S values were also higher than that observed by Santana et al. (2004) in papaya plants from the 'Solo' group cultivated under the application of conventional fertilizer.
Comparing the S values shown in Figure 3C with the 6.0 g kg−1 of S considered by Malavolta et al. (1997) as sufficient for the crop, the plants were deficient in S. Although the P source used in the experiment was single superphosphate (10% of S), it was not enough for an adequate supply of S and, probably, the balance between the N supplied through urea application and the S supplied through P fertilization was not adequate either, thus causing an apparent N accumulation in the plant (Table 2). This situation was also evidenced by Jamal et al. (2010) and Santos et al. (2014), who observed that significant interactions between N and S in the soil, caused by higher doses of N fertilization, result in lower S availability.
The mean production of fruits per plant, for both N sources, increased with the increments in N doses, and the coated fertilizer (Figure 4) was superior to the conventional fertilizer at all of the studied N doses. Coated urea thus promoted a maximum estimated production of 8.08 kg plant−1, corresponding to the N dose of 525 g plant−1, while the maximum production promoted by conventional urea fertilization was only 6.42 kg plant−1, at a maximum N dose of 500.9 g plant−1.
The increment in fruit production promoted by the application of coated urea is due to the fact that polymer-coated fertilizers reduce N losses through leaching and volatilization, showing greater efficiency in the yield of some crops (Osman & El-Rahman, 2009). Additionally, Brito Neto et al. (2011) claim that the gradual supply of N, with better spatial distribution in the soil during the productive stage, is of great importance for the papaya crop, favoring the synchronization between the supply of this nutrient and the physiological demand of the plant, such as for the formation of flowers and fruits.
The highest estimated production of Formosa papaya is consistent with the value of 8.0 kg plant−1 reported by Brito Neto et al. (2011) in a study with 'Sunrise Solo' papaya as a function of N doses, and lower than the 17.34 kg plant−1 reported by Souza et al. (2007) for Formosa papaya, cv. Tainung 01, fertigated with different combinations of N fertilization. The lower production may be attributed to the short harvest period, with a duration of 120 days, mainly associated with the low relative air humidity recorded during the cultivation period (Figure 1B), because Reis & Campostrini (2008) state that the mean relative air humidity recommended for the flowering stage of papaya is 60 to 85%. Values below this range promote high rates of flower abortion, which was observed during the flowering of the papaya plants.
Conclusions
1. N sources and doses improve the macronutrient nutritional status and the production of Formosa papaya, hybrid Caliman 01.

2. Increasing N doses of the coated source applied as top-dressing promotes higher N contents and fruit production, compared with conventional urea.

3. Based on the contents of macronutrients considered adequate for the nutrition of the crop, associated with the maximum fruit production, the supply of 525 g plant−1 of N in the form of coated urea is recommended.
Figure 3. Leaf contents of phosphorus (A), magnesium (B) and sulfur (C) of Formosa papaya as a function of different doses of top-dressing nitrogen fertilization
Figure 4. Fruit production of Formosa papaya cultivated under different doses of coated and conventional urea
Iranian Journal of Basic Medical Sciences

Inhibitory Effect of Crocin on Aggregation of 1N/4R Human Tau Protein in vitro
Article type: Original article

Objective(s): Alzheimer's disease (AD) is the most common age-related neurodegenerative disorder.
Treatment for AD is becoming a global health care concern (3). Two hallmarks of AD in the brain are the accumulation of the amyloid-β peptide (Aβ) as senile plaques and the intracellular aggregation of the microtubule-associated tau protein as neurofibrillary tangles (4). Consequently, abnormal protein accumulation leads to neuronal loss and episodic memory impairment in AD patients (4). Current therapeutic approaches, such as acetylcholinesterase (AChE) inhibition and N-methyl-D-aspartate (NMDA) glutamate receptor blockers, have limited beneficial efficacy with several side effects (5).
Recent evidence indicates that abnormally hyperphosphorylated tau protein is a critical aspect of the pathology of AD. Accordingly, there is a growing interest in focusing on the structure and function of tau protein to develop new drugs (6,7). In this regard, tau-directed drug discovery is divided into three major categories, namely anti-aggregation strategies (methylene blue chloride, Rember™), inhibitors of tau hyperphosphorylation (lithium) and microtubule-stabilizing agents such as Paclitaxel (8-10). Combinatorial library screening methods show inhibitory activity of small molecules with several chemical properties against tau protein aggregation, but most of these compounds have toxic side effects and low permeability across the blood-brain barrier (BBB) (11-14). Thus, more attention has been attracted to natural phytochemicals such as polyphenols, including curcumin (15,16), fulvic acid (17), cinnamon (18), oleocanthal (19,20) and oleuropein (21), which have anti-aggregation effects. Saffron (Crocus sativus L.) has been frequently used in traditional herbal medicine for its sedative, anti-spasmodic, eupeptic, anti-depressant, stomachic, and anti-catarrhal features, as well as its protecting characteristics against age-dependent neurodegeneration and dementia (22-24). In addition, recent pharmaceutical studies in human and animal models show that saffron extract has therapeutic effects such as anti-tumor cell proliferation, insulin resistance prevention and neuronal injury protection (22,25). Carotenoids are the major secondary metabolites of saffron, and crocin (di-glucosyl ester of crocetin) is the main glycosylated carotenoid constituent (Figure 1). This compound is a water-soluble carotenoid, which is responsible for saffron's color (26).
Cumulative basic and clinical evidence supports the memory-enhancing and anti-Alzheimer's disease properties of saffron (27-29). Experimental studies have indicated acetylcholinesterase inhibition, reduction of pro-inflammatory markers, and inhibition of impaired memory retention by saffron (29). More importantly, recent clinical trials demonstrated that saffron has preventive properties against mild to moderate AD (29). Moreover, Ghahghaei et al (30) and Papandreou et al (31) showed anti-aggregation activity of crocin on amyloid-β 1-40/1-42 peptides in different in vitro and in vivo experimental models. To the best of our knowledge, there is no study on the inhibitory effects of crocin on the tau aggregation process. Given the similarity of fibril formation in both amyloid-β and tau protein (32), in the present study we investigated the inhibitory effect of crocin on the aggregation of the recombinant human tau protein (1N/4R) isoform in vitro.
Materials and Methods
Materials
All chemicals were of analytical grade and purchased from Merck, GmbH, Germany.
Purification of crocin
Crocin was purified from Crocus sativus L. extract as described previously (33). In all steps, a crocin stock (2 mg/ml) was prepared from its powder.
Tau expression and purification
Expression and purification of tau protein were done based on our previous work with minor modification (34). Briefly, Escherichia coli strain BL21 (DE3) was transformed with the pET-21a vector containing the human tau 1N/4R gene (htau34). Recombinant tau was purified via Ni-NTA-Agarose precipitation (equilibrated with 10 mM HEPES, 100 mM NaCl, and 15 mM imidazole, pH 7.4) and eluted with 80 mM imidazole. The concentration of purified tau was determined from the OD at 280 nm using an extinction coefficient of 7700 M⁻¹ cm⁻¹, and the purity of the protein was verified by SDS-PAGE gel electrophoresis. The protein was stored at -80 °C until use.
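As a quick cross-check of this concentration step, the Beer-Lambert relation with the stated extinction coefficient gives (assuming a standard 1 cm path length, which the text does not state):

$$c = \frac{A_{280}}{\varepsilon \, l} = \frac{A_{280}}{7700\ \mathrm{M^{-1}\,cm^{-1}} \times 1\ \mathrm{cm}},$$

so an absorbance of about 0.15 at 280 nm corresponds to roughly 20 μM tau, the working concentration used in the aggregation assay below.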
Tau aggregation via Thioflavin T (ThT) fluorescence
Tau fibrillation was assayed using Thioflavin T emission fluorescence based on the method of Monti et al with minor modification (20). In brief, solutions of tau (20 μM) were prepared in an assembly buffer (10 mM HEPES, 100 mM NaCl, 3 mM dithiothreitol (DTT), and 800 μM arachidonic acid as inducer of fibrillation) in a Greiner solid black 96-well plate. After 1 hr of incubation at 37 °C, ThT (50 μM) was added to assay the fibrillation reaction. The plate was covered with self-adhesive aluminum foil to avoid exposure to light and incubated with shaking at 250 rpm for 120 hr at 37 °C. Fluorescence was measured every 24 hr with a Synergy H4 multimode microplate reader (Biotek Instruments, Winooski, VT) at 440 nm excitation and 490 nm emission. The background fluorescence of tau, crocin, arachidonic acid and ThT was subtracted. To study the inhibitory effect of crocin on tau protein fibrillation, tau was incubated in the absence and presence of crocin at concentrations ranging from 0.2 μg/ml to 600 μg/ml. Briefly, the aggregation procedure for 20 μM tau protein in the presence of 800 μM arachidonic acid was performed at different concentrations of crocin (0.2, 2, 20, 50, 100, 200, 400 and 600 μg/ml). The amount of filament formation was estimated from the ThT fluorescence intensity. The percentage of inhibition of tau aggregation in the presence of crocin was calculated relative to tau aggregation in the absence of crocin (100%). The normalized data were plotted against the logarithm of crocin concentrations and fitted to a dose-response curve. Methylthioninium chloride (methylene blue, 100 μM) was used as the reference tau aggregation inhibitor. All measurements were carried out in triplicate in separate assays with at least two preparations of purified protein.
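The dose-response fitting step could, for example, be implemented as follows. This is a minimal sketch using SciPy; the four-parameter logistic form and all variable names are our own assumptions (the paper does not state which model was used), and the inhibition values are placeholders, not the measured data:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_c, bottom, top, log_ic50, hill):
    """Four-parameter logistic dose-response curve on a log-concentration axis."""
    return bottom + (top - bottom) / (1.0 + 10 ** ((log_ic50 - log_c) * hill))

# Hypothetical example data: crocin concentrations (ug/ml) and % inhibition
conc = np.array([0.2, 2, 20, 50, 100, 200, 400, 600])
inhibition = np.array([5, 12, 30, 42, 50, 65, 78, 85])  # placeholder values

log_conc = np.log10(conc)
params, _ = curve_fit(four_pl, log_conc, inhibition,
                      p0=[0, 100, np.log10(100), 1.0])
ic50 = 10 ** params[2]
print(f"Estimated IC50: {ic50:.1f} ug/ml")
```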
Circular dichroism (CD) spectroscopy
Far-UV CD spectra were documented in the presence and absence of crocin to monitor changes in the secondary structure of tau protein during aggregation. At the end of the experiment, after 120 hr of incubation, samples were diluted 1:3 in buffer containing 10 mM HEPES. The measurements were done in a 0.1 cm path-length cuvette using an Aviv model 215 spectropolarimeter (Lakewood, NJ, USA). Spectra were recorded in the range of 195-260 nm with a data interval of 1 nm. Each spectrum was an average of two scans with subtraction of the buffer baseline.
Dynamic light scattering (DLS)
Next, samples were diluted 1:3 again in 10 mM HEPES buffer and DLS measurements were performed with a ZetaPlus instrument (Zeta Potential Analyzer, Brookhaven, USA) using the particle sizing software (version 5.2). Samples were thermally equilibrated at 25 °C for 2 min before data collection. Particle size was recorded as the average of five measurements and expressed as percentage of mass and mean radius (nm).
Transmission electron microscopy (TEM)
Aliquots of samples (2 μl) were diluted 1:3 again in 10 mM HEPES buffer and adsorbed onto carbon-coated gold TEM grids (SPI Supplies, Westchester, USA). The grids were dried with filter paper and negatively stained with 2% uranyl acetate. Observations were performed with a Hitachi H600 transmission electron microscope operating at 50,000× magnification and 75 kV.
Cell culture
To assess possible toxicity of the aggregates produced, cell viability was evaluated with the conventional MTT reduction assay in the presence and absence of crocin in the PC12 cell line (35). The PC12 cell line was obtained from the Pasteur Institute of Iran, Tehran, Iran. All cells were cultured in sterile flasks with DMEM medium and 10% fetal bovine serum (FBS). To evaluate cell viability, cells were incubated with 10 µl of crocin (after 120 hr) for 24 hr at 37 °C.
Statistical analysis
Aggregation data were fitted to a sigmoidal model and graphed with SigmaPlot version 12.0. Data are expressed as mean ± standard deviation (SD). Cell viability was compared with a t-test, and P<0.05 was considered statistically significant. Statistical analyses were performed using SPSS software version 15.
Results
Tau expression and purification
Our previous study showed that tau (1N/4R) can be expressed in E. coli strain BL21 (DE3) with the pET-21a vector in high quantity (34). As shown in Figure 2, the 412-amino-acid tau protein (htau34) was the major protein expressed. Samples induced and not induced by IPTG were compared in lanes A and B, which showed a considerably concentrated band at 48-63 kDa. Next, purification of the htau34 monomer with a purity of >98% was achieved following the Ni-NTA-Agarose precipitation step described above, with a final volume of 5 ml containing 1 to 2 mg/ml of protein. The eluted fractions containing htau34 showed a single dense band at 48-63 kDa (lane E).
Evaluation of tau aggregation in the presence of crocin using the ThT fluorescence assay
The ThT fluorescence assay can be used to follow the polymerization of tau into filaments. ThT binds to beta-sheet structures and changes its fluorescence emission spectrum, which can be used to confirm the polymerization of tau into aggregates. In our experiment, the time course of tau aggregation followed a nucleation-elongation reaction model involving the formation of beta-structure, giving a sigmoid curve that reached a plateau after 120 hr of incubation (Figure 3). Amyloid aggregation systems are characterized by a sigmoidal curve with three phases: nucleation, elongation and steady state. As shown in Figure 3, the nucleation phase occurred within 5 hr of incubation, while the elongation step took place between 5 and 96 hr. The kinetics reached a steady-state phase after 120 hr of incubation, followed by a slow drop in ThT fluorescence.
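A three-phase time course of this kind can be summarized by fitting a Boltzmann-type sigmoid to the ThT readings to extract a half-time and an apparent lag time. The sketch below is purely illustrative; the paper fitted its curves in SigmaPlot, and the fluorescence values here are placeholders rather than the measured data:

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(t, f_min, f_max, t_half, tau):
    """Boltzmann sigmoid: ThT fluorescence as a function of time."""
    return f_min + (f_max - f_min) / (1.0 + np.exp(-(t - t_half) / tau))

# Hypothetical ThT readings every 24 hr (arbitrary fluorescence units)
time_hr = np.array([0, 24, 48, 72, 96, 120])
tht = np.array([5, 20, 55, 85, 95, 98])  # placeholder values

params, _ = curve_fit(boltzmann, time_hr, tht, p0=[5, 100, 48, 12])
f_min, f_max, t_half, tau = params
lag_time = t_half - 2 * tau  # a common definition of the lag (nucleation) phase
print(f"t1/2 = {t_half:.1f} hr, lag time = {lag_time:.1f} hr")
```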
To determine the IC50, we used crocin at concentrations ranging from 0.2 to 600 μg/ml. The results showed that 100 μg/ml of crocin was required to inhibit 50% of tau aggregation, in a dose-dependent manner (data not shown). As indicated in Figure 3, under fibril-forming conditions, when crocin at a final concentration of 100 μg/ml was added to tau protein and incubated for 120 hr, the intensity of ThT fluorescence was significantly changed (P<0.001). This implies that crocin can bind to intermediate structures of tau protein and inhibit their conversion to more aggregated conformations during the fibrillation process.
Circular dichroism (CD) spectroscopy
CD spectroscopy is widely used for probing protein structures in solution. Figure 4 shows the CD spectra of monomeric tau, aggregated tau, and tau incubated in the presence of crocin (black, green and blue lines, respectively). As shown in Figure 4, the CD spectrum of the tau protein monomer (black line) before the fibrillation process shows a very small positive transition near [θ]220 and a single large negative peak at [θ]200. This spectrum is typical of the random coil structure of tau protein. Moreover, the CD spectrum after 120 hr of incubation converted into a strong negative ellipticity near [θ]217, which is the expected spectrum of the beta-sheet structure of tau fibrils; the transition from random coil to beta-sheet structure was clearly observed after 120 hr. In addition, after adding crocin (100 μg/ml) to tau protein under fibril-forming conditions, the intensity of the ellipticity at [θ]217 was significantly reduced, showing that crocin impeded the formation of beta-sheet structures of tau protein. These results reflect an increase in the stability of the random coil structure of tau protein and a decline in the amount of beta-sheet structure of tau fibrils.
Dynamic light scattering (DLS)
In our study, DLS was performed to observe the size distribution of the population of tau protein aggregate species using the particle sizing software (version 5.2). As shown in Table 1, at the end of the fibrillation process of tau protein without crocin, the diameter of the particles was significantly increased. The filaments represented a heterogeneous mixture of aggregates of different sizes with an average diameter of approximately 1745 nm (95% of total mass). In contrast, for tau protein with crocin (100 μg/ml) under fibril-forming conditions, the diameter of the aggregated forms was remarkably decreased, with a mean size of approximately 111 nm (87% of total mass).
Transmission electron microscopy (TEM)
The morphology of the aggregates in the absence and presence of crocin was observed by TEM (Figure 5). After incubation for 120 hr under fibril-forming conditions, tau protein without crocin formed distinct mature fibers as well as amorphous aggregates approximately 10-22 nm in width and up to 1 μm in length (Figure 5A). These structures are characteristic of paired helical filaments (PHFs) and contain extended beta-sheet and hydrophobic structures. In contrast, in the presence of crocin, the majority of the tau protein was in amorphous form (Figure 5B).
PC12 cell culture
To investigate the toxicity of the structures produced in the presence of crocin, an MTT assay was performed. After incubation for 120 hr under fibril-forming conditions, the MTT assay showed less toxicity (about 20%) in samples containing tau protein and crocin compared to samples containing tau protein only, suggesting that the mixture of tau protein and crocin was not toxic to the cells (Figure 6).
Discussion
Considering the complexity of the pathogenesis and the multi-step process of AD development, current therapeutic approaches involve Multi-Target-Directed Ligands (MTDLs) (36, 37). In the MTDL framework, natural products such as phytochemicals, including alkaloids, polyphenols and terpenoids, are important therapeutic substances (38). The major advantages of herbal compounds are their multiple actions and multi-target mechanisms (39). Therefore, AD therapeutic candidates should have several effects, such as anti-oxidation, anti-inflammation, inhibition of amyloid-β and tau protein aggregation, and acetylcholinesterase inhibitory activity (39-41).
A previous study showed that crocin acts as a potent radical scavenger (42) and Nam et al suggested that crocin represses microglial activation in rat brain and could be effective in the inhibition of LPS-induced nitric oxide (NO) release (43).Moreover, an in vitro study has shown that saffron extract has an acetylcholinesterase (AchE) inhibitory activity (44).In addition, Ghahghaei et al (30) and Papandreou et al (31) showed that crocin effectively reduces the amount of amyloid-β fibrils.
Herein, we showed that crocin can inhibit the aggregation of human tau protein. Our results revealed that in the presence of crocin, the beta-structure/random coil ratio of tau protein under fibril-forming conditions decreased significantly (Figure 4). The probable anti-tau-aggregation mechanism of crocin could be related to its chemical structure, which consists of three parts: a polyene hydrocarbon chain, carbonyl groups and β-D-gentiobiosyl units at both ends (Figure 1) (26, 30). The partial negative charge of the carbonyl groups can likely interact with positively charged residues such as lysine and arginine. Positively charged residues, especially lysine, occur in the hexapeptide aggregation cores of the protein (275VQIINK280 and 306VQIVYK311) and play a critical role in the self-assembly of tau protein into abnormal fibrils (45). Accordingly, the carbonyl groups of crocin could interact with lysine residues and impair the self-assembly steps of nucleation and elongation during fibril formation.
Additionally, previous evidence has shown that adding sugar moieties to curcumin to make sugar-curcumin disrupts tau aggregation fibrils at a low IC50 and causes potent neuroprotective effects (46). Meanwhile, the permeability of sugar-curcumin across the blood-brain barrier was improved (46). In comparison with the water-soluble sugar derivatives of curcumin, crocin naturally has two sugar moieties attached to the ends of the polyene hydrocarbon chain, and these sugar groups probably act in a similar way. After crocin attaches to tau protein, the gentiobiose units could increase the total hydrodynamic radius of tau protein and consequently reduce the hydrophobicity required for beta-sheet formation of the aggregates. Finally, the soluble species of tau protein were non-toxic in the presence of crocin, which suggests crocin as a safe candidate.
From the above data, it is evident that crocin appears to have multiple neuroprotective mechanisms and a safety profile in humans (28, 30, 31, 43, 47-50). Importantly, a randomized controlled trial illustrated the efficacy of saffron in the treatment of patients with mild to moderate AD (50). As mentioned above, the anti-tau-aggregation property of crocin makes this molecule a multifunctional drug candidate.
Conclusion
We conclude that crocin has multiple structural characteristics that give rise to various neuroprotective properties. We suggest that the effect of gentiobiose on the tau aggregation process would be a good target for future investigations.
Figure 1. Chemical structure of crocin
Figure 2. Expression and purification of tau 412 AA (htau34); the samples were mixed with SDS sample buffer, separated on a 12% SDS-PAGE gel and stained with Coomassie Brilliant Blue R250. The tau monomer band is in the range of 48-63 kDa.
Academic information retrieval using citation clusters: In-depth evaluation based on systematic reviews
The field of scientometrics has shown the power of citation-based clusters for literature analysis, yet this technique has barely been used for information retrieval tasks. This work evaluates the performance of citation-based clusters for information retrieval tasks. We simulated a search process using these clusters, with a tree hierarchy of clusters and a cluster selection algorithm, and evaluated the task of finding the relevant documents for 25 systematic reviews. Our evaluation considered several trade-offs between recall and precision for the cluster selection, and we also replicated the Boolean queries self-reported by the systematic reviews to serve as a reference. We found that the search performance of citation-based clusters is highly variable and unpredictable, that it works best for users who prefer recall over precision at a ratio between 2 and 8, and that when used alongside query-based search the two methods complement each other, including by finding new relevant documents.
Introduction
Researchers and other knowledge workers need special information retrieval (IR) tools because their IR tasks and practices differ from the general public and from each other (Ellis, 1993;Kuhlthau, 1991;Russell-Rose et al., 2018).Academic literature search is an essential part of any research project, and the most commonly used IR method is query-based retrieval: search using keyword queries to retrieve a ranked list of documents.However, some users complement this method with citationbased IR methods that follow the citations of the documents (Hemminger et al., 2007;Ortuño et al., 2013).These methods have two major advantages over query-based retrieval: 1) They are independent of the keywords, helping with lack of vocabulary knowledge or semantic ambiguity, and 2) they use the intellectual information of the citations, helping find documents that other researchers already connected.However, these methods can be timewise inefficient for users (Wright et al., 2014).
Given the prominence of citation clusters in scientometric research (Waltman & van Eck, 2012), it is remarkable that citation cluster-based IR (CCIR) is largely absent from the toolset of users (Wolfram, 2015).CCIR combines citation-based IR and cluster-based IR by making use of clusters of documents identified based on citation links.CCIR could allow users to also use approaches developed in scientometric research, such as science maps (Chen, 2017), cluster labeling (Sjögårde et al., 2021), and visualization software (van Eck & Waltman, 2010).CCIR offers two potential benefits over other citation-based IR methods: 1) it is less hindered by documents that cite the relevant literature poorly (Robinson et al., 2014) and 2) it communicates the topic structure of a document corpus, including the relative size of different topics and the relations between topics (Pirolli et al., 1996).
Effective cluster-based IR requires the clusters to group together the documents that are relevant for the IR task of the user (i.e., the cluster hypothesis (van Rijsbergen, 1979)). The extent to which this condition is fulfilled by CCIR is an open question. The answer may be different for different types of IR tasks (Hearst & Pedersen, 1996) and for different CCIR implementations. We consider one specific IR task, namely performing a literature search to write a systematic review (SR), and one specific CCIR implementation, namely a tree hierarchy of citation-based clusters of MEDLINE documents. As discussed below, we believe this to be a sensible use of CCIR. Moreover, data for experimentation was relatively easily available for this task. To determine the extent to which CCIR groups together relevant documents, we address the following research questions:
• What types of users are best served by CCIR?
• What types of SRs are best served by CCIR?
• What are the strengths and weaknesses of CCIR?
We answer these questions by simulating a CCIR search process, evaluating its performance and analyzing its results. We simulated the CCIR search process in the tree hierarchy with an algorithm that aims to simulate the behavior of a human user. The idea of a CCIR hierarchy is based on classical cluster-based IR strategies (Cutting et al., 1992; Jardine & van Rijsbergen, 1971) and on a frequently used scientometric approach for creating classification systems of science (Waltman & van Eck, 2012). We evaluated the performance of CCIR for the task of finding the relevant documents for 25 SRs from a benchmark dataset (Scells et al., 2017), using as a performance reference the documents retrieved by the SRs' self-reported Boolean queries, obtained through intensive manual annotation. This task is well-suited for cluster-based IR because all relevant documents are considered equally important; the task is considered a Boolean retrieval task, so there is no ranking of documents. From these results we analyzed the different preferences of hypothetical users regarding the trade-off between precision and recall, the overlap between documents retrieved by CCIR and by a Boolean query, and how the topic of a SR affects its task performance.
To our knowledge, our work is the first study that evaluates the performance of CCIR.We additionally provide two outputs that can be reused by other researchers: 1) an evaluation protocol for clusters-based IR methods that uses SRs, and 2) an extension of the original SR dataset with the annotated Boolean queries.
Science mapping
Our research on CCIR is part of a bigger trend of research that attempts to connect the fields of scientometrics and information retrieval.Experts agree that these fields have much to gain from each other (Frommholz et al., 2021;Mayr & Scharnhorst, 2015).While research on CCIR seems to have slowed down in recent years, research on clustering methods in the field of scientometrics continues to move forward.
Closest to our research are the citation clusters used for science mapping and field delineation studies (Chen, 2017;Cobo et al., 2011).It has been shown that these clusters create communities of documents with semantic similarity (i.e., a common topic) (Klavans & Boyack, 2006) and that they provide insights for analyzing these documents (Small & Garfield, 1985).Citation clusters are also used to represent communities of documents in the visualization of a citation network (which is a network of documents and their citations to each other) (Chen, 2006;van Eck & Waltman, 2017).Text similarity-based clusters, both on their own (Callon et al., 1983) and enriched with citations (Ahlgren et al., 2020;Janssens et al., 2008), have also been used to map science.Waltman et al. (2020) compare citation-based similarity clusters with text similarity-based clusters.We decided not to include the use of text similarity in our research because text similarity-based cluster IR is already a well-studied method (see Section 2.3).
Citation-based IR
Citation-based IR methods are frequently used in academic search.The most common method is to retrieve the documents that cite or are cited by a given document (a.k.a.citation tracking).A further step of this method is to track the citations of these retrieved documents (a.k.a.snowballing).Some of the developments in citation-based IR are tools to track citations (Chandra et al., 2021;Inciteful, 2022;Janssens et al., 2020;Liang et al., 2011;Madeira & Vot, 2018;Pitt et al., 2022;van Eck & Waltman, 2014), protocols to find relevant documents to write a SR by tracking citations (Belter, 2016;Horsley et al., 2011), tools that delineate fields by tracking citations (Zitt, 2015), methods to rank search results by tracking citations (Belter, 2017;Mutschke & Mayr, 2015), and methods to find the seminal documents of a topic by tracking citations (Haunschild & Marx, 2020).Additionally, citation-based IR is addressed by the communities around the workshop series Bibliometricenhanced Information Retrieval (BIR) (Frommholz et al., 2021) and the related workshop series Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL) (Cabanac et al., 2017).
The most significant difference between CCIR and citation tracking is that CCIR creates clusters and retrieves documents using the structure of the whole citation network, while citation tracking retrieves documents using only the structure of the documents closest to the initially selected document in the citation network.Both methods focus on different aspects of the citation network, so both can be valuable to the academic IR toolset.
Cluster-based IR
Cluster-based IR methods retrieve one or more clusters of documents, and these clusters are usually based on text similarity.These methods have been used for academic search both in commercial (Iris.Ai, 2019) context and academic (Open Knowledge Maps: A Visual Interface to the World's Scientific Knowledge., 2019) contexts, and have also included the text from cited documents in their similarity score (Abbasi & Frommholz, 2015).Non-academic IR has also been used to cluster web search results (Stefanowski & Weiss, 2003).Additionally, the seminal Scatter/Gather browsing model (Cutting et al., 1992) (on which we draw inspiration for our evaluation) proposes a user interaction protocol where the user removes irrelevant documents over several iterations by creating new sets of documents using the clusters from the previous iteration.Bascur et al. (2019) proposed specifications for a CCIR tool that uses the Scatter/Gather model.
Cluster-based IR works exhibit a wide methodological variety, reflected in the following methodological choices:
• Relatedness attribute between documents: connections (e.g., citations, as we did) or shared elements (e.g., text, authors, keywords);
• Which set of documents to cluster: either the whole corpus (as we did) or a subset of the corpus that is retrieved by a query;
• The structure of the clustering solution: either hierarchical (as we did) or flat (a.k.a. independent clusters);
• How to select clusters during the evaluation: either select clusters using knowledge of the document relevance (as we did) or select clusters using a query match;
• How to retrieve documents during the evaluation: either retrieve all documents within a cluster (as we did) or retrieve only some.
Our purpose is not to compare the pros and cons of each of these methodological choices.Instead, our focus is on evaluating the specific methodological choices considered in our work.Similar to our work is the work of He et al. (2019), who visualize academic search results using, among other elements, citation-based clusters.The difference between their approach and ours is that we use the clusters as a means to retrieve documents, while they use the clusters for visualization of search results.In their work, they showed that their visualization can increase the efficiency (i.e., completion time) and user satisfaction for complex tasks, but not for simple tasks.This result suggests that the effectiveness of CCIR may depend on the task.Therefore, we look at individual SR tasks to see how the effectiveness differs between them.
Measuring the effectiveness of clustering, both for IR and for other purposes, is not trivial, as no clustering solution can satisfy every possible search task (Yuan et al., 2022).Our approach is to measure clustering effectiveness without the participation of real users (a.k.a.offline evaluation).
Many other studies have adopted the same approach.For instance, Abdelhaq et al. (2013) created a metric for evaluating Twitter data clustering based on the stability and coverage of the most common keywords in a cluster.In a bioinformatics example, Atkinson et al. (2009) 2012) created an evaluation framework where the relevant documents are known and the clustering solution is compared with a random baseline.Abbasi and Frommholz (2015) evaluated clustering with a simulation where a virtual user already knows which are the relevant documents.Our evaluation is most similar to the latter two studies because our cluster selection algorithm already knows which are the relevant documents, which is a common assumption in evaluation of retrieval methods (Manning et al., 2008).
Task design and data collection
The task we address is to find the documents necessary to write a given SR. The data that we use for this task comes from the dataset published by Scells et al. (2017) (from now on referred to as the Scells dataset). This dataset contains:
• 177 SRs published by the Cochrane library between 2014 and 2016.
• The references of each SR that belong to the included studies or excluded studies category of that SR. We consider both categories necessary for the task of writing a SR, so we included documents from both categories in the set of relevant documents of the task (see below for an explanation).
• The self-reported Boolean query that the authors of each SR used when they searched using the OVID search platform with the MEDLINE database, hereafter referred to as the Boolean query.
We intend to retrieve the documents that the authors of the SR found in their search, thus we use the authors' Boolean queries to retrieve documents. We retrieved these documents following these steps:
1. We manually confirmed that the Boolean queries in the Scells dataset were the same as the ones self-reported by the SRs, and when this was not the case, we used the self-reported one.
2. We translated the Boolean queries from the OVID format into the PubMed format because the OVID search platform does not have an API service, while the PubMed search platform does (PubMed API, 2018) and it also includes the MEDLINE database. We translated the formats using the TRANSMUTE software (Scells et al., 2018) and then we manually checked that the translation was correct (i.e., that both formats would retrieve the same documents). Some translations were not possible because the OVID search platform provides functionalities that the PubMed search platform does not (e.g., word distance-based arguments). A full report on the translations and how we handled difficult cases can be found in the supplementary material, Tables S1 and S2.
3. For each SR, we performed a search using the PubMed API based on the PubMed Boolean query, and we included the retrieved documents in the document set retrieved by the Boolean query (a minimal sketch of such an API call is given after this list).
4. We removed from the retrieved document set the documents that were not in the citation network (which is described in Section 3.2). We also removed from the relevant document set (see below) the documents that were absent from the document set retrieved by the Boolean query in order to maintain consistency between both sets (i.e., so that the relevant document set is a subset of the document set retrieved by the Boolean query).
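Step 3 could, for example, be carried out against the NCBI E-utilities endpoint that backs the PubMed API. The sketch below is an illustration only: the query string is a made-up placeholder, and options such as API keys and paging beyond 10,000 records are omitted.

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search(query: str, retmax: int = 10000) -> list[str]:
    """Return the PubMed IDs retrieved by a Boolean query."""
    params = {
        "db": "pubmed",
        "term": query,
        "retmode": "json",
        "retmax": retmax,
    }
    response = requests.get(ESEARCH, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["esearchresult"]["idlist"]

# Hypothetical translated Boolean query (placeholder, not from the dataset)
pmids = pubmed_search('("neuroblastoma"[MeSH Terms]) AND ("tretinoin"[MeSH Terms])')
print(len(pmids), "documents retrieved")
```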
To improve the quality of our evaluation, we selected a subset of the SRs in the Scells dataset to be used in our evaluation. Our selection criteria were:
• The relevant document set contains at least 10 documents. We chose this value because with fewer relevant documents, the increase in recall for each retrieved document would be more than 0.1, and we wish for a more fine-grained increase to facilitate interpretation of the results.
• The number of retrieved documents self-reported by the authors (i.e., from all their search sources) is of a similar order of magnitude (i.e., between 10 times less and 10 times more) as the size of the document set retrieved by us with the Boolean query. This condition excludes SRs whose self-reported number of retrieved documents is vastly different from ours.
This selection resulted in 25 SRs (see Figure 4A in Section 4 for the number of relevant documents per SR), of which 7 were published in 2014, 10 in 2015 and 8 in 2016. The number of SRs may seem small, for instance in comparison with the work by Janssens et al. (2020), who used 250 SRs. However, we manually annotated the Boolean queries, which is very labor intensive. Additionally, while the number of SRs is modest, the number of documents in our citation networks is very large (~7 million per network, see below).
Cochrane library SRs have, for our purposes, three categories of documents in their references: • Included studies: Studies that provide information that advances the objective of the SR.
• Excluded studies: Studies that were considered for the included studies category but were discarded because they did not match the selection criteria of the SR.
• Additional references: Documents that were not considered for the included studies category.
The Cochrane library has a clear rule for which documents should go into the excluded studies category: When a user discards a document, after they have read the document full text to any extent, the document is an excluded study, else it is not (e.g., discarded after reading the abstract).
We decided to regard the excluded studies as relevant documents for the retrieval task because, by the above rule, the user needs to find and read these documents in order to exclude them.Additionally, the selection criteria that discard an excluded study can be so particular (e.g., number of participants in the study) that we believe it is not reasonable to expect an IR tool to be able to discard these documents.
Citation network
We needed to create a citation network for the tree hierarchy of clusters. We used the in-house Dimensions database, which contains all the documents included in MEDLINE and also their citation links. We created the citation network following these steps:
1. We retrieved all the documents contained in the Dimensions database.
2. We removed all the documents published in the same year as or later than the SRs to make sure we do not provide unfairly advantageous information to the clustering (see below). Therefore, we created a different citation network for each publication year in the Scells dataset: one until 2013, another until 2014 and another until 2015.
3. We limited the documents of the citation networks to the ones available in the MEDLINE database, because the self-reported Boolean queries were performed exclusively within the MEDLINE database. We identified the MEDLINE documents using the PubMed database available at Leiden University's Centre for Science and Technology Studies (CWTS).
4. Because of the computing resources needed to handle large citation networks, we limited the publishing years of each network to 11 years (2003-2013, 2004-2014, and 2005-2015).
Each of the resulting citation networks contained roughly 7 million documents. Documents that are in the reference lists of a given SR are connected to the SR by a citation link. These connections help the clustering algorithm to put all these documents in the same cluster, which would artificially increase the performance of CCIR. This is not fair, because in a real scenario these connections could not exist, since the SR has not been published yet. We removed not only these connections, but all the documents published in the same year and in later years, because they could be influenced by these connections. Because we remove the documents published in the same year, we may also remove some documents that existed before the publication of the SR. However, none of the relevant documents were removed in this process.
Simulation of CCIR
In this section we explain how we simulated the CCIR search process so we can evaluate the performance of CCIR.
Clustering
We created a tree hierarchy of clusters for each citation network.We started by clustering the documents into at most 10 clusters, based on the idea that in practice it may be difficult for users to handle more than 10 clusters.Then, the documents of each cluster were again clustered into at most 10 smaller clusters, and so on.As discussed below, the documents that could not be included in these clusters were excluded from the tree.This process created a nested tree of clusters with a depth of 13 levels (not counting the root level).We only clustered into smaller clusters the clusters that contained relevant documents because otherwise they were irrelevant for the evaluation.
We performed the clustering using a methodology built on the work of Waltman and van Eck (2012). This methodology is used in combination with the Leiden algorithm (Traag et al., 2019). This combination provides a state-of-the-art approach for document clustering in the field of scientometrics. This approach has been used in a large number of research articles (e.g., Boyack et al., 2020; Held & Velden, 2022; Sjögårde & Ahlgren, 2020). It is also used in products of the analytics companies Elsevier (Elsevier, n.d.) and Clarivate (Potter, 2020). We therefore consider it the state-of-the-art approach for citation-based clustering.
In the methodology of Waltman and van Eck (2012), the tree hierarchy is built in a bottom-up manner while we take a top-down approach.We made this change because it reflects how a real user would create a tree, going from the general to the specific.It also saves computer resources by not creating sub-clusters for clusters that are of no interest.Another change is that Waltman and van Eck merged small clusters based on a cluster size threshold, while we merged small clusters based on a number of clusters threshold (at most 10 clusters, as mentioned before).We made this change because for a real user it is more intuitive to control the maximum number of clusters than the minimum number of documents per cluster.
The purpose of the Leiden algorithm is to assign documents to clusters based on the connections between the documents.The algorithm rewards pairs of documents in the same cluster that are connected by a citation link and penalizes pairs of documents in the same cluster that are not connected.The magnitude of the penalty is determined by the resolution parameter of the algorithm, which must be provided externally.A higher resolution leads to more and smaller clusters.
Mathematically, the clustering algorithm maximizes the following quality function:

$$Q(x_1, \ldots, x_n) = \sum_{i<j} \delta(x_i, x_j)\,(a_{ij} - r) \qquad (1)$$

In this quality function, i and j are documents, x_i is the cluster of document i, and r is the resolution parameter. a_ij equals 1 if there is a citation link between documents i and j; otherwise a_ij equals 0. δ(x_i, x_j) equals 1 if x_i and x_j are equal (i.e., documents i and j are in the same cluster); otherwise it equals 0.
The Leiden algorithm returns the clustering solution that maximizes Equation 1. To limit the number of clusters per clustering (i.e., children clusters per parent cluster) to at most 10, we merged the smaller clusters following these steps (a sketch of the merging loop is given after this list):
1. If there are more than 10 clusters in the clustering solution, select the smallest cluster. If there is a tie in size, randomly select one of the smallest clusters. If the number of clusters is 10 or fewer, stop.
2. If there are no citation links between the documents in the selected cluster and documents outside the selected cluster, remove the selected cluster from the clustering solution and then go back to step 1.
3. For each cluster other than the selected cluster, calculate the highest resolution under which this cluster would merge with the selected cluster (method from Waltman and van Eck (2012)). This resolution is always lower than the current resolution because otherwise the clustering algorithm would have already merged these clusters.
4. Merge the selected cluster with the cluster for which the highest resolution was obtained in step 3, and then go back to step 1.
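Under the quality function in Equation 1, merging two clusters changes Q by the number of citation links between them minus r times the product of their sizes, so the highest resolution at which a pair would still merge is that link count divided by the size product. A minimal sketch of the merging loop built on this observation follows; the data structures, names, and deterministic tie-breaking are our own illustration, not the authors' code:

```python
def merge_small_clusters(clusters, links, max_clusters=10):
    """Merge the smallest clusters until at most max_clusters remain.

    clusters: dict mapping cluster id -> set of document ids
    links: set of frozensets {doc_a, doc_b}, one per citation link
    """
    def links_between(a, b):
        # Number of citation links between clusters a and b
        return sum(1 for da in clusters[a] for db in clusters[b]
                   if frozenset((da, db)) in links)

    while len(clusters) > max_clusters:
        smallest = min(clusters, key=lambda c: len(clusters[c]))
        # Highest resolution under which 'smallest' would merge with each other
        # cluster: links between them divided by the product of their sizes.
        thresholds = {
            other: links_between(smallest, other)
                   / (len(clusters[smallest]) * len(clusters[other]))
            for other in clusters if other != smallest
        }
        best, best_r = max(thresholds.items(), key=lambda kv: kv[1])
        if best_r == 0:
            # No citation links to any other cluster: drop it from the solution
            del clusters[smallest]
        else:
            clusters[best] |= clusters.pop(smallest)
    return clusters
```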
The resolution parameter must be provided externally, but the literature has not yet established a rule of thumb for selecting a suitable value (although the work of Sjögårde and Ahlgren (2018, 2020) goes in that direction). We therefore used our own heuristic. Using a trial-and-error approach, we tried to find resolution values for each level so that the following conditions were satisfied as much as possible:
• The size of the 10 largest clusters after merging was similar to the size of these clusters before merging. This condition aims to minimize the effect of cluster merging.
• The 10 largest clusters after merging were of similar size. This condition aims to avoid creating one or a few clusters with a disproportionally large number of documents.
Our heuristic resulted in a resolution of 2 × 10⁻⁵ for the first level of the tree hierarchy. For each subsequent level we multiplied the resolution by 3. At level 13 the resolution is greater than 1 (2 × 10⁻⁵ × 3¹² = 1.06), which is why we have 13 levels (a resolution greater than 1 yields only singleton clusters).
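The top-down tree construction could be sketched with the python-igraph and leidenalg packages roughly as follows. The package choice and all names are our own; for brevity the sketch keeps every child cluster and omits both the at-most-10-clusters merging step and the restriction to clusters that contain relevant documents described above:

```python
import igraph as ig
import leidenalg as la

def build_tree(graph, resolution=2e-5, max_level=13, level=1):
    """Recursively cluster a citation network into a top-down tree of clusters."""
    node = {"documents": list(graph.vs["name"]), "children": []}
    if level > max_level or resolution > 1:
        return node
    partition = la.find_partition(
        graph, la.CPMVertexPartition, resolution_parameter=resolution
    )
    for cluster in partition:
        subgraph = graph.subgraph(cluster)
        node["children"].append(
            build_tree(subgraph, resolution * 3, max_level, level + 1)
        )
    return node

# Toy citation network with named documents (placeholder data)
g = ig.Graph(edges=[(0, 1), (1, 2), (3, 4), (4, 5)])
g.vs["name"] = [f"doc{i}" for i in range(6)]
tree = build_tree(g)
```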
Cluster selection
We use a greedy algorithm to select the clusters, starting from the root of the tree hierarchy. The algorithm goes down the tree hierarchy selecting child clusters based on their score, until none of the child clusters has a score higher than the currently selected cluster (see Figure 1). We use a greedy algorithm because this reflects how a real user would navigate a tree hierarchy. The score function is the F-score of retrieving the documents in a cluster, determined based on the relevant documents of a given SR:

$$F_\beta = \frac{(1 + \beta^2)\,\mathrm{precision} \cdot \mathrm{recall}}{\beta^2\,\mathrm{precision} + \mathrm{recall}} \qquad (2)$$

The precision and recall of each cluster are calculated based on the number of documents in the cluster (i.e., the number of positives), the number of relevant documents in the cluster (i.e., the number of true positives), and the number of relevant documents not in the cluster (i.e., the number of false negatives). A real user does not have access to these numbers. The greedy algorithm therefore simulates an optimistic scenario in which a user is able to accurately assess the quality of different clusters.
The parameter β of the F-score function (Equation 2) reflects how a hypothetical user balances recall against precision (van Rijsbergen, 1979): lower values of β favor precision, while higher values favor recall. If β = 1, precision and recall have equal weight. For each SR we retrieve several clusters, each one obtained using a different value of β, to cover a wide range of precision-recall trade-offs: β ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 16, 32, 64, 128}. The idea of using a greedy algorithm and different values of β to reflect real users is inspired by the "what-if" experiments methodology (Azzopardi et al., 2011).
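Building on the tree-node dictionaries used in the clustering sketch above, the greedy descent with the F-beta score could look like this (our own illustration; a real user would of course not know the relevant set):

```python
def f_beta(cluster_docs, relevant_docs, beta):
    """F-beta score of retrieving all documents in a cluster."""
    true_pos = len(cluster_docs & relevant_docs)
    if true_pos == 0:
        return 0.0
    precision = true_pos / len(cluster_docs)
    recall = true_pos / len(relevant_docs)
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

def select_cluster(node, relevant_docs, beta):
    """Greedily descend the cluster tree, stopping when no child improves the score.

    node: dict with keys "documents" (list of doc ids) and "children" (list of nodes)
    """
    current = node
    current_score = f_beta(set(current["documents"]), relevant_docs, beta)
    while current["children"]:
        best_child = max(
            current["children"],
            key=lambda c: f_beta(set(c["documents"]), relevant_docs, beta),
        )
        best_score = f_beta(set(best_child["documents"]), relevant_docs, beta)
        if best_score <= current_score:
            break
        current, current_score = best_child, best_score
    return current
```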
Quantitative analysis
For our quantitative evaluation, we group the results of the SRs according to value of β used by the cluster selection algorithm.In this way, we can compare the aggregated results for different values of β.We report the number of retrieved documents, the tree-level of the retrieved cluster, precision, recall, and F-score (β = β used by the cluster selection algorithm).
We report four more metrics that are generated by comparing the cluster selection algorithm results with the Boolean query retrieved documents:
• Intersection proportion of the cluster selection algorithm: the proportion of the documents retrieved by the cluster selection algorithm that are also retrieved by the Boolean query.
The purpose of the F-score difference is to evaluate the performance of CCIR while also taking into consideration the difficulty of the task for the authors of the SR. We refrain from using the F-score difference to make claims about the relative performance of CCIR compared to the Boolean query.
We do not consider such claims to be justified, because there are too many issues that we are not able to take into account in our analyses.For instance, we assume that the Boolean query retrieves all relevant documents, but we are unable to assess the accuracy of this assumption.Also, in practice, a Boolean query is written over several iterations of trial and error.We are unable to analyze the impact of this iterative process, since we have access only to the final version of a Boolean query.
Instead of directly comparing the performance of a CCIR approach with a Boolean query approach, our quantitative analysis focuses on answering the following questions:
• To what extent does the performance of CCIR vary between individual SRs? We answer this by analyzing the dispersion of the F-score difference grouped by β.
• How similar are the sets of documents retrieved by CCIR and the Boolean query? We answer this by analyzing the intersection proportion of both CCIR and the Boolean query.
• For which values of β is CCIR more effective? We answer this by analyzing most of the quantitative metrics, and how their values change when the value of β increases or decreases.
Qualitative analysis
In our qualitative analysis we address the following questions:
• How does the nature of a SR affect the performance of CCIR and a Boolean query?
• What type of documents does CCIR or a Boolean query retrieve or miss?
We address these questions by an expert reading of the SRs performed by the first author of our paper (Juan Pablo Bascur), who is trained in the biomedical field, and supported by an expert in Boolean query searches for biomedical purposes (Jan W. Schoones).
We performed the qualitative analysis on the retrieved documents of three SRs.We selected the SRs based on their F-score difference for β = 4 (we used β = 4 because it had the highest recall dispersion, which helps highlight the differences between SRs; see Section 4).We selected the SRs with the lowest, highest and third highest F-score difference, which in the Scells dataset correspond to the ids SR59 (Peinemann et al., 2013), SR47 (Cousins et al., 2016) and SR80 (Ma, 2015), respectively.
For each SR, we characterized:
• Goal: The question that the authors of the SR want to answer.
• Needs: The nature of the documents that the authors need to retrieve to achieve the goal.
• Boolean query components: The components of which the Boolean query consist.A component is a group of Boolean terms that belong to the same topic.
For each SR we also selected one of the clusters that CCIR retrieved for this SR that we subjectively judged to have good precision and recall (hereafter known as the optimal cluster). We also selected, from the clusters that CCIR retrieved, the parent and the child of the optimal cluster to expand the range of our analysis, but we discarded the child clusters because they were so small that they did not provide qualitative information. We therefore kept the parent of the optimal cluster, hereafter known as the parent cluster.
We inferred the topic of each set of documents (that is, the clusters and the documents retrieved by the Boolean query) from the titles of the documents. For the bigger document sets, we facilitated this process by inferring the topics from the most common noun phrases in the titles of the documents. We extracted noun phrases from titles using the spaCy Python library (Honnibal et al., 2020).
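Noun-phrase extraction of this kind can be done with spaCy's noun_chunks attribute; a minimal sketch follows (the English model name and the example titles are our own placeholders, not the titles analyzed in the paper):

```python
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")  # any English spaCy pipeline with a parser

def most_common_noun_phrases(titles, top_n=10):
    """Count the most frequent noun phrases across a list of document titles."""
    counts = Counter()
    for doc in nlp.pipe(titles):
        counts.update(chunk.text.lower() for chunk in doc.noun_chunks)
    return counts.most_common(top_n)

titles = [
    "Rituximab for rheumatoid arthritis",        # placeholder titles
    "Methotrexate and rituximab in rheumatoid arthritis",
]
print(most_common_noun_phrases(titles))
```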
To guide our analysis, we use Venn diagrams of the overlap between the relevant documents, the selected clusters of CCIR and the documents retrieved by the Boolean query.We also look for documents retrieved by CCIR but not by the Boolean query that, given their nature, could have been relevant documents if the authors of the SR had found them.
Quantitative results
In this section we describe the quantitative analysis of the 25 SRs evaluation results.Figure 2 shows the precision, recall, F-score and F-score difference, Figure 3 shows the intersection proportions, Figure 4 shows the number and ratio of retrieved documents, and Figure 5 shows the level of the selected clusters.
To what extent does the performance of CCIR vary between individual SRs?
Figure 2E shows that the F-score difference values have a large dispersion: within β groups the interquartile range is 0.2 or higher, and the highest range (at β = 4) is 0.5.This result shows that the performance varies between SRs, and it highlights the importance of analyzing individual SRs in the qualitative analysis presented in Section 4.2.
4.1.2 How similar are the sets of documents retrieved by CCIR and the Boolean query?
Figure 3 shows that these two sets of documents are very different because their intersection proportion is very low.We analyzed Figure 3 focusing on three β groups, which we selected based on Figure 4D: when both document sets are of the same size (β = 16), when the CCIR set is 10 times bigger than the Boolean query set (β = 128), and when the CCIR set is 10 times smaller than the Boolean query set (β = 2).When both sets are the same size and when the CCIR set is 10 times bigger, the intersection proportion is surprisingly low: 0.1 for the former (Figures 3A and 3B) and 0.5 for the latter (Figure 3B).When the CCIR set is 10 times smaller, the proportion is also low (0.6), but additionally this value starts to fall dramatically on the subsequent groups of β (Figure 3A).
4.1.3 For which values of β is CCIR more effective?
Figure 5 shows that the tree-level of the selected clusters is linearly correlated with the value of β (using our powers of 2 scale), or in other words, the median level goes up by 1 level for each sequential β value.
Figure 4D shows that, for β between 2 and 128, the CCIR retrieved document set was between 10 times smaller and 10 times bigger than the Boolean query document set. Figure 2B shows that the β groups after β = 8 have less precision than the Boolean query (0.025, Figure 2A).Figure 2C shows that recall improves little after β = 8.Therefore, we think that the results of groups β = 2, β = 4 and β = 8 balance size, precision and recall the best.Also, outside these groups the balance decreases much faster from β = 1 to the lower values of β than from β = 16 to the higher values of β.
Qualitative results
In this section we describe the qualitative analysis of three selected SRs and their evaluation results.
Figure 6 shows their Venn diagram of the intersection between the Boolean query, the CCIR and the relevant documents.Table 1 shows their quantitative data, Table 2 shows their characterization and Table 3 shows the topic of their sets of documents.The details on the construction of their Boolean query components can be found in supplementary material Figures S1, S2 and S3, and their topics in supplementary material Tables S3, S4 and S5.
SR59: Retinoic acid post consolidation therapy for high-risk neuroblastoma patients treated with autologous hematopoietic stem cell transplantation
This SR had the lowest F-score difference and also a high Boolean query precision (Table 1).Its goal was to determine if patients with the condition Neuroblastoma recuperate better from the treatments Chemotherapy and Bone Marrow Transplant if they are treated with the medication Retinoic Acid (Table 2).
The document set of Boolean query and the two clusters had similar topics, but the cluster topics were missing the component Retinoic Acid (Table 3), which is one of the needs of SR59 (Table 2).This suggests that CCIR did not create a cluster with Retinoic Acid, and we wonder why.All the relevant documents of SR59 clearly share a common topic (we read their titles) so it would seem that they should be mostly in the same CCIR cluster.An explanation for this mystery seems to be given by the topic of the parent cluster.Here, we found that the topic fulfills the needs of SR59, except that instead of Retinoic Acid it has the component 131L-MIBG, which is a medication with similar uses to Retinoic Acid.It seems then that the existence of a cluster with the needs of SR59 and Retinoic Acid was mutually exclusive with the existence of a cluster with the needs of SR59 and 131L-MIBG, and CCIR created the latter instead of the former because of its higher fitness.This likely resulted in CCIR spreading the relevant documents of SR59 among other clusters, decreasing the F-score difference value.
The Boolean query of SR59 is missing the component Bone Marrow Transplant from the needs of SR59 (Table 2), yet the Boolean query achieves a high precision (Table 1).This is because the combination of the components Neuroblastoma and Retinoic Acid was so infrequent in the literature that it was enough for Boolean query.This shows that the Boolean query can give high precision for highly specific needs.
SR47: Surgery for the resolution of symptoms in malignant bowel obstruction in advanced gynaecological and gastrointestinal cancer
This SR had the highest F-score difference (Table 1).Its goal was to determine how effective the treatment Surgery is to treat the condition Intestinal Obstruction when caused by the conditions Gynecological Cancer or Gastrointestinal Cancer (Table 2).
We could not identify the topic of the Boolean query document set because the most common nounphrases were present in only a minor portion of the documents.This could be either because the set of documents was big and therefore has too much diversity, or because it has several disconnected topics, and we believe the latter explanation is the correct one.On the other hand, the topics of the two clusters (Table 3) were similar to the needs of SR47 (Table 2).
We believe that the Boolean query has several disconnected topics because the needs SR47 were hard to express in a Boolean query format, which ends up retrieving a noisy set of documents.The needs are documents on Surgery to treat Intestinal Obstruction due to Gynaecological and Gastrointestinal Cancer (Table 2).However, the Boolean query cannot specify if Surgery treats Intestinal Obstruction or treats Gynaecological and Gastrointestinal Cancer.This case shows that CCIR can help with searches where the relation between the Boolean query terms is ambiguous.
Additionally, we saw an interesting phenomenon happening with the topics of the clusters.Among their documents, there were three synonym noun-phrases that refer to intestinal obstruction: Malignant Bowel Obstruction, Malignant Colorectal Obstruction and Malignant Colonic Obstruction.
The optimal cluster only had the first form, while the parent cluster had all three of them. This implies that the documents with the first form cite each other much more intensely than the documents with the other two forms. We see no science-related reason for this to be the case, so we imagine that this citation pattern arises from a community of researchers with the same writing conventions that cite each other. This citation pattern shows one of the risks of CCIR and of citation-based clustering in general: the citations may not only represent an intellectual relationship between two documents, but also other non-scientific relationships that are of no use for IR purposes.
We saw another interesting phenomenon happening with the topics of the clusters.Two of the most common noun-phrases of the optimal cluster were Inoperable Bowel Obstruction and Octreotide (which is a medication for inoperable tumors).Both noun-phrases imply that their documents lack surgery, but Surgery is a need of SR47.This shows that, even when the F-score difference value is high, CCIR may still not have created a cluster with the topic that the user needs.
SR80: Rituximab for rheumatoid arthritis (Review)
This SR had the third highest F-score difference (Table 1).Its goal was to evaluate the medication Rituximab to treat the condition Rheumatoid Arthritis.There are two things we must mention for our analysis of SR80: First, that the medication Rituximab belongs to a group of medications called DMARDs (which means Disease-Modifying Antirheumatic Drugs), and second, that the needs of SR80 include comparing Rituximab treatments with either no treatment (a.k.a.placebo) or other DMARDs treatments (Table 2).
The topic of the Boolean query and the clusters is the same and fits the needs of SR80.This shows that CCIR created a cluster for the right topic.However, CCIR still missed several relevant documents, which shows that creating a cluster for the right topic can be insufficient.We believe that the reason these relevant documents were not in the clusters is that, even if two documents are about the same topic, they may be poorly connected to each other by direct or indirect citations due to the citing practices of their research community.This result challenges one of the core assumptions of CCIR: That two given documents that share a topic will be directly or indirectly well connected by citations.
It seems that the authors of the SR made the conscious decision of building the Boolean query in such a way that it sacrifices precision in favor of recall. This is suggested by the following difference between the needs of SR80 and the Boolean query components of SR80 (Table 2): SR80 requires comparisons between treatments with Rituximab (itself a DMARD) and treatments with placebo or other DMARDs, but the Boolean query components do not require a document to mention Rituximab, resulting in several retrieved documents that do not serve the needs. We believe that the authors made this decision because they expected many documents that use Rituximab to mention it in their metadata under the more general term DMARDs. This case shows that CCIR can help with searches where the Boolean query cannot be sufficiently specific.
An interesting observation is that, among the most common noun phrases, the Boolean query mentions the same DMARDs as the parent cluster, but the latter also mentions one extra DMARD (Certolizumab Pegol). This is interesting because the DMARDs component of the Boolean query searched for all the available DMARDs, so it should also have found Certolizumab Pegol. We found that this happens because the MeSH term that the DMARDs component uses ("Antibodies, Monoclonal"[Mesh Terms:noexp]) does not retrieve Certolizumab Pegol (which goes under "Antibodies, Monoclonal, Humanized"[Mesh Terms:noexp]). Biologically speaking, DMARDs are better described by the latter MeSH term than by the former, but it seems that the convention of the National Library of Medicine is to use the former MeSH term for all DMARDs except for Certolizumab Pegol. The authors may not have been aware of this, because otherwise they presumably would have incorporated the second MeSH term in the Boolean query. We believe that this case shows that CCIR can help Boolean query users to ensure they include all necessary vocabulary in their Boolean query.
We wondered whether any of the documents of the parent cluster with Certolizumab Pegol in their title might have been a relevant document if the authors of SR80 had seen it during their literature search. We tested this hypothesis by comparing these documents with the needs of SR80. We found one document (Weinblatt et al., 2012) that cannot be discarded based only on the title or the abstract, and that is therefore a relevant document. This case shows that CCIR can find relevant documents that the Boolean query does not.
Discussion
In this section we discuss our findings in relation to our research questions and then discuss the limitations of our work.
What types of users are best served by CCIR?
We can answer questions about users by connecting user preferences for recall and precision with the β value (the user prefers recall β times as much as precision). We saw that β = 2, β = 4 and β = 8 had the best balance, and that outside these β values the balance decreases faster for lower β values than for higher β values. Therefore, we can say that CCIR best serves users who prefer recall over precision by a factor of between 2 and 8, and for users outside that range it serves higher ratios better than lower ratios.
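For concreteness, the recall/precision preference encoded by β corresponds to the standard F-beta measure; a minimal sketch follows (the precision and recall values in the example are illustrative only, not taken from our evaluation):

```python
def f_beta(precision: float, recall: float, beta: float) -> float:
    """F-beta score: recall is weighted beta times as much as precision."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)

# Illustrative comparison at beta = 4 (values not from our evaluation):
# a Boolean-query-like result (recall 1.0, precision 0.03) versus
# a cluster-like result (recall 0.8, precision 0.3).
print(round(f_beta(0.03, 1.0, beta=4), 2))  # ~0.34
print(round(f_beta(0.30, 0.8, beta=4), 2))  # ~0.73
```

The example shows why a moderately precise cluster can outscore a maximally exhaustive Boolean query once β is in the 2-8 range.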
We wondered whether users who perform a literature search for a SR fall within this range of ratios, and we used the Boolean query values as a proxy to answer this. Figure 2A shows that the precision of the Boolean queries is between 0.01 and 0.06, and by definition the Boolean queries have a recall of 1.0, so the ratio of recall over precision is 1 over 0.01-0.06, i.e., roughly 17-100, very far from our prior range of 2-8. While it is true that the recall of the Boolean query is unrealistically high, the recall would have to be 10 times lower for the ratio to fall within the range, which, given that SR literature searches aim for maximum recall, is unlikely. Therefore, we believe that the users best served by CCIR are not users who do a literature search for a SR. It is beyond our knowledge which type of user might prefer the range 2-8.
We saw that the median tree-level is sensitive to the β value. While we do not have a standard to evaluate which levels are better for users, we know that the more a user prefers recall, the closer the selected cluster is to the root, and hence the less effort the user needs to make to reach that level.
We also saw that the Boolean query and CCIR retrieve different documents (Figure 3), and these documents could be relevant (analysis of SR80). Therefore, CCIR could serve users willing to use more than one IR method by finding more relevant documents.
What types of SRs are best served by CCIR?
We saw that there is a substantial variance among the F-score difference values of the SRs (Figure 2E), meaning that for some SRs, CCIR performs much worse than for others. We would imagine that, for CCIR, a SR with general needs (e.g. a disease) would perform better than a SR with specific needs (e.g. an interaction between two medications), while the opposite would be true for Boolean queries (Carmel et al. (2006) analyzed how the needs affect query difficulty). However, the three SRs that we analyzed all had specific needs (Table 2), yet one had bad performance and two had good performance.
The only clue that we can use to infer the performance of a SR is in SR47: its need is hard to write as a Boolean query, so we can infer that IR methods not based on a Boolean query are likely to have an advantage. However, this inference is more about the bad performance of the Boolean query than about the good performance of CCIR.
5.3 What are the strengths and weaknesses of CCIR?
Strengths
CCIR may find documents that the Boolean query does not. We know this from the results on intersection proportions (Figure 3), which show that CCIR and the Boolean query retrieve different documents. We also know this from the newly discovered relevant document of SR80.
CCIR may reduce the noise of searches that are hard to write as a Boolean query. We know this from how well CCIR performs for SR47 and SR80: the former's Boolean query could not be sufficiently specific because the Boolean query format does not allow specifying subject-object relations between terms; the latter's Boolean query could not be specific because of the risk of missing documents with poorly annotated metadata.
CCIR may help expand the vocabulary used in a Boolean query. We know this from our experience with SR80. By looking at the difference between the noun-phrases of the parent cluster and the Boolean query of SR80, we realized that the Boolean query was missing a relevant search term, which was likely not considered by the authors of the Boolean query.
Weaknesses
CCIR may not create a cluster with the exact topic that the user needs. We know this because in SR47 and SR59 there was a divergence between the user needs and the topic of the CCIR sets of retrieved documents. The tree hierarchy did not have a cluster with the same topic as the user needs, which may happen because documents may relate to multiple topics.
The performance for a given SR can be unpredictable. We know this because of the high dispersion of the F-score difference values (Figure 2E) and because the characteristics of SR59, SR47 and SR80 did not give a clue about their performance.
Documents that share the same topic may be only weakly connected, directly or indirectly, in a citation network. We know this from our experience with SR80. While a cluster with the relevant topic was retrieved, several relevant documents were missing. Also, the noun-phrase differences between the retrieved documents of the optimal cluster and the parent cluster of SR47 suggest that the optimal cluster was created based on the citation practices of the authors instead of the topic of the documents. Potentially, this issue could be diminished by combining citation-based and semantic-based clustering.
The clusters at the highest levels have too many documents, which makes the topic of the clusters hard to interpret for a real user because the documents are so diverse. This is a serious problem because selecting the wrong cluster at this level is a critical mistake (Willett, 1988). Our evaluation did not suffer from this issue because the cluster selection algorithm already knows in which clusters the relevant documents can be found. In a real situation, a user may be able to handle this issue if they know at least some of the relevant documents, and then they could even select clusters bottom-up instead of top-down (Van Rijsbergen & Croft, 1975). Alternatively, the user can create the tree hierarchy with fewer documents.
Limitations of this work
We identified four potential limitations to our work.
First, we did not cover all the possible clustering solutions. We used a single clustering solution, instead of using several clustering solutions or letting a user create clustering solutions on the run. Some of the characteristics of the tree hierarchy could have been different, such as the clustering algorithm that we used, the clustering resolution parameters, the number of child clusters, the number of levels and the fact that we created the tree hierarchy by a top-down division of clusters instead of a bottom-up agglomeration of clusters.
Second, we did not cover all the possible citation networks. We used a citation network of direct citations, and not a more densely connected citation network using co-citations (Small, 1973) or bibliographic coupling (Martyn, 1964), which, when combined with direct citation, improve the representation of the structure of science (Waltman et al., 2020). We made the citation network using the full corpus, but we could also have used, for example, only the documents retrieved by a query, which some studies reported to be more effective for cluster-based IR (Tombros et al., 2002).
Third, the cluster selection algorithm does not reflect fully realistic (and noisy) user behavior. The cluster selection algorithm knows the relevant documents (an assumption commonly made in information retrieval evaluation), which a real user would not. A real user would have to select the clusters based on their own personal evaluation of which cluster is more likely to contain the relevant documents, and they would also have to evaluate when to stop going down the tree hierarchy. This process would take cognitive effort, which our evaluation does not consider. A less cognitively demanding alternative for a user could be to eliminate a cluster that does not contain relevant documents and then create a new clustering solution, as there is likely to be an obvious candidate for elimination. This is the same process as selecting more than one cluster, as we discussed in the weaknesses (Section 5.3.2), and we decided against implementing it in the evaluation because it would create too many steps and the clustering would take too many computational resources. Another unrealistic behavior is that the cluster selection algorithm never chooses the wrong cluster, unlike a real user. We could have implemented mistakes by giving imperfect information to the cluster selection algorithm, but we decided not to, so as to have fewer variables that could affect the interpretation of our results. Finally, it is not realistic to allow the cluster selection algorithm to choose very small clusters (between 1 and 10 documents) because clusters of this size do not appear in real situations (as discussed by Willett (Willett, 1988)). Future work could address more noisy user behavior, similar to user behavior modeling in information retrieval (Hofmann et al., 2013).
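To make the idealized behavior concrete, the following is a minimal sketch of a greedy cluster selection of the kind described above and in Figure 1; the data structures and function names are illustrative, not our actual implementation:

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Cluster:
    documents: Set[str]                              # document identifiers in this cluster
    children: List["Cluster"] = field(default_factory=list)

def f_beta(retrieved: Set[str], relevant: Set[str], beta: float) -> float:
    """F-beta of a retrieved set against the known relevant set."""
    if not retrieved or not relevant:
        return 0.0
    tp = len(retrieved & relevant)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(retrieved), tp / len(relevant)
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)

def greedy_select(root_clusters: List[Cluster], relevant: Set[str], beta: float) -> Cluster:
    """Idealized selection: score every cluster at the current level,
    keep the best one, and descend into its children only while some
    child scores higher than the currently selected cluster."""
    current = max(root_clusters, key=lambda c: f_beta(c.documents, relevant, beta))
    while current.children:
        best_child = max(current.children, key=lambda c: f_beta(c.documents, relevant, beta))
        if f_beta(best_child.documents, relevant, beta) <= f_beta(current.documents, relevant, beta):
            break  # the parent already beats all children, so stop descending
        current = best_child
    return current
```

The sketch makes explicit the idealization discussed above: the relevant set is given to the selector, so it never chooses the wrong branch and never pays a cognitive cost.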
The final limitation is that it could be argued that the Boolean queries we used are not realistic. A real Boolean query is created over several iterations, where the creators of the query keep refining it until they are satisfied with the search results. Our evaluation does not consider this. Also, our Boolean queries had a recall of 1.0 (i.e., they found all the relevant documents), which is unlikely for a real IR method. Additionally, we only considered the documents retrieved by the Boolean query on MEDLINE, while the authors of the SRs usually used more than one database or method to search for documents, including the expert knowledge of their colleagues. We did not include more sources because it would have been too much effort to retrieve the documents of each method and to harmonize the results between SRs that used different methods. Finally, the translation from OVID format to PubMed format is likely to have modified the set of retrieved documents, especially if the Boolean query used OVID-specific features (like distance between words). We tried to remove the cases with the biggest modification of the set of retrieved documents by removing the SRs with Boolean queries that retrieved a number of documents too different from the number of documents self-reported by the authors (see Section 3.1).
Conclusion
In this work we have shown some of the advantages and limitations of using CCIR for academic search, both for generic CCIR and for our specific tree hierarchy implementation. We have also introduced an evaluation protocol for cluster-based IR methods with the task of finding relevant documents for SRs. This protocol can be used and modified by other researchers. We release our data for use by other researchers in the form of the three tree hierarchies, the set of relevant documents and the set of documents retrieved by the Boolean query, the latter created through intensive manual annotation. The current CCIR implementation can be used as a straightforward CCIR tool of value for real users.
Our research shows that the best served users are those who prefer recall over precision 2 to 8 times. Users who prefer even more recall, like SR users, are less well served, and users who prefer more precision are the worst served. CCIR may complement Boolean query searches in various ways: it may help SR users who have trouble stating their requirements as Boolean queries, it may suggest terms for Boolean queries, and it may retrieve relevant documents not retrieved by a Boolean query.
A problematic aspect of CCIR is that performance varies significantly because there sometimes is no cluster that contains the topic of the SR. This may happen because documents may relate to multiple topics, leading to clusters that do not match the topic of the SR. It may also happen because of a lack of citation connections between the documents related to the topic of interest. Another problematic aspect is that the current implementation of CCIR demands a high cognitive effort from the user.
For future work related to CCIR, interesting research directions are how to improve its performance (how to create better clusters, re-clustering based on the selection of multiple clusters by a user, mixing with semantic-based clustering), how it compares to other IR methods (especially citation-based or cluster-based methods) and how real users interact with it (how to select clusters, how to complement it with other IR tools).
Figures and tables
Figures

Tables

Table 1. Quantitative data of the SRs in the qualitative analysis. These are the SRs selected for qualitative analysis (SR59, SR47 and SR80). The F-score values of β = 4 were the ones used to select the SRs. The optimal cluster was selected for its good precision and recall, and the parent cluster because it is the parent of the optimal cluster (see methods, Section 3.5).
Table 2. Characterization of the SRs (SR59, SR47 and SR80).

Goal
- SR59: To determine if retinoic acid helps neuroblastoma patients recuperate from chemotherapy and bone marrow transplants.
- SR47: To assess the efficacy of surgery for intestinal obstruction due to advanced gynaecological and gastrointestinal cancer.
- SR80: To evaluate the benefits and harms of Rituximab for the treatment of Rheumatoid Arthritis.

Needs
- SR59: Randomized controlled trials that evaluate if retinoic acid helps neuroblastoma patients recuperate from bone marrow transplants by comparing retinoic acid treated patients to untreated patients.
- SR47: Documents that mention the evolution of patients after surgeries to treat intestinal obstruction due to advanced gynaecological and gastrointestinal cancer.
- SR80: Studies that compare the outcomes of treatments with Rituximab with placebo or other Disease-modifying antirheumatic drugs (DMARDs).

Boolean query components
- SR59: Retinoic acid AND Neuroblastoma AND Randomized Controlled Trials and Controlled Clinical Trials
- SR47: Gynecological or gastrointestinal cancer AND Intestinal obstruction AND Surgery
- SR80: Rheumatoid Arthritis AND Disease-modifying antirheumatic drugs AND Randomized Controlled Trials and Controlled Clinical Trials

Table 3. Topic of the sets of documents of the SRs. These are the SRs selected for qualitative analysis (SR59, SR47 and SR80). We obtained these topics by analyzing the most common noun-phrases in the titles of the retrieved documents. The details on the construction of the topics are in the supplementary material, Tables S3, S4 and S5.
SR12
Because of the way the Boolean query is written, it leaves out a row with information (row 25). The query also uses the same row two times (row 33). From these observations, I believe there is some mistake in the Boolean query as reported by the authors in the paper. For this reason, I will try to reconstitute what I believe was the original query. Row 25 contains info about the chemical component, while row 33 contains info about the trials. Row 37 seems to be an expansion of the concepts in row 33. I see that row 38 joins rows 33 and 37 with an OR, which gives weight to my suspicion that row 37 is an expansion of row 33. If this is the case, then there should be a row containing [25 AND 38]. Row 35 is [25 AND 33], so this is the row I am looking for, but it excludes the information in row 37. Therefore, I will include a row [25 AND 38]. The other missing part of the query is any row that includes (NOT 34), so the final row will be equivalent to [(25 AND 38) NOT 34].
SR169
Line 5 was (detox* or methadone) in .ti,ab.. I assumed that .ti,ab. are the categories of the line. Line 10 had an incomplete parenthesis; I assumed that the parenthesis encompassed the whole line. Some lines had no categories. I assumed that their category was [All Fields].
SR78
The Boolean query that was annotated in the Scells dataset is not the same as the Boolean query that was used by the authors of the systematic review (https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD001843.pub5/appendices). I believe that the annotator of the dataset made a mistake. I will be using the Boolean query as reported by the systematic review.
SR51
Line 29 is used twice, so I duplicated it. Line 36 is used twice, so I duplicated it.
SR53
The query defines .mp.. I believe that the definition the author uses is similar to the general definition of .mp., so I used the standard translation of .mp..
SR115
I had to discard SR115 because the translated Boolean query plus the API internet address is too long (11,089 characters). Neither the internet browser nor the API can handle addresses this long.
SR54
The authors use the term (Succimer/du), but the subheading (du) does not exist. I googled Succimer and it is a drug, so the authors must be using du for something related to the drug. It could be Drug Effects (DE) or Drug Therapy (DT). They don't mention the word Succimer in their paper, but the paper is about diagnostics, and they include the row Succimer/du in a Boolean parenthesis with (radionuclide image) and (Technetium Tc 99m Dimercaptosuccinic Acid) (a compound used in radiology), so I believe that they use the drug for diagnostics and radiology. This doesn't fit the subheadings used for drugs, but it does fit Diagnostic Imaging (DG). Therefore, I will use Diagnostic Imaging for (Succimer/du).
The same happens for (Kidney/ri). (ri) is not a subheading, but I believe that the authors refer to kidney radiology. Therefore, I will use Diagnostic Imaging (DG).

Table S3. Noun-phrases for SR59. The noun-phrases were extracted from the titles of the documents.
• Intersection proportion of the Boolean query: Proportion of the documents retrieved by the Boolean query that are also retrieved by the cluster selection algorithm.
• Ratio of retrieved documents: Number of documents retrieved by the cluster selection algorithm divided by the number of documents retrieved by the Boolean query.
• F-score difference: F-score of the cluster selection algorithm minus the F-score of the Boolean query (β = β used by the cluster selection algorithm).
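A minimal sketch of how these three quantities can be computed from the two sets of retrieved documents and the relevant set (the function names are illustrative, not from our codebase):

```python
from typing import Set, Tuple

def f_beta_from_sets(retrieved: Set[str], relevant: Set[str], beta: float) -> float:
    """F-beta of a retrieved set against the relevant set."""
    if not retrieved or not relevant:
        return 0.0
    tp = len(retrieved & relevant)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(retrieved), tp / len(relevant)
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)

def evaluation_quantities(ccir_docs: Set[str], boolean_docs: Set[str],
                          relevant: Set[str], beta: float) -> Tuple[float, float, float]:
    """Intersection proportion, ratio of retrieved documents, F-score difference."""
    intersection_proportion = len(ccir_docs & boolean_docs) / len(boolean_docs)
    ratio_retrieved = len(ccir_docs) / len(boolean_docs)
    f_difference = (f_beta_from_sets(ccir_docs, relevant, beta)
                    - f_beta_from_sets(boolean_docs, relevant, beta))
    return intersection_proportion, ratio_retrieved, f_difference
```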
Figure 1. Cluster selection algorithm. The bubbles represent clusters of documents. The text in a bubble shows the label and the score of a cluster. The lines are the connections between the parent and the child clusters in the tree hierarchy. The arrows point toward the child clusters. Only the child clusters of the selected clusters are shown. The orange bubbles represent the clusters selected by the algorithm. The orange lines indicate the path followed by the algorithm. The pointer finger shows the selection of the algorithm. A: Calculate the score of each cluster at the highest level of the tree hierarchy (Clusters 1, 2, and 3). B: Select the cluster with the highest score (Cluster 2). C: Calculate the score of each child cluster of the selected cluster (Clusters 2.1, 2.2, and 2.3). D: Retrieve the cluster that was already selected (Cluster 2) because it has a higher score than any of the child clusters.
Figure 5. Tree-level of the retrieved clusters. Each data point is a SR, the X axis is the β group, and the Y axis is the level of the cluster selected by the greedy algorithm for that SR. Level 0 is the set of all documents in the citation network.
Figure S3. SR80 PubMed Boolean query and its components. The indentations and line breaks represent the dependencies, but they are unnecessary when submitting to PubMed. The colors are the components. Meaning of the colors: Orange: Time Period; Blue: Rheumatoid Arthritis; Purple: Disease-modifying antirheumatic drugs; Red: Randomized Controlled Trials and Controlled Clinical Trials. Components ensemble: Time Period AND Rheumatoid Arthritis AND Disease-modifying antirheumatic drugs AND Randomized Controlled Trials and Controlled Clinical Trials.
SR69
Some of the words in line 3 have no category, so I gave them the category that is used by the other words in the row, [Title/Abstract]. One row has the concept (work'*[Text Word]), which is interpreted as (work*[Text Word]) by PubMed. Therefore, I replaced (work'*[Text Word]) with (work*[Text Word]). Also, the (') can produce issues in the API, so it is better to do the replacement.

SR80
I had to duplicate line 27 because it is referred to 2 times.

SR151
It has many concepts without categories; I will use the category [All Fields] for these concepts. The query uses [mh], which I will translate as [Mesh Terms] (I tested the terms with [mh] as [Mesh Terms] in PubMed and the words are recognized as MeSH terms). There is a term (drug therapy[sh]) which I'm sure means (drug therapy[MeSH Subheading:noexp]) because I have seen it in the other queries.

SR130
The starting date of the query says 2010-12-31 in the test set but it says 2011-01-01 in the paper. I will use 2011-01-01.

SR119
The test set says from 2006-12-31 to 2007-01-20 but the SR says from 2007 to 7 February 2013. I will use the latter.

SR62
Some of the rows that use .sh. (comparative study.sh and evaluation studies.sh) are actually publication types. I changed their category to publication types. (evaluation studies[Publication Type]) is not recognized by the PubMed API but it is recognized when used in the PubMed webpage. (evaluation studies[Mesh Terms:noexp]) is not recognized by either.

SR105
(et?omidat*[All Fields]), (r?26?490[All Fields]) and (radenar?on[All Fields]) were not recognized by the PubMed API or by the PubMed webpage. I decided to change (et?omidat*[All Fields]) to (etomidate[All Fields]) and (r?26?490[All Fields]) to (r26490[All Fields]) because they refer to etomidate, which I believe is what the author is looking for with this term. I didn't modify (radenar?on[All Fields]). Removed (ill$) (translated into ill*[All Fields]) because PubMed needs words to have at least 4 characters to use the asterisk.
evaluated the effectiveness of a gene similarity network clustering by observing to what extent each cluster had a single gene function. Yuan et al. (2022) created novel metrics that consider the number of clusters necessary to retrieve a given percentage of the relevant documents. De Vries et al. (
Table 2. Characterization of the SRs. These are the SRs selected for qualitative analysis (SR59, SR47 and SR80). Goal: The question that the authors of the SR want to answer. Needs: The nature of the documents that the authors need to retrieve to achieve the goal. Boolean query components: The components of which the Boolean query consists. The details on the construction of the Boolean query components are in the supplementary material, Figures S1, S2 and S3.
Figure S1. SR59 PubMed Boolean query and its components. The indentations and line breaks represent the dependencies, but they are unnecessary when submitting to PubMed. The colors are the components. Meaning of the colors: Orange: Time Period; Blue: Retinoic acid; Purple: Neuroblastoma; Red: Randomized Controlled Trials and Controlled Clinical Trials. Components ensemble: Time Period AND Retinoic acid AND Neuroblastoma AND Randomized Controlled Trials and Controlled Clinical Trials.
Figure S2. SR47 PubMed Boolean query and its components. The indentations and line breaks represent the dependencies, but they are unnecessary when submitting to PubMed. The colors are the components. Meaning of the colors: Orange: Time Period; Blue: Gynecological or gastrointestinal cancer; Purple: Intestinal obstruction; Red: Surgery. Components ensemble: Time Period AND Gynecological or gastrointestinal cancer AND Intestinal obstruction AND Surgery.
Table S2. Notes from the translation of the Boolean queries of individual SRs. The first column is the identity of the SR in the Scells dataset, and the second column is the notes from translating the OVID query of that SR into a PubMed query.

There is a part in the Boolean query where 2 clauses are not connected by any operator (emollient$.ti,ab. (skin adj6 (...)).ti,ab.). It seemed to me that (skin adj6 (...)).ti,ab. is another name for creams, so I concluded that the missing operator is OR. I had to remove the asterisk * from oil* and gel* because PubMed needs 4 characters or more for wildcard search.

SR4
There is a term that has a wildcard before the end (8 abnormal menstrua$ bleeding.tw). This can't be translated into PubMed because PubMed doesn't allow asterisks before the end of a term. If I include the asterisk (e.g. abnormal menstrua* bleeding[Text Word]) then PubMed will separate the terms at the asterisk. To include this term, I created 2 terms connected by an AND (e.g. abnormal menstrua*[Text Word] AND bleeding[Text Word]). This will retrieve the documents retrieved by the OVID query but unfortunately it will also retrieve other papers.

The content of .mp. is defined using [mp=...]. The content of [mp=...] in this Boolean query is similar to the definition of .mp. by OVID (https://ospguides.ovid.com/OSPguides/medline.htm), so I decided to use [mp=...] as a normal .mp. (i.e. translated to [All Fields]).

Many times, TRANSMUTE made the following type of mistake: (naproxen.tw. OR NAPROXEN/) is translated to (naproxen.tw. OR NAPROXEN[Mesh Terms:noexp]). This mistake happened when I had OR statements in the same row, even when they were grouped by a parenthesis (e.g. (medical therap$ OR medical treatment$).tw.). I fixed these mistakes manually.

The dataset Boolean query says to search until 2006-01-30, but this is wrong. The search in the paper is until 26 February 2014. I will use the latter.

Some of the [MeSH Subheading:noexp] terms were not recognized by PubMed. I believe that this is okay because the authors searched for these terms also in the titles and abstracts, so they knew that some terms were not valid for the category [MeSH Subheading:noexp].

There is a term with the category [Title/Abstract] that starts with * (*amphetamine[Title/Abstract]) and I don't know what that means. I found no info about it in the documentation (https://ospguides.ovid.com/OSPguides/medline.htm), so I will just remove the *.

Lines 28 and 29 make no sense; they don't include the rest of the lines in a logical way. Because of this, I will exclude this SR for now.

(...[Mesh Terms:noexp]) is recognized by the PubMed website but not by the PubMed API. Some of the rows that use .sh. (comparative study.sh and evaluation studies.sh) are actually publication types. I changed their category to publication types. (evaluation studies[Publication Type]) is not recognized by the PubMed API but it is recognized when used in the PubMed webpage. (evaluation studies[Mesh Terms:noexp]) is not recognized by either.

SR47
Line 15 is not used in the final query. Why? Line 15 is about vaginal cancer. Line 23 is about gastrointestinal surgery. I reviewed the original query of the SR and the Scells dataset is missing a line of the query! Line 24 is present in the original query of the systematic review but not in the dataset, and it contains lines 15 and 23. The reason that line 15 was not used in the final query is that the query is incomplete in the dataset. Line 15 is used in the SR. I included the missing line in the PubMed translation.
SR59
The authors use the categories .ms. and .tiab., which are not in the vocabulary of OVID. I believe they mean [MeSH Subheading:noexp] and [Title/Abstract], respectively. The line (randomised controlled trial[pt]) is misspelled. It should read (randomized controlled trial[pt]), otherwise PubMed does not recognize the publication type. I corrected the spelling.
Table S4. Noun-phrases for SR47. The noun-phrases were extracted from the titles of the documents.
Table S5. Noun-phrases for SR80. The noun-phrases were extracted from the titles of the documents.
Increasing the Competitiveness of Agro-industry Products Through Institutional Empowerment to Support the Achievement of Sustainable Agricultural Development
The aim of the present paper is to examine comprehensively the importance of institutional empowerment for increasing the competitiveness of agro-industry products from the perspective of bio-industrial agriculture and the achievement of sustainable agricultural development, enriched with a review of various studies and other related literature and using a quantitative descriptive method. The depletion of petroleum reserves and the length of the process by which fossil fuels are formed require us to innovate immediately to produce alternative and renewable energy, and some of this has already been produced by various bio-industrial studies. The use of bioenergy in agro-industry is predicted to make the use of fuel oil more efficient and effective and to support GMP for its processed products. The findings also revealed positive links among renewable energy resources, natural resource management, human resource management and the sustainability of agricultural development. These findings provide guidelines for policymakers, who should increase their emphasis on renewable energy that enhances the growth of the economy.
INTRODUCTION
Indonesia, as an agrarian country, gives agriculture a very strategic position and role: realizing highly competitive agriculture is the main vision of agricultural development, so as to achieve self-reliance and prosperity in food as well as energy. "The realization of a sustainable agriculture-bio industrial system that produces a variety of healthy food and high value-added products from biological resources and tropical marine," is the main vision of agricultural development based on sustainable bio-industry agriculture contained in the document SIPP (Main Strategy of Agricultural Development) 2015-2045 (Naz et al., 2019). In implementing this vision, a biorefinery concept is applied that optimizes the conversion of biomass, minimizes energy input and adopts the concept of zero waste to produce food and non-food materials that have high economic value (Işıtan et al., 2016).
The acceleration of the implementation of the agriculture-bio-industry system and concept is closely related to the presence of at least five challenges in the agricultural sector at this time: (1) increasing the income of the majority of farmers, whose landholdings are <0.5 hectares; (2) agronomic challenges in increasing the production of agricultural commodities, especially food; (3) demographic challenges, to fulfil the food needs of a population that continues to grow; (4) the challenge of facing global climate anomalies in realizing sustainable agriculture; (5) the challenge of facilitating the transformation of the national economy from a fossil-based to a bioeconomic one. Related to those challenges, holistic and integrated handling by all stakeholders is needed to respond to them, namely the change and renewal of the paradigm of national economic development, from (i) the agricultural paradigm for development (agriculture for development) towards (ii) the paradigm of a sustainable agriculture-bio-industry system. Using a quantitative method, this paper comprehensively states the importance of institutional empowerment related to increasing the competitiveness of agro-industrial products from the perspective of bio-industrial agriculture for the achievement of sustainable agricultural development, enriched with a review of various studies and other related literature. The weak point of the Indonesian economy is that movement in the real sector has not been optimal yet, resulting in limited employment and business opportunities. The government is still struggling to curb poverty and rising unemployment, which remain crucial problems today (Kurnianto et al., 2018). The resilience of the agricultural sector was proven by how little it was affected in the past crisis, and agricultural industrialization has strong links with other sectors, especially at present, through consumption, investment and labor linkages (Handriyani, 2018).
LITERATURE REVIEW
The development of a sustainable agriculture bio-industry system is the implementation of the concept of bio-economics, a transformation carried out broadly but gradually and at different levels. Related to this, the development of the Integrated Agriculture-Energy System (SPET) becomes the first phase of emphasis (Elobeid et al., 2013). The development of SPET is also a strategy to improve the welfare of small farmers and reduce poverty in rural areas. In addition, to enable the development of sustainable agriculture systems, the concept of agriculture bio-industry also envisages the development of the concept of zero waste management, by integrating various socio-economic aspects of the agricultural community and environmental aspects. The process of forming fuel derived from fossils takes a very long time, and the depletion of petroleum reserves requires humans to take immediate action to produce substitute fuels and energy sources.
Various studies of bio-industry have been able to produce various types of renewable energy, especially when used in agro-industry efforts that are predicted to make the use of fuel oil and gas more efficient and effective and to support GMP for processed products (Yishai et al., 2016). Using a quantitative method, this paper presents more comprehensively the various businesses and opportunities as well as the development of bio-industry and agro-industry, enriched with a review of various studies, writings and other related literature. It also stresses the importance of managing natural resources and human resources correctly and wisely from the aspects of economic, social and environmental sustainability, and the importance of developing and expanding related infrastructure and of equipment assistance program policies of the right type and target so that the objectives of the program are achieved.
Sustainable Bio-industry Agriculture
Several trends have consequences and require solutions related to future agriculture, namely: (i) the importance of efforts to encourage economic transformation towards bioenergy in anticipation of increasingly rare fossil fuels; (ii) the increasing urgency of bio-products, healthy lifestyles, and bio-culture consumption patterns as the needs for food, feed, energy and fiber increase; (iii) the importance of encouraging increased adaptation and mitigation capacity to anticipate global climate change; (iv) the obligation and inevitability of efficiency and conservation activities in anticipation of the impact of land and water scarcity; (v) the development of ecological agriculture systems and bio-services as a response to the demand for environmental services; (vi) the importance of applying pluricultural integrated bio-cycle systems as a result of the increasing number of marginal farmers; (vii) the development of bio-economics as an impact of progress in bio-science and bio-engineering.
The new paradigm of national economic development includes (1) the agricultural paradigm for development (agriculture for development), i.e., a national economic development plan that is designed and implemented according to the stages of agricultural development and places the agricultural sector as a driving force for a balanced and comprehensive agricultural transformation; (2) the paradigm of a sustainable agriculture bio-industry system, with the shift of fossil fuel-based industries towards renewable (biological) fuels. This paradigm places agriculture in the role of a biomass producer of bio-refinery raw materials to produce food, feed, fibers, energy, various other bio-products, and environmental services; this is a global issue of agricultural development that must be faced as a challenge to develop environmentally friendly agriculture through the application of technology via the development of bio-science, innovation to face climate change (GCC innovation response), and bioinformatics that applies information technology (bio-information), always prioritizing environmental sustainability and natural resources (Reches et al., 2019).
Indonesia has various types of raw materials capable of producing various types of bioenergy such as bio-gas, bio-diesel, bio-ethanol, bio-electricity and bio-avtur produced from advanced processing; their production and application continue to be improved to minimize the use of fuel oil. Agricultural-based materials have great potential to become bio-industrial raw materials. Various agricultural-based products and wastes can produce them, among others from the processing of oil palm, jatropha, pineapple, tapioca liquid waste and cattle manure (cattle, buffalo, horses, goats, and other livestock), and from the synthesis and distillation of various other agriculture-based products (Simoncini et al., 2019). To optimize its implementation, the main vision of agricultural development based on sustainable bio-industry agriculture is outlined in nine missions: (i) spatial planning and agrarian reform (RA); (ii) integrated tropical farming systems; (iii) economic activities in production, information and technology; (iv) rural-based post-harvest, agro-energy and bio-industry; (v) marketing systems and product value chains; (vi) agricultural financing systems; (vii) quality research, innovation and human resources systems; (viii) agricultural and rural infrastructure; and (ix) imperative legislative, regulatory and management programs.
There are three principles of sustainability of the bio-industrial agriculture system: (1) self-financing, as far as possible through mutually supporting and tiered businesses; (2) applying small-scale technology; and (3) businesses that are technically and economically feasible. The integration of dairy cattle with oil palm in Aceh Province, which produces milk, palm oil, biogas (from fermented cow dung), and sludge that serves as organic fertilizer, can be used as an example of the application of the three principles (Choong et al., 2018). Related to these three principles, the integrated agriculture-energy system (SPET), which is the focus of the first stage of Agriculture-Bio-industrial Development, is based in the primary farming subsystem on biotechnological innovations that can produce as much biomass as possible as bioenergy-producing feedstock; and, to prevent trade-offs against food prosperity and energy endurance, the SPET in the bio-industry subsystem is based on bio-engineering innovations to process feedstock into energy and bio-products, including fertilizer for farming.
Bio-industry Agriculture: The Application of Bioenergy Technology and Participation
From various writings and research results, it can be stated that various agricultural products can be processed into various types of bioenergy, which can be used for electricity generation, transportation fuel, heat sources for industry and so on. Bioenergy, which accounts for 60% of new and renewable energy, no longer needs high economic incentives. Bioenergy development can be carried out at all levels of business scale, anywhere, and can involve all communities, both rural and urban.
The availability of biomass is also very abundant in Indonesia and can be utilized to produce bio-electricity. Biomass has the potential to produce 49,810 MW of electricity, which if converted could generate incomes of around Rp. 501.8 T/year (at an electricity price of Rp. 1,150/kWh). Coal-fired power plants (around Rp. 700-800/kWh) are also being built in line with government policies to minimize fuel oil dependency, so electricity projects are being accelerated. Coal usage is predicted to be around 75% cheaper than fuel-oil power plants (BBM at Rp. 3,200/kWh), and converting fuel-oil-based PLTD electricity to coal-fired plants saves around Rp. 71.4 T of the 2014 State Budget (APBN) spent on fuel oil. However, the use of coal fuel also produces 5% solid pollutants (ash: fly ash and bottom ash), and this is where agriculture plays a role, as fly ash can be used as an ameliorant with high base levels and base saturation and complete nutrients, so that it can increase peatland pH and improve the structure of peat soil.
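The Rp. 501.8 T/year figure is consistent with the stated capacity and tariff if the full 49,810 MW potential were generated around the clock (8,760 hours per year is our assumption here):

```latex
49{,}810\ \text{MW} \times 8{,}760\ \text{h/year} \approx 4.36\times 10^{11}\ \text{kWh/year},\qquad
4.36\times 10^{11}\ \text{kWh/year} \times \text{Rp}\ 1{,}150/\text{kWh} \approx \text{Rp}\ 501.8\ \text{trillion/year}.
```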
From the synthesis of palm oil, various types of surfactants are obtained, including DEA (diethanolamine), MES (methyl ester sulfonate), APG (alkyl polyglycoside), AS (alcohol sulfate), sucrose ester, and others. Formerly, surfactants were formulated only for cleaning products (bath soaps, detergents), cosmetic products and medicines. The idea of increasing petroleum production by means of surfactants (most of which were imported) developed from 2003, and finally they could be produced through collaborative research by ITB and PERTAMINA oil experts. Glycerol (C3H8O3) is a by-product of the biodiesel industry that can be utilized in a variety of bio-industrial businesses, including as an additive in WBM (Water-Based Mud), improving the ability of the drilling mud to lubricate and cool the drill bit actively and efficiently. Glycerol, through esterification with oleic acid using a MESA catalyst, is converted into glycerol ester, which is potentially used for Oil Based Mud (oil-based drilling mud for oil well drilling needs) (Shahbaz et al., 2017).
Meanwhile, biodiesel from used cooking oil can be used in a fuel oil blend aimed at reducing carbon monoxide emissions and other exhaust fumes from vehicles. Bio-pellets are a type of biomass-waste-based fuel that can be produced from palm fronds, coconut cake, husks, corncobs, coffee husks, and similar residues. Thus, various agricultural products and wastes, produced from small- to large-scale production processes, can be processed (bio-industrially) into renewable energy sources, organic fertilizer and various other products.
Agricultural-bio Industry Implementation: The Agro-industry Development
Food is an essential basic need, a human right, and a source of energy, power and strength for every creature to live and carry out daily activities. Indonesia has a wealth of natural resources in almost its whole territory, producing various kinds of foodstuffs and other economically valuable commodities. Besides rice (the staple food), sago, sorghum, cassava and corn are plants that produce carbohydrates to fulfil food and energy needs, as well as bioenergy. Energy is vital for life, and its fulfilment greatly influences ongoing economic activities. Various alternative and renewable energies can be produced from various food commodities, horticulture, and other industrial plants.
Corn-based bio-industry products are part of integrated agriculture-energy system technology: in addition to being a source of food and various processed products, corn can also be used as a raw material for feed. Corn waste can also be optimized as an energy source that provides real added value through processing. The multiple products from corn include: (i) food (corn, shelled corn, cornstarch (maize), corn starch, instant corn rice, grits); (ii) fiber (fiber/ampok); (iii) feed (animal feed from corn husks (klobot), stems, cobs, and leaves); and (iv) fuel (bio-ethanol).
Sago palms are native plants of Indonesia that are present and have great potential in the Papua territory (90%) and in several other territories such as Aceh, Tapanuli, West Sumatra, Riau, Kalimantan, West Java, and North and South Sulawesi, although the distribution zones do not reflect the limits of their production potential. Sago bio-industry management, from good seed criteria and the thinning of tillers (which determines how well the sago grows) to the proper processing of the sago harvest, is crucial and important for obtaining and maintaining the quality and quantity of sago starch (Shahbaz et al., 2015). The development of sago cultivation and bio-industry faces various problems that must be handled wisely, such as: (i) sago natural resources have not been managed well, partly because use is still limited to family consumption and many sago plants ready for harvest are left to dry and die; (ii) sago factory production machinery is not yet optimal due to inadequate electrical energy support, relatively low human resource capacity for postharvest handling and processing, and incompatible postharvest and processing equipment; (iii) tree cutting is still manual; (iv) the absence of regulations and institutions that accommodate sago farmers in terms of production, processing and marketing of sago.
Bio-industry through the use and optimization of sorghum for domestic use covers food, fiber, energy (a source of bio-ethanol/alternative energy), liquid organic fertilizer, functional food, and animal feed (1 hectare of sorghum is enough to feed 6 fattening cattle). For example, sorghum grown in NTT produces 30-45 tons of sorghum stems, from which 15-20 tons of sorghum sap are obtained; when processed (bio-industry) through fermentation using enzymes at the optimum temperature (distillation with a heating temperature of 78-100°C in the distiller) and purification, this can produce 1.2-1.6 tons of bioethanol with a purity level of 61-95%, plus CO2. Almost all parts of the sorghum plant can be utilized. As a food, sorghum has better nutrition than rice and cassava (Table 1), and it ranks after wheat, rice, corn and barley. Various foods are made from sorghum processing, such as: (i) types of bread without yeast (chapatti, tortillas); (ii) types of bread with yeast (injera, kisia, dosai); (iii) thick porridge (to, tuwu, ugali, bagobe, sankati); (iv) liquid porridge (ogi, ugi, take, edi); (v) snack foods (sorghum pop, sorghum tape, sorghum chips); (vi) boiled sorghum (urap sorghum, som); (vii) steamed food (couscous, wowoto, juadah-sorghum); etc.
As an industrial material (bio-industry), sorghum seeds contain 65-71% starch, which can be hydrolyzed into simple sugars (sugar, liquid glucose or fructose syrup) that can then be fermented to produce alcohol (1 ton of sorghum seeds can produce 384 liters of alcohol). Sorghum seeds can be made into starch (white starch), which is used by industry for adhesives, thickening materials and additives in the textile industry, with the side-products used as feed. Starch is the main ingredient in various food processing systems: it is the main energy source and acts as a determinant of the structure, texture, consistency and appearance of food; it can substitute for corn starch, although there are a few obstacles in its extraction due to the binding of sorghum starch, which is around 35-38%, compared with around 8-15% in corn. Another important bio-industry product of sorghum seeds is beer, where the seeds can replace barley in brewing. Bioethanol as an alternative energy source is used as fuel (95% content, must be pure, fuel-grade ethanol) with a lower exhaust rate of gas/carbon monoxide. Pineapple and cassava bio-industry produce canned pineapple and tapioca flour (aci) and generate bio-gas energy from the processing of tapioca liquid waste and pineapple waste, which can replace fuel oil as an energy source. Moreover, solid waste from pineapple and tapioca factories is used for cattle feed; cow dung is processed into compost (organic fertilizer) for pineapple gardens, cassava and other plants; and the company PT GGP (Great Giant Pineapple) set a 30-40-50 target, meaning a 30% reduction in fuel oil, a 40% reduction in chemicals and a 50% increase in product output. As animal feed, the nutritional value of sorghum seeds, leaves and stems is no less than that of corn, and it is more of a supplement. Stems (with a fairly high sugar content, preferred by cattle) and sorghum leaves must be wilted for about 2-3 h before being given to cattle. Sorghum seeds are also used for poultry feed (chicken and quail) as a substitute for corn flour because they have a fairly good protein value (11%). Bio-slurry (biogas sludge) is used for agriculture, cattle and fish. It can be used as an organic fertilizer on its own and/or combined with synthetic fertilizers wisely and regularly, in both solid and liquid form. It can increase land fertility and reduce pests and diseases, thereby increasing agricultural production. There are three forms of bio-slurry: fresh, solid and liquid. The various uses of bio-slurry include: (1) fresh bio-slurry, for vermicompost and as fertilizer for land and ponds; (2) solid bio-slurry, as fertilizer for land and ponds, in organic fertilizer mixes, for soil cleaning, in mushroom-growing media mixes, and in alternative non-cattle feed mixes; (3) liquid bio-slurry, for liquid organic fertilizers, biological fertilizers, organic pesticides, decomposers, plant hormones, seed protectants, and reducing cage odor (Shahbaz et al., 2018).
Agro-industry, which is a processing industry activity/business, is needed to overcome the nature of agricultural products, which are generally perishable and seasonal, especially horticultural products, by turning them into various processed products. To deal with farm output that cannot be sold immediately, agro-industry is needed to process it (heating, fermentation, drying, cooling, packaging, canning, etc.) so that it is not lost to damage or rot, which of course requires additional costs. Harvests should otherwise be sold immediately after harvesting (mainly horticultural commodities). Agro-industry is also defined as a business, process and policy program to build the competitiveness of agricultural products and to empower and improve the performance of the human resources who carry it out, fairly and sustainably, to ensure food prosperity and the welfare of the agricultural community (especially farmers), taking into account the preservation of natural resources and the environment, bearing in mind that the majority of the poor live and make a living in this sector, agriculture in a broad sense (Shahbaz et al., 2013).
Farmers' limitations related to their relatively weak bargaining position can be studied in the intermediate (middle) to downstream subsystem, consisting of collectors and wholesalers who are the main suppliers for processed-product businesses. This is mainly due to the requirements for large quantities, uniformity (in shape and quality) according to the specifications desired by the processed-product industry, and continuity of raw material supply. In this subsystem, collectors and wholesalers function and act as suppliers of industrial raw materials (from growers). Collectors and wholesalers also distribute products to the various regions that need them. Furthermore, sorting and trading activities are carried out by wholesalers that receive supplies in their area (related to distribution and sales).
Role of the Empowerment of Agricultural Institutions in the Development of Agricultural Bio-industry and Agro-industry
The role of institutions in building and developing the agricultural sector in Indonesia can be seen particularly in agricultural activities. At the national macro level, the role of agricultural development institutions is very prominent in programs and projects for intensification and increases in food production. Agricultural development activities are outlined in the form of programs and projects by building agricultural institutions. Agricultural institutions are structured and patterned norms or habits, practiced continuously, to fulfil the needs of community members whose livelihoods are closely tied to rural agriculture. In the life of the farming community, the position and function of farmer institutions are as part of the social institutions that facilitate social interaction or social interplay within a community (Chaudhary et al., 2019). Farmer institutions are also a strategic entry point in driving the agribusiness system in rural areas (Sánchez-Mejía, 2018). For this reason, all resources in rural areas need to be directed and prioritized to increase the professionalism and bargaining position of farmers (farmer groups). At present, it is acknowledged that the state of farmers and farmer institutions in Indonesia is not yet as expected (Paroda, 2019).
The above conditions indicate the significance of institutional empowerment in accelerating the development of the agricultural sector (agro-industrial processed products). This is in line with the results of various observations, which conclude that if an agricultural development initiative is carried out by an institution or organization, where individuals with an organizational spirit combine their knowledge in the planning and implementation stages of the initiative, then the chances of success in agricultural development will be even greater.
To protect the agricultural sector from competition in world markets and to support the success of agro-industrial processed products, several policies are needed, including: (i) fighting for the concept of strategic products (SP) in the WTO forum; (ii) applying tariffs and non-tariff barriers to agricultural commodities that are considered very sensitive; (iii) industrial development policies that place more emphasis on small-scale agro-industry in rural areas in order to increase the added value and income of farmers; (iv) investment policies that are conducive to further encouraging investor interest in the agricultural sector; (v) development financing that prioritizes the budget for the agricultural sector and its supporting sectors; (vi) local governments' attention to agricultural development, including agricultural infrastructure, empowerment of agricultural extension workers, development of agricultural agencies, eliminating various levies that reduce agricultural competitiveness, and adequate budget allocation.
Trade globalization brings a variety of challenges that should be interpreted as opportunities for Indonesian processed products to compete in international markets, including: (i) a solid domestic market for the product, so that it is not simply flooded with imported products; (ii) the supply of safe, hygienic, high-quality and guaranteed products at competitive prices; (iii) continuous supply of products and adequate support from environmental conditions and facilities. To improve the competitiveness of Indonesian trade products, the diversity of domestic agricultural product processing technologies in each region must be utilized and adapted to global conditions as a source of strength in the development of competitive agro-industrial products (Peters and Pingali, 2018).
As an effort to develop and increase the competitiveness of processed agro-industry products, it is necessary to increase production efficiency and quality through improved production, post-harvest and processing systems (GAP and GMP). The competitiveness of Indonesian processed agricultural commodities is still relatively weak because it relies only on the comparative advantage of abundant natural resources and uneducated labor (cost-driven factors), so that the products produced are dominated by natural primary products (resource-based and unskilled-labour intensive).
The flood of processed products from abroad should be interpreted as a challenge and an opportunity that must be met by increasing the competitiveness of processed products, by accelerating and developing domestic agro-industry performance that improves product quality, quantity and efficiency. With efforts to reduce imports of processed products, exports gradually shift from primary agricultural products (raw materials) to processed products (Mutenje et al., 2016).
The development of agro-industry to increase the competitiveness of processed products and to develop (global) export markets has multiple benefits: (a) it acts as export promotion as well as import substitution, (b) it creates agricultural added value, (c) it creates industrial employment, and (d) it increases technology adoption. Increasing the value added of agricultural products is one of the main targets of the ministry of agriculture in the context of developing agro-industry, through increasing the volume of processed products traded, developing and improving processed products based on agricultural products, and aiming them at exports to increase the foreign trade surplus.
All types of agricultural, plantation and animal husbandry commodities in Indonesia have a strategic function and role as components and raw materials for food (and medicines), clothing (and cosmetics), as well as tools and furniture. Almost all of these final products have entered the export market and are developing quite well. From the perspective of law, regulation and policy, the government has a political-economic mandate to build the prosperity of the nation, especially for the future. Agro-industrial businesses can serve as a source of income for the majority of the population (alongside the development of the primary agricultural sector), as employment and a place of business, and are expected to increase income and realize the welfare of farmers. The acceleration of agro-industry and the achievement of added value in processed products can support product competitiveness and the acceleration of agricultural development in Indonesia. Therefore, revitalization of agro-industry activities is needed as a breakthrough action and strategy, and as a locomotive of national economic growth. Agro-industry also has strong inter-sectoral links, not only through products but also through consumption, investment and labor linkages. These linkages arise because manpower and capital are reallocated to processing (from primary products to processed products), which should be accompanied by a business feasibility analysis covering general, financial, economic, social and environmental, and technical viability, infrastructure support, and policy, as supporting data.
Various studies can be conducted to determine the benefits and feasibility of agro-industry performance for any type of agricultural commodity, whether food crops, horticulture, plantations, livestock, and so on. Based on Permentan (Regulation of the Minister of Agriculture) No. 35/Permentan/OT.140/7/2008, the processing of agricultural products from plants means changing raw materials into primary, semi-finished or finished products, with the aims of increasing shelf life or adding value to agricultural products, and of minimizing the losses that occur when the added value of the product is captured by other countries (Reghunath and Kumar, 2018).
Regarding Law No. 13 of 2014 concerning Industry, several considerations underline the need for and importance of business activities based on agricultural products and their development: (i) Indonesia's natural resources are rich and spread throughout the country, so the processed product industry should be encouraged; (ii) job opportunities should be created as widely as possible; (iii) value added should be increased; (iv) incomes related to the welfare of farmers should be raised; (v) export opportunities should be opened up; and (vi) such activities are believed to create an impact and promote equitable development. The strategy follows from the recognition that the management of natural resources must be improved and developed, so the implementation and acceleration of the revitalization of agro-industry activities are needed. The issuance of Law No. 13 of 2014 concerning Industry should be interpreted as an affirmation that the national industrial system has substantively provided the correct direction for regulating the development of natural resource-based industrial sectors (Reghunath and Kumar, 2018).
Indonesia's agricultural development should anticipate the development of processed products through the accelerated implementation of agro-industry (agricultural industrialization), so that exports of agricultural products can gradually shift from primary products (raw materials) to processed products. Various problems arise from the inability of domestic agro-industries to produce processed products of the quality, competitiveness and diversity demanded by the market, compounded by incomplete regulations and rules that should side with the farmers producing the raw materials. There are at least five components in the agro-industry product business (agribusiness), namely: (i) suppliers and distributors of production inputs and agricultural machinery (alsintan); (ii) agricultural products (primary/unprocessed products); (iii) agro-industry; (iv) marketing of various agricultural products; and (v) public services (storage, banking, transportation, insurance). The marketing of agricultural products is generally a critical point in the business chain of agribusiness products, especially agro-industrial products, because of the limited time and location of implementation and the dependence on the role of collector traders. As a forum for transaction activities (selling/buying), marketing needs to be empowered to give stakeholders easier access to the items they need.
RESEARCH METHODS
The aim of this study is to inspect comprehensively the importance of institutional empowerment in relation to the increasing competitiveness of agro-industry products, from the perspective of bio-industrial agriculture and the achievement of sustainable agricultural development, enriched with a review of various studies and other related literature and using a quantitative descriptive method. The data were retrieved from the World Development Indicators (WDI), and STATA was employed to test the hypotheses. Institutional empowerment of agro-industry products is measured by three elements: renewable energy resources (RER), management of natural resources (MNR) and management of human resources (MHR). Competitiveness is measured by the sustainability of agriculture development (SAD), and based on the above-mentioned literature the present study develops the following equation.
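The equation itself did not survive in the text. A plausible specification, consistent with the variables named above and offered only as an assumption about the intended model rather than the authors' exact formulation, is:

\[ SAD_{it} = \beta_0 + \beta_1 RER_{it} + \beta_2 MNR_{it} + \beta_3 MHR_{it} + \varepsilon_{it} \]

where i indexes the cross-sectional units in the WDI panel and t indexes years.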
FINDINGS
The first assumption concerns multicollinearity, that is, that the constructs are not highly correlated; it was checked with the variance inflation factor (VIF), calculated by the following equation. The figures show no issue with the multicollinearity assumption; the VIF values are given below in Table 2.
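The equation referred to is not reproduced in the text; the standard definition of the variance inflation factor, presumably the one applied here, is:

\[ VIF_j = \frac{1}{1 - R_j^2} \]

where \(R_j^2\) is obtained by regressing predictor j on the remaining predictors; values well below the conventional threshold of 10 indicate no serious multicollinearity.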
The second assumption, normality, was verified using the skewness and kurtosis test, calculated with the following equations. The figures show that the data have non-normality issues; the skewness and kurtosis results are given below in Table 3.
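The equations are garbled in the source; the standard moment-based definitions, presumably the ones intended, are:

\[ \text{Skewness} = \frac{E[(X-\mu)^3]}{\sigma^3}, \qquad \text{Kurtosis} = \frac{E[(X-\mu)^4]}{\sigma^4} \]

where \(\mu\) and \(\sigma\) are the mean and standard deviation of the variable; a normal distribution has skewness 0 and kurtosis 3.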
The third and fourth assumptions, concerning autocorrelation and heteroscedasticity, were checked with the Wooldridge test and the Breusch-Pagan test, respectively; the results show that the data have both problems, and their effects were corrected by using the GMM estimator.
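The authors report running these diagnostics in STATA. Purely as an illustration, the sketch below reproduces two of the named checks (VIF and Breusch-Pagan) in Python with statsmodels; the file name and the column labels RER, MNR, MHR and SAD are assumptions, not taken from the original dataset.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.diagnostic import het_breuschpagan

# Hypothetical panel assembled from the WDI; column names are assumed.
df = pd.read_csv("wdi_panel.csv")
X = sm.add_constant(df[["RER", "MNR", "MHR"]])
y = df["SAD"]

# Multicollinearity check (analogue of Table 2): one VIF per regressor.
vif = {col: variance_inflation_factor(X.values, i) for i, col in enumerate(X.columns)}
print("VIF:", vif)

# Heteroscedasticity check on pooled-OLS residuals (Breusch-Pagan test).
ols = sm.OLS(y, X).fit()
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(ols.resid, X)
print("Breusch-Pagan LM p-value:", lm_pvalue)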
The findings also revealed positive links among RER, MNR, MHR and SAD, because all the beta coefficients have a positive sign and the t-values and P-values meet the conventional thresholds; Table 4 below shows the path analysis of the study.
DISCUSSIONS
The findings revealed positive links among renewable energy resources, management of natural resources, management of human resources and the sustainability of agriculture development. These findings are similar to the outcomes of Liu et al. (2017), who also found that renewable energy is an essential part of agricultural development in a country. A study by Alola and Alola (2018) showed a positive association between the consumption of renewable energy and agricultural development, and those outcomes are consistent with the output of the present study. These findings provide guidance to policymakers, who should place greater emphasis on renewable energy to enhance the growth of the economy. Construction and agro-industry development policy in rural areas is particularly important for encouraging the creation of a balanced economic structure. The development of agro-industry is intended to play a role in the creation of added value, the absorption and productivity of labor, and markets; it needs to be accompanied by programs that go directly to the target (farm households as subjects), in which agro-industrial development is combined with rural development so that it becomes a comprehensive rural development program, namely "rural agro-industrial development." To improve these conditions, development and empowerment that start from the community are essential in order to achieve optimum synergy at the local level, to support the move towards industrialization, and to make it easier for farmers to develop agro-industrial systems (Reghunath and Kumar, 2018).
In planning and implementing agricultural development in rural areas, emphasis should be placed on improving a variety of efficient and effective agro-industries and on increasing income, employment and business opportunities in rural areas.
As a driver of agricultural development, agro-industry is expected to create a variety of agricultural and processed products, to drive rural industrialization, and to create employment and income in rural areas. Obstacles to the development of agro-industry include: (i) processing technology that is not yet developed, because enterprises are still small and have limited capital resources; (ii) low-quality human resources that are not yet professional; (iii) inadequate facilities and infrastructure; (iv) the low quality, assurance and continuity (availability) of raw materials; (v) marketing that has not yet developed, because agricultural processing industry products have not met market requirements, especially in international markets; and (vi) the absence of concrete policies that encourage the development of domestic agro-industry.
Strengthening product processing technology through the empowerment and participation of farmer communities is one of the important supporting factors in the development of agro-industry in rural areas, so that technology development and investment programs can become a "driving engine" for resilient economic progress in rural areas. To grow the rural economy it is necessary to strengthen community social networks that are efficient in terms of structure/configuration, membership (level of community participation), and role or function (an organic division of work). Various socio-economic aspects of rural agriculture and the marketing of processed products need to be addressed in supporting the development of agro-industry, which must be able to play a role in increasing the value added of processed products, in the absorption and productivity of the workforce, and in expanding the reach of marketing, informed by quantitative studies. The implementation of sustainable bio-industry agriculture should fulfil several benefit criteria: economic, social, and the conservation and sustainability of natural resources (SDA), land ecosystems and a healthy, natural environment managed wisely and sustainably. Sustainable agricultural systems are also the backbone of the realization of independence and food prosperity. Given the dwindling supply of petroleum, which is still one of the main energy sources, various agriculture-based materials can be further processed to produce bioenergy substituting for fuel oil (BBM) or intended to increase petroleum production (for example, surfactants synthesized from palm oil products) (Radhakrishnan and Francis, 2017).
Various regulations and policies on industrialization related to agro-industry, especially food crops, have been adopted to strengthen the bargaining position and increase the competitiveness of processed products in regional and international markets. Their implementation confirms that industrialization is a strategic step for processing agricultural products (agro-industry) as strategic natural resources, and must be pursued by strengthening both the initial and final stages of the agro-industry chain, as well as the bargaining position of processed products based on Indonesian food crop agriculture, in regional and world (international) markets. This implementation is expected to follow from the improvement and development of natural resource management. It would also allow the Indonesian people gradually to break free from the trap of the paradox of plenty (a condition in which a country is rich in natural resources but its people are poor). Thus, Indonesia's extraordinary agricultural potential can be utilized to the fullest extent to realize independence, sovereignty, and national food and energy prosperity, through appropriate and consistent government implementation policies that are sustainable and that always prioritize environmental sustainability and natural resources (Radhakrishnan and Francis, 2017).
CONCLUSIONS AND IMPLICATIONS OF POLICY
• The agricultural sector remains a potential source of income and employment opportunities.
• Efforts to improve the productivity and welfare of farmers as farming business actors must continue, by utilizing bioenergy and diversified innovation and by developing the agro-industry of processed products, as well as agro-industrial crop products and work opportunities outside the agricultural sector.
• Work system improvements (catch, profit-sharing, etc.), wages, mobility and labor information.
• Infrastructure development, education and development of labor skills (especially for women), the participation of all business actors, and serious, intensive and sustained improvement of the quality and competence of agricultural human resources, so that bioenergy can be used sustainably.
• The importance of supporting the improvement and development of bioenergy technology to increase production and productivity as well as work and business opportunities (the agro-industry of processed products).
• The importance of alignment and government support for the role of farmer groups producing processed products, especially through intensive and continuous training and technology guidance in the use of bioenergy to produce processed products, and the strengthening of these farmer groups from the upstream subsystem (aquaculture) to the downstream subsystem (marketing and acting as business actors for processed products), in accordance with the concept of value-chain, market-based solutions.
• The importance of empowering land resources (land, water, minerals and air), biological resources (humans, animals, plants, and other living things), environmental resources (interactions between creatures), and the 6 M (man, money, materials, machines, methods, management), as well as the participation of all business actors and related parties.
• Synergy needs to be achieved optimally and in an integrated manner in the implementation of sustainable bio-industry agriculture programs, so that all stakeholders have the will, ability, opportunity and authority to make real contributions and obtain optimal benefits.
• The importance of uniformity and mutual agreement/commitment among stakeholders from the central to the regional level, so as to support the coordination and smooth implementation of work programs in the regions.
• The importance of managing natural resources correctly and wisely in terms of economic, social and environmental sustainability.
• The importance of regulations and institutions that accommodate bio-industrial and agro-industrial activities from production and processing to marketing and sustainability.
• The importance of developing bio-industry and agro-industry businesses, especially in rural areas, which indicates an increase in the quality and competence of human resources and, in turn, an increase in income that should make it possible to realize food self-sufficiency, food security and farmers' welfare.
|
2020-08-13T10:05:16.765Z
|
2020-08-10T00:00:00.000
|
{
"year": 2020,
"sha1": "fb651b3a3d62227d7d175c8610326ba6a2d65a5b",
"oa_license": "CCBY",
"oa_url": "https://econjournals.com/index.php/ijeep/article/download/10376/5333",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c837dd4c02cee3d61d08410b126e33f2a7a6febf",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
}
|
235792696
|
pes2o/s2orc
|
v3-fos-license
|
Lateral Preoptic Area Neurons Activated by Angiotensin-(1–7) Increase Intravesical Pressure: A Novel Feature in Central Micturition Control
Central micturition control and urine storage involve a multisynaptic neuronal circuit for the efferent control of the urinary bladder. Electrical stimulation of the lateral preoptic area (LPA) at the level of the decussation of the anterior commissure in cats evokes relaxation of the bladder, whereas ventral stimulation of the LPA evokes vigorous contraction. Endogenous Angiotensin-(1–7) [Ang-(1–7)] synthesis depends on ACE-2, and its actions depend on binding to Mas receptors, which have been found in LPA neurons. We aimed to investigate the actions of Ang-(1–7) in the LPA on intravesical pressure (IP) and cardiovascular parameters. The gene and protein expression of Mas receptors and ACE-2 was also evaluated in the LPA. Angiotensin-(1–7) (5 nmol/μL) or A-779 (Mas receptor antagonist, 50 nmol/μL) was injected into the LPA in anesthetized female Wistar rats, and the IP, mean arterial pressure (MAP), heart rate (HR), and renal conductance (RC) were recorded for 30 min. Unilateral injection of Ang-(1–7) into the LPA increased IP (187.46 ± 37.23%) with a peak response at ∼23–25 min post-injection and yielded no changes in MAP, HR, and RC. Unilateral or bilateral injections of A-779 into the LPA decreased IP (−15.88 ± 2.76 and −27.30 ± 3.40%, respectively) and elicited no changes in MAP, HR, and RC. Gene and protein expression of Mas receptors and ACE-2 was detected in the LPA. Therefore, the LPA is an important part of the circuit involved in urinary bladder control, in which Ang-(1–7) synthesized in the LPA activates Mas receptors to increase IP independently of changes in RC and cardiovascular parameters.
INTRODUCTION
Urinary bladder dysfunctions can make the daily life activities difficult, causing social and mental discomfort. In the nephrology and urology outpatient care units, almost 40% of the patients present disorders of the lower urinary tract (Bakker et al., 2002;Kajiwara et al., 2004;Hashim et al., 2009;Sureshkumar et al., 2009). Among the urinary bladder dysfunctions, urinary incontinence symptoms have been reported with higher prevalence in women (Aoki et al., 2017).
Central control of micturition and urine storage involves a complex and not fully understood mechanism. The maintenance of excretion and urinary storage depends on reflex mechanisms; however, this reflex arc can undergo direct cortical influence through facilitatory and inhibitory mechanisms. The onset of micturition is facilitated by the Pontine Micturition Center (PMC) (de Groat, 1998), while urinary storage is influenced by the Pontine Urine Storage Center (PUSC), which is located ventrolaterally in PMC.
Studies in which pseudorabies virus was injected into the urinary bladder wall have shown that, after a long incubation period, infected neurons are found in the lumbosacral spinal cord, raphe nucleus, reticular formation, pontine micturition center (PMC), locus coeruleus, red nucleus, hypothalamus, preoptic area, and cortical areas (Nadelhaft et al., 1992). This evidence indicates that a multisynaptic neuronal circuit is involved in the efferent control of the urinary bladder.
The lateral preoptic area (LPA) is located in the hypothalamus and connects with limbic structures involved in physiological and behavioral responses to stress (Wayner et al., 1983;Briski and Gillen, 2001). The LPA also has osmosensitive neurons that regulate water intake (Osaka et al., 1995). Evidence suggests that LPA neurons would be important in the inhibitory control of water intake, as the chemical damage of this area with ibotenic acid increased water intake either upon administration of hypertonic saline or in water-deprived rats for 14 h (Saad et al., 1996). In addition, chemical lesions of the LPA using high doses of kainic acid have demonstrated that rats show polydipsia accompanied by increased urine production; however, this effect was reversed 1 week after lesions (Osaka et al., 1993). Kabat et al. (1936) have shown that electrical stimulation in scattered points of the LPA at the level of the decussation of the anterior commissure in cats evoked relaxation of the bladder. Nevertheless, the stimulation of the ventral portion of LPA causes a strong bladder contraction. This portion of LPA contains fibers of the medial forebrain bundle (Kabat et al., 1936).
Evidence indicates that the ACE-2/Ang-(1-7)/Mas receptor axis is capable of promoting effects contrary to the harmful actions of the ACE-1/Angiotensin II/AT-1 receptor, especially in pathological conditions . Nevertheless, the functions of Ang-(1-7) are not limited to counter-regulatory actions (Santos et al., 2000). Studies with gene deletion of the Mas receptor lead to the appearance of several alterations; among which are cardiac dysfunction , increased blood pressure (Xu et al., 2008), decreased baroreflex function (de Moura et al., 2010), endothelial dysfunction (Xu et al., 2008), and also changes similar to the metabolic syndrome . Different areas in the medulla oblongata involved in cardiovascular control, as well as forebrain areas as the supraoptic nucleus and the LPA, showed the existence of neurons labeled for Mas receptors, using the immunofluorescence technique (Becker et al., 2007).
The prior studies in cats were performed, using electrical stimulation into the LPA, which can stimulate either cell bodies or axons from other brain areas. Earlier reports showed the Ang-(1-7) actions in some hypothalamic areas, and Mas receptors have been found in LPA. Nevertheless, to the best of our knowledge, no previous evidence showed whether Ang-(1-7) could act or not in the LPA to modulate the control of urinary bladder. Thereby, we hypothesized that Ang-(1-7) acts in LPA neurons, changing the intravesical pressure (IP) in female rats. In order to do that, we investigated the effects of Ang-(1-7) and A-779 (Mas receptor antagonist) injections into the LPA on the IP. In addition, the gene and protein expressions of Mas receptors and ACE-2 were evaluated in the LPA for demonstrating the existence of Ang-(1-7) receptors in the LPA neurons and also for understanding if Ang-(1-7) is produced in neurons located in LPA, respectively.
Animals
Female Wistar rats (∼230-250 g, 14-16 weeks old) supplied by the Animal Facility of the Faculdade de Medicina do ABC were used. The animals were initially housed in groups of four rats per plastic cage. After stereotaxic surgery for implantation of guide cannulas into the LPA, the rats were maintained in individual plastic cages and provided with standard chow pellets and tap water ad libitum. The humidity of the animal room was maintained at ∼70%, and the room temperature was controlled with an air conditioner set at 22-24 • C with a 12:12-h light-dark cycle. All procedures were performed in accordance with the National Institutes of Health (NIH) Guide for the Care and Use of Laboratory Animals, and were approved by the Animal Ethics Committee of the Faculdade de Medicina do ABC, Centro Universitario FMABC (protocol number 13/2017).
Implantation of Guide Cannulas Into the LPA
Rats were anesthetized with i.p. ketamine (50 mg/kg, Dopalen®, Ceva Saude Animal Ltda, Paulinia, Brazil) and i.m. xylazine (10 mg/kg, Anasedan®, Ceva Saude Animal Ltda, Paulinia, Brazil). They were placed in a stereotaxic apparatus (David Kopf, Tujunga, CA, United States), and antisepsis of the surgical field was performed using polyvinyl pyrrolidone. A midline incision was made in the skin over the skull to expose the bregma and lambda sutures, which were positioned in the same horizontal plane. A stainless steel guide cannula (12-mm length, 23 gauge, 0.642-mm OD, 0.337-mm ID, BD, Juiz de Fora, Brazil) was implanted into the brain with the tip located 0.8 mm caudal from bregma, ±1.5 mm lateral from midline, and 7.2 mm ventral to the cranial surface at the anteroposterior level chosen for insertion of the guide cannula. Two jeweler screws were implanted in the skull, and the guide cannula was anchored to the screws with acrylic cement. The guide cannula was closed using a 12-mm mandrel whose external tip was covered by a polyethylene tubing cap fitted to the guide cannula. At the end of the surgery, the rats received a single dose of i.m. Veterinary Pentabiotic for Small Animals (2,000 U/mL, 0.1 mL/rat, Fort Dodge Saude Animal, Campinas, Brazil) as a prophylactic procedure and i.m. meloxicam (0.2 mg/kg per day, Maxicam, Ourofino Saude Animal, Campinas, Brazil) for 3 days to produce postoperative analgesia and an anti-inflammatory effect.
Surgical Preparation for Cardiovascular Parameters and Intravesical Pressure Recordings
The rats were anesthetized with 2% isoflurane in 100% O 2 and submitted to:
Cannulation of the Femoral Artery and Vein
Polyethylene tubing (PE-50 connected to PE-10, Clay Adams, NJ, United States) was inserted into the femoral artery and vein for pulsatile arterial pressure (PAP), mean arterial pressure (MAP), and heart rate (HR) recordings in the data acquisition system (PowerLab 16 SP, AD Instruments, Castle Hill, AU), and for drug administration, respectively.
Measurement of Regional Blood Flow
A midline laparotomy was carried out in order to isolate the left renal artery, and a miniaturized pulsed Doppler flow probe (0.8 mm in diameter, Iowa Doppler Products, Iowa City, IA, United States) was placed around this artery for indirect measurement of the blood flow and renal conductance (RC). The probe was connected to a Doppler flow meter (Department of Bioengineering, The University of Iowa, Iowa City, IA, United States), and the amplified signal was digitalized in a data acquisition system (PowerLab 16 SP, AD Instruments, Castle Hill, AU). Additional details about the Doppler technique, including the readability of this method for estimation of the blood velocity, were previously described by Haywood et al. (1981). Relative renal vascular conductance was calculated as the ratio of Doppler shift (kHz) and mean arterial pressure (MAP, mmHg). The data were presented as percent change from the baseline [(final conductance-initial conductance)/initial conductance] × 100.
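Written as formulas, restating the calculation just described:

\[ RC = \frac{\text{Doppler shift (kHz)}}{\text{MAP (mmHg)}}, \qquad \%\Delta RC = \frac{RC_{\text{final}} - RC_{\text{initial}}}{RC_{\text{initial}}} \times 100 \]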
Cannulation of the Urinary Bladder
A small incision in the bladder wall was made for insertion of polyethylene tubing (PE-50 connected to PE-10, Clay Adams, NJ, United States), filled with saline, at the apex of the bladder, as previously described by Cafarchio et al. (2016, 2018, 2020) and Magaldi et al. (2020). A small drop of tissue glue was used to fix the catheter on the bladder wall for intravesical pressure (IP) recordings in a data acquisition system (PowerLab 16 SP, AD Instruments, Castle Hill, AU). The urethral outlet was not ligated, in order to permit bladder voiding if necessary. The baseline intravesical pressure (IP) value was set at ∼5-7 mmHg by saline infusion or urine withdrawal through the catheter inserted into the urinary bladder. Percent changes in intravesical pressure (% IP) were calculated as [(peak IP response − baseline IP)/baseline IP] × 100.
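Written out, with a purely hypothetical illustration of the arithmetic:

\[ \%\,IP = \frac{IP_{\text{peak}} - IP_{\text{baseline}}}{IP_{\text{baseline}}} \times 100 \]

so a rise from a 6 mmHg baseline to a 17 mmHg peak, for instance, would correspond to (17 − 6)/6 × 100 ≈ 183%.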
Microinjection of Drugs
Microinjections of drugs into the LPA were made with a needle (27 gauges, 0.413-mm O.D., 0.210-mm I.D., 13-mm length, Injex, São Paulo, Brazil) connected to a 10-µL Hamilton syringe (Reno, NV, United States) by polyethylene tubing (PE-10, Clay Adams, NJ, United States). The volume of all the drugs injected into the LPA was 1 µL.
Histology
At the end of the experiments, the animals were deeply anesthetized with i.v. sodium thiopental (170 mg/kg, Cristalia, Itapira, Brazil), and a microinjection of 4% Chicago Sky Blue dye (Sigma Aldrich, St. Louis, MO, United States) in a volume of 1 µL was made through the guide cannula in order to determine the sites of drug injections. The animals were transcardially infused with 10% formalin solution (Synth, Diadema, Brazil). The brains were harvested and maintained in 10% formalin for at least 24 h and, thereafter, cut in 40-µm sections, using a freezing microtome (Leica Biosystems, Buffalo Grove, IL, United States), and stained with 2% neutral red (Sigma Aldrich, St. Louis, MO, United States). The slices were covered with Entellanmounting medium (Merck) and analyzed with a light field microscope (Nikon Eclipse E-200, Tokyo, Japan) to verify the presence of the Chicago Sky blue dye in the site of injection.
Experimental Design
Effects of Ang-(1-7) and A-779 Into the LPA on Intravesical Pressure, Arterial Pressure, Heart Rate, and Renal Conductance in the Female Rats (n = 6/group)
As shown in Figure 1, the rats first underwent implantation of a guide cannula into the LPA under ketamine + xylazine anesthesia, using a stereotaxic apparatus. Five days later, after recovery from the stereotaxic surgery, the animals were anesthetized with 2% isoflurane in 100% O2 and submitted to cannulation of the femoral artery and vein for PAP, MAP, and HR recordings and for infusion of drugs, respectively. A miniaturized Doppler flow probe was placed around the left renal artery for indirect blood flow measurement. Polyethylene tubing was also inserted into the urinary bladder for IP recordings. Rectal temperature was maintained between 37 and 38°C, using a heating pad. The animals were anesthetized with 2% isoflurane in 100% O2 during the whole experiment and were unresponsive to a noxious toe pinch. This experimental approach was carried out as previously reported by Cafarchio et al. (2016, 2018, 2020) and Magaldi et al. (2020). A steady level of arterial pressure was maintained under anesthesia. The rats were kept in the supine position in order to avoid pressure of abdominal organs on the urinary bladder, which could affect the IP values. After baseline PAP, MAP, HR, renal blood flow, and IP recordings for 15 min, Ang-(1-7) (5 nmol/µL, catalog # A9202, Sigma Aldrich, St. Louis, MO, United States) or saline (vehicle, 1 µL) or A-779 trifluoroacetate salt (50 nmol/µL, a Mas receptor antagonist, catalog # SML1370, Sigma Aldrich, St. Louis, MO, United States) was injected into the LPA unilaterally, and all the parameters were recorded for an additional 30 min. In another group of rats, after the baseline recordings, saline (vehicle, 1 µL) or A-779 (50 nmol/µL, 1 µL) was injected bilaterally into the LPA, the effectiveness of Mas receptor blockade was evaluated by bilateral Ang-(1-7) injections (5 nmol/µL) into the LPA 15 min after the A-779 injections, and all the parameters were recorded for at least 30 min. At the end of the experiment, 4% Chicago Sky Blue dye (1 µL) was administered at the injection sites. An overdose of sodium thiopental (170 mg/kg, i.v.) was used to euthanize the animals. The brains were removed for later histological evaluation. Figure 2 shows the site of dye deposition in the LPA. Only animals with histological confirmation of the microinjection sites in the LPA were considered in this study.
Gene Expression of Mas Receptors and ACE-2 in the LPA Neurons (n = 6)
As depicted in Figure 1, the animals were deeply anesthetized with 4% isoflurane in 100% O2 and submitted to a thoracotomy for transcardial perfusion of 40 mL of saline. After that, the skull bone was removed using a rongeur (WPI Instruments, Sarasota, FL, United States), and the brain was harvested with forceps, immediately frozen in liquid nitrogen, and stored at −80°C in an ultrafreezer until the day of total RNA extraction with the TRIzol® reagent. To obtain LPA samples, the brain was sliced and a micropunch was performed on the frozen sections of rat brain. The animals used in this protocol had not been previously instrumented for cardiovascular recordings. The procedures for gene expression of Mas receptors, ACE-2, and cyclophilin A were performed by quantitative real-time polymerase chain reaction (qPCR) as follows: Total RNA was isolated from frozen LPA samples with TRIzol Reagent® (Thermo Fisher Scientific) according to the protocol of the manufacturer. RNA integrity was checked by agarose gel electrophoresis, and RNA purity reached the following criterion: A260/280 ≥ 1.8. The extracted total RNA concentration was measured using a NanoDrop™ (One-One c) spectrophotometer (Thermo Fisher Scientific), and 1 µg of total RNA was subjected to the reverse transcription reaction. Complementary DNA (cDNA) synthesis was performed using the ImProm-II™ Reverse Transcription System (Promega, Madison, WI, United States) according to the protocol of the manufacturer. Quantitative real-time PCR (qPCR) was carried out using 2 µL of cDNA and the EvaGreen™ qPCR Mix Plus (Solis BioDyne, Tartu, Estonia) in the ABI Prism 7500 Sequence Detection System (Applied Biosystems, Foster City, CA, United States) to amplify specific primer sequences for the Mas receptors, ACE-2, and cyclophilin A. The forward and reverse primer sequences (Thermo Fisher Scientific) for rats used in this study are listed below:
Mas receptor: (forward) 5'-CCTGCATACTGGGAAGACCA-3'; (reverse) 5'-TCCCTTCCTGTTTCTCATGG-3'
ACE-2: (forward) 5'-TTGAACCAGGATTGGACGAAA-3'; (reverse) 5'-GCCCAGAGCCTACGATTGTAGT-3'
Cyclophilin A: (forward) 5'-CCCACCGTGTTCTTCGACAT-3'; (reverse) 5'-CTGTCTTTGGAACTTTGTCTGCAA-3'
Cyclophilin A was used as the internal control (a housekeeping gene). The procedure consisted of an initial step of 10 min at 95°C, followed by 45 cycles of 20 s at 95°C, 20 s at 58°C, and 20 s at 72°C. Gene expression was determined by the ΔCT method, and all values were expressed using cyclophilin A mRNA as an internal control.
FIGURE 2 | Photomicrograph of histological sections (coronal), showing the drug injection site into the lateral preoptic area unilaterally (A) and bilaterally (B), marked with 4% Chicago Sky Blue dye (1 µL). A schematic representation of the lateral preoptic area at -0.84 mm from bregma (C) and at -0.96 mm from bregma (D) is shown in the brain sections according to Paxinos and Watson (2009). The arrows indicate the location of the LPA. CP, caudate nucleus/putamen; GP, globus pallidus; LPA, lateral preoptic area; LV, lateral ventricle; SFO, subfornical organ; 3V, third brain ventricle. Amplification: 160×.
Protein Expression of the Mas Receptor and ACE-2 in LPA Neurons (n = 6)
A different group of rats (from that used to gene expression, according to the approach described in Figure 1) was deeply anesthetized with isoflurane 4% in 100% O 2 and underwent a thoracotomy for transcardial perfusion of saline. Afterward, the skull bone was removed, using a rongeur (WPI Instruments, Sarasota, FL, United States), and the brain was removed with forceps and immediately frozen in liquid nitrogen and stored at −80 • C in an ultrafreezer for later determination of protein expression of the Mas receptor and ACE-2 in the LPA neurons by Western Blotting. The rats used in this protocol were not animals previously instrumented for cardiovascular recordings. The procedures for Western Blotting assay were carried out as follows: The samples from the LPA were placed in RIPA lysis and an extraction buffer, added with a mixture of protease and phosphatase inhibitors (Thermo Fisher Scientific). The tissues were homogenized in a lysis buffer, incubated on ice for 10 min, and centrifuged at 7,000 g for 5 min at 4 • C, and the supernatant containing the soluble proteins was stored at −80 • C. The protein concentration was determined, using NanoDrop TM (One-One c) spectrophotometer (Thermo Fisher Scientific). The total proteins were separated on a 10% SDS-acrylamide gel and then transferred electrophoretically to the nitrocellulose membrane (Bio-rad), using the Trans-blot Turbo Transfer device (Bio-rad). The membrane was stained with Ponceau solution to check successful transfer. The membrane was photographed in the Chemidoc device (Biorad) for determination of total protein by densitometry, using the Image Lab TM software (Biorad). After that, the membrane was washed with milli-Q water at least three times and then incubated for 1 h with 5% non-fat milk in Tris-buffered saline −0.1% Tween 20 (TBS-T). After this period, this solution was discarded, and the membrane was incubated at 4 • C overnight, with a polyclonal primary antibody specific for the Mas receptor (rabbit anti-Mas, Novus Biologicals, catalog # NBP1-78444) and for ACE-2 (rabbit anti-ACE-2, Cloud-Clone Corp., catalog # PAB886Ra01) diluted to a concentration of 1:250 in TBS-T. The blots were washed with TBS-T and then incubated with a goat-antirabbit secondary antibody (Alexa Fluor 488, ThermoFischer Scientific) in a 1:10,000 dilution for 1 h, which produced a chemiluminescent reaction. After that, the membrane was filmed in the Chemidoc device (Bio-rad). The blot corresponding to the protein of interest was quantified by densitometry, using the Image Lab TM software (Bio-rad). The optical density (O.D.) of the proteins of interest was normalized by the expression of total protein.
Statistics
A Kolmogorov-Smirnov test for normality was used to verify the data distribution. Results fitting a normal distribution were expressed as mean ± S.E.M. The data were submitted to unpaired Student's t-tests for comparison of the % IP or % RC responses evoked by Ang-(1-7) or A-779 versus saline into the LPA. Paired Student's t-tests were used to compare MAP and HR before and after Ang-(1-7), A-779 or saline into the LPA. Statistical analysis was conducted using the statistical software package Sigma Stat 3.5. The significance level was set at P < 0.05.
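The original analysis was run in Sigma Stat 3.5. Purely as an illustration of the workflow described above, the following Python/SciPy sketch applies the same tests to synthetic placeholder data; the arrays are randomly generated stand-ins for the six-rat groups and are not the study's measurements.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ip_saline = rng.normal(size=6)   # placeholder % IP responses to saline (n = 6), not real data
ip_ang17 = rng.normal(size=6)    # placeholder % IP responses to Ang-(1-7) (n = 6), not real data

# Kolmogorov-Smirnov test against a normal distribution fitted to the sample
d_stat, p_norm = stats.kstest(ip_ang17, "norm", args=(ip_ang17.mean(), ip_ang17.std(ddof=1)))

# Unpaired t-test: drug-evoked vs. vehicle-evoked % IP responses
t_unpaired, p_unpaired = stats.ttest_ind(ip_ang17, ip_saline)

# Paired t-test: MAP before vs. after injection in the same animals
map_before = rng.normal(size=6)  # placeholder baseline MAP values
map_after = rng.normal(size=6)   # placeholder post-injection MAP values
t_paired, p_paired = stats.ttest_rel(map_before, map_after)

print(p_norm, p_unpaired, p_paired)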
Responses Evoked by Unilateral Injection of A-779 (Mas Receptor Antagonist) Into the LPA on Intravesical Pressure, Arterial Pressure, Heart Rate, and Renal Conductance in the Female Rats (n = 6)
At the baseline (prior to injections of saline or A-779 into the LPA), the MAP was 90 ± 2 mmHg, the HR was 389 ± 12 bpm, and the IP was 6.47 ± 0.45 mmHg (n = 6) (Figure 5A).
The unilateral injection of saline (vehicle) into the LPA elicited no significant changes in MAP (1 ± 1 mmHg), HR (−4 ± 1 bpm), RC (−3.74 ± 2.02%), and IP (−1.95 ± 1.06%) compared with baseline parameters (Figure 5B). In contrast to Ang-(1-7) injections, we observed that the unilateral injection of A-779 into the LPA significantly decreased the intravesical pressure (−15.88 ± 2.76%), both compared with baseline and saline injections into the LPA (p < 0.05). However, the MAP (−1 ± 1 mmHg), HR (−5 ± 2 bpm), and RC (−1.30 ± 3.43%) showed no significant change compared with the baseline values and also in comparison to the saline injection into the LPA (Figures 5C, 4). The latency for the onset of IP decrease induced by unilateral injection of A-779 into the LPA was ∼10 min, and the peak response was achieved at ∼23-25 min after the injection (Figure 5).
Responses Elicited by Bilateral Injections of A-779 Into the LPA on Intravesical Pressure, Arterial Pressure, Heart Rate, and Renal Conductance in Female Rats (n = 6)
Considering that the unilateral blockade of Mas receptors in the LPA, even in the absence of the agonist, produced a small reduction of intravesical pressure, another experimental group was studied, in which guide cannulas were implanted bilaterally into the LPA in order to determine whether any control could be exerted by the contralateral LPA neurons on intravesical pressure and also on the cardiovascular parameters.
In this group of rats, MAP was 91 ± 4 mmHg, HR was 340 ± 14 bpm, and IP was 7.16 ± 0.47 at the baseline ( Figure 6A). Bilateral injections of saline into the LPA produced no significant changes in MAP (−2 ± 2 mmHg), HR (−3 ± 2 bpm), RC (−2.95 ± 5.32%), and IP (0.65 ± 6.41%) compared with baseline values (Figure 6B). After bilateral injections of A-779 into the LPA, a significant decrease in IP was observed (−27.30 ± 3.40% for 15 min after both injections) compared with the baseline and also in comparison with bilateral injections of saline into the LPA (p < 0.05). The decrease in IP evoked by bilateral injections of A-779 into the LPA was significantly greater (∼1.7-fold) than the reduction induced by the unilateral injection (−15.88 ± 2.76%, p = 0.026, unpaired Student's T-test). No significant changes were observed in MAP (1 ± 1 mmHg), HR (1 ± 13 bpm) and RC (-11.97 ± 11.65%) after A-779 injected bilaterally into the LPA compared with the baseline and in comparison with the bilateral injections of saline into the LPA (Figures 6C-6E, 4).
The effectiveness of the Mas receptor blockade was further evaluated by bilateral injections of Ang-(1-7) 15 min after the bilateral administration of A-779 into the LPA. The responses to Ang-(1-7) observed previously were abolished after the Mas receptor blockade, as shown by the % IP (1.14 ± 2.66%) (Figures 7B,C).
DISCUSSION
The results of this study demonstrated that injections of Ang-(1-7) into the LPA evoked a marked increase in intravesical pressure compared with saline, whereas the blockade of Mas receptors for Ang-(1-7) with A-779, either uni- or bilaterally, decreased intravesical pressure compared with saline injections. However, it is noteworthy that bilateral injections of A-779 produced a synergistic effect compared with the unilateral injection. Despite the changes in intravesical pressure, no changes were observed in renal conductance, arterial pressure, and heart rate. These findings suggest that the increases in intravesical pressure do not depend on increases in urinary volume due to increases in the glomerular filtration rate. Although the unilateral blockade of Mas receptors in the LPA yielded a slight but statistically significant decrease in intravesical pressure compared with saline injection, the bilateral injection of A-779 into the LPA enhanced the reduction in intravesical pressure. This suggests that Mas receptor-containing neurons on the contralateral side of the LPA are tonically active and influence the control of detrusor muscle tonus.
Previous studies of Kabat et al. (1936) showed that electrical stimulation of the LPA at the level of the decussation of the anterior commissure in cats evoked relaxation of the bladder. In the current study, the injection sites in the LPA were located caudal to the anterior commissure level, extending from 0.80 to 0.96 mm caudal from bregma (Paxinos and Watson, 2009), which suggests that a different population of neurons was activated by the Ang-(1-7) injections into the LPA. Kabat et al. (1936) also stimulated the ventral portion of the LPA, producing a strong bladder contraction. This portion of the LPA contains fibers of the medial forebrain bundle (Kabat et al., 1936). It is not unlikely that the electrical stimulation performed in the study of Kabat et al. (1936) also activated the cell bodies of neurons in the LPA, which could be the same population of neurons that was activated by Ang-(1-7) in the current study and increased the intravesical pressure.
Studies of Cafarchio et al. (2018) have demonstrated that the blockade of V1a and V2 receptors evokes a long-lasting decrease in intravesical pressure, suggesting that vasopressin is important for the maintenance of detrusor muscle tonus. Both neural (de Groat, 1998) and humoral mechanisms (Cafarchio et al., 2018) seem to be involved in the control of urinary bladder tonus. The increases in intravesical pressure induced by Ang-(1-7) showed a latency of roughly 10 min, and the peak response was observed at 23-25 min after injections into the LPA. This delayed peak response does not seem to depend on the activation of autonomic efferents; instead, a humoral mechanism is more likely involved in the development of the response evoked by Ang-(1-7) in the LPA. Earlier studies showed that the LPA sends projections to the perinuclear shell of the paraventricular nucleus of the hypothalamus, but a direct projection to the magnocellular neurons responsible for vasopressin synthesis has not been demonstrated (Larsen et al., 1994). In contrast, in vitro studies performed in hypothalamic slices have shown that electrical stimulation of cholinergic neurons in the LPA produces synaptic activation of vasopressin-synthesizing neurons in the supraoptic nucleus, which is blocked by hexamethonium but not by atropine (Hatton et al., 1983). Although no previous study has shown that Mas receptor-containing neurons synapse with cholinergic neurons in the LPA, the hypothesis that such an interaction could operate in the LPA to increase the intravesical pressure via vasopressin release should not be dismissed. In the current study, we did not measure plasma vasopressin release after Ang-(1-7) into the LPA, which is a limitation of this study and requires further investigation.
The dose of A-779 used for the blockade of Mas receptors (50 nmol/µL or 50 mM) in the present study was 10-fold higher than the dose of Ang-(1-7) (5 nmol/µL or 5 mM). Nevertheless, we observed a decrease in intravesical pressure that was not as great as the increase in intravesical pressure evoked by Ang-(1-7) injected into the LPA. This likely happened because the LPA should be only one of the brain areas responsible for the maintenance of urinary bladder tonus, so that, upon the blockade of LPA neurons, other brain areas still provide the excitatory drive to the efferent pathways involved in the control of detrusor muscle tonus. The decrease in intravesical pressure induced by the blockade of Mas receptors in the LPA also showed a latency of approximately 10 min, and the peak response was observed at 23-25 min after bilateral injections into the LPA. This pattern of response is suggestive of a tonic control exerted by both sides of the LPA, whose blockade could inhibit the release of a humoral factor such as vasopressin and, consequently, reduce the intravesical pressure. However, it is noteworthy that other brain areas involved in urinary bladder control, such as the cholinergic neurons in the medulla oblongata, can also increase plasma vasopressin release and raise the intravesical pressure at ∼30 min after carbachol injections into the 4th brain ventricle (4thV) (Cafarchio et al., 2016). The blockade of cholinergic receptors with atropine in the 4thV yields a decrease in intravesical pressure at 30 min after injections (Cafarchio et al., 2016), which shows similarities to the responses observed with bilateral A-779 injections into the LPA, except for differences in the latency for the onset of the responses.
In the current study, we demonstrated the presence of the Mas receptor and ACE-2 genes in the LPA by qPCR, as well as their protein expression in LPA neurons by Western blotting. Angiotensin-(1-7) can be formed by different pathways. One of them depends on the cleavage of Angiotensin I into Angiotensin-(1-9), which undergoes the action of ACE and subsequent hydrolysis by NEP (Chappell et al., 2000; Rice et al., 2004). The other pathway is the cleavage of Angiotensin II by ACE-2 into Ang-(1-7), which has been suggested to be the most relevant physiologically and biochemically (Vickers et al., 2002). Although studies using immunofluorescence assays in the central nervous system of Wistar rats have shown the presence of the Mas receptor in the lateral preoptic area (LPA) (Becker et al., 2007), no previous study demonstrated the gene expression by qPCR and the protein expression by Western blotting of the Mas receptors or ACE-2 in the LPA. Indeed, our findings suggest that the increases in intravesical pressure elicited by Ang-(1-7) are due to binding to Mas receptors in LPA neurons, and the presence of ACE-2 in LPA neurons suggests that Ang-(1-7) is endogenously synthesized by the neurons in this area.
Previous reports showed that lesions of LPA neurons with ibotenic acid increased water intake upon administration of hypertonic saline or in rats water-deprived for 14 h, suggesting a role of the LPA in the inhibitory control of water intake (Saad et al., 1996). Furthermore, chemical lesions of the LPA using high doses of kainic acid have demonstrated that rats develop polydipsia accompanied by increased urine production; however, this effect was reversed 1 week after the lesions (Osaka et al., 1993). In the current study, we have demonstrated that Ang-(1-7)-responsive neurons in the LPA are involved in urinary bladder control; thereby, the LPA can be deemed an important forebrain area for the integration of hydroelectrolytic control and urinary bladder regulation; nevertheless, the physiological conditions in which the LPA mediates this integration still require further investigation.
In conclusion, our findings suggest that the LPA is an important part of the circuit involved in urinary bladder control, in which Ang-(1-7) released in the LPA binds to Mas receptors to increase the intravesical pressure independently of changes in renal conductance and cardiovascular parameters. Our data also suggest that the neurons containing Mas receptors bilaterally in the LPA are tonically active in the circuitry involved in the regulation of urinary bladder tone.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The animal study was reviewed and approved by Comissão de Ética no Uso de Animais (CEUA-FMABC).
AUTHOR CONTRIBUTIONS
GBL carried out the functional experiments. JSS, RMM, and GG were responsible for the qPCR experiments. GBL, BV, BBA, DPV, and MAS performed the Western Blotting experiments. GBL, EMC, BA, DPV, and PA worked on data analysis and discussion. MAS designed the experiments, performed the statistical analysis, obtained the research grant, and also wrote the manuscript with GBL and PA. All the authors equally contributed to the development of the manuscript.
|
2021-07-12T13:23:57.460Z
|
2021-07-12T00:00:00.000
|
{
"year": 2021,
"sha1": "1063bc90963c5d660b921a0da3dd148758bc43d3",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2021.682711/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1063bc90963c5d660b921a0da3dd148758bc43d3",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
158407900
|
pes2o/s2orc
|
v3-fos-license
|
Transdisciplinary Sustainability Research and Citizen Science: Options for Mutual Learning
Sustainability science is a problem-oriented academic field that contributes to sustainability by applying transdisciplinary methods. The vision of Future Earth, a global research initiative, recognizes this approach, calling for a new type of science that links disciplines, knowledge systems and societal partners to support a more agile global innovation system (and foster sustainable development).1 Within the Future Earth discourse, co-design, co-production and co-dissemination of knowledge are keywords to describe integrative research that aims to address these challenges (Mauser et al. 2013). Like sustainability science, citizen science (CS) is an approach to research that integrates non-academic actors into research activities. Although not explicitly oriented towards sustainability outcomes, CS takes place in relevant areas, such as biodiversity and climate change. Based on these similarities, we argue that transdisciplinary research (TDR) can learn from the experiences and potential of CS and vice versa. Here, we canvas lessons learned from these fields to contribute to the further development of sustainability science in Future Earth. We focus on three key challenges: knowledge integration, quality criteria, and normativity (see also Blättel-Mink et al. 2016). In doing so, we aim to stimulate exchange between the communities within Future Earth, transdisciplinary sustainability research and CS.2
Citizen Science -Uncovering the Potential of Societal Actors
The need for actionable knowledge to address societal challenges and better epistemic governance of science has created high hopes for CS and scientific citizenship. CS, a specific form of public participation in scientific research, includes centuries-long traditions of what is often called "amateur research" in the fields of biology, astronomy and local history as well as relatively new projects that use digital technology to allow for distributed data collection and analysis (Bonney et al. 2009, Pettibone et al. 2016, Pettibone et al. 2017). CS includes approaches that may transcend disciplines (Crain et al. 2014) to address diverse goals, from individual curiosity to protection of biodiversity (Kullenberg and Kasperowski 2016, Pettibone et al. 2017). CS projects practice diverse approaches to co-design, co-production and co-dissemination relevant also to transdisciplinary sustainability science. in areas of interest (Ottinger 2009). For both approaches, standards enforced through scientific norms play a crucial role in managing knowledge production with somewhat wider participation in CS (Dickel 2016).
Reflecting on these different approaches reveals avenues for mutual learning: exploring the epistemological mechanisms at play in CS could help TDR scholars better understand how knowledge that is perceived as scientifically sound can be generated with broader societal involvement and what assumptions such processes rely on. CS in turn could benefit from considering methods for knowledge integration to achieve more intentional plurality regarding the types of knowledge involved.
Quality Criteria
Since transdisciplinarity complements disciplinary and interdisciplinary routines in producing new knowledge that is both societally and scientifically relevant and robust, the criteria used to measure quality have long been a subject of fierce discussion. Quality criteria should refer to the different tasks in co-design, co-production and co-dissemination. Belcher et al.'s (2015) review of the principles and criteria for a multi-criteria assessment of TDR highlights specific qualities in different phases: relevance of the research problem, legitimacy of the research process, credibility of results and the effectiveness of the outcome. Jahn and Keil (2015) address quality assurance in TDR in three fields: the research problem, the research process, and research results. Defila and Di Giulio (1999) focus on external evaluation, identifying aspects to be considered across different phases of evaluation (ex-ante, intermediary, ex-post). An ex-ante evaluation would include description of the problem, project goals and research questions, involvement of non-academic partners and methods of knowledge integration, while ex-post evaluation criteria cover the development of outputs and dissemination.
These three approaches to quality criteria for TDR (and there are many others) follow different objectives and aim at different contexts of usage. They provide assistance to those designing, assessing the reliability of, or evaluating transdisciplinary processes. Altogether they provide ways to capture process quality, requirements for results and the overall performance of transdisciplinary projects.
Similar discussions on quality criteria are emerging in CS and suggest comparable levels of controversy (e. g., Heigl et al. 2018). Debates center around policy or scientific outcomes (e. g., Schmeller et al. 2009), data accessibility (Groom et al. 2016) and evaluation of the learning benefits for participants (e. g., Brossard et al. 2005), in terms of education, engagement in environmental or other issues and scientific or civic empowerment. There seems to be a trade-off between focusing on scientific, policy or educational goals in CS projects (Chase and Levine 2015). As with the debate in transdisciplinarity, the various quality criteria suggested in CS highlight diverse and sometimes competing goals. Despite this, discussions on quality criteria in CS tend to concentrate on areas of practical application, for example, justifying usability of data for science/policy and educational value.
1. knowledge integration, 2. quality criteria for assessing the different aspects of the research process, and 3. normativity issues concerning how deliberation among project partners takes into account different norms and values and how this shapes the research process.
Knowledge Integration
Co-design, co-production and co-dissemination focus upon joint efforts between scientific and societal actors (Beck et al. 2014, Mauser et al. 2013).3 But how exactly to establish cooperation in research processes has not been sufficiently discussed. Two questions highlight the potential for mutual learning: who is involved, that is, inclusion and exclusion, and how to deal with different kinds of knowledge.
In TDR, inclusion is criteria-based: bound to expertise and dependent on invitation by project coordinators (mostly professional researchers).The selection of participants often applies methods to determine who relevant knowledge holders are (Bergmann et al. 2012).CS, by contrast, offers what we call an "opportunitybased approach": in most initiatives, anybody can join a given project; each participant is able to contribute to the generation of new knowledge.Depending on the project design, citizen scientists' role ranges from conducting independent research on local phenomena and providing field observations to analyzing digital media, as well as building knowledge bases from independent wikis to international databases such as the Global Biodiversity Informational Facility (Pettibone et al. 2017).This openness is possible because in most CS approaches, "citizens" are an implicit category, a catch-all term (Pettibone 2015) synonymous with layperson or volunteer.Thus, TDR is more sensitive to challenges of selection for participation, while CS's more open approach allows all comers to participate.
Successful collaboration in TDR requires distinguishing between and integrating different kinds of knowledge (e. g., Defila and Di Giulio 2015).Recent research, however, indicates that transdisciplinary projects usually integrate nonscientific knowledge in reaching out to societal actors to develop interventions, but not to involve them in reflecting the final results (Di Giulio et al. 2016).This may be because researchers lack the tools to judge the reliability of knowledge that has not been scientifically validated (Defila and Di Giulio 2015).CS, on the other hand, does not stress the integration of different kinds of knowledge, but focuses on the generation of new knowledge by involved citizens -be it as supportive data collection for the academic research of professional scientists (Bonney et al. 2009) CS can learn from the debate in TDR that different contexts and goals make it unlikely that one unified approach can be agreed upon.Choosing appropriate criteria may thus depend on the project's objectives and its normative orientations (e. g., policy-, education-or science-focused).TDR in turn might be enriched by consideration of benefits to participants as seen in CS.Finally, both TDR and CS should experiment with decision-making processes to determine relevant quality criteria at the project level and beyond.
Normativity
TDR is often characterised as solution-oriented, transformative and participatory. These attributes point to the threefold ambition of TDR to produce knowledge that is also relevant beyond academia, that positively impacts socio-ecological systems and that opens the often "closed club" of science. All of these ambitions include normative considerations, with which CS, perhaps by virtue of its popularity in the natural sciences, has engaged less than TDR. But even in TDR, different dimensions of normativity often lack reflection. This is particularly the case in the co-design stage, where agenda-setting takes place. The often neglected dimensions include 1. the communication of underlying normative agendas, 2. open deliberation about (presumed) consensus and conflicting issues, and 3. research practices that can deal with contested and conflicting norms.
First, if taken seriously, the issue of normativity in TDR goes beyond the merely instrumental aims of devising solutions to perceived problems, which uses an ideal-typical conceptualization of value-neutral science. More than other forms of research, TDR itself needs to be understood as a normative instrument, that is, as part of an explicitly transformative political agenda. Normativity therefore extends beyond epistemological issues of good scientific practice into the moral and political arena (Potthast 2015). The same is true for CS, which is linked to normative goals such as enhancing the quality of science, empowering nonscientists and achieving sustainability.
Second, a simplistic integration of different disciplines, methods, conceptual understandings and scientific approaches ignores power relations and social inequalities that affect citizenship (Melo-Escrihuela 2008). While integration is necessary for TDR, a simplistic mode of integrating lay knowledges into dominant scientific frames threatens the plurality of epistemic approaches to knowledge production, which both TDR and CS often claim as their strength. Both approaches need to be more sensitive to how knowledge is integrated and on whose terms.
Third, established processes of research governance and management, which can inhibit transdisciplinarity and citizen participation in science, can only be addressed if their guiding norms are opened up for discussion. Establishing quality criteria is an evaluative and normative issue in itself, as discussed above. To the extent that research governance frameworks are insufficient for transdisciplinarity and CS, different criteria need to be employed to monitor and evaluate projects in a way that promotes the qualities espoused by these approaches.
TDR and CS have much to learn with respect to normativity in research. CS can learn from transdisciplinarity's deliberative approach, which allows it to understand issues of normativity and epistemic quality as integral to open research, as well as its contested problem-framing and the implications of participation as a value in and of itself. In turn, CS can share its experiences of doing science, which is a diverse practice when performed in a participatory way and suggests multiple modes of knowledge production that are mutually embedded in science and other parts of society.
Conclusions
Implementing transdisciplinarity as a model of sustainability research poses a variety of challenges: inclusion of nonacademic actors throughout the research process, integration of different types of knowledge and worldviews, development of appropriate quality criteria and sensitivity to normativity. We have identified areas of mutual learning to address these challenges. From CS, transdisciplinary researchers can learn about the diversity of ways to engage knowledge holders, produce and share knowledge. We argue that this richness of practice can inspire participatory research and TDR and thus provide new ideas of how to overcome the above challenges. At the same time, CS is no silver bullet. It often faces limited societal inclusion and precludes consideration of representation or normativity in the production of knowledge. Here, CS can, and should, examine lessons from TDR. In particular, the rich debates and experiences within transdisciplinarity related to quality criteria and consideration of normativity could greatly stimulate CS practice.
With these considerations in mind, we suggest that transdisciplinary sustainability research should strive to systematically integrate CS formats as types of participatory practice. In this way, different knowledge domains and expertise from various sectors of society can be included to enhance the innovation potential of Future Earth science.
Weak dispersion of exciton Landé factor with band gap energy in lead halide perovskites: Approximate compensation of the electron and hole dependences
The photovoltaic and optoelectronic properties of lead halide perovskite semiconductors are controlled by excitons, so that investigation of their fundamental properties is of critical importance. The exciton Landé or g-factor g_X is the key parameter, determining the exciton Zeeman spin splitting in magnetic fields. The exciton, electron and hole carrier g-factors provide information on the band structure, including its anisotropy, and the parameters contributing to the electron and hole effective masses. We measure g_X by reflectivity in magnetic fields up to 60 T for lead halide perovskite crystals. The materials band gap energies at a liquid helium temperature vary widely across the visible spectral range from 1.520 up to 3.213 eV in hybrid organic-inorganic and fully inorganic perovskites with different cations and halogens: FA_{0.9}Cs_{0.1}PbI_{2.8}Br_{0.2}, MAPbI_{3}, FAPbBr_{3}, CsPbBr_{3}, and MAPb(Br_{0.05}Cl_{0.95})_{3}. We find the exciton g-factors to be nearly constant, ranging from +2.3 to +2.7. Thus, the strong dependences of the electron and hole g-factors on the band gap roughly compensate each other when combining to the exciton g-factor. The same is true for the anisotropies of the carrier g-factors, resulting in a nearly isotropic exciton g-factor. The experimental data are compared favorably with model calculation results.
I. INTRODUCTION
Lead halide perovskite semiconductors currently attract great attention due to their exceptional electronic and optical properties, which make them highly promising for applications in photovoltaics, optoelectronics, radiation detectors, etc. [1][2][3][4][5] Their flexible chemical composition APbX3, where the cation A can be cesium (Cs+), methylammonium (MA+), or formamidinium (FA+) and the anion X can be I−, Br−, or Cl−, offers huge tunability of the band gap from the infrared up to the ultraviolet spectral range.
The optical properties of perovskite semiconductors in the vicinity of the band gap are controlled by excitons, which are electron-hole pairs bound by the Coulomb interaction. The exciton binding energy ranges from 14 to 64 meV 6,7, making excitons stable at room temperature, at least for the larger binding energies. In-depth studies of exciton properties, such as their energy and spin level structure or their relaxation dynamics, disclosing unifying trends for the whole class of lead halide perovskites, are therefore of key importance for basic and applied research.
The band structure of lead halide perovskites is inverted compared to conventional III-V and II-VI semiconductors. As a result, the strong spin-orbit interaction influences the conduction rather than the valence band. Spin physics provides high precision tools for addressing electronic states in the vicinity of the band gap: the Landé g-factors of electrons and holes are inherently linked via their values and anisotropies to the band parameters, which in turn determine the charge carrier effective masses 8,9. On the other hand, the g-factors are the key parameters for the coupling of carrier spins to a magnetic field and thus govern spin-related phenomena and spintronics applications, a largely uncharted area for perovskites.
We recently showed experimentally and theoretically that a universal dependence of the electron and hole g-factors, showing strong variations with the band gap energy, can be established for the whole family of hybrid organic-inorganic and fully inorganic lead halide perovskite crystals 10. These measurements were performed by time-resolved Kerr rotation and spin-flip Raman scattering spectroscopy, including an analysis of the g-factor anisotropy. As the exciton g-factor g_X is composed of the electron and hole g-factors, one may expect a similarly universal dependence for g_X. As some g-factor renormalization due to the finite carrier k-vectors in the exciton is expected, a direct measurement of the free exciton Zeeman splitting, e.g., by magneto-reflectivity or magneto-absorption, provides valuable insight. The data published so far concern mostly polycrystalline materials with broad exciton lines, which diminish the accuracy of exciton g-factor evaluation 6,7,11,12. Magneto-optical studies of high quality crystals are needed to that end, in combination with high magnetic fields providing large Zeeman splittings [13][14][15].
In this manuscript, we measure the exciton g-factors in lead halide perovskite crystals with different band gap energies using magneto-reflectivity in strong magnetic fields up to 60 T. We compare the experimental data with the electron and hole g-factors measured by time-resolved Kerr ellipticity and spin-flip Raman scattering. We find that the exciton g-factor is nearly independent of the band gap energy, which varies from 1.5 to 3.2 eV through the choice of cations and/or halogens in the perovskite composition. This behavior is in good agreement with the results of model calculations. We also find that the anisotropies of the electron and hole g-factors compensate each other, such that the net exciton g-factor becomes isotropic.
II. RESULTS
We studied five lead halide perovskite crystals with band gap energies (E_g) covering basically the whole visible spectral range from 1.5 up to 3.2 eV. Details of the synthesis of these high-quality crystals are given in the Supporting Information, S1. The hybrid organic-inorganic compounds FA0.9Cs0.1PbI2.8Br0.2 (E_g = 1.520 eV at the cryogenic temperature of T = 1.6 K) and MAPbI3 (E_g = 1.652 eV) have band gap energies close to the near-infrared. Replacing the iodine halogen with bromine and chlorine results in a blue-shift of the band gap for FAPbBr3 (2.216 eV) and MAPb(Br0.05Cl0.95)3 (3.213 eV). To develop a complete picture, we also study the fully inorganic perovskite CsPbBr3 (2.359 eV).
A. Optical properties
An overview of the optical properties of the studied crystals at the temperature of T = 1.6 K is given in Figure 1.We are interested in the properties of free excitons, which exhibit pronounced resonances in the reflectivity spectra of four studied crystals, but not for FA 0.9 Cs 0.1 PbI 2.8 Br 0.2 .For the latter, we measured the photoluminescence excitation (PLE) spectrum, where the pronounced peak at 1.506 eV corresponds to the exciton resonance energy, E X .The exciton resonances are marked by the arrows in all panels and their energies are summarized in Table I.
Photoluminescence (PL) spectra are also shown in Figure 1. At low temperature, the excitons are typically localized or bound to impurities, and the emission from free excitons is weak. It can be seen as a weak PL shoulder for FA0.9Cs0.1PbI2.8Br0.2 and MAPbI3 only. The PL spectra are Stokes-shifted from the free exciton resonances and are composed of several lines, the origins of which are discussed in the literature but are not yet fully clarified [16][17][18][19]. The recombination dynamics, examples of which we show in the Supporting Information, Figure S1, are complex and show different characteristic time scales ranging from hundreds of picoseconds, as typical for free exciton lifetimes, up to tens of microseconds. The longer times are associated with resident electrons and holes, which are spatially separated due to localization at different sites. The dispersion in distances between an electron and a hole, resulting in different overlaps between their wave functions and trapping-detrapping processes, provides a strong variation of decay times. The coexistence of long-lived localized electrons and holes, which we refer to as resident carriers, is typical for lead halide perovskites, as confirmed by spin-dependent experimental techniques 10,14,20,21.
B. Measurement of exciton, electron and hole g-factors
The band gap in lead halide perovskite semiconductors is located at the R-point of the Brillouin zone for the cubic crystal lattice and at the Γ-point for the tetragonal or orthorhombic lattices 22 .In all these cases the states at the bottom of the conduction band and the top of the valence band have spin 1/2.In an external magnetic field B, the electron (hole) Zeeman splittings, E Z,e(h) = µ B g e(h) B, are determined by the g-factors, g e and g h .Here µ B is the Bohr magneton.
The bright (optically-active) exciton, composed of an electron and a hole, has the angular momentum L = ±1. Its spin sublevels are split by the Zeeman energy E_Z = μ_B g_X B with the exciton g-factor g_X. In the Faraday geometry with the magnetic field parallel to the light wave vector (B_F ∥ k), the exciton states with opposite spin orientation can be distinguished by the circular polarization of the reflected light.
An example of such measurements for the FAPbBr3 crystal is given in Figure 2a, where the reflectivity spectra detected in σ+ or σ− polarization at B_F = 7 T are shown. One clearly sees the Zeeman splitting of the exciton resonance with E_Z = 1.09 meV. The dependence of E_Z on B_F is a linear one, see Figure 2b. From the slope we evaluate the exciton g-factor g_F,X = +2.7. Note that in this experiment the g-factor sign can be determined: positive values correspond to a high (low) energy shift of the σ+ (σ−) polarized resonance. We performed similar magneto-reflectivity experiments for MAPbI3, CsPbBr3, and MAPb(Br0.05Cl0.95)3, and the measured exciton g-factors (g_F,X) are given in Table I.
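To make the conversion between the measured Zeeman splitting and the g-factor explicit, the short Python sketch below fits a linear E_Z(B_F) dependence and extracts g_X = E_Z/(μ_B B_F). The field values and splittings used here are illustrative, generated from the reported g-factor rather than taken from the measurement, and are chosen only to reproduce the g_F,X ≈ +2.7 quoted above (e.g., E_Z ≈ 1.09 meV at 7 T).
    import numpy as np

    MU_B = 5.7883818060e-5  # Bohr magneton in eV/T

    # Illustrative magneto-reflectivity data: field (T) and exciton Zeeman splitting (meV),
    # generated to follow E_Z = g_X * mu_B * B with g_X = 2.7, as in the FAPbBr3 example.
    B = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
    E_Z_meV = 2.7 * MU_B * B * 1e3  # in a real analysis these would be the measured splittings

    # Linear fit E_Z = slope * B; the exciton g-factor is slope / mu_B.
    slope_meV_per_T = np.polyfit(B, E_Z_meV, 1)[0]
    g_X = slope_meV_per_T * 1e-3 / MU_B
    print(f"E_Z at 7 T: {E_Z_meV[-1]:.2f} meV, g_X = {g_X:+.2f}")  # ~1.09 meV, +2.70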
The Zeeman splitting of the bright exciton is given by the sum of the electron and hole Zeeman splittings, so that g_X = g_e + g_h (1). To what extent this equation is exactly fulfilled needs to be checked specifically for each material, as some renormalization of the exciton g-factor may take place when the carriers are bound to form an exciton, in which they both are in motion. The renormalization could be caused by band mixing at finite wave vectors. We show below that Eq. (1) is reasonably well fulfilled for the studied materials. Time-resolved Kerr rotation and Kerr ellipticity are powerful techniques for measuring directly the carrier g-factors by analyzing the Larmor precession frequencies, ω_L, of their spins 23. In lead halide perovskites, resident electrons and holes coexist at low temperatures, so that their g-factors can be measured in one sample in a single experiment 10,14,20,21,24,25. We perform measurements of the time-resolved Kerr ellipticity (TRKE) on the FAPbBr3 crystal, with the pump and probe laser energy tuned to the exciton resonance. The TRKE dynamics measured in the Voigt geometry with the magnetic field perpendicular to the light wave vector (B_V ⊥ k, B_V = 0.5 T) are shown in Figure 2c by the blue line. The trace contains two Larmor precession frequencies, which we decompose by the fit function given in the Experimental Section. We plot the electron and hole contributions separately in the same figure. Based on the established universal dependence of the carrier g-factors on the band gap energy in lead halide perovskites 10, we assign the higher Larmor frequency to the electrons and the smaller frequency to the holes. Their g-factors of g_V,e = +2.44 and g_V,h = +0.41 are evaluated with high accuracy from the magnetic field dependence of the Zeeman splitting E_Z,e(h) = ħω_L,e(h) (Figure 2d), where ħ is the reduced Planck constant, using |g_e(h)| = ħω_L,e(h)/(μ_B B).
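As a numerical cross-check of the relation |g_e(h)| = ħω_L,e(h)/(μ_B B), the sketch below converts the g-factors quoted for FAPbBr3 into Larmor precession frequencies at B_V = 0.5 T. The physical constants are standard; only the rounding of the printed values is ours.
    import numpy as np

    MU_B = 9.2740100783e-24   # Bohr magneton, J/T
    HBAR = 1.054571817e-34    # reduced Planck constant, J*s

    def larmor_frequency_GHz(g, B):
        """Larmor precession frequency nu_L = g * mu_B * B / h for a carrier with g-factor g."""
        omega_L = g * MU_B * B / HBAR          # angular frequency, rad/s
        return omega_L / (2 * np.pi) / 1e9     # linear frequency in GHz

    B_V = 0.5  # T, as in the TRKE measurement on FAPbBr3
    for label, g in [("electron", 2.44), ("hole", 0.41)]:
        print(f"{label}: g = {g:+.2f} -> nu_L = {larmor_frequency_GHz(g, B_V):.2f} GHz")
    # electron: ~17.1 GHz, hole: ~2.9 GHz -- the two frequencies contained in the Kerr ellipticity trace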
The sign of the g-factor cannot be determined directly from the TRKE dynamics in the Voigt geometry.However, dynamic nuclear polarization in tilted magnetic field 20 and theoretical calculation of the carrier g-factors' universal dependence 10 allow us to identify it as positive for both electrons and holes in FAPbBr 3 .
We measure the anisotropy of the electron and hole g-factors in FAPbBr3 by rotating the magnetic field using a vector magnet, see the Supporting Information, Figure S2. The g-factor anisotropy is small, namely the g-factors in the Faraday geometry (g_F,e = +2.32 and g_F,h = +0.36) are close to the ones in the Voigt geometry (g_V,e = +2.44 and g_V,h = +0.41). This is typical for lead halide perovskites with an FA cation, as in these materials the structural tolerance factor is close to unity and structural modifications at low temperatures are weak. Also for FA0.9Cs0.1PbI2.8Br0.2, we earlier reported nearly isotropic electron and hole g-factors 20.
The sum of the carrier g-factors g_F,e + g_F,h = +2.68 in FAPbBr3 coincides closely with the exciton g-factor g_F,X = +2.7 obtained from magneto-reflectivity measurements. These values are also close for MAPbI3 (g_F,e + g_F,h = +2.23 and g_F,X = +2.3) and furthermore do not differ much in CsPbBr3 (g_F,e + g_F,h = +2.71 and g_F,X = +2.35), see Table I. Note that the experimental accuracy of g-factor values obtained from magneto-reflectivity is ±0.1 and from the TRKE is ±0.05.
C. Anisotropy of exciton g-factor
In CsPbBr3 crystals, the anisotropy of the electron and hole g-factors is pronounced. The results measured by spin-flip Raman scattering (SFRS), for details see the Supporting Information, Figure S3, are shown in Figure 3a. Here θ is the angle of the magnetic field tilt from the Faraday (θ = 0°) to the Voigt (θ = 90°) geometry. In this experiment, the c-axis is oriented perpendicular to the light k-vector, so that B ∥ c corresponds to the Voigt geometry. Note that, contrary to TRKE, spin-flip lines are detectable also in the Faraday geometry and, therefore, g_F,e(h) are measured directly. The g_e and g_h dependences on θ are described by Eq. (2). The electron and hole g-factors are both positive in CsPbBr3, but their anisotropies are orthogonal to each other. The electron g-factor is largest in the Faraday geometry (g_F,e = +2.06) and decreases in the Voigt geometry to g_V,e = +1.69, while the hole g-factor increases from the Faraday towards the Voigt geometry from g_F,h = +0.65 up to g_V,h = +0.85. Interestingly, the sum of the carrier g-factors changes only weakly from +2.71 to +2.54, see the blue crosses in Figure 3a, demonstrating that the exciton g-factor anisotropy is weak. This is a common feature for the lead halide perovskites, as confirmed by the data for MAPbI3, where the strong anisotropies of the electron and hole g-factors nearly compensate each other in their sum, see Table I.
TABLE I. Parameters of lead halide perovskite crystals at cryogenic temperatures of 1.6−10 K (recoverable column headers: Material, Eg (eV), EX (eV), gV,e, gV,h, gV,e+gV,h, gF,e, ...; the numerical entries are not reproduced here). * MAPbI3 shows a complex anisotropy of the electron and hole g-factors 10; here the given values of gV correspond to θ = 60° and gF to θ = 150°. # The value is obtained from a linear approximation of the dependence of the exciton binding energy EB on Eg. mX is the exciton effective mass. & In the original papers, the sign of the g-factor was not given.
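The explicit form of Eq. (2) is not reproduced here; a commonly used form for such a uniaxial anisotropy is g(θ) = sqrt(g_F² cos²θ + g_V² sin²θ), and the sketch below uses this assumed form together with the CsPbBr3 values quoted above only to illustrate how the individual anisotropies largely cancel in the sum g_e(θ) + g_h(θ). The functional form is our assumption, not taken from the paper.
    import numpy as np

    def g_of_theta(g_F, g_V, theta_deg):
        """Assumed uniaxial interpolation between Faraday (theta=0) and Voigt (theta=90) g-factors."""
        t = np.radians(theta_deg)
        return np.sqrt((g_F * np.cos(t))**2 + (g_V * np.sin(t))**2)

    theta = np.linspace(0, 90, 7)
    g_e = g_of_theta(2.06, 1.69, theta)   # electron, CsPbBr3 (SFRS values)
    g_h = g_of_theta(0.65, 0.85, theta)   # hole, CsPbBr3 (SFRS values)

    for th, ge, gh in zip(theta, g_e, g_h):
        print(f"theta = {th:4.0f} deg: g_e = {ge:.2f}, g_h = {gh:.2f}, g_e + g_h = {ge + gh:.2f}")
    # The sum stays within roughly +2.5 ... +2.7, i.e., the exciton g-factor is nearly isotropic.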
D. Exciton Zeeman splitting in strong pulsed magnetic fields
We examined the exciton Zeeman splitting in the CsPbBr3 crystal in very strong magnetic fields up to 60 T, using a pulsed magnet at the National High Magnetic Field Laboratory in Los Alamos. The experiments were motivated by the possibility to reach large Zeeman splittings, thus improving the accuracy of the exciton g-factor evaluation, and also by searching for a possible nonlinearity of the Zeeman splitting caused by field-induced band mixing. Magneto-reflectivity was measured at T = 1.6 K and the Zeeman splitting of the oppositely circularly polarized exciton resonances (similar to Figure 2a) was assessed. The results are shown in Figure 3b. The exciton Zeeman splitting increases linearly with magnetic field over the whole range up to 60 T. The evaluated g-factor is g_F,X = +2.35. The high linearity indicates that band mixing is negligibly small in lead halide perovskites even in very strong magnetic fields. This is explained by the simple spin structure (spin 1/2) of the electronic states in the vicinity of the band gap, which contribute to the exciton wave function. The shift of the higher-lying electron states with momentum 3/2 due to the spin-orbit splitting in the conduction band is about 1.5 eV, which is large enough to exclude a significant admixture of these states to the ground exciton state by the magnetic field. Note that in this respect the lead halide perovskites principally differ from conventional III-V and II-VI semiconductors, for which strongly nonlinear Zeeman splittings were reported in GaAs- and CdTe-based quantum wells 27,28.
E. Band gap dependence of exciton g-factor
To summarize the information on the exciton g-factors for the whole class of lead halide perovskites and highlight general trends, we show in Figure 4 the experimental data collected in Table I as a function of the band gap energy. Our data for g_F,X measured by magneto-reflectivity are shown by the closed red circles. We combine them with literature data (open red circles), also measured by magneto-reflectivity on MAPbI3 and MAPbBr3 crystals 13,15. One can see that the exciton g-factor is nearly independent of the band gap energy varying from 1.5 to 3.2 eV. The exciton g-factors change only in a small range from +2.3 to +2.7. Compared to the average, this corresponds to a variation well below 10%, while the electron g-factor varies by significantly more than 50%.
We show in Figure 4 also the sum values of g F,e + g F,h by the crosses, which follow closely the g F,X values.Therefore, we conclude that the renormalization of the carrier g-factors in the excitons is small in the lead halide perovskites.
Let us compare our experimental data with model predictions. We showed recently experimentally and theoretically that the electron and hole g-factors in the lead halide perovskites follow universal dependences on the band gap energy 10. The corresponding calculations are given in Figure 4 by the dashed lines. They account for the fact that in the vicinity of the band gap the band structure of hybrid organic-inorganic and of fully inorganic lead halide perovskites is strongly contributed by Pb orbitals. Then, for the holes in the valence band, the main contribution to the g-factor is due to k·p mixing with the conduction band. For the cubic phase it is described by Eq. (3) 10,29. Here p is the interband matrix element of the momentum operator, Δ = 1.5 eV is the spin-orbit splitting of the conduction band, m_0 is the free electron mass, and p/m_0 = 6.8 eV Å. For electrons, the k·p mixing with the top valence band and the remote valence states are important, as described by Eq. (4). Here Δg_e = −1 is the remote band contribution evaluated as a fitting parameter in Ref. 10. One clearly sees that both electron and hole g-factors change strongly with increasing E_g. Using Eqs. (3) and (4), we obtain Eq. (5) for the sum of the electron and hole g-factors. It is shown by the solid blue line in Figure 4. One can see that it has a much weaker dependence on the band gap compared to the g-factors of the individual carriers. The calculated dependence for g_e + g_h is in good agreement with the experimental data on both g_F,X and g_e + g_h shown in Figure 4. Note that a similar cancellation is valid for the angle dependence of the exciton g-factor, shown exemplarily for CsPbBr3 in Figure 3a. The related formalism has been developed in Ref. 10, and we give its key equations in the Supporting Information, S5. The observed, approximate independence of the exciton g-factor on the band gap E_g is a consequence of the underlying simple band structure, determined by the lowest conduction band states |c, ±1/2⟩ and the highest valence band states |v, ±1/2⟩, which both show only a two-fold spin degeneracy. As shown above 10, the contributions to the individual g-factors from the k·p mixing of these bands, proportional to 1/E_g, cancel each other in the exciton g-factor. This is in striking contrast to the Zeeman splitting of the exciton sublevels in conventional III-V or II-VI bulk semiconductors of cubic symmetry, like GaAs or CdTe. First, in these well-studied semiconducting materials, the conduction band Γ6 is simple, but the electron g-factor is strongly dependent not only on E_g, but also on the spin-orbit splitting Δ of the valence band Γ8, the latter varying over a wide range 30. Second and more importantly, the hole is characterized by the effective angular momentum j = 3/2, so that it is four-fold degenerate. The valence band degeneracy leads to complicated exciton level splitting and structure of its wave function as a result of the interplay of heavy hole-light hole mixing and electron-hole exchange interaction [31][32][33]. Even if the field-induced hole mixing and the exchange interaction in the exciton are disregarded, the effective g-factor of the bright exciton (A-exciton) contains 1/E_g contributions.
While the mixing of the bands forming the gap thus cancels in g_X, the mixing of the valence band with the higher-lying, remote conduction band still contributes, but here the variation with E_g is damped by adding the spin-orbit splitting, which is comparatively large in the perovskites. The situation is similar to the one in transition metal dichalcogenide monolayers, where the bright exciton g-factor is likewise largely determined by the remote bands, while the mutual contributions of the valence band to the electron g-factor and of the conduction band to the hole g-factor cancel out in the exciton g-factor [34][35][36].
Up to now we discussed the band gap dependence of the bright exciton g-factor, g X = g e + g h .For the optically-forbidden dark exciton the g-factor value is given by the difference g e −g h .Due to the opposite variation of the electron and hole g-factors, the dependence on E g is expected to be even stronger for the dark exciton than for individual electron and hole, see the Supporting Information, Figure S4.Also a stronger anisotropy for the dark exciton g-factor, compared to the bright exciton g-factor which is nearly isotropic, is expected.
To summarize, the exciton g-factors in hybrid organic-inorganic and fully inorganic lead halide perovskites have positive signs and show relatively constant values over a large range of band gap energies. Further, they are nearly isotropic. It turns out that the strong band gap dependences and anisotropies of the individual electron and hole g-factors contributing to the exciton largely compensate each other. It would be important to extend these studies to perovskite semiconductors based on other metal ions, e.g., lead-free Sn-based materials. Also the role of quantum confinement for carriers and excitons in perovskite nanocrystals and two-dimensional materials would be interesting to examine.
Samples:
The class of lead halide perovskites possesses the composition APbX 3 , where the A-cation is typically Cs + , methylammonium (MA + , CH 3 NH + 3 ), or formamidinium (FA + , CH(NH 2 ) + 2 ), and the X-anion is one of the halogens Cl − , Br − , or I − , giving rise to a high flexibility.The flexibility is only limited by a favorable ratio of the anion to cation ion radii, named the Goldschmidt tolerance factor t, which should be close to unity 37 .By varying the composition, the band gap of these perovskite materials can be tuned from the infrared up to the ultraviolet spectral range.All studied samples are lead halide perovskite single crystals grown out of solution with the inverse temperature crystallization (ITC) technique 5,38,39 .For specific crystals the ITC protocols were modified, and for the five crystals studied here (FA 0.9 Cs 0.1 PbI 2.8 Br 0.2 , MAPbI 3 , FAPbBr 3 , CsPbBr 3 , and MAPb(Br 0.05 Cl 0.95 ) 3 ) details of their synthesis are given in the Supporting Information (Section S1) and in Ref. 10 .
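The Goldschmidt tolerance factor mentioned above is commonly defined as t = (r_A + r_X)/[sqrt(2) (r_B + r_X)], with r_A, r_B, r_X the ionic radii of the A-cation, the B-cation (here Pb2+) and the halide. The sketch below evaluates it for CsPbBr3 using tabulated Shannon ionic radii as illustrative inputs; the radii and the resulting value are not taken from this paper.
    import math

    def tolerance_factor(r_A, r_B, r_X):
        """Goldschmidt tolerance factor t = (r_A + r_X) / (sqrt(2) * (r_B + r_X))."""
        return (r_A + r_X) / (math.sqrt(2) * (r_B + r_X))

    # Illustrative Shannon ionic radii in Angstrom (assumed values, not from this paper):
    r_Cs, r_Pb, r_Br = 1.88, 1.19, 1.96
    t = tolerance_factor(r_Cs, r_Pb, r_Br)
    print(f"CsPbBr3: t = {t:.2f}")  # ~0.86, within the range usually quoted as perovskite-forming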
Magneto-reflectivity: The samples were placed in a cryostat and immersed in either superfluid liquid helium at T = 1.6 K or in gas helium for T = 6 − 10 K.For experiments in dc magnetic fields, two cryostats equipped with a split-coil superconducting magnet were used.One with a single split coil can generate magnetic fields up to 10 T in a fixed direction.Another is a vector magnet with three orthogonally oriented split coils, allowing us to apply magnetic fields up to 3 T in any direction.A sketch of the experimental geometry is shown in Figure 3a.
A halogen lamp was used for measuring magneto-reflectivity. The light wave vector, k, was perpendicular to the sample surface and the reflectivity was measured in backscattering geometry. The signal was analyzed for σ+ and σ− circular polarization and recorded after dispersion with a 0.5-meter spectrometer with a silicon charge-coupled-device camera. The external magnetic field was applied parallel to the light k-vector, B_F ∥ k (Faraday geometry). In the TRKE and SFRS experiments, we also used the Voigt geometry (B_V ⊥ k), as well as tilted geometries between Faraday and Voigt. Here the tilt angle θ is defined as the angle between B_F and B_V, where θ = 0° corresponds to the Faraday geometry. Experiments in pulsed magnetic fields up to 60 T were performed in the Faraday geometry, following previously described methods 41.
Time-resolved Kerr ellipticity (TRKE): The coherent spin dynamics were measured by a pump-probe setup, where pump and probe had the same photon energy, emitted from a pulsed laser 23. The titanium-sapphire (Ti:Sa) laser emitted 1.5 ps pulses with a spectral width of about 1 nm (1.5 meV) at a pulse repetition rate of 76 MHz (repetition period T_R = 13.2 ns). An optical parametric oscillator (OPO) with internal frequency doubling was used to convert the photon energy of the Ti:Sa laser so that it can be resonantly tuned to the exciton resonance in FAPbBr3.
The laser beam was split into two beams, the pump and the probe.The probe pulses were delayed with respect to the pump pulses by a mechanical delay line.Both pump and probe beams were modulated using photo-elastic modulators (PEMs).The linear polarization of the probe was fixed, but its amplitude was modulated at the frequency of 84 kHz.The pump beam helicity was modulated between σ + and σ − circular polarization at the frequency of 50 kHz.The elliptical polarization of the reflected probe beam was analyzed via balanced photodiodes and a lock-in amplifier (Kerr ellipticity).In a transverse magnetic field, the Kerr ellipticity amplitude oscillates in time due to the Larmor spin precession of the carriers, decaying at longer time delays.When both electrons and holes contribute to the Kerr ellipticity signal, which is the case for the studied perovskite crystals, the signal can be described as a superposition of two decaying oscillatory functions: A KE = S e cos(ω L,e t) exp(−t/T * 2,e ) + S h cos(ω L,h t) exp(−t/T * 2,h ).Here S e(h) are the signal amplitudes that are proportional to the spin polarization of electrons (holes), and T * 2,e(h) are the carrier spin dephasing times.The g-factors are evaluated from the Larmor precession frequencies ω L,e(h) via |g e(h) | = ω L,e(h) /(µ B B).It is important to note that TRKE provides information on the g-factor magnitude, but not on its sign.The sign of the carrier g-factors can be identified in tilted magnetic field geometry through the dynamic nuclear polarization effect, for details see Refs. 20,21.
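The two-component fit function quoted above can be implemented directly; the sketch below generates a synthetic Kerr ellipticity trace and recovers the two Larmor frequencies with scipy.optimize.curve_fit. All numerical values (amplitudes, dephasing times, noise level) are placeholders chosen for illustration, not fit results from the paper.
    import numpy as np
    from scipy.optimize import curve_fit

    def kerr_ellipticity(t, S_e, w_e, T2_e, S_h, w_h, T2_h):
        """A_KE(t) = S_e cos(w_e t) exp(-t/T2_e) + S_h cos(w_h t) exp(-t/T2_h)."""
        return (S_e * np.cos(w_e * t) * np.exp(-t / T2_e)
                + S_h * np.cos(w_h * t) * np.exp(-t / T2_h))

    # Synthetic trace (time in ns): electron ~17.1 GHz and hole ~2.9 GHz precession at 0.5 T.
    t = np.linspace(0, 2.0, 2000)
    true = dict(S_e=1.0, w_e=2*np.pi*17.1, T2_e=0.6, S_h=0.5, w_h=2*np.pi*2.9, T2_h=1.5)
    signal = kerr_ellipticity(t, **true) + 0.02 * np.random.default_rng(0).normal(size=t.size)

    p0 = [1.0, 2*np.pi*17.0, 0.5, 0.5, 2*np.pi*3.0, 1.0]   # reasonable starting guesses
    popt, _ = curve_fit(kerr_ellipticity, t, signal, p0=p0)
    print("fitted electron frequency: %.1f GHz, hole frequency: %.1f GHz"
          % (popt[1] / (2*np.pi), popt[4] / (2*np.pi)))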
Spin-flip Raman scattering (SFRS): This technique allows us to measure directly the Zeeman splitting of the electron and hole spin sublevels via the spectral shift of the scattered light from the laser photon energy 40 .The energy shift is provided by a spin-flip of the carriers, with the required energy taken from or provided by phonons.The typical shifts do not exceed 1 meV at magnetic field of 10 T, which demands a high spectral resolution provided by high-end spectrometers with excellent suppression of scattered laser light.The experiments were performed for the CsPbBr 3 sample immersed in superfluid liquid helium (T = 1.6 K).We used resonant excitation at 2.330 eV in the vicinity of the exciton resonance of CsPbBr 3 in order to enhance the SFRS signal.The resonant Raman spectra were measured in backscattering geometry using the laser power density of 1 Wcm −2 .The scattered light was analyzed by a Jobin-Yvon U1000 double monochromator (1 meter focal length) equipped with a cooled GaAs photomultiplier and conventional photon counting electronics.The spectral resolution of 0.2 cm −1 (0.024 meV) allowed us to measure the SFRS signals in close vicinity of the laser line for spectral shifts ranging from 0.1 to 3 meV.The SFRS spectra were measured for cross-polarized (σ − /σ + ) circular polarizations of excitation (σ − ) and detection (σ + ).
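To relate the SFRS spectral shifts to g-factors, the Zeeman splitting follows from ΔE = g μ_B B, or equivalently g = ΔE/(μ_B B). The sketch below back-calculates the shifts for the CsPbBr3 values reported later in the Supporting Information (g_e = +1.85, g_h = +0.75 at 5 T); the meV-to-cm^-1 conversion is the standard factor, and the printed numbers are our own illustrative check rather than values quoted in the paper.
    MU_B = 5.7883818060e-5        # Bohr magneton, eV/T
    MEV_PER_INV_CM = 0.1239841984 # 1 cm^-1 expressed in meV

    def raman_shift(g, B_tesla):
        """Spin-flip Raman shift (meV, cm^-1) for a carrier with g-factor g in field B."""
        shift_meV = g * MU_B * B_tesla * 1e3
        return shift_meV, shift_meV / MEV_PER_INV_CM

    for label, g in [("electron", 1.85), ("hole", 0.75)]:
        meV, inv_cm = raman_shift(g, 5.0)
        print(f"{label}: {meV:.2f} meV = {inv_cm:.1f} cm^-1")
    # electron: ~0.54 meV (~4.3 cm^-1), hole: ~0.22 meV (~1.8 cm^-1), well above the 0.024 meV resolution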
S1. Samples
The class of lead halide perovskites possesses the APbX3 composition, where the A-cation is typically Cs, methylammonium (MA+, CH3NH3+) or formamidinium (FA+, CH(NH2)2+), and the X-anion is one of the halogen ions Cl−, Br−, or I−, giving rise to a high flexibility. This flexibility is only limited by a favorable ratio of the anion to cation ion radii, named the Goldschmidt tolerance factor t, which should be close to unity 1. By varying the composition, the band gap of these perovskite materials can be tuned from the infrared up to the ultraviolet spectral range. All studied samples are lead halide perovskite single crystals grown out of solution with the inverse temperature crystallization (ITC) technique [2][3][4]. For specific crystals the ITC protocols were modified.
FA 0.9 Cs 0.1 PbI 2.8 Br 0.2 crystals.α-phase FA 0.9 Cs 0.1 PbI 2.8 Br 0.2 single crystals were grown by the method described in detail in Ref. 2 .First, a solution of CsI, FAI, PbI 2 , and PbBr 2 , with γ-butyrolactone (GBL) as solvent is mixed.This solution is then filtered and slowly heated to 130 • C temperature, whereby single crystals are formed in the black phase of FA 0.9 Cs 0.1 PbI 2.8 Br 0.2 .Afterward the crystals are separated by filtering and drying.The α-phase (black phase) exhibits a cubic crystal structure at room temperature 5 .In the experiment the crystal was oriented with [001] pointing along the laser wave vector k.Note that the g-factor isotropy, the small shift of the PL line with temperature, and further analysis 2,6 suggest that the typical lead halide perovskite crystal distortion from cubic symmetry is small at cryogenic temperatures.The size of the studied crystal is about 2 × 3 × 2 mm3 .The crystal shape is non-cuboid, but the crystal structure exhibits aristotype cubic symmetry.Sample code: 515a.MAPbI 3 crystals.Methylammonium lead triiodine (MAPbI 3 ) single crystals were low temperature solutiongrown in a reactive inverse temperature crystallization (RITC) process, which utilizes a mixture of GBL with alcohol 4 .The mixed precursor solvent polarity is changed compared to pure GBL, causing a lower solubility of MAPbI 3 and an optimization of nucleation rates and centers, which result in an early crystallization at low temperatures.Black MAPbI 3 single crystals were obtained at a temperature of 85 • C. At room temperature a tetragonal phase with lattice constants of a = 0.893 nm and c = 1.25 nm was determined by powder X-ray diffraction (XRD) in reflection geometry 4 .The size of the studied crystal is about 4 × 3 × 2 mm 3 .The crystal has the shape of a planar truncated dodecahedron.The front facet was X-ray characterized to point along the a-axis 4 .Sample code: MAPI-SC04.
FAPbBr 3 crystals.The FAPbBr 3 single crystals were grown with an analogous approach as the other samples following the ITC approach.Specific extended information are given in Ref. 7 .The crystal is of reddish transparent appearance and has a rectangular cuboid shape with a size of 5 × 5 × 2 mm 3 .Sample code: OH0071a.
CsPbBr3 crystals. The CsPbBr3 crystals were grown with a slight modification of the ITC as stated above. Further information can be found in Ref. 3. First, CsBr and PbBr2 were dissolved in dimethyl sulfoxide. Afterward a cyclohexanol in N,N-dimethylformamide (DMF) solution was added. The resulting mixture was heated in an oil bath to 105 °C, whereby slow crystal growth appears. The obtained crystals were taken out of the solution and quickly loaded into a vessel with hot (100 °C) DMF. Once loaded, the vessel was slowly cooled down to about 50 °C. After that, the crystals were isolated, wiped with filter paper and dried. The obtained rectangular-shaped CsPbBr3 is crystallized in the orthorhombic modification. The crystals have one selected (long) direction along the c-axis [002] and two nearly identical directions along the [110] directions.
MAPb(Br0.05Cl0.95)3 crystals. The crystals were grown in the temperature window of 58-62 °C. The obtained crystal has a cuboid shape with dimensions of 1.64 × 1.65 × 2.33 mm³. The crystal is transparent and colorless. Sample code: dd2924.
S2. Time-resolved photoluminescence
The exciton recombination times were measured by pump-probe differential reflectivity and by timeresolved photoluminescence with a streak camera.In FA 0.9 Cs 0.1 PbI 2.8 Br 0.2 at T = 6 K the decay of the exciton population dynamics is 0.45 ns 6 .In MAPbI 3 at T = 4 K it is 0.3 ns 9 and in CsPbBr 3 at T = 10 K it is 0.9 ns 10 .
In this study, the long-lived recombination dynamics up to 100 µs were measured for the lead halide perovskite crystals by time-resolved photoluminescence (PL).The PL was excited by a pulsed laser with photon energy of 2.33 eV (532 nm wavelength), pulse duration of 5 ns, repetition rate of 10 kHz, and average excitation power of 8 µW.The detection energy (E det ) was selected by a double monochromator.The signal was detected using an avalanche photodiode and a time-of-flight card with a time resolution of 30 ns.The recombination dynamics cover a broad temporal range up to 100 µs.They cannot be fitted by monoexponential decays, evidencing that several recombination processes are involved.The PL dynamics contains three exponential decays for FA 0.9 Cs 0.1 PbI 2.8 Br 0.2 in addition to the short exciton recombination time, which is too short to be resolved in this experiment, see Figures S1b,c.The dynamics of MAPbI 3 show four exponential decays, see Figures S1e,f.The extracted times are given in Table S1.
The long recombination dynamics can be associated with the following processes: (i) recombination of electrons and holes localized at different crystal sites with significant dispersion in their separations 6,9-11 , (ii) carrier trapping and detrapping processes 12 , (iii) polaron formation 13 , (iv) slow in-depth carrier diffusion 14 .The clarification of the role of the specific mechanisms goes beyond the scope of the present study.Spectral dispersion of the recombination times is observed for both samples.
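The multi-exponential fits mentioned above can be reproduced with a simple least-squares sketch like the one below; the number of components and all time constants here are placeholders for illustration (the actual fitted times are collected in Table S1).
    import numpy as np
    from scipy.optimize import curve_fit

    def multi_exp(t, *params):
        """Sum of exponential decays: params = (A1, tau1, A2, tau2, ...), t and tau in microseconds."""
        out = np.zeros_like(t)
        for A, tau in zip(params[0::2], params[1::2]):
            out += A * np.exp(-t / tau)
        return out

    # Synthetic three-component decay as a stand-in for the measured PL transients.
    t = np.linspace(0, 100, 1000)                 # microseconds
    data = multi_exp(t, 1.0, 0.5, 0.3, 5.0, 0.05, 40.0)
    data += 0.002 * np.random.default_rng(1).normal(size=t.size)

    p0 = [1.0, 1.0, 0.2, 10.0, 0.05, 50.0]        # initial guesses: (A, tau) x 3
    popt, _ = curve_fit(multi_exp, t, data, p0=p0, maxfev=20000)
    print("fitted decay times (us):", [round(tau, 1) for tau in popt[1::2]])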
S3. Anisotropy of electron and hole g-factors in FAPbBr3
We measured the anisotropy of the g-factors of electrons and holes in FAPbBr3 by rotating the magnetic field in a vector magnet using time-resolved Kerr ellipticity. The magnetic field direction was changed from the Voigt geometry (θ = 90°) to the Faraday geometry (θ = 0°). The oscillations disappear in the Faraday geometry, but fitting with Eq. (2) describes the angular dependence of the extracted g-factors shown in Figure S2.
S4. Spin-flip Raman scattering in CsPbBr3
An example of a spin-flip Raman scattering (SFRS) spectrum in CsPbBr3 is given in Figure S3, which was measured in the tilted magnetic field geometry with θ = 45°. In order to reduce the background contribution of the resonantly excited photoluminescence, the SFRS spectrum was measured in the anti-Stokes spectral domain, i.e., the measured spectral range is shifted to higher energies from the laser. The laser excitation energy E_exc = 2.331 eV was in the vicinity of the exciton resonance. Clearly resolved lines of the hole and electron spin-flips are seen. Their spectral shifts at the magnetic field of 5 T correspond to g_e = +1.85 and g_h = +0.75.
S5. Model equations for the carrier and exciton g-factors
A detailed model consideration of the electron, hole and exciton g-factors, including their anisotropy in lead halide perovskites with tetragonal symmetry, is presented in Ref. 11. Here we give for convenience some equations from this paper.
For the bottom conduction band the g-factors are given by Eqs. (S1). Here the subscripts ∥ and ⊥ of the g-factors denote the direction of the magnetic field with respect to the C4 axis (c-axis), p∥ and p⊥ are the interband momentum matrix elements, and ϑ is the parameter that determines the relation between the crystalline splitting and the spin-orbit interaction.
For the valence band one obtains Eqs. (S2a) and (S2b). Here, Δ_he and Δ_le are the spin-orbit splittings between the bottom conduction band and the higher-energy bands of heavy electrons (he) and light electrons (le). In the cubic case Δ_le = Δ_he ≡ Δ and p∥ = p⊥ ≡ p.
Note that the valence band g-factors in the electron and hole representation have the same sign because the transformation from the electron to the hole representation includes both a change in the sign of energy and the time reversal. We define the Landé factor in such a way that, e.g., for B ∥ z the splitting E+1/2 − E−1/2 between the states with spin projection +1/2 and −1/2 onto the z axis is given by g_e μ_B B_z and g_h μ_B B_z. Using Eqs. (S1) and (S2) we also evaluate the bright exciton g-factor, which describes the splitting of the exciton radiative doublet into circularly polarized components, by g_X = g_e + g_h. In the tetragonal crystals the exciton g-factor is given by Eqs. (S3a) and (S3b). One can see that the contributions to the individual g-factors due to the k·p mixing of the valence band with the bottom conduction band, proportional to 1/E_g, cancel each other in the exciton g-factor.
S6. Modeling of the dark exciton g-factor
Here we give model results for the dark exciton g-factor calculated as g_e − g_h. We show in Figure S4 the model calculations for the electron, hole and bright exciton, which are presented in Figure 4 of the main text, and add here the results for the dark exciton (purple line). One can see that the dark exciton dependence is stronger than the one for the bright exciton and even stronger than the dependence for the electron.
Figure 2. Magneto-optical properties of excitons as well as resident electrons and holes in a FAPbBr3 crystal. (a) Reflectivity spectra measured in σ+ (red line) and σ− (blue line) polarization in the longitudinal magnetic field BF = 7 T at T = 1.6 K. (b) Exciton Zeeman splitting as a function of BF measured in magneto-reflectivity. The slope of the linear fit gives gF,X = +2.7. (c) Time-resolved Kerr ellipticity signal (blue) measured at BV = 0.5 T, T = 6 K, using the laser photon energy of 2.188 eV. The electron (black) and hole (green) components are obtained from decomposing the signal. (d) Dependence of the Zeeman splitting of the hole (green), the electron (black), and their sum (blue) on BV.
Figure 3. (a) Anisotropy of the electron (black circles) and hole (green circles) g-factors measured by SFRS for CsPbBr3 at B = 5 T and T = 1.6 K. The black and green lines are calculated with Eq. (2), using the parameters from Table I. The blue crosses are experimental data of ge + gh; the blue line is the sum of the fits shown by the black and green lines. (b) Magnetic field dependence of the exciton Zeeman splitting measured from magneto-reflectivity in the Faraday geometry for CsPbBr3 at T = 1.6 K. The symbols are experimental data and the line is a linear fit.
Figure 4. Dependence of the exciton g-factor measured by magneto-reflectivity (closed red circles are our data and open red circles are from Refs. 13,15) and of the sum of carrier g-factors evaluated from TRKE and SFRS (crosses) on the band gap energy. The data are taken at cryogenic temperatures of 1.6−10 K. The dashed lines are model calculations 10 with Eqs. (3) and (4) for the hole (green) and the electron (black) g-factors, which closely match the experimental data given in Table I. The blue line is the sum of their contributions calculated with Eq. (5).
Figure S1. Time-integrated PL of (a) FA0.9Cs0.1PbI2.8Br0.2 and (d) MAPbI3 measured at T = 1.6 K. The arrows show the spectral positions at which the PL dynamics were detected. Time-resolved PL measured at various energies for (b),(c) FA0.9Cs0.1PbI2.8Br0.2 and (e),(f) MAPbI3 in a 3 µs temporal range and in a 100 µs temporal range, respectively. The PL dynamics are fitted with a multi-exponential function with characteristic times collected in Table S1. The fits are shown by the black lines.
Figure S2. Angle dependence of electron and hole g-factors measured for FAPbBr3 by time-resolved Kerr ellipticity using the experimental parameters given in the caption of Figure 2 of the main text.
Figure S4. Modeling of the g-factor band gap energy dependence for the bright and dark excitons in lead halide perovskites. Dashed lines are model calculations with Eqs. (3) and (4) from the main text for the hole (green) and electron (black) g-factors. The blue line is for the bright exciton, calculated as ge + gh. The purple line is for the dark exciton, calculated as ge − gh.
Foreign Influence and Sound Change: A Case Study of Cantonese Alveolar Affricates
Language contact is one major factor in language change. In some cases such changes are brought in from a language of higher status. The present study examines a systematic phonological change among current young Hong Kong Cantonese speakers. L1 Cantonese speakers of both genders in three age groups were tested for production of the Cantonese alveolar affricate phonemes /ts/ and /tsʰ/ in a carrier sentence. English and Cantonese control sounds were also added to the reading list. Results show that speakers of the younger generation have a stronger tendency to substitute /ts/ and /tsʰ/ with the English sound /ʧ/ in the back-vowel context. Two probable reasons for such change, language contact and gestural proximity, were identified. The findings clearly acknowledge a sociolinguistic change of /ts/ > [ʧ] for the younger generation in contrast to the elder, and suggest a foreign influence that can possibly be traced back to the English language.
Introduction
Language change through language contact has been recorded in ways including lexical or structural borrowing [1]. However, structural borrowing, especially sound change, is less commonly documented. This study explores a sound change occurring in Cantonese through probable language contact with English. The sound change under investigation is usually found among Cantonese speakers of the younger generation. Recent studies have shown that many segmental (e.g., the merger between /n/ and /l/ [2]) and supra-segmental (e.g., tone merging between Cantonese tone 2 and tone 4 [3] [4]) sound changes have taken place in Cantonese. However, apart from the sound change within Cantonese itself, as illustrated above, could it be that this dialect can accommodate foreign influences as well? The present study intends to test whether young speakers in their 20s will produce the Cantonese alveolar affricates /ts/ /tsʰ/ with an "English touch" as the post-alveolar laminal affricate [tʃ]. It attempts to tackle language change not from an evolutionary point of view but from one of foreign contact: i.e., the language under investigation borrowing some new features from another language. To investigate this question, a production experiment sampling speakers from different age groups producing Cantonese and English sounds was performed.
Literature Review
Hong Kong Cantonese, a variant of the canonical Cantonese language or the Yue dialect, has a rich consonantal inventory including alveolar affricates (/ts/ and /tsʰ/) but no post-alveolar laminal affricates (/ʧ/ and /ʤ/). On the other hand, standard British and American English have a three-way distinction of /ts/, /ʧ/ and /ʤ/. Similar processes have been identified in some other languages in the literature. From a diachronic point of view, language change influenced by foreign languages, or creolization, may take place within or across typological boundaries [9]. For example, a Mayan dialect has palatalized the nasal sounds /m, n/ under the influence of a neighboring community (ibid.). Sri Lanka Creole has stress pattern rules transferred from Portuguese. A colonial inheritance was also identified for the latter, whose speakers are perceived to have more power and to be higher-class individuals. Similarly, Lai & Gooden [10] identified the socio-phonetic change of [ɮ]>[l] in Yami, a language in Taiwan, due to language contact with a more powerful language.
The reason for proposing foreign influence to resolve the current problem also lies in the observed instability of the alveolar affricates in Cantonese. Labov [11] notes that instability within a phoneme, usually visible as age or class differences, is a signpost of socio-phonetic change. Thus, the present study brings a new possible explanation, foreign influence, to the sound change of Cantonese in the current multilingual landscape of Hong Kong. Such an explanation differs from mainstream theory, which suggests that these changes are mostly intrinsic to the evolution of a single language.
Thus, as backed up by previous literature, the study intends to investigate the following research questions: 1. How do Cantonese speakers of different age groups produce Cantonese words with /ts h , ts/ and English words with /ʧ/? 2. Can the sound change /ts h / /ts/ > /ʧ/ be identified from any of these age groups?
3. If yes, what may be the underlying reason(s) for such sound change?
Methods
The study intends to test whether young speakers in their 20s produce the Cantonese alveolar affricates /tsʰ/ and /ts/ with an English accent, as the post-alveolar laminal affricate [tʃ]. A production experiment is designed. In the experiment, 12 native Cantonese-speaking participants from three age groups read out both English and Cantonese stimuli in a recording booth. All recorded sounds are identified by three trained phoneticians.
Participants
Participants are twelve speakers in three age groups, namely 20s, 30s and 60s (mean age=25.5, 37.4 and 61.6, std<3.506). The gender ratio is 1:1. They are all native Cantonese speakers, children of monolingual Cantonese parents, and are all educated with English. The 20s group are all college students and learned English from elementary school. No speech or hearing disorder has been reported. They are asked to read aloud the stimuli in front of a MD recorder in a sound booth.
Stimuli
Stimuli words in the experiment consist of both target stimuli and control words. The target stimuli are 15 Cantonese characters whose pronunciations have /tsʰ/ or /ts/ as the initial consonant. A series of control sounds is also chosen to test the hypothesis of English foreign influence. Firstly, 15 Chinese characters whose pronunciations begin with /tʰ/ are recorded as a control for age groups. Secondly, 10 monosyllabic English words with /tʃ/ as the initial consonant are also recorded as a control for language. All stimuli are grouped into four vowel contexts: high front (/i, y/), high back (/u/), low front (/a, ɛ/), and low back (/ɔ/). The number of stimuli in each vowel context is not balanced because of the lack of words for some conditions. The complete list of stimulus Chinese characters and English words is given in Table 1.
Procedure
The production experiment took place in a soundproof booth in Hong Kong. First, the participants were asked to read aloud the Cantonese stimuli in a carrier sentence "佢嘅名系唔系叫_____吖 (Is his name called_____?)". For English words, a similar sentence "Now I say_____ again" was used. Both Cantonese and English carrier sentences were controlled for a V__V phonetic environment for the acoustic clarity of segmenting the target affricate for analysis. Participants were asked to read these carrier sentences 10 times each in randomized order. The total number of tokens for Cantonese is 15 words × 2 word groups × 10 repetitions = 300, and for English the total number is 10 words × 10 repetitions = 100. All carrier sentences were recorded with a Shure SM57 microphone at a sampling rate of 44100 Hz in a mono channel.
Then, the target sounds in both the Cantonese and English tokens were segmented from the sentences and stored as isolated sound tokens. These data were transferred to a laptop PC with headphones for sound classification and judgement. To classify the productions in terms of phonemic transcription, three phonetically trained Cantonese speakers listened to both the Cantonese and English productions and then judged their phonetic categorization.
Results
The results of the production experiment are presented in this section. For both Cantonese and English speech, we present statistical comparisons of the dependent variable, the percentage of productions judged correct by the phonetically trained listeners, grouped by the independent variables of age group (participants in their 20s, 30s and 60s) and vowel context.
Cantonese Speech
Overall, for Cantonese speech, the main factor of age group and the intermediate factor of vowel were examined for two groups of Cantonese words, the /t h / control group and the experiment group. The inter-rater reliability for the Cantonese speech was 86%, and rater confidence was also high.
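As a rough illustration of how such an inter-rater reliability figure can be computed, the sketch below calculates the mean pairwise percent agreement among three raters; the rater labels and token judgements are hypothetical, not the study's actual data.

```python
from itertools import combinations

# Hypothetical categorical judgements (one label per token) from three trained raters.
rater_labels = {
    "R1": ["ts", "tsh", "tS", "ts", "tsh", "tS", "ts", "tS"],
    "R2": ["ts", "tsh", "tS", "ts", "ts",  "tS", "ts", "tS"],
    "R3": ["ts", "tsh", "tS", "tsh", "tsh", "tS", "ts", "ts"],
}

def percent_agreement(a, b):
    """Share of tokens on which two raters give the same phonetic label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

pairwise = [percent_agreement(rater_labels[r1], rater_labels[r2])
            for r1, r2 in combinations(rater_labels, 2)]
print(f"mean pairwise agreement: {sum(pairwise) / len(pairwise):.0%}")
```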
For the control group, 100% of the sounds were pronounced as /t h / as predicted, and none of the tokens showed a palatalized sound change. Therefore, no further statistical comparison was carried out for the control group.
For the experiment group, overall statistical comparisons showed that the 20s speakers and the older groups produced significantly different patterns for the sounds /ts h / and /tʃ/. In terms of age group, it was found that only the 20s produced significantly more /tʃ/ tokens.
As for the effect of vowel context, the substitution of /tʃ/ only occurred in the context of the vowels /ɔ/ and /u/ (p < .001), which are both back vowels (see Table 2). In contrast, the /ts h / sound remained largely or entirely unchanged in the contexts of the vowels /i/, /y/, /ɛ/ and /a/. One-way ANOVA comparisons showed that the differences between the three age groups were significant [F(2, 543) = 3.245, p < .01]. Within the 20s and 30s groups, the effect of the independent variable of vowel condition was significant, with a larger effect for the 20s age group. However, vowel quality had no significant effect for the 60s group.
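A minimal sketch of the kind of one-way ANOVA comparison reported here, assuming each token is scored 1 when judged as a [tʃ] substitution and 0 otherwise; the arrays below are illustrative, not the actual data.

```python
from scipy.stats import f_oneway

# Illustrative per-token substitution scores (1 = judged as [tS], 0 = judged as [ts]/[tsh])
group_20s = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
group_30s = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
group_60s = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]

f_stat, p_value = f_oneway(group_20s, group_30s, group_60s)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```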
English Speech
The English speech of speakers from each age group was also investigated qualitatively through phonetic judgement. The inter-rater reliability for the English speech was 92%, and rater confidence was very high. The results are shown below.
The proportion of correct English pronunciations of /tʃ/ is much higher for the young 20s age group, as expected. In terms of vowel context variability, however, the correct /tʃ/ productions mostly occurred in the context of the vowels /ɔ/ and /u/ (see Table 3). One-way ANOVA comparisons showed that the differences between the three age groups were significant [F(2, 97) = 2.253, p < .01]. The effect of the independent variable of vowel condition was significant for all three age groups, as in the Cantonese speech, with a larger effect for the 20s age group.
Summary
If we combine the vowel groups /i/ and /y/ for Cantonese, the data in both languages can be divided into five vowel groups (/a, ɛ, i, ɔ, u/). The Cantonese and English data show some common tendencies when they are superimposed for each age group (see Figure 1). The English and Cantonese percentages appear inversely proportional for all groups, and the 20s group shows the strongest tendency of the three. Figure 1. Superimposed Cantonese and English data for the 20s (upper), 30s (center) and 60s (lower) age groups; the blue line represents Cantonese /ts/ /ts h / and the red line represents English /tʃ/.
Discussions
In answer to research questions 1 and 2, we may conclude that the language change under discussion does exist. Speakers of all ages exhibited, at least to some extent, both [ts h ] and [tʃ] in their productions of Cantonese /ts h /. We may also regard the "newly" discovered [tʃ] as an allophone of /ts h / in the Cantonese inventory, especially for the 20s generation. In other words, the substitution of /ts h / can be regarded as a sound change /ts h / /ts/ > /tʃ/ in the group of young speakers. However, for the 30s and 60s generations, the overall percentage of /tʃ/ tokens was significantly lower than for the 20s generation. Another important finding is that the difference in rates of correct production lies primarily in the back-vowel conditions (/ɔ, u/), as confirmed by post-hoc tests of the statistical analyses. The reasons for the language change, and especially the effect of the back-vowel conditions, are addressed in the following.
To answer research question 3, the rest of this section explains this language change by proposing the effects of (1) universal gestural economy conditions in vowel contexts and (2) sociolinguistic contact with foreign sounds.
Firstly, the vowel-condition effect can be attributed to anticipatory gestural economy. The results have shown that the substitution is especially evident for words with back vowels. It is argued that this may be driven by speakers' gesturally economic strategy of approximating these two sounds [12], as seen in sociophonetic changes. The backward movement of the tongue body involved in the alveolar-to-postalveolar change is in accordance with the backward movement of the tongue body in back vowels [13], hence the greater inclination toward this change.
Moreover, the English speech results show that the speakers did not pronounce the sound fully as English /tʃ/ but exhibited a similar pattern of vowel variation, as shown by the inverse proportion of /ts, ts h / and /tʃ/ in Figure 1. This further supports the gestural hypothesis of the language change stated above. The gestural economy of raising and retracting the tongue body to form the affricate in anticipation of back vowels, which pull the tongue backward, has made the sound change easier in gestural terms.
Secondly, the sound change /ts/ > /tʃ/ as a whole can be regarded as arising from language contact. The Cantonese phonological inventory contains no post-alveolar sounds, and laminal post-alveolar affricates are rare in the Chinese dialectal system. It is therefore more plausible to attribute this case to foreign influence, despite the scarcity of such cases [14].
But what might have motivated the change through language contact? The /tʃ/ sound has affected the Cantonese language systematically in its phonology, rather than only through loan words. One reason for such a systematic change might be the social drive for the younger generation to acculturate, or even assimilate, to a western way of speaking. The "language identity" factor may have elevated this sound to a more socially accepted norm in the peer group than the conventional /ts h /.
Tracing back the foreign influence leads to the sociolinguistic impact of this trend of change. The senior and young participants of this study, none of whom had linguistic training, showed divided opinions when the researcher sought their attitudes towards the change. When the researcher informally sampled the senior (60s) group's attitude towards such linguistic use, the response was that such usage "comes from younger generations", which is accurate. They commented that such usage is "talking while biting the tongue", "pretentious" and "almost a kind of polluted language". However, when the researcher asked the younger participants (20s), irrespective of whether they themselves use /tʃ/, they responded that "everybody does that", "it's cute and lovely" and "I think it is going to be a norm in the future". Such polarized perceptions of the same phenomenon in language change clearly portray the ideological construction of a linguistic form, in that new linguistic forms may or may not be welcomed by social ideals. A similar viewpoint was proposed by Labov [15], where the Canadian French /r/ pronunciation witnessed the coexistence of clearly different productions, namely apical /r/ and uvular /r/, by speakers of two generations. He concluded that parental influence on the next generation often accelerates the polarization of sounds undergoing language change.
Conclusion
The study provides empirical evidence of an ongoing sound change in Hong Kong Cantonese. Young college students in their 20s have been using a different variety of the alveolar affricates /ts h / and /ts/, producing them as the post-alveolar laminal affricate [tʃ]. Since this sound, with its place of articulation, does not appear in Cantonese phonology and is spoken only by the younger generation, we speculate that language contact may be responsible for this ongoing sound change. In addition, the cross-linguistic tendency for the /tʃ/ substitution to occur mainly in back-vowel conditions was found in both Cantonese and English, probably as a result of anticipatory gestural configurations. From these two findings, we have identified an interwoven influence of articulatory phonetics and foreign influence. In future studies, more foreign influences on other phonemes may be identified as young Cantonese speakers continue to be exposed to, and identify themselves with, the English language. In the long run, such features may be preserved in the phonemic inventory.
|
2018-09-01T13:04:54.807Z
|
2017-11-01T00:00:00.000
|
{
"year": 2017,
"sha1": "605e339e3df24b4cd34e78ba80dfd12a8b2772cd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "605e339e3df24b4cd34e78ba80dfd12a8b2772cd",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science",
"Psychology"
]
}
|
126017169
|
pes2o/s2orc
|
v3-fos-license
|
The determination of kernel function parameter initial value of the KPCA based on adaptive contribution rate
In this article, we start from the samples and approach the problem from the point of view of an adaptive kernel principal component contribution rate. Our aim is to investigate how to choose the initial parameters of the mixed kernel function in comprehensive evaluation with KPCA. To some extent, this overcomes the shortcoming that the kernel function parameters are difficult to determine, and it provides an effective way to determine the initial kernel function parameters when KPCA is applied to multi-index comprehensive evaluation. We believe that the method will be more widely applied to feature extraction as its advantages appear in specific problems, and that it makes the evaluation method more scientific and effective.
Introduction
Evaluation, and comprehensive evaluation in particular, is an extremely important activity in understanding human society; almost all activities can be subject to comprehensive evaluation. With the increase in people's awareness levels, the objects we evaluate become more and more complex. Because the factors that affect the evaluation tend to be numerous and complex, it is unreasonable to use a single indicator to evaluate the objects; instead, we can obtain a comprehensive index by collecting multi-faceted information about the objects and thereby reflect their overall situation. This is the multi-index comprehensive evaluation method.
In recent years, many approaches have appeared that use modern methods to study the multi-index comprehensive evaluation method [1][2][3]. New methods from other disciplines, such as fuzzy mathematics [6], artificial neural networks [8], grey system theory [4] and kernel principal component analysis (KPCA) [5], have also been introduced into comprehensive evaluation.
Kernel principal component analysis is an effective method that combines the kernel function with principal component analysis; its aim is to solve nonlinear problems and provide more information. The original variable space X is mapped into a high-dimensional feature space F by a nonlinear transformation φ, and linear principal component analysis is then carried out in F using kernel techniques. We do not need to know the specific form of φ, because all the calculations can be performed in the original space. Compared with principal component analysis, this evaluation method has wide applicability and a significant dimension-reduction effect. It therefore has practical value.
The specific evaluation steps of the KPCA method are as follows: 1) We use a standardization approach to initialize the input samples, and then use the kernel function mapping to obtain a matrix K.
2) We then solve the centred matrix K̃ = K − 1_l K − K 1_l + 1_l K 1_l, where 1_l denotes the l × l matrix whose entries are all 1/l, obtaining its eigenvalues and eigenvectors.
5) We can find the contribution rate and cumulative contribution rate of the eigenvalues. 6) Finally, we can use the extracted kernel principal components to make the comprehensive evaluation. (A minimal sketch of these steps is given below.)
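As a rough illustration of steps 1), 2), 5) and 6), the following sketch computes a Gaussian kernel matrix, centres it, and extracts the eigenvalue contribution rates and cumulative rates; the data, kernel choice and parameter value are illustrative assumptions rather than the study's actual energy-consumption data.

```python
import numpy as np

def rbf_kernel(X, sigma2):
    """Gaussian (RBF) kernel matrix: K[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma2))."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma2))

def kpca_contribution_rates(K):
    """Centre the kernel matrix, solve its eigenproblem and return contribution rates."""
    l = K.shape[0]
    one_l = np.ones((l, l)) / l
    K_c = K - one_l @ K - K @ one_l + one_l @ K @ one_l   # centring in feature space
    eigvals = np.linalg.eigvalsh(K_c)[::-1]                # descending order
    eigvals = np.clip(eigvals, 0.0, None)                  # drop tiny negative round-off
    rates = eigvals / eigvals.sum()
    return rates, np.cumsum(rates)

# Illustrative use on standardized multi-index data (rows = samples, columns = indexes)
X = np.random.default_rng(0).normal(size=(30, 6))
X = (X - X.mean(axis=0)) / X.std(axis=0)
rates, cum_rates = kpca_contribution_rates(rbf_kernel(X, sigma2=1.0))
print("first principal component contribution rate:", rates[0])
```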
Choosing the kernel function plays a key role in the effect of the comprehensive evaluation. Each kernel function has its own advantages and disadvantages, and different kernel functions exhibit different characteristics. Kernel functions can currently be divided into two categories: global and local kernel functions. A global kernel function has the global feature that distant data points influence the value of the kernel function, whereas a local kernel function has the local property that only nearby data points influence its value. A local kernel function has strong learning ability but weak generalization performance; conversely, the generalization performance of a global kernel function is strong while its learning capability is weak. Therefore, we can combine the two kinds of kernel function and make full use of the respective advantages of each. The mixed kernel function thus has both good learning ability and good generalization ability, and the result of the comprehensive evaluation is more significant when the mixed kernel function is used in KPCA.
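A common way to build such a mixed kernel is a weighted combination of a global polynomial kernel and a local Gaussian kernel; the sketch below illustrates this idea, with the weighting λ and the parameter values chosen purely for illustration (they are not values prescribed by this paper).

```python
import numpy as np

def poly_kernel(X, d, m):
    """Global polynomial kernel: (x . y + d)^m."""
    return (X @ X.T + d) ** m

def rbf_kernel(X, sigma2):
    """Local Gaussian kernel."""
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma2))

def mixed_kernel(X, lam=0.5, d=10.0, m=2, sigma2=1.0):
    """Convex combination: lam balances global (polynomial) and local (Gaussian) behaviour."""
    return lam * poly_kernel(X, d, m) + (1.0 - lam) * rbf_kernel(X, sigma2)

# Illustrative call on random standardized data
X = np.random.default_rng(2).normal(size=(10, 4))
print(mixed_kernel(X).shape)
```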
However, there is no proven theory to guide which kernel function to select, or how to determine the initial values of the mixed kernel function parameters, so several tests are needed to finalize the kernel functions in practical applications. How to select the initial kernel parameters is very important in this process, because it directly affects the number of tests and the results of the comprehensive evaluation. Kernel principal component analysis has been available for a relatively short time, and few people have studied this question. In this paper, we start from the samples and from the point of view of an adaptive kernel principal component contribution rate; our aim is to investigate how to choose the initial parameters of the mixed kernel function in comprehensive evaluation. We believe that, as its advantages become evident in specific problems, this method will be more widely applied in feature extraction.
2.1 Analyze the original data
When we collect the original data, we do not use all of the multivariate data directly for kernel principal component analysis and comprehensive evaluation. The basic idea of kernel principal components is to combine original variables that have a certain correlation into new variables, so we should first calculate the correlation coefficients between the variables. If the correlation coefficients are small, we do not use kernel principal component analysis. When the absolute values of the correlation coefficients are large, we can use this method for analysis and then make a comprehensive evaluation.
We then analyze the original data on energy consumption for the year 2007 (the data are taken from the China Energy Statistical Yearbook 2007); the original data are normalized as follows:
2.2 Determination of the Gaussian kernel function initial parameters
We need to give a fixed initial value to the parameter of the kernel function when we carry out the comprehensive evaluation with KPCA; we can then discuss the conclusion of the comprehensive evaluation. We determine the range of the kernel function parameters from the above data, taking the first principal component as the target. First, we need to determine the σ² of the Gaussian radial basis kernel function.
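A minimal sketch of how an initial σ² could be screened under this adaptive-contribution-rate idea: candidate values are scanned and the first kernel principal component's contribution rate is recorded for each. The candidate grid and the data shape are assumptions made for illustration.

```python
import numpy as np

def first_pc_contribution(K):
    """Contribution rate of the first kernel principal component of a kernel matrix K."""
    l = K.shape[0]
    one_l = np.ones((l, l)) / l
    K_c = K - one_l @ K - K @ one_l + one_l @ K @ one_l
    eigvals = np.clip(np.linalg.eigvalsh(K_c)[::-1], 0.0, None)
    return eigvals[0] / eigvals.sum()

def scan_sigma2(X, candidates):
    """Return each candidate sigma^2 with the resulting first-PC contribution rate."""
    results = []
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    for s2 in candidates:
        K = np.exp(-d2 / (2 * s2))
        results.append((s2, first_pc_contribution(K)))
    return results

# Illustrative scan over a coarse grid of candidate values
X = np.random.default_rng(1).normal(size=(29, 7))   # e.g. 29 regions x 7 indexes
for s2, rate in scan_sigma2(X, [0.5, 1, 2, 5, 10, 50]):
    print(f"sigma^2 = {s2:>5}: first PC contribution = {rate:.3f}")
```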
2.3 Determination of the polynomial kernel function initial parameters
Polynomial kernel function: in the polynomial kernel function, when m = 1, the change in the result as the offset parameter varies is as follows. When the value of this parameter is too large, the generalization ability of the kernel function becomes excessive and the kernel loses its role; beyond a certain point, the kernel function matrix no longer has a largest characteristic root. Therefore, in order to ensure that the kernel matrix has a largest characteristic root, the value of this parameter is set to ten. With this value fixed at ten, the values of m are as follows.
Table 3. The values of m and the corresponding first principal component.
Conclusion
The kernel function method has a solid theoretical foundation and has been successfully applied in many fields, so a large number of researchers focus on it. Kernel methods avoid the curse of dimensionality by introducing an inner-product kernel function and solving the dual problem, and this solves the dual problem effectively. However, we face the problems of large memory requirements and long computation times when the training set is very large. How to determine the form of the kernel function and how to select its parameters remain difficult problems. Although researchers have done a lot of work on the choice of kernel function parameters, their selection still lacks a unified approach. The selection of kernel function parameters plays a decisive role in improving the performance of the kernel function, so it is necessary to study it. In particular, the method based on principal component analysis is a good tool for solving nonlinear problems and has been widely used in various fields, yet the problem of selecting its kernel function parameters is rarely addressed. In this article, we derive a method for determining the initial values of the kernel function parameters based on the samples and an adaptive kernel principal component contribution rate; to some extent, this method overcomes the difficulty of determining the kernel function parameters. It also provides an effective evaluation procedure for the multi-index comprehensive evaluation method and makes the evaluation more scientific and effective.
|
2019-04-22T13:08:22.940Z
|
2017-11-01T00:00:00.000
|
{
"year": 2017,
"sha1": "76e418d8c546fc23bad47172caa1199f950bd60d",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/94/1/012097",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "6f92bda30bd16cad44fde4b1e6abe9c8c77273c7",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
}
|
109225213
|
pes2o/s2orc
|
v3-fos-license
|
Investigations on the effect of wire EDM process parameters on surface integrity of HSLA: a multi-performance characteristics optimization
Surface roughness (Ra) and micro-hardness (μh) are two important constituents of the surface integrity (SI) of the machined component. In Wire Electric Discharge Machining (WEDM), the machining factors that affect SI are generally the pulse on-time, pulse off-time, current, etc. This paper presents a study that investigates the effect of the WEDM parameters on the surface roughness average and the micro-hardness of High Strength Low Alloy steel (ASTM A572-grade 50). Nine experimental runs based on an orthogonal array of the Taguchi method are performed, and the grey relational analysis (GRA) method is subsequently applied to determine an optimal WEDM parameter setting. The SI parameters, i.e. surface roughness and micro-hardness, are selected as the quality targets. An optimal parameter combination of the WEDM process is obtained using GRA. By analyzing the grey relational grade matrix, the degree of influence of each controllable process factor on individual quality targets can be found. The pulse off-time is found to be the most influential factor for both the surface roughness and the micro-hardness. Further, the results of the analysis of variance reveal that the pulse off-time is the most significant controlled factor affecting SI in WEDM, according to the weighted sum grade of the surface roughness and the micro-hardness.
Introduction
High Strength Low Alloy (HSLA) steel finds wide application in the manufacturing of potentially high load-bearing parts of automobiles because of its high strength-to-weight ratio; components such as steering parts, chassis parts, door intrusion parts, and suspension parts are made of this alloy steel. The use of HSLA steel is increasing, since it results in weight savings and corresponding savings in fuel and effluents. Its use is ever increasing, and more carbon-steel parts are being replaced with HSLA steel. Several cutting and finishing operations are required to make these parts and various other HSLA steel products. Owing to the inherent characteristics of HSLA steel, such as high strength and hardness, its machinability is poor and it often requires high machining speeds. Further, the quality of the machined surface is also relatively poor. HSLA steels are usually 20 to 30% lighter than carbon steel of the same strength (Lin, Wang, Yan, & Tarng, 2000). HSLA steels provide better mechanical properties and greater resistance to atmospheric corrosion than conventional carbon steels. HSLA steels were developed for the automotive industry to reduce weight without losing strength. The automobile sector is one of the largest users of machining as a production process. Non-conventional machining processes such as Wire Electric Discharge Machining (WEDM) have the potential to machine this special category of alloy steel. Thus, the performance of the WEDM process when applied to HSLA steel is of contemporary importance. However, it is important to select the optimum combination of WEDM parameters for achieving optimal machining performance (Lin et al., 2000).
The electro discharge machining (EDM) process is one of the best alternatives for machining an ever increasing number of high-strength, corrosion-resistant, and wear-resistant materials (Abu Zeid, 1997). The technology of monitoring and control of the machining processes has been accelerated because of the need for improvement in machining efficiency and part quality. The WEDM process has been a key process for the tooling and manufacturing industry. WEDM was introduced in the late 1960s, and has revolutionized the tool and die, mold, and metal-working industries. It can machine anything that is electrically conductive regardless of the hardness, from relatively common materials such as tool steel, aluminum, copper, and graphite, to exotic space-age alloys including titanium, carbide, polycrystalline diamond (PCD) compacts, and conductive ceramics. The cutting of PCD and conductive PCBN tooling blanks is heavily dependent on WEDM although the application is specialized and, from a global perspective, represents a small segment of the market (Aspinwall, Soo, Berrisford, & Walder, 2008). In WEDM, material is removed by means of rapid and repetitive spark discharges across the gap between the tool and the workpiece. The WEDM process plays a predominant role in some manufacturing sectors, because this process has the capability to cut complex and intricate shapes of components in all electrically conductive materials with better precision and accuracy (Mahapatra & Patnaik, 2006).
In past, a lot of work has been carried out to investigate the effect of WEDM parameters on various performance parameters. Scott, Boyina, and Rajurkar (1991) formulated a multi-objective optimization problem and presented a solution for the selection of the best parameter settings to achieve the desired metal removal rate (MRR) and surface quality during WEDM. Lajis, Radzi, and Amin (2009) optimized the process parameters in the cutting of tungsten carbide ceramic using EDM with a graphite electrode using Taguchi methodology. In their paper, EDM parameters such as peak current, voltage, pulse duration, and interval time were found to have a significant influence on machining characteristic such as MRR, electrode wear rate (EWR), and surface roughness. The results of their paper revealed that, in general, the peak current significantly affects the EWR and surface roughness, while the pulse duration mainly affects the MRR. Singh and Maheshwari (2007) presented an investigation on the optimization of process parameters for the EDM of 6061Al/Al2O3p/20p work specimens by employing the Taguchi Design of Experiment methodology. They selected one noise factor i.e. aspect ratio (with two levels), and five control factors, viz. pulse current, pulse on-time, duty cycle, gap voltage, and tool electrode lift time (three levels each), for the experiment to obtain the optimal settings of factors and the effect of these factors on multiple performance characteristics, namely, MRR, tool wear rate, and surface roughness. Tzeng and Chen (2007) performed an experimental study to optimize the precision and accuracy of the high-speed EDM process. Their paper describes the application of the fuzzy logic analysis coupled with Taguchi method to optimize the precision and accuracy of the high-speed EDM process. Kumar, Kumar, and Kumar (2012) made an attempt to model the response variable i.e. surface roughness in WEDM process using response surface methodology. They varied six parameters i.e. pulse ON time, pulse OFF time, peak current, spark gap voltage, wire feed, and wire tension to investigate their effect on surface roughness and subsequently, they optimized the surface roughness using multi-response optimization through desirability. Rao and Sarcar (2009) evaluated the optimal parameters for machining brass with wire and studied the influence of these parameters on MRR and surface roughness.
Surface integrity (SI) is a composite property which describes the deviations of surface characteristics from the substrate. Its constituent parameters include change in surface metallurgy, residual stresses on surface, depth of surface affected by metallurgy and residual stresses, geometrical accuracy including accuracy of size and form, and surface roughness. Surface roughness and micro-hardness play major role in describing the deviations of surface characteristics from the substrate. In other words, these two surface characteristics may be aptly considered to define the SI of the machined component. The micro-hardness, indeed, represents the surface hardness due to several factors such as residual stresses, metallurgical changes, grain refinement, etc. Keeping this in view, in the present paper, these two important surface characteristics have been chosen to define SI.
The establishment of adequate machining guidelines requires the study of the SI generated in the part by a machining operation. The SI generated after machining is important, as it determines the functional behavior and reliability of the components, such as fatigue life and wear resistance, when they are put to use. Maintaining SI is one of the most critical requirements (Kundrak, Mamalis, Gyani, & Bana, 2011; Saini, Ahuja, & Sharma, 2012; Umbrello, Jayal, Caruso, Dillon, & Jawahir, 2010). In order to maintain a high production rate with an acceptable quality level and SI of the machined parts, it is important to select the optimum combination of WEDM parameters such as pulse on-time, pulse off-time, current, and feed rate, as these parameters have an impact on multi-performance characteristics like surface roughness, micro-hardness, and microstructure, which are indeed constituents of SI (Goel, Khan, Siddiquee, Kamaruddin, & Gupta, 2012). Among the several SI parameters, surface roughness and micro-hardness are very important as they correlate with the surface profiles and thus better characterize the different machining processes. Surface roughness plays an important role in many areas and is a factor of great importance in the evaluation of machining accuracy.
Taguchi method can be applied for optimization of process parameters to obtain optimum condition with lowest cost and minimum number of experiments which leads to production of high-quality products. Owing to the advantages offered by the Taguchi method, researchers have extensively used this method to plan experiments for the purpose of optimization of process and design parameters (Kamaruddin, Khan, & Wan, 2004;Verma, Agrawal, & Bajpai, 2012). Rao, Ramji, and Satyanarayana (2010) applied Taguchi method to find the optimal cutting parameters for surface roughness in WEDM machining of Aluminum BIS-24345. Saini, Khan, and Siddiquee (2013) used Taguchi method with analysis of variance (ANOVA) to optimize the WEDM parameters, while cutting composite material Al6061/SICP. Another Taguchi method-based study was conducted by Kaladhar, Subbaiah, and Rao (2012) to investigate the effect of cutting parameters on surface finish to obtain optimal setting of the cutting parameters.
The grey relational analysis (GRA) is a method for measuring the degree of approximation among the sequences using a grey relational grade (Siddiquee, Khan, & Mallick, 2010). It is a new technique for performing prediction, relational analysis, and decisionmaking in many areas. Theories of the GRA have attracted considerable interest among researchers . Some other researchers have also performed the optimization of process parameters. For example, Ramanujam, Muthukrishnan, and Raju (2011) presented the detailed experimental investigation on turning aluminium silicon carbide particulate metal matrix composite (Al-SiC-MMC) using PCD 1600 grade insert. The objective was to establish a correlation between cutting speed, feed, and depth of cut to the specific power and surface finish on the work piece. The optimum machining parameters were obtained by GRA. Tzeng, Lin, Yang, and Jeng (2009) investigated the optimization of CNC turning operation parameters for SKD11 alloy tool steel using GRA method. Taguchi based grey relational analysis was applied by Sharma and Bhambri (2012) to investigate the optimization of two responses (surface roughness and material removal rate [MRR]) by varying three cutting parameters (cutting speed, feed rate, and depth of cut) during high speed turning of AISI H13 under dry conditions. Kuram and Ozcelik (2013) employed Taguchi-based GRA for multi-objective optimization of micro-milling parameters that simultaneously minimize tool wear, force, and surface roughness and they found that the combination of spindle speed of 10,000 rpm, feed per tooth of .5 μm/tooth, and depth of cut of 50 μm minimizes the tool wear, Fx, Fy, and surface roughness simultaneously. Ibrahim et al. (2011) studied the effect of process parameters (Injection pressure, injection temp, mold temp, injection time, holding time) in micro metal injection molding of 316L stainless steel micro component on multiple quality characteristics (Strength and density) using a Taguchi-based GRA. They reported that the optimum combination of micro injection molding parameter is A1 (Injection pressure at 11 bar), B2 (Injection temp at 160°C), C1 (mold temp at 55°C), D0 (Injection time at 5s), E0 (Holding time at 5s), and the significant parameters in descending order are injection time, injection pressure, holding time, mold temperature, and injection temperature. Lin and Lin (2002) studied the optimization of the EDM parameters, namely workpiece polarity, pulse on-time, duty factor, open discharge voltage, discharge current, and dielectric fluid with considerations of multiple performance characteristics including MRR, surface roughness, and electrode wear ratio using GRA. They observed that the performance characteristics of the EDM process such as MRR, surface roughness, and electrode wear ratio are improved together using the method and finally concluded that the machining performance in the EDM process can be improved effectively through this approach. Kumar, Babu, Venkatasamy, and Raajenthiren (2010) used Grey-Taguchi Method to demonstrate optimization of Wire Electrical Discharge Machining process parameters of Incoloy800 super alloy with multiple performance characteristics such as MRR, surface roughness, and Kerf. They considered gap voltage, pulse on-time, pulse off-time, and wire feed as input parameters and conducted experimental study using Taguchi's L9 Orthogonal Array. 
They found that the optimal process parameters include a 50 V Gap Voltage, 10 μs pulse on-time, 6 μs pulse off-time, and 8 mm/minute Wire Feed rate. They also observed that while applying the Grey-Taguchi method, the MRR shows an increased value of .05351 g/min to .05765 g/min, the surface roughness shows a reduced value of 3.31 to 3.10 μm and the Kerf width shows a reduced value of .324 to .296 mm, respectively, which are positive indicators of efficiency in the machining process.
It appears from the literature presented above that little work has been done to investigate the effect of WEDM parameters on the SI of the machined HSLA steel surface. The use of HSLA steel is growing rapidly and it subsumes several parts which were traditionally manufactured from mild steel. Consequently, the machining activity on HSLA steel is also increasing. Keeping this in view, the present work is aimed at investigating the effect of three WEDM parameters (pulse on-time, pulse off-time, and current) on SI during WEDM of HSLA steel. Out of several SI parameters, surface roughness and surface hardness are the most important and relevant to the present investigations. Hence, they were selected as measures of SI. The objectives of the study are to determine (i) the optimum levels of the WEDM parameters that yield optimum multi-performance characteristics i.e. SI, (ii) the WEDM parameters that significantly affect SI, and (iii) the most influential WEDM parameters for the individual constituents of SI i.e. surface roughness and micro-hardness. The Taguchi L9 (3^3) design is utilized for experimental planning for this purpose. The GRA is then applied to examine how the input factors influence the quality targets of surface roughness and micro-hardness. An optimal parameter combination was then obtained. Through analyzing the grey relational grade matrix, the most influential factors for the individual quality targets of cutting operations can be identified. Additionally, the ANOVA is performed to investigate the more influential parameters for the multi-performance characteristics i.e. surface roughness and micro-hardness, which are indeed constituents of SI.
Grey relational analysis
The following sections present the procedure for GRA that has been used in this study to obtain the optimum WEDM parameters and also to identify the most influential parameters that affect SI. The steps involved in the GRA are given below: (1) Normalization of the data sequence using data preprocessing.
(2) Calculation of the corresponding grey relational coefficients.
(3) Calculation of the grey relational grades.
Data preprocessing
Data preprocessing is used to transform the given data sequence into a dimensionless data sequence; it involves the transfer of the original sequence to a comparable sequence.
Let the original reference sequence and the comparability sequences be represented as x^(O)_0(k) and x^(O)_i(k), i = 1, 2, … , m; k = 1, 2, … , n, respectively, where m is the total number of experiments considered and n is the total number of observation data. Data preprocessing converts the original sequence to a comparable sequence. Several methodologies of preprocessing data can be used in GRA, depending on the characteristics of the original sequence (Deng, 1989; Tzeng et al., 2009; Yang, Shie, & Huang, 2007). For an original sequence with the 'the-larger-the-better' characteristic, the original sequence is normalized as follows (Tzeng et al., 2009):

x*_i(k) = (x^(O)_i(k) − min x^(O)_i(k)) / (max x^(O)_i(k) − min x^(O)_i(k)),    (1)

For the 'the-smaller-the-better' characteristic of the original sequence, the original sequence is normalized as follows (Tzeng et al., 2009):

x*_i(k) = (max x^(O)_i(k) − x^(O)_i(k)) / (max x^(O)_i(k) − min x^(O)_i(k)),    (2)

In case a defined target value, OB, exists, then the original sequence is normalized as follows (Tzeng et al., 2009):

x*_i(k) = 1 − |x^(O)_i(k) − OB| / max{max x^(O)_i(k) − OB, OB − min x^(O)_i(k)},    (3)

There is an alternative simple method for normalizing the original sequence, in which the original sequence is divided by its first value x^(O)_i(1):

x*_i(k) = x^(O)_i(k) / x^(O)_i(1),    (4)

where x*_i(k) denotes the sequence after data preprocessing, and max x^(O)_i(k) and min x^(O)_i(k) are the largest and smallest values of x^(O)_i(k).
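Expressed in code, the three pre-processing options above look roughly as follows; the sequences passed in are placeholders, with each list holding one response measured over all experimental runs.

```python
def norm_larger_better(seq):
    """Equation (1): larger-the-better normalization."""
    lo, hi = min(seq), max(seq)
    return [(x - lo) / (hi - lo) for x in seq]

def norm_smaller_better(seq):
    """Equation (2): smaller-the-better normalization."""
    lo, hi = min(seq), max(seq)
    return [(hi - x) / (hi - lo) for x in seq]

def norm_target(seq, ob):
    """Equation (3): normalization towards a defined target value OB."""
    span = max(max(seq) - ob, ob - min(seq))
    return [1 - abs(x - ob) / span for x in seq]

# e.g. surface roughness is smaller-the-better, micro-hardness aims at a target of 206 HV
print(norm_smaller_better([2.2, 1.8, 2.5]))
print(norm_target([210, 206, 199], ob=206))
```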
Grey relational coefficients and grey relational grades
After the data preprocessing, a grey relational coefficient is calculated from the preprocessed sequences. The grey relational coefficient can be calculated as (Tzeng et al., 2009):

γ(x*_0(k), x*_i(k)) = (Δ_min + ζ Δ_max) / (Δ_0i(k) + ζ Δ_max),    (5)

where Δ_0i(k) is the deviation sequence of the reference sequence x*_0(k) and the comparability sequence x*_i(k), namely Δ_0i(k) = |x*_0(k) − x*_i(k)|, Δ_max = max_i max_k Δ_0i(k) and Δ_min = min_i min_k Δ_0i(k), and ζ is the distinguishing coefficient, ζ ∈ [0, 1]. After calculation of the grey relational coefficients, the grey relational grade is calculated using the following relationship (Tzeng et al., 2009):

γ(x*_0, x*_i) = (1/n) Σ_{k=1..n} γ(x*_0(k), x*_i(k)).

The grey relational grade γ(x*_0, x*_i) represents the degree of correlation between the reference and comparability sequences. In the case of two identical sequences, the grey relational grade is equal to 1. The grey relational grade also indicates the degree of influence exerted by the comparability sequence on the reference sequence. Consequently, if a particular comparability sequence is more important to the reference sequence than the other comparability sequences, the grey relational grade for that comparability sequence and the reference sequence will exceed the other grey relational grades. GRA is actually a measurement of the absolute value of the data difference between sequences, and can be used to approximate the correlation between the sequences.
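A short sketch of the coefficient and grade computation, with the distinguishing coefficient set to .5 as in this study; the normalized sequences used below are placeholders rather than the measured data.

```python
def grey_relational_grades(reference, comparability, zeta=0.5):
    """Grey relational coefficients and grades for several comparability sequences.

    `reference` is the normalized ideal sequence; `comparability` is a list of
    normalized sequences (one per experimental run); `zeta` is the distinguishing
    coefficient in [0, 1].
    """
    deltas = [[abs(r - c) for r, c in zip(reference, seq)] for seq in comparability]
    flat = [d for row in deltas for d in row]
    d_min, d_max = min(flat), max(flat)          # global extrema over all i and k
    coeffs = [[(d_min + zeta * d_max) / (d + zeta * d_max) for d in row] for row in deltas]
    grades = [sum(row) / len(row) for row in coeffs]
    return coeffs, grades

# Placeholder normalized results for three runs and two responses (Ra, micro-hardness)
ref = [1.0, 1.0]
runs = [[0.9, 0.7], [1.0, 0.95], [0.4, 0.6]]
coeffs, grades = grey_relational_grades(ref, runs)
print(grades)
```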
Materials
A572-grade 50 HSLA steel (composition given in Table 1) with 200 mm × 40 mm × 10 mm size was used as workpiece material.
Schematic of machining
The experimental studies were performed on a Steer Corporation DK7712 NC WEDM machine. This machine can be used to cut workpiece in accordance with the predetermined locus (The schematic is shown in Figure 1). Different settings of pulse on-time, pulse off-time, and current are used in the experiments. Frequency and voltage settings are kept constant throughout the experiments.
Experimental parameters and design
The experiments were conducted with three controllable 3-level factors and two response variables. Nine experimental runs based on the orthogonal array L9 (3^3) were carried out. Table 2 shows the three controlled factors, i.e. pulse on-time (A (μs)), pulse off-time (B (μs)), and current (C (Ampere)), with three levels for each factor. Table 3 shows the nine cutting experimental runs according to the selected orthogonal table. After cutting, two quality objectives of the workpieces were chosen, namely the surface roughness (Ra (μm)) and the micro-hardness (μh (HV)). The surface roughness is detrimental to the in-service performance of the products, especially under conditions of fatigue and tribological effects. Therefore, it is essential for effective in-service performance of the surface that the surface roughness should be minimum. Any change in surface hardness due to reasons such as heating and quenching, which are characteristic of WEDM, is considered to be undesirable for the satisfactory performance of the in-service parts. Hence, for optimal performance, the surface hardness of the machined part should not change and must remain as close as possible to the hardness of the substrate. In light of these facts, typically, small values of surface roughness and target values of micro-hardness were considered, as they are desirable for the SI in the machining operation.
Measuring apparatus
The surface roughness values were measured by the surface roughness tester (model: SURFTEST, SV-2100 (Resolution, X axis -.05 μm, main unit -.01 μm (800 μm), stylus tip radius of 1 μm and angle 60°); make: Mitutoyo, Japan) which used cut-off length of .8 mm and evaluation length of 4 mm. Surface micro-hardness was measured using Mitutoyo Micro Vickers Hardness Testing Machine using a load of 2 N for 15s.
Results and discussion
The following sections describe the results of the present study and also present a discussion on the results in light of the available literature.
Best experimental run
The experimental results for the surface roughness and micro-hardness are listed in Table 4. A typical surface roughness profile for the three replicates of experiment number 2 is shown in Figure 2(a)-(c). Typically, smaller values of the surface roughness and target values of micro-hardness are desirable for the SI of the machined surface. Thus, the data sequences have 'the-smaller-the-better' characteristic for surface roughness and, therefore, Equation (2) was employed for data preprocessing. Similarly, Equation (3) was used for data preprocessing for micro-hardness. It may be noted that the average micro-hardness value of the workpiece material before machining was 206 HV, and therefore this is also the target value. The values of the surface roughness and the micro-hardness are set to be the reference sequence x^(O)_0(k), k = 1, 2. Moreover, the results of the nine experiments form the comparability sequences x^(O)_i(k), i = 1, 2, … , 9, k = 1, 2. Table 5 lists all of the sequences after implementing the data preprocessing using Equations (2) and (3). The reference and the comparability sequences are denoted as x*_0(k) and x*_i(k), respectively. Also, the deviation sequences Δ_0i(k), Δ_max and Δ_min for i = 1, 2, … , 9, k = 1, 2 can be calculated.
The distinguishing coefficient ζ is then substituted into the grey relational coefficient of Equation (5).
If all the process parameters have equal weighting, ζ is set to .5. Table 6 shows the grey relational coefficients and the grades for all nine comparability sequences. Figure 3 shows the grey relational grade for each experiment. It is clear from this figure that experiment 2 has the maximum grey relational grade (.8172) and hence the optimal parameter setting for the best multi-response characteristics, i.e. surface roughness and micro-hardness. The grey relational graphs for factor A (pulse on-time), factor B (pulse off-time), and factor C (current) are shown in Figures 4, 5, and 6, respectively. These figures represent the level-wise effect of each factor on the grey relational grade.
This investigation employs the response table of the Taguchi method to calculate the average grey relational grades for each factor level, as illustrated in Table 7.
Since the grey relational grades represent the level of correlation between the reference and the comparability sequences, a larger grey relational grade means that the comparability sequence exhibits a stronger correlation with the reference sequence. Based on this, one can select the combination of levels that provides the largest average response. Figure 7 shows the mean value of the grey relational grade at each level of the WEDM parameters. The dashed line in this figure is the value of the total mean of the grey relational grade. In Table 6 and Figure 7, the combination of A1, B2, and C2 shows the largest value of the grey relational grade for the factors A, B, and C, respectively. Therefore, A1B2C2, i.e. a pulse on-time of 15 μs, a pulse off-time of 4 μs, and a current of 3 A, is the optimal parameter combination for the cutting operations.
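As an illustration of how such a response table is assembled, the sketch below combines the standard L9(3^3) factor-level assignment with a set of placeholder grades (chosen so that run 2 is the highest, as reported) and averages the grades per factor level; the grade values themselves are not the study's data.

```python
# Standard L9 orthogonal array (first three columns) -> levels of factors A, B, C per run
L9 = [(1, 1, 1), (1, 2, 2), (1, 3, 3),
      (2, 1, 2), (2, 2, 3), (2, 3, 1),
      (3, 1, 3), (3, 2, 1), (3, 3, 2)]

# Placeholder grey relational grades for the nine runs (run 2 highest, as in the study)
grades = [0.70, 0.82, 0.55, 0.63, 0.60, 0.58, 0.52, 0.61, 0.57]

# Response table: mean grade for each factor (0 -> A, 1 -> B, 2 -> C) at each level
response = {}
for factor in range(3):
    for level in (1, 2, 3):
        vals = [g for run, g in zip(L9, grades) if run[factor] == level]
        response[(factor, level)] = sum(vals) / len(vals)

best = {f: max((1, 2, 3), key=lambda lv: response[(f, lv)]) for f in range(3)}
print("best level per factor (A, B, C):", [best[f] for f in range(3)])
```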
Most influential factor
GRA is applied to examine how the process parameters influence the quality targets of the workpieces. The values of the factor levels in the nine experimental runs are set as the comparability sequences for the three controllable factors. Table 8 lists all of the sequences. Data preprocessing was performed based on Equation (4), and Table 8 shows the normalized results. Subsequently, the deviation sequences were calculated using the method mentioned above. The deviation sequences and the distinguishing coefficient were then substituted into Equation (5) to obtain the grey relational coefficients. Additionally, the grey relational coefficients were averaged using equal weighting to obtain the grey relational grade. Table 9 lists the grey relational coefficients and the grade of the surface roughness for the reference sequence and comparability sequences. Table 10 gives the grey relational coefficients and the grade of the micro-hardness for the reference sequence and the comparability sequences. The grey relational grades in Tables 9 and 10 can be further arranged in matrix form, with one row for the surface roughness Ra, one row for the micro-hardness μh and one column for each of the factors A, B and C. By comparing row 1 and row 2, some conclusions can be drawn from this matrix. In the first row, γ(Ra, B) > γ(Ra, A) > γ(Ra, C), meaning that the order of importance of the controllable factors for the surface roughness is, in sequence, factor B, A, and C. Similarly, from the second row, γ(μh, B) > γ(μh, C) > γ(μh, A), the order of importance of the controllable factors for the micro-hardness is, in sequence, factor B, C, and A. The most influential factors affecting the output variables are determined by identifying the maximum values in each row. Hence, based on the maximum values in the matrix of grey relational grades, (γ(Ra, B), γ(μh, B)) = (.6957, .7026), it can be found that factor B, the pulse off-time, has the most influence on both the surface roughness and the micro-hardness, with γ values of .6957 and .7026, respectively.
Additionally, Table 11 gives the results of the ANOVA for the surface roughness and the micro-hardness using the calculated values from the grey relational grade of Table 6 and the response table of Table 7. According to Table 11, factor B, the pulse off-time with a 38.72% contribution, is the most significant controlled parameter for the cutting operation; factor A, the pulse on-time, follows with a 34.59% contribution, and factor C, the current, with a 25.35% contribution, if the minimization of the surface roughness and the target value of the micro-hardness are simultaneously considered. It has been reported that an increase in the pulse on-time results in an increase in both the surface roughness and the discharge energy (Rao et al., 2010). The increased discharge energy, as well as the increased duration for which this energy is discharged to the workpiece, leads to the formation of bigger craters on the workpiece and thereby the surface roughness increases. Further, an increase in pulse on-time also provides more time for the conduction of a greater amount of heat to the workpiece. Consequently, the workpiece material is heated to a greater depth due to the presence of high temperature, and the same thickness of the workpiece gets hardened due to quenching during the subsequent spark-off.
The pulse off-time provides a pause in sparking and the time for removal of the debris produced during spark-on. During this period, quenching of the workpiece also takes place. Hardness decreases on reducing the pulse off-time, as its lower values result in less time being available for quenching of the workpiece. Further, when the pulse off-time is short, the next spark takes place before the work surface has fully cooled and quenched. This explains a reduction in hardness with a decrease in pulse off-time and vice versa.
However, the current has a different effect on roughness and hardness. At a low current, a small quantity of heat is generated and a substantial portion of it is absorbed by the surroundings, consequently, the amount of utilized energy in melting and vaporizing the workpiece is not so intense. But by increasing the pulse current per unit pulse on-time, a stronger spark with higher thermal energy is produced, and a substantial quantity of heat is transferred to the work piece. Furthermore, as the current increases, discharge strikes the surface of the workpiece more intensely, and creates an impact force on the molten material in the crater and causes more molten material to be ejected out of the crater, so the surface roughness of the machined surface increases. On the other hand, an increase in current also increases the temperature of workpiece, which consequently increases the micro-hardness as it depends on the temperature of the work piece before quenching.
Thus, the combination of the parameters and their levels i.e. A 1 B 2 C 2 with minimum contribution of the current (25.35%) and maximum contribution of the pulse OFF time (38.72%) would set out just right parameter combination to cause minimum surface roughness along with the nominal micro-hardness leading to optimum SI.
Confirmation test
After identifying the most influential parameters, the final phase is to verify the surface roughness and the micro-hardness by conducting the confirmation experiments. The A 1 B 2 C 2 is an optimal parameter combination during WEDM process via the GRA. Therefore, the condition A 1 B 2 C 2 of the optimal parameter combination was treated as a confirmation test. The result of the confirmation test gives the surface roughness average and the micro-hardness similar to those given in Table 4. The comparison of predicted and experimental values of surface roughness and micro-hardness using the optimal WEDM parameters is presented in Tables 12 and 13, respectively. Both Tables 12 and 13 reveal that the experimental value is very close to the predicted value and validate the correctness of the experimental investigation.
Conclusions
The effects of pulse on-time, pulse off-time, and current are experimentally investigated in machining of ASTM A572-grade 50 HSLA steel using NC Wire-cut EDM process. The GRA based on the Taguchi method's response table was used to optimize the WEDM parameters for HSLA steel. Based on the results of the present study, the following conclusions are drawn: (1) Increase in the pulse on-time leads to the increase in both the surface roughness and the micro-hardness.
(2) Increase in the pulse current leads to the increase in the surface roughness.
(3) From the response table of the average grey relational grade, it is found that the largest value of the grey relational grade is for the pulse on-time of 15 μs, the pulse OFF time of 4 μs, and the current of 3 A. It is the recommended levels of the controllable parameters of the WEDM machining process as the optimization of SI involving multi-performance characteristics with minimization of the surface roughness average and achieving the target value of the micro-hardness are simultaneously considered. (4) The order of the importance for the controllable factors to the surface roughness average, in sequence, is the pulse OFF time, the pulse on-time, and the current. However, for micro-hardness the sequence is the pulse OFF time, the current and the pulse on-time. (5) Through ANOVA, the percentage of contribution to the WEDM process, in sequence, is the pulse off-time, the pulse on-time, and the current. Hence, the pulse off-time is the most significantly controlled factor for the WEDM operation when the minimization of the roughness average and achieving the target value of the micro-hardness are simultaneously considered.
|
2019-04-12T13:53:39.650Z
|
2014-01-01T00:00:00.000
|
{
"year": 2014,
"sha1": "2cfd244b7dba27de02b27814d0b2f94a247fb9cc",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21693277.2014.931261?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "ba05ba0ec6a8cb49f98e722f7ff06b8e782a594f",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
201133163
|
pes2o/s2orc
|
v3-fos-license
|
An Attribute-Based Access Control Model in RFID Systems Based on Blockchain Decentralized Applications for Healthcare Environments
The growing adoption of Radio-frequency Identification (RFID) systems, particularly in the healthcare field, demonstrates that RFID is a positive asset for healthcare institutions. RFID offers the ability to save organizations time and costs by providing traceability, identification, communication, temperature and location data in real time for both people and resources. However, the challenges of RFID systems are financial, technical, organizational and, above all, privacy and security. For this reason, recent works focus on attribute-based access control (ABAC) schemes. Currently, ABAC schemes are based mostly on centralized models, which in environments such as the supply chain can present problems of scalability, synchronization and trust between the parties. In this manuscript, we implement an ABAC model in RFID systems based on a decentralized model, namely blockchain. Common criteria for the selection of the appropriate blockchain are detailed. Our access control policies are executed through the decentralized application (DApp), which interfaces with the blockchain through the smart contract. Smart contracts and blockchain technology solve the issues of current centralized systems and also provide a flexible infrastructure that represents the relationship of trust and support essential to the ABAC model in order to provide the security of RFID systems. Our system has been designed for a supply chain environment with a use case suitable for healthcare systems, so that assets such as surgical instruments carrying an associated RFID tag can only access specific areas. Our system is deployed in both a local and a Testnet environment in order to establish a deep comparison and determine the technical feasibility.
Introduction
The healthcare field is aware of the essential need to adopt and use healthcare information technology (IT) successfully. Radio-frequency Identification (RFID) provides several opportunities for healthcare transformation [1]. The same reference argues that RFID provides an enhanced method to decrease errors in patient care, to improve tracking and tracing of both patients and equipment, to enable better management of health assets, and to improve the audit process and predictability.
In general terms, four sub-systems describe an RFID system's architecture (Figure 1): (1) a transponder or tag, which contains the identification data, (2) a reader to interact directly with the tag, exchanging information with it, (3) an RFID middleware and (4) a business and/or information management layer. The RFID middleware supports RFID tag data management by handling devices and by filtering, collecting, integrating and constructing data. The business layer also includes applications such as back-end databases (DBs), enterprise resource planning (ERP), customer relationship management (CRM), warehouse management solutions (WMS), tracking and tracing, and electronic product code (EPC) applications. The GS1 standards (whilst "GS1" is not an acronym, it refers to the organization offering one global system of standards) ([2]) address three wide categories: identify, capture and share [3,4]. The capture process could be performed through sub-systems (1) and (2), using EPC-enabled RFID tags (i.e., through EPC, GS1 also provides a construction to write and read unique identifiers on RFID tags). The identification process would be covered by sub-system (3), and an identification number is processed, for instance, when it is encoded (e.g., to a GTIN (Global Trade Item Number)) or decoded (e.g., from an RFID tag EPC). Finally, sub-system (4) carries out the sharing category. In particular, GTIN describes a data structure that uses 14 digits with the option to encode in some combinations. GTIN is currently used in both barcodes and RFID [2]. The structure of the GTIN number is shown below:

urn:epc:id:sgtin:CompanyPrefix.ItemReference.SerialNumber,    (1)

These fundamental principles are used to explain how the GS1 standards system can be used to enable traceability solutions, where RFID systems are involved in both data capture and data sharing. In addition, RFID systems are able to achieve traceability in a variety of supply chains such as fresh food, health, technical industries, transportation and logistics. The supply chain in the healthcare sector will be taken as the use case. In that sense, RFID is the industry-leading technology used by medical device manufacturers to enable smart devices to provide higher-quality patient care; the most common RFID applications include [5]: (1) Tracking and tracing of trusted devices to individual patients.
(3) Control of servicing and calibration of medical equipment. (4) Invoicing procedures, to associate patients with medical devices and prescription use.
(5) Stock management. (6) Decreased time spent by staff tracking articles and devices.
In this way, a model based on the control and traceability of assets is a determining factor in safety. Based on the analysis of the six points mentioned above, interviews with specialists were carried out in order to determine the needs existing in institutions, where a specific use case related to point (1) was identified. Hospitals employ large numbers of assets (e.g., surgical medical instruments (SMI)), which can flow through constant cycles such as the sterilization department, surgery rooms, laboratories, etc. A location mistake could risk patients' lives. In addition, the lack of detailed asset records causes asset losses. However, given that RFID is one of the best-positioned technologies to perform the data capture and sharing process, the biggest challenge for any RFID system is its security. The security threats encountered in RFID systems are distinct from traditional wireless security threats, and can be grouped into: (1) physical components of RFID (e.g., cloning tags, reverse engineering, tag modification), (2) the communication channel (e.g., eavesdropping, skimming, replay attacks), and (3) global system threats (e.g., spoofing, Denial of Service (DoS) and "tracing and tracking"). Other examples and details can be obtained from reference [6]. Therefore, our proposal must focus on both safety risks and security risks.
Access control (AC) is a core piece of any organization's security infrastructure and has become popular as a solution for some of the threats mentioned [7]. Below is an overview of our proposal. Based on GS1, SMI are tagged (with a passive RFID tag) carrying a GTIN. The coding scheme (see Expression (2)) contains a company prefix (e.g., Hospital A: 000389), an article reference (product type) to categorize the asset (e.g., scissors: 000162) and, finally, a serial number to identify a specific asset (e.g., serial number: 000169740). Figure 2 details how our healthcare system works. The source room (e.g., the sterilization department) sends assets (e.g., SMI) to the destination rooms (e.g., surgery room 0, surgery room 1). Since asset 1 has been assigned to destination room 1 (e.g., surgery room 1), if due to a human mistake (e.g., during transportation) it attempts to access destination room 0 (e.g., surgery room 0), our system sets an access-denied status for asset 1.
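As a purely illustrative sketch (the helper name and the zero-padding widths are our own assumptions; the field values are the examples given above), the pure-identity form of this coding scheme could be assembled as follows in JavaScript, the language used elsewhere in our implementation:

// Sketch only: build an SGTIN-style pure-identity URI (Expression (1)) from the
// three fields of the coding scheme; padding widths are illustrative assumptions.
function buildSgtinUri(companyPrefix, itemReference, serialNumber) {
  const pad = (value, width) => String(value).padStart(width, "0");
  const fields = [
    pad(companyPrefix, 6),  // e.g., Hospital A: "000389"
    pad(itemReference, 6),  // e.g., scissors: "000162"
    pad(serialNumber, 9),   // e.g., one specific asset: "000169740"
  ];
  return "urn:epc:id:sgtin:" + fields.join(".");
}
// buildSgtinUri("000389", "000162", "000169740")
// -> "urn:epc:id:sgtin:000389.000162.000169740"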
In short, our proposal is an access control system of a healthcare asset (e.g., SMI) in order to prevent unwanted assets from entering the wrong area (e.g., room) because of human error or an external security threat.Therefore, our proposal is a prevention system that provides a security-safety solution.By considering the general model presented in Figure 1, RFID middleware commonly deploys the access control mechanism (ACM) in an RFID system [8].For instance, the EPC global community standardizes four main layers within the middleware (Figure 1): low level read protocol (LLRP), discovery initialization and configuration (DCI), read management (RM) and application level event standard (ALE) [9].ALE is the sub-system that applies AC policies.
Table 1 lists different implementations of the ALE sub-system included in EPC global middleware, as established by the specifications [10,11]. These traditional AC systems present two major challenges for supply chain application environments: (1) almost all use role-based access control (RBAC) as the AC model and (2) the implementations are based on centralized architectures. Therefore, from a technical point of view, our proposal consists of an ABAC system for RFID systems that executes access control (AC) policies from a decentralized application (DApp) based on a blockchain architecture. Our proposal integrates several technologies. It first allows the tracking of assets, i.e., an asset (e.g., an SMI) is associated with a GTIN code. The system then allows us to verify the existence of a given asset based on the coding scheme presented. Finally, the proposal makes it possible to permit or deny access of an asset to a certain area (e.g., a surgery room).
For this, smart contracts are used as an interface between the DApp and the blockchain, i.e., all these functionalities, including the AC policy, are executed from the DApp, which interacts with the smart contract, which, in turn, interacts directly with the blockchain, e.g., by a method to insert assets or a method to query certain attributes.
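As a rough sketch of this flow (the contract name "AssetRegistry", its methods and the artifact path are hypothetical placeholders; only the generic web3.js contract send/call pattern is assumed), the DApp-side interaction could look roughly like this:

// Sketch only: DApp-side helpers that insert an asset (state change, generates a
// transaction) and query its attributes (read-only call, no transaction).
const Web3 = require("web3");
const web3 = new Web3("http://localhost:8545");                   // local ETH node
const artifact = require("./build/contracts/AssetRegistry.json"); // hypothetical truffle artifact
const registry = new web3.eth.Contract(artifact.abi, "0xDeployedAddressHere");

async function insertAsset(account, cmpPrf, itemRef, serNmb, status) {
  // send(): signed transaction mined into a block; returns a receipt
  return registry.methods.addAsset(cmpPrf, itemRef, serNmb, status)
    .send({ from: account, gas: 200000 });
}

async function queryAsset(serNmb) {
  // call(): evaluated locally by the node; no transaction is generated
  return registry.methods.getAsset(serNmb).call();
}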
The remainder of the article includes the related work section, which is the keystone for the design of our system based on the reviewed literature, allowing us to arrive at conclusions.From this point onwards, we present the technical proposal, followed by the evaluation methodology, the results obtained and finally the conclusions and future research lines.
Related Work
In this section, we present the literature justification that allows us to establish the use of an access control model based on attributes (ABAC) over an access control model based on roles (RBAC) for our use case.In addition, we indicate the preference of decentralized architectures over centralized architectures for supply chain environments and our use case.Additionally, we justify the blockchain selection within the set of distributed ledger technologies (DLT).Next, the type of blockchain that best fits our proposal is analyzed.Once the blockchain has been selected, if this is a public blockchain, a business model must be associated with it for implementation to be feasible.For this reason, we present a proposal based on an asset tokenization model for healthcare environments.However, the implementation of the tokenization model is established for future research lines.The related work section concludes with a discussion sub-section.
Attribute-Based Access Control (ABAC) vs. Role-Based Access Control (RBAC)
Although the RBAC model is well established, Gartner predicts that by 2020, 70% of companies will use ABAC to protect critical assets [13].In addition, the following references [14][15][16], provide some clear limitations for the RBAC model such as: (1) It is not possible to configure rules using parameters, which are unknown to the system.
(2) Permissions can only be assigned to user roles, not to objects and operations.
(3) Since the RBAC model is predominantly based on static organizational positions, there are problems in particular RBAC architectures where dynamic AC decisions are needed.
(4) It is possible to restrict access to specific system actions but not to data model.
(5) RBAC does not support multi-factor decisions (e.g., decisions that depend on location and timestamp).
On the other hand, the ABAC model presents important benefits that are adapted to our use case (Section 2) such as: (1) ABAC provides access based on the attributes of each system component and not based on the user function [14].
(2) ABAC supports AC decisions without previous understanding of the object by the subject or understanding of the subject by the object owner [15].
A comparison including other features is given in Table 2. In conclusion, the ABAC model is suitable for our supply-chain-based use case, i.e., applications that require flexibility and scalability.
Decentralized Model vs. Centralized Model
As can be seen in reference [9], most middleware implementations are based on centralized architectures. To examine the disadvantages of this model, we use a common application environment for RFID systems, the supply chain. Although the centralized approach is widely adopted, it is not scalable, introduces bottlenecks and makes it difficult to synchronize information (e.g., product status among different parties with their own centralized DBs) or to add new elements [17]. In addition, this model does not provide the degree of trust that must exist between the parties, nor someone who is accountable for the data shared [18].
Different works focus on attribute-based access control models over centralized architectures. For instance, reference [19] presents an AC model for IoT that establishes a coupling between ABAC and trust concepts, and reference [20] promotes an ABAC mechanism that gives the system the ability to implement policies to detect any unauthorized entry.
On the other hand, a decentralized model provides a solution to the aforementioned problems: firstly, the supply chain adopts a method in which products can be tracked through every step of the chain, from suppliers, through manufacturers, to end users; secondly, it does so with a certain degree of trust between the parties. Although a model based on a decentralized architecture is the solution, the type of architecture to be used must be studied in depth, above all to establish selection criteria.
Blockchain over Other Distributed Ledger Technologies
Blockchain technology is now entering a maturity stage that determines the use cases where the technology is applicable, and even the type of blockchain to be used. However, blockchain is not the only type of DLT; e.g., directed acyclic graphs (DAG) are considered another way to represent the data structure, with advantages over the blockchain approach [21]. Therefore, we want to emphasize why blockchain is suitable as a decentralized solution. First, the data are not located on a central server but are decentralized: they are distributed across all devices connected to the blockchain, so the blockchain can be thought of as a peer-to-peer network of nodes, where a device (e.g., a miner) connected to the blockchain acts as a node that talks to all the other nodes. This device shares the same responsibilities as the other peers and holds a copy of all the data shared across the blockchain. All of these data are contained in packets of records called blocks, which are chained together to create a "public" ledger; all of the network nodes work together to ensure that the "public" ledger data remain secure and unchanged, which is important for an AC application. The blockchain is fundamentally a DB, and because all nodes communicate with each other it is also a network; instead of the traditional centralized model, a blockchain can therefore be thought of as a network and a DB all in one [22]. Once it is determined that blockchain is the type of decentralized architecture to implement, we focus on defining the type of blockchain suitable for our use case.
Selecting the Blockchain Type
Although the technical criteria of selection is fundamental, we first review the existing proposals in the literature and then the technical selection criteria.In that sense, there are some proposals that use AC based on blockchain, including RBAC.For example, the reference [23], proposes an approach based on blockchain to publish policies expressing the right to access a resource and to allow the distributed transfer of that right between users.In addition, the reference [24] includes a dynamic access control scheme for direct data communication between Internet of Things (IoT) devices.The reference [25] presents a RBAC using Smart Contract to realize trans organizational utilization of roles.Finally, transaction-based access control (TBAC) is a platform that integrates the ABAC model and the blockchain, combining four types of transactions and Bitcoin-type cryptographic scripts to describe the TBAC access control procedure corresponding to subject registration, object holding and publication, access request and grant [26].By analyzing existing proposals that combine access control models in decentralized architectures, we conclude that these are mostly based on RBAC models.In addition, the proposal found based on ABAC uses Bitcoin as a blockchain.The technical criteria are detailed below.
The selection of the type of blockchain depends on factors such as the use case, the technical requirements and even the business model. For this reason, we first considered the most recent Gartner recommendations, which indicate that to ensure a successful blockchain project it is necessary to focus on the business problem, not on the technology solution [27]. According to the use case and the characteristics of the project, it is necessary to choose between a governance-based model with some trust between the parties and a certain level of centralization, represented by the Hyperledger Fabric Blockchain (HFB), and a fully decentralized model where there is no trust between the parties, represented by the Ethereum (ETH) blockchain. Table 3 presents common criteria for comparing different blockchain types. For our use case of a healthcare supply chain, and taking into account project scalability, community support, skill availability, multi-functionality and adaptability, we decided to deploy our DApp on the ETH blockchain.
Based on the decision on the type of blockchain, made on technical criteria, and taking into account the literature review carried out, we can affirm that our proposal has a high novelty value. The following sub-section summarizes all the points analyzed throughout the related work section; based on our selection criteria and the reviewed bibliography, we arrive at conclusions and introduce a model.
Discussion
From the analysis performed throughout this section, we determined that ABAC is a suitable access control model for applications requiring flexibility and scalability. In addition, we concluded that our use case is not well suited to a centralized application, for two reasons: since our system is based on the ABAC model, asset attributes can change at any time, so decision support must be highly scalable; moreover, the application code could change at any time, which means that the rules in the AC policy could also change. Additionally, blockchain has been selected as the DLT, thanks to features such as immutability, which is necessary to ensure both AC and a reliable history of asset attribute behavior. Based on common selection criteria, we determined the type of blockchain suitable for our proposal: the ETH blockchain. However, following Gartner's recommendation [27], to deploy our proposal on the main ETH network, even if the project is technically feasible, it needs to be endowed with a business model that makes it viable. For that reason, we present one below based on asset tokenization.
Tokenization through a blockchain platform (ETH being the most used) enables us to dispense not only with expensive and complex transactions but also with the exchange itself. Any person enrolled in the blockchain could potentially act as an issuer of a legitimate asset that he or she would like to tokenize [28]. Applied to healthcare, tokenization can contribute to reducing the cost of private medical treatment by transferring the ability to maintain and hold data from intermediaries, such as insurance companies, hospitals and pharmacies, to patients. In the existing scheme, none of these actors shares information with patients, and patients are unable to verify the data's correctness. Through tokens, both patients and the general public can keep their data and share it with anyone they want [29]. Tokenization can also automate the payment process; in addition, since tokens are a secure and protected way to make transactions, the payment system is simplified. However, the main challenge is that, so far, no nation has strong regulation for cryptocurrency. As a result, tokens do not carry legal property rights and are not protected by law, so legislative changes are required to adapt these new business models [30].
Therefore, a business model based on tokenization is applicable to a public blockchain such as ETH, contextualized to healthcare through the supply chain and, therefore, to our model based on control and traceability. A tokenization model is applicable both to assets (e.g., SMI) and to patient information (e.g., blood pressure sensors). Thus, a security model based on AC is highly suitable and applicable in these environments.
Since our article is a proposal, the technical behavior of our implementation is evaluated first in a local environment (i.e., our own ETH node, without joining the main ETH network) and then scaled to an ETH Testnet. Ropsten Ethereum, also known as the "Ethereum Testnet", is a testing network that runs the same protocol as Ethereum and is used for testing purposes before deploying on the main network (Mainnet). To scale our proposal to the Mainnet, we will propose a tokenization-based model, which we introduced above and which is part of our future research lines.
Proposal
As we mentioned in the introduction, our proposal consists of a complete system in which several technologies converge.However, we want to start this section with a basic architecture that enables a general understanding of how our system performs ABAC.The physical node is composed of the RFID Reader Control (RFID-RC), the DApp and the smart contract.When a medical instrument (previously tagged with an RFID tag) attempts to gain access to a room, the RFID-RC sends a request for access to the DApp.The DApp sends a query via smart contract to the blockchain network, which returns some attributes related with the asset (e.g., company prefix, product type, serial number).In addition, the DApp receives other attributes (e.g., timestamp) from the RFID-RC.Then, the DApp uses the attributes to execute the ABAC security policy, which determines whether tag access is permitted or denied.The next section details the implementation framework used.Additionally, we need to confirm that one of the main advantages of a decentralized system is scalability, so this physical node (smart oracle) can be replicated in a way that establishes a new connection with the blockchain (via smart contract), without affecting any of the existing nodes.
Access Control Mechanism (ACM)
For a subject to be able to execute a policy on an object (e.g., permit or deny access), the ABAC access control mechanism (ACM) must be enabled. The ACM includes the following steps: (1) check the subject's attributes, (2) check the AC policies (rules), (3) check the object's attributes, and (4) check the environmental conditions. Although the subject is normally expected to be a human, a non-person entity (NPE), such as an autonomous service or an application, can also occupy the subject's role, as reference [15] indicates. In our case, the reader requests access from the DApp for the RFID tag (associated with an SMI).
Before analyzing the AC policy, some boundary conditions are established for the transfer of an asset from the source room (e.g., sterilization area) to the destination room (e.g., surgery room) and vice versa should be mentioned: (1) The transaction that authorizes the transfer of an asset is invoked by an authorized employee through a trusted application connected to DApp (Figure 4).
(2) The tag uses an EPC code with a pattern similar to Expression (1), as illustrated in Expression (2). The process performed by the DApp when it receives an access request is described next, including the variable names used to define the AC policy. (1) The subject (reader) is verified based on two attributes: reader name (variable 01: "rdr_nm", e.g., rdr_nm: "roomA") and location (variable 02: "loc", e.g., loc: "41.40338, 2.17403"). (2) The company prefix (variable 03: "cmp_prf", e.g., cmp_prf: 000389), the product type (variable 04: "item_ref", e.g., item_ref: 000162), the serial number of a specific asset (variable 05: "ser_nmb", e.g., ser_nmb: 000169740) and the asset status (variable 06: "st", e.g., st: "STERILIZED") are verified. (3) The environmental condition is verified based on the time elapsed between an asset being sent to a reader in a medical room (through a transaction) and that reader receiving the request for access to that asset (tag). The environmental condition is approved if the interval is less than 10 min (600 s); this time is allowed for moving assets between locations once the transaction has been invoked. In that sense, variable 07: "time_in" (e.g., time_in: 1560209335) is the time recorded once the transaction is completed and variable 08: "time_out" (e.g., time_out: 1560209455) is the time given when the reader requests access to this RFID tag.
Based on the AC policy notation established in reference [31], our AC policy C is defined in Expression (3).
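Expression (3) itself is not reproduced here; purely as an illustration of the rule described above (and not the authors' formal notation), the check performed by the DApp could be sketched as follows, using the eight variables just listed and the 600 s window:

// Illustrative sketch of the AC policy evaluation in the DApp (not Expression (3) itself).
// "expected" = attributes returned from the blockchain; "request" = attributes from the tag read.
const MAX_TRANSFER_SECONDS = 600; // 10-minute window for moving the asset between rooms

function evaluatePolicy(request, expected) {
  const subjectOk =
    request.rdr_nm === expected.rdr_nm &&      // reader assigned to this room
    request.loc === expected.loc;              // reader location

  const objectOk =
    request.cmp_prf === expected.cmp_prf &&    // company prefix, e.g., 000389
    request.item_ref === expected.item_ref &&  // product type, e.g., 000162
    request.ser_nmb === expected.ser_nmb &&    // serial number, e.g., 000169740
    expected.st === "STERILIZED";              // asset status

  const environmentOk =
    request.time_out - expected.time_in <= MAX_TRANSFER_SECONDS;

  return subjectOk && objectOk && environmentOk ? "PERMIT" : "DENY";
}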
We decided to implement the AC policy on the DApp and not as part of the smart contract for two reasons.Firstly, as we indicated in Table 3, one of the constraints of a public blockchain is the speed, so if the AC policy is executed as part of the smart contract, it would lead to a delay.Secondly, since smart contracts are public the AC policy would be exposed.In this way, one of the future research lines is the implementation of this model in a private blockchain (e.g., HFB); so that the AC policy can be located within the smart contract (chaincode) in order to analyze these results.
Technical Implementation Details
The two previous sub-sections allowed respectively to define the general functioning of our ABAC model and to detail how the AC policy is executed in the DApp; therefore, it is time to present our system in detail.In order to better understand it, it begins with a summary of the main technologies used.
Table 4 summarizes the technologies used in each sub-system and their associated blocks; Table 4 follows the design principles of Figure 4. For a better understanding of our work, reference [32] is a film (video) included as an external document.
To analyze the technical implementation details, Figure 4 shows the specific architecture of our system, which consists of three sub-systems: ABAC configuration, ABAC execution and ETH blockchain monitoring. In the legend, "X" indicates that we implemented the block based on the listed technology or library, while "[ ]" indicates that we used an existing project.
ABAC Configuration
The ABAC configuration sub-system includes a graphical user interface (GUI), based on the ReactJS (https://reactjs.org/) web technology, which is launched from a browser (Figure 5). This GUI includes two views (Figure 5). Since a demonstration environment is presented, the two views are shown within the same browser window; however, as detailed below, each view has its own functionality. The first view allows an authorized employee to add new assets to the system. The employee enters the company prefix code, the product type code, the asset ID (e.g., serial number) and so on (Figure 5). Each time a new asset is stored in the ETH blockchain, a new transaction is generated. To transfer an asset between rooms, the authorized employee first needs to verify the ID (e.g., serial number) of the asset through a simple query to the blockchain via the smart contract. To do this, the authorized employee uses the "VERIFY ID" button of the second view; this blockchain query does not generate transactions. The same second view then enables the transfer of assets from the source room (e.g., the sterilization area) to the destination room (e.g., the surgery room), whereby attribute values such as the asset status (e.g., "STERILIZED") and the timestamp are updated (Figure 5). This action is carried out with the "TRANSFER ASSET" button. Since an asset transfer involves changes (e.g., room, status, timestamp), new transactions are generated via the smart contract. Details of the blockchain interface operation are analyzed in the following sub-section.
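As a rough sketch of how these two views could map onto the drizzle library (the contract and method names are hypothetical placeholders; only the generic cacheCall/cacheSend pattern is assumed), the read-only "VERIFY ID" action is a call that produces no transaction, whereas "TRANSFER ASSET" changes state and therefore generates one:

// Sketch of the GUI-side blockchain calls (ReactJS + drizzle); all names are placeholders.
export function verifyAssetId(drizzle, serialNumber) {
  // "VERIFY ID": read-only query via the smart contract, no transaction generated
  const contract = drizzle.contracts.AssetRegistry;
  return contract.methods.getAsset.cacheCall(serialNumber); // dataKey into the drizzle store
}

export function transferAsset(drizzle, drizzleState, serialNumber, destinationRoom) {
  // "TRANSFER ASSET": room, status and timestamp change, so a transaction is generated
  const contract = drizzle.contracts.AssetRegistry;
  return contract.methods.transferAsset.cacheSend(
    serialNumber,
    destinationRoom,
    "STERILIZED",
    { from: drizzleState.accounts[0] }
  ); // stackId used to track the pending transaction
}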
ABAC Execution
The ABAC execution sub-system contains a smart oracle to permit or deny asset access and it is located in each of the medical rooms.Our smart oracle includes the RFID reader, the LLRP server, the attribute parser (AP), the ABAC security policy (ABAC-SP) and the blockchain interface (BI).The AP, the ABAC-SP and the BI comprise the DApp access control execution (Figure 4, ABAC execution sub-system) and RFID-RC includes the RFID reader and the LLRP server (Figure 4, ABAC execution sub-system).
The RFID reader interacts directly with the tagged assets and the LLRP server. LLRP is a protocol ratified by EPC global as a standard that constitutes an interface between the reader and its software or control hardware [37]. The protocol sends XML (eXtensible Markup Language) messages between the client (e.g., the RFID reader) and the server (e.g., the LLRP server). To develop our proof of concept (PoC) we use an open-source tool known as Rifidi [38], which creates a virtual reader and RFID tags based on the SGTIN-96 standard. Details of the supporting project, as well as the getting-started guide, can be found in [33]. In addition, since our LLRP server is based on the LLRP standard, it is agnostic to any RFID reader that supports the LLRP protocol, such as the Motorola FX7400, Intermec IF61 and Impinj Speedway.
The AP receives the RFID Tag EPC from the LLRP server and uses a GTIN conversion system, based on a NodeJS library [34], which allows transforming the RFID TAG EPC code to the EPC Tag URI (Uniform Resource Identifier) (e.g., Expression ( 4)).AP filters the attributes: Company Prefix (variable "cmp_prf" in AC policy), Product Type (variable "item_ref" in AC policy) and Serial Number (variable "ser_nmb" in AC policy).In addition, AP controls other attributes such as timestamp (variable "time_out" in AC policy), reader name (variable "rdr_nm" in AC policy) and location (variable "loc" in AC policy).
RFID Tag EPC: 3074257bf7194e4000001a85; EPC Tag URI: urn:epc:tag:sgtin-96:3.0614141.812345.6789, (4)

The BI is built on the truffle framework, using the drizzle library to interact with the web3.js server. Drizzle is a collection of front-end libraries that make writing DApp front-ends easier [39]. Communication between the parties is performed via GET and POST methods. For instance, once the ABAC-SP determines whether asset access is permitted or denied, it sets a variable that is sent via a POST method to the LLRP server. The LLRP server then either sends an XML "keepAlive" message (Figure 6) to maintain the interaction with the RFID tag or simply disconnects it.
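A minimal sketch of the filtering step performed by the AP is shown below. It assumes an already-decoded EPC Tag URI such as the one in Expression (4) (the raw 96-bit EPC is decoded by the NodeJS conversion library [34]); the helper name and error handling are our own assumptions:

// Sketch: extract AC-policy attributes from a decoded SGTIN-96 EPC Tag URI.
// Example input (Expression (4)): "urn:epc:tag:sgtin-96:3.0614141.812345.6789"
function parseSgtin96Uri(epcTagUri) {
  const prefix = "urn:epc:tag:sgtin-96:";
  if (!epcTagUri.startsWith(prefix)) {
    throw new Error("Not an SGTIN-96 tag URI");
  }
  // Fields after the prefix: Filter.CompanyPrefix.ItemReference.SerialNumber
  const [, cmp_prf, item_ref, ser_nmb] = epcTagUri.slice(prefix.length).split(".");
  return {
    cmp_prf,                                 // -> variable "cmp_prf"
    item_ref,                                // -> variable "item_ref"
    ser_nmb,                                 // -> variable "ser_nmb"
    time_out: Math.floor(Date.now() / 1000), // -> variable "time_out"
  };
}
// parseSgtin96Uri("urn:epc:tag:sgtin-96:3.0614141.812345.6789")
// -> { cmp_prf: "0614141", item_ref: "812345", ser_nmb: "6789", time_out: ... }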
To execute the AC policy established by the Expression (3), ABAC-SP matches the attributes from the AP with the attributes queried from the blockchain.
Ethereum (ETH) Blockchain Monitoring
Although this sub-system is an integral part of our implementation [32], the following section is dedicated to enumerating and describing the monitoring tools of our system in order to verify its feasibility, so we have preferred to place the analysis of this sub-system in the subsequent sections. In that sense, Figure 7 shows the ETH Network Stats monitoring tool, which is part of this sub-system.
Evaluation Methodology
In order to evaluate the feasibility of our proposal, it is necessary to indicate first that our model has been deployed in two environments, one based on local blockchain and the other based on a Testnet blockchain.
In the first case, an ETH node was deployed, although it included the no-discover property, making it impossible to connect to the Mainnet. Expression (5) is a sample of the command used, based on the geth (https://geth.ethereum.org/) client (the main ETH client).

In the second case, the Ropsten Testnet was used. Its properties include: (2) it can be used with both the geth client and the parity client; (3) it allows joining one's own node to the network, i.e., participating in the PoW, or simply requesting ether from a faucet (https://faucet.metamask.io).

To access the network it is necessary to create an Infura project, which generates the endpoint URL (for example, Expression (6)) used in the configuration files of our system (truffle-config.js):

ropsten.infura.io/v3/fa42299dbea54014801bc4145d7a1a1e, (6)

Below we detail the tools used and then the features that are measured.
Evaluation Tools
First, we present the tools used to evaluate our system: ETH Network Stats, Etherscan, Truffle Test, and Infura Dashboard.These tools have been deployed both for the local environment and for the Testnet.The exception is Infura which is a tool that is only associated with the Testnet.Below is a brief description of the tools.
ETH Network Stats is a tool composed of a front-end, Ethereum Network Stats [35], and a back-end, Ethereum Network Intelligence API [36]. It is a visual interface for tracking Ethereum network status; it uses WebSockets to receive stats from running nodes and outputs them through an Angular interface. Both servers are installed locally. This tool was presented as part of the ETH blockchain monitoring sub-system. ETH Network Status [40] is the equivalent of ETH Network Stats, but is used for the Ropsten Testnet.
Etherscan Ropsten Testnet Network [41] is a tool that we use to monitor the state of the blockchain and the transactions stored in it. An equivalent tool exists for the local environment, which is installed as a server.
Infura Dashboard is a response to developer demand for a better understanding of how to improve DApps.The following reference [42] mentions that it has been recently updated, enabling us to obtain relevant information about calls to web3.js methods, which allow for some type of interaction (e.g., generating a transaction) with Ropsten Testnet.
Truffle test is combined with the data obtained from the contract migration process in order to improve the data analysis. Truffle comes standard with an automated testing framework that makes testing smart contracts easy. This framework lets us write simple and manageable tests in JavaScript or Solidity: the JavaScript tests exercise the contract from the outside world, just like an application, while the Solidity tests are used in bare-to-the-metal scenarios. Truffle test has been deployed for both the local environment and the Ropsten Testnet.
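As an illustration (the contract and method names are hypothetical placeholders; only the standard artifacts.require / contract() / it() pattern of the truffle framework is assumed), a JavaScript-side test of the insertion and query paths could be sketched as follows:

// test/asset_registry.test.js -- sketch of a truffle JavaScript test; names are placeholders.
const AssetRegistry = artifacts.require("AssetRegistry");

contract("AssetRegistry", (accounts) => {
  it("inserts an asset and queries its attributes back", async () => {
    const registry = await AssetRegistry.deployed();

    // data insertion: generates a transaction (its time and gas cost are measured)
    await registry.addAsset("000389", "000162", "000169740", "STERILIZED",
      { from: accounts[0] });

    // data query: read-only call, the latency that matters for the AC policy
    const asset = await registry.getAsset("000169740");
    assert.equal(asset.st, "STERILIZED", "status attribute should be stored");
  });
});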
Table 5 summarizes these tools, their application environments and the main features measured.
Measurements
We consider the analysis to be integral because it tests each part of the implementation, from network monitoring, with features such as the number of nodes and the network hashrate, to the delay of the smart contract application and the bandwidth consumption of each web3.js method. Below we present the main parameters that can be monitored through the tools listed in the previous sub-section. Considering the main characteristics associated with each tool, Figure 8 establishes the logical order of use of these tools with respect to the features analyzed.
ETH Network Stats and ETH Network Status allow measurement of a wide range of parameters within the ETH network.These parameters focus mostly on network status (Table 5).Some of the parameters that can be measured are number of successfully mined blocks, presence of uncle blocks, mined time of the last mined block, average mined time, average network hash-rate, difficulty, active nodes, gas price, gas limit, page latency, uptime, node name, node type, node latency, peers connected to one node and some others.Figure 7 illustrates the tool in use, accessed from the browser from a private IP on port 3000.
As we mentioned Etherscan allows extraction of information relative to the blockchain (Table 5).Within the parameters that can be obtained are account balance, account info, transaction hash, block number, type of token (e.g., Erc20), average gas used, transaction costs and transaction fee.
The Infura Dashboard provides a wide range of parameters, such as the total number of methods called, the bandwidth consumed by each of the methods used and the total bandwidth consumed; therefore, the main feature measured is bandwidth (Table 5). As mentioned, truffle test is a framework for running tests on smart contracts. For our use case, the parameters we measure are the data query time, the time and cost of data insertion and the time of the full test. These data are combined with the data obtained from the contract migration process, so other parameters measured are the gas and time spent to deploy the contract. The results obtained are presented below.
Results
Based on the analysis performed in the previous section, the results are presented for each of the tools.
ETH Network Stats and ETH Network Status enable monitoring of the network at all times. For example, at the time of analysis the Ropsten Testnet had mined 5,931,224 blocks and had 14 active nodes; the average block time was 14.04 s, the average network hashrate was 120.1 MH/s and the difficulty was 2.16 GH. These parameters can be contrasted with those shown in Figure 7 for our locally deployed blockchain: at the time of analysis our local blockchain had mined 6967 blocks and had only one active node (our node); the average block time was 27.45 s, the average network hashrate was 142 KH/s and the difficulty was 1.43 MH. In conclusion, it is clear that the mining power, and therefore the resources available to our local device, are much lower than those of the public network, which is an expected result.
Etherscan Ropsten Testnet Network [41] gives us a view of all transactions that have been executed from our test address (e.g., 0xe8d5487caebfb3f3e93304161cad0d5d3078b033). Other attributes that can be verified are the status of each transaction, the block to which the transaction has been assigned, the percentage of gas used (e.g., an average of 66.67% of the established limit value), transaction costs and fees, as well as the nonce used in the PoW. Similar behavior is observed with the tool used locally.
Figure 9 is taken from the Infura dashboard and details the main methods called by the web3.js library to interact with the blockchain via the smart contract, as well as the bandwidth they consume. There is a clear relationship between the methods that Infura detects and the methods we use in our blockchain interface (BI) based on the truffle-drizzle framework: Infura perceives JSON (JavaScript Object Notation) RPC (Remote Procedure Call) API (https://github.com/ethereum/wiki/wiki/JSON-RPC) methods based on the web3.js library and, since truffle-drizzle works with promises while web3.js works with callbacks, the truffle-drizzle framework uses functions such as cacheSend. Calling the cacheSend function on a contract sends the desired transaction and returns a corresponding transaction hash, so the status can be retrieved from the store. The procedure mentioned at the web3.js level is performed by eth_getTransactionByHash; however, since we work at a higher level, our cacheSend function agglutinates this and other methods. On the other hand, when debugging the migration process (truffle migrate) via the node inspect tool, we determined that both the getTransactionReceipt and eth_getCode methods are used. This highlights the importance of the Infura dashboard, which details the consumption of the methods at a low level. In addition, the Infura dashboard includes other relevant information such as peak (e.g., 183.33 MB) and average (e.g., 9.11 MB) hourly bandwidth usage.

To examine our smart contract, we established a combination of tests between the truffle test framework and the data obtained from the contract migration process. Figure 10 shows the sequence of the mechanism applied to check feasibility. The data obtained are shown in Table 6, which compares data insertion time, data query time, gas used and the time to pass the full test. Expression (7) is used to calculate the percentage. Since the times achieved are not deterministic, both the best and the worst times were taken. The test process performed is described below.
(Local_network_time/Ropsten_network_time) × 100, (7)

Our test starts by recovering the migration time of the contract, which is not deterministic and shows a considerable difference between migration in the local environment and migration on the Ropsten Testnet. Although this time should be considered, it is not decisive for evaluating the feasibility of the system, since migration takes place before the system is deployed. Because the same smart contract is employed in both environments, the costs and computational power required for deployment match. Another essential requirement is the insertion time of the asset attributes. The set_method is used to send attributes to the deployed smart contract and waits for the blockchain response; this procedure is equivalent to the one mentioned for inserting data via the GUI (ABAC configuration sub-system, Figure 5). Although the delay between the local environment and the Ropsten Testnet is evident (Table 6), this metric does not cause a delay in the execution of the AC policy. As mentioned above, data insertion involves the generation of transactions and, therefore, associated costs, which is an indispensable measurement, so the get_transaction method is applied. Since the same smart contract is deployed in both environments, we consider the transaction cost to be equivalent. The decisive metric (the delay requirement on the AC policy) is the query of asset data; therefore, it is essential to execute the get_method function and record the delay value (Table 6). For this reason, we can conclude that the implementation of our system is technically feasible. Since the Ropsten Testnet is less stable than the Mainnet because fewer nodes join that network, and it therefore has less computational power, as discussed for the first tool (ETH Network Status), these delays would be smaller on the Mainnet.
Conclusions
The growing adoption of RFID systems in healthcare is evident. Based on interviews with specialists, we determined the implementation needs of a trusted tracking and tracing system for medical assets. Our proposal is an access control system for healthcare assets that prevents unwanted assets from entering the wrong area because of human error or an external threat; it is therefore a prevention system that addresses both security and safety risks. Traditional access control systems are based on role-based access control (RBAC) and centralized architectures. From a technical point of view, our proposal consists of an attribute-based access control (ABAC) system for RFID systems that executes access control (AC) policies from a decentralized application (DApp) based on a blockchain architecture. This model has been demonstrated as a proof of concept in both a local environment (single node) and a public environment (Ropsten Testnet); although its technological feasibility for an eventual production implementation is demonstrated, it requires a viable underlying business model. Four recommended tools were used to demonstrate the implementation feasibility: ETH Network Status, Etherscan Ropsten Testnet Network, the Infura dashboard and truffle test.
Future research lines are, firstly, to compare a system based on the Hyperledger Fabric Blockchain with one based on the Ethereum blockchain; one common criterion for such a comparison is placing the ABAC policy inside the contract (chaincode and smart contract). Secondly, to consider an application environment based on a public blockchain built on a tokenization model. Thirdly, to create a robust mutual-authentication RFID protocol that works together with our ABAC blockchain system in order to build a secure supply chain system. Finally, to extend the ABAC and RBAC blockchain concepts to industrial manufacturing and automation environments; recently, modbus.org established security requirements that include RBAC authentication based on X.509v3 certificates.
Figure 3. General architecture of the proposed ABAC model based on the ETH blockchain.
Figure 4. Details of the system architecture.
Figure 6. XML (eXtensible Markup Language) keepAlive message permitting access.
Figure 8. Sequence diagram of tested features and used tools.
Figure 10. Sequence diagram of truffle test and contract migration.
Table 5. Tools used to test the proposal.
Deciduous Wings in Crickets: a New Basis for Wing Dimorphism
Dimorphism in wing length occurs in species of at least seven insect orders (Richards, 1961). It is conspicuous in many species of crickets. Crickets with the folded metathoracic wings extending beyond the tegmina are termed macropterous, and those with the folded metathoracic wings covered by the tegmina are termed micropterous (Alexander, 1961, 1968). Since flying crickets, such as those coming to light traps, are always macropterous, it seems safe to conclude that micropterous crickets cannot fly. It seems probable, though the evidence is sparse, that macropterous individuals generally do fly and that wing dimorphism in crickets has the same behavioral correlation with emigration that it has in aphids (Kennedy and Stroyan, 1959). In studying three species of short-tailed cricket (Anurogryllus), I noted dimorphism in wing length. Most specimens had no visible metathoracic wings (i.e. were micropterous), but 5% of those of the U.S. species and about 20% each of two West Indian species had conspicuously protruding metathoracic wings (i.e. were macropterous). At least 5 of the 6 macropterous West Indian specimens were collected at light, but none of the 4 macropterous specimens of the U.S. species were. Three of these 4 were recently molted specimens from a laboratory colony and the other was a teneral specimen dug from its burrow in the field. Obviously none had ever flown. In studies of the same species Weaver and Sommers (1969, p. 338) also noted macropterous teneral specimens, and in addition described behavior that accounts for the absence of macropterous individuals among non-teneral specimens: "When the cricket first transforms into the adult stage, the wrinkled, whitish hindwings extend for some distance beyond the posterior edge of the forewings. Usually within 24 hr the hindwings are broken off at the base and eaten."
The three species are presently known as A. muticus (De Geer), but they will be given distinctive binomials in a paper now in press (Walker 1973). Prompted by these clues, I examined more than 20 "micropterous" specimens of each of the three species of Anurogryllus. All had wing stumps rather than short wings. I pulled with tweezers on one of the wings of an alcohol-preserved, non-teneral macropterous specimen. The wing tore loose with difficulty but left a stump like those of the "micropterous" specimens. Apparently all the specimens had once been macropterous, and the dimorphism in wing length in each of the three species is based on a dichotomy in wing deciduousness rather than a dichotomy in wing length in the newly formed adult.
DISCUSSION
The term micropterous is inappropriate for crickets with the stumps of deciduous wings. The term dealated has been used for similar cases in other insects and seems appropriate here.
So far as now known, dealated Anurogryllus are originally macropterous rather than micropterous, but since wing shedding has not been observed in either of the West Indian species, the length of the shed wings is unknown. Furthermore, the timing of shedding is not known for the West Indian species. One possibility is that it occurs only in teneral adults, as in the U.S. species. If this be the case, shedding the wings could depend on either the wings being weakly attached to the stumps or on wing-removing behavior, or on both. If some or all teneral adults are competent both to retain and to shed their wings, their remaining macropterous or becoming dealates would most likely be an adaptive response to some aspect of their environment. For example, a stimulus associated with dense population might inhibit wing shedding and promote emigration.
A second and contrasting possibility concerning the timing of wing shedding in West Indian Anurogryllus is that none sheds its wings while teneral and all or nearly all individuals disperse by flight before losing their wings. Situations analogous to this possibility occur in termites, ants, and perhaps certain zorapterans (Imms, 1957). I know of no case analogous to the first possibility (i.e. dimorphism with nonmigratory individuals shedding potentially functional wings). Certain Australian roaches of the genus Panesthia are apparently like the U.S. Anurogryllus, i.e. all individuals shed well-developed wings shortly after the final molt (Mackerras, 1970, p. 273).
The extent to which crickets other than Anurogryllus have deciduous wings is unknown. If such wings were characteristic of any of the species in which wing dimorphism has been carefully studied, they would have been reported. The only instances known to me of wing shedding in crickets other than Anurogryllus are in Gryllus. R. D. Alexander (personal communication ca. 1960, 1972) told me he had seen a Gryllus bimaculatus De Geer female pull off and eat the wings of a courting male. In handling living macropterous G. rubens Scudder, I have noted that the wings occasionally detach with only a slight pull. More recently I have tried pulling the wings from alcohol-preserved specimens of G. rubens. In both macropterous and micropterous individuals the wings were often easily detached. They tore just distal to the axillary sclerites and left stumps like those in Anurogryllus. These specimens had been freshly killed within the first week of adult life and had never flown. Of more than 100 such specimens examined none was already dealated.
Two attributes of wing shedding that have probably contributed to its evolution are (1) it sometimes aids escape from predators (cf. G. rubens escaping from my grasp) and (2) it allows functionless or no-longer-functional wings to be eaten (cf. many insects, including crickets, eating their exuviae, apparently to nutritional advantage).
SUMMARY
In three species of Anurogryllus that are superficially dimorphic in wing length, all "micropterous" individuals have the stumps of longer wings. The dimorphism is in the occurrence of wing shedding and is not known to correlate with a dimorphism in wing length.
The vulnerability of fishermen’s community and livelihood opportunity through drought and seasonal changes in border area of Indonesia-Timor Leste
Communities living in coastal areas of Indonesia are affected by ecosystem degradation because their livelihoods largely depend on ecosystem services. Fishermen in Timor Tengah Utara (TTU) Regency base their livelihood on fish catches and crops, and TTU Regency is known as a drought-prone area. The agriculture and fisheries sectors play the central role in communal livelihoods. This research was conducted to provide information and a baseline study to support intervention schemes for reducing the vulnerability level of coastal communities. The research was conducted in Insana Utara, Biboki Moenleu and Biboki Anleu Districts. A social-ecological and descriptive statistical analysis was undertaken, involving 53 fishermen, 4 women's groups, 11 clan elders and local government staff as respondents. The data show that the majority of the fishermen are small-scale commercial fishermen and possess a high level of vulnerability. The factor that most affects the fishermen's livelihood is their job diversification as farmers, which depends primarily on crops and relies on rainfall. The vulnerability context of fishermen in TTU can be reduced by optimizing and enhancing communal institutional capacity and by increasing cooperation among stakeholders and government, as well as women's participation.
Introduction
Coastal communities invariably depend for their livelihood on ecosystem services. Beyond the economic aspect, the ecosystem is also closely related to social and cultural values within the community. Declining ecosystem quality has commonly led such communities to poverty and has interfered with their culture [1]. However, every community also has a resilience context, which is the opposite of the vulnerability context. There are three aspects of vulnerability: exposure, sensitivity and adaptive capacity [2]. Exposure is the degree to which an ecosystem is affected, determined by environmental conditions. Sensitivity is the level of dependence on natural resources and on the technologies used to harvest those resources. Adaptive capacity is a latent characteristic that reflects people's ability to anticipate and respond to changes and to minimize, cope with and recover from the consequences of change [3].
Timor Tengah Utara Regency is located between the east and west regions of Timor Leste and is crossed by an international road that connects them. Geographically, this is a strategic location to be developed and could increase the economic opportunities of the communities around it. Regardless of its strategic location, the inhabitants of the villages along the international road, which lie in coastal areas, still live in poverty. The majority of them are fishermen and farmers. Dwelling in the border area is considered challenging. Fishermen in the border area potentially face border conflicts when claiming their fishing grounds with the neighboring country or when distributing and marketing their catch. Living in a border area without sufficient infrastructure support is itself a form of vulnerability. In 2014 there was no fishing port for the fishermen, and because the market is quite far away, the fishermen threw away their catches at sea. The situation would become worse if the fishing port and the cold chain facilities in the neighboring country were better, as the resource could then easily be smuggled out and landed in that port.
No previous report has described the coastal community structure, stratification, interaction and vulnerability level in this area. This information is needed to support the next policies or programs implemented in the area to overcome poverty and to increase livelihood opportunities, particularly to strengthen the fishery sector, considering that the northern coastal areas of TTU are susceptible to limited water availability for agricultural land management but have underutilized fisheries resources.
Study area
This research was carried out in Timor Tengah Utara Regency, Indonesia, located at the northern end of NTT on the border with Timor Leste (figure 1). The study sites were three districts, namely Insana Utara, Biboki Moenleu and Biboki Anleu. These districts were selected because the majority of their inhabitants are fishermen. The area has a rural character, dominated by fishing and agriculture as the major economic activities. The research sites were selected according to accessibility, high dependence on marine resources and differences in predominant occupations and ethnic composition.
Questionnaire and sampling design
Opportunistic sampling was used in this study to select participants for semi-structured interviews [4].
To reduce the risk of bias inherent in opportunistic sampling, the interviewees were chosen throughout the entire area of each village to obtain a better understanding of the vulnerability level in the community. The social-ecological analysis involved 53 fishermen, 4 women's groups, 11 clan elders and local government staff as respondents. Quantitative questionnaire surveys were preceded by qualitative assessments of the study villages to obtain background information, gain a better understanding of local conditions and refine the study questions. In the qualitative data assessment, the focus lies on the quality of statements, i.e., the analysis of what happens and how in a given situation.
Besides the sampling and the questionnaire administered to the fishermen, field observation and focus group discussions were held as introductory research. This process involved the traditional leaders of each clan dwelling in the coastal area, traders, local government, the fish-processing women's group and shop or outlet owners. Due to the limited time available for this study, Participatory Rural Appraisal (PRA) techniques through semi-structured interviews, focus groups and (non-participant) observation, which mostly generated qualitative data and could collect only limited quantitative data, were used as introductory research and to validate the collected data.
Fisherman profiles
Fishing activity on the north coast of Timor Tengah Utara (TTU) Regency is characterized as traditional fisheries. Regardless of the fishing gear used, the fishermen observed in the five villages of TTU have been fishermen for a long time: on average they started the profession 21 years ago, and for the majority it was inherited from their parents. Most of the fishermen (73.58%) graduated only from elementary school. The average age of the respondents is 42 years, and based on further interviews they are willing to pass their skills on to their children. As much as 34% of the fishermen own a boat; in the study area, the boat most commonly owned is a traditional wooden boat approximately 7 m long and 1 m wide, and only 2% use an engine. The other 66% of the fishermen are boat-less; they use nets to catch fish and work as labourers on other boats, dividing the catch equally (50%) after the operational costs are deducted. The composition of fishing gear is shown in table 2. Working as a fisherman in TTU is not a primary job. Their primary job is mainly farming, with fishing as a side job performed in their spare time, yet fishing gradually becomes the main activity because farming cannot be done when the dry season comes. During the dry season the vulnerability of the fishermen increases, because it coincides with the low fishing season.
Catch and price dynamic during peak and low catch season
The fishermen at the research site have a unique counting system for the catch. They do not weigh their catches in kilograms but use basket scales instead; one basket can be estimated at 50 kg. According to the data gathered from the fishermen, effort and catch vary widely, showing that the effort fishermen make to obtain their catches differs from one to another. Based on the linear analysis, an increase in effort or time is not matched by a corresponding increase in catch. As shown in figure 2, catches are uncertain: an increase in effort does not show a significant increase in the weekly catch. The optimum fishing trip duration found in this study, which provides the optimum return, was around 20 hours/week during the low season; however, further cost-revenue analysis needs to be done for the peak, low and moderate seasons as well. Based on figure 3, there are significant differences in price, catches and weekly income for fishermen between the peak, moderate and low fishing seasons. Income in the peak season is roughly four times higher than in the low season, yet the fish price is much lower (by more than 100%) than in the low season. There are three fishing seasons, including a low season in which fish are scarce and favourable species are hard to catch. As a comparison, in the Spermonde Islands, South Sulawesi, there were also differences between the peak and low seasons, related to the wet season, in which the catch was usually lower than in the dry season. However, the differences between the low and peak seasons were different from those observed in TTU: there, fishermen caught 28.62 kg/week over six working days, valued at 327.12 USD, during the low season, and 38.64 kg/week, valued at 412.92 USD, during the peak season [6]. Meanwhile, in TTU the fishermen caught 21 kg/week during the low season, valued at 74.27 USD, and 198.9 kg/week, valued at 337.75 USD, during the peak season. We found that during the low season the catches are very small, while during the peak season the volume of catches is high but the value is cheap. This also relates to the commodity caught, mostly mackerel or sardine, which is considered a commodity with a regular price. Whether in the low or the peak season, the vulnerability of the fishermen persists.
Patron-client system in TTU
Patron-client systems are typical of small-scale fisheries, especially tropical fisheries in Southeast Asia [7]. They are widespread because the patrons provide the link between the fishermen and the buyers [6], as fishermen usually stay in remote areas far from the market. In coastal communities with a high vulnerability level, patron-client relationships have profound implications for the livelihood security and vulnerability of fishing households: they provide economic income through fishing or fish trading and social security in times of hardship, and they contribute flexible but potentially exploitative credit-debt systems [8]. Only 13 % of the fishermen in TTU Regency are involved in a patron-client relationship. The majority are individual fishermen who own their fishing gear and need no boat. These fishermen have no interaction with middlemen, and for 77 % of them the harvest-sharing system allocates 100 % of the catch to the fisherman. This shows that commercial fishing in TTU Regency is a small-scale fishery. Not long ago, fishermen in TTU were still classified as subsistence fishermen, while commercial fishing activity was concentrated around Kupang and other areas with proximity to the market. Fishermen in TTU Regency initially built small boats and used them as assets to secure food and to cope with the drought when the dry season comes. It also appears that the older fishermen in TTU Regency did not expand their fishing gear, even though they use it to fulfil their economic needs.
Vulnerability through the drought season
Based on data from the Indonesian Agency for Meteorology, Climatology and Geophysics (BMKG; Figure 5), NTT is categorized as a region with low to medium rainfall each year, which experiences long droughts and has problems with the fresh water supply during several months of the year [5]. There is evidence that rainfall and climate affect the catch [17]. In TTU, the peak catches usually occurred during August-September. However, this pattern has been changing since 2015. The dry season, which usually starts in June and is indicated by the decrease in rainfall (Figure 5), also marks the start of the fishing season. However, within three years this has changed: at present, during the drought, the coastal community can neither fish nor farm. This harms their livelihood, and they have to adapt to this change.
Discussion
The vulnerability context is understood as the difficulty for a system or community to cope with and adapt to changes that potentially reduce its resilience [3]. It is also described as a function of exposure, sensitivity and adaptive capacity [2,3,11,12,13,14,15]. For coastal communities, exposure, sensitivity and adaptive capacity are interpreted in relation to threats to the marine resource, such as habitat change, fishing pressure and climate impacts [2,13,16]. Some coastal communities in Indonesia do not depend exclusively on fishing but often prefer to diversify their livelihood portfolio with activities such as farming and construction in addition to fishing [9]. In TTU the situation is the inverse: the native fishermen have a dual existence as fishermen and farmers, and their main occupation has traditionally been farming. With regard to the household capital asset base and the vulnerability context, the assets of fishermen in TTU are minimal. In terms of human capital, fishermen in the TTU area have low access to fish resources for several reasons, first of all the lack of boats and fishing gear: the fishermen who came from Celebes belong to the Buton Tribe, whereas the natives (Dawan Tribe) are beginners who know only a few fishing techniques. Access is also limited by their skill in operating the fishing gear and coping with the seasons. Physical capital that supports fishing activity is also scarce, such as facilities for a cold chain system and facilities for landing, handling and processing the catch; the traditional way is to dry the catch directly. Developing products when catches are high could increase the added value, since the fish price falls by up to 50 % in the peak season. Increasing other livelihood opportunities requires collaboration between authorities and local users. To smooth the marketing, fishermen and stakeholders can rely on an efficient information flow and on the involvement of government and non-government stakeholders with good leadership, a clear structure and function, and the active participation of both government and non-government stakeholders [10]. To assist and support the community, structured and transparent governmental and non-governmental stakeholders need to provide the fishermen and their families with skills that extend their fishing skills and, at the same time, add value to the fisheries resource through proper cold chain, handling and processing facilities. The establishment of such facilities in the border areas, especially in Eastern Indonesia, is a core agenda of Indonesia's national development plan, and it should be matched by the capacity of the local government and of the beneficiary communities.
The establishment of infrastructure such as roads, trading ports and fishing ports in the border areas, especially in the fisheries sector, has been improving; since 2015 the construction of roads and a seaport has begun. However, there are obstacles in this development process, in particular the community's readiness to adapt to the changes. The development is expected to accelerate economic activity in the border areas and increase the sovereignty of the nation, giving the communities in the border areas a chance for a better life. However, it could also lead to the exclusion of the local community if economic growth lures many people from other areas to join the economic activities and leaves the local community behind. Therefore, the adaptive capacity of the local communities and of the fishermen needs to be improved, by involving them in productive meetings and training to protect the ecosystems on which their livelihoods depend and by making sure that nobody in the community is left behind by the fast-growing development of access. Lastly, attention is needed to efforts to restore the marine ecosystem and to develop a water-efficient farming system that enriches their alternative livelihoods.
Conclusion
The TTU community is categorized as a vulnerable community. They recognize the changing environment (climate and habitat modification) as an exposure, yet they cannot cope with the change. The increase in fishing effort was not followed by an increase in catches, which indicates that there are environmental issues related to the fish stock in TTU waters with which the fishermen have not yet been able to cope. To assist and support the community, structured and transparent governmental and non-governmental stakeholders need to provide the fishermen and their families with skills that extend their fishing skills and, at the same time, add value to the fisheries resource through proper cold chain, handling and processing facilities.
|
2019-05-30T23:44:09.127Z
|
2018-03-01T00:00:00.000
|
{
"year": 2018,
"sha1": "d0164205d0a35aea5a96005138f8870170f6acb2",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/139/1/012030",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "32b10b7daf408978a9b3ee8c00d73a0d9796f6c8",
"s2fieldsofstudy": [
"Environmental Science",
"Sociology"
],
"extfieldsofstudy": [
"Physics",
"Geography"
]
}
|
18237409
|
pes2o/s2orc
|
v3-fos-license
|
Multiband and Lossless Compression of Hyperspectral Images
Hyperspectral images are widely used in several real-life applications. In this paper, we investigate the compression of hyperspectral images by considering different aspects, including the optimization of the computational complexity in order to allow implementations on limited hardware (e.g., on hyperspectral sensors). We present an approach that relies on a three-dimensional predictive structure. Our predictive structure, 3D-MBLP, uses one or more previous bands as references to exploit the redundancies along the third dimension. The achieved results are comparable with, and often better than, other state-of-the-art lossless compression techniques for hyperspectral images.
Introduction
Hyperspectral imaging instruments collect information by exploring the electromagnetic spectrum of a specific geographical area. In contrast to the human eye and traditional camera sensors, which can only perceive visible light (i.e., the wavelengths between 360 and 760 nanometers (nm)), spectral imaging techniques cover a significantly wider portion of the spectrum (i.e., the frequencies of ultraviolet and infrared rays). It is important to note that the spectrum is subdivided into different spectral bands. Therefore, hyperspectral images can be viewed as three-dimensional data (often referred to as datacubes).
For instance, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) [1] hyperspectral sensor (NASA Jet Propulsion Laboratory (JPL) [2]) measures from 380 to 2500 nm of the electromagnetic spectrum. In particular, the spectrum is subdivided into 224 spectral bands.
From the analysis of hyperspectral data, it is possible to identify and/or classify materials, objects, etc. Such capabilities are related to the fact that some objects and materials have a unique signature (a sort of fingerprint) in the electromagnetic spectrum, therefore this fingerprint can be used for identification purposes.
Hyperspectral data are widely used in real-life applications including agriculture, mineralogy, physics, surveillance, etc. For instance, in geological applications the capabilities of hyperspectral remote sensing are exploited to identify various types of minerals or to search for minerals and oil.
One of the most important parameters to evaluate the precision of a sensor is the spectral resolution, which is the width between two adjacent bands. For instance, for the AVIRIS hyperspectral images the spectral resolution is 10 nm. The spatial resolution is a relevant aspect too. Informally, the spatial resolution denotes the extent of the geographical area that the sensor maps into a single pixel. It can be difficult to recognize materials and/or objects from a pixel if too wide an area is mapped into it.
Many hundreds of gigabytes can be produced every day by a single hyperspectral sensor. Therefore, it is necessary to compress these data in order to transmit and store them efficiently.
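To give a concrete sense of these volumes, the raw size of a single AVIRIS scene can be computed from the scene dimensions reported later in Section 4.1 (614 columns, 512 lines, 224 bands, 16 bits per sample); the short sketch below performs this calculation.

```python
# Raw storage required by one uncompressed AVIRIS scene.
columns, lines, bands, bits_per_sample = 614, 512, 224, 16
raw_bytes = columns * lines * bands * (bits_per_sample // 8)
print(f"{raw_bytes / 2**20:.1f} MiB per scene")  # about 134 MiB, before compression
```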
Since such data are often used in delicate tasks and there are high costs involved in the acquisitions, lossless compression is generally required.
This paper focuses on a novel technique for the lossless compression of hyperspectral images. The proposed algorithm is based on the predictive coding model, and the proposed predictive structure uses a configurable multiband three-dimensional structure. It is possible to customize our predictor by selecting the number of previous bands that will be used as references and the width of the prediction context. Through appropriate configurations of such parameters, the computational complexity and the memory usage can be optimized depending on the available hardware.
Because of its high configurability, our algorithm is suitable for "on board" implementations on hardware with limited capabilities, as for example on an airplane or on a satellite.
The experimental results we have achieved are comparable with, and often better than, other state-of-the-art approaches. Our scheme provides a good trade-off between computational complexity/memory usage and compression performance.
The rest of the paper is organized as follows: Section 2 briefly reviews previous work on lossless and lossy compression of hyperspectral images, Section 3 outlines the proposed lossless compression approach, and Section 4 describes the experimental results. Finally, Section 5 highlights our conclusions and future work directions.
Related Works
Lossless compression of hyperspectral images is generally based on the predictive coding model. Predictive approaches have several advantages: they use limited resources in terms of computational power and memory usage and achieve good compression performance. Often, these models are suitable for on board implementations.
The Consultative Committee for Space Data Systems (CCSDS) has specified the CCSDS 123 standard, which outlines a method for lossless compression of multispectral and hyperspectral image data and a format for storing the compressed data [7,8]. The main objective is to establish a Recommended Standard for multispectral and hyperspectral images, and to specify the compressed data format. In the literature, many proposed approaches implement the recommendations of the CCSDS 123 standard for the lossless compression of hyperspectral images, for instance the ones described in [9][10][11].
Other approaches are designed for offline compression, since they use more sophisticated techniques and/or they require the complete availability of the hyperspectral image. These approaches are not suitable for an on board implementation but can achieve better compression performance. Mielikainen, in [12], proposed an approach for the compression of hyperspectral images through a Look-Up Table (LUT). LUT predicts each pixel by using all the pixels in the current and in the previous band, by searching for the nearest neighbor in the previous band which has the same value as the pixel located at the same spatial coordinates as the current pixel. LUT achieves high compression performance, but it uses more resources in terms of memory and CPU usage.
Other lossless techniques are based on dimensionality reduction through the principal component transform [13] or on clustered differential pulse code modulation [14]. An error-resilient lossless compression technique is proposed in [15].
For the lossy compression of hyperspectral images, the compression algorithms are generally based on 3-D frequency transforms, for example the 3-D Discrete Wavelet Transform (3D-DWT) [16], the 3-D Discrete Cosine Transform (3D-DCT) [17], the Karhunen-Loève transform (KLT) [18], etc. These approaches are easily scalable. On the other hand, they must keep the entire hyperspectral image in memory at the same time. Locally optimal Partitioned Vector Quantization (LPVQ) [19,20] applies a Partitioned Vector Quantization (PVQ) scheme independently to each pixel of the hyperspectral image.
The variable sizes of the partitions are chosen adaptively and the indices are entropy coded. The codebook is included as part of the coded output. This technique can also be used in lossless mode, but the high costs required in terms of CPU and memory do not allow an on board implementation.
Lossless Multiband Compression for Hyperspectral Images (LMBHI)
Hyperspectral images present two types of correlation: contiguous bands are strongly correlated (inter-band correlation), and neighboring pixels are generally correlated, since, for instance, two adjacent pixels map adjacent areas possibly composed of the same material (intra-band correlation). Such characteristics are exploited by compression strategies in order to reduce the redundancy along the third dimension. The main aim of our approach, which we denote as Lossless MultiBand compression for Hyperspectral Images (LMBHI), is to exploit this correlation with a predictive coding model.
In detail, for each pixel X of the input hyperspectral image, LMBHI performs the prediction of the current pixel X by selecting the appropriate prediction context of X (three-dimensional or bi-dimensional).
All the pixels that belong to the first band are predicted by using a bi-dimensional predictive structure, the 2-D Linearized Median Predictor (2-D LMP) [21], which exploits only the intra-band correlation, since the first band has no reference bands. The other pixels are predicted by using a new three-dimensional predictive approach, which uses a prediction context composed of the neighboring pixels of X and its reference pixels in the previous bands.
Once the prediction step is computed, the prediction error e (defined in Equation (1)) is modeled and coded.
Review of the 2-D Linearized Median Predictor (2D-LMP)
The 2-D Linearized Median Predictor (2D-LMP) [21] uses a prediction context composed of three neighboring pixels of X, namely I_A, I_B, and I_C, as shown in Figure 1. In particular, the predictive structure is derived from the well-established 2-D Median Predictor, which is used in JPEG-LS [22]. The 2-D Median Predictor has the predictive structure outlined in Equation (2):
X̂ = min(I_A, I_B)    if I_C ≥ max(I_A, I_B)
X̂ = max(I_A, I_B)    if I_C ≤ min(I_A, I_B)
X̂ = I_A + I_B − I_C    otherwise    (2)
Basically, the Median Predictor is in charge of selecting one of the above three options, depending on the context. By combining all the three options, it is possible to obtain the predictive structure of 2D-LMP, defined as in Equation (3).
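As an illustration, a minimal sketch of the Median Predictor of Equation (2) is given below; the function name and signature are our own choices rather than part of the original paper, but the three-way selection is the standard MED rule used in JPEG-LS.

```python
def median_predictor(i_a, i_b, i_c):
    """2-D Median Predictor (MED): predict the current pixel X from its
    neighbours I_A, I_B and I_C (the context of Figure 1)."""
    if i_c >= max(i_a, i_b):
        return min(i_a, i_b)
    if i_c <= min(i_a, i_b):
        return max(i_a, i_b)
    return i_a + i_b - i_c
```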
3-D MultiBand Linear Predictor (3D-MBLP)
The Multiband Linear Predictor (3D-MBLP) uses a prediction context defined by two parameters:
• B: the number of previous bands that are considered for the prediction;
• N: the number of samples of the current band and of each previous band that are used for the creation of the prediction context.
First of all, we define a bi-dimensional enumeration E, graphically represented in Figure 2. The main aim of such an enumeration is to permit the relative indexing of the pixels with respect to the pixel which is currently under analysis (which has 0 as index in Figure 2).
In order to define the prediction context of 3D-MBLP, we use the following notation:
• I_{i,j}: indicates the i-th pixel of the j-th band, according to the enumeration E;
• I_{0,j}: denotes the pixel of the j-th band that has the same spatial coordinates as X, according to the enumeration E.
In the following, we suppose that the current band is the k-th band. In particular, by using our notation, X can also be addressed as I_{0,k}. In detail, the 3D-MBLP predictor is based on the least squares optimization technique and the prediction is computed by means of Equation (4):
X̂ = Σ_{j=1..B} α_j · I_{0,k−j}    (4)
The coefficients α_0 = [α_1, ..., α_B]^T are chosen to minimize the energy P of the prediction error over the N context samples, described by Equation (5):
P = Σ_{i=1..N} ( I_{i,k} − Σ_{j=1..B} α_j · I_{i,k−j} )²    (5)
P can be rewritten in matrix notation. Subsequently, by taking the derivative of P and setting it to zero, we obtain the optimal coefficients by means of Equation (6).
Once the coefficients α_0, which solve the linear system of Equation (6), are computed, the prediction X̂ of the current pixel X can be calculated.
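For clarity, a small sketch of the least-squares machinery of Equations (4)-(6) is shown below. It is an illustrative reconstruction under the assumptions stated above (the context consists of the N causal neighbours of X in the current band and the same N positions in each of the B previous bands, plus the B co-located pixels), not the authors' reference implementation.

```python
import numpy as np

def mblp_predict(context_current, context_previous, colocated_previous):
    """Sketch of a multiband least-squares prediction.

    context_current    : shape (N,)   -- N causal neighbours of X in the current band
    context_previous   : shape (N, B) -- the same N positions in the B previous bands
    colocated_previous : shape (B,)   -- pixels co-located with X in the B previous bands
    """
    A = np.asarray(context_previous, dtype=float)   # N x B design matrix
    y = np.asarray(context_current, dtype=float)    # N target samples
    # Least-squares solution of the system; equivalent to minimizing the error energy P.
    alpha, *_ = np.linalg.lstsq(A, y, rcond=None)
    # Predict X as a linear combination of the co-located pixels in the previous bands.
    return float(np.dot(alpha, colocated_previous))
```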
Modeling and Coding of Prediction Errors
Starting from the consideration that a prediction error can assume positive or negative values, similarly to [23] we use an invertible mapping function (Equation (7)) in order to obtain only non-negative values. It is important to note that the mapping function does not alter the redundancy among the errors. For the coding of the mapped prediction errors we use the Arithmetic Coder (AC) scheme.
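The body of Equation (7) is not reproduced in the text; a commonly used invertible mapping of this kind folds the signed errors onto the non-negative integers, and the sketch below shows that standard folding as an assumed example, together with its inverse.

```python
def map_error(e):
    """Map a signed prediction error to a non-negative integer (invertible)."""
    return 2 * e if e >= 0 else -2 * e - 1

def unmap_error(m):
    """Invert map_error, recovering the signed prediction error."""
    return m // 2 if m % 2 == 0 else -(m + 1) // 2
```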
Computational Complexity
The main computational cost of our approach is due to the resolution of the linear system of Equation (6), used to generate the optimal coefficients needed for the computation of the predicted pixel. By using the normal equation method, the linear system can be solved with (N + B/3) · B² floating-point operations [24]. Figure 3 shows the trend of the computational complexity of our predictive model, in terms of the number of operations (Y-axis) required to solve the linear system, by using configurations with different parameters (X-axis).
If we use only the previous band as a reference (B = 1), only about 20 operations are needed to solve the system. Instead, four or nine times more operations are required if we use two previous bands (B = 2) or three previous bands (B = 3). A linear system can have three kinds of solutions: no solution, one solution, or infinitely many solutions. In the first and the third scenarios, the proposed predictive structure cannot perform the prediction. In these scenarios, it is desirable to use another low-complexity predictive structure, and we have used the 3-D Distances-based Linearized Median Predictor (3D-DLMP) [21].
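The operation counts above follow from the (N + B/3)·B² cost of the normal-equation solution; the small helper below (our own illustration, using N = 16 as an example value) shows how the cost grows with B.

```python
def normal_equation_ops(n, b):
    """Approximate floating-point operations to solve the linear system of
    Equation (6) with the normal-equation method: (N + B/3) * B^2."""
    return (n + b / 3.0) * b * b

base = normal_equation_ops(16, 1)
# With N = 16, using B = 2 or B = 3 previous bands costs roughly 4x or 9x
# more than using a single previous band.
print([round(normal_equation_ops(16, b) / base, 1) for b in (1, 2, 3)])
```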
Experimental Results
We performed experiments on two datasets of AVIRIS hyperspectral images: the 1997 AVIRIS Dataset (Section 4.1) and the CCSDS Dataset (Section 4.2).
In our experiments we also considered the PAQ8 algorithm (described in [25]) for the coding of the prediction errors. PAQ8 is a state-of-the-art lossless compression algorithm that belongs to the PAQ family of compression algorithms. It is important to note that the PAQ8 family is strictly related to the well-established Prediction by Partial Matching (PPM) scheme [26]. In general, the PAQ8 algorithm achieves a high degree of compression, but it has significant computational complexity. Therefore, such a scheme is not fully adequate for on board applications.
The experiments are performed by using a non-optimized Java-based proof-of-concept of our approach, which takes a few minutes on a medium-end laptop (equipped with an Intel Core i5 4200M processor and 8 GB of RAM).
1997 AVIRIS Dataset
Each AVIRIS hyperspectral image of the AVIRIS '97 dataset is subdivided into scenes (the number of scenes is reported in Table 1). It is important to note that each scene has 614 columns, 512 lines, and 224 spectral bands. In addition, each pixel is stored by using 16 bits. In Tables 2 and 3 we report the results achieved by using B = 1 with N = 8 and N = 16, respectively. Subsequently, in Tables 4 and 5 we report the results achieved by using B = 2. Finally, Tables 6 and 7 report the results achieved by using B = 3 and the N parameter equal to 8 and to 16, respectively. All the results are reported in terms of Bits Per Sample (BPS). In each table we report the results achieved by using both the AC and the PAQ8 schemes for the coding of the prediction errors. In Tables 8 and 9 the average results on all the tested hyperspectral images are reported. In detail, the first column indicates the N parameter, and columns two to four report the average results for B = 1, B = 2, and B = 3, respectively. As can be observed from Figures 4 and 5, which graphically represent the average results, the best results are achieved when the following parameters are used: N = 16 and B = 2 (Figures 4b and 5b). The worst results are obtained by using the following parameters: N = 8 and B = 3.
Comparison with other Approaches
In order to compare the experimental results achieved by our approach, we consider the Compression Ratio (C.R.) as a measure of the compression performance. In detail, Table 10 reports the results achieved by considering several parameter configurations on all the hyperspectral images of the used dataset. More precisely, the results are reported in terms of C.R. and they are compared with other state-of-the-art lossless compression schemes. From the experimental results, it should be observed that LMBHI achieves its best results by using two previous bands as references (i.e., when B = 2); in this configuration, LMBHI outperforms, on average, all the other state-of-the-art approaches.
On the other hand, when only the previous band is used (i.e., when B = 1), LMBHI outperforms all the compared state-of-the-art techniques, with the exception of LPVQ. However, LPVQ is not suited for on board implementation.
In this latter case, our approach achieves better results than LPVQ on three of the five hyperspectral images (Moffett Field, Jasper Ridge, and Low Altitude), but LPVQ gains on Cuprite and especially on Lunar Lake. In addition, LUT obtains better results than our approach on two of the four compared hyperspectral images: Lunar Lake and Jasper Ridge.
The high flexibility and adaptability of our approach make it suitable for on board implementations.
In fact, the coding parameters can be customized depending on the hardware available.
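The two figures of merit used in these comparisons are related in a simple way: for images stored at 16 bits per sample, the compression ratio is the original bit depth divided by the achieved BPS. The sketch below states this relationship; it is the conventional definition we assume here, not a formula taken from the paper.

```python
def bits_per_sample(compressed_bytes, n_samples):
    """Average number of bits per sample after compression (BPS)."""
    return 8.0 * compressed_bytes / n_samples

def compression_ratio(original_bps, achieved_bps):
    """C.R. = original bits per sample / compressed bits per sample."""
    return original_bps / achieved_bps

# Example: a 16-bit image compressed to 6 BPS gives a C.R. of about 2.67.
print(round(compression_ratio(16, 6.0), 2))
```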
CCSDS Dataset
In this section, we focus on the experimental results achieved on the CCSDS Dataset, which is composed of five calibrated and seven uncalibrated hyperspectral images. This dataset is provided by the Consultative Committee for Space Data Systems (CCSDS) Multispectral and Hyperspectral Data Compression [27].
In Table 11, we briefly describe the dataset by reporting the number of scenes (second column) and the number of samples per line (third column) for the calibrated and the uncalibrated images (first column). The samples of the calibrated and the uncalibrated images are stored by using 16 bits (16-bit signed integers for the calibrated and 16-bit unsigned for the uncalibrated), except for the Hawaii and Maine images, in which the samples are stored by using 12 bits (unsigned) [27]. Each image is composed of 512 lines. In Table 12, we report our results in terms of bits per sample (BPS). The results refer to the calibrated hyperspectral images (first column), by using several configurations of our approach (columns from the second to the fourth). Analogously to Table 12, in Table 13 we report our experimental results for the uncalibrated images. In each table, we report the results obtained by using the AC and the PAQ8 schemes.
The best results are achieved when the following configuration is used: B = 2 and N = 16.
Comparison with Other Approaches
We have compared our results on the CCSDS dataset with other state-of-the-art approaches. Table 14 reports the results achieved by considering several values of the B and N parameters on the calibrated hyperspectral images of the CCSDS dataset, by using the AC scheme as well as the PAQ8 scheme for the coding of the prediction errors. Tables 15 and 16 report the comparison between the proposed approach and other approaches for the 16-bit uncalibrated and 12-bit uncalibrated hyperspectral images of the CCSDS dataset.
From Tables 12 and 13, it is clear that the best results are achieved when N is equal to 16 and B is equal to 2.
By looking at Table 14, it should be observed that our approach, when the configuration N = 16 and B = 2 is used, achieves results that are comparable with, but slightly worse than, those of FL and FL# [27]. Our approach outperforms all the other techniques when the PAQ8 scheme is used for the coding of prediction errors with N = 16 and B = 2. However, in such a configuration the computational complexity of our approach is not suitable for on board implementations.
Conclusions and Future Works
In this paper, we have investigated the lossless compression of hyperspectral images by introducing a multiband three-dimensional predictive structure, which we named 3D-MBLP.
Because of its configurability, it is possible to implement the algorithm on different types of sensors, by using an appropriate configuration for each type of sensor. Moreover, the proposed approach can also be easily scaled for future-generation sensors, which will have better hardware capabilities. The experimental results we achieved are comparable with, and often outperform, other state-of-the-art lossless compression techniques.
In future work, we will include a pre-processing stage before the compression of the hyperspectral image, which reorders the bands according to their correlation. This will possibly improve the compression performance, as in [28,29].
Figure 1 .
Figure 1. The prediction context of the 2D-LMP predictive structure. The gray part is already coded and the white part is not coded yet.
Figure 2 .
Figure 2. The enumeration E we used for the relative indexing with respect to the current pixel, identified with 0 as index.
Figure 3 .
Figure 3. The number of operations (Y-axis) required to solve the linear system of Equation (6), by using different parameters (X-axis).
Figure 4 .
Figure 4. Graphical representation of the average results by using the AC scheme for the coding of the prediction errors. N = 8 (a) and N = 16 (b).
Figure 5 .
Figure 5. Graphical representation of the average results by using the PAQ8 scheme for the coding of the prediction errors. N = 8 (a) and N = 16 (b).
Table 1 .
Description of the dataset used.
Table 2 .
Achieved results by using the following parameters: N = 8, B = 1. (N.P. indicates that the scene is not present).
Table 3 .
Achieved results by using the following parameters: N = 16, B = 1.
Table 4 .
Achieved results by using the following parameters: N = 8, B = 2.
Table 5 .
Achieved results by using the following parameters: N = 16, B = 2.
Table 6 .
Achieved results by using the following parameters: N = 8, B = 3.
Table 7 .
Achieved results by using the following parameters: N = 16, B = 3.
Table 8 .
Average Results on the 1997 AVIRIS Images (AC).
Table 10 .
Compression results, in terms of compression ratio (C.R.) achieved by LMBHI (by using various parameter configurations), compared to other lossless compression methods.
Table 11 .
Description of the CCSDS dataset.
Table 12 .
Achieved results for the calibrated images of the CCSDS dataset.
Table 13 .
Achieved results for the uncalibrated images of the CCSDS dataset.
Table 14 .
Comparison with other lossless compression methods (calibrated images).The results are reported in bits-per-sample (BPS).
Table 15 .
Comparison with other lossless compression methods (16-bit uncalibrated images).The results are reported in bits-per-sample (BPS).
Table 16 .
Comparison with other lossless compression methods (12-bit uncalibrated images).The results are reported in bits-per-sample (BPS).
|
2016-03-22T00:56:01.885Z
|
2016-02-18T00:00:00.000
|
{
"year": 2016,
"sha1": "476b225a06e4bdbca8331f081596c4271840cbc6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4893/9/1/16/pdf?version=1455782523",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "476b225a06e4bdbca8331f081596c4271840cbc6",
"s2fieldsofstudy": [
"Environmental Science",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
44777589
|
pes2o/s2orc
|
v3-fos-license
|
Book review: Epigenetics (second edition, eds. Allis, Caparros, Jenuwein, Reinberg)
Epigenetics is a dynamic and well-established branch of genetics. It deals with heritable traits, which are not transmitted by the sequence of DNA but rather by the state of chromatin. The evolving landscape of epigenetic research has been reviewed in many excellent books and monographs. Amongst these, the second edition of Epigenetics by Cold Spring Harbor Laboratory Press (Allis et al., 2015) stands out as one of the most comprehensive references for all major developments and perspectives in the field. Built upon the foundation of the first edition (published in 2007), this new edition continues to deliver a solid basic knowledge of various epigenetic processes in model organisms (including yeasts, ciliates, plants, insects, and mammals), of gene imprinting, of dosage compensation, of DNA methylation, and histone modifications. Twelve new chapters track the recent developments in epigenetic processes in cancer, neuronal development, and mental illness, in responses to the environment and in long-range chromatin interactions. All chapters are written by prestigious researchers and are nicely organized to start with a brisk summary followed by a short overview and heavily and richly illustrated main text. In this respect, the book targets a broad audience and can easily serve as an educational resource in specialized higher level undergraduate and graduate university courses or as a reference for advanced level scientists. Its breadth of topics makes it a compulsory item on the shelf of every lab that is closely or remotely involved in epigenetic studies.
The first chapter (written by G. Felsenfeld) gives us a brief history of epigenetics. This history is neither short nor simple. It began some 60 years ago with the notion that all cells in a multicellular organism contain the same DNA, but the gene expression patterns in differentiated tissues dramatically vary. In chapter 3 the editors take this lead and provide an extended and comprehensive overview of chromatin structure and how its transmission, remodeling, and modifications regulate distinct gene expression programs from the same genomic DNA. The editors place a special emphasis on “genetics vs. epigenetics.” They comment on the “missing heredity” in certain enigmatic phenomena and human pathologies and introduce the idea that all these are caused by epigenetic mechanisms. This very entertaining chapter could easily be published as a stand-alone guide to the universe of epigenetic heredity. At the same time, it is a long summary of the more detailed material that is extensively discussed in the remaining 33 chapters of the book. These chapters occupy more than 800 pages and cover practically every aspect of contemporary epigenetics. They are comprehensive and informative, but at times this massive volume could be overwhelming. Some of the chapters focus on the studies of epigenetic processes in model organisms and what we learn from them. Besides being very enlightening, these chapters are an excellent reference for the names and the functions of the multiple protein complexes, enzymes, gene names and homologs in different organisms. Every researcher in the field will appreciate the thoroughness of this compiled and updated information. Other chapters encompass general phenomena observed in many species, such as the effects of the environment on the epigenome or the roles of non-coding RNAs in the regulation of chromatin structure. Finally, epigenetic aspects of mammalian stem cell biology, development, immunity, genetic disease, and cancer are discussed. These particular chapters are perfectly suited for an aspiring medical student or for graduate students in the broader fields of developmental biology, immunology, or cancer biology. A missing theme in this textbook is a dedicated chapter(s) on the emerging trends in pharmacology to target and modulate the activity of enzymes that are involved in various epigenetic processes.
A most interesting addition to the second edition of Epigenetics are the short 2–3 page articles written by junior scientists who have already left their mark in the field. They contain only one figure and emphasize a single message. This simplicity makes them an excellent additional reading material for undergraduate courses. These short essays are compiled in chapter 2 and give a historic twist and sometimes a personal perspective to recent ground-breaking discoveries. The topics span from short and long non-coding RNAs to histone modifications, from histone modifying enzymes to chromosome folding and cellular reprogramming.
At the end of the book we find two very useful appendices. The first one lists several well-maintained web resources and databases that will continue to update us with information after the publication of this edition of Epigenetics. The second appendix leads us through the maze of documented histone modifications and their known functions.
A true value of this book is its reader-friendly approach to the presentation of the material. For example, it is not easy to understand how non-coding RNAs act in the regulation of chromatin structure or how cell type specifications are determined by histone modifications. However, the editors and authors have made everything possible to unburden the reader. The text systematically and meticulously pays attention to the timing and the functional significance of each specific histone modification, to the function of each enzyme involved in this modification or to the interactions between short RNAs and chromatin remodeling factors, and so on. All these messages are accompanied by excellent illustrations. Thus, even an uneducated reader can understand very complex processes and easily pinpoint the activity of each factor in the process. For this reason I recommend this book as a reliable teaching material for all high level courses on genetics or epigenetics.
In summary, the second edition of Epigenetics by CSHL Press has met and exceeded the expectations for a reference book or a textbook. Apart from its intimidating volume and breadth, it has no apparent flaws and is likely to be the key book in the field for years to come. It is comprehensive and is loaded with useful details and substantial information. It targets a very broad audience including undergraduate and graduate students, university professors and researchers in the field of genetics, chromatin structure, or epigenetics.
|
2016-06-17T02:07:15.276Z
|
2015-10-01T00:00:00.000
|
{
"year": 2015,
"sha1": "b4d41b71d5b92aefdd6e2e533a2599a622d116c6",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2015.00315/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b4d41b71d5b92aefdd6e2e533a2599a622d116c6",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
18183352
|
pes2o/s2orc
|
v3-fos-license
|
How did the domestication of Fertile Crescent grain crops increase their yields?
Summary The origins of agriculture, 10 000 years ago, led to profound changes in the biology of plants exploited as grain crops, through the process of domestication. This special case of evolution under cultivation led to domesticated cereals and pulses requiring humans for their dispersal, but the accompanying mechanisms causing higher productivity in these plants remain unknown. The classical view of crop domestication is narrow, focusing on reproductive and seed traits including the dispersal, dormancy and size of seeds, without considering whole‐plant characteristics. However, the effects of initial domestication events can be inferred from consistent differences between traditional landraces and their wild progenitors. We studied how domestication increased the yields of Fertile Crescent cereals and pulses using a greenhouse experiment to compare landraces with wild progenitors. We grew eight crops: barley, einkorn and emmer wheat, oat, rye, chickpea, lentil and pea. In each case, comparison of multiple landraces with their wild progenitors enabled us to quantify the effects of domestication rather than subsequent crop diversification. To reveal the mechanisms underpinning domestication‐linked yield increases, we measured traits beyond those classically associated with domestication, including the rate and duration of growth, reproductive allocation, plant size and also seed mass and number. Cereal and pulse crops had on average 50% higher yields than their wild progenitors, resulting from a 40% greater final plant size, 90% greater individual seed mass and 38% less chaff or pod material, although this varied between species. Cereal crops also had a higher seed number per spike compared with their wild ancestors. However, there were no differences in growth rate, total seed number, proportion of reproductive biomass or the duration of growth. The domestication of Fertile Crescent crops resulted in larger seed size leading to a larger plant size, and also a reduction in chaff, with no decrease in seed number per individual, which proved a powerful package of traits for increasing yield. We propose that the important steps in the domestication process should be reconsidered, and the domestication syndrome broadened to include a wider range of traits.
Introduction
The origins of agriculture transformed human societies and drove some of the most important cultural changes in human history (Lev-Yadun, Gopher & Abbo 2000). Understanding why agriculture began is thus one of the most fundamental questions in archaeology, but the mechanisms behind it remain a subject of debate (Abbo, Lev-Yadun & Gopher 2010a;Fuller, Willcox & Allaby 2011;Price & Bar-Yosef 2011). Insight may be gained into this process through greater understanding of the changes that plants underwent during crop domestication. The Fertile Crescent in western Asia was one of the major centres of plant domestication, and a number of cereals, including wheat and barley, and several pulses (grain legumes), originated there approximately 10 000 years ago.
A defining characteristic of domesticated seed crops is a loss of natural seed dispersal, whereby plants become totally dependent on people (Fuller 2007). This occurs through indehiscence (inability to shed seed at maturity) of either the spike (in cereals) or the pod (in legumes), which ensures that ripe seeds remain on the plant rather than falling to the ground. Genetic mutations for indehiscent spikes increased in frequency in a small number of wild grass species, firstly in the progenitors (the closest wild relatives) of the primary domesticates barley, einkorn wheat and emmer wheat (approximately 10 000 BP). This change also occurred in rye (some reports of 9000 years BP, but more commonly from 4000 years BP) and oats (some reports of 7000 years BP, but more commonly from 4000 years BP), which are thought to be (especially in the case of oat) secondary domesticates arising later as weeds of cultivation (Zohary, Hopf & Weiss 2012). This distinction between primary and secondary is made because it is possible that some aspects of the domestication process may have been different when occurring for the first time, compared with those occurring later. The pods of legume species are rarely found in archaeological remains, but lentil, pea, chickpea and bitter vetch are thought to have been domesticated around 10 000 BP, and Celtic bean later (possibly 7000 years BP, but more certainly from 4000 years BP) (Zohary, Hopf & Weiss 2012).
In addition to indehiscence, there are also a number of reproductive and regenerative traits typically associated with domesticated plant species and referred to as the 'domestication syndrome' (Hammer 1984). There is a substantial literature on the domestication syndrome, particularly for cereals, which have greater seed size, lower seed dormancy, synchronous tillering and maturation, more compact growth, and a reduction in dispersal traits in comparison with their wild progenitors (Harlan, de Wet & Price 1973;Hammer 1984;Fuller 2007;Brown et al. 2009). The fact that similar domestication traits are present in unrelated species indicates that these traits arose multiple times independently (Paterson et al. 1995;Meyer, DuVal & Jensen 2012).
Traits relating to other aspects of plant growth and yield are not often discussed as part of the domestication syndrome, but may also be important during the domestication process. Indeed, phenotypic integration may mean that selection for one trait results in selection for other traits as well (Murren 2002;Milla et al. 2015), a concept that was originally introduced in the context of domestication by Darwin (1859), and which has received experimental support from animal studies (Trut, Oskina & Kharlamova 2009). Identification of these additional traits would allow us to expand the domestication syndrome and reconsider the key steps in the domestication process. In particular, high yield in comparison with wild species is generally considered a significant characteristic of seed crops (Harlan, de Wet & Price 1973;Harlan 1992). We hypothesize that increased yield is a product of other correlated traits such as plant size, arising from either deliberate artificial breeding or unconscious selection by farmers.
Yield can be decomposed in two ways. First, in terms of seed size, the rate and duration of growth, and the allocation of biomass to seeds versus vegetative tissues. The duration of growth is positively correlated with total biomass and yield, for example in durum wheat (Gebeyehou, Knott & Baker 1982), spring bread wheat (Sharma 1992), pearl millet (Craufurd & Bidinger 1988) and oilseed rape (Sidlauskas & Bernotas 2003). Any increase in growth rate should also increase yield, provided that this does not negatively impact other traits. If reproductive allocation increases with domestication, this will also positively influence yield, although evidence that this occurs is mixed. Some studies have found that greater reproductive allocation causes higher yields in crops compared with wild species (Gifford & Evans 1981), but this effect depends on other factors such as plant density and size (Qin et al. 2013). Additionally, a decrease in the proportion of chaff leads to higher yield because more of the reproductive biomass is converted into edible seed (Harlan 1992). These components of yield related to size, growth and allocation are not expected to show consistent patterns of covariance.
The second way of decomposing seed yield is to consider the mass and number of individual seeds and to look at how they are packaged into infructescences (i.e. cereal spikes or legume pods) on the plant. Total seed yield increases with greater individual seed mass or a higher number of seeds per plant, and previous research suggests that both traits are important determinants of yield in elite crop varieties (Schwanitz 1966;Evans 1993). However, in this case, we might expect a trade-off between seed mass and seed number, as commonly observed across a number of different plant species (Sadras 2007;Gambin & Borras 2010). Trade-offs may be defined as a compromise between how a finite amount of resources is invested in different functions; however, if plant size varies, then resource level also varies. Therefore, this trade-off will only occur if plants are roughly equal in size, or if biomass-corrected ratios are used, otherwise no or even positive relationships can occur (Rees & Venable 2007).
The yield advantage of Fertile Crescent crop progenitors over other wild species is usually attributed directly to the fact that these crop progenitors have larger seeds (e.g. Blumler 1998). However, whether a yield advantage was already present in landraces, before agronomic improvement, has not been tested. Although landraces have been evolving during the thousands of generations since domestication, they are certainly our best living proxy for earliest domesticates as they are largely the product of their natural environment and traditional agricultural methods (FAO, 2013), rather than of modern selective breeding techniques (Hedden 2003). Extrapolation from modern crops is unwise, as the process of domestication may be very different from the later process of agronomic improvement, which has led to the development of much higher yielding varieties compared with landraces (Abbo, Lev-Yadun & Gopher 2010b). It has recently been proposed that traits showing clear and consistent differences between domesticated landraces and their wild progenitors indicate changes from the original domestication episode, rather than subsequent post-domestication evolution (Abbo et al. 2014). Therefore, if we see consistent effects across all of the landraces in comparison with wild progenitors, we can infer that these arise from domestication.
Recent work on wild Fertile Crescent grasses and legumes has produced conflicting evidence over whether crop progenitors produce higher yields than smaller seeded wild species from the same region. Whilst one study looking at nine Fertile Crescent grasses found that crop progenitors had, on average, higher potential grain yields (Cunniff et al. 2014), a later study including a larger number of species (24 grasses and 19 legumes) found no such yield advantage (Preece et al. 2015). This unexpected result arose from trade-offs between seed number and mass, and in cereals, also between spike number and mass, that is crop progenitors had larger seeds and spikes but fewer of them (Preece et al. 2015). If these trade-offs are also present within domesticated cereal and pulse species, crops may not necessarily be higher yielding than their wild progenitors and, if they do have higher yields, these may arise from changes in growth or allocation, rather than a direct effect of having larger seeds.
There is also reason to believe that seed size could impact crop yield indirectly, through an effect on overall plant biomass, as previous studies have found positive correlations between these traits. For example, field trials with wheat showed that larger seeds produced plants of greater biomass and height, with higher yields (Donald 1981;Chastain, Ward & Wysocki 1995). It is well-established for a wide range of species that juvenile plant size is predominantly controlled by seed size, such as between accessions of wild and cultivated barley species (Chapin, Groves & Evans 1989), between 32 species from arid central Australia (Jurado & Westoby 1992) and between 58 British semi-woody species (Cornelissen 1999). However, correlations between seed size and plant size at maturity are typically weaker (Rees & Venable 2007), indicating that the importance of plant final size in the domestication of crops requires further investigation.
In this paper, we test the hypothesis that domestication has increased seed yield in the landraces of Fertile Crescent cereal and pulse crops. We investigate which yield components are responsible for these increases, hypothesizing that any change in yield might be mediated via trade-offs or positive correlations among its components. We carried out a comparative experiment in a common greenhouse environment, where cereal and pulse crops and their progenitors were grown individually to maturity. For each crop species, we used multiple landrace accessions, which represent some of the diversity in the least improved extant forms of domesticated species. These are therefore much more closely related to the earliest crops than modern cultivars (McCouch 2004), which is important as it allows inference about the early domesticated states.
Plant Material
For our experiments, we used the landraces of three cereal and three pulse crops known with certainty to have been domesticated at early sites in the Fertile Crescent: barley (Hordeum vulgare subsp. vulgare), einkorn wheat (Triticum monococcum subsp. monococcum), emmer wheat (Triticum turgidum subsp. dicoccon), chickpea (Cicer arietinum), pea (Pisum sativum subsp. sativum) and lentil (Lens culinaris subsp. culinaris) (Zohary, Hopf & Weiss 2012). In addition, we included oats (Avena sativa) and rye (Secale cereale), which were also domesticated, probably at a later date and not necessarily in the Fertile Crescent. This may or may not have affected the traits that changed in cereal crops during the domestication process, and there may be ecological reasons why oats and rye did not become domesticated at the same time as wheat and barley. Differences between these two groups of cereal crop progenitors are therefore also interesting. We also used the wild progenitors for each crop, resulting in a total of 10 grasses and seven legumes (Table 1, with more details in Tables S1 and S2, Supporting Information), with two putative pea progenitors included (P. sativum subsp. elatius and P. sativum subsp. elatius var. pumilio), due to debate in the literature over the closest wild relative (Smykal et al. 2011; Zohary, Hopf & Weiss 2012). Seeds for each of the study species were acquired from a number of different seed banks: The National Plant Germplasm System (United States Department of Agriculture), the John Innes Centre Germplasm Resources Unit (UK) and the IPK Gatersleben Genebank.

Table 1. Summary of the 17 species used in this study and their domestication status (crop or progenitor), noting whether each crop and its progenitor are primary (1°) or secondary (2°) domesticates. Primary domesticate denotes one of the first species to be domesticated (c. 10 000 years ago), whereas secondary domesticate refers to a species thought to be domesticated much later, possibly as weeds of cultivation.
Growth Conditions
Two greenhouse experiments were conducted in summer 2011 and summer 2013 (described in Preece et al. 2015) in order to measure the components of yield, and these are referred to as Yield Experiment 1 and Yield Experiment 2. In 2011, a functional growth analysis was also carried out in a separate study to further understand differences in growth rates between crops and their progenitors and is hereafter called the growth analysis experiment. In all cases, cereal seeds had outer glumes removed where necessary. For pulses, scarification with sandpaper was used to break seed dormancy. Seeds were germinated on a 1 : 1 mixture of John Innes no. 2 compost (LBS Garden Warehouse, Lancashire, UK) and Chelford 52 washed sand (Sibelco UK Ltd, Cheshire, UK). The growth medium was saturated with water, and seeds were planted in rows to enable identification of individuals. Seeds were germinated in a controlled-environment growth cabinet (Conviron BDW 40; Conviron, Winnipeg, MB, Canada). Temperature range was 20°C/10°C (day/night), with an 8-h photoperiod and photosynthetic photon flux density (PPFD) of 300 μmol m⁻² s⁻¹, conditions which approximate the growing season for winter annuals in the Fertile Crescent. Seedlings used in Yield Experiments 1 and 2 were transferred to another growth cabinet when they reached the two-leaf stage, where they were vernalized for 6-8 weeks (the variation was due to small differences between species and between years) to enable subsequent flowering. In this cabinet, the temperature was 4°C and PPFD 300 μmol m⁻² s⁻¹ with an 8-h photoperiod. After vernalization, plants were moved to a greenhouse (Arthur Willis Environment Centre, University of Sheffield, Sheffield, UK), and individuals planted into 11-L square pots (21 × 21 × 25 cm), whilst the temperature was maintained at 24°C/15°C (day/night).
In the growth analysis experiment, a vernalization period was not needed, as the experiment was concerned with the initial phase of rapid vegetative growth and not with seed production. Therefore, 3 days after germination, twelve seedlings were randomly selected from those which had successfully germinated. Seedlings were transferred to 1-L pots containing washed sand and returned to the controlled-environment room with the following conditions: 20°C/10°C (day/night) with a 16-h photoperiod, maximum PPFD of 756 μmol m⁻² s⁻¹.
Experimental Design
Yield Experiment 1 used a randomized block design with 20 blocks divided between three greenhouse rooms. Watering occurred three times per week, and plants received Long Ashton nutrient solution (50% concentration) twice during the experiment (Hewitt 1966; tables 40, 41). The two Secale species (Secale vavilovii and S. cereale) do not self-pollinate, and manual cross-pollination was therefore carried out using a paintbrush. A subset of cereal spikes (at least five per plant) was covered with translucent, cellophane crossing bags (Focus Packaging and Design Ltd, Scunthorpe, UK), to prevent seed dispersal in the wild species (through natural shattering of the brittle rachis). These bags are specially designed for use with cereals and no differences were observed between bagged and un-bagged spikes. Crossing bags were not used for wild legume species; instead, seeds were harvested as soon as they were ripe (prior to shattering).
In Yield Experiment 2, the experimental set-up was the same as the first experiment, except that there were ten blocks in total, divided between two greenhouse rooms. In both years, each block contained one individual of each species where possible, so in total there were up to 30 replicates per species. The Avena and Secale species were not used in Yield Experiment 2 so the maximum total number of replicates was 20. Replicates were divided approximately equally between accessions in both years.
In the growth analysis experiment, two identical experiments were established, each with the same experimental set-up, but with different accessions used (see Tables S1 and S2). There were 12 plants per accession, divided between six randomized blocks and pots were top-watered with full strength Long Ashton solution (Hewitt 1966; tables 40, 41) every 2 days and bottom-watered with distilled water on alternate days.
Trait Measurements
In Yield Experiment 1, the duration of the growing period (from germination to flowering) was measured. Final above-ground biomass was harvested at the end of the experiment, when spikes and pods had reached maturity. Plants were divided into vegetative and reproductive tissues, then oven dried at 40°C for 3 days and weighed. Allocation to reproductive biomass was calculated as the proportion of the total biomass allocated to reproduction, including culm and chaff. Reproductive biomass was further divided into grain and chaff. The mean individual seed mass was measured before sowing and then again from the harvested seed, calculated from a subset of the infructescences for each plant. Total seed number per plant was also measured, and thus, total seed yield was calculated as the product of mean individual seed mass and the total number of seeds per plant. In cereals but not pulses, the number of seeds per infructescence, the number of infructescences per plant and the total mass of seeds per infructescence were also measured. Maximum plant height (when fully extended) of mature plants was also recorded for cereals and pulses. In Yield Experiment 2 individual seed mass (of sown seed), total seed yield and total above-ground biomass were measured following the same methods as before. Therefore, for these measurements, data are combined from the 2 years.
For the growth analysis experiment, six harvests were carried out within a 3-week period, starting on day 8 or 9 after germination and proceeding every 3-4 days, finishing on day 27 or 28. At each harvest, two plants of each species were removed from the pots, washed clean and divided into roots, leaves and stems (in grasses defined as leaf sheath plus culm). Plants were dried to a constant weight for 3 days at 45°C, and then, dry weight was determined. All species were determined to be in the exponential growth phase between the first and final harvests.
Using the results from the Yield Experiments 1 and 2, we investigated yield (total mass of seed) by decomposing it into its separate components. Total seed yield (Y) can be calculated in two ways, first:

Y = M_s e^(k̄ d) A_r (1 − c)    (eqn 1)

where M_s is mean individual seed mass at sowing (g), k̄ is relative growth rate (RGR, g g⁻¹ day⁻¹) averaged over the growth period, d (days), A_r is allocation to reproductive biomass (dimensionless fraction) and c is the proportion of chaff or pods in reproductive biomass (dimensionless fraction). These five components of yield, M_s, d, k̄, A_r and c, may covary in some cases. We note that the 'harvest index' is a measurement often used in agricultural contexts and can be calculated as the mass of grain (Y) as a proportion of above-ground biomass. The second way of decomposing seed yield is:

Y = M_s N_s N_i    (eqn 2)

where M_s is again mean individual seed mass, N_s is the number of seeds per infructescence (where infructescence refers to the fruiting part of the plant, either spike, panicle or pod), and N_i is the number of infructescences per plant. For this equation, M_s refers to individual seed mass measured at the time of final harvest. In general, sown and harvested individual seed mass are highly correlated with an approximately 1:1 relationship (Fig. S1). From eqn 2, total seed yield would increase with greater individual seed mass (M_s) or a higher number of seeds per plant (the product of seed number per infructescence and the number of infructescences per plant, N_s × N_i).
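As a quick numerical illustration of the two decompositions, the sketch below evaluates eqns 1 and 2 in R with made-up component values; none of the numbers are taken from the experiments.

```r
# Minimal sketch of the two yield decompositions (eqns 1 and 2);
# all values below are illustrative, not measured data.

M_s     <- 0.04   # mean individual seed mass at sowing (g)
k_bar   <- 0.05   # average relative growth rate (g g^-1 day^-1)
d       <- 120    # duration of growth (days)
A_r     <- 0.4    # allocation to reproductive biomass (fraction)
c_chaff <- 0.3    # proportion of chaff/pod in reproductive biomass (c in eqn 1)

# eqn 1: yield from size, growth and allocation components
Y1 <- M_s * exp(k_bar * d) * A_r * (1 - c_chaff)

M_h <- 0.045      # mean individual seed mass at harvest (g)
N_s <- 25         # seeds per infructescence
N_i <- 10         # infructescences per plant

# eqn 2: yield from seed mass and how seeds are packaged on the plant
Y2 <- M_h * N_s * N_i

c(eqn1 = Y1, eqn2 = Y2)
```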
Calculation of Growth Rate
We calculated RGR using two approaches. First, average RGR, k̄, was calculated using the final harvest data from Yield Experiments 1 and 2 as

k̄ = ln(M_d / M_s) / d    (eqn 3)

where M_s is the individual seed mass at sowing and M_d is the final plant mass at the end of the growing period. Note this method of estimating average RGR is valid even when growth is not exponential, and if we substitute this into eqn 1, we find Y = M_d A_r (1 − c) as expected. Seed mass is used here as a measure of initial mass, as we are particularly interested in the efficiency with which seed mass is converted into final plant mass. This method therefore differs from the usual way of calculating RGR, although previous work has shown a strong correlation between seedling mass and seed mass in these species (Cunniff et al. 2014). The advantage of this method is that it averages across the entire growth period and does not just look at seedlings. Nonetheless, care should be taken interpreting RGR data calculated in this way when there is large variation in initial seed size. RGR calculated in this way does not account for differences in plant size, which makes comparisons between taxa ambiguous as differences may arise from size-related effects rather than intrinsic differences in maximum RGR (Turnbull et al. 2012). For these reasons, we also compared RGR at a common size (k_s), in seedlings by performing a species-specific functional growth analysis with data from the growth analysis experiment. Growth functions were fitted to plots of logged total plant mass against time (plots shown in Fig. S2), using a four-parameter logistic model, which allowed estimates of RGR at a common size for each species during the initial phase of growth (k_s), when RGR is expected to be highest (for full details of the fitting and RGR estimation see: Rose et al. 2009; Rees et al. 2010; Taylor et al. 2010; Turnbull et al. 2012). For this analysis, the common size used was the log of the minimum seedling mass (mg) for the largest species, which for grasses corresponded to 42.1 mg and for legumes was 64.7 mg. These sizes were selected as all species occur at these sizes and resource limitation should be minimal.
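A minimal sketch of the average-RGR calculation (eqn 3) in R, again with illustrative numbers rather than measured values:

```r
# Average RGR over the whole growth period (eqn 3); illustrative values only.
M_s <- 0.04   # individual seed mass at sowing (g)
M_d <- 16     # final above-ground plant mass (g)
d   <- 120    # duration of growth (days)

k_bar <- log(M_d / M_s) / d   # eqn 3

# sanity check: substituting k_bar back into the growth term of eqn 1
# recovers the final plant mass
M_s * exp(k_bar * d)          # ~16 g
```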
Statistical Analyses
In order to determine how the various terms in eqn 1 influence the variance in yield, we used variance decomposition. To do this, we first write the parameter vector as θ = (M_s, k̄, d, A_r, c), then the standard first-order approximation to the variance is

Var(Y) ≈ Σ_i Σ_j (∂Y/∂θ_i)(∂Y/∂θ_j) Cov(θ_i, θ_j)    (eqn 4)

where Cov is the covariance, Cov(θ_i, θ_i) = Var(θ_i) the variance, and the subscripts i and j refer to different traits. This approach allows both the direct effects of variation in a trait, and indirect effects mediated through correlated changes in other traits (the covariance terms), to be assessed. For eqn 2, the same approach can be used with θ = (M_s, N_s, N_i). The terms on the right-hand side of eqn 4 define a square variance-covariance matrix (Table S3) and so we can calculate the contribution to the variance of each trait by summing along the rows and dividing by the total, see Rees et al. (2010) for more details. Note the approach differs from that used in Rees et al. (2010) as yield (eqn 1) cannot be expressed as the sum of its components, and so we have to approximate the variance in yield using eqn 4 (more detail in Appendix S1). Data from Yield Experiments 1 and 2 were then analysed in a phylogenetic context using R (R Core Team 2014). Data sets of plastid markers assembled previously for the grasses and legumes (Preece et al. 2015) were combined, and a tree including both groups (Fig. S3) was inferred with BEAST (Drummond & Rambaut 2007) as previously described (Preece et al. 2015). We used generalized least squares, using the pgls function in the CAPER package (Orme et al. 2013), to test for differences in species means. The difference in plant traits between crops and their progenitors was tested as a fixed effect, with models specified as follows: mod <- pgls(ln.yield ~ status, data = dat, lambda = 'ML').
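The sketch below shows one way the first-order variance decomposition of eqn 4 could be computed in R, using simulated trait values in place of the experimental data and a numerical gradient of eqn 1; the distributions, sample size and starting values are arbitrary assumptions used only for illustration.

```r
# First-order (delta-method) variance decomposition of yield (eqn 4),
# applied to simulated trait data rather than the real measurements.
set.seed(1)
n   <- 50
dat <- data.frame(
  M_s = rlnorm(n, log(0.04), 0.3),  # seed mass at sowing (g)
  k   = rnorm(n, 0.05, 0.005),      # average RGR
  d   = rnorm(n, 120, 10),          # growth duration (days)
  A_r = rbeta(n, 8, 12),            # reproductive allocation
  c   = rbeta(n, 6, 14)             # proportion of chaff
)

# yield as a function of the parameter vector theta (eqn 1)
yield <- function(theta) theta[1] * exp(theta[2] * theta[3]) * theta[4] * (1 - theta[5])

mu  <- colMeans(dat)                       # evaluate the gradient at the trait means
eps <- abs(mu) * 1e-6
grad <- sapply(seq_along(mu), function(i) {
  up <- mu; up[i] <- up[i] + eps[i]
  dn <- mu; dn[i] <- dn[i] - eps[i]
  (yield(up) - yield(dn)) / (2 * eps[i])   # central-difference derivative
})

V <- cov(dat)                              # trait variance-covariance matrix
contrib <- outer(grad, grad) * V           # terms of eqn 4
row_contrib <- rowSums(contrib) / sum(contrib)
names(row_contrib) <- names(mu)
round(row_contrib, 3)                      # proportional contribution of each trait
```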
Two other analyses using linear mixed effect models were performed in order to confirm the results of the pgls analysis. These were done with the lmekin function in the COXME package (Therneau, 2015) and the lme function in the NLME package (Pinheiro et al. 2014). For the lmekin analyses, the random effects fitted were block nested in experiment, and species, for example mod <- lmekin(ln.yield.g ~ status + (1|species) + (1|experiment/block), data = data.f, varlist = list(list(spp.var, var.cov.tree))). The species random effect included a phylogenetic component, and a between-species component unrelated to phylogeny. For the lme analyses, the random effects fitted were accession, nested in crop, nested in family nested in block (and experiment where relevant), for example mod <- lme(ln.yield.g ~ status, random = ~1|experiment/block/family/crop/acc, data = data.f). In the results section, we show effect sizes and P-values from the most conservative analysis (the pgls analysis). Correlations between traits among all species were also tested using the same statistical methods. The results of these analyses (Tables S4 and S5) are consistent with the pgls analysis, with some minor differences for the lme analysis.
For the growth analysis experiment, size-corrected RGR (k_s) was calculated for each species, and a pgls model was used to compare crops and their progenitors, similar to the other plant traits. Natural log transformations were applied to all variables except k̄, k_s, d, A_r and c, and all comparisons were tested at the 0.05 significance level.
Which Traits Are Important for Determining Yield?
Overall, when considering eqn 1 for all species, variation in the mean individual mass of seeds sown (M_s) made the greatest contribution to variation in total seed yield (Y), followed by variation in mean RGR (k̄) (Table 2). The negative effect of variation in k̄ occurs because this trait negatively covaries with M_s, the growth period (d) and the allocation to reproductive biomass (A_r). Hence, the positive effect of faster growth on yield is more than offset by reductions in M_s, d and A_r. When considering cereals in isolation, after M_s, the proportion of chaff (c) is the second most important trait that contributes to variation in Y, although k̄ and d were also fairly important. Pulses show the same pattern as all species considered together, so variation in yield was mainly due to variation in M_s and, to a lesser extent, k̄.
When considering eqn 2 for all species together, variation in individual seed mass, M_s (this time measured at harvest), is again the largest contributor to variation in Y, with seed number much less important. This result is mirrored for pulses analysed separately. However, for cereals, a different pattern is revealed by the additional trait data for these species, which shows that variation in seed number and infructescence number per plant is also important (Table 3).
Which Traits Differ Between Crops and Their Progenitors?
The comparison of crops with their progenitors showed that crops have significantly larger total seed yield (1.5× larger, 95% CIs [1.1, 2.1], P < 0.05), in support of our overall hypothesis (Fig. 1). There are differences among crop species, and notably, the effects of domestication on secondary cereal domesticates appear small, with no significant difference in total seed yield for either oats or rye (Fig. 1a). Crops also had greater individual sown seed mass (1.9× larger, 95% CIs [1.4, 2.5], P < 0.0001) (Fig. 2) and greater total above-ground biomass (1.4× larger, 95% CIs [1.2, 1.7], P < 0.05) (Fig. 3). There was, however, no difference in the duration of growth (d), allocation to reproductive biomass (A_r) or height (Table 4). Growth rate did not differ between crops and progenitors, either when calculated as average RGR in the yield experiments (k̄) or as size-corrected RGR in the separate growth analysis (k_s), using the functional approach at a common size (Fig. 4).
Crops had a lower proportion of chaff making up their reproductive biomass (24.2%) than their progenitors (39.0%) (38% less, 95% CIs [18.0, 58.2], P < 0.01) (Fig. 5). Cereals also had a greater number of seeds per spike (1.3× greater, 95% CIs [1.2, 1.6], P < 0.01) and greater spike mass (1.7× greater, 95% CIs [1.2, 2.5], P < 0.05). Total seed number per plant did not differ between crops and progenitors. Seed number per gram of plant biomass was also calculated and did not differ between crops and their progenitors. Mean values of all measured traits are shown for each species in the Supporting Information (Table S6). Above-ground biomass was strongly positively correlated with total seed yield across all species (P < 0.0001, R² = 0.84), with no interaction with domestication status (crop versus progenitor) (Fig. S4). Individual seed mass at sowing was positively correlated with individual seed mass at harvest (P < 0.0001, R² = 0.97), total seed yield (P < 0.001, R² = 0.54) and above-ground biomass (P < 0.001, R² = 0.59) when compared across species (Figs S1, S5 and S6).

Table 2. Contributions to the variance in total seed yield (Y) from variation in individual seed mass (M_s), growth rate (k̄), duration of growth (d), reproductive allocation (A_r) and the proportion of chaff (c). The contribution to the variance of each trait is calculated by summing along the rows of the variance-covariance matrix (Table S3).

Table 3. Contributions to the variance in total seed yield (Y) from variation in individual seed mass (M_s) and total seed number, subdivided into seed number per infructescence (N_s) and infructescence number per plant (N_i), for the analysis of the cereals. The contribution to the variance of each trait is calculated by summing along the rows of the variance-covariance matrix.
Discussion
This study examined the components of yield that distinguish crop landraces from their wild progenitors. Total seed yield was greater for crops than their wild relatives, and we also looked at the specific ways in which the components of yield differed consistently between domesticated and wild plants. Individual seed mass was a key trait determining total seed yield, and in cereals, the production of more seeds per spike and less chaff was also important. The processes of cultivation and domestication may represent a continuum (Gepts 2012), in which the species taken into cultivation depend on particular plant traits, and then, the way in which they are domesticated depends on other plant traits. This study follows previous work that investigated traits common to crop progenitors, in order to understand why some types of plants were domesticated instead of others (Preece et al. 2015). Here, a similar experimental approach is used, with the focus on a later stage of the same process. Together these two studies indicate that the high yield of crops arose later in the cultivation-domestication continuum and was not a trait already present in crop progenitors. Large seed size is a key trait in Fertile Crescent crops because these species were larger seeded in the cultivation stage (progenitors larger than other wild species), and then, there was a further increase in the domestication stage (landraces larger than their progenitors). However, bigger seed size alone is not enough to increase yields (as shown by the previous work on crop progenitors), and this new study also supports widening the set of plant functional traits that we associate with the domestication process.
The study is novel in looking at whether the domestication syndrome can be expanded to include additional traits common to multiple crop species. Specifically, it showed how traits related to growth and allocation differed between the species, including the ways in which some of these traits covaried. Overall, we demonstrate the importance of large size for both cereal and pulse crops, both at the scale of individual seeds and the whole plant.
Growth and Allocation
The first approach for calculating total seed yield, using traits relating to growth and allocation (eqn 1) predicts that greater yield should arise from any increase in plant size (a function of initial seed size, growth rate and the duration of growth), greater allocation to reproductive tissues, or a decrease in allocation to chaff. Domesticated species had greater biomass than their progenitors in agreement with previous work showing that landraces are larger than their wild progenitors (Evans 1993;Milla et al. 2014). However, whether these larger sizes were the consequence of different growth strategies remains uncertain. Growth rates can be calculated in different ways, and the use of classical RGR has been questioned in situations where initial plant sizes are very different, leading to the development of methods accounting for size (Turnbull et al. 2008). In this study, we found no differences in growth rate calculated for plants at a common size. Instead, the final size advantage appears mostly to come from a seed size that was initially larger, that is a 'headstart'. Overall, we found no evidence for an effect of domestication on growth rates. During modern crop breeding programmes, there has been a focus on increasing allocation to reproductive biomass, and in particular the harvest index (Hay 1995;Fischer & Edmeades 2010), including the breeding of semidwarf varieties of modern cereals (Sakamoto & Matsuoka 2004). The landraces of crops in our study had greater overall biomass than their wild progenitors, but did not differ in allocation patterns between reproductive and vegetative biomass, with no differences in harvest index or final height. No difference in biomass allocation between wild progenitors, landraces and modern cultivars of wheat has previously been suggested (Damisch & Wiberg 1991). Therefore, without any difference in allocation to reproductive tissue, total seed yield was increased as a result of the larger initial and final plant sizes of crops. Crops also had a lower proportion of chaff or pod material compared with their wild progenitors, further augmenting the yield.
It is important to note that our results apply to plants grown individually, and that plant size may be affected by competition (Gurevitch et al. 1990). In general, as plants are grown at higher densities, biomass and yield per unit area increase up to a threshold level, after which values remain more or less constant. The occurrence of this 'constant final yield' happens when plants experience intraspecific competition for resources, and individual size and yield cannot reach maximum levels (Donald 1951; Weiner & Freckleton 2010). The amount of competition and relative competitive ability of different species is therefore an important factor in determining yield per unit area. Another consideration is that high yields per individual do not necessarily correspond to high yield per hectare. Individuals that produce a high seed output often do this by being excellent competitors, but high individual competitive ability can lead to low overall biomass of a stand, as there are a few 'winners' and many 'losers' (Anten & Vermeulen 2016). It has been suggested that ideal crops should be weak competitors to keep intraspecific competition at a minimum (Donald 1968; Zhang, Sun & Jiang 1999). However, crop progenitors may have been better competitors than other wild species, demonstrating some traits that may be undesirable in crops growing at high density (e.g. tall stature and large leaf area), so the species in our study may be particularly prone to not maximizing community performance (Anten & Vermeulen 2016).
Whilst care should therefore be taken when considering the importance of plant size, our results allow us to compare the different species under optimum conditions of no or minimal competition, as would occur when sown at low densities. We also do not know at what densities the plants were grown during the early phases of domestication, and it may well be that plants were grown at close to optimal conditions.
Seed Mass and Number
The second way of calculating total seed yield (eqn 2) predicts that yield can increase as a function of individual seed mass, the number of seeds per infructescence or the number of infructescences per plant. All of the crops in this study had greater individual seed mass than their progenitors, which would, in the absence of trade-offs, lead to greater yields. However, across species, larger individual seed size tends to be negatively correlated with seed number, which stabilizes yield (Leishman 2001;Coomes & Grubb 2003;Sadras 2007). In fact, a previous study of yield-related traits found that these large-seeded cereal crop progenitors have significantly fewer seeds per plant than other Fertile Crescent grasses (Preece et al. 2015). Seed size-number trade-offs have been linked with competition-colonization trade-offs, with smaller seeds having greater dispersal ability (Turnbull, Rees & Crawley 1999), and tolerance-fecundity trade-offs, whereby large seeds have an advantage of higher tolerance of stresses (Muller-Landau 2010).
In this study, the large-seeded cereal crops had a greater number of seeds per spike but we found no evidence of any reduction in seed number per plant, such that domesticated and wild plants had similar numbers of seeds. The lack of a negative relationship between seed size and number may be a consequence of yield being largely determined by variation in seed mass, which in turn implies that the amount of resources captured increases with seed mass. When large-seeded species are much better at capturing resources, and become larger adult plants, then positive relationships between seed mass and number are possible (Venable 1992). Also, crops had a lower proportion of chaff than their progenitors, indicating a change in resource allocation between grain and chaff and possibly helping to explain the lack of a seed size/number trade-off.
We also did not see a conclusive reduction in seed number per gram of plant biomass, at least when analysed in a phylogenetic context (see Table S4). This is important as it rules out the possibility that the seed size-number trade-off was absent because larger crop plants acquired more resources than their wild progenitors, enabling them to produce a similar number of larger seeds. The fact we did not see a reduction in seed number is important for understanding how the yield advantage of crops may have arisen through unconscious selection; cultivation and harvesting of plants relaxes selection on seed size for the purpose of dispersal (i.e. it allows larger seed size) (Brown et al. 2009). In a genetically diverse population of a crop progenitor under cultivation, larger seeded genotypes might therefore gain a selective advantage under competition in dense stands, once selection for dispersal and dormancy is relaxed. By also having the same or a higher number of seeds, they would be able to increase their numerical advantage within the population. This two-pronged strategy, when found in combination with indehiscent seeds, may provide a mechanism that enables species to produce high seed yields that would be easily harvestable, and thus be successful crops. Alternatively, people could have bred from the plants with the largest ears, which had both larger seeds and more seeds.
Interestingly, if we look at the comparisons of total seed yield between the cereal crops and their corresponding progenitors, it is noticeable that only the three primary domesticates (barley, einkorn and emmer wheat) show a significant yield advantage over their wild relatives. Our data suggest that the later domestication of oats and rye resulted in smaller increases in yield, which is interesting because these species had been under selection as agricultural weeds in cultivated habitats since the origins of agriculture (Vavilov 1926;Zohary, Hopf & Weiss 2012). The absence of a domestication effect in these species may therefore arise because their wild progenitors had already been under similar selection pressures to barley, einkorn and emmer for several thousand years.
Conclusions
Overall, these experiments demonstrate the general importance of size throughout the life cycle of a crop, whereby under optimum conditions large seeds grow into large plants, which in turn produce high yields. Reproductive organs also change, such that a higher proportion of reproductive biomass is edible grain, and seed number is not negatively impacted by the increases in individual seed mass. The combination of these traits, together with a mutation for indehiscence, resulted in plants that were successful food resources for traditional farmers. It is therefore important to broaden the domestication syndrome to recognize the importance of both individual seed size and plant size as components of the domestication syndrome for cereals and pulses from the Fertile Crescent.
Supporting Information
Additional Supporting Information may be found online in the supporting information tab for this article: Fig. S1. Plot of natural-logged sown seed mass and harvested seed mass in Yield Experiment 1. Fig. S2. Plots of natural-logged biomass over time for species in the growth analysis experiment. Fig. S3. Phylogeny for the 17 species used in our experiments. Fig. S4. Correlation between ln(total seed yield) and ln(total above-ground biomass) for the cereal and pulse species. Fig. S5. Correlation between ln(total seed yield) and ln(individual seed mass) for the cereal and pulse species. Fig. S6. Correlation between ln(total above-ground biomass) and ln(individual seed mass) for the cereal and pulse species. Appendix S1. A brief description of the variance approximation used for the variance decomposition analysis. Table S1. List of grass accessions used in study. Table S2. List of legume accessions used in study. Table S3. Variance-covariance matrices for eqns 1 and 2. Table S4. Comparison of three statistical methods used to analyse differences between crops and their progenitors. Table S5. Comparison of three statistical methods used to analyse between-species correlations. Table S6. Species means of traits measured in the yield experiments.
Evaluations on the Consequences of Fire Suppression and the Ecological Effects of Fuel Treatment Scenarios in a Boreal Forest of the Great Xing’an Mountains, China
With global warming, catastrophic forest fires have frequently occurred in recent years, posing a major threat to forest resources and people. How to reduce forest fire risk is a hot topic in forest management. Concerns regarding fire suppression and forest fuel treatments are rising. Few studies have evaluated the ecological effects of fuel treatments. In this study, we used the LANDIS PRO model to simulate the consequences of fire suppression and the ecological effects of fuel treatments in a boreal forest of the Great Xing’an Mountains, China. Four simulation scenarios were designed, focusing on whether to conduct fuel treatments or not under two fire-control policies (current fire suppression policy and no fire suppression policy). Each scenario contains nine fuel treatment plans based on the combinations of different treatment methods (coarse woody debris reduction, prescribed burning, coarse woody debris reduction plus prescribed burning), treatment frequency (low, medium, and high), and treatment area (large, medium, and small). The ecological effects of the fuel treatments were evaluated according to the changes in fire regimes, species succession, and forest landscape patterns to find a forest fuel management plan that is suitable for the Great Xing’an Mountains. The results showed that long-term fire suppression increases fuel loads and the probability of high-intensity forest fires. The nine fuel management plans did not show significant differences in terms of species succession and forest landscape patterns while lowering forest fire intensity, and none of them were able to restore historical vegetation structure and composition. Our results consolidate the foundation for the practical performance of forest fuel treatments in fire-prone forest landscapes. We suggest a suitable fuel treatment plan for the Great Xing’an Mountains, with a low treatment frequency (20 years), large treatment area (10%), and coarse woody debris reduction, plus the prescribed burning measure.
Introduction
As the major component of terrestrial ecosystems, forests store 86% of the global vegetation carbon pool and play an irreplaceable role in the protection of regional ecological environments, maintaining the carbon balance of forest ecosystems and alleviating climate change [1]. However, forests are often disturbed by fires. Fire is one of the key driving agents of forest renewal and forest succession [2] and plays a critical role in maintaining biodiversity and landscape heterogeneity [3]. Despite its natural functions, fire also poses threats to people's lives, infrastructure, and valuable forest resources [4]. Hence, fire suppression policies have been implemented worldwide over the last century to reduce the negative effects caused by fires [5,6]. China adopted a strict fire suppression policy to reduce the losses caused by forest fires. Fire suppression has reduced the number of fires and changed natural fire patterns, such as extending the mean fire-return interval [7] and affecting forest succession [8]. This has caused the wildfire problem to become more serious in some areas [9], such as the Great Xing'an Mountains, where long-term fire suppression has led to an excessive accumulation of fuels, higher forest fire intensity, and an increased probability of catastrophic forest fires [10]. Therefore, it is important to assess the consequences of implementing fire suppression policies on species succession and forest landscapes from the perspective of ecological effects.
To reduce the negative impacts of fire suppression, some regions around the world have begun to implement fuel treatment policies, but China started implementing fuel treatment policies later [11-13]. The three factors that primarily drive wildfire behavior are as follows: fuel, topography, and weather. Of these factors, only fuels can be manipulated by land managers [14]. All organic materials in the forest belong to fuels, including the surface layer, herbaceous layer, shrub layer, and tree layer [15]. The main fuel treatment methods generally include coarse woody debris reduction [16], prescribed burning [17], thinning [18], and fuel breaks [19]. Coarse woody debris reduction and prescribed burning aim to reduce the fine and coarse fuel loads [20], while fuel breaks, via the removal of fuels, decrease the spatial continuity of fuels [21]. Implementing forest fuel treatments can effectively reduce the frequency and intensity of forest fires [21]. For example, Salis simulated fuel treatments in the Mediterranean region, analyzed the changes in fire behavior after treatment, and found that the probability of fire occurrence and burnt area significantly decreased with increases in the area of fuel treatment [22].
Forest fuel treatments have two purposes: one is to reduce the occurrence of high-intensity fires, and the other is to restore historical vegetation structure and composition [18]. Most studies only focused on reducing fire intensity and not on the ecological effects of fuel treatments. At the same time, previous evaluations of fuel treatment effects often focused on one or two aspects of fire disturbance, species composition, and forest landscape. For example, Wu evaluated fuel treatment effects in terms of fire disturbance in the Huzhong Forest [23]. Volkova evaluated fuel treatment effects in terms of fuel load and forest carbon in southeastern Australia [16]. Therefore, it is necessary to conduct a comprehensive assessment of the ecological effects of fuel treatments. Evaluating the ecological effects of fuel treatments could help in the formation of appropriate fuel treatment plans. The impacts of fuel treatments are complex, pertaining to fire regime, species succession, forest landscape [13,24-27], and the effective duration of fuel treatments [11,28,29]. Forest succession after fuel treatments may vary depending on the treatment methods, intensity, the climatic environment of the treatment area, etc. [28,30]. The effectiveness of fuel treatments can also be affected by the pre-treatment state of the fuels, the fuel treatment itself, and the productivity of the vegetation [31]. Most current fuel treatment studies have paid attention to reducing high-intensity fires, ignoring the effects of fuel treatments on the forest landscape [22,24,26]. Furthermore, these studies only focus on fuel treatments under the current fire suppression policy, without a comparative study in the context of different fire control policies. A comparison with the ecological effects of fuel treatments in a natural fire scenario can provide a more scientific basis for selecting suitable forest management plans.
Descriptive studies and field experiments are often inadequate for managers to develop and implement forest fuel treatment plans on a large landscape. In addition, forest fuel treatments are restricted in China, except for several places in Yunnan province [32]. Model simulations have become a crucial tool for evaluating fuel treatment effects. Simulation models include fire behavior models, non-spatial models, and spatially explicit models. Fire behavior models, such as BEHAVE [33], cannot simulate the relationship between vegetation and fuel decomposition and accumulation. Non-spatial models, such as FVS-FEE [34], cannot simulate fire occurrence and spread processes. The spatially explicit landscape model, LANDIS [35], can simulate the interaction between fuel treatments and other landscape processes and has been successfully applied worldwide, including in the Great Xing'an Mountains, China [23,36]. Liu et al. [25] parameterized the LANDIS model using a dataset (1990-2000) and simulated the effects of long-term fire suppression on fuel dynamics and fire risks in Huzhong of the Great Xing'an Mountains.
In this study, we used the parameters (1990-2000) of Liu et al. [25] to parameterize LANDIS PRO to simulate forest landscape dynamics and changes under different fuel treatment plans and fire control policies in the Great Xing'an Mountains. These scenarios were designed by combining different fire control policies, fuel treatment methods, fuel treatment frequencies, and fuel treatment areas. The ecological effects of various fuel management scenarios were comprehensively evaluated in terms of the changes in fire regime, species succession, and forest landscape over a simulated 100-year period. Our aim is to illustrate what the ecological consequences would have been if we had adopted fuel treatments 30 years ago and to ultimately select an appropriate fuel treatment plan for the Great Xing'an Mountains, China, to reduce forest fire risks.
Study Area
Our study area was the Huzhong Forestry Bureau of northeastern China (51°14′40″ N-52°25′00″ N, 122°39′30″ E-124°21′ E). This area spans 125 km from north to south and 115 km from east to west, with a total area of 937,244 ha (Figure 1). The study area belongs to the cool temperate zone and has a continental monsoon climate, with warm, wet, short summers and cold, dry, long winters, influenced by the high-pressure Siberian-Mongolian air mass. The mean annual temperature is −4.7 °C. February is the coldest month, with an average of −28.9 °C; July is the hottest, with an average of 17.
Forest fire is an important disturbance factor in the Daxing'an Mountains. On May 6, 1987, a catastrophic fire occurred on the northern slopes of the Great Xing'an Mountains. As a result, the country issued the "Regulation on Forest Fires" in 1988 and started officially preventing forest fires. In recent years, Heilongjiang Province has implemented regular forest fuel load survey work and used prescribed burning to reduce fuel loads on a small scale, including the Huzhong Forestry Bureau. Firefighting agencies in the Great Xing'an Mountains also carry out fire barrier ignitions in autumn and winter. According to statistics, from 1990 to 2019, there were 72 fires in the area. There were 42 fires with a single burned area greater than 20 ha in the Huzhong Forestry Bureau. The total burned area is 30,846.16 ha, averaging 734.43 ha per fire, with a maximum burned area of 8327.7 ha. Fires in the Huzhong Forestry Bureau are mainly caused by lightning strikes, accounting for more than 60% of all fires. Affected by forest fire disturbance, the primary forest in the Daxing'an Mountains gradually degenerates into a secondary forest. White birch is the major species in the pioneer stage. Larch is the dominant species in the top stage. With the progress of succession, the abundance of white birch in each age class gradually decreases, whereas that of larch gradually increases. Forest fire, as the main disturbance factor in the area, significantly affects the relative proportions of white birch and larch.
The LANDIS PRO Model
We used LANDIS PRO 7.0, a spatially explicit forest landscape model, to simulate forest landscape succession, seed dispersal, spatial disturbance, and fuel treatments [37-39] on large spatial and temporal scales (10^3 to 10^6). This model has been widely used worldwide [36-38]. The LANDIS PRO model is based on a raster data structure, which treats the landscape as a grid of equally sized pixels, with each recording information about the trees. This information is used to simulate changes in forest succession and disturbance. This model also does not track individual trees, which is different from stand simulation models [40].
In the LANDIS PRO model, a heterogeneous landscape is stratified into land types by using GIS layers of climatic, soil, or topographic variables (slope, aspect, and landscape position). Each land type is assumed to have a relatively homogeneous range of ecological conditions corresponding to the same patterns of species establishment and fire disturbance, such as ignition frequency and mean fire return intervals (MRI) [39]. The model assumptions were validated by relevant studies [41,42]. The LANDIS PRO can simulate several spatial processes and many non-spatial processes; the following modules were utilized in this study. The data sources for performing the parameterization are shown in Table 1.
Table 1. Use of datasets in the parameterization process.
- Two scenes of the 2000 TM remote sensing imagery (WRS: P/R: 120/24 and 121/24): for determining the land-cover types.
- The 1990 forest stand maps: for determining the forest composition map and the land-cover types of water and non-forested areas.
- Topographic maps (1:50,000): for determining the land-cover types.
- Fire records in the Huzhong Forest from 1990-2000: for determining the fire return interval, burned area, and ignition density.
- Relevant literature and field surveys: for determining the species' life history attributes, and the fire return interval, burned area, and ignition density during the no fire suppression period.
Succession Module
Succession is a non-spatial, site-level process. Succession at each site is a competitive process driven by the species' life history attributes [39], including longevity, age of sexual maturity, shade tolerance class, fire tolerance class, effective seeding distance, maximum seeding distance, probability of vegetative propagation, and minimum age of vegetative reproduction. Since the model tracks the presence or absence of species age cohorts, the succession dynamics were simplified and simulated, such as birth, growth, and death processes acting on species age cohorts. Firstly, LANDIS PRO simulates seed dispersal based on species' effective and maximum seeding distance [39]. Secondly, due to competition among or within species, when seeds reach a location, only seedlings of shade-tolerant species can grow if the growing space is already occupied. Once the canopy of the area is completely closed, intra-stand competition, meaning self-thinning, starts. Less shade-tolerant trees, as well as old, vulnerable trees that are close to longevity, might be outcompeted and die as a result of the self-thinning process. Later, the resulting canopy gaps are filled by the cross-growth of adjacent trees or by new seedlings. In this iterative process, the succession also interacts with other modules. The main input files of this module are the species attribute file, land type attribute map file, species composition map file, and map attribute file [43]. The species attribute file contains all the life history attributes as mentioned above for each species, along with the reclassification coefficient, species group, biomass group, maximum diameter, maximum stand density, number of seeds, and carbon coefficient. The main parameters are listed in Table 2. The LANDIS model requires land type data reflecting differential species establishment coefficients among land types. To achieve this, a synthetic land type or ecoregion was created from abiotic data layers such as climate, soil, geology, and topography. The species composition map or map attribute file consists of species and their age classes.
Fire Module
Fire disturbance is a spatial landscape process based on the probability distributions of the fire cycle and mean fire size for various land types [39]. Fires occur at random at individual locations. However, at the landscape scale, there are repeating patterns in ignition, location, size, and shape. To show the influence of vegetation and topography on the occurrence of forest fires, we classified different land types and determined the input parameters for different land types by reviewing literature and fire record data. This is reflected in the different input parameters, such as mean fire return interval, mean fire size, standard deviation of fire size, and ignition density. The fire disturbance module has three main processes: when and where the fire occurs, how the fire propagates, and the impacts this has on the forest landscape [35,39].
First, for a given time step, LANDIS PRO generates the number of ignitions (X) in a given fire unit based on the Poisson distribution and the parameter λ (i.e., the average number of ignitions per decade). At the time of ignition, LANDIS is subjected to a Bernoulli trial. The result is denoted by Yi, and the parameter is the probability of fire (Pi), whose value is defined by the time since the last fire in the ignited unit. The probability of fire (P) for each unit is calculated from the fire cycle (FC) and the time since the last fire (t), where P is the fire initiation probability. After ignition is completed, LANDIS will randomly select a fire size, denoted by Z, from a lognormal distribution with parameters µ (mean fire size (MFS)) and σ² (standard deviation of fire size (STD)) to simulate fire spread. The specific structure is described in reference [43].
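A rough sketch of these ignition and fire-size draws is given below in R. The parameter values are illustrative, and the form used for the fire probability (increasing with time since the last fire, capped at 1) is an assumption, since the model's exact equation is not reproduced here.

```r
# Illustrative sketch of per-time-step ignition and fire-size draws;
# parameter values are placeholders, not calibrated Huzhong inputs.
set.seed(42)
lambda <- 2     # mean ignitions per fire unit per decade
FC     <- 150   # fire cycle (years)
t_last <- 60    # time since last fire in the unit (years)
MFS    <- 500   # mean fire size (ha)
STD    <- 300   # standard deviation of fire size (ha)

X <- rpois(1, lambda)                  # number of ignition attempts (Poisson)

# Bernoulli trial per ignition; the probability form below (t/FC, capped at 1)
# is an assumed placeholder for the model's actual fire-probability equation
P_fire <- min(1, t_last / FC)
Y_i    <- rbinom(X, size = 1, prob = P_fire)

# lognormal fire size parameterised so its mean is MFS and its SD is STD
sigma2 <- log(1 + (STD / MFS)^2)
mu_log <- log(MFS) - sigma2 / 2
fire_sizes <- rlnorm(sum(Y_i), meanlog = mu_log, sdlog = sqrt(sigma2))
fire_sizes                             # one size per successful ignition
```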
Secondly, LANDIS PRO 7.0 has two algorithms to simulate fire spread. The first one is a percolation algorithm, in which the fire can only spread in four directions: up, down, left, and right. The second algorithm is a modified percolation algorithm that takes fuel accumulation, topography, and prevailing winds into account, in which the fire can spread from the fire line to adjacent locations in eight directions (N, NE, E, SE, S, SW, W, NW).
Third, fire intensity is determined by the quantity and quality of fuels. Fires of specific intensities interact with individual species and age groups based on two attributes, species' fire resistance and age susceptibility. Differences in species' fire resistance are reflected by the different species' fire resistance classes in the species' property input table. Once the fire intensity is determined, the model will calculate the mortality rate for each age group within each species using a series of functions, which are not listed.
Fuel Module
The LANDIS PRO tracks fine fuels, coarse fuels, and live fuels and simulates various types of fuel management [43]. In this module, fine fuels (FF), mainly the leaf litter and small dead branches of less than 1/4 inch in diameter, are the main prerequisite for ignition. Coarse fuels (CF) include any dead tree materials greater than or equal to three inches in diameter. They affect the intensity class of the fire. Live fuels, also known as canopy fuels, are live trees that may be ignited during high-intensity fires, such as canopy fires. Fine fuel quantities were derived from the age of the tree species and corrected by the fuel quality factor (FQC). In contrast, coarse fuels were determined using stand age (oldest age group) in combination with disturbance history (time since last disturbance) [44,45]. Accordingly, both coarse and fine fuel loads were classified into five grades. The actual fuel load cases corresponding to the five grades of coarse fuels are Grade 1 (<0. Fuel management is divided into two categories: prescribed burning and physical fuel load reduction (removal and mechanical thinning). The intensity of treatment is reflected by the degree of changes in rank before and after fuel treatment in the input file. At the same time, parameters such as the area and frequency of treatment can also be set by the input file [35,46]. The model will first treat areas with the highest potential fire risk based on potential risk ranking.
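To make the grading step concrete, the sketch below maps example coarse-fuel loads onto five grades in R; the breakpoints are hypothetical placeholders, since the actual thresholds are not fully reproduced in the text.

```r
# Hypothetical illustration of classifying coarse-fuel loads into five grades;
# the breakpoints are invented placeholders, not the model's real thresholds.
coarse_breaks <- c(0, 0.5, 1.5, 3, 5, Inf)   # assumed load thresholds (kg m^-2)

coarse_grade <- function(load) {
  cut(load, breaks = coarse_breaks, labels = 1:5, include.lowest = TRUE)
}

coarse_grade(c(0.2, 1.0, 2.5, 4.0, 7.0))     # returns grades 1-5
```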
LANDIS Parameterization
The data imported for the LANDIS model's initialization were split into two categories: non-spatial parameters (DAT files) and spatial parameters (GIS layers). Non-spatial parameters included the biological attributes of tree species, species establishment probability, fire disturbance parameters, etc. Spatial parameters included tree species composition maps, stand type maps, management area maps, etc. The available data for the parameterization of LANDIS PRO included two scenes of 2000 TM remote sensing imagery (WRS P/R: 120/24 and 121/24), topographic maps (1:50,000), 1990 forest stand maps, and fire records in the Huzhong Forest from 1990 to 2000. The utilization of these datasets is shown in Table 1.
Species' Vital Attributes and Forest Composition Map
Eight tree species and their vital attributes were incorporated into LANDIS PRO. The major life history characteristics of tree species included longevity, maturity age, shade tolerance value, fire tolerance value, species type, carbon coefficient, etc. [47]. The life history attributes of the eight species were obtained from the literature and field survey studies [41,48,49]. The main parameters are shown in Table 2. The forest composition map was generated from the 1990 forest stand map, which contains species and age information for each pixel. The forest stand map records the boundaries of stands and compartments. A forest inventory unit usually contains 10-100 forest stands. In order to reduce the computational load during the model simulation, the forest composition map was processed at a resolution of 90 m × 90 m, generating 1480 rows × 1274 columns.
This study only discusses two major tree species (larch and white birch), as the other six species represent a small proportion of the study area.
Land Type Map
The LANDIS model divides the heterogeneous landscape into relatively homogeneous land-cover type units; the same land type is assumed to have the same environmental conditions. We classified the study area into six land types: non-forest, water, terrace, southern slope, northern slope, and ridge top (>1000 m). The non-forested land type and water areas were obtained from the 1990 forest stand map of the Huzhong Forestry Bureau. The southern and northern slopes were divided according to the slope aspect derived from the DEM. The ridge top and terrace were further determined by calculating the elevation from the DEM. Non-forest and water accounted for 0.82% of the total area. Terrace, southern slope, northern slope, and ridge top areas accounted for 4.95%, 38.41%, 45.19%, and 10.61%, respectively. The land type map was also resampled to a 90 m × 90 m resolution (Figure 2).
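A possible way to derive such a land type raster from a DEM is sketched below; the aspect window used for "southern slope" and the elevation cutoff for "terrace" are assumptions, since only the ridge-top threshold (>1000 m) is stated above.

```python
import numpy as np

def classify_land_types(elevation, aspect, forest_mask, water_mask):
    """Return an integer land type raster.
    Codes: 0 non-forest, 1 water, 2 terrace, 3 southern slope,
    4 northern slope, 5 ridge top (>1000 m, as stated in the text).
    Assumptions: aspects of 90-270 degrees count as south-facing, and
    terraces are forested cells below an assumed 600 m cutoff."""
    land = np.zeros(elevation.shape, dtype=np.int8)        # 0 = non-forest
    south_facing = (aspect >= 90) & (aspect <= 270)
    land[forest_mask & south_facing] = 3
    land[forest_mask & ~south_facing] = 4
    land[forest_mask & (elevation < 600)] = 2               # assumed terrace cutoff
    land[forest_mask & (elevation > 1000)] = 5              # ridge top
    land[water_mask] = 1
    return land

# Example on a tiny synthetic landscape.
elev = np.array([[400, 800], [1200, 900]])
asp = np.array([[180, 20], [200, 120]])
forest = np.array([[True, True], [True, True]])
water = np.array([[False, False], [False, False]])
print(classify_land_types(elev, asp, forest, water))
```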
Forest Fire Data
The study only simulates fire disturbance, as windfall and insect pests are not common in the Huzhong Forest. It assumes that each land type has the same fire disturbance pattern (mean fire return interval, mean burned area, etc.). By analyzing the fire records from 1990 to 2000, we obtained the mean fire return intervals for different land types under the current fire suppression scenario. The mean fire return intervals under the no fire suppression scenario were obtained from the literature [7,50].
Simulation Scenarios
Scenarios Design
We considered two fire-control policies: the no fire suppression (NFS) policy, representing historical fire regimes before 1950, and the current fire suppression (CFS) policy, representing the current fire regimes after 1950. Four simulation scenarios were formed based on whether or not fuel treatment was performed under the two policies: A) no fuel treatment (NFT) under CFS, B) fuel treatment (FT) under CFS, C) FT under NFS, and D) NFT under NFS. Scenarios A and D were for contrast and did not include fuel treatments. Scenarios B and C implemented fuel management under the CFS scenario and NFS scenario, respectively, and each of them included nine fuel treatment plans (Table 3).
We simulated three fuel treatment methods, including reduction of coarse woody debris, prescribed burning, and coarse woody debris reduction plus prescribed burning. Coarse woody debris reduction was designed to remove all the coarse fuels less than or equal to grade three on a pixel, while the coarse fuels of grades four and five were reduced to levels one and two, respectively. Coarse fuel treatment results in a one-level increase in each of the fine fuel classes. Prescribed burning did not change the grade of coarse fuels but consumed all the fine fuels. Coarse woody debris reduction plus prescribed burning reduced the coarse fuels of less than grade three by one level, while the coarse fuels greater than or equal to grade three were changed to level one; at the same time, the fine fuels were all burned. The details are described in Table 3.
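These grade-change rules can be written down directly; the following sketch applies one reading of them to coarse fuel (CF) and fine fuel (FF) grade rasters, with grade 0 used here to represent removed or consumed fuel.

```python
import numpy as np

def coarse_reduction(cf, ff):
    """Coarse woody debris reduction: CF grades <= 3 are removed (set to 0),
    grade 4 -> 1, grade 5 -> 2; fine fuels rise by one class (capped at 5)."""
    cf_out = np.where(cf <= 3, 0, cf - 3)
    ff_out = np.minimum(ff + 1, 5)
    return cf_out, ff_out

def prescribed_burning(cf, ff):
    """Prescribed burning: coarse fuels unchanged, fine fuels all consumed."""
    return cf.copy(), np.zeros_like(ff)

def reduction_plus_burning(cf, ff):
    """Combined treatment: CF grades < 3 drop one level, CF grades >= 3 are
    set to grade 1; all fine fuels are burned."""
    cf_out = np.where(cf < 3, np.maximum(cf - 1, 0), 1)
    ff_out = np.zeros_like(ff)
    return cf_out, ff_out

cf = np.array([1, 3, 4, 5])
ff = np.array([2, 5, 1, 3])
print(coarse_reduction(cf, ff))
print(prescribed_burning(cf, ff))
print(reduction_plus_burning(cf, ff))
```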
To find a suitable treatment size and frequency, we designed three combinations of treatment size and frequency for each treatment method, considering the treatment costs: small size (2%) with high frequency (5 years), medium size (6%) with medium frequency (10 years), and large size (10%) with low frequency (20 years). A total of nine simulation plans were determined (Table 3).
Model Run Setting
To reduce the simulation uncertainty, we chose different random seed numbers and replicated ten simulations for each simulation plan, and then used the average value.
Parameter Test and Data Analysis
The Wilcoxon signed-rank test was used to compare the inferred and simulated MRI at 90% confidence intervals, and no significant difference was found (CFS, p = 0.875; NFS, p = 0.7715) (Table 4). In this study, one-way ANOVA was used for significance tests among the various simulation plans using SPSS 22.0. The tested model output variables included average burn intensity, total burned area, burned area with different intensities, aggregation index, etc. (Table 5). The tested values were the average values from the ten replicate simulations.
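Both tests can be reproduced with SciPy on the replicate-averaged outputs; the numbers below are placeholders for illustration, not the values reported in Tables 4 and 5.

```python
from scipy import stats

# Placeholder values: inferred vs. simulated mean fire return intervals (years)
# for the six land types under one fire-control policy.
inferred = [110, 120, 150, 180, 200, 160]
simulated = [112, 118, 155, 176, 205, 158]
w, p_wilcoxon = stats.wilcoxon(inferred, simulated)
print(f"Wilcoxon signed-rank: p = {p_wilcoxon:.3f}")

# One-way ANOVA across fuel treatment plans (placeholder replicate means of
# decadal burned area for three of the nine plans).
plan_b11 = [420, 435, 410, 428, 440]
plan_b22 = [400, 415, 395, 408, 412]
plan_b33 = [380, 372, 390, 385, 377]
f_stat, p_anova = stats.f_oneway(plan_b11, plan_b22, plan_b33)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
```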
Property / Indicator / Calculation method:
Fire disturbance: burnt area (the model directly outputs the burnt area); fire intensity (the model outputs burnt area maps for different fire intensities); fuel load (the model outputs fuel load maps).
Species distribution and age structure: percentage of landscape (PLAND), calculated as the total patch area for a given species divided by the total area of the study area; percentage of mature age species (MAS), calculated as (the total number of broadleaf trees older than 60 years + the total number of conifers older than 100 years) divided by the total number of trees.
Forest landscape pattern: largest patch index (LPI), calculated as the area of the largest patch divided by the total area of the study area; aggregation index (AI), calculated as the common boundary of identical patches of a tree species divided by all boundaries of that tree species.
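The three raster-based metrics can be illustrated by labeling the patches of one species and counting shared boundaries, following the definitions above rather than any particular landscape-metrics package; the adjacency rule and the edge normalization are assumptions.

```python
import numpy as np
from scipy import ndimage

def pland_lpi_ai(species_mask):
    """PLAND, LPI, and AI for one species on a binary raster (cell counts
    stand in for areas; four-neighbor adjacency is assumed)."""
    total_cells = species_mask.size
    species_cells = int(species_mask.sum())
    pland = species_cells / total_cells

    labels, n = ndimage.label(species_mask)                 # 4-connected patches
    patch_sizes = ndimage.sum(species_mask, labels, range(1, n + 1)) if n else [0]
    lpi = max(patch_sizes) / total_cells

    # Shared edges between like cells vs. all edges touching species cells.
    right = species_mask[:, :-1] & species_mask[:, 1:]
    down = species_mask[:-1, :] & species_mask[1:, :]
    shared = int(right.sum() + down.sum())
    all_edges = 2 * species_cells        # assumed normalization; conventions differ
    ai = shared / all_edges if all_edges else 0.0
    return pland, lpi, ai

mask = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 1],
                 [0, 0, 1, 1],
                 [0, 0, 1, 1]], dtype=bool)
print(pland_lpi_ai(mask))
```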
Effects of Fire Suppression and Fuel Treatments on Fire Regimes and Forest Fuel Loads
By comparing the simulation results of scenario A and scenario D, the ecological effects of the current fire suppression could be inferred. We found that fire suppression significantly reduced the total burned area (p < 0.001) (Figure 3a) and obviously increased the mean fire intensity (p = 0.004) (Figure 4a). Furthermore, fuels accumulated significantly faster in scenario A compared to scenario D (Figure 4b). Both the mean coarse fuel load class (p = 0.001) and the mean fine fuel load class (p = 0.007) of scenario A were significantly higher than those in scenario D (Figure 4b).
By comparing scenario A with scenario B, we could determine the fuel treatment effects under the current fire suppression policy. Although the total burnt area for the nine fuel treatment plans showed no obvious difference compared to scenario A (p = 0.429) (Figure 3a), it is worth noting that all nine fuel treatment plans significantly reduced the area burnt by medium-intensity fires (p < 0.001) and high-intensity fires (p < 0.001), while the area burnt by low-intensity fires significantly increased (p < 0.001) (Figure 3b). The greatest reduction in the area burnt by high-intensity fires occurred under fuel treatment plan B33, with a low treatment frequency (20 years), a large treatment area (10%), and coarse woody debris reduction in addition to the prescribed burning measure.
By comparing scenario C with scenario D, the fuel treatment effects under the no fire suppression policy could be explored. The results showed that the total burnt area for the nine fuel treatment plans showed no obvious difference from scenario D (p = 0.213) (Figure 3a). Unlike the results of the fuel treatments under the CFS scenario, the coarse woody debris reduction (C11, C12, C13) and coarse woody debris reduction plus prescribed burning (C31, C32, C33) under the NFS scenario could obviously reduce the area burnt by medium- and high-intensity fires, while prescribed burning (C21, C22, C23) did not have a significant effect (Figure 3c).
Effects of Fire Suppression and Fuel Treatments on Species Abundance and Age Structure
As larch is the dominant coniferous species and white birch is the major broadleaved species in our study area, we chose these two species to analyze how species abundance and age structure were affected by fire suppression and fuel treatments. In general, within a simulation time of up to 50 years, fire suppression significantly reduced the abundance of white birch (p = 0.036) (Figure 5a), while that of larch increased overall (p = 0.022) (Figure 5b), as indicated by the PLAND. Fuel treatments had no significant influence on the abundance of white birch and larch under either the CFS policy (Figure 5a,b) or the NFS policy (Figure 5d,e).
In terms of age structure, the percentage of mature age species (MAS) can reflect the tree species' ability to resist disturbance. Although there was no significant difference in MAS between scenario A and scenario D for the first 50 years of the simulation, fire suppression increased the MAS of forests after 50 years of simulation (Figure 5c). None of the fuel treatment plans significantly affected MAS in the study area under either the CFS policy (Figure 5c) or the NFS policy (Figure 5f).
Effects of Fire Suppression and Fuel Treatments on Forest Landscape Pattern
We used the largest patch index (LPI) and aggregation index (AI) to quantify forest landscape patterns. The change in LPI can reflect the intensity and frequency of disturbances, and AI the connectivity of the landscape. The LPI and AI of white birch under scenario D were generally larger than those under scenario A (Figure 6a,c), indicating that fire suppression could limit the spread of white birch. However, the LPI and AI of larch under scenario D were smaller than those under scenario A (Figure 6b,d), demonstrating that fire suppression favors the expansion of larch. Although there were fluctuations in the LPI of larch under scenario C after 70 years of simulation, they were not significant compared to scenario D (p = 0.992). The various fuel treatment plans implemented under both the CFS and NFS policies had no significant influence on the forest landscape pattern.
Effects of Fire Suppression and Fuel Treatments on Fire Regimes and Forest Fuel Loads
To alleviate catastrophic forest fire occurrences, fire suppression and fuel treatments have been implemented worldwide, significantly altering fire disturbance regimes [2]. Our simulation results indicated that fire suppression reduces the total burnt area (Figure 3a) while increasing the mean fire intensity (Figure 4a) and fuel loads (Figure 4b). Our results were in line with previous studies [10,50]. Our results also demonstrated that fuel treatment plans under the CFS policy could significantly reduce the areas burnt by medium-intensity and high-intensity fires. Some of the medium-intensity and high-intensity fires dropped to low-intensity fires, and therefore the area burned by low-intensity fires increased. Our results confirm the effectiveness and goal of forest fuel treatments [31]. At the same time, the effectiveness of forest fuel treatments is simultaneously influenced by the combination of fuel treatment methods, area, and frequency [28,30]. We found that, among the nine fuel treatment plans, fuel treatment plan B33, with ten percent of the area treated, a frequency of application of every 20 years, and coarse woody debris reduction plus the prescribed burning measures, could greatly reduce the area burnt by high-intensity fires. Due to the limited growth rate of vegetation and slow accumulation of fuels, a high frequency of fuel treatment does not necessarily consume large amounts of fuel. Instead, it may be more effective to increase the area of fuel treatment. Coarse woody debris reductions produce large amounts of fine fuels during implementation, which increases the probability of forest fires [51]. While prescribed burnings remove fine fuels, they do not affect coarse fuels, resulting in the accumulation of coarse fuels and high-intensity fires [52]. This observation is consistent with a previous study [26] and has crucial and practical implications for guiding forest fuel management in our study area and other related regions. However, fuel treatments under the NFS policy were less effective than those under the CFS policy. This may be because the large burnt area under the NFS policy (Figure 3a) reduces the accumulation of fuel loads (Figure 4b), which results in fuel treatments having weaker effects [9,53,54].
Effects of Fire Suppression and Fuel Treatments on Species Abundance and Age Structure
Fire suppression has been found to alter forest composition, species abundance, and age structures [9,10]. Our results showed that fire suppression increased the abundance of larch while decreasing the abundance of white birch. This is in line with previous studies [25,50]. Larch is a shade-tolerant, non-pioneer species, whereas white birch is a shade-intolerant pioneer species. According to the succession process of vegetation in our study area [55,56], white birch is the major species in the pioneer stage, and larch is the dominant species in the climax stage. When fires occur, canopy openings are formed. White birch, relying on its strong seed dispersal and colonization ability, more easily establishes and dominates the landscape. As the canopy cover gradually increases, the forest gap becomes smaller, resulting in less exposure to light, and larch relies on its higher shade tolerance to gradually dominate this environment. Therefore, larch is the dominant species in this area in the late successional stage. To determine the forest age structure, we only analyzed fire suppression's impact on mature-age species. The results indicated that, over the last 50 simulation years, the percentage of mature trees significantly increased under the CFS scenario compared to the NFS scenario. This is because, after decades of forest harvesting in our study area, the stand age is mainly within 40 years; therefore, the percentage of mature trees showed no change during the first 50 simulation years.
There has been no documentation regarding the effects of fuel treatment on forest age structures. Our simulation results demonstrated that fuel treatments did not significantly affect the percentage of mature trees. This is somewhat expected and provides a scientific basis for implementing forest fuel treatments that lower forest fire severity without changing forest age structures or the forest succession process. Our results consolidate the foundation for forest fuel treatments in fire-prone forest landscapes.
Effects of Fire Suppression and Fuel Treatments on Forest Landscape Pattern
Landscape pattern is a core concept of landscape ecology, relevant to many ecological processes, such as seed dispersal, animal movements, landscape fragmentation, etc. [57-59]. Many indices have been designed to quantify landscape patterns [60,61]. We chose LPI and AI to describe landscape patterns, as LPI can reflect the intensity and frequency of disturbances [62-64], and AI can reflect the connectivity of the landscape [63]. Our results indicated that fire suppression reduces the patch size and aggregation of white birch, which might impact the habitats of animals favoring white birch forests, such as red deer (Cervus elaphus) and roe deer (Capreolus capreolus), which prefer birch forest ecosystems [65,66]. However, fire suppression leads to large and aggregated patches of larch, which may reduce habitat fragmentation and edge effects [67,68] and increase the core area of suitable habitats. This is beneficial to other wildlife species, such as sables (Martes zibellina) [69]. Therefore, a more comprehensive wildlife habitat suitability analysis is needed [70]; our simulation results provide the basis for such an analysis. As for the fuel treatment effects, we found that the various fuel treatments had no significant influence on the forest landscape patterns. However, they affected the fire pattern, with an increase in lower-severity fires (Figure 3b). The large burnt area caused by lower-severity fires may favor the growth of grass [71,72], which could provide more forage for herbivorous animals [73,74] and more food for carnivorous animals [75]. Thus, the food chain of the forest ecosystem is better maintained. Our results highlight the roles that forest fuel treatments play in maintaining the ecological balance of forest ecosystems.
The Practical Application and Promotion of Fuel Treatments
Plan B33, derived from this study, is useful for maintaining forest fire safety in the Great Xing'an Mountains, and its implementation can be considered. Judging from the simulation results, if we had started to implement Plan B33 in 1990, high-intensity fires in the Great Xing'an Mountains would have been significantly reduced without affecting the forest age structure or the forest succession process. The study considered the difficulty and cost of implementation, so only nine representative fuel treatment plans were simulated. However, there are many possible combinations of fuel treatment methods, sizes, and frequencies, which should be adjusted to the specific fire situation in practice. Furthermore, the LANDIS PRO model does not simulate the dynamics of herbs and shrubs. Herbs and shrubs are also important sources of fuel [50], and they have high flammability and large biomass. Therefore, this study may have underestimated the fuel load and fire intensity.
The approach presented in this study for assessing the ecological effects of fuel treatment scenarios can be extended to other settings, but the results obtained in this paper may not be directly applicable to other regions of China. The reason is the variability of vegetation cover, climatic conditions, and other factors across regions. Currently, the LANDIS PRO model is also being used successfully in other regions [76-78]; thus, it is possible to simulate relevant fuel treatments and provide a scientific basis for forest fire prevention work in China.
Conclusions
Fire is a major natural disturbance and will be exacerbated in the future by climate warming. To alleviate the damage that forest fires cause to society and forest resources, concerns regarding fire suppression and forest fuel treatments are increasing. We used the LANDIS PRO model to simulate the consequences of fire suppression and the ecological effects of fuel treatments in a boreal forest of the Great Xing'an Mountains, China. The ecological effects were comprehensively evaluated in terms of fire disturbance, species succession, and forest landscape. We observed that fire suppression led to fuel accumulation and a higher fire intensity while reducing the total burned area. Forest fuel treatments could lower forest fire severity without changing the forest age structure or the forest succession process. If we had started to treat 10% of the area with coarse woody debris reduction plus prescribed burning every 20 years in 1990, there would be far fewer high-intensity fires now, and the forest age structure and forest succession process would develop as they would naturally, unaffected by the treatments. Our results form a foundation for practical forest fuel treatments in fire-prone forest landscapes. We suggest a suitable fuel treatment plan for the Great Xing'an Mountains, with a low treatment frequency (20 years), a large treatment area (10%), and coarse woody debris reduction in addition to the prescribed burning measure.
1 °C. The average annual precipitation is ~500 mm, with most occurring between June and August. Spring and autumn are the seasons of high fire incidence due to the influence of the Mongolian dry winds, alternating high and low temperatures, and frequent windy weather. The vegetation type is cold-temperate coniferous forest, with larch (Larix gmelini Rupr. Kuzen) as the dominant species in the area; other species include pine (Pinus sylvestris var. mongolica Litv.) and spruce (Picea koraiensis Nakai). The main broad-leaved tree species are birch (Betula platyphylla Sukaczev), two species of poplar (Populus davidiana Dode, Populus suaveolens Fisch. ex Poit. & A. Vilm.), and willow (Chosenia arbutifolia (Pall.) A. Skv.).
Figure 1. The geographic location of the Huzhong Forest.
Figure 2. Land type map of the Huzhong Forestry Bureau.
Figure 3. Total burnt area and area burned by different fire intensities for all scenarios. (a) Mean decadal total burnt area for all scenarios. This includes scenario A for the fire suppression scenario, scenario D for the no fire suppression scenario, and scenarios B and C for the implementation of fuel treatments corresponding to scenarios A and D. (b) Area of different-intensity fires under the nine fuel treatment plans included in scenario B. (c) Area of different-intensity fires under the nine fuel treatment plans included in scenario C. Low-intensity: class 1 and class 2; medium-intensity: class 3; high-intensity: class 4 and class 5.
Figure 4. Mean fire intensity (a) and mean fuel load grade (b) under scenario A and scenario D.
Figure 5. Percentage of landscape and percentage of mature age species for all scenarios. Scenario A and scenario D are the baseline scenarios for comparison, and they do not treat fuels. (a-c) are the PLAND for white birch, the PLAND for larch, and the MAS under the nine fuel treatment plans included in scenario B, respectively. (d-f) are the PLAND for white birch, the PLAND for larch, and the MAS under the nine fuel treatment plans included in scenario C.
Figure 6. Largest patch index (LPI) and aggregation index (AI) for all scenarios. Scenario A and scenario D are the baseline scenarios for comparison, and they do not treat fuels. (a,b) are the LPI for white birch and larch, respectively, under the nine fuel treatment plans in scenario B. (c,d) are the AI of white birch and larch, respectively, under the nine fuel treatment plans in scenario B. (e,f) are the LPI for white birch and larch, respectively, under the nine fuel treatment plans in scenario C. (g,h) are the AI of white birch and larch, respectively, under the nine fuel treatment plans in scenario C.
Table 2. Species' vital attributes for the study area.
Table 3. Fuel treatment plans and parameters.
Table 4. Statistical test for mean fire return intervals (MRI) between the inferred and the simulated values for the current fire suppression (CFS) and no fire suppression (NFS) scenarios.
Table 5. Evaluation indicators used in this study.
DEVISING OPTIMAL TECHNOLOGICAL PARAMETERS FOR SPRAY DRYING TO PRODUCE WHOLE CAMEL MILK POWDER
The basic raw material to produce dairy products in the world is cow's milk. However, many people suffer from intolerance to cow's milk protein, which could cause an allergic reaction in the body. In turn, camel milk, in its quantitative and qualitative protein composition and other biological properties, is close to breast milk and belongs to the so-called albumin group. The absence of allergies in the human body to camel milk is explained by good digestion of an easily digestible clot, which, under the action of enzymes, acquires the form of small flakes. It is also proved that camel milk has high therapeutic, prophylactic, and dietary properties, owing to which it has wide medical indications for the use of products based on it. Due to the presence of antimicrobial properties, raw camel milk has a slightly longer shelf life than cow's milk. However, in order to preserve its biological and nutritional properties for a long time, it also needs to be processed. There are various methods for preserving milk in the world; all of them are based on the suppression of pathogenic microorganisms, preventing their further growth and development. Drying is one of the common methods of milk preservation, in which free moisture is removed, as much as possible inhibiting the reproduction of microorganisms. An effective method of drying milk, in terms of the energy cost and output of finished products, is spray drying. In this case, the production of dry dairy products based on camel milk could not only expand the range of products but would also stimulate milk production and the growth of livestock at camel farms. On one hand, this could provide an impetus for the industrial introduction of export-oriented products with high added value.
Introduction
The basic raw material to produce dairy products in the world is cow's milk. However, many people suffer from intolerance to cow's milk protein, which could cause an allergic reaction in the body. In turn, camel milk, in its quantitative and qualitative protein composition and other biological properties, is close to breast milk and belongs to the so-called albumin group. The absence of allergies in the human body to camel milk is explained by good digestion of an easily digestible clot, which, under the action of enzymes, acquires the form of small flakes. It is also proved that camel milk has high therapeutic and prophylactic and dietary properties, owing to which it has wide medical indications for the use of products based on it.
Due to the presence of antimicrobial properties, raw camel milk has a slightly longer shelf life than cow's milk.
However, in order to preserve its biological and nutritional properties for a long time, it also needs to be processed. There are various methods for preserving milk in the world; all of them are based on the suppression of pathogenic microorganisms, preventing their further growth and development.
Drying is one of the common methods of milk preservation, in which free moisture is removed, as much as possible inhibiting the reproduction of microorganisms. An effective method of drying milk, in terms of the energy cost and output of finished products, is spray drying. In this case, the production of dry dairy products based on camel milk could not only expand the range of products but would also stimulate milk production and the growth of livestock at camel farms. On one hand, this could provide an impetus for the industrial introduction of export-oriented products with high added value. With the right choice of methods and parameters of drying, it is possible to obtain dry camel milk with a long shelf life and with maximum preservation of biological and nutritional properties. In this case, a special role belongs to the physical properties of the resulting powder, as they predetermine the consumer qualities of the finished product. These physical properties include, among others, solubility, moisture content, hygroscopicity, density, water activity, stickiness, and particle size. Therefore, devising the optimal technological parameters for the spray drying of camel milk, in order to obtain improved physical properties of the finished product, is a relevant task.
Literature review and problem statement
Paper [1] reports the results of research into the production of camel milk in the world, which amounted to about 5.3 million tons per year. Of them, 1.3 million tons are spent for direct consumption by people, and the rest is spent on feeding camel colts. This is due to the low volumes of processing of raw camel milk into dairy products, mainly due to the non-prevalence of processing technology. It was found that during lactation the average daily production of camel milk is from 3 to 10 liters. Under appropriate conditions (improvement of the animal feeding ration, the availability of water, and proper veterinary care of animals), this figure can reach up to 20 liters [2]. The low productivity of camels in comparison with cows is compensated for by the chemical composition, biological and nutritional value, as well as the proven antimicrobial and immunomodulatory properties of camel milk.
It has been shown that, despite the relatively small average daily milk yield, camel milk is a valuable source of nutrients. It contains volatile acids, especially linoleic acid, and polyunsaturated and monounsaturated fatty acids, which have an important role in human nutrition [3]. They belong to the indispensable factors of nutrition as they are not formed in the body and must come from food. In addition, camel milk is a rich source of protein: lysozyme, lactoferrin, lactoperoxidase, immunoglobulins, etc. According to studies, peptidoglycan recognition protein has been found only in camel milk. Camel milk contains low amounts of β-casein and lacks β-lactoglobulin, so it can be consumed by people suffering from allergies to cow's milk [4,5]. Therefore, it could be used as a therapeutic and prophylactic agent and a full-fledged replacement of cow's milk in human nutrition.
It was found that the immunomodulatory property of camel milk is determined by its high content of vitamin C, which is three times higher than in cow's milk and one and a half times higher than that in breast milk. It was determined that camel milk contains a large amount of minerals such as sodium, potassium, iron, copper, zinc, selenium, and magnesium [6], necessary to maintain the vital activity of the body. In addition, camel milk could be recommended for people suffering from lactose intolerance since the lactose contained in it is easily metabolized [7]. These data show the advantages of camel milk compared to other types of milk, in particular cow's milk. At the same time, it was noted in work [8] that the chemical composition of camel milk may differ depending on the regions of animal habitat. Thus, the fat and protein content in the milk from camels in Kazakhstan was 3.65 % and 3.59 %, while in the camel milk obtained from Australia it was 2.53 % and 2.97 %, respectively. The difference in the nutritional and biological value of camel milk is also explained by the diet of animals, the diversity of vegetation, and climatic conditions. However, the issues related to the preservation of the initial quality indicators of camel milk over a long time remained not fully resolved. The reason for this may be the objective difficulties associated with the preservation of camel milk. A rational way to preserve the quality indicators and the biological and nutritional value of milk, while also reducing the costs of storage and transportation of finished products, is drying.
Drying of milk in the production of dry dairy products is the process of removing free water from products, which is carried out in two stages: condensing, and drying the pre-condensed products. Condensing the product is achieved by its evaporation to obtain 18-20 % of the mass fraction of the casein-calcium phosphate complex in water; at the same time, the product should remain fluid. Regardless of the choice of drying technique, certain requirements for the physical properties of the product must be met during and as a result of the process. These include the specified final moisture content, free friability, the minimum content of free surface fat, and the required fullness and rate of dissolution at minimal losses of raw materials [9]. These properties must meet the standards as they determine the consumer qualities of the product.
It was established that there are various methods of milk drying: freeze drying, convective, conductive, drum drying, etc. However, due to high energy costs and low production efficiency, not all types of drying are advisable to use in the production of milk powder. The best way to overcome the relevant difficulties may be to use spray drying. When spray drying in a flow of hot air and using a contact technique, there should be no overheating, over-drying, or burning of milk powder, nor the phenomena of adhesion and cohesion. The duration of stay of the material in the chamber should be a few seconds, in order to achieve high performance of the dryer at low energy costs. However, for some thermolabile materials, its use is undesirable due to the high temperature level of the heat carrier, which at the inlet to the dryer is from 100 °C to 170 °C, and at the outlet, 50-95 °C. Therefore, in the production of dry camel milk by spray drying, the authors used an inlet temperature of 150 °C, which corresponded to a temperature of the finished product at the outlet of 94 °C. In addition, during spray drying, there are significant losses of the dried product due to its removal with the spent heat carrier [10].
In addition, one of the main tasks in the production of milk powder is to preserve the biological and nutritional value of the raw materials. This approach was used in work [11], whose authors indicated that as a result of drying camel milk, the qualitative composition of amino acids did not change while the quantitative content increased. In this case, the amino acid composition determines the nutritional and biological values of milk.
However, spray-drying camel milk using the same temperatures and feed rates as cow's milk could lead to undesirable results. Thus, an increase in the temperature and rate of supply of dairy raw materials could lead to a deterioration in solubility due to the Maillard reaction, and a decrease could lead to increased stickiness and reduced friability.
All this suggests that it is advisable to conduct a study on the optimization of the technological parameters of spray drying to manufacture dry camel milk with improved physical properties.
The aim and objectives of the study
The aim of this study is to optimize the technological parameters in the production of dry whole camel milk with improved physical characteristics and preservation of nutritional and biological value.
To accomplish the aim, the following tasks have been set:
- to work out the technological parameters of spray drying in the production of dry whole camel milk;
- to determine the physical characteristics of the dry whole camel milk obtained by spray drying;
- to treat the results mathematically to determine the optimal technological parameters of spray drying.
The study materials and methods
The study materials and equipment
The research materials were fresh whole camel milk and whole camel milk powder. The fresh whole camel milk (Camelus dromedarius) was obtained from the camel farm TOO Daulet-Beket, Akshi, Almaty oblast, Kazakhstan. The samples of camel milk were delivered in a thermal container; after delivery to the laboratory, they were stored in a refrigerator at a temperature of 4±0.5 °C.
We dried camel milk at the laboratory spray drying unit Buchi mini Spray Dryer B-290 (Switzerland).
In determining the solubility index and absorption of water samples, we used a vortex mixer (ZX4, Velp Scientifica, Italy), a centrifuge (Model 4200, Kubota, Japan), and a convection oven (ED 23, Binder Gmbh, Germany).
A digital hygrometer (Pro's Kit, NT-113, USA) was used to control and monitor the parameters in determining the hygroscopicity of samples.
We determined the density of the samples after shaking using a helium pycnometer (Micromeritics AccuPyc II 1340, USA) by measuring 1.0±0.1 g of the sample.
The value of water activity in the samples was determined at a digital analyzer of water activity (Model 3TE, Aqualab, USA).
To measure the stickiness of the samples, a texture analyzer (TA-HT2, Stable Micro Systems, UK) was used.
The particle sizes of the samples and their distribution were determined at a laser diffraction particle size analyzer (Mastersizer 2000, Malvern, UK).
Methods of studying the physical properties of dry whole camel milk
Determining the solubility index in water. The solubility of the samples was determined according to the procedures described by the author of work [12]. We poured 2.5 g of the sample into a graduated test tube of 50 ml, added 30 ml of distilled water, and stirred. Next, the test tube was put in a water bath (37 °C) for 30 minutes. After incubation, the resulting mixture was centrifuged at 3,500 rpm for 30 min. The resulting liquid phase was poured into a pre-dried and weighed glass Petri dish. Then the Petri dish was put in a convection oven for drying at 105 °C for 24 hours. After drying, the Petri dish was removed from the oven and put in a desiccator. The chilled Petri dish with sample residues was re-weighed until a constant weight was obtained. The residue was the solubilized powder; the ratio of the weight of the residue to the initial mass of the sample was expressed as the water solubility index (WSI), which can be determined as follows:
WSI = (m2 / m1) × 100,
where WSI is the water solubility index, %; m1 is the initial mass of the sample, g; m2 is the mass of the residue after drying, g.
Determining the water absorption index. After centrifugation and separation of the liquid phase, the resulting sediment was weighed. The water absorption index (WAI) was calculated as the weight of the sediment in relation to the initial weight of the sample, which can be determined as:
WAI = (m2 / m1) × 100,
where WAI is the water absorption index, %; m1 is the initial mass of the sample, g; m2 is the sediment mass after centrifugation, g.
Determining hygroscopicity. Hygroscopicity is the ability of the dry powder to absorb moisture from the environment. One gram of milk powder was weighed in a pre-dried and weighed glass Petri dish and placed in a closed desiccator at a room temperature of 25±1 °C. The relative humidity of the medium inside the desiccator was 75±2 %, which was maintained by 150 ml of a saturated solution of NaCl [13]. After seven days, the samples were removed and re-weighed. To determine the hygroscopicity of the sample, the weight difference between the reference and final sample was calculated per 100 g of dry matter (g/100 g) [14]:
H = (m2 - m1) / (m1 × (1 - W/100)) × 100,
where H is hygroscopicity, %; m1 is the initial mass of the sample, g; m2 is the final mass of the sample, g; W is the moisture content in the sample, %.
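For reference, the three indices follow directly from the weighed masses; the sketch below restates the formulas above in code, with purely illustrative example masses, and the hygroscopicity expression reflects the reconstructed reading given above.

```python
def water_solubility_index(m_initial, m_residue):
    """WSI, %: mass of dried solubles relative to the initial sample mass."""
    return m_residue / m_initial * 100.0

def water_absorption_index(m_initial, m_sediment):
    """WAI, %: sediment mass after centrifugation relative to the initial mass."""
    return m_sediment / m_initial * 100.0

def hygroscopicity(m_initial, m_final, moisture_pct):
    """H, g/100 g dry matter: mass gained over 7 days at 75 % RH, expressed
    per 100 g of dry matter of the initial sample (reconstructed reading of
    the procedure; conventions in the literature vary)."""
    dry_matter = m_initial * (1.0 - moisture_pct / 100.0)
    return (m_final - m_initial) / dry_matter * 100.0

print(water_solubility_index(2.5, 2.1))   # 84 %
print(water_absorption_index(2.5, 0.9))   # 36 %
print(hygroscopicity(1.0, 1.12, 3.5))     # ~12.4 g/100 g
```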
Determining bulk density. The bulk density was determined by measuring the mass of the powder sample at a specified volume. Each sample was carefully poured, without compacting, into a dry graduated cylinder with a volume of 25 ml, weighed, and registered [15]. This procedure was repeated 3 times for each sample. The value of the bulk density was determined as follows:
ρb = m / V,
where ρb is the bulk density of the sample, g/cm3; m is the sample mass, g; V is the sample volume in the graduated cylinder, cm3.
Determining density after shaking. The density after shaking was determined by measuring the mass per unit volume of powdery substances, excluding voids. For each sample of milk powder, the measurement procedure was carried out 3 times.
Determining water activity. 2 g of the sample was weighed in a cup and placed in a water activity meter. The activity of water in the sample was determined at room temperature of 25±1 °C. The results of the tests were calculated as an arithmetic mean of three repetitions [16].
Determining stickiness. The applied constant compression force used in the texture analyzer is 40 g, and the displacement height is 10 mm. 3 ml of glycerin is added to 2 g of the milk powder sample and stirred until a homogeneous state is formed. The resulting mixture is placed in the sample compartment; then, for 1 s, the probe of the device is in contact with it. The analyzer records the force at which the probe separates from the surface of the mixture, which corresponds to the stickiness of the sample [17].
Determining particle sizes. A sample of the milk powder was placed in the supply compartment of the device. Compressed air was fed to the analyzer, and the sample particles were moved to the laser chamber under vacuum conditions. Particle size values were calculated as the diameters at 10 %, 50 %, and 90 % cumulative volume, with a distribution curve constructed from the volumetric distribution over particle size (μm) [18].
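The reported percentile diameters can be reproduced from a cumulative volume distribution by interpolation, as sketched below with made-up size classes; the analyzer's own inversion of the diffraction pattern is not modeled here.

```python
import numpy as np

def percentile_diameters(sizes_um, volume_fraction, percentiles=(10, 50, 90)):
    """d(0.1), d(0.5), d(0.9): diameters at which the cumulative volume
    distribution reaches 10 %, 50 %, and 90 % (linear interpolation)."""
    cumulative = np.cumsum(volume_fraction) / np.sum(volume_fraction) * 100.0
    return {p: float(np.interp(p, cumulative, sizes_um)) for p in percentiles}

# Illustrative size classes (micrometres) and their volume fractions.
sizes = np.array([5, 10, 20, 40, 80, 160])
volumes = np.array([2, 10, 30, 35, 18, 5])
print(percentile_diameters(sizes, volumes))
```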
The results are presented as an average value±standard deviation. Statistical analysis was carried out using the Microsoft Excel software, Statistica 10. Reliable differences between the mean values of repeated measurements at each data point were analyzed using variance analysis, P≤0.05.
The results of studying the physical properties of dry whole camel milk
Testing the technological parameters of spray drying
To obtain finished products with certain physical indicators, it is necessary to find the optimal drying parameters. These parameters are the factors that affect the resulting indicators: the temperature at the inlet and the speed of raw material feed. The resulting indicators are the physical properties of the finished product. They include the outlet temperature, solubility index, absorption index, moisture content, hygroscopicity, bulk and post-shake density, water activity, stickiness, and particle sizes.
When making cow's milk powder using a spray drier, high temperatures at the inlet are applied, which usually vary from 170 °C to 220 °C. However, as discussed above, camel milk is a thermolabile product, so it is necessary to reduce the upper limit of the temperature used; to this end, the temperature at the inlet was initially set at 180 °C. When applying an inlet temperature of 170 °C and 180 °C (the feed rate was 35 ml/min), the resulting whole camel milk powder revealed unsatisfactory organoleptic characteristics (Fig. 1). The color of the sample included a pronounced yellow tint, the taste was bitter, the appearance and structure contained individual burnt particles.
When the inlet temperature was below 140 °C, the resulting milk powder had undried fractions and high humidity. Based on the results of our experiments, it was found that in order to obtain dry whole camel milk with good physical and organoleptic indicators, the temperature at the inlet should be in the range from 140 °C to 160 °C.
The second factor in the study, affecting the properties of the finished product, was the speed of supply. The use of a feed rate below 30 ml/min led to partial burning of milk powder and slowing down the drying process. It should also be noted that the feed rate above 40 ml/min (at an inlet temperature of 140 °C to 160 °C) led to the increased moisture content in the finished product: the dairy raw materials did not have time to dry. Given the above, in further research, the feed rate of raw materials was in the range of 30 to 40 ml/min.
Determining the physical indicators of dry whole camel milk
The physical properties of dry whole camel milk obtained by spray drying at an inlet temperature of 140 °C to 160 °C and a feed rate of 30-40 ml/min are given in Table 1.
According to the data in Table 1, the best indicators for the solubility index, absorption index, hygroscopicity, and particle size correspond to the inlet temperature of 150 °C at a feed rate of 30 ml/min. In terms of moisture content, water activity, hygroscopicity, bulk density, and density after shaking, the best results corresponded to the following parameters: inlet temperature, 160 °C; feed rate, 30 ml/min.
Processing of experimental data and the mathematical substantiation of the choice of technological parameters for spray drying
To substantiate the choice of technological parameters used in the spray drying of camel milk to obtain the best indicators of the physical properties of the final product, plots of the factor correlations for each parameter were built.
Outlet temperature. An increase in the rate of supply of raw materials by 1 measurement unit leads to a decrease in the temperature at the outlet by an average of 0.867 measurement units. An increase in the temperature at the inlet by 1 measurement unit leads to an increase in the temperature at the outlet by an average of 1.266 units. Based on the maximum coefficient, β2 = 0.935, we conclude that the inlet temperature factor has the greatest influence on the temperature at the outlet. The statistical significance of the equation was verified using the coefficient of determination and the Fisher criterion. It was found that in this case, 97.61 % of the total temperature variability at the outlet is explained by a change in the factors. It was also established that the parameters of the model are statistically significant (Fig. 2).
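The kind of two-factor linear model behind these coefficients can be fitted, for example, with statsmodels; the factor grid below matches the experimental levels, but the outlet temperatures are illustrative placeholders rather than the measured values.

```python
import numpy as np
import statsmodels.api as sm

# Factor grid used in the experiments: inlet temperature (deg C) and feed
# rate (ml/min). The outlet temperatures are placeholder values.
inlet = np.array([140, 140, 140, 150, 150, 150, 160, 160, 160])
feed = np.array([30, 35, 40, 30, 35, 40, 30, 35, 40])
outlet = np.array([82, 78, 74, 90, 86, 82, 98, 94, 90])   # illustrative

X = sm.add_constant(np.column_stack([inlet, feed]))
model = sm.OLS(outlet, X).fit()
print(model.params)      # intercept, effect of inlet temperature, effect of feed rate
print(model.rsquared)    # share of outlet-temperature variability explained
print(model.f_pvalue)    # overall significance (Fisher criterion)
```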
Water solubility index. An increase in the air temperature at the inlet by 1 measurement unit leads to a decrease in the solubility index by an average of 0.763 units. An increase in the feed rate of the raw materials by 1 measurement unit leads to an increase in the solubility index by an average of 0.0895 units. The statistical significance of the equation was verified using the coefficient of determination and the Fisher criterion. It was found that in the examined event, 35.89 % of the total variability of the solubility index is explained by a change in the factor of the feed rate of raw materials. The response surface is shown in Fig. 3.
It follows from our calculations that the solubility index is more influenced by the feed rate of raw materials than the temperature of the raw material at the inlet. Fig. 3 demonstrates that the optimal temperature regime for the solubility index is the range of 142-150 °C at a feed rate of 24-32 ml/min.
Water absorption index. The result of our calculations is the established influence of the model parameters on the water absorption index. An increase in the temperature at the inlet by 1 measurement unit leads to an increase in the water absorption index by an average of 2.967 units. An increase in the feed rate by 1 measurement unit leads to a decrease in the absorption index by an average of 0.579 units. The statistical significance of the equation was verified using the coefficient of determination and the Fisher criterion. It was found that in the examined event, 38.36 % of the total variability Y is explained by a change in the factors. Note that it is necessary to exclude the influence of the feed rate factor from the formula (Fig. 4).
Moisture content. An increase in the temperature at the inlet by 1 °C leads to a decrease in the moisture content by an average of 0.0504 %. Increasing the feed rate by 1 ml/min leads to an increase in the moisture content in the finished product by an average of 0.069 %. Based on the maximum coefficient, β2 = 0.53, we conclude that the greatest influence on the moisture content in dry milk is exerted by the feed rate of camel milk. The statistical significance of the equation was verified using the coefficient of determination and the Fisher criterion. It was found that in the examined event, 87.81 % of the total variability of the quantitative moisture content is explained by a change in the selected factors. It was also established that the parameters of the model are statistically significant (Fig. 5).
Hygroscopicity. An increase in the inlet temperature by 1 °C leads to an increase in the hygroscopicity by an average of 0.351 %. Increasing the feed rate by 1 ml/min leads to a decrease in the hygroscopicity by an average of 0.172 %. Based on the maximum coefficient, β1 = 0.503, we conclude that the factor x1 has the greatest influence on the result Y. The statistical significance of the equation was verified using the coefficient of determination and the Fisher criterion. In the examined model, 49.45 % of the total variability of hygroscopicity is explained by a change in the selected factors (Fig. 6).
Bulk density. An increase in the temperature at the inlet by 1 °C leads to a decrease in the bulk density by an average of 0.00687 g/cm3. An increase in the feed rate by 1 ml/min leads to an increase in the bulk density by an average of 0.00536 g/cm3. Based on the maximum coefficient, β2 = 0.754, we conclude that the greatest influence on the result related to bulk density is exerted by the feed rate factor of the raw materials. The statistical significance of the equation was verified using the coefficient of determination and the Fisher criterion. It was found that in the examined event 80.17 % of the total variability of the bulk density is explained by a change in the factors x1 and x2. The formula is statistically significant and reliable (Fig. 7, a).
Density after shaking. With an increase in the inlet temperature by 1 °C, the density after shaking decreases by an average of 0.00723 g/cm³. An increase in the feed rate of the raw materials by 1 ml/min leads to an increase in the density after shaking by an average of 0.00599 g/cm³. Based on the maximum coefficient, β₂ = 0.76, we conclude that the feed rate factor x₂ has the greatest influence on the result. The statistical significance of the equation was verified using the coefficient of determination and the Fisher criterion. In the model examined, 78.72 % of the total variability of the density after shaking is explained by the change in the selected factors (Fig. 7, b).
Water activity. An increase in the feed rate by 1 ml/min leads to an increase in the water activity by an average of 0.0057 units. An increase in the inlet temperature by 1 °C leads to a decrease in the water activity by an average of 0.00702 units. Based on the maximum coefficient, β₁ = 0.363, we conclude that the greatest influence on the water activity is exerted by the feed rate of the raw materials. The statistical significance of the equation was verified using the coefficient of determination and the Fisher criterion. In the model examined, 92.95 % of the total variability of the water activity is explained by the change in the selected factors. The parameters of the model were also found to be statistically significant (Fig. 8).

Stickiness. With an increase in the feed rate of the raw materials by 1 ml/min, the stickiness of the milk powder increases by an average of 3.338 units. An increase in the inlet temperature by 1 °C leads to a decrease in the stickiness by an average of 1.321 units. The statistical significance of the equation was verified using the coefficient of determination and the Fisher criterion. In the model examined, 98.26 % of the total variability of the stickiness is explained by the change in the selected factors (Fig. 9).
Particle sizes: d(0.1), d(0.5), and d(0.9). With an increase in the inlet temperature by 1 °C, the particle diameter decreases by an average of 1.568 units. An increase in the feed rate of the raw materials by 1 ml/min leads to an increase in the diameter of the milk powder particles by an average of 3.392 units. Based on the maximum coefficient, β₂ = 0.574, we conclude that the greatest influence on the particle diameter is exerted by the feed rate of the raw materials (Fig. 10, a-c).
Mathematical modeling of the spray drying process of camel milk has shown that the best physical properties of the powder are achieved at the inlet air temperature of 150 °C and the raw material feed rate of 30 ml/min.
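To make the selection of the optimum regime concrete, the following sketch evaluates fitted response surfaces on a grid of temperature and feed-rate values and picks the combination that maximises a simple quality score. The quadratic coefficients used here are placeholders chosen only to reproduce the qualitative shape of the surfaces, not the fitted values from this study.

# Minimal sketch (placeholder coefficients, not the fitted model): grid search
# over the drying regime using assumed response-surface polynomials.
import numpy as np

def solubility(T, F):        # placeholder response surface, %
    return 81.0 - 0.01 * (T - 150) ** 2 - 0.02 * (F - 30) ** 2

def moisture(T, F):          # placeholder response surface, %
    return 4.5 - 0.0504 * (T - 140) + 0.069 * (F - 30)

T_grid, F_grid = np.meshgrid(np.arange(140, 161, 1), np.arange(30, 41, 1))
# penalise any regime whose predicted moisture exceeds the 5 % limit
score = solubility(T_grid, F_grid) - 2.0 * np.clip(moisture(T_grid, F_grid) - 5.0, 0, None)

best = np.unravel_index(np.argmax(score), score.shape)
print("optimal regime:", T_grid[best], "°C,", F_grid[best], "ml/min")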
Discussion of results of determining the physical parameters of whole camel milk powder
The physical properties of milk powder are the main indicators that determine the consumer characteristics of the product. These include the water solubility index, the water absorption index, hygroscopicity, bulk density, density after shaking, water activity, stickiness, and particle sizes. While a potential consumer judges the quality of milk powder by its solubility, this property in turn depends directly on the absorption index and on the size and shape of the particles. In addition, the shelf life of the product is affected by its hygroscopicity, bulk density, water activity, and stickiness. The density after shaking determines the transportability of the product.
(Fig. 7. Response surface of the dependence of density on the inlet temperature and feed rate: a - bulk density; b - density after shaking.)
A special feature of the proposed drying method is that it yields a final product with improved physical properties compared to other methods. The water solubility index of dry whole camel milk produced by spray drying was 81.25±0.11 %, which is a good indicator of the solubility of powdered products (Table 1). It was found that solubility depends on the drying method used and on the shapes and sizes of the powder particles obtained. The spray-dried particles, which are small and grouped into agglomerates, therefore showed good solubility. High values of the water solubility index determine the consumer qualities of milk: for instant skimmed cow's milk powder it is at least 95 %, and for whole cow's milk powder at least 75 % [19]. The absorption capacity of milk powder determines how much water the undissolved sediment can bind: the higher the absorption index, the lower the solubility of the dry powder. For the spray-dried sample, this figure was 123.41±0.34 %, which is an average value for the water absorption index (Fig. 4).
According to standards set in [20], the moisture content of dry milk should not exceed 5 %. The study results showed that this indicator was within the permissible range, from 2.25±0.11 % to 4.05±0.08 % (Table 1). The low moisture content of dry milk prevents the development of microorganisms and increases its shelf life.
Hygroscopicity determines the storage capacity of the finished product; high hygroscopicity leads to a decrease in the shelf life of the product. Our study has shown that by optimizing the parameters of spray drying, it is possible to achieve a good hygroscopicity index (16.29±0.31 %) (Fig. 6). It was found that hygroscopicity does not depend on the type of raw milk and is directly proportional to the water absorption index. It is also known that products with an indicator of more than 25 % refer to products that have high hygroscopicity [21]. This indicates that the samples do not exceed the norm for hygroscopicity.
According to data in Table 1, at an inlet air temperature of 150 °C and a raw material feed rate of 30 ml/min, the bulk density of the spray drying samples was 0.455±0.05 g/cm 3 , and the density after shaking was 0.751±0.07 g/cm 3 . The value of bulk density depends on the size of the milk powder particles and the drying technique. The high bulk density of the final product reduces its storage and transportation costs. In addition, depending on the packaging of the finished product, the density after shaking plays an important role in the packaging, storage, and transportation of powdered products. It was found that this indicator depends more on the shaking force used than on the value of the bulk density.
Water activity is one of the main indicators of milk powder, determining its shelf life and affecting its microbiological indicators. The lower the water activity, the better the storability of the product. For the whole camel milk powder samples, this indicator ranged from 0.193±0.04 to 0.401±0.05 (Fig. 8). Results reported by other authors have shown that the water activity of powdered milk samples obtained by spray drying ranged from 0.17 to 0.286, which is consistent with our data [22].
Stickiness is also an important characteristic of milk powder, determining its dispersibility and solubility. It was found that a decrease in the inlet air temperature of the spray tower (140 °C) leads to an increase in the stickiness value (80.42±0.57 g), while an increase in temperature (160 °C) leads to a decrease in the stickiness value (22.61±0.55 g) (Fig. 9). Other authors have described that partial replacement of lactose with maltodextrin leads to a decrease in the stickiness of milk powder, whereas replacement with sucrose, on the contrary, increases it. Hydrolysis of whey proteins, which can occur during drying or storage, also led to an increase in the stickiness of milk powder [23].
Sample particle sizes were determined at diameters d(0.1), d(0.5), and d(0.9) in cumulative size distribution. At an inlet air temperature of 150 °C and a feed rate of 30 ml/min, the particle sizes of milk powder were 36.22±0.33 μm, 108.89±0.56 μm, and 229.19±0.74 μm, respectively (Table 1). Such particle size values promote their agglomeration, which improves the dispersibility and solubility of milk powder. Many of the physical properties described above also depend on and stem from particle sizes.
Spray drying towers have large dimensions, which makes them typical for medium and large milk processing enterprises. Therefore, the use of this drying method and the developed technology may be limited by the production capacity of an enterprise.
The disadvantages of this study include a possible decrease in the amount of water-soluble vitamins during the drying process. This is due to the use of relatively high temperatures during the spray drying of camel milk. To determine the possible change in these indicators, in the future it is necessary to study the vitamin composition of camel milk obtained by spraying and freeze-drying methods, where very low temperatures are used.
Further advancement of the current study might involve the application of the drying method for fermented dairy products based on camel milk. Probable difficulties that may arise in this case are associated with the increased acidity of fermented milk products. When fermented foods are exposed to high temperatures, there is a possibility of partial changes in proteins, which could lead to a deterioration in their physical and organoleptic parameters. To address these issues, it is necessary to continue research into optimizing the technological parameters of drying for various dairy products.
Conclusions
1. Whole milk powder was manufactured from fresh camel milk using spray drying. We have determined the optimal technological parameters of drying to obtain the resulting product with good physical properties. To this end, the physical properties under different modes of milk drying were comparatively studied.
2. Physical properties such as water solubility index, water absorption index, moisture content, hygroscopicity, water activity, bulk density, density after shaking, stickiness, and particle sizes have been determined. The air temperature at the inlet from 140 °C to 160 °C and the feed rate of raw materials from 30 ml/min to 40 ml/min on the spray drying unit were applied. We have established dependences of the physical parameters on the specified temperature regimes and feed rate.
3. Our calculations have shown that, to achieve improved physical performance of the milk powder, it is necessary to use an inlet air temperature of 150 °C and a raw material feed rate of 30 ml/min. With a decrease in the inlet air temperature from 160 °C to 150 °C, an increase in solubility of 23.9 % was observed. At the same time, an increase in the inlet air temperature from 140 °C to 150 °C was accompanied by a decrease in the moisture content by 20 %. In addition, under the selected technological parameters, the lowest hygroscopicity of whole camel milk powder was achieved (16.29±0.31 %). The data obtained could help in the development and optimization of the production technology of whole camel milk powder with improved physical properties and a long shelf life.
The authors thank the Department of Process and Food Engineering, Faculty of Engineering, Universiti Putra Malaysia, for their assistance in conducting the study of the physical parameters of dry products.
|
2021-09-16T14:42:43.380Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "6be39a2ef059428f23e96a49b4376af32cddb489",
"oa_license": "CCBY",
"oa_url": "http://journals.uran.ua/eejet/article/download/238686/237807",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6be39a2ef059428f23e96a49b4376af32cddb489",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
}
|
233868665
|
pes2o/s2orc
|
v3-fos-license
|
Prognostic value of extranodal extension in axillary lymph node-positive breast cancer
Several studies have demonstrated that extranodal extension (ENE) is associated with prognosis in breast cancer. Whether this association should be described in pathological reports warrants further investigation. In this research, we evaluated the predictive value of ENE in axillary lymph nodes (ALNs) in invasive breast cancer and explored the feasibility of employing ENE to predict clinicopathological features, nodal burden, disease recurrence-free survival (DRFS) and overall survival (OS) in clinical practice. In addition, the cutoff values of perpendicular diameter ENE (PD-ENE) and circumferential diameter ENE (CD-ENE) were investigated. A total of 402 cases of primary invasive breast cancer were extracted from Fudan University Shanghai Cancer Center; these patients underwent axillary lymph node dissection (ALND) between 2010 and 2015. ENE in the ALN was defined as the tumor cells breaking through the lymph node capsule into peripheral adipose tissue and causing connective tissue reactions. Relationships between ENE and clinicopathological features, nodal burden, disease recurrence-free survival (DRFS) and overall survival (OS) were analyzed. PD-ENE was defined by measuring from the point where tumor tissue broke the node capsule to the highest point of the tumor cells in the perinodal adipose tissue. The average PD-ENE was 1.8 mm; therefore, we divided ENE-positive patients into two groups: PD-ENE no greater than 2 mm and PD-ENE greater than 2 mm. CD-ENE was defined by measuring along the nodal capsule the distance between peripheral edges of the ENE area. According to the average circumferential diameter (CD-ENE), we classified ENE-positive patients into two groups: CD-ENE no greater than 3 mm and CD-ENE greater than 3 mm. Correlations between ENE cutoffs and prognosis were analyzed. In this cohort of patients, 158 (39.3%) cases were positive for ENE in ALN. 98 (24.4%) cases had PD-ENE no larger than 2 mm, and 60 (14.9%) cases had PD-ENE larger than 2 mm. Also, 112 (27.9%) cases had CD-ENE no larger than 3 mm, and 46 (11.4%) cases had CD-ENE larger than 3 mm. Statistical analysis indicated that histological grade, N stage, and HER2 overexpression subtype were associated with ENE. The presence of ENE had significant statistical correlations with nodal burden, including N stage, median metastatic tumor diameter and peri-lymph node vascular invasion (p < 0.001, p < 0.001, p = 0.001, respectively). Cox regression analysis demonstrated that patients with ENE exhibited significantly reduced DRFS in both univariable analysis (HR 2.126, 95% CI 1.453–3.112, p < 0.001) and multivariable analysis (HR 1.745, 95% CI 1.152–2.642, p = 0.009) compared with patients without ENE. For overall survival (OS), patients with ENE were associated with OS in univariable analysis (HR 2.505, 95% CI 1.337–4.693, p = 0.004) but not in multivariable analysis (HR 1.639, 95% CI 0.824–3.260, p = 0.159). Kaplan–Meier curves and log-rank tests showed that patients with ENE in ALN had lower DRFS and OS (for DRFS: p < 0.0001; and for OS: p = 0.002, respectively). However, neither the PD-ENE group (divided by 2 mm) nor the CD-ENE group (divided by 3 mm) exhibited significant differences regarding nodal burden and prognosis. Our study indicated that ENE in the ALN was a predictor of prognosis in breast cancer. ENE was an independent prognostic factor for DRFS and was associated with OS. ENE in the ALN was associated with a higher nodal burden.
The size of ENE, which was classified by a 3-mm (CD-ENE) or 2-mm (PD-ENE) cutoff value, had no significant prognostic value in this study. Based on our findings, the presence of ENE should be included in routine pathological reports of breast cancers. However, the cutoff values of ENE warrant further investigation.
Invasive breast cancer is the most common malignancy in women and encompasses a range of treatments and prognoses. In 1977, the American Joint Committee on Cancer (AJCC) published the TNM staging system. The TNM stage comprises tumor size (T), nodal status (N), and metastases (M), and has been updated regularly. Axillary lymph node metastasis is closely related to the prognosis of breast cancer patients 1,2. Extranodal extension (ENE) is defined as the tumor cells breaking through the lymph node capsule into peripheral adipose tissue and causing a connective tissue reaction (Fig. 1A,B). In 1976, Fisher and his colleagues 3 reported extranodal extension for the first time, and they believed that ENE in axillary lymph nodes may represent an important prognostic discriminant. In the following decades, many findings have shown that ENE is associated with the number of positive lymph nodes 1,4,5 and with the prognosis of breast cancer patients 3,6. ENE has been recognized as a prognostic predictor in several types of malignancies 7-12 and has been included in the AJCC TNM staging system of head and neck cancers 13. ENE is recommended to be described in routine pathological reports of breast cancers according to the College of American Pathologists (CAP) 14,15. However, ENE was not included in the eighth edition of the AJCC cancer staging system for breast cancers 16, which may be due to the absence of a standardized measurement method and cutoff values for ENE to date.
The study attempted to establish the pathological assessment of ENE in positive axillary lymph nodes and to evaluate the clinical significance of ENE-positive breast cancers, including the association of ENE with clinicopathological parameters, lymph node burden, disease recurrence-free survival (DRFS) and overall survival (OS). In addition, the cutoff value of ENE was explored in this study.
Materials and methods
Patients. In this study, 402 patients with primary invasive breast cancer at Fudan University Shanghai Cancer Center from 2010 to 2015 were investigated. All patients underwent axillary lymph node dissection (ALND) with positive axillary lymph nodes and had complete clinical information. Patients with incomplete clinical information, recurrence/metastasis at diagnosis, or previous axillary surgery, or who had received neoadjuvant chemotherapy, were excluded. Informed consent was obtained from all patients. All tumor tissues and axillary lymph nodes were fixed in 10% neutral formalin, embedded in paraffin wax and examined using hematoxylin and eosin (H&E) staining. Each lymph node was sliced at its largest profile. According to National Comprehensive Cancer Network (NCCN) guideline recommendations and patients' intention, all patients were treated with surgery (breast-conserving resection or mastectomy with ALN dissection) with or without radiotherapy, systematic chemotherapy, and endocrine therapy. Among this cohort of patients, 391 (97.1%) received chemotherapy, 333 (82.7%) received radiotherapy, 301 (74.6%) received endocrine therapy and 69 (17.2%) received targeted therapy.

Patient characteristics. Two senior breast pathologists reviewed the clinicopathological features. The presence and size of ENE, median metastatic tumor diameter, and peri-lymph node vascular invasion were reviewed by two breast pathologists in a blinded way. The clinicopathological features included patient age, histological grade, T stage, N stage, estrogen receptor (ER) status, progesterone receptor (PR) status, human epidermal growth factor receptor 2 (HER2) status, and peri-lymph node vascular invasion. Nodal burden included N stage, median metastatic tumor diameter, number of axillary lymph nodes, peri-lymph node vascular invasion and ENE foci. Molecular subtype, disease recurrence-free survival (DRFS) and overall survival (OS) were also analyzed. ER and PR were judged as positive if ≥ 1% of tumor cells showed nuclear staining in immunohistochemistry (IHC) 17. HER2 was judged as positive by a HER2 protein IHC 3+ score or by HER2 gene amplification on fluorescence in situ hybridization (FISH) detection 18. Metastatic tumor diameter was defined as the maximum diameter of tumor metastasis in positive lymph nodes. Peri-lymph node vascular invasion was defined as the presence of tumor cells in the vessels surrounding the lymph nodes. The molecular subtypes included the luminal-A-like subtype, luminal-B-like subtype, HER2-overexpression subtype and triple-negative breast cancer (TNBC) 19-21. The presence and size of ENE in ALN were evaluated. ENE in the ALN was defined as tumor tissue breaking through the nodal capsule into peripheral adipose tissue with or without an associated desmoplastic stromal response (i.e., inflamed granulation tissue and/or fibrosis). ENE size was measured as the highest (perpendicular diameter ENE, PD-ENE) or widest (circumferential diameter ENE, CD-ENE) diameter of the invasive front of ENE. PD-ENE was defined by measuring from the point where the tumor tissue breaks the node capsule to the highest point of the tumor cells in the perinodal adipose tissue (Fig. 1D). CD-ENE was defined by measuring along the nodal capsule the distance between peripheral edges of the ENE area (Fig. 1C). The original data of PD-ENE and CD-ENE both followed a normal distribution, so we examined the average and median values of the data.
The average and median of PD-ENE were 1.8 mm and 2 mm, respectively, and the average and median of CD-ENE were 2.9 mm and 3 mm, respectively. We then performed a sensitivity analysis of the cut-off values: receiver operating characteristic (ROC) curve analysis was used to identify the cut-off values of PD-ENE and CD-ENE for predicting DRFS and OS. A cutoff value of 2 mm for PD-ENE had relatively higher sensitivity and specificity for predicting DRFS and OS compared with 1 mm and 3 mm (Fig. 2, Table 1). The area under the curve (AUC) of the PD-ENE level was 0.539 (95% CI 0.458-0.619, p = 0.461), and the relatively optimal cutoff value of PD-ENE to predict DRFS was 2 mm (Fig. 2A), with a sensitivity and specificity of 41.18% and 66.94% (Table 1). The AUC of the PD-ENE level was 0.520 (95% CI 0.439-0.600, p = 0.733), and the relatively optimal cutoff value of PD-ENE to predict OS was 2 mm (Fig. 2B), with a sensitivity and specificity of 92.31% and 12.88% (Table 1). A cutoff value of 3 mm for CD-ENE had relatively higher sensitivity and specificity for predicting DRFS and OS compared with 1 mm, 2 mm and 4 mm (Fig. 2, Table 1). The AUC of the CD-ENE level was 0.555 (95% CI 0.474-0.834, p = 0.461), and the relatively optimal cutoff value of CD-ENE to predict DRFS was 3 mm (Fig. 2C), with a sensitivity and specificity of 58.82% and 57.26% (Table 1). The relatively optimal cutoff value of CD-ENE to predict OS was 3 mm (Fig. 2D), with a sensitivity and specificity of 88.46% and 31.06% (Table 1). We therefore divided subgroups by the 2-mm (PD-ENE) and 3-mm (CD-ENE) cutoffs.
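As an illustration of the cutoff-selection step described above, the sketch below computes a ROC curve for a continuous marker (here a hypothetical set of PD-ENE measurements) against a binary outcome and reports the AUC together with the cutoff that maximises Youden's J. The arrays are invented placeholders, not the study data, and the study itself used SPSS rather than this code.

# Minimal sketch (hypothetical data): ROC-based cutoff selection for a
# continuous marker such as PD-ENE against a binary survival outcome.
import numpy as np
from sklearn.metrics import roc_curve, auc

pd_ene_mm = np.array([0.5, 1.0, 1.5, 1.8, 2.0, 2.2, 2.5, 3.0, 3.5, 4.0])  # marker values
event = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1])   # 1 = recurrence/death observed

fpr, tpr, thresholds = roc_curve(event, pd_ene_mm)
print("AUC:", auc(fpr, tpr))

youden_j = tpr - fpr                      # sensitivity - (1 - specificity)
best = np.argmax(youden_j)
print("optimal cutoff (mm):", thresholds[best],
      "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])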
Study end points. This study primarily investigated the relationships between ENE in ALN and clinicopathological features, nodal burden, molecular subtype, DRFS and OS. After undergoing surgery for primary breast cancer, patients were assessed for disease recurrence or/and metastasis by following standard clinical practice. DRFS was defined as the time from surgery to events including local recurrence, distant recurrence, or death resulting from any cause (whichever occurred first). OS was defined as the time from surgery to death from any cause.
Statistical analysis. All statistical analyses were carried out using IBM SPSS Statistics 21.0. All figures were produced using GraphPad Prism 7 (GraphPad Software). The χ² test or Fisher's exact test was used for categorical variables. The distributional assumption was checked and the data were approximately normally distributed, so continuous variables were compared between ENE groups using the independent-samples t-test. Logistic regression analysis was used to evaluate relationships between clinicopathological parameters and ENE in univariable and multivariable models. Variables that had a statistical relationship with ENE (p ≤ 0.05) in univariable analysis were chosen for multivariable analysis. Before performing Cox regression analysis, the proportional hazards (PH) assumption was checked and found to be satisfied. Cox regression analysis was used to analyze the correlations between ENE and DRFS or OS in univariable and multivariable models. Variables that had a statistical relationship with prognosis (p ≤ 0.05) in univariable analysis were chosen for multivariable analysis. The Kaplan-Meier method and log-rank test were used to analyze the relationship between ENE in ALN and the duration of DRFS or OS. Two-sided exact tests were employed, and p-values < 0.05 were considered significant.
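For readers who want to reproduce this kind of analysis outside SPSS, the sketch below shows an equivalent Kaplan-Meier comparison, log-rank test and Cox model in Python with the lifelines package; the small data frame is entirely hypothetical and only illustrates the workflow described in the text.

# Minimal sketch (hypothetical data): Kaplan-Meier, log-rank test and Cox
# regression for DRFS stratified by ENE status, mirroring the SPSS workflow.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "time_months": [12, 34, 55, 20, 60, 15, 48, 70, 25, 40],
    "event":       [1, 0, 0, 1, 0, 1, 1, 0, 0, 1],      # 1 = recurrence or death
    "ene":         [1, 0, 0, 1, 0, 1, 0, 0, 1, 1],      # 1 = ENE present
    "n_stage":     [3, 1, 1, 2, 1, 3, 2, 1, 2, 3],
})

ene_pos, ene_neg = df[df.ene == 1], df[df.ene == 0]
print(logrank_test(ene_pos.time_months, ene_neg.time_months,
                   ene_pos.event, ene_neg.event).p_value)

km = KaplanMeierFitter().fit(ene_pos.time_months, ene_pos.event, label="ENE+")
print(km.median_survival_time_)

cph = CoxPHFitter().fit(df, duration_col="time_months", event_col="event")
cph.print_summary()   # hazard ratios for ENE and N stage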
ENE in ALN and clinicopathological features.
All breast cancer patients who met the inclusion criteria are listed in Table 2. This cohort of 402 female patients all underwent axillary lymph node dissection (ALND). The median age of all patients was 52 years (range 30 to 83 years). A total of 158/402 (39.3%) patients had positive ENE in the ALN. Statistical analysis showed that patients with ENE in ALN were associated with histological grade (p = 0.022), N stage (p < 0.001), and peri-lymph node vascular invasion (p = 0.001) compared with patients without ENE in ALN. However, there were no significant differences between ENE and patient median age, T stage, ER status, PR status or HER2 status (Table 2). In this cohort of patients, 72 (17.9%) cases were luminal-A-like subtype, 182 (45.3%) were luminal-B-like subtype, 52 (12.9%) were HER2-overexpression subtype and 45 (11.2%) were TNBC. Logistic regression indicated that the HER2-overexpression subtype (p = 0.048) was associated with the presence of ENE in univariable analysis. Multivariable analysis demonstrated that the HER2-overexpression subtype (OR 0.418, 95% CI 0.179-0.977, p = 0.044) was an independent predictor of ENE. However, the associations between the remaining molecular subtypes and ENE were not significant (Table 2).

The average diameter of CD-ENE was 2.9 mm. There were no significant differences among the PD-ENE groups in nodal burden, nor were differences observed among the CD-ENE groups. In addition, the two PD-ENE groups showed no significant difference in the number of ENE foci, nor did the CD-ENE groups (Table 3).

ENE in ALN and prognosis.

The results of the Cox regression analyses of DRFS and OS are presented in Table 4. The DRFS and OS of patients with SLN involvement were classified according to ENE (Fig. 3A,B), N1 stage (Fig. 4A,B), N2 stage (Fig. 4C,D) and N3 stage (Fig. 4E,F). The DRFS and OS of patients with ENE in ALN were categorized by PD-ENE (Fig. 3C,D) and CD-ENE (Fig. 3E,F). Kaplan-Meier curves and log-rank tests showed that patients in the ENE-positive group had poorer outcomes than those in the ENE-negative group (for DRFS: p < 0.001; and for OS: p = 0.002, respectively). Patients in the N3 stage who had ENE in the ALN had significantly lower DRFS but not OS.
In ENE-positive patients, Cox multivariable regression analysis indicated that the number of ENE foci and median metastatic tumor diameter were independent factors for DRFS, and the number of ENE foci was also an independent prognostic factor of OS. However, the size of ENE (PD-ENE and CD-ENE) subdivided by 2 mm (or 3 mm) cutoff values was not an independent factor for DRFS and OS in these patients (Table 5). Kaplan-Meier curves and log-rank tests showed that the size of ENE (PD-ENE and CD-ENE) subdivided by 2 mm (or 3 mm) cutoff values was not significant in DRFS and OS (Fig. 3C,D).
Discussion
Breast cancer is a heterogeneous disease and has poor prognosis. To help the clinician analyze the patient's condition, choose the treatment plan and judge the prognosis, the AJCC proposed the TNM staging system, which considers tumor size, nodal status and metastasis. As the understanding of breast tumors has deepened, an increasing number of criteria have been added to this evaluation system, including immunohistochemistry and biomarkers. ENE was defined as tumor cells breaking through the lymph node capsule into peripheral adipose tissue and causing connective tissue reactions. ENE was included in the N staging criteria for oral squamous cell carcinoma in the eighth edition of the AJCC 13, but was not included in the staging criteria for breast cancer 16. The CAP has recommended that ENE be included in routine pathology reports 22. Therefore, the prognostic value of ENE in breast cancer warrants further evaluation 6. The presence of ENE was associated with clinicopathological parameters, including histological grade and molecular subtype. However, the relationships between ENE and histological grade were not mentioned in recently published studies. The statistical analysis of the correlations between the molecular subtype of breast cancer and ENE showed that the HER2 overexpression subtype was an independent predictor of the presence of ENE, which has not been widely observed in recent studies. Ahmed et al. 23 showed that HER-2 expression in pT1 and pT2 tumors elevated the risk of ALN metastasis by 7.7-fold and 7.6-fold, respectively, and that grade 1 and grade 2 tumors expressing HER2 were 16.0 and 7.8 times more likely to have ALN metastasis, respectively; they concluded that HER-2 expression is a strong independent predictor of nodal metastasis in breast cancer. In our research, ENE-positive patients had significant differences in nodal burden, including N stage, median metastatic tumor diameter, and peri-lymph node vascular invasion, compared with ENE-negative patients. This result is in keeping with the findings of several studies that indicated that ENE has significant relationships with nodal burden 1,27. Nottegar et al. demonstrated that ENE was associated with a higher risk of both mortality and recurrence of disease 28. Some studies focused on ENE in early breast cancer patients 26,29,30. Kanyilmaz et al. 30 demonstrated that the extent of extracapsular extension was an important prognostic factor for survival in pT1-2N1 breast cancer patients. In our study, statistical analysis demonstrated that ENE in N3-stage patients was significantly correlated with prognosis, while there was no significant relationship between ENE and prognosis in N1- and N2-stage patients.
The cutoff value of ENE has been investigated in the literature. Aziz et al. divided ENE into circumferential (CD-ENE) and perpendicular (PD-ENE) extranodal growth, and the results showed that PD-ENE (with 3 mm serving as the cutoff value) was an independent prognostic factor for disease-free survival. Kanyilmaz et al. showed that the presence of ECE was an independent predictor of survival outcomes in pT1-2N1 breast cancer patients, and that grade 3-4 ECE appeared to be associated with lower OS and DRFS 30. However, Kanyilmaz only explored the prognostic value of ECE in N1-stage patients. In our study, ENE was classified into CD-ENE and PD-ENE by 3-mm and 2-mm cutoffs, respectively. However, Cox proportional hazards regression analyses indicated that neither CD-ENE (with 3 mm serving as the cutoff value) nor PD-ENE (with 2 mm serving as the cutoff value) had a significant relationship with DRFS or OS, which demonstrated that the size of ENE in the ALN, whether subdivided by a 2-mm or a 3-mm cutoff value, had no predictive value in this cohort of invasive breast cancer.
Our study had several limitations. First, it was a single-center retrospective analysis with a relatively small sample size. Multicenter, large-scale prospective and retrospective studies are needed to investigate the prognostic value of ENE in invasive breast cancer. Meanwhile, the cutoff values of ENE warrant further investigation.
Conclusion
Our study indicated that ENE in ALN was a predictor for prognosis in breast cancer. ENE was an independent prognostic factor for DRFS and was associated with OS. ENE in the ALN was associated with a higher nodal burden. The size of ENE, which was classified by a 3-mm (CD-ENE) or 2-mm (PD-ENE) cutoff value, had no significant prognostic value in this study. Based on our findings, the presence of ENE should be included in routine pathological reports of breast cancers. However, the cutoff values of ENE warrant further investigation.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
|
2020-11-05T09:07:07.869Z
|
2020-11-02T00:00:00.000
|
{
"year": 2021,
"sha1": "911c0147f7052b82206a7b6d9267be62b86f31d7",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-88716-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d6634a32323de258ca740f0c24f1fcc6695b956d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
118869285
|
pes2o/s2orc
|
v3-fos-license
|
Summary of the"Diffraction&Vector Mesons"working group at DIS05
We survey the contributions presented in the working group "Diffraction & Vector Mesons" at the XIII International Workshop on Deep Inelastic Scattering (http://www.hep.wisc.edu/dis05).
INTRODUCTION
In diffractive interactions in hadron-hadron or photon-hadron collisions at least one of the beam particles emerges intact from the collision, having lost only a small fraction of its initial energy, and carrying a small transverse momentum. Therefore no color is exchanged in the t-channel. The signature for such processes is the presence of a gap in rapidity between the two hadronic final states. At high energy this is described by the exchange of an object with the quantum numbers of the vacuum, referred to as the Pomeron in the framework of Regge phenomenology [1]. Note that at low energies similar reactions can also proceed when quantum numbers are exchanged through subleading Regge trajectories (Reggeons); however, these contributions are exponentially suppressed as a function of the gap size and are negligible at small values of the longitudinal momentum loss. The understanding and description of diffractive processes is one of the aims of QCD.
Diffractive events are being extensively studied at HERA, TEVATRON, RHIC, JLAB and CERN and there is a growing community planning to continue these studies at the LHC. Updates on the available experimental data and on their theoretical interpretation were given at this workshop; many discussions also took place on the future plans. In the present summary we focus on the path from HERA to the LHC through the TEVATRON.
Selection of diffractive processes
Let us first look at the diffractive reaction ep → eXp at HERA, depicted in Fig. 1a: a photon of virtuality Q² diffractively dissociates in its interaction with the proton at a center-of-mass energy W and squared four-momentum transfer t, producing the hadronic system X with mass M_X in the final state. The fraction of the proton momentum carried by the exchanged object is denoted by x_IP, while the fraction of the momentum of the exchanged object carried by the struck quark is denoted by β (note that sometimes z is used instead of β). The virtual photon emitted from the lepton beam provides a pointlike probe to study the structure of the diffractive exchange, similarly to ordinary DIS probing proton structure. The fact that a large fraction (≈ 10%) of deep inelastic scattering (DIS) events at HERA is diffractive has thus opened the possibility of investigating the partonic nature of the Pomeron and has established a theoretical link between Regge theory and QCD. At the TEVATRON, inclusive diffraction is mainly studied via the reaction p̄p → p̄X, sketched in Fig. 1b; in TEVATRON jargon x_IP is usually denoted ξ.
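For orientation, the sketch below evaluates the commonly used approximations for these diffractive DIS variables (neglecting t and the proton mass); the formulae are standard definitions and the numerical example is invented, not taken from the talks summarised here.

# Minimal sketch: standard approximations for the diffractive DIS variables.
def x_pomeron(Q2, W2, MX2):
    """Fraction of the proton momentum carried by the exchanged object."""
    return (Q2 + MX2) / (Q2 + W2)

def beta(Q2, MX2):
    """Fraction of the exchanged object's momentum carried by the struck quark."""
    return Q2 / (Q2 + MX2)

# Example: Q^2 = 10 GeV^2, W = 100 GeV, M_X = 20 GeV
Q2, W2, MX2 = 10.0, 100.0**2, 20.0**2
print(x_pomeron(Q2, W2, MX2))   # ~0.041, i.e. above the x_IP < 0.01 diffractive cut
print(beta(Q2, MX2))            # ~0.024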
At HERA three methods are used to select diffractive events [2]. The first is based on the measurement of the scattered proton with a spectrometer installed very close to the beam in a region with acceptance for protons which have lost only a small fraction of their initial longitudinal energy. A second method requires the presence of a large rapidity gap (LRG) in the forward region. A third method is based on the different shape of the M X distribution between diffractive and non diffractive events. At the TEVATRON diffractive interactions are selected by tagging events by either a rapidity gap or a leading antiproton [3].
The proton tagging method has the advantage of excluding the proton dissociation processes ep → eXN, where the proton also diffractively dissociates into a state N of mass M_N that escapes undetected into the beam pipe. In order to ensure that the scattered proton resulted from a diffractive process one requires x_IP < 0.01. This cut removes contributions coming from Reggeon exchanges [4].
The large rapidity gap method selects events which include some proton dissociation processes and some Reggeon contributions. The latter can be removed by the same x_IP cut as above. If the mass M_N of the dissociative system is large enough to be measured in the forward detector, the proton dissociation background can be removed, whereas the contribution of low-mass proton dissociation can be estimated with a Monte Carlo simulation (10% background with M_N < 1.6 GeV is quoted for the H1 analysis [5]).
In the M_X method the statistical subtraction of the non-diffractive background eliminates the Reggeon contribution as well, but the selected sample is left with an important contamination from proton dissociative events with masses M_N < 2.3 GeV [6]. By comparing the measured cross sections with those coming from the leading proton analysis one can estimate the amount of this background (around 30% [7]) and determine a correction factor.
HERA diffractive structure function and PDFs
H1 and ZEUS have presented recent precise measurements of the diffractive structure function obtained with all three HERA methods and covering a wide kinematic range (proton tagging method: [2,8], LRG method: [5], M_X method: [6]). In Fig. 2 the diffractive structure function is presented as a function of x_IP for fixed values of Q² and β. The data points come from two samples analysed by H1 with the LRG method and by ZEUS with the M_X method, respectively. The ZEUS M_X data have been scaled to M_Y < 1.6 GeV, the region of dissociative masses included in the H1 data. There is reasonable agreement between the two data sets, but on closer inspection it turns out that the Q² dependences are different, namely the positive scaling violations in the ZEUS data are smaller than in the H1 data. This discrepancy has been investigated very recently by a combined set of next-to-leading-order (NLO) QCD fits of the diffractive structure function, attempted by two different groups (P. Newman et al. [9] and A. Levy et al. [10]; see also the upcoming proceedings of the HERA-LHC workshop, http://www.desy.de/~heralhc). Such fits are based on the validity of a collinear factorization theorem in diffractive processes [11], which allows F_2^D to be written as a convolution of the usual partonic cross sections, as in DIS, with Diffractive Parton Distribution Functions (DPDFs). The DPDFs, parametrised at a starting scale, are evolved according to the DGLAP equations [12] and fitted to the data. In the ideal case we would evolve in Q² for fixed t and x_IP, or at least for fixed x_IP if t is integrated over, but this is not allowed by the rather limited statistics of the present data. An alternative approach is the assumption, known as the "Regge factorization" hypothesis, that F_2^D can be expressed as the product of a flux, depending only on x_IP and t, and the structure function of a particle-like object. Whether the data support this assumption or not is a controversial problem. It translates into determining whether or not the intercept α_IP(0) of the Pomeron trajectory α_IP(t) = α_IP(0) + α′t depends on Q². Fig. 3a shows α_IP(0) as a function of Q², as measured by H1: there is a suggestion of a dependence of α_IP(0) on Q², though firm conclusions are not possible with the present uncertainties. In Fig. 3b, where the ZEUS measurement is presented, the Pomeron intercept rises by Δα_diff = 0.0741 ± 0.0140 (stat.) +0.0047/−0.0100 (syst.) between Q² of 7.8 GeV² and 27 GeV², with a significance of 4.2 standard deviations. This scenario suggests a possible violation of Regge factorization and a clear need for more precise data. Nevertheless it has been shown [10] that when restricting the analysed range to x_IP < 0.01 Regge factorization is a sufficiently good approximation, and this is the compromise at the basis of the NLO DGLAP fits discussed in the following.
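For concreteness, the Regge-factorized ansatz referred to above is often written as below; this is a standard parametrisation (the flux normalisation conventions differ between the H1 and ZEUS analyses) rather than a formula quoted from the talks.

% Regge-factorized ansatz for the diffractive structure function
F_2^{D(4)}(x_{IP}, t, \beta, Q^2) \;=\; f_{IP/p}(x_{IP}, t)\, F_2^{IP}(\beta, Q^2),
\qquad
f_{IP/p}(x_{IP}, t) \;=\; A\, \frac{e^{B_{IP} t}}{x_{IP}^{\,2\alpha_{IP}(t)-1}},
\qquad
\alpha_{IP}(t) = \alpha_{IP}(0) + \alpha' t .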
Fig. 4 compares the DPDFs extracted by these fits from the ZEUS M_X data with those from the H1 LRG data (shaded line), the latter being essentially the well known H1 fit 2002 [5]. Note that most of the data points from the high-β region, where discrepancies arise between the data sets (Fig. 2), have not been included in the fit. As a reflection of the difference in the scaling violations between the two sets of measurements (Fig. 2), the quark density is similar at low Q² and evolves differently to higher Q²; the gluon density is a factor of 2 smaller in the ZEUS data than in the H1 sample. This disagreement is confirmed and quantified in Fig. 5, which shows the fraction of the Pomeron momentum carried by quarks (red/dark line) and by gluons (blue/light line), as a function of Q², as resulting from the fit by A. Levy et al.; this fit is similar to that of P. Newman et al., but completely independent, and was performed on the H1 LRG data (Fig. 5a) and on the ZEUS M_X data (Fig. 5b). The fraction of the Pomeron momentum carried by gluons is between 70% and 90% in the H1 data and between 55% and 65% in the ZEUS M_X data. The same study has been carried out also on the ZEUS proton-tagged data, and the resulting integral of the fractional momentum is in agreement with the H1 value.
The same data have also been analysed according to a new approach by A. Martin et al. [13], which does not assume Regge factorization and shows that the collinear factorization theorem, though valid asymptotically in diffractive DIS, has important modifications at the energies relevant at HERA, which can be quantified using perturbative QCD. The DPDFs are shown to satisfy an inhomogeneous evolution equation and the need of including both the gluonic and sea-quark components of the perturbative Pomeron is considered. The DPDFs resulting from a combined fit to the ZEUS proton tagged data and M X data and to the H1 LRG data are shown in Fig. 6 (solid line), together with H1 fit 2002 (dashed line). While the quark densities are not very different from those of H1, the gluon distribution is significantly lower than that from H1.
The discrepancies between the various DPDFs shown in Figs. 4 and 6 are large and presently not understood. They are due to a combination of effects: disagreement in the data, different fit methods and assumptions behind them. Therefore these differences between the DPDFs are at the moment the only estimate we have of their uncertainties. A precise and consistent determination of the DPDFs is certainly one of the main tasks that the HERA community has to face in the near future. Among other reasons, they are a crucial input for the prediction of any inclusive diffractive cross section at the LHC.
QCD factorization tests
According to the factorization theorem, calculations based on DPDFs extracted from inclusive measurements should allow one to predict cross sections for other diffractive processes. Calculations based on H1 fit 2002 agree well with the data on diffractive D production in DIS [14] and diffractive dijet production in DIS [15]. A further test of factorization comes from the study of events with a large rapidity gap in charged current interactions at high Q²: in Fig. 7 the differential cross section dσ_CC^diff/dQ², as measured by H1 [16], is presented as a function of Q² and is well described by a calculation based on H1 fit 2002. A similar result was obtained by ZEUS [17].
The factorization theorem does not hold in the case of diffractive hadron-hadron scattering [11]: indeed it has been known for years that the DPDFs extracted from HERA data overestimate the rate of diffractive dijets at the TEVATRON by one order of magnitude [18]. It was shown in [19] that this breakdown of factorization can be explained by screening (unitarization) effects. In the t-channel Reggeon framework, these effects are described by multi-Pomeron exchange diagrams. Because of the screening, the probability of rapidity gaps in high energy interactions to survive decreases, since they may be populated by rescattering processes. The screening corrections are accounted for by the introduction of a suppression factor, which is often called the survival probability of rapidity gaps. As shown in [19] and [20], the current CDF diffractive dijet data, with one or two rapidity gaps, are in good quantitative agreement with the multi-Pomeron-exchange model.
In photoproduction at HERA (Q² ≈ 0), the exchanged photon, which is real or quasi-real, can either interact directly with the proton or first dissolve into partonic constituents which then scatter off the target (resolved process). In the former case dijet photoproduction is described by a photon-gluon fusion process. In the latter case the photon behaves like a hadron. Factorization should then be valid for direct interactions as in the case of DIS at large Q², whereas for the resolved contribution it is expected to fail due to rescattering corrections. In the ideal theoretical limit, a suppression factor of 0.34 is evaluated for the resolved process within the multi-Pomeron exchange model [21]. However, in reality there is no clear model-independent separation between the direct and resolved processes. In particular, the direct contribution is smeared by the experimental resolution and uncertainties. Moreover, at NLO these contributions are closely related. Recently Klasen and Kramer [22] have performed an analysis of diffractive dijet photoproduction data at NLO where they suppressed the resolved process by a factor 0.34. Fig. 8 shows the differential cross section, as measured by H1 [15], for the diffractive photoproduction of two jets as a function of x_γ (the fraction of the photon momentum entering the hard scattering), where the NLO prediction has been tested in two different weighting schemes: in Fig. 8a only the resolved part has been scaled by the factor 0.34, while in Fig. 8b a global suppression factor 0.5 is applied to both the direct and resolved components. In Fig. 9 the ratio of the ZEUS data [23] to the NLO predictions of Klasen and Kramer [22] with no suppression factor (R = 1) is shown separately for the sample enriched (a) in the direct (x_γ ≥ 0.75) and (b) in the resolved (x_γ < 0.75) components. Both for resolved and direct photoproduction the ratio is flat, but the data are lower by a factor of 2 compared to the NLO calculations. The overall message from the data of Figs. 8 and 9 is that, while a suppression of only the resolved contribution at NLO is disfavored by the data, good agreement is achieved with the global suppression 0.5, which furthermore yields a good description of all measured cross sections.
The fact that the data, apparently against expectations, support suppression of direct photoproduction has been addressed by M. Klasen [25] and has been related to the critical role of an initial-state singularity in the way factorization breaks down, and to the need for a modification of the suppression mechanism: separation of direct and resolved photoproduction events is a leading-order concept. At NLO they are closely connected. The sum of both cross sections is the only physically relevant observable, which is approximately independent of the factorization scale M_γ [26]. By multiplying the resolved cross section with the suppression factor R = 0.34, the scale dependence of the NLO direct cross section is compensated against that of the LO resolved part [22]. But at NLO collinear singularities arise from the photon initial state, which are absorbed at the factorization scale into the photon PDFs; the latter become in turn M_γ dependent. An equivalent M_γ dependence, just with the opposite sign, is then left in the NLO corrections to the direct contribution. Hence, in order to obtain a physical cross section at NLO, that is the superposition of the NLO direct and LO resolved cross sections, and to restore the scale invariance, one must multiply the M_γ-dependent term of the NLO correction to the direct contribution by the same suppression factor as the resolved cross section.
The situation with factorization breaking in dijet photoproduction is not completely understood, and further experimental and theoretical efforts are needed. On the other hand, the uncertainties on the DPDFs discussed at length above are a further ingredient that makes a clear understanding difficult. As was emphasized in [21], a possible way to study the effects of factorization breaking due to rescattering in diffractive photoproduction is to measure the ratio of diffractive to inclusive dijet photoproduction as a function of x_γ. For such a quantity (at least) some of the theoretical and experimental uncertainties will cancel.
The understanding of factorization breaking in hadron-hadron collisions is of fundamental importance for diffractive physics at the LHC. The rapidity gap survival factor is an essential ingredient of the predictions [27] for exclusive diffractive Higgs production, which will be discussed in the last section.
(Fig. 9 caption: x_γ^obs > 0.75. Ratio of the ZEUS diffractive dijet data to the NLO QCD predictions [22] of the single differential cross section in y for the sample enriched in direct (a) and resolved (b) photoproduction.)
Diffraction at the TEVATRON
As discussed in the previous section, factorization is not expected to hold in hadron-hadron collisions. A strong breakdown of factorization at the TEVATRON has been known for some time from run-I (1992-1995) results [28]: the single-diffractive to non-diffractive ratios for dijet, W, b-quark and J/ψ production, as well as the ratio of double-diffractive to non-diffractive dijet production, are all ≈ 1%, a factor of ≈ 10 less than at HERA. However, the ratio of double- to single-diffractive dijets is found to be about a factor of 5 larger than the ratio of single- to non-diffractive dijets, suggesting that there is only a small extra suppression when going from one to two rapidity gaps in the event, as confirmed by the predictions of [20]. In this respect the TEVATRON data are a very powerful tool to shed light on the factorization breaking mechanism.
One of the major challenges of run-II is the measurement of central exclusive production rates (dijets, χ_c0, diphotons). By central exclusive, we refer to the process p̄p → p + φ + p̄, where '+' denotes the absence of hadronic activity ('gap') between the outgoing hadrons and the decay products of the central system φ. As we will discuss in the last section, the exclusive Higgs signal is particularly clean and the signal-to-background ratio is especially favorable in comparison with other proposed selection modes. However, the expected number of events is low. Therefore it is important to check the predictions for exclusive Higgs production by studying processes mediated by the same mechanism, but with rates high enough to be observed at the TEVATRON (as well as at the LHC) [29]. The CDF search for exclusive dijet production is based on the reconstruction of the dijet mass fraction R_jj in double Pomeron exchange events. R_jj is defined as the mass of the two leading jets in an event divided by the total mass measured in all calorimeters. At first sight, we might expect that the exclusive dijets form a narrow peak concentrated at R_jj close to 1. In reality, the peak is smeared out by hadronization and the jet-finding procedure, as well as by a 'radiative tail' phenomenon [30]. So it is not so surprising that within the CDF selection cuts no peak has been seen. CDF reports production cross sections for events with R_jj > 0.8, which are interpreted as upper limits for exclusive production. Fig. 10 [28] shows such cross sections as a function of E_T^min, the E_T of the lower-E_T jet. These data agree, within errors, with recent predictions for exclusive dijet production [29].
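As a small illustration of the dijet mass fraction used in this search, the sketch below computes R_jj from invented jet and calorimeter four-vectors and applies the 0.8 threshold quoted in the text as the exclusive-production selection; the event content is hypothetical and only meant to show the definition in action.

# Minimal sketch (hypothetical event): dijet mass fraction R_jj = M_jj / M_X,
# the mass of the two leading jets divided by the total mass in the calorimeters.
import math

def invariant_mass(particles):
    """particles: list of (E, px, py, pz) four-vectors in GeV."""
    E  = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

leading_jets = [(60.0, 45.0, 10.0, 35.0), (55.0, -40.0, -8.0, 30.0)]            # two leading jets
all_deposits = leading_jets + [(8.0, 2.0, -1.0, 5.0), (6.0, -1.0, 2.0, -3.0)]   # plus soft activity

R_jj = invariant_mass(leading_jets) / invariant_mass(all_deposits)
print(R_jj, "-> candidate exclusive dijet" if R_jj > 0.8 else "-> inclusive DPE dijet")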
Diffraction at RHIC
New interesting experimental results from RHIC were presented by Guryn, White and Klein. In particular, Guryn [31] described the results of the measurement of the single spin analyzing power A N in polarized pp elastic scattering at 200 GeV. The recent results on inelastic diffraction with Au-Au, d-Au and pp beams were reviewed by White [32]. Klein [33] showed the results of the STAR collaboration for coherent photonuclear ρ and 4 charged pion production.
Updates on theory
Several excellent mini review type theoretical talks were presented. Hard diffraction in DIS and the origin of hard Pomeron from rescattering were discussed by Brodsky [34].
He also reviewed such effects as Color Transparency, Color Opaqueness and Intrinsic Charm. Levin [35] gave a brief review of the current status of high density QCD with its ups and downs. The recent progress in the BFKL studies was covered by Andersen [36]. In particular, he discussed the high-energy limit of diffractive scattering processes in the BFKL resummation framework. He showed that the BFKL equation was solved at full next-to-leading logarithmic accuracy.
EXCLUSIVE MESON PRODUCTION AND DVCS
The dynamics of diffractive interactions can also be studied through exclusive vector meson (V = ρ⁰, ω, J/ψ, ...) and photon production, l + N → l + V + Y, where Y is either an elastically scattered nucleon or a low-mass dissociative system. At low transverse momentum transfer at the nucleon vertex, the photoproduction of ρ⁰, ω and φ mesons is characterized by a "soft" dependence of the cross sections on the γp center-of-mass energy W. This can be interpreted in the framework of Regge theory as due to the exchange of a "soft" Pomeron (IP), resulting in an energy dependence of the form dσ/dt ∝ W^(4(α_IP(t)−1)), where the Pomeron trajectory is parametrised as α_IP(t) = α_IP(0) + α′t ≈ 1.08 + 0.25t. However, in the presence of a "hard" scale, such as large values of the photon virtuality Q², of the momentum transfer |t|, or of the vector meson mass, perturbative QCD (pQCD) is expected to apply. Diffractive vector meson production can then be seen in the nucleon rest frame as a sequence of three subprocesses well separated in time: the fluctuation of the exchanged photon into a qq̄ pair, the hard interaction of the qq̄ pair with the nucleon via the exchange of (at least) two gluons in a color-singlet state, and the recombination of the qq̄ pair into a real vector meson. This approach results in a stronger rise of the cross section with W, which reflects the strong rise at small x of the gluon density in the nucleon. Such an energy dependence is observed in J/ψ production, where the charm quark mass provides a hard scale. It is of particular interest to study the role of other hard scales like Q² and t, as well as the transition from a "soft" to a "hard" behavior expected for light vector mesons. Furthermore, to take into account the skewing effect, i.e. the difference between the proton momentum fractions carried by the two exchanged gluons, one has to consider generalized parton distributions (GPDs). GPDs are an extension of the standard PDFs which include additional information on the correlations between partons and on their transverse motion. There are four different types of GPDs: H(x, x′, t) and E(x, x′, t) in the unpolarized case, where x and x′ are the momentum fractions of the two partons considered, to which one should add H̃(x, x′, t) and Ẽ(x, x′, t) in the polarized case. While E and Ẽ have no equivalent in the ordinary PDF approach, H and H̃ reduce to the usual unpolarized and polarized PDFs, respectively, in the forward limit (x = x′ and t = 0).
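To connect the trajectory parameters to the measured energy dependence, the sketch below uses the standard relation δ = 4(α_IP(⟨t⟩) − 1) for σ ∝ W^δ at a representative ⟨t⟩; the value of ⟨t⟩ is an assumption made for illustration, while the trajectory numbers are those quoted in the text.

# Minimal sketch: effective energy-dependence exponent delta in
# sigma(gamma p -> V p) ~ W^delta implied by a Pomeron trajectory,
# using delta = 4 * (alpha_IP(<t>) - 1) at an assumed representative <t>.
def delta_from_trajectory(alpha0, alpha_prime, mean_t):
    return 4.0 * (alpha0 + alpha_prime * mean_t - 1.0)

# "Soft" Pomeron values quoted in the text, at an assumed <t> = -0.15 GeV^2
print(delta_from_trajectory(1.08, 0.25, -0.15))   # ~0.17

# Intercept needed to reproduce the "hard" rise delta ~ 0.7 seen for J/psi
def intercept_from_delta(delta, alpha_prime, mean_t):
    return 1.0 + delta / 4.0 - alpha_prime * mean_t

print(intercept_from_delta(0.7, 0.164, -0.15))    # ~1.20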
The COMPASS experiment has presented [37] a study of the diffractive elastic leptoproduction of ρ⁰ mesons, µ + N → µ + ρ⁰ + N, where N is a quasi-free nucleon from any of the nuclei of their polarized target, at ⟨W⟩ = 10 GeV and for a wide range of Q², 0.01 < Q² < 10 GeV². Several spin density matrix elements (SDME), which carry information on the helicity structure of the production amplitudes, have been extracted from the ρ⁰ production and decay angular distributions. The COMPASS data provide large statistics, which allows the previous measurements to be extended towards low Q². Measurements of the r^04_00 matrix element, which can be interpreted as the fraction of longitudinal ρ⁰ in the sample, have been performed as a function of Q². If one assumes s-channel helicity conservation (SCHC) between the exchanged photon and the ρ⁰ meson, one can obtain the ratio R between the longitudinal (σ_L) and the transverse (σ_T) cross sections (see Fig. 11a). A weak violation of SCHC is observed through the r^04_{1−1} matrix element (see Fig. 11b), in agreement with results of previous experiments. It has to be noted that the study of systematic effects is still ongoing and that only the statistical errors are provided. Elastic electroproduction of φ mesons has been studied in ep collisions by the ZEUS experiment [38] in the kinematic range 2 < Q² < 70 GeV², 35 < W < 145 GeV and |t| < 0.6 GeV². The energy dependence of the γp cross section has been measured and can be parametrised as σ ∝ W^δ, with δ ≃ 0.4. This value lies between the "soft" diffraction value and the one observed for J/ψ. No Q² or t dependence of the slope δ was observed within the present precision. When parametrised as a falling exponential, the t dependence of the cross section leads to b slopes ranging from 6.4 ± 0.4 GeV⁻² at Q² = 2.4 GeV² to 5.1 ± 1.1 GeV⁻² at Q² = 19.7 GeV². The values of δ and b were found to scale with respect to other vector meson results when plotted as a function of Q² + m²_V, where m_V is the mass of the vector meson, suggesting that this could be a good approximation of the universal scale in this process. The ratio between the longitudinal (σ_L) and the transverse (σ_T) cross sections, extracted from the φ angular distributions, was found to increase with Q² and, when compared with results obtained for other vector mesons, to scale with Q²/m²_V. H1 has presented [39] comprehensive results on elastic J/ψ production in the γp center-of-mass energy ranges 40 < W < 305 GeV in photoproduction and 40 < W < 160 GeV in electroproduction, up to 80 GeV² in Q² and in both cases for |t| < 1.2 GeV².
In such a process, the hard scale provided by the mass of the charm quark ensures the validity of a pQCD description. This is even more so in electroproduction, where Q² can provide a second hard scale. The Q² and W dependent γp cross sections have been extracted. A steep rise with energy, σ ∝ W^δ, was observed, with values of δ ≃ 0.7 independently of Q² (see Fig. 12a). The effective Pomeron trajectories α_IP(t) = α_IP(0) + α′t have been extracted from the study of the doubly differential cross section dσ/dt as a function of W and t. In photoproduction (see Fig. 12b), a positive value of α′ = 0.164 ± 0.028 ± 0.030 GeV⁻² was obtained, leading to a shrinkage of the forward scattering peak, even if the effect is smaller than observed in hadron-hadron interactions. In electroproduction, within its large error, the obtained value of α′ was found compatible both with the photoproduction result and with zero. Finally, the helicity structure has been analyzed as a function of Q² and t, and no evidence for a violation of SCHC has been observed. Assuming SCHC, the ratio of the longitudinal and the transverse cross sections has been extracted as a function of Q². Teubner [40] has presented a model for vector meson production based on k_T factorization, which uses a parton-hadron duality ansatz to avoid the large uncertainties arising from the poorly known vector meson wave functions. The predictions obtained for the J/ψ cross section as a function of W (see Fig. 13) with different sets of gluon distributions show a huge spread. This indicates a possible sensitivity to the gluon at small x and small to intermediate scales, i.e. a kinematic region where fits to the inclusive data do not constrain the gluon with high precision. Getting high precision data on vector meson production at HERA and reducing the remaining theoretical uncertainties might then allow the gluon at low x to be pinned down.
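For orientation (our own back-of-the-envelope estimate, not a number quoted in the talk), the measured slope of the effective trajectory translates, through the standard Regge relation for the exponential t slope,

    b(W) = b(W_0) + 4 α′ ln(W/W_0),

into a shrinkage of Δb ≃ 4 × 0.164 × ln(305/40) ≈ 1.3 GeV⁻² over the full photoproduction range 40 < W < 305 GeV, a modest but measurable steepening of the forward peak.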
Kroll [41] presented a LO QCD calculation for light vector meson electroproduction taking into account the transverse momenta of the quark and the antiquark as well as Sudakov factors. The GPDs are modeled according to the ansatz of Radyushkin and Gaussian wave functions are used for the vector mesons. A fair agreement between the predictions and the available HERA data on ρ⁰ and φ production is obtained, both for the transverse and the longitudinal cross sections and for the spin density matrix elements. Photoproduction of vector mesons at large |t| has been studied extensively in recent years, as it is expected to be described by perturbative models involving the BFKL dynamics in the exchanged gluon ladder [42]. These models predict a power-law behavior of the t dependence of the cross section and a rise with |t| of the steepness of the W dependence. H1 has presented [43] results on ρ⁰ photoproduction in the kinematic range 75 < W < 95 GeV and 1.5 < |t| < 10 GeV², where the mass of the proton dissociative system Y is limited to M_Y < 5 GeV. The measured t dependence of the cross section is well described by a power law of the form |t|^{−n} with n = 4.41 ± 0.07 (stat.) +0.07 −0.10 (syst.) and can be reproduced by BFKL model predictions. A study of the helicity structure has been performed and confirms the violation of SCHC in the case of ρ⁰ photoproduction at large |t|, in contrast to what was observed for high-|t| J/ψ production [44,45]. This is generally attributed to differences in the wave function between the ρ and the J/ψ.
J/ψ photoproduction at large |t| has been studied by ZEUS [46] in the kinematic range 50 < W < 150 GeV, |t| > 1 GeV² and M_Y < 30 GeV. Both the t dependence and the W dependence of the cross section have been extracted, as shown in Fig. 14. Fits of the form W^δ to the W dependence of the cross section lead to values of δ ≃ 1, with an indication for a rise of δ with |t|. The model based on BFKL has been found to describe the t dependence.
The opportunity to study Generalized Parton Distributions (GPDs) was discussed in a common session with the Spin Physics working group. Information about GPDs in lepton-nucleon scattering can be provided by measurements of exclusive processes in which the nucleon remains intact. The simplest process sensitive to GPDs is Deeply Virtual Compton Scattering (DVCS), i.e. exclusive photon production off the proton, γ*p → γp, at small |t| but large Q², which is calculable in perturbative QCD. Such a final state also receives contributions from the purely electromagnetic Bethe-Heitler process, where the photon is radiated from the lepton. The resulting interference term in the cross section vanishes as long as one integrates over the azimuthal angle between the lepton and the hadron plane. It is then possible to extract the DVCS cross section by subtracting the Bethe-Heitler contribution, as done by H1 and ZEUS. The azimuthal asymmetries resulting from the interference are also sensitive to GPDs and are studied by HERMES, COMPASS and at JLAB. Extracting GPDs from the DVCS process would allow one, through Ji's sum rule, to determine the total angular momentum carried by the quarks, which contributes to the proton spin.
A new high statistics analysis of DVCS has been performed by the H1 experiment [47] in the kinematic region 2 < Q² < 80 GeV², 30 < W < 140 GeV and |t| < 1 GeV². The γ*p → γp cross section has been measured as a function of Q² and as a function of W. The W dependence can be parametrised as σ ∝ W^δ, yielding δ = 0.77 ± 0.23 ± 0.19 at Q² = 8 GeV², i.e. a value similar to J/ψ production, indicating the presence of a hard scattering process. For the first time, the DVCS cross section has been measured differentially in t (see Fig. 15) and the observed fast decrease with |t| can be described by the form e^{−b|t|} with b = 6.02 ± 0.35 ± 0.39 GeV⁻² at Q² = 8 GeV². This measurement allows the models to be further constrained, as their normalization depends directly on the t slope parameter. NLO QCD calculations, using a GPD parametrization based on the ordinary parton distributions in the DGLAP region and where the skewedness is dynamically generated, provide a good description of both the Q² and the W dependences. A review of the HERMES results on DVCS [48] has been presented, including new data on polarized targets. On the basis of unpolarized target data, one can measure the beam charge asymmetries, which are sensitive to the real part of the DVCS amplitudes, and the beam spin asymmetries, which are sensitive to the imaginary part. These are in fact mainly sensitive to the H GPD. Both asymmetries have been extracted and show the expected cos(φ) and sin(φ) behavior, respectively. A measurement of the t dependence of the beam charge asymmetry has been performed and the comparison with models indicates the possible sensitivity of the data for constraining GPDs. Polarized target data have been analyzed and the longitudinal target spin asymmetry, which is sensitive to the H̃ GPD, has been measured for the first time. The resulting sin(φ) and sin(2φ) moments are shown as a function of t in Fig. 16, together with predictions based on GPD models. The sizeable sin(2φ) moment might indicate a sensitivity to the twist-3 H and H̃ contributions. The installation of a new recoil detector will allow the final-state proton to be tagged directly and will reduce the uncertainties due to the backgrounds arising from the missing-mass techniques used up to now to guarantee exclusivity.
Gavalian [49] summarized the previous results on DVCS obtained by the CLAS experiment at JLAB, which measured in particular the beam spin asymmetry. In 2004 a dedicated DVCS experiment was operated in Hall A and new results are expected soon. He also reviewed the status of the upgrade of the CLAS experiment, which would allow DVCS to be measured with the expected 12 GeV beam.
Exclusive meson production processes provide access to GPDs as well. HERMES has studied [50] exclusive π⁺ production, which is sensitive to the H̃ and Ẽ GPDs. The Q²-dependent cross section has been measured and found to be in good agreement with a GPD-based model. A first measurement of the target spin asymmetry for ρ⁰ production, which probes the E GPD, has been performed.
Weiss [51] reviewed the theoretical status of hard electroproduction of pions and kaons and their link to GPDs.
TOWARDS THE LHC
Diffractive physics has provided a rich source of important results from both HERA and the TEVATRON. Within the past few years there has been increasing interest in the study of diffractive processes at the LHC, in connection with the proposal to add forward proton detectors to the LHC experiments. Various aspects of physics with forward proton tagging at the LHC were discussed in our working group.
Eggert [52] described the status of the TOTEM detector and the prospects for measurements of the total and elastic pp cross sections. In particular, the total pp cross section will be measured with record (order 1%) accuracy. This would strongly restrict the range of existing theoretical models. The elastic cross section will be measured in the wide interval of momentum transfer 10⁻³ < |t| < 8 GeV². The measurement of the elastic slope b(t = 0) at the LHC is especially important, since it is expected (see for example [53,54]) that this quantity is much more sensitive to the effects of the multi-Pomeron cuts than the total cross section.
Studies of diffractive physics at TOTEM require integration with CMS. CMS and TOTEM together will provide the largest acceptance detector ever built at a hadron collider. From the point of view of testing different regimes of the asymptotic behavior of the pp scattering amplitude, it will be very informative to measure accurately the survival probabilities of one, two, three (maybe even four) rapidity gaps [55,56,57]. The CMS/TOTEM physics menu will also include measurements of centrally produced low-mass systems (χ states, dijets, diphotons). In his talk, Eggert paid special attention to the new (β* = 172 m) optics aimed at optimizing diffractive proton detection at L = 10³² cm⁻² s⁻¹.
Several speakers (Albrow, Cox, Kowalski, Piotrzkowski and Royon) discussed the unique physics potential of forward proton tagging at 420 m at the LHC. The use of forward proton detectors as a means to study Standard Model (SM) and new physics at the LHC has only been fully appreciated within the last few years (see e.g. [57,58] and references therein). By detecting protons that have lost less than 2% of their longitudinal momentum, a rich QCD, electroweak, Higgs and BSM program becomes accessible, with a potential to study phenomena which are unique to the LHC, and difficult even at a future linear collider [59,60,61,62,63].
It was emphasized by Albrow, Cox and Royon [64] that the so-called central exclusive production (CEP) process might provide a particularly clean environment to search for, and identify the nature of, new particles at the LHC. There is also a potentially rich, more exotic physics menu including (light) gluino and squark production, gluinonia, radions, and indeed any object which has 0⁺⁺ or 2⁺⁺ quantum numbers and couples strongly to gluons [57]. Central exclusive production processes pp → p + φ + p, where the '+' signs denote the absence of hadronic activity ('gap') between the outgoing hadrons and the decay products of the central system φ, are attractive for two main reasons. Firstly, if the outgoing protons remain intact and scatter through small angles then, to a very good approximation, the central system φ must be dominantly produced in a spin-0, CP-even state, therefore allowing a clean determination of the quantum numbers of any observed resonance. Secondly, as a result of these quantum number selection rules, coupled with the (in principle) excellent mass resolution on the central system achievable if suitable proton detectors are installed, signal-to-background ratios greater than unity are predicted for SM Higgs production [65], and significantly larger ratios for the lightest Higgs boson in certain regions of the MSSM parameter space [66]. Simply stated, the reason for these large signal-to-background ratios is that exclusive b-quark production, the primary background in light Higgs searches, is heavily suppressed by the quantum number selection rules. Another attractive feature is the ability to directly probe the CP structure of the Higgs sector by measuring azimuthal asymmetries in the tagged protons [67]. Another strategy to explore the manifestation of explicit CP violation in the Higgs sector was recently studied by Ellis et al. [68].
The 'benchmark' CEP process for new physics searches is SM Higgs production, sketched in Fig. 17. The predicted cross section for the production of a 120 GeV Higgs at 14 TeV is 3 fb, falling to 1 fb at 200 GeV [27]. From the experimental perspective, the simplest channel in which to observe the SM Higgs in the tagged-proton approach is the WW decay channel [60,69]. More challenging from a trigger perspective is the bb̄ channel. This mode, however, becomes extremely important in the so-called 'intense coupling regime' of the MSSM, where CEP is likely to be the discovery channel. In this case close to 10³ exclusively produced double-tagged Higgs bosons are expected in 30 fb⁻¹ of delivered luminosity. About 100 would survive the experimental cuts, with a signal-to-background ratio of order 10.
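As a simple cross-check of these rates (our own arithmetic, not a number from the talks), the expected yield before acceptance and cuts is just the cross section times the integrated luminosity,

    N = σ × L ≈ 3 fb × 30 fb⁻¹ ≈ 90

for a 120 GeV SM Higgs; the figure of order 10³ quoted above refers to the MSSM intense-coupling scenario, where the exclusive cross section is strongly enhanced.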
Furthermore, as was reported in [32,33,63], forward proton tagging will make possible a unique program of high energy photon interactions physics at the LHC. For example, the two photon production of W pairs will allow a high precision study of the quartic gauge couplings [63]. Photon interactions are enhanced in heavy ion collisions and studies of such ultra peripheral collisions were discussed in [32,33]. In addition, two photon exclusive production of lepton pairs provides an excellent tool for calibrating both luminosity and the energy scale of the tagged events [63].
Finally, by tagging both outgoing protons, the LHC is effectively turned into a gluon-gluon collider. This will open up a rich, high-rate QCD physics menu (especially as concerns diffractive phenomena), allowing the skewed unintegrated gluon densities and the details of rapidity gap survival to be studied [62]. Note that CEP provides a source of practically pure gluon jets (a "gluon factory" [72]). This can be an ideal laboratory for studying the properties of gluon jets, especially in comparison with quark jets, and even a way to search for glueballs.
Cox and Kowalski [60,62] discussed the outline of the FP420 R&D project aimed at assessing whether it is possible to install forward proton detectors with appropriate acceptance at ATLAS and/or CMS, and to fully integrate such detectors within the experimental trigger frameworks [61].
SUMMARY AND OUTLOOK
A wealth of diffractive data is available over an extended kinematic range from the HERA experiments, allowing precise measurements of the diffractive structure function and the extraction of the Diffractive Parton Distribution Functions (DPDFs). The main news here is that, for the first time, several independent next-to-leading-order QCD fits to the data have become available. While all fits suggest that the DPDFs are gluon-dominated, significant differences between the various sets are evident. The origin of these differences is under investigation; at the moment they are the only information we have on the DPDF uncertainties. A precise and consistent determination of the DPDFs is one of the main aims of the HERA community in the near future. These functions provide an important input for the prediction of inclusive hard diffractive cross sections at the LHC. They are also essential to test the validity of QCD factorisation in diffractive processes.
Recent results on deeply virtual Compton scattering and exclusive meson production from the HERA experiments and from COMPASS and CLAS are especially interesting because of their sensitivity to the Generalized Parton Distributions (GPDs). GPDs extend the standard PDFs by including information on correlations between partons and on their transverse momentum. GPDs would allow, through Ji's sum rule, the determination of the contribution of the quark angular momentum to the proton spin. Moreover, they provide a useful input for the prediction of diffractive exclusive cross sections at the LHC. In this respect, the measurements of central exclusive production rates, under way with Run II TEVATRON data, also play an important role.
In the past few years there has been an increasing interest in the study of diffractive processes at the LHC. The TOTEM detector, integrated with CMS, will allow the study of diffractive physics with the largest acceptance detector ever built at a hadron collider. The LHC physics programme can be significantly enlarged by equipping a region at 420 m from the ATLAS and/or CMS interaction point, as has been recently proposed by the FP420 R&D project. This would provide a particularly clean environment to search for, and to identify the nature of, the new particles (first of all, the Higgs boson). At the same time this would open a rich QCD menu and allow a unique program of high-energy photon interactions.
Full Abstraction for the Resource Lambda Calculus with Tests, through Taylor Expansion
We study the semantics of a resource-sensitive extension of the lambda calculus in a canonical reflexive object of a category of sets and relations, a relational version of Scott's original model of the pure lambda calculus. This calculus is related to Boudol's resource calculus and is derived from Ehrhard and Regnier's differential extension of Linear Logic and of the lambda calculus. We extend it with new constructions, to be understood as implementing a very simple exception mechanism, and with a "must" parallel composition. These new operations allow us to associate a context of this calculus with any point of the model and to prove full abstraction for the finite sub-calculus where ordinary lambda calculus application is not allowed. The result is then extended to the full calculus by means of a Taylor Expansion formula. As an intermediate result we prove that the exception mechanism is not essential in the finite sub-calculus.
Introduction
In concurrent calculi like CCS [23], guarded processes are resources that can be used only once by other processes. This fundamental linearity of resources leads naturally to nondeterminism, since several agents (senders and receivers) can interact on the same channel. In general, various synchronization scenarios are possible, giving rise to different behaviours. On the other hand, in the λ-calculus [1] a function (receiver) can duplicate its argument (sender) arbitrarily. Thanks to this asymmetry, the λ-calculus enjoys a strong determinism. In Ehrhard and Regnier's differential λ-calculus, ordinary application can be expanded into a series of applications to linear resources, in analogy with the standard Taylor formula for entire functions. The Taylor expansion has been studied in [14], where the authors relate it to the Böhm tree of a λ-term, giving the intuition that the former is a resource-conscious improvement of the latter. The main difference between Boudol's resource λ-calculus and Ehrhard and Regnier's differential λ-calculus is that the first is lazy: this means that in many cases linear substitutions must be delayed. To that effect, the calculus features a linear explicit substitution mechanism. Moreover, it implements a fixed reduction strategy similar to linear head reduction. Therefore, Boudol's calculus is not an extension of the ordinary λ-calculus. Also, the resource λ-calculus is affine rather than linear, since depletable resources cannot be duplicated but can be erased. Another difference lies in the respective origins of these calculi: the resource λ-calculus originates from syntactical considerations related to the theory of concurrent processes, while the differential one arises from denotational models of linear logic where the existence of differential operations has been observed. These models are based on the well-known relational model of Linear Logic [17], and the interpretation of the new differential constructions is as natural and simple as the interpretation of the ordinary Linear Logic constructions.
In this paper we work with a resource-sensitive λ-calculus because our techniques depend on the linear logic structure underlying the calculus and on the presence of a Taylor expansion formula. Two main syntaxes have been proposed for the differential λ-calculus: Ehrhard and Regnier's original one [11], simplified by Vaux in [31], and Tranquilli's resource calculus of [30], whose syntax is close to Boudol's. These calculi share a common semantical backbone as well as similar connections with differential Linear Logic and proof nets. We adopt roughly Tranquilli's syntax and call our calculus the ∂λ-calculus. To avoid the problem of handling the coefficients introduced by the Taylor formula we conveniently suppose that the formal sum in the calculus is idempotent; this amounts to saying that we only check whether a term appears in a result, not how many times it appears. This is very reasonable when studying convergence properties, since M + M converges exactly when M does.
Full Abstraction. A natural problem when a new calculus is introduced is to characterize when two programs are operationally equivalent, namely when one can be replaced by the other in every context without noticing any difference with respect to a given observational equivalence. In this paper we prove a full abstraction result (a semantical characterization of operational equivalence) for the ∂λ-calculus in the spirit of [5]. As in that paper, we extend the language with a convergence testing mechanism. Implicitly, this extension already appears in [10], in a differential linear logic setting: it corresponds to the 0-ary tensor and par cells. To implement the corresponding extension of the λ-calculus, we introduce two sorts of expressions: the terms (variable, application, abstraction, "throw" τ (V ) where V is a test) and the tests (empty test, parallel composition of tests and "catch" τ (M ) where M is a term). Parallel composition allows to combine tests in such a way that the combination succeeds if and only if each test succeeds. Outcomes of tests (convergence or divergence) are the only observations allowed in our calculus, and the corresponding contextual equivalence and preorder on terms constitute our main object of study.
This extended ∂λ-calculus, which we call the ∂λ-calculus with tests, has a natural denotational interpretation in a model of the pure λ-calculus introduced by Bucciarelli, Ehrhard and Manzonetto in [8], which is indeed a denotational model of the differential pure nets of [10], as one can check easily. This model is a reflexive object D in the Kleisli category of the linear logic model of sets and relations where !X is the set of all finite multisets over X. An element of D can be described as a finite tree which alternates two kinds of layers: multiplicative layers where subtrees are indexed by natural numbers and exponential layers where subtrees are organized as non-empty multisets. To be more precise, ⅋-? (negative) pairs of layers alternate with ⊗-! (positive) pairs, respecting a strict polarity discipline very much in the spirit of Ludics [18]. The empty positive multiplicative tree corresponds to the empty tensor cell and the negative one to the empty par cell. The corresponding constructions τ, τ̄ are therefore quite easy to interpret.
We use this logical interpretation to turn the elements of D into ∂λ-calculus terms with tests. More precisely, with each element α of D, we associate a test α + · with a hole · for a term, and we show that α belongs to the interpretation of a (closed) term M iff the test α + M converges. From this fact, we derive a full abstraction result for the fragment of the ∂λ-calculus with tests in which all ordinary applications are trivial, that we call ∂ 0 λ-calculus with tests. To extend this result to the ∂λ-calculus with tests, we use the Taylor formula introduced in [11] which allows to turn any ordinary application into a sum of infinitely many linear applications of all possible arities. One exploits then the fact that the Taylor formula holds in the model, as well as a simulation lemma which relates the head reduction of a term with the head reduction of its Taylor expansion.
Contributions. In Section 2 we provide the abstract categorical framework which is needed to interpret the ∂λ-calculus and its extension with tests. The syntax and operational semantics of the ∂ 0 λ-calculus with tests (which is the promotion-free fragment) are presented in Section 3, while its relational model D is described concretely in Section 4. The definability of the elements of D in the ∂ 0 λ-calculus with tests is the main conceptual contribution of this paper -it shows that, in this setting, the standard syntax versus semantics dichotomy is essentially meaningless. From definability it follows easily that the relational model is fully abstract for the ∂ 0 λ-calculus with tests, as shown in Section 5. This result is analyzed further in Section 6, where it is proved that in the absence of promotion the test operators do not add any discriminatory power to the contexts, thus showing that D is also fully abstract for the ∂ 0 λ-calculus without tests.
We then focus on the full ∂λ-calculus with tests. Section 7 is devoted to present its syntax, operational semantics and relational semantics. In Section 8 we consider the use of Taylor expansions to reduce the full abstraction problem for ∂λ to its "∂ 0 λ" version, thus introducing an original and promising reduction technique.
Categorical semantics of linear logic
Before introducing the syntax of our resource λ-calculus with tests, we describe the general categorical structures needed to interpret this calculus. Our goal here is to give general motivations for our syntactic constructs. In the sequel, we consider a particular model, based on the category of sets and relations, and it is not hard to check that this particular category is an instance of the general setting we present here. In Section 4, we shall present this relational interpretation concretely in order to avoid the admittedly heavy categorical formalism.
Our main reference for categorical models of linear logic (LL) is [22]. We denote by N the set of natural numbers.
Let C be a Seely category. We recall briefly that such a structure consists of a category C, whose morphisms should be thought of as linear maps, equipped with a symmetric monoidal structure for which it is closed and ∗-autonomous with respect to a dualizing object ⊥. The monoidal product, called tensor product, is denoted as ⊗, the linear function space object from X to Y is denoted as X ⊸ Y , and the composition of morphisms in C is simply denoted by juxtaposition. We use ev ∈ C((X ⊸ Y ) ⊗ X, Y ) for the linear evaluation morphism and cur(f ) ∈ C(Z, X ⊸ Y ) for the "linear currying" of a morphism f ∈ C(Z ⊗ X, Y ). The dual object of X is denoted as X ⊥ . We also assume that C is cartesian, with a cartesian product denoted as & and a terminal object ⊤. By ∗-autonomy, this implies that C is also cocartesian; we use ⊕ for the coproduct and 0 for the initial object. In any cartesian and cocartesian category, there is a canonical morphism a ∈ C(0, ⊤) and a canonical natural transformation a X,Y ∈ C(X ⊕ Y, X & Y ). One says that the category is additive if these morphisms are isomorphisms. In that case, each homset C(X, Y ) is equipped with a structure of commutative monoid, and all operations defined so far (composition, tensor product, linear currying) are linear with respect to this structure.
If C has cartesian products of all countable families (X i ) i∈I of objects, we say that it is countably cartesian, and in that case C is also countably cocartesian. If the canonical morphism a (X i ) i∈I ∈ C(⊕ i∈I X i , & i∈I X i ) is an isomorphism, we say that C is countably additive. In that case, homsets have countable sums, and composition as well as all monoidal operations commute with these sums.
Last, we assume that C is equipped with an endofunctor ! which has a structure of comonad (unit d X ∈ C(!X, X) called dereliction, multiplication p X ∈ C(!X, !!X) called digging). Moreover, this functor must be equipped with a monoidal structure which turns it into a symmetric monoidal functor from the symmetric monoidal category (C, &) to the symmetric monoidal category (C, ⊗): the corresponding isomorphisms m : !⊤ → 1 and m X,Y : !(X & Y ) → !X ⊗ !Y are often called Seely isomorphisms. The following diagram is moreover required to be commutative.
Using this monoidal structure, we can equip the ! functor with a lax symmetric monoidal structure from the symmetric monoidal category (SMC) (C, 1, ⊗) to itself. In other words, one can define a morphism µ : 1 → !1 and a natural transformation µ X,Y : !X ⊗ !Y → !(X ⊗ Y ) which satisfy compatibility conditions with respect to the structure isomorphisms of the SMC (C, 1, ⊗). Given an object X of C and k ∈ N, this allows to define a morphism µ (k) : (!X) ⊗k → !(X ⊗k ) which is essential in the interpretation of λ-terms.
2.1. Structural natural transformations. Using these structures, we can define a weakening natural transformation w X ∈ C(!X, 1) and a contraction natural transformation c X ∈ C(!X, !X ⊗ !X) as follows. Since ⊤ is terminal, there is a canonical morphism t X ∈ C(X, ⊤) and we set w X = m !t X . Similarly, we have a diagonal natural transformation ∆ X ∈ C(X, X & X) and we set c X = m X,X !∆ X .
This contraction morphism c X : !X → !X ⊗ !X is associative, and therefore can be generalized to a unique morphism c (n) X : !X → (!X) ⊗n . More generally, we can define a generalized contraction morphism c (k,n) X : (!X) ⊗k → ((!X) ⊗k ) ⊗n , obtained as a composition in which σ is the obvious isomorphism defined using associativity and symmetry of ⊗.
Similarly, we define a generalized weakening morphism w (k) X as a composition in which λ is the unique canonical isomorphism induced by the monoidal structure. As usual, the (co-)Kleisli category C ! of the comonad ! is defined as the category that has the same objects as C and C ! (X, Y ) = C(!X, Y ), with composition denoted as • and defined using the comonad. One can prove that C ! is cartesian closed, with & as cartesian product and !X ⊸ Y as function space object: this is a categorical version of Girard's translation of intuitionistic logic into linear logic.
Given f ∈ C((!X) ⊗k , Y ), it is standard to define f ! ∈ C((!X) ⊗k , !Y ); this operation is usually called promotion in linear logic. This morphism is defined as the composition !f µ (k) (p X ) ⊗k , where µ (k) is here taken at the object !X. The notion of categorical model recalled above allows one to interpret standard classical linear logic. If one wishes to interpret differential constructs as well (in the spirit of the differential λ-calculus or of differential linear logic), more structure and hypotheses are required. Basically, we need that: • the cartesian and cocartesian category C is additive, and • the model is equipped with a codereliction natural transformation d̄ X ∈ C(X, !X) such that d X d̄ X = Id X . More conditions are required if one wants to interpret the full differential λ-calculus of [11] or full differential linear logic as presented in e.g. [26]: these conditions represent a categorical axiomatization of the usual chain rule of calculus and are well explained in [15]. When these conditions, which we give explicitly now, hold, we say that the chain rule holds in C.
The first condition is the following commutation.
It would be interesting to know if this condition can be reduced to a more primitive one, involving d X and the isomorphism m (of course, one can replace µ by its expression in terms of m in the diagram above, so that this diagram is actually a condition on m, but we would like to find a simpler and more elegant commuting diagram involving m).
Last, we have to provide a commutation relating d̄ X and p X . We have of course d̄ !X d̄ X : X → !!X. Also, µ 1 : 1 → !1 and therefore !w̄ X µ 1 : 1 → !!X. Keeping implicit the isomorphism X ⊗ 1 ≅ X, we get (d̄ !X d̄ X ) ⊗ (!w̄ X µ 1 ) : X → !!X ⊗ !!X, and we require the following diagram to commute: If C is a weak differential LL model, we can define a coweakening morphism w̄ X ∈ C(1, !X) and a cocontraction morphism c̄ X ∈ C(!X ⊗ !X, !X) as we did for w X and c X . Similarly, we also define c̄ (n) X ∈ C((!X) ⊗n , !X). Due to the naturality of d̄ X we have

2.3. The Taylor formula. Let C be a weak differential LL model which is countably additive. Remember that each homset C(X, Y ) is endowed with a canonical structure of commutative monoid in which countable families are summable. We assume moreover that these monoids are idempotent. This means that, if f ∈ C(X, Y ), then f + f = f .
We say that the Taylor formula holds in C if, for any morphism f ∈ C(X, Y ), we have If the idempotency condition does not hold, one has to require the homsets to have a rig structure over the non-negative real numbers, and the Taylor condition must be written in the more familiar way !
To give a precise meaning to this kind of expressions, we need of course more structure on homsets: they need to have some completeness properties, typically expressible in topological terms.
Remark 2.2. If the chain rule holds in C, the Taylor condition reduces to the particular case of identity morphisms: one has just to require that !Id X (in the idempotent setting).
2.4. Models of the pure differential λ-calculus. A model of the pure differential λ-calculus of [11], or of the ∂λ-calculus to be introduced below, is simply a reflexive object in C ! , where C is a model of differential linear logic in which the chain rule holds. More precisely, it consists of such a category C and of a triple (U, app, lam) where U is an object of C and app ∈ C(U, !U ⊸ U ) and lam ∈ C(!U ⊸ U, U ) satisfy app • lam = Id !U ⊸U in C. It is crucial to take app and lam in the "linear" category C and not in C ! .
In the present paper, we concentrate on the case where U satisfies a stronger condition. We assume that C is countably cartesian and, given an object X, we denote as X N the cartesian product & i∈N X i where X i = X for each i ∈ N. We consider an object U of C together with an isomorphism ϕ ∈ C(U, (!U N ) ⊥ ). We have clearly (!U N ) ⊥ ≅ !U ⊸ (!U N ) ⊥ by the Seely isomorphisms and ∗-autonomy of C. Using ϕ, we get finally that U ≅ !U ⊸ U and we define app and lam using this isomorphism.
We also assume that C is a model of the MIX rule of linear logic (see [16]). This means that ⊥ is equipped with a structure of commutative monoid in the SMC C. We use mix (n) to denote the corresponding morphism ⊥ ⊗n → ⊥, so that in particular mix (0) : 1 → ⊥ and mix (1) = Id ⊥ . The interpretation of the calculi presented in this paper is based on the following toolbox. The first constructions we give deal with "terms", which are represented here by morphisms (!U ) ⊗k → U (the number k ∈ N corresponds intuitively to the number of variables on which the term depends).
• Given a family of terms f 1 , . . . , f n : (!U ) ⊗k → U , we can define a morphism [f 1 , . . . , f n ] : (!U ) ⊗k → !U (a morphism of this type will be called a "bag"). • Let f : (!U ) ⊗k → U be a further term. Remember that we have defined the promotion of f , which is a bag f ! : (!U ) ⊗k → !U . Therefore we can define the application of f to a bag g, which is again a term. Tests are represented by morphisms (!U ) ⊗k → ⊥; we now present the categorical constructions required for dealing with such tests.
• Let h 1 , . . . , h n : (!U ) ⊗k → ⊥ be tests. Then we can define their parallel composition, using the mix structure of ⊥, as the test
The ∂ 0 λ-Calculus with Tests
The definition of the ∂ 0 λ-calculus with tests requires some preliminary notations that we give below.
3.1. Sets and modules. We denote by N the set of natural numbers and by 1 an arbitrary singleton set. Given a set S, we write P(S) (resp. P f (S)) for the set of all (resp. all finite) subsets of S. Given k ∈ N, we denote by S k the set of all permutations of {1, . . . , k}. Let 2 be the semiring {0, 1} with 1+1 = 1 and multiplication defined in the obvious way. For any set S, we write 2 S for the free 2-module generated by S, so that 2 S ∼ = P f (S) with addition corresponding to union, and scalar multiplication defined in the obvious way. However we prefer to keep the algebraic notations for elements of 2 S , hence set unions will be denoted by + and the empty set by 0.
3.2.
Multisets. Let S be a set. A multiset a over S can be defined as an unordered list a = [α 1 , α 2 , . . .] with repetitions, such that α i ∈ S for all indices i. A multiset a is called finite if it is a finite list; we denote by #a its cardinality. We write M f (S) for the set of all finite multisets over S. Given two multisets a = [α 1 , α 2 , . . .] and b = [β 1 , β 2 , . . .], the multiset union of a and b is defined by a ⊎ b = [α 1 , β 1 , α 2 , β 2 , . . .]; summing up, N⟨S⟩ ∼ = M f (S). Given two finite sequences of multisets a, b of the same length n we define a ⊎ b = (a 1 ⊎ b 1 , . . . , a n ⊎ b n ). Given a strict order > on S, the multiset ordering [2, Def. A.6.2] is the smallest transitive relation > m on M f (S) such that (∀β ∈ b. α > β) ⇒ (a ⊎ [α] > m a ⊎ b), for all α ∈ S and all a, b ∈ M f (S). Intuitively, a > m b holds if b can be obtained from a by replacing some of its elements by finitely many (possibly zero) smaller elements.
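For readers who wish to experiment, the multiset ordering admits a well-known equivalent characterisation in terms of multiset differences: a > m b holds iff a and b differ and every element of b \ a lies strictly below some element of a \ b. The following Haskell fragment is a small executable sketch of ours (multisets are represented as lists, and the function names are not taken from the paper):

import Data.List (delete, foldl')

-- multiset difference: remove each element of the second list from the first, once
mdiff :: Eq a => [a] -> [a] -> [a]
mdiff = foldl' (flip delete)

-- a >m b for the multiset ordering induced by the strict order gt
gtMul :: Eq a => (a -> a -> Bool) -> [a] -> [a] -> Bool
gtMul gt a b =
  let x = mdiff a b            -- elements of a not matched in b
      y = mdiff b a            -- elements of b not matched in a
  in not (null x) && all (\beta -> any (`gt` beta) x) y

-- example: gtMul (>) [3,1] [2,2,2,1] == True, since the 3 was replaced by three smaller 2's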
Notation on parallel composition of tests. We now introduce the ∂ 0 λ-calculus with tests which is the promotion-free fragment of the ∂λ-calculus with tests we will present in Section 7.
3.3. Syntax. The ∂ 0 λ-calculus with tests has four syntactic categories: terms that are in functional position, bags that are in argument position and represent multisets of linear resources, tests that are "corked" multisets of terms having only two possible outcomes and finite formal sums representing all possible results of a computation. Expressions are either terms, bags or tests and will be used to state results holding for all categories.
Definition 3.1. The formal grammars defining terms, bags, tests and sums are given in Figure 1(a).
Terms are the real protagonists of the ∂ 0 λ-calculus with tests. The term λx.M represents the λ-abstraction of the variable x in the term M and M P the application of a term M to a bag P of linear resources. Thus, in (λx.M )P , each resource in P is available exactly once for λx.M, and if the number of occurrences of x in M "disagrees" with the cardinality of P then the result is 0 (see later, when sums of expressions are introduced). The operator τ̄(·) will be discussed later on, after the notion of test is explained.
As usual we assume that application associates to the left and lambda abstraction to the right. Therefore we will write λx 1 . . . x n .M P 1 · · · P k for λx 1 .(· · · (λx n .(· · · (M P 1 ) · · · P k )) · · · ). Moreover, the notation M P ∼n will stand for M P · · · P (n times). Tests are multisets of terms, the "τ " being a tag for distinguishing them from bags. Intuitively, they are expressions that can produce two results: either success, represented by ε, or failure, represented by 0.
Throughout the paper, we will enforce the distinction between bags and tests by using systematically the following notational conventions. The test V |W represents the (must-)parallel composition of V and W (i.e., V |W succeeds if both V and W succeed and the order of evaluation is inessential). We prefer to use the parallel notation as syntactic sugar in order to avoid both the explicit treatment of associativity and commutativity axioms (plus neutrality of ε). This is perfectly coherent with the implementation of tests as multisets of terms.
The operator τ̄(·) allows one to build a term out of a test: intuitively, the term τ̄(V ) may be thought of as V preceded by an infinite sequence of dummy λ-abstractions. Dually, the "cork construction" τ [L 1 , . . . , L k ] may be thought of as an operator applying to all its arguments an infinite sequence of empty bags. This suggests in particular that it is sound to reduce τ [τ̄(V )] to V .
Hence the term τ̄(V ) raises an exception encapsulating V , and the test τ [L 1 , . . . , L k ] catches the exception possibly raised by, say, L i and replaces L i by the multiset of terms encapsulated in that exception. The context of the exception is thrown away by the dummy abstractions of τ̄ and the dummy applications of τ . A test needs to catch an exception in order to succeed; for instance, τ [M ] fails as soon as M is a τ̄-free, closed term.
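To fix ideas, here is one possible Haskell rendering of the four syntactic categories (the datatype and constructor names are ours, meant only as an illustration of the grammar informally described above, not as the paper's official syntax):

type Name = String

-- Terms: variables, λ-abstraction, application to a bag, and throw τ̄(V).
data Term = Var Name
          | Lam Name Term
          | App Term Bag
          | Throw Test               -- τ̄(V): builds a term out of a test
          deriving (Eq, Show)

-- Bags are finite multisets of linear resources; a list is used here, with the
-- understanding that the order of its elements is irrelevant.
type Bag = [Term]

-- Tests are "corked" multisets of terms τ[L1,...,Lk]; the empty test ε is Catch [].
newtype Test = Catch [Term]
             deriving (Eq, Show)

-- Parallel composition V | W is multiset union of the underlying multisets of terms.
par :: Test -> Test -> Test
par (Catch v) (Catch w) = Catch (v ++ w)

-- With an idempotent sum, a finite formal sum of expressions is just a finite set,
-- represented here as a list up to duplicates; the empty list plays the role of 0.
type Sum a = [a]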
Sums. Remember from Subsection 3.1 that 2 Λτ (resp. 2 Λ τ , 2 Λ b ) denotes the set of finite formal sums of terms (resp. tests, bags) with an idempotent sum. We also set 2 Λ e := 2 Λ τ ∪ 2 Λτ ∪ 2 Λ b . This is an abuse of notation as 2 Λ e here does not denote the 2-module generated over Λ τ ∪ Λτ ∪ Λ b , but rather the union of the three 2-modules; this means that sums should be taken only in the same sort. The typical metavariables to denote sums are given in Figure 1(a).
The α-equivalence relation and the set FV(A) of free variables of A are defined as usual, like in the ordinary λ-calculus [1]. Hereafter, (sums of) expressions are considered up to α-equivalence.
Because of the absence of promotion the number of linear resources that a term λx.M is expecting is just the number of occurrences of x in M (the degree of x in M ).
Definition 3.4 (Degree). The degree of x in A, written deg x (A), is the number of free occurrences of x in A, defined by induction on A in the obvious way.

3.4. Two Kinds of Substitutions. In this subsection we introduce two kinds of substitutions: the usual λ-calculus substitution and a linear one, which is proper to differential and resource calculi (see [4,11,30]). In order to proceed, we first need to introduce some notational conventions concerning sums. Indeed, the grammar for terms and tests does not include any sums, so they may arise only at the "surface". For instance, I + I is a legal sum of expressions, while λx.(x + x) cannot be generated using the grammar of Figure 1(a). Convention 3.5. As syntactic sugar -and not as actual syntax -we extend all the constructors to sums by multilinearity, setting for instance λx.(M + N) = λx.M + λx.N, in such a way that the equations in Figure 2(a) hold.
This kind of meta-syntactic notation is discussed thoroughly in [14].
In the following two definitions we make an essential use of the extended syntax. We recall that an operator F (−) is extended by linearity by setting F (Σ i A i ) = Σ i F (A i ). Definition 3.8 (Substitution). Let A ∈ Λ e and N ∈ Λτ . The (capture-free) substitution of N for x in A, denoted by A{N/x}, is defined as usual. Accordingly, A{N/x} denotes an expression of the extended syntax. Finally, we extend this operation to sums as in A{N/x} by linearity in A. Definition 3.9 (Linear Substitution). The linear (capture-free) substitution of N for x in A, denoted by A N/x , is defined in Figure 2(b). The expression A N/x belongs to the extended syntax. We extend this operation to sums as in A N/x by linearity in A, as we did for usual substitution.
Roughly speaking, the linear substitution A N/x replaces exactly one free occurrence of x in A with the term N . If there is no occurrence of x in A then the result is 0. In presence of multiple occurrences, all possible choices are made and the result is the sum of terms corresponding to them.
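Reusing the Haskell datatypes sketched above, this informal description of linear substitution can be turned into the following executable sketch (again ours; it assumes that bound variables have been α-renamed beforehand so that no capture can occur):

-- Linear substitution of n for x: replace exactly one free occurrence of x by n.
-- The result is the formal sum (here: a list) of all possible choices; the empty
-- list is the empty sum 0, obtained when x does not occur free.
linTerm :: Term -> Term -> Name -> Sum Term
linTerm (Var y)   n x | y == x    = [n]
                      | otherwise = []
linTerm (Lam y m) n x | y == x    = []            -- occurrences of x are bound here
                      | otherwise = [ Lam y m' | m' <- linTerm m n x ]
linTerm (App m p) n x =
     [ App m' p  | m' <- linTerm m n x ]          -- substitute in the function part ...
  ++ [ App m  p' | p' <- linBag p n x ]           -- ... or in exactly one resource of the bag
linTerm (Throw v) n x = [ Throw v' | v' <- linTest v n x ]

-- all ways of isolating one element of a list
focus :: [a] -> [([a], a, [a])]
focus xs = [ (take i xs, xs !! i, drop (i + 1) xs) | i <- [0 .. length xs - 1] ]

linBag :: Bag -> Term -> Name -> Sum Bag
linBag p n x = [ before ++ [l'] ++ after | (before, l, after) <- focus p, l' <- linTerm l n x ]

linTest :: Test -> Term -> Name -> Sum Test
linTest (Catch ls) n x = [ Catch ls' | ls' <- linBag ls n x ]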
The above notation A P/x makes sense because, by Theorem 3.12, the expression A L 1 /x · · · L k /x is actually independent from the enumeration of L 1 , . . . , L k in P . Moreover recall that we use α-equivalence, so that bound variables can be renamed in order to avoid capture of free variables during substitution.
3.5. The Operational Semantics. In this section we are going to introduce the reduction rules defining the operational semantics of the ∂ 0 λ-calculus with tests.
Definition 3.14. The reduction semantics of the ∂ 0 λ-calculus with tests is generated by the rules in Figure 3(a).
The reduction preserves the sort of an expression in the sense that terms rewrite to (sums of) terms and tests to (sums of) tests.
The left side of a reduction rule in Figure 3(a) is called a redex while the right side is its contractum. Redexes are classified, depending on their kind, as follows.
The following remark gives a more explicit characterization of a β-contractum. Remember that the degree of x in M has been defined in Definition 3.4.
Remark 3. 16. If M has k free occurrences of x (represented by x 1 , . . . , x k ) then we have From Remark 3.16 it is clear that, because of the presence of linear substitution, the β-reduction is a relation from terms to sums of terms, namely → β ⊆ Λτ × 2 Λτ .
containing R and respecting the rules of Figure 3 We now provide some examples of reduction. Note that parallel composition is treated From Definition 3.19 we have that 0 is in normal form. The following lemma gives an explicit characterization of terms in normal form.
Lemma 3.20. If a term M ∈ Λτ is in normal form then 1. either M = λ x.yP 1 · · · P n for some n ≥ 0 and each P i is a bag of terms in normal form, where n ≥ 0, k i ≥ 0 and each P i,j is a bag of terms in normal form.
3.6. Operational properties. In this subsection we show that the ∂ 0 λ-calculus enjoys Church-Rosser and strong normalization, even in the untyped version of the calculus.
The proof of strong normalization is purely combinatorial, based on a measure given in the following definition.
Definition 3.21. The size of an expression A, written size(A), is defined by induction as follows: The intuition behind strong normalization is that size m (A) becomes smaller by replacing one (or more) of its elements by an arbitrary number of smaller elements, i.e., with respect to the multiset ordering > m induced on M f (N) by the usual order > of N. It is well known that > m is well-founded.
Proof. The fact that there are no infinite reduction chains is trivial, since every reduction step decreases the size of an expression. In other words For the Church-Rosser property just check local confluence and conclude by Newman's lemma.
The following lemma formalizes our intuition behind the behaviour of the cork τ (·). As a corollary we get that a closed test can only reduce either to ε or to 0. Proof. As ∂ 0 λ-calculus with tests is strongly normalizing, we have that , and this latter expression reduces to a finite (possibly empty) sum of ε's, which is thus equal either to 0 or to ε.
Therefore, it makes sense to define the convergence of a test as follows.
It is easy to check that a test V can converge only if it is closed; indeed, a free variable x occurring in V cannot be erased during the reduction.
3.7.
Operational Pre-order. A term-context D · is a term having one occurrence of a hole, denoted by · , appearing in term-position; a test-context C · is a test having one occurrence of a hole, still appearing in term-position.
Definition 3.26. Term-contexts D · and test-contexts C · are defined by the following grammar: The set of term-contexts is denoted by Λτ · and the set of test-contexts by Λ τ · . Given M ∈ Λτ we indicate by C M the test resulting by blindly replacing M for the hole (allowing capture of free variables) in C · . Similarly, given a term-context D · , D M denotes the term obtained by blindly substituting M for the hole in D · .
We say that a test-context C · (resp. a term-context D · ) closes a term M if C M (resp. D M ) is closed. Definition 3.28. The operational pre-order τ O on the ∂ 0 λ-calculus with tests is defined as follows: for all M, N ∈ Λτ , M τ O N holds if and only if, for every test-context C · closing both M and N, the convergence of C M entails the convergence of C N . This coincides with a standard idea of operational preorder. The restriction of observations to test-contexts deserves however a discussion. First, note that tests provide a canonical notion of observation since -by design -they either converge (to ε) or reduce to 0. Hence, the choice of test-convergence as the basic observation in our calculus is very natural.
A second motivation comes a posteriori. Indeed, as we will prove in Section 6 (Theorem 6.14), for test-free terms M, N we have M τ O N exactly when, for all test-free term-contexts D · , D M is solvable entails D N is solvable (the notion of solvability for test-free terms is given in Definition 6.2).
A Relational Semantics
This section is devoted to build a relational model D of ∂ 0 λ-calculus with tests, that has been first introduced in [8] as a model of the ordinary λ-calculus.
We first give a sketchy presentation of the Cartesian closed category where D lives. We recall that the definitions and notations concerning multisets have been introduced in Subsection 3.2.
4.1. The Category MRel. The category MRel is the co-Kleisli category for the finitemultiset comonad on the category Rel of sets and relations. This category can be described directly as follows: • The objects of MRel are all the sets.
Given two sets S and T , a morphism from S to T in MRel is a relation between M f (S) and T ; that is, MRel(S, T ) = P(M f (S) × T ). Hereafter we adopt the following convention.
It is easy to check that this is actually the categorical product of S and T in MRel; given s : U → S and t : U → T , the corresponding morphism s, t : U → S & T is obtained from s and t in the evident way. Given two objects S and T , the exponential object [S ⇒ T ] is M f (S) × T , together with the evident evaluation morphism. Again, it is easy to check that in this way we have defined an exponentiation: any morphism s : (U & S) → T curries uniquely to a morphism U → [S ⇒ T ]. As shown in [20], MRel is actually a Cartesian closed differential category [3]. It is not difficult to check that it is moreover an instance of the categorical framework presented in Section 2.
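As a concrete aid to intuition, here is a small executable sketch (ours, with multisets represented as sorted lists) of morphisms and composition in MRel; it implements the usual co-Kleisli composition for the finite-multiset comonad on Rel:

import Data.List (sort)

-- A morphism from S to T in MRel: a relation between finite multisets over S and T.
type MSet a   = [a]                 -- finite multisets, kept sorted
type MRel a b = [(MSet a, b)]

munion :: Ord a => MSet a -> MSet a -> MSet a
munion xs ys = sort (xs ++ ys)

-- Identity on S: each element α is related to the singleton multiset [α].
identityM :: [a] -> MRel a a
identityM dom = [ ([x], x) | x <- dom ]

-- Co-Kleisli composition t ∘ s: to produce γ, t consumes a multiset [β1,...,βk];
-- each βi is produced by s from some input multiset ai, and the overall input is
-- the multiset union a1 ⊎ ... ⊎ ak.
composeM :: (Ord a, Eq b) => MRel b c -> MRel a b -> MRel a c
composeM t s =
  [ (foldr munion [] as, gamma)
  | (betas, gamma) <- t
  , as <- mapM (\beta -> [ a | (a, b) <- s, b == beta ]) betas ]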
4.2.
An Extensional Reflexive Object. We build a reflexive object D, which is extensional in the sense that D ∼ = [D ⇒ D]. The elements of D are infinite sequences of multisets, that are quasi-finite in the following sense. We build a family of sets (D n ) n∈N as follows: Since the operation mapping a set S into M f (S) (ω) is monotonic with respect to inclusion 1 and D 0 ⊆ D 1 , we have D n ⊆ D n+1 for all n ∈ N. Finally, we set D = n∈N D n .
To define an isomorphism between D and M f (D) × D just note that every element α = (a 1 , a 2 , a 3 , . . .) ∈ D stands for the pair (a 1 , (a 2 , a 3 , . . .)) and vice versa. From this simple remark, it follows that D ∼ = [D ⇒ D] (we have a canonical bijection between these two sets, and therefore an isomorphism in MRel). Given α = (a 1 , a 2 , a 3 , . . .) ∈ D and a ∈ M f (D), we write a :: α for the element (a, a 1 , a 2 , a 3 , . . .) ∈ D. Remark that [] :: * = * .
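Concretely, quasi-finiteness means that only finitely many entries of the sequence are non-empty multisets, so an element of D can be stored as a finite list with the trailing empty multisets left implicit. The following Haskell sketch (ours, reusing the MSet synonym from the previous sketch) makes the isomorphism D ≅ M f (D) × D explicit:

-- An element of D: a quasi-finite sequence of finite multisets over D, stored as a
-- finite list normalised to contain no trailing empty multisets.
newtype D = D [MSet D] deriving (Eq, Ord, Show)

mkD :: [MSet D] -> D
mkD = D . reverse . dropWhile null . reverse    -- drop trailing []'s for a canonical form

star :: D                                       -- * = ([], [], [], ...)
star = mkD []

-- The two directions of the isomorphism D ≅ M_f(D) × D.
cons :: MSet D -> D -> D                        -- a :: α
cons a (D as) = mkD (a : as)

decons :: D -> (MSet D, D)
decons (D [])       = ([], star)
decons (D (a : as)) = (a, D as)

-- e.g.  cons [] star == star,  reflecting the remark [] :: * = * above.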
4.3.
Interpreting the ∂ 0 λ-calculus with tests. We now define the interpretation of an expression A of the ∂ 0 λ-calculus with tests in the model D. As usual, an expression A will be interpreted by a morphism of the category MRel.
For all terms M , bags P , tests Q and repetition-free sequences x, y, z respectively containing the free variables of M, P, Q, we define by mutual induction the interpretations M x : D n → D, P y : D m → M f (D) and Q z : D k → 1 (1 is a singleton set and n, m, k are the lengths of x, y, z) as follows 2 : The interpretation is then extended to the elements of 2 Λ e by setting Σ k Since every test V is of the form τ [L 1 , . . . , L k ] we might define its interpretation directly by setting V Clearly the interpretation is monotonic, in the sense expressed by the following lemma.
Lemma 4.9. For any test-context C · (resp. term-context D · ) with free variables y, if M x ⊆ N x then C M x, y ⊆ C N x, y (resp. D M x, y ⊆ D N x, y ).
Proof. By a straightforward mutual induction on C · , D · .
The substitution lemmas above generalize straightforwardly to sums. Although Lemma 4.11 is stated in full generality, for the ∂ 0 λ-calculus with tests is only useful for N = 0. We keep this formulation since it is closer to the one needed in Section 7 for the full ∂λ-calculus with tests. Proof. It is easy to check that the interpretation is contextual. The fact that the semantics is invariant under reduction follows from Lemmas 4.10 and 4.11.
Full Abstraction for ∂ 0 λ-Calculus with Tests
A model is equationally fully abstract if the equivalence induced on terms by their interpretations is exactly ≈ τ O ; it is inequationally fully abstract if the induced preorder is τ O . Obviously, every inequationally fully abstract model is also equationally fully abstract.
In this section we prove that D is inequationally fully abstract for the ∂ 0 λ-calculus with tests (Theorem 5.11).
Building Separating Test-Contexts.
We are going to associate a test-context α + · with each element α ∈ D, the idea being that -for every closed term M -we have α ∈ M if and only if α + M converges.
By Corollary 3.24, we know that α + βreduces either to ε or to 0. The result follows by soundness (Theorem 4.12). Proof. The result follows by applying n times (one for each variable in x) Lemma 4.10 and Corollary 5.5.
The ensuing proposition is the key argument for proving that the model D is inequationally fully abstract. (i) ( a, α) ∈ M x , (ii) α + M a -/ x ε.
Proof. We have the following chain of equivalences: ( a, α) ∈ M x ⇔ a ∈ α + M x , by Corollary 5.5, ⇔ α + M a -/ x = ∅ and deg x i (M ) = #a i , by Lemma 5.8, using Remark 5.9, ⇔ α + M a -/ x ε, by Corollary 3.24, i.e. the fact that closed tests can only reduce to either ε or 0, and Theorem 4.12, i.e. the soundness of the model.
We are now able to prove the main result of the section.
Theorem 5.11. D is inequationally fully abstract for the ∂ 0 λ-calculus with tests (for all M, N ∈ Λτ ): (⇒) Assume that M x ⊆ N x , and let C · be a test-context closing both M and N and such that C M ε. By Theorem 4.12, C M = ε = {()}. By monotonicity of the interpretation we get C M ⊆ C N , thus C N = ∅. By Corollary 3.24 this entails that C N ε. (⇐) Suppose, by the way of contradiction, that M τ O N holds but there is an element ( a, α) ∈ M x − N x . Then the test-context C · = α + (λ x. · ) ais such that C M α + M a -/ x ε and C N ε by Proposition 5.10, which is a contradiction. The reader who is only interested in the extension of Theorem 5.11 (and of its corollary) to the full ∂λ-calculus with tests can skip safely the next section.
Full Abstraction for ∂ 0 λ-Calculus without Tests
In this section we are going to prove that tests do not add any discriminatory power to the contexts already present in the ∂ 0 λ-calculus. This means that whenever there is a test-context C · separating two test-free terms M, N (sending, say, M to ε and N to 0) there exists also a term-context D · that is still able to separate M from N , without using the operators τ and τ̄ . (As we will discuss in Section 9, this is not the case for the full ∂λ-calculus with tests.) From this syntactic result and the full abstraction for the ∂ 0 λ-calculus with tests (Theorem 5.11) we conclude that the model D is also inequationally fully abstract for its test-free fragment (Theorem 6.14, below).
6.1. The ∂ 0 λ-Calculus (Without Tests). The ∂ 0 λ-calculus is a restriction of the ∂ 0 λ-calculus with tests presented in Section 3. The restriction is obtained by erasing from the syntax the constructors τ and τ̄ and the corresponding reduction rules, i.e. (τ ), (τ̄ ) and (γ). In other words the tests are no longer part of the language and → β is the only reduction rule of the system. This description is enough to completely characterize the system - for a more detailed description, see [13,14]. Notation 6.1. We write Λ r (resp. 2 Λ r ) for the set of (resp. finite sums of) terms of the ∂ 0 λ-calculus. The set of all (term-)contexts of the ∂ 0 λ-calculus will be denoted by Λ r · . We still write M, N, L, H for terms in Λ r , M, N, L, H for sums of terms in 2 Λ r , P, Q for bags and D · for contexts. This will not create confusion because we will always specify the set they belong to.
In order to properly define the operational pre-order in this setting, we first need to introduce the notion of solvable term.
6.2. Solvability in the ∂ 0 λ-Calculus. In λ-calculus [1] a term M is solvable whenever there exist suitable arguments that, once supplied to M , make it reduce to the identity; this means that M is able to interact operationally with the environment.
In resource calculi solvability has been thoroughly studied by Pagani and Ronchi Della Rocca in [27,28]. Their work needs to be adapted because of the absence of promotion in our system. For the ∂ 0 λ-calculus the good notion of solvable term is the following. Definition 6.2. A term M ∈ Λ r is solvable if there is a term-context D · such that D M β I. We say that M is unsolvable otherwise. Reading [27,28] one may wonder why in the previous definition we do not ask more generally that D M β I + N for some N ∈ 2 Λ r . This is due to the fact that in our ∂ 0 λ-calculus the two definitions are equivalent, as shown in the next lemma. (So we choose the easier formulation.) Lemma 6.3. Let M ∈ Λ r be a closed term. If M β I + M for some M ∈ 2 Λ r , then there exists a sequence P of closed bags such that M P β I.
Proof. Suppose M closed such that M β I + M. Then M is also closed and normalizes to a sum M = Σ j λ y j .M j such that each M j is not an abstraction itself. Now, if M = I then we are done as M β I + M β I + I = I. Otherwise, let h be the maximum among the lengths of the sequences y j . Then M [I] ∼h is again a sum of closed terms and normalizes to a sum M of closed abstraction terms whose size is strictly smaller than M . The reason is that for each summand (λ y j .M j )[I] ∼h which does not reduce to 0, M j must contain exactly one occurrence of each variable in y j . Hence M j [I]/ y j {0/ y j } has the same size as (λ y j .M j ) but it reduces (via contraction of the I that has replaced the head variable of M j ) to a term having a strictly smaller size, unless λ y j .M j ≡ I. Iterating this reasoning a number of times bounded in terms of the size of M concludes the proof. Following [27,28] we are going to characterize solvability from both a syntactic and a semantic point of view (Theorem 6.5).
Proposition 6.4. Let M ∈ Λ r and let FV(M ) = x. If M reduces to a normal form different from 0, then there are two sequences P , P′ of closed bags such that M P P′/ x {0/ x} β I + M′ for some M′ ∈ 2 Λ r . Proof. By induction on the size of M . Let x = x 1 , . . . , x n and suppose that M β λy 1 . . . y m .yQ 1 · · · Q q + M where m, q ∈ N, Q i = [M i,1 , . . . , M i,k i ] for all 1 ≤ i ≤ q, each M i,j is in normal form for every 1 ≤ j ≤ k i and M ∈ 2 Λ r . For the sake of simplicity, assume y = y h for some 1 ≤ h ≤ m (the proof is analogous when y ∈ x).
By induction hypothesis, for all 1 ≤ i ≤ q and 1 ≤ j ≤ k i there are sequences P i,j , P i,j , P i,j of closed bags such that M i,j P i,j P i,j / y P i,j / x {0/ y, x} β I + M i,j for some M i,j ∈ 2 Λ r . In the following, we will denote by σ i,j the substitution P i,j / y P i,j / x {0/ y, x}.
Note that in the statement above M must be closed because M P P / x {0/ x} is. Theorem 6.5. Let M ∈ Λ r ; then the following three statements are equivalent.
(i) M is solvable, (ii) M β N + N′ for some normal term N and some N′ ∈ 2 Λ r (equivalently, the normal form of M is different from 0), (iii) M x ≠ ∅. Proof. (i ⇒ ii) Suppose by contradiction that there is no normal N such that M β N + N′ for some N′ ∈ 2 Λ r . Since the ∂ 0 λ-calculus is strongly normalizing, the only possibility is that M β 0. Therefore, for every term-context D · we would have D M β D 0 = 0. This is a contradiction since the calculus is Church-Rosser and by hypothesis there should be a term-context D · such that D M β I. (ii ⇒ iii) Induction case p > 0. By induction hypothesis, there exist ( c i,j , β i,j ) ∈ L i,j x, z for each 1 ≤ i ≤ p and 1 ≤ j ≤ k i . Let b i = [β i,1 , . . . , β i,k i ] for every 1 ≤ i ≤ p and a 0 = ( [], [b 1 :: · · · :: b p :: α], []) ∈ M f (D) n+m where the only non-empty multiset is in position n + h. Then ( a 0 , b 1 :: · · · :: b p :: α) ∈ z h x, z and ( a i , b i ) ∈ P i x, z for a i = Σ 1≤j≤k i c i,j . It follows that ( a 0 a 1 · · · a p , α) ∈ z h P 1 · · · P p x, z . We conclude since z h P 1 · · · P p x, z = ∅ if and only if λz 1 . . . z m .z h P 1 · · · P p x = ∅.
(iii ⇒ ii) Suppose that M β 0. Then by Theorem 4.12 we have M x = 0 x = ∅, which is a contradiction. Definition 6.6. The operational pre-order O on the ∂ 0 λ-calculus is defined as follows (for all M, N ∈ Λ r ): M O N if and only if, for every term-context D · closing both M and N , the solvability of D M implies the solvability of D N . Let us consider the restriction of the preorder τ O (see Definition 3.28) to the terms of the ∂ 0 λ-calculus (without tests). Theorem 5.11 shows that for all terms M, N of the ∂ 0 λ-calculus (without tests) we have M x ⊆ N x ⇔ M τ O N . Later in this section (Theorem 6.14) we will prove that M x ⊆ N x ⇔ M O N . Hence the preorder O coincides, on the test-free language, with τ O . This is an a fortiori justification of Definition 6.6, which was anyway supported by the intuition that solvable ∂ 0 λ-terms are a kind of arena over which the solvability game can be successfully played and simulated by the throw/catch game of the test constructions.
6.3. Full Abstraction via Test Expansion. As mentioned in Section 3, the term τ̄ (V ) roughly corresponds to V preceded by an infinite sequence of dummy λ-abstractions; dually, the test τ [L 1 ]| · · · |τ [L k ] corresponds to providing each L i with an infinite sequence of empty bags. (This is also clear from the reduction rules (τ ) and (τ̄ ).) In this section we show that the infinite nature of these sequences is not essential in the ∂ 0 λ-calculus. Roughly speaking, one can find an n such that λx 1 . . . x n .V has the same behaviour as τ̄ (V ) and n i 's such that each L i [] ∼n i has the same behaviour as τ [L i ]. The parallel composition V = V 1 | · · · |V k can be simulated in the ∂ 0 λ-calculus by M = λx.x[V 1 , . . . , V k ] in the sense that V converges iff each V i converges and, similarly, M is solvable iff each V i is solvable.
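To make this intuition concrete, the following Python sketch shows one possible reading of the finite expansion on a toy abstract syntax: each labelled τ̄ becomes a prefix of dummy abstractions, each labelled element of a test is applied to the corresponding number of empty bags, and the resulting parallel composition is packed as λx.x[· · ·]. This is only an illustration of the intuition, not the official Definition 6.10 below; the AST classes, the function ell and all names are invented here, and capture-avoidance of the bound variable x is ignored.

    from dataclasses import dataclass

    @dataclass
    class Var:
        name: str            # a variable occurrence

    @dataclass
    class Lam:
        var: str             # lambda-abstraction: lambda var. body
        body: object

    @dataclass
    class App:
        fun: object          # application of a term to a bag (list) of terms
        bag: list

    @dataclass
    class TauBar:
        index: int           # labelled co-test constructor tau_bar_index(test)
        test: object

    @dataclass
    class Tau:
        elems: list          # labelled test: list of (index, term) pairs

    def expand(e, ell):
        # Replace each tau_bar by ell(index) dummy abstractions and apply each
        # test element to ell(index) empty bags; a test becomes lambda x. x[...].
        if isinstance(e, Var):
            return e
        if isinstance(e, Lam):
            return Lam(e.var, expand(e.body, ell))
        if isinstance(e, App):
            return App(expand(e.fun, ell), [expand(t, ell) for t in e.bag])
        if isinstance(e, TauBar):
            body = expand(e.test, ell)
            for i in range(ell(e.index)):
                body = Lam("_dummy%d" % i, body)
            return body
        if isinstance(e, Tau):
            parts = []
            for idx, t in e.elems:
                t = expand(t, ell)
                for _ in range(ell(idx)):
                    t = App(t, [])        # supply an empty bag
                parts.append(t)
            return Lam("x", App(Var("x"), parts))
        raise TypeError(e)

    # With this reading the empty test expands to lambda x. x[]:
    print(expand(Tau(elems=[]), ell=lambda i: 2))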
We then define a test-expansion (Definition 6.10), from terms of the ∂ 0 λ-calculus with tests to test-free terms, formalizing this intuition. In order to expand the occurrences of τ̄ and the elements of a test the correct number of times, we first need to "name" each occurrence in a different way. For this reason we label such occurrences with pairwise distinct indices. Definition 6.7. A labelled expression A is an expression of the ∂ 0 λ-calculus with tests such that every occurrence of a τ̄ and every element of a test have been decorated with distinct natural numbers (called indices). We denote by (Λτ ) lab , (Λ b ) lab , (Λ τ ) lab , (Λ e ) lab , (Λ τ · ) lab the sets of labelled terms, labelled bags, labelled tests, labelled expressions, and labelled term-contexts, respectively.
Let A ∈ 2 (Λ e ) lab be a sum of labelled expressions. We write Ã for its underlying expression; in other words Ã is obtained by stripping off all indices from A. We write dom(A) for the set of indices occurring in A. Note that the domains of two summands A, A′ ∈ A may have a non-empty intersection. From (2) we note that Ã = A for all test-free labelled expressions. From (5) we note that in a labelled bag the labels actually occur within its elements.
Definition 6.9. The reduction semantics for labelled expressions is inherited straightforwardly from the ∂ 0 λ-calculus with tests. In the β-rule, the terms are substituted together with their indices.
Since there is no duplication during the reduction, if A is a labelled expression reducing to A then A is a sum of labelled expressions (that is, all the indices occurring within each A′ ∈ A are pairwise distinct). Definition 6.10. Let A ∈ (Λ e ) lab be a labelled expression and let ℓ be a function from N to N. The ℓ-expansion A ℓ of A is an expression of the ∂ 0 λ-calculus without tests, defined by induction on A. In particular ε ℓ = λx.x[] for all ℓ. This is extended to sums by setting (Σ i A i ) ℓ = Σ i A i ℓ and to contexts by setting ( · ) ℓ = · .
Obviously, for all test-free labelled expressions A we have A ℓ = A for all ℓ. The proofs of the following lemmas are given in the technical Appendix A. Lemma 6.12. Let V ∈ 2 (Λ τ ) lab be a sum of labelled closed tests. If V reduces to ε then there exists a map ℓ : N → N such that V (ℓ+k) is solvable for all k ∈ N. Lemma 6.13. Let V ∈ 2 (Λ τ ) lab be a sum of labelled closed tests. If V reduces to 0 then there exists a natural number k such that V (ℓ+k) reduces to 0 for all ℓ : N → N.
We are now ready to state and prove the main theorem of this section, from which the equational full abstraction result for the ∂ 0 λ-calculus immediately follows. Theorem 6.14. D is inequationally fully abstract for the ∂ 0 λ-calculus: for all M, N ∈ Λ r , M x ⊆ N x if and only if M O N . Proof. (⇒) Assume that M x ⊆ N x and let D · be a term-context closing both M and N such that D M is solvable. By Theorem 6.5 we have D M ≠ ∅, hence D N ≠ ∅ by monotonicity (Lemma 4.9), and again by Theorem 6.5 D N is solvable. (⇐) Suppose, by way of contradiction, that M O N holds but there is an element ( a, α) ∈ M x − N x . By Proposition 5.10 the test-context C · = α + (λ x. · ) a - is such that C M reduces to α + M a -/ x , which reduces to ε, while C N does not reduce to ε (therefore C N reduces to 0 by Lemma 3.23). Let C · ∈ (Λ r · ) lab be a labelled context whose underlying context is C · . By Lemma 6.12 there exists ℓ such that (C M ) (ℓ+k) is solvable for every k ∈ N. By Lemma 6.13 there exists k ∈ N such that (C N ) (ℓ+k) is unsolvable. From Remark 6.11 (1) we get (C M ) (ℓ+k) = C (ℓ+k) M (ℓ+k) and (C N ) (ℓ+k) = C (ℓ+k) N (ℓ+k) . Since M, N are test-free we have M (ℓ+k) = M and N (ℓ+k) = N . We conclude because we found a term-context C (ℓ+k) such that C (ℓ+k) M is solvable and C (ℓ+k) N is unsolvable, which is a contradiction.
Corollary 6.15. D is equationally fully abstract for the ∂ 0 λ-calculus. Remark 6.16. A direct proof of Corollary 6.15 might be obtained by exploiting a corollary of the Böhm Theorem for the ∂λ-calculus proved in [21]. We preferred to provide this proof based on test-expansion because it clarifies the behaviours of our test operators and works also in the inequational case.
The rest of the paper is devoted to extending the full abstraction results of Subsection 5.2 to the ∂λ-calculus with tests. The main ingredients will be the head reduction introduced in Subsection 7.5 and the Taylor expansion we define in Subsection 8.1.
The ∂λ-Calculus with Tests
The ∂λ-calculus with tests is an extension of the ∂ 0 λ-calculus with tests with a promotion operator available on resources. In this calculus a resource can be linear (it must be used exactly once) or not (it can be used ad libitum), and in the latter case it is decorated with a "!" superscript. 7.1. Syntax. The grammar generating the terms, the tests and the expressions of the ∂λ-calculus with tests is given in Figure 4(a). This grammar is the same as the one for the ∂ 0 λ-calculus with tests (in particular tests are still plain multisets of linear resources), except for the rule concerning bags, which becomes P ::= [L 1 , . . . , L k , N ! ], where N is a finite sum of terms of this new syntax. We write Λτ ! for the set of terms generated by this new grammar, Λ τ ! for the set of tests, Λ b ! for the set of bags, Λ e ! for the set of expressions.
It should be clear that from now on bags are no longer plain multisets of terms: they are compound objects, consisting of a multiset of (linear) terms [L 1 , . . . , L k ] together with a promoted part N ! , whose content can be used as many times as needed. Remark 7.1. The ∂ 0 λ-calculus with tests is the sub-calculus of the ∂λ-calculus with tests in which all bags have the shape [L 1 , . . . , L k , 0 ! ], and this identification is compatible with the reduction rules.
As in the ∂ 0 λ-calculus with tests, we extend this syntax by multilinearity to sums of expressions with the only exception that the bag [L 1 , . . . , L k , (N + M) ! ] is not required to be equal to [L 1 , . . . , L k , N ! ] + [L 1 , . . . , L k , M ! ]. The intuition is that in the first expression N+M can be used several times and each time one can choose non-deterministically N or M, whereas in the second expression one has to choose once and for all one of the summands, and then use it as many times as needed.
7.2. Substitutions. Linear substitution is denoted and defined as in the ∂ 0 λ-calculus with tests (Figure 2(b)), except of course for bags, where we use the new rule of Figure 4(b). Linear substitution is extended to sums, as in A N/x , by bilinearity in both A and N. We also define the regular substitution A{N/x} for the ∂λ-calculus with tests, by simply replacing each occurrence of x in the expression A with N - in that way we get an expression of the extended syntax, since N is a sum in general. This operation is then extended to sums, as in A{N/x}, by linearity in A.
A Schwarz Theorem, analogous to Theorem 3.12, holds for the ∂λ-calculus with tests. Hence, given a sum of expressions A and a bag P = [L 1 , . . . , L k ] with x ∉ FV(P ), it still makes sense to set A P/x := A L 1 /x · · · L k /x because this expression does not depend on the enumeration of L 1 , . . . , L k in P . In particular A []/x = A. 7.3. Operational semantics. The reduction rules of the ∂λ-calculus extend those of the ∂ 0 λ-calculus with tests in the sense that they are equivalent on !-free expressions.
Definition 7.4. The rules (τ ) and (γ) are exactly the same as the corresponding rules of the ∂ 0 λ-calculus, while the β-reduction and the τ̄ -reduction are rephrased as in Figure 4(c). In this setting, their contextual closure needs to be closed also under the rule of Figure 4(d).
The ∂λ-calculus with tests is still Church-Rosser (just adapt the proof in [29]), while it is no longer strongly normalizing. For instance the term Ω := ∆[∆ ! ], for ∆ := λx.x[x ! ], has an infinite reduction chain, just like the paradigmatic homonymous unsolvable λ-term. Indeed, the usual λ-calculus can be embedded into the ∂λ-calculus with tests by translating every application M N into M [N ! ].
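As a small illustration of this embedding, here is a Python sketch of the translation M N into M [N ! ]; the AST classes and all names are invented for this example and are not part of the calculus.

    from dataclasses import dataclass

    # Ordinary lambda-terms.
    @dataclass
    class LVar:
        name: str

    @dataclass
    class LLam:
        var: str
        body: object

    @dataclass
    class LApp:
        fun: object
        arg: object

    # Terms with promotion: a bag holds a multiset of linear resources plus a
    # promoted part that can be used ad libitum (None stands for the empty sum 0).
    @dataclass
    class RVar:
        name: str

    @dataclass
    class RLam:
        var: str
        body: object

    @dataclass
    class Bag:
        linear: list
        promoted: object

    @dataclass
    class RApp:
        fun: object
        bag: Bag

    def embed(t):
        # Translate an ordinary lambda-term: every application M N becomes M [N!].
        if isinstance(t, LVar):
            return RVar(t.name)
        if isinstance(t, LLam):
            return RLam(t.var, embed(t.body))
        if isinstance(t, LApp):
            return RApp(embed(t.fun), Bag(linear=[], promoted=embed(t.arg)))
        raise TypeError(t)

    # The image of (lambda x. x x)(lambda x. x x) is Delta[Delta!], i.e. Omega.
    delta_src = LLam("x", LApp(LVar("x"), LVar("x")))
    omega = embed(LApp(delta_src, delta_src))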
Remark 7.5. Reductions in the ∂λ-calculus with tests may be tricky, due to the combination of linear and non-linear resources and substitutions. For instance, we can obtain eight Ω-like terms of the ∂λ-calculus with tests, of the form M [N (!) ] where M, N ∈ {D, ∆} and (!) denotes the optional presence of the promotion. Not surprisingly all these terms, except for Ω, reduce to 0.
In this framework a test-context C · (resp. term-context D · ) is a test (resp. term) of the ∂λ-calculus with tests having a single occurrence of its hole, appearing in term-position. Definition 7.7. Term-contexts D · and test-contexts C · are defined by the evident grammar extending that of the ∂ 0 λ-calculus with tests. The set of term-contexts is denoted by Λτ ! · and the set of test-contexts is denoted by Λ τ ! · . Definition 7.8. A test V converges, notation V ↓, if there exists a (possibly empty) sum V such that V reduces to ε + V.
Convergence should not be confused with normalization. Note that Definition 7.8 is the natural extension of Definition 3.25; in presence of promotion, ε and 0 are not the only possible "outcomes" of closed tests because there are looping terms that may never interact with an outer cork τ [·]. That case represents "failure", i.e., a scenario where there is no possible sequence of choices (among summands of terms resulting from reduction) leading to the positive test ε.
Definition 7.9. The operational pre-order τ ! O on the ∂λ-calculus with tests is defined by: M τ ! O N if and only if, for every test-context C · closing both M and N , C M ↓ implies C N ↓. The comparison between D (Example 4.7(3)) and ∆ (item (1)) gives a grasp on the semantic counterpart of non-linearity.
It is easy to check that both the linear and the classic substitution lemmas generalize to this context. While we can keep the same statement for Lemma 4.11, Lemma 4.10 must be rephrased as follows (indeed, deg x (M ), deg x (V ) are undefined when M, V contain non linear resources). Lemma 7.11 (Linear Substitution Lemma). Let M, L 1 , . . . , L k ∈ Λτ ! , Q ∈ Λ τ ! and P = [L 1 , . . . , L k ] (with y ∈ FV(P )). Then we have: (i) (( a, b), α) ∈ M P/y x,y iff there exist ( a i , β i ) ∈ L i x (for i = 1, . . . , k) and a 0 ∈ M f (D) n and b ∈ M f (D) such that (( a 0 , [β 1 , . . . , β k ] b), α) ∈ M x,y and k i=0 a i = a. (ii) ( a, b) ∈ Q P/y x,y iff there exist ( a i , β i ) ∈ L i x (for i = 1, . . . , k) and a 0 ∈ M f (D) n and b ∈ M f (D) such that ( a 0 , [β 1 , . . . , β k ] b) ∈ Q x,y and k i=0 a i = a. From these lemmas it ensues that D is also a model of the ∂λ-calculus with tests.
Theorem 7.12. D is a model of the ∂λ-calculus with tests. 7.5. Head Reduction. We now provide a notion of head reduction for the ∂λ-calculus with tests. Intuitively, head reduction is obtained by reducing a head redex, that is, a redex occurring in head position in an expression A. The main interest of introducing this reduction strategy is that it "behaves well" with respect to Taylor expansion, in the sense of Proposition 8.6.
The definition of term-and test-redexes is inherited from Definition 3.15. Among these redexes we distinguish those that are in "head" position.
Definition 7.13. A head redex is defined inductively as follows: -every test-redex V is a head redex, -a term-redex H is a head redex in both the term λ y.H P and the test τ [H P ]|V . Definition 7.14. We say that A → B is a step of head reduction if B is obtained from A by contracting a head redex. If A → B is a step of head reduction then also A + A′ → B + A′ is.
One-step head reduction is denoted by → h , while h indicates its reflexive and transitive closure.
Remark 7.15. Unlike in ordinary λ-calculus, an expression A may have more than one head redex, hence there may be more than one head reduction step starting from A.
Head reduction induces a notion of head normal form on (sums of) expressions.
This notion of head normal form differs from that given by Pagani and Ronchi Della Rocca in [28]. We keep this name because their definition captures the notion of "outer-normal form" rather than that of head normal form, and in fact they changed terminology in [27].
The following lemma gives a characterization of terms and tests in head normal form. The following two lemmas concern reduction properties of promotion-free closed tests.
Lemma 7.18. Let V ∈ Λ τ . If V is closed and V ≠ ε then it has a head redex (hence, V → h V′ for some V′).
Proof. By structural induction on V . It suffices to consider the case V = τ [M ]. We then proceed by cases on the structure of M (which must be closed). If M = λx.N then V head reduces using (τ ). If M is an application then it must be written either as M = (λy.N )P 1 · · · P k or as M = τ̄ (W )P 1 · · · P k (in both cases k ≥ 1) and hence V head reduces using either (β) or (τ̄ ), respectively. If M = τ̄ (W ) then V head reduces using (γ).
An analogous proof shows that if V reduces to 0 then V head reduces to 0. (⇐) Trivial, since head reduction is a particular case of reduction. Head reduction will play an essential role in the next section.
Full Abstraction via Taylor Expansion
In this section we are going to define the Taylor expansion of terms and tests of the ∂λ-calculus with tests. We will then use this expansion, combined with head-reduction, to generalize the full abstraction results obtained in Subsection 5.2 to the framework of the ∂λ-calculus with tests. 8.1. Taylor Expansion. The (full) Taylor expansion was first introduced in [11,12], in the context of λ-calculus. The Taylor expansion M • of an ordinary λ-term M gives an infinite formal linear combination of terms (equivalently, a set of terms) of the ∂ 0 λ-calculus. In the case of ordinary application it looks like (M N ) • = Σ ∞ n=0 (1/n!) M • [N • , . . . , N • ] (with n copies of N • in the bag), in accordance with the intended meaning and the denotational semantics of application in the resource calculus. In the syntax of Ehrhard-Regnier's differential λ-calculus the above formula looks like Σ ∞ n=0 (1/n!) M (n) (0)(N, . . . , N ), hence the connection with the analytical Taylor expansion is evident.
Following [21], we extend the definition of Taylor expansion from ordinary λ-terms to expressions of the ∂λ-calculus with tests. Since in our context the sum is idempotent, the coefficients disappear and our Taylor expansion corresponds to the support of the actual Taylor expansion.
As the set 2 Λ e ∞ of possibly infinite formal sums of expressions is isomorphic to P(Λ e ), in the following we feel free to use sets instead of sums.
The (full) Taylor expansion of A is the set A • ⊆ Λ e which is defined (by structural induction on A) in Figure 5.
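As an aside, the support of the Taylor expansion of a pure λ-term can be enumerated mechanically once the bag sizes are bounded. The following Python sketch does this for the application case described above; since the sum is idempotent only the set of approximants matters. The term encoding and the bound max_bag are choices of this sketch, not of the paper.

    from itertools import combinations_with_replacement

    def taylor(term, max_bag):
        # term is ('var', x) | ('lam', x, body) | ('app', fun, arg).
        # Returns the set of resource approximants, with bags encoded as tuples;
        # bag sizes are cut off at max_bag to keep the set finite.
        kind = term[0]
        if kind == 'var':
            return {term}
        if kind == 'lam':
            _, x, body = term
            return {('lam', x, b) for b in taylor(body, max_bag)}
        if kind == 'app':
            _, fun, arg = term
            funs, args = taylor(fun, max_bag), taylor(arg, max_bag)
            out = set()
            for m in funs:
                for k in range(max_bag + 1):
                    for bag in combinations_with_replacement(sorted(args), k):
                        out.add(('app', m, bag))
            return out
        raise ValueError(term)

    # The expansion of x y starts with x[], x[y], x[y, y], ...
    print(sorted(taylor(('app', ('var', 'x'), ('var', 'y')), max_bag=2)))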
Simple examples show that the Taylor expansion of an expression A can be infinite, and that two different terms can share the same Taylor expansion. In [20] it is proved that the Taylor formula holds in MRel. This property entails that Taylor expansion preserves the meaning of an expression in D, as expressed in the next theorem.
Theorem 8.4. For every expression A of the ∂λ-calculus with tests, A x = A • x , i.e. the interpretation of an expression coincides with that of its Taylor expansion. Proof. By adapting the proof in [20] of the analogous theorem for the differential λ-calculus.
We now need the following technical lemma stating the commutation of Taylor expansion with respect to ordinary and linear substitutions. The proof is lengthy but not difficult and is provided in Appendix A. For the sake of readability, in the next statements we use sums and unions interchangeably.
The next proposition is devoted to showing how Taylor expansion interacts with head-reduction. To ease the formulation of the next proposition we assimilate 2 Λ e ! to P f (Λ e ! ). Proposition 8.6. Let A ∈ Λ e ! and let A′ ∈ A • be such that A′ → h B′ , for some B′ . Then there exists B such that A → h B and B′ ⊆ B • .
Proof. The idea is that the syntactic tree of A′ has the same structure as that of A and we can define a surjective mapping of the redexes of A′ into those of A.
We only treat the case A′ = λ x.H′ P′ 1 · · · P′ p where H′ = (λy.M′ )P′ is a head-redex. From A′ ∈ A • we get A = λ x.HP 1 · · · P p for some H such that H′ ∈ H • . Hence we can conclude that λ x.M′ P′ /y {0/y}P′ 1 · · · P′ p ⊆ (λ x.M [ L]/y {N/y}P 1 · · · P p ) • . All other cases are simpler. Proof. Suppose that there exists V′ ∈ V • such that V′ reduces to ε. By Lemma 7.19 there is a head-reduction chain of the form V′ → h V′ 1 → h · · · → h V′ n = ε. By iterated application of Corollary 8.8 there are tests V i (for i = 1, . . . , n) such that V → h V 1 → h · · · → h V n and V′ i ⊆ V • i for each i. We conclude since ε ∈ V • n is only possible when ε ∈ V n .
8.2. Full Abstraction for the ∂λ-Calculus with Tests. We are now going to prove that the relational model D is inequationally fully abstract for the ∂λ-calculus with tests.
Proposition 8.11. Let M ∈ Λτ ! with FV(M ) ⊆ x = x 1 , . . . , x n , and let a ∈ M f (D) n and α ∈ D. Then the following are equivalent: (i) ( a, α) ∈ M x , (ii) α + M a -/ x ↓. Proof. (ii ⇒ i) Suppose that α + M a -/ x reduces to ε + V, for some V; then α + M a -/ x ≠ ∅. Hence, by Theorem 8.4, there is a closed test V′ ∈ (α + M a -/ x ) • such that V′ ≠ ∅. By Lemma 8.10, V′ = α + M′ a -/ x for some M′ ∈ M • and, since its interpretation is nonempty, V′ reduces to ε. By applying Proposition 5.10 we get ( a, α) ∈ M′ x ⊆ M x (by Theorem 8.4).
Theorem 8.12. D is inequationally fully abstract for the ∂λ-calculus with tests: for all M, N ∈ Λτ ! , M x ⊆ N x if and only if M τ ! O N . Proof. (⇒) Assume that M x ⊆ N x and let C · be a test-context closing both M and N such that C M reduces to ε + V, for some V; then we have C M ≠ ∅. Thus, by monotonicity of the interpretation we get C M ⊆ C N = (C N ) • ≠ ∅. By Corollary 3.24 there is V ∈ (C N ) • such that V reduces to ε and we conclude that C N ↓ by applying Proposition 8.11. (⇐) Suppose by contradiction that M τ ! O N , but there is an ( a, α) ∈ M x − N x . By Proposition 8.11 α + M a -/ x ↓ and since M τ ! O N we have α + N a -/ x ↓. Again, by Proposition 8.11, ( a, α) ∈ N x . Contradiction. Corollary 8.13. D is equationally fully abstract for the ∂λ-calculus with tests.
Conclusions and Further Works
In this paper we defined the interpretation of several resource calculi into the relational model D and characterized the equality induced on the terms from an operational point of view. The analogous question for the untyped λ-calculus was addressed in [19], where it is shown that the λ-theory induced by D is H ∗ , and therefore D is fully abstract for the λ-calculus.
In the first result of our paper we proved that the model D is also (in)equationally fully abstract for the ∂ 0 λ-calculus with tests. Such a proof is simplified by the absence of promotion in the calculus, which allows us to work in a strongly normalizing framework. The interest of this proof is that it generalizes along two directions.
The first direction aims to get rid of the tests, while remaining in the promotion-free fragment of the calculus. To extend this result to the ∂ 0 λ-calculus without tests we defined the test-expansion - a translation from tests to terms replacing every occurrence of a test operator τ, τ̄ by a suitable number of empty applications or dummy lambda abstractions. By applying this translation to a test-context separating two terms, we obtain a term-context having the same discriminatory power. This is not surprising since everything is finite in the ∂ 0 λ-calculus (finite sums, finite reduction chains), therefore the infinitary nature of our test operators can be simulated by terms whose size is big enough.
The second direction aims to extend the full abstraction result to the ∂λ-calculus with tests (and promotion available on resources). The main contribution of the paper is to show that this generalization can be done just by combining the properties of the head reduction and of the Taylor Expansion.
It is worth noticing that the test expansion method cannot be applied in the presence of promotion because D is not fully abstract for the ∂λ-calculus; in other words the tests are necessary to obtain the last result. This has recently been shown by Breuvart [6], who exhibited two terms of the ∂λ-calculus that are observationally equivalent but have different interpretations in D. The idea of the counterexample is to build, using fixpoint combinators, a term M reducing (eventually) to an infinite sum of terms whose head variable is preceded by an increasing number of lambda abstractions. This term is annihilated by the context τ [ · [τ̄ (ε)]] because the operator τ "eats" all the lambda abstractions and substitutes the head variable of each component of the sum by 0, while we know that the same context sends I to ε. The author then proved that no context of the ∂λ-calculus can simulate this behaviour.
These results can be summarized in a table listing, for each calculus, the corresponding operational preorder. The definition of ! O is analogous to that of O , using the definition of may-solvable given in [28]; the definition of the observational preorder on the ordinary λ-calculus is the usual one given in [25]. Breuvart's counterexample raises the problem of finding a model that is actually fully abstract for the ∂λ-calculus without tests. Question 9.1. Is it possible to find a model living in MRel that is fully abstract for the ∂λ-calculus? It is known that the structure of the underlying Cartesian closed category may affect the theories of all models living in it. For instance in [20] it is shown that terms having the same Taylor expansion are equated in all models living in MRel. It is therefore possible that Question 9.1 admits a negative answer. If this is the case, then the following question becomes interesting. Question 9.2. Is it possible to find a new comonad T such that the (co)Kleisli category Rel T contains a fully abstract model of the ∂λ-calculus?
Indeed, the comonad M f (−) of finite multisets is not the only one that leads to models of the ∂λ-calculus. For instance it has been shown by Carraro, Ehrhard and Salibra in [9] that one can consider exponential functors with infinite multiplicities. However, their models do not even validate the Taylor expansion, and are therefore not suitable to solve Question 9.2. The challenge is to find other kinds of comonads.
Let ℓ be a function from N to N. Given a natural number k ∈ N we write ℓ[n := k] for the map which coincides with ℓ, except on n, where it takes the value k. We let (ℓ + k) denote the function defined by (ℓ + k)(x) = ℓ(x) + k.
In the following proofs we write A n h B if A reduces to B in n steps of head reduction, which is introduced in Section 7.5 for the full ∂λ-calculus with tests. Lemma A.3 (Lemma 6.12). Let V ∈ 2 (Λ τ ) lab be a sum of labelled closed tests. If V reduces to ε then there exists a map ℓ : N → N such that V (ℓ+k) is solvable for all k ∈ N.
Proof. In the proof we use the characterization of solvability given in Theorem 6.5(ii). We proceed by induction on the length n of a head reduction V h ε (which exists by Lemma 7.19). For the sake of simplicity we assume that in the sum V we first reduce a component that head reduces to ε (only when V = ε + W do we start reducing within W).
Case n = 0. Then V = ε and ε ℓ = λx.x[] independently from ℓ, which is solvable. Case n > 0. We have V → h V′ with V′ n−1 h ε. The proof is divided into sub-cases depending on the redex that is contracted.
|
2012-10-08T13:21:59.000Z
|
2012-09-13T00:00:00.000
|
{
"year": 2012,
"sha1": "23a237d290969098c261a78a1bddcb459671da48",
"oa_license": "CCBYND",
"oa_url": "https://lmcs.episciences.org/1047/pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "61c72dd279b0daf87cff6845edd4b555512a4294",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
}
|
250664817
|
pes2o/s2orc
|
v3-fos-license
|
A portable laser heating microscope for high pressure research
We report the progress of the construction of a portable laser heating microscope for a broad range of materials studies at extreme pressure-temperature conditions. The essential features are portability, a broad temperature range, and a modular design making it flexible for a variety of applications in high pressure research for different environments, including in synchrotron and neutron applications and optical spectroscopies. The integrated instrument functions like a microscope, containing an infinity-corrected microscope head, fiber lasers, IR and UV spectrograph modules, IR and visible CCD cameras, and a control system. It provides stable laser heating on an area greater than 30 μm in diameter. Temperature can be controlled and reliably measured down to 500 K. Extending temperature measurement over 10,000 K with the short wavelength optics is discussed.
Introduction
In the laser heated diamond anvil cell (DAC) technique, first introduced by Ming and Bassett [1], a laser beam passes through the transparent diamond anvil and locally heats only the laser-absorbing sample, without heating the gasket and other DAC components, thus avoiding interference with the DAC operation. The laser heating (LH) technique is now widely used to heat DAC samples to thousands of degrees. The temperature of a heated spot is usually measured by monitoring thermal radiation through the so-called spectroradiometry technique [2,3]. In order to reach a uniform volume temperature in three dimensions, the double-sided laser heating technique was developed by shaping the laser beam profile, optimizing the sample configuration, and refining the optical arrangement [4,5]. The LH technique has been successfully integrated with a variety of micro-probing techniques, including synchrotron, neutron, and optical laser (e.g., Raman, Brillouin) methods.
Conventional LH systems, with their typical meter-length lasers, >100 kg power supplies, cooling systems, rigid optical trains and optical tables, are difficult to move and time-consuming to align, and must be regarded as fixed instruments at individual facilities. High-pressure research programs are greatly enhanced at the handful of facilities where dedicated on-line LH systems are installed (e.g., Refs. [6][7][8][9][10]). Numerous facilities with powerful analytical techniques, however, have not yet been able to install LH systems due to technical and budget constraints. A key issue is portability. The high-pressure DAC itself is a portable and flexible component. Making LH systems portable and available to all these analytical facilities will open new opportunities in high pressure-temperature science and technology.
With the advancement of laser technology, fiber lasers the size of a small flashlight are now available with hundreds of watts of power and better collimation than the large, fixed YAG or YLF lasers. They have been successfully used in the LH DAC [11]. Using a fiber laser solves the major obstacle in designing a portable LH system. Recently, a compact laser heating system based on a fiber laser has been constructed for synchrotron applications [12]. The reported bench-top system can be pre-aligned, minimizing the setup time for specific synchrotron applications. However, that system is still based on a fixed DAC position located almost in the middle of the bench, limiting the flexibility of integrating it with various probing instruments. Here, we report a system that is as portable and flexible as a microscope, with the following features: Portability - The heating microscope can be installed and integrated at a facility and optimized for high pressure-temperature DAC experiments within a short time period (1-2 hours). For double-sided heating, two systems can be employed.
Co-axial alignment: The heating laser and the imaging path are co-axially aligned within the microscope. Thus, the visual object, the heating target and temperature monitoring point are engineered to be at the same location, without any additional alignment.
Modular design -The system is based on an infinity corrected microscope, making it possible for different geometries for objective lenses to maximize the flexibility. The IR and UV-VIS spectrographs are independent, pre-aligned modules that can be put together with minimal adjustment or can be added to other existing fixed LH systems.
Extended medium temperature range -The system covers the important temperature range of 500 -1200 K with an IR detector and optics.
Extended maximum temperature range - With ample fiber laser power (2 x 100 W) and a UV detector and optics, the system can extend temperature measurements above 5000 K.
Optics layout
A typical optical layout for a system is shown in figure 1. The infinity corrected optics allows the insertion of auxiliary devices, such as beam splitters and intermediate tubes, into the optical pathway between the objective and the zoom body without introducing spherical aberration, requiring focus corrections, or creating other image problems. The heating laser is delivered into the microscope pathway by a polarizing cube beam splitter (bs2). The image signal is divided by a 50/50 beam splitter (bs3), with one branch feeding into an optical fiber for temperature measurement and the other into a camera (InGaAs or CCD) for visual observations. Depending on the application, an objective with a suitable working distance may be selected. The supporting tubes for an objective can be extended or shortened as needed. Together with additional mirrors, the system can be used for various types of DACs in different geometries. All the optics are mounted on a rigid breadboard for stability (figure 2).
Heating laser optics
The fiber laser (IPG Inc.) delivers polarized, well-collimated light at a wavelength of 1.064 µm up to 100 W in continuous mode. Typically, we set a fixed output power of the laser and change the power with a power regulator consisting of a wave-plate (wp1), a polarizing cube (bs1) and a beam stop (bs). By rotating the wave-plate, driven by a piezo-stage, the laser power can be regulated remotely from 0 to the maximum of 100 W. The laser beam is about 7 mm in diameter with a Gaussian beam profile. In order to have a uniform heating spot, a laser beam shaper (Mol Tech) is used, which effectively changes the beam profile from a Gaussian shape to a flat-top-like profile [13]. A reversed beam expander (be) may be used when we need to enlarge the heating spot to its maximum. The typical focal spot size is 30-50 µm in diameter, depending on the choice of objective. With the reverse beam expander controlled by a pneumatic device, the focal size can be further expanded by a factor of 1-2. Finally, through a polarizing beam splitter (bs2), the laser beam enters the microscope path in a coaxial manner. The co-axial arrangement ensures the alignment of the laser with the visual imaging path even when a mirror is added or the tube extended or shortened.
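For concreteness, if the wave-plate is a half-wave plate (an assumption of this sketch, not stated above), rotating it by an angle theta rotates the linear polarization by 2*theta, so the power transmitted through the polarizing cube follows Malus's law with the doubled angle; the numbers below are examples only.

    import numpy as np

    P_max = 100.0                              # W, fixed laser output
    theta = np.linspace(0.0, 45.0, 10)         # wave-plate rotation angle, degrees
    P_out = P_max * np.cos(np.radians(2.0 * theta)) ** 2
    for t, p in zip(theta, P_out):
        print("theta = %5.1f deg -> P = %6.2f W" % (t, p))   # 100 W down to 0 W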
Infinity corrected microscope
The Navitar 12x zoom body is the main frame of the microscope. Between the zoom body and the objective, the beam path is parallel, allowing the insertion of optics (beam splitters, filters, tube extensions). Two sets of objectives with focal lengths of 37 mm and 77 mm are currently in use, which give overall magnifications of 10x and 15x, respectively. A 50/50 beam splitter (bs3) is used to divide the image path into two branches: one for visual observation by a CCD or an InGaAs camera, the other for temperature measurement through an optical fiber. The optical fiber has a core size of 80 µm in diameter, thus monitoring an area of about 5-8 µm in diameter at the sample position. One feature of the system is that all optics are selected to be usable for short-wavelength IR, i.e., for the wavelength range up to 1800 nm. The InGaAs camera (Goodrich) is sensitive to radiation at 900-1800 nm, matching the spectrograph wavelength range used for temperature measurements in the medium temperature range (500-2000 K). The use of the short-wavelength IR camera avoids any focus corrections arising from wavelength mismatch, and also allows for visualizing thermal objects at temperatures as low as 450 K (figure 3b).
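As a quick consistency check of these numbers, the region sampled by the fiber is roughly the core diameter divided by the overall magnification:

    core_um = 80.0                       # fiber core diameter in micrometres
    for magnification in (10.0, 15.0):   # overall magnifications quoted above
        print("%4.0fx -> %.1f um at the sample" % (magnification, core_um / magnification))
    # 10x gives 8.0 um and 15x gives about 5.3 um, matching the 5-8 um figure.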
For applications at higher temperatures (>2000 K), shorter wavelength ranges (200-900 nm) are used for temperature measurements. A CCD camera is thus used for visual observation to match the covered wavelength range. All these changes are based on modular designs and can be made in a "plug-in" manner. Figure 4a shows a picture of the laser heating microscope with all shielding panels covered. The infinity corrected optics allows for different configurations for mounting the objective lenses (figure 4b). These configurations make the system flexible for a variety of applications in high pressure research for different environments, including axial and radial DAC geometries (see Section 4 below).
Temperature measurement
Temperatures are measured by spectroradiometry [2,3], i.e., by fitting thermal radiation signals in a given wavelength range to the Planck radiation function. Thermal radiation is delivered by an optical fiber and dispersed by an Acton spectrograph. For the temperature range of 500-2000 K, an InGaAs OMA detector (Princeton Instruments) is used, collecting thermal radiation at wavelengths of 1300-1600 nm. For higher temperatures, we collect thermal radiation with a CCD detector (Princeton Instruments) in a 200-300 nm wide section selected from the total range of 200-900 nm, with shorter wavelength sections used for higher temperatures.
Selection of wavelength range
In spectroradiometry both temperature and emissivity values are derived by fitting to the Planck radiation function
I λ = ε λ c 1 λ^-5 / [exp(c 2 /(λT)) - 1],    (1)
where I λ is the observed spectral intensity, λ is the wavelength, c 1 = 3.7418x10 -16 W m 2 , c 2 = 1.4388x10 -2 m K, T is the temperature, and ε λ is the emissivity. In the literature, most reported temperatures with the LH DAC have been obtained with the grey body assumption, i.e., with a wavelength-independent emissivity. From the limited available data, however, emissivity is a function of wavelength. To first order, ε λ may be expressed as a linear function of wavelength,
ε λ = ε 0 + ε 1 (λ - λ 0 ),    (2)
where λ 0 could be a central wavelength of the selected range. Under the grey body approximation, ε 1 = 0. Therefore, the emissivity ε 0 in Eq. (1) can be treated as a normalization factor in intensity. In this case, both temperature and emissivity ε 0 values can be obtained by fitting thermal radiation signals to the Planck radiation function (1).
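A minimal sketch of such a grey-body fit of Eq. (1) using scipy is shown below; the wavelength window, noise level, "true" values and starting guesses are illustrative assumptions rather than instrument parameters.

    import numpy as np
    from scipy.optimize import curve_fit

    C1 = 3.7418e-16      # W m^2, first radiation constant (as quoted above)
    C2 = 1.4388e-2       # m K, second radiation constant

    def planck_grey(lam, T, eps0):
        # Grey-body form of Eq. (1): I = eps0 * c1 / lam^5 / (exp(c2/(lam*T)) - 1)
        return eps0 * C1 / lam**5 / np.expm1(C2 / (lam * T))

    lam = np.linspace(1300e-9, 1600e-9, 200)            # m, the InGaAs window
    rng = np.random.default_rng(0)
    measured = planck_grey(lam, 1500.0, 0.3) * (1.0 + 0.01 * rng.standard_normal(lam.size))

    (T_fit, eps0_fit), _ = curve_fit(planck_grey, lam, measured, p0=(1200.0, 0.5))
    print("T = %.0f K, eps0 = %.3f" % (T_fit, eps0_fit))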
The challenge arises when we consider the wavelength dependence of the emissivity, even if we consider only the linear dependence (Eq. 2). Reliably obtaining the temperature, the emissivity and its wavelength dependence requires radiation data over the entire wavelength range. In all laser heating systems, however, only a limited wavelength range is used, so the two parameters (T, ε 1 ) are correlated with each other and cannot be uniquely derived by curve fitting. Often, we need to fix the wavelength dependence value ε 1 to constrain temperatures, with ε 1 either assumed to be zero (the grey body assumption) or taken from data at ambient pressure. Because emissivity data at high pressures and high temperatures are unavailable, the assumed ε 1 values inevitably involve errors. Therefore, for a reasonable temperature determination, the fitted temperature values should not be too sensitive to the choice of ε 1 .
To check the sensitivity in different wavelength ranges, we used radiation data from a standard lamp, which cover a large wavelength range of 300-2500 nm. To obtain a good fit over the entire wavelength range, it is necessary to introduce the wavelength dependence of emissivity (ε 1 ). We find that all three parameters (T, ε 0 , ε 1 ) can be reliably obtained if we use radiation data in a wavelength range covering both sides of the main peak. If a wavelength range on only one side is used, the fitted temperature values are tied to the choice of ε 1 . When the shorter wavelength side (500-800 nm) is used, the resultant temperatures show only a weak dependence on the choice of ε 1 , within 60 degrees for ε 1 values ranging from zero to the maximum reported in the literature [14]. If we use the longer wavelength side (1200-1600 nm), the fitted temperatures are strongly sensitive to the choice of ε 1 , with temperature differences of more than 500 degrees for only small changes in ε 1 . The exercise above illustrates that a proper selection of the wavelength range is critical for minimizing the effect of ε 1 in temperature determinations. Because of the limited wavelength range of the current spectrometer systems, we should keep the wavelength range on the shorter side of the main radiation peak.
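The sensitivity test described above can be reproduced numerically along the following lines; the assumed emissivity slope, the two windows and all numerical values here are illustrative only and not the lamp data used in the text.

    import numpy as np
    from scipy.optimize import curve_fit

    C1, C2 = 3.7418e-16, 1.4388e-2        # W m^2 and m K

    def radiance(lam, T, eps0, eps1, lam0):
        eps = eps0 + eps1 * (lam - lam0)  # linear emissivity of Eq. (2)
        return eps * C1 / lam**5 / np.expm1(C2 / (lam * T))

    T_true, eps0_true, eps1_true = 2500.0, 0.4, -1.0e5   # slope in 1/m (assumed)

    for lo, hi in [(500e-9, 800e-9), (1200e-9, 1600e-9)]:
        lam = np.linspace(lo, hi, 300)
        lam0 = lam.mean()
        data = radiance(lam, T_true, eps0_true, eps1_true, lam0)
        grey = lambda l, T, e0: radiance(l, T, e0, 0.0, lam0)   # eps1 fixed to 0
        (T_fit, _), _ = curve_fit(grey, lam, data, p0=(2000.0, 0.5))
        print("window %.0f-%.0f nm: grey-body fit gives T = %.0f K"
              % (lo * 1e9, hi * 1e9, T_fit))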
In our portable laser heating system, the thermal radiation is delivered by an optical fiber to the entrance slit of an Acton spectrograph which holds two detectors, an InGaAs OMA detector and a CCD detector, covering wavelength ranges of 1200-1800 nm and 200-900 nm, respectively. This broad wavelength coverage allows us to select optimal wavelength ranges for different temperature ranges (Table 1).

Table 1. Wavelength ranges and detectors used for different temperature ranges.
Temperature range (K)   Wavelength range (nm)   Detector type
500 - 1700              1300 - 1600             InGaAs
1200 - 4000             500 - 800               CCD
3000 - 5000             300 - 600               CCD
> 5000                  200 - 500               CCD
Temperature measurement down to 500 K
The system can measure temperatures down to 500 K by adopting short-wavelength IR optics. To cross-check the measurement accuracy, we have tested the system with a thermocouple (R type) mounted in a furnace. The thermal radiation directly from the thermocouple bead was recorded at different temperatures controlled by the furnace. Figure 5a compares the temperatures obtained from the thermal signals with the thermocouple readings. It can be seen that the two measurements are in reasonable agreement, with a standard deviation of about 10 degrees over the covered temperature range. We find that the quality of the fit to the Planck function is very good, with small statistical errors (<2 degrees) (figure 5b) in the wavelength range of 1400-1600 nm. At a given temperature, the temperature measurements from thermal signals are highly reproducible, within 2-3 degrees over multiple measurements. Below 500 K, the thermal signal in the measured wavelength range becomes too weak.
Extending maximum temperatures
The 500-900 nm range is the most commonly used in LH systems. This range is suitable for temperature determination between 1200 K and 4000 K. To extend the maximum temperature range, shorter wavelength ranges should be used (Table 1). To avoid detector saturation, a set of glass filters mounted on a motorized wheel is often used for signal attenuation. In the temperature range of 10,000 K, the radiation maximum peaks at around 250-300 nm and becomes narrower. A range of 200-500 nm can in fact cover signals from both sides of the main radiation peak, measuring the shape of the radiation function. It is then possible to obtain both the temperature and the wavelength dependence of the emissivity simultaneously. We are still in the process of establishing a reliable reference source of >7000 K for cross-checking the temperature determination. This is a subject of ongoing research and the results will be reported elsewhere.
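The quoted peak position follows directly from Wien's displacement law; a quick check:

    b_wien = 2.898e-3                    # m K, Wien displacement constant
    for T in (4000, 7000, 10000):
        print("T = %6d K -> peak at %.0f nm" % (T, b_wien / T * 1e9))
    # At 10,000 K the peak sits near 290 nm, consistent with the 250-300 nm figure.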
Applications
The flexible laser heating microscope is highly portable, with a weight of ~15 kg and dimensions of 54x38x20 cm 3 , within the airline carry-on luggage allowance. It has wide applications in high pressure research in different environments. Here, we present examples of the most commonly used geometries in synchrotron high pressure experiments. The heating spot size of 30-50 µm in diameter matches well the probing x-ray beam size, which is typically less than 5 µm at many beamlines of third generation synchrotron sources. By combining two heating microscopes, it is easy to realize double-sided heating.
Axial geometry
In the on-axis geometry, the incident x-ray beam passes through the diamond anvils (figure 6). By using x-ray transparent mirrors (e.g., on an amorphous carbon substrate), x-ray measurements in the forward direction can be performed in situ at high pressures and high temperatures. Such measurements include angle dispersive x-ray diffraction, nuclear forward scattering, and inelastic x-ray scattering, among others. Using an x-ray transparent gasket, scattering signals through the gasket can be measured, such as in nuclear resonant inelastic x-ray scattering and x-ray emission spectroscopy. Figure 6. Schematic of the axial geometry. The inset shows a configuration for double-sided heating in the axial geometry using two heating microscopes.
Radial Geometry
In the radial geometry, the loading axis of the DAC is perpendicular to the x-ray beam. This geometry is now increasingly widely used in synchrotron high pressure experiments. Applications include radial x-ray diffraction for rheology and elasticity, and various x-ray scattering and spectroscopy techniques, which often use a beryllium gasket to let relatively low energy x-rays pass through. In this geometry, the laser beam can be applied from the side. Depending on whether the loading axis is horizontally (Fig. 7a) or vertically (Fig. 7b) perpendicular to the x-ray beam, additional mirrors may or may not be necessary. If no mirror is needed, then objectives of short working distance can be used for a larger numerical aperture and better imaging quality (Fig. 7a).
Other applications
The above geometries are just a few typical examples. There are other geometries for specific x-ray techniques and measurements. For example, in the radial geometry, the loading axis may be tilted with respect to the x-ray beam (figure 7b). The heating microscope is flexible enough to follow the tilting and allow x-ray measurements in situ at high pressures and high temperatures. The heating microscope can also be used as a thermometer for temperature determination in both externally and internally resistively heated DAC experiments, covering temperatures above 500 K. The region between 300 and 500 K can be easily reached by heating tapes, hot plates and many other simple methods, and can be considered a small perturbation of ambient temperature. The portable system should also be applicable to neutron DAC experiments and various optical spectroscopy techniques, such as Raman and IR spectroscopy and Brillouin scattering.
|
2022-06-28T02:57:32.996Z
|
2010-01-01T00:00:00.000
|
{
"year": 2010,
"sha1": "24b077cd7a8fc0fb24260a53bca11f668a99741c",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/215/1/012191",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "24b077cd7a8fc0fb24260a53bca11f668a99741c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|