High-throughput sequencing: a failure mode analysis (PMC546001)

Abstract

Background: Basic manufacturing principles are becoming increasingly important in high-throughput sequencing facilities, where there is a constant drive to increase quality, increase efficiency, and decrease operating costs. While high-throughput centres report failure rates typically on the order of 10%, the causes of sporadic sequencing failures are seldom analyzed in detail and have not, in the past, been formally reported.

Results: Here we report the results of a failure mode analysis of our production sequencing facility based on detailed evaluation of 9,216 ESTs generated from two cDNA libraries. Two categories of failures are described: process-related failures (failures due to equipment or sample handling) and template-related failures (failures that are revealed by close inspection of electropherograms and are likely due to properties of the template DNA sequence itself).

Conclusions: Preventative action based on a detailed understanding of failure modes is likely to improve the performance of other production sequencing pipelines.

Background

In the past decade, the demand for DNA sequence data has driven the transformation of sequencing from a research activity into a manufacturing process. High-throughput sequencing facilities are focused on establishing automated procedures that maintain long read length and high overall success rates. It is neither practical nor economical to test each and every DNA template before sequencing [1]. Sequencing centres therefore monitor sequencing success on a larger scale, referencing overall pass rates and average read lengths, typically in terms of Phred 20 bases [2]. The percentage of "sporadic sequence dropouts", or failed reads that inevitably occur within a pool of high-quality data, is often overlooked and rarely examined.
Failed reads can be a result of numerous variables, ranging from the pipeline methodology employed to the nature of the samples being sequenced. A Failure Mode Analysis (FMA) strategy was developed to determine the likely causes of sporadic unsuccessful sequence reads. We systematically examined these failed reads in the context of a high-throughput sequencing pipeline to establish the mode and frequency of each type of failure. The standard production pipeline at Canada's Michael Smith Genome Sciences Centre (BCCRC, British Columbia Cancer Agency, Vancouver, Canada) has a capacity to generate over 3.6 million reads per year. As of December 8, 2004, we had generated 1,263,904,347 Q20 bases using our 384-well culturing, DNA preparation, and cycle sequencing procedures. The average Q20 read length of data generated in the past 12 months (December 2003 to December 2004) from various library types and vector systems is 751 bases. The present study was undertaken to provide insight into the causes of sequencing failures and possible corrective actions. Although our pipeline uses exclusively ABI 3700 and 3730XL automated sequencers, these results should be applicable, in principle, to the improvement of other high-throughput sequencing platforms.

Results

We generated 9,216 reads from 2,304 clones selected randomly from two cDNA libraries. For each of the two libraries, 1,152 bacterial colonies containing cDNA inserts were picked and arrayed into 384-well microtiter plates (Figure 1). To detect loss of DNA due to handling or equipment mishaps (i.e., a clogged capillary or tip), each microtiter plate was cultured in duplicate and replicates were processed using the same instrument model but on different physical units where available. A total of 4,608 reads were generated from the 5' end using the M13 Reverse (5'-CAGGAAACAGCTATGAC-3') primer and 4,608 reads were generated from the 3' end using the M13 Forward (5'-TGTAAAACGACGGCCAGT-3') primer.
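The replicate design described above (two libraries, three 384-well plates each, every clone sequenced from both ends in duplicate) can be sketched as a quick consistency check. The identifiers below are illustrative, not the centre's actual naming scheme.

```python
from itertools import product

# 2 cDNA libraries x 3 microtiter plates x 384 wells = 2,304 source clones
clones = list(product(range(2), range(3), range(384)))
assert len(clones) == 2304

# Each clone is cultured in duplicate and sequenced from both ends;
# F = 5'-end read, R = 3'-end read (matching the paper's F1/R1/F2/R2 labels)
reads = [(clone, end, replicate)
         for clone in clones
         for end in ("F", "R")
         for replicate in (1, 2)]

assert len(reads) == 9216                                    # full data set
assert sum(1 for _, end, _ in reads if end == "F") == 4608   # 5' reads
assert sum(1 for _, end, _ in reads if end == "R") == 4608   # 3' reads
```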
The average Q20 read length for the entire data set was 771 bases; the average pass rate, calculated as the percentage of sequencing reactions yielding a minimum of 600 Phred 20 bases, was 87%. Figure 2 illustrates a breakdown of Q20 read lengths from the full data set. The analysis methodology employed to determine the failure mode of each trace is outlined in Figure 3. 1,172 reads (13%) represent the failed portion of the data set (Q20 < 600) and were retained for further analysis to determine failure mode. The electropherograms from the 1,172 failed reads were evaluated and subsequently categorized into failure mode categories. 64 of these reads came from sequencer capillaries that were clogged; they were assigned to the "Blocked capillary" failure mode and removed from further analysis. The remaining 1,108 traces were further classified into nine additional failure mode categories: Low signal strength, Mixed clone with vector sequence, Mixed clone, no vector sequence, Low signal to noise ratio, Excess dye peaks, Hardstop, Repetitive sequence, Homopolymer stretch, and Poly A tail. The trace characteristics used to classify each read are described in Table 1. All of the classifications in Table 1 except "Low signal strength" are final failure mode categories and together contain 74% of the failed reads.

Table 1. Failure mode categories. Failed wells were distributed into each category based on observational data taken during sequencing pipeline procedures and manual evaluation of electropherogram traces.

Failure mode | Trace characteristic | No. of sequencing reactions | Percent of all failed wells (Q20 < 600)
Blocked capillary | Noisy or no data with a low signal intensity value (<100); verified with capillary control results. | 64 | 5.5
Low signal strength* | Noisy or no data with a low signal intensity value (<100) that is very close to or falls below the instrument's detectable limit. | 310 | 26.5
Mixed clone with vector sequence | Clean vector sequence followed by noisy data immediately after the cloning site. | 137 | 11.7
Mixed clone, no vector sequence | Noisy data throughout the trace with sufficient signal intensity. | 27 | 2.3
Low signal to noise ratio | Discernable sequence peaks with strong-intensity background noise. | 22 | 1.9
Excess dye peaks | Large dye front usually followed by noisy data. | 10 | 0.9
Hardstop | Abrupt end to good sequence. | 2 | 0.2
Repetitive sequence | Long stretch of repetitive DNA sequence followed by slippage in sequence or noisy data. | 17 | 1.5
Homopolymer stretch | Long stretch of a single nucleotide followed by slippage in sequence or noisy data. | 99 | 8.4
Poly A tail | Stretch of Ts (template A) followed by slippage in sequence or noisy data. | 484 | 41.3
Total | | 1,172 | 100.0
* Preliminary failure mode, further broken down into final failure modes in Table 2.

Figure 1. Process pipeline. Observational checks within the pipeline are shaded in grey. Absence of bacterial colonies, no-grows, and unusual observations are recorded on logsheets, then entered into the FMA database. A. Verification of the colony picking procedure to ensure that all original clones are accounted for in the source microtiter plates. B. To further confirm A, we stamp a replicate of each microtiter plate containing transformed bacterial cultures onto agar plates. The resulting pattern of colonies is examined to determine presence or loss of DNA in each well of the source microtiter plate. C. Every 384-well culture plate is visually examined for presence of DNA after bacterial DNA culturing. D. Agarose gel electrophoresis is used to evaluate presence and quality of prepared and purified template DNA. E. During sequencing reactions, all volume additions are visually verified and manually adjusted using a single-channel pipettor where necessary.

Figure 2. Average read length breakdown. Distribution of read length (Q20 bases) for the full data set of 9,216 reads.
Results were divided into 100 bp bins; failed reads (Q20 < 600 bp) make up 13% (1,172 reads) of all reads. Reads with Q20 < 100 bp make up the largest proportion of failed reads but contribute only 4% of the overall data set.

Figure 3. Analysis pipeline. Analysis methodology used to determine failure modes. FM = failure mode.

We identified several failure mode trends by examining process- versus template-related failures. Figure 4 shows that 62.4% of reads with Q20 < 100 bp fail as a result of process-related problems, while 75.5% of reads in the Q20 500–599 bp bin fail due to template-related characteristics. Process-related failures are more abundant among traces with lower average read lengths, whereas template-related failures increase with increasing average read length. A breakdown of failure modes for each 100 bp Q20 bin is shown for process-related failed reads in Figure 5A and for template-related failures in Figure 5B.

Figure 4. Distribution of overall process vs template failed reads. Percentage of process- versus template-related failures for each 100 bp Q20 read length bin. Numbers in each column represent the number of reads in each bin category. Process-related failure modes are more prevalent at lower average Q20 read lengths; template-related failure modes are more prevalent at higher average Q20 read lengths.

Figure 5. (A) Process-related failure mode distribution. Most of the process-related failed reads have Q20 < 100, with the primary mode of failure being "Low signal, no DNA", i.e., no template DNA in the sequencing reaction (confirmed by agarose gel electrophoresis). (B) Template-related failure mode distribution. Most of the template-related failed reads are a result of "Poly A tail", a 3' poly A stretch that leads to slippage in sequence data.
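The Q20 pass criterion (at least 600 Phred 20 bases) and the 100 bp binning used above can be sketched as follows. Counting every base with Phred quality ≥ 20 as the "Q20 read length" is an assumption here; the centre's exact trimming algorithm is not specified in the text.

```python
def q20_length(quals):
    """Q20 read length: number of bases with Phred quality >= 20.
    (An assumption -- production pipelines may use a trimmed window instead.)"""
    return sum(1 for q in quals if q >= 20)

def passes(quals, threshold=600):
    """A read passes if it yields at least 600 Phred 20 bases."""
    return q20_length(quals) >= threshold

def bin_100bp(q20):
    """Assign a Q20 read length to its 100 bp bin, e.g. 457 -> '400-499'."""
    lo = (q20 // 100) * 100
    return f"{lo}-{lo + 99}"

# Illustrative traces: one strong read, one failed low-signal read
good = [30] * 700 + [12] * 50     # 700 bases at Q30, noisy tail
bad  = [15] * 80 + [25] * 40      # only 40 bases reach Q20

assert passes(good) and not passes(bad)
assert bin_100bp(q20_length(bad)) == "0-99"
```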
Reads with the "Low signal, 5', 3' template characteristic" failure mode contribute the largest proportion of data with read lengths Q20 < 100. The most common process-related mode of failure was "Low signal, no DNA": 146 reads in which no template DNA was present in the sequencing reactions, as confirmed by agarose gel electrophoresis. 145 of these reads resulted in read lengths of less than 100 bp. Of these 146 reads, 128 original clones failed to grow in the source glycerol stock microtiter plates; the remaining 18 clones were lost during the DNA preparation/purification procedure. Mixed clone reads containing vector sequence were the second most prevalent process-related failure mode. The 137 reads in this category were distributed broadly across the Q20 100–599 bp range. The more successful reads yielded electropherograms with significantly stronger signal strength from one reaction product compared to the other. The most common failure mode resulting from template-related characteristics was "Poly A tail", failed reads with attenuated sequence due to poly A tails; there were 484 reads in this category, and their distribution is skewed towards the Q20 > 300 bp bins. Failures due to "Low signal, 5', 3' template characteristics" also contribute a significant number of template-related failures. These 116 reads were Q20 < 100 bp and make up the majority failure mode within that Q20 bin. Our sample set was made up of cDNA clones and therefore contains some 3' template biases inherent in the sequencing of cDNAs. To better approximate a distribution representing randomly generated end sequence from various library types, we removed all failed reads resulting from 3' template attributes. The two failure modes targeted for removal were "Poly A tail" and the 3' reads within the "Low signal, 5', 3' template characteristics" category.
484 and 94 reads were removed from these two categories, respectively, representing 49.3% of all failed reads. The resulting template-related failure mode distribution is shown in Figure 6; the process-related distribution of failed reads (Figure 5A) remains the same. Failed reads due to homopolymer stretches other than poly A tails make up the predominant failure mode within this new template-related failure distribution, with 99 reads, and their distribution is weighted towards Q20 > 300.

Figure 6. Template failure mode distribution. Template-related failure mode distribution excluding reads failing due to poly A tails and 3' template-related attributes. The majority of failed reads are a result of "Homopolymer", single-nucleotide repeat sequence leading to slippage in sequence data; these failed reads are skewed towards read lengths Q20 > 300.

We further analyzed the 310 reads (26.5% of failed reads) in the preliminary "Low signal strength" category in Table 1 to determine each failure mode more finely (Figure 2, Table 2). We removed 146 reads and classified them as process-related "Low signal, no DNA" failures as described above. For the remaining 164 reads, DNA was found to be present, but there was no evidence from gel images of excess DNA. It is therefore unlikely that the low signal failures were due to overloading of capillaries with excess carryover template DNA.

Table 2. Breakdown of "Low signal strength" reads. Reads from the preliminary "Low signal strength" failure mode category (310 reads) are further categorized into finer failure mode classifications. Failed reads from each unique clone are grouped together where possible (excluding reads that do not confirm presence of DNA on the evaluation agarose gel) to determine mode of failure.

Grouping of failed reads | No. of occurrences | Process-associated failures (no. of reads) | Template-associated failures (no. of reads) | Failure mode
F1/F2/R1/R2 | 4 | 0 | 16 | Low signal, 5' and 3' template characteristic
F1/F2/R1 | 1 | 1 | 2 | 1. Low signal, DNA lost during precipitation; 2. Low signal, 5' template characteristic
F2/R1/R2 | 1 | 1 | 2 | 1. Low signal, DNA lost during precipitation; 2. Low signal, 3' template characteristic
R1/R2 | 43 | 0 | 86 | Low signal, 3' template characteristic
F1/F2 | 5 | 0 | 10 | Low signal, 5' template characteristic
F1/R1 | 1 | 2 | 0 | Low signal, DNA lost during precipitation
No available clone pairing (singleton) | 44 | 44 | 0 | Low signal, DNA lost during precipitation
No DNA in agarose gel | 146 | 146 | 0 | Low signal, no DNA

The 164 reads that appear to have sufficient template DNA were grouped by source clone for further analysis. Four possible reads may have failed for each unique clone: 5' replicate #1 (F1), 3' replicate #1 (R1), 5' replicate #2 (F2), and 3' replicate #2 (R2). There were 49 occurrences where two reads failed from one source clone, 2 instances where three reads from the same clone failed, and 4 occurrences where all four reads failed from the same source clone. These results are shown in Table 2, and Table 3 summarizes the overall breakdown of the 310 reads in the "Low signal strength" category. A binomial test of our data set indicates that the likelihood of 2, 3, or 4 reads failing from the same source clone is 1.9 × 10^-3, 2.2 × 10^-5, and 9.7 × 10^-8, respectively. Thus, it is unlikely that more than one read failed per clone by chance.

Table 3. Summary of "Low signal strength" reads, summarizing the results of the 310 "Low signal strength" reads in Table 2.

Low signal strength failure mode | No. of reads
Low signal, no DNA | 146
Low signal, 5', 3' template characteristic | 116
Low signal, DNA lost during precipitation | 48
Total | 310

Groupings with two failed reads from the same end of the template ("F1/F2" and "R1/R2") showed trace data with a noisy background or no signal trace; there was therefore insufficient information to determine the actual template characteristic leading to the problematic results.
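The binomial probabilities quoted above can be approximately reproduced. Taking the per-read failure rate as 164/9,216 (the low-signal reads with template DNA present, out of all reads) is an assumption on our part; it is not stated in the text, but it yields values close to those reported.

```python
from math import comb

p = 164 / 9216   # assumed per-read "low signal" failure rate, DNA present

def p_exactly(k, n=4):
    """Binomial probability that exactly k of a clone's n reads fail."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Close to the reported 1.9e-3, 2.2e-5, and 9.7e-8
assert 1.5e-3 < p_exactly(2) < 2.2e-3
assert 1.8e-5 < p_exactly(3) < 2.6e-5
assert 8e-8 < p_exactly(4) < 1.2e-7
```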
Given the low probability of seeing two failed traces from one unique clone, we conclude that these failures are most likely a result of template DNA with 5' or 3' ends that are problematic for sequencing. Possibilities include secondary structure, mutated priming sites, a long poly A tail, or other homopolymer stretches near the priming sites on the template. The "F1/R1" grouping contained failed reads from both ends of the same replicate but successful end reads from the second replicate. This indicates that there were no clone-related attributes on either end of the template to prevent good sequence; the failed reads were therefore assessed as a process-related failure due to loss of DNA during the precipitation process. The "F1/F2/R1/R2" grouping indicates that no passing sequence data were obtained from the 5' or 3' ends of either replicate of the original source clone; all 16 reads showed presence of DNA at each observational check in the process pipeline. It is extremely improbable that four reaction products from a single clone were all lost during precipitation; it is more likely that the original clones had attributes that prevented successful sequencing from either end. We therefore assign 5' and 3' template-related failure modes to the 16 reads in this grouping. The final groupings, "F1/F2/R1" and "F2/R1/R2", contain 6 reads in total. Each includes failed reads from both ends of one replicate and a single failed read from the second replicate. We maintain the same logic in our failure mode evaluation by assessing a template-related failure for the 2 failed reads from the same replicate and a process-related loss of DNA during precipitation for the single read from the second replicate. There were 44 reads in the singleton category, where only a single read (F1, F2, R1, or R2) failed from each clone. These failures cannot be attributed to the template, as three other reads were successfully sequenced from the same source clone.
Failure mode was therefore likely to be process-related and due to loss of DNA during the precipitation wash steps.

Discussion

Having established a stringent pass criterion of 600 Phred 20 bases or greater, we observed a broad range of failure modes distributed among the failed reads. Reads of less than 100 bp did not yield useable data and were usually the result of process-related failure modes. Process-related mishaps in the pipeline include missed colony picks, blocked capillaries on sequencers, faulty tips on automated liquid handling devices, and template DNA lost during precipitation. The resulting sequences have low average read lengths or, in cases where DNA is totally lost, zero read length. 88% of reads within the largest process-related failure mode category, "Low signal, no DNA", were the result of failed cultures at the beginning of the process pipeline. The absence of DNA was verified early on by failed growth in the source microtiter plates and further confirmed by failed growth in the stamped replicate agar plates. These failures are perhaps due to missed bacterial colony picks, and adjustments such as recalibration or refining the morphology criteria of the colony picker should help. The second most prevalent process-related failure mode, "Mixed clone with vector sequence" (30% of process-related failed reads), is likely the result of two side-by-side colonies being picked into the same well, or of contamination originating from the use of automated liquid handlers. These problems might be alleviated by increasing the stringency of the proximity criteria on the colony picking device to ensure sufficient distance exists between two bacterial colonies before they are chosen for picking. As we use new tips on the instruments daily, increasing the length or number of wash cycles on the automated liquid handlers between volume transfers may help reduce carryover contamination from one plate to another.
This will need further investigation. Cross-contamination between adjacent wells can also arise through the mishandling of plates, or when tips are plunged too deeply into the wells, displacing volume into adjacent wells. Adjusting the liquid handler to reduce the depth and speed of tips entering each well may help prevent occurrences of cross-contamination within a plate. Despite the presence of mixed clones within one well, traces often yielded discernable sequence, with read lengths broadly distributed up to the Q20 500–599 bp bin. This is likely because one reaction product is present in greater amount than the other, leading to proportionately stronger signal strength. Template-related failures often still yielded reads of several hundred base pairs. Template characteristics that might contribute to failed or truncated reads include poly A tails, homopolymer stretches, and other highly repetitive regions. Problems in sequencing these types of regions are well documented in the literature and depend on the chemistry and methodology used in sequencing. These problems are best avoided by using an anchored primer for first-strand synthesis of cDNA as the first step in library construction. "Poly A tail" was the single largest cause of failed sequence in both the template-related failure category and the entire data set, yet these reads make up the majority in the higher failed read length bins. In our regular production pipeline, we often circumvent this problem by using a combination of a 3' primer to resolve sequence immediately following the poly A, coupled with an oligo(dT)23N (N = A, G, C) anchored primer to resolve the 3' ends of cDNA inserts.
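The anchored-primer workaround mentioned above can be made concrete: an oligo(dT)23N primer (N = A, G, C) can only anneal flush against the end of the poly(A) tract, because the single non-T anchor base must pair with the first non-A base of the template, which prevents the primer from sliding along the homopolymer. A minimal sketch:

```python
# oligo(dT)23N anchored primers: 23 T's plus one non-T anchor base (N = A, G, C)
ANCHORS = "AGC"
primers = ["T" * 23 + n for n in ANCHORS]

assert len(primers) == 3
assert all(len(p) == 24 for p in primers)
assert all(p.endswith(n) for p, n in zip(primers, ANCHORS))
```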
Conclusions

The FMA pipeline described here was tailored to our high-throughput 384-well automated sequencing pipeline, but many of the components of this platform are shared within the high-throughput sequencing community, such as the alkaline lysis procedure used to prepare and purify DNA, DNA sequencing using BigDye chemistry, ethanol precipitation to clean up reaction products, and equipment such as the Genetix QPIX, Beckman Coulter Biomek FX, and Applied Biosystems 3730xl DNA analyzer. For this reason, our FMA methods can be readily adapted to analyze other similar sequencing platforms. Extending the present study to other library types commonly sequenced in the high-throughput community, such as shotgun or Serial Analysis of Gene Expression (SAGE) libraries, should offer further information regarding template-specific failures. This failure mode analysis provides information for distinguishing between process- and template-related attributes that may lead to downstream failed sequence. It can therefore be a useful tool for auditing overall sequencing procedures and identifying key problematic steps in a process pipeline. Making proper adjustments to the pipeline based on the results will likely result in increased efficiency, enhanced data quality, and decreased cost.

Methods

Samples are processed in duplicate, with quality assurance and observational checks to account for the status of each read after every procedure leading to the final sequence read. These recorded observations facilitate downstream systematic analysis of failed sequence data. The regular core production pipeline was not altered for this study, as the purpose was to assess sequence failure modes under ordinary conditions in a high-throughput sequencing environment. An overview of the observational checkpoints within the FMA pipeline is outlined in Figure 1.
Template DNA is considered present in all wells up to the completion of DNA sequencing unless absence is indicated by no growth of bacterial culture or a blank lane on a gel. Once cycle sequencing has completed, the reaction products are precipitated, pelleted, then washed. As it is very difficult to accurately assess the presence of DNA after the precipitation procedures other than by sequencing, we draw conclusions regarding loss of sequencing reaction products during precipitation by a process of elimination: any failed read that results from a process-related loss of DNA and has no prior observation indicating absence of DNA is attributed to the precipitation procedure.

Transformation and colony picking

One microliter of ligation mix from each of two Populus trichocarpa cDNA libraries was transformed by electroporation into 40 μl of E. coli DH10B T1-resistant cells (Invitrogen). Transformed cells were recovered using 1 ml of SOC medium (prepared in house) and plated onto 22 cm × 22 cm agar plates (Genetix) containing 2xYT agar and 100 μg/ml ampicillin. Agar plates were incubated at 37°C for 14 hours. Bacterial colonies were picked from the agar plates and arrayed into 384-well microtiter plates (Genetix) containing 60 μl of 2xYT medium + 7.5% glycerol (made in house) using the Genetix QPIX automated colony picker (Genetix). A total of six 384-well microtiter plates, three for each of the two cDNA libraries, were picked. The plates were incubated at 37°C for 16 hours, then each microtiter plate was inspected for wells that contained no growth. The positions of all failed cultures were recorded. To further verify growth in each well after the incubation period, a disposable 384-well replicator was used to stamp bacterial culture from each 384-well microtiter plate onto a new 22 cm × 22 cm agar plate containing 2xYT agar and 100 μg/ml ampicillin.
Agar plates were incubated at 37°C for 14 hours, then inspected the next day for colonies from every well. The positions of all failed growths were recorded.

384-well culturing and DNA purification of plasmid clones

Two microliters of bacterial culture were transferred from each 384-well microtiter plate into a 240 μl 384-well deep well diamond plate (Axygen) containing 60 μl of 2xYT medium and 100 μg/ml ampicillin using a 384-well slotted inoculator (V&P Scientific). This was done in duplicate to create two sets of six inoculated 384-well deep well diamond plates. Inoculated plates were sealed with AirPore™ tape (Qiagen) and placed into a 37°C shaking incubator (New Brunswick Scientific C25 Incubator Shaker) at 350 rpm for 18 hours. After the incubation period, cultures were removed and each plate was inspected for growth and contamination. All failed cultures were recorded. Cultures from both replicates were then placed onto separate multi-tube floor vortexers (VWR) at maximum speed for approximately 5 minutes until all cells were resuspended. Cultures were stored at 4°C until ready for DNA preparation. DNA was prepared using alkaline lysis [3] with the following modifications implemented for the standard GSC template production pipeline. Culture blocks from both replicates were removed from the 4°C refrigerator and mixed using a multi-tube floor vortexer (VWR) for 5 minutes at maximum speed (or until all cells appeared resuspended). A Titertek MapC2 liquid handling device was used to dispense 60 μl of lysis buffer (Qiagen Buffer P2). After 5 minutes of lysis, 60 μl of neutralization buffer (Qiagen Buffer P3) was added. Plates were tape-sealed (Edge BioSystems clear tape) and mixed on a multi-tube vortexer at maximum speed for 2 minutes prior to centrifugation at 4250 × g for 45 minutes in a Jouan KR422 centrifuge.
120 μl of lysate were transferred from the pelleted culture blocks into 240 μl 384-well deep well diamond plates containing 90 μl per well of 100% isopropanol using a 384-well Hydra pipetting instrument (Robbins Scientific). Destination plates were sealed (Edge BioSystems clear tape) and mixed by inversion, followed by centrifugation at 2830 × g for 15 minutes in an Eppendorf 5810R centrifuge (Brinkmann Instruments). After centrifugation the isopropanol was decanted, the DNA pellets were washed with 50 μl of 80% ethanol using a Robbins 384-well Hydra, and the plates were left to dry upright for three hours on the benchtop. DNA pellets were resuspended in 10 mM Tris-HCl, pH 8, containing 10 μg/ml RNase A (Qiagen) and mixed for 1 minute at maximum speed on a multi-tube vortexer. Plates were briefly centrifuged at low speed, stored at 4°C overnight, then transferred to a -20°C freezer until required for DNA evaluation and sequencing reactions.

DNA evaluation

DNA preparations were evaluated by agarose gel electrophoresis. A 1.5 μl aliquot of purified DNA was combined with 1.5 μl of bromophenol blue loading buffer (0.21% bromophenol blue; 12.5% Ficoll), and 2 μl was loaded onto a 1.2% agarose gel. Samples were loaded using a 12-channel loader (Hamilton) beside 3 ng of 1 kb Plus DNA marker (Invitrogen). Gels were run at 120 volts for 90 minutes in TAE (Tris/acetate/EDTA) buffer, followed by staining for 35 minutes in SYBR Green nucleic acid stain (Cambrex). Gels were scanned using a FluorImager 595 (Molecular Dynamics) scanner. Each image was visually examined for the presence and quality of DNA, including genomic DNA contamination. All empty lanes and observations were recorded and entered into the FMA database.

DNA sequencing

DNA sequencing reactions were assembled in 384-well clear optical reaction plates (Applied Biosystems) using a Biomek FX workstation (Beckman Coulter).
In each 5 μl reaction (total volume) the following were added: 3 μl of purified plasmid DNA (~45 ng/μl), 0.26 μl of sequencing primer (5 pmol/μl, Invitrogen), 0.43 μl of 5X reaction buffer (Applied Biosystems BigDye Terminator 5X sequencing buffer), 0.77 μl of UltraPure water (Gibco), and 0.54 μl of BigDye v3.1 ready reaction mix (Applied Biosystems). Each well of the reaction plate was visually inspected for appropriate volumes after both the reaction mix and DNA additions. Volumes were manually adjusted using a single-channel pipette (Gilson) where required, and all observations were recorded. Sequence data were obtained using universal M13 Forward (5'-TGTAAAACGACGGCCAGT-3') and M13 Reverse (5'-CAGGAAACAGCTATGAC-3') primers on each set of replicate plates. Thermal cycling was performed on PTC-225 thermal cyclers (MJ Research) with 35 cycles of 96°C for 10 seconds; 52°C (M13 Forward primer) or 43°C (M13 Reverse primer) for 5 seconds; and 60°C for 3 minutes, followed by incubation at 4°C. Reaction products were precipitated by adding 2 μl of 125 mM EDTA (pH 8) and 18 μl of 95% ethanol per well, followed by centrifugation at 2750 × g for 30 minutes in an Eppendorf 5810R centrifuge. The EDTA/ethanol was immediately decanted and the reaction products were washed with 70% ethanol. The 384-well cycle plates were allowed to dry inverted for 15 minutes. Samples were resuspended in 10 μl of UltraPure water and analyzed using a 3730XL DNA analyzer (Applied Biosystems). The performance of each capillary on the four DNA analyzers used in this experiment was validated using one 384-well control plate per instrument. Each 384-well plate contained our in-house control standard, a full-length human cDNA clone obtained from the I.M.A.G.E. Consortium (I.M.A.G.E. ID #3609158, Lawrence Livermore National Laboratories). Blocked capillaries were recorded in the FMA database and traces originating from these capillaries were flagged.
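The 5 μl reaction recipe above sums exactly, and scaling it to a full 384-well plate is a routine calculation. The 10% overage for pipetting loss used below is illustrative, not the centre's stated practice.

```python
# Per-well sequencing reaction (volumes in microliters, from the recipe above)
reaction = {
    "template DNA (~45 ng/ul)": 3.00,
    "sequencing primer (5 pmol/ul)": 0.26,
    "5X BigDye sequencing buffer": 0.43,
    "ultrapure water": 0.77,
    "BigDye v3.1 ready reaction mix": 0.54,
}
assert abs(sum(reaction.values()) - 5.0) < 1e-9  # 5 ul total per well

# Master mix for one 384-well plate (everything except the template DNA),
# with an illustrative 10% overage for pipetting loss
overage = 1.10
master_mix = {k: round(v * 384 * overage, 1)
              for k, v in reaction.items() if "template" not in k}
```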
Sequence data were evaluated using PHRED software [2] (v.0.020425.c) and the chromatograms were viewed using a Java applet based on 'ted' [4], a publicly available trace file viewer. A relational database, "FMAdb", was created using MySQL for flexible querying of results. The FMAdb was populated with sequence data, plus process observations such as absence of bacterial colonies and no-grows, trace evaluations and information, as well as equipment and sequence run details.

Authors' contributions

GSY was responsible for the design and execution of this study, plus analysis of data and drafting of the manuscript. RAH conceived of the study and directed the analysis. JMS and SAB generated the data described in the study and participated in study design. DS developed several of the key protocols used in this study. MB provided quality assurance of the experimental procedures and participated in study design. MAM participated in the establishment of the production pipeline, read the manuscript, and provided comments.
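The schema of the FMAdb described above is not published. A minimal relational sketch of the kind of table and query it supports, using SQLite standing in for MySQL, with all table and column names invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE reads (
        read_id      INTEGER PRIMARY KEY,
        clone_id     TEXT NOT NULL,                     -- source clone
        read_end     TEXT CHECK (read_end IN ('F', 'R')),
        replicate    INTEGER,
        q20_length   INTEGER,
        failure_mode TEXT                               -- NULL for passing reads
    );
""")
con.executemany(
    "INSERT INTO reads (clone_id, read_end, replicate, q20_length, failure_mode) "
    "VALUES (?, ?, ?, ?, ?)",
    [("c1", "F", 1, 771, None),
     ("c1", "R", 1, 55, "Poly A Tail"),
     ("c2", "F", 1, 12, "Blocked capillary")],
)
# The kind of flexible query the FMA database enables: failure counts by mode
rows = con.execute(
    "SELECT failure_mode, COUNT(*) FROM reads "
    "WHERE failure_mode IS NOT NULL GROUP BY failure_mode"
).fetchall()
```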
Bridging Psychology and Mathematics: Can the Brain Understand the Brain? (PMC516795)

Mathematical measures of complexity shed light on why some concepts are inherently more difficult to learn than others.

"It was not only difficult for him to understand that the generic term dog embraced so many unlike specimens of differing sizes and different forms; he was disturbed by the fact that a dog at three-fourteen (seen in profile) should have the same name as the dog at three-fifteen (seen from the front).…Without effort, he had learned English, French, Portuguese, Latin. I suspect, nevertheless, that he was not very capable of thought. To think is to forget a difference, to generalize, to abstract. In the overly replete world of Funes there were nothing but details, almost contiguous details." —Jorge Luis Borges, "Funes the Memorious"

We are told scientists are divided into experimentalists and theoreticians. The dialectic description of the dynamics of science, with one tribe gathering data and collecting evidence and another tribe providing form to these observations, has striking examples that argue for the importance of synthesis. The 16th-century revolution, which settled the way in which we see the sky today, is probably one of the best examples of how comparatively ineffective each of these tribes can be in isolation. Tycho Brahe, the exquisite observer, who built, calibrated, and refined instruments to see in the sky what no one else could, collected the evidence to prove a theory that Copernicus had already stated years before (in a book he dedicated to the Pope). It was only many years later that Galileo established the bridge between theory and observation; he understood the data in terms of the theory and thereby cemented the revolution. Copernicus's statements, showed Galileo, were not only figments of his imagination; they were an adequate description of the universe as he and Brahe had observed it.
Since my first steps in biology, after a prompt departure from physics and mathematics, I have looked for such encounters between theory and experiment. I began studying the visual system and the series of fundamental works by Attneave (1954), Barlow (1960), and Atick (1992) on the relationship between our visual world and the way the brain processes it. Their research was based on a simple hypothesis: (1) the images that we see are highly redundant and (2) the optic nerve is a limited channel, thus the retina has to compress information. Compression, redundancy? How do such concepts relate to the biology of the brain? In the middle of the last century, working on the problem of communications, languages, codes, and channels, Claude Shannon proposed a very elegant theory that formalized intuitive ideas on the essence (and the limits) of communications ( Weaver and Shannon 1949 ). One of its key aspects was that, depending on the code and on the input, channels are not used optimally and thus do not carry all the information they potentially could. When we compress (zip) a file, we are actually rewriting it in a more efficient (though not necessarily more intelligible) language in which we can store the same amount of information in less space. This compression has, of course, a limit (we cannot convey the universe in a single point), and the mere existence of an optimal code is central to Shannon's idea. Attneave was probably the first to think that these ideas could help unravel how the brain worked, and in a long series of works relating these ideas to experimental data, it was shown that the retina's business is mainly to get rid of the redundancies in the visual world. About four years ago, Jacob Feldman published a paper, similar in spirit, proposing a simple explanation for a long-standing problem in cognitive science about why some concepts are inherently more difficult to learn than others ( Feldman 2000 ).
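Shannon's point that a redundant message can be rewritten in a more efficient code is easy to demonstrate with any general-purpose compressor. A small sketch, using zlib as a stand-in for (not an example of) an optimal code:

```python
import random
import zlib

# A highly redundant "message" compresses dramatically; noise-like
# data barely compresses at all.
redundant = b"blue sky " * 1000          # 9000 bytes, very repetitive
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(9000))

print(len(zlib.compress(redundant)))     # tiny: the structure is squeezed out
print(len(zlib.compress(noisy)))         # near 9000: little left to squeeze
```

The gap between the two compressed sizes is the redundancy the retina, on Attneave's and Barlow's view, is in the business of removing.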
An article whose first reference is to work carried out 50 years previously makes us suspect that an important gap is being filled. As in the previous experiments, Feldman borrowed a theory—he did not invent it—to explain longstanding and previously unexplained research in this area. Feldman's idea was based on a theory developed by Kolmogorov that established formal mathematical grounds to define and measure complexity. Kolmogorov's theory was focused on categories, which are just subsets of a universe, a bunch of exemplars that constitute a group. “Dogs” is a category in the universe of animals. Different statements can define the same category, for example, “big dogs” and “dogs that are not small” are the same group, and some information in the statement may be irrelevant because it does not aid in determining which elements belong or not. In the same way that Shannon spoke of a non-redundant code, Kolmogorov showed that categories could be defined with an optimal (non-redundant) statement. The length of this statement defines a measure of complexity termed Kolmogorov complexity. To visualize the intuitive nature of his theory it helps to do a thought experiment. Imagine a set of objects defined by, say, three features: form, color, and size. And imagine, for simplicity, that each feature is binary, that is, there are only two possible cases. Size is big or small, color is yellow or red, and shape can be either a triangle or a circle. This defines, of course, a universe of eight objects. We can now define categories within this universe: for example, all the circles (a category of four exemplars), or all the big and yellow objects (two exemplars), or all the triangles that are not red (again two) (see Figure 1). We can also define a category by enumeration, for example, the small red triangle, the big yellow circle, and the small yellow circle (three exemplars). Some rules (and thus the groups defined by these rules) are intuitively easier to define than others. 
“All the circles” is an easier statement to make (and probably to remember) than “small circles and yellow big objects.” This notion of difficulty is what Kolmogorov's theory formalized, stating that complexity was the length of the shortest definition (among all the possible definitions) of a given set. From this thought experiment, we can understand the logic of Feldman's paper, which showed that Kolmogorov complexity is very closely related to our intuitive notion of conceptual difficulty. Feldman presented subjects with all possible categories (of a fixed number of exemplars) in different universes and showed that the critical parameter to rank the difficulty of a given subset was its Kolmogorov complexity. Moreover, by explicitly presenting all the members and the nonmembers of a category to naïve subjects, he showed that we can spontaneously reduce a category to its minimal form and remember it without any explicit instruction. Thus, what Feldman found, following the original ideas of Shepard, was that our psychological measure of complexity—our difficulty in defining and remembering a category or concept—is also determined by the Kolmogorov complexity that describes it. Figure 1 Visualizing Kolmogorov's Complexity Intuitive categories can be defined by short statements. The universe: circles and triangles, red and yellow, big and small (A). Examples of easy categories: red objects (B); triangles (C). Example of a difficult category: yellow circles and small red circles (D). This essay is, in a way, about how we avoid becoming Borges's character Funes, who could not understand repeated observations as exemplars of a common rule and thus could not synthesize and categorize. Simply, he could not think. Probably the most disappointing moment of Feldman's paper comes at the very end, where it deals with its (somehow unavoidable) recursive quest. Understanding why some concepts are difficult to learn may itself be difficult to learn.
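The eight-object thought experiment can be made concrete with a crude brute-force search. The sketch below scores a category by the smallest number of feature literals in any union-of-conjunctions description that defines it exactly; this is a rough stand-in for Boolean/Kolmogorov complexity, not Feldman's actual procedure:

```python
from itertools import combinations, product

# The eight-object universe: each object is (size, color, shape).
FEATURES = [("small", "big"), ("red", "yellow"), ("triangle", "circle")]
UNIVERSE = list(product(*FEATURES))

# A clause fixes some features and leaves the rest free, e.g.
# (None, None, "circle") = "all circles"; its cost is the number
# of fixed features (literals).
ALL_CLAUSES = list(product(*[(None,) + f for f in FEATURES]))
MEMBERS = {
    c: frozenset(o for o in UNIVERSE
                 if all(v is None or v == x for v, x in zip(c, o)))
    for c in ALL_CLAUSES
}

def min_literals(category, max_clauses=4):
    """Fewest literals in any union of clauses exactly defining `category`."""
    target, best = frozenset(category), None
    for k in range(1, max_clauses + 1):
        for combo in combinations(ALL_CLAUSES, k):
            if frozenset().union(*(MEMBERS[c] for c in combo)) == target:
                cost = sum(v is not None for c in combo for v in c)
                best = cost if best is None else min(best, cost)
    return best

easy = [o for o in UNIVERSE if o[2] == "circle"]            # "all circles"
hard = [("small", "yellow", "circle"), ("big", "yellow", "circle"),
        ("small", "red", "circle")]                          # Figure 1D's set
print(min_literals(easy))   # 1 literal: "circle"
print(min_literals(hard))   # 4 literals: "yellow circles or small circles"
```

On this toy measure, "all circles" (Figure 1C-style) costs a single literal, while the "yellow circles and small red circles" category of Figure 1D cannot be expressed in fewer than four, matching the intuitive difficulty ordering the essay describes.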
Modern mathematics, together with Kolmogorov complexity and information theory, has taught us another fundamental concept that may be relevant when trying to understand the logic of the mind. In a long series of paradoxes enumerated by Bertrand Russell, Kurt Goedel, and others, we learn that a formal system that looks at itself is bound to fail. At the very end of his paper, Feldman writes, “In a sense, this final conclusion [that psychological complexity is Boolean complexity] may seem negative: human conceptual difficulty reflects intrinsic mathematical complexity after all, rather than some idiosyncratic and uniquely human bias.” Who invented mathematics? The Martians? On the contrary, I believe this result supports a more naturalistic and less Platonic conception of mathematics. Formal ideas in mathematics are not arbitrary constructions of an arbitrary architecture; rather, they reflect the workings of the brain like a massive collective cognitive experiment. Mathematics does not only serve to help us understand biology; mathematics is biology. We are not less original if our thoughts resemble our mental constructions, we are just consistent. It is within this loop, this unavoidable recursion—mathematics understanding the logic of the brain—that we will have an opportunity to test, as some conspire, whether among all the wonders evolution has come out with, the ultimate might be a brain good enough to avoid the risk of understanding itself. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC516795.xml |
538291 | Melatonin blocks inhibitory effects of prolactin on photoperiodic induction of gain in body mass, testicular growth and feather regeneration in the migratory male redheaded bunting (Emberiza bruniceps) | Little is known about how hormones interact in the photoperiodic induction of seasonal responses in birds. In this study, two experiments determined if the treatment with melatonin altered inhibitory effects of prolactin on photoperiodic induction of seasonal responses in the Palearctic-Indian migratory male redheaded bunting Emberiza bruniceps . Each experiment employed three groups (N = 6–7 each) of photosensitive birds that were held under 8 hours light: 16 hours darkness (8L:16D) since early March. In the experiment 1, beginning in mid June 2001, birds were exposed to natural day lengths (NDL) at 27 degree North (day length = ca.13.8 h, sunrise to sunset) for 23 days. In the experiment 2, beginning in early April 2002, birds were exposed to 14L:10D for 22 days. Beginning on day 4 of NDL or day 1 of 14L:10D, they received 10 (experiment 1) or 13 (experiment 2) daily injections of both melatonin and prolactin (group 1) or prolactin alone (group 2) at a dose of 20 microgram per bird per day in 200 microliter of vehicle. Controls (group 3) received similar volume of vehicle. Thereafter, birds were left uninjected for the next 10 (experiment 1) or 9 days (experiment 2). All injections except those of melatonin were made at the zeitgeber time 10 (ZT 0 = time of sunrise, experiment 1; time of lights on, experiment 2); melatonin was injected at ZT 9.5 and thus 0.5 h before prolactin. Observations were recorded on changes in body mass, testicular growth and feather regeneration. Under NDL (experiment 1), testis growth in birds that received melatonin 0.5 h prior to prolactin (group 1) was significantly greater (P < 0.05, Student Newman-Keuls test) than in those birds that received prolactin alone (group 2) or vehicle (group 3). 
Although the mean body mass of the three groups was not significantly different at the end of the experiment, the regeneration of papillae was dramatically delayed in group 2 birds treated with prolactin alone. Similarly, under 14L:10D (experiment 2) testes of birds receiving melatonin plus prolactin (group 1) and vehicle (group 3) were significantly larger (P < 0.05, Student Newman-Keuls test) than those receiving prolactin alone (group 2). Also, birds of groups 1 and 3, but not of group 2, had significant (P < 0.05, 1-way repeated measures Analysis of Variance) gain in body mass. However, unlike in the experiment 1, the feather regeneration in birds of the three groups was not dramatically different; a relatively slower rate of papillae emergence was however noticed in group 2 birds. Considered together, these results show that a prior treatment with melatonin blocks prolactin-induced suppression of photoperiodic induction in the redheaded bunting, and suggest an indirect role of melatonin in the regulation of seasonal responses of birds. | Background In many birds, day length regulates seasonal changes in fattening and body mass gain, gonadal growth and development, molt, and plasma levels of several hormones, including luteinizing hormone (LH), prolactin and melatonin [ 1 - 4 ]. Some degree of phase-relationship occurs among the various photoinduced events. For example, photoperiodically induced rise in LH coincides with the onset of breeding [ 1 , 2 ], and rise in prolactin coincides with the late breeding and early post-breeding periods [ 4 , 5 ]. During the laying and incubation stages of the reproductive cycle, plasma prolactin levels increase dramatically, by 100- to 150-fold [ 6 ]. High prolactin levels in late breeding season are implicated in the development of reproductive photorefractoriness and postnuptial molt in birds [ 4 , 7 ]. Circulating melatonin levels also undergo seasonal changes.
High melatonin levels in the summer months and low melatonin levels in the winter months coincide, respectively, with the breeding and non-breeding phases of the reproductive cycle in long day breeding birds [ 3 ]. Although not known in birds, Lincoln and Clarke [ 8 ] provide evidence that melatonin acts directly within the pituitary to regulate photoperiod-induced changes in prolactin secretion in seasonally breeding Soay sheep. Previous studies on how hormones interact in photoperiodic induction of seasonal responses in birds have yielded conflicting results. A number of early findings show prolactin acting both as pro- and anti-gonadal in birds exposed to stimulatory day lengths [ 9 - 11 ]. However, in many birds high plasma prolactin levels are associated with decreased gonadal activity and LH levels [ 6 , 10 - 13 ]. In their recent review Blache and Sharp [ 4 ] conclude that prolactin is involved in the regulation of avian reproduction by providing inhibitory inputs to the hypothalamo-hypophyseal-gonadal axis. The production and secretion of melatonin encodes a photoperiodic calendar to birds as they exhibit changes in both the duration and amplitude of melatonin secretion corresponding to the duration of night length/ day length [ 3 , 14 , 15 ]. Also, melatonin is part of the birds' multioscillatory circadian system and helps maintain a robust and stable phase relationship among different internal circadian oscillators, and photoperiodic induction of seasonal responses in birds is mediated by the circadian system [ 16 - 21 ]. Interestingly, however, a direct role of melatonin in avian photoperiodism is not explicitly found. Most studies negate the role of pineal/ melatonin in photoperiodic induction of seasonal responses in birds (for references see [ 17 ]). We propose that the role of melatonin in avian photoperiodism is indirect. 
Melatonin modulates sensitivity of the circadian response system to a stimulatory photoperiod, and/or influences the photoperiod-induced effects downstream by interacting with other hormones released in response to stimulatory photoperiods [ 17 , 19 ]. We sought to investigate this by examining the effects of exogenous melatonin on prolactin-induced suppression of photoperiodic response in a migratory bird species, the redheaded bunting ( Emberiza bruniceps ), in which melatonin is not directly involved in the photoperiodic time measurement based on circadian rhythm of photosensitivity [ 22 , 23 ]. Previous studies show that prolactin administered subcutaneously at a dose of 100 μg day -1 suppresses ovarian response in buntings subjected to long days [ 10 ]. In this study, we specifically determined if the administration of melatonin 0.5 h before prolactin blocks the prolactin-induced suppression of the photoperiodic induction of gain in body mass, testicular growth and development, and feather regeneration in the redheaded bunting exposed to stimulatory day lengths. Methods We used adult male redheaded buntings caught in late February from the overwintering flock at 25°N. Buntings are migratory finches that breed in summer in west Asia and east Europe (~ 40°N) and overwinter in India. Birds were held outdoors and acclimatized to captive conditions for two weeks, and then brought indoors and maintained on short days (8 hours light: 16 hours darkness, 8L:16D) until subjected to experiments. Under short days, buntings do not fatten, and remain reproductively immature and responsive to photostimulation [ 24 ]; birds pretreated with short days are referred to as the photosensitive birds throughout this manuscript. Two experiments were performed as per the experimental design detailed in figure 1 , and in accordance with the guidelines in the Principles of Animal Care. Figure 1 Experimental design.
Both experiments had three phases: phase 1- pretreatment with 8 hours light: 16 hours darkness (8L:16D); phase 2- injection phase under natural day length (experiment 1) or 14L:10D; phase 3- uninjected phase. Arrows on top indicate time of injections (left: zeitgeber time (ZT) 9.5 – melatonin; right: ZT 10 – prolactin or vehicle; ZT 0 = the time of sunrise for the experiment 1, the time of onset of light for the experiment 2). Lines at the base of phase 2 indicate number of days of injection. The experiment 1 was terminated after 23 days and the experiment 2 was terminated after 22 days from the beginning of the phase 2. Experiment 1 This experiment began in mid June 2001. Photosensitive birds maintained indoors on 8L:16D since early March were brought outdoors in the aviary and exposed to natural day lengths (NDL) at 27°N (day length = ~ 13.8 h, sunrise to sunset). After three days of acclimatization, they were divided in three groups (N = 6 each). Beginning on day 4, they received subcutaneous injections once daily for 10 days as follows: group 1- first melatonin and 0.5 h later prolactin; group 2- prolactin alone; group 3- vehicle (control). After 10 consecutive injections (days 4–13), birds were left uninjected for the next 10 days (day 14–23). The experiment was terminated on day 24. Experiment 2 To confirm results of the experiment 1, we performed the experiment 2 under artificial conditions providing light-dark (LD) cycles corresponding to that was available outdoors (NDL) to birds of the experiment 1. This experiment began in the second week of April 2002. Three groups (N = 6–7) of photosensitive birds were subjected to 14L:10D. Beginning on day 1, they received 13 injections (days 1–13) as in the experiment 1: group 1- first melatonin and 0.5 h later prolactin; group 2- prolactin alone; group 3- vehicle (control). Thereafter, birds were left uninjected for the next 9 days (days 14–22). The experiment was terminated on day 23. 
Melatonin was administered at zeitgeber time 9.5 (ZT 0 = the time of sunrise in the experiment 1; the time of light on in the experiment 2) in view of our previous study [ 23 ] and several other observations [ 17 ] showing that melatonin or vehicle given alone at this time of day does not affect photoperiodic induction of gain in body mass and testicular recrudescence in the redheaded bunting, although around this time of day melatonin administration affects photoperiodic induction in mammals [ 25 ]. The prolactin was administered at ZT 10; group 1 birds thus received prolactin 0.5 h after melatonin. Vehicle was administered to birds of group 3 at ZT 10. Thus, the timing of prolactin and vehicle injections was decided in relation to the timing of the melatonin injection that itself was timed in relation to the timing of sunrise or the timing of light on. In both experiments, melatonin and prolactin were administered each at a dose of 20 μg bird -1 day -1 in 200 μl injection volume. Prolactin was obtained from Sigma Chemical Co. USA (Luteotropic hormone; L-6520, Lot 120K1606) and melatonin from Genzyme Fine Chemicals Ltd., Haverhill, Suffolk, UK). Melatonin injections were prepared as described by Kumar [ 26 ]. Briefly, a known amount of melatonin was dissolved in 100% ethanol and diluted in saline (0.9 % NaCl) such that each injection in 200 μl volume was 0.1 % ethanolic saline containing 20 μg of hormone. Prolactin was dissolved directly in saline yielding 20 μg per 200 μl of injection volume. Controls received 200 μl injection of 0.1% ethanolic saline (vehicle). We measured the effects on changes in body mass, testis size and regeneration of feather papillae. Body mass and testis size were measured at the beginning (day 0, the day before injections began), in the middle (body mass only) and at the end of the experiments. Birds were weighed on a top pan balance providing accuracy nearest to 0.1 g. 
In view of the findings that fattening accounts for most of the gain in body weight in photostimulated passerine birds [ 27 , 28 ], in the present study we considered the body mass of the photostimulated redheaded bunting, a passeriform, to reflect fat deposition. The dimensions of the left testis were recorded when birds were laparotomised under local anesthesia (for details see [ 29 ]), and testis volume was calculated from 4/3π ab 2 , where a and b denote half of the long and short axes, respectively. Feather papillae regeneration was recorded as follows. On day 1 of the experiment, feathers in a specific area on the left chest were plucked. An area of bare epidermis measuring 1 cm 2 was marked with a permanent ink marker. Beginning 24 h after the first injection, the number of papillae that had emerged from the epidermis was counted daily throughout the experiment 1 or till 4 days after the last injection in the experiment 2, and scored subjectively as outlined by Boswell [ 30 ]. Briefly, the scoring was done as follows: 0- missing feather, 1- a papilla emerging, 2- a papilla grown up to one-third of full size, 3- a papilla grown up to two-thirds of full size, 4- a papilla grown more than two-thirds of full size but still not complete, and 5- a completely grown feather papilla. The feather papillae scores were limited to the first 50 feathers that emerged within the marked area, and so the total papillae score for an individual bird ranged from 0–250. Food and water were available ad libitum . In an artificial LD cycle, light was provided by white compact fluorescent lamps at ~ 500 lux. Data are presented as mean ± SE. They were analyzed using one-way analysis of variance (1-way ANOVA) with or without repeated measures, followed by post-hoc tests if ANOVA indicated a significant difference. 1-way repeated measures ANOVA was used to compare data generated from the same group as a function of time, and 1-way ANOVA was used to compare data of different groups at one observation.
Two-way (2-way) ANOVA was used to analyze data when two factors were considered together, for example the effect of the treatment and duration of the treatment. Significance was taken at P < 0.05. In the experiment 1, one bird of group 1 and two birds of group 2 died, and their data are excluded from the statistical analyses. Results Experiment 1 Results are shown in figure 2a,2b,2c . There was no significant change in body mass in birds of all the three groups during the treatment period (Fig. 2a ). Testes were, however, stimulated in all birds, but the mean testis volume at the end of the experiment was different among the three groups ( F (2,12) = 4.656, P = 0.0319; 1-way ANOVA). Testes were significantly larger ( P < 0.05, Newman-Keuls test) in birds that received melatonin prior to prolactin (group 1) compared to those that received prolactin alone (group 2) or vehicle (group 3) (Fig. 2b ). The rate of regeneration of feathers was not significantly different between birds of groups 1 and 3 ( F (1,180) = 3.319, P = 0.0701; 2-way ANOVA) although in group 1 birds the emergence of the first papilla was delayed by at least a day (Fig. 2c ). Whereas in the group 1 the first papilla in a bird was found on day 5 and in all birds by day 11, in the group 3 the first papilla in a bird was found on day 4 and in all birds by day 10. Also, mean papillae scores during the first few days were relatively smaller in group 1 compared to group 3. In group 2 birds, which received prolactin alone, the first papilla emergence was found on day 12, and hence papillae regeneration was dramatically delayed as compared to birds in group 1 ( F (17,144) = 15.26, P < 0.0001; 2-way ANOVA) and group 3 ( F (17,144) = 16.49, P < 0.0001; 2-way ANOVA) (cf. Fig. 2c ).
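The 1-way ANOVA F statistics reported in the Results reduce to a ratio of between-group to within-group mean squares. A self-contained sketch of that computation; the toy data are invented for illustration, not the study's measurements:

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA: between-group mean square
    over within-group mean square. Returns (F, df_between, df_within)."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / N
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    df_b, df_w = k - 1, N - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Hypothetical measurements for three treatment groups:
F, df_b, df_w = one_way_anova_F(
    [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [4.0, 5.0, 6.0]])
print(round(F, 6), df_b, df_w)  # → 7.0 2 6
```

The F value is then compared against the F distribution with (df_between, df_within) degrees of freedom, which is where P values such as P = 0.0319 come from.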
Figure 2 Mean (± SE) body mass, testis volume and feather papillae regeneration in response to treatment with melatonin and prolactin (group 1), prolactin alone (group 2), or vehicle (group 3) in the redheaded bunting ( Emberiza bruniceps ) subjected to natural day lengths at 27°N (experiment 1: a-c) for 23 days (from mid-June to early July) or to artificial day length (14L:10D; experiment 2: d-f) for 22 days. Birds were injected with exogenous hormones at a dose of 20 μg bird -1 day -1 in 200 μl of vehicle daily for first 10 days in the experiment 1 and for first 13 days in the experiment 2, and thereafter they were left uninjected. Controls received similar volume of vehicle. All injections except melatonin were made at the zeitgeber time 10 (ZT 0 = the time of sunrise for the experiment 1; the time of onset of light for the experiment 2); melatonin was injected at ZT 9.5. Day 0 on X-axis refers to the day before first injection. Under NDL, one bird of group 1 and two birds of group 2 died, and their data are excluded. Asterisk indicates the significance of difference at P < 0.05. Experiment 2 Figure 2d,2e,2f show results from the experiment 2. There was a significant gain in body mass in birds of groups 1 and 3 which received melatonin plus prolactin and vehicle, respectively (group 1: F (2,10) = 12.16, P = 0.0021; group 3: F (2,12) = 6.978, P = 0.0098; 1-way RM ANOVA; Fig. 2d ), but not in birds of group 2 which received prolactin alone ( F (2,12) = 0.2839, P = 0.7379; 1-way RM ANOVA; Fig. 2d ). Hence at the end of the experiment, there was a significant difference in the response of body mass among the three groups ( F (2,17) = 3.857, P = 0.0416; 1-way ANOVA). Although testes were stimulated in all groups (Fig. 2e ), the size attained at the end of the experiment was different among different groups ( F (2,17) = 4.343, P = 0.0299; 1-way ANOVA). 
Testes grew to full size in birds of groups 1 and 3, and hence were significantly larger ( P < 0.05, Newman-Keuls test) than those of group 2 in which they grew to less than half-maximal size (Fig. 2e ). Unlike the experiment 1, the feather regeneration was not dramatically different among the three groups (Fig. 2f ). Nonetheless, the rate of papillae regeneration in birds of group 2, which received prolactin alone, was slower as compared to group 1 ( F (14,150) = 3.9550, P < 0.0485; 2-way ANOVA) and group 3 ( F (14,165) = 24.76, P < 0.0001; 2-way ANOVA) (cf. Fig. 2f ). By day 17, however, all individuals regardless of the treatment had shown papillae emergence (Fig. 2f ). Discussion The present results confirm a previous finding on female redheaded buntings [ 10 ] that exogenous prolactin suppresses the photoperiodic induction of body mass gain and testis recrudescence under long days. In general, the effects of prolactin on body mass and testes found in the present study are consistent with the evidence that high prolactin levels during the late breeding phase decrease fat stores [ 31 ] by affecting lipid metabolism via increasing lipoprotein lipase activity in the adipocytes [ 32 ] and induce testicular regression by inhibiting the hypothalamo-hypophyseal-gonadal axis [ 4 , 7 , 13 ]. Prolactin-induced suppression of papillae emergence in buntings is also consistent with the suggested role of high prolactin in inducing defeathering and postnuptial molt [ 4 , 7 ]. High prolactin levels induced by prolactin administration thus seem to produce a physiological condition comparable to the late phase of the gonadal cycle, in terms of declining body mass, reduced testicular activity, and the defeathering indicated by suppression of the emergence of feather papillae (Fig. 2 ). Of more interest, however, is that a prior treatment with melatonin blocks prolactin-induced suppression of photoperiodic induction in the redheaded bunting (Fig. 2 ).
Birds administered with melatonin 0.5 h prior to prolactin showed photoperiodic induction similar to that of controls (experiment 2; Fig. 2d,2e,2f ). Data on feather regeneration also support this. Melatonin administration blocked the suppression of papillae emergence by exogenous prolactin (cf. Fig. 2e ). It is not clear from these experiments, however, how melatonin acts to restore photoperiodic response in prolactin-treated birds, but we can offer some plausible explanations. One is that melatonin reduces circulating prolactin levels either directly by acting on pituitary prolactin-producing cells, as reported in fish [ 33 , 34 ], or indirectly by affecting the release of hypothalamic dopamine (DA) and vasoactive intestinal peptide (VIP) [ 35 , 36 ]. There is increasing evidence from both in vivo and in vitro experiments that hypothalamic VIP acts as a prolactin-releasing factor [ 35 ] and its secretion is photoperiodically regulated in birds [ 35 , 36 ]. Both the DAergic and VIPergic systems interact in regulation of prolactin secretion in birds [ 35 ]. A relationship between melatonin and VIP remains to be investigated, but studies on the avian retinal system provide evidence for an inverse relationship between melatonin and dopamine (DA) [ 37 ]. Additionally, melatonin acts directly within the pituitary to regulate prolactin secretion in seasonally breeding photoperiodic Soay sheep [ 8 ], and melatonin stimulates dopamine release from tuberoinfundibular dopaminergic neurons resulting in the suppression of serum prolactin levels in rats [ 38 ]. The current results (Fig. 2 ), in which photostimulated buntings receiving both melatonin and prolactin had greater body mass and larger testes than those receiving prolactin alone, are consistent with one or the other of the above explanations.
A second possibility is that exogenous melatonin changes the phase-relationship among daily endogenous endocrine rhythms, and this somehow enhances sensitivity of the circadian response system to stimulatory effects of long day lengths. Testicular response in birds of the experiment 1 supports this (Fig. 2b ). Birds that received both melatonin and prolactin (group 1) had significantly larger testes ( P < 0.05, Student Newman-Keuls test) than those that received prolactin alone (group 2) or vehicle (group 3). A role of melatonin in enhancing responsiveness of the photoperiodic response system is shown in an experiment on the blackheaded bunting ( Emberiza melanocephala ), an allied species that shares breeding and wintering grounds with the redheaded bunting. In blackheaded buntings exposed to 11.75L:11.25D of red light (650 nm), testes grew significantly larger in individuals that carried implants filled with melatonin compared with those that carried empty implants [ 39 ]. However, one observation of the present study is not entirely consistent. Controls of the experiment 1 had significantly ( P < 0.05, Student t-test) smaller testes than those of the experiment 2. This occurred perhaps because of one or both of the following reasons. First, there was a difference in the lighting environment between the two experiments, both in the shape of the LD cycle (saw-tooth shape in NDL versus square-wave shape in 14L:10D) and intensity of the light period (gradually changing intensity for ~ 13.8 h daylight outdoors underneath the opaque roof of the aviary in NDL (light intensity in the aviary during the experiment ranged from 92.3 ± 13.4 lux at sunrise to several thousand lux during the day to 56.2 ± 5.9 lux at the time of sunset) versus a continuous ~ 500 lux intensity for 14 h within the photoperiodic chambers in 14L:10D).
It is reported that the duration of photoperiod and light intensity do affect photoperiodic induction of body mass and testis recrudescence in the blackheaded bunting [ 24 ]. Second, high temperatures (> 40°C) outdoors during mid June – early July may have caused rise in endogenous prolactin levels [ 40 ], similar to those in group 2 that received exogenous prolactin, and this may have suppressed the photoperiodic induction (cf. Fig 2a,2b ). The present study indicates that melatonin could be involved indirectly in regulation of photoperiod-induced seasonal responses in the redheaded bunting by modulating effects of other hormones such as the prolactin. This appears consistent with another finding on this species [ 23 ] suggesting effects of melatonin on temporal phasing of the testicular cycle (individuals that carried implant filled with melatonin peaked in testicular growth one-month later compared to those that carried empty implant) and not on the initiation of testicular recrudescence. However, unlike in several birds (for references see [ 17 ]) in which melatonin fails to produce an effect on testicular growth, a few reports do exist in the literature showing direct effects of melatonin administration on gonadal activity. In migratory European quail ( Coturnix coturnix ), for example, melatonin given in drinking water influences the reproductive cycle [ 41 ]. Daily melatonin injections inhibited testicular recrudescence in lal munia ( Estrilda amandava ) [ 42 ], and caused significant involution of enlarged testes of breeding season in both the blossomheaded parakeet ( Psittacula cyanocephala ) and the Indian weaver bird ( Ploceus philippinus ) [ 43 ]. Inhibitory effects of pineal on hypothalamo-hypophyseal-gonadal axis is reported in the Indian weaver bird ( Ploceus philippinus ) [ 44 ] although a recent study in which reproductively active individuals were implanted with melatonin did not support the antigonadal effect of melatonin [ 23 ]. 
Differences in the effects of melatonin probably reflect the diversity of the avian photoperiodic system. It will therefore be interesting to further examine species showing divergent effects of melatonin to unravel the diversity of its role in photoperiod-induced seasonal responses in birds. Conclusion The present results show that a prior treatment with melatonin blocks prolactin-induced suppression of photoperiodic induction in the migratory redheaded bunting. How melatonin acts to negate the effects of prolactin is unclear. Whatever the actual mechanism of action, the current results provide evidence that melatonin modulates photoperiodic induction of seasonal responses in birds by interacting with prolactin. Whether such an effect varies during different seasons of the year remains to be investigated. Authors' contributions AKT and SR carried out the experiments and prepared the first draft of the manuscript. VK supervised the experiments and the final version of the manuscript. The study was conceived by VK, but the experiments were then discussed jointly. All the authors approved the final manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC538291.xml |
544951 | Influence of passive leg movements on blood circulation on the tilt table in healthy adults | Background One problem in the mobilization of patients with neurological diseases, such as spinal cord injury, is the circulatory collapse that occurs while changing from supine to vertical position because of the missing venous pump due to paralyzed leg muscles. Therefore, a tilt table with integrated stepping device (tilt stepper) was developed, which allows passive stepping movements for performing locomotion training in an early state of rehabilitation. The aim of this pilot study was to investigate if passive stepping and cycling movements of the legs during tilt table training could stabilize blood circulation and prevent neurally-mediated syncope in healthy young adults. Methods In the first experiment, healthy subjects were tested on a traditional tilt table. Subjects who had a syncope or near-syncope in this condition underwent a second trial on the tilt stepper. In the second experiment, a group of healthy subjects was investigated on a traditional tilt table, the second group on the tilt ergometer, a device that allows cycling movements during tilt table training. We used the chi-square test to compare the occurrence of near-syncope/syncope in both groups (tilt table/tilt stepper and tilt table/tilt ergometer) and ANOVA to compare the blood pressure and heart rate between the groups at the four time intervals (supine, at 2 minutes, at 6 minutes and end of head-up tilt). Results Separate chi-square tests performed for each experiment showed significant differences in the occurrence of near syncope or syncope based on the device used. Comparison of the two groups (tilt stepper/ tilt table) in experiment one (ANOVA) showed that blood pressure was significantly higher at the end of head-up tilt on the tilt stepper and on the tilt table there was a greater increase in heart rate (2 minutes after head-up tilt). 
Comparison of the two groups (tilt ergometer/tilt table) in experiment 2 (ANOVA) showed that blood pressure was significantly higher on the tilt ergometer at the end of head-up tilt and on the tilt table the increase in heart rate was significantly larger (at 6 min and end of head-up tilt). Conclusions Stabilization of blood circulation and prevention of benign syncope can be achieved by passive leg movement during a tilt table test in healthy adults. | Background Several studies have confirmed that lack of movement leads quickly to profound negative physiological and biochemical changes in all organs and systems of the body [ 1 - 5 ]. It is important for patients suffering from diseases such as stroke, spinal cord and traumatic brain injury to be mobilized at an early state of rehabilitation [ 6 ]. As these patients are bedridden, their lower limbs are mainly mobilized through manual therapy or with cycling ergometers. Patients with spinal cord injuries are disposed to the occurrence of circulatory collapse when changing from a horizontal to a vertical position because of the lack of sympathetic activity and the missing contractions of leg muscles in the lower extremities that normally act as muscle pumps [ 7 , 8 ]. This instability of the circulatory system occurs at an early stage of rehabilitation and leads to delayed functional training of these patients. In a chronic phase, an overactivity of the spinal sympathetic system could take place, which can lead to vasoconstriction and hypertension [ 9 ]. Head-up tilt table testing has been used for over 50 years by physiologists and physicians for many purposes. 
This includes the study of the human body's heart rate and blood pressure adaptations to changes in position, for modeling responses to hemorrhage, as a technique for evaluating orthostatic hypotension, as a method to study hemodynamic and neuroendocrine responses in congestive heart failure, autonomic dysfunction and hypertension, as well as a tool for drug research [ 7 , 10 - 14 ]. It has also become a useful device in the mobilization of spinal cord and traumatic brain injured patients, as well as in patients suffering from stroke [ 15 ]. The key feature of a tilt table is the continuously adjustable position of a patient from horizontal to vertical. This represents an orthostatic challenge, because blood pools in the lower extremities, with the danger that in susceptible individuals vasovagal syncope could occur within approximately 20 minutes. The afferent end of this reflex pathway may be mediated by left ventricular or right atrial mechanoreceptors that are activated during vigorous contraction around under-filled chambers, in a situation similar to severe hemorrhage. Information from these mechanoreceptors travels along vagal afferent C fibers to the brainstem, which mediates the efferent response consisting of withdrawal of sympathetic vasomotor tone and activation of the vagal system [ 16 , 17 ]. In addition to the traditional tilt table, a novel apparatus with a stepping device (tilt stepper) was developed in 1998 at the research department of the Paraplegia Centre of the Balgrist University Hospital in Zurich, Switzerland, in collaboration with the Department of Orthopaedics II of the Orthopaedic Hospital of Heidelberg, to enable mobilization with stabilized circulation and to begin locomotion training in an early state of rehabilitation. In the tilt stepper, the patient is strapped by a safety belt to the tilt table while the legs are moved passively in a physiological stepping pattern (Figure 1 ). 
The inclination can be continuously adjusted from a horizontal to a vertical position. The distribution of the blood correlates directly with the sine of the angle of inclination; between 30 and 60 degrees this relationship is approximately linear [ 18 ]. For inclines larger than 60 degrees, there is a plateau of hemodynamic effects. For the present study, we chose an angle of 75 degrees because previous studies have shown that syncope is more likely to occur at an angle over 60 degrees [ 18 ]. To investigate if there is a difference between passive stepping and passive cycling leg movements, we also used a tilt table with an ergometer device (tilt ergometer, Figure 3 ). Figure 1 The tilt table with stepping device (tilt stepper) Figure 3 The tilt table with ergometer device (tilt ergometer) There are only a few studies that have investigated how passive movement of the legs during a tilt table treatment affects circulation. In these studies, either functional electrical stimulation of the leg muscles [ 19 - 21 ] was used, or patients were placed in sitting positions on a cycle ergometer [ 22 ]. The results of these studies suggest that passive movements of the legs could stabilize blood circulation. There have also been studies which have utilized a tilt table with passively moving legs. However, in these studies only patients with recurrent vasovagal syncope were enrolled, and the syncope was pharmacologically provoked [ 5 , 23 - 30 ]. The aim of our experiments was to investigate if passive stepping and cycling movements of the legs during a tilt table test can stabilize blood circulation and prevent neurally mediated syncope in healthy young adults. Methods Participants With the permission of the local Ethics Committee and the informed consent of the volunteers, the response of the blood circulation was analyzed in healthy subjects. 
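The sine relation used to choose the 75° tilt angle can be checked with a few lines of arithmetic (a sketch; the printed fractions are simply sin(angle) and are not values taken from the paper):

```python
from math import sin, radians

# Relative hydrostatic (orthostatic) load as a fraction of fully upright,
# assuming the sine-of-tilt-angle relation cited in the text [18].
for angle in (30, 45, 60, 75, 90):
    print(f"{angle:2d} deg -> {sin(radians(angle)):.3f}")
```

At 75 degrees the relative load is already about 0.97 of the fully upright value, consistent with the reported plateau of hemodynamic effects above 60 degrees.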
The exclusion criteria included: recurrent syncope or near-syncope in clinical history, regular medication, abuse of nicotine or alcohol, cardiovascular or neurological diseases, acute or chronic infections, psychiatric disorder and body mass index <18 or >25. All subjects underwent a physical examination and an ECG was completed one week before the experiment. In the first experiment, we examined 12 healthy young adults (age 24 ± 5 years) on a traditional tilt table. The subjects who had syncope or near-syncope were treated on the tilt stepper after a waiting period of 4 weeks. Syncope was defined as a transient loss of consciousness associated with a loss of postural tone. Near-syncope was defined as the appearance of pallor, nausea, light-headedness, diaphoresis or blurred vision. Both conditions were associated with the following hemodynamic changes: a decrease in systolic blood pressure > 60% from baseline values or an absolute value < 80 mmHg (vasodepressor response) and/or a decrease in heart rate > 30% from the baseline value or an absolute value < 40 beats/min (cardio-inhibitory response) [ 31 ]. In the second experiment, we enrolled 42 healthy subjects (age 27 ± 4 years). They were randomized into two groups: group I (23 subjects) was put on a traditional tilt table, while group II (19 subjects) was put on the tilt ergometer. The age of the subjects was restricted to below 35 years, because the cardiovascular response is strongly dependent on age [ 32 ]. Procedures The aim of the first experiment was to investigate if the blood circulation could be stabilized in people who have a disposition for an "early" appearance of a neurally-mediated syncope on a traditional tilt table. The appearance of a neurally-mediated syncope is physiological and it may occur in all subjects. The interpersonal difference lies in the duration that a subject can remain in a standing posture until syncope or near-syncope occurs. 
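The hemodynamic criteria above (vasodepressor and cardio-inhibitory responses, ref. [ 31 ]) can be written out as a small classifier. This is an illustrative sketch only; the function name and arguments are ours, and only the thresholds come from the text:

```python
def classify_response(sbp_base, sbp, hr_base, hr):
    """Classify a hemodynamic response using the thresholds quoted in the text:
    vasodepressor: systolic BP drop > 60% of baseline, or absolute SBP < 80 mmHg;
    cardio-inhibitory: heart rate drop > 30% of baseline, or absolute HR < 40 bpm."""
    vasodepressor = (sbp < 0.4 * sbp_base) or (sbp < 80)
    cardio_inhibitory = (hr < 0.7 * hr_base) or (hr < 40)
    if vasodepressor and cardio_inhibitory:
        return "mixed response"
    if vasodepressor:
        return "vasodepressor response"
    if cardio_inhibitory:
        return "cardio-inhibitory response"
    return "no abnormal response"

print(classify_response(120, 75, 70, 65))   # SBP < 80 mmHg -> vasodepressor response
print(classify_response(120, 115, 70, 38))  # HR < 40 bpm -> cardio-inhibitory response
```

The "and/or" in the definition means both components can co-occur, hence the separate "mixed response" branch.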
A decrease of the systolic blood pressure of up to 15 mmHg and/or an increase in heart rate of up to 20 bpm during the first 6 minutes are considered a normal compensatory reaction to the change in body position [ 31 ]. The blood pressure was non-invasively measured with a tonometric blood pressure device. Subjects who suffered a syncope or near-syncope during the first session on the traditional tilt table were treated on the tilt stepper in the second session. In the second experiment, we investigated the effect on circulation of movements passively induced by a cycle ergometer on a tilt table. We enrolled 42 subjects: 23 on a traditional tilt table and 19 on the tilt ergometer. In both experiments, after 15 minutes of rest the subjects were tilted head-upright at a 75° angle and were returned to the supine position if a syncope or near-syncope occurred or after completion of 30 minutes. Heart rate and blood pressure were measured continuously and non-invasively. Head-up tilt tests were performed in the morning in a dim room. All subjects were instructed to fast overnight and to relax the muscles of their lower limbs during the trials. This was monitored with EMG measurements on the legs (Mm. biceps femoris, rectus femoris, gastrocnemii, tibialis ant.), randomly tested in the first experiment and regularly tested during the second experiment. The EMG signals were amplified and transferred to a personal computer. They were recorded by a data acquisition tool (SolEasy by Aleasolution GmbH, Zurich, CH). Tilt table with stepping device (Tilt stepper) The tilt stepper is a traditional tilt table (Gymna, Belgium) combined with an integrated leg drive that allows a passive movement of the lower extremities (Figures 1 and 2 ). Figure 2 The tilt stepper: generation of leg movements The leg drive, which is connected to the thigh by a cuff, induces a hip flexion or extension movement. 
As the feet of the patient are fixed to footplates, the knee is also flexed or extended, respectively. In those phases where the hip and knee joints are extended, the leg pushes down a spring-damped footplate, which is in turn pushed back by a foot spring mounted within the plate. This footplate generates a loading force on the foot sole of the patient during extension. Applying this cycle of flexion and extension in an alternating way produces physiological kinetics of the generated motion. A special mechanism is mounted under the hip joint and allows for adjustment of hip extension up to 20°. Depending on the blood circulation condition of the patient, the device can be tilted to different angles up to a vertical position. This makes it possible for the patient to become accustomed, step-by-step, to the upright position in combination with passive leg movements. The speed of the alternating stepping movements and the range of motion of the hip/knee joints can be adjusted on a control panel. The basic construction consists of a linear drive (Parker-Hannifin, Germany) with a precision ball screw that is driven by a synchronous motor via a toothed belt (maximum speed 450 mm/sec, maximum force 1400 N, maximum torque of 400 Nm at the hip joint). The movement frequencies range from 0.2 to 0.5 Hz (i.e. one cycle of flexion and extension takes between 2 and 5 sec). To secure subjects on the tilt table, fixation with a special harness was used during all experiments (Figure 1 ). The tilt table with ergometer (Tilt ergometer) The tilt ergometer consists of a traditional tilt table with an additional ergometer device (Tera Joy, Germany) that allows a passive cycling movement of the lower extremities. From a technical point of view, the tilt ergometer construction is simpler than the tilt stepper, but it generates a non-physiological motion with respect to gait-phase-related forces on the foot sole. The cycle frequency was between 0.2 and 0.5 Hz. 
Recordings and Measurement Blood pressure was measured continuously and non-invasively by a Colin CBM-7000 (Hayashi, Komaki City, Japan). The Colin CBM-7000 is a tonometric device that allows measurement of beat-to-beat blood pressure (systolic, mean, diastolic), the continuous arterial blood pressure waveform, and beat-to-beat continuous electrocardiography. Statistical analysis In both experiments we used the chi-square test to compare the occurrence of near-syncope/syncope in both groups (tilt table/tilt stepper and tilt table/tilt ergometer). We performed 2 × 4 repeated-measures ANOVAs for blood pressure and heart rate, with a between-subjects factor with 2 levels (device group – namely tilt table vs. tilt stepper or tilt table vs. tilt ergometer), a within-subjects factor with 4 levels (time – supine, 2 minutes, 6 minutes and end of head-up tilt) and their interaction (group × time). Pairwise comparisons were made with the t-test with additional Bonferroni correction. Results In the first experiment, 7 of 12 subjects (58%) had a syncope or near-syncope on the traditional tilt table. There was an obvious increase in heart rate in the first 6 minutes after changing the position from supine to upright. None of these 7 subjects had a syncope or near-syncope during the treatment session on the tilt stepper 4 weeks later. Comparing the occurrence of near-syncope/syncope in both sessions with the chi-square test, there was a significant difference (χ 2 (1) = 6.465, p = 0.011). Table 1 gives a short overview of these results. The same subjects who collapsed on the traditional tilt table did not have syncope or near-syncope while treated on the tilt stepper. 
Table 1 Occurrence of near-syncope and syncope in experiment one (tilt stepper)
                                 no syncope   near-syncope   syncope
traditional tilt table [n = 12]    5 (42%)       5 (42%)     2 (17%)
tilt stepper [n = 7]               7 (100%)      0 (0%)      0 (0%)
In the ANOVA for repeated measures there were no significant differences for blood pressure within each group (time, 4 levels: supine, 2, 6 and end of head-up tilt; F (1,6) = 4.66, p = 0.0743), but there were significant differences between groups (two levels: tilt stepper and tilt table; F (3,33) = 6.33, p = 0.0016) and in the interaction (F (3,18) = 7.24, p = 0.0022). The blood pressure differed between the two treatments at the end of head-up tilt (p = 0.0029), but not at 2 minutes (p = 1.000) or at 6 minutes (p = 1.000) (pairwise comparisons with the t-test and additional Bonferroni correction). However, there was a trend toward a higher blood pressure at 2 minutes and at 6 minutes after head-up tilt in the group treated on the tilt stepper. There were significant differences for heart rate within each group (time, 4 levels: supine, 2, 6 and end of head-up tilt; F (1,6) = 12.17, p = 0.0130), between groups (two levels; F (3,33) = 21.16, p < 0.0001) and in the interaction (F (3,18) = 8.68, p = 0.0009). For the group treated on the traditional tilt table, pairwise comparisons with the t-test with additional Bonferroni correction showed a significantly higher heart rate at 2 minutes (p = 0.0060), but no significant differences at 6 minutes (p = 0.2051) or at the end of head-up tilt (p = 1.000). In the second experiment, 13 of 23 subjects (57%) who were on the traditional tilt table had syncope (3) or near-syncope (10). None of the 19 subjects who were on the tilt ergometer had syncope but 4 subjects had near-syncope (21%). Comparing the occurrence of near-syncope/syncope in both sessions with the chi-square test (χ 2 (1) = 5.443) there was a significant difference (p = 0.021) (Table 3 ). 
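As a sanity check, both chi-square statistics can be reproduced from the reported occurrence counts, pooling syncope and near-syncope into one category (a pure-Python sketch; the 2 × 2 tables are reconstructed by us from the frequencies in the occurrence tables):

```python
from math import erfc, sqrt

def chi2_2x2(table):
    """Pearson chi-square for a 2x2 contingency table, without continuity
    correction, plus the p-value for 1 degree of freedom."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    stat = sum((obs - rows[i] * cols[j] / n) ** 2 / (rows[i] * cols[j] / n)
               for i, row in enumerate(table) for j, obs in enumerate(row))
    return stat, erfc(sqrt(stat / 2))  # survival function of chi-square, df = 1

# Experiment 1: columns are [no syncope, syncope or near-syncope]
chi2_1, p_1 = chi2_2x2([[5, 7],    # traditional tilt table, n = 12
                        [7, 0]])   # tilt stepper, n = 7
print(f"{chi2_1:.3f}, p = {p_1:.3f}")  # 6.465, p = 0.011

# Experiment 2
chi2_2, p_2 = chi2_2x2([[10, 13],  # traditional tilt table, n = 23
                        [15, 4]])  # tilt ergometer, n = 19
print(f"{chi2_2:.2f}, p = {p_2:.3f}")  # 5.43, p = 0.020
```

Experiment 1 reproduces the reported values exactly; experiment 2 agrees with the reported 5.443, p = 0.021 up to rounding.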
Table 2 Mean blood pressure and heart rate +/- SE during 75° head-up tilt on the tilt stepper
                                 supine     2 min after tilt   6 min after tilt   end of head-up tilt
mean blood pressure [mmHg]
traditional tilt table [n = 12]  90 +/- 4      95 +/- 4           94 +/- 4           80 +/- 3*
tilt stepper [n = 7]             89 +/- 4      93 +/- 5           97 +/- 2           95 +/- 6*
heart rate [beats/min]
traditional tilt table [n = 12]  65 +/- 5      80 +/- 5*          78 +/- 4           65 +/- 5
tilt stepper [n = 7]             61 +/- 3      69 +/- 3*          71 +/- 3           71 +/- 5
* p < 0.05 (compared with ANOVA for repeated measures)
In the ANOVA for repeated measures, there were significant differences for blood pressure within each group (time, 4 levels: supine, 2, 6 and end of head-up tilt; F (1,6) = 34.43, p < 0.0001), between groups (two levels; F (3,33) = 13.42, p < 0.0001) and in the interaction (F (3,18) = 10.95, p < 0.0001). Pairwise comparisons with the t-test (additional Bonferroni correction) showed no significant differences at 2 minutes (p = 0.5221) or at 6 minutes (p = 0.4429), but a significant difference at the end of head-up tilt (p < 0.0001). However, there was a trend toward a higher blood pressure at 2 minutes and at 6 minutes after head-up tilt in the group treated on the tilt ergometer. There were significant differences for heart rate within each group (time, 4 levels: supine, 2, 6 and end of head-up tilt; F (1,6) = 12.17, p = 0.0130), between groups (two levels; F (3,33) = 21.16, p < 0.0001), and in the interaction (F (3,18) = 8.68, p = 0.0009). Pairwise comparisons with the t-test (additional Bonferroni correction) showed no significant differences at 2 minutes (p = 0.3317), but a significantly higher heart rate in the group treated on the tilt table at 6 minutes (p = 0.0007) and at the end of head-up tilt (p = 0.0002). All subjects on the tilt stepper and tilt ergometer completed 30 minutes of head-up tilt. 
The duration of the head-up tilt differed in the group on the traditional tilt table, as an abrupt decrease of blood pressure or symptoms of near-syncope occurred. In the head-up tilt position the subject stands on the footplates on the tilt stepper, whereas in the tilt ergometer the harness holds the whole body weight. The subjects who were investigated on the tilt stepper felt comfortable during the whole experiment, whereas the subjects examined on the tilt ergometer in experiment two complained of discomfort. The subjects on the tilt ergometer experienced more discomfort because they perceived no lower limb support. These statements were subjective; no standardized assessment instrument was used to measure comfort. Tables 2 and 4 and Figures 4 and 5 provide an overview of the blood pressure response of subjects tested on the traditional tilt table (with and without syncope, n = 12 in experiment one and n = 23 in experiment two) and of subjects with passive leg movements during the tilt table test on the tilt stepper (n = 7) and the tilt ergometer (n = 19). 
Table 3 Occurrence of near-syncope and syncope in experiment two (tilt ergometer)
                                 no syncope   near-syncope   syncope
traditional tilt table [n = 23]   10 (43%)      10 (43%)     3 (14%)
tilt ergometer [n = 19]           15 (79%)       4 (21%)     0 (0%)
Table 4 Mean blood pressure and heart rate +/- SE during 75° head-up tilt on the tilt ergometer
                                 supine     2 min after tilt   6 min after tilt   end of head-up tilt
mean blood pressure [mmHg]
traditional tilt table [n = 23]  92 +/- 4      92 +/- 6           90 +/- 5           80 +/- 4*
tilt ergometer [n = 19]          91 +/- 5      96 +/- 3           95 +/- 2           93 +/- 4*
heart rate [beats/min]
traditional tilt table [n = 23]  64 +/- 5      79 +/- 5           82 +/- 3*          78 +/- 5*
tilt ergometer [n = 19]          65 +/- 3      74 +/- 4           73 +/- 5*          68 +/- 4*
* p < 0.05 (compared with ANOVA for repeated measures)
Figure 4 Blood pressure +/- SE during 75° head-up tilt on the tilt table and tilt stepper Figure 5 Blood pressure +/- SE during 75° head-up tilt on the tilt table and tilt ergometer Figures 6 and 7 show recordings illustrating the development of systolic and diastolic blood pressure and heart rate during the tilt table test for one subject with syncope (Figure 6 ) and another subject without syncope (Figure 7 ). The observed progression of blood pressure and heart rate of the subject who had syncope is typical for a neurally-mediated syncope, because of the sudden decrease of systolic and diastolic blood pressure combined with bradycardia more than 20 minutes after head-up tilt. Also typical is the increase in heart rate observed in the first 6 minutes after head-up tilt. All subjects treated on the tilt table had this benign form of syncope and showed a similar blood pressure and heart rate progression during the tilt table test. Figure 6 Typical recordings illustrating a subject with syncope. RF = M. rectus femoris, BF = M. biceps femoris, TA = M. tibialis anterior, GM = M. gastrocnemius Figure 7 Typical recordings illustrating a subject without syncope. RF = M. rectus femoris, BF = M. biceps femoris, TA = M. 
tibialis anterior, GM = M. gastrocnemius Figure 7 is a good example of the normal progression of blood pressure and heart rate during a tilt table test. Two minutes after head-up tilt there is a slight decrease of systolic and diastolic blood pressure and a slight increase of heart rate, a physiological mechanism of compensation for the change of position (supine to head-up tilt). Figure 8 shows an example of the EMG activity in the right leg during the tilt stepper test, and Figure 9 during the tilt table test. It is evident that there is no active muscle activity. The ups and downs in the curve of the gastrocnemius muscle on the tilt stepper are caused by the passive movements. Figure 8 Muscle activity during the tilt stepper test Figure 9 Muscle activity during the tilt table test Discussion The tilt table is an apparatus that has become an important part of the evaluation of patients with unexplained syncope or loss of consciousness [ 14 , 24 , 33 ]. It has also proven useful for circulatory training of patients suffering from several neurological diseases. However, the treatment is limited by the occurrence of circulatory collapse [ 16 ]. Both hypotension and bradycardia leading to syncope during tilt tests are also common events in healthy persons. These responses are considered to be part of a reflex response triggered by a sympathetic-induced hypercontraction of an almost empty left ventricular chamber [ 34 ]. In both experiments there was no recurrent syncope or near-syncope in the clinical history of the subjects and the ECG did not show any abnormalities. For these reasons, and because of the development of the heart rate and blood pressure in our experiments, the syncopes and near-syncopes that occurred ought to be benign, so-called neurally-mediated or vasovagal syncopes. It is a physiological form of syncope that can occur in healthy persons. 
Some persons have a disposition to suffer a neurally-mediated syncope earlier than others [ 35 ]. This benign form of syncope can be differentiated from malignant syncopes, like hyperadrenergic orthostatic hypotension (decrease of blood pressure and increase of heart rate), hypoadrenergic orthostatic hypotension (decrease of blood pressure without an increase of heart rate) and postural tachycardia syndrome (massive increase of heart rate without decrease in blood pressure), by recording heart rate and blood pressure [ 31 ]. Although the tilt table has become an accepted diagnostic tool, there are no comparable studies with the tilt table in which the effect of passive leg motion on circulation has been investigated. The aim of these two experiments was to investigate if passive leg movements during head-up tilt can prevent syncope. The data in the present study show a stabilizing effect on the blood circulation, and this study suggests that both devices have an effect on preventing neurally-mediated syncope. In the first experiment, none of the subjects who had syncope/near-syncope on the traditional tilt table had syncope/near-syncope four weeks later on the tilt stepper. In the second experiment, only 4 subjects who were treated on the tilt ergometer had near-syncope. In both experiments the increase of heart rate was larger in the group tested on the traditional tilt table. A correlation between heart rate and the appearance of syncope has been described [ 16 , 36 ]: an increase in heart rate > 18 bpm during the first minutes after changing position from supine to upright predicts syncope with a sensitivity of 90% and a specificity of 100%. Consequently, the positive effect of passive leg movement on heart rate is obvious. Heart rate and blood pressure give an indication of the sympathetic activity, which is activated on the tilt table [ 37 , 38 ]. 
This increased sympathetic activity stimulates mechano-receptors in the ventricle, which leads to an activation of the vagus nerve and a reflexive decrease of sympathetic activity. The vagus activity leads to bradycardia and vasodilatation: the Bezold-Jarisch reflex [ 36 ]. We suggest that the sympathetic activity is reduced by the tilt stepper, preventing this vicious cycle that leads to a vasovagal syncope. This remains to be proved in further studies by intra-arterial catecholamine measurements. In the first experiment we treated the same subjects twice on a tilt table. It cannot be excluded that an adaptation to the orthostatic change occurred in these subjects. However, there was an interval of four weeks between the first treatment on the traditional tilt table and the second treatment on the tilt stepper. Therefore, a training effect or an effect of habituation, such as described in another study in which patients suffering from syncope were treated each day over 6 weeks, seems unlikely [ 39 ]. The results of both experiments indicate that blood circulation can be stabilized by passive leg movements. However, the movements of the two devices used in these experiments are very different: on the tilt stepper there are stepping-like movements and the legs can be loaded during extension and unloaded in flexion. In the tilt ergometer, the movements are the other way round. There might be more afferent input from the load receptors in the tilt stepper compared to the tilt ergometer. For example, the load moments acting about the bilateral hip, knee and ankle joint axes during cycling have been shown to be generally lower than those induced during normal level walking [ 10 ], and it has been concluded that afferent input from the hip joints, in combination with that from load receptors during walking, plays a crucial role in the generation of locomotor activity in the isolated human spinal cord [ 1 ]. 
Also, the range of motion is adjustable in the tilt stepper, so that the extent of flexion and extension can be increased or decreased depending on the condition of the patient. Therefore, the tilt stepper may be more effective in activating a locomotion pattern. In addition, both devices might help to decrease spasticity [ 40 ] and serve to prevent osteoporosis [ 41 ]. Although these effects were not part of our current investigation, some of these issues have been demonstrated in trials of treadmill training in the rehabilitation of patients with stroke, spinal cord and traumatic brain injury [ 18 , 39 ]. Thus, we plan to use the tilt stepper in further studies to investigate if it leads to a stabilization of blood circulation, prevention of neurally-mediated syncope in an early state of rehabilitation, decrease in spasticity, prophylaxis of osteoporosis and activation of the locomotion pattern generator in patients suffering from neurological diseases. This in turn may lead to a better outcome and quality of life for the patient. In conclusion, we showed that both passive cycling and stepping movements of the legs during head-up tilt testing can stabilize blood circulation and prevent syncope in young healthy people. In further studies, we aim to investigate if the tilt stepper could become a helpful device for patients suffering from neurological diseases. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC544951.xml |
550663 | Mold sensitization is common amongst patients with severe asthma requiring multiple hospital admissions | Background Multiple studies have linked fungal exposure to asthma, but the link to severe asthma is controversial. We studied the relationship between asthma severity and immediate-type hypersensitivity to mold (fungal) and non-mold allergens in 181 asthmatic subjects. Methods We recruited asthma patients aged 16 to 60 years at a University hospital and a nearby General Practice. Patients were categorized according to the lifetime number of hospital admissions for asthma (82 never admitted, 53 one admission, 46 multiple admissions). All subjects had allergy skin prick tests performed for 5 mold allergens ( Aspergillus, Alternaria, Cladosporium, Penicillium and Candida ) and 4 other common inhalant allergens ( D. pteronyssinus , Grass Pollen, Cat and Dog). Results Skin reactivity to all allergens was commonest in the group with multiple admissions. This trend was strongest for mold allergens and dog allergen and weakest for D. pteronyssinus . 76% of patients with multiple admissions had at least one positive mold skin test compared with 16%–19% of other asthma patients (Chi-squared p < 0.0001). Multiple mold reactions were also much commoner in the group with multiple admissions (50% vs. 5% and 6%; p < 0.0001). The number of asthma admissions was related to the number and size of positive mold skin allergy tests (Spearman Correlation Coefficient r = 0.60, p < 0.0001) and less strongly correlated to the number and size of non-mold allergy tests (r = 0.34, p = 0.0005). Hospital admissions for asthma patients aged 16–40 were commonest during the mold spore season (July to October) whereas admissions of patients aged above 40 peaked in November–February (Chi-squared, p < 0.02). Conclusion These findings support previous suggestions that mold sensitization may be associated with severe asthma attacks requiring hospital admission. 
| Background Most asthma patients have mild symptoms which are well controlled with anti-inflammatory and bronchodilator therapy, but a minority of asthma patients have severe airway inflammation and airflow obstruction requiring multiple hospital admissions. The reasons for these differences in asthma severity are complex and not fully understood [ 1 ]. At least two thirds of asthmatic patients are atopic, with skin reactivity to common allergens [ 2 - 4 ]. It has also been reported that individuals with severe ("brittle") asthma may have a greater degree of atopy than other asthmatic patients [ 5 ]. Skin reactivity to fungal allergens such as Alternaria species has been reported to be especially common in patients with life-threatening asthma [ 6 ]. Asthma deaths, hospital admissions, respiratory symptoms, and Peak Expiratory Flow rates can be adversely affected by high fungal spore concentrations in outdoor air [ 7 - 11 ]. Mold sensitivity has been associated with increased asthma severity and intensive care admissions in adults and with increased bronchial reactivity in children [ 12 - 17 ]. Indoor mold exposure might also contribute to asthma severity. Many patients believe that their asthma is aggravated by damp housing, especially if there is visible mold growth [ 18 ]. It has been reported that asthma patients are more likely than control patients to live in a damp dwelling, and that their asthma severity correlates with the degree of dampness and mold growth in their home (measured by a building surveyor) [ 19 ]. It is also known that asthma deaths in young adults in England and Wales are commonest in the months of July, August and September, which coincide with peak levels of mold spores in the outdoor air in the UK [ 20 - 22 ]. 
These studies prompted us to undertake a study of asthmatic patients in Salford in Greater Manchester (North-West England) to assess whether atopy in general, and mold sensitization in particular, were associated with increased asthma severity. There are few agreed definitions of severe asthma, so we used hospital admissions and treatment category (according to British Thoracic Society Asthma Guidelines) as surrogate markers for asthma severity [ 23 ]. Methods This study was undertaken in Hope Hospital, a 900-bed University Hospital in Salford, Greater Manchester, UK. The study was designed to record the atopic status for mold and non-mold allergens of asthma patients at every level of severity, from very mild to very severe (defined as multiple hospital admissions despite intensive asthma medication). We studied patients with severe asthma during hospital admissions or at subsequent visits to the hospital chest clinic. Patients were recruited opportunistically during hospital admissions or during routine consultations with the Respiratory Nurse Specialist (RNS) in the hospital Chest Clinic. Recruitment was evenly spread over 30 months from January 1996 to June 1998. During this period, the Respiratory Nurse service was under development, with only one part-time RNS available to serve a large population of hospital in-patients and ambulatory care patients at chest clinics. Ward-based doctors and nurses referred patients with acute asthma to the RNS on an opportunistic basis, depending on the availability of the part-time RNS and whether the ward teams were aware of the developing RNS service. About 20% of all adult asthma patients admitted during the study period were referred to the RNS and most of these were recruited into the study (provided the RNS had time available to do so). 
Although recruitment was not systematic (being based mainly on availability of the RNS service), we were not aware of any potential bias which might have affected the recruitment process, and we believe that the recruited patients were typical of adult patients with acute asthma admitted to our hospital. We recruited ambulatory patients (some of whom had previous hospital admissions) at the same hospital chest clinic and in a single Primary Care practice, where 25 patients were recruited during routine consultations with the practice nurse. This recruitment was also opportunistic, based mainly on the availability of time to complete the study protocol during busy clinics. We believe that these patients were typical of patients seen at hospital chest clinics and General Practice asthma clinics in this area. Patients were grouped according to their lifetime history of hospital admissions for asthma (multiple admissions, single admission, or no admissions). Patients with no hospital admissions were further categorized according to the treatment steps described in the British Thoracic Society guidelines for asthma management: Step 1 requiring only occasional bronchodilator treatment, Step 2 requiring low doses (<800 mcg per day) of inhaled steroid, Step 3 requiring high doses of inhaled steroids or long acting beta agonists, and Step 4 requiring additional therapy such as domiciliary nebulized therapy [ 23 ]. We attempted to recruit approximately equal numbers of patients at BTS steps 1, 2 and 3–4 to allow analysis of mold sensitization according to severity in non-admitted patients as well as in admitted patients. All patients were Caucasians who were lifelong residents of the United Kingdom. Inclusion criteria were a diagnosis of asthma by the patient's doctor, age 16 to 60, and ability to give informed consent. All subjects gave written informed consent prior to partaking in the study, which was approved by the Salford Research Ethics Committee. 
Exclusion criteria included a diagnosis of COPD, non-European ethnic group (98% of Salford residents are Caucasian) and consumption of any antihistamine in the previous 48 hours. All subjects completed a questionnaire concerning respiratory symptoms, smoking status and allergies. All tests were conducted by one author (LCH) or by one other respiratory nurse specialist using a standardized technique in an open manner. A small drop of allergen was placed on the volar surface of the forearm. Allergens were purchased from Allergopharma (Reinbek, West Germany), a single batch of each allergen was used throughout the study. We used skin test lancets with a 1 mm tip (Bayer Prick Lancetter supplied by Miles Pharmaceutical Division, Spokane, Washington, USA). The lancet was introduced vertically into the skin through the allergen solution. Allergens studied were: negative control, histamine 0.1%, Dermatophagoides pteronyssinus , cat, dog, mixed grass pollen, Aspergillus fumigatus , Alternaria alternata , Cladosporium herbarum , Penicillium notatum and Candida albicans . Weal diameter (if any) was recorded at 15 minutes. If a weal was asymmetrical, the mean of two perpendicular measurements was calculated. Weals less than 3 mm greater than the negative control reaction were regarded as negative in accordance with guidance issued by the European Academy of Allergology [ 24 ]. We devised an arbitrary numerical "sensitization score" for mold sensitization and non-mold sensitization to compare the number and size of positive allergy tests between groups of patients. This "sensitization score" was the sum of all positive weal diameters for each individual patient after subtraction of the negative control. For example, a patient with a 1 mm reaction to negative control, a 6 mm reaction to Aspergillus and a 5 mm reaction to Cladosporium would have a "mold sensitization score" of (6-1)+(5-1) = 9 mm. 
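The "sensitization score" arithmetic just described is simple enough to sketch in code. The helper below is illustrative only (the function name and structure are ours, not part of the study protocol); it applies the 3 mm positivity cut-off before summing the control-subtracted weal diameters:

```python
def sensitization_score(weal_diameters_mm, negative_control_mm, cutoff_mm=3.0):
    """Sum of positive weal diameters after subtracting the negative control.

    A weal counts as positive only if it exceeds the negative control
    by at least cutoff_mm (3 mm, per the European Academy guidance cited
    in the text); smaller weals contribute nothing to the score.
    """
    score = 0.0
    for weal in weal_diameters_mm:
        excess = weal - negative_control_mm
        if excess >= cutoff_mm:
            score += excess
    return score

# Worked example from the text: 1 mm negative control, 6 mm Aspergillus,
# 5 mm Cladosporium -> (6 - 1) + (5 - 1) = 9 mm.
print(sensitization_score([6, 5], 1))  # -> 9.0
```

A weal of 3 mm against a 1 mm control would score zero, since the 2 mm excess falls below the 3 mm positivity threshold.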
As a subsidiary study (not part of the initial trial protocol), we also studied the seasonality of asthma admissions in Salford by reviewing the electronic records of 520 asthma admissions under the care of two pulmonary physicians who kept a computerized database at this hospital between 1995 and June 2000 (approximately 30% of all adult asthma admissions to the hospital). This covered a wide time-span before, during and after the period of the mold sensitization study; some of the allergy study patients were admitted and recruited during this time but they represented only a small (and random) proportion of the admissions studied for seasonality. Asthma admissions were analyzed by month of admission and divided into three four-month "seasons": March to June (spring and early summer, with maximum airborne levels of shrub, tree and grass pollens); July to October (late summer and fall, with maximum airborne levels of mold spores); and November to February (winter peak of general respiratory infections involving COPD and older asthma patients) [ 20 - 22 ]. Patients were analyzed in two age groups, as it is known that asthma admissions and asthma deaths in Britain are commonest in July to September for patients aged under 35, whereas older patients are more likely to suffer asthma death in winter [ 20 , 21 ]. We used two age bands (16–40 and >40) because our patients aged 35–40 had a seasonal profile for asthma admissions which was identical to the 16–35 group and different from the group aged above 40. Statistical analysis was performed using Prism II software (GraphPad Prism, San Diego, California, USA). The Chi squared test was used to compare the number of positive allergy skin tests between groups of asthma patients. The Spearman Correlation Coefficient was used to compare each patient's lifetime number of hospital admissions with their "Mold sensitization score" (as described above). 
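The month-to-season aggregation used in the seasonality analysis can be expressed as a small helper. This is an illustrative sketch (the analysis itself was done in Prism, not in code like this):

```python
def admission_season(month):
    """Map a calendar month (1-12) to the three four-month 'seasons'
    used in the seasonality analysis."""
    if 3 <= month <= 6:
        return "Mar-Jun"  # spring/early summer: shrub, tree and grass pollens
    if 7 <= month <= 10:
        return "Jul-Oct"  # late summer/fall: peak mold spore levels
    return "Nov-Feb"      # winter: peak of general respiratory infections

print(admission_season(8), admission_season(12))  # -> Jul-Oct Nov-Feb
```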
Mann Whitney tests were used to compare mean sensitization scores between groups of patients. Results One hundred eighty-one asthmatic patients were recruited. Their characteristics are given in table 1 . No subject was on antihistamine medication and none had a negative reaction to histamine. No subject had dermatographism (>3 mm reaction to negative control skin test). No eligible patient declined to partake in the study. There was a predominance of female subjects in all groups. Patients with hospital admissions were more likely to report current smoking than patients with no admissions (p = 0.03). The habit was commoner amongst patients with one admission than those with multiple admissions, but this difference was not significant (p = 0.18). Patients with multiple admissions were more likely to have developed their asthma in childhood and they had a stronger family history of asthma.
Table 1 Patient characteristics.
                                          No admissions   One admission   >1 admission
Number                                    82              53              46
  BTS step 1 (ref 23)                     32
  BTS step 2                              25
  BTS steps 3–4                           25
Percent male                              35%             25%             46%
Mean age (range)                          37 (16–59)      36 (17–58)      36 (16–60)
% Smokers                                 15%             34%             22%
% Ex-smokers                              29%             23%             17%
% Non-smokers                             56%             43%             61%
Asthma onset before age 16                44%             23%             70%
Family history of asthma in parents,
  siblings or children                    56%             55%             76%
Positive skin tests to all allergens were commoner in the group with severe asthma (multiple hospital admissions) than in patients with milder asthma (Table 2 and Figure 1 ). Atopic sensitization was common in all groups, especially the severe asthma group. This tendency was most marked for dog sensitization (Table 2 ). Dog ownership was 31% amongst patients with mild asthma (no admissions) and 30% amongst those with multiple admissions (who were more dog-allergic). Table 2 Prevalence of mold and non-mold sensitization in asthma patients. 
"Sensitization score" refers to the number and size of positive skin tests as defined in the methods section.
                                     No admissions        One admission        >1 admissions        Chi squared p value
Mold allergens
  Aspergillus                        7%                   6%                   37%                  <0.0001
  Alternaria                         5%                   6%                   26%                  <0.0001
  Cladosporium                       1%                   0%                   41%                  <0.0001
  Penicillium                        2%                   4%                   30%                  <0.0001
  Candida                            10%                  9%                   33%                  0.001
  Any mold sensitization             16%                  19%                  76%                  <0.0001
  >1 mold sensitisation              5%                   6%                   50%                  <0.0001
  Mean mold sensitization
    score (95% CI)                   0.9 mm (0.4–1.4)     0.9 mm (0.3–1.5)     6.7 mm (4.8–8.5)     Mann Whitney, see below
Other allergens
  D. pteronyssinus                   56%                  47%                  67%                  0.13
  Grass pollen                       46%                  38%                  63%                  0.025
  Cat                                37%                  36%                  59%                  0.029
  Dog                                18%                  19%                  48%                  0.005
  Any non-mold sensitisation         70%                  47%                  74%                  0.008
  >1 non-mold sensitisation          43%                  38%                  70%                  0.002
  Mean non-mold sensitization
    score (95% CI)                   8.6 mm (6.6–10.6)    7.0 mm (4.7–9.3)     14.5 mm (11.1–17.9)  Mann Whitney, see below
Mann Whitney analysis of "sensitization scores" between different categories of asthmatic patients:
Mold sensitization score: no admission v single admission, p = 0.87; no admission v multiple admissions, p < 0.0001; single admission v multiple admissions, p < 0.0001.
Non-mold sensitization score: no admission v single admission, p = 0.29; no admission v multiple admissions, p < 0.009; single admission v multiple admissions, p = 0.002.
Figure 1 Mean mold sensitization scores and mean non-mold sensitization scores for asthma patients and controls (mean and 95% CI). Clear bar: 82 asthma patients with no hospital admissions. Grey bar: 53 patients with one hospital admission. Striped bar: 46 patients with more than one hospital admission.
Mold sensitization was uncommon in mild asthma but very common in asthma patients with multiple admissions (Figure 1 and Table 2 ). There was a striking difference in the prevalence of mold sensitization amongst the three asthma groups. Three quarters of patients with multiple hospital admissions were sensitized to molds and half of them reacted to multiple mold allergens. 
The patients with a single hospital admission were more similar to those with no admissions than to the multiple admission group. This trend was seen for all five mold allergens studied (table 2 ). The frequency of sensitization to any individual mold ranged from 26% ( Alternaria ) to 41% ( Cladosporium ) in the severe asthma group compared with 0–10% in the milder asthma groups. Aspergillus and Candida precipitins and specific IgE were not measured in this study. None of these patients had any clinical features suggestive of allergic bronchopulmonary aspergillosis (ABPA) such as pulmonary infiltrates, bronchiectasis or marked eosinophilia. The number of admissions correlated with the number and size of positive skin tests using the scoring system described previously. For mold sensitization, the Spearman Correlation Coefficient was 0.60 (two-tailed p < 0.0001) and for non-mold allergens it was 0.34 (two-tailed p = 0.0005). The cumulative "mold sensitization score" and the "non-mold sensitization score" for each group of patients are shown in table 2 . Only two of the 99 patients with asthma admissions had ever required admission to an Intensive Care Unit. Both were sensitized to a single mold (one Aspergillus , one Penicillium ). Of the patients not admitted to hospital, 32 had very mild asthma (BTS Step 1), 25 had mild-moderate asthma (BTS Step 2) and 25 had moderate to severe asthma (BTS Steps 3–4). There was no significant difference in mold or non-mold sensitization between these groups of non-admitted patients with different grades of asthma severity. Our review of asthma admissions to this hospital between 1995 and 2000 identified 520 patients admitted under the care of the two chest physicians who kept a computerized database (approximately 30% of all asthma admissions to the hospital). There were 173 asthma admissions in the 16–40 age group; these admissions peaked in late summer and fall (figure 2 ). 
Of these admissions, 24.3% occurred between March and June, 43.4% between July and October and 32.4% between November and February. By contrast, the 347 asthma patients aged over 40 had a winter peak of admissions (30.3% March–June, 30.5% July–Oct, 39.2% Nov–Feb). These patterns of admissions were significantly different (Chi squared p < 0.02). The summer-fall peak in the 16–40 age group amounted to 33 additional admissions above the spring baseline. This represented 6.4% of all asthma admissions, or 19.1% of admissions in the 16–40 age group. Figure 2 Asthma admissions aggregated by "season", comparing the 16–40 age group (black bars) with age >40 (white bars). Discussion Although the present study was larger than most previous studies of mold sensitization in severe asthma, it must be regarded as a "pilot study" due to the non-systematic recruitment of asthma patients and the cross-sectional nature of the study. Our data indicate that mold (and dog) sensitization is common in patients with severe asthma requiring multiple hospital admissions in Manchester. The results are consistent with previous evidence that atopy (especially to mold allergens) is related to asthma severity or bronchial hyper-reactivity [ 4 - 6 , 12 - 17 ]. A recent cross-sectional study of 1132 adults with asthma found that sensitization to Alternaria or Cladosporium is a powerful risk factor for severe asthma [ 16 ] in several European countries and also in Australia, New Zealand and Portland, Oregon. The link between dog sensitization and asthma severity is in agreement with previous studies [ 5 , 14 ]. We had also expected to find an excess of house dust mite ( D. pteronyssinus ) sensitization in our patients with more severe asthma [ 15 , 25 ]. However, reactivity to this allergen was common in all asthma groups and only slightly commoner in patients with multiple admissions. There has been some debate about the best cut-off point for weal size to define a positive skin-prick test. 
We accepted the European Academy of Allergology figure of 3 mm greater than the negative control [ 24 ]. However, re-analysis of our data using a 2 mm or 4 mm difference from the negative control would make no difference to the results. Furthermore, the number and size of positive skin tests to mold allergens was greatest in patients with a high number of admissions, suggesting that the relationship is a genuine one. It was also notable that, although sensitization to non-mold allergens was common in the informal control population and in patients with mild asthma, mold sensitization was uncommon in these groups. This indicates that the positive skin tests to mold allergens are unlikely to be due to irritant reactions of a non-allergic nature. As skin test reagents from different manufacturers are not standardized, different results might be obtained with different manufacturers' reagents. Until such antigens are standardized, this remains unsatisfactory. However, the consistency between the present study and the recent European Community respiratory health survey (using different antigens) supports the validity of the association between mold sensitization and severe asthma. A key question is whether severe asthma is actually caused by sensitivity to molds or is simply associated with it. In any case, mold sensitivity will certainly not be the only cause of severe attacks of asthma; upper respiratory tract virus infections and some drugs are two of the other well-documented causes. The greater degree of mold sensitization in the severe asthma group could simply reflect an extreme example of the generalized increase in atopy amongst this group. However, we believe that mold allergy may be responsible for severe asthma attacks for several reasons. First, the temporal relationship between high environmental spore counts and asthmatic attacks is strong. Airborne spore levels may be up to 1000 times higher than pollen levels [ 26 ]. 
The data of Targonski and colleagues provide strong evidence that asthma deaths in Chicago are more likely to occur on days when local mold spore counts are high [ 7 ]. High mold spore counts have been associated with asthma admissions in New Orleans (adults) and in Derby, UK (adults and children) [ 10 , 11 ]. Asthma symptoms are increased in California and Pennsylvania on days when mold spore counts are high [ 8 , 9 ]. The young patients of O'Hollaren and colleagues who were Alternaria -sensitive had their near-fatal asthma episodes in summer and early fall, when mold spore levels would be expected to be high [ 6 ]. Second, the seasonal (summer-fall) peak of asthma admissions occurs when ambient air counts of molds are high. We have documented a late summer-fall peak of asthma admissions involving young adults in Manchester which coincides with the summer-fall peak of asthma deaths in UK patients aged under 35 years [ 20 , 21 ]. These asthma admissions also coincide with the peak months for outdoor levels of fungal spores [ 22 , 27 ]. Although there is no aero-biology service in Manchester, data for surrounding towns have shown a consistent summer-fall peak in mold spore counts. In Cardiff, for example, a city 150 miles south-west of Manchester with a similar climate, the highest spore counts were measured in late summer and fall [ 22 ]. The Cardiff authors reported maximal levels of Cladosporium in July, Alternaria and hyaline basidiospores in August, uredospores in September and coloured basidiospores in October. The data from Derby (52 miles south-east of Manchester) are similar [ 11 ]. In addition, similar findings have been reported from Copenhagen (600 miles north-east of Manchester), where 87% of the microfungal flora in outdoor air is accounted for by Cladosporium , Alternaria , Penicillium and Aspergillus , with maximal levels between June and October [ 27 ]. 
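The seasonal admission comparison reported in the Results (Chi squared p < 0.02) can be reproduced approximately from the published percentages. The counts below are reconstructed from those percentages (an assumption, since the paper reports proportions rather than raw counts), and the p-value uses the closed-form survival function of a chi-squared distribution with two degrees of freedom:

```python
import math

def chi2_statistic(table):
    """Pearson chi-squared statistic for a 2 x k contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Seasons: Mar-Jun, Jul-Oct, Nov-Feb; counts reconstructed from the
# reported percentages (24.3/43.4/32.4% of 173 and 30.3/30.5/39.2% of 347).
admissions = [
    [42, 75, 56],     # age 16-40, n = 173
    [105, 106, 136],  # age > 40,  n = 347
]
stat = chi2_statistic(admissions)
p_value = math.exp(-stat / 2)  # exact survival function for df = 2
print(round(stat, 2), round(p_value, 3))  # stat ~ 8.4, p ~ 0.015 (< 0.02)
```

The result is consistent with the p < 0.02 reported in the paper.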
A large multinational study, with centres across Europe and in Oregon (USA), Australia and New Zealand, involving questionnaires and skin prick tests in 17,000 patients, identified 1132 patients with asthma whose severity could be determined [ 16 ]. Sensitisation to A. alternata and C. herbarum was common and associated with asthma severity (OR 2.03 for A. alternata , 3.2 for C. herbarum and 2.34 for both). No such association was found for pollens or cats, although sensitization to house dust mite was slightly more frequent in those with severe asthma (OR 1.61) [ 16 ]. The present study extends these findings to a wider range of fungal allergens and a greater degree of asthma severity. Third, there is evidence that indoor mold exposure may contribute to asthma severity. Many patients report respiratory symptoms in damp and moldy houses, and a review of nine population-based studies found that seven reported one or more positive associations between fungal levels and health outcomes [ 28 ]. The study of Williamson and colleagues in Scotland reported that asthmatic patients were more than twice as likely as control patients to live in a house that was considered damp or moldy by a building surveyor [ 19 ]. Furthermore, in that study, there was a positive association between a patient's asthma severity and the degree of dampness which the surveyor measured in the patient's home (r = 0.3, p = 0.006) and, independently, with an index of visible mold growth in their dwelling (r = 0.23, p = 0.035). In a study of German children, it was found that bronchial hyper-reactivity was associated with damp housing [ 29 ], an association that was only partly explained by exposure to house dust mite antigen, suggesting that other factors such as mold growth may also be important. 
Taskinen et al found that the prevalence of asthma was similar (4.8%) amongst children attending a school with moisture and mold problems compared with a control school, but asthma symptoms such as wheeze and cough were commoner in the damp moldy school, as were emergency visits to hospital (OR 2.0, p < 0.01) [ 30 ]. Fourth, we know that Aspergillus in particular is a major respiratory allergen, causing the vast majority of cases of allergic bronchopulmonary mycosis [ 31 - 33 ]. It is likely that these cases represent the extreme of a spectrum of mold allergy, the slightly less severe manifestation of which is severe asthma as described in this study, without all the serological and radiological markers characteristic of ABPA. The reactivity of asthmatic patients to multiple mold allergens could be due to genuine sensitization to a variety of molds or it could be due to cross-reactivity between mold allergens. The paper of Hemmann and colleagues suggests that Aspergillus and Candida allergens may share IgE-binding epitopes [ 34 ]. However, it is believed that multiple mold sensitization skin test reactions are usually due to sensitivity to multiple antigens rather than cross-reactivity [ 25 ]. Few of the >1 million fungal species thought to exist worldwide have been subjected to the antigenic scrutiny applied to Aspergillus and a few other common airborne fungi, and it is likely that sensitization to other fungi will be discovered in the future. It is not known why mold allergens should produce more severe airway disease than other common allergens such as house dust mites, cat dander, or grass pollen. Fungi are very common in the environment and Candida is present in the gut of most, if not all, humans. The difference may relate to the nature or intensity of exposure to mold allergens or to their ability to become airborne and to gain entry to human airways due to their small size. 
Also, many potent allergenic proteins have been described in Aspergillus and some in other fungi [ 35 ]. Some fungal antigens, such as Asp f6 (a manganese-dependent superoxide dismutase which is closely related to the human enzyme), might set up a self-perpetuating allergic response that is aggravated every time Aspergillus is inhaled, which is almost hourly. Some fungal antigens are proteases (Asp f5, Asp f10, Asp f13, Asp f15, Asp f18) and, as DP1 is also a protease, a similar pathogenic role can be postulated [ 35 , 36 ]. It therefore seems likely that there is a causal relationship between mold allergy and asthma severity for some younger asthma patients. Our seasonal admissions data suggest that up to 6% of adult asthma admissions in Manchester, and 19% of asthma admissions in the 16–40 age group, may be attributable to mold allergy. These unlucky individuals seem to have more severe asthma than patients with sensitization to other common allergens and an increased risk of fatal or near-fatal asthma or hospital admissions during the mold spore season, especially on days when the local mold spore count is high. The risk may be further increased if the patient lives in a damp, moldy house or attends a damp, moldy school [ 19 , 29 , 30 , 37 ]. It is not yet known which mold species are most important in causing such reactions or whether indoor or outdoor mold exposure is more important (outdoor levels are usually higher) [ 38 ]. It is very difficult to make accurate measurements of indoor mold exposure and most studies have used surrogate markers such as dampness or visible mold growth. This phenomenon needs to be studied further in a variety of geographic locations with different climates. It is not yet known if environmental modification or the use of airway protection would be of any value in the management of mold-allergic asthma patients. 
One of the fungi which we investigated and for which antigenic extracts are available, Candida albicans , is a yeast; it does not become airborne but does produce hyphae in tissue. Thus it is possible that human asthma due to fungal allergens may have three sources of allergen exposure: outdoor mold spores, indoor mold spores and endogenous fungal growth on body surfaces including the skin and gut [ 33 ]. Fungal sensitization is relatively uncommon in our British asthma patients and in Finnish schoolchildren [ 30 ] compared with Arizona [ 12 ] and Australia, where up to 31% of asthmatic children and up to 23% of non-asthmatic controls react to at least one fungal allergen [ 15 , 17 , 37 ]. The prevalence of Alternaria sensitization in Italian patients with respiratory symptoms ranges from 2% in Northern Italy to 29% in Southern Italy [ 39 ]. Some of these differences are probably due to difficulties in the standardization of mold allergen extracts or skin testing techniques [ 26 ]. Also, fungal sensitization is commoner in children and declines with age. However, it is likely that the significance of a positive skin test to fungal allergens varies in different climatic zones. Our data and those of O'Hollaren, Zureik and Black [ 6 , 16 , 17 ] suggest that fungal skin sensitization tests may identify adults who are at risk of especially severe asthma. The difference in the prevalence of mold sensitization between patients with multiple asthma admissions and the other groups in our study was so striking that it is extremely unlikely to have occurred by chance. Mold allergy tests may be useful to screen children and adults who are at greatly increased risk of developing severe or fatal asthma. A large prospective study will be required to confirm these preliminary findings. Conclusion The findings of this study support previous suggestions that mold sensitization may be associated with severe asthma attacks requiring hospital admission. 
Abbreviations BTS Guidelines = British Thoracic Society Guidelines for Asthma Management, RNS = Respiratory Nurse Specialist Competing interests The author(s) declare that they have no competing interests. Authors' contributions ROD originated and co-ordinated the study and contributed to the analysis of the data and preparation of the paper. LCH contributed to the design of the study and was the main clinical investigator. She also contributed to the analysis of the data and preparation of the paper. DWD contributed to the design of the study and contributed to the analysis of the data and preparation of the paper. Pre-publication history The pre-publication history for this paper can be accessed here: | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC550663.xml |
503398 | Ciprofibrate therapy in patients with hypertriglyceridemia and low high density lipoprotein (HDL)-cholesterol: greater reduction of non-HDL cholesterol in subjects with excess body weight (The CIPROAMLAT study) | Background Hypertriglyceridemia in combination with low HDL cholesterol levels is a risk factor for cardiovascular disease. Our objective was to evaluate the efficacy of ciprofibrate for the treatment of this form of dyslipidemia and to identify factors associated with better treatment response. Methods Multicenter, international, open-label study. Four hundred and thirty seven patients were included. The plasma lipid levels at inclusion were fasting triglyceride concentrations between 1.6–3.9 mM/l and HDL cholesterol ≤ 1.05 mM/l for women and ≤ 0.9 mM/l for men. The LDL cholesterol was below 4.2 mM/l. All patients received ciprofibrate 100 mg/d. Efficacy and safety parameters were assessed at baseline and at the end of the treatment. The primary efficacy parameter of the study was percentage change in triglycerides from baseline. Results After 4 months, plasma triglyceride concentrations were decreased by 44% (p < 0.001). HDL cholesterol concentrations were increased by 10% (p < 0.001). Non-HDL cholesterol was decreased by 19%. A greater HDL cholesterol response was observed in lean patients (body mass index < 25 kg/m 2 ) compared to the rest of the population (8.2 vs 19.7%, p < 0.001). In contrast, cases with excess body weight had a larger decrease in non-HDL cholesterol levels (-20.8 vs -10.8%, p < 0.001). There were no significant complications resulting from treatment with ciprofibrate. Conclusions Ciprofibrate is efficacious for the correction of hypertriglyceridemia / low HDL cholesterol. A greater decrease in non-HDL cholesterol was found among cases with excess body weight. The mechanism of action of ciprofibrate may be influenced by the pathophysiology of the disorder being treated. 
| Background Hypertriglyceridemia in combination with abnormally low concentrations of HDL cholesterol (High Density Lipoprotein Cholesterol) is one of the most common and atherogenic profiles of lipid metabolism [ 1 ]. In the PROCAM study [ 2 ], the 6-year incidence of coronary events in men aged between 40 and 60 years with this lipid profile was twelve times higher than in the control group. The prevalence of this abnormality varies among ethnic groups [ 3 ]. It is found in 13% of the Mexican adults living in urban areas [ 4 ]. It is more common in men than in women (20.9% vs 7.2%) and in some age groups (i.e. men aged 30 to 39 years) this prevalence is as high as 30%. This lipid profile is the most frequent form of dyslipidemia in the metabolic syndrome [ 5 ]. However, it is also found in subjects affected by primary dyslipidemias (e.g. familial combined hyperlipidemia). In the Veteran Affairs HDL Intervention Trial (VAHIT), the use of a fibrate (gemfibrozil) resulted in a 22% reduction in the incidence of cardiovascular events in subjects with low HDL cholesterol and a broad range of triglyceride values [ 6 ]. The benefit was accounted for by the positive effects obtained in cases with insulin resistance [ 7 ]. In spite of these positive results, there are few studies assessing the efficacy of other fibrates in the treatment of this form of dyslipidemia [ 8 ]. Relevant data, such as the percentage of cases that achieve treatment goals, are not described in the majority of these reports. Also, variables predicting a greater likelihood of achieving treatment goals remain to be identified. Our objective was to assess the efficacy and safety of ciprofibrate (100 mg/day) for the treatment of cases with hypertriglyceridemia / hypoalphalipoproteinemia in an open-label, multicenter, international study. A clinically oriented approach is used for the description of the results. 
Materials and Methods The trial included men and post-menopausal or non-pregnant women aged between 30 and 70 years who had hypertriglyceridemia (fasting concentrations between 1.68–3.9 mM/l (150–350 mg/dl)) and hypoalphalipoproteinemia (HDL cholesterol ≤ 1.05 mM/l (40 mg/dl) for women and ≤ 0.92 mM/l (35 mg/dl) for men). The LDL cholesterol had to be lower than 4.2 mM/l (160 mg/dl). Patients were excluded if they had an acute coronary event during the three months preceding the study, type 1 diabetes, uncontrolled hypertension, severe renal dysfunction, nephrotic syndrome or aspartate aminotransferase (AST) or alanine aminotransferase (ALT) levels > 1.5 × the upper limit of normal (ULN), or if their creatine phosphokinase (CPK) levels were > 3 × ULN. Consumption of any lipid-altering drug within the previous 4 weeks (6 months for probucol) prevented entry into the study. Patients could be receiving other concomitant medication as long as the dosage was not modified during the study. The Ethics Committee of each institution approved the protocol and every patient provided witnessed, written informed consent prior to entering the study. This was a multicenter, international, open-label study. Patients were recruited from 25 lipid clinics in México (n = 152), Brazil (n = 129), Chile (n = 78) and Colombia (n = 78). Cases fulfilling the inclusion criteria were invited to participate. The initial visit included a medical evaluation, a physical examination and the prescription of an isocaloric diet consisting of 50% carbohydrate, 30% fat and 20% protein with a cholesterol content of 200 mg [ 9 ]. Blood samples were obtained after a 9–12 h fasting period. All patients were assigned to receive ciprofibrate 100 mg at bedtime. The second and final visit was scheduled 4 months later. During this visit drug compliance and safety profile were assessed and body weight as well as laboratory parameters were measured. Drug compliance was measured by counting the returned pills. 
Adherence to the diet was not assessed during the study. The primary efficacy parameter was the percentage change in triglycerides from baseline. Secondary efficacy parameters included the percentage change in total cholesterol, HDL cholesterol and non-HDL cholesterol from baseline. Non-HDL cholesterol was calculated by subtracting the HDL cholesterol from the total cholesterol levels. In a post hoc analysis, the percentage of cases that achieved the treatment goals proposed by the ATP-III recommendations [ 10 ] on the final visit was also estimated. At each visit, AST, ALT, fasting plasma glucose and CPK levels were measured. Clinically relevant complications were defined as either CPK > 5 × ULN accompanied by muscle pain, tenderness or weakness or ALT or AST > 3 × ULN. Patients were excluded from the study if they developed severe hyperglycemia or any other significant complication of treatment. Another reason for premature withdrawal was lack of compliance with the medication. Cases were instructed to contact their physician in case of any side effect. All samples were analyzed in a central laboratory (Quest laboratories). In Brazil, the local laboratory of every center was used instead. Blood samples were taken after an overnight fast (≥ 9 hours). Measurements were performed during the first 24 hours after the blood was drawn; blood samples were kept at 4°C until the analysis. All laboratory analyses were performed with commercially available standardized methods. Glucose was measured using the glucose oxidase method. Total serum cholesterol and triglyceride levels were measured using an enzymatic method. HDL cholesterol levels were assessed using phosphotungstic acid and Mg2+. Statistical analysis was performed using SPSS for Windows version 10. An intention-to-treat analysis was used. Two-sided ANOVA tests were used for assessing differences between groups for continuous variables. All categorical variables were analyzed using the chi-squared test. 
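The derived non-HDL cholesterol measure described above is simple arithmetic; a minimal sketch (the function names and the rounded unit-conversion factor of 38, taken from the paper's table footnotes, are our illustration, not study code):

```python
def non_hdl_cholesterol(total_cholesterol, hdl_cholesterol):
    """Non-HDL cholesterol = total cholesterol minus HDL cholesterol.
    Both inputs must be in the same units (e.g. mM/l)."""
    return total_cholesterol - hdl_cholesterol

def cholesterol_mmol_to_mgdl(mmol_per_l):
    """Convert a cholesterol concentration from mM/l to mg/dl using the
    paper's rounded factor of 38 (the exact factor is ~38.67)."""
    return mmol_per_l * 38

# Baseline means reported in Table 2: total cholesterol 5.5 mM/l, HDL 0.91 mM/l
baseline_non_hdl = non_hdl_cholesterol(5.5, 0.91)  # about 4.6 mM/l
```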
Multiple logistic regression models were constructed for the identification of variables associated with the achievement of treatment goals. Results Four hundred and thirty-seven patients were included. The clinical characteristics of the study subjects are shown in table 1 . Almost half of the population had a body mass index between 25 and 30 kg/m 2 (n = 221); an additional 32.5% were obese (n = 142). Diabetes was present in 125 subjects (28.7%). Table 1 Baseline characteristics of the patients included in the study (n = 437) Variable N = 437 Sex Male (n(%)) 239 (54.7) Female (n(%)) 198 (45.3) Age (years)* 54.7 ± 12.1 Body mass index (kg/m 2 ) 28.8 ± 4.6 Diabetes (n(%)) 125 (28.7) Fasting plasma glucose (mM/l)* 5.2 ± 0.6 Family history of dyslipidemia (n(%)) 171 (39.2) Coronary heart disease (n(%)) 196 (45) Aspartate aminotransferase (mU/l) 23 ± 11 Alanine aminotransferase (mU/l) 26 ± 14 Data expressed as mean ± standard deviation. *To convert to mg/dl, multiply by 18 Both evaluations were completed in every case. The medication was stopped before the trial was completed by 117 subjects (26.7%). In the majority of cases this was not related to side effects (see below). In addition, 46 cases (10.5%) had poor compliance with the medication. Body weight remained constant in all patients. Alcohol and tobacco consumption was not modified during the study. Ciprofibrate treatment and diet had a significant beneficial effect on the lipid profile, as shown in table 2 . After 4 months of treatment, plasma triglyceride concentrations were decreased by 44% (p < 0.001). HDL cholesterol concentrations were increased by 10.1% (p < 0.001). Non-HDL cholesterol was decreased by 19.2% (p < 0.001). Total cholesterol was also favorably modified (-14.9%, p < 0.001). In contrast, LDL cholesterol had only a minor modification (-5.4%, p < 0.001). A significant decrease in fasting glycemia was observed in both obese and diabetic cases. This change was not found in lean subjects. 
Table 2 Changes in the lipid profile and clinical characteristics between baseline and post-treatment values Intention to treat analysis N = 437 Baseline Final Percent change Triglycerides (mM/l) † 3.01 ± 0.7 1.61 ± 0.8 -44 ± 33* HDL Cholesterol (mM/l) ‡ 0.91 ± 0.1 0.98 ± 0.4 10 ± 52* Non-HDL cholesterol (mM/l) ‡ 4.57 ± 0.9 3.61 ± 1.5 -19 ± 36* Cholesterol (mM/l) ‡ 5.5 ± 0.9 4.57 ± 1.8 -14.9 ± 35* LDL Cholesterol (mM/l) ‡ 3.1 ± 0.9 2.8 ± 1.3 -5.4 ± 59* Mean ± standard deviation are presented. *p < 0.001 † To convert to mg/dl multiply by 89 ‡ To convert to mg/dl multiply by 38 The achievement of treatment goals is the ultimate aim of lipid-lowering therapy. Almost half of the cases had reduced their triglyceride concentrations below 1.68 mM/l (150 mg/dl) (n = 191, 43.7%). HDL cholesterol levels above 1.05 mM/l (40 mg/dl) were found in 51% of the cases (n = 223). Also, a significant proportion of the subjects (63.2%, n = 276) achieved the non-HDL cholesterol goal of < 4.2 mM/l (160 mg/dl). The LDL cholesterol goal of < 3.4 mM/l (130 mg/dl) was attained by 56.2% (n = 246). A full correction of the hypertriglyceridemia / low HDL cholesterol occurred in 129 subjects (29.5%). Many of them also had a non-HDL cholesterol level below 4.2 mM/l (160 mg/dl) (n = 101, 23.1%). The lipid response during treatment differed between cases with a body mass index above or below 25 kg/m 2 . As is shown in figure 1 , the percentage change in HDL cholesterol was higher in lean subjects. In contrast, the non-HDL cholesterol concentration had a significantly greater reduction among subjects with excess body weight. Both differences remained significant even after adjusting for age and gender. The lipid profile did not differ between these groups during the initial visit. Figure 1 The percentage change in the lipid parameters is different in cases with a body mass index above 25 kg/m 2 compared to the response observed in lean individuals during treatment with ciprofibrate and diet. 
Patients with diabetes had moderate hyperglycemia during the study. Their fasting glycemia was 7.7 ± 2.6 mM/l (139 ± 47 mg/dl) at baseline. A small but statistically significant decrease in glucose concentration was observed at the end of the trial (5.7 ± 3 mM/l, 104 ± 54 mg/dl, p < 0.001). At baseline, their lipid profile differed from that of the non-diabetic subjects only with regard to higher triglyceride concentrations (3.13 ± 0.97 vs 2.95 ± 0.66 mM/l, p < 0.001). The lipid response to ciprofibrate and dietary treatment did not differ from that observed in the whole group. Individuals with a family history of dyslipidemia (n = 171) had higher cholesterol, triglycerides and LDL cholesterol at baseline compared to the rest of the population. The lipid response to treatment was similar to that reported in the whole group. Multiple regression models were constructed to identify variables associated with the achievement of the treatment goals. For every target value, the main determinants were the baseline value and the percentage change after treatment. No other parameter provided additional information in any of the models. Thus, we analyzed the variables associated with the percentage change during treatment. The non-HDL cholesterol model provided more information compared to models derived from the other lipid parameters. In the non-HDL cholesterol model, a body mass index greater than 25 kg/m 2 , the triglyceride response and the coexistence of cholesterol above 5.2 mM/l (200 mg/dl) (mixed hyperlipidemia) were predictors of a greater non-HDL cholesterol response (table 3 ). For both the HDL cholesterol and the triglyceride models, the inclusion of a body mass index < 25 kg/m 2 added little information and had only borderline statistical significance. No other clinically relevant variable was associated with the percentage change of any other lipid parameter. 
Table 3 Multiple regression model using the percent change in non-HDL cholesterol concentration as the dependent variable Variable Beta coefficient ± standard error p value Constant 14.4 ± 2.6 < 0.001 BMI < 25 kg/m 2 7.6 ± 3.6 0.036 Mixed hyperlipidemia 0.82 ± 0.2 0.003 Percent of change of triglycerides 0.68 ± 0.4 < 0.001 Correlation coefficient = 0.647, r 2 = 0.419, p = 0.001 BMI, Body mass index Liver function tests were not modified by the treatment. No patient had a significant alteration of any of the laboratory tests. There were no incidents of either myopathy or liver dysfunction. No persistent elevations in ALT, AST or CPK, defined as clinically relevant, were reported during the course of the study. Discussion The combination of hypertriglyceridemia / low HDL cholesterol is a common abnormality of lipoprotein metabolism and is associated with increased cardiovascular risk. Our data show that ciprofibrate and an isocaloric diet are an effective treatment for this dyslipidemia. However, there was significant variation in response to treatment between individuals. Excess body weight may be an important determinant of the lipid response. It is associated with a greater degree of non-HDL cholesterol reduction and a relatively smaller elevation in HDL cholesterol. A significant improvement in plasma triglyceride and HDL cholesterol concentrations resulted from the administration of ciprofibrate and dietary modifications. Our results are in agreement with previous studies [ 11 - 15 ]. This study differs from previous reports due to its design. We wanted to assess the lipid response to ciprofibrate in a real-life environment. Hence, the highly controlled conditions of a randomized, double-blind study were avoided. Also, we limited our inclusion criteria to subjects with hypertriglyceridemia / low HDL cholesterol, instead of including subjects with a wide variety of lipid profiles. The results are in accordance with the uncontrolled design of the study. 
With the use of ciprofibrate and an isocaloric diet, almost half of the cases achieved the triglyceride goal (1.68 mM/l, 150 mg/dl). HDL cholesterol levels above 40 mg/dl were found in 51% of the cases. The full correction of the combination of hypertriglyceridemia / low HDL cholesterol occurred in a third of the population. This rate is similar to that reported for the LDL cholesterol goals achieved by the use of statins. Thus, our results reflect the strengths and limitations of treating this lipid abnormality in an uncontrolled setting. A large range of lipid responses was observed between individuals. There are few reports designed to analyze the determinants of the lipid response to fibrates. Robins reported a lower HDL cholesterol elevation with a fibrate when insulin resistance is present [ 7 ]. In this report, a lower HDL cholesterol response was observed in patients with excess body weight (body mass index above 25 kg/m 2 ) in comparison to that found in lean individuals. Since excess body weight is strongly associated with insulin resistance [ 16 , 17 ], our observations may be in agreement with the findings of Robins. Interestingly, the presence of insulin resistance was associated with the lowest incidence of coronary events in the VAHIT study. In our report, obese individuals had a larger decrease in non-HDL cholesterol levels. The greater response in non-HDL cholesterol observed in our obese (and possibly insulin-resistant) subjects may be one possible explanation for the greater benefit found in insulin-resistant subjects during the VAHIT study. Our data suggest that the mechanism of action of ciprofibrate may be altered by the pathophysiology of the disorder being treated. The same phenomenon has been observed with the use of statins [ 18 , 19 ]. The greater reduction in non-HDL cholesterol in subjects with excess body weight could be explained by an increased clearance of remnants, IDL and VLDL particles. 
Ciprofibrate may enhance their clearance either by decreasing the concentration of the apolipoprotein CIII (an inhibitor of the lipolytic activity of the lipoprotein lipase) [ 20 - 22 ] or by increasing the mass and activity of lipoprotein lipase [ 23 ]. Genes encoding apolipoprotein CIII and lipoprotein lipase contain a PPAR-alpha response element; hence their expression may be modified during treatment with a fibrate [ 24 ]. Additional studies are needed to identify other possible determinants of the lipid response to treatment with a fibrate. Several limitations of the study must be recognized. The uncontrolled design resulted in a relatively high rate of drug discontinuation. However, this phenomenon is a common finding in studies done in open populations assessing the adherence to different lipid-lowering medications [ 25 ]. To overcome this limitation, an intention to treat analysis was used. Also, the lack of a run-in period in which the effect of diet could be measured and the absence of information about the adherence to the diet prevented us from discerning to what extent the observed result was due to fibrate alone. Finally, some of the conclusions, like the identification of the determinants of the lipid response, came from a post hoc analysis. In conclusion, ciprofibrate is effective in the treatment of patients with hypertriglyceridemia / low HDL cholesterol. Significant reductions in triglycerides and non-HDL cholesterol resulted from ciprofibrate therapy. In addition, higher HDL cholesterol levels were found at the end of the treatment. Excess body weight alters the lipid response to ciprofibrate. A greater non-HDL cholesterol lowering is achieved in subjects with excess body weight compared to that found in lean individuals. Controlled trials are needed to compare the lipid-lowering effects of ciprofibrate in groups of subjects defined by their adiposity or other markers of insulin resistance. 
Competing interests This study was supported by an educational grant provided by Sanofi-Synthelabo. It included the study expenses and it will cover the article processing charge. No other competing interests need to be declared. Author contributions CAAS participated in the design of the study, performed the statistical analysis and drafted the manuscript. AALV participated in the design of the study and in the preparation of the manuscript. FJGP participated in the design of the study and in the preparation of the manuscript. All other authors were responsible for the inclusion and follow-up of the study subjects. All authors read and approved the final manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC503398.xml |
555848 | Health-state valuations for pertussis: methods for valuing short-term health states | Background The incidence of reported adolescent and adult pertussis continues to rise in the United States. Acellular pertussis vaccines for adolescents and adults have been developed and may be available soon for use in the U.S. Our objectives were: (1) to describe patient valuations of pertussis disease and vaccination; and (2) to compare valuations for short-term and long-term health states associated with pertussis. Methods We conducted telephone surveys with 515 adult patients and parents of adolescent patients with pertussis in Massachusetts to determine valuations of pertussis-related health states for disease and vaccination using time trade-off (TTO) and contingent valuation (CV) techniques. Respondents were randomized to complete either a short-term or long-term TTO exercise. Discrimination between health states for each valuation technique was assessed using Tukey's method, and valuations for short-term vs. long-term health states were compared using the Wilcoxon rank-sum test. Results Three hundred three (59%) and 309 (60%) respondents completed and understood the TTO and CV exercises, respectively. Overall, respondents gave lower valuations (lower TTO and higher CV values) to avoid a given state for adolescent/adult disease compared to vaccine adverse events. Infant complications due to pertussis were considered worse than adolescent/adult disease, regardless of the method of valuation. The short-term TTO resulted in lower mean valuations and larger mean differences between health states than the long-term TTO exercise. Conclusion Pertussis was considered worse than adverse events due to vaccination. Short-term health-state valuation is better able to discriminate among health states, which is useful for cost-utility analysis. 
| Background The incidence of reported pertussis continues to rise in the United States despite high levels of childhood vaccination [ 1 , 2 ]. Waning immunity is thought to contribute to the particularly steep rise seen among adolescents and adults over the past two decades [ 3 , 4 ]. Acellular pertussis booster vaccines have been developed already and recommended for use in several other countries including Canada, France, Germany, and Australia [ 5 - 7 ]. A combined booster (TdaP) also may become available soon for use in the U.S. Recently completed clinical trials suggest that the booster may prevent cough illness related to pertussis among adolescents and adults [ 8 , 9 ]. Though such illness does not result in mortality in this age group, it can be prolonged and associated with significant complications such as pneumonia or urinary incontinence [ 10 , 11 ]. However, implementation of a vaccination program for adolescents and/or adults would carry a significant cost. Policymakers will need to decide whether or not to recommend use of a vaccine where the health benefits to adolescents and adults are reductions in short-term morbidity, rather than mortality, and the health risks include adverse events from vaccination. Thus, further information regarding the relative valuations by patients of different potential consequences should be considered. Quantifying patient preferences is relevant to decisions about allocation of limited resources and is needed to assess the cost-effectiveness of vaccination in comparison to other well-accepted health interventions [ 12 ]. Methods commonly used for measuring health-state valuations include contingent valuation (CV) and time trade-off (TTO) [ 13 ]. Contingent valuation is an economic approach to valuing different outcomes using monetary value (e.g. 
willingness-to-pay, WTP) as a common metric; for example, the relative amounts that individuals would pay to avoid one health state or another may be interpreted as measures of their strength of preferences for time spent in these different states. An advantage to this approach is that respondents may find it relatively easy to value short-term health states in monetary terms since they are accustomed to assessing the dollar value of goods and services in everyday transactions. However, some outcomes are difficult to quantify using contingent valuation, e.g. how much a person is willing to pay to avoid death. Additionally, CV may be subject to anchoring effects and income effects [ 14 ]. Another common approach to measuring the benefits and harms of health interventions relies on health-state utilities. Utilities measure a person's preferences for specific outcomes on a scale of 0 to 1, on which 0 typically represents a state equivalent to death while 1 represents the best imaginable health. The time trade-off method is one of several approaches used to assess health utilities. Using the TTO method, respondents are asked how much longevity they would be willing to give up, if any, to avoid living with a particular health outcome. Traditionally, TTO questions have been framed as giving up time to avoid a long-term or chronic health state [ 15 ]. However, for many common health problems, including those caused by infections, the duration of the relevant health states is limited, not permanent. A more realistic approach would be to frame these conditions as short-term health states. We conducted a survey using TTO and CV methods to determine the health-state valuations of adult patients and parents of adolescent patients diagnosed with pertussis. We compared two alternative approaches to framing TTO questions, based on either short-term or long-term health states. 
We hypothesized that framing questions as short-term rather than as long-term health outcomes would allow for better discrimination between states. Methods Study participants Structured telephone interviews were conducted with adult patients (≥ 18 years) and parents of adolescent patients (11–17 years) diagnosed with confirmed pertussis in Massachusetts from December 1, 2001 to January 31, 2003 [ 11 ]. There were 800 cases of confirmed pertussis among adolescents and adults during this time period, and 517 (65%) respondents completed the telephone interviews, although two were excluded because the wrong health-state valuation survey was administered. Interviews included questions about medical and non-medical costs of illness and questions regarding health-state valuations for pertussis disease and vaccination. There were no significant differences in age, gender or race/ethnicity between respondents and all confirmed cases during the enrollment period. The study was reviewed and approved by the Institutional Review Boards of Children's Hospital Boston, Harvard Pilgrim Health Care, Massachusetts Department of Public Health, and Centers for Disease Control and Prevention. Survey protocol Descriptions of the health states were derived with input from 3 pertussis experts (Table 1 ). Adults were asked questions about themselves while parents were asked to respond in reference to outcomes in their adolescents. We also asked both adults and parents of adolescents to value the prevention of infant health states (respiratory complications, neurologic complications) due to pertussis. All surveys included open-ended TTO and CV questions; in other words, respondents were asked once about the maximum amount of longevity they would give up, or the maximum amount of money they would be willing to pay, to avoid the health outcome in question. We chose the open-ended format [ 16 - 19 ] due to the large number of items evaluated and for ease of administration by telephone. 
Additionally, prior methodologic work on open-ended CV techniques has demonstrated similar results to the commonly used but more intensive alternative involving dichotomous-choice questions [ 18 ]. Table 1 Health-state descriptions for outcomes associated with disease and vaccination. Health states Description Local reaction A sore upper arm that is slightly red, swollen, and tender after receiving a vaccination Systemic reaction Low-grade fevers, headache, body ache, and decreased energy after receiving a vaccination Mild cough Coughing attacks that last for 1–2 minutes at a time and occur up to 8–10×/day. These coughing attacks wake you up at night several times a week, but you otherwise feel well between coughing attacks. Severe cough A cough that is so frequent and severe that it causes vomiting at least several times a week, difficulty eating or drinking, and difficulty sleeping every night. Pneumonia A severe cough with high fevers, chills, fatigue, and shortness of breath Respiratory complications (apnea and cyanosis) A 1-month-old baby that has coughing episodes so hard that he/she stops breathing and turns blue for 10–15 seconds. These episodes happen 8 to 10 times a day and the baby needs to be hospitalized, but is completely healthy afterwards. Neurologic complications (seizures and encephalopathy) A 1-month-old baby with seizures or convulsions. The seizures cause brief periods of being unconscious and the baby's arms and legs shake. They can last for up to 5 minutes at a time and happen several times a day. The baby needs to be hospitalized, but is completely healthy afterwards. For TTO questions, respondents were asked the maximum amount of time they would be willing to trade from the end of their lives to avoid a particular health outcome now. 
Adults were asked how much time they would give up from the end of their lives to avoid living in a particular health state themselves, while parents were asked how much time they would give up from the end of their own lives to avert the health state in the adolescent [ 17 , 20 ]. This approach was adopted after pre-testing the survey instrument with parents, who were more willing to answer questions about trading time from their own lives than from their children's lives. For infant health states, both sets of respondents were asked to give up time from the ends of their own lives to avoid a long-term or short-term health state in an infant. Although TTO questions traditionally have been framed using permanent health states for valuations of chronic disease [ 15 ], the health states associated with pertussis and vaccination are limited, lasting anywhere from days to weeks. Thus, asking respondents to imagine that they had to live for the rest of their lives with an infection or vaccine adverse event is not realistic. In order to address this concern and to test the hypothesis that framing the question as a short-term health state would significantly alter the TTO response, we created 2 versions of the survey – one with short-term and one with long-term (permanent) health states. Long-term health states were described as lasting for the lifetime of the infant, adolescent, or adult. Short-term health states were described as lasting for a duration of 8 weeks for the infant, adolescent or adult. We chose a constant duration for the short-term health states to ensure consistent responses regarding rank order and comparability of health states. In order to determine which version of the survey the parent or adult would receive, we used a random number generator to assign interviews to respondents once they consented to participate in the study. 
CV questions elicited the amount of money that a respondent would be willing to pay to avoid living in a particular health state for 8 weeks. We chose to frame the CV questions as short-term health states for both versions of the survey based on prior work [ 21 , 22 ]. Respondents were instructed not to consider any money lost from missed work or any co-payments that would be required. Telephone interviews were conducted in English or Spanish using standardized forms. Some respondents either were unable to complete or refused to answer the entire set of TTO questions or CV questions. If any answers were missing within a set of questions, respondents were excluded from analyses (of that set) in order to assess population means. If the respondent completed either set of questions, trained interviewers judged how well the respondent understood the TTO or CV questions separately based on a 3-point scale (good understanding, some understanding, limited understanding). Respondents were excluded from further analyses if they were thought to have either some or limited understanding of the tasks presented [ 23 ]. Calculation of utilities – long-term states We calculated utilities based on the TTO exercise, under alternative assumptions about discounting of future health outcomes. For long-term health states, the utility was based on the proportion of time that the respondent would be willing to give up to avoid a lifetime health state for themselves, for their adolescent child, or for a hypothetical infant (Figure 1 ). Life expectancy (LE) was calculated using age- and sex-specific cohort life tables [ 24 ]. In the cases of adolescents and infants, since the time given up would come from the adult respondent's lifespan, while the healthy time gained would accrue to the adolescent or infant, the computations required LE estimates for both the respondent and the beneficiary in the trade-off. 
Figure 1 Conceptual model for calculating utilities for long-term and short-term health states for adults and adolescents or infants. In each of the four panels, the top bar indicates the amount of longevity that is given up from the end of the respondent's life in each of the trade-offs, and the bottom bar indicates the averted duration of time in a given health state. Abbreviations: TTO, time trade-off; LE, life expectancy. It is most straightforward computationally to start with the disutility, rather than the utility, of a particular health state, computed as the ratio of the duration of life that would be given up (to avoid the lifetime health state) to the expected duration of time lived in the health state. In the absence of discounting, the disutility was calculated simply by dividing the amount of time traded from the end of the respondent's life by the LE of the beneficiary. The utility was then calculated by subtracting this result from one. With discounting, we assumed that individuals compared the present values of the two different streams of life in the trade-off, in a way that reflects declining relative weight for future consequences, and we computed utilities based on a discount rate (r) of 3% per year [ 12 ]. As empirical studies on time preference have reported a range of discount rates [ 25 - 27 ], we also examined the sensitivity of our findings to alternative assumptions about discounting, including discount rates of 5% and 10% (as well as the 0% rate implied by the no-discounting case). 
Using the formula for discounting a continuous stream of life [ 28 ], we obtained the present value of future time traded from the end of life (the numerator in the disutility calculation) by taking the difference between the discounted stream of normal life expectancy for the respondent and the discounted stream of shortened life expectancy: (1/r) × (1 - e^(-r × LE of respondent)) - (1/r) × (1 - e^(-r × (LE of respondent - years of life traded))). The present value of the current life expectancy for the beneficiary (the denominator) was (1/r) × (1 - e^(-r × LE of beneficiary)). The disutility for a given health state was computed as the ratio of the two quantities, as in the undiscounted case, and the utility was computed by subtracting the ratio from one. For adult valuations, the respondent and beneficiary were the same. For parents of adolescents or for respondents considering a hypothetical infant, the numerator was based on years traded from the respondent's life, while the denominator was based on the life expectancy of the adolescent or infant. Calculation of utilities – short-term states Utilities were calculated for the short-term health states in an analogous fashion, except that time from the end of the life of the respondent was traded to avoid 8 weeks of illness in the present time for the respondent, for the adolescent child, or for a hypothetical infant (Figure 1 ). 
The numerator was calculated in the same way as for the long-term states, assuming a 3% discount rate in the baseline analysis (and alternatives in sensitivity analysis):

(1/r) * (1 - e^(-r * LE of respondent)) - (1/r) * (1 - e^(-r * (LE of respondent - years of life traded)))

For the denominator, discounting would have minimal impact because the duration considered is only 8 weeks and begins at the present, but we nevertheless converted this duration to its present value for consistency:

(1/r) * (1 - e^(-r * (8/52)))

Again, the disutility was the ratio of these two quantities, and the utility was computed by subtracting the ratio from one.

Statistical analysis

Utilities and WTP values are presented as means (with standard deviations) and medians (with interquartile ranges). We assumed that the maximum amount of discounted time traded from the end of the respondent's life could not exceed the duration of the present health state; thus, any utilities that would be negative based on the computations described above were instead set to 0. For parent respondents who were asked how much time they would trade to avoid long-term health states in their adolescents, we used interval regression with left censoring to calculate mean utilities [ 29 ]. In interval regression, when parents were willing to trade off their full life expectancy to avoid a lifetime health outcome in their child, we treated this observation as providing only partial information about the amount that parents would give up, since they were limited by their life span – which was always shorter than the lifespan of the beneficiary adolescent. These observations were assumed to indicate a range of time spanning between the longevity of the parent and that of the adolescent. Interval regression was used to limit bias resulting from this constraint. When parents traded off less than their full life expectancy, interval regression was equivalent to ordinary least squares regression.
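The discounted calculations above can be sketched with a small helper for the present value of a continuous stream of life; all life expectancies and amounts traded below are hypothetical. At r = 0 the present value of a stream is just its duration, so the formula reduces to the undiscounted case.

```python
import math

def pv_stream(duration, r):
    """Present value of a continuous stream of life of the given duration at rate r."""
    if r == 0:
        return duration
    return (1 / r) * (1 - math.exp(-r * duration))

def long_term_utility(le_respondent, years_traded, le_beneficiary, r=0.03):
    # Numerator: discounted value of the time given up from the end of the respondent's life.
    numerator = pv_stream(le_respondent, r) - pv_stream(le_respondent - years_traded, r)
    # Denominator: discounted life expectancy of the beneficiary.
    disutility = numerator / pv_stream(le_beneficiary, r)
    return max(0.0, 1.0 - disutility)  # negative utilities are set to 0

def short_term_utility(le_respondent, years_traded, r=0.03):
    # Same numerator; the denominator is the 8-week (8/52-year) illness, discounted.
    numerator = pv_stream(le_respondent, r) - pv_stream(le_respondent - years_traded, r)
    disutility = numerator / pv_stream(8 / 52, r)
    return max(0.0, 1.0 - disutility)

# With no discounting, the long-term utility reduces to 1 - years_traded / LE:
print(long_term_utility(40, 5, 40, r=0))       # 0.875
print(round(long_term_utility(40, 5, 40), 3))  # higher at r = 0.03
```

Because discounting shrinks the present value of end-of-life time relative to time lived now, the same trade yields a higher utility at a positive discount rate.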
For infant health states, an analogous approach based on interval regression was applied. To compare demographic characteristics of respondents for the short-term vs. long-term TTO surveys, we used the chi-squared test for categorical variables and the t-test for continuous variables. To determine if mean health state utilities and WTP values were significantly different from one another, we used Tukey's method, a non-parametric test that allows for multiple pairwise comparisons assuming all sample sizes are equal [ 30 ]. Comparison of utilities for short-term vs. long-term health states was performed using the Wilcoxon rank sum test based on 2 independent samples [ 30 ]. Spearman rank correlation was used to determine associations between demographic characteristics and TTO or CV responses [ 30 ]. A p-value of <0.05 was considered statistically significant.

Results

Respondent characteristics

Five hundred fifteen adult pertussis patients and parents of adolescent pertussis patients were eligible and participated in the survey (Figure 2 ). Characteristics of the respondents are described in Table 2 . There were no significant differences between respondents who received either form of the survey. Overall, 303 (59%) respondents completed and understood the TTO portion of the survey and 309 (60%) respondents completed and understood the CV portion of the survey. When response rates of parents and adults were compared, we found no significant differences in response rates to the short-term TTO questions (p = 0.28); however, adults were significantly more likely than parents of adolescents to respond to the long-term TTO questions (p = 0.006). Other respondents were not included in the analysis because: (1) the TTO (27%) or CV (22%) survey was not completed; (2) one or more answers within a set of TTO (9%) or CV (12%) questions were not completed; or (3) respondents completed but were thought not to understand the TTO (6%) or CV (5%) exercise.
Figure 2 Study enrollment. Percentages indicate the proportion of respondents who were given the survey that completed the entire set of questions and understood the TTO and WTP exercises. Abbreviations: TTO, time trade-off; WTP, willingness to pay.

Table 2 Characteristics of respondents interviewed using short-term TTO (N = 267) vs. long-term TTO (N = 248).*

Characteristic                         Short-term TTO   Long-term TTO   P-value
Mean age of respondent [range]**       42.4 [18–87]     41.7 [18–81]    0.49
Gender of respondent
  Female                               207 (78%)        205 (83%)       0.35
  Male                                 56 (21%)         40 (16%)
  Not available                        4 (2%)           3 (1%)
Race/ethnicity of respondent
  White                                238 (89%)        219 (88%)       0.67
  Black                                7 (3%)           4 (2%)
  Hispanic                             11 (4%)          15 (6%)
  Other or unknown                     11 (4%)          10 (4%)
Educational level of respondent
  Up to high school                    67 (25%)         52 (21%)        0.48
  Up to college or technical school    136 (51%)        143 (58%)
  >College                             60 (22%)         49 (20%)
  Refused to answer                    4 (2%)           4 (2%)
Annual household income
  <$20,000                             28 (10%)         26 (10%)        0.79
  $20,000–49,999                       55 (21%)         53 (21%)
  $50,000–79,999                       52 (19%)         58 (23%)
  ≥ $80,000                            98 (37%)         80 (32%)
  Refused to answer                    34 (13%)         31 (13%)

*Numbers may not add up to 100% due to rounding. **Missing ages for 8 parents of adolescents.

We compared demographic characteristics of respondents who completed and understood the survey with those who did not. Parents of adolescents with higher household incomes (p = 0.022) and higher educational levels (p = 0.017) were more likely to complete and understand the CV survey. Also, parents who were white (p = 0.011) with higher educational levels (p = 0.010) were more likely to complete and understand the TTO survey. Adult respondents who completed and understood the CV and TTO survey were significantly younger (p = 0.006 for CV; p = 0.025 for TTO) and had higher educational levels (p = 0.005 for CV; p = 0.012 for TTO) than those who did not.

Adolescent health states

CV and TTO responses for short-term and long-term health states for adolescents are described in Table 3 .
Based on mean utilities, parents of adolescents ranked the following long-term health states from best to worst: local reaction, systemic reaction, mild cough, severe cough, and pneumonia. Short-term health state rankings were similar, except mean utilities for severe cough and pneumonia were equivalent. For both short-term and long-term health states (Figure 3 ), we found significant differences in mean utilities (zero not included in the confidence interval) for most pairwise comparisons (Tukey's method, p < 0.05). However, the mean utilities for health states that ranked close to each other were not significantly different, such as local reaction vs. systemic reaction, systemic reaction vs. mild cough, and severe cough vs. pneumonia. CV responses reflected rankings similar to TTO responses for parent respondents (Table 3 ). However, there were no significant differences between the amounts individuals were willing to pay to avoid adolescent health states.

Table 3 Adolescent pertussis – days or years traded, utilities, and willingness-to-pay to avoid health states

                              Vaccination health states            Disease health states
                              Local reaction    Systemic reaction  Mild cough        Severe cough      Pneumonia
Short-term TTO (N = 94)
  Days traded, mean (SD)      17 (46)           29 (61)            55 (117)          90 (162)          79 (114)
  Median [25%–75%]            2 [0–14]          7 [2–28]           25 [7–56]         45 [14–56]        41 [14–70]
  Utilities*, mean (SD)       0.92 (0.19)       0.86 (0.23)        0.78 (0.27)       0.67 (0.33)       0.67 (0.33)
  Median [25%–75%]            0.99 [0.93–1.0]   0.96 [0.85–0.99]   0.87 [0.72–0.96]  0.78 [0.61–0.92]  0.78 [0.61–0.91]
Long-term TTO (N = 81)
  Years traded, mean (SD)     2.6 (4.1)         5.5 (5.9)          8.0 (6.7)         11.6 (9.0)        12.0 (9.5)
  Median [25%–75%]            1 [0.1–5]         5 [1–5]            5 [5–10]          10 [5–20]         10 [5–20]
  Utilities*, mean (SD)       0.97 (0.07)       0.93 (0.10)        0.89 (0.12)       0.83 (0.17)       0.82 (0.17)
  Median [25%–75%]            0.99 [0.96–1.0]   0.95 [0.92–0.99]   0.93 [0.87–0.95]  0.88 [0.78–0.94]  0.88 [0.76–0.94]
Willingness-to-pay (N = 183)
  Mean (SD)                   $18 (58)          $61 (174)          $3,003 (15,889)   $3,981 (16,797)   $4,265 (16,860)
  Median [25%–75%]            $3 [1–13]         $13 [6–38]         $300 [150–1,500]  $750 [225–1,500]  $750 [263–1,500]

*Utilities were calculated assuming the maximum amount of time traded could not exceed the duration of the health state in the adolescent and assuming a discount rate of 3%.

Figure 3 Mean difference between TTO utilities and 95% confidence intervals for short-term (squares) and long-term (circles) health states for adolescents.

Adult health states

Short-term and long-term TTO responses for adult respondents are described in Table 4 . Based on mean utilities, adults ranked short-term health states in the following order: local reaction, systemic reaction, mild cough, pneumonia, and severe cough. Mean rankings for long-term health states were similar, except pneumonia and severe cough were equivalent. For short-term health states (Figure 4 ), mean differences in utilities were significantly different for 5 out of 10 pairwise comparisons (Tukey's method, p < 0.05). However, the only significant differences in utilities for long-term health states were: local reaction vs. severe cough, local reaction vs. pneumonia, systemic reaction vs. severe cough, and systemic reaction vs. pneumonia (Tukey's method, p < 0.05). CV responses again reflected a rank order similar to the TTO exercise (Table 4 ). We were unable to detect significant pairwise differences in the WTP amounts to avoid adult health states.
Table 4 Adult pertussis – days or years traded, utilities, and willingness-to-pay to avoid health states

                              Vaccination health states            Disease health states
                              Local reaction    Systemic reaction  Mild cough        Severe cough      Pneumonia
Short-term TTO (N = 72)
  Days traded, mean (SD)      24 (135)          26 (130)           80 (366)          99 (446)          101 (448)
  Median [25%–75%]            0 [0–2]           2.5 [0–7]          8.5 [0.5–28]      14 [2–56]         14 [2–49]
  Utilities*, mean (SD)       0.95 (0.18)       0.93 (0.18)        0.85 (0.26)       0.81 (0.30)       0.82 (0.30)
  Median [25%–75%]            1.0 [0.99–1.0]    0.99 [0.95–1.0]    0.96 [0.88–1.0]   0.95 [0.81–0.99]  0.96 [0.83–0.99]
Long-term TTO (N = 56)
  Years traded, mean (SD)     0.4 (0.8)         1.4 (2.2)          2.7 (3.4)         4.7 (6.3)         4.7 (6.2)
  Median [25%–75%]            0.03 [0–0.9]      0.8 [0.04–1.5]     1 [0.6–4.5]       2 [1–5]           2 [0.8–5]
  Utilities*, mean (SD)       0.995 (0.01)      0.98 (0.03)        0.96 (0.06)       0.92 (0.14)       0.92 (0.16)
  Median [25%–75%]            1.0 [0.99–1.0]    0.99 [0.98–1.0]    0.99 [0.96–1.0]   0.97 [0.92–0.99]  0.97 [0.91–0.99]
Willingness-to-pay (N = 126)
  Mean (SD)                   $8 (17)           $41 (78)           $3,249 (14,062)   $4,141 (15,409)   $8,748 (66,907)
  Median [25%–75%]            $3 [0–9]          $13 [6–38]         $450 [150–1,200]  $750 [300–1,500]  $750 [300–1,500]

*Utilities were calculated assuming the maximum amount of time traded could not exceed the duration of the health state in the adult and assuming a discount rate of 3%.

Figure 4 Mean difference between TTO utilities and 95% confidence intervals for short-term (squares) and long-term (circles) health states for adults.

Infant health states

TTO and CV responses for infant health states are described in Table 5 . We asked all respondents to imagine they had a 1-month-old infant who developed pertussis that could result in either short-term (8 weeks followed by perfect health) or long-term health states. Mean utilities for short-term infant health states, such as respiratory or neurologic complications due to pertussis, were lower than mean utilities for vaccine adverse events or adolescent/adult disease.
However, mean utilities for long-term infant health states were not significantly different from adolescent/adult disease utilities. All respondents were willing to pay significantly more to avoid infant disease compared to vaccine adverse events or adolescent/adult disease. Neurologic disease was considered significantly worse than infant respiratory disease regardless of the method of valuation.

Table 5 Infant pertussis – days or years traded, utilities, and willingness-to-pay to avoid health states*

                              Infant respiratory complications   Infant neurologic complications
Short-term TTO (N = 166)
  Days traded, mean (SD)      174 (360)                          226 (431)
  Median [25%–75%]            56 [28–168]                        56 [28–183]
  Utilities, mean (SD)        0.58 (0.37)                        0.51 (0.38)
  Median [25%–75%]            0.72 [0.10–0.88]                   0.64 [0.0–0.85]
Long-term TTO (N = 137)
  Years traded, mean (SD)     12.3 (10.9)                        15.2 (12.3)
  Median [25%–75%]            10 [5–20]                          10 [5–20]
  Utilities**, mean (SD)      0.82 (0.21)                        0.77 (0.25)
  Median [25%–75%]            0.89 [0.75–0.96]                   0.87 [0.69–0.95]
Willingness-to-pay (N = 309)
  Mean (SD)                   $13,016 (52,443)                   $19,426 (61,074)
  Median [25%–75%]            $1,500 [750–7,500]                 $3,000 [750–10,000]

*Responses from adults and parents of adolescents were pooled. **Utilities were calculated assuming the maximum amount of time traded could not exceed the duration of the health state, and assuming a discount rate of 3%.

Comparison of utilities for short-term vs. long-term health states

Overall, mean utilities were higher for long-term health states than for short-term health states. These differences were significant for adolescents with mild cough (p = 0.045), severe cough (p = 0.001), and pneumonia (p = 0.001). No significant differences were found between short-term and long-term health states for adults. For infants, we also found significant differences, with higher mean utilities reported for long-term health states compared to short-term health states (p < 0.001 for respiratory complications; p < 0.001 for neurologic complications).
Association between demographic variables and TTO or WTP estimates

We evaluated associations between demographic characteristics such as age, race/ethnicity, education, and household income and the estimates provided by respondents in the TTO or CV exercise. Older age was associated with lower utilities for short-term health states for adolescent/adult disease (i.e. mild cough, severe cough, pneumonia) and infant disease (i.e. respiratory and neurologic complications) (p < 0.05). Older age was also associated with lower utilities for long-term health states such as systemic reaction, adolescent/adult disease, and infant disease (p < 0.05). Higher income was associated with lower utilities for long-term mild cough and short-term respiratory complications, although older respondents were more likely to report higher household incomes (p < 0.001). Higher income was significantly associated with higher WTP values for the following health states: systemic reaction, adolescent/adult disease, and infant disease (p < 0.05). We also found an association between higher respondent education and higher WTP estimates for pneumonia, although we note that education and income were themselves positively correlated (ρ = 0.358, p < 0.0001).

Alternative assumptions about the discount rate

Tables 6A and 6B describe estimated mean utilities for short-term and long-term health states associated with disease and vaccination in adolescents, adults, and infants. At higher discount rates, the mean differences between utilities for different health states became smaller, regardless of age group or method of valuation.

Table 6 Utilities based on alternative discount rates of 0%, 5%, and 10%, for (A) adolescents and adults, and (B) infants. Utilities were calculated assuming the maximum amount of time traded could not exceed the duration of the health state. Values are mean (SD).

A. Adolescents and adults

                                      Vaccination health states          Disease health states
Discount rate                         Local reaction  Systemic reaction  Mild cough   Severe cough  Pneumonia
Adolescent short-term TTO (N = 94)
  0%                                  0.80 (0.32)     0.68 (0.36)        0.51 (0.39)  0.35 (0.38)   0.35 (0.37)
  5%                                  0.95 (0.14)     0.92 (0.16)        0.87 (0.22)  0.80 (0.28)   0.80 (0.26)
  10%                                 0.99 (0.03)     0.99 (0.04)        0.97 (0.08)  0.96 (0.11)   0.96 (0.08)
Adolescent long-term TTO (N = 81)
  0%                                  0.96 (0.06)     0.92 (0.09)        0.88 (0.10)  0.82 (0.13)   0.82 (0.14)
  5%                                  0.97 (0.07)     0.94 (0.11)        0.91 (0.12)  0.85 (0.18)   0.85 (0.17)
  10%                                 0.99 (0.05)     0.97 (0.09)        0.95 (0.11)  0.91 (0.17)   0.91 (0.15)
Adult short-term TTO (N = 72)
  0%                                  0.91 (0.24)     0.83 (0.29)        0.67 (0.38)  0.58 (0.42)   0.62 (0.40)
  5%                                  0.97 (0.13)     0.96 (0.14)        0.90 (0.22)  0.88 (0.23)   0.88 (0.25)
  10%                                 0.99 (0.04)     0.99 (0.04)        0.97 (0.07)  0.97 (0.08)   0.97 (0.08)
Adult long-term TTO (N = 56)
  0%                                  0.99 (0.02)     0.97 (0.06)        0.93 (0.09)  0.88 (0.17)   0.88 (0.18)
  5%                                  1.0 (0.01)      0.99 (0.03)        0.97 (0.05)  0.94 (0.13)   0.94 (0.15)
  10%                                 1.0 (0.00)      1.0 (0.01)         0.99 (0.02)  0.97 (0.09)   0.96 (0.12)

B. Infants

Discount rate               Infant respiratory complications   Infant neurologic complications
Short-term TTO (N = 166)
  0%                        0.27 (0.36)                        0.21 (0.33)
  5%                        0.71 (0.35)                        0.66 (0.36)
  10%                       0.92 (0.17)                        0.90 (0.19)
Long-term TTO (N = 147)
  0%                        0.36 (0.18)                        0.33 (0.19)
  5%                        0.84 (0.21)                        0.78 (0.26)
  10%                       0.89 (0.20)                        0.84 (0.27)

Discussion

Vaccination programs in the US have traditionally been life-saving and cost-saving [ 31 , 32 ]. However, the focus of newer vaccines being developed has shifted from preventing mortality to preventing morbidity. In this situation, the risks of vaccine adverse events need to be weighed carefully against their benefits. Health-state valuation studies are useful to assess the relative risks and benefits of potential future vaccination programs under consideration in the US.
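The pattern reported in Table 6 — utilities drifting toward 1.0 as the discount rate rises — follows directly from the present-value formula, as this small sketch (with a hypothetical trade and life expectancy) illustrates.

```python
import math

def pv(duration, r):
    # Present value of a continuous stream of life; reduces to the raw duration at r = 0.
    return duration if r == 0 else (1 / r) * (1 - math.exp(-r * duration))

def utility(le, years_traded, r):
    return 1 - (pv(le, r) - pv(le - years_traded, r)) / pv(le, r)

# Hypothetical trade: 5 years given up from a 40-year life expectancy.
for r in (0.0, 0.03, 0.05, 0.10):
    print(f"r = {r:.0%}: utility = {utility(40, 5, r):.3f}")
```

Because discounting shrinks the weight of end-of-life years, the same trade represents an ever smaller fraction of discounted life expectancy as r increases, so the computed utilities rise monotonically toward 1.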
Our study examined valuations associated with adolescent/adult pertussis disease and vaccination. Overall, respondents rated adolescent and adult pertussis as worse than vaccine adverse events. Also, infant complications due to pertussis were ranked as worse than adolescent/adult disease. We explored differences in utilities for short-term and long-term health states using open-ended TTO questions. Other methods for short-term health state valuation described in the literature include chained TTO, sleep tradeoff (STO), and waiting tradeoff (WTO). The chained TTO has been shown to have good consistency and reliability [ 13 , 33 , 34 ]. However, the chained procedure involves an extra step and may result in a significant cognitive burden due to the complexity of the task. The sleep tradeoff asks people how much time they would be willing to sleep in a non-refreshing/non-dream state to avoid living with a short-term health problem [ 35 , 36 ]. Unfortunately, this method may not be appropriate for valuing health states associated with pertussis since sleep disturbance occurs in a majority of infected individuals, which may confound responses regarding sleep [ 11 ]. The waiting tradeoff proposed by Swan et al. is an alternative approach for assessing process utility [ 37 ]. While this approach is clearly useful for situations that involve diagnostic procedures, its applicability to other short-term health states such as infections is limited. We felt the open-ended format was the most appropriate method for our study population given the limitations of alternatives and due to improved ease of administration by telephone. We found that the rankings of health states based on mean utilities were essentially the same using either short-term or long-term health states. 
However, the short-term TTO resulted in lower mean utility estimates and larger mean differences between health states than the long-term TTO exercise, thus allowing for better discrimination of health states in cost-utility analyses. These results suggest that responses may not fulfill the constant proportional trade-off assumption, which requires that the TTO utility be independent of the duration of the specified health state. While previous studies using TTO and other valuation methods have also found that the constant proportional trade-off assumption does not always hold, the direction of the discrepancy has been mixed [ 38 - 40 ]. The short-term health state approach may have violated the constant proportional trade-off assumption because respondents were less averse to giving up small amounts of time from the ends of their lives (days or weeks) than the large amounts of time (months or years) required in the long-term approach. In other words, giving up a few days or weeks from the end of one's life may not be considered a significant loss, even to avoid a short-duration health state lasting only 8 weeks. However, giving up months or years of life is considerably more difficult for individuals, even to avoid an intermediate- or long-term health state. It may be that a threshold exists whereby individuals are willing to give up a very small portion of their lives for perfect health, but as the duration of health states increases, they are less willing to give up time per health unit gained, and thus fail to behave according to the constant proportional trade-off assumption. This aversion to giving up larger amounts of time may play an important role in measuring utilities, particularly since the short duration of these health states relative to the lifetime of individuals would otherwise lead to nearly imperceptible, but arguably important, differences in terms of quality-adjusted life years.
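A quick numeric illustration (with hypothetical, undiscounted trades) of the violation described above: under constant proportional trade-off, the utility implied by a trade should not depend on the duration of the health state, yet a plausible pair of responses yields different values.

```python
def implied_utility(time_traded, state_duration):
    # Undiscounted TTO utility implied by a trade: 1 - (time given up / time in the state).
    return 1 - time_traded / state_duration

# Hypothetical respondent facing the same health state framed two ways:
short = implied_utility(2 / 52, 8 / 52)  # trades 2 weeks to avoid an 8-week episode
long_ = implied_utility(5, 40)           # trades 5 years to avoid a 40-year lifetime state

print(short, long_)  # the short-term framing implies the lower utility
```

Under constant proportional trade-off, the respondent who gave up a quarter of the 8-week episode would give up a quarter of the 40-year horizon (10 years); trading only 5 years instead yields the higher long-term utility, matching the direction of the discrepancy observed here.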
The impact of discounting is important to consider, since assuming no discounting can lead to a downward bias, while a high discount rate can lead to an upward bias that strongly devalues the benefits of any preventive intervention [ 12 , 41 - 43 ]. We assumed a 3% discount rate in our baseline analysis, though there is no clear standard regarding the optimal discount rate for societal decisions. We also examined the implications of varying the discount rate between 0 and 10%. If we assumed no discounting of health preferences over time, the mean utilities for all health states were lower and spread over a wider range. At a discount rate of 10%, the mean values for all health states approached 1.0 and mean differences between health states were much smaller. Further empirical investigation of societal discount rates for prevention programs is needed. The CV exercise resulted in mean health state rankings similar to our TTO exercise, and these estimates were positively correlated with income, which is not surprising since individuals were asked to respond in consideration of their actual household income [ 44 ]. For the TTO, we found an inverse association between age and utility estimates. This finding is consistent with previous studies that have shown that older individuals provide lower utility estimates for health states [ 45 , 46 ]. It may be that older respondents better understand the reality of living in poor health [ 46 ]. As always, there are limitations to our study. First, our respondents were either adult pertussis patients or parents of adolescents with pertussis. We elicited patient and caregiver valuations as part of a larger study to determine societal costs of pertussis in adolescents and adults. While the U.S.
Panel on Cost-effectiveness in Health and Medicine suggests that community preferences be used where possible [ 12 ], there is ongoing debate over whose preferences should be included in cost-effectiveness analyses [ 47 ]. There are certain practical advantages to surveying patients and caregivers. Because these individuals had recent first-hand experience with the disease, reasonably short descriptions could be used to sufficiently characterize a series of health states associated with pertussis, making administration by telephone feasible. While there is no perfect measure of health, patient preferences can help to inform societal values and should be given further consideration. If community valuations are collected in subsequent studies, it will be useful to compare them to the patient and caregiver valuations collected here. Second, selection bias might arise from our survey completion rate of around 60%, although this is comparable to other published valuation studies [ 20 , 23 ]. In our study, a significant proportion of respondents did not complete the survey (22–27%), refused to answer the entire set of questions (9–12%), or did not understand the exercise (5–6%). We believe this may have been due in part to respondent burden, because the preference questions were asked at the end of a lengthy cost interview. Also, we asked both WTP and TTO questions for 7 separate health states. In a separate analysis that included all answers to the set of questions regardless of the level of understanding, we found that the rank order of health states remained consistent, which is reassuring. Because of the complexity of the task required, it is not surprising that respondents with higher educational levels were more likely to complete and understand the preference exercise. In addition, most respondents were white, well educated, and had relatively high household incomes. Household income was associated with utilities as well as willingness-to-pay to avoid pertussis.
To address this limitation, economic analyses of pertussis should vary willingness-to-pay and utilities over wide ranges that would reflect the preferences of a general population. Further research in more socioeconomically diverse populations should also be considered. Another issue that should be explored more thoroughly is the impact of parents as surrogate respondents for children and the method of preference elicitation. We asked parents of adolescents to serve as proxy respondents for their child. Interestingly, parents of adolescents were less likely to provide answers to the long-term TTO exercise than adult respondents, which may suggest that parents had difficulty answering the TTO question for long-term illnesses in their children. Also, trading time from the parent's life to avoid illness in a child may result in preferences that incorporate other aspects of their relationship, such as altruism. Previous work in the CV literature has suggested that altruism may significantly affect valuations. For example, Liu et al. found that a mother's WTP to prevent a cold is approximately twice as large for the child as for the mother [ 21 ]. While parents are often considered to be the health care decision makers for their child, further work on eliciting health state valuations directly from children would provide useful information. In addition, we asked parents how much time they would be willing to give up from the end of their own lives to avoid illness in their child, because we found that some parents refused to trade time from their child's life but were willing to trade from their own. Though this did not affect our calculation of short-term utilities (since a common denominator of 8 weeks was used), it did affect our calculation of long-term utilities, where the denominator was based on the life expectancy of the child.
Conclusion

In this study, we estimated health-state valuations regarding pertussis disease and vaccination among adult patients and parents of adolescent patients. Patient preferences in conjunction with health outcomes will be key factors in deciding whether or not to implement a universal vaccination policy for adolescents or adults. The results from our study suggest that short-term health-state valuation may provide a reasonable approach to assessing preferences given its superior ability to discriminate between states, which may be particularly useful for cost-utility analyses for future vaccination programs.

Authors' contributions

Dr. Lee had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: Lee, Salomon, Lieu. Acquisition of data: Lee, Lieu. Analysis and interpretation of data: Lee, Salomon, LeBaron, Lieu. Drafting of the manuscript: Lee. Critical revision of the manuscript for important intellectual content: Lee, Salomon, LeBaron, Lieu. Statistical analysis: Lee, Salomon. Obtained funding: Lieu. Administrative, technical, or material support: Lieu. Study supervision: Salomon, Lieu.

Financial support

This study, part of the Joint Initiative on Vaccine Economics Project, was supported by the National Immunization Program, Centers for Disease Control and Prevention, via cooperative agreement with the Association of Teachers of Preventive Medicine, Task order #TS-0675. Dr. Lee's work was also supported in part by the grants T32 HS00063 and K08 HS013908-01A1 from the Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services. Dr. Salomon was supported by the National Institute on Aging (Grant P01 AG17625).
Hypoxia upregulates the expression of the NDRG1 gene leading to its overexpression in various human cancers

Background

The expression of the NDRG1 gene is induced by nickel, a transition metal sharing similar physical properties with cobalt. Nickel may create hypoxia-like conditions in cells and induce hypoxia-responsive genes, as does cobalt. Therefore NDRG1 is likely to be another gene induced by hypoxia. HIF-1 is a transcription factor which has a major role in the regulation of hypoxia-responsive genes, and thus it could be involved in the transcriptional regulation of the NDRG1 gene. Hypoxia is such a common feature of solid tumours that it is of interest to investigate the expression of Ndrg1 protein in human cancers.

Results

Hypoxia and its mimetics induce in vitro expression of the NDRG1 gene and cause the accumulation of Ndrg1 protein. Protein levels remain high even after cells revert to normoxia. Although HIF-1 is involved in the regulation of NDRG1 , long-term hypoxia induces the gene to some extent in HIF-1 knock-out cells. In the majority of human tissues studied, Ndrg1 protein is overexpressed in cancers compared to normal tissues and also reflects tumour hypoxia better than HIF-1 protein.

Conclusions

Hypoxia is an inducer of the NDRG1 gene, and nickel probably causes the induction of the gene by interacting with the oxygen sensory pathway. Hypoxic induction of NDRG1 is mostly dependent on the HIF-1 transcription factor, but HIF-1 independent pathways are also involved in the regulation of the gene during chronic hypoxia. The determination of Ndrg1 protein levels in cancers may aid the diagnosis of the disease.

Background

Cancer is a leading cause of death in humans, and current studies in the clinical area focus either on the early detection of this disease or on the development of new selective treatment tools. New tumour markers can provide aid in cancer diagnosis and create novel treatment opportunities.
Ndrg1, also named Cap43, Drg1, RTP, and rit42 following the discovery of its gene ( NDRG1 ) in different laboratories, is a stress-responsive protein which shuttles between cytoplasm and nucleus upon certain insults [ 1 - 5 ]. After its independent discovery in our lab [ 1 ], we worked on the expression of this gene and the availability of its protein product in human cells and tissues under different conditions. Since our starting point for the induction of the NDRG1 gene was nickel exposure, we examined possible ways by which nickel could change the expression of this gene. Among the mechanisms investigated were epigenetic changes (DNA methylation and histone acetylation), signal transduction pathways (tyrosine phosphorylation, the adenylate cyclase cascade, calmodulin, PKC, PI3-K), and ROS-mediated activation. However, none of these mechanisms were found to be involved in the induction of the NDRG1 gene by nickel compounds [ 1 , 6 - 9 ]. On the other hand, it was shown that nickel induced the expression of several hypoxia-responsive genes including vascular endothelial growth factor [ 10 ], erythropoietin [ 11 ], and glyceraldehyde-3-phosphate dehydrogenase [ 12 ]. Moreover, a well-known hypoxia-mimicking agent is cobalt, another transition metal adjacent to nickel in the periodic table. These facts led to the idea that NDRG1 could be another hypoxia-responsive gene, and that nickel could induce the expression of this gene by creating a hypoxia-like state in cells, as does cobalt. In the present study, the effects of hypoxia and hypoxia-mimicking agents on NDRG1 gene expression have been investigated. Because the HIF-1 transcription factor is a major regulator of hypoxia-responsive genes [ 13 , 14 ], the relationship between NDRG1 gene expression and HIF-1 has also been studied.
Here, it is reported that hypoxia is an inducer of the NDRG1 gene, and that the HIF-1 transcription factor is involved in the regulation of this gene, but HIF-1 independent pathways also contribute to the induction of the gene in the case of chronic hypoxia. The process of tumour expansion is characterized by rapid growth of cancer cells as the tumour establishes itself in the host. Accompanying this rapid growth are alterations in the cancer cell microenvironment, typically caused by an inability of local vasculature to supply enough oxygen and nutrients to the rapidly dividing tumour cells. This makes hypoxia one common feature of solid tumours [ 15 ]. Exploration of Ndrg1 protein expression patterns in various tissues showed that Ndrg1 protein was overexpressed in cancers compared to normal tissues. Because of its differential expression in cancer tissues and the high stability of the protein, Ndrg1 is proposed as a useful new tumour marker. Results Hypoxia and its mimetics induce in vitro expression of NDRG1 gene To determine whether hypoxia could induce the transcription of the NDRG1 gene, A549 cells were exposed to hypoxia (0.5% O 2 ) for different time periods. The RNA transcript of NDRG1 started to appear after 4 hours of incubation and increased up to 18 hours (Fig. 1A ). To explore whether this induction is also translated into a protein response, Ndrg1 protein levels were determined after the exposure of cells to hypoxia and its mimetics. Figure 1B shows the accumulation of the protein following the incubation of A549 cells with these agents. To determine the longevity of Ndrg1, cells were incubated under normoxic conditions for different time periods at the end of hypoxia exposure, and the levels of the protein were assessed. The results of this experiment showed that even after the return to normoxia Ndrg1 protein levels remained elevated for at least 16 hours, indicating the stable nature of the protein (Fig. 1C ). 
The induction of the Ndrg1 protein by hypoxia and transition metals has also been shown in several other cell lines (Fig. 2 ), confirming that this induction is a general phenomenon rather than a cell-specific one. Figure 1 The induction of NDRG1 gene expression by hypoxia and its mimetics. A) A549 cells were exposed to normoxia (20% O 2 , control) for 20 hours or to hypoxia (0.5% O 2 ) for the time periods indicated in the figure. 15 μg of total RNA was isolated and subjected to a Northern blot analysis as described in 'Methods' section. The blot was first hybridized with the NDRG1 probe (top panel), and then the membrane was stripped and rehybridized with the actin probe (bottom panel) to show loading. B) A549 cells were exposed to 0.5 mM NiCl 2 (Nickel), 200 μM CoCl 2 (Cobalt), 200 μM desferrioxamine (DFX), or hypoxia (0.5% O 2 ) for 20 hrs. 40 μg of whole cell protein extracts were loaded into each lane and subjected to Western blot analysis as described in 'Methods' section, using antibody against Ndrg1. Bottom panel (actin) shows loading. C) A549 cells were first exposed to hypoxia (0.5% O 2 ) for 20 hrs, then taken out of the hypoxic chamber and incubated additionally for the time periods indicated under normoxic (20% O 2 ) conditions. Western blot analysis with antibody against Ndrg1 was carried out in whole cell protein extracts. Bottom panel (actin) shows loading. Figure 2 The confirmation of Ndrg1 protein induction in different cell lines. (A) HTE cells were incubated with different concentrations of NiCl 2 (Ni), hypoxia, and 300 μM of CoCl 2 (Co) for 20 hrs. (B) HOS and MCF-7, (C) PW and DU-145 cells were incubated with 500 μM of NiCl 2 (nickel), hypoxia, and 300 μM of CoCl 2 (cobalt) for 20 hrs. Western blot analyses were done as described in 'Methods' section. The membranes were first incubated with anti-Ndrg1 antibody (top panels), then stripped, and rehybridized with anti-actin antibody (bottom panels) to show loading. 
The regulation of NDRG1 expression by HIF-1 transcription factor HIF-1 is a heterodimeric transcription factor consisting of HIF-1α and HIF-1β subunits. It is expressed by all cells of the human body in response to hypoxia and contributes to the regulation of hypoxia-responsive genes. HIF-1α is the unique, O 2 -regulated subunit that determines HIF-1 activity whereas HIF-1β is expressed ubiquitously and is a common partner of several other proteins. Therefore, the elimination of HIF-1α expression completely prevents the formation of HIF-1 protein. To investigate the relationship between HIF-1 transcription factor and the expression of NDRG1 gene, HIF-1 proficient (HIF-1α +/+ ) and deficient (HIF-1α -/- ) cells were exploited. Short-term hypoxia (20 hrs) experiment showed that NDRG1 mRNA was induced in HIF-1 proficient but not in deficient cells (Fig. 3A ). The transcriptional induction of the gene by both nickel and cobalt was also dependent upon HIF-1 transcription factor. These results have been confirmed with the data obtained at the protein level: the same agents (short-term hypoxia, nickel, and cobalt) caused the accumulation of Ndrg1 protein in HIF-1 proficient but not in deficient cells (Fig. 3B ). Figure 3 The role of HIF-1 in the regulation of NDRG1 gene expression. A) HIF-1 proficient (HIF-1α +/+ ) and deficient (HIF-1α -/- ) cells were exposed to 0.5 mM NiCl 2 , 300 μM CoCl 2 , and hypoxia (0.5% O 2 ) for 20 hrs. 15 μg of total RNA was isolated and subjected to a Northern blot analysis using NDRG1 probe (top panel). Ethidium bromide staining was used to adjust loading (bottom panel). B) HIF-1α +/+ and HIF-1α -/- cells were exposed to 0.5 mM NiCl 2 , 300 μM CoCl 2 , and hypoxia (0.5% O 2 ) for 20 hrs. 40 μg of protein extracts were subjected to Western blot analysis using antibody against Ndrg1 protein (top panel). Actin bands in the bottom panel show loading. 
C) Cells were first incubated for 24 hours under normoxic conditions for attachment (Day 0) and then exposed to hypoxia for up to three days. Western blot analysis with antibody against Ndrg1 was carried out in 25 μg of whole cell protein extracts (top panel). Actin bands in the bottom panel show loading. In another experiment, HIF-1α +/+ and HIF-1α -/- cells were exposed to long-term hypoxia up to three days, to simulate the chronic conditions that cancer cells go through. The results revealed that hypoxia increased Ndrg1 protein levels even in HIF-1α -/- cells starting from the second day of hypoxia (Fig. 3C ). However, the level of protein accumulation on the third day was considerably higher in HIF-1α +/+ cells than in HIF-1α -/- cells. These observations implied that the Ndrg1 protein induction was not totally dependent on HIF-1 transcription factor, and that some other pathways were also involved in the induction of Ndrg1 protein by long-term hypoxia. The detection of Ndrg1 and HIF-1α protein levels in normal and cancerous human tissues Figure 4 shows a variety of human normal and cancer tissues stained immunohistochemically with anti-Ndrg1 polyclonal antibody. To understand whether elevations of Ndrg1 protein coincided with the expression of HIF-1 transcription factor, we also stained the tissues with an antibody to HIF-1α (Fig. 5 ). In lung, Ndrg1 preferentially stained malignant cells including both non-small and small cell types, whereas surrounding normal tissue remained negative for staining (Fig. 4A,4B ). In contrast to Ndrg1, HIF-1α was present in both normal and lung cancer cells at similar levels (Fig. 5A,5B , and Table 1 ). As shown in Figure 5B , HIF-1α was present at higher levels in some cancer cells but not to the extent of Ndrg1 (Fig. 4B ). In brain tissue, Ndrg1 antibody selectively stained cancer cells whereas normal brain remained negative for this staining. 
Figure 4C shows the normal brain tissue staining and Figure 4D shows human glioblastoma multiforme. Ndrg1 preferentially stained the tumour cells adjacent to the necrotic areas, which were presumed to be hypoxic. In both astrocytomas and hemangioblastomas there was intense staining for both Ndrg1 and HIF-1α in a number of different patients. Skin cancer melanoma cells showed the most intense staining with the Ndrg1 antibody (Fig. 4F ), and a benign skin lesion (nevus) had very limited Ndrg1 staining (Fig. 4E ). In contrast, staining with the HIF-1α antibody showed little reactivity in melanoma cells (Fig. 5H ). Ndrg1 protein was generally found at low levels in most normal tissues with the exception of some higher expression in the distal and proximal convoluted tubules of the kidney (Fig 4G , and Table 1 ). The distal and proximal convoluted tubules of the kidney also expressed HIF-1α (Fig. 5C ). There was also expression of Ndrg1 protein in normal colon mucosa and smooth muscle, as well as some expression in normal breast and prostate (Fig. 4I,4K , and 4M ). However, with the exception of the colon samples, the expression of Ndrg1 protein in cancer cells of these tissues was considerably higher (Fig. 4L,4N ). In normal human tissues that showed immunoreactivity to the Ndrg1 antibody (such as kidney, prostate, breast, and colon), staining was emphasized particularly in glandular structures and tubular epithelia. Table 1 summarizes the total number of tissues and staining intensities. As seen from the table, differential expression between normal and cancer tissues was much more apparent for Ndrg1 than for HIF-1α. Figure 4 Immunohistochemical detection of Ndrg1 protein in human tissues. The nature of the tissue is indicated on top of each picture. Original magnifications are as follows: A, ×400; B, ×400; C, ×100; D, ×100; E, ×400; F, ×400; G, ×400; H, ×400; I, ×100; J, ×100; K, ×400; L, ×400; M, ×100; N, ×100. Figure 5 Immunohistochemical detection of HIF-1α protein in human tissues. 
The nature of the tissue is indicated on top of each figure. Original magnifications are as follows: A, ×400; B, ×400; C, ×100; D, ×100; E, ×100; F, ×100; G, ×40; H, ×40. Table 1 The presence of Ndrg1 and HIF-1α proteins in various normal and malignant human tissues

TISSUE (n)              Ndrg1                  HIF-1α
                        -    +    ++   +++     -    +    ++   +++
Normal Lung (30)        24   6    -    -       4    11   15   -
Lung Cancer (30)        -    -    3    27      -    6    21   3
Normal Liver (20)       7    13   -    -       2    15   3    -
Liver Cancer (20)       -    -    4    16      -    11   2    7
Normal Breast (18)      3    13   2    -       1    9    8    -
Breast Cancer (20)      -    -    6    14      -    4    7    9
Smooth Muscle (30)      12   15   3    -       9    11   10   -
S.M. Cancer (24)        -    6    7    11      -    9    12   3
Normal Brain (28)       24   4    -    -       18   10   -    -
Brain Cancer (36)       -    -    11   25      -    -    15   21
Normal Kidney (15)      -    6    8    1       -    5    10   -
Kidney Cancer (22)      -    -    7    15      -    7    9    6
Normal Skin (20)        14   6    -    -       11   9    -    -
Melanoma (10)           -    -    -    10      4    6    -    -

Tissues were stained immunohistochemically with antibodies against Ndrg1 and HIF-1α as described in 'Methods' section. Each value shown in the table is from a single tissue sample from an individual patient. The numbers in parentheses (n) represent the total numbers of tissues stained with both antibodies. The expression levels were marked as follows: (-) no staining; (+) weak; (++) moderate; (+++) overexpression. Discussion Experiments conducted in this study provide clear evidence that hypoxia and its mimetics induce the expression of the NDRG1 gene at both the RNA and protein levels (Figs. 1 , 2 , and 3 ). It is hypothesized that nickel induces this gene by creating hypoxia-like conditions in cells. Support for this hypothesis came with the discovery of the first molecular oxygen sensors in mammalian cells, namely the prolyl and asparaginyl hydroxylase enzymes which regulate the oxygen-dependent post-translational modification of HIF-1α protein and thereby change its stability and transcriptional activity. 
Prolyl hydroxylase PHD hydroxylates HIF-1α at residue Pro 564 in the presence of oxygen, which creates a signal for pVHL to bind it, causing consecutive ubiquitination and proteasomal degradation of HIF-1α protein [ 16 , 17 ]. Likewise, in the presence of oxygen the asparaginyl hydroxylase enzyme FIH-1 (factor inhibiting HIF-1) hydroxylates the C-TAD domain of HIF-1α, which in turn prevents its binding to the coactivator p300/CBP and limits its transactivation ability [ 18 - 21 ]. Under hypoxic conditions these modifications of HIF-1α by the above-mentioned enzymes do not occur, and the transcription of hypoxia-responsive genes is promoted [ 22 - 24 ]. The hypoxia-responsive element (HRE) is a specific five-nucleotide HIF-binding DNA sequence (5'-RCGTG-3') which is common to all hypoxia-responsive genes [ 25 , 26 ]. The NDRG1 gene has three HIF-1 binding sites in its non-coding sequence, one in its promoter and the other two in the 3' untranslated region [ 27 ]. It is known that a HIF-1 binding site in the 3' region of the erythropoietin gene regulates the transcription of this hypoxia-responsive gene [ 28 ]. Conceivably, NDRG1 is likely to be regulated by HIF-1 through the binding sites in its untranslated sequences. HIF-1 modifier enzymes, PHD and FIH-1, both have non-heme iron centres [ 29 ], and the transition metals nickel and cobalt can interact with these centres, subsequently inhibiting the enzymes [ 30 ]. By their effects on HIF-1 modifying enzymes, nickel and cobalt have the capacity to create constitutive hypoxia-like conditions in cells [ 31 - 33 ]. Our hypothesis relating nickel, oxygen-sensing, hypoxia, HIF-1, pVHL, and NDRG1 expression may be elaborated as follows: when we expose cells to nickel, internalized nickel inhibits the PHD enzyme by interacting with its iron centre. This prevents the hydroxylation of the proline residue in the ODD domain of HIF-1α and subsequent pVHL binding, rescuing HIF-1α from proteasomal degradation (Fig. 6 , upper part ). 
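The HRE motif search implied by the consensus above can be sketched in a few lines. This is an illustrative sketch only: the example sequence is invented, the function name is ours, and the HIF core motif is taken as 5'-RCGTG-3' (R = A or G in IUPAC notation), the consensus commonly cited for HIF-1 binding sites.

```python
import re

# Core HIF binding motif: R (= A or G) followed by CGTG.
HRE_CORE = re.compile(r"[AG]CGTG")

def find_hre_sites(seq):
    """Return 0-based start positions of candidate HIF binding cores."""
    return [m.start() for m in HRE_CORE.finditer(seq.upper())]

# Invented example: ACGTG and GCGTG match; TCGTG does not (T is not R).
example = "ttttACGTGcccgcGCGTGaaaaTCGTG"
print(find_hre_sites(example))
```

A real promoter scan would also search the reverse complement; this sketch only shows the forward-strand logic.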
Rescued and accumulated α subunits of HIF-1 form stable heterodimers with β subunits and translocate to the nucleus, and HIF-1αβ heterodimers bind to hypoxia responsive elements (HRE) of the NDRG1 gene and promote the transcription of the gene. Nickel also inhibits the FIH-1 enzyme and the subsequent hydroxylation of the C-TAD domain of HIF-1α, and this in turn results in the recruitment of coactivators of HIF-1 to the NDRG1 gene regulatory sequences, thereby further stimulating the expression of the gene (Fig. 6 , lower part ). However, since the experiments to show the specific interaction between metals and HIF-1 modifying enzymes have yet to be performed, this aspect of the hypothesis remains unproven. Figure 6 The illustration of the hypothetical mechanism by which nickel and cobalt upregulate the expression of the NDRG1 gene. Ndrg1 protein outlasts HIF-1 after hypoxia. Despite being a major regulator of the hypoxia response, the HIF-1 transcription factor is a very unstable protein which is rapidly degraded under normoxic conditions; the half-life of HIF-1α in post-hypoxic cells is less than 5 minutes. On the other hand, our results showed that Ndrg1 protein levels remain high for at least 16 hours after the return to normoxic conditions (Fig. 1C ). Similar results have been reported by Lachat et al. [ 34 ], who showed that it took 48 hours for Ndrg1 levels to return to pre-anoxic levels after the cessation of hypoxia. Their experiment, carried out in colon carcinoma cells (SW480), supports our results, indicating that the high stability of Ndrg1 protein is not cell specific. Our study of the relationship between HIF-1 and NDRG1 expression indicated that the induction of the gene was primarily dependent on this transcription factor (Fig. 3 ). Neither the RNA nor the protein product of the NDRG1 gene was induced in HIF-1α -/- cells upon short-term exposures to hypoxia. 
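The stability contrast between the two proteins can be made concrete with a first-order decay model. This is a minimal sketch under stated assumptions: the HIF-1α half-life of ~5 minutes comes from the text, but Ndrg1's half-life is not measured here, so a 24-hour value is an assumed stand-in merely consistent with levels staying elevated for at least 16 hours (Fig. 1C).

```python
def fraction_remaining(t_min, half_life_min):
    """Fraction of a protein pool left after t_min minutes of first-order decay."""
    return 0.5 ** (t_min / half_life_min)

# HIF-1alpha half-life in post-hypoxic cells: < 5 min (from the text).
# Ndrg1 half-life: 24 h is an ASSUMED stand-in, not a measured value.
hif1a_after_1h = fraction_remaining(60, 5)
ndrg1_after_16h = fraction_remaining(16 * 60, 24 * 60)

print(f"HIF-1alpha remaining after 1 h of normoxia: {hif1a_after_1h:.2e}")
print(f"Ndrg1 remaining after 16 h (assumed t1/2): {ndrg1_after_16h:.2f}")
```

Under these assumptions, essentially no HIF-1α survives an hour of normoxia, while most of the Ndrg1 pool is still present after 16 hours, matching the staining patterns described above.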
In the long-term hypoxia experiment we detected some amount of Ndrg1 protein in HIF-1α -/- cells starting from the second day, but the levels of protein accumulation on the second and third days were considerably higher in HIF-1α +/+ cells than those in HIF-1α -/- cells. Together, these results indicate that in chronic hypoxic conditions such as cancer, other factors additional to HIF-1 could be involved in the regulation of NDRG1 gene expression. Several HIF-1 independent pathways have been described to date as being effective under hypoxic conditions [ 35 - 40 ]. Our work in several human tissues showed that in the majority of these organs Ndrg1 protein was differentially overexpressed in cancers compared to normal tissues (Fig. 4 , Table 1 ). Normal tissue samples of certain organs (lung and brain) were almost completely free of Ndrg1 expression, whereas these samples showed HIF-1 protein expression to some extent. In some cases (especially glioblastoma of the brain) the expression of Ndrg1 coincided with HIF-1 protein, indicating that induction of HIF-1α by hypoxia probably resulted in Ndrg1 accumulation in these cancers. In most of the other cases though, diffuse and strong Ndrg1 expression did not coincide with HIF-1 protein expression. These differences in the detection of the two proteins may be explained by (i) the considerably higher stability of Ndrg1 protein compared to that of HIF-1, and (ii) the reflection of a HIF-1 independent hypoxia response by Ndrg1. With these features, Ndrg1 has the capacity to reflect tumour hypoxia over a broader spectrum than HIF-1 and could be considered a better signature for hypoxic tumour cells than HIF-1. Therefore, despite the proposal of HIF-1 as a tumour marker [ 41 ], we present its downstream product Ndrg1 as a stronger candidate cancer marker, especially for certain tissues (lung, brain, and skin). Several normal tissue samples showed some Ndrg1 expression albeit at lower levels than in cancer samples of similar tissues. 
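The claim that the normal-versus-cancer separation is starker for Ndrg1 than for HIF-1α can be checked directly against the counts in Table 1. The sketch below uses only the lung rows copied from the table; the dictionary layout and the `strong_fraction` helper are our own, not part of the paper.

```python
# Counts copied from Table 1 (lung rows); tuples give the numbers of
# samples scored (-, +, ++, +++) for each marker.
table1 = {
    ("Normal Lung", "Ndrg1"):  (24, 6, 0, 0),
    ("Lung Cancer", "Ndrg1"):  (0, 0, 3, 27),
    ("Normal Lung", "HIF-1a"): (4, 11, 15, 0),
    ("Lung Cancer", "HIF-1a"): (0, 6, 21, 3),
}

def strong_fraction(counts):
    """Fraction of samples scored moderate (++) or overexpressed (+++)."""
    return (counts[2] + counts[3]) / sum(counts)

for (tissue, marker), counts in sorted(table1.items()):
    print(f"{tissue:12s} {marker:7s} {strong_fraction(counts):.2f}")
```

For lung, the ++/+++ fraction jumps from 0 of 30 normal samples to 30 of 30 cancers for Ndrg1, whereas HIF-1α moves only from 15/30 to 24/30, which is the pattern the text describes.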
In their comprehensive study, Lachat et al. [ 34 ] showed the expression of Ndrg1 protein in normal human tissues, reporting also the intensities and sub-cellular localizations of the staining. We observed similar staining patterns in several tissues; more pronounced staining in the glandular, acinar, ductal, and tubular cells of normal breast, prostate, colon, and kidney tissues. We also share the observation that Ndrg1 exists in all three cellular locations: cytoplasm, nucleus, and membrane. With the exception of colon mucosa, however, staining of the cancerous tissues of the above-mentioned organs was more intense. Since the differential expression of Ndrg1 between normal and cancer tissues of lung, brain, and skin was much starker (Table 1 ), we propose Ndrg1 be initially tried as a marker for these tissues. For reasons that are unknown, Ndrg1 was expressed at lower levels in colon cancer than it was in normal colon. Similar results have also been reported by others [ 42 ]. This could be due to the fact that colon epithelium is a dynamic structure being continuously renewed. Studies addressing the mechanism of Ndrg1 down-regulation in colon cancers will shed more light on the function of Ndrg1 protein and its relation to cancer development. Another common finding between our study and that of Lachat et al. [ 34 ] is the absence of Ndrg1 expression in normal brain and lung epithelium. Lachat et al. [ 34 ] showed at the transcriptional level that normal brain and lung express NDRG1 ; in fact, these and many other tissues expressing NDRG1 mRNA did not contain detectable levels of the protein product of this gene. This could be due to degradation of the mRNA under normal conditions. Stabilizing the mRNAs of hypoxia-responsive genes is one way cells promote the expression of these genes under hypoxic conditions [ 38 , 39 ]. More studies are needed to resolve how Ndrg1 levels are managed in normal cells, during hypoxia, as well as in cancer cells. Masuda et al. 
[ 40 ] report the down-regulation of the NDRG1 gene by the VHL tumour suppressor protein (pVHL). They state that no hypoxia-responsive element exists in the 5' flanking sequence of the gene and thus underplay the role of HIF-1 in the regulation of NDRG1 . As mentioned previously, the NDRG1 gene has three HIF-1 binding sites, one in its promoter and two in the 3' untranslated region. Therefore, the down-regulation of the NDRG1 gene by pVHL is likely to be mediated through the HIF-1 pathway. The observation of NDRG1 being down-regulated by a major tumour suppressor further supports our observation that it is up-regulated in several cancer tissues and could be used as a marker. The hypoxia-responsive pathway (HRP) allows tumour cells to overcome the harsh microenvironment conditions associated with tumour growth. The protein products induced by this pathway (e.g. EPO, VEGF, several glycolytic enzymes) allow clones of tumour cells to gain a growth advantage under unfavourable conditions, and this concept is pivotal in switching to a more malignant phenotype. Although the exact functions of Ndrg1 are still unknown, as another effector of the HRP, it is also likely to help tumour cells establish themselves. Therefore, the use of drugs that specifically disrupt the functions of Ndrg1 protein may provide new cancer therapies. It is thus of interest to investigate the effects of the elimination of this protein on cancer cell survival and proliferation. However, first Ndrg1 expression in normal hypoxic tissue (such as infarcted tissue) should be determined to show the cancer specificity of the protein. Second, the potential side effects of its elimination should be assessed since several normal tissues express Ndrg1 ubiquitously. Hypoxia is also an important determinant for the success of chemotherapy and radiotherapy [ 43 ]. Masuda et al. [ 40 ] even argue that Ndrg1 could be involved in limiting sensitivity to anti-cancer drugs. 
However, hypoxia is an equally likely limiting factor in anti-cancer therapy, and Ndrg1 may be simply the signature of the hypoxic state. Conclusions Hypoxia induces the NDRG1 gene, and nickel probably causes the induction of the gene by interacting with the oxygen sensory pathway. Hypoxic induction of NDRG1 is mostly dependent on the HIF-1 transcription factor. However, regulation of the gene in long-term hypoxia involves some other HIF-1 independent pathways. Ndrg1 protein was overexpressed in the majority of cancers studied here. With the exception of colon cancer, staining with the Ndrg1 antibody distinguished between normal and tumour cells in most cancerous tissues. The mechanism of this overexpression is related to the hypoxic state of cancer cells. Ndrg1 protein is a better indicator of tumour hypoxia than HIF-1 in immunohistochemical analyses. This is probably due to the stable nature of the Ndrg1 protein compared to the very unstable HIF-1 protein, and also to the capacity of Ndrg1 to reflect the HIF-1 independent hypoxia response. Therefore, the determination of Ndrg1 protein in tissue samples may provide a more useful tool for cancer diagnosis. Even though the exact functions of Ndrg1 protein are still unknown, as another effector in the hypoxia response, it can help cancer cells to survive and grow under unfavourable conditions. Therefore, it may also be possible to direct therapy towards Ndrg1 protein using drugs that specifically disrupt the functions of this protein. Methods Cell lines and culture conditions Human lung cancer cell line A549 (CCL185), human osteosarcoma cell line HOS (CRL 1543), human mammary carcinoma cell line MCF-7 (HTB 22), and human prostate cancer cell line DU-145 (HTB 81) were purchased from American Type Culture Collection (Rockville, MD, USA). Human trachea epithelium (HTE) cells, HIF-1α +/+ , and HIF-1α -/- fibroblasts were gifts from Dr. Konstantin Salnikow of NYU. 
The production of HIF-1α +/+ and HIF-1α -/- fibroblasts was described elsewhere [ 44 ]. PW cells were a gift from Dr. Qunwei Zhang of NYU. The cell lines were maintained at 37°C as monolayers in a humidified atmosphere containing 5% CO 2 . Cells were passaged by trypsinization when they reached 70–80% confluence. A549 cells were grown in Ham's F-12K nutrient mixture (Kaighn's modification) supplemented with 10% fetal bovine serum (FBS) and 1% penicillin/streptomycin [equivalent to 100 units (U)/ml and 100 μg/ml, respectively]. MCF-7, PW, HIF-1α +/+ , and HIF-1α -/- cells were maintained in DMEM (Dulbecco's modified Eagle's medium) with the same supplements. HOS and HTE cells were grown in α-MEM (α-minimal essential medium) additionally supplemented with 2 mM L-glutamine. For Northern and Western blot experiments, 5 × 10 5 cells in 10 ml of media were plated in 10-cm dishes (Corning Inc, Corning, NY). Cell numbers were determined using the ZM Coulter Counter (Coulter Electronics, England). To render cells hypoxic, dishes were placed in an incubator chamber flushed with 95% N 2 and 5% CO 2 . This resulted in approximately 0.1–0.5% O 2 after several hours. After 20 hours, cells were released from hypoxia and quickly scraped in ice-cold phosphate-buffered saline (PBS), and analyses were performed as described below. Northern blot analysis Total RNA was extracted from cells immediately after exposures by using TRIzol reagent (Gibco-BRL) according to the manufacturer's instructions. 15 μg of RNA/lane was separated by electrophoresis in 1.0% agarose-formaldehyde gels and then transferred to nitrocellulose membranes (BA-85; Schleicher & Schuell). NDRG1 and actin probes were labelled with [α- 32 P]dCTP by using a randomly primed-DNA labelling kit (Promega). Cloning of the NDRG1 gene was as described previously [ 1 ]. The blot was first hybridized with the NDRG1 probe, and then the membrane was stripped and rehybridized with the actin probe to show loading. 
The bands were visualized by exposing X-ray films (Eastman Kodak Co, NY, USA) to hybridized membranes. Western blot analysis Cells were lysed, and proteins were harvested in 125 μl of TNES buffer [50 mM Tris-HCl (pH: 7.5), 2 mM EDTA, 100 mM NaCl, 1 mM sodium orthovanadate (Na 3 VO 4 ), 10 mM sodium fluoride, and 1% NP40] containing protease inhibitors (PMSF 1 mM, aprotinin 1 μg/ml, leupeptin 5 μg/ml, and chymostatin 2 μg/ml). 40 μg or 25 μg of protein was loaded into each lane of a 10% SDS-PAGE gel and separated by electrophoresis. Proteins were then transferred to PVDF membranes (Roche Diagnostics Co, IN, USA), and the membranes were first incubated with rabbit anti-Ndrg1 polyclonal antibody at a dilution of 1:1000 in 5% non-fat dry milk for one hour at room temperature. The antibody production is described elsewhere [ 3 ]. After washing four times for 15 min each with TBS buffer, the membranes were incubated with a second anti-rabbit peroxidase-conjugated antibody (Santa Cruz, CA, USA) at a dilution of 1:10 4 in 5% milk for one hour at room temperature. Finally, the membranes were treated with the chemiluminescent substrate ECL (Amersham Pharmacia Biotech, UK) for 1 min at room temperature, and Biomax MR-1 films (Eastman Kodak Co, NY, USA) were exposed to the membranes. The molecular weight of Ndrg1 was determined using prestained molecular weight markers (Invitrogen Life Technologies, CA, USA). Tissue staining For in vivo detection of Ndrg1 protein in human cancer and normal tissues, immunohistochemical (IHC) staining was used, employing a rabbit polyclonal antibody against a 30-amino acid sequence at the C-terminal end of the Ndrg1 protein [ 3 ]. Tumour and normal tissue sections were obtained from the tumour registry of the Cancer Institute of New York University Medical School. The tissues were embedded in paraffin wax. Five-micron sections were cut and baked at 60°C for 30 minutes. 
After cooling, the sections were deparaffinized and hydrated through the following series: 3 × 5 minutes xylene, 3 × 5 minutes 100% Etoh (ethyl alcohol), 3 × 5 minutes 95% Etoh. The slides were then rinsed gently with distilled water and stained with hematoxylin-eosin for histopathological diagnosis. For antigen retrieval, the slides were heated in 1 mM EDTA buffer (pH: 8.0) in a microwave oven for 10 min, and then endogenous peroxidase was blocked with methanol containing 0.35% H 2 O 2 for a further 30 min. After incubation with the antibody against Ndrg1 protein overnight, the slides were processed with a second anti-rabbit peroxidase-conjugated antibody (Santa Cruz, CA, USA). Identification of the protein was then achieved using Avidin-Biotin horseradish peroxidase complex and 3,3-diaminobenzidine (DAB) as the chromogen. Negative controls were performed using nonimmune serum instead of primary antibodies. Immunohistochemical detection of HIF-1α protein was achieved by using the Catalyzed Signal Amplification System (DAKO Corp., Carpinteria, CA), which is based on streptavidin-biotin-horseradish peroxidase complex formation. Antigen retrieval was performed in 10 mM sodium citrate buffer (pH 6.0). The specimens were incubated overnight at +4°C with monoclonal anti-HIF-1α antibody (Clone MAb H1α 67, #NB 100–123; Novus Biologicals, Littleton, CO) at a dilution of 1:1000. After the amplification of the signal according to the manufacturer's instructions, the slides were stained with the chromogen DAB, counterstained with hematoxylin, dehydrated, and then mounted. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC518960.xml 
368164 | Phylogenomics of the Reproductive Parasite Wolbachia pipientis wMel: A Streamlined Genome Overrun by Mobile Genetic Elements | The complete sequence of the 1,267,782 bp genome of Wolbachia pipientis w Mel, an obligate intracellular bacterium of Drosophila melanogaster , has been determined. Wolbachia , which are found in a variety of invertebrate species, are of great interest due to their diverse interactions with different hosts, which range from many forms of reproductive parasitism to mutualistic symbioses. Analysis of the w Mel genome, in particular phylogenomic comparisons with other intracellular bacteria, has revealed many insights into the biology and evolution of w Mel and Wolbachia in general. For example, the w Mel genome is unique among sequenced obligate intracellular species in both being highly streamlined and containing very high levels of repetitive DNA and mobile DNA elements. This observation, coupled with multiple evolutionary reconstructions, suggests that natural selection is somewhat inefficient in w Mel, most likely owing to the occurrence of repeated population bottlenecks. Genome analysis predicts many metabolic differences with the closely related Rickettsia species, including the presence of intact glycolysis and purine synthesis, which may compensate for an inability to obtain ATP directly from its host, as Rickettsia can. Other discoveries include the apparent inability of w Mel to synthesize lipopolysaccharide and the presence of the largest number of genes encoding proteins with ankyrin repeat domains of any prokaryotic genome yet sequenced. Despite the ability of w Mel to infect the germline of its host, we find no evidence for either recent lateral gene transfer between w Mel and D. melanogaster or older transfers between Wolbachia and any host. 
Evolutionary analysis further supports the hypothesis that mitochondria share a common ancestor with the α-Proteobacteria, but shows little support for the grouping of mitochondria with species in the order Rickettsiales. With the availability of the complete genomes of both species and excellent genetic tools for the host, the w Mel– D. melanogaster symbiosis is now an ideal system for studying the biology and evolution of Wolbachia infections. | Introduction Wolbachia are intracellular gram-negative bacteria that are found in association with a variety of invertebrate species, including insects, mites, spiders, terrestrial crustaceans, and nematodes. Wolbachia are transovarially transmitted from females to their offspring and are extremely widespread, having been found to infect 20%–75% of invertebrate species sampled ( Jeyaprakash and Hoy 2000 ; Werren and Windsor 2000 ). Wolbachia are members of the Rickettsiales order of the α-subdivision of the Proteobacteria phylum and belong to the Anaplasmataceae family, with members of the genera Anaplasma , Ehrlichia , Cowdria , and Neorickettsia ( Dumler et al. 2001 ). Six major clades (A–F) of Wolbachia have been identified to date ( Lo et al. 2002 ): A, B, E, and F have been reported from insects, arachnids, and crustaceans; C and D from filarial nematodes. Wolbachia– host interactions are complex and range from mutualistic to pathogenic, depending on the combination of host and Wolbachia involved. Most striking are the various forms of “reproductive parasitism” that serve to alter host reproduction in order to enhance the transmission of this maternally inherited agent. 
These include parthenogenesis (infected females reproducing in the absence of mating to produce infected female offspring), feminization (infected males being converted into functional phenotypic females), male-killing (infected male embryos being selectively killed), and cytoplasmic incompatibility (in its simplest form, the developmental arrest of offspring of uninfected females when mated to infected males) ( O'Neill et al. 1997a ). Wolbachia have been hypothesized to play a role in host speciation through the reproductive isolation they generate in infected hosts ( Werren 1998 ). They also provide an intriguing array of evolutionary solutions to the genetic conflict that arises from their uniparental inheritance. These solutions represent alternatives to classical mutualism and are often of more benefit to the symbiont than the host that is infected ( Werren and O'Neill 1997 ). From an applied perspective, it has been proposed that Wolbachia could be utilized to either suppress pest insect populations or sweep desirable traits into pest populations (e.g., the inability to transmit disease-causing pathogens) ( Sinkins and O'Neill 2000 ). Moreover, they may provide a new approach to the control of human and animal filariasis. Since the nematode worms that cause filariasis have an obligate symbiosis with mutualistic Wolbachia , treatment of filariasis with simple antibiotics that target Wolbachia has been shown to eliminate microfilaria production as well as ultimately killing the adult worm ( Taylor et al. 2000 ; Taylor and Hoerauf 2001 ). Despite their common occurrence and major effects on host biology, little is currently known about the molecular mechanisms that mediate the interactions between Wolbachia and their invertebrate hosts. This is partly because Wolbachia are obligate intracellular organisms that cannot easily be cultured or obtained in quantity. 
Here we report the completion and analysis of the genome sequence of Wolbachia pipientis w Mel, a strain from the A supergroup that naturally infects Drosophila melanogaster ( Zhou et al. 1998 ). Results/Discussion Genome Properties The w Mel genome is a single circular molecule of 1,267,782 bp with a G+C content of 35.2%. This assembly is very similar to the genetic and physical map of the closely related strain w MelPop ( Sun et al., 2003 ). The genome does not exhibit the GC skew pattern typical of many prokaryotic genomes, in which two major shifts occur, one near the origin and one near the terminus of replication ( Figure 1 ). Therefore, identification of a putative origin of replication and the assignment of basepair 1 were based on the location of the dnaA gene. Major features of the genome and of the annotation are summarized in Table 1 and Figure 1 . Figure 1 Circular Map of the Genome and Genome Features Circles correspond to the following: (1) forward strand genes; (2) reverse strand genes; (3) in red, genes with likely orthologs in both R. conorii and R. prowazekii ; in blue, genes with likely orthologs in R. prowazekii , but absent from R. conorii ; in green, genes with likely orthologs in R. conorii but absent from R. prowazekii ; in yellow, genes without orthologs in either Rickettsia ( Table S3 ); (4) plot of χ 2 analysis of nucleotide composition; phage regions are in pink; (5) plot of GC skew (G–C)/(G+C); (6) repeats over 200 bp in length, colored by category; (7) in green, transfer RNAs; (8) in blue, ribosomal RNAs; in red, structural RNA. Table 1 w Mel Genome Features Repetitive and Mobile DNA The most striking feature of the w Mel genome is the presence of very large amounts of repetitive DNA and DNA corresponding to mobile genetic elements, which is unique among sequenced intracellular species. In total, 714 repeats of greater than 50 bp in length, which can be divided into 158 distinct families ( Table S1 ), were identified. 
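As a methodological aside, the fraction of a genome covered by a set of repeats (such as the 714 repeats catalogued in Table S1) reduces to merging overlapping intervals and summing their lengths. The sketch below uses hypothetical coordinates for illustration only; the actual repeat coordinates are in the supplementary tables.

```python
def repeat_coverage(intervals, genome_len):
    """Fraction of the genome covered by repeat intervals.

    Overlapping intervals are merged so shared bases are counted
    once. Coordinates are 0-based half-open; the example intervals
    below are hypothetical, not taken from Table S1.
    """
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    covered = sum(end - start for start, end in merged)
    return covered / genome_len

# Hypothetical coordinates: two overlapping repeats plus one more.
print(repeat_coverage([(100, 400), (300, 600), (1000, 1200)], 10000))
```

The same merge-and-sum logic yields genome-wide summaries like the 14.2% repeat content reported for repeats over 200 bp.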
Most of the repeats are present in only two copies in the genome, although 39 are present in three or more copies, with the most abundant repeat being found in 89 copies. We focused our analysis on the 138 repeats of greater than 200 bp ( Table 2 ). These were divided into 19 families based upon sequence similarity to each other. These repeats were found to make up 14.2% of the w Mel genome. Of these repeat families, 15 correspond to likely mobile elements, including seven types of insertion sequence (IS) elements, four likely retrotransposons, and four families without detectable similarity to known elements but with many hallmarks of mobile elements (flanked by inverted repeats, present in multiple copies) ( Table 2 ). One of these new elements (repeat family 8) is present in 45 copies in the genome. It is likely that many of these elements are not able to autonomously transpose since many of the transposase genes are apparently inactivated by mutations or the insertion of other transposons ( Table S2 ). However, some are apparently recently active since there are transposons inserted into at least nine genes ( Table S2 ), and the copy number of some repeats appears to be variable between Wolbachia strains (M. Riegler et al., personal communication). Thus, many of these repetitive elements may be useful markers for strain discrimination. In addition, the mobile elements likely contribute to generating the diversity of phenotypically distinct Wolbachia strains (e.g., mod − strains [ McGraw et al. 2001 ]) by altering or disrupting gene function ( Table S2 ). Table 2 w Mel DNA Repeats of Greater than 200 bp Three prophage elements are present in the genome. One is a small pyocin-like element made up of nine genes (WD00565–WD00575). The other two are closely related to and exhibit extensive gene order conservation with the WO phage described from Wolbachia sp. w Kue ( Masui et al. 2001 ) ( Figure 2 ). 
Thus, we have named them w Mel WO-A and WO-B, based upon their location in the genome. w Mel WO-B has undergone a major rearrangement and translocation, suggesting it is inactive. Phylogenetic analysis indicates that w Mel WO-B is more closely related to the w Kue WO than to w Mel WO-A ( Figure S1 ). Thus, w Mel WO-A likely represents either a separate insertion event in the Wolbachia lineage or a duplication that occurred prior to the separation of the w Mel and w Kue lineages. Phylogenetic analysis also confirms the proposed mosaic nature of the WO phage ( Masui et al. 2001 ), with one block being closely related to lambdoid phage and another to P2 phage (data not shown). Figure 2 Phage Alignments and Neighboring Genes Conserved gene order between the WO phage in Wolbachia sp. w Kue and prophage regions of w Mel. Putative proteins in w Kue ( Masui et al. 2001 ) were searched using TBLASTN against the w Mel genome. Matches with an E-value of less than 1e−15 are linked by connecting lines. CDSs are colored as follows: brown, phage structural or replication genes; light blue, conserved hypotheticals; red, hypotheticals; magenta, transposases or reverse transcriptases; blue, ankyrin repeat genes; light gray, radC ; light green, paralogous genes; gold, others. The regions surrounding the phage are shown because they have some unusual features relative to the rest of the genome. For example, WO-A and WO-B are each flanked on one side by clusters of genes in two paralogous families that are distantly related to phage repressors. In each of these clusters, a homolog of the radC gene is found. A third radC homolog (WD1093) in the genome is also flanked by a member of one of these gene families (WD1095). While the connection between radC and the phage is unclear, the multiple copies of the radC gene and the members of these paralogous families may have contributed to the phage rearrangements described above. 
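The Figure 2 comparison retains TBLASTN matches with an E-value below 1e−15. Filtering hits by E-value from BLAST tabular output can be sketched as below; the 12-column -outfmt 6 layout (E-value in column 11) is our assumption here, as the text does not state which output format was used.

```python
def strong_hits(blast_tab_lines, evalue_cutoff=1e-15):
    """Keep tabular BLAST hits below an E-value cutoff.

    Assumes the standard 12-column tabular layout (query, subject,
    %identity, ..., E-value, bitscore), where the E-value is the
    11th column (index 10). The cutoff mirrors the 1e-15 threshold
    used for Figure 2.
    """
    kept = []
    for line in blast_tab_lines:
        fields = line.rstrip("\n").split("\t")
        if float(fields[10]) < evalue_cutoff:
            kept.append((fields[0], fields[1], float(fields[10])))
    return kept

# Two hypothetical hits: only the first passes the 1e-15 cutoff.
rows = ["gp1\twMel_WO-A\t98.2\t300\t5\t0\t1\t300\t501\t800\t1e-80\t280",
        "gp2\twMel_WO-B\t45.0\t120\t60\t2\t1\t120\t10\t129\t1e-05\t52"]
print(strong_hits(rows))
```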
Genome Structure: Rearrangements, Duplications, and Deletions The irregular pattern of GC skew in w Mel is likely due in part to intragenomic rearrangements associated with the many DNA repeat elements. Comparison with a large contig from a Wolbachia species that infects Brugia malayi is consistent with this ( Ware et al. 2002 ) ( Figure 3 ). While only translocations are seen in this plot, genetic comparisons reveal that inversions also occur between strains ( Sun et al., 2003 ), which is consistent with previous studies of prokaryotic genomes that have found that the most common large-scale rearrangements are inversions that are symmetric around the origin of DNA replication ( Eisen et al. 2000 ). The occurrence of frequent rearrangement events during Wolbachia evolution is supported by the absence of any large-scale conserved gene order with Rickettsia genomes. The rearrangements in Wolbachia likely correspond with the introduction and massive expansion of the repeat element families that could serve as sites for intragenomic recombination, as has been shown to occur for some other bacterial species ( Parkhill et al. 2003 ). The rearrangements in w Mel may have fitness consequences since several classes of genes often found in clusters are generally scattered throughout the w Mel genome (e.g., ABC transporter subunits, Sec secretion genes, rRNA genes, F-type ATPase genes). Figure 3 Alignment of w Mel with a 60 kbp Region of the Wolbachia from B. malayi The figure shows BLASTN matches (green) and whole-proteome alignments (red) that were generated using the “promer” option of the MUMmer software ( Delcher et al. 1999 ). The B. malayi region is from a BAC clone ( Ware et al. 2002 ). Note the regions of alignment broken up by many rearrangements and the presence of repetitive sequences at the regions of the breaks. 
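For reference, the GC skew statistic plotted in circle 5 of Figure 1 and discussed above is (G−C)/(G+C) computed per window. A minimal sliding-window sketch follows; the window and step sizes are arbitrary illustrative choices, not the parameters used for the published plot.

```python
def gc_skew(seq, window=10000, step=10000):
    """GC skew (G - C) / (G + C) per window along a sequence.

    Window/step sizes here are illustrative defaults, not the
    values used to draw Figure 1. Windows lacking G and C
    entirely are assigned a skew of 0.0.
    """
    out = []
    for start in range(0, len(seq) - window + 1, step):
        w = seq[start:start + window].upper()
        g, c = w.count("G"), w.count("C")
        out.append((start, (g - c) / (g + c) if g + c else 0.0))
    return out

# Toy sequence: a G-rich window yields a positive skew.
print(gc_skew("GGGGCC" * 5, window=10, step=10))
```

In genomes with the canonical skew pattern, plotting these values reveals two sign changes, near the origin and terminus of replication; the irregular pattern in w Mel is what motivates the rearrangement analysis above.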
Although the common ancestor of Wolbachia and Rickettsia likely already had a reduced, streamlined genome, w Mel has lost additional genes since that time ( Table S3 ). Many of these recent losses are of genes involved in cell envelope biogenesis in other species, including most of the machinery for producing lipopolysaccharide (LPS) components and the alanine racemase that supplies D-alanine for cell wall synthesis. In addition, some other genes that may have once been involved in this process are present in the genome, but defective (e.g., mannose-1-phosphate guanylyltransferase, which is split into two coding sequences [CDSs], WD1224 and WD1227, by an IS5 element) and are likely in the process of being eliminated. The loss of cell envelope biogenesis genes has also occurred during the evolution of the Buchnera endosymbionts of aphids ( Shigenobu et al. 2000 ; Moran and Mira 2001 ). Thus, w Mel and Buchnera have lost some of the same genes separately during their reductive evolution. Such convergence means that attempts to use gene content to infer evolutionary relatedness need to be interpreted with caution. In addition, since Anaplasma and Ehrlichia also apparently lack genes for LPS production ( Lin and Rikihisa 2003 ), it is likely that the common ancestor of Wolbachia , Ehrlichia , and Anaplasma was unable to synthesize LPS. Thus, the reports that Wolbachia -derived LPS-like compounds are involved in the immunopathology of filarial nematode disease in mammals ( Taylor 2002 ) either indicate that these Wolbachia have acquired genes for LPS synthesis or that the reported LPS-like compounds are not homologous to LPS. Despite evident genome reduction in w Mel and in contrast to most small-genomed intracellular species, gene duplication appears to have continued, as over 50 gene families have apparently expanded in the w Mel lineage relative to all other species ( Table S4 ). 
Many of the pairs of duplicated genes are encoded next to each other in the genome, suggesting that they arose by tandem duplication events and may simply reflect transient duplications in evolution (deletion is common when there are tandem arrays of genes). Many others are components of mobile genetic elements, indicating that these elements have expanded significantly after entering the Wolbachia evolutionary lineage. Other duplications that could contribute to the unique biological properties of w Mel include that of the mismatch repair gene mutL (see below) and that of many hypothetical and conserved hypothetical proteins. One duplication of particular interest is that of wsp , which is a standard gene for strain identification and phylogenetic reconstruction in Wolbachia ( Zhou et al. 1998 ). In addition to the previously described wsp (WD0159), w Mel encodes two wsp paralogs (WD0009 and WD0489), which we designate as wspB and wspC , respectively. While these paralogs are highly divergent from wsp (protein identities of 19.7% and 23.5%, respectively) and do not amplify using the standard wsp PCR primers ( Braig et al. 1998 ; Zhou et al. 1998 ), their presence could lead to some confusion in classification and identification of Wolbachia strains. This has apparently occurred in one study of Wolbachia strain w KueYO, for which the reported wsp gene (gbAB045235) is actually an ortholog of wspB (99.8% sequence identity and located at the end of the virB operon [ Masui et al. 2000 ]) and not an ortholog of the wsp gene. 
Considering that the wsp gene has been extremely informative for discriminating between strains of Wolbachia , we designed PCR primers to the w Mel wspB gene to determine the potential utility of this locus for strain discrimination. These primers were used to amplify and then sequence the wspB orthologs from the related w Ri and w AlbB Wolbachia strains (from Drosophila simulans and Aedes albopictus , respectively), as well as from the Wolbachia strain that infects the filarial nematode Dirofilaria immitis . A comparison of genetic distances between the wsp and wspB genes for these different taxa indicates that overall the wspB gene appears to be evolving at a faster rate than wsp and, as such, may be a useful additional marker for discriminating between closely related Wolbachia strains ( Table S5 ). Inefficiency of Selection in w Mel The fraction of the genome that is repetitive DNA and the fraction that corresponds to mobile genetic elements are among the highest for any prokaryotic genome. This is particularly striking compared to the genomes of other obligate intracellular species such as Buchnera , Rickettsia , Chlamydia , and Wigglesworthia , which all have very low levels of repetitive DNA and mobile elements. The recently sequenced genome of the intracellular pathogen Coxiella burnetii ( Seshadri et al. 2003 ) is streamlined yet contains moderate amounts of repetitive DNA, although much less than w Mel. The paucity of repetitive DNA in these and other intracellular species is thought to be due to a combination of lack of exposure to other species, thereby limiting introduction of mobile elements, and genome streamlining ( Mira et al. 2001 ; Moran and Mira 2001 ; Frank et al. 2002 ). We examined the w Mel genome to try to understand the origin of the repetitive and mobile DNA and to explain why such repetitive/mobile DNA is present in w Mel, but not other streamlined intracellular species. 
We propose that the mobile DNA in w Mel was acquired some time after the separation of the Wolbachia and Rickettsia lineages but before the radiation of the Wolbachia group. The acquisition of these elements after the separation of the Wolbachia and Rickettsia lineages is suggested by the fact that most do not have any obvious homologous sequences in the genomes of other α-Proteobacteria, including the closely related Rickettsia spp. Additional evidence for some acquisition of foreign DNA after the Wolbachia–Rickettsia split comes from phylogenetic analysis of those genes present in w Mel, but not in the two sequenced rickettsial genomes (see Table S3 ; unpublished data). The acquisition prior to the radiation of Wolbachia is suggested by two lines of evidence. First, many of the elements are found in the genome of the distantly related Wolbachia of the nematode B. malayi (see Figure 3 ; unpublished data). Second, genome analysis reveals that these elements do not have significantly anomalous nucleotide composition or codon usage compared to the rest of the genome. In fact, there are only four regions of the genome with significantly anomalous composition, comprising in total only approximately 17 kbp of DNA ( Table 3 ). The lack of anomalous composition suggests either that any foreign DNA in w Mel was acquired long enough ago to allow it to “ameliorate” and become compositionally similar to endogenous Wolbachia DNA ( Lawrence and Ochman 1997 , 1998 ) or that any foreign DNA that is present was acquired from organisms with similar composition to endogenous w Mel genes. Owing to their potential effects on genome evolution (insertional mutagenesis, catalyzing genome rearrangements), we propose that the acquisition and maintenance of these repetitive and mobile elements by w Mel have played a key role in shaping the evolution of Wolbachia . 
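One simple way to flag compositionally anomalous regions such as those in Table 3 is a chi-squared statistic comparing each window's base counts against genome-wide frequencies, in the spirit of the χ2 plot in Figure 1. This sketch is illustrative only; the exact statistic and windowing used for the published analysis are not specified in the text.

```python
def chi2_composition(window, genome_counts):
    """Chi-squared statistic for a window's base composition
    against genome-wide frequencies.

    genome_counts maps each base to its genome-wide count; large
    values flag compositionally anomalous windows. An illustrative
    stand-in for the Figure 1 chi-squared plot, not the published
    method.
    """
    total_genome = sum(genome_counts.values())
    n = len(window)
    chi2 = 0.0
    for base in "ACGT":
        expected = n * genome_counts[base] / total_genome
        observed = window.upper().count(base)
        if expected > 0:
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# A window matching the background composition scores 0.
background = {"A": 250, "C": 250, "G": 250, "T": 250}
print(chi2_composition("ACGT" * 25, background))
```

Sliding this statistic along the genome and thresholding it would recover candidate regions like the approximately 17 kbp of anomalous DNA described above.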
Table 3 Regions of Anomalous Nucleotide Composition in the wMel Genome It is likely that much of the mobile/repetitive DNA was introduced via phage, given that three prophage elements are present; experimental studies have shown active phage in some Wolbachia ( Masui et al. 2001 ), and Wolbachia superinfections occur in many hosts (e.g., Jamnongluk et al. 2002 ), which would allow phage to move between strains. Whatever the mechanism of introduction, the persistence of the repetitive elements in w Mel in the face of apparently strong pressures for streamlining is intriguing. One explanation is that w Mel may be getting a steady infusion of mobile elements from other Wolbachia strains to counteract the elimination of elements by selection for genome streamlining. This would explain the absence of anomalous nucleotide composition of the elements. However, we believe that a major contributing factor to the presence of all the repetitive/mobile DNA in w Mel is that w Mel, and possibly Wolbachia in general, experience unusually inefficient natural selection relative to other species. This inefficiency would limit the ability to eliminate repetitive DNA. A general inefficiency of natural selection (especially purifying selection) has been suggested previously for intracellular bacteria, based in part on observations that these bacteria have higher evolutionary rates than free-living bacteria (e.g., Moran 1996 ). We also find a higher evolutionary rate for w Mel than that of the closely related intracellular Rickettsia , which themselves have higher rates than free-living α-Proteobacteria ( Figure 4 ). Additionally, codon bias in w Mel appears to be driven more by mutation or drift than selection ( Figure S2 ), as has been reported for Buchnera species and was suggested to be due to inefficient purifying selection ( Wernegreen and Moran 1999 ). 
Such inefficiencies of natural selection are generally due to an increase in the relative contribution of genetic drift and mutation as compared to natural selection ( Eiglmeier et al. 2001 ; Lawrence 2001 ; Parkhill et al. 2001 ). Below we discuss different possible explanations for the inefficiency of selection in w Mel, especially in comparison to other intracellular bacteria. Figure 4 Long Evolutionary Branches in w Mel Maximum-likelihood phylogenetic tree constructed on concatenated protein sequences of 285 orthologs shared among w Mel, R. prowazekii , R. conorii , C. crescentus, and E. coli . The location of the most recent common ancestor of the α-Proteobacteria ( Caulobacter , Rickettsia , Wolbachia ) is defined by the outgroup E. coli. The unit of branch length is the number of changes per amino acid. Overall, the amino acid substitution rate in the w Mel lineage is about 63% higher than that of C. crescentus , a free-living α-Proteobacterium. w Mel has evolved at a slightly higher rate than the Rickettsia spp., close relatives that are also obligate intracellular bacteria that have undergone accelerated evolution themselves. This higher rate is likely due in part to an increase in the rate of slightly deleterious mutations, although we have not ruled out the possibility of G+C content effects on the branch lengths. Low rates of recombination, such as occur in centromeres and the human Y chromosome, can lead to inefficient selection because of the linkage among genes. This has been suggested to be occurring in Buchnera species because these species do not encode homologs of RecA, which is the key protein in homologous recombination in most species ( Shigenobu et al. 2000 ). The absence of recombination in Buchnera is supported by the lack of genome rearrangements in their recent evolution ( Tamas et al. 2002 ). Additionally, there is apparently little or no gene flow into Buchnera strains. 
In contrast, w Mel encodes the necessary machinery for recombination, including RecA ( Table S6 ), and has experienced both extensive intragenomic homologous recombination and introduction of foreign DNA. Therefore, the unusual genome features of w Mel are unlikely to be due to low levels of recombination. Another possible explanation for inefficient selection is high mutation rates. It has been suggested that the higher evolutionary rates in intracellular bacteria are the result of high mutation rates that are in turn due to the loss of genes for DNA repair processes (e.g., Itoh et al. 2002 ). This is likely not the case in w Mel since its genome encodes proteins corresponding to a broad suite of DNA repair pathways including mismatch repair, nucleotide excision repair, base excision repair, and homologous recombination ( Table S6 ). The only noteworthy DNA repair gene absent from w Mel and present in the more slowly evolving Rickettsia is mfd, which is involved in targeting DNA repair to the transcribed strand of actively transcribing genes in other species ( Selby et al. 1991 ). However, this absence is unlikely to contribute significantly to the increased evolutionary rate in w Mel, since defects in mfd do not lead to large increases in mutation rates in other species ( Witkin 1994 ). The presence of mismatch repair genes (homologs of mutS and mutL ) in w Mel is particularly relevant since this pathway is one of the key steps in regulating mutation rates in other species. In fact, w Mel is the first bacterial species to be found with two mutL homologs. Overall, examination of the predicted DNA repair capabilities of bacteria ( Eisen and Hanawalt 1999 ) suggests that the connection between evolutionary rates in intracellular species and the loss of DNA repair processes is spurious. 
While many intracellular species have lost DNA repair genes in their recent evolution, different species have lost different genes and some, such as w Mel and Buchnera spp., have kept the genes that likely regulate mutation rates. In addition, some free-living species without high evolutionary rates have lost some of the same pathways lost in intracellular species, while many free-living species have lost key pathways resulting in high mutation rates (e.g., Helicobacter pylori has apparently lost mismatch repair [ Eisen 1997 , Eisen 1998b ; Bjorkholm et al. 2001 ]). Given that intracellular species tend to have small genomes and have lost genes from every type of biological process, it is not surprising that many of them have lost DNA repair genes as well. We believe that the most likely explanations for the inefficiency of selection in w Mel involve population-size related factors, such as genetic drift and the occurrence of population bottlenecks. Such factors have also been shown to likely explain the high evolutionary rates in other intracellular species ( Moran 1996 ; Moran and Mira 2001 ; van Ham et al. 2003 ). Wolbachia likely experience frequent population bottlenecks both during transovarial transmission ( Boyle et al. 1993 ) and during cytoplasmic incompatibility-mediated sweeps through host populations. The extent of these bottlenecks may be greater than in other intracellular bacteria, which would explain why w Mel has both more repetitive and mobile DNA than other such species and a higher evolutionary rate than even the related Rickettsia spp. Additional genome sequences from other Wolbachia will reveal whether this is a feature of all Wolbachia or only certain strains. Mitochondrial Evolution There is a general consensus in the evolutionary biology literature that the mitochondria evolved from bacteria in the α-subgroup of the Proteobacteria phylum (e.g., Lang et al. 1999 ). 
Analysis of complete mitochondrial and bacterial genomes has very strongly supported this hypothesis ( Andersson et al. 1998 , 2003 ; Muller and Martin 1999 ; Ogata et al. 2001 ). However, the exact position of the mitochondria within the α-Proteobacteria is still debated. Many studies have placed them in or near the Rickettsiales order ( Viale and Arakaki 1994 ; Gupta 1995 ; Sicheritz-Ponten et al. 1998 ; Lang et al. 1999 ; Bazinet and Rollins 2003 ). Some studies have further suggested that mitochondria are a sister taxon to the Rickettsia genus within the Rickettsiaceae family and thus more closely related to Rickettsia spp. than to species in the Anaplasmataceae family such as Wolbachia ( Karlin and Brocchieri 2000 ; Emelyanov 2001a , 2001b , 2003a , 2003b ). In our analysis of complete genomes, including that of w Mel, the first non- Rickettsia member of the Rickettsiales order to have its genome completed, we find support for a grouping of Wolbachia and Rickettsia to the exclusion of the mitochondria, but not for placing the mitochondria within the Rickettsiales order ( Figure 5 A and 5 B; Table S7 ; Table S8 ). Specifically, phylogenetic trees of a concatenated alignment of 32 proteins show strong support with all methods (see Table S7 ) for the common branching of: (i) mitochondria, (ii) Rickettsia with Wolbachia , (iii) the free-living α-Proteobacteria, and (iv) mitochondria within the α-Proteobacteria. Since amino acid content bias was very severe in these datasets, protein LogDet analyses, which can correct for the bias, were also performed. In LogDet analyses of the concatenated protein alignment, both including and excluding highly biased positions, mitochondria usually branched basal to the Wolbachia–Rickettsia clade, but never specifically with Rickettsia (see Table S7 ). 
In addition, in phylogenetic studies of individual genes, there was no consistent phylogenetic position of mitochondrial proteins with any particular species or group within the α-Proteobacteria (see Table S8 ), although support for a specific branch uniting the two Rickettsia species with Wolbachia was quite strong. Eight of the proteins from mitochondrial genomes (YejW, SecY, Rps8, Rps2, Rps10, RpoA, Rpl15, Rpl32) do not even branch within the α-Proteobacteria, although these genes almost certainly were encoded in the ancestral mitochondrial genome ( Lang et al. 1997 ). Figure 5 Mitochondrial Evolution Using Concatenated Alignments Networks of protein LogDet distances for an alignment of 32 proteins constructed with Neighbor-Net ( Bryant and Moulton 2003 ). The scale bar indicates 0.1 substitutions per site. Enlargements at lower right show the component of shared similarity between mitochondrial-encoded proteins and (i) their homologs from intracellular endosymbionts (red) as well as (ii) their homologs from free-living α-Proteobacteria (blue). (A) Result using 6,776 gap-free sites per genome (heavily biased in amino acid composition). (B) Result using 3,100 sites after exclusion of highly variable positions (data not biased in amino acid composition at p = 0.95). All data and alignments are available upon request. Results of phylogenetic analyses are summarized in Table S7 . Since amino acid content bias was very severe in these datasets, protein LogDet analyses were also performed. In neighbor-joining, parsimony, and maximum-likelihood trees generated from alignments both including and excluding highly biased positions (6,776 and 3,100 gap-free amino acid sites per genome, respectively), mitochondria usually branched basal to the Wolbachia–Rickettsia clade, but never specifically with Rickettsia ( Table S7 ). 
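For reference, a paralinear/LogDet distance of the kind used in these analyses (Lockhart et al. 1994) can be computed from the joint divergence matrix of two aligned sequences. The sketch below is a simplified stand-in under that published formula, not a reimplementation of the pipeline behind Figure 5 and Table S7; for the protein analyses the alphabet size r would be 20 rather than the nucleotide default of 4.

```python
import numpy as np

def logdet_distance(F, r=4):
    """Paralinear/LogDet distance (Lockhart et al. 1994).

    F[i, j] is the joint frequency of state i in one sequence and
    state j in the other (entries sum to 1); r is the alphabet
    size (4 for nucleotides, 20 for amino acids). A simplified
    sketch, not the exact procedure used for Table S7.
    """
    F = np.asarray(F, dtype=float)
    fx = F.sum(axis=1)  # marginal state frequencies, sequence 1
    fy = F.sum(axis=0)  # marginal state frequencies, sequence 2
    return -(1.0 / r) * (np.log(np.linalg.det(F))
                         - 0.5 * (np.log(fx).sum() + np.log(fy).sum()))

# Identical sequences give a diagonal F and a distance of zero
# (up to floating-point error); more off-diagonal mass gives a
# larger distance.
print(logdet_distance(np.eye(4) * 0.25))
print(logdet_distance(np.full((4, 4), 0.05) + np.eye(4) * 0.05))
```

Because the determinant correction absorbs unequal state frequencies in the two sequences, LogDet distances are less distorted by the severe amino acid composition bias noted above than standard model-based distances.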
This analysis of mitochondrial and α-Proteobacterial genes reinforces the view that ancient protein phylogenies are inherently prone to error, most likely because current models of phylogenetic inference do not accurately reflect the true evolutionary processes underlying the differences observed in contemporary amino acid sequences ( Penny et al. 2001 ). These conflicting results regarding the precise position of mitochondria within the α-Proteobacteria can be seen in the high amount of networking in the Neighbor-Net graph of the analyses of the concatenated alignment shown in Figure 5 . An important complication in studies of mitochondrial evolution lies in identifying “α-Proteobacterial” genes for comparison ( Martin 1999 ). For example, in our analyses, proteins from Magnetococcus branched with other α-Proteobacterial homologs in only 17 of the 49 proteins studied, and in five cases they assumed a position basal to α-, β-, and γ-Proteobacterial homologs. Host–Symbiont Gene Transfers Many genes that were once encoded in mitochondrial genomes have been transferred into the host nuclear genomes. Searching for such genes has been complicated by the fact that many of the transfer events happened early in eukaryotic evolution and that there are frequently extreme amino acid and nucleotide composition biases in mitochondrial genomes (see above). We used the w Mel genome to search for additional possible mitochondrial-derived genes in eukaryotic nuclear genomes. Specifically, we constructed phylogenetic trees for w Mel genes that are not in either Rickettsia genome. Five new eukaryotic genes of possible mitochondrial origin were identified: three genes involved in de novo nucleotide biosynthesis ( purD , purM , pyrD ) and two conserved hypothetical proteins (WD1005, WD0724). 
The α-Proteobacterial origin of these genes suggests that at least some of the genes of the de novo nucleotide synthesis pathway in eukaryotes might have been laterally acquired from bacteria via the mitochondria. The presence of such genes in other Proteobacteria suggests that their absence from Rickettsia is due to gene loss ( Gray et al. 2001 ). This finding supports the need for additional α-Proteobacterial genomes to identify mitochondrion-derived genes in eukaryotes. While organelle to nuclear gene transfers are generally accepted, there is a great deal of controversy over whether other gene transfers have occurred from bacteria into animals. In particular, claims of transfer from bacteria into the human genome ( Lander et al. 2001 ) were later shown to be false ( Roelofs and Van Haastert 2001 ; Salzberg et al. 2001 ; Stanhope et al. 2001 ). Wolbachia are excellent candidates for such transfer events since they live inside the germ cells, which would allow lateral transfers to the host to be transmitted to subsequent host generations. Consistent with this, a recent study has shown some evidence for the presence of Wolbachia- like genes in a beetle genome ( Kondo et al. 2002 ). The symbiosis between w Mel and D. melanogaster provides an ideal case to search for such transfers since we have the complete genomes of both the host and symbiont. Using BLASTN searches and MUMmer alignments, we did not find any examples of highly similar stretches of DNA shared between the two species. In addition, protein-level searches and phylogenetic trees did not identify any specific relationships between w Mel and D. melanogaster for any genes. Thus, at least for this host–symbiont association, we do not find any likely cases of recent gene exchange, with genes being maintained in both host and symbiont. 
In addition, in our phylogenetic analyses, we did not find any examples of w Mel proteins branching specifically with proteins from any invertebrate to the exclusion of other eukaryotes. Therefore, at least for the genes in w Mel, we do not find evidence for transfer of Wolbachia genes into any invertebrate genome. Metabolism and Transport w Mel is predicted to have very limited capabilities for membrane transport, for substrate utilization, and for the biosynthesis of metabolic intermediates ( Figure S3 ), similar to what has been seen in other intracellular symbionts and pathogens ( Paulsen et al. 2000 ). Almost all of the identifiable uptake systems for organic nutrients in w Mel are for amino acids, including predicted transporters for proline, aspartate/glutamate, and alanine. This pattern of transporters, coupled with the presence of pathways for the metabolism of the amino acids cysteine, glutamate, glutamine, proline, serine, and threonine, suggests that w Mel may obtain much of its energy from amino acids. These amino acids could also serve as material for the production of other amino acids. In contrast, carbohydrate metabolism in w Mel appears to be limited. The only pathways that appear to be complete are the tricarboxylic acid cycle, the nonoxidative pentose phosphate pathway, and glycolysis, starting with fructose-1,6-bisphosphate. The limited carbohydrate metabolism is consistent with the presence of only one sugar phosphate transporter. w Mel can also apparently transport a range of inorganic ions, although two of these systems, for potassium uptake and sodium ion/proton exchange, are frameshifted. In the latter case, two other sodium ion/proton exchangers may be able to compensate for this defect. Many of the predicted metabolic properties of w Mel, such as the focus on amino acid transport and the presence of limited carbohydrate metabolism, are similar to those found in Rickettsia. A major difference with the Rickettsia spp. 
is the absence of the ADP–ATP exchanger protein in w Mel. In Rickettsia this protein is used to import ATP from the host, thus allowing these species to be direct energy scavengers ( Andersson et al. 1998 ). This likely explains the presence of glycolysis in w Mel but not Rickettsia. An inability to obtain ATP from its host also helps explain the presence of pathways for the synthesis of the purines AMP, IMP, XMP, and GMP in w Mel but not Rickettsia. Other pathways present in w Mel but not Rickettsia include threonine degradation (described above), riboflavin biosynthesis, pyrimidine metabolism (i.e., from PRPP to UMP), and chelated iron uptake (using a single ABC transporter). The two Rickettsia species have a relatively large complement of predicted transporters for osmoprotectants, such as proline and glycine betaine, whereas w Mel possesses only two of these systems. Regulatory Responses The w Mel genome is predicted to encode few proteins for regulatory responses. Three genes encoding two-component system subunits are present: two sensor histidine kinases (WD1216 and WD1284) and one response regulator (WD0221). Only six strong candidates for transcription regulators were identified: a homolog of arginine repressors (WD0453), two members of the TenA family of transcription activator proteins (WD0139 and WD0140), a homolog of ctrA , a transcription regulator for two component systems in other α-Proteobacteria (WD0732), and two σ factors (RpoH/WD1064 and RpoD/WD1298). There are also seven members of one paralogous family of proteins that are distantly related to phage repressors (see above), although if they have any role in transcription, it is likely only for phage genes. Such a limited repertoire of regulatory systems has also been reported in other endosymbionts and has been explained by the apparent highly predictable and stable environment in which these species live ( Andersson et al. 1998 ; Read et al. 2000 ; Shigenobu et al. 
2000 ; Moran and Mira 2001 ; Akman et al. 2002 ; Seshadri et al. 2003 ). Host–Symbiont Interactions The mechanisms by which Wolbachia infect host cells and by which they cause the diverse phenotypic effects on host reproduction and fitness are poorly understood, and the w Mel genome helps identify potential contributing factors. A complete Type IV secretion system, portions of which have been reported in earlier studies, is present. The complete genome sequence shows that in addition to the five vir genes previously described from Wolbachia w KueYO ( Masui et al. 2001 ), an additional four are present in w Mel. Of the nine w Mel vir ORFs, eight are arranged into two separate operons. Similar to the single operon identified in w Tai and w KueYO, the w Mel virB8 , virB9 , virB10 , virB11 , and virD4 CDSs are adjacent to wspB , forming a 7 kb operon (WD0004–WD0009). The second operon contains virB3 , virB4 , and virB6 as well as four additional non- vir CDSs, including three putative membrane-spanning proteins, that form part of a 15.7 kb operon (WD0859–WD0853). Examination of the Rickettsia conorii genome shows a similar organization ( Figure 6 A). The observed conserved gene order for these genes between these two genomes suggests that the putative membrane-spanning proteins could form a novel and, possibly, integral part of a functioning Type IV secretion system within these bacteria. Moreover, reverse transcription (RT)-PCRs have confirmed that wspB and WD0853–WD0856 are each expressed as part of the two vir operons and further indicate that these additional encoded proteins are novel components of the Wolbachia Type IV secretion system ( Figure 6 B). Figure 6 Genomic Organization and Expression of Type IV Secretion Operons in w Mel (A) Organization of the nine vir -like CDSs (white arrows) and five adjacent CDSs that encode for either putative membrane-spanning proteins (black arrows) or non- vir CDSs (gray arrows) of w Mel, R. conorii , and A. tumefaciens .
Solid horizontal lines denote RT experiments that have confirmed that adjacent CDSs are expressed as part of a polycistronic transcript. Results of these RT-PCR experiments are presented in (B). Lane 1, virB3 - virB4 ; lane 2, RT control; lane 3, virB6 -WD0856; lane 4, RT control; lane 5, WD0856-WD0855; lane 6, RT control; lane 7, WD0854-WD0853; lane 8, RT control; lane 9, virB8 - virB9 ; lane 10, RT control; lane 11, virB9 - virB11 ; lane 12, RT control; lane 13, virB11 - virD4 ; lane 14, RT control; lane 15, virD4 - wspB ; lane 16, RT control; lane 17, virB4 - virB6 ; lane 18, RT control; lane 19, WD0855-WD0854; lane 20, RT control. Only PCRs that contain reverse transcriptase amplified the desired products. PCR primer sequences are listed in Table S9 . In addition to the two major vir clusters, a paralog of virB8 (WD0817) is also present in the w Mel genome. WD0817 is quite divergent from virB8 and, as such, does not appear to have resulted from a recent gene duplication event. RT-PCR experiments have failed to show expression of this CDS in w Mel-infected Drosophila (data not shown). PCR primers were designed to all CDSs of the w Mel Type IV secretion system and used to successfully amplify orthologs from the divergent Wolbachia strains w Ri and w AlbB (data not shown). We were able to detect orthologs to all of the w Mel Type IV secretion system components as well as most of the adjacent non- vir CDSs, suggesting that this system is conserved across a range of A- and B-group Wolbachia . An increasing body of evidence has highlighted the importance of Type IV secretion systems for the successful infection, invasion, and persistence of intracellular bacteria within their hosts ( Christie 2001 ; Sexton and Vogel 2002 ). It is likely that the Type IV system in Wolbachia plays a role in the establishment and maintenance of infection and possibly in the generation of reproductive phenotypes.
Genes involved in pathogenicity in bacteria have been found to be frequently associated with regions of anomalous nucleotide composition, possibly owing to transfer from other species or insertion into the genome from plasmids or phage. In the four such regions in w Mel (see above; see Table 3 ), some additional candidates for pathogenicity-related activities are present including a putative penicillin-binding protein (WD0719), genes predicted to be involved in cell wall synthesis (WD0095–WD0098, including D-alanine-D-alanine ligase, a putative FtsQ, and D-alanyl-D-alanine carboxy peptidase) and a multidrug resistance protein (WD0099). In addition, we have identified a cluster of genes in one of the phage regions that may also have some role in host–symbiont interactions. This cluster (WD0611–WD0621) is embedded within the WO-B phage region of the genome (see Figure 2 ) and contains many genes that encode proteins with putative roles in the synthesis and degradation of surface polysaccharides, including a UDP-glucose 6-dehydrogenase (WD0620). Since this cluster appears to be normal in terms of phylogeny relative to other genes in the genome (i.e., the genes in this region have normal w Mel nucleotide composition and branch in phylogenetic trees with genes from other α-Proteobacteria), it is not likely to have been acquired from other species. However, it is possible that these genes can be transferred among Wolbachia strains via the phage, which in turn could lead to some variation in host–symbiont interactions between Wolbachia strains. Of particular interest for host-interaction functions are the large number of genes that encode proteins that contain ankyrin repeats ( Table 4 ). Ankyrin repeats, a tandem motif of around 33 amino acids, are found mainly in eukaryotic proteins, where they are known to mediate protein–protein interactions ( Caturegli et al. 2000 ). 
While they have been found in bacteria before, they are usually present in only a few copies per species. w Mel has 23 ankyrin repeat-containing genes, the most currently described for a prokaryote, with C. burnetii being next with 13. This is particularly striking given w Mel's relatively small genome size. The functions of the ankyrin repeat-containing proteins in w Mel are difficult to predict since most have no sequence similarity outside the ankyrin domains to any proteins of known function. Many lines of evidence suggest that the w Mel ankyrin domain proteins are involved in regulating host cell-cycle or cell division or interacting with the host cytoskeleton: (i) many ankyrin-containing proteins in eukaryotes are thought to be involved in linking membrane proteins to the cytoskeleton ( Hryniewicz-Jankowska et al. 2002 ); (ii) an ankyrin-repeat protein of Ehrlichia phagocytophila binds condensed chromatin of host cells and may be involved in host cell-cycle regulation ( Caturegli et al. 2000 ); (iii) some of the proteins that modify the activity of cell-cycle-regulating proteins in D. melanogaster contain ankyrin repeats ( Elfring et al. 1997 ); and (iv) the Wolbachia strain that infects the wasp Nasonia vitripennis induces cytoplasmic incompatibility, likely by interacting with these same cell-cycle proteins ( Tram and Sullivan 2002 ). Of the ankyrin-containing proteins in w Mel, those worth exploring in more detail include the several that are predicted to be surface targeted or secreted ( Table 4 ) and thus could be targeted to the host nucleus. It is also possible that some of the other ankyrin-containing proteins are secreted via the Type IV secretion system in a targeting-signal-independent pathway.
We call particular attention to three of the ankyrin-containing proteins (WD0285, WD0636, and WD0637), which are among the very few genes, other than those encoding components of the translation apparatus, that have significantly biased codon usage relative to what is expected based on GC content, suggesting they may be highly expressed. Table 4. Ankyrin-Domain Containing Proteins Encoded by the w Mel Genome Conclusions Analysis of the w Mel genome reveals that it is unique among sequenced genomes of intracellular organisms in that it is both streamlined and massively infected with mobile genetic elements. The persistence of these elements in the genome for apparently long periods of time suggests that w Mel is inefficient at getting rid of them, likely a result of experiencing severe population bottlenecks during every cycle of transovarial transmission as well as during sweeps through host populations. Integration of evolutionary reconstructions and genome analysis (phylogenomics) has provided insights into the biology of Wolbachia , helped identify genes that likely play roles in the unusual effects Wolbachia have on their host, and revealed many new details about the evolution of Wolbachia and mitochondria. Perhaps most importantly, future studies of Wolbachia will benefit both from this genome sequence and from the ability to study host–symbiont interactions in a host ( D. melanogaster ) well-suited for experimental studies. Materials and Methods Purification/source of DNA w Mel DNA was obtained from D. melanogaster yw 67c23 flies that naturally carry the w Mel infection. w Mel was purified from young adult flies on pulsed-field gels as described previously ( Sun et al. 2001 ). Plugs were digested with the restriction enzyme AscI (GG^CGCGCC), which cuts the bacterial chromosome twice ( Sun et al. 2001 ), aiding in the entry of the DNA into agarose gels. 
After electrophoresis, the resulting two bands were recovered from the gel and stored in 0.5 M EDTA (pH 8.0). DNA was extracted from the gel slices by first washing in TE (Tris–HCl and EDTA) buffer six times for 30 min each to dilute EDTA followed by two 1-h washes in β-agarase buffer (New England Biolabs, Beverly, Massachusetts, United States). Buffer was then removed and the blocks melted at 70°C for 7 min. The molten agarose was cooled to 40°C and then incubated in β-agarase (1 U/100 μl of molten agarose) for 1 h. The digest was cooled to 4°C for 1 h and then centrifuged at 4,100 × g max for 30 min at 4°C to remove undigested agarose. The supernatant was concentrated on a Centricon YM-100 microconcentrator (Millipore, Bedford, Massachusetts, United States) after prerinsing with 70% ethanol followed by TE buffer and, after concentration, rinsed with TE. The retentate was incubated with proteinase K at 56°C for 2 h and then stored at 4°C. w Mel DNA for gap closure was prepared from approximately 1,000 Drosophila adults using the Holmes–Bonner urea/phenol:chloroform protocol ( Holmes and Bonner 1973 ) to prepare total fly DNA. Library construction/sequencing/closure The complete genome sequence was determined using the whole-genome shotgun method ( Venter et al. 1996 ). For the random shotgun-sequencing phase, libraries of average size 1.5–2.0 kb and 4.0–8.0 kb were used. After assembly using the TIGR Assembler ( Sutton et al. 1995 ), there were 78 contigs greater than 5000 bp, 186 contigs greater than 3000 bp, and 373 contigs greater than 1500 bp. This number of contigs was unusually high for a 1.27 Mb genome. An initial screen using BLASTN searches against the nonredundant database in GenBank and the Berkeley Drosophila Genome Project site ( http://www.fruitfly.org/blast/ ) showed that 3,912 of the 10,642 contigs were likely contaminants from the Drosophila genome. To aid in closure, the assemblies were rerun with all sequences of likely host origin excluded. 
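The contaminant screen described above, which flagged contigs whose best database match was the host genome, amounts to a simple partition of the contig set. A minimal sketch, assuming the best-hit organism for each contig has already been parsed from the BLASTN reports (the contig names and dictionary here are illustrative, not data from the study):

```python
def split_contigs(contigs, best_hit_organism):
    """Partition contig IDs into likely host contaminants vs. symbiont
    candidates. `best_hit_organism` maps contig ID to the organism of
    its best BLASTN hit (a simplified stand-in for report parsing);
    contigs with no hit are retained as symbiont candidates."""
    host, symbiont = [], []
    for contig in contigs:
        if best_hit_organism.get(contig) == "Drosophila melanogaster":
            host.append(contig)
        else:
            symbiont.append(contig)
    return host, symbiont

# Illustrative input: one host contig, one symbiont contig, one no-hit contig.
hits = {"contig_1": "Drosophila melanogaster",
        "contig_2": "Rickettsia prowazekii"}
host, keep = split_contigs(["contig_1", "contig_2", "contig_3"], hits)
```

Contigs in `host` would be excluded before rerunning the assembly, as described above.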
Closure, which was made very difficult by the presence of a large amount of repetitive DNA (see below), was done using a mix of primer walking, generation and sequencing of transposon-tagged libraries of large insert clones, and multiplex PCR ( Tettelin et al. 1999 ). The final sequence showed little evidence for polymorphism within the population of Wolbachia DNA. In addition, to obtain sequence across the AscI-cut sites, PCR was performed on undigested DNA. Notably, significant host contamination did not compromise the symbiont genome assembly because most of the Drosophila contigs were small, owing to the approximately 100-fold difference in genome size between host (approximately 180 Mb) and w Mel (1.2 Mb). Since it has been suggested that Wolbachia and their hosts may undergo lateral gene transfer events ( Kondo et al. 2002 ), genome assemblies were rerun using all of the shotgun and closure reads without excluding any sequences that appeared to be of host origin. Only five assemblies were found to match both the D. melanogaster genome and the w Mel assembly. Primers were designed to match these assemblies and PCR attempted from total DNA of w Mel infected D. melanogaster . In each case, PCR was unsuccessful, and we therefore presume that these assemblies are the result of chimeric cloning artifacts. The complete sequence has been given GenBank accession ID AE017196 and is available at http://www.tigr.org/tdb . Repeats Repeats were identified using RepeatFinder ( Volfovsky et al. 2001 ), which makes use of the REPuter algorithm ( Kurtz and Schleiermacher 1999 ) to find maximal-length repeats. Some manual curation and BLASTN and BLASTX searches were used to divide repeat families into different classes. Annotation Identification of putative protein-encoding genes and annotation of the genome was done as described previously ( Eisen et al. 2002 ).
An initial set of ORFs likely to encode proteins (CDS) was identified with GLIMMER ( Salzberg et al. 1998 ). Putative proteins encoded by the CDS were examined to identify frameshifts or premature stop codons compared to other species. The sequence traces for each were reexamined and, for some, new sequences were generated. Those for which the frameshift or premature stops were of high quality were annotated as “authentic” mutations. Functional assignment, identification of membrane-spanning domains, determination of paralogous gene families, and identification of regions of unusual nucleotide composition were performed as described previously ( Tettelin et al. 2001 ). Phylogenomic analysis ( Eisen 1998a ; Eisen and Fraser 2003 ) was used to aid in functional predictions. Alignments and phylogenetic trees were generated as described ( Salzberg et al. 2001 ). Comparative genomics All putative w Mel proteins were searched using BLASTP against the predicted proteomes of published complete organismal genomes and a set of complete plastid, mitochondrial, plasmid, and viral genomes. The results of these searches were used (i) to analyze the phylogenetic profile ( Pellegrini et al. 1999 ; Eisen and Wu 2002 ), (ii) to identify putative lineage-specific duplications (those proteins with a top E -value score to another protein from w Mel), and (iii) to determine the presence of homologs in different species. Orthologs between the w Mel genome and that of the two Rickettsia species were identified by requiring mutual best-hit relationships among all possible pairwise BLASTP comparisons, with some manual correction. Those genes present in both Rickettsia genomes as well as other bacterial species, but not w Mel, were considered to have been lost in the w Mel branch (see Table S3 ). 
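The mutual best-hit criterion for ortholog identification can be sketched as follows. The tuples stand in for parsed BLASTP output, and the protein identifiers are illustrative; the manual correction step noted above is not modeled:

```python
def best_hits(hits):
    """Map each query to its lowest-E-value subject.
    `hits` is an iterable of (query, subject, evalue) tuples
    from a one-way BLASTP search."""
    best = {}
    for query, subject, evalue in hits:
        if query not in best or evalue < best[query][1]:
            best[query] = (subject, evalue)
    return {q: s for q, (s, _) in best.items()}

def reciprocal_best_hits(a_vs_b, b_vs_a):
    """Return ortholog pairs: a's best hit in B must name a back."""
    ab = best_hits(a_vs_b)
    ba = best_hits(b_vs_a)
    return {(a, b) for a, b in ab.items() if ba.get(b) == a}

# Toy example: WD0001 and RP001 are each other's best hits,
# whereas WD0002 hits RP001 only weakly and non-reciprocally.
pairs = reciprocal_best_hits(
    [("WD0001", "RP001", 1e-50), ("WD0002", "RP001", 1e-5)],
    [("RP001", "WD0001", 1e-48)],
)
```

Running all pairwise comparisons among the three genomes and intersecting the resulting pair sets yields the three-way ortholog table referred to above.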
Genes present in only one or two of the three species were considered candidates for gene loss or lateral transfer and were also used to identify possible biological differences between these species (see Table S3 ). For the w Mel genes not in the Rickettsia genomes, proteins were searched with BLASTP against the TIGR NRAA database. Protein sequences of their homologs were aligned with CLUSTALW and manually curated. Neighbor-joining trees were constructed using the PHYLIP package. Phylogenetic analysis of mitochondrial proteins For phylogenetic analysis, the set of all 38 proteins encoded in both the Marchantia polymorpha and Reclinomonas americana ( Lang et al. 1997 ) mitochondrial genomes were collected. Acanthamoeba castellanii was excluded due to high divergence and extremely long evolutionary branches. Six genes were excluded from further analysis because they were too poorly conserved for alignment and phylogenetic analysis ( nad7 , rps10 , sdh3 , sdh4 , tatC , and yejV ), leaving 32 genes for investigation: atp6 , atp9 , atpA , cob , cox1 , cox2 , cox3 , nad1 , nad2 , nad3 , nad4 , nad4L , nad5 , nad6 , nad9 , rpl16 , rpl2 , rpl5 , rpl6 , rps1 , rps11 , rps12 , rps13 , rps14 , rps19 , rps2 , rps3 , rps4 , rps7 , rps8 , yejR , and yejU . Using FASTA with the mitochondrial proteins as a query, homologs were identified from the genomes of seven α-Proteobacteria: two intracellular symbionts ( W. pipientis w Mel and Rickettsia prowazekii ) and five free-living forms ( Sinorhizobium meliloti , Agrobacterium tumefaciens , Brucella melitensis , Mesorhizobium loti , and Rhodopseudomonas sp.). Escherichia coli and Neisseria meningitidis were used as outgroups. Caulobacter crescentus was excluded from analysis because homologs of some of the 32 genes were not found in the current annotation. In the event that more than one homolog was identified per genome, the one with the greatest sequence identity to the mitochondrial query was retrieved.
Proteins were aligned using CLUSTALW ( Thompson et al. 1994 ) and concatenated. To reduce the influence of poorly aligned regions, all sites that contained a gap at any position were excluded from analysis, leaving 6,776 positions per genome for analysis. The data contained extreme amino acid bias: all sequences failed the χ 2 test at p = 0.95 for deviation from the amino acid frequency distribution assumed under either the JTT or mtREV24 models as determined with PUZZLE ( Strimmer and von Haeseler 1996 ). When the data were iteratively purged of highly variable sites using the method described ( Hansmann and Martin 2000 ), amino acid composition gradually came into better agreement with the amino acid frequency distribution assumed by the model. The longest dataset in which all sequences passed the χ 2 test at p = 0.95 consisted of the 3,100 least polymorphic sites. PROTML ( Adachi and Hasegawa 1996 ) analyses of the 3,100-site data using the JTT model detected mitochondria as sisters of the five free-living α-Proteobacteria with low (72%) support, whereas PUZZLE, using the same data, detected mitochondria as sisters of the two intracellular symbionts, also with low (85%) support. This suggested the presence of conflicting signal in the less-biased subset of the data. Therefore, protein log determinants (LogDet) were used to infer distances from the 6,776-site data, since the method can correct for amino acid bias ( Lockhart et al. 1994 ), and Neighbor-Net ( Bryant and Moulton 2003 ) was used to display the resulting matrix, because it can detect and display conflicting signal. The result (see Figure 5 A) shows both signals. In no analysis was a sister relationship between Rickettsia and mitochondria detected. For analyses of individual genes, the 63 proteins encoded in the Reclinomonas mitochondrial genome were compared with FASTA to the proteins from 49 sequenced eubacterial genomes, which included the α-Proteobacteria shown in Figure 5 , R.
conorii , and Magnetococcus MC1, one of the more divergent α-Proteobacteria. Of those proteins, 50 had sufficiently well-conserved homologs to perform phylogenetic analyses. Homologs were aligned and subjected to phylogenetic analysis with PROTML ( Adachi and Hasegawa 1996 ). Analysis of wspB sequences To compare wspB sequences from different Wolbachia strains, PCR was done on total DNA extracted from the following sources: w Ri was obtained from infected adult D. simulans , Riverside strain; w AlbB was obtained from the infected Aa23 cell line ( O'Neill et al. 1997b ); and D. immitis Wolbachia was extracted from adult worm tissue. DNA extraction and PCR were done as previously described ( Zhou et al. 1998 ) with wspB -specific primers ( wspB -F, 5′-TTTGCAAGTGAAACAGAAGG and wspB -R, 5′-GCTTTGCTGGCAAAATGG). PCR products were cloned into pGem-T vector (Promega, Madison, Wisconsin, United States) as previously described ( Zhou et al. 1998 ) and sequenced (GenBank accession numbers AJ580921–AJ580923). These sequences were compared to previously sequenced wsp genes for the same Wolbachia strains (GenBank accession numbers AF020070, AF020059, and AJ252062). The four partial wsp sequences were aligned using CLUSTALV ( Higgins et al. 1992 ) based on the amino acid translation of each gene and similarly with the wspB sequences. Genetic distances were calculated using the Kimura 2-parameter method and are reported in Table S5 . Type IV secretion system To determine whether the vir -like CDSs, as well as adjacent ORFs, were actively expressed within w Mel as two polycistronic operons, RT-PCR was used. Total RNA was isolated from infected D. melanogaster yw 67c23 adults using Trizol reagent (Invitrogen, Carlsbad, California, United States) and cDNA synthesized using SuperScript III RT (Invitrogen) using primers wspB R, WD0817R, WD0853R, and WD0852R.
RNA isolation and RT were done according to manufacturer's protocols, with the exception that suggested initial incubation of RNA template and primers at 65°C for 5 min and final heat denaturation of RT-enzyme at 70°C for 15 min were not done. PCR was done using r Taq (Takara, Kyoto, Japan), and several primer sets were used to amplify regions spanning adjacent CDSs for most of the two operons. For operon virB3-WD0853, the following primers were used: ( virB3 - virB4 )F, ( virB3 - virB4 )R, ( virB6 -WD0856)F, ( virB6 -WD0856)R, (WD0856-WD0855)F, (WD0856-WD0855)R, (WD0854-WD0853)F, (WD0854-WD0853)R. For operon virB8 - wspB , the following primers were used: ( virB8 - virB9 )F, ( virB8 - virB9 )R, ( virB9 - virB11 )F, ( virB9 - virB11 )R, ( virB11 - virD4 )F, ( virB11 - virD4 )R, ( virD4 - wspB )F, and ( virD4 - wspB )R. The coexpression of virB4 and virB6 , as well as WD0855 and WD0854, was confirmed within the putative virB3 -WD0853 operon using nested PCR with the following primers: ( virB4 - virB6 )F1, ( virB4 - virB6 )R1, ( virB4 - virB6 )F2, ( virB4 - virB6 )R2, (WD0855-WD0854)F1, (WD0855-WD0854)R1, (WD0855-WD0854)F2, and (WD0855-WD0854)R2. All ORFs within the putative virB8 - wspB operon were shown to be coexpressed and are thus considered to be a genuine operon. All products were amplified only from RT-positive reactions (see Figure 6 ). Primer sequences are given in Table S9 . Supporting Information Figure S1 Phage Trees Phylogenetic tree showing the relationship between WO-A and WO-B phage from w Mel with reported phage from w Kue and w Tai. The tree was generated from a CLUSTALW multiple sequence alignment ( Thompson et al. 1994 ) using the PROTDIST and NEIGHBOR programs of PHYLIP ( Felsenstein 1989 ). (60 KB PDF).
Figure S2 Plot of the Effective Number of Codons against GC Content at the Third Codon Position (GC3) Proteins with fewer than 100 residues are excluded from this analysis because their effective number of codons (ENc) values are unreliable. The curve shows the expected ENc values if codon usage bias is caused by GC variation alone. Colors: yellow, hypothetical; purple, mobile element; blue, others. Most of the variation in codon bias can be traced to variation in GC, indicating that mutational forces dominate w Mel codon usage. Multivariate analysis of codon usage was performed using the CODONW package (available from http://www.molbiol.ox.ac.uk/cu/codonW.html ). (289 KB PDF). Figure S3 Predicted Metabolism and Transport in w Mel Overview of the predicted metabolism (energy production and organic compounds) and transport in w Mel . Transporters are grouped by predicted substrate specificity: inorganic cations (green), inorganic anions (pink), carbohydrates (yellow), and amino acids/peptides/amines/purines and pyrimidines (red). Transporters in the drug-efflux family (labeled as “drugs”) and those of unknown specificity are colored black. Arrows indicate the direction of transport. Energy-coupling mechanisms are also shown: solutes transported by channel proteins (double-headed arrow); secondary transporters (two-arrowed lines, indicating both the solute and the coupling ion); ATP-driven transporters (ATP hydrolysis reaction); unknown energy-coupling mechanism (single arrow). Transporter predictions are based upon a phylogenetic classification of transporter proteins ( Paulsen et al. 1998 ). (167 KB PDF). Table S1 Repeats of Greater Than 50 bp in the w Mel Genome (with Coordinates) (649 KB DOC). Table S2 Inactivated Genes in the w Mel Genome (147 KB DOC).
Table S3 Ortholog Comparison with Rickettsia spp (718 KB XLS). Table S4 Putative Lineage-Specific Gene Duplications in w Mel (116 KB DOC). Table S5 Genetic Distances as Calculated for Alignments of wsp and wspB Gene Sequences from the Same Wolbachia Strains (24 KB DOC). Table S6 Putative DNA Repair and Recombination Genes in the w Mel Genome (26 KB DOC). Table S7 Phylogenetic Results for Concatenated Data of 32 Mitochondrial Proteins (34 KB DOC). Table S8 Individual Phylogenetic Results for Reclinomonas Mitochondrial DNA-Encoded Proteins (117 KB DOC). Table S9 PCR Primers (47 KB DOC). Accession Numbers The complete sequence for w Mel has been given GenBank ( http://www.ncbi.nlm.nih.gov/Genbank/ ) accession ID number AE017196 and is available through the TIGR Comprehensive Microbial Resource at http://www.tigr.org/tigr-scripts/CMR2/GenomePage3.spl?database=dmg . The GenBank accession numbers for other sequences discussed in this paper are AF020059 ( Wolbachia sp. w AlbB outer surface protein precursor wsp gene), AF020070 ( Wolbachia sp. w Ri outer surface protein precursor wsp gene), AJ252062 ( Wolbachia endosymbiont of D. immitis sp. gene for surface protein), AJ580921 ( Wolbachia endosymbiont of D. immitis partial wspB gene for Wolbachia surface protein B), AJ580922 ( Wolbachia endosymbiont of A. albopictus partial wspB gene for Wolbachia surface protein B), and AJ580923 ( Wolbachia endosymbiont of D. simulans partial wspB gene for Wolbachia surface protein B).
Altered mRNA expression of genes related to nerve cell activity in the fracture callus of older rats: A randomized, controlled, microarray study Abstract Background The time required for radiographic union following femoral fracture increases with age in both humans and rats for unknown reasons. Since abnormalities in fracture innervation will slow skeletal healing, we explored whether abnormal mRNA expression of genes related to nerve cell activity in the older rats was associated with the slowing of skeletal repair. Methods Simple, transverse, mid-shaft, femoral fractures with intramedullary rod fixation were induced in anaesthetized female Sprague-Dawley rats at 6, 26, and 52 weeks of age. At 0, 0.4, 1, 2, 4, and 6 weeks after fracture, a bony segment, one-third the length of the femur, centered on the fracture site, including the external callus, cortical bone, and marrow elements, was harvested. cRNA was prepared and hybridized to 54 Affymetrix U34A microarrays (3/age/time point). Results The mRNA levels of 62 genes related to neural function were affected by fracture. Of the total, 38 genes were altered by fracture to a similar extent at the three ages. In contrast, eight neural genes showed prolonged down-regulation in the older rats compared to the more rapid return to pre-fracture levels in younger rats. Seven genes were up-regulated by fracture more in the younger rats than in the older rats, while nine genes were up-regulated more in the older rats than in the younger. Conclusions mRNA of 24 nerve-related genes responded differently to fracture in older rats compared to young rats. This differential expression may reflect altered cell function at the fracture site that may be causally related to the slowing of fracture healing with age or may be an effect of the delayed healing. Background Bone formation to bridge the fracture gap following skeletal fracture slows with age in both humans [ 1 - 6 ] and rats [ 7 - 9 ].
While young, 6-week-old rats reach radiographic union by 4 weeks after femoral fracture, adult, 26-week-old rats require 10 weeks, and older, 52-week-old rats need in excess of 6 months [ 7 ]. Despite this increased time to radiographic union with age, there was no increase in the time of expression of Indian hedgehog or any of the bone morphogenetic proteins in the fracture callus for adult rats [ 10 ] or for older rats [ 11 , 12 ]. Radiographic union for adult and older rats occurred well after the time of expression of these skeletally active cytokines [ 10 , 11 ]. Except for markers of osteoblast activity and bone matrix formation, few genes remain up-regulated during the time period when bone forms to bridge the fracture gap [ 10 - 12 ]. These earlier studies done with RT-PCR revealed a paucity of data for genes differentially expressed by age. We had hypothesized that bone formation to bridge the fracture gap would be under a negative-feedback control system. Thus, the genes which stimulate bone formation should be up-regulated in adult or older rats to attempt to accelerate their slower progression of bony healing. This was not observed in adult [ 10 ] or older [ 11 , 12 ] rats. Either bone formation to bridge the fracture gap is not subject to negative-feedback control, or the genes up-regulated to control this bone formation are not those normally thought of as being involved in skeletal homeostasis. This suggested the need for a wider search for genes active during the fracture reparative process. In this project, mRNA gene expression was measured by DNA microarray technology at various time points after fracture for young, adult, and older rats. The goal was to identify genes whose expression following fracture was altered by age. Such genes may either show reduced expression, if the age-related slowing of healing is caused by inadequate expression levels, or they may show enhanced expression, in an attempt to stimulate some poorly responding pathway. 
Among the genes which were differentially expressed at the fracture site with age were genes related to nerve cell activity. In this study, we explored whether abnormal mRNA expression of genes related to nerve cell activity was associated with the slowing of skeletal repair in older rats. Abnormalities in the innervation of the fracture site will slow skeletal healing clinically [ 13 - 15 ] and experimentally [ 16 - 18 ]. Methods Rats Intact female Sprague-Dawley rats (Harlan Sprague-Dawley, Inc., Indianapolis, IN) were purchased at one or six months of age and housed in our vivarium in pairs until they were the proper age for experimentation. The rats were fed Teklad Rodent Diet [W] (#8604, Harlan Teklad, Madison, WI) and tap water ad libitum . The work was done in an AAALAC-accredited vivarium under protocols approved by our Institutional Animal Care and Use Committee. Surgery Intact female Sprague-Dawley rats at 6 (young), 26 (adult) or 52 (older) weeks of age, weighing 154 ± 11 g (mean ± SD), 281 ± 25 g, and 330 ± 30 g respectively, were anaesthetized with an intraperitoneal injection of ketamine and xylazine (30 mg and 5 mg/kg body weight respectively) as described earlier [ 7 , 11 ]. The left knee was shaved, scrubbed with Betadine Solution (Purdue Frederick, Stamford, CT), and draped with sterile sheets. A medial incision was made at the knee, the patella was deflected laterally and a 1.0 mm hole was drilled into the intercondylar notch. An intramedullary rod (1.0 mm diameter, stainless steel, type 304V, O-SWGX-400, Small Parts, Miami Lakes, FL) was placed retrograde into the left femur [ 11 ]. The incision was closed with wound clips. A closed simple transverse mid-diaphyseal femoral fracture was induced with a Bonnarens and Einhorn device [ 19 ]. Randomly selected rats from among those scheduled for surgery were used for 0 time no-fracture sham controls. 
Rats were euthanized at 0, 0.4, 1, 2, 4, and 6 weeks after fracture for a total of 6 time points at each of the 3 ages. Six rats per time point per age group were selected for microarray analysis (2 rats/array). Radiographs were made at fracture, at 1 week after fracture, and at euthanasia. The femora were rapidly harvested, and one third of the femoral length, centered on the fracture site, was collected. This contained the fracture callus with associated cortical bone and marrow and was frozen in liquid nitrogen and stored at -75 C. RNA Sample Preparation and Microarray Processing Samples were prepared as described in the Affymetrix GeneChip Expression Analysis Technical Manual (copyright 2001, Affymetrix, Inc., Santa Clara, CA, Rev. 1, Part number 701021, ). The sample preparation is described here in brief. Total RNA was extracted from the tissue by TRIzol (Invitrogen Life Technologies, Carlsbad, CA) with disruption of the tissue in a Brinkman Polytron homogenizer. RNA from two rats of the same age and time point was pooled for each microarray sample. Samples with 30 μg RNA were purified on RNeasy columns by Qiagen (Valencia, CA, P/N 74104) and then converted to double-stranded cDNA with a Superscript Double Stranded cDNA Synthesis Kit (Invitrogen Life Technologies, P/N 11917-010). The cDNA was then expressed as biotin-labeled cRNA by in vitro transcription (IVT) with the Enzo RNA Transcript Labeling Kit (Affymetrix, P/N 900182). Each sample was spiked with bioB, bioC, bioD, and cre (Affymetrix P/N 900299). The biotin-labeled cRNA was fragmented non-enzymatically. The fragmented cRNA was hybridized to 54 Rat U34A microarrays (Affymetrix P/N 900249) in the Affymetrix hybridization buffer for 16 hours at 45 C. The hybridized arrays were washed and stained in the Affymetrix Fluidics Station 400 to attach fluorescent labels to the biotin, followed by biotin-labeled antibody, and then a second staining with fluorescent labeling of the biotin. 
Each array was scanned twice by the Agilent GeneArray Scanner G2500A (Agilent Technologies, Palo Alto, CA). Three arrays from three independent samples (six rats) were done for each age at each time point. Data Analysis The Rat U34A GeneChip Microarray has probe sets for over 8,700 rat genes. Most probe sets have 20 different probes for the same gene on each array with 20 additional mismatch controls. The data were analyzed with Affymetrix Microarray Suite 5.0 and Affymetrix Data Mining Tool 3.0 software. Microarray Suite was used to scale the mRNA expression (signal value) of all genes to an average of 500 for each array. For each gene, the software reported a signal value and a Present/Marginal/Absent call. This latter algorithm was a statistical comparison of the variation among the several probe sets for each gene compared to the noise level and gave a call for each gene as Present, Marginal, or Absent. The program then compared the signal value of each gene in the fractured samples against the signal value of the same gene in the unfractured control sample. The difference between the two signal levels, relative to the variability between the multiple probes for each gene, yielded a probability of change due to chance alone. Genes with p less than 0.005 were judged significantly different from the same gene in the unfractured sample. This more conservative p value was employed to minimize false positive responses. The Data Mining Tool was used for cluster analysis with the Self Organizing Map (SOM) algorithm. The data were clustered on the signal values between 20 and 20,000 with the maximum/minimum ratio of at least 3.0 and the maximum – minimum difference of at least 100. One hundred clusters were specified. Nerve-related genes were identified by searches for nerve-related names in the gene descriptions of each gene on the microarray. 
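The scaling and clustering pre-filter described above can be sketched in Python. This is a minimal reconstruction of the stated criteria (per-array signals scaled to a mean of 500; signals clipped to 20–20,000; max/min ratio of at least 3; max − min difference of at least 100), not Affymetrix's actual implementation; the example expression matrix is invented.

```python
import numpy as np

def scale_to_target(signals, target=500.0):
    """Global scaling: multiply one array's signal values so their mean
    equals `target`, mimicking the Microarray Suite scaling step."""
    return signals * (target / signals.mean())

def som_prefilter(expr, low=20.0, high=20000.0, min_ratio=3.0, min_diff=100.0):
    """Keep genes meeting the stated clustering filters: signals clipped to
    [low, high], max/min ratio >= min_ratio, and max - min >= min_diff.
    `expr` is a 2-D array with rows = genes, columns = arrays."""
    clipped = np.clip(expr, low, high)
    gmax, gmin = clipped.max(axis=1), clipped.min(axis=1)
    return (gmax / gmin >= min_ratio) & (gmax - gmin >= min_diff)

# Invented three-gene, three-array example.
expr = np.array([[25.0, 30.0, 28.0],       # flat and low -> filtered out
                 [100.0, 450.0, 320.0],    # ratio 4.5, diff 350 -> kept
                 [900.0, 1000.0, 950.0]])  # ratio ~1.1 -> filtered out
scaled = scale_to_target(expr[0])
keep = som_prefilter(expr)
print(keep)
```

Genes passing such a filter would then be handed to the SOM clustering step; the filter simply removes flat or near-noise expression profiles before clustering.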
This association was confirmed by a review of the information for that gene in the NetAffx web site and in the PubMed database . GenBank accession numbers and names are shown for each gene. Each graph shows the average ± SEM of the three microarrays that were done for each time point for each age. Significant changes in gene expression were demonstrated by t test and linear regression [ 20 ]. This report conforms to the MIAME standards of MGED . A copy of the full microarray data set has been deposited in the NCBI Gene Expression Omnibus as series GSE594. Results Radiology In all young rats, bone bridged the fracture gap by four weeks after surgery. By six weeks after fracture, remodeling was beginning to obscure the fracture site (Fig. 1 ). In contrast, bone bridging in the adult rats progressed more slowly. The adult rats did have a vigorous periosteal reaction at the site of the fracture and were approaching radiographic union by six weeks after surgery (Fig. 1 ). In the older, one-year-old rats, bridging of the fracture gap by bone progressed the slowest. They had a minimal periosteal reaction at six weeks after surgery (Fig. 1 ). General results On each array, on average, 5,200 genes were scored as absent, and 3,300 as present. Of these, 1,159 were significantly up-regulated and 928 were significantly down-regulated at two weeks after fracture in the adult rats of the first series (see Additional File 1 ). Up-regulated genes included cytokines and matrix genes for both cartilage and bone. Down-regulated genes included genes related to blood cell synthesis and mitochondrial function. SOM clusters identified genes up- or down-regulated by fracture. Most genes affected by fracture followed the same time course at all three ages. These genes showed approximately the same peak expression level and regressed to baseline at about the same time point at all three ages. Among the genes affected by fracture were a number of genes associated with nerve cells. 
These were selected for more intense analysis. Similar responses at all three ages Up-regulated nerve-related genes are shown in Table 1 . Two examples are shown in the upper two graphs in Figure 2 . Both of these genes were significantly up-regulated from the 0 time control (P < 0.001 by t test for 9 samples (3 ages × 3 replicates) of 0 time vs . 0.4 week (Fig. 2 , top graph) or 0 time vs . 2 week (Fig. 2 , middle graph)). Other nerve-related genes were down-regulated by fracture at all three ages (Table 2 ). These regained near normal activity by six weeks after fracture. An example is shown in the bottom graph of Figure 2 . This gene (TAG-1) had a significant down-regulation after fracture (P < 0.001 by t test for 9 samples of 0 time vs . 0.4 week), followed by a significant increase at 6 weeks after fracture compared to 0.4 week after fracture (P < 0.001 by t test for 9 samples). Defects in the older rats SOM cluster analysis identified three types of defects in the older rats. In the first type, a number of genes were down-regulated by fracture at all three ages. However, while genes in the younger rats were returning to pre-fracture expression levels by six weeks after fracture, there was less recovery in the older rats. These genes are shown in Table 3 , and three examples of these genes are shown in Figure 3 . All three of these genes had significantly decreased mRNA expression levels at 1 week after fracture compared to the 0 time control (P < 0.001 by t test for 9 samples (3 ages × 3 replicates)). At 4 and 6 weeks after fracture, the young rats showed faster recovery in mRNA expression than did the older rats for the three genes in Fig. 3 (P < 0.01 by t tests for 4 and 6 week young vs . 4 and 6 week old). In the second type of defect, other genes were up-regulated by fracture, but the response was weaker in the older rats. These genes are shown in Table 4 . Three examples are shown in Figure 4 . 
The broad peaks of the genes in Figure 4 permitted the t test to demonstrate a significantly higher expression level in the young rats at 1 and 2 weeks after fracture in comparison to the same time points of older rats. These comparisons for the three genes in Figure 4 were significant at P < 0.001 (top graph, AF034963), P < 0.02 (middle graph, AB005541) and P < 0.01 (bottom graph, U09357) for 6 samples per age group (2 time points pooled for 3 replicates). In the third type of defect, genes were also up-regulated by fracture. However, the response was stronger in the older rats than in the younger rats. These genes are shown in Table 5 , and three examples are shown in Figure 5 . The peak values for these three genes significantly increased with age by linear regression (P < 0.001 (top graph, AF030089), P = 0.01 (middle graph, M88469), and P < 0.001 (bottom graph, X89963) for 9 data points (3 ages × 3 replicates)). Present/Marginal/Absent calls For each gene for each array, the Microarray Suite software reported a statistical decision as to whether the mRNA was Present, Marginal, or Absent. We have reviewed these calls for the genes shown in Figures 2 , 3 , 4 , 5 . For Figure 2 , the Present/Marginal/Absent calls were: Fig. 2 Top: 53/1/0; Fig. 2 Middle: 39/2/13; and Fig. 2 Bottom: 0/0/54. For Figure 3 , the calls were: Fig. 3 Top: 45/2/7; Fig. 3 Middle: 7/0/47; and Fig. 3 Bottom: 32/6/16. For Figure 4 , the calls were: Fig. 4 Top: 0/0/54; Fig. 4 Middle: 0/0/54; and Fig. 4 Bottom: 51/0/3. For Figure 5 , the calls were: Fig. 5 Top: 45/1/8; Fig. 5 Middle: 52/0/2; and Fig. 5 Bottom: 54/0/0. Discussion In this study, as in our earlier work [ 7 , 10 - 12 ], the time required to reach radiographic union after femoral fracture increased with age in the female rat. This slowing of fracture repair with age is associated with changes in the mRNA expression of specific genes within the healing fracture site [ 10 - 12 ]. 
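The pooled comparison described above (two time points × three replicate arrays = six samples per age group) amounts to an ordinary two-sample t test. A sketch with invented signal values, assuming SciPy is available:

```python
from scipy import stats

# Invented signal values pooled across the 1- and 2-week time points
# (2 time points x 3 replicate arrays = 6 samples per age group);
# these numbers are illustrative, not the published data.
young = [820.0, 760.0, 905.0, 840.0, 790.0, 870.0]
older = [430.0, 510.0, 465.0, 480.0, 445.0, 500.0]

t, p = stats.ttest_ind(young, older)
print(f"t = {t:.2f}, p = {p:.4g}")
```

Run per gene, a comparison of this kind yields P values analogous to those quoted for the genes in Figure 4.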
To study this further, microarray technology was used to identify additional genes whose mRNA expression was affected by skeletal fracture. More than two-thirds of the detectable genes on the rat U34A microarray have a change in mRNA expression level following fracture [ 21 ]. Most of these genes were not known to participate in the healing process of bone before the advent of microarray technology [ 21 , 22 ]. This reflects changes in both the types of cells at the fracture site as well as changes in the activity of the existing cells. Among the cells affected by fracture are nerve fibers. Protein and mRNA of genes related to neuronal functioning are found in intact bone and in the fracture callus [ 23 - 29 ]. Since proper innervation of the fracture site is needed for fracture repair clinically [ 13 - 15 ] and experimentally [ 16 - 18 ], this led to the hypothesis that the age-related slowing of fracture repair may be related to the abnormal nerve cell activity at the fracture site. To evaluate this hypothesis, nerve-related genes were studied from among the genes present on the Affymetrix Rat U34A microarray. Genes were identified for which the mRNA response to femoral fracture was changed in the older rats compared to the young rats. Three types of change with age were found: 1. The mRNA expression levels of the genes shown in Table 3 and Figure 3 were decreased by fracture. While gene expression in the young rats was approaching pre-fracture levels by six weeks after fracture, gene expression showed minimal return to normal in older rats. Genes in this category were all related to signaling molecules or to signal receptors (references shown in Table 3 ). 2. Other nerve-related genes had strong up-regulation after fracture in young rats but only mild up-regulation in older rats. These are shown in Table 4 and Figure 4 . 
This partial loss of function with age was observed in genes associated with nerve cell differentiation or cell cycle or genes related to synaptic structure (references cited in Table 4 ). 3. A third set of genes was increased in mRNA expression by fracture, but the increase was greater in the older rats. These are shown in Table 5 and Figure 5 . Many of these genes were related to cell adhesion or to cell signal or signal transduction (references cited in Table 5 ). All three classes of genes showed altered expression in the older rats compared to young rats. We hypothesize that bone fracture may physically disrupt nerve fibers in bone. A sub-population of these skeletal nerve fibers may regrow into the fracture site or regain function at a slower rate in older rats. This may account for the failure to recover from low mRNA values for the first group (prolonged down-regulation) or the failure to up-regulate mRNA expression adequately after fracture in the older rats in the second group (diminished up-regulation). Other genes in the third group with increased levels of mRNA after fracture in the older rats may represent attempts to stimulate nerve regrowth or other processes that are not responding. This may represent negative-feedback-induced up-regulation caused by effector cell resistance. Taken together, these changes in nerve cell function with age may contribute to the slowing of fracture repair in older rats. It must be pointed out that the associations noted here do not necessarily reflect cause and effect. It is also possible that the delayed re-innervation of the fracture site is an effect of the delayed healing in the older rats and not a cause of the delayed healing. Experimental studies have been done to detect the role of innervation on fracture healing. Studies of sectioning the sciatic nerve in concert with tibial fracture have been reported to speed fracture healing [ 30 - 33 ]. 
However, sectioning both femoral and sciatic nerves inhibits fracture healing [ 18 ]. Aro et al . [ 16 ] have reported mechanoreceptors (Pacinian corpuscles) in the periostium of the rat fibula, which, if removed, lead to non-union [ 17 ]. Direct application of nerve growth factor to the fracture site increases healing in the rat rib [ 34 ]. In humans, abnormal bone healing is also associated with lack of nerve activity at the fracture site. Nagano et al . [ 13 ] have noted scaphoid nonunion in the wrists of patients with neuroarthropathy from a long-standing nerve palsy. Santavirta et al . [ 14 ] have found a lack of peripheral innervation at the fracture site of noninfected fractures with delayed union or nonunion of diaphyseal bones. Nordstrom et al . [ 15 ] have found a lack of stromal innervation associated with delayed union or pseudoarthrosis in spondylolysis. Humans [ 1 - 6 ] show a slowing of fracture healing with increasing age as do rats [ 7 - 9 ]. The cause of the slowing of fracture healing with age is not well understood. The femora of young rats regain normal biomechanical properties by 4 weeks after fracture, while adults take 12 weeks, and older rats require in excess of 6 months [ 7 ]. This model presents an opportunity to elucidate novel genes important to this healing process. The slowing could reflect a loss of function as some processes essential for the rapid healing of fractures in young animals are inhibited with age. Alternatively, the slowing of skeletal repair with age may be caused by partial resistance of the healing process to stimulation in adult or older individuals. Such resistance should result in enhanced stimulation by regulatory systems to attempt to evoke a healing response. Both patterns were seen among the genes studied in this report. These genes are candidates for further study. These changes with age are not limited to genes related to neuronal activity. 
We have also noted similar changes in genes related to mitochondrial activity [ 35 ]. It is likely that the age-related changes in fracture repair are caused by failure of several metabolic pathways. Methods, such as DNA microarrays, which sample many different biological pathways will be useful in defining these novel, multi-faceted defects. The specificity of these changes is seen in the majority of the nerve-related genes for which the expression pattern following fracture was unaffected by age. These transcripts had similar increases or decreases following fracture in the young, adult, and older rats. These uniform responses suggest that most metabolic patterns were unaffected by age. Nerve-related genes similarly up-regulated by femoral fracture at all three ages were broadly related to differentiation and growth of nerve cells, to known up-regulation following nerve injury, or to association with apoptosis (references cited in Table 1 ). Some of these genes were slower to return to baseline values in older rats, such as galanin and TAG-1. In contrast, nerve-related genes similarly down-regulated by femoral fracture at all three ages were broadly related to the nerve growth cone or to synaptic signaling pathways (references cited in Table 2 ). In this study gene expression was measured by quantification of the mRNA level for each gene with microarray technology. It must be kept in mind that there are other control systems which influence the protein synthetic rate and also protein degradation. Protein synthesis will be low in the absence of mRNA for that gene, but elevated mRNA levels are not a guarantee that protein levels will also be elevated for that gene. Changes noted at the mRNA level will need to be confirmed at the protein and structural levels. Assignment of the genes studied herein as nerve-related is made on the basis of currently available information. Other cell types in the fracture callus may also express these genes. 
Histological studies will permit the association of these genes with specific cell types within the fracture callus. These experiments are now in progress. We have compared mRNA gene expression by microarray to that measured by reverse transcription – polymerase chain reaction (RT-PCR)[ 36 ]. Good correlation was found between the two methods if the transcripts were judged mostly present, the signal level did not approach the upper limit of the detector, and the probe sets or PCR primers were from the same region of the gene [ 36 ]. Some other genes, even though most samples were judged absent, also gave good correlation between the two methods. These latter genes were at the upper range of the absent calls and had good precision between samples [ 36 ]. The genes reported herein have the marked variation in mRNA levels that have been reported previously in fracture samples [ 10 , 11 ] with large changes in expression after fracture which return to the prefracture levels as healing progresses. The finding here of moderate signal levels, good precision among the three samples for each time point at each age, and a strong response to fracture indicate the ability of this technology to report changes in mRNA levels for these genes. Conclusions In summary, most genes respond to bone fracture with altered mRNA gene expression, including genes related to neuronal functioning. However, a number of these genes responded to fracture differently in older rats than in young rats. Such differential expression with age may reflect altered cell functioning at the fracture site that may be related to the slowing of fracture healing in older rats. Competing interests None declared. Authors' contributions MM, WE, and RM participated in the surgery and the sample collection. MM and WE prepared and hybridized the samples and analyzed the data. RM prepared the manuscript. All authors read and approved the final manuscript. 
Pre-publication history The pre-publication history for this paper can be accessed here: Supplementary Material Additional File 1. Response of adult rats at two weeks after fracture. This spreadsheet is a listing of all genes altered by fracture in the first adult rat sample at two weeks after fracture compared to the first adult no-fracture sample. The data are sorted on the probability value for the comparison of each gene between the two arrays. For each array, the signal (mRNA transcript level), detection (present, absent, or marginal) and detection p-value are given. In addition, for the two-week sample, the change (increase, marginal increase, marginal decrease or decrease) and change p-value are given for the comparison of the two-week sample to the no-fracture sample. These two arrays are archived in the GEO repository as samples GSM9028 (no-fracture sample) and GSM9031 (2-week sample). | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC512295.xml |
523855 | Gene family evolution: an in-depth theoretical and simulation analysis of non-linear birth-death-innovation models | Background The size distribution of gene families in a broad range of genomes is well approximated by a generalized Pareto function. Evolution of ensembles of gene families can be described with Birth, Death, and Innovation Models (BDIMs). Analysis of the properties of different versions of BDIMs has the potential of revealing important features of genome evolution. Results In this work, we extend our previous analysis of stochastic BDIMs. In addition to the previously examined rational BDIMs, we introduce potentially more realistic logistic BDIMs, in which birth/death rates are limited for the largest families, and show that their properties are similar to those of models that include no such limitation. We show that the mean time required for the formation of the largest gene families detected in eukaryotic genomes is limited by the mean number of duplications per gene and does not increase indefinitely with the model degree. Instead, this time reaches a minimum value, which corresponds to a non-linear rational BDIM with the degree of approximately 2.7. Even for this BDIM, the mean time of the largest family formation is orders of magnitude greater than any realistic estimates based on the timescale of life's evolution. We employed the embedding chains technique to estimate the expected number of elementary evolutionary events (gene duplications and deletions) preceding the formation of gene families of the observed size and found that the mean number of events exceeds the family size by orders of magnitude, suggesting a highly dynamic process of genome evolution. The variance of the time required for the formation of the largest families was found to be extremely large, with the coefficient of variation >> 1. 
This indicates that some gene families might grow much faster than the mean rate such that the minimal time required for family formation is more relevant for a realistic representation of genome evolution than the mean time. We determined this minimal time using Monte Carlo simulations of family growth from an ensemble of simultaneously evolving singletons. In these simulations, the time elapsed before the formation of the largest family was much shorter than the estimated mean time and was compatible with the timescale of evolution of eukaryotes. Conclusions The analysis of stochastic BDIMs presented here shows that non-linear versions of such models can well approximate not only the size distribution of gene families but also the dynamics of their formation during genome evolution. The fact that only higher degree BDIMs are compatible with the observed characteristics of genome evolution suggests that the growth of gene families is self-accelerating, which might reflect differential selective pressure acting on different genes. | Background An extremely broad variety of phenomena in physics, biology, and the social sphere is described by power law distributions. The power laws apply to the distribution of the number of links between documents in the Internet, the population of towns, the number of species that become extinct within a year, the number of sexual and other contacts between people, and numerous other quantities [ 1 - 4 ]. In the field of genomics, the "dominance by a selected few" [ 5 ] encapsulated in the power laws applies to the distribution of the number of transcripts per gene, the number of interactions per protein, the number of genes in coexpressed gene sets, the number of genes or pseudogenes in paralogous families, the number of connections per node in metabolic networks, and other quantities that can be obtained by genome analysis [ 5 - 9 ]. 
Mathematically, these distributions are described by the formula: P(i) ≈ ci^(-γ), where P(i) is the frequency of nodes with exactly i connections or sets with exactly i members, γ is a parameter which typically assumes values between 1 and 3, and c is a normalization constant. Obviously, in double-logarithmic coordinates, the plot of P as a function of i is close to a straight line with a negative slope. Recently, it has been shown that the distributions of several genome-related quantities are best described by the so-called generalized Pareto function: P(i) = c(i + a)^(-γ), where γ > 0, a are parameters [ 10 - 13 ]. At large i ( i >> a ), this distribution is indistinguishable from a power law, but at small i , it deviates substantially, with the magnitude of the deviation depending on a . Power law distributions and the associated scale-free networks are compatible with the intuitively plausible mechanism of evolution by preferential attachment although other modes of evolution are also possible [ 9 , 14 ]. Under preferential attachment, a network or a mathematically analogous object, such as an ensemble of gene families, grows via attachment of new nodes to the pre-existing ones with a probability that is proportional to the degree (number of connections) of the latter. However, preferential attachment or other general evolutionary principles associated with power law type distributions and scale-free phenomena do not actually explain the emergence of these phenomena in biologically meaningful terms. A biological explanation involves, at a minimum, identifying the elementary events underlying the evolutionary process and the simplest models of evolution that include these events and are compatible with the observations. Under this logic, families of paralogous genes represent a perfect object for evolutionary modeling. Indeed, for these families, elementary evolutionary processes are defined naturally. 
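The relationship between the two distributions can be checked numerically: the ratio of the generalized Pareto value to the pure power law is (i/(i + a))^γ, which approaches 1 for i >> a and falls well below 1 for small i. The parameter values here are arbitrary illustrations.

```python
def power_law(i, c, gamma):
    return c * i ** (-gamma)

def generalized_pareto(i, c, a, gamma):
    return c * (i + a) ** (-gamma)

c, a, gamma = 1.0, 5.0, 2.0  # arbitrary illustrative parameters
ratios = {i: generalized_pareto(i, c, a, gamma) / power_law(i, c, gamma)
          for i in (1, 10, 1000)}
for i, r in ratios.items():
    print(f"i = {i:5d}  Pareto/power-law ratio = {r:.3f}")
```

On a log-log plot the two curves are parallel straight lines for large i and diverge only in the small-family region, which is why the Pareto form fits the observed family-size data better at small sizes.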
By definition, paralogous families evolve by gene duplication. It has been long suspected and, with the advent of genomics, established beyond reasonable doubt that genome evolution proceeds largely by duplication of genes or portions thereof, and even long genomic segments or entire genomes [ 15 - 20 ]. All sequenced genomes contain numerous paralogous genes, and in more complex genomes, the majority of genes have at least one paralog [ 21 , 22 ]. Duplication is followed by mutational diversification and gradually leads to functional differentiation of the paralogs. It is thought that such differentiation occurs via the routes of neofunctionalization (emergence, in one of the paralogs, of a new function non-existent in the ancestral gene) [ 16 ] and, probably most often, subfunctionalization, i.e., partitioning of subfunctions of the ancestral gene among the paralogs [ 23 , 24 ]. Hence, duplication obviously is the first elementary process of genome evolution. Genomes and gene families not only grow but often shrink or, probably most of the time, persist in equilibrium. Therefore, duplication must be counter-balanced by the opposite elementary process, gene loss . Again, comparative genomics has shown that gene loss occurs in all species and seems to be extensive in certain lineages, particularly in parasites [ 25 - 27 ]. Finally, genes new to a given lineage may emerge either as a result of a dramatic change after duplication obliterating all "memories" of a gene's origin, or via horizontal gene transfer, or by evolution of a protein-coding gene from a non-coding sequence (rare as this latter process might be). Collectively, the contribution of these processes to genome evolution may be termed innovation . Gene duplication, gene loss, and innovation seem to comprise a reasonable minimal set of elementary events for modeling genome evolution. The only potential major addition could be rearrangement of the gene structure whereby genes accrete or lose domains. 
However, at least for first approximation modeling, these changes could be covered either by duplication, if they do not yield new genes without detectable relationships to pre-existing families, or by innovation if they do. We should further note that evolutionary analysis of paralogous gene families can be reasonably viewed as a study of the evolution of genomes themselves if all genes are viewed as members of paralogous families, ranging in size (number of members) from 1 to N (the size of the largest family). Of course, one must keep in mind that describing genome evolution in terms of gene duplication, loss, and innovation represents a high level of abstraction, whereby a gene is considered an atomic unit of evolution, and mutation processes occurring within a gene are ignored. However, numerous comparative-genomic studies have shown the utility of the gene-level abstraction both for systematic prediction of the functions of uncharacterized genes using the patterns of their distribution in diverse genomes [ 28 - 31 ] and for understanding general evolutionary trends. A striking recent example of the latter type of achievement is the demonstration that different functional categories of genes scale differently with genome size, with the steepest ascent of regulatory genes offering a plausible explanation for the observed limits of genome size in prokaryotes [ 32 ]. A natural framework for modeling evolution of gene families is a birth-and-death process, a concept well explored in many physical and chemical contexts [ 33 ]. Duplication constitutes a gene birth, and gene loss is a death event; innovation also can be readily incorporated in this context. The birth-and-death approach has been applied to modeling the evolution of paralogous genome family sizes [ 6 , 12 , 34 ], the distribution of folds and families in the entire protein universe [ 35 ], and protein-protein interaction networks [ 36 , 37 ]. 
For over a century since the publication of Darwin's seminal work [ 38 ], biologists believed that evolution at all levels is largely driven by natural selection [ 39 ]. However, the advent of molecular evolution shifted the perspective by demonstrating, largely through the work of Kimura and his school, that many, if not most, of the fixed nucleotide substitutions are effectively neutral [ 40 ]. Recent comparative analyses of gene expression led to the expansion of the neutral evolution concept beyond the genome sequence, at least to the level of the transcriptome [ 41 , 42 ]. Perhaps the principal importance of the neutral theory is that it leads to a change of the prevailing null hypothesis of evolutionary biology: neutrality should be taken as the null hypothesis, and selection should be invoked only when this hypothesis can be rejected. Birth and death models naturally fit this paradigm because they do not include the notion of selection (at least not explicitly). It is therefore of considerable interest to determine whether or not simple models of this class can be rejected as the explanation for various observed features of genomes. In the previous work [ 12 ], we examined in detail simple deterministic models of genome evolution, which we dubbed BDIMs, after birth (duplication), death (elimination), and innovation ( de novo emergence or acquisition via horizontal gene transfer) models. We showed that the power law asymptotic of the size distribution of gene families appears if, and only if, birth and death rates of domains in families of sufficiently large size are balanced (asymptotically equal up to the second order) and that any power asymptotic with γ ≠ 1 appears only if the per gene birth/death rates depend on the size of the gene family. We showed that the simplest model that adequately approximates the empirical data on gene (domain) family size distributions is the linear 2nd order balanced BDIM. 
Subsequently, we expanded the BDIM framework by introducing stochastic BDIMs, which account not only for the stationary state of the gene ensemble but also for the characteristics of the evolution of the system, such as the probability of the formation of a family of a given size before extinction and the mean times of formation and extinction of a family of a given size [ 43 ]. We first investigated these issues for the linear 2nd-order balanced stochastic BDIM. Given the published estimates of the rates of gene duplication and loss [ 24 ], we found that this version of the BDIM, which gives a good approximation of the stationary distributions of family sizes for different genomes, predicts completely unrealistic mean times for reaching the observed sizes of the largest domain families. In computer simulations with a large ensemble of genes, even the minimum time required for the formation of the largest family was shown to be unrealistically long. Thus, the linear BDIM is incompatible with the estimates of the rate of genome size growth derived from the empirical data. Therefore, we performed a preliminary examination of non-linear, higher-degree BDIMs and showed that the rate of genome size growth increases with the degree of the model, rendering non-linear BDIMs more realistic models of genome evolution [ 43 ]. Here, we present a detailed analysis of the properties of different non-linear stochastic BDIMs, including polynomial, rational, and logistic ones, which were obtained by appropriate transformations of the original linear model. These models generate the same stationary family size distribution, but the stochastic properties of the higher-order models are dramatically different from those of the linear BDIM. The mean number of elementary events (duplications and deletions) required for the formation of the largest family decreases monotonically as the degree of the model increases.
By contrast, the mean time of formation of a gene family of a given size under a fixed average duplication rate goes through a minimum depending on the model degree; typically, the model degree corresponding to this minimum is between 2 and 3. However, even with this optimal degree, the mean times of formation of the largest families in different genomes are unrealistically long. The times of formation and extinction of gene families are random variables with unknown distributions. Therefore, it was important to determine the variance of these times and of the number of elementary events preceding the formation and extinction of the largest families. We found that the coefficients of variation were very large, such that the extreme values of the formation times for the largest family could differ from the mean time by at least two orders of magnitude. Thus, for assessing the feasibility of the formation of the largest families under a given model, the relevant value is not the mean but the minimal time of family formation over the entire ensemble of genes. Using Monte Carlo simulations, we show that the minimal time required for the formation of families of the expected size under BDIMs of degrees between 2 and 3 is compatible with the timescale of genome evolution.

Results and discussion

1. Definitions and empirical data

The basic BDIM definitions and assumptions

We treat a genome as a "bag" of genes (or, more precisely, portions of genes) encoding protein domains (or simply domains for brevity; see [ 12 ] for details). Domains are treated as independently evolving units, disregarding the co-occurrence of domains in multidomain proteins. Each domain is considered to be a member of a family, which may have one or more members. In this work, we refer interchangeably to domain families or gene families.
Three types of elementary events are postulated: i) birth, which yields a new member in the same domain family as a result of gene duplication; ii) death, i.e., inactivation and/or deletion of a domain; and iii) innovation, which generates a new, single-member family. Innovation may occur via domain evolution from a non-coding sequence or a non-globular protein sequence, via horizontal gene transfer from another species, or via radical change of a domain after duplication. The rates of elementary events are defined as the probabilities of the respective events during an infinitesimally short time interval [ 44 ] and are postulated to be independent of time (all analyzed models are homogeneous) and of the structure, biological function, and other features of individual domain families. Clearly, these assumptions are simplifications made in order to avoid prohibitively complex models; the justification is that, over large (genome-wide) ensembles of families and long time intervals, the existing non-homogeneities are likely to cancel out, making homogeneous models realistic. It may be useful to emphasize that homogeneity of the models does not imply constancy of the number of events during any finite time interval, which is a random variable. The data on the sizes of domain families in sequenced genomes were obtained as described previously [ 12 ]. Briefly, the domains were identified by comparing the CDD library of position-specific scoring matrices (PSSMs) for domains, extracted from the Pfam and SMART databases, to the protein sequences from completely sequenced eukaryotic and prokaryotic genomes using the RPS-BLAST program [ 45 ]. In a finite genome, the maximum number of domains in a family cannot exceed the total number of domains and, in reality, is probably much smaller.
Let N be the maximum possible number of domain family members (this limit is introduced for technical reasons; however, this should not be perceived as a biologically unrealistic assumption because N can be made extremely large, e.g., to exceed the number of genes in the largest known genome by several orders of magnitude; furthermore, almost all of the results below are valid with N = ∞ under certain well-defined conditions, which ensure the existence of the ergodic distribution of the birth-and-death process). We also consider virtual, "empty" families that consist of 0 domains. In the stochastic BDIMs, newborn domains are extracted from this class and dead domains return to it. Originally, we examined exclusively the deterministic version of the BDIMs [ 12 ]. Introduction of the 0 class "closes" the model and allows us to transform it into a Markov process, which provides for the possibility to explore the stochastic properties of the system [ 43 ]. In these stochastic models, innovation is not introduced explicitly as it was in the deterministic models, but is implied in the emergence of domains from the 0 class. Let p_i(t) be the frequency of a domain family of size i. Then the p_i(t) satisfy a system of forward Kolmogorov equations for the birth-and-death process (e.g., [ 44 , 46 ]):

dp_0(t)/dt = -λ_0 p_0(t) + δ_1 p_1(t),

dp_i(t)/dt = λ_{i-1} p_{i-1}(t) - (λ_i + δ_i) p_i(t) + δ_{i+1} p_{i+1}(t) for 0 < i < N,   (1.1)

dp_N(t)/dt = λ_{N-1} p_{N-1}(t) - δ_N p_N(t).

Mathematically, (1.1) defines the state probabilities of a birth-and-death process with the finite set of states {0, 1, ..., N} and reflecting boundaries at 0 and N. The evolution of individual trajectories of the birth-and-death process X(t), whose state probabilities satisfy the system (1.1), can be described as follows. At the starting time, the system is situated in some initial state x_0.
The time axis {t ≥ 0} can be divided into intervals [0, τ_1), [τ_1, τ_2), [τ_2, τ_3), ... such that X(t) is constant on each interval. If, at the moment τ_n, the system was situated at the point x_n = i, then, at the moment τ_{n+1}, the system moves either into the state i+1 with the probability β_i = λ_i/(λ_i + δ_i) or into the state i-1 with the probability μ_i = δ_i/(λ_i + δ_i). The sojourn time t_i = τ_{n+1} - τ_n between the arrival at the point x_n = i and the exit from this point is a random variable independent of the previous history of the system and is distributed exponentially: P{t_i ≥ x} = exp(-(λ_i + δ_i)x). Note that the random variables t_i are independent, and the mean sojourn time, E(t_i), in the state i is E(t_i) = 1/(λ_i + δ_i). Process (1.1) has a unique stationary ergodic distribution p_0, ..., p_N defined by the equalities dp_i(t)/dt = 0 for 0 ≤ i ≤ N. Let J(i, t) = δ_i p_i(t) - λ_{i-1} p_{i-1}(t) be the current through the state i at time t, and J(i) = δ_i p_i - λ_{i-1} p_{i-1} be the current in the stationary state. Then the equation for the stationary distribution can be written as J(i+1) - J(i) = 0. As the system is closed, J(0) = 0 and hence J(i) = 0 for all i, such that p_i/p_{i-1} = λ_{i-1}/δ_i. We will also consider the variant of this model with states {1, ..., N} and reflecting boundaries at states 1 and N:

dp_1(t)/dt = -λ_1 p_1(t) + δ_2 p_2(t),

dp_i(t)/dt = λ_{i-1} p_{i-1}(t) - (λ_i + δ_i) p_i(t) + δ_{i+1} p_{i+1}(t) for 1 < i < N,   (1.3)

dp_N(t)/dt = λ_{N-1} p_{N-1}(t) - δ_N p_N(t).

This model describes the evolution of the size of a domain family that includes an indispensable (essential) gene and is not allowed to go extinct. The ergodic distribution (1.4) of model (1.3) is obtained analogously. The ergodic distribution (1.2) (or (1.4)) is globally stable and is approached exponentially with respect to time from any initial state.
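The stationary relation p_i/p_{i-1} = λ_{i-1}/δ_i makes the ergodic distribution straightforward to compute numerically. A minimal sketch (with illustrative parameters a and b, not fitted to any genome) for the linear 2nd-order balanced BDIM, checking the power-law tail p_i ~ i^{-γ} with γ = 1 + b - a:

```python
import math

def stationary_dist(lam, dlt, N):
    """Stationary distribution of a finite birth-death chain on {1..N}
    with reflecting boundaries, from the detailed-balance relation
    p_i / p_{i-1} = lambda_{i-1} / delta_i."""
    p = [1.0]
    for i in range(2, N + 1):
        p.append(p[-1] * lam(i - 1) / dlt(i))
    s = sum(p)
    return [x / s for x in p]

# Linear, 2nd-order balanced BDIM: lambda_i = i + a, delta_i = i + b
# (illustrative parameters, not fitted to any genome)
a, b, N = 1.0, 3.0, 2000
p = stationary_dist(lambda i: i + a, lambda i: i + b, N)

# Estimate the tail exponent of p_i ~ i^(-gamma); here gamma = 1 + b - a = 3
gamma_est = math.log(p[99] / p[999]) / math.log(1000 / 100)
print(round(gamma_est, 2))  # close to 3, up to O(1/i) corrections
```

For these rates the product telescopes to p_i ∝ 1/((i+1)(i+2)(i+3)), so the estimated exponent approaches 3 as the fitting window moves to larger i.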
The asymptotic of the ergodic distribution is completely defined by the asymptotic behavior of the function χ(i) ≡ λ_{i-1}/δ_i. Let us suppose that, for large i, the following expansion is valid:

χ(i) ≡ λ_{i-1}/δ_i = i^s θ(1 - γ/i + O(1/i^2))   (1.5)

Then, the asymptotic behavior of the stationary distribution of model (1.1) is completely defined by three parameters: s, θ, and γ [ 12 ]. In particular, if the birth-and-death process is 1st-order balanced, i.e., if, by definition, s = 0 in (1.5), then, asymptotically, p_i ~ θ^i i^{-γ}. If the process is 2nd-order balanced, i.e., s = 0 and θ = 1, then p_i ~ i^{-γ}. The complete description of all possible asymptotics of the ergodic distributions of model (1.1) under condition (1.5) is given in the Mathematical Appendix, Theorem 1 (hereinafter, all references of the form (A.m.n) refer to the corresponding formula in the Mathematical Appendix [see Additional file 1 ]). It asserts that a large class of models, namely the 2nd-order balanced BDIMs, provides any given power asymptotic of the stationary frequency distributions of family sizes.

2. Classification of BDIMs

Linear BDIM

The simplest model that shows the generalized Pareto distribution is the linear BDIM with

λ_i = λ(i + a), δ_i = δ(i + b) for i > 0, where λ, δ, a, and b are constants.   (2.1)

The equilibrium distribution of domain family sizes is obtained from the general relation p_i/p_{i-1} = λ_{i-1}/δ_i; so, if λ = δ (θ = 1), the resulting 2nd-order balanced linear BDIM has a power asymptotic with γ = 1 + b - a.

Polynomial BDIM

Informally, polynomial BDIMs can be introduced as follows. Under the linear BDIM, the dependence of the birth and death rates on family size is very weak; although each gene "senses" the size of the family (as reflected in the non-zero parameters a and b), this dependence cannot be interpreted as a specific form of interaction between family members.
If such interactions are postulated, λ_i ~ P_n(i) and/or δ_i ~ Q_m(i), where P_n(i) and Q_m(i) are polynomials in i of the n-th and m-th degrees. The ergodic distribution of the stochastic polynomial BDIM of the form (1.1) and (1.3) is asymptotically the same as that of the originally described deterministic polynomial BDIM [ 12 ]; see Appendix (A.1.4), (A.1.5) [see Additional file 1 ] and Proposition 2 for details. We show here that non-linear polynomial 2nd-order balanced BDIMs predict evolution rates that are dramatically greater than those for the linear BDIM. As an example, let us consider the quadratic BDIM in more detail. It takes into account the simplest, pairwise interaction between family members, which leads to λ_i ~ i^2 and/or δ_i ~ i^2, i.e., one or both rates are polynomials in i of the second degree. If the polynomial degrees of the birth and death rates are different (e.g., λ_i ~ i and δ_i ~ i^2), the corresponding BDIM is non-balanced, and the equilibrium frequencies have no power asymptotics. Thus, let

λ_i = λ(i^2 + r_1 i + r_2), δ_i = δ(i^2 + q_1 i + q_2),   (2.3)

where λ, δ, r_k, q_k, k = 1, 2 are constants (such that λ_i, δ_i are positive for all i); equivalently, λ_i = λ(i + a)(i + a_2), δ_i = δ(i + b)(i + b_2). Then r_1 = a + a_2, q_1 = b + b_2, and χ(i) = λ_{i-1}/δ_i = θ(1 + (r_1 - q_1 - 2)/i + O(1/i^2)), where θ = λ/δ. The quadratic BDIM has equilibrium sizes of domain families (see A.1.6)

p_i ≈ c_2 p_0 (λ_0/λ) θ^i i^{ρ-2}, where ρ = r_1 - q_1 and c_2 = p_0 [Γ(1 + b) Γ(1 + b_2)] / [Γ(1 + a) Γ(1 + a_2)].

Thus, if the quadratic BDIM is 2nd-order balanced, then p_i ~ i^{ρ-2}. Note that the asymptotic behavior of the frequencies p_i does not depend on the free coefficients r_2 and q_2 in (2.3), but only on θ and r_1 - q_1, although the constant c_2 may depend on the free coefficients r_2, q_2.
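The p_i ~ i^{ρ-2} tail of the balanced quadratic BDIM can be checked numerically in the same way as for the linear model. A sketch with arbitrary illustrative coefficients (ρ = -1, so the expected tail exponent is ρ - 2 = -3):

```python
import math

def stationary(lam, dlt, N):
    # p_i / p_{i-1} = lambda_{i-1} / delta_i, normalized over {1..N}
    p = [1.0]
    for i in range(2, N + 1):
        p.append(p[-1] * lam(i - 1) / dlt(i))
    s = sum(p)
    return [x / s for x in p]

# Balanced quadratic BDIM (lambda = delta): rates (i+a)(i+a2) and (i+b)(i+b2)
# Illustrative coefficients, not fitted to any genome.
a, a2, b, b2, N = 1.0, 2.0, 1.5, 2.5, 4000
p = stationary(lambda i: (i + a) * (i + a2), lambda i: (i + b) * (i + b2), N)

rho = (a + a2) - (b + b2)   # rho = r1 - q1 = -1
# Estimate the tail exponent of p_i ~ i^(rho - 2) between i = 100 and i = 1000
slope = math.log(p[999] / p[99]) / math.log(1000 / 100)
print(rho - 2, round(slope, 2))
```

As the text states, only the difference r_1 - q_1 matters for the exponent; changing a_2 and b_2 while keeping a + a_2 and b + b_2 fixed leaves the fitted slope unchanged.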
Rational BDIM

Rational models comprise a rather general class of BDIMs for which the asymptotic behavior of the equilibrium frequencies and equilibrium sizes of domain families is fully tractable. The ergodic distribution of the stochastic rational BDIM is asymptotically the same as that of the deterministic rational BDIM [ 12 ]. In particular, if the model is 2nd-order balanced, then p_i ~ i^{-γ} (see A.1.2 and Proposition 1 in the Appendix for details [see Additional file 1 ]). The rational BDIMs can describe a substantially wider class of birth and death rates than the polynomial models. In particular, the birth rate can have a maximum at some specific value of the family size and then decrease with further growth of the size, e.g., as shown in Fig. 1 . This dependence of the rates on family size can be described by the rational model with λ_i = λ(i + a_1)/(i + a_2)^2, δ_i = δ(i + b_1)/(i + b_2)^2.

Figure 1 Dependence of the birth rate (λ_i = (i + c_1)/(i + c_2)^2) on the family size.

Logistic BDIM

Evidently, the number of size classes of protein families, N, should be finite, although intrinsic features that could determine the value of N have not been considered so far (the impossibility of an infinite genome is self-evident, but one would expect a much tighter bound based, e.g., on the limited time and resources available for genome replication and expression). Under the BDIMs described above, the birth rate grows monotonically as the family size increases from 1 to N and then abruptly drops to 0 (since families of size N+1 or greater are not allowed). However, this behavior is an arbitrary simplification of the model and can hardly reflect the actual process of genome evolution. In population dynamics models, the finiteness of a population size typically results from the "saturation type" of growth: the growth rate tends to 0 as the population size tends to the maximal possible value (see, e.g., [ 47 ]).
It seems likely that, during genome evolution, the gene duplication (and death) rate also tends to 0, as duplications leading to an increase in gene number become deleterious when the size of some paralogous families becomes prohibitively large. The simplest formalism that yields this type of population growth is the logistic form of the birth rate. Logistic-like stochastic models have been investigated in various applications (e.g., [ 48 , 49 ]), which considered a birth-and-death process with the rates λ(i) = c_3(c_1 + i)(N - i), δ(i) = c_3 i(c_2 - i), c_k > 0, k = 1, 2, 3, c_2 > N. This model produces log-normal and log-series distributions; with appropriate values of the parameters, power-law distributions of frequencies also appear, but only for intermediate values of i, namely 1 << i << N and N >> 1.

Non-linear transformation of BDIM

We have shown previously [ 43 ] that the following modification of any form of BDIM:

λ*_i = λ_i g(i), δ*_i = δ_i g(i-1),   (2.4)

where g(i), i = 0, ..., N, is a positive function with g(0) = 1, results in a BDIM with the same ergodic distribution of the family sizes as the original one. In particular, modifications of a linear BDIM with g(i) = (i + 1)^{d-1} or g(i) = (i + 1)^{d-1}(1 - i/(N + c)) define, respectively, wide classes of rational or logistic BDIMs with the same stationary distribution as the original linear BDIM, but with manifestly different dynamic properties.

3. Probability of formation of a family of the given size before extinction and mean and variance of extinction time

It is known [ 44 ] that the probability for the birth-and-death process to reach state n before reaching state 0 from an initial state i > 0 is given by formula (A.2.2). In terms of BDIM (1.1), this means that the probability of formation of a family of size n, starting from a family of size i, before extinction is given by (A.2.2).
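While (A.2.2) itself is given in the Appendix, the probability of a singleton reaching size n before extinction follows from the standard first-passage (gambler's-ruin type) identity for birth-death chains, P(1, n) = 1 / Σ_{k=0}^{n-1} Π_{j=1}^{k} δ_j/λ_j. A sketch with illustrative linear-BDIM parameters, also applying the degree-d transformation (2.4) with g(i) = (i + 1)^{d-1}:

```python
def p_form(lam, dlt, n):
    """P(a singleton reaches family size n before extinction), from the
    standard birth-death first-passage identity:
    P(1, n) = 1 / sum_{k=0}^{n-1} prod_{j=1}^{k} delta_j / lambda_j."""
    total, term = 0.0, 1.0
    for k in range(n):
        total += term                 # term = prod_{j=1}^{k} delta_j/lambda_j
        term *= dlt(k + 1) / lam(k + 1)
    return 1.0 / total

a, b, n = 1.0, 2.0, 300               # illustrative parameters, not genome-fitted
probs = {}
for d in (1, 2, 3):                   # transformed linear BDIM of degree d
    lam = lambda i, d=d: (i + a) * (i + 1) ** (d - 1)
    dlt = lambda i, d=d: (i + b) * i ** (d - 1)
    probs[d] = p_form(lam, dlt, n)
print(probs)
```

With a and b held fixed, each term of the sum is multiplied by (k + 1)^{-(d-1)} under the transformation, so the formation probability grows with the model degree d.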
The random birth-and-death process (1.1) certainly visits state 0 in the course of time; this means that any domain family will eventually go extinct (and then, formally, can be "reborn" by returning from the 0 class). Below we compute the mean time to extinction of a family of a given size for different versions of the BDIM; the mean time to extinction of the largest family in a given genome is of particular interest. Let us denote by S(n) = inf{t: X(t) = 0 | X(0) = n} the time to the first passage of state 0 from the initial state n; S(n) is a random variable for each n. The mean time to extinction of a family of initial size n, E(S(n)), is given by the general formula (A.3.2).

Linear BDIM

We have shown previously that, for the linear 2nd-order balanced BDIM, the probability that a singleton expands to a family of size n before dying, P^(1)(1, n), has a power asymptotic for large n (A.2.5). The values of the probabilities P^(1)(1, n) for different species are shown in Table 1; these probabilities are no greater than ~10^-4 - 10^-5. The mean time to extinction, E(S(n)), can be calculated using the relation E(S(n)) = (1/λ)E^(1)_n, where E^(1)_n, the mean time to extinction expressed in 1/λ time units, is given by formula (A.3.3) (see Table 1 for some numerical data and Figs. 1, 2 in [ 43 ]).

Table 1 Family evolution under the linear BDIM (d = 1)

    | N    | P^(d)(1,N)×10^2 | e^(d)_N    | E^(d)_N | f^(d)_N    | M^(d)_N | M^(d)_N/E^(d)_N | c^(d)_du | T^(d)_N
Sce | 130  | 0.284 | 295267     | 47.46   | 260080     | 20381.6 | 429.5  | 1.903 | 1939.3
Dme | 335  | 0.227 | 778830     | 153.74  | 734725     | 37409.9 | 243.3  | 1.784 | 3337.0
Cel | 662  | 0.160 | 1.866×10^6 | 347.76  | 1.803×10^6 | 68709.6 | 197.6  | 1.523 | 5232.2
Ath | 1535 | 0.016 | 2.150×10^7 | 702.65  | 2.087×10^7 | 529639  | 753.8  | 2.382 | 63080.0
Hsa | 1151 | 0.026 | 1.329×10^7 | 505.26  | 1.29×10^7  | 300665  | 595.1  | 2.721 | 40905.5
Tma | 97   | 0.060 | 681356     | 31.47   | 513450     | 80677.3 | 2563.6 | 1.109 | 4473.6
Mth | 43   | 1.125 | 37131.5    | 14.91   | 28570      | 4707.04 | 315.9  | 1.091 | 256.8
Sso | 81   | 0.461 | 129115     | 30.14   | 98440      | 12853.5 | 426.5  | 1.253 | 805.3
Bsu | 124  | 0.284 | 237343     | 48.89   | 202150     | 22921.0 | 468.8  | 1.320 | 1512.8
Eco | 140  | 0.155 | 440665     | 51.67   | 375943     | 37959.8 | 734.7  | 1.544 | 2930.5

For the linear BDIM (d = 1) and for the largest family of size N in each genome, the table shows the probability of formation P^(d)(1, N); the mean number of events before extinction of the largest family, e^(d)_N; the mean number of events before formation of the largest family from a singleton, f^(d)_N; the mean times of formation, M^(d)_N, and extinction, E^(d)_N (in 1/λ units); the value of the coefficient c^(d)_du = r_du/λ; and the mean times of formation, T^(d)_N, in Ga (10^9 years) under r_du = 2 × 10^-8. The model parameters were genome-specific, as determined previously [ 12 ], and were the same for all model degrees according to (2.4). Species abbreviations: Sce, Saccharomyces cerevisiae; Dme, Drosophila melanogaster; Cel, Caenorhabditis elegans; Ath, Arabidopsis thaliana; Hsa, Homo sapiens; Tma, Thermotoga maritima; Mth, Methanothermobacter thermoautotrophicum; Sso, Sulfolobus solfataricus; Bsu, Bacillus subtilis; Eco, Escherichia coli.

Figure 2 Coefficient of variation of the extinction time versus the family size for the linear BDIM. The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), and A. thaliana (green) (Table 1 in [ 43 ]).

The variance of the extinction time, Var(S(n)), for the linear 2nd-order balanced BDIM is Var(S(n)) = (1/λ^2)W^(1)_n, where W^(1)_n can be calculated using formula (A.3.7). The plot of the coefficient of variation s^(1)_n = (W^(1)_n)^{1/2}/E^(1)_n versus n for different species is shown in Fig. 2 (see also Table 1 for some numerical data). Clearly, the extinction time can vary within an extremely broad range of values.
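Extinction times can also be studied by direct simulation, in the spirit of the Monte Carlo approach used here. The sketch below (illustrative parameters, not the genome-specific fits, and a textbook first-passage recursion rather than formula (A.3.3) itself) compares Gillespie-style trajectories against the analytic mean time to extinction:

```python
import random

def mean_extinction_times(lam, dlt, N):
    """Analytic mean first-passage times to state 0 for a birth-death chain
    on {0..N} (reflecting at N): the increments u_i = E[S(i)] - E[S(i-1)]
    satisfy u_N = 1/delta_N and u_i = (1 + lambda_i * u_{i+1}) / delta_i."""
    u = [0.0] * (N + 1)
    u[N] = 1.0 / dlt(N)
    for i in range(N - 1, 0, -1):
        u[i] = (1.0 + lam(i) * u[i + 1]) / dlt(i)
    h = [0.0] * (N + 1)
    for i in range(1, N + 1):
        h[i] = h[i - 1] + u[i]       # h[n] = mean extinction time from size n
    return h

def simulate_extinction(lam, dlt, N, n, rng):
    """One Gillespie trajectory from size n until the family dies out."""
    t, i = 0.0, n
    while i > 0:
        lb = lam(i) if i < N else 0.0     # reflecting boundary at N
        total = lb + dlt(i)
        t += rng.expovariate(total)       # exponential sojourn time in state i
        i += 1 if rng.random() * total < lb else -1
    return t

# Linear BDIM with a slight death excess (illustrative, not genome-fitted)
a, b, N, n = 1.0, 2.0, 200, 20
lam = lambda i: i + a
dlt = lambda i: i + b

h = mean_extinction_times(lam, dlt, N)
rng = random.Random(1)
samples = [simulate_extinction(lam, dlt, N, n, rng) for _ in range(4000)]
mc_mean = sum(samples) / len(samples)
print(h[n], mc_mean)   # the Monte Carlo mean approaches the analytic value
```

The spread of the individual samples around the mean illustrates the large coefficients of variation discussed above: single trajectories routinely last several times longer or shorter than E(S(n)).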
Non-linear polynomial and rational BDIM

The stochastic behavior of the system and its characteristics can also be investigated within the broader framework of rational BDIMs. We will examine models represented as the transformed linear BDIM (2.1), with

λ_i = λ(i + a)(i + 1)^{d-1}, δ_i = λ(i + b)i^{d-1},   (3.1)

where d ≥ 1 is the model degree. Let us recall that Theorem 1 (Mathematical Appendix [see Additional file 1 ]) shows that the highest degrees and the corresponding coefficients of the birth and death rates at i^d must be equal in order to provide the power asymptotic of the stationary distribution, P(i) ~ i^{-γ}. The power γ of this distribution is completely determined by the degree d and the coefficients at i^{d-1}. Thus, the model (1.1), (3.1) is representative of all rational BDIMs of degree d with a given power asymptotic (γ = b - a + 1) of the stationary distribution. Moreover, according to Proposition 1, this distribution for model (3.1) is exactly the same as for the corresponding linear model with λ_i = λ(i + a), δ_i = λ(i + b), which was studied in detail in [ 12 ]. We applied formula (A.2.6), with g(i) = (i + 1)^{d-1}, to calculate the probability of formation of a family of a given size from a singleton before extinction for the BDIM of degree d, P^(d)(1, n). For example, the probabilities P^(2)(1, n) and P^(3)(1, n) for the quadratic and cubic BDIMs are given by this formula with g(i) = i + 1 and g(i) = (i + 1)^2, respectively. Figures 3 and 4 show the dependence of the probabilities P^(2)(1, n) and P^(3)(1, n) on the family size n for different species. The dependence of the probability P^(d)(1, N) of the formation of the largest family on the model degree is shown in Fig. 5.

Figure 3 Probability of family formation starting from a singleton, P^(2)(1, n), versus the family size (n) for the quadratic BDIM (in double logarithmic scale). The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), and A. thaliana (green).

Figure 4 Probability of family formation from a singleton, P^(3)(1, n), versus the family size (n) for the cubic BDIM. The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), and A. thaliana (green).

Figure 5 Probability of formation of the largest family starting from a singleton, P^(d)(1, N), for rational BDIMs depending on the model degree d. The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), and A. thaliana (green).

The mean time to extinction for the rational BDIM (1.1), (3.1) with a fixed d is calculated using formula (A.3.4), where E*_n is the mean time to extinction in 1/λ time units. Figures 6 and 7 show the dependence of E^(2)_n and E^(3)_n on n for the quadratic and cubic BDIMs, respectively. Fig. 8 shows the mean times of extinction of the largest family, E^(d)_N, for different species, depending on the model degree d. Some numerical values of the mean time to extinction for the quadratic and cubic BDIMs and different species are given in Tables 2 and 3. The variance of the extinction time of a family of size n, Var(S(n)) = (1/λ^2)W^(d)_n, d = 2, 3, for the quadratic and cubic BDIMs, and the coefficient of variation, s^(d)_n = (W^(d)_n)^{1/2}/E^(d)_n, are calculated using formulas (A.3.8). The results are shown in Figs. 9 and 10. Some numerical values of the coefficient of variation of the extinction time for different species are given in Table 4.

Figure 6 Mean time to extinction (in 1/λ units) depending on the family size for the quadratic BDIM. The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), and A. thaliana (green).

Figure 7 Mean time to extinction (in 1/λ units) depending on the family size for the cubic BDIM. The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), and A. thaliana (green).

Figure 8 Mean time to extinction (in 1/λ units) of the largest family for the rational BDIM depending on the model degree d. The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), and A. thaliana (green).

Figure 9 Coefficient of variation of the time to extinction depending on the family size for the quadratic BDIM. The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), and A. thaliana (green).

Figure 10 Coefficient of variation of the extinction time versus the family size for the cubic BDIM. The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), and A. thaliana (green).

Table 2 Family evolution under the quadratic BDIM (d = 2)

    | N    | P^(d)(1,N)×10^2 | e^(d)_N    | E^(d)_N | f^(d)_N   | M^(d)_N | M^(d)_N/E^(d)_N | c^(d)_du | T^(d)_N
Sce | 130  | 0.230 | 33206.9    | 2.82 | 32772     | 249.80 | 88.58  | 7.56  | 94.4
Dme | 335  | 0.404 | 127814     | 4.72 | 127567    | 206.26 | 43.71  | 11.67 | 120.4
Cel | 662  | 0.498 | 394794     | 6.61 | 394593    | 215.36 | 32.58  | 15.80 | 170.2
Ath | 1535 | 0.131 | 2.768×10^6 | 5.98 | 2.77×10^6 | 638.27 | 106.73 | 22.50 | 718.1
Hsa | 1151 | 0.166 | 1.555×10^6 | 5.37 | 1.68×10^6 | 468.84 | 87.31  | 24.48 | 573.9
Tma | 97   | 0.039 | 38872.6    | 2.25 | 36306     | 1231.3 | 547.26 | 3.27  | 201.3
Mth | 43   | 0.315 | 4539.9     | 2.03 | 4234      | 166.47 | 77.09  | 3.33  | 27.7
Sso | 81   | 0.233 | 13281.1    | 2.61 | 12852     | 252.47 | 97.11  | 4.33  | 54.7
Bsu | 124  | 0.212 | 26441.0    | 3.10 | 25969     | 304.97 | 98.38  | 5.09  | 77.6
Eco | 140  | 0.135 | 34970.6    | 2.90 | 40270     | 431.85 | 148.91 | 5.74  | 123.9

For the quadratic BDIM (d = 2) and for the largest family of size N in each genome, the table shows the probability of formation P^(d)(1, N); the mean number of events before extinction of the largest family, e^(d)_N; the mean number of events before formation of the largest family from a singleton, f^(d)_N; the mean times of formation, M^(d)_N, and extinction, E^(d)_N (in 1/λ units); the value of the coefficient c^(d)_du = r_du/λ; and the mean times of formation, T^(d)_N, in Ga (10^9 years) under r_du = 2 × 10^-8. The model parameters were the same as for the linear model, according to (2.4). Species abbreviations: Sce, Saccharomyces cerevisiae; Dme, Drosophila melanogaster; Cel, Caenorhabditis elegans; Ath, Arabidopsis thaliana; Hsa, Homo sapiens; Tma, Thermotoga maritima; Mth, Methanothermobacter thermoautotrophicum; Sso, Sulfolobus solfataricus; Bsu, Bacillus subtilis; Eco, Escherichia coli.

Table 3 Family evolution under the cubic BDIM (d = 3)

    | N    | P^(d)(1,N) | e^(d)_N   | E^(d)_N | f^(d)_N   | M^(d)_N | M^(d)_N/E^(d)_N | c^(d)_du | T^(d)_N
Sce | 130  | 0.105 | 12315.7   | 0.944 | 12306     | 4.60 | 4.84 | 92.46   | 21.3
Dme | 335  | 0.222 | 60759.4   | 1.390 | 60755     | 2.45 | 1.76 | 549.65  | 67.3
Cel | 662  | 0.283 | 208472    | 1.804 | 208469    | 2.10 | 1.17 | 2020.37 | 212.1
Ath | 1535 | 0.255 | 1.29×10^6 | 1.390 | 1.29×10^6 | 1.93 | 1.39 | 3754.83 | 362.3
Hsa | 1151 | 0.254 | 756242    | 1.291 | 756238    | 1.65 | 1.27 | 2938.07 | 242.4
Tma | 97   | 0.019 | 9442.5    | 0.781 | 9390      | 24.5 | 31.4 | 18.84   | 23.1
Mth | 43   | 0.061 | 1530.2    | 0.848 | 1514      | 7.85 | 9.24 | 18.26   | 7.2
Sso | 81   | 0.073 | 4799.6    | 0.960 | 4786      | 7.21 | 7.51 | 36.71   | 13.2
Bsu | 124  | 0.088 | 10265.3   | 1.059 | 10254     | 6.40 | 6.04 | 63.38   | 20.3
Eco | 140  | 0.071 | 14459.9   | 0.957 | 14446     | 7.34 | 7.67 | 65.06   | 23.9

For the cubic BDIM (d = 3) and for the largest family of size N in each genome, the table shows the probability of formation P^(d)(1, N); the mean number of events before extinction of the largest family, e^(d)_N; the mean number of events before formation of the largest family from a singleton, f^(d)_N; the mean times of formation, M^(d)_N, and extinction, E^(d)_N (in 1/λ units); the value of the coefficient c^(d)_du = r_du/λ; and the mean times of formation, T^(d)_N, in Ga (10^9 years) under r_du = 2 × 10^-8. The model parameters were the same as for the linear model, according to (2.4). Species abbreviations: Sce, Saccharomyces cerevisiae; Dme, Drosophila melanogaster; Cel, Caenorhabditis elegans; Ath, Arabidopsis thaliana; Hsa, Homo sapiens; Tma, Thermotoga maritima; Mth, Methanothermobacter thermoautotrophicum; Sso, Sulfolobus solfataricus; Bsu, Bacillus subtilis; Eco, Escherichia coli.
Table 4 Coefficients of variation of the extinction and formation times for the BDIMs of different degrees

    | N    | s^(1)_N | σ^(1)_N | s^(2)_N | σ^(2)_N | s^(3)_N | σ^(3)_N
Dme | 335  | 194.11 | 81.79  | 304.96  | 126.90 | 766.29  | 184.70
Cel | 662  | 413.30 | 195.73 | 460.31  | 277.24 | 481.65  | 391.25
Ath | 1535 | 885.78 | 421.03 | 1016.85 | 583.56 | 1042.86 | 886.95
Hsa | 1151 | 649.77 | 308.40 | 746.56  | 425.21 | 768.04  | 647.23

The table shows the coefficient of variation of the extinction time for the largest family, s^(d)_N, and the coefficient of variation of the formation time for the largest family, σ^(d)_N; d = 1, 2, 3 for the linear, quadratic, and cubic BDIM, respectively. Species abbreviations: Dme, Drosophila melanogaster; Cel, Caenorhabditis elegans; Ath, Arabidopsis thaliana; Hsa, Homo sapiens.

Logistic BDIM

Let us consider the logistic modification of the rational BDIM; specifically, we will examine models with birth and death rates of the form

λ_i = λ(i + a)(i + 1)^{d-1}(1 - i/(N + c)), δ_i = δ(i + b)i^{d-1}(1 - (i - 1)/(N + c)).   (3.2)

We will refer to the parameter c as the saturation boundary. The shape of λ_i essentially depends on the value of c (Fig. 11).

Figure 11 Dependence of λ_i (3.2) at d = 2 on i for different boundary values, c = 1, c = 100, and c = 1000 (from bottom to top). The model parameters are for Drosophila melanogaster.

The logistic model (1.1), (3.2) is a transformation (2.4) of the linear BDIM using the function

g(i) = (i + 1)^{d-1}(1 - i/(N + c)), c = const ≥ 0.   (3.3)

The stationary distribution of family size frequencies for the logistic model (1.1), (3.2) is exactly the same as that for the corresponding linear BDIM, but the stochastic properties are different, close to those of the rational models, and essentially depend on the boundary c. For a large c, the model is very close to the corresponding rational model with λ_i = λ(i + a)(i + 1)^{d-1}, δ_i = δ(i + b)i^{d-1}, but for a small c, some new effects can be observed when the family size approaches N.
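The invariance of the stationary distribution under the transformation (2.4) is easy to verify numerically, because g cancels in the ratios λ*_{i-1}/δ*_i. A sketch with illustrative parameters, comparing a linear BDIM with its logistic transform (3.3):

```python
def stationary(lam, dlt, N):
    # p_i / p_{i-1} = lambda_{i-1} / delta_i, normalized over {1..N}
    p = [1.0]
    for i in range(2, N + 1):
        p.append(p[-1] * lam(i - 1) / dlt(i))
    s = sum(p)
    return [x / s for x in p]

# Linear BDIM (illustrative parameters) and its logistic transform (3.3)
N, a, b, c, d = 400, 1.0, 2.0, 100.0, 2
lam = lambda i: i + a
dlt = lambda i: i + b
g = lambda i: (i + 1.0) ** (d - 1) * (1.0 - i / (N + c))   # g(0) = 1

lam_log = lambda i: lam(i) * g(i)        # lambda*_i = lambda_i g(i)
dlt_log = lambda i: dlt(i) * g(i - 1)    # delta*_i  = delta_i g(i-1)

p_lin = stationary(lam, dlt, N)
p_log = stationary(lam_log, dlt_log, N)
err = max(abs(x - y) for x, y in zip(p_lin, p_log))
print(err)   # g cancels in lambda*_{i-1}/delta*_i, so the distributions coincide
```

The same check with g(i) = (i + 1)^{d-1} alone reproduces the rational case (3.1); only the dynamic properties, not the stationary frequencies, distinguish the transformed models.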
The probability of formation of a family of a given size from a singleton before extinction for the logistic BDIM is calculated using the general formula (A.2.6), where the function g(i) is given by (3.3). The dependence of this probability on the model degree d under a fixed large value of the boundary, c ~ N, is similar to that for the corresponding rational models but differs for a small c; Fig. 12 shows this dependence for c = 1.

Figure 12 Dependence of the probability P^(d)(1, n) on the family size n for the logistic model with c = 1 for d = 1, 2, and 3 (from bottom to top). The model parameters are for Drosophila melanogaster.

The mean times of extinction for the logistic BDIMs are calculated using formula (A.3.4). Fig. 13 shows the mean times of extinction of the largest family, E^(d)_N, depending on the model degree d for different values of the saturation boundary c. Fig. 14 shows the dependence of E^(d)_N on the saturation boundary c for different values of d.

Figure 13 Mean time to extinction (in 1/λ units) of the largest families for the logistic BDIM depending on the model degree d for c = 1, c = 100, and c = 1000 (from top to bottom, in double logarithmic scale). The model parameters are for Drosophila melanogaster.

Figure 14 Mean time to extinction (in 1/λ units) of the largest families for the logistic BDIM depending on the boundary value c for d = 1 (left) and d = 2 (right). The model parameters are for Drosophila melanogaster.

4. Mean and variance of formation time for a family of the given size

Let us denote by T(j, n) = inf{t: X(t) = n | X(0) = j} the time to the first passage of state n from the initial state j; T(j, n) is a random variable for each j, n. The mean time to the first passage for BDIM (1.1), m(j, n) = E(T(j, n)), can be calculated using the formula m(j, n) = m_0(j, n) + m_1(j, n).
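For model (1.3) (an essential singleton with a reflecting boundary at 1), the mean formation time decomposes into per-level climbing times, which satisfy the standard birth-death recursion w_1 = 1/λ_1, w_i = 1/λ_i + (δ_i/λ_i)w_{i-1}, with M(1, n) = Σ_{i=1}^{n-1} w_i. The sketch below uses this textbook recursion (not formula (A.4.6) itself) with illustrative, non-genome-fitted parameters:

```python
def mean_formation_time(lam, dlt, n):
    """Mean time (in 1/lambda units) for a family to grow from an essential
    singleton (reflecting boundary at 1, so no deaths at size 1) to size n,
    via per-level climbing times:
    w_1 = 1/lambda_1, w_i = 1/lambda_i + (delta_i/lambda_i) * w_{i-1},
    M(1, n) = sum_{i=1}^{n-1} w_i  (assumes n >= 2)."""
    w = 1.0 / lam(1)
    total = w
    for i in range(2, n):
        w = 1.0 / lam(i) + (dlt(i) / lam(i)) * w
        total += w
    return total

a, b, n = 1.0, 2.0, 300          # illustrative parameters, not genome-fitted
times = {}
for d in (1, 2, 3):              # transformed linear BDIM of degree d, as in (3.1)
    lam = lambda i, d=d: (i + a) * (i + 1) ** (d - 1)
    dlt = lambda i, d=d: (i + b) * i ** (d - 1)
    times[d] = mean_formation_time(lam, dlt, n)
print(times)  # in 1/lambda units the mean formation time drops sharply with d
```

With fixed a and b, the transformation both raises λ_i and lowers the ratio δ_i/λ_i, so every climbing time w_i, and hence M(1, n) in 1/λ units, decreases as d grows, consistent with the comparison of Tables 1, 2, and 3.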
Here the term m_0(j, n) is the mean time elapsed before the system leaves the 0 state for the last time, and the term m_1(j, n) is the mean time of formation of a family of size n from a singleton after its last "resurrection" (see formulas (A.4.1) for details). Below we examine only the mean family formation time from an essential singleton (model (1.3)).

Linear BDIM

Previously, we determined the mean time of formation of a family of size n from a singleton for different species [43]. For the linear BDIM, the mean formation time from an essential singleton is given by M(1)(1, n) = (1/λ)M(1)_n, where M(1)_n, the mean formation time in 1/λ units, is calculated using formula (A.4.6). The transition from 1/λ time units to years is considered in s.6 of the Mathematical Appendix [see Additional file 1]. The mean formation time E(T(1, n)) in years is calculated using formula (A.6.4) and the current empirical estimates of the gene duplication rate [24]. Plots of E(T(1, n)) for different species are shown in Fig. 15.

Figure 15. Mean time to formation (in years, Ga, with r_du = 2 × 10^-8) depending on family size for the linear BDIM (double logarithmic scale). The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), Arabidopsis thaliana (green).

Once we computed the mean time of formation of a family of size n for different species, the question arises of how accurately the mean value predicts the time T(i, n) of the first random passage through the threshold n. To address this problem, we estimated the variance of the family formation time, Var(T(i, n)), using the general formulas (A.5.2) for model (1.1) and (A.5.3) for model (1.3), respectively. For the linear BDIM, the variance of the formation time for a family of size n from an essential singleton, V(1)_n, is given by formula (A.5.5).
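The appendix formula (A.4.6) has a simple numerical counterpart: for a birth-death chain, the mean time h_i of the step i → i+1 satisfies the standard first-passage recursion h_i = 1/λ_i + (δ_i/λ_i)h_{i-1}, with h_1 = 1/λ_1 when the singleton cannot be lost (the essential-singleton model (1.3)). A minimal sketch, with the rate functions left as arguments:

```python
def mean_formation_time(n, birth, death):
    """Mean first-passage time from a singleton (state 1) to state n for a
    birth-death chain whose singleton cannot be lost, mirroring the
    'essential singleton' of model (1.3).  h is the mean time of the step
    i -> i+1: h_i = 1/lambda_i + (delta_i/lambda_i)*h_{i-1}, h_1 = 1/lambda_1.
    Times are in 1/lambda units if birth() is expressed in those units."""
    total, h = 0.0, 0.0
    for i in range(1, n):
        if i == 1:
            h = 1.0 / birth(1)
        else:
            h = 1.0 / birth(i) + death(i) / birth(i) * h
        total += h
    return total

# Example with a pure-birth (Yule) chain, birth rate lambda_i = i: the mean
# time to grow from 1 to n is 1 + 1/2 + ... + 1/(n-1).
yule_time = mean_formation_time(4, lambda i: float(i), lambda i: 0.0)
```

The recursion accumulates the mean waiting time at each size, so death rates that nearly match birth rates inflate the total sharply, which is the effect driving the unrealistic linear-BDIM formation times discussed below.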
A more important and informative characteristic, which is independent of the model parameter λ, is the coefficient of variation of the formation time of a family of size n from a singleton, σ(d)_n = (V(d)_n)^(1/2)/M(d)(1; n), for the BDIM of degree d. The plots of σ(1)_n versus n for the linear model and for different species are shown in Fig. 16.

Figure 16. The coefficient of variation σ(1)_n of family formation time depending on n for the linear BDIM. The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), Arabidopsis thaliana (green).

The coefficients of variation were very large for all species (see numerical values in Table 4). To summarize the results obtained for the stochastic characteristics of the linear BDIM, we found that: i) under this model, the mean time to extinction of the largest families in most genomes was much shorter than the mean time of formation of these families, and ii) using the current estimates of duplication rates in eukaryotic genomes (r_du ≈ 2 × 10^-8 duplications/gene/year [24]) to express the mean family formation times in real time units instead of the dimensionless 1/λ units, we obtain M(1)(1; N) ~ 10^13–10^14 yrs, a completely unrealistic time estimate. The mean family formation times given by the linear BDIM would become realistic only if the recent analyses underestimated the gene duplication rate by a factor of ~10^4, which does not seem plausible. Thus, the linear BDIM cannot provide an adequate description of genome evolution, at least when only the mean time of family formation is considered. The variance of the family formation time is extremely large (the coefficient of variation is ~10^2), and, accordingly, deviations from the mean time by more than two orders of magnitude are possible.
However, even taking this into account, the family formation times predicted by the linear BDIM are far longer than the time allotted for the evolution of life on Earth. In the next section, we consider non-linear, higher-order models that have the potential to yield shorter mean times of family formation.

Polynomial BDIMs

The mean time of formation of a family from an essential singleton (or after the last "resurrection" of a family), depending on the family size n, for the polynomial BDIMs is E(T(1, n)) = (1/λ)M*_n, where M*_n, the mean formation time in 1/λ units, can be calculated using formulas (A.4.9). Fig. 17 shows the dependence of the mean time of family formation on the family size for the quadratic BDIM in years, calculated using formula (A.6.4). The values of the mean formation times for this BDIM are given in Table 2.

Figure 17. Mean time of formation (in years, Ga, with r_du = 2 × 10^-8) depending on family size n for the quadratic BDIM (in double logarithmic scale). The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), Arabidopsis thaliana (green).

The variance of the formation time of a family of size n can be calculated using formula (A.5.6), with g(j) = j + 1 for the quadratic BDIM and g(j) = (j + 1)^2 for the cubic BDIM, respectively. The dependence of the coefficient of variation σ(2)_n = (V(2)(1, n))^(1/2)/M(2)(1; n) on the family size for the quadratic BDIM is shown in Fig. 18, and some numerical data are given in Table 4.

Figure 18. The coefficient of variation σ(2)_n of formation time versus family size for the quadratic BDIM for different species. The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), Arabidopsis thaliana (green).
Although the variance of family formation times for the quadratic BDIM is approximately 5 orders of magnitude less than that for the linear BDIM, the values of the coefficient of variation for the quadratic BDIM are about 1.3–1.5 times greater than those for the linear BDIM. Thus, the actual formation time for the largest family could differ from the mean value by several orders of magnitude with a high probability. Figures 19 and 20 show the dependence of the mean and the coefficient of variation of family formation time on family size for the cubic BDIM.

Figure 19. Mean time of formation (in years, Ga, with r_du = 2 × 10^-8) depending on family size n for the cubic BDIM. The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), Arabidopsis thaliana (green).

Figure 20. The coefficient of variation σ(3)_n of formation time versus family size for the cubic BDIM. The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), Arabidopsis thaliana (green).

We have shown previously that, under the same value of the parameter λ, the cubic model evolves at an extremely high rate compared with the linear and even the quadratic models [43]. In contrast, the mean formation times in years for the quadratic and cubic models are of the same order (Tables 2 and 3). The polynomial models bring the mean time required for the formation of families of the observed size closer to realistic values, but these times still remain far too long. Specifically, with the empirical estimates of the duplication rates used above for the linear BDIM, the quadratic model gives mean family formation times of ~10^11 yrs. This value is close to the minimum possible time of family formation that can be calculated using the duplication rate estimates of Lynch and Conery [24] and non-linear rational BDIMs.
Non-linear rational BDIMs

Let us investigate the dependence of the mean time of family formation on the model degree and the family size. The mean time of formation of a family of size n from a singleton under a fixed model degree d, M(d)(1; n), for the rational BDIM (1.1), (3.1) is calculated using formula (A.4.9). A comparison of the mean times of formation and extinction for rational BDIMs reveals an interesting property of non-linear BDIMs: for any given family size n, there exists a model degree at which the times of family formation and extinction are equal (as is apparent from the intersection of the respective curves in Fig. 21). Accordingly, at higher model degrees, the mean time of formation becomes shorter than the mean time to extinction. The model degree that corresponds to the point of intersection in Fig. 21 obviously depends on the size of the family considered. Tables 2 and 3 show that, for the largest families of different species, the mean time of formation is about 100 times longer than the mean time to extinction under the quadratic BDIM and only about 10 times longer under the cubic model.

Figure 21. Mean times (in 1/λ units) of formation (upper curve before the point of intersection) and extinction (upper curve after the point of intersection) of the largest family depending on the model degree (semi-logarithmic scale). The model parameters are for Homo sapiens.

As shown previously, increasing the degree (the "order of interaction") d results in an indefinite decrease of the family formation time expressed in 1/λ time units ([43] and Fig. 22). However, we have also shown that this effect is offset by the rapid increase of the average duplication rate in the model. Assuming a gene duplication rate of ~2 × 10^-8 year^-1 [24], the evolution time in years, calculated according to formula (A.6.5), does not decrease indefinitely but has a minimum at a model degree between 2 and 3 (Fig. 23).
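The narrowing gap between formation and extinction times with growing degree can be illustrated with the elementary first-passage recursions for a birth-death chain. The sketch below uses the rational rates (3.1) with placeholder coefficients (a = 0.5, b = 1.5, λ = δ = 1) rather than the fitted genome parameters, and truncates the chain at a ceiling for the extinction computation; as in Tables 2 and 3, the ratio of formation time to extinction time shrinks as the degree d grows.

```python
def rational_rates(d, lam=1.0, delta=1.0, a=0.5, b=1.5):
    """Rational BDIM rates (3.1); a, b, lam, delta are placeholders."""
    return (lambda i: lam * (i + a) * (i + 1) ** (d - 1),
            lambda i: delta * (i + b) * i ** (d - 1))

def formation_time(n, birth, death):
    """Mean time (1/lambda units) for an essential singleton to reach size n."""
    total, h = 0.0, 0.0
    for i in range(1, n):
        h = 1.0 / birth(i) + (death(i) / birth(i) * h if i > 1 else 0.0)
        total += h
    return total

def extinction_time(n, birth, death, n_max=500):
    """Mean time to reach size 0 from size n; the chain is truncated at a
    ceiling n_max, so the result slightly underestimates the exact value."""
    g = [0.0] * (n_max + 1)      # g[i] = mean time of the step i -> i-1
    g[n_max] = 1.0 / death(n_max)
    for i in range(n_max - 1, 0, -1):
        g[i] = 1.0 / death(i) + birth(i) / death(i) * g[i + 1]
    return sum(g[1:n + 1])

n = 335          # size of the largest Dme family, used as an example
ratios = []
for d in (1.0, 2.0, 3.0):
    birth, death = rational_rates(d)
    ratios.append(formation_time(n, birth, death) /
                  extinction_time(n, birth, death))
```

With these placeholder coefficients, the formation/extinction ratio falls steeply from the linear to the cubic model while formation remains the slower of the two processes, in qualitative agreement with the tables.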
Even the minimum mean time of the largest family formation achievable with the rational BDIMs is on the order of 10^11 years (see Table 6), which is incompatible with the age of life on Earth [43]. Thus, a rational BDIM of any degree cannot provide an adequate description of genome evolution, at least when only the mean time of family formation is considered. Accordingly, for assessing the feasibility of the formation of the largest families under a given model, the variance of the formation time should be investigated.

Figure 22. Mean time of formation of the largest family (in 1/λ units), M(d)_N, for the rational BDIM depending on the model degree d (double logarithmic scale). The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), Arabidopsis thaliana (green).

Figure 23. Dependence of the time (in years, Ga) required for the formation of the largest family on the model degree d for the rational BDIM (semi-logarithmic scale). The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), Arabidopsis thaliana (green).

Table 6. Rational BDIM yielding the shortest mean time of family formation

Species   N      D      R(D)(N)   T(D)_N
Sce       130    3.13   416.0     20.8
Dme       335    2.67   1131.0    56.55
Cel       662    2.44   2317.7    115.9
Ath       1535   2.65   5553.8    277.7
Hsa       1151   2.71   4079.5    204.
Tma       97     3.56   317.8     15.9
Mth       43     2.40   125.2     6.3
Sso       81     2.19   254.2     12.7
Bsu       124    2.05   404.4     20.
Eco       140    2.16   460.4     23.

For each genome, D is the value of the model degree d that results in the minimum mean time of formation of the largest family; T(d)_N = R(d)(N)/r_du (in Ga, under the indicated value of d and r_du = 2 × 10^-8) is shown.
Species abbreviations: Sce, Saccharomyces cerevisiae; Dme, Drosophila melanogaster; Cel, Caenorhabditis elegans; Ath, Arabidopsis thaliana; Hsa, Homo sapiens; Tma, Thermotoga maritima; Mth, Methanothermobacter thermoautotrophicum; Sso, Sulfolobus solfataricus; Bsu, Bacillus subtilis; Eco, Escherichia coli.

Generally, the variance of the formation time of a family of a given size is given by formulas (A.5.3) and (A.5.6). Although the variance of formation times for the quadratic and, especially, the cubic BDIM is several orders of magnitude less than that for the linear BDIM, the coefficients of variation for both formation and extinction time increase with the model degree (Table 4). These coefficients are so large that the actual formation time of the largest family could differ from its mean value by several orders of magnitude with a high probability.

Logistic BDIM

The mean time of formation (in 1/λ units) of a family of size n from an essential singleton for the logistic BDIM (1.3), (3.2) under fixed d is calculated using formula (A.4.9). Fig. 24 shows the dependence of the mean times of family formation, M(d)(1; n), on the family size n for different model degrees d under the fixed saturation boundary c = 1, and Fig. 25 shows the dependence of the mean times of family formation on the boundary value (see Tables 7 and 8 for some numerical data). Similarly to the rational BDIM, increasing the degree (the "order of interaction") of the logistic model results in faster family evolution under a fixed value of the parameter λ. However, when this internal model parameter is excluded and the mean time of family formation is expressed in years according to formula (A.6.5), we again face a restriction that does not allow indefinite shortening of the family formation time, T(d)_N. Specifically, T(d)_N for the logistic model with a fixed N has a minimum over d.
We identified the model degrees yielding the minimum mean time of formation of the largest family for the logistic BDIM. Fig. 26 and Table 9 show the dependence of T(d)_N on d for the logistic model with a fixed saturation boundary.

Figure 24. Mean time of formation (in 1/λ units) of a family of a given size depending on the size for the logistic BDIM with the boundary value c = 1 for d = 1, d = 2, d = 3 (from top to bottom, semi-logarithmic scale). The model parameters are for Drosophila melanogaster.

Figure 25. Mean time of formation (in 1/λ units) of the largest family for the logistic BDIM depending on the model degree d for c = 1, c = 100 and c = 1000 (from top to bottom, double logarithmic scale). The model parameters are for Drosophila melanogaster.

Table 7. Evolution of gene families under the logistic BDIM with c = 1 and different d

       P(d)(1, N)     E(d)_N   M(d)_N    M(d)_N/E(d)_N   c(d)_du = r_du vλ   T(d)_N
d = 1  0.24 × 10^-7   314.72   351042.   1115.4          1.7545              30795.2
d = 2  0.68 × 10^-3   5.66     1247.3    220.37          10.073              628.20
d = 3  0.113          1.41     6.14      4.35            297.29              91.27

Model parameters are for D. melanogaster.

Table 8. Evolution of gene families under the logistic BDIM with c = 100 and different d

       P(d)(1, N)     E(d)_N   M(d)_N    M(d)_N/E(d)_N   c(d)_du = r_du vλ   T(d)_N    L(d)
d = 1  0.94 × 10^-5   227.19   90107.4   396.62          1.7612              7934.9    32.62
d = 2  0.2 × 10^-2    5.24     412.45    78.71           10.437              215.24    193.34
d = 3  0.178          1.40     3.39      2.42            354.72              25.25     6571.04

Model parameters are for D. melanogaster.

Table 9. Logistic BDIM yielding the shortest mean time of family formation under c = 1

Species   N      D      R(D)(N)   T(D)_N
Dme       335    3.18   1726.8    86.34
Cel       662    2.92   3749.5    187.5
Ath       1535   3.11   10234.5   511.7
Hsa       1151   3.19   7433.9    371.7

For each genome, D is the value of the model degree d that results in the minimum mean time of formation of the largest family; T(d)_N = R(d)(N)/r_du (in Ga) is indicated.
Species abbreviations: Dme, Drosophila melanogaster; Cel, Caenorhabditis elegans; Ath, Arabidopsis thaliana; Hsa, Homo sapiens.

Figure 26. Dependence of the mean time (in years, Ga) required for the formation of the largest family on the model degree d for the logistic BDIM under the fixed saturation boundary c = 1 (semi-logarithmic scale). The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), Arabidopsis thaliana (green).

Thus, as in the case of the rational BDIMs, increasing the degree of logistic BDIMs under a fixed value of the average duplication rate r_du cannot yield mean family formation times < 10^11 years. Furthermore, the "saturation effect" seen in the logistic models increases the mean time of family formation compared to the corresponding rational models (compare Tables 5 and 7).

Table 5. Coefficients of variation of the number of events before formation of the largest family for the BDIMs of different degrees

Species   N      Σ(1)_N   Σ(2)_N   Σ(3)_N
Dme       335    87.00    86.60    79.91
Cel       662    177.99   168.73   154.81
Ath       1535   402.66   399.03   366.50
Hsa       1151   296.42   299.31   276.23

The coefficient of variation Σ(d)_N of the number of events before formation of the largest family is shown; d = 1 for the linear BDIM, d = 2 for the quadratic BDIM, d = 3 for the cubic BDIM. Species abbreviations: Dme, Drosophila melanogaster; Cel, Caenorhabditis elegans; Ath, Arabidopsis thaliana; Hsa, Homo sapiens.

5. The mean number of elementary events before family extinction and formation

Comparing the mean family formation and extinction times predicted by BDIMs with the actual evolutionary timescale allows us to choose the most appropriate version from the examined class of models. The number of elementary evolutionary events, namely duplications and deletions of domains, predicted by these models is of potential interest in itself as an approximation of an important characteristic of genome evolution.
To calculate the mean number of elementary events during the evolution of gene families, we employed the so-called embedding chains {Y_n} instead of the original BDIM. The embedding chain {Y_n} for a particular BDIM is a random walk with discrete time on the same set of states and transition probabilities p_{i,i+1} = β_i = λ_i/(λ_i + δ_i), p_{i,i-1} = μ_i = δ_i/(λ_i + δ_i), and p_{ij} = 0 in all other cases (see s.7 of the Mathematical Appendix for details [see Additional file 1]). The transition from state i to state i+1 (or i-1) corresponds to the duplication (or deletion) of a domain in a family of size i. The only difference between the original birth-and-death process and the embedding chain is that the sojourn time for the embedding chain is equal to 1 for any state i instead of 1/(λ_i + δ_i). The ratio β_i/μ_i (= λ_i/δ_i) characterizes the trend of family evolution from state i, i.e., whether the family is more likely to grow or to shrink; for a symmetric random walk, β_i/μ_i = 1 for all i. The dependence of the ratio β_i/μ_i on i for different rational and logistic embedded chains is shown in Figures 27 and 28. For the rational models, β_i/μ_i ≈ 1 for large i; for the logistic models, β_i/μ_i ≈ 1 for 0 << i << N (however, this ratio deviates significantly from 1 at both ends of the interval of states). Thus, the behavior of the embedding chain is similar to that of the symmetric random walk in the corresponding subsets of states. Informally, the plots in Figures 27 and 28 indicate that small families may preferentially grow (under higher-degree models) or shrink (under low-degree models), whereas the evolution of large families tends to a symmetric random walk.

Figure 27. The ratio β_i/μ_i against family size i for the rational BDIM depending on the model degree d: d = 1, d = 1.6, d = 2 (from bottom to top), in double logarithmic scale. The model parameters are for Drosophila melanogaster.
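The trend ratio β_i/μ_i of the embedding chain reduces to λ_i/δ_i and is easy to inspect numerically. The sketch below uses the rational rates (3.1) with placeholder coefficients (a = 0.5, b = 1.5, λ = δ = 1); as in Figs. 27 and 28, low-degree models bias small families toward shrinking and high-degree models toward growth, while β_i/μ_i → 1 for large i.

```python
def embedded_trend(i, d, lam=1.0, delta=1.0, a=0.5, b=1.5):
    """Ratio beta_i/mu_i (= lambda_i/delta_i) of the embedding chain for the
    rational BDIM (3.1); the coefficients are illustrative placeholders."""
    birth = lam * (i + a) * (i + 1) ** (d - 1)
    death = delta * (i + b) * i ** (d - 1)
    beta = birth / (birth + death)   # p(i -> i+1) of the embedded random walk
    mu = 1.0 - beta                  # p(i -> i-1)
    return beta / mu
```

With these placeholders, a singleton shrinks on average under the linear model (ratio 0.6) and grows under the cubic model (ratio 2.4), while for large i the ratio approaches 1 for any degree.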
Figure 28. The ratio β_i/μ_i against family size i for the logistic BDIM (3.2) with c = 1 depending on the model degree d: d = 1, d = 1.6, d = 2 (from bottom to top). The model parameters are for Drosophila melanogaster.

The mean number of elementary events before the formation of a family of a given size, f_n, is computed using formulas (A.7.5)-(A.7.7). The plots in Figures 29 and 30 show the dependence of f_n on the family size for different species for the linear and quadratic models, respectively. The mean number of elementary events before the extinction of a family of a given size, e_n, is computed using formulas (A.7.13)-(A.7.15); Figures 31 and 32 show the corresponding dependences for family extinction. Some numerical data on the mean number of elementary events for polynomial BDIMs are shown in Tables 1, 2 and 3 and, for the coefficients of variation, in Table 5. Given that all the analyzed BDIMs are balanced, i.e., the birth and death rates are asymptotically equal, it was not unexpected that the mean number of events required for the formation of a large family (or the number of events preceding the extinction of such a family) was orders of magnitude greater than the size of the family. This suggests a highly dynamic picture of genome evolution whereby numerous duplications counterbalanced by gene losses are typically involved in the evolution of large families. However, the number of events required for the formation of a family of a given size quickly drops with the increase of the model degree (Fig. 33), which may be construed as a reflection of positive selection leading to amplification of family members.

Figure 29. Mean number of events before the formation of a family of a given size for the linear BDIM (double logarithmic scale). The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), Arabidopsis thaliana (green).
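The point that the number of events dwarfs the family size under balanced rates can be checked with an elementary counterpart of formulas (A.7.5)-(A.7.7): denoting by k_i the mean number of embedded-chain steps for the move i → i+1, k_i = 1/β_i + (μ_i/β_i)k_{i-1}, with k_1 = 1 for an essential singleton. A sketch:

```python
def mean_events_to_size(n, birth, death):
    """Mean number of embedded-chain steps (duplications plus deletions)
    before a family first reaches size n, starting from an essential
    singleton (state 1 cannot be lost).  k is the mean number of steps
    for the move i -> i+1: k_i = 1/beta_i + (mu_i/beta_i)*k_{i-1}."""
    total, k = 0.0, 0.0
    for i in range(1, n):
        if i == 1:
            k = 1.0               # the essential singleton can only duplicate
        else:
            beta = birth(i) / (birth(i) + death(i))
            k = 1.0 / beta + (1.0 - beta) / beta * k
        total += k
    return total

# For exactly balanced rates, the mean number of events needed to reach size n
# from a singleton is (n - 1)**2 -- far above the family size itself.
events = mean_events_to_size(101, lambda i: 1.0, lambda i: 1.0)
```

For a family of 101 members this gives 10,000 expected events under balanced rates, against a minimum of 100 duplications, illustrating the highly dynamic picture described above.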
Figure 30. Mean number of events before the formation of a family of a given size for the quadratic BDIM (double logarithmic scale). The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), Arabidopsis thaliana (green).

Figure 31. Mean number of events before extinction of a family of a given size for the linear BDIM. The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), Arabidopsis thaliana (green).

Figure 32. Mean number of events before extinction of a family of a given size for the quadratic BDIM. The model parameters are for D. melanogaster (blue), C. elegans (purple), H. sapiens (red), Arabidopsis thaliana (green).

Figure 33. Mean number of events before the formation of the largest family against the model degree for the rational BDIM (double logarithmic scale). The model parameters are for Drosophila melanogaster.

6. Monte Carlo simulation of evolution of gene family ensembles under BDIMs of different degrees

As noted previously [43], it is the minimum rather than the mean evolution time that is important for modeling the dynamics of evolution of genomes consisting of many gene families. Due to the large variance of the family formation time estimates (see the detailed discussion above), this value is likely to be much less than the mean. Although an analytical solution to this problem is hard to obtain, it can be examined in detail by Monte Carlo simulation. As described previously [43], we employed model parameters estimated for the human proteome for this analysis.
The simulated evolution started from 3000 families of size one (singletons) and continued until the largest family reached 1024 members (a convenient arbitrary number that approximates the size of the largest family in eukaryotic genomes); the simulation was run from 10 to several hundred times depending on the model degree (the time required for the simulation showed a complex, non-linear dependence on the model degree). In the course of the simulation, the number of families fluctuated due to stochastic births, deaths, and innovations of genes but generally tended toward the equilibrium number of ~1700, which is close to the empirically determined number of families in the human genome and is pre-determined by the choice of model parameters (the initial number of singletons did not have much impact on the model's dynamics). The time scale was adjusted such that r_du = 2 × 10^-8 duplications/gene/year [24]. A series of simulations was performed for non-linear rational BDIMs with different degrees d. As shown in Fig. 34, the time at which the family size of 1024 members is reached for the first time depends on d in a fashion similar to the mean time for a single family, i.e., there is a clear minimum at a particular value of d. At the optimal value of d ≈ 2.2, the model reaches this family size in 2.2 ± 0.5 Ga, which is comparable to the time of evolution of eukaryotes. Compared to the minimal evolution time predicted by BDIMs of different degrees for a single family, the genome-size ensemble of gene families reached the threshold size much faster (by 1.5–2.5 orders of magnitude), and the optimal value of d was lower by ~0.5 (Fig. 35). The much faster formation of large families from an ensemble of singletons was predictable, given the large coefficients of variation of the family formation and extinction times, but the simulation was necessary in the absence of knowledge of the exact distribution of these values.
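The simulation procedure can be sketched as a Gillespie-style algorithm. The rates, ensemble size, innovation rate and threshold below are illustrative placeholders (the actual runs used fitted human-proteome parameters and a threshold of 1024); time is reported in 1/λ units.

```python
import random

def simulate_until(threshold, d=2.2, lam=1.0, delta=1.0, a=0.5, b=1.5,
                   nu=0.1, n_init=300, seed=0):
    """Gillespie-style sketch of an ensemble of gene families evolving under
    a rational BDIM: a family of size i duplicates a member at rate
    lam*(i+a)*(i+1)**(d-1) and loses one at rate delta*(i+b)*i**(d-1);
    innovation creates new singletons at rate nu.  Returns the elapsed time
    (in 1/lam units) until some family first reaches `threshold` members.
    All parameter values here are illustrative placeholders, not the fitted
    human-proteome estimates used in the actual runs."""
    rng = random.Random(seed)
    fams = [1] * n_init                      # current family sizes
    t = 0.0
    while True:
        births = [lam * (i + a) * (i + 1) ** (d - 1) for i in fams]
        deaths = [delta * (i + b) * i ** (d - 1) for i in fams]
        total = sum(births) + sum(deaths) + nu
        t += rng.expovariate(total)          # waiting time to the next event
        r = rng.uniform(0.0, total)
        if r < nu:                           # innovation: a new singleton
            fams.append(1)
            continue
        r -= nu
        for k in range(len(fams)):
            if r < births[k]:                # duplication in family k
                fams[k] += 1
                if fams[k] >= threshold:
                    return t
                break
            r -= births[k]
            if r < deaths[k]:                # deletion in family k
                fams[k] -= 1
                if fams[k] == 0:
                    fams.pop(k)              # the family goes extinct
                break
            r -= deaths[k]
```

Because the function returns the time of the *first* family to cross the threshold among many simultaneously evolving walks, repeated runs with different seeds directly estimate the minimum formation time discussed in the text, rather than the mean for a single family.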
Figure 34. The time required for the formation of the first family with 1024 members, determined by Monte Carlo simulation starting from an ensemble of 3000 singletons. The model parameters are for Homo sapiens.

Figure 35. The time required for the formation of the first family with 1024 members starting from an ensemble of 3000 singletons (blue) compared to the mean time predicted by BDIMs of different orders (magenta). The model parameters are for Homo sapiens.

7. General discussion

Here and in previous publications [12, 43, 50], we describe a general class of models that are based on the classical concept of a birth-and-death process and seem to apply naturally to the process of genome evolution. Similar, although not identical and apparently less general, modeling approaches have been considered by others [6, 34, 51]. Even earlier, evolution of gene families had been modeled within the distinct mathematical framework of multiplicative processes [52]. The utility of birth-and-death models in evolutionary genomics is not a trivial matter in itself but stems from fundamental features of genome evolution. As captured in the title of Ohno's famous book [16], although foreseen even in the early days of genetics [15, 53], gene duplication probably is the principal mechanism of genome evolution. Of course, genomes cannot grow ad infinitum and, through most of evolutionary history, the number of genes within a given phylogenetic lineage probably remains roughly constant. Hence, duplication is intrinsically coupled to gene loss. The results of comparative genomics further show that many genes in each lineage cannot be obviously linked to other genes through duplication.
Without necessarily specifying the biological mechanisms (these could involve rapid change after duplication, gene acquisition via horizontal transfer and, possibly, birth of genes from non-coding sequences), it is reasonable to view these unique genes as resulting from innovation. For genomes to maintain equilibrium, the combined rates of duplication and innovation over the entire ensemble of gene families should equal the rate of gene loss, at least when averaged over long time spans. The observed distribution of family sizes, which asymptotically tends to a power law, dictates a much more specific connection between the gene birth and death rates, namely, the second-order balance. It should be noted that this form of balance does not amount to a particularly fine tuning of the gene birth and death rates. The only requirement is that these rates tend to the same value when the family size tends to infinity, according to condition (1.5). In contrast, for small families, the rates may differ substantially without significantly changing the shape of the equilibrium distribution. The incentive to examine BDIMs in detail stems from at least two fundamental questions: i) are the above elementary evolutionary mechanisms sufficient to account for the empirically observed characteristics of genomes, and ii) what is the contribution of natural selection to the general, quantifiable features of genomes, such as the size distribution of gene families? The analysis of BDIMs is starting to provide some answers, albeit preliminary ones. The critical observation made in the course of BDIM analysis was that different versions of these models could be readily distinguished on the basis of goodness of fit to the empirical data. This being the case, we found that the simplest possible model, in which all paralogs are considered independent, is incompatible with the data.
Thus, turning to the first of the above questions, we had to conclude that, in addition to the three elementary processes, "something else" was required to model genome evolution. This "something" is the dependence or "interaction" between gene family members, which results in self-accelerating family growth. To account for the observed stationary distribution of family sizes, it is sufficient to introduce a very weak dependence, as embodied in the linear BDIM. However, when we switched from the deterministic to the stochastic version of BDIMs, which allows analysis of the dynamics of the system's evolution, we found that evolution under the linear BDIM was much too slow to account for the emergence, during the time of life's evolution, of the large families of paralogs found in all genomes. Only higher-order BDIMs, with degrees between 2 and 3, i.e., with "strong interactions" between family members, were found to provide sufficiently fast evolution to be compatible with the real biological timescale. Obviously, these findings raise the question: what is the nature of the mysterious "interactions" between paralogs? Although, on some occasions, paralogous proteins do form physical complexes or interact functionally, the situation in which no such interaction exists is much more common. Therefore, the "interactions" in our models should not be perceived literally. This brings us to the second of the above major problems. BDIMs do not explicitly include the notion of selection. However, the simplest interpretation of the virtual interactions implied by the higher-order BDIMs seems to be that they reflect differential tendencies of genes to form paralogous families of different sizes depending on the intensity of selection. Recent studies have shown that evolutionary fixation of gene duplications is linked to the evolutionary rates of genes.
Specifically, duplications of slowly evolving genes, i.e., those that are subject to stronger purifying selection, are fixed more often [54, 55]. The strong dependence of per-gene duplication rates on family size in higher-order BDIMs could be an abstraction of this trend. Should that be the case, we are justified in concluding that very weak selection would suffice to explain the stationary distribution of family sizes, but much stronger selective pressure is needed to account for the dynamics of genome evolution. However, the interpretation of the BDIM degree as a manifestation of selection is, at this point, no more than a guess. One further development of genome evolution modeling involves introducing selection explicitly and determining whether the resulting, more sophisticated models are equivalent to the higher-order BDIMs explored here.

Conclusions

In this work, we extended our analysis of stochastic Birth, Death and Innovation Models (BDIMs) of gene family evolution and showed that:

• the behavior of logistic BDIMs, in which birth/death rates are limited for the largest families, is essentially the same as that of previously investigated BDIMs that included no such limitation;

• the mean time required for the growth of large families is limited by the overall number of duplications and does not decrease indefinitely with the increase of the model degree but instead passes through a minimum; even under the best-case scenario, which corresponds to a non-linear rational BDIM with d ≈ 2.7, the mean time of formation of the largest family is orders of magnitude greater than any realistic estimate based on the timescale of life's evolution;

• using the embedding chains technique, we estimated the expected number of elementary evolutionary events (gene duplications and deletions) preceding the formation of gene families of the observed size; the mean number of events exceeds the family size by orders of magnitude, suggesting a highly dynamic process
of genome evolution;

• the variance of the time required for the formation of the largest families is large (coefficient of variation >> 1), which means that some families might grow much faster than the mean rate; thus, the minimal time required for family formation is more relevant for a realistic representation of genome evolution than the mean time;

• Monte Carlo simulations of family growth from an ensemble of simultaneously evolving singletons show that the time elapsed before the formation of the largest family was much shorter than the estimated mean time and approached realistic values (2.2 ± 0.5 Ga for the non-linear rational BDIM with d ≈ 2.2).

Contributions of individual authors

GPK developed most of the mathematical formalism and wrote the draft of the mathematical part of the manuscript; YIW performed the simulation modeling and wrote the draft of the corresponding part of the manuscript; FSB derived some of the mathematical statements; EVK contributed to the inception of the work and the formulation of the models, gave the biological interpretation of the results, wrote the background and discussion sections, and extensively edited the entire manuscript.

Supplementary Material

Additional File 1. This additional file includes proofs of some of the mathematical statements contained in the main text as well as accessory mathematical formulations. Click here for file
314468 | The Proteasome and the Delicate Balance between Destruction and Rescue | The proteasome is a large multiprotein complex that degrades unwanted cellular proteins. The mechanisms that control this protein-eating machine are being uncovered | Inside eukaryotic cells there is a massive protein complex called the proteasome whose raison d'être is to remove unnecessary proteins by breaking them down into short peptides. The proteasome is thus responsible for an important aspect of cellular regulation because the timely and controlled proteolysis of key cellular factors regulates numerous biological processes such as cell cycle, differentiation, stress response, neuronal morphogenesis, cell surface receptor modulation, secretion, DNA repair, transcriptional regulation, long-term memory, circadian rhythms, immune response, and biogenesis of organelles ( Glickman and Ciechanover 2002 ). With the multitude of substrates targeted and the myriad processes involved, it is not surprising that aberrations in the pathway are implicated in the pathogenesis of many diseases, including cancer. With so many proteins to target for degradation, the activity of the proteasome is subject to multiple levels of regulation. In the overwhelming majority of cases, selected proteins are first “labeled” by the addition of several copies of a small protein tag called ubiquitin and are thus targeted for degradation in the proteasome ( Figure 1 ). The ubiquitination of proteins is regulated through precise selection of protein substrates by specific E3 ubiquitin ligases ( Pickart 2001 ). These enzyme complexes each recognize a subset of substrates and tag them by linking the carboxyl terminus of ubiquitin with an amino group on the target protein via an amide bond ( Figure 1 ). Figure 1 Structure of an Ubiquitinated Protein Ubiquitin (light violet) is a small 76 amino acid protein that can be covalently attached to target proteins (green) by specific E3 ubiquitin ligases. 
Such conjugation takes the form of an isopeptide bond between the carboxyl terminus of ubiquitin (denoted as C) and a lysine amino sidechain (K) on the substrate, or in some cases, conjugation can be via a peptide bond between ubiquitin and the amino terminus of the protein (N). These amide bonds are indicated as blue links. Multiple ubiquitin moieties can link in a similar manner via lysine-48 (K48) to form a polyubiquitin chain. As symbolized, more than one such chain can assemble on a single target. The result is a branched fusion protein with multiple amino termini (seven in the depicted example) coalescing at a single carboxyl terminus. Polyubiquitination in this manner targets proteins to the proteasome, where they are hydrolyzed into short peptides (green stack). Deubiquitinating enzymes can hydrolyze the bond between one ubiquitin moiety and another or between ubiquitin and the target protein. Interestingly, ubiquitination is a reversible process. Even when a protein has been tagged with ubiquitin, its fate is not sealed—specific hydrolytic enzymes called deubiquitinases can remove the ubiquitin label intact ( Figure 1 ). By deubiquitinating their substrates, these enzymes compete with the proteasome, which acts on the polyubiquitinated form. In the competition between proteolysis and deubiquitination, polyubiquitinated proteins rarely accumulate in the cytoplasm of “healthy” cells, as they are either irreversibly degraded or deubiquitinated and rescued. It is thought that this competition provides a certain level of stringency or quality control to the system. Based on sequence homology, deubiquitinating enzymes were traditionally classified into two families: ubiquitin-specific proteases (UBPs or USPs) and ubiquitin carboxy-terminal hydrolases (UCHs). Both enzyme families are classified as cysteine proteases that employ an active site thiol to cleave ubiquitin from its target ( Kim et al. 2003 ; Wing 2003 ).
The proteasome itself is made up of a multiprotein core particle (CP) where proteolysis occurs and a separate multiprotein regulatory particle (RP) that recognizes and prepares substrates for degradation by the CP. A base subcomplex of the RP is pivotal in anchoring polyubiquitin chains during this process, either directly or via auxiliary ubiquitin-binding proteins ( Lam et al. 2002 ; Hartmann-Petersen et al. 2003 ). The base attaches to the outer surface of the CP and uses energy to unravel the substrate while simultaneously preparing the channel that leads into the proteolytic chamber of the CP ( Forster and Hill 2003 ). The lid subcomplex of the RP attaches to the base and is required for proteolysis of ubiquitin–protein conjugates, but not of unstructured polypeptides ( Glickman et al. 1998 ; Guterman and Glickman 2003 ). The size and complexity of this protein-eating machine hint at the exquisite controls that must regulate its function. An intriguing evolutionary and structural relationship between the proteasome lid and an independent complex, the COP9 signalosome (CSN), may shed light on their respective roles in regulated protein degradation. Both are made up of eight homologous protein subunits that contain similar structural and functional motifs. While a lot is still unknown, the CSN appears to mediate responses to signals (e.g., light, hormones, adhesion, nutrients, DNA damage) in a manner that is intimately linked to the ubiquitin–proteasome system. This is accomplished, for instance, by suppressing ubiquitin E3 ligase activity or interacting with various components of the pathway ( Bech-Otschir et al. 2002 ; Cope and Deshaies 2003 ; Li and Deng 2003 ). In particular, one subunit—Csn5—moderates SCF (Skp1–cullin–F box) and other cullin-based E3 ubiquitin ligases by removal of the ubiquitin-like Rub1/Nedd8 molecule from the cullin subunit of the ligase complex.
Further analysis of the CSN will no doubt uncover additional mechanisms whereby ubiquitin-mediated protein degradation is controlled. Surprisingly, the proteasome itself harbors intrinsic deubiquitination activity ( Eytan et al. 1993 ). Moreover, both the lid and the base contribute independently to RP deubiquitination activity. The source of this activity has been attributed to a number of different subunits. These include the associated cysteine proteases Ubp6/USP14 ( Borodovsky et al. 2001 ; Legget et al. 2002 ), UCH37/p37 ( Lam et al. 1997 ; Hoelzl et al. 2000 ), and Doa4/Ubp4 ( Papa et al. 1999 ), as well as the intrinsic proteasome subunit Rpn11/POH1 ( Verma et al. 2002 ; Yao and Cohen 2002 ). The importance of these components to proteasome function is apparent in their partially overlapping properties. In groundbreaking work, an intrinsic “cryptic” deubiquitinating activity that is sensitive to metal chelators has been reported for the proteasome, in addition to “classic” cysteine protease behavior ( Verma et al. 2002 ; Yao and Cohen 2002 ). This metalloprotease-like activity maps to the putative catalytic MPN+/JAMM motif of the lid subunit Rpn11 and lies at the heart of the proteasome mechanism by linking deubiquitination with protein degradation. Notably, Rpn11 shares close homology with Csn5, which is also responsible for proteolytic activities in its respective complex. By defining a new family of putative metalloproteases that includes a proteasomal subunit, a CSN subunit, and additional proteins from all domains of life, the MPN+/JAMM motif garnered great attention. The trademark of the MPN+/JAMM motif is a consensus sequence E-HxHx(7)Sx(2)D that bears some resemblance to the active site of zinc metalloproteases. Members of this family were predicted to be hydrolytic enzymes, some of which are specific for removal of ubiquitin or ubiquitin-like domains from their targets ( Maytal-Kivity et al. 2002 ; Verma et al. 2002 ; Yao and Cohen 2002 ).
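The consensus sequence just described, E-HxHx(7)Sx(2)D, translates directly into a pattern match over a protein sequence. The sketch below is illustrative: the spacer between the conserved glutamate and the His-x-His block varies between real MPN+/JAMM proteins, so it is modeled as a flexible gap, and the demo sequence is synthetic.

```python
import re

# E ... HxHx(7)Sx(2)D: conserved Glu, then His-x-His, seven residues,
# Ser, two residues, Asp. The E-to-H spacer length varies between
# proteins, so it is modeled as a flexible gap (an assumption).
JAMM_MOTIF = re.compile(r"E.{2,40}H.H.{7}S..D")

def find_jamm(seq):
    """Return (start, matched_substring) for the first JAMM-like hit,
    or None if the sequence contains no match."""
    m = JAMM_MOTIF.search(seq)
    return (m.start(), m.group()) if m else None

# A synthetic sequence containing the motif (hypothetical, for testing)
demo = "MKTAYE" + "AAAA" + "HAH" + "QRSTLVM" + "S" + "GK" + "D" + "LLN"
```

A real motif scan would additionally restrict each position to the 20 amino-acid alphabet and score the E-to-H spacer, but the regular expression captures the diagnostic His-x-His...Ser...Asp spacing that defines the family.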
In a further development, two independent groups determined the molecular structure of an MPN+/JAMM protein from an archaebacterium ( Ambroggio et al. 2003 ; Tran et al. 2003 ). The structures identify a zinc ion chelated to the two histidines and the aspartate residue of the MPN+/JAMM sequence. The fourth ligand appears to be a water molecule activated through interactions with the conserved glutamate to serve as the active site nucleophile. Overall, this protein certainly has properties consistent with a metallohydrolase and can serve as the prototype for the deubiquitinating enzymes in its class. This revelation adds an all-new enzymatic activity and, with it, an additional layer of regulation to the ubiquitin–proteasome system. Now that it is evident that the proteasome contains a member of a novel metalloprotease family, a fundamental question can be raised: why does a proteolytic enzyme like the proteasome need auxiliary proteases for hydrolysis of ubiquitin domains? At first glance, the delegation of tasks between the proteolytic subunits of the proteasome (situated in the proteolytic core particle) and the auxiliary deubiquitinating enzymes (situated in the regulatory particle) is clear-cut: the latter cleave between ubiquitin domains, while the core proteolytic subunits process the target protein itself ( Figure 1 ). However, this still does not explain the mechanistic rationale for finding deubiquitination within the proteasome itself. In principle, deubiquitination could be used for (1) recycling of ubiquitin, (2) abetting degradation by removal of the tightly folded highly stable globular ubiquitin domain, or (3) mitigating degradation by removal of the ubiquitin anchor, without which the substrate is easily released and rescued. There is evidence that recycling of ubiquitin by the proteasome is indeed a crucial feature of deubiquitination in proper cellular maintenance ( Legget et al. 2002 ).
Distinguishing between options 2 and 3, however, depends to a large extent on the delicate balance between the two proteolytic activities associated with the proteasome: proteolysis and deubiquitination ( Figure 2 ). Figure 2 Deubiquitination versus Proteolysis at the Proteasome Once recognized and anchored to the proteasome via its polyubiquitin tag (light violet), a substrate (green) can be unraveled, unfolded, and translocated by the 19S regulatory particle (red) into the proteolytic chamber of the 20S core particle (purple), where it is hydrolyzed into short peptides (left). A byproduct of proteolysis is the polyubiquitin anchor (that may still be linked to a residual peptide). Cytoplasmic deubiquitinating enzymes eventually process this chain and recycle ubiquitin. However, the proteasome can also directly deubiquitinate the substrate, with diverse outcomes. For example, the substrate can be “shaved” upon cleavage of the bond to the proximal ubiquitin (right). Without its anchor, the substrate is presumably released and rescued. A distinct deubiquitinating activity is “trimming” or removal of the distal ubiquitin moiety (middle). According to one hypothesis, trimming serves as a timer; extended or difficult-to-process chains allow ample time for substrate unfolding and irreversible proteolysis (left), while short or easy-to-process chains inevitably lead to substrate release and rescue (right). This delicate balance between destruction and rescue is fundamental to proteasome efficiency. Once bound to the proteasome, a polyubiquitinated substrate can be unfolded by the RP and irreversibly translocated into the CP. It has been proposed that long polyubiquitin chains commit a substrate to unfolding and degradation by the proteasome, whereas short chains are poor substrates because they are edited by deubiquitinating enzymes, resulting in premature substrate release ( Eytan et al. 1993 ; Lam et al. 1997 ; Thrower et al. 2000 ; Guterman and Glickman 2003 ). 
Extended polyubiquitin chains could slow down chain disassembly, thereby allowing ample time for unfolding and proteolysis of the substrate ( Figure 2 ). Interestingly, both “trimming” and “shaving” deubiquitinating activities are associated with the proteasome, though the exact contribution of the various proteasome-associated deubiquitinating enzymes to each of these distinct activities has yet to be elucidated. It is expected that in order to obtain efficient proteolysis of the target, shaving of chains at their proximal ubiquitin should be slower than the rate of trimming at the distal moiety. As an outcome of this requirement, longer polyubiquitin tags would be preferential substrates for degradation by the proteasome. Thus, the uniqueness of ubiquitin as a label for degradation may lie in its being a reversible tag. Deubiquitinases, such as Rpn11, serve as proofreading devices for reversal of fortune at various stages of the process, right up to the final step before irreversible degradation by the proteasome. Identifying Rpn11 and Csn5 as members of a novel class of metallohydrolases immediately elevates them into promising “druggable” candidates. Undoubtedly, the molecular structures deciphered by the groups of Deshaies ( Ambroggio et al. 2003 ) and Bycroft ( Tran et al. 2003 ) will focus efforts to design novel site-specific inhibitors of the ubiquitin–proteasome pathway. While Csn5 is thought to impede the action of ubiquitin ligases through shaving cullins from their Rub1/Nedd8 modification (and possibly also by deubiquitinating substrates bound to the cullins), the outcome of Rpn11 inhibition will depend largely on whether Rpn11 participates primarily in shaving substrates from their chains, promoting release and rescue, or in trimming the polyubiquitin tag, allowing for proteolysis quality control ( Figure 2 ).
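The “timer” hypothesis described above has a simple quantitative reading: the substrate is degraded if it is committed to unfolding before trimming shortens its chain below the length needed for anchoring. This race can be sketched as a toy Markov model; the rates and the minimum anchoring length below are illustrative assumptions, not measured values.

```python
def p_degraded(chain_len, k_commit=0.2, k_trim=1.0, min_anchor=4):
    """Probability that a substrate is committed to degradation before
    trimming shortens its polyubiquitin chain below the anchoring
    threshold. At each step the proteasome either commits the substrate
    (rate k_commit) or trims one distal ubiquitin (rate k_trim); once
    the chain is shorter than `min_anchor` ubiquitins, the substrate is
    released and rescued. All parameters are illustrative."""
    if chain_len < min_anchor:
        return 0.0  # too short to anchor: released immediately
    p_trim_first = k_trim / (k_commit + k_trim)
    # Number of trimming events that must all win the race for release
    trims_to_release = chain_len - min_anchor + 1
    return 1.0 - p_trim_first ** trims_to_release
```

In this toy model, each additional ubiquitin adds one more round the trimming activity must win before release, so the commitment probability rises with chain length, which matches the observation that extended tags are preferential substrates while short chains tend to be edited off and the substrate rescued.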
549534 | Expression of plasminogen activators in preimplantation rat embryos developed in vivo and in vitro | Background Embryo implantation plays a major role in embryogenesis and the outcome of pregnancy. Plasminogen activators (PAs) have been implicated in mammalian fertilization, early stages of development and embryo implantation. The invasion of trophoblast cells into the endometrium during the implantation process can be blocked by inhibitors of serine proteases, illustrating the role of these enzymes in the invasion process. Because embryos developed in vitro implant at a lower rate than those developed in vivo, we hypothesized that reduced PA activity may contribute to this difference. There is hardly any information regarding qualitative or quantitative differences in expression of PAs in preimplantation embryos, or comparisons between in vivo and in vitro developed embryos. The purpose of this study was to assess the expression of urokinase type (uPA) and tissue type (tPA) plasminogen activators in in vivo and in vitro preimplantation development in rat embryos using immunofluorescence confocal microscopy and computerized image analysis. Methods Zygotes, 2-cell, 4-cell, 8-cell, morula and blastocyst stages of development were flushed from the reproductive tract (control groups) of Wistar rats. Zygotes were flushed and grown in vitro to the above-mentioned developmental stages and comprised the experimental groups. Immunofluorescence microscopy and computerized image analysis were used to evaluate both qualitative (localization) and quantitative expression of plasminogen activators. Results uPA and tPA were found to be expressed in rat embryos throughout their preimplantation development, both in vivo and in vitro. While uPA was localized mainly in the cell cytoplasm, tPA was detected mainly on the cell surface and in the perivitelline space. In blastocysts, both in vivo and in vitro, uPA and tPA were localized in the trophectoderm cells.
Total uPA content per embryo was higher in in vivo than in in vitro developed embryos at all stages measured. Blastocyst uPA content was significantly lower than at the four-cell, eight-cell, and morula stages. Total tPA content was higher in embryos developed in vivo than in those developed in vitro, except for the 4-cell and 8-cell stages. Conclusion In vitro embryo development leads to lower PA expression in a stage-dependent manner as compared with in vivo developing controls. The enzymes studied probably vary in the ratio of their active and inactive forms, as there is no correlation between their content and the activity observed in our previous study. The localization of both PAs in the blastocysts' trophectoderm supports the assumption that PAs play a role in the implantation process in rats. | Background Plasminogen activators (PAs) and matrix metalloproteinases (MMPs) have been implicated in mammalian gametogenesis [ 1 ], ovulation [ 2 , 3 ], fertilization [ 4 , 5 ], early stages of development and embryo implantation [ 6 , 7 ]. The PAs are serine proteases, which convert the inactive plasminogen to the potent protease plasmin. Plasmin can degrade directly or indirectly, through the activation of metalloproteinase zymogens, all components of the extracellular matrix [ 8 , 9 ]. There are two types of PAs, tissue-type plasminogen activator (tPA) and urokinase-type plasminogen activator (uPA). Plasminogen, its activators and inhibitors, participate in the implantation process. Trophoblast cells of human blastocysts cultured in vitro produced PAs during the period corresponding to the in vivo invasion into the endometrium [ 10 ]. In embryos of the homozygous t w73 mouse mutant, PA levels were reduced, which was concomitantly associated with implantation failure [ 11 ].
The invasion of trophoblast cells during the implantation process could be blocked by inhibitors of serine proteases, illustrating the role of these enzymes in the invasion process [ 12 , 13 ]. In the human, embryo implantation following in vitro fertilization and embryo transfer (IVF-ET) is considered to play a major role in the success of the treatment. Only 12% of the transferred embryos are able to successfully implant [ 14 ]. Two major factors participate in the implantation process: the uterus undergoes changes that prepare it for the arrival and implantation of embryos, and the embryos undergo cellular reorganization that enables them to penetrate the endometrium and to form the placenta. We assume that one of the reasons for the low implantation rate of embryos developed in vitro involves reduced PA activity. In a previous study we demonstrated differences in PA activities between in vivo and in vitro preimplantation developed embryos. In both, uPA activity increased from the zygote towards the blastocyst stage while tPA activity remained relatively unchanged. However, tPA and uPA activities were lower in in vitro developed embryos as compared with in vivo developing ones, at all developmental stages, which may lead to a reduced implantation rate of in vitro developed embryos [ 15 ]. There is hardly any information regarding qualitative or quantitative differences in expression of PAs in preimplantation embryos, or comparisons between in vivo and in vitro developed embryos. Therefore, the purpose of this study was to investigate PA expression and localization during embryo development in vivo and in vitro by immunofluorescence confocal microscopy. Methods The following study was approved by the Institutional committee for animal care and ethics at Ben-Gurion University of the Negev, Beer-Sheva, Israel. Animals Mature female Wistar rats, 2–3 months old and weighing 180–230 g, were used.
The animals were kept in a temperature-controlled room maintained at 22–24°C with a lighting regimen of 14 hours light and 10 hours dark (light on 5:00 AM – 7:00 PM). The rats were allowed free access to rat chow and tap water. Daily vaginal smears were taken at 10:00 AM, and the stage of the estrous cycle was determined. Overnight caging of a proestrous female with a male of proven fertility induced pregnancy. The next day, the presence of a vaginal plug or spermatozoa in the vaginal smear was designated as day 1 of pregnancy. Collection of embryos Zygotes, two-cell, four-cell, eight-cell embryos and morulae were flushed with rat 1-cell embryo culture medium (R1ECM) [ 16 ] from oviducts at days 1, 2, 3 and 4 of pregnancy, respectively, and blastocysts at day 5 from the uterine horns. All equipment and media used were sterile. Ovary-oviduct complexes were removed from anesthetized animals. The complexes were placed in R1ECM, and the oviducts were separated under a dissecting microscope. A 30-gauge blunt-end needle attached to a syringe containing R1ECM was inserted through the oviductal end held by forceps surrounding the needle and tube. Embryos were gently flushed into a 35-mm-diameter culture dish. Embryos were washed 3 times by transfer into fresh R1ECM to remove cell debris and any maternal factors present in the oviduct. Zygotes in their cumulus mass were flushed and the cumulus cells were removed by gentle aspiration through a micropipette (diameter, 150–200 μm) several times in R1ECM containing 80 U/mL of hyaluronidase. Clean zygotes were washed 3 times by transfer into fresh R1ECM to remove traces of hyaluronidase. Flushed embryos were collected with a mouth-controlled micropipette (diameter, 150–200 μm). Blastocysts were flushed from the uterine horns by insertion of a 23-gauge needle attached to a syringe containing R1ECM.
Flushed, free-floating blastocysts were collected into a polypropylene tube inserted through the vagina and pushed gently to surround the cervical openings. Tubes were removed, and their contents were poured into Petri dishes. Blastocysts were washed and collected as described for zygotes and embryos. These embryos, developed in vivo, constituted the control groups. Embryo culture As described for the in vivo embryos, clean zygotes were grown in vitro to the same developmental stages as controls; these were the experimental groups. Each group of embryos consisted of 25–35 embryos collected from six pregnant females. This was repeated three times for each developmental stage (total of about 90 embryos per stage). Groups of 25–35 zygotes were placed into 35-mm-diameter culture dishes (Nunc, Roskilde, Denmark) containing 50 μL of R1ECM medium under a layer of mineral oil (previously equilibrated to the experimental conditions) and cultured at 37°C under 5% CO2 in air. This medium was shown by Miyoshi et al. [ 16 ] to enable rat embryo culture to the blastocyst stage. In a comparison of various media at our laboratory, R1ECM was found to be the best medium to enable a synchronous development of embryos (95% of total) to the blastocyst stage. The developing embryos seemed to be normal in their morphology, with almost no fragmentation. At the end of incubation, embryos were washed 3 times with fresh R1ECM. Embryo immunocytochemistry The method used was basically that of Dubey et al. [ 17 ] with various modifications. Groups of 20–25 embryos at different developmental stages from the experimental and control groups were fixed in 4% paraformaldehyde in phosphate buffered saline (PBS) at room temperature and washed twice in PBS, pH 7.4, for 5 minutes. Five percent bovine serum albumin (BSA) in PBS was used for dilution of antibodies and washings (PBS-BSA). The embryos were washed four times in PBS-BSA before immunoreaction.
Randomly chosen embryos were exposed either to polyclonal rabbit anti-rodent uPA or rabbit anti-rat tPA (American Diagnostics, Pendelton, IN) at a concentration of 4 μg/mL. Embryos were then incubated overnight in 50 μL of each antibody solution under paraffin oil in a 35 mm culture plate in a moist chamber at 4°C. The embryos were then washed four times in PBS-BSA and incubated with Cy3-conjugated goat anti-rabbit IgG (Jackson ImmunoResearch Laboratories, Inc., West Grove, PA) at 37°C for 60 minutes. The conjugated antibody was used at a dilution of 1:300 in PBS-BSA. After incubation with the secondary antibody, the embryos were washed again in PBS-BSA and stained with the DNA stain 4', 6-diamidino-2-phenylindole (DAPI) (Vector Laboratories, Burlingame, CA) and mounted in Fluoromount-G (Southern Biotechnology Associates, Inc. Birmingham, AL) to minimize quenching. To confirm that the fluorescence observed was neither attributable to nonspecific binding of the secondary antibody nor to formaldehyde-induced autofluorescence, negative controls (without primary antibody) were established during each immunoreaction procedure. The immunocytochemistry staining procedure was repeated three times for each stage of embryo development on different batches of embryos. Image Analysis The distribution and concentration of PAs in the embryos were visualized by fluorescent microscopy on a Zeiss laser scanning confocal microscope equipped with an X100 objective. Z-sections and XZ-sections were obtained from 3D scanning by using LSM510 software (Zeiss, Feldbach, Switzerland). The PA density for each embryo was computed by image analysis based on the same principles as the manual counting described elsewhere [ 17 ]. The embryos' fluorescent images were downloaded using image analysis software (ImageJ; NIH, Bethesda, MD). These images were stored as pixel data.
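The pixel-based quantification used in this study (counting Cy3-positive pixels in each confocal slice and summing over all slices of an embryo's z-stack, then summarizing each group as mean ± SEM) can be sketched as follows. The intensity threshold and array shapes are illustrative assumptions, not values from the paper.

```python
import numpy as np

def embryo_pa_signal(z_stack, threshold=50):
    """Total PA signal for one embryo: count pixels whose red (Cy3)
    channel exceeds `threshold` in each slice, then sum over slices.
    `z_stack` is an array of shape (n_slices, height, width) holding
    red-channel intensities; the threshold is an assumed value."""
    per_slice = [(sl > threshold).sum() for sl in z_stack]
    return int(sum(per_slice))

def group_summary(embryo_totals):
    """Mean and SEM over the embryos of one experimental group,
    matching the 'means ± SEM' reporting used in the study."""
    x = np.asarray(embryo_totals, dtype=float)
    sem = x.std(ddof=1) / np.sqrt(len(x))
    return x.mean(), sem
```

Because the pixel count is summed over the whole z-stack, the measure is proportional to total PA staining per embryo, which is what allows embryos at different stages and culture conditions to be compared on one scale.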
All slices obtained from 3D scanning were 0.7 μm thick and were analyzed by counting the number of Cy3 (red) pixels in the whole slice. Total pixels in a whole embryo were calculated by summing the number of pixels in all the slices of an embryo. This method showed the total amount of PA expression in each embryo. The number of pixels in an embryo represents the intensity of PA staining (fluorescence), which is in turn proportional to the amount of PAs in that embryo. Each experimental group consisted of 8–10 embryos, and the measurements were repeated three times with different batches of embryos from each developmental stage stained at different times (total number of 24–30 embryos per stage). Statistical analysis Data are expressed as means ± SEM. Statistical analysis was performed with two-way analysis of variance, followed by the least significant differences test for multiple comparisons using computer software (Statistica 6.0, Statsoft, Inc. Tulsa, OK). P < 0.05 was defined as a statistically significant difference. Results PAs localization Immunohistochemical staining for the location of tPA and uPA in preimplantation embryos developed in vivo and in vitro is shown in Figure 1 . The PAs were detected in all stages of embryo development, both in vivo and in vitro (Fig. 1A–V ). The uPA was expressed in the cell cytoplasm and plasma membrane (Fig. 1A–K ) while tPA was detected on the cell membrane and in the perivitelline space (Fig. 1L–V ). In blastocysts developed in vivo and in vitro PAs were localized mainly in the trophectoderm (Fig. 1F, K, Q, V ). There was no difference in PA localization comparing in vivo and in vitro developed embryos at the same stage. Figure 1 PAs localization. Expression of uPA (A-K) and tPA (L-V) in preimplantation developing rat embryo stages grown in vivo and in vitro. (A, L)-Zygote. (B, G, M, R)-2-cells. (C, H, N, S)-4-cells. (D, I, O, T)-8-cells. (E, J, P, U)-Morula. (F, K, Q, V)-Blastocyst. (A-F)-uPA in vivo.
(G-K)-uPA in vitro. (L-Q)-tPA in vivo. (R-V)-tPA in vitro. (X)-2-cell uPA negative control. (XX)-2-cell tPA negative control. Quantitative measurement of uPA Quantitative measurement of total uPA in an embryo at each stage showed significantly lower expression (p < 0.01) in in vitro developed embryos from the 4-cell stage up to the blastocyst stage compared with the corresponding in vivo stages. The highest expression of uPA was found in in vivo developed embryos from the 4-cell to the morula stage (90.58, 78.78 and 79.35 pixels per embryo × 10³, respectively, Fig. 2 ). In the in vitro developed embryos, a significant increase (p < 0.01) in uPA expression was found from the 2-cell stage to the 4-cell stage (43.91 and 61.86 pixels per embryo × 10³, respectively). Figure 2 Quantitative measurement of uPA. Quantitative (Pixels/Embryo) uPA expression in preimplantation rat embryos developed in vivo and in vitro. Different letters represent statistically significant differences (P < 0.05). Quantitative measurement of tPA Total tPA expression in a whole embryo was highest in in vivo developed embryos at the 2-cell, 8-cell, morula and blastocyst stages (61.59, 52.71, 48.03 and 52.34 pixels per embryo × 10³, respectively, Fig. 3 ). At the 2-cell stage, morula and blastocyst, a significantly lower expression (p < 0.01) was found in in vitro developed embryos as compared with the in vivo ones. Figure 3 Quantitative measurement of tPA. Quantitative (Pixels/Embryo) tPA expression in preimplantation rat embryos developed in vivo and in vitro. Different letters represent statistically significant differences (P < 0.05). Discussion Our study demonstrates that uPA and tPA are expressed throughout all stages of preimplantation development of rat embryos. Zhang et al. [ 18 ] reported expression of the uPA gene and uPA activity in preimplantation rat embryos developed in vitro and Khamsi et al.
[ 19 ] reported the presence of mRNA for uPA in human blastocysts. However, information about the expression and immunolocalization of uPA and tPA has not been reported for preimplantation rat embryos developing either in vivo or in vitro. We present here a quantitative measurement of PA expression in a whole embryo, allowing comparison of embryos at different developmental stages, grown in vivo or in vitro. The results show localization of immunoreactive uPA in the embryonic cell cytoplasm and plasma membrane in all developmental stages both in vivo and in vitro, while tPA is detected on the cell membrane and in the perivitelline space. In the blastocyst stage, PAs are localized mainly in the trophectoderm. In our previous study [ 15 ] we showed that the activity of uPA was higher than that of tPA in the blastocyst. Its presence in the trophectoderm, combined with its high activity at this stage, supports the assumption that uPA is important for proper implantation. This assumption is supported by the study of Kubo et al. [ 20 ] who showed that inhibition of PA activity prevents the adhesion of mouse embryos to decidual cells grown in vitro. In addition, trophoblast cells grown in vitro showed PA activity at the time of their penetration into the endometrium in vivo, and uPA was the major enzyme secreted from trophectoderm cells, with the highest activity on days five to seven of pregnancy [ 7 ]. The exact source(s) of the immunoreactive PAs in in vivo developing embryos cannot be identified. The serum, oviduct and endometrium could be contributing sources, as suggested in previous studies [ 10 , 21 - 23 ]. The lack of these source(s) in the in vitro situation may lead to lower implantation ability of embryos, as shown earlier [ 15 ]. Pro-uPA is synthesized as an inactive single chain that can be stored or secreted. The secreted pro-uPA can be cleaved to produce the two-chain active molecule, uPA, through the limited proteolytic activity of plasmin [ 24 ].
The secreted pro-uPA or the active uPA can be found free in cytoplasm and extracellular matrix or bound to a membrane uPA receptor [ 25 ]. Whether the uPA identified in this study is the inactive pro-uPA or the active uPA associated with the embryonic cell membrane uPA receptor is unknown. In our previous work we have shown an increase in uPA activity towards the blastocyst stage in in vivo and in in vitro developing embryos [ 15 ]. The results of the present study, showing lower expression of uPA in the blastocyst stage, may suggest a shift of uPA from the inactive form to the active form, resulting in an increase of activity despite the reduction in its expression. High tPA expression was detected at the zygote stage, which is in accordance with the high tPA activity found at this stage [ 15 ]. This is supported by the report of Zhang et al. [ 18 ] who showed the presence of tPA mRNA in rat oocytes and two-cell embryos. The embryonic genome of rats and mice starts to be expressed at the 2-cell stage [ 26 ] and the high tPA levels in the zygote are probably due to maternal mRNA expressed and accumulated in the oocyte [ 27 ]. The embryonic extracellular matrix is in continuous turnover during embryonic development. The 8-cell stage is characterized by structural changes taking place in the embryo during the compaction process. It is therefore very likely that such changes at the 8-cell stage could be associated with increased expression and activity of tPA, which is known to participate in tissue remodeling [ 8 ]. The sharp increase in tPA expression from the 4-cell stage to the 8-cell stage in in vitro developed embryos suggests de novo synthesis of tPA, since there is no extraembryonic tPA source other than the embryos in the culture. Lower expression of uPA was observed in in vitro developed embryos as compared with in vivo ones from the 4-cell up to the blastocyst stage, while tPA expression was lower only in the morula and blastocyst stages.
This could be explained by reduced metabolic activity in the in vitro developed embryos, as suggested by Krisher et al. [ 28 ]. In addition, in vitro conditions may lead to a slower cell division rate, which may result in a blastocyst comprised of fewer cells and a decreased ability to hatch from the zona pellucida [ 28 , 29 ]. Carroll et al. [ 30 ] showed that the oviduct is also a source of PAs, which could attach to receptors on the embryonic cell membrane; this source is lacking in in vitro developing embryos. It should be noted that any culture medium would lack maternal factors, known or yet unknown, which affect embryo development, and thus implantation rate, through their effect on the PA/plasmin system. Additional studies addressing the regulation of the PA/plasmin system by adding exogenous factors may provide insights into its role in early embryo development and implantation. Conclusions The purpose of the study was to determine the relative importance of tPA and uPA in preimplantation embryo development. In vitro embryo development leads to lower PA expression, in a stage-dependent manner, compared with in vivo developing embryos. The localization of both PAs in the blastocysts' trophectoderm supports the assumption that PAs may play a role in the implantation process in rats. Authors' contributions EDA participated in the planning of the project and carried out the animal experimentation, immunohistochemistry and the image analysis studies. USM participated in the planning of the project and the animal experimentation, and participated in preparation of the manuscript. GP participated in preparation of the manuscript. IHV participated in the planning of the project, the statistical analysis and the preparation of the manuscript.
PMC516030: Periodicity of DNA in exons

Background The periodic pattern of DNA in exons is a known phenomenon. It was suggested that one of the initial causes of periodicity could be the universal ( RNY) n pattern ( R = A or G , Y = C or U , N = any base) of ancient RNA. Two major questions were addressed in this paper: first, the cause of DNA periodicity, which was investigated by comparisons between real and simulated coding sequences; and second, the quantification of DNA periodicity, which was carried out using an evolutionary algorithm not previously applied to this purpose. Results We have shown that simulated coding sequences, composed using codon usage frequencies only, demonstrate DNA periodicity very similar to that observed in real exons. It was also found that DNA periodicity disappears in the simulated sequences when the frequencies of codons become equal. Frequencies of the nucleotides (and the dinucleotide AG) at each location along phase 0 exons were calculated for C. elegans , D. melanogaster and H. sapiens . Two models were used to fit these data, with the key objective of describing periodicity. Both models produced best-fit curves that closely matched the actual data points. The first, dynamic period determination model consistently generated a period very close to 3 nucleotides. The second, fixed period model kept the period exactly equal to 3, as expected, without detracting from its goodness of fit. Conclusions We conclude that DNA periodicity in exons is determined by codon usage frequencies. It is essential to differentiate between DNA periodicity itself and the length of the period, which is equal to 3. Periodicity itself is a result of certain combinations of codons with different frequencies typical for a species. The length of the period, equal to 3, is instead caused by the triplet nature of the genetic code.
The models and evolutionary algorithm used for characterising DNA periodicity proved to be an effective tool for describing the periodicity pattern in a species when a number of exons in the same phase are analysed.

Background Periodicity of DNA in exons, with the period being equal to 3 nucleotides, has been well known for some time [ 1 - 6 ]. This periodicity reflects correlations between nucleotide positions along coding sequences [ 7 ], which are caused by the asymmetry in base composition at the three coding positions [ 8 ]. This periodicity has also been suggested to serve as a reading-frame monitoring device during translation, since interruptions of the periodic pattern coincide with frame shifts, downstream of which the periodic pattern returns [ 9 ]. The triplet code has itself undergone evolution, from the earliest form of the triplet code to what exists today. The universal DNA periodicity observed in exons suggests a ( RNY) n pattern ( R = A or G , Y = C or U , N = any base), which was probably inherited from the earliest mRNA sequences [ 10 , 11 ]. In this study, comparisons between real and simulated coding sequences were used in an attempt to better understand the cause of the DNA periodicity. The only data used by the simulation program were codon usage frequencies from real species. Thus the simulated coding sequences had frequencies of codons very similar to those of real species. The major difference, however, was the random position of codons in the simulated sequences. The periodicity of exons, as well as other coding statistics, can be an additional tool for exon prediction programs [ 7 ]. The distance between two nucleotides of the same type is counted, and a period is determined by the distance at which similar frequencies recur. For example, if there is a nucleotide A at one point in a sequence, and other A 's are more common when there are 2, 5, 8 and so on nucleotides between them, a period of three can be determined [ 7 ].
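The distance-counting procedure just described can be sketched in a few lines. This is an illustrative reimplementation under stated assumptions, not the authors' code (their tools were written in C++ and are available on request); the example sequence is invented, and Python is used here for brevity.

```python
from collections import Counter

def spacing_histogram(seq, base, max_gap=12):
    """Count, for every pair of `base` occurrences, how many other
    nucleotides lie strictly between them (gaps up to max_gap)."""
    positions = [i for i, b in enumerate(seq) if b == base]
    gaps = Counter()
    for i, p in enumerate(positions):
        for q in positions[i + 1:]:
            g = q - p - 1  # nucleotides strictly between the two matches
            if g > max_gap:
                break
            gaps[g] += 1
    return gaps

def dominant_period(gaps):
    """If gaps of 2, 5, 8, ... dominate, the most common gap is 2,
    so the inferred period is gap + 1 = 3."""
    best_gap = max(gaps, key=gaps.get)
    return best_gap + 1

# Toy sequence in which A recurs every third position.
seq = "ATGGCAGCAGCAGCAGCAGCA"
period = dominant_period(spacing_histogram(seq, "A"))
```

On real exon sets the same counting is applied per position across many aligned sequences, but the logic is identical.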
Additional methods of finding periodicity include Fourier analysis [ 12 ], the length shuffle Fourier transform algorithm [ 13 ], autocorrelation functions [ 14 ] and distance analysis [ 15 ]. We applied two models of an evolutionary algorithm [ 16 , 17 ] to quantify DNA periodicity. Thus the second objective of this study was the investigation of a new method for quantification of DNA periodicity. Results Periodicity of DNA in exons As mentioned in the Background, the periodic 3-nucleotide pattern has been known in eukaryotic exons for some time. We examined whether DNA periodicity similar to that observed in exons can be simulated in computer experiments utilising codon usage frequencies (CUF) of real species as the only source of information. The computer program GENERATE, which was used in these experiments, composed artificial coding sequences using the CUF of several species as the only source of information. Thus, despite the random choice of codons, the frequencies of codons in the simulated sequences were very similar to the real CUF. As an example of these experiments, Figure 1A shows the distribution of Adenine nucleotides in real Drosophila melanogaster exons (phase 0) and in simulated (Figure 1B ) coding sequences (phase 0) created by GENERATE using D. melanogaster CUF. To avoid any significant influence of splicing signals, D. melanogaster exons aligned at the 5' end start from the 10th nucleotide (4th codon). In the simulated coding sequences periodicity is also highly pronounced, and the periodicity patterns observed in D. melanogaster exons and simulated sequences are nearly identical (Figure 1A & 1B ). The other studied species, C. elegans and H. sapiens , despite significant differences in AT and GC content, also show high similarity in the periodicity pattern between exons and simulated sequences (data are not shown). Periodicity of other nucleotides was also observed and showed high similarity between the DNA of real exons and the simulated sequences (data are not shown).
The obvious conclusion following from this study is that CUF, which was the only source of information for the simulated coding sequences, is the crucial factor determining periodicity. Figure 1 A. Adenine periodicity in Drosophila melanogaster exons (N exons = 32,760); B. Adenine periodicity in simulated coding sequences (N genes = 13,065) created by GENERATE using D. melanogaster CUF; C. Adenine periodicity in simulated coding sequences (N genes = 13,065) created by GENERATE using equal frequencies of all non-stop codons and D. melanogaster frequencies of stop codons; and D. Adenine periodicity in simulated coding sequences (N genes = 13,065) created by GENERATE using equal frequencies of all codons, including stop codons. The length of the coding sequences was determined by the program. DNA periodicity in simulated coding sequences was dramatically reduced in the experiments where the frequencies of all non-stop codons were made equal (Figure 1C ). This observation strongly supports the conclusion that codon usage frequencies determine DNA periodicity in exons. A very slight periodicity of Adenine and Thymine (data are not shown) was caused by the fact that the 3 stop codons, and the corresponding combinations of nucleotides, were present in the simulated coding sequences at different and much lower frequencies than other codons. Cytosine , which is not a component of any stop codon, does not show a periodic pattern at all, because the frequencies of Cytosine -containing codons were equal to the frequencies of all other non-stop codons. Finally, when the frequencies of all codons, including stop codons, were made equal, no periodicity was observed in the simulated sequences (Figure 1D ). Thus the computer simulations lead to a firm conclusion that the 3-nucleotide periodicity observed in the DNA of exons is determined by codon usage frequencies.
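The simulation just described can be approximated in a short sketch: sample codons in proportion to their usage frequencies, stop when a stop codon is drawn, then tabulate per-position nucleotide frequencies across the 5'-aligned artificial genes (as exScan does for real exons). This is an illustrative Python reimplementation under stated assumptions, not the authors' C++ programs; the codon table below is a toy subset with made-up frequencies, not real CUF.

```python
import random

def generate_gene(cuf, rng, max_codons=100):
    """Sample codons in proportion to usage frequencies; the gene starts
    with ATG and ends when a stop codon is drawn."""
    stops = {"TAA", "TAG", "TGA"}
    codons, weights = zip(*cuf.items())
    gene = ["ATG"]
    for _ in range(max_codons):
        c = rng.choices(codons, weights=weights)[0]
        if c in stops:
            break
        gene.append(c)
    return "".join(gene)

def position_frequencies(genes, base, length):
    """exScan-style count: frequency of `base` at each position of the
    5'-aligned sequences (only genes long enough contribute)."""
    freqs = []
    for i in range(length):
        hits = [g for g in genes if len(g) > i]
        freqs.append(sum(g[i] == base for g in hits) / len(hits))
    return freqs

# Toy, deliberately unequal codon usage table (hypothetical values).
cuf = {"GCA": 0.30, "GAA": 0.25, "CTG": 0.20, "TTC": 0.15,
       "TAA": 0.04, "TAG": 0.03, "TGA": 0.03}
rng = random.Random(0)
genes = [generate_gene(cuf, rng) for _ in range(2000)]
freq_A = position_frequencies(genes, "A", 30)
# With unequal codon frequencies, A peaks at every third position
# (period 3); equalising the non-stop codon frequencies flattens it.
```

With this toy table, A occurs only at the second and third codon positions (GAA, GCA), so the per-position frequency of A oscillates with period 3, mirroring Figure 1A/1B; with all non-stop codon frequencies equal the oscillation disappears, mirroring Figure 1C/1D.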
The triplet nature of the genetic code is responsible for the length of the period, but not, as might be assumed, for the periodicity itself. Quantification of DNA periodicity using an evolutionary algorithm Data sets on the frequency of the nucleotides and all 16 dinucleotides at each location were constructed for Caenorhabditis elegans , Drosophila melanogaster and Homo sapiens phase 0 exons. The frequencies of Adenine and the dinucleotide pair AG are shown in this paper as an example. Two models were fitted to these data, with the key objective of describing periodicity in the data. Inspection of Figures 2 to 5 shows that periodicity is quite apparent, and that the peak and trough frequencies are generally consistent, but sometimes trend in value. Two models were used to accommodate this shifting pattern. The first model is: Figure 2 Curves of best-fit for Adenine in phase 0 exons compared to actual frequencies. Pink points represent frequencies of nucleotide A in phase 0 C. elegans , D. melanogaster and H. sapiens exons aligned at the 5' end. Blue points represent the best-fit curve for the data points in an ideal situation, from position 20–100. The scales of the graphs were altered to provide better contrast between the data points. Figure 3 Curves of best-fit for dinucleotide AG in phase 0 exons compared to actual frequencies. Pink points represent frequencies of AG in phase 0 C. elegans , D. melanogaster and H. sapiens exons aligned at the 5' end. Blue points represent the best-fit curve for the data points in an ideal situation, from position 20–100. The scales of the graphs were altered to provide better contrast between the data points. Figure 4 Fixed period best-fit curves for Adenine in phase 0 exons compared to actual frequencies. Pink points represent frequencies of nucleotide A in phase 0 exons aligned at the 5' end. Blue points represent the best-fit curve for the data points in an ideal situation, from position 20–100.
The scale of the graph was altered to provide better contrast between the data points. Figure 5 Fixed period best-fit curves for dinucleotide AG in phase 0 exons compared to actual frequencies. Pink points represent frequencies of dinucleotide AG in phase 0 exons aligned at the 5' end. Blue points represent the best-fit curve for the data points in an ideal situation, from position 20–100. The scale of the graph was altered to provide better contrast between the data points. Ŷ i = b 1 + b 2 i + b 3 sin(2πi/b 5 + b 4 ), where Ŷ i is the predicted frequency at nucleotide position i , and b 1 to b 5 are parameters to be estimated from data. Because of irregularities in frequencies close to the 5' end of exons, nucleotide position 20 was taken as position i = 1. The component b 1 + b 2 i fits an overall linear trend in frequencies, independently from finer-scale periodicity. For some data sets it could be useful to include a quadratic term for i . Parameter b 3 gives the amplitude of the periodic waves. As 2π radians describes a full cycle, periodicity is given by parameter b 5 . However, if b 5 differs from exactly 3, this generates a shift in phase that is linear with nucleotide position. This shift combines with the static phase shift parameter b 4 to fit the relative frequencies in adjacent groups of three locations. This is not an ideal model, in that parameter b 5 does not cleanly describe periodicity, but it proves to work well in practice. The second model is Ŷ i = b 1 + b 2 i + b 3 sin(2πi/3 + Offset), where the value of Offset depends on nucleotide position i as follows: if i < b 4 then Offset = b 6 ; else if i < b 4 + b 5 then Offset = b 7 ; else Offset = b 8 . Thus, in this case, three regions are defined by parameters b 4 and b 5 , and Offset is defined within each region by parameters b 6 , b 7 and b 8 . Model 2 fixes periodicity at 3 nucleotides, but allows for different patterns of relative frequency in regions chosen by the data.
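The fitting step can be sketched compactly: define model 1, score candidate parameter vectors by the sum of squared errors, and improve a population of candidates with a minimal differential-evolution loop (CR = 0.4 and F = 0.4, as in the Methods; the occasional larger-F generations and extra mutation the authors used to escape local optima are omitted). The sinusoidal form of model 1 is reconstructed from the parameter descriptions in the text, and the data below are synthetic; this is an illustrative Python sketch, not the authors' code.

```python
import math
import random

def model1(b, i):
    """Model 1: linear trend b1 + b2*i plus a sine wave of amplitude b3,
    static phase shift b4 and period b5 (2*pi radians per full cycle)."""
    b1, b2, b3, b4, b5 = b
    return b1 + b2 * i + b3 * math.sin(2 * math.pi * i / b5 + b4)

def sse(b, data):
    """Fitting criterion: sum of squared errors across nucleotide position."""
    return sum((y - model1(b, i)) ** 2 for i, y in enumerate(data, start=1))

def differential_evolution(data, bounds, rng, pop=40, gens=300, cr=0.4, f=0.4):
    """Minimal DE: each member faces a challenger whose parameters are
    either kept (probability CR) or rebuilt as a + F*(b - c) from three
    random members; superior challengers replace their members."""
    members = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    fitness = [sse(m, data) for m in members]
    for _ in range(gens):
        for j in range(pop):
            a, b, c = rng.sample(members[:j] + members[j + 1:], 3)
            trial = [members[j][d] if rng.random() < cr
                     else min(max(a[d] + f * (b[d] - c[d]), bounds[d][0]),
                              bounds[d][1])
                     for d in range(len(bounds))]
            s = sse(trial, data)
            if s < fitness[j]:
                members[j], fitness[j] = trial, s
    best = min(range(pop), key=fitness.__getitem__)
    return members[best], fitness[best]

# Synthetic per-position frequencies with a known period of exactly 3.
rng = random.Random(1)
truth = (0.25, 0.0005, 0.04, 0.7, 3.0)
data = [model1(truth, i) for i in range(1, 82)]  # 81 positions (20-100)
bounds = [(0.0, 1.0), (-0.01, 0.01), (0.0, 0.2),
          (-math.pi, math.pi), (2.8, 3.2)]
best, err = differential_evolution(data, bounds, rng)
# On this noiseless data the fitted period (best[4]) typically lands
# close to 3, echoing the best-fit periods in Table 1.
```

Fixing the period at 3 (model 2) simply replaces `b5` with the constant 3 and swaps `b4` for the region-dependent Offset.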
The analysis task for both models is to find values of the b parameters that give a close fit between the real frequencies Y i and the predicted frequencies Ŷ i . The criterion used for this was the sum of squared errors across nucleotide position: SSE = Σ i (Y i − Ŷ i )². Best-fitting b parameters were found using a form of evolutionary algorithm (differential evolution, [ 17 ]) with modifications to improve robustness following [ 18 ]. To test whether the above method could identify a quantifiable period in exons, several tests were performed. All phase 0 exons were extracted from the EID database, described in Methods. This procedure dramatically enhanced the visible periodicity compared to exons of all three phases (Figure 1A ). Once the phase 0 exons were separated, they were aligned at the 5' end and data for all four single nucleotide frequencies were analysed using both methods. In addition to all single nucleotide frequencies, the dinucleotide frequency of AG was run through the analysis for both the dynamic period determination best-fit curve and the static period-3 best-fit curve. This was done for the three studied species, C. elegans , D. melanogaster and H. sapiens , for phase 0 exons. Several other dinucleotide pairs also showed clear periodicity patterns in exons (data are not shown); AG is used in this paper as an example. Introns were only run through the analysis for dynamic period determination and did not show clear and stable periodicity. Phase 0 exons – dynamic period determination model (Model 1) Curves of best-fit were created for the four separate nucleotides and the dinucleotide AG for C. elegans , D. melanogaster and H. sapiens phase 0 exons. These curves were created using model 1, in an attempt to find a periodicity within the given data. The first few positions of exons are frequently under different selection pressures, which do not always conform to the same pressures as the remainder of the exon.
It was for this reason that the algorithm was run starting from position 20 to position 100 in exons. Table 1 shows DNA periodicity in exons of C. elegans , D. melanogaster and H. sapiens as determined by the analysis. The amplitude of the periodicity was measured as the variation from the center-point of the sine curve. As mentioned earlier, the criterion value of goodness of fit is the sum of squared deviations between observed frequencies and frequencies predicted by the model with the prevailing parameters. This means that as the analysis runs through its generation cycles, it finds better-fitting curves and replaces the previous curve of best-fit. The criterion value is a reflection of this process: the closer it gets to zero, the more closely the curve of best-fit represents the actual data. The data show that in all cases the determined period is very close to 3 in exons for all three species, for all nucleotides and for the dinucleotide pair AG . The periods range from a low of 2.995367 wait 2.990391 for C. elegans nucleotide C , a difference from 3 of ~0.0096, to a maximum of 3.019349 for the C. elegans dinucleotide pair AG , a difference from 3 of ~0.0193. The criterion values of goodness of fit are also very low for exons, with the largest among them at 0.0093573 for H. sapiens nucleotide C .

Table 1 Periods of best-fit curves in phase 0 aligned exons. Periods and criterion values for phase 0 C. elegans , D. melanogaster and H. sapiens exons aligned at the 5' end. The four single nucleotides as well as a single dinucleotide pair, AG , were studied.

Species          Nucleotide   Best-fit period   Amplitude   Criterion value
C. elegans       A            2.995367          ± 0.02007   0.0019226
C. elegans       C            2.990391          ± 0.01583   0.0019602
C. elegans       G            3.005597          ± 0.08199   0.0047119
C. elegans       T            3.014848          ± 0.06247   0.0049716
C. elegans       AG           3.019349          ± 0.02077   0.0031441
D. melanogaster  A            3.002192          ± 0.10128   0.0055213
D. melanogaster  C            3.000776          ± 0.07056   0.0043703
D. melanogaster  G            2.998296          ± 0.09142   0.0036970
D. melanogaster  T            2.999109          ± 0.06115   0.0049037
D. melanogaster  AG           2.999744          ± 0.05038   0.0012143
H. sapiens       A            3.000390          ± 0.07898   0.0087358
H. sapiens       C            2.997380          ± 0.05627   0.0093573
H. sapiens       G            2.998682          ± 0.08669   0.0014469
H. sapiens       T            2.998730          ± 0.06627   0.0065035
H. sapiens       AG           2.995860          ± 0.03872   0.0025106

A comparison between the best-fit curves for nucleotide A and the actual frequencies of nucleotide A in exons of the three studied species can be seen in Figure 2 . Figure 3 shows a similar comparison for the dinucleotide pair AG . Best-fit curves for nucleotides C , G and T are not shown. The blue points on the graphs represent data points predicted by fitting model 1 to the actual data, represented by pink points. As can be seen in the graphs, the blue best-fit curves in both Figure 2 and Figure 3 closely follow the pink line of the actual data, which confirms that the best-fit line quite accurately portrays the actual data. Phase 0 exons – static period 3 model (Model 2) The data were then fitted to model 2, keeping the period fixed at 3. With this algorithm, the ideal best-fit curve retains essentially the same pattern. Again, phase 0 exons were used as sample data. Figure 4 shows the best-fit curve for nucleotide Adenine in C. elegans , D. melanogaster and H. sapiens phase 0 exons with the period fixed at three. Figure 5 shows the best-fit curve for the dinucleotide pair AG in phase 0 exons in the same species. In both sets of graphs, pink points represent the actual frequencies of nucleotides and the dinucleotide pair AG , while the blue points represent the optimized curve of best-fit for these frequencies. It is clear from the graphs that keeping the period fixed at exactly three in exons does not detract from the accuracy of the curve of best-fit; the curve of best-fit is remarkably similar to the actual data points. Discussion The fact of DNA periodicity in exons, as well as the lack of periodicity in introns, has been known for some time [ 1 - 6 ].
"Such a periodic pattern reflects correlations between nucleotide positions along coding sequences (that is, the probability of finding a nucleotide at a given position in a coding sequence is not independent of the nucleotide occurring at some other, even distant, position). The correlations arise, in turn, because of the asymmetry in base composition at the three codon positions in coding sequences" [ 7 , 8 ]. The simulation experiments described in this paper support this conclusion and provide clear proof that the frequency of codon usage is the key cause of DNA periodicity in exons (Figure 1 ). We have shown that simulations which utilized only codon usage frequency data produced an exceptionally good match to the periodicity observed in real exons. As soon as the frequencies of all codons are set equal, DNA periodicity in exons entirely disappears. It is reasonable to think that the asymmetry in base composition studied by Guigó [ 7 ] might be caused by codon usage frequency. The results presented in this paper also demonstrate the effectiveness of the evolutionary algorithm and of both models used to identify the periodicity pattern in exons. Although the periods seen in Table 1 are not precisely equal to 3, they are very close. This minor discrepancy is a result of the analysis compensating for slight changes in the pattern of frequencies over nucleotide position. When the period is fixed at exactly 3, and the program allows for change-over points where the curve of best-fit is adjusted to better suit the data, the curves of best-fit still closely match the actual data points (see Figures 4 and 5 ), revealing that the period of 3 is not simply coincidental when it is allowed to be determined by the program. As can be seen in Figures 2 , 3 , 4 , 5 , the amplitude of variation is much narrower for C. elegans than for the two other species under consideration.
Introns do not show any specific period that can be determined by the analysis (results are not shown). Although the analysis does produce a period for each data set given, these periods are not consistent with each other, and the predictions do not fit the data well. As introns are not composed of codons, this is an additional indication supporting the conclusion that CUF determine the periodicity pattern in exons. Since only exons show a strong periodicity of three, this type of analysis can in principle be used as an additional component of exon-finding tools. This possibility has already been considered [ 7 ]. Unfortunately, the methods discussed here, while very effective in quantifying DNA periodicity in a set of many sequences, are not sensitive enough for a single sequence. Further modifications of the approach are necessary before it can be used in exon prediction programs. Conclusions The conclusion can be drawn that DNA periodicity in exons is determined by codon usage frequencies. It is essential to differentiate between DNA periodicity itself and the length of the period, which is equal to 3. Periodicity itself is a result of certain combinations of codons with different frequencies typical for a species. The length of the period, equal to 3, is instead caused by the triplet nature of the genetic code. The models and evolutionary algorithm used for characterising DNA periodicity proved to be an effective tool for describing the periodicity pattern in a species when a number of exons in the same phase are analysed. Methods Exon-Intron Database Information relevant to C. elegans , D. melanogaster and H. sapiens was extracted from the exon-intron database (EID), which was compiled in the W. Gilbert laboratory, Department of Molecular and Cellular Biology, Harvard University [ 18 ]. The database contains protein-coding intron-containing genes. From the version of the database that we used, the following data were extracted: C. elegans , 14,836 genes and 98,581 exons; D.
melanogaster , 13,361 genes and 58,801 exons; H. sapiens , 7,150 genes and 47,908 exons. exScan This program calculates the frequencies of nucleotides, or of any combination of nucleotides, in a database. exScan aligns all exons in the database at either the 5' or 3' end. The program then searches the exons for given sequences and gives a summary of the sequences found. exScan was used to obtain the frequencies of nucleotides and dinucleotide pairs at each position of exons aligned at the 5' end. A summary of the program's operation follows: - The exScan command line selects the database to be searched. - The string(s) to be searched for are also entered on the command line. - exScan then aligns the exons by the 5' end and searches each exon for all matches to the search strings. - The output of the program provides the number of matches for each search string entered at every position along the aligned exons. - These numbers are then converted into frequencies. exScan is written in the C++ programming language; its full description and the program itself are available upon request. GENERATE This program simulates a required number of coding sequences, using as input a file of CUF for a particular species and a generator of random numbers. Inclusion of a stop codon in a coding sequence terminates the gene. Options allow a minimal and a maximal gene length to be set, as well as the shape of the gene-length distribution. GENERATE was used in this study to show how CUF alone can create periodicity, even in randomly created sequences which do not code for any real protein. The procedure when running GENERATE is as follows: - GENERATE accepts as input a file containing the usage frequencies of all 64 codons. These codon usage frequencies for different species were taken from the database located at . Thus, despite the random choice, the frequencies of codons in the simulated sequences were very similar to the real CUF.
- These frequencies are then used to construct a requested number of artificial genes, with the codons chosen randomly based on their frequencies. - Artificial genes all start with ATG , and terminate once a stop codon is randomly chosen. - The artificial genes can then be used as a separate database for analysis with exScan, as above. GENERATE was written in the C++ programming language. A description of the program and the program itself are available upon request. Differential Evolution The specific method used for fitting the two periodicity models was Differential Evolution (DE). As DE is a widely applicable method of general utility for optimization, the reader is directed to Storn and Price [ 17 ] for a detailed description and example computer code. The concept is outlined here: - A population of candidate solutions is established. Each population member is constituted by a randomly sampled set of b parameters and is characterized by its fitness (its value on the prevailing objective function, the sum of squared errors across nucleotide position). - For each population member, a challenger is constructed. If this challenger has superior fitness, it will replace the population member in the next generation. A challenger is constructed as follows: three other population members are chosen at random; we can label these a , b and c . Each parameter is then addressed in turn. With probability CR ( CR = 0.4 was adopted) the parameter is simply taken from the population member that the challenger is challenging. Otherwise, a new parameter value is constructed as the value for member a plus F times the difference of the values for b and c . For this application, F = 0.4, except that F = 1 every fourth generation and F = 2 every seventh generation, to help avoid local optima. In addition, mutation independent of differences between other solutions was invoked periodically, also to help avoid local optima.
- Successful challengers replace their respective population members and, together with surviving members, constitute a new generation with higher mean fitness. The process continues over sufficient generations to achieve convergence close to an optimal solution, with the fittest solution being chosen. Competing interests None declared. Authors' contributions SE conducted the data analysis and was involved in drafting the manuscript. FE designed and wrote the code for the C++ computer programs. BK assisted with the manuscript and wrote the code for modeling and fitting periodicity. AR drafted the manuscript, performed statistical analysis, conceived the study and participated in its design and coordination. All authors read and approved the final manuscript.
PMC340944: Treatment of Terminal Peritoneal Carcinomatosis by a Transducible p53-Activating Peptide

Advanced-stage peritoneal carcinomatosis is resistant to current chemotherapy treatment and, in the case of metastatic ovarian cancer, results in a devastating 15%–20% survival rate. Therapeutics that restore genes inactivated during oncogenesis are predicted to be more potent and specific than current therapies. Experiments with viral vectors have demonstrated the theoretical utility of expressing the p53 tumor suppressor gene in cancer cells. However, clinically useful alternative approaches for introducing p53 activity into cancer cells are clearly needed. It has been hypothesized that direct reactivation of endogenous p53 protein in cancer cells will be therapeutically beneficial, but few tests of this hypothesis have been carried out in vivo. We report that a transducible D-isomer RI-TATp53C′ peptide activates the p53 protein in cancer cells, but not normal cells. RI-TATp53C′ peptide treatment of preclinical terminal peritoneal carcinomatosis and peritoneal lymphoma models results in significant increases in lifespan (greater than 6-fold) and the generation of disease-free animals. These proof-of-concept observations show that specific activation of endogenous p53 activity by a macromolecular agent is therapeutically effective in preclinical models of terminal human malignancy. Our results suggest that TAT-mediated transduction may be a useful strategy for the therapeutic delivery of large tumor suppressor molecules to malignant cells in vivo.

Introduction Most patients who succumb to cancer do so not from primary tumor burden, but from metastatic disease ( Fidler 2003 ). For example, advanced-stage peritoneal carcinomatosis (e.g., from metastatic ovarian and breast cancer) and disseminated peritoneal lymphomas are often resistant to current chemotherapy treatment ( Parsons et al. 1996 ).
Posttreatment survival rates for patients presenting with metastatic ovarian peritoneal carcinomatosis or lymphoma are less than 20% and less than 50%, respectively ( Lam and Zhao 1997 ; Deppe and Baumann 2000 ; Hofstra et al. 2000 ). Consequently, the development of novel therapeutic strategies to reverse these numbers is clearly warranted. A significant effort has been aimed at understanding the function of tumor suppressor gene pathways that are genetically and epigenetically altered during oncogenesis ( Macleod 2000 ). One rationale for the study of tumor suppressor pathways is the hypothesis that reconstitution of these pathways in cancer patients will be therapeutically beneficial ( Macleod 2000 ). The p53 tumor suppressor protein induces growth arrest and apoptosis in response to cellular stress ( Vousden and Lu 2002 ). Mutation of genes in the p53 pathway is thought to be nearly universal in human cancer ( Vousden and Lu 2002 ). Thus, any strategy designed to restore p53 activity in tumor cells will likely be an effective means of inducing cancer cell death and will be applicable to a large fraction of cancer patients. The inability of large tumor suppressor proteins, all of which are intracellular, to cross the plasma membrane precludes the therapeutic administration of recombinant tumor suppressors in a manner analogous to administration of extracellular biological therapeutics (e.g., insulin or G-CSF). Thus, the development of an efficient methodology for restoring tumor suppressor function to cancer cells in vivo remains a challenge for both basic and clinical researchers. Gene therapy approaches aimed at restoring tumor suppressor function have been extensively investigated. Both viral and nonviral vectors have been employed to express exogenous tumor suppressor genes, such as p53 , in cancer cells ( McCormick 2001 ). 
Although gene therapy may be useful under certain conditions, problems associated with immunogenicity and lack of systemic biodistribution to disseminated metastases are likely to curtail its anticancer efficacy ( McCormick 2001 ). Delivery of macromolecules by protein transduction has recently emerged as an alternative methodology for directly introducing tumor suppressor proteins into cancer cells in vivo. Several small cationic peptides, including TAT, Antp, and polyArg (referred to as protein transduction domains [PTDs]), are capable of traversing the plasma membrane and entering the cytoplasm of cells by a concentration-dependent, but receptor-independent, macropinocytic mechanism ( Wadia et al. 2004 ). PTDs have recently been used to deliver a wide range of cargo, including biologically active proteins, peptides, nucleic acids, and iron beads, into cells in culture ( Fischer et al. 2001 ; Lindsay 2002 ). PTDs have also been employed to deliver biologically active cargo into most, if not all, tissues in preclinical models ( Schwarze et al. 1999 ). Because of the presence of either wild-type or mutant p53 protein in most tumors, it has been hypothesized that restoration of endogenous p53 activity in cancer cells will be a therapeutically efficacious alternative to delivery of exogenous p53. However, this hypothesis has been tested in vivo in only a limited number of cases ( Foster et al. 1999 ; Bykov et al. 2002 ) and has never been tested in preclinical models of terminal human malignancy. We therefore focused on a strategy to activate endogenous p53 in cancer cells by PTD-mediated delivery. The C-terminus of p53 is a lysine-rich domain that is subjected to a variety of posttranslational modifications ( Apella and Anderson 2001 ). A peptide derived from the C-terminus was previously shown by D. Lane's group (University of Dundee, United Kingdom) to activate specific DNA binding by p53 in vitro by an unknown mechanism ( Hupp et al. 1995 ). 
In cancer cells, p53C′ peptide can induce apoptosis by activating wild-type p53 protein and by restoring function to several p53 DNA contact mutants. Importantly, the p53C′ peptide also restores specific DNA binding to some p53 DNA contact mutants in vitro and induces apoptosis in cancer cells expressing p53 DNA contact mutants ( Selivanova et al. 1997 , 1998; Kim et al. 1999 ). However, the peptide fails to induce apoptosis in p53-deficient tumor cells or in tumor cells containing p53 structural mutations. In contrast, primary cells are resistant to p53C′ peptide action ( Selivanova et al. 1997 ; Kim et al. 1999 ). This resistance is likely a result of the extremely low levels of endogenous p53 present in normal cells and the absence of continual DNA damage often associated with tumor cells ( Selivanova et al. 1997 ; Kim et al. 1999 ). Here we report a proof-of-concept that in vivo delivery of a transducible, proteolytically stable p53C′ peptide (termed RI-TATp53C′) is a therapeutically effective means of activating the p53 tumor suppressor pathway in preclinical models of terminal metastatic cancer. Results Activation of p53 by Transducible Retro-Inverso D-Isomer p53C′ Peptide Although PTDs solve one major obstacle to the use of intracellular peptides as therapeutics, the susceptibility of peptides to degradation in vivo remains problematic. To circumvent the problem of proteolytic degradation, we synthesized a retro-inverso version of the parental p53C′ peptide by inverting the peptide sequence, using D-isomer residues, and adding the TAT PTD to obtain a transducible RI-TATp53C′ peptide ( Figure 1 A). This double inversion of peptide structure often leaves the surface topology of the sidechains intact and has been used extensively to stabilize biologically active peptides for in vivo applications ( Chorev and Goodman 1993 ). Because of their greater stability, retro-inverso peptides often display increased potency. 
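The retro-inverso transformation described above (reverse the residue order, then build the peptide from D-isomer residues) can be sketched as a simple string operation on one-letter codes. The sequence below is a placeholder for illustration only, not the actual p53C′ or TAT sequence shown in Figure 1A:

```python
def retro_inverso(l_peptide: str) -> str:
    """Retro-inverso analogue of an L-amino-acid peptide (one-letter codes):
    reverse the residue order and take every residue as its D-isomer,
    denoted here by lowercase letters (a common informal convention)."""
    return l_peptide[::-1].lower()

# Placeholder sequence; the real p53C'/TAT sequences are given in Figure 1A.
print(retro_inverso("ACDEFGHIK"))  # -> kihgfedca
```

Because the side-chain order along the backbone is doubly inverted, the surface topology of the side chains is often preserved, which is why such analogues can retain activity while resisting proteases.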
Figure 1 RI-TATp53C′ Induces the Hallmarks of p53 Activity in TA3/St Mammary Carcinoma Cells (A) Sequence of p53C′TAT peptide (L-amino acids) and its retro-inverso analogue (D-amino acids). To generate a negative control peptide, three essential lysine residues ( Selivanova et al. 1997 ) were mutated while leaving the remaining peptide sequence intact. (B) Induction of G1 arrest in TA3/St cells by wild-type RI-TATp53C′, but not mutant peptide, 24 h after peptide addition. (C) Dose-dependent induction of G1 arrest by RI-TATp53C′ (open square) (D-amino acids) and the less potent p53C′TAT (open circle) (L-amino acids) but not mutant (open triangle) peptide at 24 h (left) and 48 h (right) after single treatment. (D) Induction of a permanent growth arrest in TA3/St cells by RI-TATp53C′. Cells were treated with RI-TATp53C′ peptide or vehicle for 2 d, replated, and allowed to proliferate in the presence of serum for 10 d. Colonies were then stained with methylene blue. (E) Induction of a senescence-like phenotype in TA3/St cells by RI-TATp53C′. Cells were treated with RI-TATp53C′ peptide and stained for acidic β-galactosidase activity. To determine whether the RI-TATp53C′ peptide retained functionality, we compared the transducible parental L-isomer p53C′TAT and D-isomer RI-TATp53C′ peptides for the ability to induce a cell cycle arrest ( Figure 1 B). Treatment of murine TA3/St mammary carcinoma cells (which express wild-type p53) with either the L-isomer p53C′TAT or D-isomer RI-TATp53C′ peptides resulted in a concentration-dependent G1 cell cycle arrest ( Figure 1 B and 1 C). The control D-isomer mutant peptide had little to no effect ( Figure 1 B and 1 C). Compared to the L-isomer p53C′TAT, the D-isomer RI-TATp53C′ peptide induced a stronger cell cycle arrest at substantially lower concentrations ( Figure 1 C). A single administration of the L-isomer p53C′TAT peptide partially arrested cells for 24 h, but by 48 h cells had reentered the cell cycle ( Figure 1 C). 
In contrast, a single dose of the D-isomer RI-TATp53C′ peptide was sufficient to sustain a G1 arrest for greater than 7 d ( Figure 1 C; data not shown). To ascertain whether sustained arrest required the continuous presence of RI-TATp53C′ peptide, TA3/St cells were treated with peptide or vehicle for 2 d and then replated under mitogenic conditions in the absence of peptide. Peptide-treated tumor cells formed less than 1% as many colonies as vehicle-treated cells ( Figure 1 D). This observation suggested that RI-TATp53C′ peptide induced a permanent growth arrest in TA3/St cells. We therefore assayed RI-TATp53C′ peptide-treated cells for induction of senescence, a state of terminal arrest that can be induced by p53 activation ( Roninson et al. 2002 ). By 6 d after peptide addition, greater than 80% of viable TA3/St cells were positive for acidic β-galactosidase activity ( Figure 1 E), the standard marker of senescence ( Roninson et al. 2002 ). The treated cells also displayed other features of senescence ( Roninson et al. 2002 ), including increased size, increased granularity, and a flattened morphology ( Figure 1 E; data not shown). These observations suggest that treatment of mammary carcinoma cells with the RI-TATp53C′ peptide induces hallmarks of p53 activation, namely a G1 cell cycle arrest followed by induction of senescence. We next investigated the ability of the RI-TATp53C′ peptide to transcriptionally activate p53-responsive genes. We transiently transfected p53 null human H1299 lung carcinoma cells with p53-dependent luciferase reporter plasmid (PG13-Luc) and either wild-type p53 expression plasmid or empty vector. The use of p53 null cells allows for negative controls that are not possible in cells expressing endogenous p53. As expected, we observed a p53-dependent induction of luciferase activity in cells transfected with p53 expression vector ( Figure 2 A). 
However, RI-TATp53C′ peptide treatment of cells transfected with p53 expression vector resulted in a significant increase in p53-dependent luciferase activity ( Figure 2 A, left). Consistent with observations in TA3/St cells (see Figure 1 B and 1 C), mutant peptide and L-isomer p53C′TAT displayed substantially reduced potency in this assay when compared to RI-TATp53C′ peptide (data not shown). Importantly, RI-TATp53C′ peptide treatment of cells transfected with empty vector and luciferase plasmids caused no increase in p53 target promoter activity ( Figure 2 A). In addition, RI-TATp53C′ peptide activated p53-dependent transcription in SW480 colon carcinoma cells expressing a p53 DNA contact mutant (R273H) and in H1299 p53-null lung carcinoma cells transfected with a p53 DNA contact mutant (R248Q or R273H) ( Figure 2 A, right), though to a lesser extent than in the presence of wild-type p53 ( Figure 2 A, left). These observations show that RI-TATp53C′ peptide retains the ability to specifically activate p53-dependent gene transcription. Figure 2 RI-TATp53C′ Peptide Activates p53-Dependent Transcription and Inhibits Tumor Cells Expressing p53 (A, left) Induction of transcription from a p53-dependent promoter by RI-TATp53C′ only when p53 protein is expressed. H1299 cells (p53−/−) were cotransfected with p53-responsive reporter (PG13-Luc) and either empty vector or p53 expression vector. Depicted are mean and standard deviation of triplicate results that are representative of multiple experiments. (A, right) RI-TATp53C′ peptide activates p53-dependent transcription in cells expressing DNA contact mutant p53. SW480 cells containing a DNA contact mutant (R273H) p53 were transfected with p53-dependent reporter (PG13-Luc). H1299 cells (p53−/−) were co-transfected with PG13-Luc and either R248Q or R273H mutant p53 expression vector. RI-TATp53C′ was added to cells, and promoter activity was assessed 24 h later.
(B) Inhibition of tumor cell proliferation in a p53-dependent manner by RI-TATp53C′. Increasing concentrations of peptide were added to HCT116 cells (p53+/+) and their p53−/− isogenic derivatives. After 2 d, the number of viable cells was assessed by Trypan blue exclusion and normalized to the number of viable untreated cells. Mean and standard deviation of multiple experiments are depicted. (C) Inhibition of the proliferation of tumor cells expressing wild-type or mutant p53, but not p53−/− tumor cells or nontransformed human fibroblasts. Cell viability was assessed as in (B). Mean and standard deviation of multiple experiments are depicted. To confirm that the RI-TATp53C′ peptide inhibited tumor cell proliferation in a p53-dependent fashion, we compared parental p53+/+ HCT116 colorectal carcinoma cells to HCT116 cells that were rendered p53-deficient at both loci by targeted genetic recombination ( Bunz et al. 1998 ). Treatment of wild-type p53 HCT116 cells with RI-TATp53C′ peptide inhibited cell proliferation in a dose-dependent manner ( Figure 2 B). In contrast, RI-TATp53C′ peptide treatment of p53-deficient HCT116 cells did not significantly alter the number of viable cells. p53-deficient human H1299 lung carcinoma cells also failed to respond to the RI-TATp53C′ peptide ( Figure 2 C), further confirming the specificity of peptide action. RI-TATp53C′ peptide inhibited proliferation of TA3/St cells (p53+/+) and human Namalwa lymphoma cells that express a p53 hotspot DNA contact mutant (R248Q) ( Figure 2 C). In contrast, RI-TATp53C′ peptide did not alter the proliferation of normal human foreskin fibroblasts containing wild-type p53 ( Figure 2 C). These results are consistent with previous observations that certain p53 contact mutations are susceptible to p53C′ peptide activation and that the p53C′ peptide induces apoptosis in tumor cells, but not normal cells ( Selivanova et al. 1997 , 1998; Kim et al. 1999 ).
Taken together, these observations demonstrate both the p53 and tumor dependency of the RI-TATp53C′ peptide. Systemic Delivery of RI-TATp53C′ Peptide Inhibits Solid Tumor Growth We ( Schwarze et al. 1999 ) and others ( Datta et al. 2001 ; Harada et al. 2002 ) have previously shown that intraperitoneal (IP) administration of TAT–fusion peptides and proteins results in systemic delivery in animal models. Consistent with these observations, IP injection of a biotinylated RI-TATp53C′ peptide into mice harboring subcutaneous tumors resulted in distribution of the peptide throughout the tumor ( Figure 3 A). Figure 3 Solid Tumor Growth Is Inhibited by Systemic RI-TATp53C′ Peptide Administration (A) Delivery of RI-TATp53C′-biotin to subcutaneous TA3/St tumors after IP administration to immune competent A/J mice. (B) Reduction of solid TA3/St tumor growth in immune competent mice as a result of systemic administration of RI-TATp53C′. TA3/St cells were injected subcutaneously into A/J mice and allowed to grow to an average size of approximately 100 mm³. Mice were then sorted into treatment groups that received eight daily injections of vehicle (open circle) ( n = 17), 650 μg of mutant peptide (open diamond) ( n = 7), or 650 μg of wild-type RI-TATp53C′ peptide (open triangle) ( n = 11). Final mean tumor volumes were 573 mm³ for vehicle-treated mice, 550 mm³ for mice treated with mutant peptide, and 268 mm³ for the wild-type RI-TATp53C′ peptide group. We next tested the ability of IP administration of RI-TATp53C′ peptide to inhibit the growth of distant solid tumors in immune competent mice. Subcutaneous tumors in mice receiving either vehicle or mutant peptide grew rapidly, reaching an average volume of nearly 600 mm³ by the end of treatment ( Figure 3 B).
In contrast, tumors in mice treated with wild-type RI-TATp53C′ peptide were significantly retarded in growth and reached a final mean volume less than 50% that of tumors in the control-treated mice ( p = 0.01) ( Figure 3 B). These observations demonstrate that systemic delivery of RI-TATp53C′ peptide in immune competent mice can significantly inhibit the growth of an aggressively proliferating solid tumor at a distant site. RI-TATp53C′ Peptide Treatment of Terminal Peritoneal Carcinomatosis Because of their encapsulation and ectopic site of growth, subcutaneous tumors fail to replicate many of the features of terminal human cancer. We therefore tested the efficacy of RI-TATp53C′ peptide in a terminal peritoneal carcinomatosis mouse model that more closely resembles metastatic human disease. TA3/St carcinoma cells inoculated into the peritoneum of immune competent, syngeneic A/Jax (A/J) mice proliferated in a rapid logarithmic fashion, doubling in 24 h and increasing their numbers 100-fold by 5 d postinoculation ( Nagy et al. 1993 ). This aggressive, terminal peritoneal carcinomatosis model of human disease has been used extensively to study the pathophysiology of peritoneal tumor growth ( Nagy et al. 1993 ; Nagy et al. 1995 ). We assayed the ability of the RI-TATp53C′ peptide to alter the tumor burden and increase the longevity of mice harboring TA3/St peritoneal carcinomatosis. Vehicle-treated mice rapidly succumbed to peritoneal tumor burden with a mean survival time of 11 d ( Figure 4 A). Mice treated with control mutant peptide succumbed to their tumor burden with similar kinetics and a mean survival time of 10 d ( Figure 4 A). In contrast, peritoneal tumor-bearing mice treated with wild-type RI-TATp53C′ peptide lived on average more than 70 d after tumor inoculation ( Figure 4 A), a greater than 6-fold increase in lifespan over mutant peptide- or vehicle-treated mice (p < 10⁻⁶).
These observations demonstrate the ability of transducible peptides to significantly extend survival in a mouse model of terminal peritoneal carcinomatosis. Figure 4 RI-TATp53C′ Treatment Extends Survival of Mice Harboring Terminal Peritoneal Carcinomatosis (A) A 6-fold increase in survival of A/J immune-competent mice harboring lethal TA3/St mammary peritoneal carcinomatosis burden after RI-TATp53C′ peptide treatment. A/J mice were given IP injections of TA3/St cells, and cells were allowed to double in number (approximately 24 h). Peritoneal tumor-bearing mice were then treated once a day for 12 consecutive days with vehicle ( n = 15), 600 μg of wild-type RI-TATp53C′ ( n = 10), or 600 μg of mutant peptide ( n = 10). Mean survival duration was 11 d for vehicle-treated mice, 10 d for mice receiving mutant peptide, and greater than 70 d for the group receiving wild-type RI-TATp53C′ peptide. (B) Reduction of tumor cell number in vivo by RI-TATp53C′ treatment. Mice were injected with TA3/St tumor cells and treated with wild-type peptide as in (A). Three days after tumor cell injection, cells were flushed from the peritoneal cavity and serially diluted in 6-well plates. Growth of colonies was then assessed by methylene blue staining and used to measure the number of viable tumor cells present in the peritoneum after treatment with vehicle or wild-type peptide. We next investigated the biological consequences of peptide treatment to tumor cells in vivo. Peritoneal-TA3/St tumor-bearing mice were given daily injections of wild-type RI-TATp53C′ peptide or vehicle control. Mice were sacrificed 3 d after tumor cell inoculation for assessment of tumor burden. Vehicle-treated mice contained a significant tumor burden of recoverable dividing TA3/St tumor cells ( Figure 4 B). 
In contrast, mice treated with wild-type RI-TATp53C′ peptide showed a dramatic reduction in tumor cell number, suggesting that RI-TATp53C′ peptide treatment extended survival by directly inhibiting overall tumor proliferation. Consistent with cell culture studies, cell cycle analysis of tumor cells from peptide-treated mice showed an increase in G1 phase of the cell cycle (data not shown). RI-TATp53C′ Peptide Treatment of Terminal Peritoneal Lymphoma To broaden these results, we also tested the efficacy of the RI-TATp53C′ peptide in a mouse model of aggressive, disseminated peritoneal lymphoma. Wild-type RI-TATp53C′ peptide, but not mutant peptide, induced G1 phase accumulation and substantial apoptosis in Namalwa human lymphoma cells ( Figure 5 A). When injected IP into SCID (severe combined immune deficiency) mice, Namalwa cells proliferate in the peritoneum and disseminate to other locations (e.g., spleen, lymph nodes, and blood [ de Menezes et al. 1998 ]), modeling human B-cell lymphoma ( Bertolini et al. 2000 ). Mice harboring peritoneal lymphoma succumbed to tumor burden with similar kinetics when treated with either vehicle or mutant peptide, with a mean survival time of 35 d and 33 d, respectively ( Figure 5 B). In contrast, wild-type RI-TATp53C′ peptide treatment resulted in 50% long-term survival ( p < 0.0007) ( Figure 5 B), with six of 12 treated mice still healthy at more than 200 d after tumor cell injection. Taken together, these observations demonstrate that in models of terminal metastatic human disease, transducible p53-activating peptides can modulate tumor biology in vivo, resulting in significantly decreased tumor burden, increased lifespan, and long-term disease-free survival. Figure 5 RI-TATp53C′ Treatment Leads to 50% Long-Term Survival of Mice Bearing Terminal Peritoneal Lymphoma (A) Treatment of human Namalwa B-cell lymphoma cells with RI-TATp53C′ peptide induces apoptosis. 
Cells were treated with wild-type or mutant peptide, and DNA content was analyzed by flow cytometry 24 h after peptide addition. (B) Long-term survival of SCID mice harboring lethal peritoneal Namalwa lymphoma tumor burden after RI-TATp53C′ peptide treatment. Namalwa lymphoma cells were IP injected into SCID mice and allowed to proliferate for 48 h. Mice were then injected 16 times over 20 d with vehicle control ( n = 16), 900 μg of wild-type RI-TATp53C′ peptide ( n = 12), or 900 μg of mutant peptide ( n = 6). Mean survival duration was 35 d for vehicle-treated mice and 33 d for mice receiving mutant peptide, whereas 50% of mice treated with wild-type RI-TATp53C′ peptide remained healthy at 150 d after tumor cell injection. The subset of RI-TATp53C′ peptide-treated animals that succumbed to peritoneal carcinomatosis could have failed treatment either because of the emergence of peptide-resistant tumor cells or because of insufficient treatment duration. To distinguish between these two possibilities, we isolated TA3/St and Namalwa cells from animals that failed treatment. In both cases, the cells readily proliferated in culture under the same conditions as the parental cell population (data not shown). RI-TATp53C′ peptide treatment of reconstituted TA3/St cells induced a G1 arrest similar in extent to that of the parental cell line ( Figure 6 A). RI-TATp53C′ peptide treatment of reconstituted Namalwa cells also inhibited the viability of both parental and reconstituted cells to the same degree ( Figure 6 B). These observations demonstrate that treatment failure is not due to acquisition of RI-TATp53C′ peptide resistance and suggest that an extended treatment protocol (greater than 12 d) may lead to a further enhancement of survival in these preclinical cancer models.
Figure 6 Tumor-Reconstituted Cells from Treated Mice Remain Sensitive to RI-TATp53C′ Peptide-Induced G1 Arrest or Apoptosis in Culture (A) TA3/St cells were recovered from an A/J mouse treated with RI-TATp53C′ peptide and grown in DMEM/10% FBS. Recovered cells were treated with increasing concentrations of RI-TATp53C′ peptide and then analyzed for DNA content by flow cytometry 24 h later. (B) Namalwa cells were recovered from a SCID mouse treated with RI-TATp53C′ peptide and grown in RPMI plus 10% FBS. Recovered cells were treated with increasing concentrations of RI-TATp53C′ peptide. After 2 d, the number of viable cells was assessed by Trypan blue exclusion and normalized to the number of viable untreated cells. Mean and standard deviation of multiple experiments are depicted. Discussion Advanced-stage peritoneal carcinomatosis and disseminated peritoneal lymphomas are often resistant to current chemotherapy treatment ( Parsons et al. 1996 ), and new strategies for treating these diseases are clearly needed. The need to develop different therapeutic modalities to restore tumor suppressor function is acutely illustrated by the current limitations of viral/DNA-based strategies for delivering tumor suppressor genes to cancer cells in patients ( McCormick 2001 ). Here we show that macromolecular biological cargo can be delivered via TAT-mediated transduction in order to modulate tumor biology in vivo. Specifically, we find that delivery of a transducible p53-activating peptide in sensitive tumor cells inhibits solid tumor growth in vivo (see Figure 3 ) and dramatically extends survival (greater than 6-fold), yielding disease-free animals in terminal peritoneal cancer models of human metastatic disease (see Figures 4 and 5 ). The vast majority of tumors express either wild-type p53 protein or a full-length p53 point mutant ( Vousden and Lu 2002 ). 
This observation has led to the hypothesis that reactivation of endogenous p53 protein will be a useful means of treating cancer. The data presented here provide evidence for this hypothesis by showing that TAT-mediated delivery of a p53-activating peptide in vivo is an effective treatment for multiple preclinical cancer models. This macromolecular approach to p53 reactivation has certain advantages over the limited number of small molecule-based strategies reported to reactivate mutant p53 in vivo ( Foster et al. 1999 ; Bykov et al. 2002 ). First, the RI-TATp53C′ peptide can activate wild-type p53 in addition to several p53 contact mutants. Second, small molecules may suffer from a lack of specificity ( Rippin et al. 2002 ) in comparison to larger, more information-rich macromolecules. Finally, recent investigations into the mechanism of TAT-mediated transduction ( Richard et al. 2002 ; Fittipaldi et al. 2003 ) suggest that, unlike small molecules, TAT-linked cargo is taken up by macropinocytosis (Wadia et al. 2004) and is therefore not susceptible to the multidrug resistance phenotype. Theoretically, tumors could avoid the RI-TATp53C′ peptide action by mutating or deleting p53; however, we did not observe peptide resistance here (see Figure 6 ). Therefore, we conclude that linking PTDs to p53C′ and to other p53-activating peptides may be an effective therapeutic strategy applicable to a significant fraction of human cancers. The work presented here provides several broad lines of evidence for the general feasibility of applying in vivo TAT-mediated transduction to cancer therapy. First, we find that inversion of the p53C′TAT peptide sequence and synthesis with D-amino acids results in a highly stable peptide (RI-TATp53C′) that retains both biological activity and the ability to transduce into cells. 
Given the rapid degradation of L-residue-containing peptides in vivo ( Chorev and Goodman 1993 ), use of retro-inverso transformations with D-isomer residues and/or other stabilizing procedures will likely be essential for the pharmacological use of transducible peptides. Second, given the history of virus-mediated gene delivery, the necessity of validating new therapeutic approaches to systemic disease in the context of an intact immune system cannot be overstated. Consequently, here we demonstrate that TAT-mediated systemic delivery inhibits tumor growth in immune competent animals. Finally, most studies on anticancer transduction peptides have relied primarily on the use of solid, subcutaneous tumor growth as a measure of efficacy ( Datta et al. 2001 ; Harada et al. 2002 ). Although informative, such studies are inherently limited by the minimal impact that subcutaneous tumors have on the biology of the host and by the failure of this type of tumor to closely mimic human disease. In contrast, the more rigorous peritoneal carcinomatosis and peritoneal lymphoma models used here require that therapeutic agents be able to suppress tumors to such an extent that the deleterious effects of the tumor on host physiology are substantially ameliorated. This is a particularly salient point because cancer patients generally do not succumb to the primary tumor burden but to complications from metastatic disease ( Fidler 2003 ). Indeed, anticancer therapeutics are defined as clinically successful by their ability to alleviate pathology and extend survival and not simply by their ability to reduce tumor volume. Our work here, combined with that of Fulda et al. (2002), demonstrates that transducible agents can effectively treat rigorous models of terminal cancer. Current clinical use of macromolecular biological therapies is limited to agents that have an extracellular mode of action.
The preclinical data presented here demonstrate a proof-of-concept that intracellular delivery of biologically active macromolecular cargo by TAT-mediated transduction can modify specific pathways in vivo and that this approach potentially serves as a foundation for the generation of new classes of intracellular biological therapeutics. Materials and Methods Cell culture and flow cytometry. TA3/St (gift of W. G. Kaelin), H1299 (gift of R. K. Brachmann), and human foreskin fibroblast (M. Haas) cells were maintained in DMEM plus 10% fetal bovine serum (FBS) and penicillin/streptomycin (P/S). Namalwa cells (American Type Culture Collection, Manassas, Virginia, United States) were maintained in RPMI plus 10% FBS, P/S. HCT116 cells (gift of B. Vogelstein) were grown in McCoy's medium plus 10% FBS, P/S. All cells were maintained at 37°C in 5% CO₂. Short-term cell viability was assessed by counting Trypan blue-excluding cells on a hemocytometer. Long-term cell viability was assessed by colony formation assay. After serial dilution and 10 d of culture, colonies were washed in PBS and stained with 1% methylene blue. Cellular senescence was assessed by X-Gal staining as previously described ( Schwarze et al. 1999 ), except for the use of PBS (pH 6.0). For cell cycle analysis, TA3/St cells were treated with 0.25–10 μM peptide and Namalwa cells with 40 μM peptide. DNA was stained 24 h later with 10 μg/ml propidium iodide in 0.5% NP-40 (TA3/St cells) or Draq5 (Namalwa cells) per the manufacturer's instructions (Qbiogene, Carlsbad, California, United States). DNA profiles were analyzed using a FACScan and CellQuest software (Becton Dickinson, Palo Alto, California, United States). Peptide synthesis. Peptides were synthesized by standard Fmoc chemistry on an ABI 433A Peptide Synthesizer (Applied Biosystems, Foster City, California, United States). Crude peptides were purified by reverse-phase HPLC over a C18 preparatory column (Varian, Palo Alto, California, United States).
The identity of all peptides was confirmed by mass spectrometry. Promoter activity assays. In a 96-well dish, 4 × 10⁴ cells were plated per well. The next day, H1299 cells were transfected with 15 ng of TK-Renilla (Promega, Madison, Wisconsin, United States), 200 ng of PG13-Luc, and one of the following: 0.3 ng of empty vector, 0.3 ng of p53 expression vector, or 1 ng of mutant p53 expression vector (gift of R. K. Brachmann). SW480 cells were transfected with 25 ng of TK-Renilla and 250 ng of PG13-Luc reporter plasmid. Cells were all transfected using Lipofectamine 2000 per the manufacturer's protocol (Invitrogen, Carlsbad, California, United States). After 5 h, the transfection medium was removed and peptides were added to cells. Luciferase activity was measured 24 h later with the Dual Luciferase Reporter Assay System per the manufacturer's instructions (Promega). Animal tumor models. For TA3/St tumor models, 4- to 8-wk-old immune competent A/J female mice were obtained from Jackson Laboratory (Bar Harbor, Maine, United States). Solid TA3/St tumors were generated by subcutaneous injection of 1.5 × 10⁶ TA3/St cells in 200 μl of Hanks' balanced salt solution (HBSS). Tumor volume was estimated by V = (a² × b)/2, where a is the short axis and b is the long axis of the tumor. IP TA3/St tumors were generated by injection of 2 × 10⁶ TA3/St cells IP in 400 μl of HBSS. For the Namalwa lymphoma tumor model, 6- to 8-wk-old CB17 SCID female mice were obtained from Charles River Laboratory (Wilmington, Massachusetts, United States). Then, 5 × 10⁵ Namalwa lymphoma cells were injected IP in 400 μl of HBSS. Peptide was dissolved in water, brought to 600 μl in PBS, and injected IP. All animal studies were approved by the University of California, San Diego, Institutional Animal Care and Use Committee. Histology. Mice harboring solid TA3/St tumors were injected with 650 μg of biotinylated RI-TATp53C′ peptide and sacrificed 1 h postinjection.
Sections from frozen tumors were stained with Vectastain Elite ABC Kit and DAB substrate per the manufacturer's instructions (Vector Laboratories, Burlingame, California, United States). Statistical analysis. Student's t-test was used to determine statistical significance ( p < 0.05) in all experiments except animal survival experiments, in which the Wilcoxon Rank-Sum Test was performed.
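The survival comparisons in this article were evaluated with the Wilcoxon rank-sum test. As a minimal sketch, the normal-approximation form of the test can be written in pure Python; the survival times below are hypothetical, chosen only to echo the reported group means (roughly 11 d for controls versus more than 70 d for peptide-treated mice), and a real analysis would use an exact or tie-corrected implementation:

```python
import math

def rank_sum_test(a, b):
    """Wilcoxon rank-sum (Mann-Whitney) test, normal approximation.
    Returns (W, z): the rank sum of sample `a` over the pooled data
    and its z-score under the null hypothesis. Ties get midranks;
    the tie correction to the variance is omitted for brevity."""
    pooled = sorted(a + b)

    def midrank(v):
        lo = pooled.index(v) + 1       # first 1-based rank of v
        hi = lo + pooled.count(v) - 1  # last 1-based rank of v
        return (lo + hi) / 2

    n1, n2 = len(a), len(b)
    w = sum(midrank(v) for v in a)
    mean = n1 * (n1 + n2 + 1) / 2
    var = n1 * n2 * (n1 + n2 + 1) / 12
    return w, (w - mean) / math.sqrt(var)

# Hypothetical survival times (days), echoing the reported group means.
vehicle = [9, 10, 11, 11, 12]
treated = [70, 72, 75, 80, 85]
w, z = rank_sum_test(vehicle, treated)
print(w, round(z, 2))  # 15.0 -2.61
```

With samples this small the normal approximation is rough; it is shown only to make the rank-based logic of the test concrete.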
512281 | The FU gene and its possible protein isoforms | Background FU is the human homologue of the Drosophila gene fused whose product fused is a positive regulator of the transcription factor Cubitus interruptus (Ci). Thus, FU may act as a regulator of the human counterparts of Ci, the GLI transcription factors. Since Ci and GLI are targets of Hedgehog signaling in development and morphogenesis, it is expected that FU plays an important role in Sonic, Desert and/or Indian Hedgehog-induced cellular signaling. Results The FU gene was identified on chromosome 2q35 at 217.56 Mb and its exon-intron organization determined. The human developmental disorder Syndactyly type 1 (SD1) maps to this region on chromosome 2 and the FU coding region was sequenced using genomic DNA from an affected individual in a linked family. While no FU mutations were found, three single nucleotide polymorphisms were identified. The expression pattern of FU was thoroughly investigated and all examined tissues express FU. It is also clear that different tissues express transcripts of different sizes and some tissues express more than one transcript. By means of nested PCR of specific regions in RT/PCR-generated cDNA, it was possible to verify two alternative splicing events. This also suggests the existence of at least two additional protein isoforms besides the FU protein that has previously been described. This long FU and a much shorter isoform were compared for the ability to regulate GLI1 and GLI2. None of the FU isoforms showed any effects on GLI1-induced transcription, but the long form can enhance GLI2 activity. Apparently FU did not have any effect on SUFU-induced inhibition of GLI. Conclusions The FU gene and its genomic structure were identified. FU is a candidate gene for SD1, but we have not identified a pathogenic mutation in the FU coding region in a family with SD1.
The sequence information and expression analyses show that transcripts of different sizes are expressed and subjected to alternative splicing. Thus, mRNAs may contain different 5'UTRs and encode different protein isoforms. Furthermore, FU is able to enhance the activity of GLI2 but not of GLI1, implicating FU in some aspects of Hedgehog signaling. | Background The signaling molecule Hedgehog (Hh) and components of its intracellular signaling pathway have been the subject of intensive research in several species from fruit fly to man during recent years. Numerous developmental and morphogenic processes are controlled by the Hedgehog family of proteins. Much effort has been directed at identifying components of the signaling pathway and their respective roles and interactions [for an extensive review see [ 1 ]]. In Drosophila , Hh signaling to the transcription factor Cubitus interruptus (Ci) is mediated by a protein complex consisting of Ci and three other cytosolic proteins. These are the costal 2 (cos2), suppressor of fused (su(fu)) and fused (fu), where fu is a kinase-domain-containing protein with positive regulatory activities in Hh induction of Ci-mediated transcriptional activation. Hh binds to its receptor patched (ptc), a 12-membrane-spanning protein, leading to the activation of another membrane protein smoothened (smo) [ 2 , 3 ]. Smo is a 7-transmembrane protein that, by an unknown mechanism, signals to the Ci-containing protein complex, leading to activation of Ci. Vertebrate homologues of these Drosophila genes and proteins have been identified during the last decade. To a large extent the signaling pathway has been conserved in vertebrates. However, the picture is more complicated since some of the Drosophila genes have two or more vertebrate homologues. There are three Ci homologues in vertebrates, GLI1, GLI2 and GLI3. GLI1 has activation properties whereas GLI2 and GLI3 have both activation and repression activities [reviewed in [ 4 ]].
It is expected that the human homologue of fu (FU) is a positive regulator of GLI proteins, whereas the su(fu) homologue SUFU is a negative regulator. It has been shown by several groups that SUFU inhibits both GLI1 and GLI2 transcriptional activity and has major effects on the shuttling between cytosol and nucleus [ 5 - 7 ]. In a similar way it was shown in C3H/10T½ cells that FU is a positive regulator of GLI2 but with little effect on GLI1 [ 8 ]. FU is a 1315 residue protein with high similarity to fu in the N-terminal kinase domain. Interestingly, it was discovered that mutations in PTCH1 , the human counterpart of ptc , underlie the Nevoid Basal Cell Carcinoma Syndrome (NBCCS) [ 9 , 10 ]. Patients with NBCCS (also known as Gorlin syndrome) have developmental abnormalities and eventually develop basal cell carcinoma (BCC) and other tumors like medulloblastoma and rhabdomyosarcoma [ 11 , 12 ]. Also SMO and SUFU mutations as well as overexpression of GLI1 or GLI2 can lead to BCC or medulloblastoma [ 13 - 16 ]. Thus, investigations of this signaling pathway, its genes and protein components, is not only important for understanding development and morphogenesis, but also for cancer biology. Here three FU cDNA clones have been identified and used for sequence analysis, identification and structural description of the FU gene, as well as for construction and subcloning of FU expression vectors. Using the available public databases the FU gene was found to be present in a sequenced BAC clone from chromosome 2. FU is located in the same region of chromosome 2q34-q36 to which the human limb malformation disorder Syndactyly type 1 (SD1) has recently been mapped in a large German pedigree [ 17 ] and confirmed in an Iranian family [ 18 ]. Its possible association with this condition was investigated by sequencing the coding exons of the FU gene in an affected member from the German family [ 17 ]. 
The tissue expression pattern of FU has been determined using an RNA array and Northern blots. FU is expressed in all 72 tested tissues. It is clear that not only a single transcript is expressed. Instead transcripts of different sizes are seen and some tissues apparently express more than one major transcript. From the genomic structure and the cDNA clones it was possible to predict several alternative splicing events and consequently the likely expression of different protein isoforms. Two of the isoforms were expressed in HEK293 cells and tested for their ability to regulate the activity of GLI1 and GLI2, showing positive effects on GLI2 but not on GLI1. Results Chromosomal localization of FU The sequence information derived from the FU cDNA clones 1HFU, 2HFU and Ngo3689 (see Methods) allowed the identification of the FU gene in a 200 kb BAC clone (AC009974) from chromosome 2. The gene is localized to 2q35 at 217.56 Mb using the Ensembl [ 19 ] annotation. The Ensembl gene prediction programs have identified most, but not all (21 of 29 exons; the published FU [ 8 ] predicts 26 exons) of the FU structure and named the gene STK36 (serine/threonine protein kinase 36). Chromosome 2q35 is the locus of several genetically based disorders. Both Syndactyly type 1 (SD1) and Brachydactyly type A1 (BDA1) have been mapped to this region [ 17 , 18 , 20 ]. Recently, the gene responsible for BDA1 has been identified as IHH (Indian Hedgehog) one of the vertebrate Hh homologues [ 21 ]. IHH is located in the vicinity of FU on chromosome 2 (217.94 Mb) less than 400 kb away. In order to determine if alterations of FU are responsible for SD1, the FU coding region (exons 3–29) and the flanking intronic regions were sequenced using genomic DNA from an affected member of an SD1 family whose trait maps to the 2q34-q36 region [ 17 ] and an unrelated control individual. No FU mutations were detected in this study, although three single nucleotide polymorphisms were identified. 
These included a T to C transition in intron 10, 17 bp 5' of exon 11 (IVS10-17T>C), causing gain of a BstNI site, and a G to A transition in exon 16, 17 bp 5' of the end of the exon (1748G>A), causing substitution of glutamine for arginine at amino acid 583 (R583Q) and loss of an AciI site. The altered restriction sites created by these sequence changes were tested in 44 CEPH unrelated individuals. The results showed that both changes are normal sequence variations as previously reported in the NCBI SNP database. The third change was a G to A transition in exon 27 (3008G>A), causing substitution of aspartic acid for glycine at amino acid 1003 (G1003D). By sequencing exon 27 in 8 affected and 6 unaffected members of the SD1 family [ 17 ], the disease variant could be observed in affected and unaffected members of the family, and a homozygous healthy individual was found. This variant has also been reported previously as a single nucleotide polymorphism in the NCBI SNP database. FU structure None of the obtained cDNA clones contain sequence from all FU exons, but they allow determination of the exon-intron organization of FU . Figure 1 shows the structure of the FU gene. The cDNA clones are outlined to account for the predicted structure. Only clone 1HFU and Ngo3689 contain exons from the 5'non-coding region. To the 5' side of the sequence encoded by exon 3 these clones are different, indicating that alternative 5' untranslated regions (UTR) from different exons can be used. Exons 3 to 9 encode the N-terminal kinase domain. None of the cDNA clones encodes the FU protein that has previously been described [ 8 ]. 1HFU lacks the sequence encoded by exon 8, which results in a frame shift and a premature termination of translation. However, it cannot be unambiguously excluded that this may encode a very short protein isoform having only a partial kinase domain. 1HFU also includes the sequence from exon 13, which encodes an in frame stop codon. 
Neither Ngo3689 nor 2HFU contains the sequence encoded by exon 13. It is suggested that inclusion of exon 13 gives rise to a shorter protein (S-FU) of 474 residues, encoded by exons 3 to 13. The previously described [ 8 ] long form of FU (L-FU) having 1315 residues is encoded by exons 3 to 29 without inclusion of exon 13. An additional alternative splice variant is suggested from the Ngo3689 clone. The first 63 bp of the sequence in exon 24 are missing. This results in a protein that is 21 residues shorter than L-FU (encoded by exons 3 to 29 without exon 13 and the 63 bp). Since almost all of the sequence from exon 24 is missing (only 18 bp are left) this isoform is termed L-FUΔ24. The Ngo3689 clone also contains all the 289 bp from intron 17, but whether this represents a true alternative splice variant is doubtful. Figure 1 FU gene structure . All sequenced segments of the available FU cDNA clones (1HFU, 2HFU and Ngo3689) were identified in a BAC clone (AC009974) from chromosome 2. This allowed the identification of the exonic sequences and thereby also the introns. The FU protein sequence [8] and translational analysis provided information about the role of different exons. Exons 1 and 2 encode 5'UTRs shown in brown. Exon 3 contains the initiating ATG codon approximately in the middle (position 90–92 from 5'end). Exons 3 to 9 encode the kinase domain shown in red. As judged from the cDNA clones the sequences encoded by exons 8, 13 and part of exon 24 are subjected to alternative splicing. Both exon 13 and 29 encode in frame stop codons. The cDNA clones end with a poly A + tail at the same position starting 706 bp from the stop codon (TGA) in exon 29. Multi tissue array and northern blot analyses A tissue array with poly A + RNA from 72 human tissues was hybridized with a labeled probe 3' of the kinase domain. It was clear from this array that all examined tissues express FU to some extent. 
The highest amounts of FU transcripts were detected in testis and pituitary (not shown). This is in agreement with the previous results by Northern blotting, showing highest FU expression in testis [ 8 ]. This analysis revealed that most tissues express an approximately 5 kb transcript [ 8 ]. The Northern blot analysis was here repeated with a larger number of tissues and a probe containing a 3' portion of the gene (exon 28). Figure 2 shows results of the three different Northern blots used. Here the transcripts are estimated to be a bit larger than the reported 5 kb, generally in the range of 6 to 7 kb. It should be emphasized that the identified cDNA clones are approximately 5 kb and that this seems closer to the correct sizes of transcripts, though they may appear larger on the Northern blots. Adult skeletal muscle, thymus, spleen, liver, small intestine, placenta, lung and leukocytes show a faint 6.5 kb band. However, the adult tissues brain, heart, colon and thyroid express a shorter transcript of 6 kb. Adrenal seems to preferentially express a band in the 6.5 to 7 kb range. In pancreas, fetal brain and fetal kidney it appears that at least two bands are expressed in the range from 6 to 7 kb. In addition, mRNA from fetal brain and lung gives rise to a band of much larger size around 9.5 kb. It is not clear if this constitutes a transcript that has not been fully processed, or whether it may contain sequences from as yet unidentified exons, for instance unknown 5'UTRs. Fetal lung and liver clearly preferentially express transcripts of different sizes, 7 and 6 kb respectively. It is confirmed that adult testis shows the highest expression but also pancreas, kidney, fetal brain and kidney stand out, in agreement with the previously reported Northern analysis [ 8 ]. Since one major alternative splicing event seems to involve exon 13, we attempted to evaluate the tissue specificity of this. 
A probe containing only the exon 13 sequence was used in hybridization of the Northern blots. This resulted in smeary bands irrespective of the hybridization conditions used (not shown). A likely explanation is that the probe is too short to achieve high specificity hybridization, or perhaps the transcripts containing exon 13 are degraded much faster, possibly due to the process of nonsense mediated decay [ 22 ]. Figure 2 Northern blot analysis of FU expression in human tissues . Three commercially available Northern blots were hybridized with a labeled FU probe and analyzed by phosphorimaging. The blots have RNA from endocrine organs, other adult and fetal tissues as indicated. Size markers are shown to the left. Nested PCR analyses To examine more specifically the expression of exons that are involved in alternative splicing events, nested PCR was performed on cDNA probes generated from whole tissue RNA by reverse transcription. Sequences containing segments including exons 8, 13 and 24 were amplified and analyzed on agarose gels (Fig. 3 ). It was not possible to detect transcripts that lack exon 8 (Fig. 3 , panel A). In contrast, it appears that transcripts both with and without exon 13 are present (Fig. 3 , panel B). Expression of transcripts without exon 13 is clearly most prevalent in all tissues. The expression of the longer form seems to be proportional to the expression of the shorter form. This indicates that this splicing event is not subjected to any significant tissue specific regulation. However, since a detectable amount of transcripts including exon 13 are present, it is likely that S-FU is also expressed in the tissues. The analysis of alternative splicing of the part encoded by exon 24 turned out to be difficult and several primer pairs were tested before reliable results could be obtained (Fig. 3 , panel C). 
Interestingly, the examined tissues show very different expression patterns, suggesting that alternative splicing in this case is a regulated event. Most tissues express all of exon 24 but in small intestine and prostate clear expression of a transcript without the 63 bp is observed, and in testis expression of transcripts both with and without this segment is found. Sequence analysis shows that the 63 bp segment is likely to encode part of a leucine zipper domain and therefore a putative protein interaction may be lost in this isoform (L-FUΔ24). Figure 3 Analyses of alternative splicing by nested PCR . Nested PCR was performed on cDNA from 8 tissues, over regions implicated to undergo alternative splicing. These are encoded by exons 8 (panel A), 13 (panel B) and 24 (panel C). Available cDNA clones either with or without these parts served as controls as indicated. The PCR products were analyzed on agarose gels as shown. Functional analyses of FU isoforms Investigations into the ability of FU isoforms to regulate GLI transcription factors and SUFU have here been initiated by expression of both L-FU and S-FU as well as 2HFU, which does not have a full kinase domain, in HEK293 cells. The 293 cells, unlike the previously used C3H/10T½ cells [ 8 ], do not have a complete Hedgehog signaling pathway. Thus, it is possible to determine if FUs have direct effects on GLI proteins as it was done for SUFU [ 5 ]. The assay is based on the induction of a luciferase reporter construct having 12 consecutive binding elements for GLI transcription factors [ 5 ]. In all transfection assays GLI1 was able to induce the luciferase reporter 100–250 fold, whereas GLI2 induced the reporter some 15–30 fold, depending on cell density and the amount of construct used. Figure 4 shows the results of these expression analyses. As shown previously [ 5 ] SUFU has a strong inhibitory effect on GLI1. Moreover, a similar strong effect was seen on GLI2 (Fig. 4, panel A). 
The results presented are typical for a large number of experiments and depend on the amount of GLI and SUFU that is used. In contrast to the strong effects seen with SUFU, none of the FUs revealed major changes (Fig. 4 , panel B and C). It is clear that the FUs are not able to regulate GLI1 at all, though L-FU and 2HFU have a weak (2–3 fold) positive effect on GLI2. It appears that L-FU has a slightly stronger effect than 2HFU. This is qualitatively the same result as was obtained in C3H/10T½ cells, where the L-FU was also compared to a kinase-dead mutant and a 546 residue variant similar to S-FU. Thus, in both cases there is no evidence that the kinase domain is required for the activation of GLI2. Unlike the previous analyses [ 8 ], it cannot be confirmed that FUs have a direct effect on SUFU function (Fig. 4 , panel D). The inhibitory effect of SUFU on GLI1 is not relieved by the addition of FU. The effect on GLI2/SUFU cannot be distinguished from the effect on GLI2 alone, implying that FU does not regulate SUFU but may only affect GLI2. Since 293 cells lack components of the Hedgehog signaling pathway, it is possible that an effect of L-FU on SUFU [ 8 ] requires the presence and activity of additional molecules. Figure 4 Regulation of GLI induced transcriptional activity in transfected HEK293 cells . HEK293 cells were transfected with a GLI inducible luciferase reporter construct together with a GLI1 or GLI2 expression construct. The cells were also transfected with a β-galactosidase construct that served to correct for transfection efficiency and cell density. The effects of FU and SUFU were tested by cotransfecting FU and SUFU expression constructs and compared to effects with empty vectors. Panel A shows typical examples of SUFU effects on GLI1 and GLI2. Panel B shows typical examples of the effects of different FU constructs on GLI1 and GLI2; experiments that were performed in parallel are shown, to illustrate the different impact on the GLI proteins. 
L-FU is shown with squares, 2HFU with triangles and S-FU with circles. Panel C shows the impact of FU constructs (400 ng) on GLI1 and GLI2 as summarized by the results of at least 4 different experiments. Panel D shows the impact of FU constructs (400 ng) on SUFU inhibited GLI1 and GLI2 as summarized by the results of 3 different experiments. In these experiments the GLI induced transcriptional activity was inhibited to 20–40 % of the non-inhibited level by the addition of SUFU (i.e. 2½ to 5 fold inhibition). Relative activity (panel A) is given as compared to activity in mock transfected cells and normalized activity (panel B-D) is given relative to activity in cells transfected with GLI and GLI+SUFU set to 1. Discussion The FU gene In the present paper, the FU gene was identified and its structure determined. FU consists of 29 exons of which exons 1 and 2 encode 5'UTRs, exon 3 contains the initiating ATG codon and exons 13 and 29 contain in frame stop codons (Fig. 1 ). Exon 1 and 2 may serve as alternative first exons, like the alternative exons 1, 1A and 1B found in the PTCH1 gene [ 23 ]. Exons 3 to 9 encode the kinase domain. This segment has strong similarity to Drosophila fu, whereas the remaining C-terminal part has a much weaker similarity [ 8 ]. Using the DIALIGN program [ 24 ] it is possible to align fu to L-FU in two regions in the C-terminal part (not shown). These are largely encoded by exons 15–16 and 22–29. This indicates that exons 10–14 and 17–21 may have been recruited to the FU gene during evolution. Investigations of Syndactyly patient material We investigated the possibility that FU underlies SD1 based upon the fact that FU lies within the localization interval for SD1 and that it is part of the Sonic Hedgehog signaling pathway, which participates in digital patterning [ 1 ]. Although three previously reported single nucleotide polymorphisms were identified, we did not detect any mutation in the FU coding region or flanking intronic regions. 
While these results do not implicate FU in the causation of SD1, it is possible that this disorder is caused by mutations in the noncoding regions not screened in this study. Alternatively, SD1 could be caused by a genomic rearrangement not identified by sequence analysis, although no altered bands were detected in an affected member of the SD1 family by Southern analysis using a FU cDNA clone as probe (data not shown). Expression analyses Analyses of FU expression have shown that transcripts are detected in all tissues examined. For the first time evidence is presented showing that more than one transcript can be expressed from this gene. The Northern blots clearly show that FU transcripts of different sizes indeed exist. Here the transcripts are estimated to be slightly bigger than previously reported and in some tissues more than one transcript is evident. It is clear from the available cDNAs and RT/PCR based transcript analyses that alternative splicing occurs. Additionally, it is also clear that different 5'UTRs are present in the transcripts. At least two protein isoforms, besides the previously described L-FU [ 8 ], may be produced. The S-FU isoform is the one that most dramatically differs from L-FU, consisting only of the N-terminal one third of L-FU. S-FU expression results from inclusion of exon 13 in the mature transcript. This alternative splicing event was detected in all tissues examined and at an apparently constant ratio. Also a case of regulated alternative splicing was detected by RT/PCR, but with a much less dramatic impact at the protein level, since it only results in the loss of 21 residues encoded by exon 24. However, the expression reveals a possible tissue specific regulation of this alternative splicing event. This may well reflect that L-FUΔ24 plays a biological role different from L-FU. 
Since it appears that the mRNA for L-FUΔ24 is not expressed in small intestine and prostate it can be speculated that FU has a different role there, if a leucine zipper is truly lost in L-FUΔ24. It is intriguing that testis appears to express transcripts both with and without the 63 bp segment and is also the tissue with strongest expression. Perhaps the expression of L-FUΔ24 and L-FU together is linked to the function of Desert Hedgehog which has been shown to have a particular role in spermatogenesis [ 25 ]. Whether interactions with GLI proteins, SUFU or other components of the signaling pathway are altered, and if this has any impact on GLI or SUFU activities, remains to be investigated. Certainly this adds another variable to the complicated picture of Hedgehog signaling and GLI regulation in vertebrates. Functional investigations and perspectives The assessment of functionality revealed that S-FU was not able to regulate GLI1 or GLI2 when expressed in 293 cells. In contrast, both L-FU and a variant lacking a full kinase domain (2HFU) were able to enhance GLI2 induced transcription. These results are qualitatively similar to those previously reported in C3H/10T½ cells [ 8 ]. L-FU and 2HFU were only able to enhance GLI2 activity 2 to 3 fold in 293 cells, whereas 5 to 8 fold inductions are seen in C3H/10T½ cells. This may reflect the fact that the latter cell line expresses additional components of the Hedgehog signaling pathway, which are required for full activity of FU. Unlike the previous investigations [ 8 ] it was not possible to see an effect of L-FU on SUFU. Again this difference may be explained by the various properties of the cell lines used. Understanding the signaling events downstream of SMO may reveal functional differences of the proteins involved, as compared to their fruit fly counterparts. Although SUFU inhibits GLI transcription factors and su(fu) inhibits Ci, there are still striking differences. 
As yet there have been no reports of a cos2 counterpart in vertebrates. Instead it has been observed that FU interacts with all GLI proteins and SUFU [ 8 ], even though fu does not bind to Ci [ 26 ]. It has also been observed that both L-FU and SUFU can be found in the nucleus [ 5 - 8 ], which has not been observed for fu or su(fu). It is likely that both FU and SUFU are shuttled in and out of the nucleus by binding to GLI proteins [ 5 , 8 ]. Though basic activities of both FU and SUFU in regulation of GLI have been conserved, it also appears that significant differences from their fruit fly counterparts exist. Clearly, FU is not having an effect on GLI1 similar to the one seen on GLI2. Additional investigations are needed in order to establish the role of FU in hedgehog signaling and GLI control. The role of the different isoforms also remains to be elucidated. These have to be tested individually for their regulation of all GLI proteins and proteolytic products. Fu is known to have at least two separate physiological functions in the fly, one of which is dependent upon the kinase domain [ 27 ]. Likewise, FU may well have two or more distinct functions in signaling, represented by different domains, isoforms and protein interactions. Conclusions FU is localized on chromosome 2q35 very close to IHH . Though SD1 has been mapped to this region, we have not identified a causative role for FU in this disorder. FU consists of 29 exons of which 1 and 2 encode 5'UTRs and 3 to 9 encode a kinase domain. For the first time it is shown that transcripts of different sizes are expressed and alternative splicing takes place, probably leading to the generation of different protein isoforms. FU protein is likely to be involved in the Hedgehog signaling pathway since it can enhance the activity of GLI2. In contrast, it has no effect on GLI1 and an effect on SUFU cannot be observed in 293 cells. 
Methods The FU cDNA clones Two almost full-length human FU clones were identified in the Incyte database. Both 1HFU and 2HFU were cloned in the vector pINCY. A third clone was available from Kazusa DNA Research Institute (Chiba, Japan) and termed Ngo3689 (Gene name KIAA1278). This clone was in the vector pBluescript II SK + . The human BAC clone AC009974 was obtained from Research Genetics (Huntsville, AL). The human GLI, human SUFU, 12GLI-RE-luciferase reporter and β-galactosidase vectors have been described previously [ 5 ]. FU cDNA subcloning Expression constructs for different isoforms of FU were obtained by direct PCR or extension overlap PCR, using end-primers having specific restriction sites and the high fidelity Vent R DNA polymerase (New England Biolabs, Beverly, MA). The cDNA for the long form of FU (L-FU) was subcloned into pCDNA3.1-HisB using the NotI and XbaI sites. 2HFU and the short FU (S-FU) cDNAs were subcloned into pCDNA3.1-HisC using the KpnI and XbaI sites. DNA sequencing and analyses All PCR generated products were analyzed by DNA sequencing. The Big-Dye Terminator Cycle Sequencing kit (Applied Biosystems, Foster City, CA) was used according to instructions. Sequencing was performed at CyberGene AB (Huddinge, Sweden). Sequence alignments were done using the DIALIGN program [ 24 ] available at the BiBiServ from University of Bielefeld, Germany. Sequence information of proteins, clones and chromosomes was obtained from the Swiss-Prot [ 28 ], Entrez [ 29 ] and Ensembl [ 19 ] databases. Analyses of genomic DNA from family members with SD1 After informed consent was obtained, blood was taken from affected and unaffected family members and DNA extracted from peripheral blood leukocytes according to standard methods. Intronic primers were designed to amplify exons 3–29 of FU either as single exons with flanking intronic sequences or as products containing two exons with flanking intronic sequence and the complete intervening intron. 
The primer sequences can be obtained upon request. PCR was performed in a standard fashion and products were sequenced using either the Thermosequenase CyTM5.5 Dye Terminator or DYEnamic ET Dye Terminator Cycle Sequencing kits (Amersham Biosciences, Piscataway, NJ). Electrophoresis and analysis were performed on either an Automated Laser Fluorescence (ALF) DNA sequencer or MegaBACE DNA sequencer (Amersham Biosciences) after purification with Autoseq columns (Amersham Biosciences). For exon 27, the PCR product was purified using the enzymatic ExoI-SAP purification method, sequenced using the Terminator Cycle Sequencing kit (Amersham Pharmacia Biotech) and analysed on an ABI 3100 genetic analyzer (Applied Biosystems). PCR products containing exon 11 or exons 15/16 were digested with BstNI or AciI, respectively, and the bands resolved on 3–4% agarose gels to confirm sequence changes in the patient with SD1 and to determine their frequency in a panel of 44 CEPH individuals. Northern blot analysis Commercially available Human MTN 12-lane Blot 2, Human Fetal MTN Blot II and Human Endocrine System MTN Blot Northern blots (Clontech, Palo Alto, CA) were obtained and used with PCR generated hybridization probes. DNA probes were made by direct PCR, amplifying the sequences corresponding to exon 13 and 28. The generated fragments were then labeled with 32 P-ATP using the High Prime DNA labeling kit (Boehringer Mannheim, Mannheim, Germany) according to instructions. Hybridization of Northern blots was done with labeled DNA probes in ExpressHyb (Clontech) at 68°C according to instructions. The blots were then analyzed with a Fujix Bas 2000 phosphorimager (Fuji Photo Film, Tokyo, Japan). Expression analysis by nested PCR The expression of exon 8, 13 and 24 sequences in mRNA was assessed by nested PCR on RT/PCR generated cDNA samples from eight different tissues as provided in Human Multi Tissue cDNA Panel II (Clontech). 
Two sets of primers were made for each exon to be investigated. The outer pairs were used in a first PCR using 5 μl of the cDNA and Vent polymerase. In a second PCR 0.5 μl of the first PCR products was used together with the inner primer pairs. These pairs were also used for PCR of FU cDNA clones that served as controls. The primer pairs are listed in Table 1 . The PCR reactions were performed using 95°C for 1 min denaturation, elongation at 72°C and 40 cycles. The exon 8 and 13 sequence PCRs were performed using 60°C 1 min annealing and 1 min elongation. The exon 24 sequence PCRs were performed using 59°C 1 min annealing and 1 min 30 sec elongation. Table 1 Primers for nested PCR analyses Sequence from exon Outer PCR primer pairs Inner PCR primer pairs 8 fwd 5'-AACATCCTCCTCGCCAAGGGT 5'-ATATGAACTGGCAGTAGGCAC rev 5'-TGCTCTCCTGACTGT GCCTGAGTAGACTCA 5'-TTACCCTTGGGGGCCAACCGA 13 fwd 5'-AACATCCTCCTCGCCAAGGGT 5'-AGCCTGTGCCTATTCAACTGA rev 5'-TGCTCTCCTGACTGT GCCTGAGTAGACTCA 5'-GCCTCCCGGCAGAAGGAATAC 24 fwd 5'-CGCAAGTGAGCCAGCCACTGC 5'-CAGCCAGCTCAGGCCATCCCT rev 5'-CTGGACCGCAGGAATCT GGAATCACATGCTATGGG 5'-CCAGGCCTGTGAGAAGGCTGA Reporter gene assays The cDNA clones were used in transfections of HEK293 cells in 24 well culture plates. Basically this was done as previously described [ 5 ]. In short, the 293 cells were transfected using Superfect Transfection Reagent (Clontech), with 100 ng of the luciferase reporter and β-galactosidase as well as different amounts and combinations of GLI, FU and SUFU constructs. For every assay there was a corresponding control with an equal amount of empty vector. The cells were harvested 24 hours after transfection with 50 μl of lysis buffer from the Galacto-Light kit (Applied Biosystems). Of this, 10 μl was used for β-galactosidase assay and the rest for luciferase assay using the Luciferase Assay kit (BioThema, Dalarö, Sweden). Analyses were done on a Microplate Luminometer (Berthold Detection System, Pforzheim, Germany). 
Authors' contributions TØ contributed to the experimental design; participated in sequencing, sequence analysis and subcloning; did the gene analysis, Northern blots, nested PCR, cell experiments; and made the manuscript draft. DBE and CES designed and carried out the patient analysis; and contributed to the manuscript. MM provided clones; contributed with subcloning; made the array analysis; and contributed to the manuscript. MMN and RCB provided the SD1 patient material; performed segregation analyses in the SD1 family; and edited the manuscript. PGZ contributed to the experimental design and subcloning; and edited the manuscript. RT contributed to the experimental design; did data base analysis; and edited the manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC512281.xml |
449905 | Honeybees' Distance Perception Changes with Terrain of Flight Path | null | When a trip for food can require a three-mile flight, it pays to get the directions right, especially if you're a bee. Bees more typically forage within a 600- to 800-yard radius, expending a significant amount of energy—a fact they seem keenly aware of: for reasons that remain unclear, bees tend to ignore directions that send them to a target on water. It's been known since Aristotle's time that returning foragers dance a little jig for their hivemates, presumably regaling them with tales of nectar-laden flora. Some 2,300 years later, zoologist Karl von Frisch correlated dance choreography to the direction and distance of a food source, eventually winning the Nobel prize for his work. Since then, researchers have been working out the details of bee communication, such as how fellow foragers interpret the “waggle dance” and how dancers perceive and convey navigational details of a trip. A key aspect of this information exchange is how bees estimate distance. A honeybee's “odometer” generally runs faster when it flies over land than water. Recent studies suggest that bee odometers are driven by the “optic flow” experienced during flight—or, simply put, bees appear to log distance by measuring the rate that images of passing terrain move in their eye during flight. This theory comes from observations that when bees fly a given distance, they indicate a much longer distance—by performing a longer waggle—for low-flying trips than for those at higher altitudes, presumably because flying at higher altitudes limits the bees' ability to perceive changing images. Likewise, bees flying through short, narrow tunnels filled with visual elements waggle a disproportionately long distance. These observations also raise questions about which visual aspects of the environment—contrast, texture, distribution of objects—are most important to a bee's perception of image flow. 
To investigate the factors driving the bee trip odometer, Mandyam Srinivasan and colleagues trained bees to feed at locations along two different routes in a natural environment, then compared their waggle dances. One route was entirely over land; the other started on land, shifted to water, and ended back on land. Both routes were the same distance, about 630 yards. Bees trained to feed at a boat in the middle of a lake had no trouble getting there, which was no guarantee based on reports that bees have trouble flying across lakes, often plunging into the drink. They had less success recruiting their colleagues to share in the bounty, even though their waggles clearly placed the feeder on the lake. (This finding supports an earlier, controversial theory that suggests experienced bees know water rarely harbors bee food.) The length of the bees' waggle dance increased faster with distance when they flew over land than when they flew over water. This disparity indicates that land provides a stronger “odometric signal” than water. “The honeybee's odometer,” the authors explain, “runs at a slower pace when flight is over water.” Overland flights tend to offer high contrasts and rich textures, while flights over water tend to offer low contrasts and sparse textures. Most likely, it is the high contrast of land surfaces that triggers a stronger odometric signal. But land surfaces also show variation in contrast, which was reflected in the bees' dance. One section of the land-only route was a paved bicycle path, a low contrast surface that the bees waggled as a relatively shorter distance. Whether or not the contrast theory holds, Srinivasan and colleagues conclude, differences in the visual environment trigger differences in odometric signal. The odometer racks up yards depending on the nature of the terrain, whether it be land or water, during flight. 
The great Belgian playwright and avid bee-keeper Maurice Maeterlinck wondered at the language of bees in his 1901 book, The Life of the Bee , deciding it must correspond “to senses and properties of matter wholly unknown to ourselves.” As Srinivasan and colleagues show here, the bee's view of the world indeed corresponds to a unique way of interpreting the landscape—and of sharing news of their travels with their hivemates. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC449905.xml |
512530 | Organization of the mitochondrial genomes of whiteflies, aphids, and psyllids (Hemiptera, Sternorrhyncha) | Background With some exceptions, mitochondria within the class Insecta have the same gene content and, generally, a similar gene order, allowing the proposal of an ancestral gene order. The principal exceptions are several orders within the Hemipteroid assemblage, including the order Thysanoptera, a sister group of the order Hemiptera. Within the Hemiptera, a number of completely sequenced mitochondrial genomes are available that have a gene order similar to that of the proposed ancestor. None, however, are available from the suborder Sternorrhyncha, which includes whiteflies, psyllids and aphids. Results We have determined the complete nucleotide sequence of the mitochondrial genomes of six species of whiteflies, one psyllid and one aphid. Two species of whiteflies, one psyllid and one aphid have mitochondrial genomes with a gene order very similar to that of the proposed insect ancestor. The remaining four species of whiteflies had variations in the gene order. In all cases, there was the excision of a DNA fragment encoding for cytochrome oxidase subunit III ( COIII )-tRNA gly -NADH dehydrogenase subunit 3 ( ND3 )-tRNA ala -tRNA arg -tRNA asn from the ancestral position between genes for ATP synthase subunit 6 and NADH dehydrogenase subunit 5. Based on the position in which all or part of this fragment was inserted, the mitochondria could be subdivided into four different gene arrangement types. PCR amplification spanning from COIII to genes outside the inserted region and sequence determination of the resulting fragments indicated that different whitefly species could be placed into one of these arrangement types. 
A phylogenetic analysis of 19 whitefly species based on genes for mitochondrial cytochrome b, NADH dehydrogenase subunit 1, and 16S ribosomal DNA as well as cospeciating endosymbiont 16S and 23S ribosomal DNA indicated a clustering of species that corresponded to the gene arrangement types. Conclusions In whiteflies, the region of the mitochondrial genome consisting of genes encoding for COIII-tRNA gly - ND3-tRNA ala - tRNA arg - tRNA asn can be transposed from its ancestral position to four different locations on the mitochondrial genome. Related species within clusters established by phylogenetic analysis of host and endosymbiont genes have the same mitochondrial gene arrangement indicating a transposition in the ancestor of these clusters. | Background Whiteflies, psyllids, and aphids correspond to superfamilies within the suborder Sternorrhyncha (Hemiptera) [ 1 ]. These insects share a number of common properties that are a consequence of their utilization of plant phloem as their diet. This mode of feeding is accomplished by means of needle-like stylets that probe plant tissues between plant cells until they enter the phloem-sieve elements. Due to this mode of feeding, some species are of major agricultural importance in that they vector plant pathogens and in high numbers may cause plant debilitation due to excessive nutrient consumption [ 1 ]. Whiteflies, psyllids, and aphids have an obligatory association with prokaryotic endosymbionts localized in specialized cells called bacteriocytes that constitute a larger structure called the bacteriome [ 2 - 4 ]. In the past, numerous studies have been performed on the phylogeny of some of these endosymbionts and their hosts [ 2 - 5 ]. The results have indicated congruence between the endosymbiont and the host derived phylogeny. 
This observation has been interpreted as being the consequence of an infection of an insect ancestor by a prokaryote and the vertical transmission of the endosymbiont, resulting in cospeciation or cocladogenesis. In a recent study of whiteflies, we compared the phylogeny based on endosymbiont 16S*-23S* rDNA to the phylogeny of the host based on several mitochondrial genes [ 6 ]. During this study, we found that in the whitefly, Bemisia tabaci , the order of some of the mitochondrial genes was quite different from the frequently found order of genes in the mitochondria of the class Insecta. This observation led us to obtain the full sequence of the mitochondrial genome of representatives of the suborder Sternorrhyncha. Due to the observed differences in the order of genes in the mitochondrial genome of whiteflies, we obtained additional mitochondrial sequences from species representative of the major phylogenetic clusters previously established on the basis of whitefly mitochondrial and endosymbiont genes [ 6 ]. Previous studies of the phylogenetic relationships of members of the Sternorrhyncha, using host 18S rDNA, indicated that it is a monophyletic group [ 7 - 9 ]. These studies also showed that aphids and whiteflies were more closely related to each other than to psyllids. In animals, the mitochondrial genome is generally circular (14–17 kb), is maternally transmitted, and has a relatively simple genetic structure and a rapid rate of sequence change [ 10 - 12 ]. Of the thirty-seven genes found in animal mitochondria, thirteen encode for proteins, consisting of three subunits of cytochrome oxidase ( COI , COII , COIII ), two subunits of ATP synthase ( atp6 , atp8 ), seven subunits of NADH dehydrogenase ( ND1 , ND2 , ND3 , ND4 , ND4L , ND5 , ND6 ), and cytochrome b ( cytB ). Two genes encode for the large subunit of ribosomal RNA ( 16S ) and the small subunit of ribosomal RNA ( 12S ). 
In addition, there are 22 tRNAs, two for leucine and two for serine, and one tRNA each for the remaining eighteen amino acids. In general, there is conservation of the gene order within phyla but variation between phyla [ 10 , 13 - 17 ]; the tRNA genes are subject to more change in their position than the genes for proteins and rRNAs. The order of mitochondrial genes has been suggested to be a good phylogenetic marker for studies of relationships [ 14 ]. The animal mitochondrial genome is generally very compact, with few if any intergenic spaces. Usually there is one (or rarely more) noncoding region, frequently following 12S rDNA . Such a region most often has a reduced G+C content and all or some of the following properties: a) direct repeats, b) inverted repeats, c) stretches of "T"s, "A"s, or "TA"s. By analogy with other well-studied mitochondria, such a region is considered to be a putative origin of DNA replication and a region from which transcription is initiated [ 10 , 12 ]. Variation in mitochondrial size is generally a consequence of variation in the length of the repeats in the noncoding region and not in the number of structural genes. Early studies within the class Insecta suggested conservation of the gene order over a wide range of different organisms, indicating an ancestral gene order for this group [ 10 , 18 ]. However, more recent studies have shown that within the Hemipteroid assemblage, there is considerable variation in the order of genes in the orders Phthiraptera, Psocoptera, and Thysanoptera, but no variation in the order Hemiptera (that includes the suborder Sternorrhyncha) [ 18 - 21 ]. The complete sequences of the mitochondria of a representative of the Phthiraptera (wallaby louse) and of the Thysanoptera (plague thrips) have been obtained [ 18 , 20 ]. The latter shows major differences from the ancestral gene order. 
The Hemiptera and the Thysanoptera are sister groups, and it was consequently of interest to obtain sequences of the mitochondrial genomes of the former. Since the sequence of mitochondrial genomes is poorly conserved, sequence determination of a portion of the genome is useful for the study of closely related species or the population structure within a species [ 22 , 23 ]. The availability of completely sequenced mitochondrial genomes is also an aid to the design of primers for the PCR amplification of the regions selected for population studies. Results Evolutionary relationships within the Sternorrhyncha Table 1 gives the properties and the accession numbers of the mitochondrial DNA sequences determined in this study. An unrooted phylogenetic tree showing the relationships of whiteflies, psyllids and aphids, based on mitochondrial cytB (partial), ND1 , and 16S rDNA , is presented in Fig. 1 . A similar tree is obtained when the amino acid sequence of CytB (partial) and ND1 is used. The sole difference is the position of Neomaskellia andropogonis , which becomes part of the cluster containing Bemisia tabaci , Tetraleurodes acaciae , Aleurochiton aceris , and Trialeurodes vaporariorum . Whiteflies, psyllids and aphids have associations with different primary endosymbionts that are transmitted vertically and are essential for the survival of the insect host [ 2 - 6 ]. The time for the establishment of these endosymbiotic associations and the emergence of the composite organism is generally estimated to be between 100 and 200 million years ago [ 2 ]. The representative species chosen for study (Fig. 1 ) probably span the range of diversity within whiteflies, psyllids, and aphids. The maximum % difference in the DNA sequence of these organisms is 33.5% for whiteflies, 29.7% for psyllids and 13.1% for aphids, suggesting that the rate of mitochondrial sequence change in aphids is considerably less than that in whiteflies and psyllids. 
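Pairwise divergence percentages like those quoted above can be computed from aligned sequences as an uncorrected p-distance. The sketch below is illustrative only (the study's distances may well have been computed with model-based corrections, and the example sequences are hypothetical, not data from this work):

```python
def p_distance(seq1, seq2):
    """Uncorrected pairwise difference (p-distance) between two aligned
    DNA sequences, as a percentage; alignment gaps ('-') are skipped."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    compared = mismatches = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a == "-" or b == "-":
            continue  # ignore gapped columns
        compared += 1
        if a != b:
            mismatches += 1
    return 100.0 * mismatches / compared

# Toy example: one mismatch over nine ungapped compared sites.
print(round(p_distance("ATTACGTGGA", "ATAACGT-GA"), 1))  # 11.1
```

Maximum values of such pairwise distances over all species pairs in a group would give figures comparable to the 33.5%, 29.7%, and 13.1% cited for whiteflies, psyllids, and aphids.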
Resolution of the order of branching among these insect groups is not possible using mitochondrial sequences, since their rapid rate of change has left them saturated. Table 1 Properties and accession numbers of mitochondrial DNA sequences determined in this study. Organism Type Mitochondrial sequence Size (bp) G+C content GenBank accession number Bemisia tabaci whitefly-A complete 15,322 25.9 AY521259 Tetraleurodes acaciae whitefly-B complete 15,080 28.0 AY521626 Neomaskellia andropogonis whitefly-C complete 14,496 18.7 AY572539 Aleurochiton aceris whitefly-D complete 15,388 22.1 AY572538 Trialeurodes vaporariorum whitefly-Y complete 18,414 27.7 AY521265 Aleurodicus dugesii whitefly-Y complete 15,723 13.8 AY521251 Pachypsylla venusta psyllid complete 14,711 26.3 AY278317 Schizaphis graminum aphid complete 15,721 16.1 AY531391 Bemisia argentifolii whitefly-A cytB-COIII 4,796 23.2 AY521257 Bemisia sp. whitefly-A 12S-COIII 985 19.4 AY572845 cytB-12S AY521257 a Aleuroplatus sp. whitefly-B cytB-COIII 4,540 27.4 AY521256 Tetraleurodes mori whitefly-B cytB-COIII 4,416 25.3 AY521263 Vasdavidius concursus whitefly-C cytB-COIII 3,374 20.0 AY648941 Siphonius phillyreae whitefly-D cytB-12S 4,561 22.3 AY521268 Bactericera cockerelli psyllid cytB-12S 3,077 28.0 AY601890 Calophya schini psyllid cytB-12S 3,044 26.3 AY601891 Glycaspis brimblecombei psyllid cytB-12S 3,081 26.8 AY601889 Diuraphis noxia aphid cytB-12S 3,180 15.8 AY601892 Melaphis rhois aphid cytB-12S 3,184 17.0 AY601894 Schlechtendalia chinensis aphid cytB-12S 3,188 16.1 AY601893 Daktulosphaira vitifoliae phylloxera cytB-12S 3,215 22.6 AY601895 a From [6]. Figure 1 Unrooted phylogenetic tree showing the relationships of members of the Sternorrhyncha (whiteflies, aphids, and psyllids) . The tree is based on mitochondrial cytB , ND1 , and 16S rDNA sequences. Maximum likelihood analysis; values at nodes are bootstrap percentages from 500 replicates; only nodes supported by 70% or greater are shown. 
* by species name designates the organisms for which the complete mitochondrial sequence has been determined. Mitochondrial genomes with a similar gene order With some notable exceptions, within the class Insecta the order of the mitochondrial genes is highly conserved and has led to the proposal of an ancestral gene order [ 10 , 18 ]. An identical or a similar gene order has been observed in the mitochondrion of Pachypsylla venusta (psyllid), Schizaphis graminum (aphid), as well as Aleurodicus dugesii and Trialeurodes vaporariorum (whiteflies) (Fig. 2 ). In psyllids and aphids, tRNA-C is followed by tRNA-Y (Fig. 2 , extreme right), which corresponds to the ancestral Insecta gene order. In most whiteflies (Fig. 2 , 3 , 4 , 6 ), the order of these tRNA genes is reversed, and this probably constitutes the whitefly ancestral gene order. In the mitochondria of T. vaporariorum , tRNA-G is transposed from its position between COIII and ND3 to a position between tRNA-W and tRNA-Y (Fig. 2 ). tRNA-S1 was not detected in the mitochondria of S. graminum ; this tRNA and tRNA-Q were not detected in A. dugesii . Figure 2 Mitochondrial gene arrangements of psyllids, aphids, and selected whitefly species, all of which have a highly similar gene order. Genes are transcribed from left to right except for the underlined genes, which are transcribed in the opposite direction. Dot in box indicates a putative origin of replication and/or a region of direct repeats. Empty box indicates 40–60 nt that do not code for a readily identifiable tRNA. Horizontal bar between genes indicates that they are contiguous with few or no nucleotides between them. The change in the position of tRNA-G in T. vaporariorum is traced by lines and the 5X preceding the box containing tRNA-S2 indicates that this putative tRNA is present five times in a direct repeat. Figure 3 Differences of the A type gene order from the postulated whitefly ancestral gene order. 
The principal changes are indicated by thick lines with complete arrowheads. Direction of the arrowheads or half arrowheads indicates direction of transcription. Figure legend same as for Fig. 2. Dashed double-headed arrow indicates the sequenced genes from other listed whitefly species. Figure 4 Differences of the B type gene order from the postulated whitefly ancestral gene order. Figure legend same as in Fig. 2 and 3. Figure 6 Differences of the D type gene order from the postulated whitefly ancestral gene order. Figure legend same as in Fig. 2 and 3. Mitochondria of whiteflies with transposition of COIII-(tRNA-G)-ND3-(tRNAs-A-R-N) A number of whitefly mitochondria had transpositions of DNA fragments containing COIII-(tRNA-G)-ND3-(tRNAs-A-R-N) . In most cases in which these genes are removed, there is a change in the direction of transcription of the adjacent downstream tRNA-S1 from clockwise to counterclockwise (Fig. 3 , 5 , 6 ). There is variation in the mitochondrial position into which these genes are transposed. In addition, there are differences with respect to the retention of the number and the order of the excised tRNA genes at the mitochondrial location in which the genes are inserted. The maximal insertion involves all of the genes from the excised fragment in their original order, (tRNAs-A-R-N)-ND3-(tRNA-G)-COIII (Fig. 3 ); the minimal insertion involves ND3-(tRNA-G)-COIII (Fig. 4 , 5 ). In all insertions, the transcription direction is altered from that in the original position. Based on the location of the insertions and the adjacent genes, we have subdivided these transpositions into four types (A-D) (Fig. 3 , 4 , 5 , 6 ). In all cases, it would appear that the excision involved the removal of COIII-(tRNA-G)-ND3-(tRNAs-A-R-N) . However, the DNA that is inserted always contains COIII-(tRNA-G)-ND3 and may contain all or only some of the tRNA genes. Figure 5 Differences of the C type gene order from the postulated whitefly ancestral gene order. 
Figure legend same as in Fig. 2 and 3. Transposition of the A type is shown in Fig. 3 . In this case, COIII-(tRNA-G)-ND3-(tRNAs-A-R-N) is removed from the inferred ancestral position and placed between 12S rDNA and tRNA-I . Additional changes involve the position of tRNA-D , tRNA-Q and the direction of transcription of tRNA-S1 and tRNA-E . There is a total of 5 differences between the A type gene arrangement and the ancestral whitefly gene order. Sequence determination of smaller DNA fragments from two related species ( cytB-COIII ) was consistent with the same gene order (Fig. 3 ). Transposition of the B type is shown in Fig. 4 . In this case, ND3-(tRNA-G)-COIII is inserted into a location downstream of 12S rDNA and is bounded by tRNAs that have also changed locations ( tRNAs-Q-V and tRNAs-R-D ). In addition, the position of tRNA-A is changed as compared to the ancestral position. There were 6 differences from the putative ancestral gene order. No tRNA genes for N, S1, and I were detected. Sequence determination of a smaller DNA fragment ( cytB-COIII ) from two related species was consistent with the same gene order. Transposition of the C type is shown in Fig. 5 . In this case, ND3-(tRNA-G)-COIII is inserted downstream of 16S rDNA between tRNA-P and tRNA-C . Another major difference is the change in the direction of the transcription of tRNA-V and 12S rDNA . Other differences include the change in the order and the position of tRNA-Y and tRNA-C , the change of position of tRNA-P , and the direction of transcription of tRNA-S1 . Putative tRNA-W is transcribed clockwise. A small change in the span of the DNA fragment resulted in a putative tRNA-S2 , transcribed counterclockwise. The initially adjacent tRNAs-A-R-N as well as tRNA-I were not detected. There was a total of 7 differences between the whitefly ancestral gene order and the C type gene order. 
Sequence determination of a fragment of mitochondrial DNA from a related whitefly species was consistent with the C type gene order. The D type gene order is shown in Fig. 6 . In this case, (tRNAs-R-A)-ND3-(tRNA-G)-COIII is found after tRNA-S2 and before tRNA-N . Additional differences from the ancestral gene order involve the change in position of tRNA-N and tRNA-Q and the direction of transcription of tRNA-S1. tRNA-I was not detected. The total number of differences between the ancestral gene order and the D type gene order is 4. The sequence of a mitochondrial DNA fragment from a related species indicated a gene order of the D type (Fig. 6 ). PCR-based screening for excision of COIII-(tRNA-G)-ND3-(tRNAs-A-R-N) and identification of transposition types We have devised a set of oligonucleotide primers complementary to COII and ND5 that allow the amplification of the DNA between these two genes. The size of the resulting fragments is a potential indication of the presence or absence of COIII-(tRNA-G)-ND3-(tRNAs-A-R-N) between COII and ND5 (Fig. 2 ). Fig. 7 shows the results obtained with insects containing mitochondria that have these genes in the ancestral position (lanes 6–8, bands of 3.7 kb) and those in which they have been excised from this position (lanes 2–5, bands of 2.2 to 2.3 kb). Figure 7 Agarose gel electrophoresis of PCR products amplified from whole insect DNA using primers complementary to regions encoding COII and ND5 . A, B, C, D refer to different gene arrangement types; Y, ancestral arrangement. Lanes 1 and 9, molecular size markers; lane 2, Bemisia tabaci ; lane 3, Tetraleurodes acaciae ; lane 4, Neomaskellia andropogonis ; lane 5, Aleurochiton aceris ; lane 6, Aleyrodes elevatus ; lane 7, Trialeurodes vaporariorum ; and lane 8, Aleurodicus dugesii . In addition, we have devised a set of PCR primers that allow the distinction of the four types of transpositions. 
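The two gel-based screens reduce to band-size lookups: for the COII–ND5 screen, a ~3.7 kb product indicates the fragment is still in the ancestral position and a ~2.2–2.3 kb product indicates excision; for the COIII–cytB screen, products of 4.9, 4.5, 3.5, and 1.5 kb correspond to arrangement types A–D. A minimal sketch in Python, using those sizes from the text; the matching tolerance is an assumption for illustration, not a value from the study:

```python
# Diagnostic band sizes (kb) quoted in the text for the two PCR screens.
COII_ND5_ANCESTRAL = 3.7          # genes retained between COII and ND5 (Fig. 7)
COII_ND5_EXCISED = (2.2, 2.3)     # genes excised from this position (Fig. 7)
TYPE_SIZES = {"A": 4.9, "B": 4.5, "C": 3.5, "D": 1.5}  # COIII-cytB (Fig. 8)

def screen_excision(size_kb, tol=0.2):
    """COII-ND5 screen: is COIII-(tRNA-G)-ND3-(tRNAs-A-R-N) still in place?"""
    if abs(size_kb - COII_ND5_ANCESTRAL) <= tol:
        return "ancestral position"
    lo, hi = COII_ND5_EXCISED
    if lo - tol <= size_kb <= hi + tol:
        return "excised"
    return "unexpected size"

def arrangement_type(size_kb, tol=0.2):
    """COIII-cytB screen: assign arrangement type A-D from band size."""
    for t, expected in TYPE_SIZES.items():
        if abs(size_kb - expected) <= tol:
            return t
    return None  # band does not match any known type

print(screen_excision(2.25), arrangement_type(4.5))  # excised B
```

In practice, sizes estimated from a gel carry real measurement error, so the tolerance would have to be chosen with the marker resolution in mind.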
Using oligonucleotide primers complementary to COIII and cytB , the PCR fragments shown in Fig. 8 were obtained. The sizes characteristic of arrangement types A, B, C, and D were 4.9, 4.5, 3.5, and 1.5 kb, respectively. Figure 8 Agarose gel electrophoresis of PCR products amplified from whole insect DNA using primers complementary to regions encoding COIII and cytB . A, B, C, D refer to different gene arrangement types; Y, ancestral arrangement. Lanes 1 and 8, molecular size markers; lane 2, Bemisia tabaci ; lane 3, Tetraleurodes acaciae ; lane 4, Neomaskellia andropogonis ; lane 5, Aleurochiton aceris ; lane 6, Trialeurodes vaporariorum ; and lane 7, Aleurodicus dugesii . Non-coding regions Mitochondrial genomes of insects are very compact. The principal non-coding segment of the genome is a low G+C content region, usually following 12S rDNA [ 10 - 12 ]. The low G+C region usually has stretches of "T"s or "A"s as well as multiples of the sequence "TA." Another feature of this region may be inverted and direct repeats. Fig. 9 presents a diagrammatic summary illustrating some of the properties of the non-coding regions of the mitochondria of the studied insects. Only direct repeats and their sizes are indicated in this figure. No consistent pattern of inverted repeats was found, and these are not indicated in the diagrams. All of the non-coding regions in the vicinity of 12S rDNA had a G+C content lower than the G+C content of the full genome (Fig. 9 ). The decrease ranged from 3.2 to 10.0%. Some of these regions of lower G+C content, adjacent to 12S rDNA , contained direct repeats (Fig. 9 , Adu, Tva, Aac). In Bta and Nan (Fig. 9 ), the direct repeats were in a non-coding region following COIII that also had a decrease in the G+C content. The noncoding regions of Tac, containing direct repeats, had a G+C content that was actually higher than that of the full Tac mitochondrial genome. However, the segment before the repeats had regions with a lower G+C content. 
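The per-segment G+C comparisons summarized here amount to subtracting a segment's G+C content from that of the whole genome. A minimal sketch (the example sequence is a toy, not real mitochondrial data):

```python
def gc_percent(seq):
    """Mole % G+C of a DNA sequence (case-insensitive)."""
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

def gc_difference(genome, start, end):
    """G+C of the whole genome minus G+C of genome[start:end] - the
    quantity tabulated for non-coding regions in Fig. 9 (a positive
    value means the segment is relatively A+T-rich)."""
    return gc_percent(genome) - gc_percent(genome[start:end])

# Toy illustration: an AT-rich leading segment in a 50% G+C "genome".
genome = "ATATATGCGCGCGCGCATAT"
print(gc_difference(genome, 0, 6))  # 50.0
```

Applied to the sequenced genomes, this kind of comparison yields the 3.2–10.0% decreases reported for the regions near 12S rDNA.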
In Sgr (Fig. 9 ), the direct repeats were between tRNA-E and tRNA-F and had essentially no decrease in the G+C content. In this organism and Pve (Fig. 9 ), the region following 12S rDNA had a decreased G+C content but did not contain substantial direct repeats. Figure 9 Summary of the properties of non-coding regions found in the mitochondrial genomes of psyllids, aphids and whiteflies. Letter abbreviations for amino acids denote tRNAs for the designated amino acid; thin lines, length of non-coding region; thick lines, direct repeats; numerals above line, length of direct repeats; numbers in parentheses at end of fragment denote its length in bp; column of numbers at right represents the difference between the G+C content of the full mitochondrial genome and the considered segment. Pve, psyllid, Pachypsylla venusta ; Sgr, aphid, Schizaphis graminum ; Adu, whitefly, Aleurodicus dugesii ; Tva, whitefly, Trialeurodes vaporariorum ; Bta, whitefly, Bemisia tabaci ; Tac, whitefly, Tetraleurodes acaciae ; Nan, whitefly, Neomaskellia andropogonis ; Aac, whitefly, Aleurochiton aceris . tRNA anticodons In general, the anticodons found in the tRNAs of the mitochondria of whiteflies, psyllids, and aphids were those expected of insect mitochondria [ 12 ]. Some exceptions were "TTT" (instead of CTT) for tRNA-K for B. tabaci and A. dugesii , and "TCT" (instead of GCT) for tRNA-S1 for B. tabaci , A. aceris , N. andropogonis and T. vaporariorum . The latter may be the usual tRNA-S1 anticodon in whiteflies; this tRNA was not detected in T. acaciae and A. dugesii . Discussion The novel aspect of this study is the finding that whitefly mitochondria contain a region of their genome spanning COIII-(tRNA-G)-ND3-(tRNAs-A-R-N) that is prone to excision followed by insertion as a unit or as fragments in different parts of the mitochondrial genome. 
Based on the collection of whiteflies we have examined, this event occurred three or four different times in the ancestors of the studied species. These conclusions are summarized in Fig. 10 , where the phylogeny of the whiteflies is compared to the gene arrangement types. The designation Y refers to the ancestral arrangement established in the species under this designation. Species bracketed under A and B have a similar insertion position for COIII-(tRNA-G)-ND3 (Fig. 3 , 4 ) but differ in adjacent tRNAs, so that it is possible these differences followed the insertion of the transposed fragment in a common ancestor. The positions of the transpositions in species of clusters C and D are very different and are probably the results of independent events. For purposes of this discussion we have chosen the simplest interpretation, but this does not exclude other more complex scenarios. Cluster C is of additional interest since it is related to two species of Aleyrodes that have the ancestral (Y) arrangement. From the 16S*-23S* rDNA sequence divergence of Portiera (the primary endosymbiont of whiteflies) and the estimated rate of endosymbiont sequence change [ 24 ], it is possible to estimate the time of divergence of cluster C and the two Aleyrodes species. This value corresponds to 30–60 million years ago, which is the maximum time for the occurrence of the transposition in an ancestor of cluster C. Figure 10 Summary of the major transpositions occurring in the mitochondria of whiteflies and the relationship of these changes to the phylogeny of whitefly species . The phylogenetic tree was obtained on the basis of combined mitochondrial cytB-ND1-16S rDNA and Portiera endosymbiont 16S* and 23S* rDNA using the maximum likelihood method. Numbers at nodes correspond to bootstrap values after 500 replicates. The combination of the host and endosymbiont sequence data is justified by their cospeciation [6]. 
A, B, C, D indicate transposition type; Y indicates mitochondria with an ancestral gene arrangement. Large arrowhead in mitochondrial genome indicates the original position of the transposed genes. Small arrowheads indicate the position of the insertion of the genes. Arrows outside circle indicate the direction of the transcription of the transposed genes. Arrow by the arrowhead of the C type transposition indicates the changed direction of transcription of the 12S rDNA . tRNAs have been omitted. (*), by species names, indicates that the full mitochondrial genome was sequenced. (+), by species names, indicates that a DNA fragment containing all or a part of the gene encoding for COIII and adjacent genes was sequenced. (o), by species name, indicates that using oligonucleotide primers to COII and ND5 a PCR product was obtained corresponding to a size that was consistent with the presence of COIII-(tRNA-G)-ND3-(tRNAs-A-R-N) in the ancestral position (Fig. 7). The excision of the same mitochondrial fragment at least four times during the evolutionary history of whiteflies suggests that this fragment is prone to transposition. In spite of the apparent similarity of the excisions, we have not been able to find any conserved sequence properties either adjacent to the region of the excised fragment or adjacent to its insertion site. The excision appears to be associated with a change in the direction of transcription of the previously adjacent tRNA-S1 (Fig. 3 , 5 , 6 ) and a change in the direction of transcription of the relocated fragments. As previously noted, the order of the mitochondrial genes is conserved in most insects [ 10 , 14 ]. The major exceptions are within the three hemipteroid orders Phthiraptera, Psocoptera, and Thysanoptera [ 18 , 20 , 21 ]. The rearrangements are different within these three orders, being rather extreme in the Thysanoptera. 
In most insects, the order of the rRNA genes is 16S-(tRNA-V)-12S and the genes are transcribed in the counterclockwise direction [ 10 , 18 ]. A major exception is in the mitochondrion of Thrips imaginis , where these two genes are distant from each other and transcribed in opposite orientations [ 18 ]. In the C type gene order, there is an inversion of 12S-(tRNA-V) that is possibly associated with the insertion of ND3-(tRNA-G)-COIII between 16S and 12S rDNA (Fig. 5 , 10 ). This situation resembles that found in Thrips imaginis in that the rRNA genes are transcribed in opposite directions. In whiteflies, besides rearrangements involving COIII-(tRNA-G)-ND3-(tRNAs-A-R-N) , there are also substantial rearrangements involving single tRNAs. The physiological significance (if any) of these rearrangements is not known. Genes that are highly expressed ( 16S , 12S rDNA ) when separated and transcribed in opposite orientations would have to become part of different transcription units. In addition, we are not certain of the validity or significance of our inability to find a few of the tRNAs. In some cases, this may stem from our inability to recognize them. In other cases, such as tRNAs-A-R-N that are absent in the type C gene order, there would not appear to be any room for these genes on the mitochondrion, and it might be that the tRNAs for these amino acids are provided by the host [ 25 ]. The mitochondrion of A. dugesii has a G+C content of 13.8 moles % (Table 1 ). All the other sequenced whitefly mitochondria have G+C contents of 18.7 to 27.7 moles % (Table 1 ). On the basis of morphological classification, Aleurodicus has been placed into the subfamily Aleurodicinae, while the remaining whitefly species listed in Fig. 10 have been placed into the subfamily Aleyrodinae [ 1 ]. This separation is supported by a phylogenetic analysis of mitochondrial DNA, host 18S rDNA, as well as Portiera DNA from different whitefly species [ 6 - 8 ]. 
It is possible that the common ancestor of whiteflies had a higher G+C content in its mitochondria and that in Aleurodicus there was a decrease. Alternatively, it is possible that the ancestral G+C content was low and increased in the Aleyrodinae. Our work points to the uncertainty inherent in making generalizations from one or a few organisms assumed to be representative of a group. We were fortunate that in the whiteflies the first mitochondrion we chose to study was that of B. tabaci , which had an altered gene order. Had we started with our second or third choice ( T. vaporariorum , A. dugesii ), we would have concluded that the whiteflies have the ancestral mitochondrial gene order and not pursued further studies of mitochondria within this group of insects. Previously, evidence was found of a correlation between the rate of nucleotide sequence change and the rate of gene rearrangement [ 26 ]. If this has general applicability, one would expect conservation of the mitochondrial gene order in aphids, which have a low rate of sequence change (Fig. 1 ), and perhaps some changes in the gene order of psyllids, as has been observed with whiteflies. The relatively localized different changes observed in several whitefly lineages may be of use in the study of the phylogeny and taxonomy of these organisms, as is already indicated from the relatively small sample of organisms studied in the present work. Conclusions Psyllids, aphids, and many whiteflies have mitochondria in which the order of the genes resembles the proposed Insecta ancestral gene order. However, in a variety of whitefly species there is a change in the gene order. In these organisms, there is an excision of a DNA segment containing COIII-(tRNA-G)-ND3-(tRNAs-A-R-N) from the ancestral position, between atp6 and tRNA-S1 , and the insertion of all of these genes or fragments containing COIII-(tRNA-G)-ND3 and tRNAs into different locations on the mitochondrial genome. 
On the basis of the insertion positions, four gene arrangement types were identified. A phylogenetic analysis of 19 whitefly species involving mitochondrial and endosymbiont genes showed that each arrangement type was characteristic of a cluster of related whitefly species, indicating that the transposition occurred in a common ancestor of the related species. The reason for the "restlessness" of this DNA segment in whiteflies and the physiological significance of these rearrangements are not known. Methods Amplification and sequencing of mitochondrial genomes In all cases, the starting material was whole insect DNA that was prepared and used in a previous study [ 6 ]. In our initial attempts at cloning mitochondrial DNA, we used methods previously developed for obtaining clones of insect endosymbiont DNA that have been described in detail [ 27 ]. In outline, this involved obtaining a homologous probe for COI using previously described primers [ 28 ], followed by restriction enzyme and Southern blot analysis of insect DNA. Appropriately sized fragments were electroeluted from agarose gels and cloned into λ-ZAP (Stratagene, La Jolla, California). Following excision of the insert-containing plasmid, the DNA sequence was determined using a double-stranded nested deletion kit (Pharmacia, Piscataway, New Jersey) and, where necessary, custom-made oligonucleotides. As new sequence data was acquired for the mitochondria of several insect species, our ability to design more specific oligonucleotide primers improved. This allowed us to use pairs of primers, in combination with PCR, to obtain the full mitochondrial genome in 2–4 overlapping fragments. Conserved regions of the whitefly mitochondrial genome that are of use for the design of oligonucleotide primers, based on comparisons of six mitochondrial genomes, are given in Table 2 . Usually the oligonucleotide primers had sequences for restriction enzyme sites added at their 5'-ends. 
Table 2 Oligonucleotide primers for PCR amplification of whitefly mitochondrial DNA fragments. a
Primer | Gene | Position on AY5212656 b | Nucleotide sequence of conserved region (5'->3')
F-COI-1 | COI | 172–221 | TCWCATGCWT TTATYATAAT TTTTTTYATR ACWATGCCTT TDGTWATTGG
F-COI-2 | COI | 673–710 | GAYCCHATTT TRTATCAACA YTTDTTTTGA TTTTTTGG
R-COI | COI | 1133–1069 | ACATAATGAA AATGDGCAAC AACAAAATAW GTATCATGHA RACAHACATC HACHGAAGAA TTACC
F-COII | COII | 1845–1876 | CCTTCTATYC GDATTTTDTA TYTAATRGAT GA
R-COII | COII | 2093–2067 | AGGAACHGTY CAAGAATGHA AAACATC
F-COIII | COIII | 3854–3879 | TTAACWGGHT TTCAYGGNTT HCATGT
R-COIII | COIII | 4002–3974 | CARACWAHRT CDACRAAATG TCAGTATCA
F-CYTB | CYTB | 9169–9215 | GCTTTTATRG GBTATATYTT RCCTTGRGGY CARATATCTT TTTGRGG
R-CYTB | CYTB | 9566–9544 | GCTATAATAA AATTTTCTGA ATC
F-ND1 | ND1 | 10275–10314 | ATTCAATRTT AAAWCCWGAA ATWARYTCTG AYTCTCCTTC
R-ND1 | ND1 | 10506–10484 | CAAYTAATTT CDTATGAAAT TAA
F-16S-1 | 16S | 11053–11087 | ACCTGGCTTA CGCCGGTCTG AACTCAGATC ATGTA
F-16S-2 | 16S | 11202–11220 | GCTGTTATCC CTTAGGTAA
R-16S-1 | 16S | 11377–11354 | AAAAGACAAR AAGACCCTTT AGAA
R-16S-2 | 16S | 11526–11483 | TTAAATAGCT GCAGTAWATT DACTGTACTA AGGTAGCATA ATAA
F-12S-1 | 12S | 12273–12308 | ACTTTCCAGT AADTTTACTT TGTTACGACT TATCTT
F-12S-2 | 12S | 12318–12342 | AAGAGTGACG GGCRATTTGT ACATA
R-12S-1 | 12S | 12643–12619 | CTTCAAACTT AAAAAATTTG GCGGT
R-12S-2 | 12S | 12861–12843 | GTGCCAGCAG TWGCGGTTA
a Underlined sequences indicate primers used in this study. b Mitochondrial genome of Trialeurodes vaporariorum. We will illustrate the approach by describing how the full genome of the mitochondrion of T. acaciae was obtained in three overlapping fragments. Using primers F-CYTB and R-12S-2 (Table 2) and PCR, a 3.6 kb DNA fragment was obtained. Similarly, using the primer pairs F-COI-2 and R-CYTB, and F-12S-2 and R-COI, fragments of 7.3 and 5.5 kb, respectively, were obtained.
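The primers in Table 2 make heavy use of IUPAC ambiguity codes (R, Y, W, K, M, S, B, D, H, V, N) to cover sequence variation among the six compared genomes. As an illustration of what such degeneracy means in practice, the following Python sketch, using the standard IUPAC code table, counts and enumerates the concrete sequences a degenerate primer represents; it is an aid to the reader, not part of the original protocol.

```python
from itertools import product

# Standard IUPAC nucleotide ambiguity codes, as used in the Table 2 primers.
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "W": "AT", "S": "CG", "K": "GT", "M": "AC",
    "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT",
}

def degeneracy(primer: str) -> int:
    """Number of distinct concrete sequences a degenerate primer represents."""
    n = 1
    for base in primer.replace(" ", ""):
        n *= len(IUPAC[base])
    return n

def expand(primer: str):
    """Yield every concrete sequence encoded by a degenerate primer."""
    pools = [IUPAC[b] for b in primer.replace(" ", "")]
    for combo in product(*pools):
        yield "".join(combo)

# R-COII from Table 2 contains H, Y, H -> 3 * 2 * 3 = 18-fold degenerate.
print(degeneracy("AGGAACHGTY CAAGAATGHA AAACATC"))  # 18
```

Higher degeneracy broadens the range of templates a primer can anneal to at the cost of lower effective concentration of any one variant.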
For fragments of 4 kb or less, the PCR reaction mixture (10 μl) contained 10 ng insect DNA, 1 μg bovine serum albumin, 5 mM MgCl 2 , 0.2 mM dNTP, 10 pmoles of each primer, and 0.6 U Bio-X-Act DNA polymerase, in Opti-Buffer (Bioline, London, United Kingdom). The PCR program was 94°C for 3 min; 30 cycles of 94°C 30 sec, 55.0–65.0°C (predetermined optimal annealing temperatures) 30 sec, 70°C 5 min; followed by 70°C 10 min. For the 5.5 and 7.3 kb DNA fragments, the PCR reaction mixture was modified by increasing the dNTPs to 0.3–0.4 mM and Bio-X-Act to 0.8 U. The PCR program was 94°C for 2 min; 10 cycles of 92°C 20 sec, 55.0–65.0°C (predetermined optimal annealing temperatures) 30 sec, 68°C 10 min; followed by 20 cycles of 92°C 20 sec, optimal annealing temperature 30 sec, 68°C 10 min with the extension increasing by 15 sec each cycle; followed by 68°C 10 min. The DNA fragments were purified by means of the Wizard SV gel and PCR clean-up system (Promega, Madison, Wisconsin) as directed by the manufacturer. Following digestion with restriction enzymes, the mitochondrial DNA fragments were cloned into pBluescript (Stratagene). In some cases, where difficulty was experienced with this vector due to possible toxicity of the inserts, the low copy number plasmid pWSK130 was used [ 29 ]. The DNA sequence was obtained as described above. Sequences were determined at the University of Arizona (Tucson) LMSE sequencing facility. In some cases, PCR fragments of 1 to 4 kb were directly sequenced after gel purification using custom-made oligonucleotide primers. PCR amplification of other mitochondrial fragments CytB-12S mitochondrial DNA fragments were amplified and cloned into pBluescript as previously described [ 6 ]. CytB-COIII DNA fragments were obtained using oligos WF-CYTB-3 (BamHI, SacII; 5'-GCAGGATCCG CGGCCWTGRG GHCAAATATC WTTTTGRGGD GC-3') and WF-COIII-3 (KpnI; 5'-GTGCGGTACC TTCWATTTGR TATTGRCATT TYGTTGA-3') and cloned into pBluescript.
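In the long-range program above, the 68°C extension of the final 20 cycles grows by 15 sec per cycle (a standard autoextension scheme for long templates). The total extension time this implies can be sketched with simple arithmetic; this assumes the first of those cycles uses the base 10 min extension, which is how we read the protocol.

```python
def total_extension_seconds(cycles: int = 20, base_min: float = 10.0,
                            increment_sec: float = 15.0) -> float:
    """Summed 68 degC extension time over the autoextension segment.

    Cycle c (0-based) extends for base_min minutes plus c increments of
    increment_sec seconds.
    """
    return sum(base_min * 60 + increment_sec * c for c in range(cycles))

# 20 * 600 s + 15 s * (0 + 1 + ... + 19) = 14850 s
print(total_extension_seconds() / 60)  # 247.5 minutes of extension overall
```

So the autoextension segment alone accounts for roughly four hours of the run, which is typical for amplifying 5–7 kb targets.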
COII-ND5 DNA fragments (indicative of the presence or absence of COIII-(tRNA-G)-ND3-(tRNAs-A-R-N)) were obtained by use of oligos WF-COII (5'-TGYTCAGAAA TYTGTGGRGT TAATCAYAGR TTTATRCC-3') and WF-ND5 (5'-TCAGCMTTAG TYCAYTCWTC AACAYTAGTW ACAGCAGG-3'). CytB-COIII fragments (whose size is diagnostic of the arrangement type) were obtained by use of WF-CYTB-1 (5'-TTTATRGGBT ATATYTTRCC TTGRGG-3') and WF-COIII-1 (5'-TATTCWRTWT GATATTGACA TTTYGT-3'). The PCR reaction mixture (10 μl) differed from those above in containing 0.1 mM dNTP and 0.8 U Bio-X-Act DNA polymerase. The PCR program was 94°C for 5 min; 30 cycles of 94°C 30 sec, 56.0–63.1°C (predetermined optimal annealing temperatures) 30 sec, 70°C 5 min; followed by 70°C 10 min. Identification of genes and phylogenetic analyses The protein-coding and rRNA genes were identified by BLAST searches [ 30 ] of GenBank. tRNA genes were identified by tRNAscan-SE [ 31 ], DOGMA [ 32 ] and, in some cases, by eye from the anticodons and inferred secondary structures. The methods used for the phylogenetic analyses have been described [ 6 ]. In Fig. 1, the phylogenetic analysis of mitochondrial cytB-ND1-16S was based on 2730 characters; the analysis in Fig. 10, which besides cytB-ND1-16S also included cospeciating endosymbiont 16S*-23S* rDNA [ 6 ], was based on 6860 characters. List of abbreviations used tRNAs tRNA-[one-letter amino acid abbreviation]; in parentheses, the three-letter amino acid abbreviation followed by the anticodon: tRNA-A (ala, TGC), tRNA-C (cys, GCA), tRNA-D (asp, GTC), tRNA-E (glu, TTC), tRNA-F (phe, GAA), tRNA-G (gly, TCC), tRNA-H (his, GTG), tRNA-I (ile, GAT), tRNA-K (lys, TTT or CTT), tRNA-L1 (leu, TAG), tRNA-L2 (leu, TAA), tRNA-M (met, CAT), tRNA-N (asn, GTT), tRNA-P (pro, TGG), tRNA-Q (gln, TTG), tRNA-R (arg, TCG), tRNA-S1 (ser, TCT or GCT), tRNA-S2 (ser, TGA), tRNA-T (thr, TGT), tRNA-V (val, TAC), tRNA-W (trp, TCA), and tRNA-Y (tyr, GTA).
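Identifying a tRNA "by eye from the anticodon", as described above, amounts to reverse-complementing the anticodon to obtain the codon it reads. The sketch below checks two entries from the abbreviation list this way; only a handful of codons are included, and note that full decoding of these genomes would use the invertebrate mitochondrial code (where, e.g., TGA encodes trp rather than stop), not the standard table shown here.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}
# Subset of codon assignments sufficient for the examples below (DNA codons).
CODON_TO_AA = {"GCA": "ala", "GGA": "gly", "GAC": "asp", "TAC": "tyr"}

def anticodon_to_codon(anticodon: str) -> str:
    """Reverse-complement an anticodon (written 5'->3') to get its codon."""
    return "".join(COMPLEMENT[b] for b in reversed(anticodon))

def amino_acid(anticodon: str) -> str:
    return CODON_TO_AA[anticodon_to_codon(anticodon)]

# tRNA-A (ala, TGC): anticodon TGC pairs with codon GCA.
print(amino_acid("TGC"))  # ala
# tRNA-Y (tyr, GTA): anticodon GTA pairs with codon TAC.
print(amino_acid("GTA"))  # tyr
```

Wobble pairing means a single anticodon can read more than one codon, which is why tools like tRNAscan-SE also weigh the inferred secondary structure.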
Other structural genes atp6 (ATP synthase, subunit 6), atp8 (ATP synthase, subunit 8), COI (cytochrome oxidase, subunit I), COII (cytochrome oxidase, subunit II), COIII (cytochrome oxidase, subunit III), ND1 (NADH dehydrogenase, subunit 1), ND2 (NADH dehydrogenase, subunit 2), ND3 (NADH dehydrogenase, subunit 3), ND4 (NADH dehydrogenase, subunit 4), ND4L (NADH dehydrogenase, subunit 4L), ND5 (NADH dehydrogenase, subunit 5), ND6 (NADH dehydrogenase, subunit 6), 12S (small subunit of mitochondrial ribosomal DNA [rDNA]), 16S (large subunit of mitochondrial rDNA), 16S* (small subunit of primary endosymbiont rDNA), 23S* (large subunit of primary endosymbiont rDNA). Other abbreviations %G+C (moles percent guanine + cytosine in DNA). Authors' contributions MLT cloned and sequenced the mitochondrial genomes of whiteflies. LB cloned and sequenced the mitochondrial genome of a psyllid and an aphid, as well as smaller fragments of mitochondrial DNA from psyllids and aphids. PB directed the research and, in collaboration with MLT and LB, performed the data analysis and wrote the paper.
516018 | Heat shock protein 70 expression, keratin phosphorylation and Mallory body formation in hepatocytes from griseofulvin-intoxicated mice | Background Keratins are members of the intermediate filament (IF) proteins, which constitute one of the three major cytoskeletal protein families. In hepatocytes, keratins 8 and 18 (K8/18) are believed to play a protective role against mechanical and toxic stress. Post-translational modifications such as phosphorylation and glycosylation are thought to modulate K8/18 functions. Treatment of mice with a diet containing griseofulvin (GF) induces, in hepatocytes, modifications in the organization, expression and phosphorylation of K8/18 IFs and leads, in the long term, to the formation of K8/18-containing aggregates morphologically and biochemically identical to the Mallory bodies present in a number of human liver diseases. The aim of the present study was to investigate the relationship between the level and localization of the stress-inducible 70 kDa heat shock protein (HSP70i) and the level and localization of K8/18 phosphorylation in the liver of GF-intoxicated mice. The role of these processes in Mallory body formation was studied as well. The experiment was carried out in parallel on two different mouse strains, C3H and FVB/n. Results GF-treatment induced an increase in HSP70i expression, in K8 phosphorylation on serines 79 (K8 S79) and 436 (K8 S436), and in K18 phosphorylation on serine 33 (K18 S33), as determined by Western blotting. Using immunofluorescence staining, we showed that after treatment, HSP70i was present in all hepatocytes. However, phosphorylated K8 S79 (K8 pS79) and K8 S436 (K8 pS436) were observed only in groups of hepatocytes or in isolated hepatocytes. K18 pS33 was increased in all hepatocytes. HSP70i colocalized with MBs containing phosphorylated K8/18. Phosphorylation of K8 S79 was observed in C3H mouse MBs but was not present in FVB/n MBs.
Conclusions Our results indicate that GF intoxication represents a stress condition affecting all hepatocytes, whereas induction of K8/18 phosphorylation does not occur in every hepatocyte. We conclude that, in vivo, there is no direct relationship between GF-induced stress and K8/18 phosphorylation on the studied sites. The K8/18 phosphorylation pattern indicates that different cell signaling pathways are activated in subpopulations of hepatocytes. Moreover, our results demonstrate that, in distinct genetic backgrounds, the induction of K8/18 phosphorylation can be different. | Background Intermediate filaments (IFs), together with microtubules and actin microfilaments, are the major cytoskeletal components of most vertebrate cells [ 1 - 4 ]. IF proteins constitute a large family of proteins that is divided into five types [ 1 , 2 ]. The expression of the different IF proteins is differentiation and tissue specific [ 1 , 5 ]. Keratins, expressed in epithelial cells, represent the largest and most complex subtype of IF proteins (more than 20 proteins) [ 2 ]. They are classified into two groups, type I (acidic, K9 to K20) and type II (neutral-basic, K1 to K8), which form obligate heteropolymers composed of equimolar amounts of type I and type II keratins [ 2 , 6 ]. It is now generally accepted that, in multilayered epithelia, one of the functions of keratin IFs is to protect the tissue from mechanical stress [ 7 - 9 ]. The first evidence for this function came from studies on epidermis, which showed that transgenic mice lacking epidermal keratins, or expressing mutated keratins, displayed blistering skin disease phenotypes similar to human skin diseases such as epidermolysis bullosa simplex or epidermolytic hyperkeratosis [ 7 , 10 , 11 ]. As for epidermal keratins, the production of transgenic mice targeting K8 or K18 has been necessary to unravel the role of IFs in simple epithelia such as the liver.
In hepatocytes, K8/18 is the only keratin pair and thus both keratins are necessary to form an IF network. Transgenic mice expressing K8 or K18 carrying mutations that affect filament formation develop mild hepatitis and display greater liver sensitivity to mechanical and toxic stress than wild type animals [ 12 , 13 ]. Recent studies from Ku et al. [ 14 - 16 ] have shown that mutations in K8/18 predispose to the development of liver disease in humans. Moreover, modifications in IF organization and the formation of keratin-containing aggregates, named Mallory bodies (MBs), are observed in different liver diseases such as alcoholic hepatitis, Wilson's disease, Indian childhood cirrhosis and liver steatosis in obesity [ 17 - 21 ]. Other proteins, such as ubiquitin and the 70 kDa heat shock protein (HSP70), are also present in MBs and could play a role in their formation [ 22 - 24 ]. Taken together, these results support the hypothesis that keratins are necessary to preserve hepatocyte integrity under stressful conditions. It is still unclear how keratins accomplish these protective roles. Previous studies have shown that modifications in keratin phosphorylation are associated with various conditions such as mitosis, apoptosis and stress, suggesting a role for this post-translational modification in the modulation of keratin-related functions [ 25 - 27 ]. Long-term treatment of mice with a diet containing griseofulvin (GF) induces the development of hepatitis associated with the formation of MBs, which are biochemically and morphologically similar to those found in humans [ 19 , 28 ]. This animal model constitutes a useful tool to investigate keratin dynamics in the response of hepatocytes to the presence of a hepatotoxic agent. In the present study, we investigated the effect of chronic GF intoxication on hepatocytes from the C3H and FVB/n mouse strains.
We monitored the expression of the stress-inducible form of the 70 kDa heat shock protein (HSP70i) and the induction of K8/18 phosphorylation at specific sites: K8 on serine 79 (K8 S79), K8 on serine 436 (K8 S436), K18 on serine 52 (K18 S52) and K18 on serine 33 (K18 S33) (reviewed in [ 26 , 29 ]). We also examined the possible relationship between HSP70i expression and K8/18 phosphorylation during the development of hepatitis and in MB formation. Results Induction of HSP70i and K8/18 phosphorylation upon GF-treatment in C3H and FVB/n mouse livers The modifications in the amounts of HSP70i, K8/18, and phosphorylated keratins (K8 pS79, K8 pS436 and K18 pS33) were analyzed by Western blotting of total proteins from control and GF-treated C3H and FVB/n mouse livers (2 weeks, 6 weeks and 5 months). GF intoxication induced an increase in keratin levels in livers from both mouse strains (Fig. 1A,1B). HSP70i, which was present in control livers of both mouse strains, was also increased by the treatment (Fig. 1A). Figure 1 Biochemical analysis of livers from C3H and FVB/n mice. Western blots from C3H mouse livers: A, K8 and HSP70i; C, K8 pS79; E, K18 pS33; G, K8 pS436. Western blots from FVB/n mouse livers: B, K8; D, K8 pS79; F, K18 pS33. Total proteins from C3H and FVB/n livers were probed with antibodies against K8 pS79, K8 pS436 and K18 pS33 (Fig. 1C,1D,1E,1F,1G). Significant changes in K8 and K18 phosphorylation occurred after GF-treatment in both mouse strains (Fig. 1C,1D,1E,1F,1G). Small amounts of K8 pS79 and K18 pS33 were found in control livers (Fig. 1C,1D and 1E,1F), whereas K8 pS436 was not detected (Fig. 1G). After 2 weeks of treatment, an increase in the amount of all phosphokeratin species studied was observed. The phosphorylation levels of K8 S436 and K18 S33 remained higher than control values in both mouse strains for the entire treatment (Fig. 1E,1F,1G).
However, when compared with the 2 week treatment, a decrease in K8 pS436 and K18 pS33 was noted after 5 months of treatment (Fig. 1E,1F,1G). Similarly, a decrease in K8 S79 phosphorylation was observed after 5 months of treatment in C3H mice (Fig. 1C). However, in FVB/n mice, K8 pS79 was not detected after the same period of treatment (Fig. 1D). Localization of HSP70i and K8/18 during GF intoxication We analyzed at the cellular level, by double immunofluorescence staining, the distribution of HSP70i and IFs on liver sections of control and GF-treated C3H and FVB/n mouse livers. In control hepatocytes, IFs formed a complex cytoplasmic network that was denser at the cell membrane and particularly around the bile canaliculi (Fig. 2A). Our biochemical analysis showed that HSP70i was present in control hepatocytes. However, by immunofluorescence, we did not detect the presence of HSP70i in the cells (Fig. 2B). After 2 weeks of treatment, most of the hepatocytes were enlarged and the bile canaliculi were dilated. The IF network was denser around dilated bile canaliculi (Fig. 2C). All hepatocytes contained a very dense cytoplasmic IF network. These modifications were accompanied by an increase in the amount of HSP70i in hepatocytes, and a granular staining was detectable at the membrane and in the nuclei (Fig. 2D). A few cells showed a high level of HSP70i. After 5 months of treatment, there was a mosaic pattern of cells with and without IF staining (Fig. 2E). HSP70i showed a granular staining pattern in many hepatocytes and was also present in MBs (Fig. 2F). Figure 2 Distribution of keratin IFs and HSP70i in hepatocytes from control and GF-fed C3H mice. A, C, E keratin IFs; B, D, F HSP70i; A, B) control; C, D) 2 week treatment; E, F) 5 month treatment. Arrows in E and F indicate MBs reactive with Troma 1 (anti-K8) and anti-HSP70i, respectively. Scale bar = 20 μm.
Phosphorylation of K8 S79, K8 S436 and K18 S33 during GF intoxication Cryosections of control and GF-treated C3H and FVB/n mouse livers were fixed with 4% paraformaldehyde and processed for double immunofluorescence staining. As mentioned above, control mice showed hepatocytes with a cytoplasmic IF network that was denser at the cell periphery (Fig. 3, 4, 5A). K8 pS79 and K8 pS436 were generally not detected in the IF network of control hepatocytes. Only occasionally, some doublet cells, most likely representing cells in mitosis, were stained (data not shown). A basal level of phosphorylation for K18 S33 was detected at the periphery of all hepatocytes (Fig. 5B). Figure 3 Distribution of keratin IFs and K8 pS79 in hepatocytes from control and GF-fed C3H mice. A, C, F, I keratin IFs; B, D, E, G, H, J K8 pS79; A, B) control; C, D, E) 2 week treatment; F, G, H) 6 week treatment; I, J) 5 month treatment. Arrow in D indicates clusters of cells containing K8 pS79. Empty arrowheads in I and J indicate MBs reactive with Troma 1 but not with LJ4 (anti-K8 pS79). Scale bar = 20 μm. Figure 4 Distribution of keratin IFs and K8 pS436 in hepatocytes from control and GF-fed C3H mice. A, C, E, G keratin IFs; B, D, F, H K8 pS436; A, B) control; C, D) 2 week treatment; E, F) 6 week treatment; G, H) 5 month treatment. Arrows in D indicate clusters of cells containing K8 pS436. Filled arrowheads in G and H indicate MBs reactive with Troma 1 and 5B3 (anti-K8 pS436), respectively. Scale bar = 20 μm. Figure 5 Distribution of keratin IFs and K18 pS33 in hepatocytes from control and GF-fed C3H mice. A, C, E, G keratin IFs; B, D, F, H K18 pS33; A, B) control; C, D) 2 week treatment; E, F) 6 week treatment; G, H) 5 month treatment. Asterisk in D shows a hepatocyte containing a high level of K18 pS33; arrow indicates a dilated bile canaliculus. Filled arrowheads in G and H indicate MBs reactive with Troma 1 and Ab8250 (anti-K18 pS33), respectively. Scale bar = 20 μm.
After 2 weeks of GF-treatment, hepatocytes were enlarged and an increase in the cytoplasmic IF network was observed (Fig. 3, 4, 5C). This treatment induced the phosphorylation of K8 on S79 and S436 in some hepatocytes. K8 pS79 and K8 pS436 were present in clusters of cells scattered over the whole liver (Fig. 3, 4, 7D). The groups of cells stained with the anti-K8 pS79 or anti-K8 pS436 usually surrounded damaged cells (Fig. 3, 7D). In the case of K8 pS79, IFs located in the cytoplasm and at the periphery of the cells were highly stained (Fig. 3D,3E). For K8 pS436, the staining was stronger at the cell periphery and around the dilated bile canaliculi (Fig. 4D). In addition to their presence in clusters of cells, K8 pS79 and K8 pS436 displayed an intense cytoplasmic staining in some isolated cells or cell doublets (Fig. 3E). Since both epitopes showed similarities in their patterns of distribution, we asked whether they were present in the same hepatocytes. Immunostaining for the detection of K8 pS79 and K8 pS436 was performed on serial liver sections of GF-treated mouse liver. Our results showed that the groups of hepatocytes positive for K8 pS79 were also positive for K8 pS436 (Fig. 6). Figure 7 Distribution of keratin IFs and K8 pS79 in hepatocytes from control and GF-fed C3H mice. A, C, E, G keratin IFs; B, D, F, H K8 pS79; A, B) control; C, D) 2 week treatment; E, F) 6 week treatment; G, H) 5 month treatment. Arrow in D indicates clusters of cells containing K8 pS79; asterisk shows a damaged hepatocyte. Filled arrowheads in G and H indicate MBs reactive with Troma 1 and LJ4 (anti-K8 pS79), respectively; empty arrowheads indicate MBs reactive with Troma 1 but not with LJ4 (anti-K8 pS79). Scale bar = 20 μm. Figure 6 Colocalization of K8 pS79 and K8 pS436 in hepatocytes from GF-fed C3H mice. A, C K8 pS79; B, D K8 pS436; A, B, C, D 2 week treatment. Arrows in A and B indicate clusters of hepatocytes containing both K8 pS79 and K8 pS436.
Note: the reactivity in nuclei observed in A, B, C and D represents non-specific staining due to the secondary antibody. Scale bar = 20 μm. In the case of K18 S33, its phosphorylation was increased in most (if not all) hepatocytes. Most of the staining was observed at the periphery of the cells, clearly delimiting the bile canaliculi. A few hepatocytes showed high levels of cytoplasmic K18 pS33 (Fig. 5D). After 6 weeks of GF-treatment, the distribution of hepatocytes containing K8 pS79 and K8 pS436 was different from that observed after 2 weeks of treatment. Clusters of labeled cells were smaller, whereas labeled isolated cells became more prominent (Fig. 3G, 4F). Singlet and doublet cells highly labeled with K8 pS79 and K8 pS436 were also present (Fig. 3H). K18 pS33 was present in most hepatocytes and showed a pattern similar to that observed after staining with Troma 1 (Fig. 5F). After 5 months of GF-treatment, MBs were present in some hepatocytes in both mouse strains (Fig. 3I, 4G, 5G). MBs had variable sizes and positions depending on the cell and were observed in cells with or without a visible intracytoplasmic IF network, as detected with Troma 1. In both mouse strains, K8 pS436 and K18 pS33 were present in MBs (Fig. 4, 5H), whereas K8 pS79 seemed to be absent (Fig. 3J). The experiments described above were also performed using cold acetone instead of 4% paraformaldehyde. After acetone fixation, no difference in the staining pattern was observed for GF-treatments of 2 and 6 weeks in either mouse strain (Fig. 7). However, differences were observed for MB staining in the 5 month GF-treated mouse liver. In C3H mouse livers, K8 pS79 was present in many MBs, although some of them showed no staining (Fig. 7H). In FVB/n mouse livers, K8 pS79 was not present in most MBs (Fig. 8). No difference in the staining of MBs was observed for K8 pS436 and K18 pS33.
Figure 8 Distribution of keratin IFs and K8 pS79 in hepatocytes from GF-fed FVB/n mice. A, C keratin IFs; B, D K8 pS79; A, B, C, D 5 month treatment. Asterisks in A and B indicate MBs reactive with Troma 1 but not with LJ4; arrows in C and D indicate MBs reactive with Troma 1 and LJ4, respectively. Scale bar = 20 μm. Localization of phosphorylated K8 species and HSP70i during GF intoxication Double immunostaining with anti-HSP70i and anti-phosphorylated keratins (K8 pS79 or K8 pS436) was performed to study the localization of HSP70i in relation to keratin phosphorylation. The results showed that HSP70i and phosphorylated K8 species colocalized in some cells (Fig. 9). However, in most cells, colocalization was not observed. Figure 9 Distribution of phosphorylated keratin IFs and HSP70i in hepatocytes from GF-fed C3H mice. A K8 pS79; B, D HSP70i; C K8 pS436; A, B, C, D 2 week treatment. Arrows in A and B indicate cells in which HSP70i and K8 pS79 colocalized. Arrows in C and D indicate cells in which HSP70i and K8 pS436 colocalized. Scale bar = 20 μm. Discussion The functional significance of K8/18 in simple epithelia has been the subject of numerous studies over the last decade [ 29 - 35 ]. Although most of these reports point towards roles for K8/18 in the resistance of cells to mechanical and toxic stress, the molecular mechanisms underlying these phenomena remain to be elucidated. To date, most of our understanding of the pathways involving keratins in the response of hepatocytes to toxic stress comes from the analyses of various cell lines [ 36 - 38 ]. K8/18 phosphorylation at specific sites has been proposed to be a key factor in the regulation of those keratin functions. In this regard, K8 pS79, K8 pS436, K18 pS52 and K18 pS33 are the most studied phosphorylation sites [ 39 ].
In vivo, K8/18 are also subjected to phosphorylation and, as suggested by in vitro studies, this is proposed to help hepatocytes cope with toxic stress [ 26 , 29 , 35 ]. For instance, transgenic mice expressing a human K18 S52-to-alanine mutant are more susceptible to drug-induced liver injury than transgenic mice overexpressing wild type human K18 [ 40 ]. In the present study, we showed that the chronic intoxication of mice with GF, which is known to induce modifications in keratin organization and the formation of MBs [ 33 ], was associated with increased expression of the stress protein HSP70i. GF-treatment resulted in a rapid increase in the expression of HSP70i. This modification was already perceptible after 2 weeks of treatment and was maintained for the whole period of treatment. This result provides direct evidence that GF-treatment, which has been proposed to constitute an oxidative stress for hepatocytes [ 41 ], triggers signaling pathways involved in cellular protection [ 33 ]. This interpretation of our biochemical data is in agreement with our immunofluorescence study, which showed that HSP70i partly relocalized to the nucleus during the treatment. This distribution pattern is typical of the distribution of HSP70i in stressed cells [ 42 , 43 ]. We have previously shown that GF intoxication induced an overall increase in K8/18 phosphorylation [ 44 , 45 ]. Here, we show that GF-treatment is associated with modifications in K8/18 phosphorylation at specific sites: K8 S79, K8 S436 and K18 S33. Among the studied phosphorylation sites, and because it was present and increased in all treated hepatocytes, K18 pS33 was the keratin phosphoepitope whose tissue distribution most resembled that of HSP70i. The phosphorylation of K18 S33 has been shown to play a role in keratin reorganization during mitosis and, by binding 14-3-3 proteins, to modulate their function [ 46 , 47 ].
Hence, we propose that K18 S33 phosphorylation could be linked to IF reorganization during GF intoxication. Moreover, because K18 pS33 is increased in all hepatocytes, it could be implicated in the stress response by participating in the relocalization and/or the recruitment of molecules or factors implicated in stress-induced cell signaling. In contrast to K18 pS33, the phosphorylated K8 species K8 pS79 and K8 pS436 were not present in control mouse hepatocytes. After 2 and 6 weeks of treatment, we observed an increase in the level of phosphorylation at these sites. However, contrary to HSP70i and K18 pS33, these phosphorylation sites were present only in isolated cells (singlets or doublets) or clusters of cells. Labeled singlet or doublet cells were more numerous after staining with the anti-K8 pS79 than after staining with the anti-K8 pS436. These cells could correspond to cells that are undergoing mitosis. This is supported by previous studies which showed that the phosphorylation of K8 on S79 and S436 occurs during mitosis [ 25 , 48 ]. This interpretation is also in agreement with the work of Stumptner et al. [ 49 ], which showed the presence of cell doublets reactive with the anti-K8 pS79 after a short treatment with DDC, a drug that in the long term induces MB formation. The discrepancy in the number of cells stained for K8 pS79 and K8 pS436, both in the singlet and doublet cells, suggests that different kinases are involved in the phosphorylation of those sites. The presence of K8 pS79 and K8 pS436 was also detected in islets of cells. Interestingly, both antigens were present in the same clusters of cells surrounding unstained cells that were most likely undergoing apoptosis. These unstained cells are evocative of detached cells during anoikis, an apoptotic process that can be induced by loss of cell-cell anchorage. Stress and apoptosis have been shown to modulate K8 S79 and K8 S436 phosphorylation [ 25 , 48 ].
The observed phosphorylation could indicate that these hepatocytes are stressed hepatocytes destined for apoptosis. However, analysis of the livers for the presence of apoptosis showed that only a few hepatocytes were going through programmed cell death, and groups of cells in apoptosis were never observed (data not shown). We propose that the apoptotic cell could represent the starting point of a signal transduction pathway to neighboring cells. The activation of specific kinases that would phosphorylate keratins could provide those cells with resistance to apoptosis. This latter interpretation is in agreement with the notion that K8/18 intermediate filaments play a key role in the protection of cells against apoptosis [ 26 , 35 ]. Liao et al. [ 50 ] have shown that HSP70 associates with K8/18 via K8. Our study shows that colocalization of HSP70i and IFs occurs only in a few hepatocytes. Since the hepatocytes in which colocalization was observed contained K8 pS79 or K8 pS436, HSP70i binding to IFs in these cells may be related to the presence of keratin phosphorylation and may participate in cellular pathways involving K8/18 phosphorylated on specific sites. Ku et al. [ 51 ] have shown that phosphorylation could modulate K8/18 ubiquitination and the ensuing turnover. Knowing that binding of HSP70 to a protein can affect its targeting by kinases or phosphatases [ 52 ], HSP70i could bind to phosphorylated K8 species, prevent dephosphorylation by specific phosphatases, and thereby enhance phosphorylation-mediated K8/18 protection from degradation by the ubiquitin pathway [ 51 , 53 ]. However, since HSP70i and phosphorylated K8 species colocalized only in a few cells over the whole tissue, the relevance of this phenomenon in the response to the presence of the hepatotoxin needs to be addressed, and further investigations will be necessary to confirm that hypothesis. Chronic intoxication of mice with GF induces the formation of MBs.
Numerous studies have demonstrated the presence of different phosphorylated K8/18 species within MBs, suggesting that K8/18 phosphorylation could participate in the MB formation processes [ 49 , 54 ]. In our experiments, we showed that K8 pS436 and K18 pS33 were present in all observed MBs, whereas K8 pS79 was present in MBs in C3H mouse hepatocytes but not in FVB/n mice. The difference in the presence of the K8 pS79 phosphoepitope within MBs suggests that phosphorylation at that specific site is not essential for MB formation. However, as suggested by Stumptner et al. [ 49 ], because K8 pS436 and K18 pS33 are always detected in MBs, phosphorylation on these sites could be implicated in the processes of MB formation. Taken together, these results indicate that, in the context of MB formation, K8/18 phosphorylation should not be considered a general phenomenon but rather a set of specific events that affect precise sites on K8 or K18. The difference observed between keratin phosphorylation in C3H and FVB/n mice indicates that the genetic background influences the response of hepatocytes to toxic stress. This interpretation is in agreement with the results obtained with K8-null mice, which displayed variable phenotypes depending on the genetic background [ 30 , 31 ]. The treatment with GF, which represents a toxic stress, most likely involves the activation of stress-activated protein kinases (SAPKs) in some hepatocytes. The SAPKs p38 and JNK are physiologic kinases for K8 S79 and K8 S436 [ 37 , 55 ]. We postulate that p38 kinase and/or JNK are activated by GF-treatment in some hepatocytes and are responsible for the modifications in K8 phosphorylation we observed. K8 and K18 give different patterns of phosphorylated cells, indicating that, under the same conditions, K8 and K18 phosphorylation is regulated differently.
Conclusions Our results show that increases in HSP70i, K8/18 expression and K8/18 phosphorylation constitute early events in the response of hepatocytes to the presence of GF. These observations support a role for keratins in preserving cellular integrity during stress conditions induced by the presence of a chemical agent [ 33 , 35 ]. HSP70i expression in hepatocytes after GF-treatment is not directly related to K8/18 phosphorylation at the studied sites: K8 S79, K8 S436 and K18 S33. With regard to MB formation, it appears that both HSP70i and K8/18 phosphorylation might contribute to the IF aggregation processes. The involvement of K8/18 phosphorylation in MB formation seems to be related only to specific sites and dependent on mouse genetic inheritance. Methods Experimental design Experiments were performed with adult C3H mice (Charles River Canada, St-Constant, QC) and FVB/n mice (Baribault et al. 1994) weighing 25 to 30 g. Two mouse strains were used to minimize the potential effect of different genetic backgrounds on the response of hepatocytes and to facilitate the interpretation of the data. All animals were housed with a 12-hour light-dark cycle and allowed water and a standard mouse semi-synthetic diet (Teklad Test Diet, Madison, WI), both ad libitum. GF-treated mice were fed a diet containing 2.5% (w/w) GF (Schering Corp., Kenilworth, NJ) for different periods of time: 2 weeks, 6 weeks and 5 months, according to the method of Denk et al. [ 28 ]. Control mice were fed the same diet without GF. For the control and each period of GF-treatment, experimental groups included 3 animals. Mice were sacrificed by cervical dislocation, and livers were snap frozen in methylbutane precooled with liquid nitrogen and stored at -70°C before use. All experiments were conducted according to the requirements of the Canadian Council on Animal Care and the "Université du Québec à Trois-Rivières" Animal Welfare Committee.
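The 2.5% (w/w) GF diet translates into absolute drug amounts as a simple mass fraction; the sketch below makes that arithmetic explicit (the 1 kg batch size is purely illustrative, not a figure from the study).

```python
def drug_mass_g(diet_mass_g: float, fraction_w_w: float = 0.025) -> float:
    """Mass of griseofulvin contained in a given mass of 2.5% (w/w) diet."""
    return diet_mass_g * fraction_w_w

# Illustrative batch: 1 kg of prepared diet contains 25 g of GF.
print(drug_mass_g(1000.0))  # 25.0
```

Actual dosing per animal then depends on daily food intake, which the study controls by feeding ad libitum over fixed treatment periods.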
For microscopy studies, both paraformaldehyde and cold acetone were routinely used as fixatives, to ensure that the staining patterns were not a consequence of the fixative used. Reagents The antibodies used were as follows: Troma 1, a rat monoclonal antibody (rAb) that recognizes K8 [ 56 ]; LJ4, a mouse monoclonal antibody (mAb) that recognizes human K8 pS73, equivalent to mouse K8 pS79 [ 25 ]; mAb 5B3, which recognizes K8 pS431, equivalent to mouse K8 pS436 [ 48 ]; 8250, a rabbit polyclonal antibody (pAb) that recognizes K18 pS33 [ 46 ]; and a pAb that recognizes the stress-inducible form of HSP70, HSP70i (Stressgen, Victoria, BC). The secondary antibodies for fluorescence microscopy were as follows: tetramethylrhodamine isothiocyanate (TRITC)- or fluorescein isothiocyanate (FITC)-conjugated goat anti-rat IgG, and FITC-conjugated donkey anti-rabbit IgG (Jackson Immunoresearch, Bio/Can Scientific, Mississauga, ON). The M.O.M. kit and Avidin/Biotin blocking kit (Vector ® Laboratories Canada, Burlington, ON) were used to perform immunolabelling with mAbs LJ4 and 5B3. The secondary antibodies used for Western blotting were as follows: biotinylated goat anti-rat IgG, biotinylated donkey anti-mouse IgG and peroxidase-conjugated donkey anti-rabbit IgG (Jackson Immunoresearch, Bio/Can Scientific, Mississauga, ON). Other reagents used were: horseradish peroxidase-conjugated streptavidin (SPC) (Jackson Immunoresearch, Bio/Can Scientific, Mississauga, ON), Bovine Serum Albumin (BSA) (Jackson Immunoresearch, Bio/Can Scientific, Mississauga, ON), Leupeptin (Sigma-Aldrich Canada, Oakville, ON), Pepstatin (Sigma-Aldrich Canada, Oakville, ON), Aprotinin (Sigma-Aldrich Canada, Oakville, ON), Normal Horse Serum (NHS) (Vector ® Laboratories Canada, Burlington, ON), and Luminol (Amersham Pharmacia Biotech, Oakville, ON).
Gel electrophoresis and immunoblotting Livers were homogenized in 62.5 mM Tris-HCl, pH 6.8, containing 2.3% (w/v) SDS, 50 mM sodium fluoride, 10 mM EDTA, 1 mM sodium pyrophosphate, 1 mM DTT, 1 mM PMSF, 1 μM leupeptin (Sigma-Aldrich Canada, Oakville, ON), 1 μM pepstatin (Sigma-Aldrich Canada, Oakville, ON) and 2.5 μg/ml aprotinin (Sigma-Aldrich Canada, Oakville, ON). Proteins were separated by electrophoresis on 10% SDS-polyacrylamide gels [ 57 ]. Protein concentration was determined by the Lowry method, modified for the presence of SDS [ 58 ], and equal amounts of protein (5 to 12.5 μg) were loaded in each well. Gels were stained with 0.1% Coomassie Blue, or transferred onto nitrocellulose membranes (Bio-Rad Laboratories Canada, Mississauga, ON) and processed for immunodetection. Membranes were blocked overnight with 5% (w/v) non-fat dry milk (Carnation, Nestlé ® ) in PBS (phosphate-buffered saline: 0.137 M NaCl, 2.7 mM KCl, 4.3 mM Na 2 HPO 4 , 14.7 mM KH 2 PO 4 , pH 7.2), incubated with the primary antibodies for 45 min at room temperature, washed in PBS containing 0.2% (v/v) Tween 20, and incubated for 45 min with the appropriate secondary antibody: biotinylated goat anti-rat IgG (Jackson Immunoresearch, Bio/Can Scientific, Mississauga, ON), biotinylated donkey anti-mouse IgG (Jackson Immunoresearch, Bio/Can Scientific, Mississauga, ON) or horseradish peroxidase-conjugated donkey anti-rabbit IgG (Jackson Immunoresearch, Bio/Can Scientific, Mississauga, ON). When biotinylated secondary antibodies were used, membranes were washed with PBS-Tween 20, incubated with streptavidin conjugated with horseradish peroxidase (Jackson Immunoresearch, Bio/Can Scientific, Mississauga, ON) for 30 min, and washed with PBS-Tween 20.
The chemiluminescent horseradish peroxidase substrate Luminol (Amersham Pharmacia Biotech, Oakville, ON) was added to the membranes according to the manufacturer's recommendations, and membranes were exposed to Blue X-Omat X-ray film sheets (Mandel Scientific Company, Guelph, ON) to localize antibody binding. Fluorescence microscopy Cryosections (4 μm) of fresh liver were fixed with 4% (w/v) paraformaldehyde in PBS, pH 7.2, for 20 min at room temperature and rinsed in PBS or TBS (Tris-buffered saline: 10 mM Tris-HCl, 0.138 M NaCl, 2.7 mM KCl, pH 7.4), depending on the staining protocol. Since fixation can affect antibody-binding capacity, cryosections were also fixed with cold acetone (-20°C) for 10 min. For the detection of K8, sections were incubated with rAb Troma 1 at room temperature, washed in PBS and incubated with a FITC- or TRITC-conjugated goat anti-rat IgG (Jackson Immunoresearch, Bio/Can Scientific, Mississauga, ON) for 45 min at room temperature. For immunostaining of K18 pS33, sections were incubated for 1 hour at room temperature with anti-K18 pS33 (8250) diluted in PBS containing 10% (w/v) BSA, washed in PBS and incubated for 45 min with a FITC-conjugated donkey anti-rabbit IgG in PBS containing 10% BSA (Jackson Immunoresearch, Bio/Can Scientific, Mississauga, ON). Immunostaining with anti-K8 pS79 (LJ4) and anti-K8 pS436 (5B3) mAbs was done using the M.O.M. (mouse on mouse) detection kit (Vector ® Laboratories Canada, Burlington, ON) and an Avidin/Biotin blocking kit (Vector ® Laboratories Canada, Burlington, ON) according to the manufacturer's recommendations. Normal horse serum (Vector ® Laboratories Canada, Burlington, ON) was added to the solution during the incubation step with the secondary antibody.
For heat shock protein staining, liver sections were incubated with anti-HSP70i diluted in TBS containing 10% BSA for 45 min at 37°C, washed in TBS and incubated for 45 min at 37°C with a FITC-conjugated donkey anti-rabbit IgG (Jackson Immunoresearch, Bio/Can Scientific, Mississauga, ON) diluted in TBS containing 10% BSA. For the detection of HSP70i, K8 pS79 and K8 pS436, the sections were treated with 1% (v/v) Nonidet P-40 (Sigma-Aldrich Canada, Oakville, ON) following the fixation step with 4% paraformaldehyde. The tissues were mounted in p-phenylenediamine diluted in 50% (v/v) glycerol. The slides were kept at -20°C, and photomicrographs were collected using an Olympus ® BX60 photomicroscope. List of abbreviations HSP70i – inducible form of the 70 kDa heat shock protein. GF – griseofulvin. IFs – intermediate filaments. K8 – keratin 8. K8/18 – keratin 8 and keratin 18. K8 S79 – serine 79 on keratin 8. K8 pS79 – phosphorylated serine 79 on keratin 8. MBs – Mallory bodies. Authors' contributions MF carried out all western blotting analyses, performed the immunofluorescence studies and participated in drafting the manuscript. LV participated in the design of the study. MC participated in the design of the study, its coordination and drafting of the manuscript. All authors read and approved the final manuscript.
516781 | The clinical significance of serum and bronchoalveolar lavage inflammatory cytokines in patients at risk for Acute Respiratory Distress Syndrome | Background The predictive role of many cytokines has not been well defined in Acute Respiratory Distress Syndrome (ARDS). Methods We prospectively measured IL-4, IL-6, IL-6 receptor, IL-8, and IL-10 in the serum and bronchoalveolar lavage fluid (BALF) of 59 patients who were admitted to the ICU, in order to identify predictive factors for the course and outcome of ARDS. The patients were divided into three groups: those fulfilling the criteria for ARDS (n = 20, group A), those at risk for ARDS who developed ARDS within 48 hours (n = 12, group B), and those at risk for ARDS who never developed ARDS (n = 27, group C). Results An excellent negative predictive value for ARDS development was found for IL-6 in BALF and serum (100% and 95%, respectively). IL-8 in BALF and IL-8 and IL-10 serum levels were higher in non-survivors in all studied groups, and were associated with a high negative predictive value. A significant correlation was found between IL-8 and APACHE score (r = 0.60, p < 0.0001). Similarly, IL-6 and IL-6r were highly correlated with PaO2/FiO2 (r = -0.27, p < 0.05 and r = -0.55, p < 0.0001, respectively). Conclusions BALF and serum levels of the studied cytokines on admission may provide valuable information for ARDS development in patients at risk, and outcome in patients either with ARDS or at risk for ARDS. | Background Acute respiratory distress syndrome (ARDS) is characterized by respiratory failure of acute onset as a result of acute lung injury (ALI), acquired either directly or indirectly via the blood. The main characteristics of the syndrome are diffuse inflammation and increased microvascular permeability that cause diffuse interstitial and alveolar oedema and persistent refractory hypoxemia [ 1 ].
Although a variety of insults may lead to ARDS, a common pathway probably results in the lung damage [ 2 - 4 ]. A complex series of inflammatory events has been recognized during the development of ARDS, but the exact sequence of events remains unclear. Leukocyte activation and the release of free radicals, proteases, arachidonic acid metabolites, and inflammatory and anti-inflammatory cytokines result in increased alveolar-capillary membrane permeability [ 5 - 7 ]. Cytokines are produced in the lung by local resident cells such as alveolar macrophages, lung epithelial cells, and fibroblasts, or by cells such as neutrophils, lymphocytes, monocytes and platelets as a response to local or systemic injury [ 8 - 12 ]. Cytokines involved in the early phase of the inflammatory response, such as IL-1, IL-2, IL-6 and IL-8 [ 8 , 13 , 14 ], are secreted in response to injurious agents. Inflammatory cytokines are of critical importance in the pathophysiology of septic shock, a condition frequently leading to ARDS [ 15 ]. It has been hypothesized that the inability of the lung to repair after ALI is due to a persistent inflammatory stimulus [ 16 ]. Predictive levels of inflammatory cytokines (IL-1, IL-2, IL-6, IL-8) for ARDS development in at-risk patients have been reported, with conflicting results [ 5 , 7 , 11 , 15 , 16 ]. Cut-off values above which ARDS development occurs in at-risk patients have also been reported for IL-4 and IL-10 [ 16 ]. Schutte et al [ 17 ] compared ARDS to pneumonia and cardiogenic pulmonary oedema patients and found higher IL-6 and IL-8 values in ARDS compared to the remaining populations. A systematic study of the role of all the main inflammatory cytokines at the same time in the pathogenesis and development of ARDS has not been undertaken.
The purpose of this study is to evaluate the role of the inflammatory cytokines IL-4, IL-6, IL-6r, IL-8, and IL-10 in serum and bronchoalveolar lavage fluid (BALF) as possible prognostic indicators for the development, severity, and outcome of patients with ARDS or at risk for ARDS. Methods Patients We studied prospectively 59 consecutive patients who were admitted to our Intensive Care Units (ICU) (Table 1 ). The first group (group A) included 20 patients fulfilling the criteria of ARDS [ 1 ]. All these patients were supported mechanically for their respiratory failure. The second group (group B) included 12 patients on mechanical respiratory support who had at least one condition from those suggested by Fowler et al [ 2 ] as risk factors for ARDS development. All patients in this group developed ARDS within 48 hours. The third group (group C) included 27 patients at high risk for ARDS development who never developed ARDS (Table 1 ). Table 1 Clinical features of the studied population on admission. Group N Sex Age (yr) Diagnosis PaO2/FiO2 APACHE II A 20 M = 14 53 ± 19 Trauma 9 121 ± 10 19.8 ± 1.4 F = 6 Pneumonia 3 Sepsis 2 Transfusion 2 Pancreatitis 2 Intoxication 1 Burns 1 B 12 M = 8 56 ± 20 Sepsis 3 239 ± 30 20.5 ± 1.3 F = 4 Pneumonia 4 Trauma 4 Pancreatitis 1 C 27 M = 21 49 ± 18 Sepsis 12 276 ± 16 16.0 ± 1.1* F = 6 Trauma 5 Pneumonia 2 Transfusion 2 Intoxication 2 Arrest 2 Pancreatitis 2 D 33 M = 20 36 ± 16 F = 13 M = male, F = female, burns = >40% of the body surface. * p < 0.05 group A vs groups B and C. For patients' classification, the following criteria were employed: 1. The ARDS criteria of the American-European Consensus Conference on ARDS [ 1 ]: a. acute onset, b. bilateral chest radiographic infiltrates, c. pulmonary artery occlusion pressure of ≤18 mm Hg, or no evidence of left atrial hypertension, and d. impaired oxygenation regardless of the PEEP level, with a PaO 2 /FiO 2 ratio of ≤ 300 torr for ALI and ≤ 200 torr for ARDS. 2.
The high-risk criteria for ARDS development according to Fowler et al [ 2 ]. 3. The criteria for pneumonia according to the EPIC study [ 18 ], and 4. The criteria for septic syndrome according to Bone et al [ 19 ]. The Acute Physiology and Chronic Health Evaluation-II (APACHE II) scoring system was used for grading disease severity [ 20 ]. The main clinical features of the patients are shown in Table 1 . The protocol was approved by the Ethics Committee of our institutions. Within 2 hours of admission to the ICU, blood samples were obtained from a central venous line. APACHE II score and PaO 2 /FiO 2 values were obtained at the time of sample collection. The blood was collected in a heparinized vacutainer tube and immediately kept at 4°C. After centrifugation at 1500 g at 4°C, the plasma was kept at -80°C until the measurement. Immediately after blood collection, BALF was obtained by fiberoptic bronchoscopy. The fluid was filtered through a nylon net to remove mucous secretions, and centrifuged at 500 g for 10 min to remove cells. The supernatant was kept in cryotubes at -80°C in aliquots of 0.5 ml. The method of micro-lavage was used as described previously [ 21 ]. The following criteria were used for an acceptable sample: a. the procedure should be shorter than 1 min, while the time the saline stays in the lungs should be less than 20 sec, b. recovery of more than 50% of the saline used for the lavage, c. absence of obvious blood contamination in the BALF, and d. the level of urea in the BALF should be more than 0.4 mmol/L. The urea level was used as an index of BALF dilution [ 21 ]. To check the accuracy of the method, two sequential lavages were taken in 8 patients, and none of the studied parameters differed significantly between the two samples. Measurement of the plasma cytokines The assay method for cytokine measurement was the same for blood and BALF samples.
Determination of plasma cytokines was done with a solid-phase enzyme-linked immunosorbent assay (ELISA) methodology based on the quantitative immunometric sandwich enzyme immunoassay technique [ 22 ]. Reagents for the studied cytokines were obtained from several sources (kits from R&D Systems, Inc., Minneapolis, MN, USA, for IL-6r; kits from Genzyme Diagnostics, Cambridge, MA, USA, for IL-8 and IL-10; and RIA kits from Amersham, Buckinghamshire, UK, for IL-4 and IL-6) and were used according to the manufacturers' instructions. Intra-assay and inter-assay reproducibility was checked and found to be greater than 90%. To calculate the dilution factor of the BALF, urea values in the plasma and BALF were used, because this low-molecular-weight substance is present in body fluids at the same concentration as in the blood. Statistical analysis Data analysis was carried out using SPSS 8.0 statistical software (SPSS Inc., Chicago, IL). Results are expressed as mean ± 1 SD, or median (range), unless otherwise indicated. The Mann-Whitney non-parametric test was used to compare the mean values of the cytokines in the blood and BALF in the various groups. Receiver-operating characteristic (ROC) analysis was used to find the optimal cut-off values of the studied cytokines for ARDS development in patients at risk and for survival of the patient population [ 23 ]. For tests of association, we calculated Spearman's correlation coefficient. A p value <0.05 was considered to be statistically significant. Results There was no significant difference in the mean age of the patients among the three studied groups. Using the APACHE II score to determine disease severity, a significant difference was found between groups A and C (p = 0.04) and between groups B and C (p = 0.045), but not between groups A and B (p = 0.06). The mean length of ICU stay did not differ among the three groups.
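The urea-based correction described above can be sketched in a few lines. This is an illustrative implementation assuming the standard approach (plasma urea divided by BALF urea gives the dilution factor introduced by the lavage saline); the paper does not give its exact formula, and the function names here are hypothetical.

```python
def dilution_factor(urea_plasma, urea_balf):
    """Estimate the BALF dilution factor from paired urea measurements.

    Assumes urea equilibrates freely between plasma and epithelial
    lining fluid, so the plasma/BALF urea ratio reflects how much the
    lavage saline diluted the recovered fluid.
    """
    if urea_balf <= 0:
        raise ValueError("BALF urea must be positive")
    return urea_plasma / urea_balf


def correct_balf_concentration(measured, urea_plasma, urea_balf):
    # Scale a measured BALF cytokine level back to an estimated
    # epithelial-lining-fluid concentration.
    return measured * dilution_factor(urea_plasma, urea_balf)
```

For example, with plasma urea of 5.0 mmol/L and BALF urea of 0.5 mmol/L, a measured BALF cytokine level of 100 pg/ml would be corrected to an estimated 1000 pg/ml.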
Predictive capabilities of BALF mediators for onset of ARDS The mean values (+/- SD) of the measured cytokines in BALF and serum in the three studied patient groups are shown in Table 2 . A significant difference was found for BALF IL-6r, which was higher in group A than in groups B and C (p < 0.0001). Similarly, BALF IL-6 was higher in groups A and B compared to C (p < 0.01). Table 2 Mean (+/-SD) BALF and serum levels of studied cytokines in the three groups of patients. Group A (n = 20) Group B (n = 12) Group C (n = 27) Cytokines (pg/mL) BALF Serum BALF Serum BALF Serum IL-4 260 ± 181 158 ± 68 284 ± 119 95 ± 35 1 242 ± 147 83 ± 68 1 IL-6 538 ± 432 2 388 ± 324 3 1135 ± 1382 2 505 ± 217 318 ± 446 313 ± 373 3 IL-6r 180 ± 52 30 ± 25 80 ± 37 4 34 ± 26 73 ± 22 4 39 ± 45 IL-8 480 ± 222 3525 ± 1523 5 492 ± 165 3543 ± 2265 5 467 ± 179 2553 ± 2824 IL-10 62 ± 24 117 ± 60 111 ± 98 177 ± 117 73 ± 50 118 ± 84 1 p < 0.0001 versus group A, 2 p < 0.01 versus group C. 3 p < 0.05 versus group B. 4 p < 0.0001 versus group A. 5 p < 0.0001 versus group C. Predictive capabilities of serum mediators for onset of ARDS Serum levels of IL-4 were higher in group A compared to groups B and C (p < 0.0001). Serum IL-6 was higher in group B compared to groups A and C (p < 0.05). Serum levels of IL-8 were higher in groups A and B compared to group C (p < 0.0001) (Table 2 ). Predictive values for ARDS development in at-risk patients (groups B and C) for BALF and serum IL-6 are shown in Table 3 . IL-6 negative predictive values for ARDS development were 100% and 95% for a BALF cut-off value of 195 pg/ml and a serum cut-off value of 255 pg/ml, respectively. Table 3 BALF and serum IL-6 predictive values for ARDS development in patients at risk (n = 39, groups B+C). Criterion PPV NPV Sensitivity Specificity Prevalence 95% CI BALF IL-6, (pg/mL) >195 44 100 100 62 24 0.62–0.91 Serum IL-6, (pg/mL) >255 44 95 88 65 24 0.60–0.90 PPV: positive predictive value, NPV: negative predictive value.
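The predictive values reported in Table 3 are standard 2x2 diagnostic statistics derived from dichotomizing a cytokine level at a cut-off. The sketch below shows how such values are computed; the input measurements and outcomes are hypothetical, not the study data.

```python
def diagnostic_stats(values, outcomes, cutoff):
    """Classify each subject as test-positive when value > cutoff, then
    compute PPV, NPV, sensitivity and specificity (as percentages)
    against the true binary outcomes."""
    tp = fp = fn = tn = 0
    for value, outcome in zip(values, outcomes):
        positive = value > cutoff
        if positive and outcome:
            tp += 1          # true positive
        elif positive:
            fp += 1          # false positive
        elif outcome:
            fn += 1          # false negative
        else:
            tn += 1          # true negative

    def pct(num, den):
        return 100.0 * num / den if den else float("nan")

    return {
        "PPV": pct(tp, tp + fp),
        "NPV": pct(tn, tn + fn),
        "sensitivity": pct(tp, tp + fn),
        "specificity": pct(tn, tn + fp),
    }


# Hypothetical BALF IL-6 values (pg/ml) and ARDS outcomes, for illustration.
stats = diagnostic_stats(
    values=[300, 250, 100, 150, 400],
    outcomes=[True, False, False, False, True],
    cutoff=195,
)
```

With these made-up inputs, no subject below the cut-off develops ARDS, so the NPV and sensitivity are 100%, mirroring the shape (though not the numbers) of the Table 3 result.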
CI: confidence interval Predictive capabilities of BALF mediators for survival of ARDS Mean (SEM) values in BALF and serum of the studied mediators in the survivors and non-survivors (groups A+B+C) are shown in Table 4 . BALF levels of IL-6, IL-6r and IL-8 were significantly higher in those who did not survive (p < 0.05, p < 0.05 and p < 0.0001, respectively). Table 4 Mean (SEM) BALF and serum levels of the measured cytokines in all patients (Groups A+B+C) according to survival. BALF Serum Cytokines (pg/ml) survivors (n = 30) non-survivors (n = 29) survivors (n = 30) non-survivors (n = 29) IL-4 262 ± 188 247 ± 108 72 ± 51 154 ± 68*** IL-6 313 ± 427 743 ± 877* 218 ± 191 530 ± 389*** IL-6r 94 ± 52 129 ± 69* 18 ± 19 53 ± 42** IL-8 340 ± 109 621 ± 144*** 1269 ± 830 4957 ± 1965*** IL-10 69 ± 37 82 ± 70 70 ± 16 188 ± 84*** * p < 0.05, ** p < 0.001, *** p < 0.0001 versus survivors group. Patients with ARDS (group A) who did not survive had significantly higher BALF levels of IL-8 (p < 0.0001) and significantly lower BALF levels of IL-10 (p < 0.001) (Table 5 ). Patients at risk (groups B and C) who did not survive had significantly higher BALF levels of IL-8 (p < 0.0001) (Table 6 ). IL-6, IL-6r and IL-8 BALF concentration cut-off predictive values for surviving patients are shown in Table 7 . BALF IL-8 was also elevated in patients of group C who died (p < 0.0001) (Table 8 ). Table 5 Mean (SEM) BALF and serum levels of cytokines in ARDS patients according to survival (group A). BALF Serum Cytokines (pg/ml) Survivors (n = 6) Non-survivors (n = 14) Survivors (n = 6) Non-survivors (n = 14) IL-4 352 ± 291 224 ± 115 155 ± 63 160 ± 73 IL-6 361 ± 238 606 ± 476 213 ± 140 455 ± 353 IL-6r 180 ± 38 180 ± 59 24 ± 24 31 ± 25 IL-8 218 ± 79 581 ± 167*** 2028 ± 700 4100 ± 1353* IL-10 80 ± 12 55 ± 24** 63 ± 11 138 ± 58* *p < 0.05, ** p < 0.001, *** p < 0.0001 versus survivors group.
Table 6 Mean (SEM) BALF and serum levels of cytokines in at-risk patients who did or did not develop ARDS, according to survival (Groups B and C). BALF Serum Cytokines (pg/ml) Survivors (n = 24) Non-survivors (n = 15) Survivors (n = 24) Non-survivors (n = 15) IL-4 242 ± 160 271 ± 100 53 ± 21 147 ± 65*** IL-6 302 ± 463 890 ± 1178 219 ± 203 612 ± 423** IL-6r 74 ± 29 74 ± 18 17 ± 18 77 ± 44*** IL-8 368 ± 96 664 ± 105*** 1097 ± 770 5885 ± 2151*** IL-10 67 ± 40 111 ± 91 72 ± 17 243 ± 74*** * p < 0.01, ** p < 0.001, *** p < 0.0001 versus survivors group. Table 7 Predictive BALF and serum levels (pg/ml) for surviving patients of all groups (n = 59). Criterion PPV NPV Sensitivity Specificity 95 % CI BALF IL-6 299 68 70 68 70 0.57–0.83 IL-6r 101 65 63 52 74 0.52–0.79 IL-8 481 96 90 88 96 0.85–0.99 Serum IL-4 84 81 100 100 78 0.77–0.96 IL-6 160 69 94 96 59 0.69–0.92 IL-6r 18 76 78 76 78 0.66–0.89 IL-8 2340 92 96 96 93 0.90–0.99 IL-10 98 96 93 92 96 0.84–0.99 PPV: positive predictive value, NPV: negative predictive value, CI: confidence interval. Table 8 Mean (SEM) BALF and serum levels of cytokines in at-risk patients who did not develop ARDS (group C). BALF Serum Cytokines (pg/ml) Survivors (n = 19) Non-survivors (n = 8) Survivors (n = 19) Non-survivors (n = 8) IL-4 242 ± 169 243 ± 69 51 ± 22 169 ± 78** IL-6 297 ± 497 374 ± 285 199 ± 197 620 ± 559* IL-6r 70 ± 25 80 ± 7 17 ± 19 98 ± 43** IL-8 377 ± 95 711 ± 102*** 1102 ± 810 6496 ± 2543*** IL-10 69 ± 43 85 ± 69 73 ± 17 240 ± 70*** * p < 0.05, ** p < 0.001, *** p < 0.0001 versus survivors group. Predictive capabilities of serum mediators for survival of ARDS Cytokine concentration cut-off predictive values for surviving patients are shown in Table 7 . All studied mediators were found at higher levels in the serum of non-survivors (p < 0.001 to p < 0.0001). In patients at risk (groups B and C) who did not survive, all serum mediators were significantly elevated (p < 0.001 to p < 0.0001) (Table 6 ).
Serum levels of all the studied molecules were increased in all patients that did not survive (p < 0.05 to p < 0.0001) (Table 8 ). In survivors, BALF/serum ratios were significantly higher for IL-4, IL-8 and IL-10 (p < 0.0001, p < 0.001 and p < 0.0001, respectively), due to lower serum levels and not to higher BALF levels. Correlations of the studied cytokines Furthermore, the serum levels of all studied mediators were significantly correlated to the APACHE II score. Serum IL-8 exhibited the strongest correlation with the APACHE II score (Figure 1 ). The level of IL-8 in the BALF was also found to be significantly correlated to the APACHE II score (r = 0.60, p < 0.0001). Figure 1 Strong positive correlation of serum levels of IL-8 to APACHE II score (Spearman's rank order correlation coefficient). The PaO 2 /FiO 2 ratio was significantly correlated to the BALF levels of IL-6 and IL-6r (r = -0.27, p < 0.05; r = -0.55, p < 0.0001; respectively) (Figure 2 ) and to serum levels of IL-4 (r = -0.36, p < 0.05). Figure 2 Negative correlation of BALF levels of IL-6, and IL-6 receptor to PaO 2 /FiO 2 ratio (Spearman's rank order correlation coefficient). Discussion We designed this study in order to explore factors that could have prognostic value for the development, the severity, and the outcome of patients with ARDS or at risk for ARDS. Prediction of ARDS development We observed that BALF levels of IL-6r were significantly higher in group A than in groups B and C (p < 0.0001), while no difference was observed in serum levels among the three groups of patients. Interestingly, the BALF and serum levels of the cytokine IL-6 were significantly higher in patients at risk who developed ARDS (group B) compared to the other two groups. This observation differs from previous studies [ 24 , 25 ], probably reflecting the different patient population in our study.
However, our results are in agreement with previous reports regarding its lack of predictive capacity for ARDS onset [ 25 , 26 ], since both BALF and serum IL-6 levels showed a low positive predictive value according to the ROC analysis. Patients of group A and group B had higher serum levels of the inflammatory cytokine IL-8 compared to the patients of group C, but neither serum nor BALF IL-8 levels were predictive for ARDS development. In two studies, Miller et al [ 27 ] found that IL-8 in BAL at the beginning of ARDS was highest in patients who died, and Donnelly et al [ 28 ] found that IL-8 was highest in patients at risk for ARDS who later developed ARDS. Unfortunately, subsequent studies have found that IL-8 does not predict outcome either at the outset or during the course of ARDS [ 5 ]. Meduri et al [ 16 ] found that all cytokines measured remained high during the course of ARDS in patients who died. The importance of considering anti-inflammatory constituents of BALF is shown by Donnelly et al [ 28 ], who found that patients with ARDS who died had significantly lower initial concentrations of IL-10 in BAL than patients who lived. Parsons et al [ 29 ] studied serial levels of IL-1ra and IL-10 in patients who were identified as being at risk for the development of ARDS. Initial IL-1ra levels were significantly higher (p < 0.0001) in the patients than in normal control subjects. Similarly, IL-10 levels were increased in patients compared with normal control subjects but did not predict the development of ARDS. Like IL-1ra levels, initial IL-10 levels were significantly higher (p = 0.005) in patients who died compared with survivors. However, in other studies increased levels of IL-4 and IL-10 in serum and/or BALF were found to have a beneficial effect in pre-ARDS patients [ 13 , 14 ]. Thus, the heterogeneity of patients in the various studies may be a reason for the contradictory results reported earlier.
Prediction of outcome Patients who died had significantly higher levels of IL-6, IL-6r and IL-8 in BALF than those who finally survived, while all mediators studied were significantly higher in the serum of non-survivors. The rationale for analysis of cytokine concentrations in BAL fluid is that inflammatory cytokines such as IL-6 and IL-8 are known to be produced by airway epithelial cells and activated pulmonary macrophages in response to a variety of infectious agents and other triggers of airway inflammation [ 30 ]. During ARDS, the alveolar epithelial-endothelial barrier is disrupted, and cytokines produced in the lung are released into the systemic circulation. This is believed to be a potential mechanism for the development of the systemic inflammatory response syndrome [ 31 , 32 ]. The relationship between circulating and pulmonary cytokine levels and outcome provides support to the hypothesis that poor outcome in ARDS is related to a persistent inflammatory process [ 30 - 33 ]. In addition, in agreement with our findings, bronchoalveolar concentrations of the above cytokines have been reported to be increased in patients with or at risk for ARDS [ 33 ]. As demonstrated by Meduri and co-workers, BAL fluid concentrations of IL-8 and IL-6 were significantly higher in non-survivors than in survivors [ 31 ]. Increased BAL levels most likely indicate intrapulmonary overproduction and not increased permeability [ 33 ]. Therefore, determination of these selected inflammatory cytokines in BAL fluid in ARDS could be of prognostic relevance [ 33 , 34 ]. Regarding serum levels, patients at risk (groups B and C) who died had significantly increased levels of all molecules (Table 6 ), suggesting that a systemic inflammatory over-response in critically ill patients may be destructive, leading to multiple organ dysfunction and poor outcome.
Serum levels of all the studied molecules were increased in patients who did not survive, whether all patients were taken together (groups A+B+C, Table 4 ) or separately (Tables 5 , 6 , and 8 ), suggesting that cytokinemia might reflect the severity and extension of inflammation but is not the only factor related to ARDS development. Interestingly, only IL-8 and IL-10, both in BALF and serum, were higher in ARDS patients who died. These results are consistent with those of Donnelly and coworkers, who found elevated concentrations of IL-10 in BALF of 28 patients with ARDS [ 35 ]. However, our results differ from those of Armstrong and Millar, who found significantly lower concentrations of IL-10 in a small group of patients at risk for ARDS [ 36 ]. In addition, low concentrations of IL-10 in BALF from patients with ARDS were found to be associated with increased mortality [ 35 , 37 ]. In contrast, all cytokines were elevated in those who died when all the patients at risk (groups B and C) were taken together. Regarding survival prediction, IL-8 and IL-10 showed the highest serum positive predictive values (92% and 96%, respectively), while IL-4 had the highest serum negative predictive value and sensitivity, taking all patients together. Relation to severity of lung injury Regarding the relation of the studied molecules to the severity of lung injury, a negative correlation was found between BALF IL-6 and IL-6r and PaO2/FiO2. The same was true for serum IL-4 and PaO2/FiO2. All the studied molecules in the serum were positively correlated with the APACHE II score, as was BALF IL-8. It is probable that this cytokine is closely related to the extension of tissue damage and organ failure. Conclusions In conclusion, our data show that the predictive role of most of the studied molecules, both in serum and BALF, for ARDS development is valuable. In addition, almost all of them are good predictors of outcome in these patients.
Further studies with a greater number of patients, with various subgroups of ARDS as well as stricter grouping criteria, should be designed to investigate the complex network of these molecules and their receptors in ARDS and their value as predictive factors in these patients. Competing interests None declared. Authors' contributions DB conceived of the study, participated in its design and coordination, and drafted the manuscript. MGA participated in the design of the study, carried out immunoassays and drafted the manuscript. KMA carried out immunoassays and drafted the manuscript. PA, IP and SA collected patient data and samples. GP performed the statistical analysis. AP collected patient data and samples. NK carried out RIA measurements. DP carried out immunoassays and drafted the manuscript. All authors read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here:
506784 | Using a Geographical Information System to investigate the relationship between reported cryptosporidiosis and water supply | Background This paper reports on a study investigating the epidemiology of sporadic cryptosporidiosis in the North West of England and Wales, using a Geographical Information System (GIS) to map the location of residence of cases. Some 747 reports of cases were made to CDSC North West, of which 649 were suitable for analysis. Cases were plotted on maps of the water supply zone and water quality area boundaries provided by the two main water utilities. Results It was notable that there were major spatial variations in attack rate across the North West and Wales. The most dramatic example was the large difference between the Greater Manchester conurbation, with many reports, and Liverpool, with none. Given the distribution of previously detected waterborne outbreaks in the region, it was initially thought that drinking water source might be an explanation. However, an analysis of the distribution of cases in the Greater Manchester area showed no correlation with any of the five water supplies that serve the conurbation. Conclusions Our study has shown a dramatic variation in the incidence of laboratory-confirmed cryptosporidiosis within two regions of the United Kingdom. Further analysis has not been able to establish drinking water as a likely explanation of this variation, which so far remains unexplained. | Background Cryptosporidiosis is infection with species of the genus Cryptosporidium . Most infections in the UK are due either to C. parvum (previously known as C. parvum genotype 2 or the bovine strain) or C. hominis (previously known as C. parvum genotype 1 or the human strain) [ 1 ]. Cryptosporidium spp. are protozoan parasites. In otherwise healthy individuals they tend to cause a self-limiting form of gastroenteritis which can last for several days and, sometimes, weeks.
In patients with certain forms of immune deficiency, most notably the Acquired Immune Deficiency Syndrome (AIDS), the infection can cause a severe and prolonged diarrhoeal illness which, prior to the widespread use of highly effective antiretroviral therapy, was often fatal [ 2 ]. Cryptosporidiosis has now become the most commonly identified protozoal cause of gastroenteritis in the United Kingdom. Most of the epidemiological data to date relate to reports of outbreaks. Between the years 1983 to 1997 there were 80 outbreaks of cryptosporidiosis in England and Wales, affecting 4649 individuals [ 3 ]. Of these 80 outbreaks, 25, affecting 3455 cases, were associated with drinking water. Indeed, large outbreaks of cryptosporidiosis have often been associated with drinking water [ 4 , 5 ]. Outbreaks of cryptosporidiosis were a particular problem in the North West Region of England during the 1990s, where a single unfiltered surface water source was responsible for several outbreaks [ 2 ]. However, outbreak-related cases represent only a small proportion (<10%) of the total cases reported to national surveillance. The epidemiology of sporadic (non-outbreak-related) cases is largely unknown. Of three large case-control studies reported in the past few years, only one found an association with drinking mains water [ 7 - 9 ]. The study that did find an association with drinking water was undertaken in a largely rural area that, at the time of the study, received some of its water from systems that did not have modern filtration plants. The other two studies identified contact with another case, contact with cattle, and overseas travel as the main risk factors for sporadic infection. The question remains as to what proportion of sporadic cryptosporidiosis infections may be due to the consumption of mains drinking water.
This paper reports a study using GIS to investigate the epidemiology of sporadic cryptosporidiosis and, in particular, to address the issue of whether sporadic cases are also associated with consumption of drinking water. Results Table 1 shows the number of records from each health authority and the number of exclusions, including reasons for exclusion. Figures 1 and 2 show the geographical distribution of individual cases, each indicated by a dot on the map of the water supply zones (water quality areas for Wales). Figure 3 indicates the attack rates for each zone in the North West, where the shading indicates a range of attack rates. Care should be taken in interpreting the zone rates, as the populations covered by each zone/area varied substantially. In some zones high attack rates were seen despite only a single case being identified, because of a low denominator population. The area-specific attack rates are not shown for Wales as the number of cases was smaller. Table 1 Health authority of reported cases, including reasons for exclusion from analysis Reasons for excluding post codes Health Authority Total records Included in analysis Excluded Incorrect Duplicate Incomplete Missing % excluded Bro Taf 2 2 2 100 Bury and Rochdale 51 43 8 4 2 1 1 15.7 Dyfed Powys 49 44 5 5 10.2 East Lancashire 47 45 2 2 4.3 Gwent 12 12 0 0 Iechyd Morgannwg 6 6 6 100 Morecambe Bay 13 10 3 3 23.1 Manchester 69 48 21 3 10 8 30.4 N Cheshire 6 6 0 0 North Wales 121 111 10 9 1 8.3 North West Lancashire 74 59 15 3 2 10 20.3 South Cheshire 63 63 0 0 South Lancashire 20 20 0 0 Salford 34 28 6 4 2 17.6 St Helens & Knowsley 8 8 0 0 Stockport 66 60 6 1 1 4 9.1 West Pennine 32 29 3 1 1 1 9.4 Wigan and Bolton 46 35 11 2 4 4 1 23.9 Wirral 28 28 0 0 TOTAL 747 649 98 37 10 24 27 13.1 Figure 1 Cryptosporidiosis cases in the North West, January 2001 to February 2002 Figure 2 Cryptosporidiosis cases in the Welsh Water Area, February 2001 to February 2002 Figure 3 Cryptosporidium attack rates in the North West.
Attack rates calculated for each Water Supply Zone in the North West, January 2001 to February 2002 (per 1000 population). It can be seen that there is substantial spatial variation in the distribution of reported cases. In part, this variation can be explained by variation in population density. However, much of the variation is unexplained. For example, reports from Liverpool are very uncommon, whilst reports from Greater Manchester are very common. It was decided to investigate the excess case reporting from Greater Manchester in further detail to look for any possible association with water supplies. Water to the Greater Manchester area comes from five main water treatment works: Lostock (derived from Thirlmere in the Lake District and chlorinated but not filtered), Woodgate Hill (derived from Haweswater and Windermere via the Watchgate Treatment Works near Kendal, where the water is treated by rapid gravity sand filtration, though not chemically coagulated before spring 2003), Arnfield-Godley (chemical coagulation, clarification and rapid gravity sand filtration), Buckton Castle (chemical coagulation, dissolved air flotation and rapid gravity sand filtration) and Wybersley (chemical coagulation, dissolved air flotation and rapid gravity sand filtration). In order to determine whether there was any relationship between attack rate and water supply, all water supply zones in the North West that received any water from one or more of these five supplies were identified (Figure 4 ). For each of these water supply zones, the proportion of the supply from each treatment works was obtained from United Utilities. The correlation between the attack rate and the proportion of water from each treatment works was tested using Kendall's rank correlation (Table 2 ) [ 10 ]. The figure adjusted for ties was used. There was no significant correlation between water source and attack rate. Figure 4 Dominant water treatment works in Greater Manchester during 2001.
Each water supply zone in the Greater Manchester area is identified and colour coded to illustrate which of the five water treatment works supply most of the drinking water. Where there is no single dominant source the zone is left uncoloured. Table 2 Water supply and cases of Cryptosporidiosis Correlation between water supply zone specific attack rate and proportion of water received from each of the five main water treatment works supplying Greater Manchester. Water treatment works Z P value Lostock -1.084 0.2782 Woodgate Hill 1.713 0.0867 Arnfield – Godley -1.186 0.2353 Buckton Castle -0.628 0.5294 Wybersley 0.451 0.6517 Discussion As already mentioned, care should be taken in the interpretation of this analysis. It is notable that the proportion of reports that could not be allocated a correct postcode varied somewhat from one health authority to another. Also, variation in attack rate between water supply zones or water quality areas was as likely to be due to differences in population size as to differences in reported cases. This was most obvious in zones/areas with relatively small population sizes, where random effects could have a particularly important effect. However, there are a number of obvious features. The most obvious is the large number of cases from the Greater Manchester conurbation. This covered the Bury and Rochdale, Manchester, Salford, Stockport, West Pennine, and Wigan and Bolton Health Authorities. This excess of cases in Manchester is even more remarkable when compared with the virtual absence of cases from the Liverpool conurbation (Liverpool, Sefton and St Helen's & Knowsley Health Authorities). The reason for the excess of cases in Greater Manchester is unclear. Although different reporting habits could play a part, we doubt that they could explain more than a small part of the difference. Reporting practices are not that greatly different across the North West [ 10 ].
A sero-epidemiological study, currently underway, may be able to determine whether the low reporting rate from Liverpool is real or not. An alternative explanation could be that the excess reflects differences in water supply. Salford, and Wigan and Bolton, Health Authorities get much of their water supply from Thirlmere, a supply known to be prone to contamination by Cryptosporidium [ 6 ]; none of the other supplies have been implicated in outbreaks of disease. However, it would appear that the attack rates did not vary in any consistent way in relation to water source, and so a waterborne hypothesis for this excess could not be proven. Analysis was restricted to Greater Manchester, as analysis of all reports in the North West could be subject to confounding as a result of geographical variation in reporting behaviour, whereas the Health Authorities in Greater Manchester share a very similar notification system. The lack of an association with drinking water source was consistent with the conclusions of the case-control study undertaken at the same time, which also did not find an association with drinking water [ 9 ]. Nevertheless, it will be interesting to see whether the completion of an adequate water filtration plant for the Thirlmere supply, scheduled for spring 2004, has much, if any, impact on the number of reports from Manchester. A further explanation could be that the Manchester population experiences other risk factors more commonly than the Liverpool population. Possible explanations include contact with animals, visiting swimming pools and overseas travel. We do not have access to data to show whether or not people from Manchester are more exposed to these factors than people from Liverpool. However, Manchester is closer to a major National Park, the Peak District.
If people from Manchester use their proximity to the Peak District to spend more time in the countryside, and so are more likely to come into contact with farm animals, this could explain the difference. This would be an interesting hypothesis to test in a further study. In addition to Greater Manchester, there are also areas of increased reporting from North Wales and from North West Lancashire. These hotspots also remain unexplained. North West Lancashire, however, receives much of its water from Thirlmere, and a water source cannot be excluded. However, many cases were reported from the Fylde peninsula, which receives only a small proportion of its water from Thirlmere. Conclusions The use of GIS to study the spatial distribution of cases has been useful in identifying geographical variation, but not necessarily for identifying the reasons for this variation. However, initial analysis does not support the hypothesis that differences in drinking water source are the major reason for this variation. We agree with Dangendorf et al . [ 12 ] that GIS will contribute substantially to our understanding of the contribution of drinking water to human disease, as it aids the identification of possible associations between disease and particular water supplies, provided sufficient information is collected to enable accurate location of cases. Methods Consultants in Communicable Disease Control in the North West Region of England and in Wales were asked to forward details of cryptosporidium cases upon notification from the laboratory. A data collection form was completed for each case, giving the following details: name, address, postcode, date of birth, GP name, GP address and date of notification. The form was faxed or e-mailed to the Communicable Disease Surveillance Centre (CDSC) – North West as soon as possible. Enhanced surveillance for the North West of England and for Wales was set up separately, in North West England in mid December 2000 and in Wales in February 2001.
Both ran until February 2002. To check for accuracy, the data were audited every 2 months. Each CCDC was sent a list of the cases they had notified to CDSC North West in the preceding 2 months. Any cases that had not been notified were forwarded to CDSC. The first stage in the geographical analysis was to check the 747 records for possible duplicates. Duplicates were identified as two individuals with identical names, dates of birth and postcodes being present in the database. Given that a postcode contains on average only 15 addresses, it is highly unlikely that such pairs represented distinct cases. Through this procedure 10 records were deleted from the database. Consequently, 737 cases of cryptosporidiosis were identified during the period of enhanced surveillance. The next step was to assign a grid reference to each postcode, and this was achieved using the Royal Mail Postcode Address File. Eighty-eight records were excluded because either an incomplete postcode had been entered into the cryptosporidiosis database or a match could not be found in the Postcode Address File. The database was therefore reduced to 649 cryptosporidiosis cases. These were plotted as points against a backdrop of the water supply zones for the two main water utilities. The water supply zone and water quality area boundaries were provided by the two main water utilities (United Utilities and Welsh Water). A "water supply zone" is an area designated by a water undertaker providing water to the residences of not more than 50,000 people. In general the source is consistent across a particular zone. Using the GIS, each case was also assigned its corresponding water supply zone, and the number of cases in each WSZ was divided by the population, based upon data supplied by the two water utilities, to produce the attack rate maps. The analysis was undertaken in ArcGIS 8.1 using point-in-polygon techniques [ 13 ].
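The zone-assignment and attack-rate steps above can be sketched as follows. The ray-casting test is a minimal stand-in for the point-in-polygon operation performed in ArcGIS, and the example zone polygon, case coordinates, and population are all invented for illustration.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: a point is inside if a horizontal ray from it
    crosses the polygon's edges an odd number of times."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # does this edge straddle the ray's y, with the crossing right of x?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def attack_rates(cases, zones, populations):
    """Assign each case (grid reference) to a water supply zone and
    return cases per 1000 population for each zone."""
    counts = {name: 0 for name in zones}
    for px, py in cases:
        for name, poly in zones.items():
            if point_in_polygon(px, py, poly):
                counts[name] += 1
                break
    return {name: 1000 * counts[name] / populations[name] for name in zones}
```

For example, with a single square zone of population 500 and two cases falling inside it, the function returns an attack rate of 4 per 1000, mirroring how the maps in Figure 3 were produced.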
Authors' contributions PRH led on study design, did the statistical analyses and co-wrote the paper. SH co-designed the study, co-wrote the paper and undertook most of the data collection. QS co-designed the study and co-wrote the paper. SW co-designed the study and co-wrote the paper. IL undertook the geographical analyses and co-wrote the paper. KO co-designed the study, co-wrote the paper and obtained data on the water distribution. RC co-designed the study and co-wrote the paper. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC506784.xml |
511005 | Who will lose weight? A reexamination of predictors of weight loss in women | Background The purpose of this study was to analyze pretreatment predictors of short-term weight loss in Portuguese overweight and obese women involved in a weight management program. Behavioral and psychosocial predictors were selected a priori from previous results reported in American women who participated in a similar program. Methods Subjects were 140 healthy overweight/obese women (age, 38.3 ± 5.9 y; BMI, 30.3 ± 3.7 kg/m 2 ) who participated in a 4-month lifestyle weight loss program consisting of group-based behavior therapy to improve diet and increase physical activity. At baseline, all women completed a comprehensive behavioral and psychosocial battery, in standardized conditions. Results Of all starting participants, 3.5% (5 subjects) did not finish the program. By treatment's end, more than half of all women had met the recommended weight loss goals, despite a large variability in individual results (range for weight loss = 19 kg). In bivariate and multivariate correlation/regression analysis, fewer previous diets and weight outcome evaluations, and to a lesser extent self-motivation and body image, were significant and independent predictors of weight reduction, before and after adjustment for baseline weight. A negative and slightly curvilinear relationship best described the association between outcome evaluations and weight change, revealing that persons with very accepting evaluations (who would accept or be happy with minimal weight change) lost the least amount of weight, while positive but moderate evaluations of outcomes (i.e., neither low nor extremely demanding) were more predictive of success. Among those subjects who reported having initiated more than 3–4 diets in the year before the study, very few were found to be in the most successful group after treatment. Quality of life, self-esteem, and exercise variables did not predict outcomes.
Conclusions Several variables were confirmed as predictors of success in short-term weight loss and can be used in future hypothesis-testing studies and as a part of more evolved prediction models. Previous dieting, and pretreatment self-motivation and body image are associated with subsequent weight loss, in agreement with earlier findings in previous samples. Weight outcome evaluations appear to display a more complex relationship with treatment results and culture-specific factors may be useful in explaining this pattern of association. | Background Predicting weight loss outcomes from information collected from subjects before they start weight management programs is a long-standing goal [ 1 ]. In effect, if individual variability in obesity treatment remains as high as it is presently, identifying variables that moderate outcomes (i.e., that explain for whom treatment works and under what conditions) will justifiably continue to deserve attention from researchers [ 2 , 3 ]. To date, however, evidence shows that individual weight change cannot be accurately predicted, with only a few variables showing positive results [ 4 , 5 ]. Nevertheless, advances in theoretical formulations regarding the process of weight control [ 6 ], improved research methodologies [ 7 ], and an increasing number of variables tested as potential predictors [ 8 ] suggest further progress is possible. Among the most valuable applications of valid weight loss prediction models is the early identification of individuals with the least estimated probability of success in a given treatment, who could (and perhaps should) be directed to alternative therapies. Research specifically aimed at studying these overweight/obese persons, who are more resistant to current forms of treatment, would be particularly relevant. 
Equally important are improvements in the matching between treatments and participants, which are dependent on the measurement of relevant pretreatment variables (i.e., that are found to predict success). More individualized programs have the potential for higher cost-effectiveness and improved overall success rates, by targeting specific areas of concern in selected participants or homogeneous groups [ 9 ]. Finally, the development of a valid and comprehensive weight loss readiness questionnaire and its use as a screening tool in obesity treatment are additional foreseeable outcomes of this research [ 10 ]. We have previously tested a large number of psychosocial and behavioral variables as predictors of short-term weight outcomes [ 8 ]. A number of significant pretreatment correlates of 4-month weight loss were identified, including previous dieting and recent weight changes, self-motivation, weight outcome evaluations, body size dissatisfaction, weight-related quality of life, self-esteem, and exercise self-efficacy and perceived barriers. Because this earlier study was primarily hypothesis-generating, confirmatory results are needed. The goal of the present study was to re-evaluate the predictive value of several of these variables in a different sample of women who underwent a comparable weight reduction program. While our previous work has studied women in the United States (US), the present analysis reports on a group of similarly-overweight/obese Portuguese females. Cross-cultural differences in social norms regarding ideal weights, in the role of physical activity, and in eating habits and relationship with food (e.g. [ 11 ]) could have an impact on how individuals respond to obesity therapies and also inform researchers about the role of pretreatment variables (moderators) in treatment success. 
It should be noted that this study was not designed to evaluate the overall effectiveness of the weight loss program but to analyze predictors of short-term results among participants who displayed highly variable levels of success. Methods Subjects Subjects were recruited from the community for a 2-year weight management program through newspaper ads, a website, email messages on listservs, and announcement flyers. Subjects were required to be older than 24 years, be premenopausal and not currently pregnant, have a BMI higher than 24.9 kg/m 2 , and be free from major disease to be eligible for the study. After several orientation sessions, 152 women signed up for the program. During the run-in phase, four women decided not to participate (reporting new time and scheduling conflicts), four did not comply with testing requirements and were excluded, three women found out they were pregnant or decided to attempt pregnancy and were also excluded, and one subject was found ineligible due to medical reasons (untreated hyperthyroidism), leaving a total of 140 women who started the intervention. An initial visit with the study physician ensured that subjects met all medical inclusion criteria. All participants agreed to refrain from participating in any other weight loss program and gave written informed consent prior to participation in the study. The Faculty of Human Movement's Human Subjects Institutional Review Board approved the study. Assessments Weight was measured twice, to the nearest 0.1 kg (average was used) using an electronic scale (SECA model 770, Hamburg, Germany) and height was also measured twice, to the nearest 0.1 cm (average was used). Body mass index (BMI) in kilograms per squared meter was calculated from weight (kg) and height (m). In addition to weight and other morphological and physiological variables assessed, subjects filled out a large psychosocial questionnaire battery prior to the first weekly treatment session. 
This was conducted in standardized conditions of comfort and silence, with a study technician attending every assessment period. To ensure optimal levels of concentration and avoid overburden caused by long periods of psychometric testing, subjects were required to attend three sessions, each lasting approximately 45 minutes. Portuguese versions of the Impact of Weight on Quality of Life – Lite (IWQOL-Lite, [ 12 ]), Self-Motivation Inventory (SMI, [ 13 ]), Rosenberg's Self-esteem/Self-concept (RSE, [ 14 ]), Exercise Perceived Barriers (EPB, [ 15 ]), and Exercise Self-efficacy (ESE, [ 16 ]) questionnaires were used. Details of the original English versions of these instruments are described elsewhere [ 8 ]. In brief, the IWQOL-Lite measures weight-specific perceived quality of life on five dimensions of daily life (physical functioning, self-esteem, sexual life, public distress, and work) and it also provides a summary score, which was used in this study. The SMI evaluates a general (i.e., context-unspecific) tendency to persevere, finish tasks initiated, maintain self-discipline, and motivate oneself. The RSE measures a person's self-respect and positive self-opinion. The EPB assesses the extent to which the elements of time, effort, and other obstacles are perceived barriers to habitual physical activity. The ESE measures an individual's belief or conviction that she can "stick with" an exercise program for at least 6 months under varying circumstances, in the dimensions of making time for exercise and resisting relapse. Summary scores for both the EPB and ESE were calculated and used in this study. For all instruments, higher scores indicate higher values for the constructs being measured. Forward and backward translations between English and Portuguese were performed for all questionnaires cited above. Two bilingual Portuguese researchers subsequently reviewed the translated Portuguese versions and minor adjustments were made to improve grammar and readability. 
In this study, Cronbach's alpha estimates were as follows, for the IWQOL-Lite (0.95, 31 items), SMI (0.88, 40 items), RSE (0.81, 10 items), EPB (0.71, 11 items), and ESE (0.77, 10 items), ensuring acceptable to high internal consistency. Number of previous diets and weight history variables were taken from a diet/weight history questionnaire developed specifically for this study. Weight outcome evaluations were assessed by 4 questions derived from the Goals and Relative Weights Questionnaire (GRWQ, [ 17 ]). Subjects were asked to indicate their "dream" weight, and also what would be their "happy", "acceptable", and "disappointing" weights by the end of the 4-month intervention. Each outcome evaluation was computed as the percentage of pretreatment measured weight. Body size dissatisfaction was assessed by the difference between self and ideal body figures selected from a list of 9 female silhouettes of increasing size [ 18 ]. High scores (i.e., larger disparity between self and ideal figure) indicate greater body size dissatisfaction. For multiple-item questionnaires, if a subject failed to correctly fill out at least 75% of all items in a summary/global scale or at least 50% of items in a subscale, the corresponding score was not calculated. However, this did not automatically eliminate a subject from analyses, if other (valid) scores could be used for the same participant. Intervention Subjects attended 15 treatment sessions in groups of 32 to 35 women, for approximately 4 months. Average attendance to the treatment sessions was 83%. Sessions lasted 120 minutes and included educational content and practical application classroom exercises in the areas of physical activity and exercise, diet and eating behavior, and behavior modification [ 19 ]. 
Physical activity topics included learning the energy cost associated with typical activities, increasing daily walking and lifestyle physical activity, planning and implementing a structured exercise plan, setting appropriate goals, using logs and the pedometer for self-monitoring, and choosing the right type of exercise, among many others. Examples of covered nutrition topics are the caloric, fat, and fiber content, and the energy density of common foods, the role of breakfast and meal frequency for weight control, reducing portion size, strategies to reduce the diet's fat content, preventing binge and emotional eating, planning for special occasions, and reducing hunger by increasing meal satiety (e.g., increasing fiber content). Cognitive and behavior skills like self-monitoring, self-efficacy enhancement, dealing with lapses and relapses, enhancing body image, using contingency management strategies, and eliciting social support were also part of the curriculum. The intervention team included two Ph.D.- and six M.S.-level exercise physiologists and dietitians, and one behavioral psychologist. Subjects were instructed and motivated to make small but enduring reductions in caloric intake and to increase energy expenditure to induce a daily energy deficit of approximately 300 kcal. Although weight was monitored weekly, subjects were advised that long-term (i.e., after 1–2 years), not necessarily rapid, weight reduction was the primary target. In the first session, participants were informed that reaching a minimum of 5% weight loss at 6 months was an appropriate goal in this program and were subsequently instructed to individually calculate the number of kg that this corresponded to. Statistical Analysis Measures of central tendency, distribution, and normality were examined for all psychosocial variables at baseline and for weight at baseline and 4 months.
Following intention-to-treat principles and to include psychosocial data from all starting subjects in statistical analysis, the Last Observation Carried Forward (LOCF) method was used for 5 subjects who dropped from the program and could not be reached for testing at 4 months (the five subjects dropped after sessions number 10, 11, 12 [two subjects], and 14); in these cases, the last measured weight, which was assessed weekly for each woman with the same scale as used in laboratory testing, was entered as their final weight. The limitations of this method notwithstanding [ 20 ], variations of the LOCF are commonly used in obesity longitudinal trials (e.g., [ 21 ]). The very small number of subjects for whom 4-month weight data were imputed, all of which were derived from weights measured late in the program, should result in relatively unbiased results [ 22 ]. Furthermore, since a trend toward weight regain is common upon subjects leaving treatment, assuming no further weight change after dropping out works against the study's primary hypotheses, providing additional protection from type I error. One subject was removed from analyses that included weight outcome evaluation variables since her values were markedly lower than values from the rest of the group (i.e., the value was considered an outlier). Rank-order correlation (Spearman's ρ) was used to estimate the linear relationship between predictors and weight change. All but one of the independent variables assessed at baseline displayed a non-normal distribution, warranting the use of this non-parametric technique. The dependent measure was expressed as the difference between baseline and 4-month weight. An alternative way to express weight results is to calculate the "residualized" value for 4-month weight, after the effect of baseline weight is removed (i.e., regressed out in linear regression).
This method protects against overcorrection of the post score by the pre score when using a subtraction score, and also effectively and completely adjusts this new "change" score for the pretreatment weight value [ 23 ]. This variable was also used as a dependent variable in analyses. Quadratic terms were produced for the two weight outcome evaluation variables, to assess the curvilinear relationship between these measures and actual weight results. Multiple regression analysis was performed to assess the multivariate relationships between the independent variables and weight change. In this regression model, the selected predictors (variables which were significant or approached significance in the bivariate analysis) were forced into the model and the squared semi-partial correlation coefficient was calculated to quantify the unique contribution of each predictor to the variance in the dependent measure [ 23 ]. Considering the relatively small subject-parameter ratio (24:1) and in the absence of strong theoretical support for a hierarchical entering of predictors into the model, this a priori (forced) model is preferable to a stepwise model as it minimizes instability in the selection of variables into the model (and in parameter estimation) caused by potential sampling biases [ 24 ]. A distribution-based criterion was employed to form three equal-sized groups, split at the two tertiles of weight change. Means of independent variables for the three subgroups were compared by analysis of variance (ANOVA), followed by post-hoc comparisons (Tukey's Honestly Significant Difference test). Type I error was set at 0.05 for all tests. Statistical analyses were completed using the Statistical Package for the Social Sciences (SPSS), version 12.0. Results Weight loss data reported in the present study refer to the initial 4 months of a longer trial. After the 4-month phase, subjects were randomly assigned to three distinct long-term interventions.
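The residualized-change score and tertile grouping described in the Statistical Analysis section can be sketched as below: 4-month weight is regressed on baseline weight by ordinary least squares and the residuals are kept as the baseline-adjusted outcome. This is an illustrative sketch with invented weights, not the authors' SPSS procedure.

```python
def residualized_scores(baseline, followup):
    """Regress follow-up weight on baseline weight (simple OLS) and return
    the residuals: the part of 4-month weight not explained by where each
    subject started."""
    n = len(baseline)
    mx = sum(baseline) / n
    my = sum(followup) / n
    sxx = sum((x - mx) ** 2 for x in baseline)
    sxy = sum((x - mx) * (y - my) for x, y in zip(baseline, followup))
    slope = sxy / sxx
    intercept = my - slope * mx
    return [y - (intercept + slope * x) for x, y in zip(baseline, followup)]

def tertile_groups(values):
    """Split subject indices into three equal-sized groups at the two
    tertiles of weight change (a distribution-based criterion)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    k = len(values) // 3
    return order[:k], order[k:2 * k], order[2 * k:]
```

Because the regression includes an intercept, the residuals sum to zero and are, by construction, uncorrelated with baseline weight, which is what "completely adjusting for the pretreatment value" means here.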
Figure 1 shows individual weight changes for all 140 participants who started the program. Attrition was very low (3.5%) and average weight change was -2.9 ± 3.2 kg (-3.0 ± 3.2 kg if only the 135 completers are considered). The range for weight change was about 19 kg, a large level of individual variability that provides an optimal setting in which to study correlates of weight loss. About 53% of participants lost more than 3.3% of their initial weight (roughly the equivalent of a 5% weight loss after 6 months; in red in Figure 1 ), thus generally meeting or surpassing the recommended weight loss goals. Eighteen percent of all women (in grey in Figure 1 ) did not lose weight, or gained weight, after 4 months. Figure 1 Individual Weight Change After 4 Months. Red bars indicate subjects who lost more than 3.3% of their initial weight; grey bars indicate subjects who did not lose weight or who gained weight. Table 1 shows descriptive statistics for the independent variables and their association with weight change. Fewer previous diets, weight outcome evaluations, and to a lesser degree self-motivation and body image were positively associated with weight loss. When the significance level was adjusted for the number of variables being tested (Bonferroni adjustment, new significance set at 0.005), the number of previous diets and weight outcome evaluations remained significantly correlated with weight results. An additional weight history question, asking whether subjects had lost at least 5 kg in the previous 2 years, was not associated with weight loss at 4 months (t = 0.71, p = 0.480, comparing subjects responding "yes" and "no"). Two additional variables from the GRWQ were also analyzed. "Dream" weight (mean ± SD, 98.1 ± 3.9%) was unrelated to baseline-adjusted weight loss (ρ = 0.001, p = 0.98), while "disappointing" weight outcome (77.4 ± 7.8% of initial weight) was associated with baseline-adjusted weight loss (ρ = 0.27, p = 0.002).
Time at current weight, obesity-specific quality of life, self-esteem, and exercise variables were not associated with weight results, before or after adjusting for baseline weight. Significant predictors in the bivariate analysis (Table 1 ) were entered into a multivariate regression model to predict weight change. Since "happy" and "acceptable" outcome evaluations were highly intercorrelated and represent similar constructs, they were averaged into a single variable for this analysis. All variables entered in the model explained independent shares of the variance in weight loss, before (not shown) and after the inclusion of baseline weight (Table 2 ). Each predictor produced a significant increase in the model's R², with weight outcome evaluations explaining the single largest share of the dependent variable. The model accounted for about 24% of the variance in 4-month weight change.

Table 1 Correlation Between Pretreatment Variables and Weight Change at 4 Months
[Columns: n; ρ and p for weight change; ρ and p for weight change adjusted for baseline weight (1); Mean ± SD (Min to Max)]
Number of diets in past year: n = 130; ρ = 0.26, p = 0.002; ρ = 0.26, p = 0.003; 1.2 ± 1.7 (0 to 8)
Months at current weight: n = 127; ρ = -0.13, p = 0.139; ρ = -0.13, p = 0.157; 24.1 ± 24.1 (0 to 120)
"Acceptable" weight loss (% initial): n = 134; ρ = 0.33, p < 0.001; ρ = 0.26, p = 0.002; 92.7 ± 4.0 (77.1 to 100.6)
"Happy" weight loss (% initial): n = 135; ρ = 0.27, p = 0.001; ρ = 0.21, p = 0.015; 89.0 ± 4.9 (74.9 to 99.1)
Impact of weight on quality of life: n = 138; ρ = 0.02, p = 0.837; ρ = -0.05, p = 0.594; 79.5 ± 14.1 (37.9 to 100.0)
Self-motivation: n = 135; ρ = -0.19, p = 0.030; ρ = -0.18, p = 0.036; 141.3 ± 17.9 (100.0 to 183.0)
Body size dissatisfaction: n = 134; ρ = 0.09, p = 0.280; ρ = 0.18, p = 0.038; 2.29 ± 0.88 (0 to 5)
Self-esteem: n = 131; ρ = 0.00, p = 0.970; ρ = -0.01, p = 0.930; 32.4 ± 3.77 (24 to 40)
Exercise perceived barriers: n = 138; ρ = 0.08, p = 0.364; ρ = 0.08, p = 0.359; 29.8 ± 6.29 (12 to 43)
Exercise self-efficacy: n = 138; ρ = -0.03, p = 0.721; ρ = -0.03, p = 0.750; 38.3 ± 4.78 (25 to 49)
Higher scores indicate a higher value of the characteristic tested (e.g.
higher quality of life, higher self-motivation, higher body size dissatisfaction, more perceived barriers, etc.). Since weight change was coded as baseline weight subtracted from 4-month weight, weight loss is represented by a negative value (thus, a negative correlation coefficient indicates a positive association with weight loss). (1) Four-month weight adjusted for baseline weight.

Table 2 Multiple Regression Analysis for 4-month Changes in Weight
[Columns: B; t; p; squared semi-partial correlation (%)]
Baseline weight: B = -0.069; t = -2.481; p = 0.015; 4.0
Number of diets in past year: B = 0.372; t = 2.439; p = 0.016; 3.8
Weight outcome evaluations (1): B = 0.235; t = 3.673; p < 0.001; 8.7
Self-motivation: B = -0.040; t = -2.714; p = 0.008; 4.7
Body size dissatisfaction: B = 0.755; t = 2.389; p = 0.018; 3.7
R² (×100) = 24.0 (adjusted R² (×100) = 20.5), SEE = 2.80 kg, F(5, 123) = 7.84 (p < 0.001). (1) Average of "happy" and "acceptable" weight outcome evaluations.

Weight outcome evaluations were computed as a percentage of participants' initial weight. Thus, the lower this percentage, the more stringent (i.e., more demanding) was a subject's evaluation of her results, and vice-versa. We found significant and positive linear relationships between outcome evaluations and weight loss (Tables 1 and 2 ), indicating that the more demanding the evaluations of outcomes were at baseline (i.e., the lower the percentage of initial weight), the more weight was later lost (and, vice-versa, the more accepting the evaluation of future weight loss, the less weight subjects lost). However, visual inspection of these associations suggested that participants at the lower end of the outcome evaluation distribution might not follow the overall group trend. In fact, an additional analysis revealed that, for the whole group, a curvilinear pattern of association described the relationship slightly better than a linear pattern, for both "happy" and "acceptable" outcome evaluations and for the average of the two variables (Figure 2 ).
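The hierarchical test of a curvilinear pattern (fit the linear term, add a squared term, and examine the R² increment) can be sketched as follows. The data are generated from the shape of the reported fit (y = 503.9 − 11.5x + 0.065x²) purely to illustrate the procedure; this is not a reanalysis of the study.

```python
# Sketch of testing a quadratic term hierarchically: compare R-squared of
# y ~ x against y ~ x + x^2. Data are generated from the shape of the
# reported equation for illustration only, not from the study.

def ols_fit(X, y):
    """OLS via normal equations with Gaussian elimination. X: list of rows."""
    Xd = [[1.0] + list(row) for row in X]          # prepend intercept column
    p = len(Xd[0])
    A = [[sum(r[i] * r[j] for r in Xd) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(Xd, y)) for i in range(p)]
    for c in range(p):                              # partial pivoting
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for j in range(c, p):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    beta = [0.0] * p
    for c in range(p - 1, -1, -1):
        beta[c] = (b[c] - sum(A[c][j] * beta[j]
                              for j in range(c + 1, p))) / A[c][c]
    return beta

def r_squared(X, y, beta):
    yhat = [beta[0] + sum(bc * xc for bc, xc in zip(beta[1:], row))
            for row in X]
    ym = sum(y) / len(y)
    ss_res = sum((a - f) ** 2 for a, f in zip(y, yhat))
    ss_tot = sum((a - ym) ** 2 for a in y)
    return 1 - ss_res / ss_tot

xs = [75.0, 80.0, 85.0, 88.0, 90.0, 92.0, 95.0, 98.0]  # % of initial weight
ys = [503.9 - 11.5 * x + 0.065 * x * x for x in xs]     # exact quadratic

r2_linear = r_squared([[x] for x in xs], ys, ols_fit([[x] for x in xs], ys))
r2_quad = r_squared([[x, x * x] for x in xs], ys,
                    ols_fit([[x, x * x] for x in xs], ys))
r2_change = r2_quad - r2_linear
```

In a real analysis the R² increment would be tested with an F (or t) statistic, as in the Cohen and Cohen procedure the authors cite; the sketch only shows the model comparison itself.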
Quadratic (squared) terms were tested in regression models, following procedures described by Cohen and Cohen [ 23 ], and were shown to produce small but significant increases in R² beyond the non-transformed (linear) variables alone. Both linear and curvilinear relationships are depicted in Figure 2 . To account for skewness in the weight outcomes data, regression analyses were repeated with the top and bottom 5% of observed values removed from analysis, yielding very similar results (y = 503.9 - 11.5x + 0.065x²; R² change for x² = 0.05, p = 0.010). Figure 2 Relationship Between Weight Outcome Evaluations and Weight Loss. Dashed line shows curvilinear (quadratic term) and solid line shows linear relationship between weight outcomes evaluations (average of "happy" and "acceptable" values) and weight loss (% of initial). Regression equation includes both linear and quadratic terms and R² change refers to the addition of the quadratic term into the model, after the linear term was already in the model. To further explore the association of the selected predictors with weight outcomes, subjects were divided into three groups based on tertiles of weight reduction adjusted for initial weight, and baseline psychosocial measures were compared among groups. Significant overall (ANOVA) differences emerged for the number of previous diets and self-motivation, with post-hoc comparisons showing significant mean differences only between the most and least successful groups (Figure 3 ). Considering the slightly curvilinear relationships observed for the GRWQ variables, it was not surprising that significant differences were not detected between success groups for "happy" ( p = 0.284) and "acceptable" ( p = 0.145) weight loss evaluations. Body size dissatisfaction scores were also not different among the three groups ( p = 0.432). Table 3 shows the frequency of previous diets reported by each success group in more detail.
Of all subjects reporting no diets initiated in the previous year, only 17% finished in the least successful group. Conversely, of the 20 subjects reporting 3 or more recent diets, only 3 (15%) finished within the most successful group. Ten women reported having initiated 4 to 8 diets in the previous year, none of whom finished the 4-month program in the group of women losing the most weight. Figure 3 Comparison of Success Groups for Previous Dieting and Pretreatment Self-motivation. Groups based on tertiles for 4-month weight loss. F for ANOVA. Error bars show 95% confidence intervals. Different letters indicate significant group differences in post-hoc analysis ( p < 0.05).

Table 3 Frequency of Diets Initiated in the Previous Year, by Weight Loss Success Group (1)
[Per group: frequency, %, cumulative %]
0 diets: Most Successful 25 (58%, cum. 58%); Intermediate 24 (52%, cum. 52%); Least Successful 10 (25%, cum. 25%)
1 diet: Most Successful 7 (16%, cum. 74%); Intermediate 10 (22%, cum. 74%); Least Successful 14 (34%, cum. 59%)
2 diets: Most Successful 8 (19%, cum. 93%); Intermediate 6 (13%, cum. 87%); Least Successful 6 (15%, cum. 73%)
3+ diets: Most Successful 3 (7%, cum. 100%); Intermediate 6 (13%, cum. 100%); Least Successful 11 (27%, cum. 100%)
Weight loss (kg), mean ± SD: Most Successful -6.3 ± 2.1; Intermediate -2.7 ± 1.0; Least Successful 0.3 ± 1.7
(1) Groups defined based on tertiles of 4-month weight loss adjusted for baseline weight.

Discussion This study aimed at reexamining the association between several pretreatment individual characteristics and success in short-term behavioral weight reduction, in overweight and moderately obese women. Ten variables which had previously been shown to predict weight change [ 8 ] were analyzed in a separate sample, using a comparable research methodology. Previous dieting, self-motivation, and body image showed significant effects as predictors, in the expected directions of relationship.
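The distribution-based tertile split used to form the three success groups can be sketched as follows, with invented weight changes (more negative means more weight lost):

```python
# Sketch of the tertile split behind Table 3: order subjects by (adjusted)
# weight change and cut at the two tertiles to form three equal-sized
# success groups. Values are invented for illustration.

def tertile_groups(changes):
    """Return (most, intermediate, least) successful groups.

    Weight change is coded so that more negative = more weight lost,
    so the most successful tertile holds the lowest values."""
    ordered = sorted(changes)
    n = len(ordered)
    c1, c2 = n // 3, 2 * n // 3      # tertile cut points
    return ordered[:c1], ordered[c1:c2], ordered[c2:]

changes = [-6.3, -5.1, -4.0, -2.9, -2.7, -2.0, -0.5, 0.3, 1.1]  # kg
most, intermediate, least = tertile_groups(changes)
```

In the study the split was applied to 4-month weight change adjusted for baseline weight, and group means of the baseline measures were then compared by ANOVA.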
Participants' evaluations about possible weight outcomes were also significantly associated with weight loss in the present study, although in a direction opposite to what was hypothesized; more stringent evaluations of outcomes had predicted worse outcomes in US women [ 8 ] while the reverse was observed in Portuguese women, for whom more accepting attitudes towards weight loss were associated with smaller weight changes. Earlier results for exercise, quality of life, self-esteem, and also for some variables related to weight history (time at current weight and large recent weight losses) were not confirmed in the present study. To date, the majority of research on the treatment of overweight and obesity has focused on assessing overall treatment efficacy (expressed as mean group weight change, number of individuals reaching some marker of success, etc.) and analyzing which programs work best, typically using randomly-assigned experimental treatment groups [ 25 - 27 ]. By contrast, much less research has been undertaken to investigate the mechanisms (mediating variables) by which treatments affect subjects, and for whom treatments are most effective (i.e., individual moderators). The potential benefits of studying moderators and mediators of outcomes within the behavioral and social sciences, including for physical activity, diet, and weight control, are well described in the literature [ 28 - 30 ]. The identification of such variables opens the way to a new generation of interventions, characterized by a higher level of individualization and overall efficacy, both by targeting those individuals more likely to succeed and through an increased focus on those mediators (treatment-related, environmental, and individual factors, and critical interactions among them) more clearly associated with outcomes [ 7 ].
Nevertheless, empirically-derived hypotheses for the role of moderators and mediators in the treatment of obesity remain scant, particularly for psychosocial variables. As a contrasting example, sufficient evidence was already available in the alcohol prevention field in the early 1990s for a large multi-center trial to be funded and carried out, aimed at testing the interaction between treatment modality and a considerable number of individual predictors/moderators such as cognitive impairment, conceptual level, motivation, social support, and patient typology [ 31 ]. In the present study and in other trials [ 32 - 36 ], previous dieting and weight loss attempts have emerged as reliable negative predictors of weight loss. One explanation is that the subset of women reporting more frequent dieting contains a disproportionately high number of individuals who are, for some reason, more resistant to weight control. Despite evidence showing that many individuals are successful even after many previous failed attempts [ 37 , 38 ], it is possible that some subjects in research-based obesity treatment programs see those programs as just one more among many solutions they have tried and failed at before, and thus are more prone to low self-confidence and impaired motivation. Frequent restriction of eating, implied in the question "how many diets have you started...?", could also be a marker for more extreme dieting behaviors that may not be sustainable after the initial boost of motivation [ 39 ]. This could also increase the probability of weight rebound. More studies are needed to investigate the mechanisms through which previous dieting affects weight control, a consistent finding in the literature. The present report also provides an indication that a threshold may exist (3–4 diets in the previous year) which is associated with a marked reduction in the likelihood of success.
Four earlier reports have examined the role of self-motivation as a predictor of weight loss [ 8 , 36 , 40 , 41 ] while one additional study used a general self-efficacy questionnaire worded similarly to the SMI [ 42 ]. The related construct of autonomy-oriented motivation (defined as a motivation style more related to a person's own interests and values and less controlled by external events) has also been evaluated as a predictor [ 43 ]. With one exception [ 41 ], evidence has supported the notion that high pretreatment levels of self-motivation and an autonomy-oriented motivation are beneficial traits for subsequent weight loss. The SMI has also been shown to correlate with eating variables during weight loss [ 44 ] and to predict exercise behavior [ 13 ]. Contrary to earlier observations in US women [ 8 , 36 ], exercise-related variables did not predict weight loss in the present analysis. That is, while the more general personality attributes related to motivation and efficacy were stable predictors of outcomes in weight loss across studies, the moderating role of exercise self-efficacy and exercise perceived barriers (time, effort, etc.) did not translate well from the US to the Portuguese data set. Cross-national differences such as distinct levels of social awareness for exercise or differences in level of knowledge, past adoption levels, and/or perceived competence regarding exercise and physical activities, all of which may have influenced answers to the exercise questionnaires, are possible explanations for these differences. This study is among only a few that have analyzed associations between the Goals and Relative Weights Questionnaire and subsequent weight loss. Interestingly, marked differences emerged between the present and two previous analyses [ 8 , 36 ].
Portuguese women with more modest weight outcome evaluations were less likely to lose weight, while in US women the opposite was observed, that is, more stringent (demanding) evaluations of possible results were predictive of poorer results. Evidence for a significant effect of outcome expectancies on weight control is extremely relevant in the context of realistic versus unrealistic expectations for weight loss [ 45 - 47 ]. Excessively optimistic expectations are common in US treatment-seeking obese samples [ 17 ], for whom a great value is typically placed on reaching desired weights [ 48 ]. By contrast, Portuguese women, perhaps because they are comparatively less exposed to external pressures to be thin and/or because they belong to a culture where optimism is less valued than in the US, were less likely to produce very demanding weight-related evaluations. Accordingly, we have recently reported that Portuguese women do, on average, state overall less stringent evaluations of weight loss outcomes at baseline than their American counterparts [ 49 ]. This being the case, one hypothesis for the divergent associations for US and Portuguese samples is that, when a broad population is considered, the expectations-outcomes relationship is indeed curvilinear (with a yet-undetermined nadir or interval representing the more favorable goals/expectations) and that Portuguese women predominantly fall on the right (more conservative) side of the distribution while US subjects better represent the left side (more stringent). In the present study, it appeared that the weights participants would find acceptable/happy were associated with weight loss (i.e., more "optimistic" outcome evaluations, more weight loss) until a certain threshold was reached, somewhere around 85 to 90% of initial weight (10–15% weight loss); for women reporting outcome evaluations below that level no further benefit was apparent.
One previous study has shown that women with more modest absolute weight loss goals were more likely to achieve their goals, and that those who achieved their weight goals had better weight maintenance after 2.5 years; however, desired weight loss did not directly predict actual weight loss [ 50 ]. Positive expectations expressed as a higher reported likelihood of reaching goal weight predicted larger short-term weight loss in subjects who showed lower levels of fantasizing and daydreaming about beneficial consequences of large weight loss [ 51 ]. Other studies have shown larger weight loss goals to positively predict weight loss [ 41 , 52 ] and in one other case goals had small predictive value [ 53 ]. Collectively, previous results and those we now report suggest that positive and moderate expectations/outcome evaluations foretell the best overall results, particularly if accompanied by a high sense of self-assurance [ 52 ]. It should be noted that variables originating from the GRWQ are closely related but are not equivalent to the construct of outcome expectancies (the belief that certain actions will lead to the projected results [ 54 ]) or to weight loss goals. The GRWQ seems to partially measure an actual prediction of outcomes by the participant, similar to a general self-efficacy expectation (e.g., how much weight do you think you will lose by the end of this program? ), while simultaneously tapping into a more attitudinal facet towards a person's weight and weight loss ( how happy/accepting/disappointed would you feel at certain levels of weight loss? ). To some extent, the latter could measure idealization of body weight and perceived importance of body weight and shape for self-esteem and well-being. Therefore, it is possible that moderate or "realistic" weight outcome evaluations (i.e.
not too accepting but also not excessively stringent) are the most beneficial and indeed reflect a good balance between a sufficient and necessary sense of self-efficacy and low to moderate levels of thin-ideal internalization, a variable which has been shown to be a positive risk factor for body dissatisfaction, negative affect, and eating disorders [ 55 , 56 ]. Women reporting a larger discrepancy between self and ideal body figures, which indicates a higher body size dissatisfaction [ 18 ], were less likely to lose weight. In a previous report, the same self-ideal measure correlated similarly with short-term results, while two other measures of body image showed comparable, albeit non-significant trends [ 8 ]. Pretreatment scores in the body dissatisfaction scale of the Eating Disorders Inventory, a measure of psychological concern and dislike about one's body shape and size [ 57 ], have also been negatively associated with weight loss in two other behavioral weight loss programs [ 58 , 59 ]. These relationships may be explained by the negative association of body image with mood and psychological impairment [ 60 ], and also by the disappointment and lack of self-worth and self-confidence following previous failed attempts to change weight and body shape [ 6 ]. Although self-esteem did not predict outcomes, we observed significant correlations between body size dissatisfaction and self-esteem (ρ = -0.18, p = 0.042), the number of previous diets (ρ = 0.22, p = 0.013), and weight-related quality of life (ρ = -0.37, p < 0.001). Rapid and concurrent improvements in body image and eating behavior (e.g., reduction in binge episodes) have been observed after surgery-induced weight loss [ 61 ], clearly suggesting a close link between attitudes towards one's body and weight control behaviors. Body image therapy has also been shown to reduce concern with food, in the context of a behavioral weight control trial [ 62 ].
Despite the sound theoretical rationale and supportive body of evidence, a note of caution is warranted regarding the multidimensionality of the body image construct [ 63 ] and the proliferation of assessment instruments for body image. Although they are typically intercorrelated [ 60 ], different body image scales should be interpreted separately as they may result in different patterns of association with weight loss [ 8 , 58 ]. Strengths of this study are the a priori selection of variables to be analyzed as predictors, a unique population (Portuguese women), and the very low dropout rate. Limitations include a moderately sized sample considering the known measurement error associated with questionnaire psychological assessments, the fact that some of the scales used still lack well-established validity, and the absence of a control or comparison group. Conclusions Several pretreatment variables were re-evaluated as predictors of short-term weight loss in women. Previous dieting, low self-motivation, and body size dissatisfaction were confirmed as negative predictors of weight outcomes, while the relationship of outcome evaluations with weight reduction suggested a negative and curvilinear pattern, with positive but not excessively demanding evaluations presaging the best results. These data regarding people's outcome evaluations prior to weight loss may have important clinical implications [ 64 ] and are the first evidence for such a pattern of association; thus, they await replication in other samples. Additionally, treatment decisions based on level of previous dieting (alone or included in comprehensive prediction models) may be possible in the near future, at least for overweight and moderately obese women. The more consistent predictors from this and previous studies (e.g., [ 8 , 42 , 59 ]) can and should be used in future hypothesis-testing studies of moderators of weight loss.
Finally, this study highlights the fact that behavioral and psychological prediction models may, to some extent, be specific to a particular culture [ 65 ]. Hence, it is likely that some variables will emerge as moderators (and mediators) of obesity treatment in some, but not all cultures, while others will be proven as more universal correlates of success. Competing Interests None declared. Authors' Contributions PJT conceived the study, led the implementation team, performed most statistical analysis, and drafted the manuscript. ALP participated in the study's implementation, in statistical analysis, and was responsible for all psychometric assessments. TLB, SSM, CSM, and JTB were actively involved in the study's implementation and in data collection. AMS participated in the study's implementation and collected all body habitus data. LBS is the principal investigator in the research trial and contributed to the final version of the manuscript. All authors read and approved the final manuscript.
Transitions in care during the end of life: changes experienced following enrolment in a comprehensive palliative care program (PMC553975)

Background Transitions in the location of care and in who provides such care can be extremely stressful for individuals facing death and for those close to them. The objective of this study was to describe the distribution of transitions in care experienced by palliative care patients following admission to a comprehensive palliative care program (PCP). A better understanding of these transitions may aid in reducing unnecessary change, help predict care needs, enhance transitions that improve quality of life, guide health care system communication links and maximize the cost-effective utilization of different care settings and providers. Methods Transition and demographic information pertaining to all patients registered in the PCP at the Queen Elizabeth II Health Sciences Centre (QEII), Halifax, Nova Scotia, Canada between January 1, 1998 and December 31, 2002 and who died on or prior to December 31, 2002 was extracted from the PCP database and examined. A transition was defined as either: (1) a change in location of where the patient was cared for by the PCP or, (2) a change in which clinical service provided care. Descriptive analysis provided frequencies and locations of transitions experienced from time of PCP admission to death and during the final two and four weeks of life, an examination of patient movement and a summary of the length of stay spent by patients at each care location. Results Over the five-year period, 3974 adults admitted to the QEII PCP experienced a total of 5903 transitions (mean 1.5; standard deviation 1.8; median 1). Patients with no transitions (28%) differed significantly from those who had experienced at least one transition with respect to survival time, age, location of death and diagnosis (p < 0.0001). The majority of patients were admitted to the PCP from various acute care units (66%).
Although 54% of all transitions were made to the home, only 60% of these moves included care provided by PCP staff. During the last four weeks of life, 47% of patients experienced at least one transition; 36% during the final two weeks of life. Shorter stays in each location were evident when care was actively provided by the PCP. Conclusion A relatively small number of patients under the care of the PCP at the end of life made several transitions in care setting or service provider. These particular patients need closer scrutiny to understand why such transitions take place so that clinical programs may be designed or modified to minimize the transitions themselves or the impact transitions have on patients and families.

Background For individuals facing death, and for those close to them, transitions in the location of care and in who provides care can be extremely stressful.[ 1 ] Such transitions include moving from home to hospital or to long-term care facilities, from ward to ward within hospitals, or in and out of care directed by particular care providers (such as specialists). With or without good continuity of information transfer, patients and caregivers may, at each transition, need to retell their story, renegotiate the goals of care and redefine their relationships with health professionals. At each point, new communication channels must be established and new trust formed. Much of the health service research literature in end-of-life care focuses on issues either in hospital or at home. [ 2 - 5 ] We believe it is also important to examine the changes or transitions patients make in where they are cared for and by whom during the end of life.
The ultimate goal of understanding these transition issues better is to reduce unnecessary changes, help to predict care needs, enhance transitions that improve patient and caregiver quality of life, guide communication links within the health care system and maximize the cost-effective utilization of different care settings and providers. The primary purpose of this study was to describe the distribution of transitions in care experienced by palliative care patients during the time subsequent to admission to a comprehensive palliative care program. We also report the proportion of transitions as death becomes imminent (the last two and four weeks of life) and describe the length of stay in each location of care or care setting. Methods Subjects Subjects included all adult patients registered in the palliative care program (PCP) at the Queen Elizabeth II Health Sciences Centre (QEII) in Halifax, Nova Scotia, Canada between January 1, 1998 and December 31, 2002 with a recorded date of death on or prior to December 31, 2002. The PCP includes multidisciplinary care for the dying, an in-patient acute care unit, an in-hospital consultation service, a home consultation service and an oncology outpatient clinic consultation service. The team consists of physicians, nurses, social workers, pharmacists, spiritual care providers and volunteers. There is no free-standing hospice facility in Nova Scotia. The PCP has existed since 1988 and, by 1997, had at least one contact with over 62% of those who die annually of cancer in the Halifax Regional Municipality (population approximately 350,000).[ 6 ] Data Individual-level information extracted from the PCP database included demographics (sex, date of birth, date of death, postal code), diagnoses, the relationship of the primary caregiver to the patient (for example, spouse, daughter, son, friend), reason for referral, location of death and program transition data.
The program transition information provided the date of each transition and locations the patient had been moved to or from (for example, home, an acute care facility, long-term care) as well as the clinical service providing care. For example, for inpatients the service might be the Palliative Care Unit, or a medical or surgical service; for outpatients it might be the Nova Scotia Cancer Centre (NSCC) PCP clinic, the PCP Home Support Service, or the family doctor. The service indicator field also provided a record of whether patients were 'actively' being cared for by PCP staff or whether their care had been transferred to the staff of the NSCC or family doctor (either locally or elsewhere in the province). Ethical approval for this research was provided by the Nova Scotia Capital District Health Authority research ethics board. Measures A transition in this study is defined as either: (1) a change in location of where the patient was cared for by the PCP or, (2) a change in which clinical service provided care. For example, a transition might be a move to or from the home, a specific acute care unit or a long-term care facility. A transition would also occur if the patient stayed in a single location, for example at home, but the care being provided was transferred from PCP staff ('actively cared for' by the PCP) to their family physician or home care nurse ('non-active' and no longer 'actively' cared for by PCP staff) or vice versa. This 'transfer of care' transition scenario is illustrated below: Length of stay was calculated as the number of days a patient stayed in a single location while receiving the same form of care. For example, a patient might be admitted to the PCP from home and stay in the home for a total of 86 days at which time the patient was admitted to an acute care inpatient unit. After spending 10 days in acute care the patient was sent home where they died 25 days later. In this scenario the patient experienced two transitions (1. 
from home to acute care; 2. from acute care to home) but had three stay periods (two at home and one in acute care). Survival in this study was defined as the number of days between the date of initial admission to the PCP and death. Cancer is the most prevalent disease among PCP patients, in particular lung cancer. To reflect this we created a diagnostic summary with three categories: lung cancer, all other cancers and other disease only (no cancer). Lung cancer was separated from other cancers as it is the most common cancer affecting both sexes in Nova Scotia, has a short prognosis and often has PCP involvement. Analysis The analysis focused on providing a description of the number and location of transitions experienced by patients over the five-year study period. For each patient, the total number of transitions occurring from the date of initial admission to the PCP to the date of death and during the final two and four weeks of life was counted and described. Locations of admission, death, care and service provision are summarized, and patient movement from location to location ('site movements') is examined. Summary statistics are provided to describe the length of stay (LOS) or number of days spent by all patients at each care location. Differences between patients who experienced no transitions versus those who experienced at least one transition were assessed using contingency tables with chi-square techniques and logistic regression. Results In total, 3972 adult patients were admitted to the QEII PCP between January 1, 1998 and December 31, 2002 and had died on or prior to December 31, 2002. There was a slight preponderance of male patients (52%); patients tended to be older (mean 68.5 years, standard deviation [SD] 13.7) and diagnosed with cancer (90%) (Table 1 ). Lung was the major cancer site accounting for 29% of all cancer diagnoses.
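The transition and length-of-stay definitions above can be sketched directly. The episode list below re-creates the worked example from the text (86 days at home, 10 days in acute care, then 25 days at home before death); the dates themselves are hypothetical.

```python
# Sketch of the study's definitions: a change in location OR in the clinical
# service providing care counts as one transition; length of stay is the
# number of days spent in one location under one form of care. Dates and
# service labels are hypothetical, chosen to reproduce the worked example.

from datetime import date

def transitions_and_stays(episodes):
    """episodes: time-ordered (start_date, location, service) tuples,
    closed by a (end_date, None, None) sentinel (e.g. the date of death).
    Returns (number_of_transitions, list_of_stay_lengths_in_days)."""
    n_transitions = 0
    stays = []
    for (start, loc, svc), (nxt, nloc, nsvc) in zip(episodes, episodes[1:]):
        stays.append((nxt - start).days)
        if nloc is not None and (nloc != loc or nsvc != svc):
            n_transitions += 1
    return n_transitions, stays

episodes = [
    (date(2001, 1, 1), "home", "PCP home care"),
    (date(2001, 3, 28), "acute care", "medical service"),
    (date(2001, 4, 7), "home", "PCP home care"),
    (date(2001, 5, 2), None, None),               # date of death
]

n, stays = transitions_and_stays(episodes)   # two transitions, three stays
```

Because a transition is triggered by a change in either field, a patient who stays at home while care is transferred from PCP staff to the family physician would also count one transition under this definition.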
Survival time, the number of days between program admission and death, was highly variable, ranging from 0 days (died same day as admitted to the PCP) to 1688 days, with an average of 100.6 (SD 163.2) and median of 45 days. As recorded in Table 1 , 40% of patients survived 30 days or less and 24% survived 121 days or more. Eighty-five percent of patients admitted to the PCP survived 6 months or less.

Table 1 Characteristics of patients admitted to the Queen Elizabeth II Palliative Care Program between Jan 1, 1998 and Dec 31, 2002 and who died during the same period; entries are number of patients* (%)
Sex: Female 1925 (48.5); Male 2047 (51.5)
Age, years: <60 992 (25.1); 60–69 873 (22.1); 70–79 1208 (30.6); ≥80 878 (22.2)
Year of admission to PCP: 1998 883 (22.2); 1999 824 (20.8); 2000 876 (22.1); 2001 771 (19.4); 2002 618 (15.6)
Year of death: 1998 645 (16.2); 1999 820 (20.7); 2000 858 (21.6); 2001 824 (20.8); 2002 824 (20.8)
Survival (days): 0–30 1605 (40.4); 31–60 723 (18.2); 61–90 432 (10.9); 91–120 251 (6.3); 121+ 960 (24.2)
Location of death: Hospital death (not in PCP unit) 2009 (50.6); Inpatient PCP unit 635 (16.0); Home 1218 (30.7); Long-term care facility 110 (2.8)
Diagnoses, summarized: Lung cancer 1025 (26.1); All other cancers 2525 (64.3); Other disease, no cancer 375 (9.6)
Caregiver relationship: Spouse 2165 (57.8); Child 1017 (27.2); Parents / other relations 448 (12.0); Other 116 (3.1)
Primary reasons for referral to PCP (responses are not exclusive): Pain 1630 (41.0); Other symptoms 1868 (47.0); Patient support 1729 (43.5); Family support 1620 (40.8); Staff support 317 (8.0); Home consultation 1067 (26.9); Terminal care 345 (8.7); Respite care 5 (0.1); Grief 19 (0.5)
* N = 3,972; smaller n for some characteristics due to missing values.

Figure 1 illustrates the distribution of the number of transitions experienced by patients over the five-year study period. The number of transitions totalled 5903, ranging from 0 to 21, with an average of 1.5 (SD 1.8) per patient (median: 1).
Overall, 28% did not experience a transition, 41% experienced one transition and 31% experienced two or more. Patients with no transitions experienced no change in the care provided or the location of care from the point of PCP admission to death. However, this group differed significantly (at the 0.001 level) from patients who experienced at least one transition with respect to survival time, age, location of death and diagnosis. Compared to those who had at least one transition between admission to the PCP and death, patients with no transitions were much more likely to have a survival time of fourteen days or less (64% versus 10%), to be aged 80 years or older (32% versus 19%), to experience a hospital death (68% versus 44%) and to have a diagnosis other than cancer (19% versus 6%).

Figure 1. Total transitions over a five-year period among all patients admitted to a comprehensive Palliative Care Program (PCP) (N = 3972).

The majority of patients (66%) were initially admitted to the PCP as inpatients of various acute care units (n = 2619). Of this inpatient group, only 65 patients (2.5%) were admitted directly into the inpatient unit of the PCP. PCP admissions to care at home accounted for 34% of patients (n = 1335). Care services for home patients were provided upon admission either by PCP home care staff (51%) or on an outpatient basis through the NSCC (49%). Very few were admitted to the service from a long-term care facility (n = 17). Overall, 54% of all transitions were made to the home, 27% to other acute care inpatient units within the same hospital, 17.5% to the PCP inpatient unit, and almost 2% to a long-term care facility. PCP staff members, however, were not always the ones actively providing care within each of these locations. For instance, only 60% of transitions to home involved care 'actively' provided by PCP home care staff or through the NSCC PCP outpatient clinic.
The reasons why the PCP was not involved in care provision are varied and include improvements in health status (for example, their original symptoms abated, or they were thought to be dying but improved unexpectedly), changes in their personal caregiver situation, or movement out of the PCP coverage area. In contrast, few transitions involving acute care units were due to a change in care from active care provision by PCP staff to non-active PCP care (4%). Table 2 illustrates the number of transition changes from one location or care site to another. For example, the table shows that 32% of transitions originating from the home were to the PCP inpatient unit, while 65% of transitions from the PCP inpatient unit were to the home.

Table 2. Location or site/care movements: a summary of the number of transitions made by all patients by location change. Each cell gives the number of transitions (row %).

Changing FROM \ Changing TO | PCP inpatient unit | All other acute care units | Home | Home with NSCC care | Long-term care | Total (% of all transitions)
PCP inpatient unit | 120^1 (26.1) | 28 (6.1) | 301 (65.4) | 0 | 11 (2.4) | 460 (7.8)
All other acute care units | 184 (7.2) | 350^2 (13.7) | 1899 (74.5) | 73 (2.9) | 44 (1.8) | 2550 (43.2)
Home | 628 (32.1) | 897 (45.8) | 199^3 (10.2) | 206 (10.5) | 29 (1.5) | 1959 (33.2)
Home (with NSCC care) | 93 (10.4) | 290 (32.3) | 504 (56.1) | 9^3 (1.0) | 2 (0.22) | 898 (15.2)
Long-term care | 3 (9.1) | 10 (30.3) | 14 (42.4) | 0 | 6^3 (18.2) | 33 (0.6)
Total | 1028 (17.4) | 1575 (26.7) | 2914 (49.4) | 288 (4.9) | 92 (1.6) | 5900^4 (100)

Site/care movements from and to the same location were evident.
1 The patient was admitted under the PCP but placed in a bed elsewhere in the hospital while awaiting transfer to the inpatient PCP unit. The movement in this situation reflects a physical location change from within the acute care facility to the PCP inpatient unit. It does not represent a change in care.
2 Site movements across different acute care units are captured here.
3 Each of these same-site movements reflects a change from 'active' care status to 'non-active' care status, or vice versa. 'Active' care is defined as care provided to a patient by PCP staff. 'Non-active' care refers to care provided by individuals not associated with the PCP program.
4 Three transition records were missing site information.

During the last four weeks of life, 47% of patients experienced at least one transition. The majority of transitions were to an acute care facility (26% to the PCP inpatient unit, 35% to other acute care units), followed by the home (37%) and long-term care (2%). Similar patterns were evident when moves during the 2 weeks prior to death were examined. At least one transition was experienced by 36% of patients during these final 2 weeks of life. Moves to an acute care facility accounted for 68% (29% to the PCP inpatient unit; 39% to other acute care units), while 30% were to the home and 2% to long-term care. The location of death among PCP patients followed a pattern similar to that found at admission. Sixty-seven percent (n = 2644) of deaths occurred within an inpatient acute care unit; 16% of these deaths occurred in the PCP inpatient unit (n = 423). Home deaths were experienced by 31% of all admissions (n = 1218), while 3% (n = 110) of patients died as long-term care residents. Table 3 summarizes the median length of stay (LOS) in days associated with each transition, by location or care setting and the PCP's role in providing care. Medians are reported because of the very wide, skewed distributions associated with each LOS. We have split the LOS by PCP role in care to illustrate how LOS tended to be longer during transitions where PCP health care providers did not provide care. The median length of stay for patients who transitioned to acute care units without PCP care was 27.5 days.
In contrast, transitions to the PCP inpatient unit and to other acute care units with active care by PCP health care providers were much shorter, with median stays of seven and six days respectively. The median length of stay spent at home while under the care of PCP home care providers was 28 days, 10 days less than that experienced at home when care was provided by others not associated with the PCP.

Table 3. Median length of stay (days) spent in each care setting by PCP role in care^1; ranges in parentheses.

Location / care setting | 'Active' care | 'Non-active' care
Acute care: PCP inpatient unit | 7 (0–175) | -
All other inpatient units | 6 (0–390) | 27.5 (0–1139)
Home^2 | 28 (0–845) | 38 (0–1658)
Home, followed by NSCC | 30 (0–1295) | 44 (2–901)
Long-term care | 18 (0–112) | 63 (3–1079)

1 'Active' care is defined as care provided to a patient by PCP staff. 'Non-active' care refers to care provided by individuals not associated with the PCP program.
2 Home is defined as a setting in which the patient is not required to leave their home for contact with health care providers. Home, followed by NSCC, indicates that the patient leaves the home to attend the PCP clinic at the Cancer Centre.

Discussion

Patients followed by the QEII PCP are primarily elderly (75% are 60 years of age or older), urban dwellers, have cancer, and survive less than six months from the time of initial program admission (80%). This age and sex distribution is similar to that of all Nova Scotians who died due to cancer between 1992 and 1998.[2] The average number of transitions in care settings experienced by this group was 1.5. Two or more transitions were experienced by 31% of patients. We were surprised that the vast majority of patients (69%) had fewer than two transitions; we had expected the number to be somewhat higher.
Our clinical experience of providing care for these patients is perhaps skewed by the challenges faced by the minority of patients with multiple transitions. Patients who had no transitions beyond entry to the PCP appear to be a different population from those with one or more transitions. This group may be a much sicker population, since they tended to have much shorter survival times, were older and were more likely to die in hospital. They were also more likely to die of disease other than cancer. This population warrants special attention in program planning and service delivery, given their potentially higher institution-based needs. The most common transition identified was from an acute care hospital unit "to" the home. This fact, in and of itself, underscores the substantial focus that must be placed, in hospital, on discharge planning for those at the end of life. Attention to the multiple discharge issues for this unique group of patients is likely the single biggest transition issue facing the acute care units and the consulting QEII PCP. Symptom control, drug supplies, home equipment needs, family and professional caregiving needs in the home, advance planning for routine follow-up and crisis management, and psychological support all need planning. The acute care teams must do this planning along with the QEII PCP consult team in a collaborative fashion. These transitions present challenges in coordination and information transfer in order to facilitate continuity of care for patients and families. The next most common category of transitions was from home "to" the hospital, either to an acute care unit or to the PCP inpatient unit. Just as the previous transition needs to be planned and coordinated, so does the home-to-hospital one. Unfortunately, we have no data on how this latter transition occurs.
Most often it may be due to a symptom crisis in the home, a lack of caregiver capacity in the home, or a lack of financial resources to bring adequate care into the home.[7] Given the substantial pressure on acute hospital beds in Canada today, many of these admissions take place via the emergency department. Such environments may be appropriate for acute symptom stabilization before admission, but the emergency department could be bypassed with planned and coordinated direct admission to the inpatient unit concerned. More information is needed on the "route" taken to hospital, the issues that prompt admission, the goals of admission, and whether hospital-based care versus respite or long-term care would best meet patient care needs. Most patients came to the care of the PCP from within the acute care system. This may reflect the lateness of referral for a substantial number of individuals (40% dying within 15 days of referral). These individuals, by the time of referral, may be quite ill in hospital. This initial entry to the PCP (also a transition of care, but one we did not explore) may need to be a focus of concern so that it is better understood. We need to understand the timing of referrals, what can be done to identify and meet patients' needs when required, and whether these needs could be met outside of the hospital setting. Once under the care of the PCP, the most common transitions were "to" home. Only a very small number of patients moved "to" or "from" long-term care (LTC). The fact that the majority of transitions were to home reflects the goals and expertise of the PCP in emphasizing the home location by coordinating the services needed for care at home. The fact that long-term care is a rare transition also needs exploration. Policies may exist, or wait times for admission may be such, that transition to these facilities for people with a short prognosis is difficult to achieve. At the same time, few patients transition "from" long-term care.
Some local LTC facilities have intramural palliative care programs designed specifically to meet the needs of their residents and their goals of staying in LTC as death approaches, avoiding hospital transfers. The results of this study indicate that almost half of patients have a transition in the last month of life and 36% in the last 2 weeks of life. As death approaches, these transitions are more likely to be "to" hospital (62% of those in the last 4 weeks and 68% of those in the last 2 weeks). Defining the "appropriateness" of these late transitions is difficult. As stated before, they may be due to acute symptom crises or caregiver inability to meet all of the care needs. We have also heard that patients are admitted when they appear in emergency departments with these issues because they have not been able to access professionals to assess and problem-solve during nights and weekends.[7] Therefore, some late admissions are entirely appropriate, and others may have been avoidable if more resources were available to patients and families at home. As to the generally longer stays for patients when not under "active PCP" care, it may be that they are much less sick, with other chronic, non-palliative problems, and that their prognosis is longer. One might also postulate that, for inpatients, the active involvement of the PCP may facilitate shorter stays and transitions to the home. Such facilitation could include more rapid control of symptoms or more expeditious discharge planning. The phenomenon may be similar in both the home and long-term care settings.

Limitations

One substantial limitation is the loss of information pertaining to patients who are transitioned to non-active care permanently. Although we do have a record of their death, once a patient is moved to non-active care and ceases further involvement with the PCP, we do not have a record of who has taken responsibility for their care or where this care has been received.
Our results and our clinical experience suggest this group may be quite different from those who remain actively cared for by the PCP. We have begun efforts to collect information about this group in order to understand them better.

Conclusion

In conclusion, a small number of patients under the care of the PCP make several transitions in care setting or service provider very near the end of their lives. These particular patients need much closer scrutiny in order to understand why such transitions take place. We will then be able to design or modify clinical programs to minimize the transitions themselves or the impact the transitions have on patients and families. Possible negative impacts of multiple transitions include discontinuity of care, poor coordination of care, financial burden and the psychological stress that each move may bring to patients and their families.

Competing interests

The author(s) declare that they have no competing interests.

Authors' contributions

FB and BL participated in the conceptualization and design of the project and the analysis and interpretation of the data, drafted the majority of the article, and incorporated co-authors' comments into the final draft. PC participated in the conceptualization and design of the project, interpretation of data, and revising of the manuscript. DM participated in the analysis and interpretation of data and in revising each draft for critical content. All authors approved the final version.

Pre-publication history

The pre-publication history for this paper can be accessed here:
549520 | The Severe Acute Respiratory Syndrome (SARS)-coronavirus 3a protein may function as a modulator of the trafficking properties of the spike protein | Background A recent publication reported that a tyrosine-dependent sorting signal, present in the cytoplasmic tail of the spike protein of most coronaviruses, mediates the intracellular retention of the spike protein. This motif is missing from the spike protein of the severe acute respiratory syndrome-coronavirus (SARS-CoV), resulting in a high level of surface expression of the spike protein when it is expressed on its own in vitro. Presentation of the hypothesis It has been shown that the severe acute respiratory syndrome-coronavirus genome contains open reading frames that encode proteins with no homologue in other coronaviruses. One of them is the 3a protein, which is expressed during infection in vitro and in vivo. The 3a protein, which contains a tyrosine-dependent sorting signal in its cytoplasmic domain, is expressed on the cell surface and can undergo internalization. In addition, 3a can bind to the spike protein, and through this interaction it may cause the spike protein to become internalized, resulting in a decrease in its surface expression. Testing the hypothesis The effects of 3a on the internalization of cell surface spike protein can be examined biochemically, and the significance of the interplay between these two viral proteins during viral infection can be studied using reverse genetics methodology. Implication of the hypothesis If this hypothesis is proven, it will indicate that the severe acute respiratory syndrome-coronavirus modulates the surface expression of the spike protein via a different mechanism from other coronaviruses. The interaction between 3a and S, which are expressed from separate subgenomic RNAs, would be important for controlling the trafficking properties of S.
The cell surface expression of S in infected cells significantly impacts viral assembly, viral spread and viral pathogenesis. Modulation by this unique pathway could confer certain advantages during the replication of the severe acute respiratory syndrome-coronavirus. |

Background

The recent severe acute respiratory syndrome (SARS) epidemic, which affected over 30 countries, resulted in more than 8000 cases of infection and more than 800 fatalities (World Health Organization, ). A novel coronavirus was identified as the aetiological agent of SARS [1]. Analysis of the nucleotide sequence of this novel SARS coronavirus (SARS-CoV) showed that the viral genome is nearly 30 kb in length and contains 14 potential open reading frames (ORFs) [2-4]. The encoded viral proteins can be broadly classified into three groups: (i) the replicase 1a/1b gene products, which are important for viral replication; (ii) the structural proteins, spike (S), nucleocapsid (N), membrane (M) and envelope (E), which have homologues in all known coronaviruses and are important for viral assembly; and (iii) the "accessory" proteins that are specifically encoded by SARS-CoV. Much progress has been made in characterizing these SARS-CoV proteins [5, 6], but the molecular determinant of the severe clinical manifestations of SARS-CoV infection, in contrast to the mild diseases caused by most coronaviruses, remains to be determined. In addition, the exact roles of the "accessory" proteins of SARS-CoV are still poorly understood. The subject of this hypothesis relates to the S protein and one of the "accessory" proteins, the SARS-CoV 3a protein. The S protein, which forms the morphologically characteristic projections on the virion surface, mediates binding to the cellular receptor and the fusion of viral and host membranes, both processes being critical for virus entry into host cells [7, 8].
As such, S is known to be responsible for inducing host immune responses and virus neutralization by antibodies [9, 10]. 3a (also termed ORF3 in [2] and [11], X1 in [3], and U274 in [12, 13]) is the largest "accessory" protein of SARS-CoV, consisting of 274 amino acids and 3 putative transmembrane domains. Three groups independently reported the expression of 3a in SARS-CoV-infected cells [13-15], and it was also detected in the lung specimen of a SARS-CoV-infected patient [14]. Antibodies against 3a were also found in convalescent patients [11, 12, 14]. This article hypothesizes that the endocytotic properties of 3a allow it to modulate the surface expression of S, and explores a functional significance for the interaction between S and 3a, which has been observed experimentally [13, 15].

Presentation of the hypothesis

The cellular fate of the S protein has been well mapped [16, 17]: S is cotranslationally glycosylated and oligomerized at the endoplasmic reticulum. Its N-linked high mannose side chains are trimmed and modified, becoming endoglycosidase H-resistant during transport to the Golgi apparatus. Only this fully matured form of S can be assembled into virions and/or transported to the cell surface. The latter can cause cell-cell fusion and the formation of syncytia. Recently, Schwegmann-Wessels and co-workers reported that a novel sorting signal for intracellular localization is present in the S protein of most coronaviruses, but absent from SARS-CoV S [18]. Site-directed mutagenesis studies confirmed that a YxxΦ motif (where x is any amino acid and Φ is an amino acid with a bulky hydrophobic side chain) retains the S protein of TGEV intracellularly when it is expressed alone. In contrast, SARS-CoV S is transported efficiently to the cell surface unless such a motif is introduced into its cytoplasmic tail by mutagenesis.
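The YxxΦ motif described above is simple to scan for computationally. The sketch below assumes a common working definition of Φ as one of the bulky hydrophobic residues L, I, M, F or V; this residue set is an assumption for illustration, since the exact definition of Φ varies between studies.

```python
import re

# Scan a protein sequence for YxxPhi motifs: Y, any two residues,
# then a bulky hydrophobic residue. The residue set {L, I, M, F, V}
# for Phi is a common working definition, not taken from the paper.
BULKY_HYDROPHOBIC = "LIMFV"
MOTIF = re.compile(r"Y..[%s]" % BULKY_HYDROPHOBIC)

def find_yxxphi(sequence):
    """Return (position, motif) pairs for every YxxPhi match."""
    return [(m.start(), m.group()) for m in MOTIF.finditer(sequence)]

# Cytoplasmic tail of TGEV S: contains the YEPI motif
print(find_yxxphi("CLGSCCHSICSRRQFENYEPIEKVHVH"))   # [(17, 'YEPI')]
# Cytoplasmic tail of SARS-CoV S: no YxxPhi motif
print(find_yxxphi("GACSCGSCCKFDEDDSEPVLKGVKLHYT"))  # []
```

A scan like this reproduces the contrast drawn in the text: the TGEV tail carries a YxxΦ tetrapeptide while the SARS-CoV S tail does not.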
The YxxΦ motif has been implicated in directing protein localization to various intracellular compartments [19-21]. Furthermore, most YxxΦ motifs are capable of mediating rapid internalization from the plasma membrane into the endosomes. Interaction of the adaptor protein complex 2 (AP-2) with the YxxΦ motif in the cytoplasmic domain of an internalizing protein concentrates the protein in clathrin-coated vesicles, which then bud from the plasma membrane, resulting in internalization. However, the YxxΦ motif can also bind other adaptor protein complexes, such as AP-1, AP-3 and AP-4, and differential binding to the different adaptors determines the pathway of a cargo protein containing a particular YxxΦ motif [21]. Coincidentally, a YxxΦ motif in the cytoplasmic domain of 3a has previously been identified [13]. Furthermore, the juxtaposition of the YxxΦ motif and an ExD (diacidic) motif was found to be essential for the transport of 3a to the cell surface, consistent with the role of these motifs in the transport of other proteins to the plasma membrane [22]. 3a on the cell surface can also undergo internalization [13]. Analyzing the experimental results presented in these publications collectively, it is possible to postulate a functional role for the evolution of the SARS-CoV 3a protein. The SARS-CoV S protein lacks the YxxΦ motif, but it can bind to the 3a protein, which has internalization properties. In SARS-CoV-infected cells, S is rapidly transported to the cell surface. But if 3a is expressed in the same cell, it is also transported to the cell surface, where it can bind S. The interaction between 3a and S enables both proteins to become internalized, resulting in a decrease in the expression of S on the cell surface. Thus, this viral-viral interaction confers on SARS-CoV S the functional role played by the YxxΦ motif in other coronaviruses.
This hypothesis also implies that the precise mechanisms used by TGEV and SARS-CoV to reduce the surface expression of S are different, although in both cases the YxxΦ motifs are crucial. In TGEV, the YxxΦ motif in S causes it to be retained intracellularly, while in SARS-CoV, S that is transported to the cell surface becomes internalized again after it interacts with the 3a protein.

Testing the hypothesis

Using mammalian cell culture systems and biochemical methods, it will be possible to determine the exact effects of 3a on the trafficking properties of S. Mutagenesis studies can be used to map the protein domains that are important for the interaction between 3a and S, and for defining the manner by which 3a contributes to the reduction of cell surface expression of S. Given that a full-length infectious clone of SARS-CoV has been assembled [23], the use of reverse genetics would certainly reveal more about the interplay between 3a and S during SARS-CoV infection.

Implication of the hypothesis

This hypothesis, if proven, will indicate that the interaction between the SARS-CoV-unique 3a protein and S results in a reduction of S on the cell surface through the endocytotic properties of 3a [13]. During SARS-CoV infection, expression of S on the surface of an infected cell mediates fusion with uninfected neighboring cells, leading to syncytium formation. It follows that reducing the cell surface expression of S will delay this cell-damaging effect and prevent the premature release of unassembled viral RNA. It may also enhance virus packaging, as the assembly of coronaviruses appears to occur intracellularly, probably in the intermediate compartments between the endoplasmic reticulum and the Golgi apparatus [24]. Clearly, this has certain advantages for the virus at certain stages of its life cycle.
In addition, a reduction in the cell surface expression of S may also help the infected cell evade the host defense system and reduce the production of anti-S neutralizing antibodies. Conversely, host or viral factors that disrupt the interaction between S and 3a would favor the expression of S on the cell surface and enhance cell-cell fusion, a process that is important for viral spread. Table 1 compares the amino acid sequences of the cytoplasmic tails of the S proteins of different coronaviruses, including SARS-CoV, which is distantly related to the established group 2 coronaviruses [25], as well as two recently identified novel human coronaviruses, HCoV-NL63 [26] and HCoV-HKU1 [27]. The YxxΦ motifs are clearly present in all group 1 coronaviruses and also in IBV, which belongs to group 3. However, no YxxΦ motif is present in SARS-CoV or MHV, both group 2 coronaviruses. In addition, there is a YGGR motif in the S protein of RtCoV and YxxH motifs in the S proteins of the other group 2 coronaviruses, BCoV, HEV and HCoV-HKU1. However, these motifs may not be able to function as signaling motifs because neither R nor H is a hydrophobic amino acid. Therefore, HCoV-OC43 is the only one of these group 2 coronaviruses that encodes an S protein with a YxxΦ motif. It is still unclear how the localization of S is modulated in those viruses that lack YxxΦ motifs in their S proteins, and further studies will be needed to understand the different signaling pathways that are important for regulating the trafficking properties of S. Indeed, the dilysine endoplasmic reticulum retrieval signal in the cytoplasmic tail of IBV S, a different type of sorting signal from the YxxΦ motif, was reported to be important for intracellular retention of S [28].
Table 1. Amino acid sequences of the cytoplasmic tails of coronavirus spike (S) proteins, compared with the YxxΦ motifs (where x is any amino acid and Φ is an amino acid with a bulky hydrophobic side chain) found in the SARS-CoV 3a protein and in cellular proteins known to undergo endocytosis.

Protein | Amino acid sequence in the cytoplasmic tail^a
TGEV S^b | TM-CLGSCCHSICSRRQFEN YEPI EKVHVH
PRCoV S^b | TM-CLGSCCHSIFSRRQFEN YEPI EKVHVH
CCoV S^b | TM-CLGSCCHSICSRGQFES YEPI EKVHVH
FCoV S^b | TM-CLGSCCHSICSRRQFEN YEPI EKVHVH
PEDV S^b | TM-CCGACFSGCCRGPRLQP YEAF EKVHVQ
HCoV-229E S^b | TM-CFASSIRGCCESTKLP YYDV EKIHIQ
HCoV-NL63 S^b | TM-CLTSSMRGCCDCGSTKLP YYEF EKVHVQ
BCoV S^c | TM-ICGGCCDD YTGH QELVIKTSHDD
HCoV-OC43 S^c | TM-KCGGCCDD YTGY QELVIKTSHDD
HEV S^c | TM-KCGGCCDD YTGH QEFVIKTSHDD
MHV S^c | TM-KKCGNCCDECGGHQDSIVIHNISSHED
RtCoV S^c | TM-KCGNCCDE YGGR QAGIVIHNISSHED
HCoV-HKU1 S^c | TM-KCHNCCDE YGGH HDFVIKTSHDD
SARS-CoV S^c | TM-GACSCGSCCKFDEDDSEPVLKGVKLHYT
IBV S^d | TM-KKSS YYTT FDNDVVTEQYRPKKSV
SARS-CoV 3a^e | TM-38aa- YNSV TDTIVVTEGD-101aa
TfR^e | 19aa- YTRF SLARQVDGDNSHV-26aa-TM
LDLR (proximal)^e | TM-17aa- YQKT TEDEVHICH-20aa
LDLR (distal)^e | TM-34aa- YSYP SRQMVSLEDDVA
CD-M6PR^e | TM-34aa- YRGV GDDGLGEESEERDDHLLPM
ASGPR^e | MTKE YQDL QHLDNEES-24aa

a Sequences were obtained from the National Center for Biotechnology Information (NCBI). Yxxx tetrapeptides are set off by spaces. Abbreviations: TM, transmembrane domain; aa, amino acids.
b S proteins of group 1 coronaviruses: TGEV, transmissible gastroenteritis virus (AJ271965); PRCoV, porcine respiratory coronavirus (Z24675); CCoV, canine coronavirus (D13096); FCoV, feline coronavirus (AY204704); PEDV, porcine epidemic diarrhea virus (AF353511); HCoV-229E, human coronavirus 229E (AF304460); HCoV-NL63, human coronavirus NL63 (AY518894).
c S proteins of group 2 coronaviruses: BCoV, bovine coronavirus (AF220295); HCoV-OC43, human coronavirus OC43 (AY585228); HEV, porcine hemagglutinating encephalomyelitis virus (AY078417); MHV, murine hepatitis virus (AF201929); RtCoV, rat coronavirus (AF207551); HCoV-HKU1, human coronavirus HKU1 (AY597011); SARS-CoV, SARS coronavirus (AY283798).
d S protein of a group 3 coronavirus: IBV, infectious bronchitis virus (M95169).
e SARS-CoV 3a protein (AY283798) and cellular proteins that are known to undergo endocytosis. Abbreviations: TfR, transferrin receptor (P02786); LDLR, low-density lipoprotein receptor (P01130); CD-M6PR, cation-dependent mannose 6-phosphate receptor (P24668); ASGPR, asialoglycoprotein receptor (P07306).

It therefore appears that the cell surface expression of the S protein of SARS-CoV can be reduced as in other coronaviruses, but the mechanism may be different. The trafficking of SARS-CoV S may be mediated through two separate viral proteins, expressed from separate subgenomic RNAs, and regulated by numerous complex cellular processes, including the efficiency of transcription and translation, post-translational modification and stability of the viral proteins, as well as their interactions with host factors. Indeed, it will be crucial to determine how this unique pathway benefits replication of the SARS-CoV. It is also interesting to note that sequence comparison of isolates from different clusters of infection showed that both S and 3a were under positive selection during virus evolution [29, 30], implying that these proteins play important roles in the virus life cycle and/or disease development; this is consistent with the proposal that 3a has evolved to modulate the trafficking properties of the spike protein.

Competing interests

The author(s) declare that they have no competing interests.

Author's contributions

Yee-Joo Tan is responsible for the entire manuscript.
524257 | Paying Attention to Memory | null | If you could peer inside someone else's head, you'd see a scrunched-up gelatinous mass of tissue, weighing roughly a kilogram, homogeneous to the naked eye—in other words, a brain. The seeming uniformity of the overlying cerebral cortex, which has so outstripped other parts of the brain over the course of evolution that it makes up more than 80% of the brain, is belied by centuries of painstaking neuroscience. Some of the most compelling early evidence that parts of the cortex are specialized in their duties came from gunshot wounds during the First World War. For instance, bullets lodged in the back of the brain disrupted sight in discrete portions of the visual scene, prompting insights into the localization and function of visual cortex. The study of the front of the brain has a similar history of injury leading to insight. Phineas Gage, a railroad worker, had a 3.5-foot-long tamping iron blow straight through his frontal lobes and turned from a responsible, mild-mannered geek into an unruly exhibitionist overnight. Parts of the prefrontal cortex that he damaged have since been much studied for their involvement in motivation and emotional control. More recent work has implicated other parts of the prefrontal cortex in working memory. Working memory is famously illustrated by your ability to temporarily remember a seven-digit telephone number, roughly the amount of information that you can store online in working memory for the duration of a task like phoning for a pizza. Monkeys can be trained to remember information much like you remember a phone number, and then use the memory to gain a reward (usually juice rather than pizza). They can learn to remember the specific location of a briefly flashed target on a screen and then, when cued, make an eye movement to look directly at that location.
Previous research has shown that neurons in the prefrontal cortex maintain high rates of activity while monkeys remember the target location, and gradually the idea that the prefrontal cortex specializes in maintaining these transient memories has risen to dominance over other ideas about its functions. In this issue of PLoS Biology , Mikhail Lebedev and his colleagues challenge this prevailing view with evidence that most prefrontal cortex neurons may not be so closely tied to working memory after all. As in previous research, they also trained monkeys to make an eye movement to a remembered target, but instead of only seeing one target, the monkeys saw two potential target locations during the course of the task. The monkeys had to pay attention to one of the potential targets, but this was not necessarily the one they would have responded to and was not the one they had to remember. To perform the task successfully, the animals had to engage their working memory, but most of the neurons the researchers recorded increased their activity selectively to the target that was the focus of attention. Despite decades of research, the degree to which one region of the brain can be thought of as dedicated exclusively to a particular function is still much debated. These results do not refute the idea that the prefrontal cortex plays an important role in working memory. However, the authors suggest that this area may be more important in focusing the attention needed to remember that phone number, rather than actually holding that number in your mind. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC524257.xml |
554983 | Review of "Dynamics of the Vascular System" by John K-J Li. Series on Bioengineering & Biomedical Engineering – Vol. 1 | null | "Dynamics of the Vascular System" is a new book by one of the world's greatest experts on bioengineering aspects of hemodynamics. It provides an excellent elementary introduction to this topic. Following an illustrative historical introduction, the author briefly reviews vascular physiology. This is followed by the basics of fluid mechanics as an introduction to the hemodynamics of large arteries. A dedicated chapter illuminating the dynamic consequences of vascular branching follows this chapter. This is the field to which Professor Li has made his substantial contributions. The following chapters cover the venous system and microcirculation. Finally, the book reviews measuring techniques used to study hemodynamic behavior. The author suggests this volume to be "a companion" to his own treatise "The Arterial Circulation" [ 1 ]. For those familiar with the latter book, I must compare the two. While the new book adds useful information on the venous system and on microcirculation, a topic that has been neglected in classical treatments of hemodynamics, this new volume is substantially less comprehensive in most topics covered in both. Also, its index is, regretfully, substantially less detailed. Little new has been published in this field since the first book was published in 2000, with the notable exception of Zamir's book "The Physics of Pulsatile Flow" [ 2 ]. The reader is advised, therefore, to consider this book as a supplement rather than a companion to the former. In brief, this book, which is more affordable than its predecessor, should be regarded as a good introduction to the topic, to be used primarily by bioengineering students, rather than an updated authoritative text by its erudite author. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC554983.xml |
547914 | A methodology for distinguishing divergent cell fates within a common progenitor population: adenoma- and neuroendocrine-like cells are confounders of rat ileal epithelial cell (IEC-18) culture | Background IEC-18 cells are a non-transformed, immortal cell line derived from juvenile rat ileal crypt cells. They may have experimental advantages over tumor-derived gastrointestinal lineages, including preservation of phenotype, normal endocrine responses and retention of differentiation potential. However, their proclivity for spontaneous differentiation/transformation may be stereotypical and could represent a more profound experimental confounder than previously realized. We hypothesized that IEC-18 cells spontaneously diverge towards a uniform mixture of epigenetic fates, with corresponding phenotypes, rather than persist as a single progenitor lineage. Results IEC-18 cells were cultured for 72 hours in serum-free media (SFM), with and without various insulin-like growth factor agonists to differentially boost the basal rate of proliferation. A strategy was employed to identify constitutive genes as markers of divergent fates through gene array analysis by cross-referencing fold-change trends for individual genes against crypt cell abundance in each treatment. We then confirmed the cell-specific phenotype by immunolocalization of proteins corresponding to those genes. The majority of IEC-18 cells in SFM alone had a loss in expression of the adenomatous polyposis coli (APC) gene at the mRNA and protein levels, consistent with adenoma-like transformation. In addition, a small subset of cells expressed the serotonin receptor 2A gene and had neuroendocrine-like morphology. Conclusions IEC-18 cells commonly undergo a change in cell fate prior to reaching confluence. The most common fate switch that we were able to detect correlates with a down regulation of the APC gene and transformation into an adenoma-like phenotype.
| Background In the last quarter century, more than five hundred published manuscripts contained experiments using the popular rat crypt cell lines IEC-6 and IEC-18 originally established by Quaroni and Isselbacher in 1978 and 1981 [ 1 , 2 ]. These lineages have stood the test of time as being two of the most reliable and phenotypically well-preserved gastrointestinal cell lines to date. In our lab, we utilize the IEC-18 lineage because it is more robust in serum-restricted conditions and, in our experience, it is more suitable for correlation with distal intestinal crypt and enterocyte physiology than cell lineages derived from colonic tumors. One of the more exciting aspects of IEC-18 cells is their capacity to retain the crypt cell phenotype as well as the continued potential for differentiation/maturation into enterocytes [ 3 - 5 ]. However, as we have worked with this lineage over the years, the rare but unmistakable morphology of an occasional neuroendocrine-like cell, the suspicious frothiness of a possible Paneth-like cell and the rare giant vacuole of a goblet cell precursor have prompted us to wonder how prevalent spontaneous differentiation towards other lineages and/or transformants might be (especially in serum-restricted conditions). This is a fairly crucial experimental question because serum-restricted conditions allow the tightest experimental rigor but may drive IECs towards altered cell fates, potentially introducing uncontrolled experimental confounders. Until recently, we did not have the necessary tools with which to determine more subtle epigenetic divergences in IECs. Our current investigation uses a novel strategy to demonstrate that spontaneous cell fate divergence is not only highly prevalent in IEC-18 cells, but it is fairly uniform and therefore a predictable confounder for gene array studies in this cell line.
Results Initial characterization of IEC-18 culture heterotypy using immunolocalization We have recently discovered a novel antigen localization pattern by using anti-carboxyl IGF binding protein-2 (IGFBP-2) antibody (to an antigen which we call C2) to demonstrate a multivesicular pattern in some IEC-18 cells but not in others that are immediately adjacent (illustrated in Figure 1A ). We note that anti-amino IGFBP-2 showed no staining in IEC-18 cells and that pretreatment with synthetic antigen abolished C2 staining (data not shown), confirming that C2 is a carboxyl fragment of IGFBP-2. For the purpose of this study, we wanted to determine whether or not this heterotypy represented cell fate divergence or systemic pleiomorphism. Figure 1 Immuno-characterization of IEC-18 cell heterotypy. A : Carboxyl IGFBP-2 immunostaining of the sequestered C2 fragment in cells with a preserved crypt cell phenotype but not in others. B : F-actin immunostaining (performed without antigen retrieval to visualize dynamic actin filaments) demonstrates intense staining in crypt cells but not in others. C : Cells plated at variable density with 10% FBS start out as weakly C2 positive ( i ) but have a progressive loss of C2 immunostaining prior to confluence ( ii-iv ). D : Immunolocalization of the C2 antigen demonstrates that both C2 positive ( i ) and C2 negative ( ii ) cells preserve their phenotypes during proliferation. E-F : Control immunolocalization using the IGF type 2 receptor (E) as a prelysosome-localized antigen and villin (F) as a cytoplasmic-localized antigen to demonstrate that both intravesicular and cytoplasmic antigens can be evenly detected throughout IEC culture. To further characterize the observed heterotypy, we examined IEC-18 cells with anti-actin antibody, without performing antigen retrieval, and allowed the detection reaction to proceed until only a limited amount of staining was seen.
When used with the right antibody, this technique is a means to visualize dynamic portions of actin filaments, such as stress fibers and the proximal ends of microvilli, because they lack the actin-binding proteins that obscure the antigen. If the reaction is allowed to continue, eventually all actin filaments will stain. In the version that we use, the assay is a qualitative assessment of cytoskeletal turnover when we are comparing two adjacent cells (because both cells had exactly the same conditions for detection of actin). Both stress fibers and microvilli are heavily stained and appear to have high turnover rates in cells that have C2 staining, but not in those without it (Figure 1B ). We note that the punctate dots of filamentous-actin corresponded with small microvilli in C2 positive cells and that the C2 negative cells had a paucity of microvilli on their surface – which was confirmed by focusing up and down the shafts of the microvilli (data not shown). IEC-18 cells spontaneously lose C2 staining prior to confluence, despite 10% FBS When IEC-18 cells were plated at variable densities in 10% fetal bovine serum (FBS), we saw that all crypt cells were initially positive for C2 staining, but with increasing density there were increasing numbers of C2 negative cells (Figure 1C ). We also saw that C2 positive cells had increased staining intensity as they approached confluence. This experiment demonstrates two important points: first, that loss of C2 staining begins before confluence and second, that it can occur with comparable prevalence in the presence of FBS as it does in serum-free media (SFM).
C2 positive cells and C2 negative cells are both capable of phenotype preservation during proliferation To determine if the C2 positive and negative IEC-18 cells were each capable of phenotype preservation during proliferation, IGFBP-2 stained cells on coverslips were searched for mitotic cells in the stage of cytokinesis and their images were captured by digital photomicroscopy (n = 6 cell pairs for each phenotype). In each case, all daughter cells had the same phenotype as their sister, confirming that both the C2 positive and C2 negative phenotypes are conserved in subsequent cycles of mitosis (representative examples in Figure 1D ). C2 positive cell abundance is increased in proportion to the efficacy of IGF agonist treatment Because IEC-18 cells grow best in the presence of high dose insulin, we suspected that crypt cell proliferation was dependent upon IGF receptor stimulation. In general, differentiated and benignly transformed epithelial cells are less likely to proliferate upon reaching confluence, so we sought to preferentially drive crypt cell proliferation in a graded fashion using different IGF receptor agonists (Figure 2 ). IGF-II analog, a weak agonist, reduced the crypt cell abundance, while NBI 31772 (an agent that displaces IGFs from IGFBPs) increased it significantly and R 3 -IGF-I doubled the number of crypt cells per 10X field (also highly significant); none of the treatments significantly altered the abundance of C2 negative cells. This strategy allowed us to systematically skew the cell composition and use gene array analysis to determine whether C2 positive cells were epigenetically divergent. Figure 2 Mean number of C2 positive and C2 negative cells for each treatment condition. C2 positive cells are boxed in gray and C2 negative cells in stripes.
Note that IGF-II analog reduces C2 positive cells when compared to SFM, whereas both NBI 31772 and R 3 -IGF-I significantly increase C2 positive and total cell abundance when compared to SFM (*** = p < 0.01, ** = p < 0.05, * = p < 0.1). In contrast, no treatment significantly altered C2 negative cell abundance. Our gene array methodology identified four candidate genes as potential markers of cell fate divergence in IEC-18 culture Eleven genes met our criteria for a significant fold change: four were positively correlated with crypt cell abundance and seven were inversely correlated (Figure 3 ). Of these first-pass candidates, only six showed the appropriate fold trends across all treatment conditions, consistent with our hypothesis of constitutive expression that could reflect divergent cell fates. Of these six, one was found to have a significant difference between IGF-II analog and SFM, suggesting a direct treatment effect by IGF agonists but not by NBI 31772 (dithiolethione-inducible gene 1). The enzymatic glycosylation-regulating gene is known to be an insulin-responsive gene and was also excluded because of the high probability of a direct effect by our IGF agonists [ 6 ]. Of the four remaining candidates, one (brain acyl CoA hydrolase) had absolute values that bordered on background levels in the SFM, IGF-II analog and NBI 31772 treatment conditions (defined as 10 arbitrary fluorescent units) and has had a relatively limited characterization in the literature [ 7 - 9 ]. We saw no obvious means for obtaining or generating an antibody to it and thought it unlikely to be a robust marker at the protein level, in large part because its message has only been found in brain thus far. Another had high homology with the EGF family and is currently a predicted protein based on genomic sequence [ 10 ]. However, the two remaining candidate genes we pulled out encoded well-characterized gut-related proteins (APC and 5-HT2A).
Figure 3 Gene array screen using skewed IEC-18 cell composition to find potential phenotypic markers. Affymetrix rat gene chips were used to compare RNA from SFM and R 3 -IGF-I treated cells to find individual genes with significant fold changes – defined as greater than two fold more (A) or two fold less (B) with a p-value of less than 0.1 for the purpose of this screen. These were then compared with the fold changes from IGF-II analog and NBI 31772 to look for fold change trends that paralleled the changing cell composition (summarized in C as the percentage of crypt cells). The results are presented as upward arrows (strong positive correlations), upward dashed arrows (weak positive correlations), downward arrows (strong inverse correlations) and downward dashed arrows (weak inverse correlations). The large asterisk points to an example where both direct receptor agonists resulted in a significant difference but NBI 31772 did not, suggesting a direct drug effect rather than an effect of cell composition – making this gene a less likely candidate. For smaller asterisks, *** = p < 0.01, ** = p < 0.05, and * = p < 0.1. Immunolocalization for 5-HT2A and APC revealed 2 divergent phenotypes within IEC-18 cell culture Western blots using antibodies against APC and 5-HT2A in IEC-18 cell lysates revealed staining of the appropriately sized band for each (Figure 4 ). In the case of APC, there were several discrete smaller bands which were inversely proportional in abundance to the 300 kD full-sized protein (consistent with proteolytic fragments) when compared across multiple samples (data not shown). In the case of 5-HT2A, there was a single 28 kD band that was faint, suggesting relatively low abundance. Both antibodies were deemed suitable for immunolocalization. Immunolocalization for APC revealed that C2 positive cells also had intense APC staining whereas C2 negative cells either had limited or no APC staining (Figure 5A ).
This finding is in keeping with our gene array analysis and suggests that there is substantial and widespread divergence of IEC-18 crypt cells away from the crypt cell phenotype and towards an adenoma-like transformation. Figure 5 Immunolocalization of APC and 5-HT2A in IEC-18 culture. A : APC immunostaining parallels that of C2 and is found in crypt cell strands but is absent or scant in adjacent cells. B-D : 5-HT2A immunostaining is absent in the majority of cells but is found in a few rare cells. On closer inspection, we found that there seemed to be a progressive increase in staining intensity that correlated with a morphologic transition away from IEC-18 cell morphology and towards that of a neuroendocrine-like cell (illustrated by the numbered circles). Immunolocalization for 5-HT2A demonstrated a second cell phenotype (Figure 5B,C and 5D ). We had previously noted rare neuroendocrine-like morphology characterized by spindle-shaped cells with bipolar, dendritic arbors (unpublished findings); however, with 5-HT2A immunostaining we found a mean of 5 positively stained cells per coverslip of confluent cells (n = 6 coverslips of confluent cells in SFM). 5-HT2A is present in high abundance within Paneth and neuroendocrine cell types but is absent within small intestine crypt cells in vivo [ 11 ]. The cells that we observed appeared to be in transition, going from IECs to neuroendocrine-like cell morphology – with corresponding increases in staining intensity. While still a very low prevalence, the staining was quite intense in this small subset and was not detected in any of the adjacent cells. Double labeling confirms that C2 negative cells have diminished APC abundance Double immunolabeling with overlay technique (using C2 and actin antibodies) demonstrated that C2 negative cells have a paucity of microvilli in comparison to C2 positive cells (Figure 6A ).
Additionally, double labeling with C2 and APC demonstrated that C2 positive cells have uniform APC staining whereas C2 negative cells have variable and overall diminished staining in comparison (Figure 6B ). These experiments provide an objective demonstration that C2 positive cells retain the IEC phenotype whereas C2 negative cells are undergoing transformation (which is an obligatory step associated with the loss of APC). Figure 6 Double labeling overlay immunolocalization of C2 with either actin or APC. A i . C2 immunostaining of wet-prepped cells. A ii . C2 immunostaining in the same cells, overlaid with f-actin immunostaining. The proximal cores of actin-stained microvilli bundles located on C2 positive cells are encircled, whereas there are either no or few microvilli on C2 negative cells. B i . C2 immunostaining of wet-prepped cells. Bii . C2 immunostaining in the same cells, overlaid with APC immunostaining. The dotted line divides C2 positive cells (below) from C2 negative cells (above). There is consistent APC staining in C2 positive cells, whereas there is variable and comparably reduced APC staining in C2 negative cells. Discussion In this manuscript we have taken advantage of an IEC immunolocalization marker that we call C2 to demonstrate two forms of cell fate divergence within IEC-18 culture. Using a combination of gene array screening and immunolocalization, we found that C2 is lost in over half the cells by the time of confluence and its loss is also associated with a down regulation of the APC gene, decreased APC protein abundance, decreased actin filament turnover, and reduced microvillar density. In short, there is an adenoma-like phenotype that fits with this genotype and this fits well with what is known about the function of the APC protein [ 12 - 15 ]. In addition, another cell genotype-phenotype correlate was detected by screening out the 5-HT2A gene and visualizing the cells that express its protein, i.e. neuroendocrine-like cells.
These findings also fit well with cell phenotypes known to express 5-HT2A in the gut endoderm [ 11 , 16 , 17 ] and strongly argue against the dogma that IECs persist as a single lineage prior to reaching confluence. While we think our findings have important implications for the existing IEC literature, the more important aspect of this manuscript may be in the methodology we have piloted. In many ways, cell culture, whether primary or immortalized, transformed or not, is by definition a model in flux. The progenitor lineage that a researcher starts with is rarely the hodgepodge they end up with after a limited number of passages and it is common, if not expected, that epigenetic drift will occur with each cell culture passage. However, what we describe in this manuscript is different in that IEC-18 cells are displaying a uniform trend in cell fate divergence – a trend that can be modulated with IGF. Many if not most gut epithelial cell lines are IGF (or high dose insulin) dependent for proliferation and thus are potentially vulnerable to this biological confounder. It will remain to be seen if other epithelial cell types have similar behaviors when examined in this fashion. Conversely, there is also a positive light to our findings; IEC-18 cells could be a compelling model for spontaneous adenomatous transformation because these adenoma-like cells are arising from a genetically competent progenitor prior to reaching confluence. To our knowledge, no such model with this property has been previously defined. Our study has notable weaknesses and strengths. First, we have identified two divergent cell fates but have only partially characterized them because we were focused on developing a viable screening methodology (hence the ubiquitous use of the word "like" in this manuscript). Second, we are using a rat gene array chip that has approximately 9000 non-EST genes per chip.
This is not an exhaustive survey of the rat genome and it is possible that there are other cell fates present in IEC-18 culture that we did not detect. Third, our phenotype assays are based on immunolocalization, which is a semi-quantitative technique with regard to assessing protein abundance. However, in this case we are actually combining cell-to-cell differences in protein abundance with distinguishing morphologic characteristics (e.g. loss of microvilli, flattening, bipolar shape, dendritic arbors, etc.) to delineate the phenotypes between adjacent cells. In short, what we are quantifying, in the case of adenoma-like cells, is the percentage of cells with a given phenotype. For this purpose, blinded immunolocalization is simple, quantitative and exceedingly efficient. The combined methodologies we chose result in a highly accurate technique for assessing divergence and their specificity can be bolstered by comparative studies of co-divergent markers (as we demonstrated with C2 and APC). As for other positives, the methodology is relatively rapid and can detect low-prevalence phenotypes (as demonstrated by the anti-5-HT2A antibody). Additionally, we have demonstrated that a transcriptional marker is not required to create an effective screen. What is required, and what should probably prompt a researcher to employ this methodology, is a probe, a phenotype, or a pleiotropism that results in a consistently heterogeneous and quantifiable pattern (as C2 proved to be for us). In closing, we point out one last caveat. Gene array investigation is an evolving science but there remain three potential pitfalls for every new application: experimental design flaws, data integrity issues and biological misassumptions [ 18 - 20 ]. In this study, we used a well-accepted screening principle (i.e.
significant fold changes within individual genes in response to a treatment); we included a paired reference standard for each treatment condition (the SFM control); we increased the screening stringency by adding a requirement for parity in fold trend in accordance with changing cell compositions; and then confirmed our findings by phenotype assays. However, we demonstrated that a small minority of neuroendocrine-like cells were still able to significantly alter the outcome of our gene array screen (a possibility that we had thought to be remote, given our assay's stringency). We conclude that even low-frequency epigenetic events can be a serious biologic confounder of gene array studies in cell culture. Conclusions We have demonstrated a novel methodology for detecting and characterizing cell fate divergence in cell cultures derived from a common progenitor. The majority of IEC-18 cells are transformed into adenoma-like cells in SFM. IGF agonists reduce the rate of transformation by driving proliferation of the progenitor phenotype but do not prevent it. We also detected a very low incidence of differentiation toward a neuroendocrine-like cell type in these same cultures. Methods Cell culture Rat ileal epithelial cells (IEC-18 – American Type Culture Collection, Rockville, MD) from aliquots of passage numbers 6–8 were grown to confluence in DMEM with 10% fetal bovine serum (FBS) and 0.1% insulin. The time period for complete epithelial cell confluence was 24–48 hours. Confluent cells were incubated for 72 hours in DMEM to establish a serum free period either alone or with 10 -6 M NBI 31772 (Calbiochem, San Diego, CA [ 21 ]), 0.5 × 10 -6 M IGF-II analog, or 0.5 × 10 -6 M R 3 -IGF-I (Sigma, St. Louis, MO), as a means of boosting crypt cell proliferation. We point out that crypt cells can be driven to proliferate despite confluence whereas more differentiated epithelial cell types are resistant to IGF-driven proliferation upon reaching confluence.
Alternatively, IEC-18 cells were diluted to 1/8, 1/4 and 1/2 the original seeding density and plated in DMEM with 10% FBS for 24 hours as a means to evaluate cell fate divergence in the presence of serum. Western blots Cell lysate samples were collected from 150 mm dishes, washed with PBS and recovered in 700 uL of lysis buffer by scraping the dish. Cells were spun down for 10 minutes in a pre-chilled centrifuge. The supernatant was diluted 4:1 with loading buffer and run on 12 or 15% acrylamide gels. After the gel had been transferred onto a nitrocellulose membrane, 0.5% Ponceau S stain was applied to confirm equitable protein transfer across all lanes. Western blot membranes were incubated in 5% milk block for 1 hour at RT. Goat polyclonal primary antibodies for APC and 5-HT2A were applied for 1 hour at RT. Primary antibody was washed off with TBS-Tween and secondary antibody was placed on the membranes for one hour, then washed again. ABC (Elite Series, Vector Labs) was placed on the membrane for an hour and then washed and visualized with SuperSignal chemiluminescent substrate (Pierce, Rockford, IL); each was prepared per the manufacturer's instructions. Gene array analysis IEC-18 cells were treated and processed as described in Cell Culture methods and then utilized to screen for specific cell fate markers. Total RNA was harvested for each treatment condition in triplicate experiments and then used for gene chip analysis per the manufacturer's protocol (rat gene chip # 230A, Affymetrix, Santa Clara, CA). All available non-EST tags were searched within the R 3 -IGF-I data set and those with significantly different fold changes when compared to SFM cells were selected as potential candidates for further analysis (for the purpose of a first pass screen, a significant fold change was defined as a p value < 0.1 and a greater than two fold increase or decrease).
To further refine the list of potential candidates, the fold changes for NBI 31772 and IGF-II analog, which had step-wise reductions in effects upon crypt cell proliferation, were used to determine if there was a positive or inverse trend between a selected gene's mRNA abundance and crypt cell abundance. In this way, we sought to select constitutively expressed genes, whose primary differences in abundance would be due to differences in cell composition. Our statistical test for determining significance was a two-tailed paired t-test (provided as part of the standard analysis by the University of Virginia Biomolecular Research Facility and the Dept. of Health Evaluation Sciences [ 22 ]). Immunolocalization experiments Immunolocalization in IEC-18 cells was performed for IGFBP-2 (C2), f-actin, IGF receptor type 2 and villin as well as two proteins that were screened out by our gene array analysis (APC and 5-HT2A) in a minimum of three separate experiments for each. In brief, the cells were fixed in formalin overnight, washed in DIG buffer (4% 1M Tris Base, 6% 5M NaCl, 16% 1M Tris HCl) 5 times then put in blocking solution (1% wt/vol BSA in DIG buffer) for 1 hour at room temperature. This was followed by a one-hour incubation with goat polyclonal primary antibody (all were obtained from Santa Cruz Biotechnology, CA; specifically for the anti-actin antibody the product number was sc-1615) in antibody diluent solution (5% 1M Tris HCl, 1% 1 M Tris Base, 9% NaCl, 3.3% Triton-X 100). Primary antibody was washed off with DIG × 5 and secondary antibody (donkey anti-goat [Jackson Immunoresearch Labs, West Grove, PA]) was placed on the slides for one hour. ABC (Elite Series, Vector Labs) was prepared and applied per the manufacturer's instructions, washed as above and visualized with DAB solution (Sigma, St. Louis, MO) in parallel. The slides were counterstained with hematoxylin, cover-slipped, photomicrographed and representative images were chosen for publication. 
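The two-stage gene array screen described in the Gene array analysis section above (a first-pass fold-change filter against the SFM control, then a check that each gene's fold changes track crypt cell abundance in the same direction across the graded IGF treatments) can be sketched in code. This is an illustrative reconstruction, not the authors' actual analysis pipeline; the function names, example fold values, and abundance deltas below are hypothetical.

```python
import math

def first_pass(fold_change, p_value, fold_cutoff=2.0, alpha=0.1):
    """First-pass screen: keep genes changed more than two-fold
    (up or down) versus the SFM control with p < 0.1."""
    big_change = fold_change >= fold_cutoff or fold_change <= 1.0 / fold_cutoff
    return p_value < alpha and big_change

def trend_consistent(fold_changes, abundance_deltas):
    """Second pass: require the gene's log-fold changes to correlate with
    crypt cell abundance in the same direction (all positive or all
    inverse) across the graded treatments (IGF-II analog, NBI 31772,
    R3-IGF-I, in order of increasing proliferative effect).

    fold_changes:     fold versus SFM for each treatment
    abundance_deltas: change in crypt cell count versus SFM per treatment
    """
    signs = [math.copysign(1.0, math.log(f)) * math.copysign(1.0, d)
             for f, d in zip(fold_changes, abundance_deltas)]
    return all(s > 0 for s in signs) or all(s < 0 for s in signs)

# Hypothetical gene: down with IGF-II analog, up with NBI 31772 and
# R3-IGF-I, matching the crypt-cell trend, so it survives both passes.
candidate = first_pass(2.4, 0.03) and trend_consistent([0.7, 1.8, 2.4],
                                                       [-8, 12, 25])
```

A gene whose fold changes follow the direct receptor agonists but not NBI 31772 breaks the trend and is excluded as a probable direct drug effect, which mirrors how the authors treated dithiolethione-inducible gene 1.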
With respect to the actin visualization, detection was performed such that colored precipitate was observed under the microscope and stopped when stress fibers and microvillus cores were evident and before the high background of the total cell f-actin began to stain efficiently. In the crypt cell quantification experiments, six 10X images were taken at random, printed on color paper at maximum size, blinded, and all cells were assessed as C2 positive or C2 negative and their means and standard deviations calculated for each treatment condition. C2 double labeling immunolocalization experiments To objectively test our observations that C2 positive cells have a distinctive colocalization pattern when compared to C2 negative cells, C2 double labeling experiments with f-actin and with APC were performed as overlay experiments using digital microscopy. In brief, C2 immunolocalization was performed as described above except that a grid was drawn on the wet mount slide, allowing digital photomicrographs to be taken at 40X of C2 staining while still covered with stop solution and then a second antibody, either for f-actin or APC was applied and the same process repeated before being counter-stained with hematoxylin and cover-slipped. The same exact images were then re-mapped using the grid and the stored digital images were used for precise position verification. The overlaid double-labeled image was then taken and contrasted to the original image to allow comparison of differential localization of the two antigens. 
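The blinded quantification step above (six random 10X fields per treatment, each scored for C2 positive and C2 negative cells, then summarized as mean and standard deviation) amounts to a simple per-condition summary. A minimal sketch follows; the counts are made up for demonstration and are not the study's data.

```python
from statistics import mean, stdev

def summarize_fields(fields):
    """fields: one (c2_positive, c2_negative) count pair per blinded
    10X field. Returns (mean, standard deviation) for each phenotype."""
    pos = [p for p, _ in fields]
    neg = [n for _, n in fields]
    return {"C2+": (mean(pos), stdev(pos)),
            "C2-": (mean(neg), stdev(neg))}

# Hypothetical counts for one treatment condition (six fields):
sfm_fields = [(42, 58), (39, 61), (45, 55), (41, 60), (38, 57), (44, 62)]
summary = summarize_fields(sfm_fields)
```

Comparing such summaries across SFM, IGF-II analog, NBI 31772 and R 3 -IGF-I conditions gives the treatment means plotted in Figure 2.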
Abbreviations IGF , insulin-like growth factor; IGFBP-2 , IGF binding protein-2; IGF-II analog , synthetic truncated IGF-II; R 3 - IGF-I , synthetic long arginine IGF-I; NBI 31772 , an alpha-numeric designation for a non-biologic compound that best displaced IGFs from IGF binding proteins in a large bioassay screen; SFM , serum-free media; APC , adenomatous polyposis coli; 5-HT2A , serotonin receptor 2A; C2 , IGFBP-2 carboxyl fragment Authors' contributions PG designed the study, participated in the immunolocalization studies, and performed data analyses. JP carried out the immunolocalization, cell count, and Western blot studies. NF maintained the IEC cell lines and expertly provided uniformly confluent cells. PG and JP produced the figures and drafted the manuscript. All authors read and approved the final manuscript. Figure 4 Western blots of APC and 5-HT2A in IEC-18 cell lysates. Visualization of the appropriately sized band (and breakdown products) for APC is shown in lane A and 5-HT2A is shown in lane B. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC547914.xml |
Help Wanted: Science Manager (PMC544548)

The recently created professional science master's degree may be the answer to the increasing need for science-savvy employees in the business world.

“I didn't want to be just another MBA,” says Pascal Herzer, one of the first recipients of a new graduate credential known as the professional science master's, or PSM. “Not many people have the ability to understand science and business, and [the PSM] program was designed for that very purpose.” PSMs are two-year American master's degrees financed in large part by the Alfred P. Sloan Foundation to cultivate science managers. Sloan's ultimate goal is to make science careers more attractive to talented young people like Herzer, a 2003 PSM graduate in Applied Biosciences from the University of Arizona, who believes his PSM makes him more marketable to science-based businesses. “I am at the true junction of science and business,” he says.

The Missing Degree

Fortunately for Herzer, the business of science is booming. Jobs for scientists and engineers have grown four times faster than the United States national average since 1980, and should outpace the market until at least 2010. Surprisingly to many academics, most of these jobs are in industry. In 1999, the last year with complete data, two out of three employed science and engineering (S&E) graduates worked in industry, including the great majority of bachelor's and master's degree holders, and 40% of doctorates. In other words, industry, not academe, now drives American S&E employment, and will for the near future. Like academia, industry needs scientifically literate personnel; unlike academia, industry wants employees with business savvy as well. However, in the past, graduate students received either science or business instruction, not both. “Industry simply hired regular master's-degreed people, or MBAs, or more likely PhDs, and just expected them to learn their weaknesses on the job,” says Eleanor L.
Babco, Executive Director of the Commission on Professionals in Science and Technology, a nonprofit corporation with funding from the Sloan Foundation to assess PSM graduates. For science-based businesses, then, the American S&E doctorate—viewed by many as the worldwide gold standard for science education—is too specialized for their needs (see Box 1 ). But a master's degree may be just right. Box 1. Is There a Doctorate in the House? The length of time to obtain a biological science doctorate has increased… …The number of postdocs and part-time faculty in the biological sciences is increasing… …And the proportion of doctorates in academia is decreasing… (Statistics taken from the National Science Board's “Science and Engineering Indicators 2004” [ http://www.nsf.gov/sbe/srs/seind04/ ] and the National Science Foundation Science Resources Statistics Division's “1995 and 2001 Survey of Doctorate Recipients” [ http://www.nsf.gov/sbe/srs/infbrief/nsf04328/table1.xls ].) Bridging the Gap During the 20th century, the master's degree evolved as a professional credential in many fields, including business, education, and social work, and more recently, pharmacy, physical therapy, and accounting. In the 1990s, non-incidental master's in the sciences—in other words, intentionally terminal degrees, not consolation prizes for failing out of graduate school—crept into engineering and applied mathematics, too, as companies grew more reliant on computational analysis and hired accordingly. From 1981 to 2000, for example, the number of earned master's degrees in mathematics and computer science more than doubled. With hopes of spurring a “significant movement,” in 1997 the Sloan Foundation bet big on professional master's degrees, eventually spending $11 million on almost 100 programs across the US. 
Sloan Foundation–backed PSM programs now operate at 45 universities in 20 states, in such fields as microbial biotechnology and applied genomics; similar programs have also developed independently of the Sloan Foundation, such as the Master of Science in Bioinformatics at Johns Hopkins. And while most PSM-style programs are currently in the US, this may soon change: the 1999 “Bologna Agreement” requires all European Union universities to adopt uniform undergraduate and graduate degrees “relevant to the European labour market”; so master's-level industry-centric degrees are sure to follow. At Leiden University in the Netherlands, for example, students can now add a “science-based business” focus to any research master of science (MSc) program. Like all graduate programs, PSMs offer advanced coursework in a (science or math) specialization, usually in an emerging or hybrid field such as bioinformatics. Most PSMs also provide business courses—including finance, project management, regulatory affairs, and intellectual property law—and information technology classes as well. PSMs are “industry relevant” by design, with external advisory committees populated by local business leaders, weekly colloquia led by corporate representatives, special arrangements for employed students, and industry internships or final projects exploring realistic business scenarios (see Box 2 ). Box 2. 
Requirements for PSM Programs Only programs meeting most of the following requirements may earn the official moniker “professional science master's.” Two years of science or math graduate-level coursework, taught by regular faculty, characterized by interdisciplinary studies and a focus on informatics Training in business fundamentals—such as finance, marketing, project management, communication, and team building—and exposure to industry professionals Final project reflecting a realistic workplace issue and/or industry internship Advisory board of industry professionals Targeted recruitment and admissions separate from other degree programs Commitment to tracking graduates through first five years Long-term sustainability (Source: http://www.sciencemasters.com/affiliation.html .) A key principle underlying the PSM model is interdisciplinarity. PSM students are encouraged to reach out to other departments and broaden their expertise in multiple areas, to better understand the collaborative culture of industry-style scientific enterprise. To promote such connections, PSM programs explicitly teach teamwork and effective scientific communication, with authentic case studies analyzed alongside MBA students, classroom presentations and public seminars, and open defenses of final projects. Consequently, PSM graduates, unlike many doctoral graduates, are trained to possess a wide array of interactive skills, including sizing up an audience for their ability to comprehend the presented material and adapting appropriately. In a science-based business, ideas must flow freely between scientists and non-scientists in and out of the company—between researchers and marketers, say, or inventors and patent lawyers—to capitalize on discoveries and comply with regulations. When non-scientists misunderstand the science underpinning a business model, profits suffer. 
But the presence of a central employee who streams data between differently educated members of the network may boost the bottom line. PSM students are specifically trained to act as such “science translators.” “[My PSM] allows me to serve as an efficient mediator between corporate entities, university personnel, and scientists,” says Herzer. For this reason, small companies and start-ups, which cannot afford specialists for every position, may particularly benefit from PSM-credentialed employees, able to connect different people and function in multiple roles; indeed, many PSM graduates have job descriptions expressly created for them. “We need generalists rather than specialists,” says James L. Ratcliff, Chairman and CEO of Rowpar Pharmaceuticals, a dental products company in Scottsdale, Arizona. For small companies like his, Ratcliff says, “PSM graduates have an appropriate combination of project management expertise, an understanding of business environments and priorities, and advanced knowledge in the physical and life sciences.” Although it is too early for comprehensive assessment, employment outcomes for PSM graduates have been examined, and this result is clear: they are getting industry jobs. According to The Conference Board, an independent business management organization funded by the Sloan Foundation to survey PSM alumni, by 2002, 91% of the first PSM graduates had obtained full-time positions within their field despite a white-collar recession, two-thirds with salaries of $50,000 or more. A separate analysis by the Commission on Professionals in Science and Technology found that 61.5% of employed respondents were hired by businesses. Employment opportunities range from marketing to bioinformatics (see Box 3 ). “Companies need people that can work in companies,” says Lindy A. Brigham, coordinator of the Applied Biosciences PSM program at the University of Arizona. Box 3. 
First Jobs Obtained by PSM Graduates A sampling of actual first jobs obtained by students in PSM or PSM-style programs: Coordinator of Regulatory Affairs Associate Criminalist Licensing Assistant Staff Researcher Senior Computer Database Specialist Project Manager E-Product Marketing Specialist Clinical Consultant Technical Support Specialist Bioinformatics Programmer Manager of Medical Affairs (Source: personal communications from L. A. Brigham, D. Ascher, T. Tiongson Pohar, and S. Inamdar.) Not a Perfect Cure Although most scientific careers demand a graduate degree, a professional master's in many hard sciences still encounters entrenched academic opposition. According to Lee-Jen Wei, then acting chair of the Department of Biostatistics in the Harvard School of Public Health, quoted in the Wall Street Journal , “Harvard tries to create leadership in industry, academics and government, and our philosophy is we don't think that with a master's degree people can fill that role very easily.” The government appears to agree with this view. While most doctoral candidates receive federal funds for tuition and other expenses, there is little money for master's students, who disproportionately end up in industry regardless of specialization. PSM students are especially affected by this problem because “interdisciplinary” equals “expensive.” Similarly, “interdisciplinary” can also mean “hard to find”—companies with targeted recruitment often miss PSM students, who are not “in” any particular department—and “confusing”—differences in these new, still somewhat vaguely defined programs can make hiring comparisons difficult. But perhaps the most conspicuous drawback to PSMs is their newness, and resulting obscurity: almost half of graduates say they are “not sure” employers will value their PSM, or the unique skill set it affords. But Will They Succeed? Still, many observers of higher education support the PSM concept. 
Judith Glazer-Raymo, author of the forthcoming book Professionalizing Graduate Education: The Master's Degree in the Marketplace , argues that converging market forces will lead to the success of the professional master's degree in science. These forces include: rapid technological change; the rise of alternative learning channels such as online and distance education, corporate universities, and hi-tech certification programs; the proliferation of degrees in general, and in multidisciplinary fields specifically; and a fundamental societal shift away from public service and toward entrepreneurship, profitability, and competition. Kenneth R. Smith, former dean of the Eller College of Business and Public Administration at the University of Arizona, and others make the case that PSMs may protect students' careers from outsourcing to foreign countries. The American S&E labor pool is shrinking, and industry has already responded by transferring much of its research and development overseas; however, companies are mostly moving lab scientists, not strategic analysts. Cross-training in both science and business could thus provide an edge for domestic workers in the near-term employment environment; in fact, PSM programs have a higher proportion of US citizens and residents than S&E doctoral programs. Further, in its 2003 report, the National Science Board urged the government to better align S&E graduate education with “expected national skill needs,” including “interdisciplinary skills.” The report also recommended federal funding for a wider range of educational options and more attention on the real economic concerns of students—code words for support of professional master's degree initiatives. 
In the same vein, top universities now advocate “interconnections” between their professional schools and traditional departments, as a way of strengthening the overall academic mission, and many countries are sponsoring initiatives to stimulate university–industry links, to maximize marketing of technological innovations. For advocates, then, the PSM both advances the cause of science education reform and addresses changing employment conditions with one big idea: reinvention of the two-year graduate credential for an entrepreneurial age. Herzer, for one, now a technology development representative at the Scripps Research Institute, has staked his future on the potential of professional master's degrees. “Scientists rarely understand business dealings, and business personnel rarely comprehend scientific discoveries,” he says. “The overlay of the two is crucial for any successful business transaction of scientific origins.”

(Figure titles from Box 1: median length of time to doctorate degree; number of doctorates employed in the biological sciences, by position; science and engineering doctorates by employment sector.)
What do evidence-based secondary journals tell us about the publication of clinically important articles in primary healthcare journals? (PMC518974)

Background We conducted this analysis to determine i) which journals publish high-quality, clinically relevant studies in internal medicine, general/family practice, general practice nursing, and mental health; and ii) the proportion of clinically relevant articles in each journal.

Methods We performed an analytic survey of a hand search of 170 general medicine, general healthcare, and specialty journals for 2000. Research staff assessed individual articles by using explicit criteria for scientific merit for healthcare application. Practitioners assessed the clinical importance of these articles. Outcome measures were the number of high-quality, clinically relevant studies published in the 170 journal titles and how many of these were published in each of four discipline-specific, secondary "evidence-based" journals (ACP Journal Club for internal medicine and its subspecialties; Evidence-Based Medicine for general/family practice; Evidence-Based Nursing for general practice nursing; and Evidence-Based Mental Health for all aspects of mental health). Original studies and review articles were classified for purpose: therapy and prevention, screening and diagnosis, prognosis, etiology and harm, economics and cost, clinical prediction guides, and qualitative studies.

Results We evaluated 60,352 articles from 170 journal titles. The pass criteria of high-quality methods and clinically relevant material were met by 3059 original articles and 1073 review articles. For ACP Journal Club (internal medicine), four titles supplied 56.5% of the articles and 27 titles supplied the other 43.5%. For Evidence-Based Medicine (general/family practice), five titles supplied 50.7% of the articles and 40 titles supplied the remaining 49.3%.
For Evidence-Based Nursing (general practice nursing), seven titles supplied 51.0% of the articles and 34 additional titles supplied 49.0%. For Evidence-Based Mental Health (mental health), nine titles supplied 53.2% of the articles and 34 additional titles supplied 46.8%. For the disciplines of internal medicine, general/family practice, and mental health (but not general practice nursing), the number of clinically important articles was correlated with Science Citation Index (SCI) Impact Factors.

Conclusions Although many clinical journals publish high-quality, clinically relevant and important original studies and systematic reviews, the articles for each discipline studied were concentrated in a small subset of journals. This subset varied according to healthcare discipline; however, many of the important articles for all disciplines in this study were published in broad-based healthcare journals rather than subspecialty or discipline-specific journals.

Background

Evidence on the journal-reading habits of clinicians comes from three separate groups of publications. First, several surveys have been used to ascertain the reading habits of physicians. Fafard and Snell [ 1 ] assessed house staff who reported reading an average of 8.7 hours per week, with about half of their time spent reading for specific patient situations. Reading time for family practice residents was more than three hours per week [ 2 , 3 ] and ranged from 1 to 12 hours. Dermatology residents averaged 4.2 hours reading per week and read an average of seven journals, four of which were peer reviewed [ 4 ]. Internists read an average of 4.4 hours per week [ 3 ], while surgeons reported an average reading time of 3.5 hours across 3–16 journals [ 5 ]. This average of three to four hours of reading time per week is quite consistent across disciplines, level of education, time, and nationality. A second set of surveys and studies centers on general information-seeking behaviors of clinicians.
These studies show how journal reading fits in with the other types of information that clinicians use. Two systematic reviews have been done recently. Researchers at the Australian National Institute of Clinical Studies [ 6 ] reviewed preferred information sources in many clinician groups, including physicians (primary care/general practice/family practice, hospitalists, rural physicians, diabetologists); nurses (hospital and occupational health nurses); physical therapists; dental hygienists; and policy-makers. They reviewed 34 studies and concluded that all groups used multiple information resources, with information needs answered most often by other people, followed by books and journals. Dawes and Sampson [ 7 ] evaluated 19 studies of physician information-seeking behavior. They placed books and journals in one category (print resources) and found this to be the most used information source, with colleagues being the second. The third source of information on clinicians' use of information resources comes from marketing studies. The Association of Medical Publications [ 8 ] monitors physician use of printed journals and other information resources. Despite the rapid expansion of the Internet and all the information it contains, physicians continue to read and value journal articles, and their reliance on journals may be increasing. Data collected in 1983 and 1998 shows that physician reliance on journal literature as their main source of medical information increased from 61.8% to 76.3%, an absolute increase of 14.5% in 15 years. The importance of reading journal articles for clinical care is evident. The increasing number of journals from which important and relevant articles are found, combined with the decreasing number of personal subscriptions [ 9 ], makes it more important than ever for physicians to choose carefully which journals to subscribe to and read. 
This decision should not be based on intuition alone, as shown in an important study by obstetricians and gynecologists. Weiner et al. [ 10 ] sought to determine which journals had published numerical data on the relation between oral contraceptive use and cancer, information they judged to be clinically important and expected to be readily available in their subspecialty journals. Of 3735 articles identified by MEDLINE searches, only 27 studies reported numerical data, of which 23 were published in mainstream general medical journals. Only four were published in obstetrics and gynecology journals. Since the publication of the study by Weiner et al. [ 10 ], several groups have tried to determine targeted journal subsets that could provide the most important clinical information to physicians in different specialties. Birken and Parkin [ 11 ] assessed journals with pediatric content. Using data from pediatric-related systematic reviews in the Cochrane Database of Systematic Reviews for 1997, as well as policy statements from the American Academy of Pediatrics and the Canadian Paediatric Society, they determined that four general medical journals and three pediatric specialty journals provided access to most of the important advances: Archives of Disease in Childhood, BMJ, JAMA, Journal of Pediatrics, Lancet, New England Journal of Medicine, and Pediatrics. Their results validate the findings of Weiner et al.: important studies in a discipline or specialty are often not published in specialty journals. Gehanno and Thirion [ 12 ] used MEDLINE searches and the Science Citation Index (SCI) Impact Factors to identify journal subsets in occupational health. Eight journals provided coverage of 27% of their discipline content; 38 journals increased this to 52%. Coverage needed to be expanded beyond their specialty journals for them to remain current in occupational health. Lee et al.
[ 13 ] sampled research articles from 30 randomly selected journals from a list of 107 general internal medicine journals defined by SCI. They found that journals with high citation rates, SCI Impact Factors, and circulation rates; low manuscript acceptance rates; and listing on the Brandon/Hill Library List [ 14 ] were predictive of higher article methodologic scores. Ebell et al. [ 15 ] as well as our research group [ 16 ] present an alternative approach for clinicians to keep up to date with current literature. Both groups produce summaries of important advances in areas of clinical care so that individuals do not have to read primary journals and evaluate reports. Ebell et al. provided results of a hand search of 85 core journals of interest to family/general practice. Physicians read these journals for six months and identified articles that were considered to be POEMs (patient-oriented evidence that matters). A POEM addresses a clinical question encountered by a family physician at least once every two weeks, measures patient-oriented outcomes, and presents results that will likely affect practice. The report provides summaries of which journals publish important clinical advances for general/family practice. In this article we report on a survey of the contents of 170 core clinical journals for the publishing year 2000 to assess which journals publish the highest number of methodologically sound and clinically relevant studies in the disciplines of internal medicine, general/family practice, general practice nursing, and mental health. In the "Methods" section we describe our two-step article selection process for clinical importance and methodologic rigor, which is very similar to that used by Ebell et al. [ 15 ]. The data we provide reflects the merit of individual journal titles from a clinical perspective; it may help clinicians to choose which journals to read, and health sciences libraries to include them in their collections. 
Methods The Health Information Research Unit of the Department of Clinical Epidemiology and Biostatistics at McMaster University, Ontario, Canada, publishes several secondary "evidence-based" journals, systematically selecting, summarizing and appraising articles in a broad range of primary clinical journals. In 2000 we prepared ACP Journal Club (ACP J Club) to support internal medicine , Evidence-Based Medicine (EBM) to support general/family practice, Evidence-Based Nursing (EBN) to support general care nursing, and Evidence-Based Mental Health (EBMH) to support mental health. To identify potential candidate articles for inclusion in these journals, six Masters-level trained staff read each article in the major general healthcare journals and those in the disciplines and subdisciplines related to the content of each abstract journal. The list of these journals (see Additional File 1 ) is comprised of titles suggested by librarians, clinicians, editors, and editorial staff; SCI Impact Factors; and systematic examination of the contents of each title for at least six months. More than 400 journal titles have been assessed since the abstract journals were started in 1991. We consider the Cochrane Database of Systematic Reviews to be a separate journal that publishes systematic reviews of the literature on a quarterly basis. This is consistent with the U.S. National Library of Medicine's decision to index the Cochrane Database of Systematic Reviews as a separate journal. We evaluate only the new reviews and those that are substantially updated each quarter. We do not consider the rest of the database or protocols that describe reviews that are in progress or being planned. Original and review articles are placed in one or more of seven categories of study type–therapy and prevention, screening and diagnosis, prognosis, etiology and harm, economics and cost, clinical prediction guides, and qualitative studies [ 16 ]. 
All categories have a set of pass/fail rules for selection (see: ), except for qualitative and cost studies. Basic inclusion criteria are that the articles i) are about the healthcare of humans; ii) have at least one clinically important outcome; and iii) use appropriate statistical analyses. As an example of category-specific criteria, an article on screening or diagnosis must meet these additional criteria:

• a spectrum of participants was included, some with the disease or condition of interest and some without
• objective diagnoses were made using the "gold" standard or current clinical standard for diagnosis of the disease or condition
• participants received both the new test and some form of the diagnostic standard
• the diagnostic standard was interpreted without knowledge of the test result and vice versa.

The basic inclusion criteria are based on study design and methodology principles for evidence-based healthcare. Their use identifies studies that have data related to patients or those at risk of disease, diseases and conditions, and real-life clinical settings. Therefore, a study or review article that meets the criteria can be considered to be appropriate for possible use in patient care decision-making. The article readers are trained and retested annually so that they can reliably apply these selection rules for inclusion in our evidence-based journals (kappa measuring chance-adjusted agreement > 80% for all categories) [ 16 ]. With a research grant from the U.S. National Library of Medicine we intensified our data collection from the reading process related to the evidence-based journals for the publishing year 2000. All articles in 170 journals were classified as to whether they were "of interest" to the healthcare of humans and, if so, whether they reported original data or were systematic review articles.
These original studies and reviews were classified into all possible categories (where more than one category could apply, for example, a therapy article that included economic data), and were then given a pass or fail methodologic designation for each category. Articles passing methodologic criteria were assessed further for clinical interest by an editorial group of practicing clinicians for each abstract journal. These clinicians have expertise in methodology and specific areas of healthcare such as gastroenterology or neonatology nursing. At this point the clinician raters excluded all studies with preliminary results, interventions that were not readily available or proven useful, already known and applied findings, and topics addressing rare conditions or diseases. After review, often by a team of three to five clinicians (see for a copy of the rating system that was used in paper format for this study), some articles were further processed. The editors chose articles to be abstracted that they considered to have the most important message for clinicians. The remaining pass articles were listed as "Other Articles Noted" if their content was of relevance to the disciplines covered by the abstract journals. This dual selection process (methodologic rigor and clinical importance) provided insight into which journals yielded the highest numbers of pass articles. This had major implications for clinical practice at two levels of clinical relevance. The more stringent level includes articles that were summarized in each abstract journal. The second, less stringent level includes articles that are abstracted as well as those articles that are listed in the Other Articles Noted sections. Analysis was done by abstract journal title (ACP J Club, EBM, EBN, and EBMH) to ascertain which journal titles were most important to their target clinical audience (internal medicine, general/family practice, general practice nursing, and mental health, respectively).
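The dual selection process described above amounts to a two-stage filter: trained readers apply the methodologic inclusion rules, then clinician raters apply the clinical-relevance exclusions. A minimal sketch of that logic follows; this is illustrative only, not the authors' actual software, and the field names are hypothetical.

```python
# Illustrative two-stage filter mirroring the dual selection process.
# Stage 1: basic methodologic inclusion criteria (trained readers).
# Stage 2: clinical-relevance exclusions (clinician raters).
# All dictionary keys are hypothetical labels, not the study's actual data fields.

def passes_methods(article: dict) -> bool:
    """Stage 1: basic inclusion criteria for methodologic rigor."""
    return (article["about_human_healthcare"]
            and article["clinically_important_outcome"]
            and article["appropriate_statistics"])

def passes_clinical_review(article: dict) -> bool:
    """Stage 2: exclude preliminary, unavailable, already-applied,
    or rare-condition studies."""
    return not (article["preliminary_results"]
                or article["intervention_unavailable"]
                or article["already_known_and_applied"]
                or article["rare_condition"])

def select_pass_articles(articles: list[dict]) -> list[dict]:
    """An article must clear both stages to count as a 'pass' article."""
    return [a for a in articles
            if passes_methods(a) and passes_clinical_review(a)]
```

In the actual process, stage 2 further divides pass articles into those abstracted and those listed under "Other Articles Noted", giving the two levels of clinical relevance described above.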
SCI Impact Factors were collected for each journal title for each discipline. If an SCI Impact Factor was not available, we sought Social Science Index Impact Factors. These data were analyzed to determine if Impact Factors were related to yield of clinically important advances, as found by Lee et al. [ 13 ] and Gehanno and Thirion [ 12 ].

Results

For 2000, the 170 core journals we selected published 60,352 articles. The total number of pass articles was 3059 for original studies and 1073 for reviews. An article could be counted more than once if it passed for multiple categories. Six journals did not publish any pass articles. The complete list of journals and their yield appears in Additional File 1. The category breakdown of pass articles for original studies and review articles, respectively, was 1639 and 662 for therapy and prevention, 152 and 47 for screening and diagnosis, 195 and 22 for prognosis, 290 and 308 for etiology and harm, 35 and 10 for economics, 358 and 8 for qualitative studies, and 93 and 4 for clinical prediction guides. The top 20 journals for yield of pass articles are included in Table 1. The titles varied considerably in both the total number and proportion of clinically relevant articles that they published. For example, 95.0% of the articles (all reviews) in the Cochrane Database of Systematic Reviews passed our criteria, while only 2.8% of the articles in the American Journal of Gastroenterology met standards for clinically applicable studies. (The American Journal of Gastroenterology is a specialty journal and a substantial proportion of its content is preclinical. These preclinical articles, by definition, did not meet the clinical criteria in this study.) Generally, a clinical reader would need to read in the range of 13–14 articles from these top 20 journals to obtain one that is directly clinically important in any healthcare area, although the range is substantial (1.1 to 36.9).
We call this number the "number of articles needed to be read" or NNR. The number of pass articles did not correlate with SCI Impact Factors for the top 20 journals (correlation coefficient 0.29, P = 0.24). Analysis of the top 50 titles showed a weak correlation (correlation coefficient 0.41, P = 0.004) for the same analysis. The breakdown by discipline was done using the total number of articles that were selected for inclusion in each of the four abstract journals–internal medicine, general/family practice, general care nursing, and mental health (Tables 2 , 3 , 4 , 5 ). Both the total number of articles abstracted and the total number of articles in each journal (abstracted and "Other Articles Noted") are included in Tables 2 , 4 , and 5 giving a two-level assessment of "clinical worth". EBM does not publish an "Other Articles Noted" Section. Internal medicine content (ACP J Club) The journals contributing articles important to the practice of internal medicine ( ACP J Club) are shown in Table 2 . Substantial drop-off is seen after the top three titles ( New England Journal of Medicine, JAMA, and Lancet) . These three journals and the Cochrane Database of Systematic Reviews provided 56.5% of the articles abstracted, with 28 additional journals providing the other 43.5%. Fifteen titles provided only one article each and, overall, 32 journals provided at least one article for abstraction. Another 51 journals provided at least one article in the "Other Articles Noted" section. Thus, 83 journals from our list of 170 published studies important to internal medicine. The NNR to obtain one high-quality and clinically relevant study or review varied considerably across the titles. For the more stringent definition of clinical relevance (article abstracted in ACP J Club) , the range of NNR for internal medicine was from 40.4 for the Cochrane Database of Systematic Reviews to 1334 for Neurology . 
For the less stringent definition (article abstracted or noted in ACP J Club), the NNR range for internal medicine was from 3.4 for the Cochrane Database of Systematic Reviews to 242 for Acta Obstetricia et Gynecologica Scandinavica. Correlating the number of articles published in ACP J Club with their SCI Impact Factor showed a large and positive correlation for both levels of clinical importance (correlation coefficient 0.786, P < 0.001 for the more stringent definition; correlation coefficient 0.688, P < 0.001 for the less stringent definition of clinical importance). These findings support the findings by Lee et al. [ 13 ] that SCI Impact Factors were correlated with quality articles for general internal medicine.

General/family practice content (EBM)

The most important articles for general/family practice (publication in EBM) were published in BMJ, Lancet, Cochrane Database of Systematic Reviews, Archives of Disease in Childhood, and Annals of Internal Medicine–these journals provided 55.6% of EBM content (Table 3). Overall, 45 titles provided abstracts for general/family practice coverage. The "shape" of the data is different for general/family practice than for general internal medicine, with more journals providing articles for abstraction. This is consistent with the discipline because general/family practitioners must use knowledge from a broader range of health conditions (including pediatrics and obstetrics, for example) than general internists and other specialists. Only the most stringent definition for clinical worth could be evaluated for EBM content because the "Other Articles Noted" section of the journal did not exist in the year 2000. The NNR for general/family practice ranged from 55 for the Cochrane Database of Systematic Reviews to 1351 for Circulation. Correlation analysis showed that the number of qualified articles in each journal title was associated with the journal's SCI Impact Factor (correlation coefficient 0.546, P < 0.001).
This shows substantial agreement between SCI Impact Factors and number of articles but slightly less agreement than that found using the general internal medicine data (correlation coefficients > 0.688). General care nursing content (EBN) Nursing content came from many journals, including journals that are considered to be primarily targeted at physicians, and was not concentrated in a small set of journal titles (Table 4 ). To reach 51.0% of the abstracted articles, seven titles were needed ( Qualitative Health Research, Cochrane Database of Systematic Reviews, Pediatrics, JAMA, Lancet, BMJ , and Journal of Advanced Nursing ). Thirty-two other journals provided articles for abstraction and 33 journals provided studies that were listed only in the "Other Articles Noted" section; 72 journals in total provided content for general care nursing. The NNR for general practice nursing was variable, ranging from 6.0 for Qualitative Health Research to 1530 for New England Journal of Medicine for the more stringent definition of clinical relevance. For the less stringent definition of clinical relevance the NNRs ranged from 4.7 for Qualitative Health Research to 923 for American Journal of Gastroenterology . The low NNR for Qualitative Health Research undoubtedly reflects the fact that only clinical criteria for relevance were applied in the selection of qualitative studies, not explicit methodologic criteria. The reason for lack of methodologic criteria for qualitative studies was that we have been unable to obtain agreement from qualitative researchers of what the quality criteria should be. No correlation was seen between number of articles published per journal title and SCI Impact Factors for either the stringent definition of clinical relevance (correlation coefficient 0.096, P = 0.57) or the less strict definition (correlation coefficient 0.256, P = 0.038). 
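The NNR arithmetic and the yield-versus-Impact-Factor correlations used throughout these sections are straightforward to reproduce. The sketch below is illustrative only: the journal names, article counts, and impact factors are invented, not data from this study.

```python
# Illustrative sketch of the NNR ("number needed to read") calculation and
# the correlation of article yield with impact factors. All journal names,
# counts, and impact factors below are hypothetical.

def nnr(total_articles, pass_articles):
    """NNR: articles a reader must scan to find one passing article."""
    if pass_articles == 0:
        return float("inf")  # e.g. the six journals with no pass articles
    return total_articles / pass_articles

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# (total articles, pass articles, impact factor) -- hypothetical values
journals = {
    "Journal A": (420, 380, 24.8),  # NNR ~1.1, like a reviews-only title
    "Journal B": (900, 65, 8.2),
    "Journal C": (360, 10, 2.1),    # NNR 36, a low-yield specialty title
}
nnrs = {name: nnr(t, p) for name, (t, p, _) in journals.items()}
r = pearson_r([p for _, p, _ in journals.values()],
              [f for _, _, f in journals.values()])
```

With real data, the same two functions would give the per-title NNRs and the yield/Impact-Factor correlation coefficients reported above (significance testing of r would need an additional step).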
Mental health content (EBMH) Mental health content was also spread over a broader range of journals than was internal medicine (Table 5 ). To reach 53.2% of the articles abstracted, nine titles needed to be read: Archives of General Psychiatry, Cochrane Database of Systematic Reviews, American Journal of Psychiatry, British Journal of Psychiatry, JAMA, Lancet, International Journal of General Psychiatry, Journal of the American Academy of Child and Adolescent Psychiatry, and Journal of Consulting and Clinical Psychology. Forty-one titles provided at least one article for abstraction. The titles in Table 5 show that studies related to mental health are published in many journals and specialties–a reflection of the broad nature of the discipline. The NNR for mental health for the most stringent definition of clinical relevance ranged from 20.1 for Archives of General Psychiatry to 1142.7 for BMJ . Archives of General Psychiatry also has the lowest NNR for the less stringent definition (11.5), with CMAJ having the highest NNR (1007) of those journals with at least one article on mental health. EBMH has a smaller "Other Articles Noted" section. Only eight additional journals provide articles for this section beyond the 61 that provide articles for abstraction. A weak association was shown between the number of published mental health articles and SCI Impact Factors (correlation coefficient 0.386, P = 0.02 for the more stringent definition; and correlation coefficient 0.381, P = 0.01 for the less stringent definition). All disciplines Combining the content across the four discipline areas, we again see the concentration of important clinically relevant articles in a small subset of journals. Eight journals provided at least one article for abstraction to all four abstract journals: Annals of Internal Medicine, Archives of Internal Medicine, BMJ, CMAJ, Cochrane Database of Systematic Reviews, JAMA, Lancet, and New England Journal of Medicine. 
Another 10 journals provided at least one article to three of the four abstract journals: American Journal of Medicine, Archives of General Psychiatry, British Journal of General Practice, British Journal of Surgery, Health Psychology, Journal of Clinical Epidemiology, Journal of Clinical Psychopharmacology, Journal of Family Practice, Journal of the American Geriatrics Society, and Pediatrics. Twenty-eight journals provided studies to two of the abstract journals, 36 provided articles to at least one abstract journal, and 82 titles provided no articles for abstraction (excluding the six titles that did not publish any pass articles). Conclusions We found that the majority of articles for each discipline were sequestered in a small subset of journals. This is consistent with Bradford's Law of Scattering for journal subsets, which states that the important articles on any topic will be concentrated in a small subset of journals with exponential drop-off in numbers of relevant articles across journal titles [ 17 ]. Across disciplines and study areas, approximately 70% of articles are often found in 30% of journals in any given area of study. Not surprisingly, for broad-based disciplines such as mental health and nursing, the number of titles was greater than for more focused disciplines such as internal medicine. SCI Impact Factors were highly correlated with the number of important clinical articles in separate titles for internal medicine and, to a lesser extent, for general/family practice, and mental health but not for general practice nursing. This likely reflects the volume of clinically important research activity in these fields–with especially high volumes in the disorders managed by internal medicine and its subspecialties–coupled with the avidity of authors from all disciplines to submit their best studies to the high-circulation general journals. As found by Weiner et al. 
[ 10 ] and others, most of the important advances in any discipline are not published in specialty journals but in the more general healthcare journals such as JAMA, Lancet, BMJ, New England Journal of Medicine, and Cochrane Database of Systematic Reviews . Health professionals in all disciplines should be aware that major advances in any field will most likely be published in the main general medicine journals, while at the same time recognizing that specialty journals also publish important information. Much variation exists across journal titles in both the number and proportion of articles that are high quality, clinically important, and newsworthy. Variation also exists across disciplines. It is also interesting to note that all lists of important journals discussed in this report and also the one by Ebell et al. [ 15 ] include both North American and European titles. Reading choices for clinicians cannot be based on national or discipline boundaries alone. Of the 45 titles that provided articles to EBM , 23 were on the list provided by Ebell et al. (POEM articles) [ 15 ]. Ebell et al. found common POEMs in 49 journals and any POEMs in 64 journals. POEMs and EBM cover the content of general/family practice by considering a similar number of journals, although both groups read approximately 50% unique titles. Ebell et al. read 85 titles for POEMs articles and we read 170 titles for this study. Our coverage of clinical content was broader and included internal medicine, general practice nursing, and mental health, but 53 titles were read by both groups. Correlational analysis for the ranking of each journal title according to the number of articles identified as clinically important showed a small but significant agreement (0.4397, P = 0.005) when comparing our list with the list by Ebell et al. Consistent with the data from Weiner et al. 
[ 10 ], many advances important to general practice nursing are not published in nursing specialty or discipline-specific journals. Only four of the top 17 and eight of the top 41 journals in Table 4 are considered nursing specialty titles. Overall, 39 titles provided at least one article for abstraction and an additional 33 titles provided at least one article to the "Other Articles Noted" section, again showing the broader spectrum of journals that publish articles important to general care nursing. Clinicians in the target disciplines described here could use our findings to focus their full-text reading. For other disciplines, a similar audit of clinical yield would be needed, either from an appropriate secondary journal that systematically reviews specified journals, or from an independent audit. Another approach to staying current may be to subscribe to one or more secondary journals that highlight important clinical advances. These secondary publications not only select the most appropriate studies for clinical consideration but also highlight important aspects of methodology and implementation. This assessment of studies before application can be time-consuming and difficult for many clinicians, and involves a certain amount of training and practice to become proficient. Many examples of secondary publications exist in various disciplines and include the four studied in this report, POEMs [ 15 ], and Journal Watch. Use of these summaries of studies and reviews can be supplemented by access to full-text articles. Many academic medical centers and hospitals provide good online access to major healthcare journals. For example, the Health Sciences Library of the University of Pittsburgh, PA, USA, provides online access to 24 of the top 25 journals in this study and all 25 of the journals identified as high yielders by Ebell et al. [ 15 ].
Specialized health libraries with limited budgets may wish to focus on the journals, either in paper or electronic format, with the highest yield for the disciplines they serve.

List of abbreviations

ACP J Club: ACP Journal Club (journal)
AHCPR: Agency for Health Care Policy and Research (now AHRQ)
AHRQ: Agency for Healthcare Research and Quality (formerly AHCPR)
CCOHTA: Canadian Coordinating Office for Health Technology Assessment
EBM: Evidence-Based Medicine (journal)
EBMH: Evidence-Based Mental Health (journal)
EBN: Evidence-Based Nursing (journal)
NNR: Number of articles needed to read to obtain one high-quality and clinically relevant study or review
POEM: Patient-oriented evidence that matters
SCI: Science Citation Index

Competing interests

The authors all worked with ACP Journal Club, Evidence-Based Medicine, Evidence-Based Nursing, and Evidence-Based Mental Health at the time of this study, and were paid for this work, but the publishers of these journals were not involved in the study, which was funded externally. The authors do not hold stocks or shares in any company that may benefit from the publication of this paper.

Author contributions

NLW and RBH prepared grant submissions in relation to this project. All authors drafted and commented on the manuscript and approved the final manuscript, as well as supplied intellectual content to the collection and analysis of the data. NLW and KAM did data collection and analysis, and supervised research staff.

Supplementary Material

Additional File 1 includes a list of the 170 journals read for 2000 along with the number of articles reviewed, the number and percentage that passed criteria, and the NNR (the number of articles that need to be read to obtain one that is clinically relevant and has high-quality methods). The file name is "Publishing Important Articles Appendix.doc" and it is in Word 2000 format.
Estimates of statistical significance for comparison of individual positions in multiple sequence alignments

Background: Profile-based analysis of multiple sequence alignments (MSA) allows for accurate comparison of protein families. Here, we address the problems of detecting statistically confident dissimilarities between (1) an MSA position and a set of predicted residue frequencies, and (2) two MSA positions. These problems are important for (i) evaluation and optimization of methods predicting residue occurrence at protein positions; (ii) detection of potentially misaligned regions in automatically produced alignments and their further refinement; and (iii) detection of sites that determine functional or structural specificity in two related families.

Results: For problems (1) and (2), we propose analytical estimates of P-value and apply them to the detection of significant positional dissimilarities in various experimental situations. (a) We compare structure-based predictions of residue propensities at a protein position to the actual residue frequencies in the MSA of homologs. (b) We evaluate our method by the ability to detect erroneous position matches produced by an automatic sequence aligner. (c) We compare MSA positions that correspond to residues aligned by automatic structure aligners. (d) We compare MSA positions that are aligned by high-quality manual superposition of structures. Detected dissimilarities reveal shortcomings of the automatic methods for residue frequency prediction and alignment construction. For the high-quality structural alignments, the dissimilarities suggest sites of potential functional or structural importance.

Conclusion: The proposed computational method is of significant potential value for the analysis of protein families.
Background

Profile-based methods of sequence analysis use multiple sequence alignments (MSA) to extract information about conserved features of a protein family, which are impossible to decipher from a single sequence. Such methods increase both the sensitivity of homology detection and the quality of produced alignments [ 1 - 10 ], mainly due to more accurate scoring of similarity between sequence positions. Here, we address a problem connected to but different from the problem of scoring positional matches. We focus on detecting confident dissimilarities between profile positions that are suggested to be equivalent. In particular, we sought conservative P-value estimates for the comparison of individual columns in MSA. Such estimates have at least three practical applications: (i) evaluation and optimization of methods predicting propensities for residue occurrence at protein positions; (ii) detection of potentially misaligned regions in automatically produced alignments and their further refinement; and (iii) detection of sites of functional or structural specificity in two related families. Statistical analysis at the level of individual MSA positions may be used to compare residue frequencies predicted from some model to the actually observed residue usage at the given position in sequence homologs. The model may represent, for example, a method for in silico sequence design that generates native-like sequences from a structural template. Detection of discrepancies between the model and the real data would assist the analysis of the model's performance and its further improvement. To our knowledge, such statistical assessment has not been proposed to date. Several approaches have been proposed to detect potential regions of low alignment quality in sequence-sequence and sequence-profile alignments.
These approaches range from identifying low-scoring regions in pairwise alignment [ 11 ] to more complicated schemes: comparing scores of the given alignment and the optimal alignment where this position is omitted [ 12 ], or analyzing the consistency of a given position among different alignments produced with various parameters of alignment construction [ 13 , 14 ]. For multiple sequence alignments, positional residue conservation was proposed as a measure to detect potentially misaligned regions of high variability [ 15 , 16 ]. Cline and co-authors [ 17 ] compared several methods for positional evaluation of sequence-profile alignments and recommended the approach based on the analysis of near-optimal alignments [ 13 , 14 ]. However, detection of potentially misaligned regions in profile-profile alignments has not been addressed before. When the analyzed alignment is highly reliable, detecting positions of significant dissimilarity may reveal sites that determine functional or structural specificity of otherwise similar proteins. Several approaches have been proposed that use comparison of multiple sequence alignments in order to predict such sites [ 18 - 21 ]. However, these methods do not involve explicit estimation of statistical significance. Bejerano [ 22 ] has recently proposed a promising algorithmic approach to the exact P-value computation, which allows for a faster enumeration of possible outcomes. Despite a significant improvement in the computational efficiency, the algorithm still requires a considerable time to process realistic data in 20-dimensional space of residue frequencies. In this work, we consider approximate analytical estimates of P-value in two settings: (1) comparison of an alignment column to an emission vector of residue probabilities, and (2) comparison of two alignment columns. These estimates allow detecting cases where the null hypothesis (assumption of similarity) can be confidently rejected. 
We performed simulation experiments that show consistency of the estimates with the statistical model, and applied our method, PEAC (P-value Estimation for Alignment Columns), to the analysis of real MSA.

Results

Theory

As the statistical null model of a multiple alignment column, we assumed independent random draws of residues according to a vector of emission probabilities. We represented randomly generated columns by vectors of residue counts n, with total count N equal to that of the real alignment column under evaluation.

Statistical significance of similarity between a multiple alignment column and a vector of emission frequencies

Null hypothesis H0(1): the given alignment column (vector of residue counts n*) is generated by the given vector of emission probabilities f. If this hypothesis is rejected, then the set of emission probabilities is inadequate for the description of the residue content in this alignment column. The assumed null model of random columns corresponds to a multinomial form of ρ(n | f), which is difficult for analytical consideration. To calculate the P-value, we use the multivariate Gaussian approximation of the multinomial distribution, based on the assumption of large statistical samples (large total residue counts N in the generated columns):

ρ(x | f) ≈ (2π)^(-d/2) |Σ|^(-1/2) exp[-(x - μ)ᵀ Σ⁻¹ (x - μ)/2],   (1)

where x = {x_i} is a random d-dimensional vector of residue counts of size N = Σ_i x_i, f is the emission vector of residue frequencies, μ = N·f is the mean vector of residue counts, and Σ = ||cov(x_i, x_j)|| is the covariance matrix. This approximation of the p.d.f. allows for an analytical expression for the P-value (Appendix 1 [see Additional file 1 ]):

P(n*) = Q((d - 1)/2, χ²(n*)/2),   χ²(n*) = (n* - μ)ᵀ Σ⁻¹ (n* - μ),   (2)

where Q(s, x) = Γ(s, x)/Γ(s) is a regularized gamma-function and d is the dimensionality of vector f. Thus, the P-value is described by a χ² distribution with (d - 1) degrees of freedom.
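The χ²-with-(d - 1)-degrees-of-freedom result can be sketched in code. The version below uses the ordinary Pearson chi-squared statistic, which shares the asymptotic χ² distribution with (d - 1) degrees of freedom but is not necessarily identical to the paper's covariance-based statistic; the regularized gamma function is evaluated by its standard series expansion.

```python
# Sketch: chi-squared tail P-value for "column vs emission vector".
# Uses the Pearson statistic as a stand-in for the paper's Gaussian-based
# quadratic form; both are asymptotically chi^2 with d - 1 dof.
from math import exp, lgamma, log

def reg_gamma_q(s, x):
    """Regularized upper incomplete gamma Q(s, x) = Gamma(s, x)/Gamma(s),
    computed via the standard series for the lower incomplete gamma."""
    if x <= 0.0:
        return 1.0
    if x > s + 600.0:
        return 0.0  # tail is far below double precision anyway
    total = term = 1.0 / s
    for k in range(1, int(x) + 200):
        term *= x / (s + k)
        total += term
    p_lower = exp(-x + s * log(x) - lgamma(s)) * total
    return min(1.0, max(0.0, 1.0 - p_lower))

def column_pvalue(counts, freqs):
    """P-value that a column of residue counts was drawn from the given
    emission frequencies: Pearson chi-squared statistic referred to a
    chi^2 distribution with (d - 1) degrees of freedom."""
    n_total = sum(counts)
    stat = sum((c - n_total * f) ** 2 / (n_total * f)
               for c, f in zip(counts, freqs) if f > 0.0)
    d = sum(1 for f in freqs if f > 0.0)
    return reg_gamma_q((d - 1) / 2.0, stat / 2.0)
```

A column matching its expectation exactly gives a statistic of 0 and a P-value of 1; the more a column deviates from the emission vector, the smaller the returned P-value.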
Random simulation shows consistency of P-value estimates with null model

In order to analyze whether the Gaussian approximation allows reasonable P-value estimates, we performed extensive random simulations and tested consistency of P-values based on this approximation (formula (2)) with P-values based on the multinomial model. In particular, we used a set of residue frequencies f = {f_i}, i = 1,…,20, to generate a large number Ω = 10^7 of random columns of a fixed size N, i.e. Ω sets of N residues drawn randomly according to probabilities f_i. For each random column, residue counts n = {n_i}, i = 1,…,20, were derived and the multinomial probability of its generation was calculated as

ρ_mult(n | f) = M(n) ∏_i f_i^(n_i),

where M(n) = N!/(n_1! ⋯ n_20!) is the multinomial coefficient. All Ω generated columns were sorted by ρ_mult in ascending order. For a given P-value P*, the column with rank Ω·P* was chosen from this sorted list. This column corresponded approximately to the multinomial P-value P*. This P-value was compared to our estimate P_estim (formula (2)) calculated for the chosen column in the Gaussian approximation of the multinomial distribution. For each value P* we performed 10 independent simulations and plotted average values of P_estim against P*, which showed their general consistency. Figure 1 illustrates the results for three typical sets of emission frequencies f derived from real alignment columns, and for three typical column sizes N. The accuracy of the estimates becomes poorer for lower column sizes and more skewed frequency sets (Fig. 1A ). However, even in such cases the accuracy of P_estim within orders of magnitude is sufficient for the purpose of detecting pronounced dissimilarities with P << 0.05. Thus, the error introduced by the Gaussian approximation still allows the use of P-value estimates under the initially assumed null model of random columns.
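The simulation protocol above can be mimicked on a small scale. The sketch below (with far fewer than Ω = 10⁷ columns, and an arbitrary example frequency vector) draws random columns, compares their multinomial probabilities to that of an observed column, and reads off an empirical P-value:

```python
# Small-scale analogue of the simulation: rank random columns by their
# multinomial probability and estimate an empirical P-value for an
# observed column. Frequencies and column sizes here are illustrative.
import random
from math import lgamma, log

def log_multinomial(counts, freqs):
    """log rho_mult(n | f): multinomial log-probability of a count vector."""
    n = sum(counts)
    lp = lgamma(n + 1)
    for c, f in zip(counts, freqs):
        lp -= lgamma(c + 1)
        if c:
            lp += c * log(f)
    return lp

def empirical_pvalue(observed, freqs, n_cols=5000, seed=0):
    """Fraction of randomly drawn columns whose multinomial probability is
    <= that of the observed column -- a Monte Carlo version of sorting
    Omega columns by rho_mult and picking the one at rank Omega * P*."""
    rng = random.Random(seed)
    n, d = sum(observed), len(freqs)
    lp_obs = log_multinomial(observed, freqs)
    hits = 0
    for _ in range(n_cols):
        column = rng.choices(range(d), weights=freqs, k=n)
        counts = [column.count(i) for i in range(d)]
        if log_multinomial(counts, freqs) <= lp_obs:
            hits += 1
    return hits / n_cols
```

Plotting such empirical P-values against the analytical estimate of formula (2) over a grid of observed columns reproduces, in miniature, the consistency check shown in Figure 1.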
Statistical significance of similarity between two columns of multiple alignments

Null hypothesis H0(2): the two observed columns m* and n* are generated by a single vector of emission probabilities. As the prior distribution of emission vectors, we use the maximum likelihood (ML) estimate based on m* and n*. Such a prior should produce a conservative upper estimate of the P-value. Rejection of hypothesis H0(2) would mean that the two alignment columns are highly dissimilar. The P-value for this hypothesis is calculated in three steps:

a) Given the two vectors of residue counts {n, m}, we produce the ML estimate of the p.d.f. for emission vectors f that can generate both columns simultaneously. We assumed a simple form of multivariate Gaussian distribution and calculated ML estimates of its mean and variance values (formulae B5).

b) We use this p.d.f. θ(f) as the prior to calculate the posterior probability ρ(n, m | θ(f)) that a pair of random columns {n, m} is produced by any single emission vector f. Similarly to problem 1, we use a multivariate Gaussian approximation of the multinomial distribution that assumes large total residue counts in the generated columns. The posterior probability density can be calculated as

ρ(m, n | θ(f)) = ∫ ρ(m, n | f) θ(f) df.   (3)

c) Using (3), we calculate the P-value as the integral (4) (Appendix 2 [see Additional file 2 ]). This value can serve as the upper estimate of the P-value, since the prior distribution θ(f) is an ML estimate based on the observed alignment columns. The partial integral ∫ ρ(m, n | f) dm dn can be calculated analytically for any emission vector f, but analytical calculation of the full integral (4) is problematic.
However, an approximate estimate of this value would suffice, since (i) expression (4) already contains approximations introduced by estimates of θ(f), ρ(n, m | θ(f)) and ρ(n*, m* | θ(f)); and (ii) we are interested in a conservative estimate of the upper P-value limit. Hence, we calculate an approximate upper estimate of the P-value, formula (5) (Appendix 2 [see Additional file 2 ]), where erf(x) is the error function.

Random simulation shows consistency of upper P-value estimates with null model

To assess the consistency of our estimates with the null model, we performed the following simulation experiments. A random emission vector of residue frequencies f was used to produce a column of size N by random draw according to these frequencies. Having the vector of residue counts n in this column, we produced another vector of counts m that made our estimated P-value P_estim(m, n) equal to the specified value P_0. To produce this vector, we considered sets of residue counts as points in multidimensional space and randomly chose a straight line passing through the point n. On this randomly directed line, we found the point m as the solution of the equation P_estim(m, n) = P_0, where P_estim(m, n) is defined by formula (5). Thus, we generated a pair of columns that corresponded to the specified P-value according to the PEAC estimate. We compared this estimate to the actual P-value P* calculated for the generation of m and n by the original vector f. As shown by the plot of P* against P_estim (Fig. 2 ), a particular estimate of P-value may correspond to various actual values P*. However, for low P-values, i.e. for the range of our interest, PEAC systematically produces P_estim higher than the actual values P*, as expected from upper P-value estimates. These conservative estimates ensure the absence of false positive results among detected cases of significant dissimilarity.
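The paper's upper-bound estimate (formulae (4)-(5)) depends on Appendix 2 derivations that are not reproduced here. As a rough, non-equivalent stand-in, two count columns can be compared with the classical two-sample chi-squared statistic, which likewise pools the columns into a single ML frequency estimate:

```python
def two_column_chi2(m, n):
    """Classical two-sample chi-squared statistic for two residue-count
    columns, using the pooled ML frequency estimate. This is a simple
    stand-in for the paper's upper-bound P-value, not the same formula.
    Returns (statistic, dof); the statistic is referred to a chi^2
    distribution with (d' - 1) dof, d' = number of non-empty residues."""
    total_m, total_n = sum(m), sum(n)
    stat, nonempty = 0.0, 0
    for mi, ni in zip(m, n):
        pooled = (mi + ni) / (total_m + total_n)  # pooled ML frequency
        if pooled == 0.0:
            continue
        nonempty += 1
        stat += (mi - total_m * pooled) ** 2 / (total_m * pooled)
        stat += (ni - total_n * pooled) ** 2 / (total_n * pooled)
    return stat, nonempty - 1

# Identical columns are maximally similar; disjoint columns are not.
same = two_column_chi2([10, 20, 30], [10, 20, 30])
diff = two_column_chi2([30, 0], [0, 30])
```

Unlike formula (5), this statistic is not a guaranteed upper bound on the P-value; it only illustrates the shared idea of testing both columns against their pooled ML emission vector.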
We developed P-value estimates for the following null hypotheses (see Theory): (1) a given alignment column is generated by a given set of emission residue frequencies; and (2) two given alignment columns are generated by a single set of residue frequencies. We applied both types of estimates to the analysis of real multiple alignments, detecting cases of significant dissimilarity where the null hypotheses were confidently rejected.

Application

Comparison of an alignment column to a frequency vector

Using our method, we assessed the consistency between predictions of residue frequencies based on structural considerations, and the frequencies in multiple alignments of sequence homologs. Specifically, we prepared a dataset of 1695 PDB structures and made predictions of residue propensities at each position, based on local structural environment. In parallel, the sequences corresponding to these structures were used as queries for PSI-BLAST searches, and profiles of detected confident sequence homologs were constructed (see Methods). The effective residue frequencies at profile positions were compared to the structure-based predictions, and P-values for each position were estimated using PEAC. The histogram of produced P-values for all positions is shown in Fig. 3A . These P-values ranged widely between 10^-320 and 1.0, with the median being approximately 0.01. To analyze the cases of most pronounced discrepancy between our structure-based predictions and residue frequencies observed among sequence homologs, we chose ~1000 protein positions (0.3% of the whole dataset) that had the lowest P-values (P < 10^-100). These sites were located mainly in the secondary structure elements, most frequently at their ends, and corresponded to unusual local distortions of 3D conformations. We compared residue content in the corresponding subset of alignment columns to the whole dataset. As shown in Fig.
3B, alignment positions with low P-values demonstrated unusually high average frequencies of negatively charged residues, glutamate and aspartate. For a more detailed analysis, we considered the subset of 145 alignment positions with P < 10^-100 that contained highly conserved D or E, and inspected the corresponding D or E residues in tertiary structures. The vast majority of these residues were buried, as was indicated by the accessible surface area (ASA) of their carboxyl caps (data not shown). When we excluded glutamates and aspartates whose charge could be neutralized by contacts with positively charged arginine, lysine or histidine, the remaining portion of the set still comprised mostly buried residues (Fig. 4A ). These buried residues with acidic side chains did not form salt bridges with basic side chains, which is the most typical way of neutralizing a charge in a hydrophobic environment. Inspecting these positions manually, we found a less usual mode of charge neutralization, which involves contacts with other polar residues. A typical example of such a conformation is a motif classified in the I-site database [ 23 , 24 ] as an aspartate beta bulge, located in the middle of a beta strand in bovine rhodanese (thiosulfate:cyanide sulfurtransferase, PDB ID 1rhs, Fig. 4B ). The contact between the side chain oxygen of D32 and S34 distorts the regular beta-strand conformation. Our scheme of structure-based frequency prediction considered only the most common classes of local conformations that involve nearest-neighbor residues. This scheme could not account for less usual residue contacts and therefore failed to predict the high conservation of buried acidic residues at this position, which may have functional or structural importance [ 25 ]. In summary, this application of our method assists in detecting positions with discrepancies between the predicted and naturally occurring residue frequencies.
A detailed analysis of these positions may highlight shortcomings of a predicting scheme and suggest possible directions for improvement.

Comparison of two alignment columns

Statistical comparison of two MSA positions may be used in two applications. (i) In automatically produced alignments of sequences or structures, consideration of profiles of confident homologs helps to detect inconsistencies. According to our observations, these inconsistencies are caused mainly by alignment errors. (ii) In high-quality structure-based alignments, where structural equivalence of residues is confident, low P-values may indicate functional specificity of spatially aligned residues.

Detection of errors in sequence alignments

As an example of application (i), we evaluated our method by its ability to predict erroneous residue matches produced by an automatic sequence aligner (ClustalW [ 26 ]), as compared to the high-quality reference alignments in a manually curated database, BaliBase [ 27 ]. For each BaliBase alignment, we (1) extracted individual sequences and generated their ClustalW alignment; (2) for the top and the bottom sequences of the BaliBase alignment, produced MSAs of their homologs detected by PSI-BLAST; and (3) used the resulting alignment pair to estimate P-values for the sequence positions matched by ClustalW. We then sorted all ClustalW positional matches by ascending P-values and classified them as true or false predictions of ClustalW errors. For our purpose, ClustalW matches different from those in BaliBase were considered true positive predictions, whereas correct matches were considered false positives. Having the ranked list of true and false positive predictions, we generated a sensitivity curve (a plot of the number of true positives vs. the number of false positives, Fig. 5 ). The curve shows the degree of discrimination between erroneous and correct positional matches. Among the top 1000 predictions, the method generated 151 false positives.
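The ranking-and-counting step of this evaluation is easy to express in code. In the sketch below, the positional matches, P-values, and error labels are invented for illustration; a True label marks a match that differs from the reference alignment.

```python
def sensitivity_curve(ranked_is_error):
    """Cumulative (false positives, true positives) after each prediction,
    for matches sorted by ascending P-value. A 'true positive' is a
    predicted error that really disagrees with the reference alignment."""
    tp = fp = 0
    curve = []
    for is_error in ranked_is_error:
        if is_error:
            tp += 1
        else:
            fp += 1
        curve.append((fp, tp))
    return curve

# Hypothetical (column index, estimated P-value, is_real_error) triples.
matches = [
    (12, 1e-12, True), (47, 3e-7, True), (3, 2e-4, False),
    (61, 9e-3, True), (88, 0.35, False),
]
ranked = sorted(matches, key=lambda rec: rec[1])  # most confident first
curve = sensitivity_curve([rec[2] for rec in ranked])
```

Plotting the resulting (fp, tp) points yields the discrimination curve of Fig. 5; a method that concentrates true errors at low P-values climbs steeply before accumulating false positives.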
Up to ~17,000 true positives, the rate of false positive predictions grows slowly; beyond that point it increases considerably. This point approximately corresponds to P-values of ~10 -2 . Detection of evolutionarily unrelated positions in structure-based alignments We applied our method to detect profile dissimilarity between protein positions that are aligned by an automatic structure-based method. Specifically, we (1) collected pairs of protein domains that are structurally similar according to the DALI alignments [ 28 ] in the FSSP database [ 29 , 30 ], (2) for each of these proteins, produced an MSA of homologs detected by PSI-BLAST, and (3) used the resulting pairs of alignments to estimate P-values for the positions matched in FSSP. We used two sets of FSSP domain pairs, with different sequence identities between the domains: 25 ± 1% (the upper limit of the "twilight zone", which generally allows for homology detection and alignment construction based on sequences alone [ 31 , 32 ]), and 15 ± 1% (a lower range of identity, where structural alignment is more difficult to reproduce by sequence comparison). Figures 6A,6B show the histograms of P-values produced for pairs of profile positions that correspond to structurally aligned residues. The distributions of P-values were different for the two ranges of sequence identity. For identities around 15% (Fig. 6A ), the histogram had a maximum at approximately 0.5 × 10 -2 and the median was approximately 0.1 × 10 -2 . For identities around 25% (Fig. 6B ), the maximum was above 0.1 and the median was approximately 2.5 × 10 -2 , which shows much better consistency between structure-based and profile-based alignments. To analyze the most dissimilar profile positions, in each dataset we chose the 0.3% of position pairs with the lowest P-values (639 pairs for identities 15 ± 1%, and 760 pairs for identities 25 ± 1%).
Using the Insight II suite for molecular modeling and simulation (Accelrys), we performed a detailed manual analysis of the structural superposition for a portion of the corresponding structural alignments. We found that the majority of inspected positions were apparently misaligned. Approximately 80% of these residues were located within 5 positions of a gap introduced in the structural alignment. The vicinities of gaps generally correspond to less similar fragments of aligned structures, which are more difficult to superimpose and where alignment errors occur more frequently. We considered the residue contents of the MSA columns corresponding to these low-P-value position pairs, and compared these contents to the average residue frequencies in the whole MSA datasets. In the set corresponding to 15 ± 1% sequence identity, the most pronounced difference was a higher frequency of aspartate at the position pairs with low P-values (Fig. 6C ). In the set for 25 ± 1% identity, the low-P-value positions had higher frequencies of methionine, leucine and isoleucine (Fig. 6D ). We further concentrated on the aligned structural positions that showed unusual residue frequencies in the corresponding MSA columns. In the set corresponding to 15 ± 1% sequence identity, we considered positions with highly conserved aspartate, whereas in the set corresponding to 25 ± 1% sequence identity, we considered positions with a high combined frequency of methionine, leucine and isoleucine. To exclude apparently misaligned positions, we considered only those positions located more than 5 residues from gaps in the FSSP structural alignment. We selected and manually analyzed 16 such positional matches.
However, even among these selected matches, most of the discrepancies were still caused by apparent alignment errors: 10 cases corresponded to structural misalignments (usually due to a shift of one position), and 3 cases were caused by biased residue frequencies at profile positions, due to errors in PSI-BLAST alignments of sequence homologs. The remaining 3 position pairs did not involve apparent errors of either DALI or PSI-BLAST. These pairs might represent real differences in residue preferences at structurally equivalent positions. Figure 7 shows two examples of low P-values for protein regions that were superimposed by automatic structure aligners. The first example illustrates a typical case of apparent misalignment. The second example represents a case that is observed much more rarely among automatic structural alignments: the structure superposition is correct but inconsistent with sequence-based similarity. Such inconsistency might represent a change in the structural role of evolutionarily related positions. Human glyoxalase II (PDB ID 1qh5) [ 33 ] and bacterial metallo-beta-lactamase L1 (penicillinase, PDB ID 1sml) [ 34 ] belong to different families of the SCOP metallo-hydrolase/oxidoreductase superfamily. Although these proteins share only 16% sequence identity, their structures are highly similar (DALI Z-score 16.3). Both glyoxalase II and penicillinase bind two zinc atoms at similar locations. Fig. 7A shows a manual structural alignment of their fragments, beta hairpins that contain residues involved in Zn binding (D134 in 1qh5A and S185 in 1sml). These residues have similar orientations of their sidechains (shown in Fig. 7A , strand b ). In glyoxalase II, D134 binds zinc atoms directly [ 33 ], whereas in penicillinase, S185 is linked with zinc through a water molecule [ 34 ]. Figures 7B,7C and 7D show sequence alignments of these regions and the corresponding positional P-values based on automated structure comparisons by DALI (Fig. 7B ) and MAMMOTH [ 35 ] (Fig.
7C ), and on the comparison of sequence profiles (Fig. 7D ). Alignments of strands a illustrate superposition errors as the typical source of low P-values for automatic structural alignments. DALI (Fig. 7B ) constructed the correct alignment, which was the same as the manual structure alignment (Fig. 7A ) and the profile-based alignment (Fig. 7D ). This alignment corresponded to high positional P-values. MAMMOTH (Fig. 7C ) apparently misaligned strands a by introducing a one-position register shift, which resulted in low P-values for this region (Fig. 7C ). Structural alignments of strands b represent a rare example of an automatic alignment that corresponds to low positional P-values and yet is correct from the structural viewpoint. Both DALI and MAMMOTH produced an alignment consistent with the confident manual superposition. This structural superposition correctly aligns the zinc-binding residues (D134 in 1qh5A and S185 in 1smlA). However, this alignment corresponds to low positional P-values, indicating a significant difference between structure-based and sequence-based position similarity. The optimal profile-based alignment (Fig. 7D ) has a one-residue shift that dramatically increases positional P-values in this region, but is inconsistent with the topology of the beta strands and the zinc-binding sites. Such a shift might represent a change in the structural roles of related protein positions in remote homologs. Indeed, the zinc-binding role in penicillinase 1smlA is transferred from residue D184, which is related to the zinc-binding D134 of glyoxalase II (1qh5A [ 33 ]), to the neighboring S185 [ 34 ]. Thus, in the case of a high-quality structural alignment, low positional P-values may indicate evolutionary dissimilarity of spatially superimposed residues. Such cases, however, comprised a minor portion of automatic structure-based alignments and were overwhelmed by cases of misalignment.
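The comparison of two alignment columns can likewise be illustrated with a two-sample G-test on a 2 × k contingency table of residue counts. This is a generic stand-in, not the analytical P-value estimate developed in the paper, and the counts below are invented (loosely inspired by the conserved-aspartate vs. conserved-serine zinc-site example above).

```python
import math

def g_test_two_columns(counts_a, counts_b):
    """G statistic for the null hypothesis that two alignment columns are
    drawn from the same residue distribution (2 x k contingency table).
    Approximately chi-square with k-1 degrees of freedom under the null."""
    residues = set(counts_a) | set(counts_b)
    n_a, n_b = sum(counts_a.values()), sum(counts_b.values())
    total = n_a + n_b
    g = 0.0
    for r in residues:
        pooled = counts_a.get(r, 0) + counts_b.get(r, 0)
        for obs, n in ((counts_a.get(r, 0), n_a), (counts_b.get(r, 0), n_b)):
            if obs:
                # expected count under a shared residue distribution
                expected = n * pooled / total
                g += 2.0 * obs * math.log(obs / expected)
    return g

# Hypothetical columns (residue counts among aligned homologs):
zinc_site_a = {"D": 18, "E": 2}          # conserved aspartate
zinc_site_b = {"S": 15, "T": 4, "D": 1}  # conserved serine/threonine
print(round(g_test_two_columns(zinc_site_a, zinc_site_b), 1))
# Identical columns give G = 0:
print(g_test_two_columns(zinc_site_a, dict(zinc_site_a)))
```

Structurally equivalent positions with a large G (low P-value) are candidates either for misalignment or, more rarely, for genuine family-specific residue preferences, as in the strand b example.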
Prediction of structurally and functionally specific protein positions As an example of possible predictions of functionally specific regions, we considered positions in multiple alignments of sequence homologs for two structurally similar but evolutionarily divergent proteins: RNA 2'-O ribose methyltransferase from T. thermophilus [ 36 ] (PDB ID 1ipaA) and the hypothetical E. coli protein YbeA (PDB ID 1ns5A). These proteins possess the same α / β knot fold but belong to different SCOP families, SpoU-like RNA 2'-O ribose methyltransferase and YbeA-like, respectively. Using a manually curated structure-based alignment of the two proteins and MSAs of their homologs detected by PSI-BLAST, we considered structurally equivalent positions that were well aligned in space (C α distance less than 2 Å, Fig. 8 ) but showed significantly different residue contents in the MSAs ( P < 0.01). We found 24 such positions, the majority concentrated in the region of the dimer interface, which includes the 'knotted' C-terminal helix D (Fig. 8A ). In RNA 2'-O ribose methyltransferase, this region is suggested to be crucial for dimerization [ 36 ]. Positions detected in other regions mostly correspond to buried residues of the hydrophobic core. The discrepancies in residue content at these positions may reflect different structural solutions for sidechain packing within the core, as in the case of the buried residues in helix C (W225 in 1ipaA vs. C112 in 1ns5A, Fig. 8 ). Thus, the detected positional differences between the SpoU-like and YbeA-like families highlight the functional importance of the 'knotted' C-terminal helix and may suggest a family-specific mode of dimerization and dimer activity for the hypothetical protein YbeA. Discussion Here, we applied the concept of statistical significance to the comparison of single positions of multiple sequence alignments.
We formulated rigorous problems of P-value estimation for the comparison of an alignment column to an emission frequency vector, and for the comparison of two alignment columns. We suggested approximate analytical solutions to these problems and applied the resulting P-value estimates to the analysis of protein families. Comparison of an alignment column to an emission frequency vector Using our method, we compared residue conservation among sequence homologs with residue propensities predicted from the local structural environment. The cases of highest discrepancy between observed and predicted residue frequencies were enriched in positions containing conserved buried D/E residues. Many of these acidic residues do not form a salt bridge with basic side chains, but instead use contacts with polar residues to neutralize the negative charge in a hydrophobic environment. Surveys of such contacts formed by aspartate residues were previously performed by Singh and Thornton [ 37 ] and by Fiser et al. [ 25 ]. The observed residue conservation may indicate the importance of such motifs for protein structure or function. The structure-based statistic for the prediction of residue propensities used only common classes of structural environments and considered only the closest neighboring residues in the polypeptide chain. Hence this statistic was unable to predict the observed conservation of buried glutamate and aspartate. Detection of such contradictions between predicted residue propensities and actual residue frequencies in MSAs has three main implications. First, analysis of these contradictions can assist evaluation and further optimization of the predicting schemes, including knowledge-based potentials [ 38 - 41 ] and environment-specific substitution tables [ 42 , 43 ]. Second, patterns of atypical relations between residue conservation and structural conformation may point to local motifs of potential structural or functional significance.
Third, such atypical patterns, which are unlikely to coincide in two proteins by chance, may serve as signatures for homology detection. Comparison of two alignment columns We used our estimates to assess similarity between MSA positions. First, we evaluated our method by its ability to detect erroneous residue matches produced by an automatic sequence aligner, ClustalW [ 26 ]. The evaluated automatic alignments were compared to the high-quality reference alignments in a manually curated database, BaliBase [ 27 ]. Second, we estimated P-values for MSA positions corresponding to structurally aligned residues in the FSSP database of automatic structure-based alignments [ 29 , 30 ]. We found that among the detected cases of highest dissimilarity, the vast majority were caused by local structural misalignment. Correction of such alignment errors typically produced an increase in P-values (compare the results for strand a in Fig. 7C vs. Fig. 7B ). These results suggest the potential value of the method for detecting misaligned regions in automatic alignments. In our set of FSSP structural alignments, correctly aligned sites of low P-value were very rare. Such sites correspond to structurally equivalent positions that have different residue content in two related families. To illustrate the detection of such family-specific protein positions, we used a high-quality manually curated structural alignment of the distantly related SpoU-like and YbeA-like families of the same α / β knot fold (Fig. 8 ). In addition to specific preferences for sidechain packing in the hydrophobic core, the statistically significant positional differences emphasized the importance of the 'knotted' helix (Fig. 8A ), which is essential for dimer formation [ 36 ]. These differences may suggest a family-specific mode of dimerization and dimer activity for the hypothetical protein YbeA.
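The sensitivity-curve evaluation described above (ranking positional matches by ascending P-value and counting cumulative true vs. false predictions of alignment errors) can be sketched as follows; the P-values and labels are fabricated for illustration.

```python
def sensitivity_curve(predictions):
    """Given (p_value, is_true_error) pairs, sort by ascending P-value and
    return cumulative (false_positives, true_positives) points, i.e. the
    curve of true vs. false predictions of alignment errors."""
    curve, tp, fp = [], 0, 0
    for _, is_error in sorted(predictions, key=lambda x: x[0]):
        if is_error:
            tp += 1   # predicted error that really differs from the reference
        else:
            fp += 1   # low P-value at a correctly aligned position
        curve.append((fp, tp))
    return curve

# Toy ranked list: low P-values mostly correspond to real aligner errors.
toy = [(1e-9, True), (1e-7, True), (1e-5, False), (1e-3, True), (0.5, False)]
print(sensitivity_curve(toy))  # → [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
```

Plotting these points reproduces the kind of discrimination curve shown in Fig. 5.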
Conclusions We proposed P-value estimates to assess statistical significance for (1) the comparison of a single position in a multiple alignment to a set of emission residue frequencies; and (2) the comparison of two alignment positions. Computational implementation of these estimates demonstrated potential value for several important tasks in sequence analysis: (i) evaluation and optimization of methods predicting propensities for residue occurrence at protein positions, such as protocols for in silico sequence design; (ii) detection of potentially misaligned regions in automatically produced alignments and their further refinement; and (iii) detection of sites that determine functional or structural specificity in two related families. Methods Calculation of effective residue counts in multiple alignments Effective residue counts at alignment positions were calculated based on the PSIC [ 44 ] method. We calculated 21 counts n_eff^PSIC for each symbol in the alignment column (including gaps, which are considered the 21st symbol), and then applied the transformation of [ 16 ], obtaining n_eff, the number of randomly aligned sequences with the average number of residue types per position equal to n_eff^PSIC (for more details, see [ 16 ]). Profiles corresponding to fragments of protein structures We applied our method to compare structure-based predictions of residue probabilities to the actual residue frequencies observed among sequence homologs. For such a comparison, we produced sequence profiles that correspond to fragments of known 3D structures. Briefly, we used a non-redundant set of structures from the PDB (minimum 40 residues long, X-ray structure resolution no worse than 2.5 Å, NMR structures excluded, no pairs with sequence identity above 20%). SCOP [ 45 , 46 ] entries classified as membrane proteins or as small proteins enriched with disulfide bonds or metal ions were excluded. The final dataset contained 1695 SCOP domains.
Starting from the sequence of each domain, PSI-BLAST searches were performed for 5 iterations over the non-redundant NCBI database, with a conservative E-value cutoff of 10 -5 . In the resulting multiple alignments of detected homologs, we purged sequences whose identity to the query was less than 25%, so that only confident sequence homologs were used for profile construction. We split each query sequence into fragments of fixed length F . For each fragment, we extracted the corresponding segment of the multiple alignment and removed the sequences with deletions (gaps) in this fragment. For a query of length L we produced L - F +1 sub-alignments and derived effective residue counts as described above. In this work, we used the library of profile fragments of length F = 6, which provided accurate results when applied to the prediction of local structural environment from a sequence profile [ 47 ]. Prediction of expected residue frequencies from local structural environment The equilibrium frequency of an amino acid at a position in a protein structure reflects the energetic fitness of the sidechain in the local structural environment [ 38 , 39 ]. To estimate these frequencies, we employed a scheme similar to those used to derive statistical or knowledge-based potentials [ 38 - 41 ] and environment-specific substitution tables [ 42 , 43 ]. In brief, we divided structural positions into discrete classes based on local structural environment, and analyzed the residue contents of each class of positions in known protein structures. As the characteristics of local structural environment, we used the backbone conformations ( φ and ψ dihedral angles) at the given position and the preceding position, and the solvent accessibility of the sidechain at the given position.
For a given position we used the partition of the Ramachandran plot into 15 ( φ , ψ ) classes proposed by Shortle [ 39 ], combined with 3 ranges of relative sidechain solvent accessibility as calculated by the NACCESS package [ 48 ]. For the position preceding the given one, we used a less detailed partition of the Ramachandran plot into 6 classes. For each of the resulting 15 × 3 × 6 = 270 classes, we analyzed the set of PDB structures described in the previous section and derived the probabilities of residue types occurring in the class. These probabilities were used as frequency predictions at the structural positions belonging to the class. We assessed the consistency of these predictions with residue frequencies in multiple alignments of sequence homologs. Pairs of profiles corresponding to pairs of similar structures As the second application, we estimated the statistical significance of similarity between pairs of columns in multiple alignments. Namely, we used pairs of structurally similar proteins (according to the FSSP database [ 29 , 30 ]), produced multiple alignments of their sequence homologs detected by PSI-BLAST, and assessed the consistency between structurally equivalent positions of these multiple alignments. We chose protein pairs with relatively low sequence identities, where detection of similarity between sequences is not straightforward. We focused on two identity ranges: 25 ± 1% (at the upper bound of the twilight zone) and a lower range of 15 ± 1%. From each FSSP family, we extracted the parent sequence and all sequences with significant structural similarity to the parent (Z-score greater than 5.0) and sequence identity to the parent within a given range. In total, we found 494 and 1406 sequence pairs with identities 25 ± 1% and 15 ± 1%, respectively. These numbers were reduced by purging symmetric pairs and by manual inspection of the remaining domains for the presence of repeats and low-complexity regions.
For further analysis, we used 251 sequence pairs with identity 25 ± 1% and 340 pairs with identity 15 ± 1%, each pair representing a unique FSSP family. For each sequence, we ran 5 iterations of PSI-BLAST 2.2.1 against the NCBI nr database (E-value threshold for inclusion in the next iteration 0.005) and obtained multiple alignments of detected homologs. We then applied a procedure of alignment processing similar to that implemented in PSI-BLAST [ 1 ]. In particular, only one copy was retained of any rows that were >97% identical to one another, and the columns with gaps in the first (query) sequence were purged. The resulting multiple alignments were used to calculate P-values for confident structure-based position matches (positions represented as capital letters in FSSP alignments). Calculation of solvent accessibility Solvent accessible surface area (ASA) for the residues of interest was determined using the NACCESS package [ 48 ], applied to PDB structures with heteroatoms excluded. To determine the ASA of the carboxyl groups of aspartate and glutamate, the sum of the ASA of the atoms of these groups was calculated. Residue contacts were determined using the default settings of NACCESS. Authors' contributions RS carried out the theoretical considerations, computational experiments and analysis of the results, and drafted the manuscript. NG conceived of the study, and participated in its design and coordination. Both authors read and approved the final manuscript. Supplementary Material Additional File 1 "P-value for multivariate Gaussian distribution". Additional File 2 "Upper estimate of P-value for similarity between two alignment columns".
Global patterns of healthy life expectancy in the year 2002

Background Healthy life expectancy – sometimes called health-adjusted life expectancy (HALE) – is a form of health expectancy indicator that extends measures of life expectancy to account for the distribution of health states in the population. The World Health Organization reports on healthy life expectancy for 192 WHO Member States. This paper describes variation in average levels of population health across these countries and by sex for the year 2002. Methods Mortality was analysed for 192 countries and disability from 135 causes was assessed for 17 regions of the world. Health surveys in 61 countries were analyzed using new methods to improve the comparability of self-report data. Results Healthy life expectancy at birth ranged from 40 years for males in Africa to over 70 years for females in developed countries in 2002. The equivalent "lost" healthy years ranged from 15% of total life expectancy at birth in Africa to 8–9% in developed countries. Conclusion People living in poor countries not only face lower life expectancies than those in richer countries but also live a higher proportion of their lives in poor health.

Background In the World Health Report 2000 , the World Health Organization (WHO) reported for the first time on the average levels of population health for its 191 member countries using a summary measure that combines information on mortality and morbidity [ 1 , 2 ]. Because substantial resources are devoted to reducing the incidence of conditions that cause ill-health but not death and to reducing their impact on people's lives, it is important to capture both fatal and non-fatal health outcomes in any such measure of population health.
Healthy life expectancy – sometimes called health-adjusted life expectancy (HALE) – is a form of health expectancy indicator that extends measures of life expectancy to represent the average health in a population in terms of equivalent years of full health, taking into account the distribution of health states [ 3 ]. HALE has been calculated previously for Canada and Australia using population survey data on disability [ 4 - 6 ]. The United States has adopted a public health policy goal to increase the expected years of healthy life in the population and has used a type of healthy life expectancy to measure progress towards this goal [ 7 , 8 ]. In calculating HALE for 191 WHO Member States for 1999, we carried out an analysis of 62 representative population health surveys which revealed substantial problems with comparability of self-report health status and disability data [ 2 ]. We used health state prevalence estimates from the Global Burden of Disease 2000 project to adjust for biases in self-report data; the independent information on levels of population health provided by the health surveys was thus quite limited. It has long been known that health expectancy estimates based on self-reported health status information are not comparable across countries due to differences in survey instruments and cultural differences in reporting of health [ 9 , 10 ]. The International Network on Health Expectancy (REVES) and international agencies have devoted substantial efforts to try to standardize questionnaire instruments and methods [ 11 - 13 ]. Though some cross-national surveys using a common self-report instrument have become available [ 14 ], standardized instruments alone do not solve comparability problems [ 15 ]. 
These relate more fundamentally to unmeasured differences in expectations and norms for health, so that the meaning different populations attach to the labels used for response categories in self-reported questions, such as mild, moderate or severe, can vary greatly. Given these problems, WHO undertook a Multi-Country Survey Study on Health and Responsiveness (MCSS) in 2000 and 2001 in collaboration with Member States, using a standardized health status survey instrument together with new statistical methods for adjusting biases in self-reported health [ 16 - 19 ]. These new data, together with comprehensive analyses of epidemiological data for all regions of the world and new life tables for all WHO Member States, have enabled us to calculate HALE for 192 countries for 2002 in a way that improves comparability across countries. These results are reported in the World Health Report 2004 [ 20 ]. A previous paper examined variations in HALE among OECD countries [ 21 ]. This paper examines the implications of the results for our understanding of global patterns of health. Methods Calculation of HALE requires three inputs: life tables, prevalences of various health states, and valuations of time spent in these health states compared to full health. The WHO methods used to calculate HALE have been developed to maximise comparability across populations. These methods are described in more detail elsewhere [ 22 , 23 ] and have been reviewed by an independent scientific peer review group [ 24 ]. A set of spreadsheet tools is also under development to provide full access to the inputs and calculations for country-specific HALE for 2002; these tools will enable users to modify inputs and to carry out sensitivity analyses for various factors. This section provides an overview of these methods and data sources, and a more detailed description of adjustments for institutionalized populations.
Life table methods Procedures used to estimate the 2002 life tables differed across Member States depending on the availability of data to assess child and adult mortality. Complete or incomplete vital registration data together with sample registration systems cover 72% of global mortality. Survey data and indirect demographic techniques provide information on levels of child and adult mortality for the remaining 28% of estimated global mortality. Separate estimates were used for the numbers and distributions of deaths due to HIV/AIDS in countries with a substantial HIV epidemic [ 25 ]. A full overview of the methods used to construct the life tables is given elsewhere [ 20 , 26 , 27 ]. Following an initial scientific review [ 24 ], significant improvements were implemented in both the data and the methods used to calculate life expectancies for WHO Member States. Recent surveys and censuses provided substantially more information on levels of child and adult mortality in Member States lacking complete death registration. This has resulted in changes in point estimates of life expectancies and reductions in uncertainty ranges for some Member States compared to previously published estimates. Health state prevalence data Because comparable health state prevalence data are not yet available for all countries, two sources of information were used: the Global Burden of Disease (GBD) study and the MCSS. First, data from the GBD study were used to estimate severity-adjusted prevalences by age and sex for all 192 countries [ 23 ]. Secondly, data from the MCSS were used to make independent estimates of severity-adjusted prevalences by age and sex for 55 countries. Finally, 'posterior' prevalences for all countries were calculated based on the GBD-based prevalences and the survey prevalences as described below. This process is summarized in Figure 1 . Figure 1 Estimation of severity-adjusted health state prevalences for calculation of HALE.
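The way these inputs combine follows the general logic of the Sullivan method for health expectancies: person-years lived in each age interval of the life table are down-weighted by the severity-adjusted prevalence of ill health in that interval. The sketch below is a simplified illustration with invented numbers, not WHO's actual HALE calculation.

```python
def sullivan_hale(person_years, survivors_at_birth, swp):
    """Healthy life expectancy at birth via the Sullivan method.
    person_years: nLx column of an abridged life table (one value per age interval)
    survivors_at_birth: the life-table radix l0
    swp: severity-weighted prevalence of ill health per age interval
    Each interval's person-years are reduced to equivalent healthy years by
    the severity-weighted fraction of time lived in less-than-full health."""
    healthy_years = sum(nLx * (1.0 - p) for nLx, p in zip(person_years, swp))
    return healthy_years / survivors_at_birth

# Invented three-interval life table (radix 100,000):
nLx = [1_500_000, 3_000_000, 2_500_000]  # person-years lived per interval
prev = [0.05, 0.10, 0.25]                # severity-adjusted prevalence by interval
print(round(sullivan_hale(nLx, 100_000, prev), 2))  # → 60.0
```

With all prevalences set to zero, the same formula returns ordinary life expectancy at birth, which makes the "equivalent lost healthy years" the difference between the two quantities.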
The GBD revisions draw on a wide range of data sources to develop internally consistent estimates of incidence, prevalence, duration and years lived with disability (YLD), for 135 major causes, for 14 sub-regions of the world [ 20 ]. Prevalence-based YLD rates were calculated, and adjusted for co-morbidity, giving direct estimates of the severity-weighted prevalence of health states attributable to each cause [ 23 ]. Tables 1 and 2 summarize the major causes of YLD for developed and developing countries in the year 2002 as published in the World Health Report 2004 [ 20 ]. Neuropsychiatric conditions accounted for 42% of YLD in developed countries and nearly 30% of YLD in developing countries. Unipolar depression is the leading contributor to this burden. Other major causes of YLD include vision and hearing loss (9% in developed countries and 13% in developing countries) and injuries (nearly 12% in developing countries and 7% in developed countries). More detailed estimates of YLD by age group and cause are available for 14 subregions of the WHO Regions, and for countries grouped into high, medium and low income categories, on the WHO website.

Table 1. Leading causes of disability, years lived with disability (YLD) by cause as percent of YLD from all causes, developed countries (a), 2002

  Cause group                                          % of total YLD   Female to male ratio
  I. Communicable, maternal, perinatal and
     nutritional conditions                                  6.6              1.47
     Infectious and parasitic diseases                       2.4              0.94
     Maternal conditions                                     0.9              -
     Perinatal conditions                                    0.8              0.95
     Nutritional deficiencies                                2.1              1.50
  II. Noncommunicable diseases                              86.2              1.12
     Malignant neoplasms                                     2.4              1.54
     Diabetes mellitus                                       2.3              1.10
     Neuropsychiatric conditions                            41.9              1.10
        Unipolar depressive disorders                       15.0              1.69
        Bipolar disorder                                     2.2              0.99
        Schizophrenia                                        2.3              0.94
        Alcohol use disorders                                6.8              0.24
        Alzheimer and other dementias                        4.2              1.99
        Drug use disorders                                   1.7              0.34
        Other neuropsychiatric disorders                     9.7              1.50
     Sense organ diseases                                    8.6              1.16
        Vision disorders (b)                                 3.0              1.44
        Hearing loss, adult onset                            5.7              1.04
     Cardiovascular diseases                                 6.7              0.86
     Respiratory diseases                                    6.9              0.96
     Musculoskeletal diseases                                7.6              1.53
     Other non-communicable diseases                         9.7              1.09
  III. Injuries                                              7.2              0.45
     Unintentional injuries                                  5.8              0.48
     Intentional injuries                                    1.4              0.31

  (a) Developed countries includes European countries, former Soviet countries, Canada, USA, Cuba, Japan, Australia, New Zealand, Brunei Darussalam, Singapore.
  (b) Vision disorders includes vision loss due to glaucoma, cataracts, macular degeneration and other age-related vision loss.

Table 2. Leading causes of disability, years lived with disability (YLD) by cause as percent of YLD from all causes, developing countries (a), 2002

  Cause group                                          % of total YLD   Female to male ratio
  I. Communicable, maternal, perinatal and
     nutritional conditions                                 23.4              1.36
     Infectious and parasitic diseases                      10.9              0.95
     Maternal conditions                                     3.8              -
     Perinatal conditions                                    3.1              0.98
     Nutritional deficiencies                                4.4              1.08
  II. Noncommunicable diseases                              64.9              1.03
     Malignant neoplasms                                     0.3              2.02
     Diabetes mellitus                                       1.1              1.11
     Neuropsychiatric conditions                            29.4              1.09
        Unipolar depressive disorders                       11.1              1.47
        Bipolar disorder                                     2.5              0.98
        Schizophrenia                                        2.9              0.97
        Alcohol use disorders                                2.5              0.15
        Alzheimer and other dementias                        1.0              1.40
        Drug use disorders                                   0.8              0.28
        Other neuropsychiatric disorders                     8.6              1.37
     Sense organ diseases                                   13.0              1.13
        Vision disorders (b)                                 8.7              1.26
        Hearing loss, adult onset                            4.3              0.92
     Cardiovascular diseases                                 3.3              0.82
     Respiratory diseases                                    4.2              0.72
     Musculoskeletal diseases                                4.6              1.18
     Other non-communicable diseases                         8.9              0.90
  III. Injuries                                             11.7              0.66
     Unintentional injuries                                  9.7              0.75
     Intentional injuries                                    2.0              0.32

  (a) Developing countries includes all countries except for European countries, former Soviet countries, Canada, USA, Cuba, Japan, Australia, New Zealand, Brunei Darussalam, Singapore.
  (b) Vision disorders includes vision loss due to glaucoma, cataracts, macular degeneration and other age-related vision loss.

Summation of prevalence YLD across all causes would result in overestimation of the total average severity-weighted health state prevalence because of comorbidity between conditions. In earlier calculations of HALE, adjustments were made for independent comorbidity, assuming that the probability of having two (comorbid) conditions would equal the product of the probabilities for having each of the diseases. For the World Health Report 2003, further work was undertaken to take dependent comorbidity into account more rigorously [ 28 ]. For many diseases, the probability of having a pair of diseases is greater than the product of the probabilities for each disease, reflecting common causal pathways (for example, common risk factors causing both diabetes and heart disease) and also that one disease may increase the risk of another. Data from five large national health surveys were analysed by age and sex to estimate "dependent comorbidity factors" for pairs of conditions. There was surprising consistency in these factors across the five surveys and the results were used for all Member States to adjust for dependent comorbidity in the summation of prevalence YLD across all causes [ 28 ]. MCSS survey estimates The MCSS was carried out in 2000–2001. A total of 71 surveys were completed in 61 countries using face-to-face, postal and telephone interviewing modes [ 16 ]. Thirty-five of the surveys were carried out in 31 Western and Eastern European countries, 27 surveys in 22 developing countries, and the remainder in Canada, USA, Australia and New Zealand.
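The independent-comorbidity assumption used in the earlier HALE calculations (the probability of having two conditions taken as the product of the individual disease probabilities) implies that cause-specific prevalences cannot simply be summed. A minimal sketch with invented prevalences, showing only the "at least one condition" probability rather than WHO's full severity-weighted adjustment:

```python
def combined_prevalence_independent(prevalences):
    """Probability of having at least one condition, assuming the conditions
    occur independently: 1 - product(1 - p_i). This is always <= sum(p_i),
    avoiding the overestimate that plain summation across causes produces."""
    joint_free = 1.0
    for p in prevalences:
        joint_free *= (1.0 - p)  # probability of being free of every condition
    return 1.0 - joint_free

p = [0.10, 0.05, 0.02]  # invented cause-specific prevalences
print(round(sum(p), 3))                            # naive sum: 0.17
print(round(combined_prevalence_independent(p), 4))  # → 0.1621
```

The dependent-comorbidity factors described above go one step further, inflating the joint probabilities for disease pairs that share risk factors beyond what independence predicts.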
To overcome the problem of comparability of self-report health data, the WHO survey instrument used anchoring vignettes to calibrate self-reported health for the 6 core health domains (mobility, self care, pain, affect, work and household activities, cognition) and vision. Anchoring vignettes are short descriptions that mark fixed levels of ability (e.g. people with different levels of mobility, such as a paraplegic person or an athlete who runs 4 km each day) and allow adjustment for individual variations in the use of response categories to describe the same health state [ 17 , 18 , 29 ]. We included in the analysis only those surveys that met explicit criteria reflecting the quality of survey implementation, with specific reference to the health vignettes. The USA postal survey was also excluded because respondents were presented the vignettes in order of severity rather than in randomized order, as in all other surveys. Sixty-two surveys met these criteria and were included in the model.

Health state prevalences from the WHO Multi-country Household Survey Study were assumed to relate to calendar year 2000. Trends in prevalence YLD between 2000 and 2002 were calculated for each Member State using the GBD estimates. Aggregated across all causes, these trends were generally small. In calculating HALE for 2002, the 2000 survey results were adjusted for likely change over two years using these estimated trends.

Health state valuations
The health state valuations used in HALE calculations represent average population assessments of the overall health levels associated with different states. They range from 0, representing a state of good or ideal health, to 1, representing states equivalent to being dead. These weights do not measure average levels of well-being or quality of life associated with health states, or imply any societal value of a person in a disability or health state.
Rather they characterize health decrements on a continuum starting from the societal ideal of good health. Household surveys including a valuation module were conducted in fourteen countries: China, Colombia, Egypt, Georgia, India, Iran, Lebanon, Indonesia, Mexico, Nigeria, Singapore, Slovakia, Syria and Turkey. Data on nearly 500,000 health state valuations from over 46,000 respondents were used to develop a mapping function that captured the average relationships between levels on the six core health domains and overall health state valuations. This average global valuation function was then applied to the vignette-adjusted health domain levels for each survey respondent in order to estimate health state valuations for the calculation of HALE [ 30 , 31 ]. The MCSS survey samples did not include older people resident in nursing homes or other health institutions. Because these people will generally have worse health than those resident in households, adjustments were made to account for the older population who were resident in health institutions. Fifty-four national estimates of the proportion of the population aged 65 years and over who are resident in nursing homes were collected for 36 countries from national statistical publications and international statistical databases of OECD and the World Bank. These were used to estimate the percentage of the population aged 60+ years institutionalized in MCSS countries. This ranged from 3 to 5% in most OECD countries, was highest at around 7% in the Netherlands and Sweden, was substantially lower at around 0.3 to 1% in Eastern European countries, and was close to zero for developing countries. As data on the severity distribution of health states in institutionalized populations were not available, an average disability weight of 0.5 (corresponding to a health state with mobility and self-care limitations and where the person cannot carry out usual daily activities) was assumed for the institutionalized population. 
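The adjustment for the institutionalized older population described above amounts to a population-weighted average; a minimal sketch (the household prevalence figure is invented for illustration):

```python
def population_weighted_prevalence(household_prev, prop_institutionalized,
                                   institutional_weight=0.5):
    """Severity-weighted prevalence for an older age group, combining the
    household-survey estimate with the institutionalized fraction, which
    is assigned the fixed average disability weight of 0.5 used in the text."""
    return ((1.0 - prop_institutionalized) * household_prev
            + prop_institutionalized * institutional_weight)

# A country with 4% of the 60+ population in institutions and a
# hypothetical household severity-weighted prevalence of 0.25:
adjusted = population_weighted_prevalence(0.25, 0.04)
```

Varying the institutional weight over 0.4 to 0.6 shifts the adjusted prevalence by only ±0.004 in this example, consistent with the reported insensitivity of HALE to this choice within a plausible range.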
Sensitivity analyses showed that the resulting HALE estimates were not sensitive to the choice of this disability weight within a plausible range of variation.

Calculation of posterior severity-weighted prevalences
Because there is potential measurement error in severity-weighted health state prevalences derived from both household surveys and epidemiological estimates, posterior estimates of prevalence for the survey countries were calculated as weighted averages of the GBD-based prevalences and the survey prevalences [ 23 ]. The relationship between the GBD-based prevalences and the posterior prevalences was estimated for the survey countries using ordinary least squares regression, and the results were used to adjust the GBD 2000-based prevalences for the non-survey countries. This ensured that the use of the survey data did not introduce a prevalence differential between survey and non-survey countries, and allowed the survey evidence to be indirectly taken into account in making the best possible prevalence estimates for non-survey countries.

Calculation of HALE and uncertainty intervals
HALE was calculated using Sullivan's method [ 32 ] based on abridged country life tables and the posterior severity-weighted prevalences. Uncertainty ranges for HALE estimates were estimated using Monte Carlo simulation techniques to quantify the uncertainty in life expectancy projections, in GBD estimates for prevalence and disability severity, and in the survey-based prevalence estimates [ 23 ]. Apart from ongoing revisions to the GBD analyses of epidemiological information on diseases and injuries, the implementation of improved methods for dealing with comorbidity has resulted in a reduction in the estimated proportion of healthy years of life lost at older ages compared to previous years.
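Sullivan's method itself is a short calculation once the abridged life table and severity-weighted prevalences are in hand. A minimal sketch with an invented three-group life table (not real country data):

```python
def sullivan_hale(l_x, n_L_x, weighted_prev):
    """Healthy life expectancy at the starting age by Sullivan's method.

    l_x           -- survivors at the start of each age group (l_x[0] = radix)
    n_L_x         -- person-years lived in each age group
    weighted_prev -- severity-weighted prevalence of ill health per age group

    HALE(x0) = sum_i n_L_i * (1 - pi_i) / l(x0)
    """
    healthy = sum(L * (1.0 - pi) for L, pi in zip(n_L_x, weighted_prev))
    return healthy / l_x[0]

# Toy abridged life table (radix 100,000) and prevalences:
l_x = [100_000, 90_000, 50_000]
n_L_x = [5_500_000, 1_400_000, 300_000]   # person-years per age group
prev = [0.08, 0.20, 0.35]                 # severity-weighted prevalence
hale = sullivan_hale(l_x, n_L_x, prev)    # -> 63.75 years
le = sum(n_L_x) / l_x[0]                  # -> 72.0 years of total life expectancy
```

In the analysis described above, uncertainty ranges come from repeating this calculation over Monte Carlo draws of the life table and prevalence inputs.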
Improvements in methods used for the analysis of the MCSS survey data and in the adjustment of total YLD rates for dependent comorbidity have resulted in improved estimates of the severity-adjusted prevalence of health states and a reduction in the uncertainty associated with these estimates. For these reasons, HALE estimates for 2002 are not directly comparable with those for 2000 and 2001 published in the World Health Report 2002 [ 33 ].

Results
The survey data identified considerable variability in the use of question response categories to describe the same level on a particular health domain, and a systematic tendency for people in countries with higher income per capita to use more severe response categories in rating a given anchoring vignette. Thus the self-reported prevalence of problems for a health domain in a country may be quite different from the prevalence of problems after adjustment for these response category cut-point shifts. In France, for example, 70% of survey respondents gave self-report responses of mild or greater problems on the question for the affect domain (anxiety and depression), whereas the standardized (vignette-adjusted) prevalence at all levels higher than "none" was only 33%. In contrast, for some countries such as Egypt, the self-report and standardized prevalences were almost the same, at around 38%.

Figure 2 shows average HALE at birth for 192 countries, plotted against income per capita (Gross Domestic Product measured in international dollars using purchasing power parity conversion rates) on a logarithmic scale. The error bars show estimated 95% uncertainty ranges for HALE at birth. Country-specific estimates of male and female HALE and total life expectancy at birth and at age 60, together with 95% uncertainty ranges, are provided in the World Health Report 2004 [ 20 ].

Figure 2 Healthy life expectancy at birth versus Gross Domestic Product (GDP) per capita, 192 WHO Member States.
Healthy life expectancy at birth in 2002, together with 95% uncertainty ranges, versus Gross Domestic Product (GDP) per capita for 2001 in international dollars (purchasing power parity conversion), 192 WHO Member States.

Japanese women led the world with an estimated average HALE at birth of 77.7 years in the year 2002, 7.5 years lower than total life expectancy at birth. HALE at birth for Japanese males was 72.3 years, 5.4 years lower than for females. This male-female gap in HALE was narrower than that for total life expectancy at birth (6.9 years). After Japan, in second to seventh places, were San Marino, Sweden, Switzerland, Monaco, Iceland and Italy, with HALE at birth (males and females combined) in the range 72.7 to 73.4 years, followed by Australia and a number of other industrialized countries of Western Europe. There was a considerable range of uncertainty in the ranks for countries other than Japan, with typical 95% uncertainty ranges for HALE of around 0.5 to 2 years for developed countries. Keeping these uncertainty intervals in mind, Canada was in 11th place (72.0 years) and the USA in 29th place (69.3 years). Other countries with reasonably high HALE in the Americas included Argentina (65.3 years), Chile (67.3 years), Costa Rica (67.2 years), Cuba (68.3 years), Mexico (65.4 years), Panama (66.2 years) and Uruguay (66.2 years). Brazil was split, with a high HALE in its southern half and a lower one in the north; the national average was a relatively low 59.8 years, at 57.2 years for males and 62.4 years for females.

Overall, global HALE at birth in 2002 for males and females combined was 57.7 years, 7.5 years lower than total life expectancy at birth (Figure 3 ). In other words, poor health resulted in a loss of nearly 8 years of healthy life, on average globally. Global HALE at birth for females was only 2.7 years greater than that for males. In comparison, female life expectancy at birth was 4.2 years higher than that for males.
Global HALE at age 60 was 12.7 years for males and 14.7 years for females; 4.3 years lower than total life expectancy at age 60 for males and 5.3 years lower for females.

Figure 3 Life expectancy (LE), healthy life expectancy (HALE), and lost healthy years as per cent of total LE (LHE%), at birth and at age 60, by sex and region, 2002.

HALE at birth ranged from a low of 40 years for African males to over 70 years for females in the low mortality regions of Western Europe, North America and the Pacific (Japan, Australia, New Zealand). This reflects an almost 2-fold difference in HALE between major regional populations of the world (Figure 3 ). The equivalent "lost" healthy years (total life expectancy minus HALE) ranged from 15% of total life expectancy at birth in Africa to 8–9% in the European region and the Western Pacific region. The sex gap was highest in Eastern Europe and lowest in North Africa and the Middle East. There was a similar, almost two-fold variation in HALE at age 60 across the regions of the world, ranging from 19 years for women in low-mortality countries to around 10 years for men and women in sub-Saharan Africa. Males lost more healthy years of life at age 60 than females only in China and South East Asia, where life expectancy gaps at age 60 were also low compared with the more than 5-year gap in Japan (due to high female life expectancy) and in Eastern Europe (due to high adult male mortality rates). There was an enormous difference between the world's highest HALE at birth of 77.7 years (for females in Japan) and the lowest of 27.2 years (for males in Sierra Leone) in 2002. In Sierra Leone, people both lived shorter lives (male life expectancy at birth was estimated at 32.4 years) and had higher levels of disease and disability at all ages. The probability of a male child dying before his 5th birthday was 33% in Sierra Leone, compared with less than 0.5% in Japan.
The low levels of HALE in sub-Saharan Africa reflect the additional impact of the HIV-AIDS epidemic, as well as war and conflict in some countries such as Sierra Leone. AIDS is now the leading cause of death in Sub-Saharan Africa, far surpassing the traditional deadly diseases of malaria, tuberculosis, pneumonia and diarrhoeal disease. AIDS killed 2.1 million Africans in 2002, versus 300,000 AIDS deaths in 1990 [ 20 , 34 ]. In Russia, HALE at birth was 64.1 for females, 4 years below the European average, but just 52.8 years for males, 9.4 years below the European average. This was one of the widest sex gaps in the world and reflects the sharp increase in adult male mortality in the 1990s. The most common explanation is the high incidence of male alcohol abuse, which led to high rates of accidents, violence and cardiovascular disease. From 1991 to 1994, the risk of premature death increased by 50% for Russian males [ 35 ]. Between 1994 and 1998, life expectancy improved for males, but has declined significantly again in the last 3 years [ 35 - 37 ]. Overall, HALE at birth for males in Russia and other former Soviet countries was 16 years lower than the average for males in Western Europe; the difference for females was lower at 9 years. Other Eastern European countries such as Ukraine and Belarus also had large gaps between male and female HALE at birth, as did Colombia, where male HALE at birth was nearly 9 years lower than that for females. At the other extreme there were 12 countries where female HALE at birth was lower than male, and an additional 20 countries where it was less than 1 year higher. These included African countries greatly affected by HIV/AIDS such as Botswana, Kenya, Tanzania and Zimbabwe, but also Eastern Mediterranean countries such as Bahrain, Kuwait, Qatar and the United Arab Emirates, and Asian countries such as Afghanistan, India, Pakistan and Bangladesh. Also included were Nigeria and Haiti. 
In most of these countries, female life expectancy was slightly higher than male; only in Qatar, Maldives and Bangladesh was female life expectancy at birth lower than that for males. However, higher levels of disability and poor health reduced HALE for females to a greater extent than for males in these countries. At a broader regional level, sub-Saharan Africa, the Eastern Mediterranean region, and the South East Asian region all had female HALE at birth less than three years higher than male, compared with a female-male gap of 6 to 7 years in developed countries.

HALE at birth in Afghanistan was estimated at 35.5 years, the 11th lowest in the world in 2002. There was a large 95% uncertainty range around this estimate, of 27 to 44 years, reflecting the lack of population health information for that country. More health information was available for Iraq, where HALE at birth in 2002 was estimated at 50.1 years, with an uncertainty range of 47 to 54 years. China had a healthy life expectancy well above the global average, at 64.1 years (65.2 years for women and 63.1 years for men). Other countries in the Asian region generally had lower HALE. Improving health in Viet Nam has resulted in a HALE of 61.3 years, while Thailand has not improved significantly over the past decade, with a HALE of 60.1 years in 2002. HALE in Myanmar was just 51.7 years at birth, substantially behind its South East Asian neighbors.

Figure 4 shows the expectation of lost healthy years at birth (LHE = LE – HALE) versus total life expectancy at birth for 192 countries. While lower life expectancies are generally associated with lower HALE, there were large variations in HALE and in LHE for any given level of life expectancy. For example, among countries with a life expectancy of 72 years, HALE varied from 61.1 to 64.6 years, a non-trivial variation.

Figure 4 Lost healthy years at birth (LE – HALE) versus life expectancy at birth, by sex, 192 WHO Member States, 2002.
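The quantity plotted in Figure 4 is simple arithmetic; as a worked check, using the figures quoted earlier for Japanese females (total LE 85.2 years, HALE 77.7 years):

```python
def lost_healthy_years(le, hale):
    """Expectation of lost healthy years (LHE = LE - HALE) and its share of LE."""
    lhe = le - hale
    return lhe, lhe / le

lhe, share = lost_healthy_years(85.2, 77.7)   # Japanese females, from the text
# lhe -> 7.5 years; share -> about 8.8% of total life expectancy
```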
Correspondingly, LHE varied from 7.3 to 11 years, or by up to 50%. If male and female HALE are considered separately, the range of variation increases to 59–66 years at a total life expectancy of 72 years. While LHE increases somewhat with increasing life expectancy up to around 70 years, there is a flattening for males and a trend towards decreasing LHE for females in the countries with the longest life expectancies. Although there are higher prevalences of disabling conditions such as dementia and musculoskeletal disorders in countries with the longest life expectancies, this is offset by lower levels of disability for diseases such as cardiovascular disease and chronic respiratory diseases, where incidence and mortality rates are also lower.

Discussion and conclusions
As discussed elsewhere, the new methods used in the WHO Multi-Country Survey Study offer clear evidence that self-report data on health status are not comparable, and allow adjustments that improve the comparability of the resulting health status measures [ 18 ]. Building on the findings from the MCSS, WHO is now undertaking the World Health Survey in collaboration with Member States [ 38 ]. During 2003 and the first half of 2004, 73 Member States conducted the World Health Survey, and all results have now been received by WHO. The World Health Survey results will contribute to the analysis of healthy life expectancy in future years.

Despite the fact that people live longer in the richer, more developed countries, and have greater opportunity to acquire non-fatal disabilities in older age, disability has a greater absolute (and relative) impact on healthy life expectancy in poorer countries. Separating life expectancy into equivalent years of good health and years lost to sub-optimal health thus widens rather than narrows the difference in health status between the rich and the poor countries. Richer countries should be much more active in seeking ways to improve the health of the world's poor.
WHO has been a strong advocate for efforts to increase the resources available for this purpose, and the recent report of its Commission on Macroeconomics and Health concluded that the bulk of the global disease burden is the result of a relatively small set of conditions, each with an existing set of effective interventions [ 39 ]. The main problems are the funding of these interventions and the access of poor populations to them. The Commission estimated that the essential interventions to target these problems could be provided at a cost of around $34 per person per year.

The World Health Report 2002 included an analysis of mortality and morbidity attributable to the combined effects of 20 selected leading risk factors for 14 subregions of the world [ 33 , 40 ]. Globally, 47% of premature mortality and 39% of total disease burden were attributable to the joint effects of the 20 selected risk factors. Removing these risk factors would lead to an estimated gain of 9.3 years (17%) in global HALE. The regional gains ranged from 4.4 years (6%) in the developed countries of the Western Pacific to 16.0 years (43%) in parts of sub-Saharan Africa [ 41 ]. The World Health Report 2002 also analyzed the cost-effectiveness of a wide range of interventions to address these risks. For the first time, policy makers were provided not only with a summary measure of the level of population health (HALE), but also with information on its determinants (diseases, injuries and major risk factors) and on the gains that could be achieved through specific intervention packages, including an analysis of the potential improvements in healthy life expectancy for different regions of the world that could be achieved through reduction or elimination of exposure to 20 major global risk factors [ 33 ].
The regular assessment of levels of population health is a key input to the public policy process, and comparable measurement of population health levels creates possibilities for investigating broad determinants at national and cross-national levels. Mortality indicators are not adequate for this purpose, given the considerable policy interest for many populations in whether – and to what extent – gains in life expectancy have been accompanied by improvements in non-fatal health status [ 9 , 42 - 45 ]. The trend towards flattening or decreasing LHE for countries with the longest life expectancies, noted above, provides the first cross-population comparable evidence that compression of morbidity may be occurring in low mortality countries, although this evidence is cross-sectional rather than longitudinal.

Comparability is fundamental to the use of survey results for the development of evidence for health policy, but has been under-emphasized to date in instrument development. We believe that the new methods used in the WHO Multi-country Household Survey Study and the World Health Survey have increased the comparability of self-report data across countries and provide the first steps towards the consistent and comparable measurement of population health across the world. Final confirmation that compression of morbidity is occurring in low mortality populations awaits longitudinal cross-population comparable data for specific populations: this is now achievable.

Competing interests
The author(s) declare that they have no competing interests.

Authors' contributions
CDM, CJLM and JAS developed the methods for calculation of HALE. CJLM, JAS and AT developed the methods and carried out the calculations for the measurement of health states and health state valuations in the MCSS, with contributions from the other authors. CJLM, JAS, SC and BU developed the MCSS survey instrument, and SC and BU coordinated the implementation and analysis of the surveys.
KMI, CDM and JAS carried out analyses and calculations for the computation of HALE for the year 2002 for 192 WHO Member States, with contributions from CJLM and the other authors. CDM drafted the paper, with substantial contributions from other authors.

Pre-publication history
The pre-publication history for this paper can be accessed here:
Surface structure, model and mechanism of an insect integument adapted to be damaged easily (PMC524519)

Abstract

Background
Several sawfly larvae of the Tenthredinidae (Hymenoptera) are called easy bleeders because their whole body integument, except the head capsule, disrupts very easily at a given spot under a slight mechanical stress at this spot. The exuding haemolymph droplet acts as a feeding deterrent towards invertebrate predators. The present study aimed to describe the cuticle surface, to consider it from a mechanistic point of view, and to discuss potential consequences of the integument surface in predator-prey relationships.

Results
The integument surface of sawfly larvae was investigated by light microscopy (LM) and scanning electron microscopy (SEM), which revealed that the cuticle of easy bleeders is densely covered by what we call "spider-like" microstructures. Such microstructures were not detected in non-easy bleeders. A finite element model of the cuticle layer was developed to gain insight into the potential function of the microstructures during easy bleeding. Cuticle parameters (i.e., size of the microstructures and thickness of the epi- versus procuticle) were measured on integument sections and used in the model. A shear force applied to the modelled cuticle surface led to higher stress values when microstructures were present, compared to a plane surface. Furthermore, by measuring the diameter of a water droplet deposited on sawfly larvae, the integument of several sawfly species was determined to be hydrophobic (e.g., more so than Teflon ® ), which was related to the sawfly larvae's ability to bleed easily.

Conclusion
Easy bleeders show spider-like microstructures on their cuticle surface. It is suggested that these microstructures may facilitate integument disruption as well as render the integument hydrophobic. This latter property would allow the exuding haemolymph to be maintained as a droplet at the integument surface.
Background
The integument of insects is very often involved in defence strategies towards predators and pathogenic agents [ 1 , 2 ]. Generally it constitutes the first contact point in the interaction between an insect and such natural enemies. It often offers efficient protection as a physical barrier due to its hardness, for instance in adult beetles. At the opposite extreme, a low mechanical strength of the integument can also be implicated in insect defence strategies. One example of this is the phenomenon of reflex bleeding, known in several insect orders. The integument presents a few localized weak points which can disrupt when the disturbed insect increases its internal hydraulic pressure, provoking the release of a droplet of distasteful haemolymph [e.g., [ 3 ]]. The phenomenon of easy bleeding is another type of adaptation used in defence, whereby the whole body integument, except the head capsule, can disrupt easily at a given spot when this spot is subjected to mechanical stress [see definition in [ 4 ]]. The phenomenon occurs in the larvae of some species of sawflies (Hymenoptera, Symphyta, Tenthredinidae). Species that show easy bleeding notably belong to genera such as Aneugmenus , Athalia , Monophadnus , Phymatocera and Rhadinoceraea . Recently, the mechanical strength of dissected pieces of larval integument was measured in a calibrated manner. The force needed to damage the integument can vary by more than one order of magnitude from one species to another [ 4 ]. Easy bleeding differs from reflex bleeding in that, first, almost the whole body integument is potentially involved in the phenomenon, and second, an external force is necessary to elicit the phenomenon [ 4 ]. As soon as the integument of an easy bleeder is damaged, a haemolymph droplet exudes and can remain as such for several minutes.
An ecological implication of easy bleeding is that the emission of a haemolymph droplet will deter an attacking predator from killing and feeding on an easy bleeder. Indeed, the haemolymph is feeding-deterrent towards foraging ants and wasps [ 4 - 8 ]. Birds are other important predators of sawfly larvae [ 9 ], against which easy bleeding seems less clearly effective [ 10 ]. Thus the ecological function of easy bleeding is demonstrated as a chemically mediated defence strategy directed especially towards foraging invertebrate predators. However, integument disruption remains puzzling from a morphological and mechanistic point of view. The present study is based on a comparative analysis of the larval integument surface in several sawfly species, comprising easy bleeders as well as non-easy bleeders. We aimed to describe the geometry of the integument surface, to approach its mechanical properties, and to consider proximate ecological implications.

Results

Microstructures covering the cuticle surface
The larvae of the sawfly species observed by SEM showed surface microstructures on their cuticle, which are described below. These microstructures were strikingly more complex in easy bleeders than in non-easy bleeders (Fig. 1 , 2 ), and this differing occurrence among sawfly species was significant ( P = 0.0001, Fisher exact probability test, N = 24 species; Table 1 ).

Figure 1 Cuticle surfaces of sawfly larvae by SEM. Easy bleeders are A. rosae (a, b) and M. monticola (d). Non-easy bleeders are C. septentrionalis (c), H. australis (e), N. miliaris (f), P. parvula (g) and G. hercyniae (h). The dorso-lateral part of the abdomens is shown. Detailed view showing spider-like microstructures (b). Views showing blister-like swellings (c, e to g) or setae (h).

Figure 2 Cuticle surfaces of sawfly larvae by SEM and related integument sections by LM. Non-easy bleeder is S. multifasciata (a, e). Easy bleeders are P. aterrima (b, f), A. padi (c, g), R.
nodicornis (d) and R. bensoni (h). Views by SEM (a to d) show blister-like swellings (a) or spider-like microstructures (b to d). Views by LM (e to h) showing that, above a cellular layer, the cuticle comprises a procuticle, in blue, whereas the epicuticle, in red (e, g), is not observed in some species (f, h).

Table 1 Easy bleeding, cuticle microstructures and hydrophobic property in sawfly larvae

Species | Easy bleeding 1 | Microstructures 2 | Droplet diameter 3, 2 μl | Droplet diameter 3, 4 μl
TENTHREDINIDAE
Allantiinae
Athalia rosae (L.) | EB | + | ? | ? or 2.1 ± 0.0
Blennocampinae
Eurhadinoceraea ventralis (Panzer) | EB | - | · | ·
Monophadnus monticola (Hartig) | EB | + | · | ·
Monophadnus spinolae (Klug) | EB | - | · | ·
Phymatocera aterrima (Klug) | EB | + | ? or 1.5 ± 0.0 | ? or 2.0 ± 0.0
Rhadinoceraea bensoni Beneš | EB | + | · | ·
Rhadinoceraea micans (Klug) | EB | + | ? | ?
Rhadinoceraea nodicornis Konow | EB | + | ? or 1.6 ± 0.1 | ? or 2.0 ± 0.1
Tomostethus nigritus (Fabricius) | N-EB | - | · | ·
Nematinae
Craesus alniastri (Sharfenberg) | N-EB | - | 1.6 ± 0.0 | 2.1 ± 0.0
Craesus septentrionalis (L.) | N-EB | - | 1.7 ± 0.0 | 2.2 ± 0.1
Hemichroa australis (Serville) | N-EB* | - | 1.6 ± 0.1 | 2.2 ± 0.1
Hemichroa crocea (Geoffr.) | N-EB | - | · | ·
Hoplocampa testudinea (Klug) | N-EB | - | · | ·
Nematus melanocephalus Hartig | · | - | · | ·
Nematus miliaris (Panzer) | N-EB* | - | · | ·
Nematus pavidus Serville | N-EB* | - | · | ·
Pristiphora laricis (Hartig) | N-EB | - | · | ·
Pristiphora testacea (Jurine) | N-EB | - | 1.9 ± 0.0 | 2.6 ± 0.0
Pseudodineura parvula (Klug) | · | - | · | ·
Selandriinae
Aneugmenus padi (L.) | EB | + | 1.7 ± 0.2 | 2.2 ± 0.2
Strongylogaster mixta (Klug) | N-EB | - | 1.7 ± 0.1 | 2.1 ± 0.1
Strongylogaster multifasciata (Geoffr.) | N-EB | - | 1.6 ± 0.1 | 2.1 ± 0.0
Tenthredininae
Tenthredo scrophulariae L. | N-EB* | - | · | ·
ARGIDAE
Arge sp. | N-EB | - | · | ·
DIPRIONIDAE
Gilpinia hercyniae (Hartig) | N-EB | - | 1.6 ± 0.0 | 2.0 ± 0.0

1 Species was an easy bleeder (EB) or a non-easy bleeder (N-EB). Data from Boevé & Schaffner [4], except data from U Schaffner & JLB, unpublished results (*).
2 Spider-like microstructures were present (+) or absent (-) based on observations of the cuticle surface by SEM and/or of cuticle sections by LM.
3 The cuticle was either too hydrophobic for a water droplet to adhere (?), or the diameter (mean ± SD, in mm) of a 2 and 4 μl droplet on the cuticle was measured. (·) Not tested.

In easy bleeders, the cuticle is covered with irregularly shaped, wart-like microstructures (verrucose). Their density is approximately 15 units per 0.01 mm². They possess fine ridges (carinulate) arranged radially (Fig. 1b , 2b,2d ), hence the term "spider-like". The fine ridges (i.e., the "legs" of the "spider") more or less imbricate with those of adjacent microstructures, and their width is approximately 0.5 to 1.5 μm. The form of the microstructure is generally circular (diameter excluding ridges: 10 μm in A. padi ), but can be elongated (length: 35 μm in A. rosae ). The ridges can be reduced (e.g., in A. padi ). The height of microstructures was measured on LM views and reaches 23 μm (in P. aterrima ). For further measurements by LM, see Table 2.
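The droplet-diameter measurements in Table 1 can be converted into an approximate contact angle, the usual measure of hydrophobicity, if the droplet is assumed to be a spherical cap sitting on the measured diameter as its contact line, with gravity neglected. This conversion is our illustration, not a method used in the study:

```python
from math import atan, degrees, pi

def contact_angle_deg(volume_mm3, base_diameter_mm):
    """Contact angle of a sessile droplet assumed to be a spherical cap.

    With base radius a and t = tan(theta/2), the cap volume is
    V = (pi * a**3 / 6) * t * (3 + t**2); solve for t by bisection
    (the left-hand side is monotonic in t)."""
    a = base_diameter_mm / 2.0
    target = 6.0 * volume_mm3 / (pi * a ** 3)   # equals t * (3 + t**2)
    lo, hi = 1e-9, 1e3
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if mid * (3.0 + mid ** 2) < target:
            lo = mid
        else:
            hi = mid
    return degrees(2.0 * atan((lo + hi) / 2.0))

# A 2 microlitre (2 mm^3) droplet spanning 1.6 mm, as measured for several
# species in Table 1, implies an angle of roughly 111 degrees:
theta = contact_angle_deg(2.0, 1.6)
```

Angles above 90° indicate a hydrophobic surface; for comparison, the same droplet spreading to a 2.6 mm diameter would give an angle below 90°.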
Table 2 Model input and output with force applied on the cuticle of non-easy bleeders (A) and easy bleeders (B)

A | M1/1 | M1/2 | M1/10 | Hc | Pl | Pt | Sm | Ts
W1 | 110 | 110 | 110 | 110 | 110 | 110 | 110 | 110
H1 | 20 | 20 | 20 | 7 | 5 | 8 | 8 | 11
H2 | 10 | 10 | 10 | 6 | 5 | 3 | 5 | 5
E1 | 500 | 500 | 500 | 500 | 500 | 500 | 500 | 500
E2 | 500 | 1000 | 5000 | 5000 | 5000 | 5000 | 5000 | 5000
F1z Max | 0.206 | 0.170 | 1.001 | 2.910 | 4.520 | 2.651 | 2.906 | 2.201
F1z Min | -0.835 | -1.003 | -1.604 | -4.167 | -6.240 | -6.482 | -4.633 | -3.772
F1x Max | 1.553 | 1.614 | 1.794 | 2.545 | 2.956 | 3.337 | 2.714 | 2.569
F1x Min | -0.363 | -0.383 | -0.447 | -0.717 | -0.860 | -0.864 | -0.764 | -0.710

B | M2 | Ar | Mm | Pa | Rb | Rn
W1 | 110 | 70 | 33.5 | 60 | 50 | 60
H1 | 15 | 8 | 8 | 15 | 10 | 15
H3 | 15 | 8 | 14 | 23 | 10 | 10
D1 | 28 | 23 | 9 | 20 | 15 | 20
D2 | 20 | 11 | 1 | 2 | 10 | 10
S1 | 20 | 5 | 6 | 8 | 8 | 8
P | 1 | 3.306 | 80 | 100 | 4 | 4
N μstr | 1 | 1 | 5 | 1 | 1 | 1
F1z Max | 0.665 | 1.696 | 0.623 | 4.291 | 1.430 | 0.770
F1z Min | -0.793 | -2.560 | -0.972 | -7.286 | -1.412 | -2.134
F1x Max | 4.788 | 7.554 | 35.820 | 174.600 | 12.840 | 7.632
F1x Min | -1.476 | -2.241 | -1.782 | -9.267 | -3.805 | -2.317

The model of non-easy bleeders was based on parameter values measured on LM and SEM views from H. crocea and N. pavidus together (M1/1, M1/2, M1/10), and from H. crocea (Hc), P. luridiventris (Pl), P. testacea (Pt), S. multifasciata (Sm) and T. scrophulariae (Ts). Different relative values of Young's modulus for the procuticle (E1) and epicuticle (E2) were used in M1/1, M1/2, and M1/10. The model of easy bleeders was based on parameter values measured on LM and SEM views from P. aterrima and R. micans together (M2), and from A. rosae (Ar), M. monticola (Mm), P. aterrima (Pa), R. bensoni (Rb) and R. nodicornis (Rn). Parameter values, in μm, introduced in the model: width of the model sample (W1), height of procuticle layer (H1), height of epicuticle layer (H2), height of microstructure (H3), diameter at base of microstructure (D1), diameter at top of microstructure (D2), shortest distance between microstructures (S1). Number of microstructures set under pressure (N μstr). Pressure applied per microstructure (P).
Stress values, obtained with a normal force (F1z) or a shear force (F1x), are given as extreme values in traction (Max) and compression (Min). Compared to easy bleeders, the cuticle surface of non-easy bleeders was much smoother. It shows only blister-like swellings (pustulate) with a diameter of 3–4 μm (e.g., in T. nigritus, H. australis, Nematus, Fig. 1e,1f), 6–7 μm (e.g., in S. multifasciata, Fig. 2a) and up to 12 μm (in H. testudinea). In some genera, such as Nematus and Craesus, each swelling shows a very small prickle (echinulate). Several swellings are sometimes aligned and can then be joined to form a low ridge approximately 35 μm long (in C. septentrionalis). Although E. ventralis and M. spinolae are easy bleeders, no spider-like microstructures were detected on them; instead, small ridges with one or a few prickles, and small spines, were observed, respectively. Living larvae of these two species, as well as of T. scrophulariae, are covered with a layer of waxy powder. In the outgroup species G. hercyniae, setae were observed instead of microstructures (Fig. 1h). Modelling the mechanical behaviour of the cuticle The aim of the modelling was to compare the distribution of stresses in two cuticle configurations, as found in non-easy bleeders (M1) versus easy bleeders (M2), under the same loading (Fig. 3a,3b). The maximum stress value (in compression and traction) is an indicator of possible crack initiation or damage (see Methods). Figure 3 Models of the cuticle of sawfly larvae. Model representing a non-easy bleeder (a, c to e, g) and an easy bleeder (b, f, h). View in perspective showing five microstructures (b) and the location of the applied force (a, b). Maximal stress distribution in a section through the cuticle (c to h). The ratio of the Young's modulus of the procuticle to that of the epicuticle is assumed to be 1/1 (c), 1/2 (d) and 1/10 (e). The applied force is normal (c to f) or shear (g, h).
The maximal value corresponds to the maximum of principal stress 1 and the minimal value to the minimum of principal stress 2. Only the distribution of principal stress 1 is shown; the maximal value is given in Table 2. Degrees of freedom = 120,553 (a, c to e, g), 40,701 (b, f, h). In Table 2, M1/1, M1/2 and M1/10 show the influence of the Young's modulus of the epicuticle, relative to that of the procuticle, on the distribution of stresses in a cuticle patch. When the Young's modulus of the epicuticle was increased while that of the procuticle was kept constant, the stresses concentrated in the epicuticle and the maximal values increased (Fig. 3c to 3e). This occurred under both load cases (i.e., normal and shear force). With a normal force, M1 and M2 gave stress values of the same order of magnitude (Table 2, Fig. 3c to 3f); from this load case it therefore cannot be deduced that one integument configuration is damaged more easily than the other. In contrast, the stress values obtained with a shear force were approximately three times higher in M2 than in M1 (Table 2; Fig. 3g,3h). This suggests that an integument with microstructures is more highly stressed and will more easily reach the yield stress at which the cuticle is damaged. This conclusion was corroborated by considering individual sawfly species: under a shear force, the maximal stress value in compression as well as in traction was always more extreme in the five species of easy bleeders than in the five species of non-easy bleeders (Table 2). Note that for the easy bleeders M. monticola and P. aterrima, the apical part of the microstructure was extremely minute; all stresses were concentrated in the tip of the microstructure, which led to non-physical deflections. The values obtained for these two species are therefore probably not meaningful in a comparison with other species.
Hydrophobic property of cuticle surfaces It was difficult or even impossible to deposit a water droplet on the larval body of some sawflies (Table 1). When the pipette tip was brought close to the integument, almost into physical contact, the droplet was pushed aside against the tip border; on retrieving the pipette, the droplet was again on its tip, not on the integument. This could happen for some of the individuals tested per species (Table 1). In species where the integument was less hydrophobic, the diameter of the droplet on it ranged from 1.5 to 1.9 mm (2 μl droplet) and from 2.0 to 2.6 mm (4 μl). Considering the latter droplet size, a small diameter (2.0 mm) or an immeasurable diameter (see above) was associated with sawfly species that are easy bleeders, whereas a larger droplet diameter (> 2.0 mm) was associated with non-easy bleeders (P = 0.045, Fisher exact probability test, N = 12 species, Table 1). Thus easy bleeders possess a hydrophobic, and non-easy bleeders a rather hydrophilic, integument. On inert surfaces, the diameters of 2 and 4 μl droplets were consistently as follows: immeasurable (see above) and 2.2 mm on Teflon®, 1.8 and 2.3 mm on Parafilm®, 1.8 and 2.3 mm on polystyrene, and 2.6 and 3.3 mm on glass, respectively. Thus even Teflon®, which is considered highly hydrophobic, gave a 4 μl droplet diameter comparable to that obtained on the (hydrophilic) integument of non-easy bleeders. The 2 μl droplet was apparently light enough not to adhere to the Teflon® surface, in contrast to the larger droplet size tested. Discussion Several sawfly larvae showed a characteristic cuticle surface with spider-like microstructures, and this was associated with a low mechanical strength of their integument. For instance, these microstructures were present in the easy bleeder Aneugmenus padi and absent in the non-easy bleeders Strongylogaster spp.
Both genera are closely related, belonging to the same subfamily and sharing the same host plant [11]. It is therefore likely that the occurrence of microstructures cannot be interpreted simply in terms of the systematic arrangement of species, but is related to the phenomenon of easy bleeding itself. The larval abdomen of several Dolerus (Tenthredinidae, Selandriinae) species presents "meshes of microsculpture not sharply defined", with dimensions ranging from 20 to 40 μm, and these microstructures are often fused [12]. They may constitute an intermediate state between those we observed on easy bleeders and non-easy bleeders, although physically more comparable to those of non-easy bleeders given the absence of spider-like microstructures. Particular microstructures are also observed on the cuticle surface of arthropods other than sawflies, such as nymphs of bugs and ticks [13] and adults of flies and dragonflies [14,15]. Their functions include allowing, by stretching, an increase in body volume during feeding, and combining flexibility with mechanical stability during highly repetitive movements; their possible role in promoting mechanical damage of the integument, however, has not been considered so far [16]. The question arises whether, in sawfly larvae able to bleed easily, the microstructures are directly involved in integument disruption. We compared cuticle models of non-easy bleeders versus easy bleeders and applied a unit force to them. Compared with the real situation, the model was simplified by assuming a linear elastic behaviour of the cuticle (i.e., stresses proportional to strains – Hooke's law), because the exact physical properties of the cuticle are not known. Nevertheless, a comparison of the geometrical parameters of easy bleeders versus non-easy bleeders revealed that, under an applied shear force, the cuticle stresses in both compression and tension were higher in the presence of microstructures (Table 2).
This suggests that microstructures may contribute directly to damage of the integument. Notably, the breaking line of a damaged integument runs between the microstructures (SA, personal observation on the easy bleeder P. aterrima). This biological observation is in agreement with our model results: the regions subject to high stresses are not restricted to the zone of the microstructure but extend deeper into the cuticle mass (Fig. 3h). From this trend we may extrapolate that, if the shear force is increased, the microstructure will not break off from the rest of the cuticle; rather, the fracture line will start at the base of a microstructure and continue through the whole cuticle thickness. In other words, the integument will disrupt. This conclusion becomes even more relevant in the realistic situation where an attacking predator applies a more or less oblique force to the cuticle. Besides physical aspects, chemical ones also contribute to the mechanical properties of an integument [16-20]. One of these properties, visco-elasticity, is determined in the abdominal integument of the bug Rhodnius by the matrix protein(s) of the procuticle, with a reinforcing effect of chitin microfibrils. Differing chitin and protein patterns are observed in the cuticle when easy bleeders are compared to non-easy bleeders (M. Spindler-Barth & SA, unpublished results). Ongoing research aims to investigate these physiological aspects, as well as the healing process, and to link them with the phenomenon of easy bleeding. The cuticle surface of easy bleeders was highly hydrophobic (Table 1), even compared to a well-known hydrophobic material such as Teflon®. There is a trend for the integument of easy bleeders (e.g., P. aterrima, Rhadinoceraea spp., A. padi) to appear matt, in contrast to the shiny appearance of non-easy bleeders (e.g., Strongylogaster spp., Craesus spp.) (JLB, personal observations).
We believe that this hydrophobic property is ecologically relevant during predator-prey interactions. When a predator, typically an insect with biting-chewing mandibles [10], bites into the integument of a sawfly larva at a given spot, it is best for the larva to keep the deterrent haemolymph spatially concentrated at that spot. By contrast, some insects are known to possess morphological devices on the integument surface, or wetting agents included in their defensive secretion, which help the secretion to spread out [21,22]. But such secretions are typically volatile, and the defence consists of keeping the aggressor at a distance: the morphological devices and wetting agents modulate the evaporation of the secretion and, thereby, the effectiveness of a defence that acts through olfactory cues. In the case of easy bleeding, deterrent compounds dissolved in the haemolymph need to contact the mouthparts of an aggressor, acting through gustatory cues. Moreover, easy bleeders should not spread out their haemolymph, since they would lose this valuable liquid. As long as it remains a droplet in contact with the larval haemocoel, the droplet can be sucked back by the larva into its body within a few minutes, provided that the larva is not disturbed further [4]. A parallel can be drawn between the integument surface of easy bleeders and that of several plant leaves. The lotus leaf recently gave rise to the so-called Lotus-effect® [23]: particular physico-chemical properties of the leaf allow self-cleaning by rain. This effect relies on a micro-structured surface and a coating of waxy crystals, both of which contribute to rendering the surface hydrophobic [23,24]. The optimal configuration is a coarse structure of 10 to 50 μm combined with a finer one of 0.2 to 5 μm [25]. This corresponds well to the spider-like microstructures found on the cuticle of easy bleeders.
In the insect, both the coarse and the finer structures are provided by the spider-like microstructures (see Results), whereas in the plant the two scales of structure are provided by microstructures and waxy crystals, respectively [26]. There are no waxy crystals on the body surface of easy bleeders; a fine layer of waxy powder covers only some species of easy bleeders as well as non-easy bleeders (see Results). Such a waxy powder consists mainly of hexacosan-1-ol in Eriocampa ovata [27], a non-easy bleeder [4] not studied in the present work. It is likely that in the majority of easy bleeders the hydrophobic property relies mainly or solely on the geometry of the cuticle surface, i.e., on the occurrence of microstructures. Conclusions We propose at least two functions for the spider-like microstructures, which we observed specifically on the body surface of easy bleeders. First, they may facilitate the damage provoked by a biting predator. Second, they render the integument of easy bleeders hydrophobic, which helps prevent the emitted haemolymph droplet from spreading out. Methods Insects All sawfly larvae (see Table 1) were collected in the field (Belgium, Germany, Switzerland), except A. rosae and G. hercyniae, which came from indoor populations. The larvae were identified according to Lorenz & Kraus [11]. The full-grown larval stage was used. Observations by SEM and LM Fixed larvae stored in ethanol were dried, coated with gold, and examined with a Philips XL-30 ESEM. Specimens were positioned so as to expose the dorsal and lateral parts of the abdomen. The terminology used in describing the cuticle surface follows Harris [28]. Series of 7 μm cross sections were obtained from larvae using classical histological techniques. They were deparaffinized in xylene, rehydrated through a graded ethanol-to-water series, stained by the Azan trichrome method [29], and observed by LM.
Model by finite elements General mechanical assumptions The rigorous general mechanical behaviour of the cuticle is complex. As a first attempt to understand the property of easy bleeding, it was assumed that damage to the integument is due to excessive stress under static loading. Since the cuticle of a larva is also geometrically complex in three dimensions, no simplified laws, derived for instance from strength-of-materials theory, could be used. The analysis was therefore performed on a solid configuration, discretized by a standard finite element method [30]. It was assumed that the stress-strain law is linear and isotropic (Hooke's law) and that the displacements and strains are small. The geometrical dimensions of the cuticle are very small, at the microscale, and it is known that for such configurations the continuum assumption may not be valid [31]. It is also known, however, for standard materials such as metals, that strength is generally underestimated under the continuum assumption. For this reason, the analysis performed in this paper was purely qualitative and based on a comparison of the stresses between the geometries encountered in easy bleeders and non-easy bleeders. Most of the results were interpreted in terms of the principal stresses: for 3D mechanical configurations, three directions always exist in which the stress (and, for isotropic laws, the strain) is maximum or minimum. Along these directions, the shear stress is zero. It was then supposed that the maximum stress values (in traction or compression) cause the initiation of cuticle damage. Finite element modelling The finite element analysis was performed using the general-purpose mechanical software SAMCEF® version 9.1. It was assumed that the integument is made up of repeating identical patches in both the x and y directions.
Thus, only one patch has to be modelled, with appropriate boundary conditions representing this repetition (i.e., the displacements on each boundary are blocked in the direction normal to that boundary). The geometry was discretized with 3D solid linear finite elements (prisms or bricks). The patch used to model non-easy bleeders was composed of two layers, procuticle and epicuticle, with different heights and Young's moduli. It is generally supposed that the epicuticle of insects is pliant but not extensible, stronger in compression than in tension, and less elastic than the procuticle [2]. The patch used for easy bleeders contained five microstructures and was homogeneous; indeed, generally no epicuticle is clearly detected by LM in the cuticle of easy bleeders (e.g., Fig. 2f), an exception being A. padi (Fig. 2g). Loading The loading was always divided into two load cases: a force perpendicular to the patch surface (normal force) and one parallel to it (shear force). In a real situation, the loading is applied by the mandibles of an attacking predator, typically a small arthropod [10]. The diameter of a mandible's tip was measured on workers of the ant Myrmica rubra; the smallest value reached was 20 μm. In the model, the contact area of the mandible was therefore a disc (radius = 10 μm) on which the force was applied. This force was applied either on the upper centre of the epicuticle for non-easy bleeders (Fig. 3a) or on the top of the central microstructure for easy bleeders (Fig. 3b). As the radius of the upper part of the microstructure varied from one species to another, being generally smaller than 10 μm, the applied surface force was adjusted to obtain the same resultant force for each configuration. The insect body contains a liquid, the haemolymph; the patch was therefore modelled by applying a surface force equilibrated with the loading of the predator.
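The principal-stress criterion used above can be illustrated with a minimal 2D plane-stress sketch (the actual analysis was 3D and performed in SAMCEF; the numbers below are hypothetical): the principal stresses are the extreme normal stresses, reached along directions in which the shear stress vanishes.

```python
import math

def principal_stresses_2d(sx, sy, txy):
    """Closed-form principal stresses for a 2D plane-stress state.

    sx, sy : normal stresses; txy : shear stress (arbitrary units).
    Returns (s1, s2) with s1 >= s2. Along the principal directions
    the shear stress is zero, so s1 and s2 are the extreme values
    in traction and compression, respectively.
    """
    center = (sx + sy) / 2.0                   # hydrostatic part
    radius = math.hypot((sx - sy) / 2.0, txy)  # Mohr's circle radius
    return center + radius, center - radius

# Hypothetical stress state (units arbitrary):
s1, s2 = principal_stresses_2d(4.0, 2.0, 1.0)
# s1 is the extreme in traction, s2 the extreme in compression; in the
# model, such extremes (Table 2, Max/Min) flag possible damage initiation.
```

In the 3D model the same idea applies with three principal stresses (eigenvalues of the stress tensor); the Max/Min values reported in Table 2 correspond to the extremes of principal stresses 1 and 2.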
Hydrophobic property This property of the integument was estimated by a simple method that allowed the use of living insects. The first step was to determine whether a 2 or 4 μl droplet of charcoal-filtered water could adhere when gently deposited with a 1–10 μl pipette on the thoracic or abdominal integument of a sawfly larva resting on a leaf of its host plant. If adherence was possible, the diameter of the deposited droplet was measured under a stereomicroscope with a micrometer. Six full-grown larvae were tested per species. As controls, the following substrates were tested in the same manner: Teflon®, Parafilm®, polystyrene and glass. On these biological and inert surfaces, the droplet reaction (i.e., adherence capability and droplet diameter) was taken to express the hydrophobic or hydrophilic property of the surface. Authors' contributions JLB collected and identified the insects, performed the tests on the hydrophobic property, measured the cuticle parameters for the model on LM views, and wrote the manuscript, except the parts about this model in Results and Methods. VD obtained most SEM views. TM and PB performed the finite element modelling and wrote the two related parts of the manuscript. SA maintained indoor populations of two sawfly species and carried out and photographed the integument sections used in LM and, thereby, in the model.
DIALIGN P: Fast pair-wise and multiple sequence alignment using parallel processors (PMC520757)

Abstract Background Parallel computing is frequently used to speed up computationally expensive tasks in Bioinformatics. Results Herein, a parallel version of the multi-alignment program DIALIGN is introduced. We propose two ways of dividing the program into independent sub-routines that can be run on different processors: (a) pair-wise sequence alignments, which are used as a first step towards multiple alignment, account for most of the CPU time in DIALIGN. Since alignments of different sequence pairs are completely independent of each other, they can be distributed to multiple processors without any effect on the resulting output alignments. (b) For alignments of large genomic sequences, we use a heuristic that splits up sequences into sub-sequences based on a previously introduced anchored alignment procedure. For our test sequences, this combined approach reduces the program running time of DIALIGN by up to 97%. Conclusions By distributing sub-routines to multiple processors, the running time of DIALIGN can be greatly reduced. With these improvements, it is possible to apply the program in large-scale genomics and proteomics projects that were previously beyond its scope.

Background Multiple sequence alignment continues to be an active field of research in Computational Biology, and a number of novel approaches have been developed in recent years; see [1] for an overview of multi-alignment algorithms and [2,3] for systematic evaluations of the commonly used software tools. Until a few years ago, research on sequence alignment was mainly concerned with aligning proteins or single genes. During the last few years, however, comparison of genomic sequences has become a crucial tool for uncovering functional elements such as genes or regulatory sites. Consequently, the focus of alignment research has shifted to large genomic sequences [4,5].
Alignment of sequences on the order of hundreds of kilobases or megabases is computationally demanding. Some extremely efficient tools have been developed that are able to align entire chromosomes or genomes [6,7]. These approaches, however, work best on closely related species; they are unable to compare sequences at larger evolutionary distances. DIALIGN [8] is a versatile tool for pair-wise and multiple alignment of nucleic acid and protein sequences. It combines global and local alignment features and is therefore particularly useful for aligning distantly related sequence sets that share isolated local homologies. In a number of recent research projects DIALIGN has been used to align syntenic genomic sequences, and some new program options have been implemented for this purpose [9]. Recent applications of DIALIGN in comparative genomics include the detection of regulatory elements by multiple alignment [10-14], phylogenetic studies [15,16] and the identification of signature sequences to detect pathogenic viruses as part of the US biodefense program [17]. An independent study by Pollard et al. evaluated the capability of alignment programs to detect conserved non-coding sites in genomic sequences; these authors conclude that DIALIGN can produce alignments with high coverage and sensitivity, as well as specificity in detecting constrained sites [3]. Though DIALIGN produces alignments of high quality, it is slower than alternative multi-alignment programs; especially when large genomic sequences are to be aligned, it is far more time-consuming than the specialized genomic alignment programs mentioned above. A recently introduced anchored alignment option [18] can be used to speed up the program, but even with this improvement DIALIGN is still slower than alternative software tools. Parallel computing has been used by various researchers to improve the running time of computationally expensive alignment procedures; see for example [19-21].
Herein, we introduce a parallel version of DIALIGN. We apply two different strategies to distribute sub-routines to multiple processors. In our test examples, the running time of DIALIGN could be reduced by up to 94.5% for multiple protein alignment and by up to 97.5% for the alignment of large genomic sequences. Implementation Parallel multiple alignment For multiple alignment, the DIALIGN algorithm works as follows: in a first step, optimal pair-wise alignments are carried out for all sequence pairs. This means that, for each pair of input sequences, a chain of local fragment alignments with maximum total weight score is identified. A fragment, or fragment alignment, is defined as an un-gapped local pair-wise alignment, and the weight score of such a fragment is calculated based on a P-value, i.e. on the probability of its random occurrence; see [8] for a detailed explanation of this approach. The chaining algorithm that identifies a fragment chain with maximum total weight is described in [22]. For a set of N input sequences, N × (N - 1)/2 pair-wise alignments are to be calculated; fragments contained in these pair-wise alignments are then used to build up a multiple alignment in a greedy fashion. If the maximum sequence length is bounded by some constant, the time complexity of this algorithm as a function of the number N of sequences is as follows: performing all pair-wise alignments takes O(N²) time. During the greedy procedure, O(N) independent fragments can be included in the multiple alignment; additional fragments would be either inconsistent or already contained in the existing multiple alignment. Accepting a single fragment takes O(N²) time, since for each accepted fragment so-called consistency frontiers must be updated. These frontiers are used to decide whether subsequent fragments are consistent with previously accepted fragments. Thus, the worst-case time complexity of our multi-alignment algorithm is O(N³).
Test runs with real data show, however, that the actual time complexity lies somewhere between quadratic and cubic; see [23] for a detailed analysis of the complexity and running time of our algorithm. The current version of DIALIGN uses an efficient algorithm to update the consistency frontiers. This means that, although performing all pair-wise alignments has a lower theoretical time complexity than processing the fragments from these alignments in the greedy algorithm, for realistic data sets most of the CPU time is spent on the pair-wise alignments. For example, for a set of 20 protein sequences with an average length of 367 amino acid residues, as much as 97.4% of the CPU time is used to perform the 20 × 19/2 = 190 pair-wise alignments. However, the relative proportion of CPU time used for pair-wise alignments decreases with the number of input sequences, as expected from the above theoretical considerations. The pair-wise alignments that are calculated as the first step of the multi-alignment procedure are completely independent of each other. Thus, the total program running time can be substantially reduced by running these procedures on parallel processors. Here, an important point is to distribute the work load evenly to the different processors in order to minimize the total program running time. To this end, our algorithm first estimates the running time of each pair-wise alignment as a function of the sequence lengths. As outlined in [22], the running time of DIALIGN for pair-wise alignment is proportional to the product of the sequence lengths. Based on these estimates, the algorithm distributes the N × (N - 1)/2 pair-wise alignments to the available processors so as to balance the work load.
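This estimate-then-distribute step can be sketched as follows (a Python sketch under the stated assumptions: cost of a pair estimated as the product of the two sequence lengths, tasks assigned longest-first to the currently least-loaded processor; function and variable names are illustrative, not DIALIGN's actual code):

```python
from itertools import combinations

def distribute_pairs(seq_lengths, num_procs):
    """Assign the N*(N-1)/2 pair-wise alignments to processors.

    The estimated cost of aligning sequences i and j is
    len_i * len_j, since pair-wise DIALIGN runs in time proportional
    to the product of the sequence lengths. Longest-first greedy:
    sort tasks by estimated cost, then always give the next task to
    the currently least-loaded processor.
    """
    tasks = [((i, j), seq_lengths[i] * seq_lengths[j])
             for i, j in combinations(range(len(seq_lengths)), 2)]
    tasks.sort(key=lambda t: t[1], reverse=True)

    loads = [0] * num_procs                    # estimated work per processor
    schedule = [[] for _ in range(num_procs)]  # pairs assigned to each
    for pair, cost in tasks:
        p = min(range(num_procs), key=loads.__getitem__)
        schedule[p].append(pair)
        loads[p] += cost
    return schedule, loads

# 20 sequences -> 20*19/2 = 190 pair-wise alignments, spread over 8 CPUs
schedule, loads = distribute_pairs([400] * 20, 8)
```

This longest-first rule keeps the final per-processor loads within one task's cost of each other, which is why a satisfactory balance is found in reasonable time.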
Here, we use a greedy algorithm to find a satisfactory work-load distribution in reasonable time: we first assign a processor to the pair-wise alignment with the longest expected running time, then to the second-longest, and so on. Alignment of large genomic sequences The quadratic time complexity of the original DIALIGN algorithm for pair-wise alignment is clearly not efficient enough to align large genomic sequences. To improve the running time of DIALIGN for long sequences, an anchored alignment procedure has recently been implemented [18,24]. For our parallel approach, we use the fast local alignment tool CHAOS to identify a chain of high-scoring local alignments for each pair of input sequences. To select a consistent sub-set of these local alignments, a greedy algorithm is used; see [18] for details. CHAOS uses a trie data structure to identify pairs of segments with a user-defined upper bound on the number of mismatches per segment pair, and defines a local alignment as a chain of such gap-free segment pairs located within a certain distance of each other. Finally, the algorithm returns an optimal chain of such local alignments. After the consistency check, these alignments are used by our algorithm to define anchor points that narrow down the search space for the final pair-wise alignment procedure performed by DIALIGN, as explained in [18]. To be precise, an anchor point is a pair of segments, one from each of the two input sequences; in this way, each position in the first segment is assigned to the corresponding position in the second segment. If a residue x is assigned to a residue y through one of the anchor points, then x is the only residue that can be aligned with y in the final output alignment. Whether or not x and y will actually be aligned depends on the degree of local sequence similarity that DIALIGN detects.
Moreover, all residues to the left of x can be aligned only with residues to the left of y, and vice versa; see Figure 1. The algorithm then returns an optimal alignment, i.e. a chain of fragments with maximum total weight score respecting the constraints imposed by the selected anchor points. Note that if all anchor points are consistent with the optimal non-anchored alignment, then the result of the anchored alignment procedure will necessarily be the same as that of the non-anchored procedure; in particular, this is the case if all anchor points are part of the optimal non-anchored alignment. In the present study, we use selected anchor points as cutting positions to split the input sequences into smaller sub-sequences, and we reduce the program running time of DIALIGN by aligning these sub-sequences independently on multiple processors. This procedure is related to Stoye's well-known divide-and-conquer approach to multiple alignment [25] and to the linear-space algorithm for pair-wise alignment proposed by Hirschberg [26]. It should be mentioned that, unlike the anchoring procedure outlined above, distributing sub-alignments to multiple processors may well affect the resulting output alignments, even if the selected cut positions are consistent with the optimal alignment. No matter how well the anchoring positions are chosen, we generally cannot expect the optimal alignments of the sub-sequences to coincide with the optimal alignment of the original input sequences. The reason for this behaviour is that DIALIGN uses a non-additive weighting function w for segment pairs (fragments): if a large fragment f is split into two smaller sub-fragments f1, f2, the sum w(f1) + w(f2) is, in general, lower than the original weight w(f); see [8].
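The splitting step itself is straightforward once a consistent chain of anchor points is fixed; a minimal sketch (Python; the sequences and cut coordinates below are hypothetical):

```python
def split_at_anchors(seq1, seq2, anchors):
    """Cut two sequences at anchor points into alignable sub-pairs.

    anchors: list of (pos1, pos2) cut positions, one per anchor point,
    given in increasing order along both sequences (i.e. consistent).
    A chain of n anchors produces n + 1 pairs of sub-sequences; each
    pair can then be aligned on its own processor and the resulting
    alignments concatenated afterwards.
    """
    pairs = []
    prev1 = prev2 = 0
    for pos1, pos2 in anchors:
        pairs.append((seq1[prev1:pos1], seq2[prev2:pos2]))
        prev1, prev2 = pos1, pos2
    pairs.append((seq1[prev1:], seq2[prev2:]))  # tail after last anchor
    return pairs

# Two hypothetical sequences with two consistent anchor points:
pairs = split_at_anchors("ACGTACGTACGT", "ACGTTACGTAC", [(4, 5), (8, 9)])
# -> 3 sub-sequence pairs; concatenating their alignments yields an
#    alignment of the full sequences, possibly sub-optimal because
#    DIALIGN's fragment weights are non-additive.
```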
As demonstrated in Figure 1, concatenating alignments of sub-sequences may or may not result in the optimal alignment that would be returned by the naive alignment procedure, or by the anchored procedure if anchor points are selected appropriately. Thus, special care has to be taken in selecting appropriate sub-sequences; in particular, the number of splits should not be too high, since every split can potentially reduce the quality of the output alignment. To reduce the total running time of the alignment procedure as far as possible, our algorithm distributes the sub-sequence alignments evenly to the available processors, while at the same time minimizing the loss of alignment quality that may be caused by splitting the sequences into too many sub-sequences. To this end, we first identify a chain of anchor points using CHAOS. The program then divides the sequences at every anchor point, producing a large set S1 of pairs of relatively small sub-sequences. The running time of each of the corresponding sub-alignments is estimated as described above. Using these estimates, the sub-sequences in S1 are concatenated in such a way that a set S2 of larger sub-sequences is obtained that can still be evenly distributed to the available processors. Here again, we use a greedy approach to assign processors to sub-alignments. As a result, our algorithm minimizes the program running time by balancing the work load among the processors, while maximizing the length of the aligned sub-sequences and thereby reducing the possible loss of alignment quality. Computer resources We decided to use the Message Passing Interface (MPI) [27,28] for our work. Efficient MPI libraries are available for all supercomputing systems, as well as for ordinary workstation pools.
The results reported in the next section were obtained in experimental tests on the Kepler cluster (University of Tübingen, SFB-382), a Linux SMP cluster with two Pentium III processors (650 MHz) and 1 GB of main memory per node. The nodes are connected by a Myrinet 1.28 GBit/s switched LAN. The software was also compiled and tested on a Sun Fire with 8 processors and on an ordinary Linux-based workstation pool of the kind found in most institutes. Results and conclusion The performance of existing multi-alignment software has been evaluated in detail. All programs have been extensively tested by their authors; in addition, several independent studies have been carried out using numerous sets of real and artificial benchmark data. The quality of multiple-protein-alignment programs, including DIALIGN, has been systematically studied by Thompson et al. [29] and by Lassmann and Sonnhammer [2]. The ability of multi-alignment tools to detect conserved patterns in genomic sequences has recently been investigated by Pollard et al. [3]. Since the goal of the present study is to speed up an existing approach, we do not evaluate the quality of the produced output alignments; the ability of our software to produce biologically meaningful alignments under various conditions has been evaluated in the papers cited above. Herein, we compare the running time of our parallel software to that of the original serial version of the program. As a first test example, we aligned sets of proteins with 20, 55 and 100 sequences, respectively. The program distributed the pair-wise alignments to different processors as described above. Table 1 shows running time and speed-up for different numbers p of processors. Using 64 processors, the running time for these data sets can be reduced by 94.82%, 90.80% and 75.90%, respectively, compared to the serial running time.
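These observed speed-ups are consistent with Amdahl's law, which bounds the achievable gain by the fraction of work that can be parallelized. A quick sanity check in Python (the 97.4% figure is the measured pair-wise share of serial CPU time for the 20-sequence set, as stated above; the calculation is an idealized upper bound, not a measurement):

```python
def amdahl_speedup(parallel_fraction, num_procs):
    """Upper bound on speed-up when only part of the work is parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / num_procs)

# 20-protein data set: ~97.4% of serial CPU time goes to the pair-wise
# alignments, which are the parallelized part of the program.
s = amdahl_speedup(0.974, 64)   # ideal speed-up on 64 processors
reduction = 1.0 - 1.0 / s       # predicted upper bound on time saved
# The observed 94.82% reduction lies close to, and below, this bound;
# data sets with a smaller pair-wise share show smaller gains.
```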
The observed differences in the relative speed-up are due to the different proportions of CPU time spent on the pairwise alignments, see [ 23 ]. The larger this proportion is, the higher the relative speed-up that can be achieved by running these procedures on parallel processors (Amdahl's law [ 30 ]). Further improvements in program running time should be possible by parallelizing other parts of the algorithm, such as fragment sorting and consistency calculations during assembly of the multiple alignment [ 23 ]. Next, we looked at the improvement in program running time that can be achieved for large genomic sequences using the algorithm described above. As a test example, we used a set of three syntenic genomic sequences from mouse, rat and human. Each of these sequences is around 1 MB in length. The program CHAOS identified a total of 15,818 anchor points: 4,294 for human/mouse, 4,072 for human/rat and 7,452 for mouse/rat. Most of these anchor points were consistent with each other; only 121 out of the 15,818 anchor points had to be discarded because of consistency problems [ 18 ]. The consistent anchor points led to a set S1 containing 15,700 pairs of sub-sequences with an average length of 214 bp (note that, for each sequence pair, a chain of n anchor points divides the sequences into n + 1 pairs of sub-sequences). These pairs of sub-sequences were concatenated to obtain a set S2 of larger sub-sequences that could be evenly distributed to the available processors. Using CHAOS anchor points, the serial version of our program took 267,574 s = 74 h 19 m 34 s to compute the multiple alignment of our input sequences on a single processor of our cluster. We estimate that, without the anchoring option, the original DIALIGN program would have taken around three weeks to align these sequences.
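The relation between anchor points and sub-sequence pairs noted above (a chain of n anchors yields n + 1 pairs) can be sketched as follows; the sequences and anchor coordinates here are made up for illustration and are not CHAOS output:

```python
def split_at_anchors(seq_x, seq_y, anchors):
    """Cut a pair of sequences at a chain of anchor points.
    Each anchor is a pair (i, j): position i in seq_x is matched to
    position j in seq_y.  A chain of n anchors yields n + 1 pairs of
    sub-sequences (half-open slice coordinates)."""
    pairs, px, py = [], 0, 0
    for i, j in sorted(anchors):
        pairs.append((seq_x[px:i], seq_y[py:j]))
        px, py = i, j
    # The region after the last anchor forms the final pair.
    pairs.append((seq_x[px:], seq_y[py:]))
    return pairs

x, y = "ACGTACGTAC", "ACGGACGTTT"
pairs = split_at_anchors(x, y, [(3, 3), (7, 7)])
# 2 anchors -> 3 pairs of sub-sequences
```

Concatenating the first (or second) components of the pairs reconstructs the original sequence, which is the property that makes it safe to align the pieces independently between trusted anchors.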
By contrast, the parallel version of the program with 64 processors took only 6,583 s = 1 h 49 m 43 s to align the same sequence set, corresponding to a running-time improvement of 97.5%. With a reduction from more than three days to less than two hours, DIALIGN can now compute long-range multiple alignments of genomic sequences that were, until recently, far beyond its scope. Availability The program will be available online through the Göttingen Bioinformatics Compute Server (GO-BICS). The program code is available on request. Authors' contributions MS parallelized DIALIGN, installed it on the Kepler cluster, performed the test runs and wrote parts of the manuscript. KN and MK participated in the design of the project and manuscript preparation. BM supervised the project and wrote most of the manuscript. All authors read and approved the final manuscript.
549078 | CytoJournal's move to fund Open Access | CytoJournal is published by BioMed Central, an independent publisher committed to ensuring that peer-reviewed biomedical research is Open Access. Since its launch, BioMed Central has graciously supported the processing of all the articles published during CytoJournal 's first 6 months. However, to remain viable in the long term, CytoJournal must achieve financial self-sufficiency, at least to support publication expenses. From 1st March, 2005, authors will be asked by the publisher to pay a flat article-processing charge. This editorial discusses how a significant proportion of authors may not have to pay this fee directly under a variety of different mechanisms such as institutional and society memberships with BioMed Central. | Introduction CytoJournal is published by BioMed Central, an independent publisher committed to ensuring peer-reviewed biomedical research is Open Access – it is universally and freely available online to everyone, its authors retain copyright, and it is archived in at least one internationally recognised free repository [ 1 ]. CytoJournal , however, has taken this further by making all its content Open Access. Since its launch in August 2004, BioMed Central has graciously supported the processing of all the articles published during CytoJournal 's first 6 months. However, as we move forward, it is crucial that CytoJournal develops its own financial viability, at least to support publication expenses. Thus, from 1 st March, 2005, authors will be asked by the publisher to pay a flat £330 (approximately US $600/€500) article-processing charge (APC) [ 2 ]. As explained below, a significant proportion of authors may not have to pay the fee directly. This is because the APC can be covered/waived under a variety of different mechanisms such as institutional and society memberships with BioMed Central [ 2 ].
Please note that authors submitting any article online to CytoJournal before 1 st March, 2005 will not have to pay an APC, even if it is accepted for publication after peer review. Problems with the traditional publishing model Traditionally, readers pay to access articles, either through subscriptions or by paying a fee each time they download an article. Escalating subscription costs have resulted in libraries subscribing to fewer journals [ 3 ], and the range of articles available to readers is therefore limited. Although traditional journals publish authors' work for free (unless there are page or colour charges), having to pay to access articles limits how many people can read, use and cite them. This compromises the scientific impact of the publication and the overall visibility of the researchers' work. Definition of Open Access CytoJournal 's Open Access policy changes the way in which articles are published. First, all articles become freely and universally accessible online, so an author's work can be read by anyone at no cost. Second, the authors hold copyright for their work and grant anyone the right to reproduce and disseminate the article, provided that it is correctly cited and no errors are introduced [ 1 ]. Third, a copy of the full text of each article is permanently archived in an online repository separate from the journal. CytoJournal 's articles are archived in PubMed Central [ 4 ], the US National Library of Medicine's full-text repository of life science literature, and also in repositories at the University of Potsdam [ 5 ] in Germany, at INIST [ 6 ] in France, and in e-Depot [ 7 ], the National Library of the Netherlands' digital archive of all electronic publications. Benefits of Open Access Open Access has four broad benefits for science and the general public. First, authors are assured that their work is disseminated to the widest possible audience, given that there are no barriers to accessing it.
This is reinforced by the authors being free to reproduce and distribute their work, for example by placing it on their institution's website. They can still use the published material in any future plans, including book chapters, monographs, and presentations, with due acknowledgement of the original publication. It has been shown that free online articles are more highly cited because of their easier availability [ 8 ]. Second, the information available to researchers will not be limited by what their library can afford, and the widespread availability of articles will enhance literature searching [ 9 ]. Third, the results of publicly funded research will be accessible to all taxpayers and not just those with access to a library with a subscription. Note that this public accessibility is becoming a legal requirement for many countries and organizations [ 10 ]. Fourth, a country's economy will not influence its scientists' ability to access articles because resource-poor countries (and institutions) will be able to read the same material as wealthier ones (although creating access to the internet is another matter [ 11 ]). Open Access to original peer-reviewed scientific information for the general public and experts alike has many benefits. It can affect and shape public opinion. In the long run, the availability of such information in the field of cytopathology will translate into further progress in cytopathology and maximize its benefits to various fields of life sciences. Description of APC payment APCs will allow continued Open Access to all of CytoJournal 's articles. Authors are asked to pay a flat fee of £330, during 2005, if their article is accepted for publication [ 2 ]. Waiver requests will be considered on a case-by-case basis by the Editor-in-Chief.
Authors can avoid paying the charge directly by getting their institution to become a 'member' of BioMed Central, whereby the annual membership fee covers the APCs for all authors at that institution for that year [ 12 ]. Current members include numerous institutions in the USA, NHS England, the World Health Organization, the US National Institutes of Health, and all UK universities [ 12 ]. BioMed Central has also opened a new avenue that allows membership for societies, such as the various cytology, cytopathology, and pathology societies all over the world. The members of these societies could publish in CytoJournal without incurring an APC. We strongly encourage readers to ask their respective societies to become members under this provision. No charge is made for articles that are rejected after peer review. Many funding agencies have also realized the importance of Open Access publishing and have specified that their grants may be used directly to pay APCs [ 13 ]. As another avenue, Cytopathology-Foundation, Inc , a non-profit organization, is seeking help from appropriate organisations to support the cost of publication in CytoJournal , so that ultimately our journal will be an Open Access journal free of any financial burden for all CytoJournal authors. We appreciate any help in achieving this goal. You may contact us by e-mail at cytojournal@mcw.edu to extend your support. What the APC covers The APC pays for the manuscript publication process. It allows the article to be freely and universally accessible in various formats online, and pays for the processes required for inclusion in PubMed and archiving in PubMed Central, e-Depot, Potsdam and INIST. Although some authors may consider £330 expensive, it must be remembered that CytoJournal does not levy additional page or color charges on top of this fee; such charges can easily exceed £330 with conventional journals.
As the entire manuscript, with color PDF file, is freely available online, authors do not have to stock reprints or pay mailing charges to share the publication with their peers. The PDF file can be downloaded directly by the interested colleague/researcher, or the author can share it by e-mail at the click of a mouse. Conventional journals usually state in their instructions to authors that 'this journal does not reproduce color illustrations unless the cost of such reproduction is subsidized by the author and agreed upon in advance'. Restricting the number of images published in a manuscript, or publishing them in black and white, would significantly compromise the publication standard we hope to maintain for a morphology-based discipline such as cytopathology. With CytoJournal , the article being online only, any number of color figures and photographs can be included at no extra cost. In addition, CytoJournal being a web-based publication, a small video clip can also be included as part of a publication without extra cost. This provision to include video clip(s) is an excellent opportunity to clarify for readers the details of dynamic processes such as procedures and methodologies [ 14 ]. Free versus Open Access Although several journals now offer free access to their articles online, this is different from Open Access (as defined by the Bethesda Statement [ 15 ]). These journals often delay free access for 6-12 months, and even when the full text is available, readers are not allowed to reproduce and/or disseminate the work because of restrictions imposed by the copyright policy. That said, CytoJournal is not alone in the move to Open Access funded by APCs. The British Medical Journal has recently announced that it cannot continue to provide free access to its website [ 16 ] and is considering various sources of revenue, including APCs [ 17 ].
Also, the Public Library of Science has set up new Open Access journals and has elected to set an APC of US$1500 for each accepted article [ 18 ]. Given that the Public Library of Science has used television advertising to promote its journals [ 10 ], the high profile of these journals will raise awareness of Open Access and encourage researchers in all disciplines to understand and accept Open Access, with APCs as an acceptable method of funding it. Conclusion By providing a forum for Open Access, APCs will enable CytoJournal to serve the pathology, cytopathology, and entire medical community. We believe this change will benefit and aid scientific research in general. We hope you will support this progress by submitting your next article to CytoJournal . Abbreviations APC = article-processing charge. Competing interests At CytoJournal , the work of the Editors-in-Chief, the Editorial Board, and of all invited outside peer reviewers is entirely voluntary, without tangible remuneration of any kind. Our goal is the publication and dissemination of the highest quality literature and research in cytopathology and related areas. Our (intangible) rewards are in the achievement of these goals. Any decisions about manuscripts are based entirely on the quality of the scientific content, and not on the ability of authors to pay article-processing charges.
520743 | Is acupuncture a useful adjunct to physiotherapy for older adults with knee pain?: The "Acupuncture, Physiotherapy and Exercise" (APEX) study [ISRCTN88597683] | Background Acupuncture is a popular non-pharmacological modality for treating musculoskeletal pain. Physiotherapists are one of the largest groups of acupuncture providers within the NHS, and they commonly use it alongside advice and exercise. Conclusive evidence of acupuncture's clinical effectiveness and its superiority over sham interventions is lacking. The Arthritis Research Campaign ( arc ) has funded this randomised sham-controlled trial which addresses three important questions. Firstly, we will determine the additional benefit of true acupuncture when used by physiotherapists alongside advice and exercise for older people presenting to primary care with knee pain. Secondly, we will evaluate sham acupuncture in the same way. Thirdly, we will investigate the treatment preferences and expectations of both the participants and physiotherapists participating in the study, and explore the effect of these on clinical outcome. We will thus investigate whether acupuncture is a useful adjunct to advice and exercise for treating knee pain and gain insight into whether this effect is due to specific needling properties. Methods/Design This randomised clinical trial will recruit 350 participants with knee pain to three intervention arms. It is based in 43 community physiotherapy departments in 21 NHS Trusts in the West Midlands and Cheshire regions in England. Patients aged 50 years and over with knee pain will be recruited. Outcome data will be collected by self-complete questionnaires before randomisation, and 6 weeks, 6 months and 12 months after randomisation and by telephone interview 2 weeks after treatment commences. 
The questionnaires collect demographic details as well as information on knee-related pain, movement and function, pain intensity and affect, main functional problem, illness perceptions, self-efficacy, treatment preference and expectations, general health and quality of life. Participants are randomised to receive a package of advice and exercise; or this package plus real acupuncture; or this package plus sham acupuncture. Treatment details are being collected on a standard proforma. Interventions are delivered by experienced physiotherapists who have all received training in acupuncture to recognised national standards. The primary analysis will investigate the main treatment effects of real or sham acupuncture as an adjunct to advice and exercise. Discussion This paper presents details of the rationale, design, methods, and operational aspects of the trial. | Background Knee pain in older adults is a common disabling problem. Approximately 25% of the population aged over 55 years are affected at any one time and half of these will have some restriction of normal daily activities [ 1 , 2 ]. After excluding 'red flags' and specific pathologies such as inflammatory arthritis, most knee pain in older adults is due to osteoarthritis. Controlling the pain and minimising loss of function are the principal aims of treatment. Most sufferers are managed exclusively in primary care [ 3 - 5 ], where the usual approaches include analgesics and exercise [ 6 - 11 ]. A report from Arthritis Care [ 12 ] of patients' perspectives highlighted that people with knee osteoarthritis want treatment offering more pain relief and help with mobility. Easy to understand information was also felt to be important, as was exercise, to help manage the problem. A recent review of international guidelines suggests that, for patients with knee pain, the best non-pharmacological care consists of education, muscle strengthening and exercise [ 13 ].
Patients with musculoskeletal pain often choose methods of treatment that are not widely available within the NHS, such as complementary medicine [ 14 ]. Reports from the United States and the United Kingdom have indicated the popularity of complementary medicine with the general public and health care professionals [ 15 - 19 ]. Complementary medicine is available in approximately 40% of general practice surgeries, and general practitioners and physiotherapists are the largest providers of complementary medicine within the NHS [ 17 ]. Acupuncture is one of the most popular complementary medicine modalities in the UK: reports suggest that it is available in 84% of chronic pain clinics and that approximately 4000 general practitioners and physiotherapists are trained in acupuncture [ 21 , 22 ]. Although recent authors have promoted the concept of integrated practice incorporating conventional and complementary therapies [ 20 ], current guidelines highlight the need for further research evidence on the use of acupuncture for knee pain in older adults [ 13 ]. Neither the clinical effectiveness of acupuncture nor its superiority over sham interventions has been established. In addition to providing exercise and advice, physiotherapists are also one of the largest groups of acupuncture providers within both primary and secondary care in the NHS [ 23 ]. Physiotherapy is therefore an appropriate and important arena in which to investigate the effectiveness of integrated mainstream and complementary therapy. Evidence for advice and exercise International guidelines suggest that the best package of care for this patient group is one that includes patient education, advice and exercise [ 13 ]. There is strong evidence for the usefulness of education, muscle strengthening and aerobic exercise. The beneficial effects of exercise on knee pain are well documented and it is a key component of successful rehabilitation programmes for patients [ 24 , 25 ].
Active rehabilitation programmes for patients with musculoskeletal and arthritic pain not only improve joint function and reduce pain, but also improve strength, walking speed and self-efficacy [ 26 ] as well as quality of life, and they reduce the risk of other chronic conditions [ 27 ]. Randomised clinical trials consistently show the benefit of exercise for knee pain in older adults [ 28 - 31 ]. Recent studies also highlight the need to provide adequate instruction, feedback and practice in order to ensure that the key muscle groups around the knee, such as the quadriceps, are activated [ 32 ]. The European League Against Rheumatism (EULAR) recommendations have recently been updated and, in particular, advocate exercise for knee pain related to osteoarthritis [ 33 ]. In line with this evidence base, the current trial was designed so that all participants receive a package of care which includes education, advice, and exercise. Evidence for acupuncture The physiological properties of acupuncture have been well described in the laboratory. Acupuncture activates central mechanisms of pain control and elicits release of specific neurotransmitters (mainly opioids) in the central nervous system [ 34 , 35 ]. Effects on the autonomic nervous system have also been demonstrated during and after acupuncture stimulation [ 36 , 37 ]. Despite this, its clinical effectiveness remains a matter of controversy [ 38 , 39 ]. This is partly because of methodological limitations in many trials of acupuncture, including small sample sizes, lack of credible sham controls, and inadequate blinding [ 40 ]. Acupuncture has been shown to have a short-term analgesic effect in musculoskeletal pain [ 41 , 42 ]. A recent evaluation of acupuncture by the National Institutes of Health concluded that it has an analgesic effect on dental and orofacial pain and is a useful adjunct in a range of painful conditions, including musculoskeletal and myofascial pain [ 43 ].
In fibromyalgia, there is increasing evidence demonstrating the usefulness of acupuncture [ 44 , 45 ]. One meta-analysis concluded that acupuncture might offer benefit to patients with knee osteoarthritis when used as an adjunct to mainstream management strategies [ 46 ]. Appropriate sham interventions for acupuncture have been widely debated and several placebo needles have been introduced and tested [ 47 , 48 ]. A trial conducted in Germany recently concluded that true acupuncture has a better effect than sham acupuncture in the treatment of knee and back pain, but not for migraine headache [ 49 ]. In addition, another study reported positive effects of acupuncture for knee pain [ 50 ]. However, a key limitation of these studies is the lack of long-term follow-up, something which the current study has been designed to address [ 51 ]. We have designed, and are currently implementing, a prospective sham-controlled randomised trial within the primary care setting addressing the important clinical question: is acupuncture a useful adjunct to physiotherapy care (advice and exercise) for treating knee pain in older adults? Research and development in primary care is important to public health and necessary to support the decisions and treatments made in this setting [ 52 ]. The primary objective is to compare, at 6 months, the clinical outcomes of true acupuncture plus advice and exercise, with advice and exercise alone, for treating people aged 50 years and over referred directly from primary care with knee pain. Our secondary objectives are i) to compare, at 6 weeks and 12 months, the clinical outcomes of true acupuncture plus advice and exercise, with advice and exercise alone, in the same patient group. ii) to compare, at 6 weeks, 6 and 12 months, the clinical outcomes of sham acupuncture plus advice and exercise, with advice and exercise alone, in the same patient group.
iii) to measure patients' and physiotherapists' beliefs, preferences and expectations about the treatments being tested and to explore their effect on clinical outcome. Methods Trial design This multicentre, three-arm, sham-controlled randomised trial will be conducted in 43 individual Physiotherapy Centres that provide services for Primary Care Physicians located in 21 NHS Trusts situated in the Midlands and Cheshire regions of the UK. Multi-centre ethical approval has been obtained from the West Midlands Multicentre Research Ethics Committee and local approval was given by 12 ethics committees (Southern Derbyshire, Shropshire, Worcestershire, Warwickshire, Mid Staffordshire, South Staffordshire, Solihull, North Birmingham, South Birmingham, West Birmingham, East Birmingham, East Cheshire). The trial was designed by a steering group with expert input from physiotherapists, an acupuncture specialist and trial methodologists. Information will be collected from the individual participating physiotherapists (demographics, current training and use of acupuncture, attitudes and beliefs about knee pain, and beliefs and expectations of the three treatment packages being compared in the trial) prior to the commencement of the trial and after the trial has been completed (Table 1 ). Study population Participants include patients with knee pain aged 50 years and over referred to physiotherapy centres by their general practitioner. Participants will be randomised to one of three groups: (i) advice and exercise alone, (ii) advice and exercise plus true acupuncture, (iii) advice and exercise plus sham acupuncture. Follow up will be at 2 weeks (by telephone), 6 weeks, 6 months and 12 months after randomisation, by postal questionnaire. Non-responders will be followed up. Inclusion criteria Eligible patients are male and female subjects aged 50 years and above with pain (with or without stiffness) in one or both knees presenting to primary care.
They must be naïve to acupuncture treatment (i.e. have never experienced acupuncture before for their present or any past complaints), and considered suitable for referral to a physiotherapy outpatients department by their general practitioner. Participants must be able to read and write English, be willing to consent to participation, and be able to give full informed consent. They must also be available for telephone contact. Exclusion criteria Patients with potentially serious pathology (e.g. inflammatory arthritis, malignancy, etc.) on the basis of general practice or physiotherapy diagnosis or from past medical history, those who have had a knee or hip replacement on the affected side(s), are already on a surgical waiting list for total knee replacement, or for whom the trial interventions are contraindicated are excluded from the trial. Those who have received an exercise programme, from a physiotherapist, for their knee problem within the last 3 months (normal recreational involvement in sport or exercise will not be an exclusion) or an intra-articular injection to the knee in the last 6 months are also excluded. Participant recruitment Eligible patients referred by their GP to the physiotherapy departments will be invited to take part. Recruitment will take place over 18 months and will operate in one of two ways (See Figure 1 ): 1) Trial nurse To identify potentially eligible patients, a trial nurse will review GP referral letters received by participating physiotherapy departments that fall within a feasibly commutable geographical area of the Research Centre. 2) Local physiotherapy assessor A minimum of two members of the physiotherapy team will be involved at centres that fall outside the trial nurse's geographical area: one will be the nominated local assessor and a second will treat participants.
The nominated local physiotherapy assessor or trial nurse will perform an initial screen of the referrals to the physiotherapy department for potential participants. Potentially eligible patients will be posted information about the study, and their GP will be notified that the patient has been approached to take part. The GP will be asked to notify the Physiotherapy Department if they feel the patient is ineligible or unsuitable. Consent A minimum of 48 hours after receiving the information leaflet, patients will be telephoned by the nominated local assessor/trial nurse (within 10 working days) to further screen eligibility. Information will be recorded on a standard proforma. For those patients either not eligible or not willing to be recruited, the proforma will be used to detail the reason for ineligibility or for declining. Where they are willing, patients who decline to participate in the trial are asked two additional questions to capture their treatment preference and expectations with respect to acupuncture and an advice and exercise treatment package. This information will be anonymised. For patients willing to be recruited to the trial, an appointment is arranged for a research assessment at the patient's local physiotherapy department. After gaining verbal consent, the patient will be posted the baseline questionnaire to complete prior to their research appointment. At the research assessment visit the local physiotherapy assessor/trial nurse will perform a more detailed eligibility screen, explain the study, gain informed written consent to randomisation and conduct a baseline research interview and examination. Following consent to the study, the participant will be registered with the Research Centre by fax and allocated a unique trial number. The baseline assessment will be carried out blind to subsequent treatment allocation.
An appointment will be made for the treating physiotherapist to begin treatment within 10 working days of the research assessment. Consent to treatment will be gained by each treating physiotherapist prior to commencing treatment, as is current physiotherapy practice. All participants will have an initial clinical physiotherapy assessment and treatment session of up to 40 minutes' duration. During this session the physiotherapist will identify and record potential acupuncture points to be used should the participant be randomised to receive acupuncture (true or sham). A minimum of 6 and a maximum of 10 points will be selected, based upon the participant's presentation and the clinical opinion of the physiotherapist. This will be carried out as part of the overall physical examination of the knee; the therapist will not draw the participant's attention to the localisation of acupuncture points, to avoid raising their expectations about the possibility of receiving acupuncture. The advice and exercise package will then be started during this initial treatment visit. Randomisation Randomisation will take place after this initial physiotherapy session. The treating physiotherapist will telephone the Research Centre at Keele University during normal working hours. This methodology ensures that the initial physiotherapy assessment, and the advice and exercise package provided, are performed blind to subsequent treatment allocation. The specific trial interventions will commence during the participant's second treatment visit. During the randomisation telephone call the physiotherapist will be asked to identify the selected acupuncture sites to check that the participant has received their first pre-randomisation treatment session. The physiotherapists will also be asked questions about their own expectations of the individual participant's likely clinical outcome and their beliefs about which treatment they would like the participant to receive.
Participants recruited to the trial will then be randomised to one of the three trial interventions in a 1:1:1 ratio based on their unique trial number. Computerised third-party randomisation will be performed using random permuted blocks of 12 (blocked by treatment centre). Interventions The interventions will be delivered within 10 working days of randomisation by experienced physiotherapists, trained in acupuncture to at least the minimum standard for basic membership of the Acupuncture Association of Chartered Physiotherapists (AACP) (35 hrs of training). The participant's GP will be contacted at the time of randomisation and asked to avoid co-interventions for the period of the trial wherever possible, but especially until the 6-week follow up has been completed. However, if the GP feels that symptoms are sufficiently troublesome to need further treatment, this will be at the GP's discretion. Information about co-interventions will be collected in the follow-up questionnaires and by review of a sample of participants' clinical records. a) Advice and exercise Advice will be supplemented by a leaflet based on the arc Knee OA publication. This leaflet contains standard advice on the use of analgesia. If already using non-steroidal anti-inflammatory drugs, participants will be permitted to continue their stable dose. Participants will have the opportunity to discuss elements of the advice leaflet and exercise programme with their physiotherapist. In line with current practice, a maximum of 6 × 30-minute treatment sessions will be given over a period of 6 weeks. The advice and exercise programme has been developed through the use of reviews of current best evidence [ 23 , 27 , 55 ], clinical guidelines [ 9 ], a survey of current physiotherapy practice for knee pain [ 56 ], a consensus workshop, and local physiotherapy practice. The exercise programme will include concentric, eccentric, isometric and balance exercises.
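The allocation scheme described above — a 1:1:1 ratio maintained through random permuted blocks of 12, blocked by treatment centre — can be sketched as follows. This is an illustration of the technique, not the trial's actual randomisation software; the arm labels and seed are made up.

```python
import random

ARMS = ["advice+exercise", "advice+exercise+true", "advice+exercise+sham"]

def block_randomiser(block_size=12, seed=None):
    """Generator yielding treatment allocations in random permuted
    blocks: each block contains every arm block_size/len(ARMS) times
    in random order, so the 1:1:1 ratio is restored after every
    complete block.  One independent generator would be kept per
    treatment centre (blocking by centre)."""
    rng = random.Random(seed)
    while True:
        block = ARMS * (block_size // len(ARMS))
        rng.shuffle(block)
        yield from block

# One centre's allocation sequence; the first 12 participants
# form exactly one permuted block.
centre = block_randomiser(seed=42)
first_block = [next(centre) for _ in range(12)]
```

After any complete block of 12 allocations, each arm has been assigned exactly four times, whatever order the shuffle produced, which is the property that keeps group sizes balanced within each centre.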
Specificity of training, particularly in the first 6 weeks, is important and therapists will aim for a mix of functional exercises, open and closed kinetic chain exercises and accelerated walking elements. They will also clearly identify a home exercise programme with set targets. The intensity of the exercises will be progressively increased at each session. Previously sedentary individuals who are relatively untrained will initially be prescribed exercise of a low intensity (e.g. 1–3 sets of 8–12 repetitions of an exercise, 2–3 times per week). Progression to medium and high intensity exercise will occur only once adaptation to the current level of training has occurred. Exercises will be prescribed and individualised for each participant by the treating physiotherapist from "Physio Tools", a frequently used software package in physiotherapy. Hydrotherapy, group-based work, electrotherapy, additional acupuncture outside of the protocol and intra-articular injections will not be permitted. Participants may receive advice on the use of walking aids, and hot and cold applications. The key messages within the advice to participants include the common nature of knee problems and that rest for more than a day or two usually does more harm than good. b) Advice and exercise plus true acupuncture In addition to advice and exercise as detailed above, participants randomised to this group will receive 6 × 30-minute treatments of acupuncture, delivered over a period of 3 weeks. The acupuncture protocol is based on the concept of "treatment adequacy", which was introduced by Ezzo et al [ 46 ] and Melchart et al [ 57 ] and has recently been shown to affect long-term clinical outcome [ 58 ]. Physiotherapists delivering the treatment are provided with a choice of 16 of the most commonly cited local and distal points, from which they are required to choose between 6 and 10 points for each session.
Local points available include: Sp 9, Sp 10, St 34, St 35, St 36, Xiyan, Gb 34 and trigger points. Distal points available include: LI 4, TH 5, Sp 6, Liv 3, St 44, Ki 3, Bl 60 and Gb 41. Treatment will be performed with sterilised disposable steel needles, 30 × 0.3 mm. The depth of needle insertion should be between 0.5 and 2.5 cm depending on the points selected for treatment, and the needles will be manipulated until de-qi sensation is achieved. Therapists allow between 25 and 35 minutes between insertion of the last needle and cessation of treatment, and during that time they are to revisit the needles as appropriate. If the de-qi sensation is maintained, they should manipulate each needle lightly; if it is no longer present, they use stronger manipulation in order to elicit it. Participants will be informed that they may or may not experience an aching, warm or 'tingling' sensation from this type of stimulation. Therapists question participants at each session, asking them to describe the sensation they feel on needling, which is then recorded on a standard proforma. This also collects information on attendance, failed appointments, the physiotherapist's diagnosis and whether any additional treatment modalities or specialist referrals are made. c) Advice and exercise plus placebo acupuncture In addition to advice and exercise as detailed above, participants randomised to this group will receive sham acupuncture, which involves the placement of mock needles [ 47 ] upon a pre-defined set of points. The mock needle is a new device which participants find indistinguishable from true acupuncture; it has been used with success in a randomised trial comparing the effects of true versus placebo acupuncture in the treatment of shoulder pain [ 59 ].
The mock needle operates by allowing the shaft of the needle to collapse into the handle, creating an illusion of insertion (Figure 1 – adapted from [ 47 ]). The points chosen for the sham intervention receive no stimulation and participants will be told, as for the true acupuncture group, that they may or may not experience any particular sensation from this type of stimulation. The same parameters as for true acupuncture apply: placement of a minimum of 6 needles, and 6 × 30-minute sessions within 3 weeks, with monitoring of elicited sensations. Audit of interventions Using a standard proforma, the physiotherapists record the number and duration of treatment sessions each participant receives, plus details about the advice and exercises prescribed, the location and number of acupuncture points (where applicable) and any adverse reactions. The sensation that needling (true or placebo) evokes has been shown, in both clinical and experimental studies, to be a significant correlate of acupuncture-induced analgesia. Hence, the sensation evoked from each treatment in the acupuncture groups will also be recorded. Acceptability and credibility of the interventions will be evaluated using a telephone follow-up at 2 weeks from the beginning of treatment and a questionnaire administered at 6 weeks [ 60 ]. Baseline measures Participants who give verbal consent to participate in the trial are posted a baseline questionnaire which they are asked to complete and bring with them to the research interview and examination. Information collected on this questionnaire is detailed in Table 2 . These variables will be used to describe the study sample and as baseline measures of outcomes. Immediately following written informed consent, participants undergo a research interview and examination which follows a previously published schedule [ 66 ]. Follow-up Outcome measures will be performed at 2 weeks (Table 3 ), 6 weeks, 6 months and 12 months (Table 4 ).
Follow-up assessments will be performed using a telephone call at 2 weeks after the first treatment and self-completed postal questionnaires at all other time points. Non-responders will be telephoned 2 weeks after mailing of the follow-up questionnaire on up to 2 occasions and posted a replacement questionnaire with a reminder letter if there is still no response at 4 weeks. Sample size calculation The primary outcome measure for this trial is the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) pain sub-scale [ 61 ]. We have defined overall success as a 20% difference in the WOMAC pain sub-scale between true acupuncture and advice and exercise alone at 6 months. For this comparison, a minimum of 90 participants is needed in each group to reject the null hypothesis with 80% power and at a 5% significance level (two-tailed) [ 68 ]. As our trial will also compare sham acupuncture with advice and exercise alone, we have three groups and so need 270 participants. Allowing for a 30% drop-out rate in those recruited to the trial, the total number of participants required to be randomised is 350. Analysis Collection of data and statistical analysis will be performed blinded to treatment allocation. Analysis will be performed on an intention-to-treat basis and the primary outcome will also be analysed on a "per protocol" basis. Univariate analysis will be performed using t-tests to analyse numerical data and chi-square tests for categorical data. The clinical and demographic data collected at baseline will be inspected and, if there are any important differences between the trial groups, these factors will be used as covariates. These analyses will be performed using ANOVA and logistic regression as appropriate. The analysis of the secondary outcomes will be exploratory.
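The per-group figure of 90 in the sample size calculation above can be reproduced with the usual normal-approximation formula for a two-sample, two-tailed comparison. The standardised effect size of 0.42 below is a back-calculated assumption (the protocol specifies the target only as a 20% difference on the WOMAC pain sub-scale, without a standard deviation), so this is a sketch of the arithmetic rather than the trial's actual calculation:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample, two-tailed test:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # approximately 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # approximately 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

n = n_per_group(0.42)   # roughly 90 per group under this assumed effect size
total = 3 * 90          # three arms: 270 analysable participants
inflated = 270 * 1.3    # allowing for 30% drop-out: 351, reported as 350
```

Note that the protocol's figure is consistent with inflating 270 by the expected drop-out fraction (270 × 1.3 ≈ 350) rather than dividing by the retention rate (270 / 0.7 ≈ 386); both conventions appear in trial protocols.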
A univariate analysis with respect to the different treatments will be performed for the WOMAC pain sub-scale at 6 weeks and 12 months and for the WOMAC functioning sub-scale at 6 weeks, 6 and 12 months. The global outcome assessment will be analysed both as an ordinal and a dichotomous variable (categories 1–3 defined as "success"). Moreover, area under the curve slopes will be calculated for each treatment group over the whole treatment period and compared. Statistical significance will be set at the 5% level (two-tailed). Statistical analysis will be performed using Stata 7.0. The trial will be monitored by an independent Data Monitoring and Ethics Committee. No interim analysis of the primary or secondary outcomes will be undertaken during the trial period. Conclusions The APEX trial is a major trial of physiotherapy treatment for knee pain. Obtaining the participation of physiotherapists across the regions of the West Midlands and Cheshire, in 21 NHS Trusts, to work to agreed treatment protocols has been an important achievement. We have presented the rationale, design, and strategy for implementation of a multi-centre RCT examining whether acupuncture is a useful adjunct to usual physiotherapy care of advice and exercise for treating knee pain in older adults. The primary objective of the trial is to compare the clinical outcomes of true acupuncture plus advice and exercise with advice and exercise alone for treating people aged 50 years and over referred directly from primary care with knee pain.
The secondary objectives are to compare, at 6 weeks and 12 months, the clinical outcomes of adding true acupuncture to advice and exercise alone; to compare, at 6 weeks, 6 and 12 months, the clinical outcomes of placebo acupuncture plus advice and exercise with advice and exercise alone; and to measure patients' and therapists' beliefs, preferences and expectations about the treatments being tested and to evaluate the association between these variables and clinical outcomes. The results of this trial will be presented as soon as they are available. Competing interests The authors declare that they have no competing interests. Authors' contributions All authors participated in the design of the trial and drafting the manuscript. All authors have read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here:
A new method for determination of varicella-zoster virus immunoglobulin G avidity in serum and cerebrospinal fluid (PMC522815)

Abstract Background Avidity determination of antigen-specific immunoglobulin G (IgG) antibodies is an established serological method to differentiate acute from past infections. In order to compare the avidity of varicella-zoster virus (VZV) IgG in pairs of serum and cerebrospinal fluid (CSF) samples, we developed a new technique of avidity testing, the results of which are not influenced by the concentration of specific IgG. Methods The modifications introduced for the new VZV IgG avidity method included the use of urea hydrogen peroxide as denaturing reagent, the adaptation of the assay parameters in order to increase the sensitivity for the detection of low-level VZV IgG in CSF, and the use of a new calculation method for avidity results. The calculation method is based on the observation that the relationship between the absorbance values of the enzyme immunoassays with and without a denaturing washing step is linear. From this relationship, a virtual absorbance ratio can be calculated. To evaluate the new method, a panel of serum samples from patients with acute and past VZV infection was tested, as well as pairs of serum and CSF. Results For the serum panel, avidity determination with the modified assay gave results comparable to standard avidity methods. Based on the coefficient of variation, the new calculation method was superior to established methods of avidity calculation. Conclusions The new avidity method permits a meaningful comparison of VZV IgG avidity in serum and CSF and should be of general applicability for easy determination of avidity results, which are not affected by the concentration of specific IgG.

Background In addition to the determination of immunoglobulin M (IgM) antibodies, the avidity of immunoglobulin G (IgG) antibodies is an important parameter for the diagnosis or exclusion of acute infections.
Testing of IgG avidity has been applied for a large variety of pathogens (reviewed in [ 1 , 2 ]). A correct diagnosis is especially important for infections during pregnancy with rubella virus, cytomegalovirus and Toxoplasma gondii . Therefore, avidity testing has been particularly useful for these pathogens. Determination of antibody avidity is usually based on the separation of low and high avidity antibodies by denaturing agents in enzyme immunoassays (EIA) or immunofluorescence assays. Several agents such as guanidine hydrochloride [ 3 ], diethylamine [ 4 ], thiocyanate [ 5 ] or urea [ 6 ] have been used for this purpose. These protein denaturants have been either included in the sample diluent (diluting principle) or in the washing buffer after the serum incubation step (eluting principle). Calculation of the avidity result has been performed in numerous ways. Mostly, avidity is expressed as percent ratio of antibody titers or EIA absorbance values with and without denaturation. Avidity results based on end-point titration with and without denaturant are considered to be the gold standard for avidity determination [ 7 ]. This technique has an excellent sensitivity and specificity and is considered not to be influenced by the concentration of the specific IgG. To reduce the considerable expense and labour required for the titration curves, simplified avidity tests based on single-point determinations using EIAs have been described [ 6 , 8 - 11 ]. In these assays, EIA absorbance values or antibody titers mathematically derived from single dilutions have been used for avidity calculation. The avidity results based on single-point absorbance values are to some extent influenced by the concentration of specific IgG, but this is usually not critical for the distinction between low and high avidity. Diagnosis or exclusion of acute infection is the most common but not the only application of avidity determination. 
Determination of antibody avidity in pairs of serum and cerebrospinal fluid (CSF) samples has been suggested as a diagnostic means to detect intrathecal antibody synthesis [ 12 ] and to differentiate viral encephalitis from multiple sclerosis [ 13 ]. Longitudinal measurements of HIV avidity have been proposed to be useful for the assessment of HIV progression [ 14 ]. Because both of these applications involve comparative measurements of avidity where small differences may be of diagnostic importance, a technique for accurate avidity determination is required which is independent of antibody concentrations. The aim of our study was to establish an avidity assay that can be used for comparison of the avidities of varicella-zoster virus (VZV) IgG in serum and CSF. In order to achieve this aim, several modifications of a commercial VZV IgG assay were introduced. These modifications include the use of a novel denaturing agent and a new method to calculate the avidity results. Methods Serum and CSF samples The serum and CSF samples used in this study had been sent to the virological laboratory at the University of Würzburg for routine VZV testing and were stored in aliquots at -20°C. Two groups of serum samples were analyzed for the evaluation of the standard and modified VZV avidity assay. Group 1 consisted of 28 samples from 19 patients with acute or recent VZV infection and included 5 follow-up samples of patient F19, spanning a period of 11 months. The cases of this group fulfilled the following criteria: the presence of clinical symptoms suggestive of varicella, information on the disease onset, and a positive VZV IgM (Enzygnost Anti-VZV/IgM, Dade Behring, Marburg, Germany). Group 2 consisted of 37 samples from 37 subjects with infection in the distant past. In the subjects of this group, VZV IgG antibodies had been detected in previous samples taken at least 8 months earlier. 
For initial evaluation of the VZV avidity assay for CSF samples, three sample pairs of serum and CSF were tested, two from patients with VZV encephalitis (V2.1 and V9.2) and one from a patient with multiple sclerosis (M22). Standard VZV IgG avidity determination VZV IgG avidity determination of serum samples was performed in a semi-automated fashion using the Enzygnost Anti-VZV/IgG test kit (Dade Behring) with some modifications. Each of the sera diluted 1:231 in sample buffer was placed in two antigen-coated microtiter plate wells. After 60 min at 37°C, one well was washed according to the instructions with the supplied washing buffer. The other well was washed with a solution of urea, or urea hydrogen peroxide, in the supplied washing buffer (for details, see Results) and once with the washing buffer only. The subsequent steps were carried out according to the instructions in an automated fashion using the Behring Elisa Processor III (BEP III; Dade Behring). Peroxidase-conjugated anti-human IgG was added in a 1:50 dilution and the plate was incubated at 37°C for 60 min. After three further washing steps with the supplied washing buffer, tetramethylene benzidine dihydrochloride was added as substrate and kept at room temperature for 30 min. The reaction was stopped with 0.5 N sulfuric acid. Control samples of acute and past VZV infections were included in each run. Where appropriate, VZV IgG antibodies were quantified by a one-point-quantification method (α-method, Dade Behring) according to the instructions of the manufacturer. VZV IgG avidity determination of CSF-serum pairs To increase the sensitivity of the VZV IgG EIA in order to detect low-titer VZV IgG antibodies in CSF, the standard serum assay was modified as follows. The sample incubation time was increased to 180 min; the anti-human IgG was used in a 1:30 dilution and its incubation time was increased to 90 min.
All incubations for the CSF assay were performed at room temperature because variation was found to be lower than with incubation at 37°C. For the serum samples tested for evaluation purposes, the standard dilution of 1:231 was increased by a factor of 6 to yield a final dilution of 1:1386. Determination of VZV IgG avidity in CSF was always done in parallel with serum samples from the same time-point. Both serum and CSF were tested in at least four dilutions of a two-fold titration series. The starting dilution for each sample was derived from routine determinations of VZV IgG titers. For CSF samples, the starting dilution was at least 1:6. Calculation of the avidity index VZV IgG avidity was calculated by various methods. First, it was calculated as the ratio of EIA absorbance values obtained with and without the denaturing washing step. Alternatively, one-point quantification titers (see above) instead of absorbance values were used for avidity calculation. Two additional methods were used for determination of VZV IgG avidity in CSF and the corresponding serum samples. One method involved the use of the software "Avidity 1.2", based on curve-fitting analysis of serial dilutions [ 15 ]. Secondly, we developed a new calculation method based on our observation from dilution series that the relationship between the absorbance values obtained without and with denaturing agents is linear. Thus, the relationship can be described by the equation y = m × x + c, where m is the slope, c the intercept, x the absorbance without the denaturant (abs_ref) and y the absorbance with the denaturant (abs_denat). The m- and c-values of the equation for each sample were calculated with the software Excel (Microsoft) from two or more experimentally derived data points. It is essential that the absorbance values obtained without denaturant (abs_ref) fall in the linear range of the enzyme immunoassay.
For the VZV IgG assay with modified assay conditions, the linear range of the absorbance values without denaturant extended from approximately 0.200 to 2.800. Avidity was determined from the linear equation for various virtual x-values as the percent ratio of the absorbance values with and without denaturing agent, i.e. avidity = abs_denat/abs_ref = (m × x + c)/x. The result was referred to as the virtual absorbance ratio. Results In order to evaluate the usefulness of the Enzygnost VZV IgG EIA for avidity testing, a panel of serum samples from patients with acute VZV infection of defined onset (group 1) and from controls with VZV infection in the distant past (group 2) was tested. In a preliminary experiment with three samples from each group, two different denaturing conditions were employed. The separation of the two sample groups with one 3 min washing step using 4 M urea hydrogen peroxide was superior to three 5 min washing steps with 5 M urea (Table 1 , Figure 1 ). Therefore, urea hydrogen peroxide was used for the following experiments. The results of avidity testing with urea hydrogen peroxide of all serum samples of groups 1 and 2 are shown in Figure 2a . When avidity indices were calculated as the percent ratio of absorbance values, there was a clear distinction between the avidity values of the two groups. Using a cut-off of 37 %, all samples from patients with disease onset of less than 50 days had avidity values below the cut-off, while all samples from the control group with VZV infection in the distant past had avidity values above the cut-off. Thus, a cut-off of 37 % resulted in a sensitivity and specificity of 100 % for the detection and exclusion of VZV infections within the last 50 days. The standard result format of the Enzygnost VZV IgG assay is titers that are mathematically derived from single-point absorbance values (one-point quantification).
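The virtual absorbance ratio described above can be computed in a few lines. The sketch below uses invented absorbance values for a hypothetical sample measured at two dilutions, not data from the study:

```python
def linear_fit(point1, point2):
    """Slope m and intercept c of abs_denat = m * abs_ref + c, determined
    from two dilutions each measured with and without the denaturing wash."""
    (x1, y1), (x2, y2) = point1, point2
    m = (y2 - y1) / (x2 - x1)
    c = y1 - m * x1
    return m, c

def virtual_absorbance_ratio(m, c, x=2.0):
    """Avidity index (%) at a fixed virtual reference absorbance x; x must
    lie within the linear range of the EIA (up to ~2.8 in the CSF assay)."""
    return 100 * (m * x + c) / x

# Hypothetical (abs_ref, abs_denat) pairs from two dilutions of one sample:
m, c = linear_fit((1.20, 0.50), (2.40, 1.10))
avidity = virtual_absorbance_ratio(m, c)   # 45.0 % for these invented values
```

Because the index is evaluated at the same fixed x for every sample, serum and CSF results become directly comparable despite their very different IgG concentrations.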
Because it had been shown previously for the Epstein-Barr virus (EBV) assay that avidity index calculations based on one-point-quantification titers gave results better than calculations using absorbance values [ 11 ], the avidity indices were recalculated using VZV IgG one-point-quantification titers instead of absorbance values. There was an excellent correlation between both methods of avidity index calculation, but in contrast to the EBV assay, the one-point-quantification method was not superior to that using absorbance values (data not shown). In general, IgG concentrations in CSF are lower than in the corresponding serum by a factor of 200 to 1000. Therefore, the detection sensitivity of the VZV IgG assay was increased by prolonging incubation times and increasing the concentration of the anti-human IgG conjugate in order to reliably detect VZV IgG in all CSF samples from patients with positive serum VZV IgG. To study the effect of these assay modifications on avidity determination, all serum samples of group 1 and 17 randomly chosen samples of group 2 were retested with the modified assay. All the serum samples were tested at a dilution of 1:1386 instead of the 1:231 used in the standard serum assay. The results are shown in Figure 2b . An avidity cut-off of 23 % resulted in a sensitivity of 95 % and specificity of 95 % for the detection and exclusion of VZV infections within the last 50 days, respectively. Although the standard serum conditions gave better results, there was again a clear separation between samples from both groups. Based on these results, the avidity assay with urea hydrogen peroxide and modified assay conditions to increase the sensitivity for VZV IgG detection was considered appropriate for use in VZV IgG avidity studies with CSF and is henceforth called the CSF avidity assay. For meaningful interpretation of avidity values in CSF, it is necessary to test serum samples obtained in parallel with the CSF.
Because of the large difference of the IgG concentrations between serum and CSF, comparison of the avidity results necessitates independence of IgG concentration. To study this issue, CSF and serum samples were tested in twofold dilution series with the CSF avidity assay. Avidity was calculated as percent ratio of absorbance values from each dilution. The results for representative CSF and serum samples are shown in Figure 3 . The avidity indices of most of the samples varied considerably depending on the sample dilution. For some samples, the range of avidity indices was greater than 20 %. The coefficient of variation (CV) was mostly higher than 10 % (examples in Table 2 ). Thus, this method could not be used for comparative testing of serum and CSF samples. Alternatively, avidity indices were calculated with the software "Avidity 1.2", which is based on a mathematical model described previously by Korhonen et al. [ 15 ]. The software calculates avidity values from two sample dilutions, each tested with and without protein denaturant. To evaluate the influence of the working dilutions on the avidity indices obtained with this software, the avidity indices were calculated from different pairs of dilutions of the same sample. The results for representative examples are shown in Table 3 . The CV ranged from 5.9 % to 19.7 %. Because the two methods for avidity calculation described above were significantly influenced by the chosen working dilutions, we developed a new calculation method of antibody avidity. It is based on our observation that the relationship between absorbance values without and with denaturing agent is nearly linear. This relationship can therefore be described by the linear equation y = m × x + c, where m is the slope, c the intercept, x the absorbance without denaturant and y the absorbance with denaturant. Representative examples from patients with acute and past VZV infection and of serum and CSF pairs are shown in Figure 4 . 
After experimental determination of the sample-specific m- and c-values, this observation allows calculating the y-value for any virtual absorbance without denaturation (x-value). Absorbance results from two dilutions are required in order to calculate the m- and c-values for a sample. In order to evaluate the influence of the chosen sample dilution on the avidity index derived from the virtual absorbance ratio, several samples were tested in series of twofold dilutions with four or more steps. The m- and c-values were calculated from different pairs of dilution steps for each sample. As a result of preliminary experiments, the absorbance 2.0 was chosen as the virtual x-value for the calculation of avidity values with the new method. (The linear range of the absorbances in the CSF assay format extended up to 2.8.) Table 3 shows that the CV for the avidity indices calculated by the new method from different pairs of dilution steps was lower than the CV obtained with the software "Avidity 1.2" from the same dilution series. Based on theoretical considerations we speculated that the slope of the linear equation by itself might represent the avidity of a sample, i.e. the steeper the linear curve, the higher the avidity. This was in agreement with the data from follow-up samples of a patient with acute VZV infection over a period of 11 months (Figure 5 ). However, comparison of the CVs demonstrated that avidity calculations using a fixed x were less influenced by the chosen dilution than avidity indices based on the slope (data not shown). Further dilution series of CSF and corresponding serum were tested with the CSF avidity assay in order to confirm that the influence of the IgG concentration on the avidity indices derived from the virtual absorbance ratio was minimal. Table 2 shows that the CVs with this method were equally low with CSF and serum. 
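The robustness check described in this paragraph, recomputing the avidity index from every pair of dilution steps and comparing the coefficients of variation, can be sketched as follows; the dilution series is invented for illustration and is not data from the study:

```python
from itertools import combinations
from statistics import mean, stdev

def avidity_from_pair(p1, p2, x=2.0):
    """Avidity index (%) at virtual reference absorbance x, from two
    (abs_ref, abs_denat) measurements of the same sample."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)   # slope of abs_denat vs abs_ref
    c = y1 - m * x1             # intercept
    return 100 * (m * x + c) / x

def cv_percent(values):
    """Coefficient of variation in percent."""
    return 100 * stdev(values) / mean(values)

# Hypothetical twofold dilution series, (abs_ref, abs_denat) per step:
series = [(2.60, 1.20), (1.40, 0.62), (0.75, 0.33), (0.40, 0.17)]
indices = [avidity_from_pair(a, b) for a, b in combinations(series, 2)]
cv = cv_percent(indices)   # small CV: the result barely depends on which
                           # pair of dilutions is chosen
```

A low CV across all pairs of dilution steps is exactly the behaviour the authors report for the virtual absorbance ratio, in contrast to single-point absorbance ratios.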
Results obtained with the software "Avidity 1.2" and results based on the absorbance ratios of single dilutions are presented for comparison. Although for two samples the CVs with "Avidity 1.2" were slightly lower than with the new method (CSF of V2.1 and CSF of V9.2), for the others the new method gave lower CVs. Avidity values derived from absorbance ratios resulted in the highest CVs for all samples. Discussion We established a VZV avidity assay that is suitable for comparative evaluation of antibody avidity in serum and CSF samples. In order to achieve this aim several modifications of a serum VZV avidity assay were necessary. Though there have been a few studies on the use of avidity assays for VZV IgG antibodies [ 16 - 18 ], no standard method or commercial assays exist for the determination of VZV IgG avidity. Therefore, we first established conditions for a VZV IgG avidity assay that can be used for the differentiation of acute and past VZV infection. The serum assay was highly sensitive and specific for the diagnosis and exclusion of primary VZV infections. However, diagnostic applications of VZV IgG avidity determinations with serum samples are rare, because the diagnosis of acute or reactivated VZV infection is usually made clinically or by virus detection methods. Urea hydrogen peroxide appeared to be superior to urea in the optimization experiments for the VZV avidity assay and presents a novel option for a denaturing agent in avidity measurements. The denaturing effect of one washing step with a 4 M solution of urea hydrogen peroxide was more pronounced than that of three washing steps with a 5 M urea solution. Thus, the denaturing potential of urea hydrogen peroxide appears to be greater than that of urea. Furthermore, because of better solubility, handling of solutions of urea hydrogen peroxide is easier than handling solutions of urea in high molarity. 
Nevertheless, the optimal denaturing agent may vary between different antigens necessitating careful evaluation of denaturing conditions for each avidity assay. Under normal conditions, virus specific IgG is present in the CSF in very low concentrations. With an intact blood-CSF-barrier, the concentration gradient between serum and CSF is 200:1 to 1000:1. Thus, the usual serological methods for antibody quantification in serum are not sensitive enough to routinely detect specific IgG in CSF. Therefore, we modified the VZV IgG EIA to increase its sensitivity. This modification made it possible to detect and quantify VZV IgG in CSF samples from virtually all patients with measurable VZV IgG in serum. Corresponding serum samples were always run in parallel with CSF in the same assay. The CSF and serum dilutions were chosen individually for each sample pair in order to achieve similar absorbance values in the modified EIA. This assay formed the basis for avidity determination of CSF and the corresponding serum. For meaningful comparison of avidity values in serum and CSF, it is necessary to apply a calculation method that is independent of the concentration of specific IgG in the samples examined. One-point determination represents the simplest method for avidity index calculation, but such methods did not yield satisfactory results in this respect. The avidity technique based on end-point-titration is not influenced by the IgG concentration and is considered the reference method for avidity determination [ 1 , 7 ]. However, it is relatively laborious and reagent consuming, requiring two distinct dilution series for the assays with and without protein denaturant. Therefore, we attempted to use the logistic approach based on end-point-titration, with fewer dilutions per sample [ 15 ]. This method works well in distinction between acute and past infections from samples of serum. 
However, under the conditions of the CSF avidity assay we found that the results of the logistic model were not uniformly independent of the sample working dilutions, albeit without straightforward dose-dependence of specific IgG. The reason for this limitation with our assay may lie in its modifications to increase sensitivity. Possibly, the curve fitting obtained with the logistic model would have to be adjusted in order to account for the special conditions of the CSF avidity assay. Searching further for a simple calculation method of avidity indices independent of IgG concentration and sample dilutions, we observed a linear relationship between the absorbance values of the assays without and with denaturing washing conditions. After performing an avidity assay of two dilutions of a given sample, this relationship can be exploited to calculate a linear equation from the two pairs of absorbance values (with and without denaturation). The only requirement is that all absorbance values should fall in the linear range of the EIA. Once for a given sample the linear equation is experimentally determined, the equation can be used for calculation of the virtual absorbance value with denaturation (abs denat ) for any virtual absorbance value without denaturation (abs ref ). The avidity index is then calculated as the ratio of the two virtual absorbance values (abs denat /abs ref ). Thus, if a fixed abs ref is chosen for all samples to be compared, the avidity index becomes independent of the concentration of specific IgG in the samples. These theoretical considerations have been confirmed by the results obtained from dilution series in this study. It will be interesting to test the general applicability of this method for standard avidity assays by comparison with single-point and end-point avidity determinations. 
Conclusions In summary, we have described several modifications of existing avidity techniques that have the potential to broaden the use of avidity assays in diagnosis and research. Urea hydrogen peroxide is a novel denaturing agent, which appears to be advantageous compared to urea in terms of handling conditions. The new calculation method for avidity indices, based on a linear equation with fixed reference absorbance values, is cost-effective and simple and has the potential to substitute for other avidity calculation procedures. The CSF avidity assay in combination with the new calculation method allows determination of the avidity of VZV IgG in corresponding serum and CSF samples with high precision and independently of the VZV IgG concentration. These features are necessary in order to study avidity maturation in serum and CSF in patients with intrathecal synthesis of VZV-specific IgG antibodies. Competing interests BW has received research and travel grants from Dade Behring. KH is employed by an organization (The Helsinki University Hospital Laboratory, HUSLAB) using the Avidity 1.2 software in infectious-disease diagnosis, and is a shareholder of an SME (Headman Ltd.) with commercial interest in it. Authors' contributions RHK, JS and FT carried out the immunoassays and participated in the design of the study. RHK carried out the avidity index calculations. WZ participated in establishing the standard VZV IgG avidity assay. KH provided the software "Avidity 1.2" and participated in the data analysis. BW conceived of and coordinated the study and drafted the manuscript. All authors contributed to the final editing of the manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: 
554997 | Increased capsaicin receptor TRPV1 in skin nerve fibres and related vanilloid receptors TRPV3 and TRPV4 in keratinocytes in human breast pain | Background Breast pain and tenderness affects 70% of women at some time. These symptoms have been attributed to stretching of the nerves with increase in breast size, but tissue mechanisms are poorly understood. Methods Eighteen patients (n = 12 breast reduction and n = 6 breast reconstruction) were recruited and assessed for breast pain by clinical questionnaire. Breast skin biopsies from each patient were examined using immunohistological methods with specific antibodies to the capsaicin receptor TRPV1, related vanilloid thermoreceptors TRPV3 and TRPV4, and nerve growth factor (NGF). Results TRPV1-positive intra-epidermal nerve fibres were significantly increased in patients with breast pain and tenderness (TRPV1 fibres / mm epidermis, median [range] – no pain group, n = 8, 0.69 [0–1.27]; pain group, n = 10, 2.15 [0.77–4.38]; p = 0.0009). Nerve Growth Factor, which up-regulates TRPV1 and induces nerve sprouting, was present in basal keratinocytes: some breast pain specimens also showed NGF staining in supra-basal keratinocytes. TRPV4-immunoreactive fibres were present in sub-epidermis but not significantly changed in painful breast tissue. Both TRPV3 and TRPV4 were significantly increased in keratinocytes in breast pain tissues (TRPV3, median [range] – no pain group, n = 6, 0.75 [0–2]; pain group, n = 11, 2 [1–3], p = 0.008; TRPV4, median [range] – no pain group, n = 6, [0–1]; pain group, n = 11, 1 [0.5–2], p = 0.014). Conclusion Increased TRPV1 intra-epidermal nerve fibres could represent collateral sprouts, or re-innervation following nerve stretch and damage by polymodal nociceptors. Selective TRPV1-blockers may provide new therapy in breast pain. The role of TRPV3 and TRPV4 changes in keratinocytes deserves further study. | Background Breast pain is a common problem, which can affect up to 70% of women [ 1 ]. 
Breast pain or mastalgia can be cyclical or non-cyclical. The cyclical type of breast pain has been attributed to sex hormonal changes through the menstrual cycle that may increase the size of the breast tissue, which stretches the internal structures and causes pain or soreness. Numerous studies have demonstrated variation in pain perception during the menstrual cycle [ 2 - 5 ]. Heat sensitivity is increased in the luteal (day 17–22) phase of the menstrual cycle [ 6 ] and lowest in the periovulatory phase (day 12–16), but other studies have shown variation at other times in the cycle. Non-cyclical breast pain can be caused by hormonal influences, particularly oestrogen, and by other causes such as macromastia, local infection or inflammation; rarely, breast cancer can present as breast pain. Macromastia may cause areas of numbness in the breast and problems with nipple erectile function, which is thought to be related to the stretching of the nerve supply with increase in breast size [ 7 ]. Post-surgical breast pain is also a significant entity, with about 50% of women who undergo mastectomy suffering from chronic pain one year after their operation [ 8 , 9 ]. The mechanisms of breast pain in the majority of women are not well understood at the cellular or molecular level. We hypothesized a relationship between clinical breast pain, nerve growth factor (NGF) and its regulated ion channels or receptors expressed by nociceptor fibres. Oestrogens upregulate NGF receptor mRNA in sensory neurons [ 10 ], and enhance the proliferative effects of NGF [ 11 , 12 ]. As NGF is a key molecule that determines the sensitivity of nociceptors in humans [ 13 ] and animal models [ 14 ], sex hormonal influences could be responsible for altered NGF activity during the menstrual cycle, leading to cyclical breast soreness or pain. 
NGF expression is also increased by inflammation, and this is responsible for the collateral nerve fibre sprouting and hypersensitivity of nociceptor fibres associated with inflammation. The hypersensitivity is, in part, mediated via the capsaicin or vanilloid receptor 1 (TRPV1), which is required for thermal hyperalgesia in rodents [ 15 , 16 ], and is activated by heat pain. Thermal hyperalgesia can occur during the menstrual cycle; it is well known that core body temperature alters during the cycle (this is a qualitative test for ovulation), and thus heat conductance, perception and tolerance of heat alter during the cycle [ 2 , 6 ]. The TRPV1 receptor is also activated by the products of inflammation. We have therefore studied TRPV1-expressing nerve fibres and NGF in skin from women with and without breast pain and tenderness. The recently discovered vanilloid thermoreceptors TRPV3 and TRPV4, which are also expressed by sensory fibres and activated by warmth, were also studied [ 17 , 18 ]. Methods Patients Eighteen patients were recruited (n = 12 breast reduction for macromastia; n = 6 breast reconstruction) at Chelsea and Westminster, Charing Cross and Ravenscourt Park Hospitals in London and Broomfield Hospital in Essex. Breast reduction patients had no previous surgery. The breast reconstruction patients had latissimus dorsi flap reconstructions after previous mastectomies, and had implants. Patients below 18 or above 70 years of age, or with any local skin inflammation, infection or cancerous skin changes, were excluded. The Research Ethics Committee of Hammersmith Hospitals Trust and Mid Essex Hospitals Trust gave ethical permission for the study. Informed consent was obtained prior to the clinical examination and questionnaire administration. Clinical pain assessment Age, parity, height, weight and menstrual data were collected. Details of current surgery, any previous breast surgery and breast disease were also recorded. 
A questionnaire was administered that included questions on breast and period pain, a diagram on which to indicate painful and tender areas, the 78 pain descriptors from the McGill Pain Questionnaire [ 19 ], and a 10 cm unmarked visual analogue scale (VAS). The presence of breast pain was defined from the results of the Breast Pain Questionnaire: patients with a total Pain Rating Index (PRI (total)) [ 19 ] and a VAS score (marked by the patient, measured in centimetres) greater than zero were identified as having breast pain. Only two patients were taking simple analgesia for breast pain. Immunohistochemistry Full-thickness skin biopsies of about 2 mm depth were collected from each patient along the incision line. Samples were coded, frozen on site and stored at -70°C. The skin samples were mounted in embedding medium (Tissue-Tek OCT compound, Sakura Finetek, USA). Frozen tissue sections (10 μm) were collected onto poly-L-lysine-coated (Sigma, Poole, Dorset, UK) glass slides and post-fixed in freshly prepared 4% w/v paraformaldehyde in phosphate buffered saline (PBS; 0.1 M phosphate; 0.9% w/v saline; pH 7.3). After washing in PBS, endogenous peroxidase was blocked by incubation with 0.3% w/v hydrogen peroxide in methanol. After a further wash in PBS the tissue sections were incubated overnight with affinity purified antibodies to TRPV1 (polyclonal rabbit anti-TRPV1; GlaxoSmithKline, Harlow, UK; 1/5000; 1/10,000), TRPV3 (polyclonal rabbit anti-TRPV3; GlaxoSmithKline, Harlow, UK; 1/1000), TRPV4 (polyclonal rabbit anti-TRPV4; GlaxoSmithKline, Harlow, UK; 1/250; 1/1000), recombinant human NGF (polyclonal rabbit anti-NGF; Genentech, San Francisco, USA; 1/4000) or a marker of large and some small calibre nerve fibres (mixed mouse monoclonal antibodies to neurofilaments 200 kD, 70 kD and 57 kD; DakoCytomation, Cambs., UK, 1/50,000; Novocastra, Newcastle upon Tyne, UK, 1/500). 
Methodological controls included omission of primary antibodies or their replacement with pre-immune serum. Specificity of the antibodies has been described in previous publications [ 18 , 20 ]. Sites of antibody attachment were revealed using biotinylated goat anti-rabbit or biotinylated horse anti-mouse IgG (Vector Laboratories, High Wycombe, Bucks., U.K.) and nickel-enhanced immunoperoxidase (avidin-biotin complex – ABC elite; Vector Laboratories, High Wycombe, Bucks., U.K.). Nuclei were counterstained with 0.1% w/v aqueous neutral red. The intensity of NGF immunostaining was graded on a scale of 0–3, where 0 = negative or no immunoproduct, 1 = weak immunoproduct, 2 = intermediate intensity immunoproduct and 3 = intense immunoproduct. Intra-epidermal and sub-epidermal TRPV1-, TRPV4- or neurofilament-positive fibres were counted, and the length and thickness of the epidermis were measured using a calibrated microscope eyepiece graticule. Similarly, fibres that extended through the epidermis were counted, along with arborising "clusters" of fibres. The two observers who performed the histological studies were blinded with regard to clinical pain scores. Statistical analyses A non-parametric, two-tailed test (Mann-Whitney U) was used. Commercially available statistical software was used to perform the test (Prism 3™). Results Clinical pain assessment Pain Rating Indices (PRI) and Visual Analogue Scores (VAS) were used to group patients with (n = 10) and without (n = 8) breast pain: pain group – PRI median (range) 12 (4–30), mean (SEM) 13.44 (3.07); VAS median (range) 5 (3.7–6.7), mean (SEM) 5.14 (0.31); in patients without breast pain, PRI and VAS were zero. Only 2 patients reported thermal pain descriptors (burning, hot), while most reported ache and tenderness. The numbers of breast reduction and reconstruction patients with pain were 7 out of 12 and 3 out of 6 respectively, and all reported pain of duration greater than 6 months. 
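The per-millimetre normalisation of fibre counts and the non-parametric two-group comparison can be sketched as follows. The counts and section lengths below are hypothetical (not the study's raw data), and only the U statistic is computed, not the p-value, which the authors obtained from Prism:

```python
def fibres_per_mm(fibre_count, epidermis_length_mm):
    """Normalise a raw intra-epidermal fibre count by measured epidermis length."""
    return fibre_count / epidermis_length_mm

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample a versus sample b.

    Counts pairs (x from a, y from b) with x < y; ties count 0.5.
    A real analysis would also derive the p-value from U.
    """
    u = 0.0
    for x in a:
        for y in b:
            if x < y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical (fibre count, epidermis length in mm) per skin section
no_pain = [fibres_per_mm(c, l) for c, l in [(2, 2.9), (1, 1.5), (3, 4.1)]]
pain = [fibres_per_mm(c, l) for c, l in [(9, 4.2), (5, 2.3), (7, 3.3)]]

u = mann_whitney_u(no_pain, pain)  # here every pain value exceeds every no-pain value
```

When every value in one group exceeds every value in the other, U reaches its maximum (the product of the two sample sizes), the pattern behind the highly significant TRPV1 result reported below.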
The presence (n = 8) or absence (n = 10) of dysmenorrhoea was also recorded. There was no association between breast pain and the presence of dysmenorrhoea. Immunostaining TRPV1 TRPV1-immunoreactive fibres were present mainly in the sub-epidermis in normal (pain free) skin (Fig. 1A ). In breast pain, TRPV1-immunoreactive fibres appeared to be more abundant in the epidermis and were frequently seen to pass along the junction with the stratum corneum (intra-epidermal fibres – IEF; Fig. 1B, Ci, ii ), often with multiple fibres (fibre "clusters"; Fig. 1Di, ii - arrows). Intra-epidermal fibre and fibre "cluster" counts/millimetre of epithelium were significantly higher in patients with breast pain (Table 1 ; Fig. 2 ). This significance was maintained after exclusion of patients with previous breast surgery (i.e. the breast reconstruction patients; Pain, n = 7; No Pain, n = 5; p = 0.0303). While some specimens from pain patients showed thinning of the epidermis, this was not so overall for the pain group (Table 1 ). Figure 1 TRPV1-immunoreactive nerve fibres in breast skin . ( A ) Normal, control skin : TRPV1-immunoreactive nerve fibres (small arrows) in the sub-epidermis. ( B ) Painful skin (macromastia patient): intra-epidermal, TRPV1-immunoreactive nerve fibre (arrow) deriving from a large, sub-epidermal fascicle and extending to the stratum corneum. ( Ci ) Painful skin (breast reconstruction patient): TRPV1-immunoreactive intra-epidermal fibres passing along the junction between the epidermis and stratum corneum ( Cii -enlarged area from Ci ). ( Di ) Painful skin (macromastia patient): multiple branching, TRPV1-immunoreactive intra-epidermal nerve fibres (arrows) extending to the stratum corneum ( Dii -enlarged area from Di ). Large double arrows indicate relative epidermal thickness. Scale bars: A, B, C(i), D(i) = 50 μm; C(ii), D(ii) = 10 μm. 
Table 1 Histology results. Values are median (range); P from Mann-Whitney test.
Measure | Patients with no breast pain (N = 8) | Patients with breast pain (N = 10) | P
Intra-epidermal: TRPV1 fibres | 0.69 (0.00–1.27) | 2.15 (0.77–4.38) | 0.0009**
Intra-epidermal: TRPV1 fibre "clusters" | 0.15 (0.00–0.27) | 0.37 (0.00–0.85) | 0.0085*
Sub-epidermal: TRPV1 fibres | 2.98 (1.59–4.04) | 3.79 (0.63–5.92) | 0.1457
Sub-epidermal: neurofilament fibres | 3.38 (0.83–5.92) | 2.95 (0.94–4.95) | 0.7618
NGF staining | 2.5 (1.0–3.0) | 2.38 (1.0–3.0) | 0.5726
Epidermal thickness (mm) | 0.04 (0.02–0.12) | 0.04 (0.03–0.05) | 0.9654
Figure 2 Quantification of TRPV1-immunoreactive, intra-epidermal fibres and breast pain . Scattergrams show TRPV1-immunoreactivity in ( A ) intra-epidermal fibres and ( B ) intra-epidermal fibre "clusters" in patients with and without breast pain. TRPV3 TRPV3 immunoreactivity was detected in basal keratinocytes and occasional suprabasal cells throughout the epidermis. Quantification of immunostaining showed a significant (*P = 0.011) increase with breast pain (Fig. 3 ). TRPV3-immunoreactivity was not detected in skin nerve fibres in this study. Figure 3 TRPV3-immunoreactivity in breast skin . TOP PANELS: TRPV3-immunoreactive keratinocytes mostly in basal layer, graded from grade 0/negative (top left) to grade 3/strong staining (bottom right). Scale bar = 100 μm. BOTTOM PANEL: Scattergram showing grading assessment and a significant increase (*P < 0.05) of TRPV3 immunoreactivity in patients with pain. TRPV4 Basal keratinocytes also displayed TRPV4 immunoreactivity in groups of cells, which was particularly strong at the apex of dermal papillae, where immunoreactivity appeared strongest at the cell membrane junctions (Fig 4 ). Quantification of immunostaining showed a significant (p < 0.03) increase with breast pain (Fig. 4 ). TRPV4 immunoreactivity was also detected in fine nerve fibres scattered through the sub-epidermis, and showed a trend (p = 0.402) for increased frequency in patients with pain (Fig 4 ). 
Figure 4 TRPV4-immunoreactivity in breast skin . TOP FOUR PANELS: TRPV4-immunoreactivity in keratinocytes mostly at the cell membrane, graded from grade 0/negative (top left) to grade 3/strong staining (bottom right). Scale bar = 100 μm. MIDDLE PANEL: Fine, sub-epidermal, TRPV4-immunoreactive fibres (arrows). Scale bar = 100 μm. BOTTOM PANELS: Scattergrams showing grading assessment of sub-epidermal fibres (left panel) and keratinocytes (right panel), significantly increased (*P < 0.05) in patients with pain. NGF NGF immunoreactivity was present in basal keratinocytes in all samples (Fig 5Ai, ii ) with little difference in intensity between pain and no pain specimens (Table 1 ). In some pain specimens the epidermis appeared to be thinner, and there was evidence of NGF expression in suprabasal as well as basal keratinocytes (Fig 5Bi, ii – arrows), which correlated with the presence of TRPV1-positive fibres (seen in a serial section from the same case shown in Fig 1B ). Figure 5 NGF-immunoreactivity in breast skin . ( Ai ) Normal, control skin: NGF-immunoreactive basal keratinocytes. ( Aii )-enlarged area from Ai showing NGF confined to single layer of keratinocytes. ( Bi ) Painful skin (macromastia patient): NGF-immunoreactive basal and supra-basal ( Bii -arrows) keratinocytes in skin with thin epidermis. Nerve marker (neurofilaments) Neurofilament staining showed nerve fibres in sub-epidermal and dermal regions only, but no significant changes were detected between groups (Table 1 ). Discussion While breast pain and tenderness is a common problem, in the majority of women the mechanisms underlying breast pain are poorly understood. Our study focused on Nerve Growth Factor (NGF) and the expression of the capsaicin receptor 1 (TRPV1) in nociceptor fibres, as these are key molecules in pain and hypersensitivity. Our finding, that TRPV1-positive intra-epidermal fibres were significantly increased in patients with breast pain and tenderness, is both novel and important. 
The increased and abnormal "clusters" of intra-epidermal fibres were shown in patients who had no previous breast surgery, and no known episodes of mastitis. Hence this may be a surrogate marker for "idiopathic" or macromastia-related breast pain in some patients. Studies are in progress to correlate changes in TRPV1 nerve fibres with quantitative sensory perception thresholds. The cause of the increased intra-epidermal fibres is not known. Given that biopsies from some patients with breast pain showed a trend to thinning of the epidermis, which is normally associated with denervation [ 21 ], the increased intra-epidermal fibres here may represent nerve fibre sprouts following cutaneous terminal damage. The intra-epidermal fibre morphology and "clusters" would be in keeping with this explanation, as fibres ran in unusual patterns. There was no overall change in the sub-epidermal fibre counts for TRPV1 or the structural nerve marker neurofilament. NGF-immunostaining intensity was similar in the different groups. However, in some patients there appeared to be staining for NGF in the supra-basal epidermis in addition to the basal cells, which is usually associated with inflammation or denervation; increased NGF is known to cause collateral sprouting [ 22 ]. Further studies, using quantitative NGF assays and in situ hybridisation, need to be performed to address this issue (these studies would require more substantial skin biopsies). The influence of the menstrual cycle on NGF levels is also difficult to determine with the current sample size. Our recent studies have demonstrated increased TRPV1-immunoreactive nerve fibres in inflammatory bowel disease [ 23 ], and in the mucosal and sub-mucosal layers of patients with rectal hypersensitivity, where they correlated with thermal and mechanical hypersensitivity, suggesting an increase of polymodal nociceptors [ 20 ]. 
We proposed that topical capsaicin or resiniferatoxin treatment, which reduces the numbers of TRPV1-positive fibres, may be a useful therapeutic approach in rectal hypersensitivity. Topical capsaicin has been reported to be useful for treatment of post-mastectomy chronic pain [ 24 ], but it is uncertain whether it could substantially help breast pain or tenderness in macromastia, as some of the symptoms are likely to arise from deeper structures. Oral selective TRPV1 antagonists thus deserve consideration: both mechanical and thermal hyperalgesia may be reversed by capsazepine, a TRPV1 antagonist [ 25 ], again suggesting an effect on polymodal nociceptors. Thus patients may be helped with respect to mechanical symptoms, which predominate in comparison with thermal descriptors. Little is known of the roles of TRPV3 and TRPV4 in human pain pathophysiology and keratinocyte function. While we have previously demonstrated TRPV3 in human sensory neurons [ 18 ], no TRPV3 staining was observed in skin innervation in this study, presumably as the levels in the periphery were below the detection limit of our method. It may be speculated that the increased TRPV3 and TRPV4 observed in keratinocytes may alter keratinocyte expression of NGF and other molecules, which in turn may sensitise nociceptors. Conclusion Breast pain and tenderness appears to be associated with abnormal intra-epidermal innervation. This may reflect re-innervation of skin following nerve stretch damage, and/or collateral sprouting. While further studies are necessary to establish functional links between the TRPV1, TRPV3 and TRPV4 immunohistological changes and breast pain, our findings indicate a path for increasing understanding and treatment of breast pain. List of abbreviations TRPV = transient receptor potential vanilloid; NGF = Nerve Growth Factor; VAS = Visual Analogue Score; PRI = Pain Rating Index Competing interests The author(s) declare that they have no competing interests. 
Authors' contributions PG and EW recruited patients, collected biopsies and participated in immunohistology studies. PF participated in immunohistology and coordination of the study. AH participated in design of the study and recruitment of patients. JD, GS and CB provided antibodies and helped draft the manuscript. PA conceived the original study, its design and coordination, and helped with the manuscript. All authors read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: 
521688 | A systematic review of the content of critical appraisal tools | Background Consumers of research (researchers, administrators, educators and clinicians) frequently use standard critical appraisal tools to evaluate the quality of published research reports. However, there is no consensus regarding the most appropriate critical appraisal tool for allied health research. We summarized the content, intent, construction and psychometric properties of published, currently available critical appraisal tools to identify common elements and their relevance to allied health research. Methods A systematic review was undertaken of 121 published critical appraisal tools sourced from 108 papers located on electronic databases and the Internet. The tools were classified according to the study design for which they were intended. Their items were then classified into one of 12 criteria based on their intent. Commonly occurring items were identified. The empirical basis for construction of the tool, the method by which overall quality of the study was established, the psychometric properties of the critical appraisal tools and whether guidelines were provided for their use were also recorded. Results Eighty-seven percent of critical appraisal tools were specific to a research design, with most tools having been developed for experimental studies. There was considerable variability in items contained in the critical appraisal tools. Twelve percent of available tools were developed using specified empirical research. Forty-nine percent of the critical appraisal tools summarized the quality appraisal into a numeric summary score. Few critical appraisal tools had documented evidence of validity of their items, or reliability of use. Guidelines regarding administration of the tools were provided in 43% of cases. 
Conclusions There was considerable variability in intent, components, construction and psychometric properties of published critical appraisal tools for research reports. There is no "gold standard" critical appraisal tool for any study design, nor is there any widely accepted generic tool that can be applied equally well across study types. No tool was specific to allied health research requirements. Thus interpretation of critical appraisal of research reports currently needs to be considered in light of the properties and intent of the critical appraisal tool chosen for the task. | Background Consumers of research (clinicians, researchers, educators, administrators) frequently use standard critical appraisal tools to evaluate the quality and utility of published research reports [ 1 ]. Critical appraisal tools provide analytical evaluations of the quality of the study, in particular the methods applied to minimise biases in a research project [ 2 ]. As these factors potentially influence study results, and the way that the study findings are interpreted, this information is vital for consumers of research to ascertain whether the results of the study can be believed and transferred appropriately into other environments, such as policy, further research studies, education or clinical practice. Hence, choosing an appropriate critical appraisal tool is an important component of evidence-based practice. Although the importance of critical appraisal tools has been acknowledged [ 1 , 3 - 5 ], there appears to be no consensus regarding the 'gold standard' tool for any medical evidence. In addition, it seems that consumers of research are faced with a large number of critical appraisal tools from which to choose. This is evidenced by the recent report by the Agency for Healthcare Research and Quality in which 93 critical appraisal tools for quantitative studies were identified [ 6 ]. 
Such choice may pose problems for research consumers, as dissimilar findings may well result when different critical appraisal tools are used to evaluate the same research report [ 6 ]. Critical appraisal tools can be broadly classified into those that are research design-specific and those that are generic. Design-specific tools contain items that address methodological issues that are unique to the research design [ 5 , 7 ]. This, however, precludes comparison of the quality of studies with different designs [ 8 ]. To overcome this limitation, generic critical appraisal tools have been developed in an attempt to enhance the ability of research consumers to synthesise evidence from a range of quantitative and/or qualitative study designs (for instance [ 9 ]). There is no evidence that generic critical appraisal tools and design-specific tools provide a comparable evaluation of research designs. Moreover, there appears to be little consensus regarding the most appropriate items that should be contained within any critical appraisal tool. This paper is concerned primarily with critical appraisal tools that address the unique properties of allied health care and research [ 10 ]. This approach was taken because of the unique nature of allied health contacts with patients, and because evidence-based practice is an emerging area in allied health [ 10 ]. The availability of so many critical appraisal tools (for instance [ 6 ]) may well prove daunting for allied health practitioners who are learning to critically appraise research in their area of interest. For the purposes of this evaluation, allied health is defined as encompassing "...all occasions of service to non admitted patients where services are provided at units/clinics providing treatment/counseling to patients. These include units primarily concerned with physiotherapy, speech therapy, family planning, dietary advice, optometry, occupational therapy..." [ 11 ]. 
The unique nature of allied health practice needs to be considered in allied health research. Allied health research thus differs from most medical research with respect to: • the paradigm underpinning comprehensive and clinically-reasoned descriptions of diagnosis (including validity and reliability). An example of this is in research into low back pain, where instead of diagnosis being made on location and chronicity of pain (as is common) [ 12 ], it would be made on the spinal structure and the nature of the dysfunction underpinning the symptoms, which is arrived at by a staged and replicable clinical reasoning process [ 10 , 13 ]; • the frequent use of multiple interventions within the one contact with the patient (an occasion of service), each of which requires appropriate description in terms of relationship to the diagnosis, nature, intensity, frequency, type of instruction provided to the patient, and the order in which the interventions were applied [ 13 ]; • the timeframe and frequency of contact with the patient (as many allied health disciplines treat patients in episodes of care that contain multiple occasions of service, and which can span many weeks, or even years in the case of chronic problems [ 14 ]); • measures of outcome, including appropriate methods and timeframes of measuring change in impairment, function, disability and handicap that address the needs of different stakeholders (patients, therapists, funders etc) [ 10 , 12 , 13 ]. Methods Search strategy The search strategy is provided in supplementary data [see additional file 1 ]. Data organization and extraction Two independent researchers (PK, NMW) participated in all aspects of this review, and they compared and discussed their findings with respect to inclusion of critical appraisal tools, their intent, components, data extraction and item classification, construction and psychometric properties. Disagreements were resolved by discussion with a third member of the team (KG). 
Data extraction consisted of a four-stage process. First, identical replica critical appraisal tools were identified and removed prior to analysis. The remaining critical appraisal tools were then classified according to the study design for which they were intended to be used [ 1 , 2 ]. The scientific manner in which the tools had been constructed was classified according to whether an empirical research approach had been used and, if so, which type of research had been undertaken. Finally, the items contained in each critical appraisal tool were extracted and classified into one of eleven groups, which were based on the criteria described by Clarke and Oxman [ 4 ] as: • Study aims and justification • Methodology used , which encompassed method of identification of relevant studies and adherence to study protocol; • Sample selection , which ranged from inclusion and exclusion criteria, to homogeneity of groups; • Method of randomization and allocation blinding; • Attrition : response and drop out rates; • Blinding of the clinician, assessor, patient and statistician as well as the method of blinding; • Outcome measure characteristics; • Intervention or exposure details; • Method of data analyses ; • Potential sources of bias ; and • Issues of external validity , which ranged from application of evidence to other settings to the relationship between benefits, cost and harm. An additional group, " miscellaneous ", was used to describe items that could not be classified into any of the groups listed above. Data synthesis Data were synthesized in MS Excel spreadsheets and in narrative format, describing the number of critical appraisal tools per study design and the type of items they contained. Descriptions were made of the method by which the overall quality of the study was determined, evidence regarding the psychometric properties of the tools (validity and reliability) and whether guidelines were provided for use of the critical appraisal tool. 
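The final extraction step, sorting each tool's items into the criterion groups above and tallying them, can be illustrated with a toy sketch. The keyword map below is our own illustration of one possible coding rule, not the authors' actual classification scheme, which was done by two researchers with manual consensus:

```python
from collections import Counter

# Toy keyword -> criterion map (illustrative only, not the authors' coding scheme)
CRITERIA_KEYWORDS = {
    "randomi": "Method of randomization and allocation blinding",
    "blind": "Blinding",
    "drop out": "Attrition",
    "inclusion": "Sample selection",
    "analys": "Method of data analyses",
}

def classify_item(item_text):
    """Assign an extracted checklist item to the first matching criterion group."""
    text = item_text.lower()
    for keyword, criterion in CRITERIA_KEYWORDS.items():
        if keyword in text:
            return criterion
    return "Miscellaneous"  # catch-all group, as in the review

items = [
    "Was the allocation sequence randomised?",
    "Were assessors blind to group membership?",
    "Were inclusion criteria clearly stated?",
    "Was an appropriate method of analysis used?",
    "Was a flow diagram provided?",
]
tally = Counter(classify_item(i) for i in items)
```

Tallying classified items per group across all 121 tools is what yields the counts of commonly occurring item types reported in the Results.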
Results One hundred and ninety-three research reports that potentially provided a description of a critical appraisal tool (or process) were identified from the search strategy. Fifty-six of these papers were unavailable for review due to outdated Internet links or inability to source the relevant journal through Australian university and Government library databases. Of the 127 papers retrieved, 19 were excluded from this review as they did not provide a description of the critical appraisal tool used or were published in languages other than English. As a result, 108 papers were reviewed, which yielded 121 different critical appraisal tools [ 1 - 5 , 7 , 9 , 15 - 102 , 116 ]. Empirical basis for tool construction We identified 14 instruments (12% of all tools) which were reported as having been constructed using a specified empirical approach [ 20 , 29 , 30 , 32 , 35 , 40 , 49 , 51 , 70 - 72 , 79 , 103 , 116 ]. The empirical research reflected descriptive and/or qualitative approaches, these being critical review of existing tools [ 40 , 72 ], Delphi techniques to identify then refine data items [ 32 , 51 , 71 ], questionnaires and other forms of written surveys to identify and refine data items [ 70 , 79 , 103 ], facilitated structured consensus meetings [ 20 , 29 , 30 , 35 , 40 , 49 , 70 , 72 , 79 , 116 ], and pilot validation testing [ 20 , 40 , 72 , 103 , 116 ]. In all the studies which reported developing critical appraisal tools using a consensus approach, a range of stakeholder input was sought, reflecting researchers and clinicians in a range of health disciplines, students, educators and consumers. There were a further 31 papers which cited other studies as the source of the tool used in the review, but which provided no information on why individual items had been chosen, or whether (or how) they had been modified. 
Moreover, for 21 of these tools, the cited sources of the critical appraisal tool did not report the empirical basis on which the tool had been constructed. Critical appraisal tools per study design Seventy-eight percent (N = 94) of the critical appraisal tools were developed for use on primary research [ 1 - 5 , 7 , 9 , 18 , 19 , 25 - 27 , 34 , 37 - 41 ], while the remainder (N = 26) were for secondary research (systematic reviews and meta-analyses) [ 2 - 5 , 15 - 36 , 116 ]. Eighty-seven percent (N = 104) of all critical appraisal tools were design-specific [ 2 - 5 , 7 , 9 , 15 - 90 ], with over one third (N = 45) developed for experimental studies (randomized controlled trials, clinical trials) [ 2 - 4 , 25 - 27 , 34 , 37 - 73 ]. Sixteen critical appraisal tools were generic. Of these, six were developed for use on both experimental and observational studies [ 9 , 91 - 95 ], whereas 11 were purported to be useful for any qualitative and quantitative research design [ 1 , 18 , 41 , 96 - 102 , 116 ] (see Figure 1 , Table 1 ). Figure 1 Number of critical appraisal tools per study design [1,2] Table 1 Summary of tools sourced in this review. 
Research design focus of critical appraisal tools:
• Secondary studies: Systematic reviews/meta-analyses [2-5,15-36,116]
• Primary studies: Experimental studies [2-4,19,25-27,34,37-73]; Diagnostic studies [19,74-79]; Observational studies [2,3,7,19,25,66,72,80-86]; Qualitative studies [9,26,66,87-90]
• Generic: All study designs [1,18,41,96-102,116]; Experimental & Observational studies [9,91-102]
Critical appraisal tools with summary scores:
• All study designs: summary score [18,41,96,97,116]; no summary score [1,98-102]
• Experimental studies: summary score [19,37-59]; no summary score [2-4,25,27,28,34,60-73]
• Diagnostic studies: summary score [16,74-77]; no summary score [78,79]
• Qualitative studies: summary score [87]; no summary score [9,26,66,88-90]
• Experimental and observational studies: summary score [91-93]; no summary score [9,94,95]
Critical appraisal items One thousand, four hundred and seventy-five items were extracted from these critical appraisal tools. After grouping like items together, 173 different item types were identified, with the most frequently reported items being focused towards assessing the external validity of the study (N = 35) and the method of data analyses (N = 28) (Table 2 ). The most frequently reported items across all critical appraisal tools were: Table 2 The type and number of component items contained in critical appraisal tools per study design. 
Type of items, with counts per column (design-specific tool components: systematic reviews, experimental, diagnostic, observational, qualitative studies; generic tool components: experimental & observational studies, all study designs; then total):
Study aims and justification: 35, 27, 5, 18, 17, 4, 11; total 117
Methodology used: 38, 1, 0, 0, 0, 0, 1; total 40
Sample selection: 30, 62, 12, 37, 10, 10, 14; total 175
Randomization: 2, 65, 1, 5, 0, 6, 5; total 84
Attrition: 4, 59, 3, 23, 0, 8, 8; total 105
Blinding: 1, 77, 5, 8, 0, 5, 7; total 103
Outcome measure characteristics: 41, 46, 3, 33, 2, 9, 19; total 153
Intervention: 7, 42, 3, 13, 0, 5, 12; total 82
Data analyses: 83, 91, 14, 54, 12, 14, 27; total 295
Bias: 24, 14, 2, 5, 0, 3, 6; total 54
External validity: 72, 50, 12, 30, 27, 9, 27; total 227
Miscellaneous: 11, 12, 7, 5, 7, 2, 6; total 50
Total: 348, 546, 67, 331, 75, 75, 143; total 1485
• Eligibility criteria (inclusion/exclusion criteria) (N = 63)
• Appropriate statistical analyses (N = 47)
• Random allocation of subjects (N = 43)
• Consideration of outcome measures used (N = 43)
• Sample size justification/power calculations (N = 39)
• Study design reported (N = 36)
• Assessor blinding (N = 36)
Design-specific critical appraisal tools Systematic reviews Eighty-seven different items were extracted from the 26 critical appraisal tools that were designed to evaluate the quality of systematic reviews. These critical appraisal tools frequently contained items regarding data analyses and issues of external validity (Tables 2 and 3 ). 
Table 3 The type and number of guidelines accompanying critical appraisal tools per study design:
• Systematic reviews: handbook/published paper 9 [2,4,15,20,25,28,29,31,36,116]; accompanying explanation 3 [16,26,27]; total with guidelines 12 of 26 tools
• Experimental studies: handbook/published paper 10 [2,4,25,37,41,50,64-66,69]; accompanying explanation 6 [26,40,49,51,57,59]; total 16 of 45
• Diagnostic studies: handbook/published paper 3 [74,75,76]; accompanying explanation 1 [79]; total 4 of 7
• Observational studies: handbook/published paper 9 [2,25,66,80,84-87]; accompanying explanation 1 [83]; total 10 of 19
• Qualitative studies: handbook/published paper 4 [9,87,89,90]; accompanying explanation 1 [26]; total 5 of 7
• Experimental & Observational studies: handbook/published paper 2 [9,95]; accompanying explanation 1 [91]; total 3 of 6
• All study designs: handbook/published paper 1 [100]; accompanying explanation 1 [102]; total 2 of 10
• Total: handbook/published paper 38; accompanying explanation 14; total 52 of 120
Items assessing data analyses were focused on the methods used to summarize the results, assessment of the sensitivity of results and whether heterogeneity was considered, whereas the nature of reporting of the main results, their interpretation and their generalizability were frequently used to assess the external validity of the study findings. Moreover, systematic review critical appraisal tools tended to contain items, such as identification of relevant studies, search strategy used, number of studies included and protocol adherence, that would not be relevant for other study designs. Blinding and randomisation procedures were rarely included in these critical appraisal tools. Experimental studies One hundred and twenty thirteen different items were extracted from the 45 experimental critical appraisal tools. These items most frequently assessed aspects of data analyses and blinding (Tables 1 and 2 ). Data analyses items were focused on whether appropriate statistical analysis was performed, whether a sample size justification or power calculation was provided, and whether side effects of the intervention were recorded and analysed. Blinding items were focused on whether the participant, clinician and assessor were blinded to the intervention. 
Diagnostic studies Forty-seven different items were extracted from the seven diagnostic critical appraisal tools. These items frequently addressed issues involving data analyses, external validity of results and sample selection that were specific to diagnostic studies (whether the diagnostic criteria were defined, definition of the "gold" standard, the calculation of sensitivity and specificity) (Tables 1 and 2 ). Observational studies Seventy-four different items were extracted from the 19 critical appraisal tools for observational studies. These items primarily focused on aspects of data analyses (see Tables 1 and 2 ), such as whether confounders were considered in the analysis, whether a sample size justification or power calculation was provided, and whether appropriate statistical analyses were performed. Qualitative studies Thirty-six different items were extracted from the seven qualitative study critical appraisal tools. The majority of these items assessed issues regarding external validity, methods of data analyses and the aims and justification of the study (Tables 1 and 2 ). Specifically, items were focused on whether the study question was clearly stated, whether data analyses were clearly described and appropriate, and the application of the study findings to the clinical setting. Qualitative critical appraisal tools did not contain items regarding sample selection, randomization, blinding, intervention or bias, perhaps because these issues are not relevant to the qualitative paradigm. Generic critical appraisal tools Experimental and observational studies Forty-two different items were extracted from the six critical appraisal tools that could be used to evaluate experimental and observational studies. 
These tools most frequently contained items that addressed aspects of sample selection (such as inclusion/exclusion criteria of participants, homogeneity of participants at baseline) and data analyses (such as whether appropriate statistical analyses were performed, and whether a justification of the sample size or a power calculation was provided). All study designs Seventy-eight different items were contained in the ten critical appraisal tools that could be used for all study designs (quantitative and qualitative). The majority of these items focused on whether appropriate data analyses were undertaken (such as whether confounders were considered in the analysis, whether a sample size justification or power calculation was provided, and whether appropriate statistical analyses were performed) and on external validity issues (generalization of results to the population, value of the research findings) (see Tables 1 and 2 ). Allied health critical appraisal tools We found no critical appraisal instrument specific to allied health research, despite finding at least seven critical appraisal instruments associated with allied health topics (mostly physiotherapy management of orthopedic conditions) [ 37 , 39 , 52 , 58 , 59 , 65 ]. One critical appraisal development group proposed two instruments [ 9 ], specific to quantitative and qualitative research respectively. The core elements of allied health research quality (specific diagnosis criteria, intervention descriptions, nature of patient contact and appropriate outcome measures) were not addressed in any one tool sourced for this evaluation. We identified 152 different ways of considering quality of reporting of outcome measures in the 121 critical appraisal tools, and 81 ways of considering description of interventions. Very few tools not specifically targeted at diagnostic studies (less than 10% of the remaining tools) addressed diagnostic criteria. 
The critical appraisal instrument that seemed most related to allied health research quality [ 39 ] sought comprehensive evaluation of elements of intervention and outcome; however, this instrument was relevant only to physiotherapeutic orthopedic experimental research. Overall study quality Forty-nine percent (N = 58) of critical appraisal tools summarised the results of the quality appraisal into a single numeric summary score [ 5 , 7 , 15 - 25 , 37 - 59 , 74 - 77 , 80 - 83 , 87 , 91 - 93 , 96 , 97 ] (Figure 2 : Number of critical appraisal tools with, and without, summary quality scores). This was achieved by one of two methods:
• An equal weighting system, where one point was allocated to each item fulfilled; or
• A weighted system, where fulfilled items were allocated various points depending on their perceived importance.
However, no justification was provided for any of the scoring systems used. In the remaining critical appraisal tools (N = 62), a single numerical summary score was not provided [ 1 - 4 , 9 , 25 - 36 , 60 - 73 , 78 , 79 , 84 - 90 , 94 , 95 , 98 - 102 ]. This left the research consumer to summarize the results of the appraisal in a narrative manner, without the assistance of a standard approach. Psychometric properties of critical appraisal tools Few critical appraisal tools had documented evidence of their validity and reliability. Face validity was established for nine critical appraisal tools, seven of which were developed for use on experimental studies [ 38 , 40 , 45 , 49 , 51 , 63 , 70 ] and two for systematic reviews [ 32 , 103 ]. 
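The two scoring methods described can be made concrete with a short sketch; the item names and weights below are invented for illustration, since no reviewed tool published a justification for its weighting.

```python
# Sketch of the two summary-score methods described above.
# Items and weights are invented; no reviewed tool justified its weighting.

def equal_weight_score(items_fulfilled):
    """One point per fulfilled item."""
    return sum(1 for ok in items_fulfilled.values() if ok)

def weighted_score(items_fulfilled, weights):
    """Points vary with each item's perceived importance."""
    return sum(weights[item] for item, ok in items_fulfilled.items() if ok)

appraisal = {"random allocation": True, "assessor blinding": False,
             "power calculation": True}
weights = {"random allocation": 3, "assessor blinding": 2,
           "power calculation": 1}

print(equal_weight_score(appraisal))       # 2
print(weighted_score(appraisal, weights))  # 4
```

Either function yields a single number, which is precisely why narrative (no-summary-score) tools leave the final judgement to the reader.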
Intra-rater reliability was established for only one critical appraisal tool, as part of its empirical development process [ 40 ], whereas inter-rater reliability was reported for two systematic review tools [ 20 , 36 ] (for one of these as part of the developmental process [ 20 ]) and seven experimental critical appraisal tools [ 38 , 40 , 45 , 51 , 55 , 56 , 63 ] (for two of these as part of the developmental process [ 40 , 51 ]). Critical appraisal tool guidelines Forty-three percent (N = 52) of critical appraisal tools had guidelines that informed the user of the interpretation of each item contained within them (Table 3 ). These guidelines were most frequently in the form of a handbook or published paper (N = 31) [ 2 , 4 , 9 , 15 , 20 , 25 , 28 , 29 , 31 , 36 , 37 , 41 , 50 , 64 - 67 , 69 , 80 , 84 - 87 , 89 , 90 , 95 , 100 , 116 ], whereas in 14 critical appraisal tools explanations accompanied each item [ 16 , 26 , 27 , 40 , 49 , 51 , 57 , 59 , 79 , 83 , 91 , 102 ]. Discussion Our search strategy identified a large number of published critical appraisal tools that are currently available to critically appraise research reports. There was a distinct lack of information on tool development processes in most cases. Many of the tools were reported to be modifications of other published tools, or reflected specialty concerns in specific clinical or research areas, without attempts to justify the inclusion of their items. Fewer than 10 of these tools were relevant to evaluation of the quality of allied health research, and none of these were based on an empirical research approach. Although our search was systematic and extensive [ 104 , 105 ], we are concerned that our broad key words and our lack of ready access to 29% of potentially useful papers (N = 56) may have prevented us from identifying all published critical appraisal tools. 
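Inter-rater reliability of the kind reported for these tools is commonly quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A self-contained sketch follows; the two raters' verdicts are invented.

```python
# Cohen's kappa for two raters' item-by-item verdicts (pure Python).
# Kappa corrects raw agreement for the agreement expected by chance.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented example: two appraisers rating 10 items as pass/fail.
a = ["pass"] * 6 + ["fail"] * 4
b = ["pass"] * 5 + ["fail"] * 5
print(round(cohens_kappa(a, b), 2))  # 0.8 (90% raw agreement, 50% by chance)
```

A kappa near 1 indicates near-perfect agreement; values reported without such a chance correction overstate reliability.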
However, consumers of research seeking critical appraisal instruments are not likely to pursue instruments through outdated Internet links and unobtainable journals; we therefore believe that, despite the limitations on sourcing all possible tools, this paper presents a useful synthesis of the most readily available critical appraisal tools. The majority of the critical appraisal tools were developed for a specific research design (87%), with most designed for use on experimental studies (38% of all critical appraisal tools sourced). This finding is not surprising as, according to the medical model, experimental studies sit at or near the top of the hierarchy of evidence [ 2 , 8 ]. In recent years, allied health researchers have strived to apply the medical model of research to their own discipline by conducting experimental research, often using the randomized controlled trial design [ 106 ]. This trend may be the reason for the development of the experimental critical appraisal tools reported in allied health-specific research topics [ 37 , 39 , 52 , 58 , 59 , 65 ]. We also found a considerable number of critical appraisal tools for systematic reviews (N = 26), which reflects the trend to synthesize research evidence to make it relevant for clinicians [ 105 , 107 ]. Systematic review critical appraisal tools contained unique items (such as identification of relevant studies, search strategy used, number of studies included, protocol adherence) compared with tools used for primary studies, a reflection of the secondary nature of data synthesis and analysis. In contrast, we identified very few qualitative study critical appraisal tools, despite the presence of many journal-specific guidelines that outline important methodological aspects required in a manuscript submitted for publication [ 108 - 110 ]. This finding may reflect the more traditional, quantitative focus of allied health research [ 111 ]. 
Alternatively, qualitative researchers may view the robustness of their research findings in different terms compared with quantitative researchers [ 112 , 113 ]. Hence the use of critical appraisal tools may be less appropriate for the qualitative paradigm. This requires further consideration. Of the small number of generic critical appraisal tools, we found few that could be usefully applied (to any health research, and specifically to the allied health literature), because of the generalist nature of their items, variable interpretation (and applicability) of items across research designs, and/or lack of summary scores. Whilst these types of tools potentially facilitate the synthesis of evidence across allied health research designs for clinicians, their lack of specificity in asking the 'hard' questions about research quality related to research design also potentially precludes their adoption for allied health evidence-based practice. At present, the gold standard study design when synthesizing evidence is the randomized controlled trial [ 4 ], which underpins our finding that experimental critical appraisal tools predominated in the allied health literature [ 37 , 39 , 52 , 58 , 59 , 65 ]. However, as more systematic literature reviews are undertaken on allied health topics, it may become more accepted that evidence in the form of other research design types requires acknowledgement, evaluation and synthesis. This may result in the development of more appropriate and clinically useful allied health critical appraisal tools. A major finding of our study was the volume and variation in available critical appraisal tools. We found no gold standard critical appraisal tool for any type of study design. Therefore, consumers of research are faced with frustrating decisions when attempting to select the most appropriate tool for their needs. Variable quality evaluations may be produced when different critical appraisal tools are used on the same literature [ 6 ]. 
Thus, interpretation of critical appraisal results must be carefully considered in light of the critical appraisal tool used. The variability in the content of critical appraisal tools could be accounted for by the lack of any empirical basis for tool construction, the lack of established validity of item construction, and the lack of a gold standard against which to compare new critical appraisal tools. As such, consumers of research cannot be certain that the content of published critical appraisal tools reflects the most important aspects of the quality of the studies they assess [ 114 ]. Moreover, there was little evidence of the intra- or inter-rater reliability of the critical appraisal tools. Coupled with the lack of protocols for use, this may mean that critical appraisers could interpret instrument items in different ways over repeated occasions of use. This may produce variable results [123]. Conclusions Based on the findings of this evaluation, we recommend that consumers of research carefully select critical appraisal tools for their needs. The selected tools should have published evidence of the empirical basis for their construction, the validity of their items and the reliability of their interpretation, as well as guidelines for use, so that the tools can be applied and interpreted in a standardized manner. Our findings highlight the need for consensus to be reached regarding the important and core items for critical appraisal tools, which would produce a more standardized environment for critical appraisal of research evidence. As a consequence, allied health research will specifically benefit from having critical appraisal tools that reflect best-practice research approaches and embed the specific research requirements of allied health disciplines. Competing interests No competing interests. 
Authors' contributions
PK: sourced critical appraisal tools; categorized the content and psychometric properties of critical appraisal tools.
AEB: synthesis of findings; drafted the manuscript.
NMW: sourced critical appraisal tools; categorized the content and psychometric properties of critical appraisal tools.
VSK: sourced critical appraisal tools; categorized the content and psychometric properties of critical appraisal tools.
KAG: study conception and design; assisted with critiquing critical appraisal tools and categorization of their content and psychometric properties; drafted and reviewed the manuscript; addressed reviewer's comments and re-submitted the article.
Pre-publication history The pre-publication history for this paper can be accessed here: Supplementary Material Additional File 1 Search Strategy. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC521688.xml |
534093 | Atherosclerosis of the descending aorta predicts cardiovascular events: a transesophageal echocardiography study | Purpose Previous studies have shown that atherosclerosis of the descending aorta detected by transesophageal echocardiography (TEE) is a good marker of coexisting coronary artery disease. The aim of our study was to evaluate whether the presence of atherosclerosis of the descending aorta during TEE has any prognostic impact in predicting cardiovascular events. Material and Methods The study group consisted of 238 consecutive in-hospital patients referred for TEE testing (135 males, 103 females, mean age 58 +/- 11 years) with a follow-up of 24 months. The atherosclerotic lesions of the descending aorta were scored from 0 (no atherosclerosis) to 3 (plaque >5 mm and/or "complex" plaque with ulcerated or mobile parts). Results Atherosclerosis was observed in 102 patients (grade 3 in 16, and grade 2 in 86 patients), whereas 136 patients had only intimal thickening or a normal intimal surface. There were 57 cardiovascular events in the follow-up period. The number of events was higher in the 102 patients with atherosclerosis (n = 34) than in the 136 patients without (n = 23, p < 0.01). The frequency of events was in close correlation with the severity of the atherosclerosis of the descending aorta. Fifty percent of the patients with grade 3 experienced cardiovascular events. Excluding patients with subsequent revascularization, in the multivariate analysis only left ventricular function with EF < 40% (HR 3.0, CI 1.3–7.1) and TEE atherosclerotic plaque ≥2 (HR 2.4, CI 1.0–5.5) predicted hard cardiovascular events. Conclusion Atherosclerosis of the descending aorta observed during transesophageal echocardiography is a useful predictor of cardiovascular events. | Introduction During the past decades many methods and factors have been proposed as good prognostic tools or markers for cardiovascular risk stratification. 
However, even with the most sophisticated stress testing procedures the prediction capability still remains imperfect [ 1 ]. Previous studies have shown that atherosclerosis of the descending aorta detected by transesophageal echocardiography (TEE) is a good marker of coexisting coronary artery disease [ 2 - 9 ]. Cohen et al. have demonstrated that, in patients with brain infarction, "complex" plaques (with ulcerated surface, mobile parts and thrombi) are powerful predictors of future cardiovascular events [ 10 ]. The aim of this study was to evaluate whether the presence of atherosclerosis of the descending aorta observed by routine scanning during TEE has any prognostic impact in predicting cardiovascular events such as cardiac death, myocardial infarction or fatal stroke. We therefore conducted a prospective study, collecting and analyzing the data of 238 consecutive patients referred to the echo lab for transesophageal echocardiography. Methods Patient selection During the year 1998, 238 consecutive patients (135 males, 103 females, mean age 58 ± 11 years) were studied with transesophageal echocardiography at the 2 nd Department of Medicine and Cardiology Center, University of Sciences, Szeged, Hungary. The patients underwent TEE examination for the following reasons: TIA or suspected cerebral embolism (n = 100), coronary flow reserve evaluation (n = 71), evaluation of the native or artificial mitral valve (n = 23), suspected endocarditis (n = 15), evaluation of the aortic valve (n = 13), suspected aortic dissection (n = 6), atrial septal defect or patent foramen ovale (n = 6), and others (n = 4). Thirty patients had suffered a previous myocardial infarction. The patients were followed up for a period of at least 2 years (median 31 ± 9 months). Transthoracic and transesophageal echocardiography All patients had a transthoracic echocardiographic examination to assess global and regional left ventricular function. 
The ejection fraction was calculated using the area-length single-plane method [ 11 ]. Left ventricular function was considered to be depressed in the case of an ejection fraction ≤40%. The TEE examination was carried out according to the recommendations of the Mayo Clinic procedure [ 12 ]. Two-dimensional echocardiograms were obtained using a commercially available imaging system (ATL-HDI) with a biplane transducer. Echocardiographic images were recorded on videotape for subsequent playback and analysis. The atherosclerotic lesions of the descending aorta were graded according to the modified scoring system originally proposed by Fazio et al [ 2 ]: Grade 0 – no sign of atherosclerosis; Grade 1 – intimal thickening; Grade 2 – plaque < 5 mm; Grade 3 – plaque > 5 mm and/or "complex" plaque with ulcerated or mobile parts. Atherosclerosis was considered significant in the case of grade 2 or 3 (fig 1 and additional files 1 , 2 , 3 , 4 ). Figure 1 Grading of the transesophageally detected aortic lesions in the descending aorta. Grade 1: intimal thickening (left upper panel); Grade 2: small plaque indicated by arrow (right upper panel); Grade 3: lower panels (left: a huge, multiple plaque; right: an ulcerated plaque with mobile part). Follow-up data Follow-up data were obtained from at least one of four sources: 1) review of the patient's hospital record; 2) personal communication with the patient's physician and review of the patient's chart; 3) a telephone interview with the patient conducted by trained personnel; or 4) a staff physician visiting the patients at regular intervals in the outpatient clinic. Follow-up data were obtained in all patients. The outcome events were: cardiac-related death, fatal stroke, non-fatal myocardial infarction and revascularization either by percutaneous transluminal coronary angioplasty (PTCA) or by coronary artery by-pass grafting (CABG). 
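The grading scheme above lends itself to a small helper function. This is a hedged sketch only: the argument names and measurement inputs are assumptions for illustration, not part of the published protocol.

```python
# Sketch of the modified Fazio grading described above.
# plaque_mm is None when no plaque is seen; "complex" means ulcerated
# or mobile parts. Names and inputs are illustrative assumptions.

def grade_aortic_lesion(intimal_thickening, plaque_mm=None, complex_plaque=False):
    if complex_plaque or (plaque_mm is not None and plaque_mm > 5):
        return 3                     # large and/or "complex" plaque
    if plaque_mm is not None:
        return 2                     # plaque < 5 mm
    if intimal_thickening:
        return 1                     # intimal thickening only
    return 0                         # no sign of atherosclerosis

def is_significant(grade):
    return grade >= 2                # grade 2 or 3, as defined in the text

print(grade_aortic_lesion(False, plaque_mm=6.2))                 # 3
print(grade_aortic_lesion(True))                                 # 1
print(is_significant(grade_aortic_lesion(True, plaque_mm=3.0)))  # True
```

Encoding the cut-offs this way makes the "significant atherosclerosis" dichotomy (grade ≥2) used throughout the results explicit.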
Myocardial infarction was documented by a consistent history, EKG changes and cardiac enzyme level elevations, and confirmed by hospital chart or hospital discharge letter review. In the next step, patients with revascularization were censored and the remaining patients were considered for the analysis of hard events (cardiac death, myocardial infarction, and fatal stroke). All-cause mortality (cardiovascular death plus death from other causes) was also analyzed. Statistical analysis Values are expressed as mean ± standard deviation. Continuous variables were compared by means of Student's t test (two-tailed). Statistical analysis of discrete variables was performed with the chi-square test; Fisher's exact test was used when appropriate. The individual effect of certain variables on infarction-free survival was evaluated with the Cox proportional hazards model, with univariate and multivariate analysis, and with the Kaplan-Meier method. The patients were stratified into two subgroups: patients with and without cardiac events. The examined variables were age, sex, risk factors (arterial hypertension, diabetes, hypercholesterolemia), previous myocardial infarction, ejection fraction, and significant atherosclerosis of the descending aorta. A p value <0.05 was considered statistically significant. Results Transesophageal echocardiography The examination was successful and complete in all patients, without any side effects. Significant atherosclerosis of the descending aorta was observed in 102 patients (grade 3 in 16, and grade 2 in 86 patients), whereas 136 patients had only mild intimal thickening (n = 46) or a normal intimal surface (n = 90). Follow-up data There were 57 events in the follow-up period: cardiac-related death: n = 14, fatal stroke: n = 4, non-fatal myocardial infarction: n = 5 and coronary artery revascularization: n = 34. 
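The Kaplan-Meier method mentioned above estimates event-free survival as a product over event times. Below is a minimal pure-Python sketch with invented follow-up data (months; 1 = event, 0 = censored); it is illustrative only, not the authors' software.

```python
# Minimal Kaplan-Meier product-limit estimator (pure-Python sketch).
# times: follow-up in months; events: 1 = event, 0 = censored.

def kaplan_meier(times, events):
    data = sorted(zip(times, events))
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)    # events at time t
        n = sum(1 for tt, _ in data if tt >= t)    # subjects still at risk
        if d:
            surv *= 1 - d / n
            curve.append((t, round(surv, 3)))
        i += sum(1 for tt, _ in data if tt == t)   # skip ties
    return curve

# Invented data: five patients, events at 5, 8 and 12 months.
print(kaplan_meier([5, 8, 8, 12, 20], [1, 1, 0, 1, 0]))
# [(5, 0.8), (8, 0.6), (12, 0.3)]
```

Censored patients (revascularization in this study) reduce the at-risk count without dropping the survival estimate, which is exactly why censoring rather than exclusion is used.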
Ten patients died of non-cardiovascular causes and the cause of death of 2 patients was undetermined. Cardiovascular events and TEE findings The number of events was significantly higher in the 102 patients with significant atherosclerosis (n = 34) than in the 136 patients without (n = 23, p < 0.01). The results of univariate and multivariate analysis are shown in tables 1 and 2 .
Table 1 Univariate analysis – all events (cardiac death + nonfatal AMI + stroke + revascularization). Variable, p value: Aorta plaque 0.0033; Previous MI 0.0074; Male gender 0.0088; Hypertension 0.0758; Age 0.1687; Cholesterol 0.1154; Diabetes 0.2954; LV EF 0.7810. LV EF = left ventricular ejection fraction; MI = myocardial infarction.
Table 2 Multivariate analysis – all events (cardiac death + nonfatal AMI + stroke + revascularization). Aorta plaque: p < 0.01, HR 2.1 (95% CI 1.2–3.5); Male gender: p < 0.05, HR 0.49 (95% CI 0.3–0.9). HR = hazard ratio; CI = confidence interval.
The frequency of events was in close correlation with the severity of the atherosclerosis of the descending aorta (fig 2 ). Fifty percent of the patients with grade 3 experienced cardiovascular events. There was no significant difference when all causes of death were considered between subjects with aortic lesions and those free of atherosclerosis of the descending aorta (5% vs 1%, p = ns). Figure 2 A Kaplan-Meier curve showing the association between the severity of the transesophageally detected aortic plaques and long-term survival, including all events. It can be clearly seen that the more severe the atherosclerosis of the descending aorta, the higher the probability of future cardiovascular events. Spontaneous cardiovascular events and TEE findings Patients with early (<3 months, n = 29) or late (>3 months, n = 5) revascularization were censored. Nine events occurred in the 122 patients with a TEE score ≤1 and 14 in the 82 patients with a TEE score ≥2 (7% vs 17%, p < 0.05) (fig 3 ). 
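The comparison of 34 events among the 102 patients with, versus 23 among the 136 without, significant atherosclerosis corresponds to a 2x2 chi-square test. The sketch below (no continuity correction) yields a statistic consistent with the reported p < 0.01, though the authors' exact computation is not stated, so treat it as illustrative.

```python
# Pearson chi-square for a 2x2 table [[a, b], [c, d]], no continuity
# correction, applied to the event counts reported above.

def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# 34 events / 68 event-free with significant atherosclerosis (102 patients);
# 23 events / 113 event-free without (136 patients).
chi2 = chi_square_2x2(34, 68, 23, 113)
print(round(chi2, 2))  # 8.63, above the 6.63 threshold for p < 0.01 at 1 df
```

The shortcut formula n(ad-bc)^2 / ((a+b)(c+d)(a+c)(b+d)) is algebraically identical to summing (observed - expected)^2 / expected over the four cells.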
The results of univariate analysis are shown in table 3 . Impaired left ventricular function (EF ≤ 40%), significant atherosclerosis of the descending aorta, and age were predictive of future cardiovascular events. By multivariate analysis, only left ventricular function with EF ≤ 40% (HR 3.0, 95% CI 1.3–7.1) and TEE atherosclerotic plaque ≥2 (HR 2.4, 95% CI 1.0–5.5) predicted cardiovascular events (table 4 ). Similarly to the entire group, in patients without revascularization the more severe the atherosclerosis of the descending aorta, the higher the probability of future cardiovascular events. There was no significant difference when all causes of death were considered between subjects with aortic lesions and those free of atherosclerosis of the descending aorta (5% vs 1%, p = ns). Figure 3 A Kaplan-Meier survival curve showing a better outcome for patients without transesophageally detected aortic plaques.
Table 3 Univariate analysis – hard events (cardiac death + nonfatal AMI + stroke). Variable, p value: LV EF 0.0069; Aorta plaque 0.0283; Age 0.0440; Cholesterol 0.5642; Diabetes 0.9161; Hypertension 0.3212; Previous MI 0.6053; Male gender 0.9880. LV EF = left ventricular ejection fraction; MI = myocardial infarction.
Table 4 Multivariate analysis – hard events (cardiac death + nonfatal AMI + stroke). LV EF: p < 0.01, HR 3.0 (95% CI 1.3–7.1); Aorta plaque: p = 0.04, HR 2.4 (95% CI 1.0–5.5). HR = hazard ratio; CI = confidence interval.
Discussion Atherosclerosis of the descending aorta observed during transesophageal echocardiography is a useful predictor of future cardiovascular events. Comparison with previous studies Our data are in keeping with previous findings showing that patients with atherosclerosis of the thoracic aorta have a higher probability of coexisting coronary artery disease [ 2 - 8 ]. 
In those series the positive predictive value of TEE varied between 64% and 95%, whereas the negative predictive value was consistently high (between 82% and 99%), indicating that in the absence of echocardiographically assessed atherosclerotic plaque in the thoracic aorta, coronary artery disease is unlikely. Furthermore, Khoury et al have demonstrated that atherosclerotic plaques in patients with coronary artery disease were found predominantly in the descending aorta (in 93%) and in the aortic arch (in 80%), whereas the ascending aorta was the least involved (in 37%) [ 9 ]. Atherosclerosis is a complex polygenic, multifactorial vascular disorder associated with many differing and changing metabolic, anatomic and clinical manifestations [ 13 ]. The presence of atherosclerotic plaque in the thoracic aorta, as shown by chest x-ray, has been correlated in previous studies with an increased risk of cardiovascular death [ 14 , 15 ]. However, several studies have also demonstrated that the generation of acute coronary syndromes is not necessarily related to plaque severity but rather to its morphology and complexity. In histopathologic and vascular biologic studies [ 13 , 16 ], plaque composition and vulnerability (type of lesion) rather than degree of stenosis (size of lesion) have emerged as the crucial factors leading to sudden rupture of the plaque surface, usually with thrombosis superimposed, which underlies the great majority of infarctions. Angiographic studies also suggest that the most frequent situation giving rise to infarction is the occlusion of previously noncritical stenoses [ 17 ], which are more prevalent than the possibly more dangerous severe stenoses [ 18 ]. Taken together, these studies suggest that in roughly two of three infarctions the culprit lesion had only mild to moderate stenosis on initial evaluation. 
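The positive and negative predictive values quoted from those series are simple ratios from a 2x2 table of TEE findings against confirmed coronary artery disease. A sketch with invented counts:

```python
# PPV/NPV from a hypothetical 2x2 table of TEE plaque vs. confirmed CAD.
# tp: plaque + CAD; fp: plaque, no CAD; fn: no plaque, CAD; tn: neither.
# Counts are invented for illustration only.

def predictive_values(tp, fp, fn, tn):
    ppv = tp / (tp + fp)   # P(CAD | plaque seen on TEE)
    npv = tn / (tn + fn)   # P(no CAD | no plaque on TEE)
    return ppv, npv

ppv, npv = predictive_values(tp=80, fp=20, fn=5, tn=95)
print(round(ppv, 2), round(npv, 2))  # 0.8 0.95
```

Unlike sensitivity and specificity, both values depend on disease prevalence, which is one reason the PPV varied so widely (64% to 95%) across the cited series.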
This is again consistent with our finding that significant coronary artery disease is closely related to atherosclerosis of the aorta, and that the more severe the atherosclerosis, the higher the probability of spontaneous cardiovascular events. Clinical implications One of the most important first steps in stratifying risk among patients with proven or suspected coronary artery disease is the identification of patients at high risk for coronary or vascular events during the course of the next few months or years. To date, left ventricular dysfunction, the number of diseased vessels, and the severity of myocardial ischemia have emerged as important determinants of survival [ 19 ]. Our data suggest that atherosclerosis of the descending aorta observed during a simple, routine transesophageal echocardiographic examination can be an additional prognostic marker for identifying patients at higher risk of cardiovascular events. When patients are referred for transesophageal testing, for whatever reason, a semiquantitative description of the atherosclerotic burden of the descending aorta should always be included in the prognostic stratification. Study limitations This study has several limitations. The study population was highly heterogeneous, reflecting the wide variety of patients referred to the echo lab for transesophageal testing: patients with known or suspected coronary artery disease or cerebrovascular disease coexist with patients with congenital or acquired valvular disease. Therefore, our findings cannot be directly translated to the general population, and further prospective studies are needed to evaluate the prognostic value of transesophageally detected aortic pathology in more sharply defined clinical subsets. Future directions The morphological characterization of atherosclerotic plaque was not performed in our study, either in terms of plaque content [ 20 ] or of plaque geometry [ 21 ]. 
Ultrasonic tissue characterization technology can be applied for a more accurate and quantitative description of echocardiographic plaque structure and profile [ 22 ]. Both of these criteria have documented prognostic impact in the carotid artery [ 23 ]. The prognostic value of ultrasonic assessment of aortic atherosclerosis can therefore be further improved with more quantitative, albeit more technologically demanding, image analysis. It is important, however, that even a semiquantitative, subjective and extremely simple assessment of atherosclerosis from transesophageal images, applied to an extremely heterogeneous population, yields powerful prognostic stratification, even when hard prognostic end-points are considered. Supplementary Material Additional File 1 Grade 1 atherosclerosis of the descending aorta. Intimal thickening. Click here for file Additional File 2 Small plaque on the descending aorta, corresponding to Grade 2 atherosclerosis. Click here for file Additional File 3 Grade 3 atherosclerosis, with large, multiple plaques. Click here for file Additional File 4 Grade 3 atherosclerosis, plaque with mobile parts. Click here for file | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC534093.xml |
544574 | The effectiveness of metal on metal hip resurfacing: a systematic review of the available evidence published before 2002 | Background Conventional total hip replacement (THR) may be felt to carry too high a risk of failure over a patient's lifetime, especially in young people. There is increasing interest in metal on metal hip resurfacing arthroplasty (MoM) as this offers a bone-conserving option for treating those patients who are not considered eligible for THR. We aim to evaluate the effectiveness of MoM for treatment of hip disease, and compare it with alternative treatments for hip disease offered within the UK. Methods A systematic review was carried out to identify the relevant literature on MoM published before 2002. As watchful waiting and total hip replacement are alternative methods commonly used to alleviate the symptoms of degenerative joint disease of the hip, we compared MoM with these. Results The data on the effectiveness of MoM are scarce, as it is a relatively new technique and at present only short-term results are available. Conclusion It is not possible to make any firm conclusions about the effectiveness of MoM based on these early results. While the short-term results are promising, it is unclear if such results would be replicated in more rigorous studies, and what the long-term performance might be. Further research is needed which ideally should involve long-term randomised comparisons of MoM with alternative approaches to the clinical management of hip disease. | Background The treatment of younger people with disease of the hip joint presents a difficult clinical problem. Conventional total hip replacement (THR) may be felt to carry too high a risk of failure over a patient's lifetime. 
Overall, long-term results of THR in younger patients with a variety of underlying conditions indicate that 25–30% may require revision by 15 years [ 1 ], compared with less than five percent at ten years for older patients, and less than ten percent at ten or more years for all patients [ 2 ]. Specific subgroups of young active patients, such as those with osteoarthritis, may experience a revision rate of 50% [ 3 ]. In 1999/00 in the NHS in England, 18% (8,389) of THRs were performed on people aged between 15 and 59, 46% (21,440) on people aged 60 to 74, and 36% (27,965) on people aged 75 and over [ 4 ]. Data on the number of revisions performed were not so readily available. A previous report suggested that out of approximately 2700 THRs per year, 2100 (78%) are primary THRs and 600 (22%) are revisions [ 5 ]. More recent data on revisions of THRs as a percentage of the total number of THR procedures suggest that in 1998/99 over ten percent of all THRs were carried out as revisions [ 6 ]. Due to concerns about the risks of revision, people who are expected to outlive a primary THR are often managed with non-surgical interventions, such as medication to alleviate pain and to delay or prevent the need for surgery; collectively these interventions have been referred to as 'watchful waiting' (WW). People are typically referred for surgery only when their symptoms (e.g. pain, loss of physical function) become unmanageable by non-surgical means. Figures for the number of people who have their symptoms managed by pain control and other non-surgical interventions (such as the use of transcutaneous electrical nerve therapy and strengthening exercises) within England and Wales are difficult to determine. Evidence from a population survey suggests that 15.2 people per 1000 aged 35 to 85 years had hip disease severe enough for surgery. This equates to approximately 760,000 people within England and Wales [ 7 ]. 
Metal on metal hip resurfacing arthroplasty (MoM) offers a bone-conserving option for treating those patients who are not considered eligible for THR. MoM may also represent a more attractive alternative to other procedures such as osteotomy, bone fusion and arthroscopy, which have previously been used or been advocated as means of delaying or preventing the need for a THR. MoM involves the removal and replacement of the surface of the femoral head with a hollow metal hemisphere, which fits into a metal acetabular cup. This technique conserves femoral bone (although it is not conservative on the acetabular side), maintains normal femoral loading and stresses, and may not therefore compromise future total hip replacements. Data on the use of MoM within the NHS in England and Wales could not be obtained in this review. Nevertheless, because of increasing interest in MoM, we conducted a systematic review of the evidence of effectiveness, aiming to compare it with THR and watchful waiting. Methods Search strategy Initial searches failed to identify any randomised or comparative observational studies comparing MoM with any of the chosen alternatives. A structured search was therefore conducted to identify evidence relating to the clinical effectiveness and cost-effectiveness of MoM for treatment of hip disease. The search strategy comprised: (1) a free text search to identify any potentially relevant papers evaluating MoM (free text search terms were used because of the anticipated scarcity of published literature); and (2) a search for RCTs and systematic reviews of RCTs for THR using a modified version of the search strategy used for a recent review [ 8 ]. The search strategies used are presented in the appendix. 
Appendix [see Additional file 1 ] The following databases were searched to identify relevant published literature: Cochrane database of systematic reviews (CDSR), Database of abstracts of reviews of effectiveness (DARE), Cochrane Controlled Trials Register, MEDLINE and PREMEDLINE, EMBASE, HealthSTAR, CINAHL, NHS Economic Evaluation Database (EED), and Allied or Alternative Medicine (AMED). Relevant audit databases and the World-Wide Web were also searched. Unpublished data sources were sought by contacting experts in this field and industries with an interest in this area of orthopaedics. Studies from 1990 to 2001 were searched for. Inclusion and exclusion criteria All identified abstracts were assessed for subject relevance independently by two reviewers. Full papers were then obtained and formally assessed for inclusion. It was agreed at the outset of the review that the search strategy would not be limited by language; non-English studies would be identified but, due to time and resource limitations, would not be translated and assessed for their relevance to the review. No restrictions on the type of patient were imposed. Comprehensive systematic reviews of THR were carried out in Health Technology Assessment in 1998. These reviews were updated by the National Institute of Clinical Excellence (NICE) in 2000. Therefore, in this review a search was carried out for systematic reviews and RCTs published subsequent to the completion of those reviews. Table 1 describes the inclusion and exclusion criteria applied for each of the treatments considered here. Data abstraction and quality assessment Two reviewers independently abstracted data and quality assessed the included studies. Where a difference in opinion occurred, an arbiter was consulted. A data abstraction form was developed to record details of trial methods, participants, interventions, patients' characteristics and pre-specified outcomes (See Table 2 ). 
The quality assessment form was based on a checklist developed by Morris, 1988 [ 2 ] to assess the quality of studies appearing in orthopaedic research journals. Results The initial search identified 352 potentially relevant MoM studies, 699 potentially relevant THR studies and 177 potentially relevant watchful waiting studies. After reviewing titles and abstracts and applying the inclusion and exclusion criteria, data were abstracted from four published MoM studies [ 9 - 12 ], four published THR studies [ 2 , 8 , 13 , 14 ] and one watchful waiting study [ 15 - 17 ]. Four unpublished studies were also included [ 18 - 21 ]. These were obtained from companies that manufacture alternative MoM devices and also through personal communication with the Robert Jones and Agnes Hunt Orthopaedic and District Hospital. No comparative studies were found. Quality of studies The majority of studies rated poorly in terms of description of study sample, control of bias, and statistical and analytical considerations. Most studies rated favourably in terms of clarity of the study question and definition of outcome, although less favourably with respect to the description of the intervention. The duration and completeness of follow-up was of variable quality, in terms of the interval between surgery and follow-up being clearly stated and the consideration of patients lost to follow-up. Of the three systematic reviews included, two were of high quality [ 2 , 13 ], although there were some limitations on the comprehensiveness of the literature searches. The other systematic review was of lower quality with poor reporting of the methodology [ 8 ]. A summary of the quality assessment of the remaining ten included studies is presented in Table 3 . 
Relative effectiveness of metal on metal hip resurfacing arthroplasty Metal on metal hip resurfacing arthroplasty included studies The MoM studies included in the review were four published studies, three unpublished reports from the manufacturers of MoM prostheses, and one unpublished report. (Refer to table 4 ) The length of follow-up was less than five years for all the studies, ranging from 8.3 months [ 10 ] to 48 months [ 20 ]. The majority of the studies were small, with sample sizes ranging from four patients [ 11 ] to 4,424 patients [ 20 ]. There was wide variation in patients' pre-operative diagnoses. Metal on metal hip resurfacing arthroplasty study outcomes Only one study reported details on the duration of the operation [ 11 ]. The mean operation time was reported as 247 minutes (range 180 to 370 minutes). McMinn et al, 1996 [ 10 ], reported that all patients were mobilised on the first post-operative day and at 12 days post-operation all patients had partial weight bearing of 25 kg on the surgically treated leg, with this weight being increased after 12 weeks. Patients in one study [ 12 ] spent a median of 21 days in hospital. All except one of the MoM studies reported revision rates to THR, which ranged from 0% to 14.3%. Two groups of patients in the McMinn et al, 1996 [ 17 ] study were reported to have no revision to THR. Details on patients who were pain free were reported in one published study [ 17 ]. In this study 91% (60/66 patients) were pain free after a mean follow-up of 50.2 months (range 44 to 54 months). One of the manufacturers of MoM prostheses reported 71.1% (69/97 patients) to be pain free after a mean follow-up of 16.9 months [ 18 ]. The studies reported few complications. In one study [ 11 ], 10.5% (2/19 patients) were reported to have complications: one a femoral nerve palsy and one a haematoma. McMinn et al, 1996 [ 10 ] reported that, out of 235 patients, three had infections and one had sciatic nerve palsy. 
The only complication reported by Wagner et al, 1996 [ 12 ] (a study of 35 patients), was one patient with a femoral neck fracture, which was due to a traffic accident. The Oswestry Outcome Centre [ 20 ] reported the majority of revision surgery was due to fractures (56%), followed by loosening (19%), infection (11%), avascular necrosis (11%) and dislocation (3%). One manufacturer reported 6.4% (7/110 patients) to have complications [ 18 ]. Another manufacturer reported 3% (3/100 patients) to have complications [ 19 ]. The most common type of complication in these two studies was loosening. Alternative treatments to MoM Only one watchful waiting study was included in this review. (Refer to table 5 ) The results of the study were reported in two papers, one with results up to three years [ 16 ] and the other up to eight years [ 17 ]. All the patients included in the study suffered from osteoarthritis of the hip. The study reported that the THR surgery performed increased from 9 patients (32%) at 3 years, to 14 patients (48%) at eight years. The number of patients using walking aids also increased from 8 patients (29%) at three years, to 12 patients (41%) at eight years. Patients' level of pain showed a slight increase from three to eight years. Three systematic reviews provided the majority of information on THR for this review [ 2 , 8 , 13 ]. One of these reviews [ 2 ] included 11 RCTs (mean sample 168 patients), 18 comparative observational studies including two very large studies based on Scandinavian registry data [ 22 ], and 159 observational studies. The second systematic review [ 13 ] included 17 RCTs, 61 comparative studies and 145 observational studies. The third review [ 8 ] included the two systematic reviews mentioned above in addition to four RCTs, ten prospective comparative observational studies and Swedish Registry data [ 22 ]. One additional recent RCT [ 14 ] not included in the earlier systematic reviews was found from the search in this review. 
(Refer to table 6 ) The review by Fitzpatrick et al 1998 [ 2 ], reported an adjusted revision rate per 100 person years at risk of 0.37(+/- 0.02). Faulkner et al, 1998 [ 13 ] reported that cemented designs show good survival at ten to 15 years. The review by NICE, 2001 [ 8 ] reported that a number of prostheses achieved a revision rate of 10% or less after ten or more years follow-up. The study by Sharp et al, 2000 [ 14 ] reported a revision rate of 27.5% at a mean follow-up of 5.2 years. It was also reported in this study that two out of 91 patients (2.2%) had a dislocation within one year post-operation. No evidence on the extent or nature of complications was reported in any of the systematic reviews. Discussion Despite extensive searching for relevant studies, the evidence base for making comparisons between MoM and any of the comparators is limited. Initial searches had already shown a lack of comparative studies and therefore the focus of the literature search was on identifying less methodologically robust studies such as data from case series. Although such searches are problematic due to lack of specific indexing terms, an extensive search strategy was devised to identify as many eligible studies as possible. The early data pertaining to MoM suggests that MoM has the potential to be an effective technique for the management of hip disease. However, due to the lack of any controlled studies, it is difficult to know how much more or less effective it is compared to any comparators. The data available with which to make comparisons is uncontrolled and the studies identified have, in many cases, considered patient populations that are dissimilar in many ways. Identified studies also did not always use comparable outcomes and had different lengths of follow-up. The lack of long-term data on MoM makes it difficult to compare with the other comparators. 
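The Fitzpatrick et al [ 2 ] figure of 0.37 revisions per 100 person-years illustrates the standard way revision risk is normalised across studies with different follow-up lengths: total revisions divided by total person-years of observation, scaled by 100. A minimal sketch of this arithmetic, using made-up counts rather than the review's data:

```python
# Revision rate per 100 person-years at risk, the metric quoted from
# Fitzpatrick et al [2]. The example counts below are hypothetical.

def rate_per_100_person_years(revisions, person_years):
    """Events per 100 person-years of follow-up."""
    return 100 * revisions / person_years

# e.g. 37 revisions observed over 10,000 person-years of follow-up
print(rate_per_100_person_years(37, 10_000))  # 0.37
```

Because the denominator is accumulated follow-up time rather than patient count, a short-follow-up MoM series and a 15-year THR series can be compared on the same scale, although censoring and differing case mix still limit the comparison.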
In particular, the failure rates for some types of THR prosthesis increase significantly after ten years [ 2 ], and it is possible the same could occur with MoM. It is also unclear whether the success rates reported for THR could be replicated in younger or more active populations. Comparisons between MoM and THR studies are difficult as the MoM studies included younger patients and had shorter follow-up than the THR studies. The evidence from the systematic reviews of different methods of THR reported that several prostheses had revision rates of ten percent or less at ten years or more [ 2 , 13 ]. Revision rates reported in the MoM studies ranged from 0% to 14% for up to 5 years follow-up. The only other outcome that could be compared is the percentage of patients who were pain-free at follow-up. This was reported to be 90.9% at 50.2 months follow-up in one group of patients in one MoM study [ 10 ]. The systematic review conducted by Fitzpatrick and colleagues in 1998 reports a mean of 84.1% (range 46–100%) of patients pain-free at a follow-up of 11 years [ 2 ]. In the MoM and WW studies, most of the patients had a preoperative diagnosis of osteoarthritis and were all of a similar younger age. The watchful waiting study reported 32% of patients requiring surgery at 3 years and 48% by eight years follow-up [ 15 - 17 ]. In the MoM studies revision rates ranged from 0% to 14.3%, after a follow-up of less than five years. During the 8-year follow-up period, people managed with WW had a slight increase in their pain levels, whereas the MoM patients' hip scores all improved. 91% (60/66) of MoM patients were pain free after a mean follow-up of 50.2 months in one study [ 10 ], and 71% (69/97) after a mean follow-up of 16.9 months in the only other study that reported this outcome [ 18 ]. 
The very limited evidence available suggests that MoM is more effective in terms of better quality of life (measured by pain scores for WW and hip scores for MoM) than WW over a follow-up of approximately three years. As the relative effectiveness of MoM is unclear, the cost-effectiveness of MoM is also uncertain. It is likely the MoM procedure would cost approximately £5,500, whereas a THR would cost about £4,200 and the annual cost of WW (including the cost of NSAID (non-steroidal anti-inflammatory drug) therapy, physiotherapy and treatment of side effects of medications) would be about £640 [ 23 ]. Whether MoM proves to be cost-effective against these alternatives depends upon the rates of revision to THR of MoM and WW, and the rates of revision of THR. The operation rates reported from the one WW study [ 15 - 17 ] and the revision rates of MoM suggest that MoM may provide a better outcome at lower cost over a ten-year period. Such information remains at best tentative due to the small number of people to whom the watchful waiting data relate, the short follow-up of the MoM studies, and the uncontrolled nature of the comparison. Conclusions The use of MoM in the UK is still relatively rare. However, there has been increasing interest from younger people with hip disease who are not currently considered eligible for THR, and amongst surgeons who strive for better ways to treat the patients whom they see. Yet only very limited evidence is currently available on MoM and, although the procedure does appear promising, the lack of robust comparisons with the other treatment options and of long-term data makes it virtually impossible to draw robust conclusions about its relative effectiveness. Given the early promise shown by MoM there is a real need for more rigorous research. 
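The cost comparison above can be sketched as simple expected-cost arithmetic. This is a back-of-the-envelope illustration only: the unit costs are those quoted from [ 23 ], but the ten-year conversion rates below are hypothetical assumptions, not estimates from the review, and the sketch ignores discounting and the timing of conversion:

```python
# Ten-year expected cost per patient, using the unit costs quoted from
# [23]: MoM ~£5,500, THR ~£4,200, watchful waiting ~£640 per year.
# The conversion-to-THR rates passed in below are hypothetical.

def ten_year_cost(initial_cost, annual_cost, conversion_rate, thr_cost=4200):
    """Initial procedure + ten years of recurring cost + expected cost
    of conversion to THR (conversion_rate is the assumed 10-year share)."""
    return initial_cost + 10 * annual_cost + conversion_rate * thr_cost

mom = ten_year_cost(5500, 0, 0.10)   # assume ~10% revised to THR by 10 years
ww = ten_year_cost(0, 640, 0.48)     # WW: 48% had surgery by 8 years [15-17]
print(round(mom), round(ww))
```

Even under these crude assumptions the ranking is sensitive to the conversion rates, which is precisely why the review concludes that cost-effectiveness cannot yet be determined without long-term revision data.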
Such research would be challenging, not least because of ethical considerations, but should attempt some form of prospective, preferably randomised, comparison of MoM with a policy of delayed selective surgery. These studies should preferably be large-scale, long-term, and use standard outcome measures, both pre- and post-operatively. Competing interests The author(s) declare that they have no competing interests. Authors' contributions LW carried out the critical appraisal of the included studies and assisted in the writing up. LV coordinated the project and assisted in the writing up. KM developed the methodology for the literature search and assisted in the writing up. AG participated in the design and coordination of the study. MB assisted in the critical appraisal of the included studies. All authors read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: Supplementary Material Additional File 1 Search strategies. The search strategies used to search electronic databases to identify studies relevant to this review. Click here for file | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC544574.xml |
509312 | Opening a Window to the Autistic Brain | Research focuses have shifted from "curing" autism to finding better diagnostics for early intervention, improving behavioral therapies, and gaining insight into the autistic brain | At first glance, the preschool classroom on the other side of the two-way mirror looks like any other—brightly colored rugs, scattered toys, and tiny chairs. But almost immediately an observer notices differences in the Team Toddle students here at the Neuropsychiatric Institute of the University of California at Los Angeles (UCLA) (Los Angeles, California, United States). A therapist instructs a toddler on his colors, flashing a rapid sequence of blocks at him. When the toddler starts rocking in his chair and repeatedly touching his forehead, the therapist physically restrains his hands, placing them back on the tabletop until he stops the repetitive behaviors and focuses once again on her face and the blocks. During playtime, a two-year-old girl sits by herself in the corner, fixated on some picture cards, oblivious to a group of other children playing with a racetrack and to the therapist who tries to draw her out to join the group. These children lack some of the key social skills that normal toddlers pick up naturally—looking to others for reassurance or cues, focusing on faces, and playing together. Social and communication impairment is a hallmark of autism and can show up as early as 12–18 months of age. But with an unknown cause, and genetic linkages still hazy, there is little consensus among researchers on how the disorder develops in children and how it causes a broad spectrum of social, language, and behavioral deficits. Following one line of research, David Amaral's laboratory at the M.I.N.D. 
Institute at the University of California at Davis Medical Center in Sacramento (California, United States) has recorded, in autistic brains, a brain volume increase in a specific structure, the amygdala, which is thought to be important for social behavior. A similar study at the University of Washington in Seattle (UW) (Seattle, Washington, United States) has reached the same conclusion. “There are so few facts about autism, to have two labs come up with the same data is phenomenal,” says Amaral. “We feel confident this is a real finding, but what does it mean to these kids?” On another research track, using functional imaging, Ralph-Axel Müller, a cognitive neuroscientist at San Diego State University (San Diego, California, United States) sees a scattering of brain activation in autistic brains that he views as an indication of a more general brain development problem underlying the disorder ( Figure 1 ). He has hypothesized that the early-developing basic functions may require more brain area in autism, pushing out and disturbing the later specialization for more complex functions. “I'm sure this is wrong,” he says, “but it will allow us to look in a more hypothesis-driven way at animal studies of how the cerebral cortex develops specialization.” Animal models may, in turn, yield clues about normal and abnormal brain development in humans. Figure 1 Brain Activation Scattering in Autism Autistic individuals show less activity, during a movement task, in areas that are normally activated (premotor and superior parietal cortex; blue areas), but unusually increased activity around these normal sites of activation (red areas). Images courtesy of Ralph-Axel Müller. “Since there is no major hypothesis as to cause [of autism], there are many plausible ideas,” says Amaral. “If we go after all of them, we will waste all of our resources. 
[We have to] come to some consensus about which are most plausible.” At least two levels of pursuit exist for tracing brain problems associated with autism—the exploration of the general developmental disruptions that result in an autistic brain, and the examination of more specific problems in particular brain structures that produce symptoms. Although scientists still debate how autism evolves in a patient, the field has begun in the last decade to replicate findings and make science-based arguments for interventions. Progress has come in small steps, with advances in neuroimaging and more rigorous experimental designs. Research focuses have shifted from “curing” autism to finding better diagnostics for early intervention, improving behavioral therapies, and gaining insight into the development and function of the autistic brain. Both advocacy groups and government programs have started to bring together neuroscience and genetics experts, clinicians, and families to sharpen the focus of studies and ensure progress in what has often been a messy field. A World Apart Autism spectrum disorder strikes between one and six out of every 1,000 children around the world, but diagnosis and treatment are currently limited to developed countries. Autism is four times more prevalent in boys than girls, but makes no racial, ethnic, or socioeconomic distinctions. It is characterized by three main symptoms: impaired language, social and communicative deficits, and repetitive and stereotyped behaviors, such as hand flapping, rocking, and unusual responses to sensory stimuli. Autism spectrum disorders can be broken down into other categories, such as low-functioning autism (IQ below 70), high-functioning autism (IQ above 70), and Asperger syndrome (similar to high-functioning autism but with no language deficit). Researchers suspect that there are even more distinct subsets of autism patients. 
For example, some patients also have epilepsy, and it has been suggested that there is a regressive form of autism—children who, at two or three years of age, appear to regress and lose developmental milestones they had already achieved. Researchers say that sorting out these different profiles—or phenotypes—of autism will be especially important in sorting out which genes or which brain abnormalities are implicated for particular deficits. This sorting should also help clarify the mounds of contradictory data that have dogged the field, by tamping down the experimental “noise” in studies. Boosting the number of children studied and following them from early infancy through adolescence and beyond will also be key components of future studies. “There is not going to be rapid progress in autism research unless we subtype,” Amaral says. He predicts that “brain differences in kids with a regressive form of autism will be different than those of kids with the more congenital type of autism.” He and others are teaming up in an autism phenotyping project that will characterize 600 children into categories of autism (comparing them to 600 children with mental retardation and 600 controls). Splitting autism into subtypes will boost both neurobiology and genetics studies ( Box 1 ) to find real effects related to specific traits. Facing Up to Autism A key area of research explores the brain's response to human faces at a young age. Studies at the UW Autism Center have shown that unlike typically developing three-year-olds, autistic children do not show a differential brain response to their mother's face compared to that of a stranger. While dysfunctional face recognition may be one of the more devastating symptoms for caregivers, it is also one of the most promising avenues for research to determine how autistic brains process their world differently. 
Sara Webb, a child psychologist at UW, has followed about 70 autistic children since the age of three for a longitudinal study that will test many parameters until they reach age nine. Her work has already shown that autistic three-year-olds process seeing a strange toy differently from seeing a favorite toy, in the same way a normal child does. But activity in their brains—measured through a network of electrodes placed on the scalp—is similar whether the face is familiar (for example, mom) or strange. This, Webb says, led to two hypotheses: either the brain area for face processing is not set up correctly in autistic children, or the way these children incorporate experiences from their environment is so different that the brain area develops improperly. “We think the latter is a more likely explanation at this point,” says Webb. “By the time they are adolescents or adults, they are showing the [proper] response for familiar faces.” Indeed, a functional MRI (fMRI) study by UW neuroimaging researcher Elizabeth Aylward showed that the brains of high-functioning adolescents and adults did activate the face-recognition center, the fusiform gyrus, when shown a very familiar face. However, the same subjects did not activate the center when viewing strange faces. This points to the possibility that greater experience seeing the familiar face (i.e., on a daily basis for many years) can eventually influence the appropriate brain areas. “You need the biological wiring set up properly, but you also need experience for it to function normally,” says Aylward. “We're guessing what is missing is the experience.” To test that idea, one of her graduate students will “train” half of the autistic patients in face recognition—something most children pick up on their own—by having them study, manipulate, and match faces using computer games. Then fMRI scans will be done again to see if the fusiform gyrus might now be activated when viewing strange faces, as it is in control subjects. 
Intense training of a similar type for reading has already been shown to effect change in brain activation in as little as three weeks for children with dyslexia. In their model, it is as if “all the parts are there, ready to go, but somehow they haven't gotten the ignition turned on,” says Aylward. At the 2004 annual meeting of the American Association for the Advancement of Science (Washington DC, United States), the UW center director Geraldine Dawson explained that this tackling of specific deficits will help researchers attach them to particular “mind modules” in the brain and will ultimately lead to the genes that control the development or function of those modules. That modular view, however, is not shared by many of her colleagues elsewhere, who argue that autistic behaviors are the result of a system-wide perturbation of early brain development and connectivity. Structural Support For example, Müller points to structural studies that seem to uphold his theory of overall disorganization of the brain's cortex. Work by Manuel Casanova and colleagues at the University of Louisville (Louisville, Kentucky, United States) shows that the “minicolumns” of neurons that make up the cortex are narrower and more numerous in autistic brains. Normally, these organized bundles appear very early in the developing fetal brain. In postmortem studies of autistic brains, Casanova found that the minicolumns had the same number of neurons, but smaller margins between the bundles. The margins, Casanova says, may act like “a shower curtain of inhibition that prevents information from flooding adjacent minicolumns.” Reducing those margins, he hypothesizes, could mean that an autistic brain has too much positive feedback, acting like a noisy amplifier. “For an autistic individual who is trying to piece together too much information from a face, maybe it's like looking at the sun,” he says. 
More general studies of adult autistic neuroanatomy have given conflicting results—most likely from diversity in the study populations—that make functional inferences difficult, if not impossible. But recent studies that focus on developing autistic brains earlier in life have revealed intriguing differences from normally developing children. Several studies have shown that from ages two to four, autistic children have larger overall brain volumes (and correspondingly larger head circumferences) than normal children, but that the difference disappears by about age six or seven. Since autism is usually diagnosed around age two or three, when the brain is already abnormally large, Eric Courchesne and colleagues at the University of California, San Diego (San Diego, California, United States) hypothesized that brain overgrowth must occur earlier, before signs of autism appear. In an elegant retrospective study, the team analyzed head circumference and brain volume measurements of autistic children that started at birth and continued until 14 months of age. The study revealed that at birth, autistic children's head size is much smaller than that of healthy children, in the 25th percentile, but that by 6–14 months it had increased to the 84th percentile, an excessive growth rate. The increase correlated with increased brain volumes of both gray and white matter regions measured by structural imaging between ages two and five. The Courchesne study strongly suggests that in autism, significant unregulated brain growth occurs in the first year of life. The team also found an association between greater increases in brain size in infancy and a later age for first word, worse repetitive behavior, and a trend toward more severe autistic symptoms later, at diagnosis. The rapid growth of autistic brains may produce too many connections too quickly, without the opportunity for those connections to be shaped by the experience and input that a typically developing child accumulates over many years. 
At age six or later, when the growth slows, the already derailed connections may no longer be able to incorporate experiences. “By that time,” write Courchesne et al., “the period of plasticity that allows the exquisite and graceful complexity of the human brain to emerge will have passed.” Playing Well with Others This idea that autistic brains are developing at warp speed, to their detriment, fits intriguingly well with what is known about the treatment of autism—the earlier and more intense the behavioral therapy an autistic child receives, the better the outcome will be. That's why the toddlers at UCLA get one-on-one training by therapists, who fire rapid questions and physically repeat tasks until they sink in. Stephanny Freeman, co-director of the Early Childhood Partial Hospitalization program at UCLA (Los Angeles, California, United States), says these methods would be alien to, and lost on, typically developing two-year-olds, who would be bewildered by such a highly structured environment. Her colleague and co-director, Tanya Paparella, chimes in, “It is as if we are opening a window or door to the autistic brain.” Keeping that door open as long as possible in very young autistic patients seems to give them a better prognosis than older children, who are more difficult to treat. But while most agree that early and intense therapy is good for autistic children, until recently, little research on intervention methods existed. Connie Kasari, an educational psychologist at UCLA, along with Freeman and Paparella, has run one of the first randomized, controlled trials on therapies designed to teach autistic kids social skills. The group tested two skills in particular—sharing attention with others and pretend playing ( Figure 2 ). The team hypothesizes that these skills, which normal children pick up easily and early, lay important groundwork for language development. 
Figure 2 Pointing as an Example of Joint Attention A child with autism (three years old) pointing to the fish in an aquarium. Photo courtesy of Connie Kasari. The team's results show that autistic children can learn these skills from intense training. At least anecdotally, some of these children have gone on to function in normal school classrooms, even making a few friends, although they are still a bit socially awkward. Whether or not improvements in those skills will correlate with language improvements will require further testing. But Kasari notes that this work is not universally accepted in the autism therapy community, and that many more controlled studies will have to be published before a system-wide change in autism preschool education can occur. Funding the Search In the last decade, National Institutes of Health funding for autism research has increased from $10 million to $80 million, and much of that has been funneled into large, multidisciplinary research projects. Advocacy groups such as Cure Autism Now (Los Angeles, California, United States) and the National Alliance for Autism Research (Princeton, New Jersey, United States) greatly influence which autism research projects get funded, both through their own grant programs and also by lobbying Congress for increased federal grants. Some question whether it is wise to let emotions and the desire to find a cure drive research agendas. In the past, tensions between government programs and advocacy programs have run high. Casanova, for one, criticizes the disproportionate flow of money to what he calls imaging and genetic “fishing expeditions” and says more should go to neuropathology studies. He points out that only about 40 postmortem, mostly adult, autistic brains have been studied so far, a tiny fraction compared to those studied in other neuropathological disorders like Alzheimer's disease or schizophrenia. 
But Daniel Geschwind, a neurogeneticist at UCLA, defends this approach, saying that a well-planned fishing expedition that uses the right technology and looks in the appropriate places can result in a “freezer full of fish.” He also says that parent organizations keep the field honest by “constantly reminding us to keep an eye on the ball and don't get distracted.” Geschwind, Amaral, and other top experts have recently been recruited by advocacy groups or by friends with autistic children to shift some of their research questions to examining autism. As more researchers in genetics and neuroscience have become involved, Amaral says, the tensions between the parent groups and the National Institutes of Health have eased. “The parents communicated to the scientists the tremendous need for research and the scientists convey back to them which [research projects] make sense to fund,” he says. He adds that advocacy groups have been indispensable to research, setting up large genetic and brain tissue banks and enlisting families to participate in those efforts. So, researchers say, the goals of the National Institutes of Health programs and the advocacy programs have started to come together to focus on well-executed studies that might lead to better diagnostics and earlier, proven interventions. The work of Courchesne et al. suggests that children at risk for autism might easily be diagnosed by head circumference measurements as early as the first few months of life. Imaging studies combined with training programs, such as the work at UW on face recognition, may one day be able to verify that behavioral interventions are effective at activating target brain areas. As researchers work to untangle the causes and effects of brain dysfunctions in autism, Aylward notes, there is good reason to be hopeful: “Although this is a genetic disorder, we know there is plasticity in the young brain.” Box 1. 
Genetic Power-Up Evidence abounds that autism results from multiple gene mutations. Identical twins share an autism diagnosis 60%–95% of the time, and a younger sibling of an autistic child is 50 times more likely to have autism. There are also four times as many autistic males as females, indicating a possible sex chromosome difference in inheritance. Genetics researchers estimate that autism is the result of mutations in anywhere from 2 to 20 genes. By studying the commonly inherited pieces of chromosomes in autistic siblings, geneticists have identified a handful of chromosome hotspots. However, each region contains hundreds of individual genes, and narrowing down to specific mutations will require studies that either involve thousands of families or tackle specific phenotypes. Daniel Geschwind, a neurogeneticist at UCLA, has already completed such a study. It reveals a linkage—the probability that a region contains a gene or genes linked to the disorder—between language deficits and a hotspot region on Chromosome 7. His team looked at a more homogenous group of autistic patients, all of whom had a similar language delay measured quantitatively by time to first spoken word. “Endophenotypes measure something that underlies the disorder in a significant way and [therefore probably] also underlies a genetic component,” says Geschwind. “We're trying to identify characteristics that really underlie the genetic peaks of interest.” Another such study, by Margaret Pericak-Vance and colleagues at Duke University Medical Center (Durham, North Carolina, United States), used the characteristic of “insistence on sameness”—a subset of stereotyped behaviors such as resisting change in routine or environment, and compulsions. By running a genetic analysis on a group of patients with the highest “insistence on sameness” scores from diagnostic tests, the Duke team increased the linkage score and further narrowed the hotspot region on Chromosome 15. 
| /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC509312.xml |
552316 | The uses of provincial administrative health databases for research on palliative care: Insights from British Columbia, Canada | Background Research indicating that people increasingly prefer to die at home suggests that palliative care is likely to play a more prominent role in the future of Canada's health care system. Unfortunately, at a time when research evidence should be informing policy and service delivery, little is known about health service utilization by Canadians at the end of life. One existing mechanism that can help address this gap is provincial administrative health data. The purpose of this study was to explore the potential of administrative health data to identify characteristics of palliative care users, patterns of formal service utilization and predictors of palliative care use. Methods Bivariate and multivariate analyses were used to examine data from the Capital Health Region, British Columbia Linked Health Databases for the period 1992/93 to 1998/99. The databases examined include continuing care, physician claims, hospital separations, and vital statistics. As the name implies, these databases can be linked at the individual level using unique identifiers so that health services utilization can be tracked across sectors. Results General patterns of service use among palliative care patients suggest that general practitioner and medical specialist visits have decreased over time and the utilization of hospital beds has increased. Utilization of community-based services (i.e. home support and home nursing care) shows an overall pattern of decline. However, when compared to non-palliative care patients, palliative care patients spent fewer nights in hospital, used fewer hours of home support, and had a greater number of home nursing care visits. Conclusions Administrative health databases can provide valuable information for examining service utilization patterns over time. 
However, given that decisions surrounding the designation of palliative care include factors beyond the scope of administrative databases (such as quality of life, personal preferences, social support), these databases should only be seen as one source of information to inform service delivery and policy decision making. | Background Despite the fact that we all die, the philosophy and practice of palliative and end of life care is relatively new [ 1 ]. Certainly, in the broader context of the Canadian health care system, and within most western cultures, palliative and end of life care has played a secondary role due in part to the dominance of the biomedical model and its inherent focus on cure [ 2 , 3 ]. Care at the end of life is likely to play a more prominent role, however, in the future of Canada's health care system [ 4 - 6 ]. This expectation is based on the projected increase in numbers of older persons and the associated heightened risk of developing age-related chronic diseases such as cancer and cardiopulmonary disease. Conjoint with this expectation, the focus of care is shifting from institutional to community care. Thus, as governments seek to contain health care costs and caregivers search for improved quality of life for those nearing the ends of their lives, effective palliative and end of life care will become a more prominent research and policy issue. Quality palliative and end of life care has emerged as a core value of Canada's health system [ 7 ]. Indeed, over the past few years, the Canadian government, policymakers and health care professionals have responded to increasing concerns about the quality of care for the dying. In 2001, the Canadian government appointed a Minister with Special Responsibility for palliative care and a national palliative care Secretariat was established within Health Canada [ 8 ]. 
A national action plan on palliative and end of life care was subsequently developed and several national subcommittees and working groups have formed to deal specifically with issues related to health care services at the end of life. One of these issues is the lack of research and data to inform health service delivery and policy decision-making in palliative and end of life care [ 7 ]. At a time when research evidence should be informing policy and health service delivery, little is known about health service utilization by Canadians at the end of life. Although countries such as Australia and the United Kingdom have established national palliative care (PC) surveillance data and extensive work has been done in the United States to identify key data elements pertaining to the end of life, Canada has been slower to develop in this area. Many Canadian PC programs have developed regional data systems [ 9 ], and a core data set for national surveillance is currently being developed [ 10 ]. National data standards and a surveillance system, however, are yet to be fully implemented in Canada. Therefore, researchers are examining existing sources of data to assist in informing health service and policy decisions. One existing mechanism that may assist in informing such decisions is provincial administrative health databases. The primary purpose of this paper is to explore the usefulness of provincial administrative health databases as a source of information to inform health care decision making and policy development in palliative and end of life care. Data from the Capital Regional District in British Columbia, Canada are examined for the period 1992/93 to 1998/99. Specifically, our analyses identify characteristics of PC patients, patterns of formal service utilization over time, and predictors of PC service utilization. The implications of our findings for both research and policy, along with recommendations for future data collection initiatives are addressed. 
Palliative care: Patient characteristics and service utilization
Research on dying persons has grown in the past decade due, in part, to a rapidly aging population, policy concerns about increasing health expenditures, and a perception that the quality of care at the end of life is inadequate [ 11 , 12 ]. Several patterns have emerged describing patients who receive specialized PC. In Canada and the United Kingdom, for example, over 90 percent of palliative service users have a primary diagnosis of cancer [ 13 ]; 70–90 percent of palliative service users in Australia and more than 60 percent in the United States have likewise been diagnosed with cancer [ 13 , 14 ]. Levels of PC service utilization tend to be similar for males and females [ 13 , 15 , 16 , 18 ], although there are exceptions. For example, Grande and associates (1998) reported that more males than females used PC services, while a later study by Grande and colleagues (2002) found more females used PC services. Generally, the relationship between age and PC service use is consistent across studies, with services typically provided to adults in the young-old age range (i.e. 65–74 years) [ 13 , 15 , 17 - 21 ]. Results of studies investigating the relationship between socioeconomic status and PC service use have not been consistent, with some studies reporting a positive relationship between income and referral to PC services [ 17 , 22 , 21 , 23 ] while others report no significant relationships [ 15 , 18 , 24 ]. Some research has examined the patterns of formal service utilization of PC patients. The more frequently cited measures of service utilization include scope of services used, along with length of stay (on a particular PC service) and location of death [ 4 , 13 , 24 , 25 ]. Yet, even these have received limited research attention. In Canada, population-based studies examining PC patients commonly use vital statistics and/or cancer registry data [ 4 , 22 , 25 ]. 
While providing important information, both data sources are limited in scope. Mortality data typically include underlying cause of death and date of death but little information on demographic characteristics or service utilization. Cancer registries contain demographic and service utilization variables, but are restricted to capturing only data on cancer patients and therefore exclude patients dying from other diseases [ 26 ].
Administrative health databases
While administrative health databases have been used for research purposes for decades in Canada, it was the establishment of the Manitoba Centre for Health Policy and Evaluation and the subsequent development of their population health information system, POPULIS [ 27 ], that brought this data source into the spotlight. Similar developments in other Canadian provinces since that time now offer health researchers access to provincial administrative health data that can be linked across services. In British Columbia, the Centre for Health Services and Policy Research maintains and links provincial health data and is responsible for extracting and providing data to health researchers upon request from and approval by the Ministry of Health [ 28 ]. Like mortality and cancer registry data, however, administrative health data have limitations. These databases, for example, were originally constructed to serve a billing role; service providers submit claims in order to be reimbursed for services provided. The data contained in these datasets were thus never intended for research purposes. Therefore, while the databases are rich in information for select utilization, supply, and cost issues, their usefulness for addressing other factors that may influence the health of populations is limited. In the context of palliative and end of life care, it is this final point that bears further examination. 
Specifically, do these administrative databases identify persons receiving PC, and if so, what information is available to researchers interested in studying PC patients and their health service utilization? In addressing these questions, we are primarily concerned with providing examples of what can be done with these data rather than what the data are actually telling us about PC patients and their health care utilization.
Methods
Data for this paper came from the British Columbia Linked Health Database. Housed at the University of British Columbia, this database contains all provincial administrative health data collected by the British Columbia (BC) Ministry of Health, including physician claims, hospital separations, long-term care data, pharmacare data and vital statistics. Each of these components can be analyzed separately, as a stand-alone database, or linked through unique identification numbers and examined in combination or as a whole. The data were originally collected as part of a larger project examining the impact of regionalization in BC from 1990/91 to 1998/99 [ 29 ]. However, as the purpose of this particular paper fell outside the objectives of the larger study, a separate request to the BC Ministry of Health for data access was submitted and approved. Ethics approval from the academic institution was also received. Given that PC patients could not be identified in the data prior to 1992/93, for the purposes of this paper, data from fiscal years 1992/93 to 1998/99 for all databases (with the exception of vital statistics, where only 1998/99 figures were available) were included. Results are based on a 100 percent sample of the population aged 50 and over residing in the Capital Regional District, on Vancouver Island, British Columbia, Canada for each of the years. Both univariate and multivariate analyses were used to determine the usefulness of administrative databases for PC research in British Columbia. 
The starting point for these analyses was the identification of all patients designated as being in need of PC (it should be noted that not all persons who may require PC are identified as needing it in the databases, nor are all people who use PC services included in the databases). This was made possible through the 'service type' variable found in the direct care services database, one of many databases that make up the continuing care component of the administrative databases. Next, using the unique identification numbers that are consistent across the different databases, it is possible to link these different datasets, thereby allowing for the examination of individual service utilization across the various segments of the health care system (specifically: physician claims, hospital separations, and the home support and home nursing care components of continuing care). The continuing care database is a suite of databases; however, for the purposes of this paper, only the direct care services and home support databases were accessed. The direct care services database was used in this paper for two purposes. First, and most importantly, it allowed us to identify those designated palliative through an individual-level variable. It should be noted that those never entering the continuing care system but who received palliative services in another sector (i.e., entered the hospital, received palliative services and died in hospital) could not be identified as such and thus are not designated palliative for these analyses. Second, the direct care services database also contains information on home nursing care. For this paper, the number of home nursing visits received by each individual in a given year was calculated. Home support utilization is tracked on a monthly basis. From these files it was possible to calculate the number of hours of home support received by each individual in a given year. 
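The per-person, per-year aggregation described above can be sketched with toy data. The column names (`client_id`, `hours`) and values here are hypothetical; the actual field names in the BC Linked Health Database are not given in the paper.

```python
import pandas as pd

# Hypothetical monthly home support records: one row per client per month.
# Real variable names in the BC Linked Health Database will differ.
monthly = pd.DataFrame({
    "client_id":   ["A", "A", "A", "B", "B"],
    "fiscal_year": ["1998/99"] * 5,
    "hours":       [12.0, 20.5, 8.0, 30.0, 15.0],
})

# Sum the monthly records to get annual hours of home support per individual.
annual_hours = (
    monthly.groupby(["client_id", "fiscal_year"])["hours"]
           .sum()
           .reset_index(name="annual_home_support_hours")
)
print(annual_hours)
```

The same group-and-sum pattern would yield the annual count of home nursing care visits from the direct care services records.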
All claims made by physicians are tracked in the Medical Services Plan database. It is therefore possible to calculate the number of visits to a physician made by each individual over the course of a year. The hospital separations database is so named because a hospital patient is entered into the system only when they leave the hospital (this includes deaths). Since admission and separation dates are included as variables, it is possible to calculate the number of nights each individual spends in the hospital (a stay with no overnight component is assigned 0 nights). Total nights in a given year for each individual can then be summed. The vital statistics data that were accessed included underlying cause of death. This variable consists of ICD-9 codes that can be classified into disease categories. This procedure allowed for the identification of the cancer versus non-cancer deaths used in the multivariate analysis. Each of these health databases includes a small number of demographic variables, such as age and gender, and geographic identifiers, such as health authority and census tract. Marital status is included in one of the continuing care databases. This lack of what are typically referred to as 'control variables' is one of the biggest limitations of administrative databases. To compensate, it is possible to link Census variables at an aggregate level to the administrative data. Of course, this procedure works only for variables that lend themselves to averages. For example, it is possible to calculate the average household income for an area, but it is not possible to calculate the average gender for an area. Using this premise, average household income was linked to the administrative data at the enumeration area level (the smallest geographic unit released by Statistics Canada and the BC Ministry of Health). In brief, the average household income for an enumeration area is assigned to every individual residing anywhere within that enumeration area. 
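The nights-in-hospital calculation described above (separation date minus admission date, with same-day stays counting as zero, summed per person per year) can be sketched as follows; the record layout and values are invented for illustration.

```python
import pandas as pd

# Hypothetical hospital separation records (one row per completed stay);
# field names are illustrative, not the database's actual variable names.
seps = pd.DataFrame({
    "client_id":  ["A", "A", "B"],
    "admission":  pd.to_datetime(["1998-04-01", "1998-06-10", "1998-05-05"]),
    "separation": pd.to_datetime(["1998-04-08", "1998-06-10", "1998-05-20"]),
})

# Nights per stay: separation date minus admission date.
# A stay with no overnight component naturally comes out as 0 nights.
seps["nights"] = (seps["separation"] - seps["admission"]).dt.days

# Total nights in the year for each individual.
total_nights = seps.groupby("client_id")["nights"].sum()
print(total_nights)
```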
Thus, income is an aggregate measure, while the remainder of the variables used in this paper are individual-level measures. Finally, as mentioned earlier, each of these databases is linkable. The term linkable describes the assignment of the same unique identification number to an individual regardless of database. In other words, Person A will be assigned the same number in the hospital database and the continuing care database. As such, linkable databases allow researchers to examine the health service utilization and health status of the same individual across sectors. Without the ability to link, it would be impossible, for example, to examine underlying cause of death (vital statistics database) in relation to the designation of palliative care (continuing care database). As described in the preceding paragraphs, five health service utilization measures were calculated: number of general practitioner visits per year; number of medical specialist visits per year; number of nights spent in hospital per year; number of hours of home support received per month; and number of home nursing care visits received per month. Next, using these variables, three different analyses were conducted: 1. Descriptive statistics (distribution, mean, median) were used to examine PC patient characteristics and health service utilization by gender. All those identified as being in need of PC for each fiscal year were examined by age, gender, income, and the five health service utilization measures. The sample size of PC patients ranged from a low of 74 in 1992/93 to a high of 568 in 1997/98. 2. Because service use is likely to differ by diagnosis, we controlled for diagnosis by comparing patients designated as being in need of PC with patients who were not so designated. Previous research indicates that the majority of persons receiving specialized palliative and end of life care have a cancer diagnosis [ 13 ]. 
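The two kinds of linkage just described, person-to-person linkage on the shared unique identifier and person-to-area linkage for the aggregate census income measure, can be sketched with pandas on toy extracts. All identifiers, field names, and values below are hypothetical.

```python
import pandas as pd

# Hypothetical extracts from two linkable databases, sharing one study ID.
continuing_care = pd.DataFrame({
    "study_id":   [1, 2, 3],
    "palliative": [True, False, True],
    "enum_area":  ["EA01", "EA01", "EA02"],   # enumeration area of residence
})
hospital = pd.DataFrame({
    "study_id":           [1, 2, 3],
    "nights_in_hospital": [21, 24, 5],
})

# Aggregate census measure: average household income by enumeration area.
ea_income = pd.DataFrame({
    "enum_area":            ["EA01", "EA02"],
    "avg_household_income": [49804, 51718],
})

# Link person-level databases on the unique identifier, then attach the
# area-level income so every resident of an enumeration area carries its average.
linked = (
    continuing_care
    .merge(hospital, on="study_id")      # individual-to-individual link
    .merge(ea_income, on="enum_area")    # individual-to-area link
)
print(linked)
```

This mirrors the paper's point that income ends up as an aggregate measure: both residents of `EA01` receive the same income value.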
While it is acknowledged that different types of cancer place different demands on the health care system, for the purposes of this study, all cancers were examined in combination. Using the underlying cause of death code from vital statistics available for 1998/99, all those who died of cancer in 1998/99 (n = 2,734) were identified. Next, comparisons involving age, gender, income, and the five health service utilization measures were made between cancer patients designated palliative (n = 119) and cancer patients not designated palliative (n = 2,615). 3. To examine the influence of a PC designation on health service utilization, five multiple linear regression models were estimated using the 1998/99 data from the palliative and non-palliative sample (n = 2,734) described above. The following five dependent variables were regressed on age, gender, average household income, and designation of palliative/non-palliative: (1) number of general practitioner visits; (2) number of medical specialist visits; (3) number of nights spent in the hospital; (4) number of hours of home support; and (5) number of home nursing care visits. Prior to conducting the multivariate analyses, assumptions of normality, linearity, and collinearity were tested and adjustments were made where necessary.
Results
1. Palliative care patient and health service utilization characteristics The linked administrative database allowed for a description of patient characteristics including age, gender, and income. Table 1 presents characteristics of PC patients by gender from 1992/93 to 1998/99. Results indicate that almost 40 percent of male and female PC patients are between the ages of 70 and 79 years; this proportion is consistent across years for males and varies from 34% to 45% for females. Median values for annual household income range from $46,757 to $53,377 for males and $41,923 to $48,753 for females over the study period. 
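One of the five linear models described above (nights in hospital regressed on age, gender, income, and PC designation) can be sketched with ordinary least squares on simulated data. The coefficients and sample below are invented for illustration and are not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated patient-level predictors (all values invented for illustration).
age        = rng.uniform(50, 95, n)
female     = rng.integers(0, 2, n).astype(float)
income_k   = rng.normal(48.0, 8.0, n)          # household income, $000s
palliative = rng.integers(0, 2, n).astype(float)

# Simulated outcome: nights rise with age and fall with a PC designation.
nights = 0.4 * age - 5.0 * palliative + rng.normal(0.0, 3.0, n)

# Ordinary least squares with an intercept, mirroring the paper's models.
X = np.column_stack([np.ones(n), age, female, income_k, palliative])
beta, *_ = np.linalg.lstsq(X, nights, rcond=None)
print(dict(zip(["const", "age", "female", "income_k", "palliative"],
               np.round(beta, 2))))
```

With this setup the fitted coefficients recover the simulated effects (age positive, palliative negative), which is the same pattern of interpretation the paper applies to its Table 4.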
With the exception of 1992/93 and 1997/98, where the median income for males is substantially higher than that of females, the income differences between males and females are slight.

Table 1. Palliative Care Patient Characteristics by Gender: 1992/93–1995/96

                 1992/93             1993/94              1994/95              1995/96
                 M (n=42)  F (n=32)  M (n=212) F (n=178)  M (n=225) F (n=194)  M (n=207) F (n=216)
Age (%)  50–59   9.6       3.1       13.2      9.5        6.7       13.9       10.1      6.5
         60–69   21.4      18.8      21.7      32.0       25.7      25.8       23.7      21.7
         70–79   47.6      46.9      39.7      34.3       42.7      39.1       38.7      39.4
         80+     21.4      31.2      25.4      24.2       24.9      21.2       27.5      32.4
Median income    53377     41923     47682     46997      47272     48753      47238     47272

Palliative Care Patient Characteristics by Gender: 1996/97–1998/99

                 1996/97             1997/98              1998/99
                 M (n=209) F (n=189) M (n=245) M (n=209)  F (n=189) F (n=194)
Age (%)  50–59   8.6       9.4       6.6       8.6        9.4       13.9
         60–69   27.2      14.9      14.6      27.2       14.9      25.8
         70–79   39.2      45.5      40.4      39.2       45.5      39.1
         80+     25.0      30.2      38.4      25.0       30.2      21.2
Median income    46757     45896     49984     46757      45896     48753

The means for the use of the five health services are presented in Table 2. T-test results suggest that there are few health service utilization differences between male and female PC patients. General patterns suggest that female PC patients spend more nights in hospital and receive a greater number of monthly home support hours and home nursing care visits than do their male counterparts; however, few of these gender differences reach statistical significance. One exception is the number of home support hours, where the results suggest that in three of the seven years examined, females receive a greater number of home support hours than males. 
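The male-versus-female comparisons behind the significance markers in Table 2 are two-sample t-tests. A sketch on invented data follows; the group means and spreads are loosely patterned on the home support figures, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented monthly home support hours for male and female PC patients;
# the group means are chosen only to make the comparison visible.
male   = rng.normal(22.0, 8.0, 120)
female = rng.normal(31.0, 9.0, 110)

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    var_a = a.var(ddof=1) / len(a)
    var_b = b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(var_a + var_b)

t_stat = welch_t(male, female)
print(f"t = {t_stat:.2f}")
```

A strongly negative t here indicates the female group's mean is higher. In practice one would also compute degrees of freedom and a p-value, e.g. via `scipy.stats.ttest_ind(..., equal_var=False)`.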
Table 2 Mean Palliative Care Patient Service Utilization by Gender: 1992/93 – 1998/99 Annual GP visits Annual specialist visits Nights spent in hospital annually Monthly hours of home support Monthly home nursing care visits M F M F M F M F M F 1992/93 19.2 21.3 14.3 19.4 99.9 106.7 15.3 26.0 23.0 29.8 1993/94 15.0 15.7 13.8 13.3 102.9 185.5 21.9** 31.0 25.7** 26.7 1994/95 15.1 13.9 12.9 13.6 136.6 172.3 24.6 30.9 19.7 22.0 1995/96 15.3 14.9 12.7 12.4 139.1 176.0 24.9 26.9 21.5 22.5 1996/97 12.8 13.9 11.9* 14.6 138.1 211.4 26.2* 30.5 19.5 24.0 1997/98 6.8 7.3 6.8 6.8 205.9 313.4 31.8** 28.2 23.1 25.3 1998/99 3.0 3.0 1.3 1.3 117.9 150.1 18.9 27.4 17.3 21.7 *p < .05; **p < .01 (t-test results comparing males and females) Health service utilization trends observed across the years show that the number of annual general practitioner and specialist visits is fairly steady until 1997/98 when there is a dramatic drop. In contrast, the number of nights spent in hospital steadily increases over the same period until 1998/99 when a sharp reduction in hospital bed utilization is observed. Patterns of home support and home nursing care utilization over time are more variable. Home support hours per month exhibit a pattern similar to number of hospital nights, although the increases and decreases are not as large. Number of home nursing care visits has fluctuated over the years and shows a general pattern of decline from 1992/93 to 1998/99. 2. Palliative care vs. non-palliative care cancer patients and health service utilization characteristics Results comparing cancer patients designated as being in need of PC and cancer patients who were not designated as in need of PC are presented in Table 3 . Findings suggest that there are no differences between palliative and non-palliative cancer patients in terms of gender, average household income, and number of general practitioner and medical specialist visits. 
Differences are observed between the two groups for age, number of nights spent in hospital, number of hours of home support, and number of home nursing care visits. Specifically, PC cancer patients spend fewer nights in the hospital (p < .001), use fewer hours of home support (p < .05), and have a greater number of home nursing care visits (p < .001) than do non-PC cancer patients. Table 3 Cancer Patient Characteristics and Service Utilization by Designation of Palliative: 1998/99 Palliative Non-palliative Age group (%)*** 50–59 7.7 8.5 60–69 26.6 17.0 70–79 35.3 37.8 80+ 30.4 36.7 Gender (%) Male 47.6 47.4 Female 52.4 52.6 Income (mean) 51,718 49,804 GP visits (mean) 3.23 3.21 Specialist visits (mean) 1.21 1.34 Nights in hospital (mean)*** 20.97 24.19 Home support hours (mean)* 105.48 172.60 Home nursing care visits (mean)*** 21.66 6.84 * p < .05; ** p < .01; *** p < .001 (χ2 and t-test results comparing palliative and non-palliative) 3. Predictors of health service utilization Regression results for the five health service utilization regression models are presented in Table 4. Results suggest that age, gender, income, and designation of PC are not strongly predictive of health service utilization. There is a significant relationship between age and number of nights spent in hospital (p < .001), suggesting that the likelihood of spending a greater amount of time in the hospital increases with age. Being female (p < .01) and being older (p < .01) increases the amount of home support hours received, while being younger (p < .05) and being designated as being in need of PC (p < .001) increases the number of home nursing care visits received.
Table 4 Regression estimates and standard errors for five regressions GP visits Specialist Visits Nights in hospital Home support hours Home nursing care visits B SE B SE B SE B SE B SE Age .00 .00 -.00 .00 .14*** .01 .14** .02 -.11* .01 Gender -.02 .01 -.01 .01 .05 .04 .12** .06 .03 .05 Income .01 .00 .00 .00 .00 .00 .04 .00 .04 .00 Palliative .02 .03 -.02 .02 -.05 .07 -.05 .10 .29*** .05 Adj. R 2 -.001 -.001 .02 .03 .10 * p < .05; **p < .01; ***p < .001 Discussion The primary purpose of this study was to explore the usefulness of provincial administrative health databases as a source of information to inform health care decision making and policy development in palliative and end of life care. Administrative databases represent an existing source of longitudinal information on health system users. The need to track individuals' health service utilization over time has long been recognized by researchers, and this is a key advantage of such databases. At the same time, however, the breadth of information collected is limited. For example, no direct measurements of quality of life, or for that matter, quality of death are compiled. The researcher who makes use of the databases must therefore work within these confines. The findings from the present study provide insights into the precise manner in which these databases can be of value to health services researchers, within these limits. More specifically, the analyses conducted for this paper demonstrate that a sample of palliative patients can be identified in one of the many provincial administrative health databases. Since individuals can be linked across sectors, it is possible to examine a number of health status and utilization indicators of these individuals. Finally, linking the health databases with the vital statistics variables gives the researcher insight into the underlying cause of death. The limitations of administrative databases arise directly from their intended purpose.
They were created to serve a billing role, and although they are of value as a research tool, their limitations should be noted. First, socioeconomic variables are limited to age, gender and income. The importance of these variables is unquestioned; however, a wider range of such variables is normally a requirement for social science-based health services research. Second, the precise date within a year that patients are designated palliative is not available. Since health service resource utilization has been shown to be much greater during the final few months and weeks of life, information on the date designated palliative would be informative. Third, the data that would allow for an examination of either quality of care provided or quality of dying – both of which are central to any study seeking to improve service delivery and associated outcomes – are not collected. Finally, given the complexities of data collection within the health care system, these databases are unlikely to capture all individuals in need of PC. For example, given the fluctuation in the number of palliative patients across years, with special emphasis on the precipitous decline observed between 1997/98 and 1998/99, it appears that the definition and coding of the variable designating an individual palliative may be subject to change over time. If these data are to be used for research purposes, this indicates the need for careful monitoring of the continuity of both definition and coding procedures. While the primary objective of this paper was not to focus on the actual trends and findings, the results hold potential for future studies. For example, the observed decrease in both GP and specialist visits between 1992/93 and 1998/99 for males and females designated as being in need of PC may be indicative of changing trends. First, persons designated palliative are increasingly signaling their desire to die at home.
Second, perhaps GPs are increasingly visiting dying patients at home, with the result that these visits are not captured in the health databases since physicians cannot bill for such visits. As GPs become more aware of the need for effective PC, and as they acquire the requisite skills to do so, the territory once thought to be the exclusive realm of the specialist may be experiencing erosion. In the larger picture, this may be indicative of a shift away from a curative medical model approach to end of life care, to a more appropriate social model. At the same time that GP and specialist visits among persons designated as being in need of PC have declined, the mean number of nights spent in hospital annually has shown a general increase. At first glance, the increase in hospital nights until the 1997/98 period is a puzzling trend, especially given findings on the general population that show a decrease in nights spent in hospital throughout the 1990s [ 30 ]. A potential explanation is that although the rhetoric surrounding the issue trumpets the need to orchestrate a "closer to home" health care delivery system for palliative persons, the reality is that the resources have not yet been committed. Consistent with this explanation, the sharp decline in hospital visits by palliative persons observed for the 1998/99 period may reflect an increased flow of resources towards effective community care. However, prior to drawing any conclusions about a shift in direction of resources and subsequent utilization patterns, it is necessary to examine the figures beyond 1998/99. In other words, does the decline in hospital nights continue or was this year simply an anomaly? Although it is not monotonic, the general decline in monthly home nursing visits between 1992/93 and 1998/99 is also somewhat puzzling on the surface, given the calls to enhance home nursing services for the dying.
A comparison of home nursing visits between palliative and non-palliative cancer patients, however, reveals that those designated palliative receive considerably more home nursing visits. This latter observation is consistent with the increased care requirements at the end of life. In trying to decipher the differences between these two groups it would be important to examine the time of the visits in relation to death. For instance, do more visits occur in the last months or weeks leading up to death? In the multivariate analyses, it is of particular interest that being designated palliative is related only to number of home nursing care visits. In other words, being designated palliative has no bearing on general practitioner visits, medical specialist visits, number of hospital nights and number of hours of home support. This, of course, is contrary to bivariate findings presented in Table 3 that reveal significant differences between palliative and non-palliative patients in number of nights spent in hospital and number of hours of home support received. Thus, it appears that when age, gender and income are controlled for, the initial differences found between palliative and non-palliative individuals disappear. Conclusions These data provide another piece of the overall picture of service utilization at the end of life. Research into palliative and end of life care is in its infancy at a time when effective and immediate information based on sound scientific research is urgently required. To the extent that administrative databases can bridge this gap, they are of immediate use. Their primary usefulness, however, is probably as a means of identifying and sketching a larger picture, and from that picture, facilitating the generation of key questions that may serve as platforms for launching more in-depth studies. 
For example, the decline over time in GP and specialist visits, combined with the general decline in home nursing visits for palliative persons raises the question of whether resources have met rhetoric, and if not, why not? Given that health care and policy decisions must include information on factors beyond the scope of administrative databases (such as quality of life, personal preferences, social support), these databases should only be seen as one piece of information to inform service delivery and policy decision making for patients and families at the end of life. Competing interests The author(s) declare that they have no competing interests. Authors' contributions All authors participated in the conceptualization and design of the study. DEA drafted the initial background, methods and results sections. KIS and DEA conducted all the analyses. KIS and RCR drafted the discussion section and provided additional edits to all sections. All authors read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here:
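The five regression models described in the methods of this record can be sketched as ordinary least squares fits of each utilization outcome on age, gender, income, and palliative designation. The sketch below illustrates one such model (home nursing care visits) on entirely synthetic data; the sample size and the ~119-of-2,734 palliative proportion are taken from the text, but all variable values and effect sizes are invented for illustration and are not drawn from the administrative databases.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2734  # size of the 1998/99 cancer sample described above

# Synthetic predictors (illustrative stand-ins, not real administrative data)
age = rng.integers(50, 95, n).astype(float)
female = rng.integers(0, 2, n).astype(float)
income = rng.normal(50_000, 10_000, n)
palliative = (rng.random(n) < 119 / 2734).astype(float)  # ~119 of 2,734 designated palliative

# Synthetic outcome: home nursing care visits, driven mainly by the
# palliative designation (mirroring the direction of the reported findings)
visits = 5 + 0.02 * age + 15 * palliative + rng.normal(0, 2, n)

# Ordinary least squares: regress the outcome on an intercept plus the
# four predictors used in each of the five reported models
X = np.column_stack([np.ones(n), age, female, income, palliative])
beta, *_ = np.linalg.lstsq(X, visits, rcond=None)
for name, b in zip(["intercept", "age", "female", "income", "palliative"], beta):
    print(f"{name:10s} {b: .4f}")
```

With this setup, the estimated coefficient on the palliative indicator recovers the (invented) effect, echoing the paper's finding that the palliative designation predicts home nursing care visits.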
554767 | Is there value in using physician billing claims along with other administrative health care data to document the burden of adolescent injury? An exploratory investigation with comparison to self-reports in Ontario, Canada | Background Administrative health care databases may be particularly useful for injury surveillance, given that they are population-based, readily available, and relatively complete. Surveillance based on administrative data, though, is often restricted to injuries that result in hospitalization. Adding physician billing data to administrative data-based surveillance efforts may improve comprehensiveness, but the feasibility of such an approach has rarely been examined. It is also not clear how injury surveillance information obtained using administrative health care databases compares with that obtained using self-report surveys. This study explored the value of using physician billing data along with hospitalization data for the surveillance of adolescent injuries in Ontario, Canada. We aimed i) to document the burden of adolescent injury using administrative health care data, focusing on the relative contribution of physician billing information; and ii) to explore data quality issues by directly comparing adolescent injuries identified in administrative and self-report data. Methods The sample included adolescents aged 12 to 19 years who participated in the 1996–1997 cross-sectional Ontario Health Survey, and whose survey responses were linked to administrative health care datasets (N = 2067). Descriptive analysis was used to document the burden of injuries as a proportion of all physician care by gender and location of care, and to examine the distribution of both administratively-defined and self-reported activity-limiting injuries according to demographic characteristics. Administratively-defined and self-reported injuries were also directly compared at the individual level. 
Results Approximately 10% of physician care for the sample was identified as injury-related. While 18.8% of adolescents had a self-reported injury in the previous year, 25.0% had a documented administratively-defined injury. The distribution of injuries according to demographic characteristics was similar across data sources, but congruence was low at the individual level. Possible reasons for discrepancies between the data sources included recall errors in the survey data and errors in the physician billing data algorithm. Conclusion If further validated, physician billing data could be used along with hospital inpatient data to make an important and unique contribution to adolescent injury surveillance. The limitations inherent in different datasets highlight the need to continue to rely on multiple information sources for complete injury surveillance information. | Background The contribution of surveillance systems in providing valuable information for injury prevention and control is widely recognized; for example, surveillance data can be used to highlight the burden of injury, set priorities for prevention, and evaluate preventive strategies [ 1 , 2 ]. Estimates of the population burden of injuries differ, though, depending on how information is obtained. Detailed trauma registries and special surveillance systems [e.g., [ 3 ]] contain rich contextual information on particular subsets of injuries, but since such databases are generally not population-based, they cannot be used to estimate the incidence of injury. Although population-based surveys can yield estimates of the total burden of non-fatal injuries across a broad spectrum of injury severity, they often include insufficient sample sizes for studying small population subgroups [ 4 ], and are subject to recall errors [ 5 ].
Administrative health care databases, due to their presumed near complete coverage of injuries requiring medical care and their lack of reliance on self-reports, may be particularly useful for injury surveillance. Such databases allow for local or regional estimates of the burden of injury, which has been identified as an important goal [ 2 , 6 , 7 ], and since they are pre-existing, they are cost-efficient. Administrative data also provide an opportunity to examine health care use for injury. Administrative databases only capture injuries that receive medical care, however, and since surveillance using administrative data is often based on hospitalization data alone, only relatively severe injuries are included. Decisions regarding whether to seek medical care and where to seek care for an injury may be influenced by outside factors (such as access to care, care-seeking, and practice patterns), which may lead to selection biases [ 8 - 10 ]. In Ontario, Canada, administrative health care databases that may provide information on the incidence of non-fatal injuries include hospital discharge and physician billing data. Although hospital discharge data have been widely used in Canada to study injury, the feasibility of using physician billing information for injury surveillance has rarely been investigated [ 11 ]. These data, if valid, may help to expand the coverage of administrative databases to include more minor injuries, capturing care delivered in physicians' offices and emergency departments. Although minor injuries have less impact on individuals and are less costly to the health care system on a per-injury basis, minor injuries have a large impact in terms of total population morbidity due to their frequent occurrence [ 12 , 13 ]. Expanding coverage by including physician billing data would thus serve to provide a more comprehensive picture of the total health care burden of injury, and may also reduce selection biases in the surveillance data. 
It is not clear how the injury information provided by administrative databases compares with that obtained from population-based surveys. A study of adolescent injuries we conducted using data from the Ontario Health Survey (OHS) [ 14 ] presented a unique opportunity to explore such a comparison; a subset of the 1996–1997 OHS data was linked by respondent to Ontario administrative health care databases, including both hospital discharge and physician billing data. The overall purpose of this study was to explore the feasibility and value of using physician billing data for Ontario, Canada, along with hospitalization data, for the surveillance of adolescent injuries. The first objective was to document the burden of adolescent injury based on administrative health care data, focusing on the relative contribution of physician billing information and comparing overall estimates with surveillance information from survey data. The second objective was to examine data quality issues, by directly comparing adolescent injuries identified using administrative health care databases ("administratively-defined injuries") with those identified using self-report survey data ("self-reported injuries"). Methods Sample and data sources The study sample included adolescents aged 12 to 19 years who participated in the health component of the 1996–1997 OHS (N = 3331), which was part of the National Population Health Survey (NPHS) [ 14 ]. Survey responses were linked to administrative health care datasets through unique health card number, respondent name, address, sex, and birthdate. Although over 95 percent of OHS respondents agreed to allow their survey responses to be linked to administrative databases, sufficient information for linkage was available for only 66 percent, including 2067 (62%) of the adolescent participants, due to missing demographic information for respondents. 
This subgroup provided an opportunity to examine injury occurrence using multiple data sources within the same sample. A unique set of sampling weights was created for the linked subsample of the OHS, to improve representativeness. The linked sample of adolescents was similar to the full OHS sample in terms of gender, rural/urban status, and age. Self-reported information was collected through telephone interviews. Proxy respondents provided survey information for 35 of the 2067 participants. Inpatient hospitalizations for injuries were identified using the Discharge Abstract Database (DAD) of the Canadian Institute for Health Information (CIHI) [ 15 ]. All Ontario hospitals are included in the computerized DAD, which contains clinical, demographic, and administrative data for each hospital discharge. Physician care for injuries, and specifically injuries cared for in emergency departments and in physicians' offices or other outpatient facilities, was identified using physician billing data. Approximately 94% of physicians in Ontario are paid on a fee-for-service basis, through billings to the Ontario Health Insurance Plan (OHIP) [ 16 ]. The computerized OHIP claims database captures basic information on these services (Ontario Ministry of Health and Long-Term Care). Injury measures 1. Self-reported injuries (survey data) OHS respondents were asked a series of questions related to acute injuries in the past 12 months that were, from the perspective of the respondent, serious enough to limit normal activities (examples given by the interviewer included "...a broken bone, a bad cut or burn, a sprain, or a poisoning") [ 14 ]. Participants reporting that they had experienced one or more such injuries were considered to have a self-reported injury. 2. 
Administratively-defined injuries i) Hospital visits for injury (identified using hospital discharge data) Inpatient hospitalizations for injury were identified for the 365-day period prior to the OHS interview, for each adolescent. An adolescent was considered to have an injury-related hospitalization if, during the one-year period, he or she had at least one documented hospital discharge with an External Cause of Injury Code (E Code) in the range 800–999 (excluding codes 870–879 and 930–949, related to medical/surgical misadventures and adverse effects of the therapeutic use of medications), based on the International Classification of Disease (ICD), 9 th revision [ 17 ]. ii) Physician care for injuries (identified using physician billing data) The OHIP physician claims database does not contain codes representing causes of injury. Rather, injuries were identified based on codes that reflect billable services ("procedure codes"), and the diagnoses associated with such services ("diagnostic codes"). To improve sensitivity, a combination of both diagnostic and procedure codes from the database was used to create an injury algorithm that would identify physician care for injury during the one-year study period, based on the methods of Tamblyn and colleagues [ 11 ]. The development of the algorithm was based on a pilot study involving 200 adolescents (further details regarding the algorithm and pilot study findings are available from the authors upon request). Two lists of codes were initially created from the full listing of diagnostic and procedure codes used in the database [ 18 , 19 ]. The first list ("definite injuries") included diagnostic and procedure codes that were viewed as being definitely related to acute injury for adolescents. Since some diagnostic and procedure codes used in the claims data were non-specific, a second list ("possible injuries") was also developed. 
The initial code lists were reviewed by three physicians with experience in family medicine and/or emergency care. The lists were then expanded and further reviewed by three researchers, including a primary care physician and a researcher with physiotherapy experience. All physician claims with diagnostic or procedure codes on the definite injury list were considered to represent injuries. Based on the pilot test results, claims representing possible injuries were considered injury-related only if they represented care for an adolescent who also had a definite injury claim within a two-day period, and if the possible claim could be considered to represent care for the same injury as the definite claim. The physician billing database also provided information on the location of physician care (e.g., physician's office or emergency department) for each claim. Summary of administratively-defined injury measures Adolescents who had at least one documented injury in either the hospitalization or the physician billing database were considered to have an administratively-defined injury. Adolescents with documented injury-related physician care at a physician's office or outpatient facility were considered to have a physicians' office visit for injury. Adolescents with documented injury-related physician care at an emergency department were considered to have an emergency visit for injury. Two alternative injury outcomes were created to examine the impact of decisions made in developing the physician billing algorithm. These outcomes were based on diagnoses that were relatively common and were captured only as possible injuries in the algorithm, including non-specific conditions of the musculoskeletal system and adverse effects of drugs and medications. Data analysis Burden of injury First, the injury algorithm was used with the full sample of 2067 adolescents, resulting in estimates of adolescent injury-related physician care by gender and location of care. 
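The two screening rules described above (the ICD-9 E-code range applied to hospital discharges, and the definite/possible linking rule applied to billing claims) can be sketched as follows. The code lists and claim records here are hypothetical stand-ins, since the actual OHIP diagnostic and procedure code lists are not reproduced in this paper.

```python
from datetime import date

# Hypothetical stand-ins for the "definite" and "possible" injury code
# lists described above (the actual OHIP codes are not reproduced here)
DEFINITE = {"FRACTURE", "LACERATION"}
POSSIBLE = {"MUSCULOSKELETAL_NOS", "DRUG_ADVERSE"}

def ecode_is_injury(ecode):
    """ICD-9 E-code screen for hospital discharges: 800-999, excluding
    870-879 (medical/surgical misadventures) and 930-949 (adverse effects
    of therapeutic drug use)."""
    return 800 <= ecode <= 999 and not (870 <= ecode <= 879 or 930 <= ecode <= 949)

def injury_claims(claims):
    """Flag billing claims as injury-related.

    A claim counts if its code is on the definite list, or if it is on the
    possible list and the same patient has a definite injury claim no more
    than two days away (one reading of the two-day linking rule above).
    `claims` is a list of (patient_id, service_date, code) tuples.
    """
    definite_dates = {}
    for pid, d, code in claims:
        if code in DEFINITE:
            definite_dates.setdefault(pid, []).append(d)
    flagged = []
    for pid, d, code in claims:
        if code in DEFINITE:
            flagged.append((pid, d, code))
        elif code in POSSIBLE and any(
            abs((d - dd).days) <= 2 for dd in definite_dates.get(pid, [])
        ):
            flagged.append((pid, d, code))
    return flagged

# Example: a non-specific follow-up claim is linked to a fracture claim
# one day earlier for patient "a", but the isolated non-specific claim
# for patient "b" is not counted.
claims = [
    ("a", date(1996, 5, 1), "FRACTURE"),
    ("a", date(1996, 5, 2), "MUSCULOSKELETAL_NOS"),
    ("b", date(1996, 5, 2), "MUSCULOSKELETAL_NOS"),
]
print(injury_claims(claims))  # both of patient "a"'s claims, nothing for "b"
```

Note that "could be considered to represent care for the same injury" involves clinical judgment that this sketch reduces to the two-day proximity check alone.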
Descriptive analysis was then carried out for adolescents with non-missing data on important OHS variables (N = 2047). In addition to examining differences in the overall observed burden of adolescent injury between administrative and survey data, we also explored whether there were differences within demographic subsets of the sample where the types of injuries experienced or the types of injury care received were likely to differ. Thus, we examined the proportion of adolescents with administratively-defined and self-reported injuries separately by gender, age group, and rural versus urban residence. Since numerous comparisons were possible within the results related to the burden of injury, we chose to report 95% confidence intervals around each proportion, rather than presenting statistical tests. Because the OHS used a complex sampling design to yield a provincially representative sample, weighted proportions were calculated. Variance estimates were adjusted using bootstrap replicate weights to account for clustering within the sample [ 14 ]. Data quality exploration Two analyses were conducted to explore data quality. First, as a sensitivity analysis for the physician billing database, we examined the impact of re-classifying two common "possible injury" diagnoses as actual injuries in the physician billing database. These diagnoses included i) non-specific musculoskeletal system diagnoses, and ii) adverse reactions to drugs and/or medications. These diagnoses were viewed as potentially problematic because they were not specific enough to injuries to warrant inclusion as "definite" injuries, but we believed that they might be commonly used by physicians providing injury-related care. Secondly, we directly compared self-reported and administratively-defined injuries at an individual level, to provide further insight into data quality. 
Not all self-reported injuries identified using the survey data would be expected to have led to medical treatment, and conversely, it is possible that some medically treated injuries may not have led to a restriction in normal activity. Some overlap between the survey data and administrative data was expected, though, in terms of injuries identified. Thus, we examined discrepancies between the injury variables at the level of the individual adolescent (i.e., the extent to which adolescents with administratively-defined injuries were likely to also have self-reported an injury during the same time period). Odds ratios, based on two-way data tables, were used as a measure of association for these direct comparisons of injury variables across data sources. We also explored possible reasons for the discrepancies observed, including potential recall error in the survey data, and potential error in both datasets resulting from overlap between acute injuries and repetitive strain injuries. These exploratory analyses involved, where appropriate, descriptive statistics (e.g., mean or median values) or odds ratios (as a measure of association for two dichotomous variables). All of the data quality analyses were used to examine within-sample methodologic issues. Therefore, as we did not wish to generalize the results from these analyses to a target population, unweighted analyses were conducted, and confidence intervals were not included. Results The burden of adolescent injury During the one-year period prior to the OHS interview, there were a total of 13501 physician visits for any cause among the 2067 adolescents in the initial sample (where a visit represents all of the care provided by a physician to a patient on the same date, at any location), based on the physician billing data. Of these, 1390 visits were identified as being related to injury, representing 10.3% of all physician care. The proportion of visits due to injury varied across locations of care. 
For example, 8.6% of physicians' office visits (n = 604) and 45.6% of emergency department visits (n = 402) were identified as injury-related. A greater proportion of physician care was injury-related among males (14.2%), relative to females (7.3%). Within the final sample (N = 2047), while 18.8% of adolescents self-reported an activity-limiting injury in the past year, 25.0% had at least one administratively-defined injury, based on the hospitalization and physician billing databases (Table 1 , weighted proportions). While 17.1% had at least one physicians' office visit for injury, 13.4% had one or more emergency department visits or inpatient stays for injury (these two outcomes were combined because only 18 adolescents had documented inpatient care for injury). Among adolescents with physician care for injury, the majority had one or two identified visits in the one-year period. Table 1 Sample characteristics and prevalence of injuries by data source (weighted) Total (Unwtd N = 2047) Males (Unwtd N = 1081) Females (Unwtd N = 966) % 95% CI % 95% CI % 95% CI Rural 17.2 (15.7, 18.7) 17.2 (15.1, 19.2) 17.2 (14.9, 19.6) Age group 12–14 years 34.0 (31.5, 36.6) 35.0 (31.6, 38.4) 33.0 (29.2, 36.8) 15–17 years 39.4 (36.7, 42.0) 38.9 (35.3, 42.5) 39.9 (36.0, 43.8) 18–19 years 26.6 (24.1, 29.1) 26.1 (22.8, 29.3) 27.2 (23.3, 31.1) Injury measures Self-reported 1 18.8 (16.9, 20.7) 22.1 (19.1, 25.0) 15.3 (12.7, 17.8) Administratively-def 2 25.0 (22.9, 27.1) 28.0 (24.9, 31.0) 21.7 (18.6, 24.7) Physician's office 3 17.1 (15.2, 18.9) 19.5 (16.8, 22.1) 14.4 (11.9, 17.0) ED/inpatient 4 13.4 (11.8, 15.0) 15.5 (13.1, 18.0) 11.0 (8.9, 13.1) 1–2 injury visits 5 16.5 (14.8, 18.3) 18.2 (15.7, 20.8) 14.6 (12.2, 17.0) > = 3 injury visits 5 8.2 (6.7, 9.6) 9.7 (7.6, 11.9) 6.4 (4.5, 8.3) CI = Confidence Interval; def=defined; ED = emergency department; unwtd = unweighted 1 Self-reported injury, identified using survey data 2 Administratively-defined injury, identified in the hospitalization 
or physician billing databases 3 At least 1 documented physician's office visit for injury within 1 year prior to the interview 4 Any documented emergency department or inpatient visits for injury within 1 year prior to interview 5 Number of physician visits (any location) for injury (based on physician billing data only) Table 2 shows the proportion of adolescents with each injury outcome, separately for each age group and by rural/urban status. A higher proportion of males were injured relative to females, across injury outcomes and subgroups. An exception was emergency department or inpatient attended injuries among rural adolescents, although in this case, the estimated proportion for females had high sampling variability. A small decrease in the proportion injured was observed with increasing age. This decrease was less apparent for emergency department and inpatient injury visits; again, sampling variability was high. For self-reported injuries, a higher proportion of rural adolescents was injured relative to urban adolescents, particularly for females. Rural/urban differences were not apparent for administratively-defined injuries overall. By location of care, the proportion with a physician's office visit for injury was higher for urban adolescents, while the proportion with emergency department or inpatient injury care was higher for rural adolescents. 
Table 2 Distribution of injury outcomes (weighted). Values are row percentages 1, % (95% CI).

Self-reported injury (survey data):

| | Total (unwtd N = 2047) | Males (unwtd N = 1081) | Females (unwtd N = 966) |
|---|---|---|---|
| Age 12–14 years | 21.2 (17.6, 24.8) | 23.4 (18.0, 28.8) | 18.6 (13.7, 23.5) |
| Age 15–17 years | 18.9 (15.8, 22.1) | 23.7 (19.0, 28.4) | 13.9 (9.7, 18.0) |
| Age 18–19 years | 15.6 (12.0, 19.2) | 17.8 (12.5, 23.1) | *13.3 (8.2, 18.3) |
| Rural | 21.9 (17.5, 26.3) | 23.1 (17.0, 29.2) | 20.5 (14.0, 26.9) |
| Urban | 18.2 (16.0, 20.4) | 21.8 (18.4, 25.2) | 14.2 (11.3, 17.1) |

Administratively-defined injury (hospitalization and physician billing data), any injury 2:

| | Total (unwtd N = 2047) | Males (unwtd N = 1081) | Females (unwtd N = 966) |
|---|---|---|---|
| Age 12–14 years | 27.2 (23.1, 31.2) | 29.5 (24.2, 34.9) | 24.4 (18.5, 30.4) |
| Age 15–17 years | 25.1 (21.6, 28.6) | 27.4 (22.4, 32.5) | 22.6 (17.9, 27.3) |
| Age 18–19 years | 22.0 (18.1, 25.8) | 26.7 (20.6, 32.7) | 17.0 (12.0, 21.9) |
| Rural | 25.2 (20.5, 29.9) | 26.6 (20.2, 33.0) | 23.7 (17.1, 30.3) |
| Urban | 24.9 (22.5, 27.3) | 28.3 (24.7, 31.8) | 21.2 (17.9, 24.6) |

Physician's office injury 3:

| | Total (unwtd N = 2047) | Males (unwtd N = 1081) | Females (unwtd N = 966) |
|---|---|---|---|
| Age 12–14 years | 19.8 (16.2, 23.3) | 21.6 (16.7, 26.5) | 17.6 (12.5, 22.8) |
| Age 15–17 years | 16.9 (13.9, 19.9) | 19.8 (15.2, 24.3) | 13.8 (10.0, 17.6) |
| Age 18–19 years | 13.9 (10.5, 17.3) | 16.2 (11.1, 21.2) | *11.5 (6.8, 16.3) |
| Rural | 14.0 (10.5, 17.5) | *16.2 (10.8, 21.6) | *11.6 (6.9, 16.3) |
| Urban | 17.7 (15.6, 19.8) | 20.1 (17.1, 23.2) | 15.0 (12.0, 18.1) |

ED/inpatient injury 4:

| | Total (unwtd N = 2047) | Males (unwtd N = 1081) | Females (unwtd N = 966) |
|---|---|---|---|
| Age 12–14 years | 13.8 (10.8, 16.9) | 15.9 (11.5, 20.4) | *11.4 (7.4, 15.4) |
| Age 15–17 years | 13.6 (11.1, 16.2) | 14.5 (10.9, 18.2) | 12.7 (8.9, 16.4) |
| Age 18–19 years | 12.4 (9.4, 15.4) | 16.4 (11.7, 21.1) | *8.2 (4.7, 11.7) |
| Rural | 18.0 (13.9, 22.1) | 17.6 (12.5, 22.8) | *18.4 (12.1, 24.6) |
| Urban | 12.4 (10.6, 14.2) | 15.1 (12.3, 17.9) | 9.5 (7.3, 11.7) |

CI = confidence interval; ED = emergency department; unwtd = unweighted. 1 Row percentages (for example, of 12–14 year-olds, the percent who experienced the injury outcome). 2 Any documented injury in the administrative databases (hospitalization and physician billing data). 3 At least 1 documented physician's office visit for injury within 1 year prior to the interview. 4 Any documented emergency department or inpatient visits for injury within 1 year prior to the interview. * Proportion should be interpreted with caution due to high sampling variability.

Sensitivity analysis and comparison of injuries across data sources As a sensitivity analysis for the physician billing data algorithm, the impact of re-classifying two common "possible injury" diagnoses as actual injuries was examined. When the non-specific musculoskeletal system diagnoses were added to the injury dataset, the proportion of adolescents with administratively-defined injury increased from 25.0% to 29.3%. When adverse reactions to drugs and/or medications were added, the proportion increased only slightly, to 25.9% (weighted proportions). The results of the within-sample analysis used to compare injuries identified using different data sources are shown in Table 3 (unweighted). Section i) of the table shows the direct comparison of administratively-defined and self-reported injuries at the individual level, for the total sample, and then separately by gender, and by location of care for administratively-defined injuries. For example, of the 2047 adolescents in the final sample, 550 had a documented administratively-defined injury, while 1497 had no such injury. Of the 550 adolescents with administratively-defined injury, 213 (38.7%) self-reported an injury, compared with 193 (12.9%) among the 1497 adolescents with no administratively-defined injury. The odds ratio for the relationship between administratively-defined and self-reported injury was 4.3.
There was a higher congruence between the two data sources in terms of identified injuries for females (odds ratio 5.7), relative to males (odds ratio 3.4). To examine the congruence with self-reported injury separately by location of care for the administratively-defined injuries, the third and fourth columns of the table ("administratively-defined injury") were restricted to those identified as having specifically received care at either a physician's office or an emergency department/inpatient facility. For example, 358 adolescents had a documented physician's office visit for injury, and 134 (37.4%) of these adolescents also self-reported an injury. When compared with the 12.9% of adolescents who self-reported an injury but had no administratively-defined injuries, the resulting odds ratio was 4.0. There was a higher congruence with self-reported injuries for emergency department or inpatient care for injury (odds ratio 5.8).

Table 3 Exploring self-reported versus administratively-defined injury (unweighted)

i) Direct comparison of injuries identified using different data sources: self-reported injury for those with and without administratively-defined injury.

| | Total N | Administratively-defined injury: N | Self-report N (%) | No administratively-defined injury: N | Self-report N (%) | Odds ratio |
|---|---|---|---|---|---|---|
| Total | 2047 | 550 | 213 (38.7) | 1497 | 193 (12.9) | 4.3 |
| Males | 1081 | 318 | 126 (39.6) | 763 | 123 (16.1) | 3.4 |
| Females | 966 | 232 | 87 (37.5) | 734 | 70 (9.5) | 5.7 |
| Phys. office 1 | 1855 | 358 | 134 (37.4) | 1497 | 193 (12.9) | 4.0 |
| ED/inpatient 2 | 1812 | 315 | 145 (46.0) | 1497 | 193 (12.9) | 5.8 |

ii) Time from most recent administratively-defined injury to OHS interview (N = 550) 3

| | N | Mean (days) | Median (days) |
|---|---|---|---|
| Adolescents with self-reported injury | 213 | 141 | 125 |
| Adolescents without self-reported injury | 337 | 173 | 171 |
| Total | 550 | 160 | 146 |

iii) Self-reported repetitive strain injuries, by self-reported & administratively-defined acute injuries (N = 2045)

| | N | Repetitive strain injury N (%) | Odds ratio |
|---|---|---|---|
| Self-reported acute injury | 406 | 57 (14.0) | 1.9 |
| No self-reported acute injury | 1639 | 129 (7.9) | |
| Administratively-defined acute injury | 550 | 88 (16.0) | 2.7 |
| No administratively-defined acute injury | 1495 | 98 (6.6) | |
| Administratively-def. & self-rep. acute injury | 213 | 35 (16.4) | 1.1 |
| Administratively-def. & no self-rep. acute injury | 337 | 53 (15.7) | |

def = defined; ED = emergency department; OHS = Ontario Health Survey; phys. = physician's; self-rep. = self-reported. 1 At least 1 documented physician's office visit for injury within 1 year prior to interview, based on the administrative data. Adolescents with emergency department or inpatient visits but no physicians' office visits for injury are excluded from the denominator. 2 Any documented emergency department or inpatient visits for injury within 1 year prior to interview, based on the administrative data. Adolescents with physicians' office visits but no emergency department or inpatient visits for injury are excluded from the denominator. 3 Analysis includes only those adolescents (N = 550) with administratively-defined injury.

In order to investigate the possibility that recall error may have led to underreporting in the survey, we investigated the relationship to recall time (Table 3, section ii).
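The odds ratios in Table 3, section i follow directly from the 2×2 counts shown there. A sketch of the calculation, using the counts taken from the table itself:

```python
# Odds ratio: odds of self-reported injury among those with an
# administratively-defined injury vs. those without one.
def odds_ratio(cases_yes, cases_total, controls_yes, controls_total):
    cases_no = cases_total - cases_yes
    controls_no = controls_total - controls_yes
    return (cases_yes * controls_no) / (cases_no * controls_yes)

# Counts from Table 3, section i: 213 of 550 adolescents with an
# administratively-defined injury self-reported an injury, vs. 193 of
# 1497 adolescents without one.
or_total = odds_ratio(213, 550, 193, 1497)    # ~4.3
or_males = odds_ratio(126, 318, 123, 763)     # ~3.4
or_females = odds_ratio(87, 232, 70, 734)     # ~5.7
```

Rounding each value to one decimal place reproduces the odds ratios printed in the table, including the location-of-care rows (4.0 and 5.8).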
For adolescents with at least one administratively-defined injury, those who self-reported an injury had a shorter recall time from the most recent documented administratively-defined injury to the OHS interview (median 125 days) compared with adolescents who did not self-report an injury (median 171 days). Finally, to explore whether some acute injuries may have been misreported as repetitive strain injuries in the OHS (based on a series of questions on repetitive strain injuries that preceded those on acute injuries), and to explore whether the algorithm used with the physician billing database may have led to misclassification of some repetitive strain injuries as acute injuries, we examined the relationship to self-reported repetitive strain injuries. Both self-reported acute injury and administratively-defined acute injury appeared to be related to self-reported repetitive strain injury (odds ratios 1.9 and 2.7 respectively, Table 3 , section iii), although the relationship for administratively-defined injury was stronger. Among those with administratively-defined injury, there was no strong evidence of a relationship between self-reports of acute injury and repetitive strain injury; repetitive strain injury was reported by 16.4% of the 213 adolescents with self-reported acute injury, and a similar 15.7% of the 337 adolescents without self-reported acute injury (Table 3 , section iii, last two rows). Discussion Contribution of physician billing data to injury surveillance using administrative databases This exploratory study focused on the potential value of using physician billing data in combination with hospital discharge data to document the burden of injuries among adolescents. The results suggest that adding physician billing claims to hospitalization information is a feasible method of improving the comprehensiveness of healthcare administrative datasets. 
Approximately 10 percent of all physician care for adolescents in the study was identified as injury-related. Although a smaller proportion of physicians' office visits was identified as injury-related, relative to emergency department physician visits, office care actually represented a larger number of visits. Thus, these relatively more minor injuries represent a large component of adolescent injury morbidity that would be missed if estimates relied on hospital data alone or even on a combination of hospital and emergency care information. The observed differences in the rural/urban distribution of adolescent injuries by location of care (Table 2 ), reflecting potential difference in injury severity or access to care between rural and urban adolescents, also highlight the importance of capturing information across the full spectrum of care. Comparison of administrative databases and self-reports: value for injury surveillance A higher proportion of adolescents was identified as having administratively-defined injury relative to self-reported injury. One might expect the definition of self-reported injury used in the survey (injuries that limit normal activities) to capture a broader spectrum of injury severity compared with the administrative data (since some activity-limiting injuries may not receive medical care). The higher proportion of adolescents with administratively-defined injury, though, suggests that there may also be a subset of medically treated injuries that do not in fact limit normal activities; in other words, perhaps the definition of injuries used in the survey was actually more restrictive. Although neither data source can be viewed as a "gold standard", these results suggest that administrative health care data may actually provide a more sensitive means of ascertaining injuries, relative to self-reported survey data. 
Injuries identified as medically treated using administrative data may also be viewed as representing the health concerns of the person seeking care, and they have an impact on the health care system. These findings highlight the potential importance of administrative databases as a source of population-based injury information that can be used for affordable ongoing surveillance and for examining health care system issues such as patterns of service delivery. Despite these advantages, a limitation of many claims datasets, including the OHIP database, is a lack of detail on the circumstances surrounding the occurrence of injuries. The billing data contained no external cause information, such that description of injuries by mechanism and intent was not possible. Exploring data quality issues: injury outcomes in administrative databases and self-reports Although the distribution of injuries, particularly for gender and age, was fairly similar for self-reported and administratively-defined injuries (Table 2 ), congruence of injury outcomes was relatively low at the individual level (Table 3 , section i). This may in part reflect the different definitions of injury represented in the datasets (medically treated injuries in the administrative data, versus activity-limiting injuries in the survey data). Our exploration of data quality issues, however, revealed potential errors in both databases that may have contributed to the discrepancies. In the survey data, we found some evidence of recall errors (Table 3 , section ii). This finding is supported by previous research documenting recall errors in self-reports for a variety of health outcomes, including chronic conditions [e.g., [ 20 , 21 ]] as well as injuries [e.g., [ 5 , 22 , 23 ]]. 
For example, in a study of parental recall of non-fatal injuries in children and adolescents, estimates of annual injury rates were found to decline as the recall period for injuries increased from two weeks to 12 months [ 5 ]; this suggests that the 12 month recall period used in the 1996–1997 OHS may have led to underreporting. In the study of parental recall referred to above, more severe injuries (resulting in surgery or hospitalization; or resulting in restriction to bed or school absence) appeared to be less subject to recall errors, relative to minor injuries [ 5 ]. This may partly explain the stronger association we found between self-reported injuries and administratively-defined injuries when administratively-defined injuries were restricted to those identified as having received emergency department or inpatient care. Studies that have directly compared self-reported health care use with health care use identified in medical records across a variety of health services have also tended to find that both males and females underreport physician visits to a greater extent than hospital or emergency care, particularly as the recall period increases [e.g., [ 24 - 26 ]]. In addition to recall error in the survey data, inaccuracies in the administrative databases may have played a role in contributing to the discrepancies between the administrative and survey data. Errors in the DAD have been identified [ 27 ], although this likely had little impact, due to the small number of injury hospitalizations. The method used to identify injuries in the physician billing database was exploratory. The sensitivity analysis and results related to repetitive strain injury (Table 3 , section iii) highlight the need to further validate the physician billing data algorithm, ideally using comparisons with medical charts. 
With respect to repetitive strain injury, its stronger observed relationship with administratively-defined injury (as compared with self-reported acute injury) suggests that the physician billing data algorithm may have led to the inclusion of some repetitive strain injuries. The lack of evidence for a relationship between self-reported acute injury and repetitive strain injury among those with administratively-defined injury (Table 3 , section iii) suggests that confusion with repetitive strain injury in the survey did not lead to underreporting of acute injuries. Strengths and limitations Strengths of our study included the detailed exploration of the methods used to identify injuries using physician billing data, and the unique comparison of injuries across datasets within the same sample of adolescents. In addition to the need to further validate the algorithm used with the physician billing data, study limitations included the small sample size, particularly for investigating the distribution of injuries resulting in emergency and inpatient care, and the incomplete linkage of the survey and administrative datasets. Although the incomplete data linkage may reduce the generalizability of the study findings, the linked sub-sample was similar to the full Ontario survey sample across demographic characteristics (gender, rural/urban status, and age), and the unique sampling weights created for this sub-sample may have improved representativeness. Finally, because this study capitalized on an opportunity presented by a larger study on youth injuries, we focused specifically on adolescents. Further research could examine the generalizability of both the approach and the findings to other age groups, where the types of injuries experienced and care-seeking patterns may differ. Studies in jurisdictions with similar medical claims databases would also help in assessing the generalizability of our research. 
Conclusion Collectively, our findings allow us to draw two main conclusions. First, the results suggest that there is potential value in using physician billing data along with other administrative health care databases for the surveillance of injuries among adolescents. Although they are lacking in details about the circumstances surrounding injuries, comprehensive administrative injury datasets may be particularly useful for describing the overall occurrence of injury at local or regional levels, and for describing the economic implications of injury for the health care system. Secondly, we identified data quality concerns in both the survey and administrative databases that suggest a need for improvement and further study; for example, further research could help to identify appropriate recall periods and question wording for minimizing errors in survey data, and to determine the level of detail needed to accurately identify injuries in administrative databases. Because various sources of data are susceptible to different limitations, it remains important to consult multiple sources of information to fully document the burden of injury [ 2 , 4 , 7 ]. Competing interests The author(s) declare that they have no competing interests. Authors' contributions BKP participated in the design and coordination of the study and the development of the physician billing algorithm, carried out all statistical analyses, and drafted the manuscript. DM participated in the design, coordination, and supervision of the study and development of the physician billing algorithm, as well as revisions to the manuscript. KNS participated in the design, coordination, and supervision of the study, as well as revisions to the manuscript. IAG participated in the design of the study and development of the physician billing algorithm, as well as revisions to the manuscript. MKC participated in the design of the study and revisions to the manuscript. 
JJK participated in the design of the study and revisions to the manuscript, and provided statistical guidance. All authors read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here:
Nicotine Keeps Leaf-Loving Herbivores at Bay

Sooner or later, a gardener looking for "nontoxic" ways to control the inevitable attack on a favorite plant will discover the nicotine remedy. Steep a cup of loose tobacco in a gallon of water, let it sit overnight, strain, and spray away. Caterpillars, aphids, and a diverse array of insects predisposed to devouring plants will soon abandon your vegetables and flowers in search of less disagreeable forage. The ultimate sitting ducks, plants rely on an arsenal of chemical metabolites to fend off predators. Many of these chemicals harbor anti-herbivore properties, which have been exploited for commercial use. Nicotine, it turns out, is so toxic that it was one of the first chemicals used in agricultural insecticides. It's not clear, though, whether these toxic metabolites are really defending plants against hungry herbivores in their natural environment, especially since many insects can tolerate various plant chemicals and sometimes even incorporate them into their own defenses. Though scientists have cataloged a long list of these presumed resistance traits, there's no evidence that they offer plants a competitive advantage against their leaf-covetous foes in nature.

[Figure: Spodoptera exigua larva feeding on Nicotiana attenuata]

With plant and plant-eater engaged in an ever-escalating battle of evolutionary one-upmanship and with plants capable of producing an array of defensive responses, teasing out the predator-resistant effects of individual plant metabolites has proved challenging. Theoretically, one could track down a resistance gene by breeding plants that are genetically identical save for the gene that controls expression of a particular resistance trait. In practice, however, traditional breeding techniques aren't that precise and tend to generate additional variations in genomic regions that are linked to the target gene and that might affect resistance as well.
The tools of genetic engineering have largely overcome such limitations, allowing scientists far greater control and specificity. Following this approach, Ian Baldwin and colleagues use transgenic silencing (which introduces gene “constructs” into an organism to inactivate a gene of interest) to investigate a single resistance trait, nicotine production. Even though nicotine is one of the best-studied putative resistance traits, its specific role has been unclear. To isolate the resistance effects of nicotine from possible confounding factors, Baldwin and colleagues blocked nicotine production in the Nicotiana attenuata tobacco plant. Focusing on an enzyme, called putrescine methyl transferase (PMT), central to nicotine biosynthesis, the authors used two techniques that interfere with PMT production by silencing the gene, pmt , that encodes the enzyme. One of the techniques (which adds genetic sequences called “inverted repeats” to gene fragments) proved far more effective at silencing pmt , producing 29 out of 34 plant lines with only 3%–4% of normal nicotine levels. With suitably nicotine-deprived plants, Baldwin and colleagues could directly test nicotine's role in tobacco fitness. They transplanted the transgenic plants, along with nonmutant cultivated plants, in southwestern Utah, N. attenuata 's native habitat. A subset of the plants was also treated with a chemical known to increase both nicotine content and resistance to herbivore attack. Predictably, several of the plant's natural insect enemies made an appearance. Untreated nicotine-deficient transgenic plants fared the worst, losing twice as much foliage to herbivores as nonmutant plants. Transgenic plants treated with the chemical boost performed much better, showing about the same amount of damage as the nonmutants. Interestingly, tobacco hornworms—which, as their name implies, feed primarily on tobacco—preferred nicotine-free plants when given the choice. 
Though the worms have evolved strategies for coping with nicotine's deleterious effects, these adaptations come at a price: worms feeding on nicotine-deficient tobacco grew bigger and faster than those feeding on plants with normal nicotine levels. These results clearly show that nicotine protects tobacco plants in their native habitat, the authors conclude, and that tobacco-chewing insects "prefer low nicotine diets." Removing nicotine from the equation reveals the relentless pressure that plants face from herbivores. Without such defenses, plants would be unceremoniously eliminated posthaste, leaving a world without greenery, not to mention oxygen. But these experiments also demonstrate the unprecedented power of transgenic tools to peel back the obfuscating layers inherent in ecological interactions to reveal the fundamental properties of those interactions. They also suggest that scientists working to unravel the tangled web of ecological interactions would do well to take advantage of the longest running experiment on earth—the natural environment.
An immunocompromised BALB/c mouse model for respiratory syncytial virus infection

Background Respiratory syncytial virus (RSV) infection causes bronchiolitis in infants and children, which can be fatal, especially in immunocompromised patients. The BALB/c mouse, currently used as a model for studying RSV immunopathology, is semi-permissive to the virus. A mouse model that more closely mimics human RSV infection is needed. Since immunocompromised conditions increase the risk of RSV infection, the possibility of enhancing RSV infection in the BALB/c mouse by pretreatment with cyclophosphamide was examined in this study. BALB/c mice were treated with cyclophosphamide (CYP) and five days later, they were infected with RSV intranasally. Pulmonary RSV titers, inflammation and airway hyperresponsiveness were measured five days after infection. Results CYP-treated mice show higher RSV titers in their lungs than the untreated mice. Also, a decreased percentage of macrophages and an increased number of lymphocytes and neutrophils were present in the BAL of CYP-treated mice compared to controls. The CYP-treated group also exhibited augmented bronchoalveolar and interstitial pulmonary inflammation. The increased RSV infection in CYP-treated mice was accompanied by elevated expression of IL-10, IL-12 and IFN-γ mRNAs and proteins compared to controls. Examination of CYP-treated mice before RSV infection showed that CYP treatment significantly decreased both IFN-γ and IL-12 expression. Conclusions These results demonstrate that CYP-treated BALB/c mice provide a better model for studying RSV immunopathology and that decreased production of IL-12 and IFN-γ is an important determinant of susceptibility to RSV infection.

Introduction Respiratory syncytial virus (RSV) is an important respiratory pathogen that produces an annual worldwide epidemic of respiratory illness primarily in children, but also in the elderly [ 1 , 2 ].
In the USA alone, RSV infection of children causes about 100,000 hospitalizations and 4,500 deaths annually (MMWR, 1996). RSV commonly precipitates bronchiolitis and exacerbates asthma but is also associated with severe life-threatening respiratory infections in individuals with coronary artery disease and in those who are immunocompromised [ 3 - 6 ]. At the molecular level, RSV infection up-regulates the expression of several cytokines and chemokines, such as IL-1β, IL-6, IL-8, TNF-α, MIP1α and RANTES, as well as the adhesion molecule ICAM-1, ET-1, LTB4 and LTC4/D4/E4 [ 7 - 13 ]. Furthermore, elevated levels of cytokines and chemokines have been found in the nasal secretions of naturally RSV-infected children and of artificially-infected adults [ 14 - 17 ]. Defects in IL-12 and IFN-α production have been associated with severe RSV disease [ 18 ]. Despite progress in our understanding of immunopathology, the lack of a suitable animal model, with pathophysiology similar to humans, allowing appropriate virology, immunology, pathology and toxicology testing, has hindered the development of prophylactic and therapeutic interventions against RSV infection [ 19 ]. The pathology of RSV infection has been examined in a number of animals including primates, cotton rats, mice, calves, guinea pigs, ferrets and hamsters [ 19 ]. The choice of an experimental model is governed by the specific manifestation of the disease. The development of multiple animal models reflects the multifaceted nature of human RSV disease, in which clinical manifestations and sequelae depend upon age, genetic makeup, immunologic status and concurrent disease status of the individual [ 19 ]. Currently there is no single animal model that duplicates all forms of RSV disease. While cotton rats provide a good model for toxicologic evaluations, mice are considered advantageous for immunology and vaccine development. Furthermore, in mice the importance of IFN-γ, IL-6, IL-10 and IL-13 has been described [ 20 - 24 ].
The mouse provides an excellent model for human RSV infection because of the following: (a) the mouse is the best-characterized animal model, and experiments can be performed in this model in a cost- and time-effective manner; (b) a wide array of immunological reagents is available for studies in this model; and (c) the A2 strain of human RSV administered intranasally readily infects lungs of mice, and exhibits a time course of infection, pathology and resolution similar to that seen in humans [ 25 ]. Treatment of mice with the anti-RSV compound ribavirin decreases RSV titers in the lungs [ 26 ]. Depending upon the amount of RSV administered, the illness in mice may range from mild pneumonitis of the lung to weight loss [ 27 ]. Healthy BALB/c mice are semi-permissive to RSV and develop only limited inflammation and airway reactivity. Based on reports that deficiency in IL-12 and IFN production increases the severity of RSV disease, we hypothesized that rendering mice immunocompromised would improve permissiveness to RSV and provide a better model for RSV infection. To test this hypothesis, BALB/c mice were treated with cyclophosphamide, infected with RSV, and characterized in terms of viral infectivity and pathology, immunology, and immunohistology. The results show that cyclophosphamide temporarily decreases IL-12 production and thus augments viral replication and the immunopathology of RSV disease. Materials and Methods Animals Female six-week-old BALB/c mice were purchased from Jackson Laboratory (Bar Harbor, ME) and maintained in a pathogen-free environment. All procedures were reviewed and approved by the University of South Florida Committee on Animal Research. Cyclophosphamide treatment Cyclophosphamide (CYP; Sigma, St. Louis, MO) was administered to mice intraperitoneally (i.p.) at a single dose of 100 mg per kg five days prior to RSV infection.
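As a worked example of the dosing arithmetic only, a per-kilogram dose converts to an injection volume once a body weight and a stock concentration are fixed. The 20 g body weight and 10 mg/ml stock below are assumed values for illustration, not values from the study:

```python
# Convert a dose in mg/kg into an absolute dose and an injection volume.
def injection_volume_ml(dose_mg_per_kg, body_weight_g, stock_mg_per_ml):
    dose_mg = dose_mg_per_kg * body_weight_g / 1000.0  # grams -> kilograms
    return dose_mg / stock_mg_per_ml

# 100 mg/kg CYP for a hypothetical 20 g mouse, from an assumed
# 10 mg/ml stock: a 2 mg dose delivered in 0.2 ml.
volume = injection_volume_ml(100, 20, 10)
```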
RSV infection, weight determination and tissue collection The A2 strain of human RSV (American Type Culture Collection, Manassas, VA) was propagated in HEp-2 cells (ATCC) in a monolayer culture as previously described (Behera et al., 1998). Mice were infected intranasally with 5 × 10⁵ PFU of RSV in a volume of 50 μl five days after treatment with CYP. One set of animals was monitored for weight loss at days 5, 10, 15 and 22 following CYP treatment (0, 5, 10 and 17 days after RSV infection). A second set of animals was sacrificed five days after infection and their lungs were removed for determination of RSV titers, cytokine levels and histopathology. RSV plaque assay HEp-2 cells (5 × 10⁵/well) in 6-well plates were infected with 5 × 10⁵ PFU RSV per well for 2 hours at 37°C. The RSV was removed and the wells were overlaid with 1.5 ml of growth medium containing 0.8% methylcellulose. The cells were then incubated at 37°C for 72 hours, after which the overlay was removed. Following incubation, the cells were fixed in cold 80% methanol for 3 hours, blocked with 1% horse serum in PBS at 37°C for 30 min, then incubated with anti-RSV monoclonal antibody (NCL-RSV 3, Vector Laboratories, Burlingame, CA) diluted 1:400 for 1 hour at 37°C. Secondary antibody staining and substrate reactions were performed using the Vectastain ABC Kit (Vector Laboratories), and diaminobenzidine in H₂O₂ (Pierce, Rockford, IL) was used as a chromogen. The plaques were enumerated by microscopy and the results were expressed as mean ± standard error of the mean. Determination of airway hyperresponsiveness (AHR) AHR was measured in unrestrained mice using a whole body plethysmograph (Buxco, Troy, NY), as previously described (Schwarze et al., 1997), and expressed as enhanced pause (Penh).
Groups of mice (n = 4) were exposed for 5 min to nebulized PBS and subsequently to increasing concentrations (6, 12, 25 and 50 mg/ml) of nebulized methacholine (MCh; Sigma, St. Louis, MO) in PBS using an ultrasonic nebulizer. After nebulization, recordings were taken for 5 minutes. Penh values were averaged and expressed as a percentage of baseline Penh values obtained following PBS exposure. Immunohistochemical analysis Mouse lungs were rinsed with intratracheal injections of PBS then perfused with 10% neutral buffered formalin. Lungs were removed, paraffin-embedded, sectioned at 20 μm and stained with hematoxylin and eosin (H & E). A semi-quantitative evaluation of inflammatory cells in the lung sections was performed as previously described (Kumar et al., 1999). Whole lung homogenates were prepared using a TissueMizer and assayed for the cytokines IL-10, IL-12 and IFN-γ by ELISA (R & D Systems, Minneapolis, MN), following the manufacturer's directions. The results are expressed as cytokine amount in picograms per gram of lung (pg/g). Detection of RSV and cytokines in the lungs by RT-PCR Total cellular RNA was isolated from lung tissue using TRIZOL reagent (Life Technologies, Gaithersburg, MD). Forward and reverse primers used were as follows: RSV-N forward: 5'-GCG ATG TCT AGG TTA GGA AGA A-3'; reverse: 5'-GCT ATG TCC TTG GGT AGT AAG CCT-3'; mouse IFN-γ forward: 5'-GCT CTG AGA CAA TGA ACG CT-3'; reverse: 5'-AAA GAG ATA ATC TGG CTG TGC-3'; mouse IL-10 forward: 5'-GGA CTT TAA GGG TTA CTT GGG TTG CC-3'; reverse: 5'-CAT TTT GAT CAT CAT GTA TGC TTC T-3'; mouse IL-12 forward: 5'-CAG TAC ACC TGC CAC AAA GGA-3'; reverse: 5'-GTG TGA CCT TCT CTG CAG ACA-3'; and β-actin forward: 5'-GAC ATG GAG AAG ATC TGG CAC-3'; reverse: 5'-TCC AGA CGC AGG ATG GCG TGA-3'. All PCRs were run with denaturation at 95°C for 1 min, annealing at 56°C for 30 sec, and extension at 72°C for 1 min, for 25–35 cycles. All amplifications were done in triplicate and repeated three times.
The PCR products were separated by agarose gel electrophoresis and quantified using Advanced Quantifier Software (BioImage, Ann Arbor, MI). Cell enumeration of bronchoalveolar lavage fluid Bronchoalveolar lavage (BAL) fluid was collected and differential cell counts were performed as previously described (Kumar et al., 1999). Briefly, BAL was centrifuged and the cell pellet was suspended in 200 μl of PBS and counted using a hemocytometer. The cell suspensions were then centrifuged onto glass slides using a cytospin centrifuge at 1000 rpm for 5 min at room temperature. Cytocentrifuged cells were air-dried and stained with a modified Wright's stain (Leukostat, Fisher Scientific, Atlanta, GA), which allows differential counting of monocytes and lymphocytes. At least 300 cells per sample were counted by direct microscopic observation. Statistical analysis Values for all measurements were expressed as mean ± SD or SEM. The data were analyzed by ANOVA. Paired and unpaired results were compared by the Wilcoxon signed-rank test or the Mann-Whitney test, respectively. Differences between groups were considered significant at p < 0.05. Results Cyclophosphamide treatment augments RSV infection in mice Groups of mice were injected i.p. with a single dose of CYP or PBS and five days later infected with RSV. Five days after infection, RSV titers in one group of mice were measured by plaque assay of lung homogenates (Fig. 1A). The mice pretreated with CYP produced significantly more (p < 0.01) RSV plaques compared to the PBS control group. Weight loss is a clinical correlate of RSV infection; therefore, weights were measured in a parallel group of mice on day 5, 10, 15, and 22 after CYP treatment (day 0, 5, 10 and 17 after RSV infection) (Fig. 1B). CYP treatment alone resulted in a weight loss or reduced weight gain compared to PBS, but the RSV-infected, CYP-treated mice lost significantly more weight than those exposed to RSV alone (p < 0.05 to p < 0.01 vs. RSV, depending on the time point) or PBS (p < 0.01 vs.
Control). Pretreatment with CYP resulted in increased weight loss in the RSV-infected mice through day 15 and reduced weight gain at day 22, indicating that cyclophosphamide treatment exacerbated the pathology of RSV infection. Figure 1 (A) CYP increases RSV titer in the lungs of BALB/c mice. Mice were treated with CYP (100 mg/kg, i.p.) or PBS and 5 days later infected with RSV (50 μl i.n. twice, 10⁶ PFU/mouse). Animals were sacrificed on day 4 and RSV titers were measured in whole lung homogenates by RSV plaque assay. (n = 4 for each group; § P < 0.01 vs PBS group). (B) Cyclophosphamide affects body weight. Mice (n = 4) were infected with RSV alone or were treated with CYP (100 mg/kg i.p.) prior to infection. Body weights were measured on day 1, 5, 10, 15, and 22 after treatment. Bars represent means ± SEM. (* P < 0.05, † P < 0.01 vs RSV; ‡ P < 0.05, § P < 0.01 vs control). Cyclophosphamide pretreatment increases RSV-inducible lung inflammation To examine whether CYP treatment increases inflammatory effects in the lungs of RSV-infected mice, we determined airway hyperresponsiveness (AHR), cellular infiltration into the lung and lung histopathology. Groups of BALB/c mice either infected with RSV alone or treated with cyclophosphamide (CYP) prior to infection were lavaged and cells in the fluid were centrifuged onto slides. BAL cells were stained with Leukostat. Cells were counted from 4 different slides from each group in a blinded fashion. Cell counts were plotted as percentage of total cells (Fig. 2A). There was a decrease in the number of macrophages and increases in lymphocyte and neutrophil numbers following RSV infection that was enhanced by prior treatment with CYP. To analyze the extent of lung pathology, lungs were paraffin-embedded, sectioned and stained with hematoxylin-eosin (H&E). The lung sections from RSV-infected, CYP-treated mice (Fig. 2C, a & b) showed significantly greater inflammation than lungs from mice given RSV alone (Fig. 2C, c & d).
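The nonparametric comparisons named in the statistical-analysis section (Mann-Whitney for unpaired groups, Wilcoxon signed-rank for paired measurements, significance at p < 0.05) can be sketched with `scipy.stats`. All numbers below are invented for illustration; they are not data from the study:

```python
# Hedged sketch of the group comparisons described in the Methods,
# using scipy.stats. Every value here is hypothetical.
from scipy import stats

# Unpaired groups (e.g. CYP-treated vs. PBS-treated mice): Mann-Whitney U test
cyp_group = [5.2, 6.1, 5.8, 6.4]   # hypothetical log10 lung titers
pbs_group = [3.9, 4.2, 4.0, 4.5]
u_stat, p_unpaired = stats.mannwhitneyu(cyp_group, pbs_group, alternative="two-sided")

# Paired measurements (e.g. the same animals at two time points):
# Wilcoxon signed-rank test
before = [0.50, 0.55, 0.48, 0.60, 0.52, 0.58]
after = [0.70, 0.66, 0.75, 0.81, 0.69, 0.77]
w_stat, p_paired = stats.wilcoxon(before, after)

# Differences are considered significant at p < 0.05, as in the paper
print(p_unpaired < 0.05, p_paired < 0.05)  # True True
```

With small, tie-free samples like these, SciPy computes exact p-values for both tests, which is why even n = 4 per group can reach significance under complete separation.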
The RSV-infected groups showed greater inflammation than uninfected control mice (Fig. 2C, e & f). Figure 2 (A) BAL cell differential of RSV-infected mice. Mice were treated with cyclophosphamide (CYP) or vehicle 5 days before infection with RSV. Animals were sacrificed on day 4 postinfection and BAL was performed. Following cytocentrifugation, BAL cells were stained with Leukostat and counted from 4 different slides from each group in a blinded fashion. Cell counts as percentage of total were plotted. (B) Measurement of airway hyperresponsiveness (AHR). Mice treated as above were tested for AHR by methacholine challenge in a plethysmograph. AHR is expressed as Penh, percent of control. (C) Lung histopathology. Mice were infected with RSV alone (C and D) or treated with cyclophosphamide (A and B) prior to RSV infection. The third group of mice was not exposed to RSV (E and F). Animals were sacrificed on day 5 and their lungs removed and sectioned. Paraffin-embedded lung sections were stained with hematoxylin-eosin. Increased RSV infection in CYP-treated BALB/c mice is associated with increased production of immunoregulatory cytokines To examine the cytokine profile in lungs from RSV-infected mice with or without CYP treatment, the gene expression of IL-10, IFN-γ and IL-12 was measured by RT-PCR. Gel profiles and densitometric analyses are shown in Fig. 3A and 3B. The results show that mice infected with RSV after CYP treatment have increased mRNA expression for all three cytokines compared to control mice or mice infected with RSV alone. To determine whether mice treated with CYP and infected with RSV indeed produce more of these proteins, cytokines were measured by ELISA on homogenates prepared from whole lungs. IL-10 (p < 0.05 vs. PBS), IL-12 (p < 0.05 vs. RSV) and IFN-γ (p < 0.05 vs. PBS) levels were significantly higher in the CYP-treated group than the control group (Fig. 4A–C). Figure 3 Detection of RSV and cytokines in the lungs of BALB/c mice.
(A) RSV-N and IL-10, IL-12, IFN-γ and β-actin were checked by RT-PCR. Mice were infected with RSV alone or treated with CYP prior to infection. The third group was uninfected (PBS) as control. Animals were sacrificed on day 5, their lungs removed and RNA was isolated and used in an RT-PCR assay. (B) Densitometric analysis of the band densities from part A. Relative intensity refers to the ratio of the intensity of each cDNA product to that of β-actin. Figure 4 CYP-treated BALB/c mice produce higher levels of Th1 and Th2 cytokines. Mice were infected with RSV alone or were treated with cyclophosphamide prior to infection. The third group of mice received no virus (PBS only) as control. Animals were sacrificed on day 5 and their lungs removed. Whole lung homogenates were prepared and cytokines were measured by ELISA. Results are given as mean ± SEM (n = 4 for each group). (A) CYP-pretreated mice produce higher IL-10 in the lungs. (‡ P < 0.05 vs. PBS). (B) IL-12 was higher in CYP-treated mice. (§ P < 0.01 vs. PBS; * P < 0.05 vs. RSV). (C) IFN-γ was higher in CYP-treated mice. (* P < 0.05 vs. RSV; ‡ P < 0.05 vs. PBS). Cyclophosphamide treatment causes a transient reduction in IL-12 and IFN-γ expression To examine the mechanism underlying the increased RSV infection in CYP-treated animals, the levels of two cytokines that exert antiviral activity, IL-12 and IFN-γ, were measured after treatment with CYP (Fig. 5). Mice were sacrificed on days 1, 2, 4, and 6 after treatment and IL-12 and IFN-γ protein levels were measured in whole lung homogenates by ELISA. Untreated BALB/c mice were used as controls. Treatment with CYP gradually decreased both IL-12 (p < 0.05) and IFN-γ (p < 0.01 vs. control) until day 4. These results suggest that decreased production of IL-12 and IFN-γ may play a role in the observed increase in RSV infection in CYP-treated mice. Figure 5 Changes in IL-12 and IFN-γ levels over time in CYP-treated mice. Mice were treated with cyclophosphamide at 100 mg/kg i.p.
Animals were sacrificed on day 1, 2, 4, 6 after treatment and their lungs removed. Whole lung homogenates were prepared and IL-12 (A) and IFN-γ (B) were measured by ELISA. Untreated mice were used as control. Results are shown as mean ± SEM (n = 2; ‡ P < 0.05; § P < 0.01 vs. control). Discussion The main focus of this study has been to establish and characterize an immunocompromised mouse model for studying RSV infection. Mice are only semi-permissive to RSV infection, yet can serve as a useful model for immunological studies. Compared to the traditional BALB/c mouse model, the use of cyclophosphamide to create an immunocompromised condition provides an effective means of augmenting RSV replication and disease. Pretreatment of BALB/c mice with CYP results in RSV titers in the lungs of these immunocompromised mice that are increased significantly compared to the group infected with RSV without cyclophosphamide treatment. These results are consistent with an earlier study in the cotton rat model [ 28 ]. In another study, a high titer RSV inoculum (10 7 PFU/ml) was administered intranasally to old mice and resulted in clinical illness and appreciable pathology in the lung [ 25 ], while mice inoculated with 10 6 PFU/ml, or less, did not exhibit symptoms of illness. Most studies using mice as models employ lower doses of RSV because RSV infection in humans, which induces a pneumonia-like pulmonary inflammation, occurs typically at sub-clinical RSV doses. The loss of body weight in CYP-treated mice following RSV infection compared to untreated control mice confirmed that CYP-treatment increased the susceptibility to RSV infection at lower inocula. This finding that permissiveness to RSV can be augmented by rendering mice immunocompromised is significant, as it increases the utility of the mouse model for RSV infection. Consistent with increased RSV infection, the cellular population in BAL fluid was altered. 
Especially significant are the increases in lymphocytes and neutrophils in CYP-treated mice compared to controls. This data is in agreement with previous reports showing that RSV infection increased lymphocyte infiltration in the lung [ 2 ]. Along with increased cellular infiltration, lung pathology, particularly epithelial denudation and goblet cell hyperplasia, is also markedly increased. Expression of IL-10, IL-12 and IFN-γ was examined to determine if RSV-induced changes in the levels of these cytokines in the lungs of CYP-treated mice played a role in the increased lung immunopathology. RSV-infected CYP-treated mice exhibited significantly increased expression of these cytokines at the protein and mRNA level in agreement with previous observations that RSV infection induced enhanced expression of Th1 and Th2 cytokines [ 22 , 29 ]. Other studies have shown that IFN-γ can induce production of IL-12 in a self-activating loop, by activating macrophages which produce IL-12 [ 30 ]. Although CYP treatment is known to induce an immunosuppressed condition, the mechanism is unclear. The results of cytokine analysis on days 1 to 5 after cyclophosphamide treatment indicated that IL-12 and IFN-γ were reduced and the reduction was highest on days four and two, respectively. These results suggest that the ability of cells to produce these cytokines at the time of RSV infection is an important determinant of the magnitude of infection in terms of increased RSV replication and titer. Impairment in IL-12 and IFN-γ production at the key moment of acute infection leads to rapid viral replication and subsequent pathology. In a previous report we demonstrated the importance of IFN-γ by artificially increasing IFN-γ levels and showing that viral titers were decreased because of the induction of 2'-5'oligoadenylate synthetase which activates RNase L to degrade viral RNA [ 31 ]. 
Also, studies in humans have suggested that individuals lacking IFN-α or IL-12 are at a higher risk of severe RSV disease. Thus, down-regulation of the production of these cytokines is a likely factor underlying the observed enhancement of RSV infection by cyclophosphamide treatment. In conclusion, the results of this study demonstrate that cyclophosphamide treatment of BALB/c mice renders them more susceptible to RSV infection, as revealed by increased RSV titers in the lung and decreased body weight. The mechanism of this increase in infection involves transient down-regulation of IFN-γ and IL-12 induced by cyclophosphamide treatment. Competing Interests The author(s) declare that they have no competing interests. Authors' Contributions XK, BAL and cell enumeration; GH, data analysis; GP, AHR, RT-PCR; MK, RSV infection and assay, tissue collection, RT-PCR; AB, cyclophosphamide treatment, tissue collection, ELISAs; TSR, immunohistochemistry; JZ, cell culture and virus preparation; RFL, experimental design and analysis; SSM, project design, experimental analysis and data interpretation.
Increased proviral load in HTLV-1-infected patients with rheumatoid arthritis or connective tissue disease

Background Human T-lymphotropic virus type 1 (HTLV-1) proviral load is related to the development of HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP) and has also been shown to be elevated in the peripheral blood in HTLV-1-infected patients with uveitis or alveolitis. Increased proliferation of HTLV-1-infected cells in, or migration of such cells into, the central nervous system is also seen in HAM/TSP. In the present study, we evaluated the proviral load in a cohort of HTLV-1-infected patients with arthritic conditions. Results HTLV-1 proviral load in the peripheral blood from 12 patients with RA and 6 patients with connective tissue disease was significantly higher than that in matched asymptomatic HTLV-1 carriers, but similar to that in matched HAM/TSP controls. HAM/TSP was seen in one-third of the HTLV-1-infected patients with RA or connective tissue disease, but did not account for the higher proviral load compared to the asymptomatic carrier group. The proviral load was increased in the synovial fluid and tissue from an HTLV-1-infected patient with RA, the values suggesting that the majority of infiltrated cells were HTLV-1-infected. In the peripheral blood from HTLV-1-infected patients with RA or connective tissue disease, HTLV-1 proviral load correlated with the percentages of memory CD4+ T cells and activated T cells, and these percentages were shown to be markedly higher in the synovial fluid than in the peripheral blood in an HTLV-1-infected patient with RA. Conclusions These biological findings are consistent with a role of the retrovirus in the development of arthritis in HTLV-1-infected patients. A high level of HTLV-1-infected lymphocytes in the peripheral blood and their accumulation in situ might play a central role in the pathogenesis of HTLV-1-associated inflammatory disorders.
Alternatively, the autoimmune arthritis, its etiological factors or treatments might secondarily enhance HTLV-1 proviral load.

Background Human T-lymphotropic virus type 1 (HTLV-1) is endemic in southern Japan, intertropical Africa, Melanesia, Latin America, and the Caribbean basin [ 1 ]. HTLV-1 is the etiological agent of adult T-cell leukemia [ 2 ] and HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP), an inflammatory disease of the central nervous system [ 3 , 4 ], and has also been implicated in several other inflammatory disorders, such as polymyositis [ 5 ], uveitis [ 6 ], Sjögren's syndrome [ 7 ], alveolitis [ 8 ], and infective dermatitis [ 9 ]. The possibility that HTLV-1 may cause joint disease was initially raised by reports of arthralgia and polyarthritis in patients with adult T-cell leukemia [ 10 , 11 ]. Polyarthritis has also been observed in some patients with HAM/TSP [ 12 ]. Nishioka et al. [ 13 ] described the association of a polyarthritis syndrome with HTLV-1 infection in the absence of clinical ATL or HAM/TSP, and proposed the term HTLV-1-associated arthritis (HAA). Cases of HTLV-1-infected patients with mixed connective tissue disease have also been described [ 14 ], although an association between HTLV-1 infection and systemic lupus erythematosus has not been established [ 15 ]. Apart from the possibility of neurological signs, the clinical features of HAA are similar to those of idiopathic rheumatoid arthritis (RA) [ 16 - 18 ]. Epidemiological studies have demonstrated that HTLV-1 seropositivity is a risk factor for RA in Japan [ 19 , 20 ], but a recent study conducted in South Africa, another HTLV-1 endemic area, failed to detect any association between HTLV-1 and RA [ 21 ]. This discrepancy might be due to differences in genetic background, although the possibility cannot be excluded that HAA in Japan results from the coincidental coexistence of two relatively common diseases.
Interestingly, a recent prospective study demonstrated an increased incidence of arthritis in cohorts of former US blood donors infected with HTLV-1 or HTLV-2 [ 22 ]. Several findings support the hypothesis of an etiopathogenic role for HTLV-1 in HAA: ATL-like T lymphocytes have been identified in the synovial fluid and synovial tissue [ 17 , 18 , 23 ]; high titers of IgM antibodies against HTLV-1 have been found in the synovial fluid [ 23 ]; HTLV-1 proviral DNA has been detected in synovial fluid cells and synovial tissue cells [ 23 ], cultured adherent synovial stromal cells [ 24 ], and synovial macrophage cells [ 25 ]; and Tax mRNA and protein have been detected in synovial stromal cells [ 26 ]. HTLV-1 tropism for synovial cells has been confirmed in vitro [ 27 ]. Moreover, mice transgenic for Tax develop an inflammatory arthropathy resembling RA in humans [ 28 ]. The development and progression of RA is dependent on the migration of T lymphocytes into the synovial compartment [ 29 , 30 ]. Similarly, the tissue damage in HAM/TSP is thought to be caused by T cells that have infiltrated the central nervous system [ 31 , 32 ]. T lymphocytes, especially CD4 + T cells, are the main target of HTLV-1 in vivo and carry the majority of the HTLV-1 proviral load [ 33 ]. The HTLV-1 proviral load in peripheral blood mononuclear cells (PBMCs) is higher in patients with HAM/TSP than in asymptomatic HTLV-1 carriers [ 34 ] and the equilibrium set point of the proviral load is suspected to determine the development of the disease [ 35 ]. We postulated that HTLV-1 proviral load might also influence the initiation and course of HAA, and measured this marker in PBMCs from a previously described cohort [ 16 ] of HTLV-1-infected patients with RA and in a group of HTLV-1-infected patients with connective tissue disease. 
Results The HTLV-1 proviral load was measured in the peripheral blood of HTLV-1-infected patients with RA or connective tissue disease and in matched asymptomatic and HAM/TSP controls (Figure 1 ). The number of copies of HTLV-1 proviral DNA per 10 6 PBMCs ranged from 14,600 to 373,000 in the patients with RA (Table 1 ) and from 1,500 to 411,200 in patients with connective tissue disease (Table 2 ), the corresponding ranges in the HTLV-1 asymptomatic carriers and in patients with HAM/TSP being 50 to 97,700 and 2,100 to 392,000, respectively. The mean ± SD and median proviral loads were 133,800 ± 134,600 and 75,800 in the HTLV-1-infected patients with RA or connective tissue disease combined, the values for the RA subgroup being 114,400 ± 112,200 and 67,400 and those for the connective tissue disease subgroup 172,500 ± 176,700 and 120,800, while the corresponding values in the asymptomatic carriers were 18,800 ± 26,400 and 10,100 and those in patients with HAM/TSP 86,800 ± 90,600 and 62,400. The HTLV-1 proviral load was significantly higher in the HTLV-1-infected group with RA or connective tissue disease than in the matched asymptomatic HTLV-1 carriers ( P = 0.0012 in Wilcoxon's test, P = 0.0002 in the paired t -test after log-transformation), and the difference remained significant when the analysis focused on the RA subgroup ( P = 0.0022 in Wilcoxon's test, P = 0.0002 in the paired t -test). No differences were observed between the HTLV-1-infected group with RA or connective tissue disease and the matched HAM/TSP controls ( P > 0.05 in both Wilcoxon's test and the paired t -test). As expected, the difference between the asymptomatic carriers and the HAM/TSP group was significant ( P = 0.0001 in Wilcoxon's test, P < 0.0001 in the paired t -test). Figure 1 HTLV-1 proviral load in the peripheral blood from HTLV-1-infected patients with RA or connective tissue disease and from HAM/TSP or asymptomatic controls. 
The 10th and 90th percentiles are shown as the lower and upper horizontal bars on the vertical line, while the 25th and 75th percentiles are shown as the lower and upper edges of the box; the median is shown within the box. The results shown as dots fall outside the 10th and 90th percentiles.

Table 1 Clinical and biological features of HTLV-1-infected patients with RA

Patient | Sex | Age | Associated diseases | Disease duration (years) | Sharp score | ESR (mm) | CRP (mg/l) | RF (IU/ml) | Treatment | HTLV-1 proviral load (copies/10⁶ PBMCs)
1 | F | 75 | - | 7 | 65 | 111 | 313 | 128 | Corticosteroids, DMARDs | 373,300
2 | F | 63 | HAM/TSP | 2 | 10 | 70 | 5 | 32 | Corticosteroids, DMARDs | 39,800
3 | F | 62 | - | 9 | 3 | 41 | 6 | - | NSAIDs | 126,500
4 | F | 58 | - | 7 | 46 | 5 | 3 | 64 | Corticosteroids, DMARDs | 55,000
5 | F | 72 | - | 7 | 109 | 56 | 16 | - | NSAIDs, DMARDs | 38,500
6 | F | 59 | - | 10 | 5 | 38 | 5 | - | Corticosteroids, DMARDs | 28,900
7 | F | 64 | - | 2 | 8 | 25 | 12 | 128 | Corticosteroids, DMARDs | 79,800
8 | F | 69 | - | 16 | 126 | 21 | 30 | 16 | Corticosteroids, DMARDs | 230,000
9 | F | 57 | HAM/TSP | 13 | 6 | 29 | 13 | - | Corticosteroids | 247,400
10 | F | 58 | HAM/TSP | 2 | 17 | 35 | 9 | 32 | Corticosteroids, DMARDs | 106,500
11 | F | 42 | - | 18 | 3 | 22 | 5 | - | Analgesics | 32,200
12 | F | 77 | HAM/TSP | 7 | 110 | 88 | 41 | - | Corticosteroids, DMARDs | 14,600

Sharp score: joint space narrowing score plus erosion score; ESR: erythrocyte sedimentation rate; CRP: C-reactive protein; RF: rheumatoid factor; NSAIDs: non-steroidal anti-inflammatory drugs; DMARDs: disease-modifying anti-rheumatic drugs (hydroxychloroquine, leflunomide, methotrexate, etc.)
Table 2 Clinical and biological features of HTLV-1-infected patients with connective tissue disease

Patient | Sex | Age | Connective tissue disease | Associated diseases | Disease duration (years) | ESR (mm) | CRP (mg/l) | ANA | Treatment | HTLV-1 proviral load (copies/10⁶ PBMCs)
13 | F | 38 | SLE | uveitis | 13 | 38 | 6 | + | Corticosteroids, HC | 18,400
14 | F | 68 | PM | - | 1 | 17 | 1 | + | Corticosteroids | 411,200
15 | F | 60 | SS | - | 2 | 10 | 1 | + | NSAIDs, HC | 1,500
16 | F | 56 | PM | HAM/TSP | 10 | 17 | 12 | + | Corticosteroids | 71,700
17 | F | 63 | SS | - | 3 | 22 | 15 | + | NSAIDs, HC | 169,900
18 | F | 65 | SS | HAM/TSP, alveolitis | 5 | 21 | 1 | - | Corticosteroids | 362,300

SS: Sjögren's syndrome; SLE: systemic lupus erythematosus; PM: polymyositis; ANA: anti-nuclear antibodies; HC: hydroxychloroquine

Of the 12 HTLV-1-infected patients with RA, 4 (patients 2, 9, 10, and 12) had HAM/TSP (Table 1); their respective HTLV-1 proviral loads were 39,800, 247,400, 106,500, and 14,600 copies per 10⁶ PBMCs. Two patients with connective tissue disease (Table 2, patients 16 and 18) had HAM/TSP, which was associated with alveolitis in patient 18, with proviral loads of 71,700 and 362,300. These samples did not account for the higher proviral load in the arthritic group compared to the asymptomatic carrier group. Indeed, after excluding the patients with co-existing HAM/TSP, the difference between the asymptomatic controls and the group with RA or connective tissue disease remained significant (P = 0.0121, Wilcoxon's test; P = 0.0027, paired t-test), even when the analysis was restricted to the subgroup with RA alone (P = 0.0117, Wilcoxon's test; P = 0.0003, paired t-test). The patient with connective tissue disease and uveitis (patient 13) had 18,420 copies per 10⁶ PBMCs. For one of the patients with RA and HAM/TSP (patient 10), 4 consecutive frozen dry pellets of PBMCs from 1996 to 2002 were available; in these, the proviral load was relatively stable at 92,100, 73,600, 143,300, and 106,500 copies per 10⁶ cells, respectively.
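The matched-group comparison reported in the Results (Wilcoxon's test on the raw loads, and a paired t-test after log-transformation, since proviral loads span several orders of magnitude) can be sketched as follows. The loads below are illustrative values, not the cohort's data:

```python
# Hedged sketch of the matched-pair analysis: log-transform proviral loads
# (copies per 10^6 PBMCs) before the paired t-test, and apply the Wilcoxon
# test to the raw paired values. All numbers are invented for illustration.
import math
from scipy import stats

patients = [14600, 38500, 55000, 79800, 126500, 230000]    # hypothetical loads
matched_carriers = [1200, 5400, 9800, 15000, 22000, 31000]  # hypothetical matched ACs

log_patients = [math.log10(x) for x in patients]
log_carriers = [math.log10(x) for x in matched_carriers]

t_stat, p_ttest = stats.ttest_rel(log_patients, log_carriers)
w_stat, p_wilcoxon = stats.wilcoxon(patients, matched_carriers)

print(p_ttest < 0.05, p_wilcoxon < 0.05)  # True True
```

Log-transforming before the t-test makes the within-pair differences roughly symmetric, which is the usual justification for pairing a parametric test with heavily right-skewed load data.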
No correlation was found between HTLV-1 proviral load and the age of the patient, the duration of illness, or the Ritchie's index score ( P > 0.05, Spearman test). HTLV-1 proviral load did not correlate with erythrocyte sedimentation rate or C-reactive protein level ( P > 0.05, Spearman test). No significant difference in HTLV-1 proviral load was seen in patients positive or negative for rheumatoid factor or antinuclear antibody ( P > 0.05, Mann-Whitney test) or receiving or not receiving either specific treatments for RA or corticotherapy ( P > 0.05, Mann-Whitney test). The proviral load was measured in two sets of PBMCs and synovial fluid or synovial tissue cells obtained from one HTLV-1-infected patient with RA at an interval of one year (Table 1 , patient 5). In the first set of samples, the HTLV-1 proviral load in the synovial fluid cells and paired PBMCs was 845,200 and 125,300 copies per 10 6 cells, respectively, while, in the second set of samples from a year later, the proviral load in the synovial tissue cells and paired PBMCs was 666,700 and 38,500 per 10 6 cells, respectively. Lymphocytes subsets and activation status were examined in the peripheral blood of HTLV-1-infected patients with RA or connective tissue disease. When correlations were examined between HTLV-1 proviral load and the percentage of T lymphocytes expressing CD45RO, CD45RA, or HLA-DR in the HTLV-1 infected patients with RA or connective tissue disease (Figure 2 ), HTLV-1 proviral load correlated positively with the percentage of CD4+ T cells expressing CD45RO and negatively with that of CD4+ T cells expressing CD45RA ( P = 0.039 and P = 0.021, Spearman test). A positive correlation was found between HTLV-1 proviral load and the percentage of CD4+ T cells expressing HLA-DR ( P = 0.008), while the correlation did not reach significance for CD8+ HLA-DR T cells. 
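The Spearman correlations used above (proviral load against lymphocyte-subset percentages) rank both variables before correlating them, so they test for any monotonic relationship rather than a linear one. A minimal sketch with hypothetical paired values, chosen only to show a perfectly monotonic positive relationship like the one reported for CD45RO+ CD4+ cells:

```python
# Hedged sketch of the Spearman correlation between proviral load and a
# lymphocyte-subset percentage. The paired values are hypothetical.
from scipy import stats

proviral_load = [1500, 18400, 71700, 169900, 362300, 411200]  # copies / 10^6 PBMCs
pct_cd45ro_cd4 = [35.0, 42.0, 55.0, 61.0, 70.0, 78.0]          # hypothetical % CD45RO+ CD4+

rho, p_value = stats.spearmanr(proviral_load, pct_cd45ro_cd4)
print(round(rho, 2))  # 1.0 for this perfectly monotonic illustration
```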
Figure 2 Correlation between HTLV-1 proviral load and memory or activated T-lymphocyte subsets in the peripheral blood from HTLV-1-infected patients with RA or connective tissue disease. Lymphocyte distribution and activation were compared in the peripheral blood and synovial fluid of one HTLV-1-infected patient with RA (patient 5). The percentage of CD4+ T cells expressing CD45RO was dramatically increased in the synovial fluid (98%) compared to the peripheral blood (48%), as was the percentage of CD4+ T lymphocytes expressing HLA-DR (94% compared to 18%). Discussion The etiology of autoimmune diseases, such as RA or mixed connective tissue disease, has yet to be established, but appears to result from complex interactions between host genetic and environmental factors [ 36 ], and retroviruses have been considered possible causative factors [ 37 ]. HTLV-1 has been suggested to be implicated in the pathogenesis of RA in Japan, where this retrovirus is endemic [ 13 , 17 , 18 ]. Epidemiological studies have shown an association between HTLV-1 infection and RA in Japan [ 19 , 20 ], and between HTLV-1/2 infection and arthritis in the United States [ 22 ]. HTLV-1-seropositive cases with connective tissue disease have also been described, although the authors suggested a geographical, rather than an etiological, link [ 14 ]. Another study failed to demonstrate any association between HTLV-1 infection and systemic lupus erythematosus in Jamaica [ 15 ]. Considering the age, sex ratio, and HTLV-1 prevalence (4.6%) in our RA cohort, a fortuitous coincidence cannot be excluded. The prevalence of HTLV-1 infection in Martinique is estimated to be around 1.5% and increases with age, particularly among women [ 38 ]. An epidemiological study is therefore required to determine whether there is a weak association, or none at all, between HTLV-1 and arthritis in Martinique.
Nevertheless, the present study provides biological data suggesting a contribution of HTLV-1 to the development of some cases of RA or mixed connective tissue disease. We found that: (1) The circulating HTLV-1 proviral load was higher in HTLV-1-seropositive patients with RA or connective tissue disease than in asymptomatic HTLV-1 carriers and similar to that in patients with HAM/TSP; (2) In the peripheral blood of patients with arthritis, HTLV-1 proviral load correlated with the percentage of memory and activated T CD4 + cells; (3) A high HTLV-1 proviral load was found in the synovial fluid and tissue cells in a patient with RA; (4) In this patient, the percentage of memory and activated CD4 + T cells was higher in the synovial compartment than in the peripheral blood. The HTLV-1 proviral load is thought to be a major determinant of HTLV-1-associated diseases. The HTLV-1 proviral load is higher in the peripheral blood from patients with HAM/TSP than in blood from asymptomatic carriers [ 34 ]. It is also higher in the peripheral blood of patients with HTLV-1-associated uveitis than in asymptomatic carriers [ 39 , 40 ]. Similarly, we observed a significantly higher proviral load in HTLV-1-infected patients with either RA or connective tissue disease than in HTLV-1 asymptomatic carriers. Moreover, the proviral load was higher in the synovial fluid and tissue from one HTLV-1-infected patient with RA than in the peripheral blood. Assuming an average of one HTLV-I provirus per infected cell, the proviral load values in these synovial samples suggest that the majority of infiltrated cells are infected. HTLV-1 proviral load is known to be higher in the spinal fluid than in paired blood samples from HAM/TSP [ 41 , 42 ], but not asymptomatic carriers [ 43 ]. 
Interestingly, a higher HTLV-1 proviral load than in the peripheral blood has also been reported in bronchoalveolar lavage fluid in patients with HTLV-1-associated alveolitis [ 44 ] and in the labial salivary glands in patients with HTLV-1-associated Sjögren's syndrome [ 45 , 46 ]. Thus, a high proviral load might be involved in the pathogenesis of several other HTLV-1-associated inflammatory disorders in addition to HAM/TSP. In HAM/TSP, the HTLV-1 proviral load reaches an equilibrium set-point that is correlated with progression of motor disability, and fluctuates by no more than 2- to 4-fold over a decade [ 35 ]. In one HTLV-1-infected patient with RA, the proviral load was found to be stable over a 6-year period. In HTLV-1-associated uveitis, the proviral load has been shown to correlate with disease activity [ 40 ]. However, in our cohort, HTLV-1 proviral load in the peripheral blood did not correlate with disease activity and was not influenced by treatment of the rheumatological disease. This suggests that HTLV-1 proviral load reaches a set-point determining the onset of the rheumatological disease, but the intensity of the symptoms might be influenced by subsequent in situ events. The unusually high proviral load in HTLV-1 infection results mainly from the Tax-driven activation and expansion of infected cells [ 47 ]. The HTLV-1 targets are mainly CD45RO-expressing CD4+ T lymphocytes and the proviral load is reported to correlate with the number of memory T cells [ 48 ]. In our HTLV-1-infected cohort with RA or connective tissue disease, HTLV-1 proviral load correlated positively with the percentage of CD4+ T cells expressing CD45RO and negatively with that of CD4+ T cells expressing CD45RA. The HTLV-1 proviral load also correlated positively with the percentage of HLA-DR-expressing T cells.
Migration of HTLV-1-infected CD4 + T cells and HTLV-1-specific CD8 + cytotoxic T lymphocytes (CTL) into the central nervous system is a critical step in the pathogenesis of HAM/TSP [ 31 , 32 ]. Similarly, infiltration of T cells plays a central role in the initiation and perpetuation of RA [ 29 , 30 ]. Thus, our finding of an increase in memory (CD45RO) and activated (HLA-DR) CD4 + T cells in the joint fluid of an HTLV-1 carrier with RA supports the hypothesis of a pathogenic involvement of HTLV-1-infected T lymphocytes. Several mechanisms are potentially involved in the occurrence of rheumatological disorders during HTLV-1 infection. Firstly, HTLV-1 infection upregulates the expression of adhesion molecules potentially involved in the migration of lymphocytes into the spinal and joint compartments [ 17 ]. Secondly, HTLV-1 might be transmitted from the infiltrated T lymphocytes to the synoviocytes [ 23 - 26 ], and subsequent Tax expression might induce proliferation of these cells. Extracellular Tax protein has also been reported to stimulate the proliferation of synoviocytes [ 49 ]. Finally, Tax expression stimulates the production of a variety of cytokines, including IL-15 and its receptor. IL-15 might represent a cornerstone between HAM/TSP and HTLV-1-associated rheumatological diseases. IL-15 favors T cell migration into the target tissue compartment [ 50 ]. Moreover, IL-15 inhibits IL2-mediated activation-induced cell death, and is suspected to both facilitate the persistence of MHC I restricted memory CD8+ T cells involved in the pathogenesis of HAM/TSP, and enhance the survival of self-reactive T cells, leading to the development of autoimmune disease [ 51 ]. Recent data argue for a possible autoimmune mechanism of tissue damage in HAM/TSP [ 52 ]. 
Moreover, IL-15 can also induce TNF-α synthesis by macrophages, which, in turn, stimulates a cascade of proinflammatory cytokines, including IL-1β, IL-6, and GM-CSF [ 50 , 53 ], which induce synoviocyte proliferation and are thought to be deleterious for the central nervous system [ 31 ]. Thus, the coexistence of HAM/TSP in one-third of the HTLV-1-infected patients with RA might be explained by the shared features of a high proviral load and common downstream pathways. Alternatively, autoimmune arthritis or its etiological factors might secondarily enhance HTLV-1 proviral load through cell activation, with subsequent migration of HTLV-1-infected cells into the joint or CNS. The accumulation of HTLV-1-infected lymphocytes in the synovium could result from selective infiltration and/or from oligoclonal expansion once the PBMCs have infiltrated the joint. Whether the increase in HTLV-1-infected cells in the peripheral blood and the even greater increase in the synovial compartment are the cause or an effect of the associated arthritis remains uncertain. A role of anti-inflammatory and anti-rheumatic drugs also cannot be excluded. In conclusion, our data in HTLV-1-infected patients with RA or connective tissue disease are consistent with a role of the proviral load in the development of these rheumatological disorders, although the direction of causality in this interaction remains open to question. HTLV-1 might cause a systemic immune-mediated inflammatory disease potentially involving tissues other than the central nervous system, HAM/TSP being only the major syndrome. The clinical expression of this disease might be determined by the amount of HTLV-1-infected T lymphocytes, their level of activation, and their capacity to accumulate in different body compartments. Further research is needed to increase our knowledge of the molecules involved in the homing of HTLV-1-infected CD4+ T lymphocytes and of HTLV-1-specific CD8+ CTL to different target tissues.
Materials and Methods Patients The study was performed in Martinique, an island in the Lesser Antilles archipelago, with a population of 400,000. Between 1988 and 2001, 280 patients with RA, defined according to the American Rheumatology Association (ARA) criteria, and 335 patients with connective tissue disease were followed on an inpatient or outpatient basis at the Rheumatology Department of the Regional Teaching Hospital. Thirteen (4.6%) of the 280 patients with RA (1 male and 12 female) were found to be HTLV-1 seropositive, confirmed by Western blotting (antibodies recognizing at least rgp21, p19, and p24), and peripheral blood was obtained from 12 of these. In addition, 6 patients with connective tissue disease (three with Sjögren's syndrome, two with inflammatory myopathy, and one with systemic lupus), who were seropositive for HTLV-1, were included in the study. Samples were collected between September 2001 and June 2002. For one patient, sequential samples cryopreserved since 1996 were available. The mean age at the time of sampling was 63 years for the HTLV-1-seropositive RA patients compared to 60 years for the total RA cohort, while the mean age of the HTLV-1-seropositive patients with connective tissue disease was 58 years. The mean time interval between onset of the autoimmune disease and sampling for HTLV-1 proviral load determination was 8 years and 6 years in the patients with RA and connective tissue disease, respectively. Of the 12 HTLV-1-seropositive RA patients, 4 had HAM/TSP, defined according to the WHO guidelines [ 54 ]. Two of the patients with connective tissue disease also presented with HAM/TSP. The clinical and biological features of the HTLV-1-infected patients with either RA or connective tissue disease are summarized in Tables 1 and 2 , respectively. Each HTLV-1-infected patient with RA or connective tissue disease was matched for age (± 5 years) and sex with 2 patients with HAM/TSP and 2 asymptomatic HTLV-1 carriers.
Measurement of HTLV-1 proviral load PBMCs were isolated from EDTA blood by density gradient centrifugation. Synovial fluid samples were obtained by arthrocentesis and synovial tissue was obtained during arthroscopy. The synovial tissue was minced into small pieces, washed with phosphate-buffered saline, and passed through a wire mesh to collect synovial tissue cells. Cells were cryopreserved until use. DNA was extracted from 10^6 cells using a phenol/chloroform procedure. The HTLV-1 proviral load was quantified using a real-time TaqMan PCR method [ 55 ]. SK110/SK111 primers were used to amplify a 186 bp fragment of the pol gene and the dual-labeled TaqMan probe (5' FAM and 3' TAMRA) was located at 4829–4858 bp of the HTLV-1 reference sequence (HTLV ATK). Albumin DNA was quantified in parallel to determine the input cell number and was used as an endogenous reference to normalize variations due to differences in the PBMC count or DNA extraction. For both HTLV-1 and the albumin gene, amplifications were performed on 10 μl of DNA extract using the TaqMan PCR Core Reagent kit, data being acquired with the ABI Prism 7700 Sequence Detection System (Perkin Elmer, Foster City, California, USA). Standard curves were generated using ten-fold serial dilutions of a double-standard plasmid (pcHTLV-ALB) containing one copy each of the target regions of the HTLV-1 pol gene and the cellular albumin gene. The HTLV-1-infected human lymphocyte line MT2 (ECACC 93121518) was used as a control for quantification, the limit for an acceptable result being taken as 2.4–3.3 copies of the HTLV-1 pol gene per cell and the variation between series being normalized on the basis of three copies per MT2 cell. All standard dilutions and control and patient samples were run in duplicate for both HTLV-1 and albumin DNA quantification.
Standard curves for HTLV-1 and albumin were accepted when the slope was between -3.322 and -3.743 (corresponding to amplification efficiencies of 100% to 85%) and the correlation coefficient, r^2, was >0.992. If the variation between duplicate values of HTLV-1 or albumin DNA copy numbers was greater than 30%, the analysis was repeated. The normalized value for the HTLV-1 proviral load was reported as the HTLV-1 average copy number/albumin average copy number ratio × 10^6 and expressed as the number of HTLV-1 copies per 10^6 PBMCs. Determination of CD4- and CD8-positive lymphocyte counts Lymphocyte subsets in PBMCs were characterized using a panel of labeled anti-human monoclonal antibodies, consisting of fluorescein isothiocyanate-conjugated anti-CD3 (clone SK7, mouse IgG1), allophycocyanin-conjugated anti-CD4 (clone SK3, mouse IgG1), peridinin chlorophyll protein-conjugated anti-CD8 (clone SK1, mouse IgG1), phycoerythrin (PE)-conjugated anti-CD45RO (clone UCHL-1, mouse IgG2a), and PE-conjugated anti-HLA-DR (clone L243, mouse IgG2a) (all from BD Biosciences Immunocytometry Systems, San Jose, CA), and PE-conjugated anti-CD45RA (clone HI100, mouse IgG2b, from BD Biosciences Pharmingen). The analyses were performed on a FACSCalibur (BD Biosciences Immunocytometry Systems). Statistical analysis Mann-Whitney's U test, Wilcoxon's signed rank test, paired Student's t -test, and Spearman's rank correlation were used, as appropriate. A P value < 0.05 was considered to be statistically significant. List of Abbreviations HTLV, human T-lymphotropic virus; HAM/TSP, HTLV-1-associated myelopathy/tropical spastic paraparesis; RA, rheumatoid arthritis; PBMCs, peripheral blood mononuclear cells. Competing Interests The author(s) declare that they have no competing interests. Authors' Contributions MY carried out most of the clinical and experimental work. AL was involved in the molecular biology work and contributed to the design of the study.
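The standard-curve acceptance criteria and the load normalization described above can be expressed as a short computational sketch. The thresholds follow the text; the function names and example values are illustrative, not the laboratory's actual analysis software:

```python
import math

def amplification_efficiency(slope):
    """Efficiency implied by a standard-curve slope: E = 10**(-1/slope) - 1.
    A slope of -3.322 corresponds to 100% efficiency; -3.743 to ~85%."""
    return 10 ** (-1.0 / slope) - 1.0

def curve_acceptable(slope, r_squared):
    """Accept a standard curve when the slope lies between -3.743 and -3.322
    (85-100% efficiency) and r^2 exceeds 0.992."""
    return -3.743 <= slope <= -3.322 and r_squared > 0.992

def duplicates_ok(a, b, max_variation=0.30):
    """Flag duplicate copy-number measurements differing by more than 30%,
    which per the protocol triggers a repeat of the analysis."""
    return abs(a - b) / max(a, b) <= max_variation

def proviral_load(htlv_copies, albumin_copies):
    """Normalized proviral load as stated in the text: the ratio of average
    HTLV-1 copy number to average albumin copy number, times 10^6, i.e.
    HTLV-1 copies per 10^6 PBMCs."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(htlv_copies) / mean(albumin_copies) * 1e6
```

For example, duplicate HTLV-1 readings of 100 and 120 copies against 2 × 10^5 albumin copies would pass the 30% duplicate check and yield a load of 550 copies per 10^6 PBMCs.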
FD and GL performed the flow cytometry analysis. SO and GJB participated in the neurological and rheumatologic evaluations, respectively. SA supervised the design and the course of the clinical study. RC conceived the study and drafted the manuscript. All authors read and approved the final manuscript.
552302 | A wireless body area network of intelligent motion sensors for computer assisted physical rehabilitation | Background Recent technological advances in integrated circuits, wireless communications, and physiological sensing allow miniature, lightweight, ultra-low power, intelligent monitoring devices. A number of these devices can be integrated into a Wireless Body Area Network (WBAN), a new enabling technology for health monitoring. Methods Using off-the-shelf wireless sensors we designed a prototype WBAN which features a standard ZigBee compliant radio and a common set of physiological, kinetic, and environmental sensors. Results We introduce a multi-tier telemedicine system and describe how we optimized our prototype WBAN implementation for computer-assisted physical rehabilitation applications and ambulatory monitoring. The system performs real-time analysis of sensors' data, provides guidance and feedback to the user, and can generate warnings based on the user's state, level of activity, and environmental conditions. In addition, all recorded information can be transferred to medical servers via the Internet and seamlessly integrated into the user's electronic medical record and research databases. Conclusion WBANs promise inexpensive, unobtrusive, and unsupervised ambulatory monitoring during normal daily activities for prolonged periods of time. To make this technology ubiquitous and affordable, a number of challenging issues should be resolved, such as system design, configuration and customization, seamless integration, standardization, further utilization of common off-the-shelf components, security and privacy, and social issues. | Introduction Wearable health monitoring systems integrated into a telemedicine system are a novel information technology that can support early detection of abnormal conditions and prevention of their serious consequences [ 1 , 2 ].
Many patients can benefit from continuous monitoring as a part of a diagnostic procedure, optimal maintenance of a chronic condition or during supervised recovery from an acute event or surgical procedure. Important limitations for wider acceptance of the existing systems for continuous monitoring are: a) unwieldy wires between sensors and a processing unit, b) lack of system integration of individual sensors, c) interference on a wireless communication channel shared by multiple devices, and d) nonexistent support for massive data collection and knowledge discovery. Traditionally, personal medical monitoring systems, such as Holter monitors, have been used only to collect data for off-line processing. Systems with multiple sensors for physical rehabilitation feature unwieldy wires between electrodes and the monitoring system. These wires may limit the patient's activity and level of comfort and thus negatively influence the measured results. A wearable health-monitoring device using a Personal Area Network (PAN) or Body Area Network (BAN) can be integrated into a user's clothing [ 3 ]. This system organization, however, is unsuitable for lengthy, continuous monitoring, particularly during normal activity [ 4 ], intensive training or computer-assisted rehabilitation [ 5 ]. Recent technology advances in wireless networking [ 6 ], micro-fabrication [ 7 ], and integration of physical sensors, embedded microcontrollers and radio interfaces on a single chip [ 8 ], promise a new generation of wireless sensors suitable for many applications [ 9 ]. However, the existing telemetric devices either use wireless communication channels exclusively to transfer raw data from sensors to the monitoring station, or use standard high-level wireless protocols such as Bluetooth that are too complex, power demanding, and prone to interference by other devices operating in the same frequency range. These characteristics limit their use for prolonged wearable monitoring. 
Simple, accurate means of monitoring daily activities outside of the laboratory are not available [ 12 ]; at present, only estimates can be obtained from questionnaires, measures of heart rate, video assessment, and use of pedometers [ 13 ] or accelerometers [ 14 ]. Finally, records from individual monitoring sessions are rarely integrated into research databases that would provide support for data mining and knowledge discovery relevant to specific conditions and patient categories. Increased system processing power allows sophisticated real-time data processing within the confines of the wearable system. As a result, such a wearable system can support biofeedback and generation of warnings. The use of biofeedback techniques has gained increased attention among researchers in the field of physical medicine and tele-rehabilitation [ 5 ]. Intensive practice schedules have been shown to be important for recovery of motor function [ 22 ]. Unfortunately, an aggressive approach to rehabilitation involving extensive therapist-supervised motor training is not a realistic expectation in today's health care system, where individuals are typically seen as outpatients about twice a week for no longer than 30–45 min. Wearable technology and biofeedback systems appear to be a valid alternative, as they reduce the extensive time to set up a patient before each session and require limited time involvement of physicians and therapists. Furthermore, wearable technology could potentially address a second factor that hinders enthusiasm for rehabilitation, namely the fact that setting up a patient for the procedure is rather time-consuming. This is because tethered sensors need to be positioned on the subject, attached to the equipment, and a software application needs to be started before each session. Wearable technology allows sensors to remain positioned on the subject for prolonged periods, therefore eliminating the need to position them for every training session.
Instead, a personal server such as a PDA can almost instantly initiate a new training session whenever the subject is ready and willing to exercise. In addition to home rehabilitation, this approach may also be beneficial in the clinical setting, where precious time of physicians and therapists could be saved. Moreover, the system can issue timely warnings or alarms to the patient, or to a specialized medical response service, in the event of significant deviations from the norm or medical emergencies. However, as for all systems, regular, routine maintenance (verifying configuration and thresholds) by a specialist is required. Typical examples of possible applications include stroke rehabilitation, physical rehabilitation after hip or knee surgeries, myocardial infarction rehabilitation, and traumatic brain injury rehabilitation. The assessment of the effectiveness of rehabilitation procedures has been limited to the laboratory setting; relatively little is known about rehabilitation in real-life situations. Miniature, wireless, wearable technology offers a tremendous opportunity to address this issue. We propose a wireless BAN composed of off-the-shelf sensor platforms with application-specific signal conditioning modules [ 10 ]. In this paper, we present a general system architecture and describe a recently developed activity sensor "ActiS". ActiS is based on a standard wireless sensor platform and a custom sensor board with a one-channel bio-amplifier and two accelerometers [ 11 ]. As a heart sensor, ActiS can be used to monitor heart activity and position of the upper trunk. The same sensor can be used to monitor position and activity of the upper and lower extremities. A wearable system with ActiS sensors would also allow one to assess metabolic rate and cumulative energy expenditure as a valuable parameter in the management of many medical conditions.
An early version of the ActiS was based on a custom-developed wireless intelligent sensor and custom wireless protocols in the license-free 900 MHz Industrial, Scientific, and Medical (ISM) band [ 15 ]. Our initial experience indicated the importance of standard sensor platforms with ample processing power, minute power consumption, and standard software support. Such platforms were not available on the market during the design of our first prototype system. The recent introduction of an IEEE standard for low-power personal area networks (802.15.4) and the ZigBee protocol stack [ 16 ], as well as the new ZigBee-compliant Telos sensor platform [ 17 ], motivated the development of the new system presented in this paper. TinyOS support for the selected sensor platform facilitates rapid application development [ 18 ]. Standard hardware and software architecture facilitate interoperable systems and devices that are expected to significantly influence next generation health systems [ 19 ]. This trend can also be observed in recently developed physiological monitoring systems from Harvard [ 20 ] and Welch Allyn [ 21 ]. System Architecture Continuous technological advances in integrated circuits, wireless communication, and sensors enable development of miniature, non-invasive physiological sensors that communicate wirelessly with a personal server and subsequently through the Internet with a remote emergency, weather forecast, or medical database server; using baseline (medical database), sensor (WBAN), and environmental (emergency or weather forecast) information, algorithms may result in patient-specific recommendations. The personal server, running on a PDA or a 3G cell phone, provides the human-computer interface and communicates with the remote server(s).
Figure 1 shows a generalized overview of a multi-tier system architecture; the lowest level encompasses a set of intelligent physiological sensors; the second level is the personal server (Internet-enabled PDA, cell phone, or home computer); and the third level encompasses a network of remote health care servers and related services (Caregiver, Physician, Clinic, Emergency, Weather). Each level represents a fairly complex subsystem with a local hierarchy employed to ensure efficiency, portability, security, and reduced cost. Figure 2 illustrates an example of information flow in an integrated WBAN system. Figure 1 Wireless Body Area Network of Intelligent Sensors for Patient Monitoring Figure 2 Data flow in an integrated WWBAN Sensor level A WBAN can include a number of physiological sensors depending on the end-user application. Information from several sensors can be combined to generate new information, such as total energy expenditure. An extensive set of physiological sensors may include the following:
• an ECG (electrocardiogram) sensor for monitoring heart activity
• an EMG (electromyography) sensor for monitoring muscle activity
• an EEG (electroencephalography) sensor for monitoring brain electrical activity
• a blood pressure sensor
• a tilt sensor for monitoring trunk position
• a breathing sensor for monitoring respiration
• movement sensors used to estimate the user's activity
• a "smart sock" sensor or a sensor-equipped shoe insole used to delineate phases of individual steps
These physiological sensors typically generate analog signals that are interfaced to standard wireless network platforms that provide computational, storage, and communication capabilities. Multiple physiological sensors can share a single wireless network node. In addition, physiological sensors can be interfaced with an intelligent sensor board that provides on-sensor processing capability and communicates with a standard wireless network platform through serial interfaces.
The wireless sensor nodes should satisfy the following requirements: minimal weight, miniature form factor, low-power operation to permit prolonged continuous monitoring, seamless integration into a WBAN, standards-based interface protocols, and patient-specific calibration, tuning, and customization. These requirements represent a challenging task, but we believe a crucial one if we want to move beyond 'stovepipe' systems in healthcare where one vendor creates all components. Only hybrid systems, implemented by combining off-the-shelf, commodity hardware and software components manufactured by different vendors, promise proliferation and dramatic cost reduction. The wireless network nodes can be implemented as tiny patches or incorporated into clothes or shoes. The network nodes continuously collect and process raw information, store it locally, and send it to the personal server. The type and nature of the healthcare application will determine the frequency of relevant events (sampling, processing, storing, and communicating). Ideally, sensors periodically transmit their status and events, thereby significantly reducing power consumption and extending battery life. When local analysis of data is inconclusive or indicates an emergency situation, the upper level in the hierarchy can issue a request to transfer raw signals to the upper levels, where advanced processing and storage is available.
Personal server level The personal server performs the following tasks:
• Initialization, configuration, and synchronization of WBAN nodes
• Control and monitoring of WBAN node operation
• Collection of sensor readings from physiological sensors
• Processing and integration of data from various physiological sensors, providing better insight into the user's state
• Providing an audio and graphical user interface that can be used to relay early warnings or guidance (e.g., during rehabilitation)
• Secure communication with remote healthcare provider servers in the upper level using Internet services
The personal server can be implemented on an off-the-shelf Internet-enabled PDA (Personal Digital Assistant) or 3G cell phone, or on a home personal computer. Multiple configurations are possible depending on the type of wireless network employed. For example, the personal server can communicate with individual WBAN nodes using the ZigBee wireless protocol, which provides low-power network operation and supports virtually an unlimited number of network nodes. A network coordinator, attached to the personal server, can perform some of the pre-processing and synchronization tasks. Other communication scenarios are also possible. For example, the personal server running on a Bluetooth- or WLAN-enabled PDA can communicate with remote upper-level services through a home computer; the computer then serves as a gateway (Figure 1 ). Relying on off-the-shelf mobile computing platforms is crucial, as these platforms will continue to grow in their capabilities and quality of services. The challenging tasks are to develop robust applications that provide simple and intuitive services (WBAN setup, data fusion, questionnaires describing detailed symptoms and activities, secure and reliable communication with remote medical servers, etc.). Total information integration will allow patients to receive directions from their healthcare providers based on their current conditions.
Medical services We envision various medical services in the top level of the tiered hierarchy. A healthcare provider runs a service that automatically collects data from individual patients, integrates the data into a patient's medical record, processes them, and issues recommendations, if necessary. These recommendations are also documented in the electronic medical record. If the received data are out of range or indicate an imminent medical condition, an emergency service can be notified (this can also be done locally at the personal server level). The exact location of the patient can be determined based on the Internet access entry point or directly if the personal server is equipped with a GPS sensor. Medical professionals can monitor the activity of the patient and issue altered guidance based on the new information, other prior known and relevant patient data, and the patient's environment (e.g., location and weather conditions). The large amount of data collected through such services will allow quantitative analysis of various conditions and patterns. For example, targets for stride and forces for hip replacement patients could be suggested according to the previous history, external temperature, time of the day, gender, and current physiological parameters (e.g., heart rate). Moreover, the results could be stored in research databases that will allow researchers to quantify the contribution of each parameter to a given condition if adequate numbers of patients are studied in this manner. Again, it is important to emphasize that the proposed approach requires seamless integration of large amounts of data into a research database in order to be able to perform meaningful statistical analyses. ActiS – Activity Sensor The ActiS sensor was developed specifically for WBAN-based, wearable, computer-assisted rehabilitation applications.
With this concept in mind, we integrated a one-channel bio-amplifier and three accelerometer channels with a low-power microcontroller into an intelligent signal processing board that can be used as an extension of a standard wireless sensor platform. ActiS consists of a standard sensor platform, Telos, from Moteiv and a custom Intelligent Signal Processing Module – ISPM (Figure 3 ). A block diagram of the sensor node is shown in Figure 4 . Figure 3 Telos wireless platform with intelligent signal processing daughtercard ISPM Figure 4 Block diagram of the activity sensor (Telos platform and ISPM module) The Telos platform is an ideal fit for this application due to its small footprint and open-source system software support. A second generation of the Telos platform features an 8 MHz MSP430F1611 microcontroller with integrated 10 KB of RAM and 48 KB of flash memory, a USB (Universal Serial Bus) interface for programming and communication, and an integrated wireless ZigBee-compliant radio with on-board antenna [ 11 ]. In addition, the Telos platform includes humidity, temperature, and light sensors that could be used as ambient sensors. The Telos platform features a 10-pin expansion connector that provides one UART (Universal Asynchronous Receiver Transmitter) and one I2C interface, two general-purpose I/O lines, and three analog input lines. The ISPM extends the capabilities of Telos by adding two perpendicular dual-axis accelerometers (Analog Devices ADXL202) and a bio-amplifier with a signal conditioning circuit. The ISPM has its own MSP430F1232 processor for sampling and low-level data processing. This microcontroller was selected primarily for its compact size and ultra-low-power operation. Other features that were desirable for this design were the 10-bit ADC and the timer capture/compare registers that are used for acquisition of data from the accelerometers. The F1232 has a hardware UART that is used for communications with Telos.
The ISPM's two ADXL202 accelerometers cover all three axes of motion. One ADXL202 is mounted directly on the ISPM board and collects data for the X and Y axes in the same plane. The second ADXL202 is mounted on a daughter card that extends vertically from the ISPM. The user's physiological state is monitored using an on-board bio-amplifier implemented using an instrumentation amplifier with a signal conditioning circuit. The bio-amplifier could be used for electromyogram (EMG) or electrocardiogram (ECG) monitoring. The output of the signal conditioning circuit is connected to the local microcontroller as well as to the microcontroller on the Telos board via the expansion connector. The AD converter on the Telos board has a higher resolution (12 bit) than the F1232 on the ISPM (10 bit). This configuration gives the flexibility of utilizing either microcontroller to process physiological signals. An example application of the ActiS sensor as a motion sensor on an ankle is given in Figure 5 . This figure also visualizes the main components of acceleration during slow movements as projections of the gravity force (g) on the accelerometer's reference axes, Ax and Ay. Rotations of the sensor in the vertical plane (Θ) can be estimated as Θ = arctan(Ax/Ay). Compensation for non-ideal vertical placement can be achieved using the second accelerometer (not mounted in this photo) at a 90-degree angle. Instead of calculating the angular position, many systems use off-the-shelf gyroscopes to measure angular velocity for the detection of gait phases [ 32 ]. A typical example of step detection is illustrated in Figure 6 . Figure 5 Activity sensor on an ankle with symbolic representation of acceleration components Figure 6 Accelerometer-based step detection using ankle sensors Issues and Applications WBAN systems can capitalize on recent technological advances that have enabled new methods for studying human activity and motion, making extended activity analysis more feasible.
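The angle estimation and step detection described above can be illustrated with a short sketch (in Python rather than MSP430 firmware; the threshold and refractory values are illustrative assumptions, not parameters of the actual ActiS implementation):

```python
import math

def tilt_angle_deg(ax, ay):
    """Sensor rotation in the vertical plane estimated from the projections
    of gravity on the two accelerometer axes: theta = arctan(Ax / Ay).
    atan2 extends the formula to the full circle and tolerates Ay == 0."""
    return math.degrees(math.atan2(ax, ay))

def detect_steps(accel_magnitude, threshold=1.2, refractory=20):
    """Illustrative threshold-crossing step detector for an ankle-worn
    accelerometer (magnitude in g, e.g. sampled at 50 Hz). A refractory
    window of samples suppresses double counting within one step."""
    steps, last = [], -refractory
    for i, a in enumerate(accel_magnitude):
        if a > threshold and i - last >= refractory:
            steps.append(i)
            last = i
    return steps
```

For a perfectly vertical sensor at rest (Ax = 0, Ay = 1 g) the estimated angle is 0°; equal projections on both axes give 45°. In practice the gyroscope-based approach cited above is preferred for gait phases because it is insensitive to the linear acceleration that contaminates tilt estimates during fast movement.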
However, before WBAN becomes a widely accepted concept, a number of challenging system design and social issues should be resolved. If resolved successfully, WBAN systems will open a whole range of possible new applications that can significantly influence our lives. System Design Issues The development of pedometers and Micro-ElectroMechanical Systems (MEMS) accelerometers and gyroscopes shows great promise in the design of wearable sensors. The main system design issues include:
• types of sensors
• power source
• size and weight of sensors
• wireless communication range and transmission characteristics of wearable sensors
• sensor location and mounting
• seamless system configuration
• automatic uploads to the patient's electronic medical record
• intuitive and simple user interface
Types of sensors As for sensors, accelerometers and gyroscopes offer greater sensitivity and are more applicable for monitoring of motion since they generate continuous output. Bouten et al [ 27 ] found that the frequency of human-induced activity ranges from 1 to 18 Hz. Sampling rates in existing projects vary from 10 to 100 Hz. Almost all projects in the last five years use MEMS accelerometers or a combination of accelerometers and gyroscopes [ 34 , 35 ]. As examples of full sets of sensors for research purposes, "MIThril" and Shoe Integrated Gait Sensor (SIGS) [ 26 ] systems feature 3 axes of gyroscopes, 3 axes of accelerometers, two piezoelectric sensors, two electric field sensors, two resistive band sensors, and four force-sensitive resistors. These sensors can be mounted on the back of a shoe and in a shoe insole, respectively. Researchers at the University of Washington School of Nursing have used off-the-shelf tri-axial accelerometer modules to study physical movement in COPD (Chronic Obstructive Pulmonary Disease) patients [ 2 ]. Both Lancaster University, UK, and ETH Zurich, Switzerland, have developed custom hardware realizing arrays of inertial sensor networks [ 24 ].
Lancaster used an array of 30 two-axis accelerometers. Similarly, ETH Zurich used a modular harness design [ 25 ]. The majority of foot-contact pedometers are designed to count steps only. Although they have been studied for use in complex energy estimation and have even shown a high degree of accuracy for walking / running activities [ 2 ] they are not well suited for rehabilitation. Power source, size/weight, and transmission characteristics To be unobtrusive, the sensors must be lightweight with small form factor. The size and weight of sensors is predominantly determined by the size and weight of batteries. Requirements for extended battery life directly oppose the requirement for small form factor and low weight. This implies that sensors have to be extremely power efficient, as frequent battery changes for multiple WBAN sensors would likely hamper users' acceptance and increase the cost. In addition, low power consumption is very important as we move toward future generations of implantable sensors that would ideally be self-powered, using energy extracted from the environment. The radio communication poses the most significant energy consumption problem. Intelligent on-sensor signal processing has the potential to save power by transmitting the processed data rather than raw signals, and consequently to extend battery life. A careful trade-off between communication and computation is crucial for an optimal design. It appears that the most promising wireless standard for WBAN applications is ZigBee, as it represents an emerging wireless technology for the low-power, short-range, wireless sensors. Location of Sensors Although the purpose of the measurement does influence sensor location, researchers seem to disagree on the ideal body location for sensors. A motion sensor attached to an ankle is the most discriminative single position for state recognition, while a combination of hip and ankle sensors discriminates the states even more [ 25 ]. 
In a study of the relationship between metabolic energy expenditure and various activities, researchers at Eindhoven University of Technology, the Netherlands, placed tri-axial accelerometers on a subject's back waistline [ 27 ]. Krause et al used two accelerometers on the SenseWear armband [ 31 ]. Lee et al [ 2 ] placed accelerometer sensors in the subject's thigh pocket in order to measure the angular position and velocity of the thigh. In doing so, they were able to accurately monitor a subject's activity, and with the assistance of gyroscopes and compass headings they were able to successfully estimate a subject's change in location. Some systems employ large arrays of wearable sensors. Laerhoven et al developed a loose-fitting lab coat and trousers [ 24 ] incorporating 30 sensors; Kern et al [ 25 ] developed tighter-fitting modular harnesses including a total of 48 sensors. Sensor attachment is also a critical factor, since the movement of loosely attached sensors creates spurious oscillations after an abrupt movement that can generate false events or mask real events. Seamless system configuration Intelligent WBAN sensors should allow users to easily assemble a robust ad-hoc WBAN tailored to the user's state of health. We can imagine standard off-the-shelf sensors, manufactured by different vendors and sold "over-the-counter" [ 19 ]. Each sensor should be able to identify itself and declare its operational range and functionality. In addition, sensors should support easy customization for a given application. Algorithms Application-specific algorithms mostly combine digital signal pre-processing with a variety of artificial intelligence techniques to model a user's states and the activity in each state.
Digital signal processing includes filters to resolve high- and low-frequency components of a signal, wavelet transform algorithms to correlate heel-strike and toe-off (steps) with angular velocity measured via gyroscopes [ 30 ], and power spectrum analysis with a Gaussian model to classify activity types [ 26 ]. Artificial intelligence techniques may include fuzzy logic [ 28 ] and Kohonen self-organizing maps [ 31 ]. Some systems use physiological signals to improve context identification [ 31 ]. It has been shown that activity-induced energy expenditure (AEE) is well correlated with the sum of the integrals of the high-frequency component of each individual accelerometer axis [ 27 ]. Most of the algorithms in the open literature are not executed in real time, or require powerful computing platforms such as laptops for real-time analysis. Social Issues Social issues of WBAN systems include privacy/security and legal issues. Because health-related information is communicated between sensors and servers, all communication over the WBAN and the Internet should be encrypted to protect users' privacy. Legislation will be necessary to regulate access to patient-identifiable information. Possible applications WBAN technology can be used for computer-assisted physical rehabilitation in ambulatory settings and for monitoring trends during recovery. An integrated system can synergize the information from multiple sensors, warn the user in case of emergency, and provide feedback during supervised recovery or normal activity. Candidate applications include post-stroke rehabilitation, orthopaedic rehabilitation (e.g. hip/knee replacement rehabilitation), and supervised recovery of cardiac patients [ 36 ]. In the case of orthopaedic rehabilitation, the system can measure forces and accelerations at different points and provide feedback to the user in real time.
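The AEE correlate described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the method of [ 27 ]: the high-frequency component of each axis is approximated by subtracting a local moving average, its absolute value is summed (a crude discrete integral), and the three axis totals are added. The function name and window size are illustrative assumptions.

```python
def aee_indicator(samples, window=25):
    """Crude activity proxy from tri-axial accelerometer samples.

    samples: list of (x, y, z) readings taken at a fixed sampling rate.
    """
    n = len(samples)
    total = 0.0
    for axis in range(3):
        sig = [s[axis] for s in samples]
        for i in range(n):
            lo = max(0, i - window)
            hi = min(n, i + window + 1)
            baseline = sum(sig[lo:hi]) / (hi - lo)  # local mean ~ low-frequency part
            total += abs(sig[i] - baseline)         # |high-frequency residual|
    return total
```

A stationary sensor yields a value near zero, while periodic motion accumulates a large sum; a real implementation would use a proper digital filter and scale the integral by the sampling interval.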
Unobtrusive monitoring of cardiac patients can be used to estimate the intensity of activities in a user's daily routine and correlate it with heart activity. In addition, WBAN systems can be used for gait-phase detection during programmable functional electrical stimulation [ 33 ], analysis of balance and monitoring of Parkinson's disease patients in the ambulatory setting [ 32 ], computer supervision of the health and activity status of the elderly, weight-loss therapy, obesity prevention, or, more generally, the promotion of a healthy, physically active lifestyle. Conclusion A wearable Wireless Body Area Network (WBAN) of physiological sensors integrated into a telemedical system holds the promise of becoming a key infrastructure element in remotely supervised, home-based patient rehabilitation. It has the potential to provide a better and less expensive alternative for rehabilitation healthcare and may benefit patients, physicians, and society through continuous monitoring in the ambulatory setting, early detection of abnormal conditions, supervised rehabilitation, and potential knowledge discovery through data mining of all gathered information. Continuous monitoring with early detection can provide patients with an increased level of confidence, which in turn may improve their quality of life. In addition, ambulatory monitoring will allow patients to engage in normal activities of daily life, rather than staying at home or close to specialized medical services. Last but not least, inclusion of continuous monitoring data in medical databases will allow integrated analysis of all data to optimize individualized care and enable knowledge discovery through integrated data mining. Indeed, with the current technological trend toward integration of processors and wireless interfaces, we will soon have coin-sized intelligent sensors.
They will be applied as skin patches, seamlessly integrated into a personal monitoring system, and worn for extended periods of time.
532390 | Patterns of Intron Gain and Loss in Fungi | Little is known about the patterns of intron gain and loss or the relative contributions of these two processes to gene evolution. To investigate the dynamics of intron evolution, we analyzed orthologous genes from four filamentous fungal genomes and determined the pattern of intron conservation. We developed a probabilistic model to estimate the most likely rates of intron gain and loss giving rise to these observed conservation patterns. Our data reveal the surprising importance of intron gain. Between about 150 and 250 gains and between 150 and 350 losses were inferred in each lineage. We discuss one gene in particular (encoding 1-phosphoribosyl-5-pyrophosphate synthetase) that displays an unusually high rate of intron gain in multiple lineages. It has been recognized that introns are biased towards the 5′ ends of genes in intron-poor genomes but are evenly distributed in intron-rich genomes. Current models attribute this bias to 3′ intron loss through a poly-adenosine-primed reverse transcription mechanism. Contrary to standard models, we find no increased frequency of intron loss toward the 3′ ends of genes. Thus, recent intron dynamics do not support a model whereby 5′ intron positional bias is generated solely by 3′-biased intron loss. | Introduction Over a quarter of a century after the discovery of introns, fundamental questions about their function and evolutionary origins remain unanswered. Although intron density differs radically between organisms, the mechanisms by which introns are inserted and deleted from gene loci are not well understood. A correlation has been observed between intron density and positional bias ( Mourier and Jeffares 2003 ). Introns are evenly distributed within the coding sequence of genes in intron-rich organisms, but are biased toward the 5′ ends of genes in intron-poor organisms. This bias is particularly pronounced in the yeast Saccharomyces cerevisiae . 
It has been suggested that both the paucity and positional bias of introns in yeast may be due to intron loss through a mechanism of homologous recombination of spliced messages reverse-transcribed from the 3′ poly-adenylated tail ( Fink 1987 ). This reverse transcription mechanism was first demonstrated in experiments with intron-containing Ty elements in yeast ( Boeke et al. 1985 ). More recently, Mourier and Jeffares (2003) concluded that homologous recombination of cDNAs is the simplest explanation for the positional bias observed in all intron-poor eukaryotes. However, few data exist concerning the actual mechanisms and dynamics of intron evolution. Fungal genomes are in many ways ideal for exploring questions of intron evolution. The fundamental aspects of intron biology are shared between fungi and other eukaryotes, making fungi appropriate model organisms for intron study. They are gene dense with relatively simple gene structures compared with plants and animals, making gene prediction more accurate. Fungi also display a wide diversity of gene structures, ranging from far less than one intron per gene for S. cerevisiae, to approximately 1–2 introns per gene on average for many recently sequenced ascomycetes (including the organisms in this study), to roughly seven introns per gene on average for some basidiomycetes (e.g., Cryptococcus ). Finally, fungi display a strong 5′ bias in intron positions, enabling us to investigate the processes underlying this phenomenon. In principle, a 5′ intron bias could arise through various combinations of intron gain and loss, and a complete understanding of intron positional bias requires an assessment of the contributions of both of these processes. A number of studies demonstrate the occurrence of intron gain and loss in individual genes or gene families. Logsdon et al. 
(1995) offered early examples of well-supported intron gain by comparing triose-phosphate isomerase genes from diverse eukaryotes and demonstrated that numerous introns could be most parsimoniously explained by a single gain with no subsequent losses. O'Neill et al. (1998) later provided evidence for de novo intron insertion into the otherwise intron-less mammalian sex-determining gene SRY . Evidence for the occurrence of multiple independent intron losses has also been reported in studies such as those by Robertson (2000) , who inferred gain and loss events in a family of chemoreceptors in Caenorhabditis elegans . More recently, a number of genome-wide studies of intron dynamics have been conducted. Roy et al. (2003) described genome-wide comparisons between human and mouse (with Fugu as an outgroup) and between mouse and rat (with human as an outgroup), and observed a sparseness of intron loss and complete absence of intron gain in these closely related organisms. On the other hand, Rogozin et al. (2003) observed an abundance of lineage-specific intron loss and gain when analyzing clusters of orthologous genes in deeply branching eukaryotes. Similarly, Qiu et al. (2004) analyzed ten protein families in distantly related eukaryotes, with a single prokaryotic outgroup, and obtained evidence that extant introns are predominantly the result of intron gains. In search of clues to understand the mechanism of intron gain, Fedorov et al. (2003) aligned introns from various eukaryotes, and Coghlan and Wolfe (2004) applied a similar approach in a comparative study of nematodes. None of these studies addressed the positional bias of intron gain and loss events. Here we report the results of a genome-wide comparative analysis of intron evolution in organisms that have a strong 5′ bias in intron location and are at an appropriate evolutionary distance to reveal positional trends in intron gain and loss. 
Results To investigate the roles of both gain and loss in intron evolution, we compared the genomes of four recently sequenced fungi spanning at least 330 million years of evolution ( Taylor et al. 1999 ; Berbee and Taylor 2000 ; Heckman et al. 2001 ) ( Figure 1 ): Aspergillus nidulans , Fusarium graminearum , Magnaporthe grisea , and Neurospora crassa . Ortholog sets composed of one gene from each of the four genomes were identified as pairwise best bidirectional BLAST hits satisfying stringent overlap criteria. Orthologs in each set were subsequently aligned, and the locations of introns were marked. These intron positions (regions of the multiple sequence alignment containing an intron in at least one of the four sequences) were subjected to rigorous alignment quality filtering to eliminate alignment and annotation errors ( Figure 2 A). To set the filtering thresholds, we manually classified ten residue alignment windows on either side of 181 randomly selected intron positions as “clearly homologous,” “possibly homologous,” or “non-homologous.” Requiring 30% identity and 50% similarity in these windows captured 92% of the clearly homologous positions, 29% of the possibly homologous positions, and only 2% of the non-homologous positions ( Figure 2 B). Passing intron positions were split into five quintiles according to their relative position within the annotated coding sequence. Figure 1 Phylogenetic Tree and Intron Conservation Patterns (A) Phylogenetic tree of the four fungal organisms studied (M. grisea, N. crassa, F. graminearum, and A. nidulans) with estimated time scale in millions of years ago. The rooted organismal tree was constructed using an unweighted pair group method using arithmetic averages based on a concatenated alignment of 2,073 orthologous gene sets. Estimated dates of divergence from Taylor et al. (1999) , Berbee and Taylor (2000) , and Heckman et al. 
(2001) .(B) Classification of intron presence (+) and absence (−) patterns across the four fungal species. A blue “+” indicates a raw intron gain in the corresponding organism, a red “−” indicates a raw intron loss in the corresponding organism, a green “+” indicates a conserved intron, and all other introns are indicated in black. Figure 2 Alignment Filtering Protocol (A) Schematic of filtering protocol applied to a ten-residue window on each side of every intron position. If either side failed the filter, the position was discarded. (B) Distributions of minimum percent identity and similarity in ten-residue windows around 181 randomly selected intron positions, for three manual classifications. The minima were taken between the left and right windows. The yellow lines indicate the chosen thresholds of at least 50% similarity and 30% identity, and bars are colored yellow if they fall above the thresholds (pass) or orange if they fall below the thresholds (fail). Parentheses indicate the number of introns in each class that pass the cutoff and the total number of introns in that class. The five lowest-percent identity and similarity bars, containing 77 positions, in the “non-homologous” plot are omitted so as to not obscure the rest of the histogram. Genome-Wide Characterization of Intron Conservation We applied our analysis protocol to 2,073 putative ortholog sets that included 9,352 intron positions. Of these initial intron positions, 5,811 were removed because of low conservation surrounding the intron, or because of an adjacent gap, or both. It is possible that some of the positions neighboring gaps may in fact reflect intron gain or loss events that occurred simultaneously with coding sequence insertion or deletion ( Llopart et al. 2002 ). 
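The window-based quality filter described above (ten-residue windows flanking each intron position, requiring at least 30% identity and 50% similarity, with the position discarded if either side fails) can be sketched as follows. The residue similarity groups below are an illustrative assumption; the paper does not specify the similarity criterion it used.

```python
# Assumed amino-acid similarity groups (BLOSUM-style grouping, for illustration only).
SIMILAR = [set("ILVM"), set("FYW"), set("KRH"), set("DE"), set("ST"), set("NQ"), set("AG")]

def similar(a, b):
    """Two residues are 'similar' if identical or in the same assumed group."""
    return a == b or any(a in g and b in g for g in SIMILAR)

def window_passes(cols, min_ident=0.30, min_sim=0.50):
    """cols: aligned residue columns (one tuple per alignment column, one
    residue per species) from a ten-residue window on one side of an intron."""
    ident = sim = 0
    for col in cols:
        if len(set(col)) == 1:
            ident += 1                                # fully identical column
        if all(similar(col[0], c) for c in col[1:]):
            sim += 1                                  # all residues pairwise similar
    n = len(cols)
    return ident / n >= min_ident and sim / n >= min_sim

def intron_position_passes(left_cols, right_cols):
    # A position is kept only if both flanking windows pass the filter.
    return window_passes(left_cols) and window_passes(right_cols)
```
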
However, removing these positions did not significantly impact our results, as the number of positions adjacent to gaps was only about one-tenth of the number of positions that passed the quality filter, and the removal of these introns did not alter the apparent positional bias of the overall distribution ( Figure 3 ). An additional 92 introns had nearby introns with insufficient conservation between the two introns and were thus also rejected. Figure 3 Positional Biases in Intron Gain and Loss Relative intron positions were defined as the number of bases in the coding sequence upstream of the intron divided by the total length of the coding sequence. These relative positions were binned into five categories (quintiles), each representing one-fifth of the coding sequence length (quintiles numbered 1–5 on the x-axis). (A) Introns passing quality filter (light blue, back) and introns adjacent to gaps in the protein alignment that were removed by our quality filter (orange, front). (B) Raw and inferred gains. Raw gains (green, back) are those introns present in exactly one organism (excluding the outgroup, A. nidulans ). Inferred gains (blue, front) are corrected for the estimated number of cases that arose by other combinations of gain and loss events. Inferred gains are thus slightly lower than raw gains. (C) Raw and inferred losses. Raw losses (green, front) are those introns absent in the organism in question but present in at least one of its siblings (descendants of its parent in the phylogenetic tree) and one of its cousins (non-descendants of its parent). Inferred losses (red, back) are corrected for the estimated number of introns lost along multiple lineages, or gained and then lost. Inferred losses are thus slightly higher than raw losses. (D) Number of introns gained (blue) and lost (red) since last common ancestor (losses shown as negative numbers). (E) Intron loss rate at each position since last common ancestor (introns lost per ancestral intron). 
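The relative-position binning defined in the figure caption above can be expressed directly; the helper name is hypothetical.

```python
def intron_quintile(upstream_bases, cds_length):
    """Bin an intron into one of five positional quintiles (1..5).

    The relative position is the number of coding bases upstream of the
    intron divided by the total coding sequence length; each quintile
    covers one-fifth of that range.
    """
    rel = upstream_bases / cds_length
    return min(int(rel * 5) + 1, 5)  # clamp rel == 1.0 into quintile 5
```
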
Error bars represent binomial standard deviation. In the end, a total of 3,450 intron positions (roughly 37% of intron positions considered) passed the quality filter. The complete set of aligned orthologs with passing and failing intron positions is provided in Table S1 . These data constitute a genome-wide survey of high-confidence aligned intron positions and their patterns of conservation over at least 330 million years of evolution. An example of an alignment of putative orthologs with three passing intron positions is shown in Figure 4 A. In each passing intron position (black-edged rectangles), individual introns are labeled according to the classes previously outlined in Figure 1 B. One intron position is conserved across all four species (green rectangle), one is a raw gain in N. crassa (blue box), and the third is present only in A. nidulans, and, because of the ambiguity in inferring gain or loss in this case, is classified as “Other” (black-edged gray rectangle). Examining the region around the one raw gained intron in N. crassa at the nucleotide level ( Figure 4 B) reveals a clean insertion of the intron sequence within a highly conserved region. The gained intron has consensus terminal dinucleotides (GT…AG) and a putative branch point sequence that matches the yeast consensus (
TACTAAC) at six of seven positions. In addition, this set of orthologs contained one poorly aligned intron position ( Figure 4 A, unedged gray rectangle) that was excluded by our filters. All three passing positions (black-edged rectangles) display high amino acid sequence conservation on both sides flanking the intron, supporting the correctness of the alignment. In contrast, the failing intron position (unedged gray rectangle) is adjacent to a region of the alignment that lacks significant conservation. The 3′ flank of this intron position displays considerable variation, especially with respect to the M. grisea gene, which was predicted to have a much longer 3′ coding region. In such an alignment region, it is difficult to distinguish true differences in intron conservation from potential annotation or alignment errors. Our filtering process thus eliminated this position from further analysis. Figure 4 Example Ortholog Alignment (A) Alignment of protein sequences for orthologs MG04228, NCU05623, FG06415, and AN1892 with intron characters inserted. “0,” “1,” and “2” indicate the phase of an intron. A black-edged rectangle indicates an intron position passing our quality filters; an unedged gray rectangle indicates an intron position that was removed by our filter. The green rectangle indicates conserved introns, the blue box marks a raw intron gain, and the gray boxes within black-edged rectangles highlight all other introns. The consensus (bottom) line characters are as follows: asterisk, identical residue in all four sequences; colon, similar residue; and period, neutral residue. (B) Nucleotide alignment of the region flanking the gained intron in (A). Putative 5′ and 3′ splice sites and a branch point sequence are highlighted in blue. Calculation of Raw Gains and Losses We calculated “raw gains” and “raw losses” by positional quintile for each organism other than the outgroup, A. nidulans . 
We defined raw gains as those introns present in only a single organism (see Figure 1 B). We defined raw losses as those introns that are absent in the organism in question, present in some other descendant of the organism's parent (a “sibling”), and present in some non-descendant of the parent (a “cousin”) ( Figure 1 B). Intron positions are considered conserved if present across all four organisms. Patterns of intron presence and absence that are not captured by the above definitions were excluded from the raw counts because of the ambiguity in inferring intron gain or loss events in such cases (marked as “Other” in Figure 1 B). Probabilistic Model of Intron Gain and Loss Raw gain and loss counts are based on parsimony and may differ somewhat from the true number of gain and loss events. The set of raw gains may include introns that were lost in multiple lineages, thus overcounting the true number of gains in a given lineage. Similarly, the set of raw losses excludes introns lost in the given organism and also lost in all cousins or siblings (marked as “Other” in Figure 1 B). We used a probabilistic model to correct for these inaccuracies. Our model assumes that all loss and gain events occur independently and uniformly within each quintile. In particular, we assume Dollo's postulate ( Dollo 1893 ): any introns that align to the same position must have a common ancestor (no “double gains”), as in Nei and Kumar (2000) and Rogozin et al. (2003) . Our method differs from the Dollo parsimony method described in Farris (1977) and applied in Rogozin et al. (2003) in that we do not artificially minimize loss events by assuming that gains occurred at the latest possible point in evolution. It also differs in that we allow different branches of the phylogenetic tree to have different rates of loss and gain. We applied our method separately to each of the five positional quintiles for each organism other than the outgroup, A. nidulans . 
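Following the definitions above and the tree of Figure 1 (M. grisea and N. crassa as siblings, F. graminearum their nearest relative, A. nidulans as the outgroup), the raw classification of a presence/absence pattern can be sketched as follows; the encoding and function name are illustrative.

```python
# Sibling/cousin relations implied by the assumed tree ((M, N), F), A.
SIBLINGS = {"M": {"N"}, "N": {"M"}, "F": {"M", "N"}}
COUSINS = {"M": {"F", "A"}, "N": {"F", "A"}, "F": {"A"}}

def classify(present, org):
    """present: set of species codes (A, F, M, N) containing an intron at
    one aligned position.  org: the in-group organism being classified.
    Returns 'conserved', 'raw_gain', 'raw_loss', or 'other'."""
    if present == {"A", "F", "M", "N"}:
        return "conserved"
    if present == {org}:                      # present in exactly one in-group organism
        return "raw_gain"
    if (org not in present
            and present & SIBLINGS[org]       # present in some sibling
            and present & COUSINS[org]):      # and in some cousin
        return "raw_loss"
    return "other"                            # ambiguous patterns are excluded
```
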
First we estimate two types of intron loss rates. The organismal loss rate, q, is calculated by dividing the number of raw losses in an organism by the total number of introns present in at least one sibling and at least one cousin. This represents the fraction of introns in the parent that did not survive to the present day in that organism. For instance, the organismal loss rate in F. graminearum is given by q = (AM + AN + AMN) / ((AM + AN + AMN) + (AFM + AFN + AFMN)) (1) where AM, for example, represents the number of intron positions with an intron present in A. nidulans (A) and M. grisea (M) but absent from F. graminearum (F) and N. crassa (N). The sibling loss rate, r, is defined for a given organism as the fraction of introns in the parent that did not survive in any sibling. We define "sibling raw losses" for an organism as the number of introns that are present in the organism and at least one cousin but in no sibling. This quantity is then divided by the number of introns present in that organism and at least one cousin to give the sibling loss rate. For example, the sibling loss rate for F. graminearum is given by r = (AF) / (AF + AFM + AFN + AFMN). We next correct the raw gains for each organism. Raw gains include some introns that were in fact lost in all but one lineage. We use the loss rates to calculate the expected number of these multiple losses, m, and subtract this quantity from the raw gains to obtain "inferred gains." To calculate m we first count B0, the number of introns conserved in the organism and at least one sibling, but in no cousin. The quantities m and B0 are related through the variable n0, the number of introns present in an organism's parent but not in any cousin, by the equations m = n0 r (1 − q) and B0 = n0 (1 − r)(1 − q). This follows from our assumption of independent gains and losses. Thus, we can calculate the expected number of multiple losses as m = B0 r / (1 − r).
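The estimates of q, r, and m above, together with the parental-intron estimate derived next, can be collected into one routine. The pattern counts and function name below are invented for illustration, following the F. graminearum formulas in the text.

```python
def corrected_counts(c, raw_gains):
    """Corrected gain/loss estimates for F. graminearum.

    c: dict mapping presence patterns (e.g. 'AM', 'AFMN') to counts,
    using the species codes A, F, M, N as in the text.
    """
    # Organismal loss rate, equation (1).
    lost = c["AM"] + c["AN"] + c["AMN"]
    kept = c["AFM"] + c["AFN"] + c["AFMN"]
    q = lost / (lost + kept)
    # Sibling loss rate: present in F and a cousin, but in no sibling.
    r = c["AF"] / (c["AF"] + kept)
    # Multiple losses hiding inside the raw gains: m = B0 * r / (1 - r),
    # with B0 counting introns present in F and a sibling but no cousin.
    B0 = c["FM"] + c["FN"] + c["FMN"]
    m = B0 * r / (1 - r)
    inferred_gains = raw_gains - m
    # Parental introns: those observable in some cousin (B1) plus the rest.
    B1 = lost + kept + c["AF"]
    n0 = B0 / ((1 - r) * (1 - q))   # parental introns absent from all cousins
    n1 = B1 / (1 - q * r)           # parental introns present in some cousin
    inferred_losses = (n0 + n1) * q
    return q, r, inferred_gains, inferred_losses
```
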
We use the loss rates to estimate the number of introns in each organism's parent. To do so, we estimate separately the number of parental introns present in at least one cousin, n1, and the number not present in any cousin, n0 (introduced above). To estimate the size of the set of parental introns present in at least one cousin, we first count the subset of these introns that are presently observable. An intron is in this set if it is present in at least one cousin and at least one sibling, or is present in at least one cousin and in the organism in question. We call this number of introns B1. By the assumption that gains and losses are independent, we have B1 = n1 (1 − qr). Using this relation and the relation B0 = n0 (1 − r)(1 − q) above, we calculate the number of introns in the phylogenetic parent as n_total = n0 + n1 = B0 / ((1 − r)(1 − q)) + B1 / (1 − qr). Finally, we correct raw losses. Our definition of raw losses undercounts the true number by omitting those introns not conserved in at least one cousin and at least one sibling. Taking F. graminearum as an example, the true number of losses would also include some introns conserved in the patterns A, M, N, and MN. We calculate the number of inferred losses as n_total q. This method can be extended to any phylogenetic tree and to any organism with at least one cousin. Abundance of Intron Gains One immediate conclusion stemming from our analysis is the importance of intron gain. A summary of all raw and inferred gains and losses is shown in Figure 3 . Substantial numbers of gained introns were observed in all three organisms—more than 100 independent inferred gains in each lineage, with over 200 in M. grisea ( Figure 3 B). The total numbers of gains that have occurred in each genome are likely to be substantially higher, since only predicted orthologs in all four species were considered, and roughly a third of the introns in these genes passed our quality filters.
Differences in intron dynamics between lineages are also apparent, with the numbers of gained and lost introns approximately balanced in M. grisea and F. graminearum, but with roughly twice as many losses as gains in N. crassa ( Figure 3 D). It is thus apparent from these data that the process of intron gain plays a significant role in intron evolution. 1-Phosphoribosyl-5-Pyrophosphate Synthetase Genes Display Lineage-Specific Increases in Intron Gain Rate A striking example of intron gain occurs in a set of putative orthologous 1-phosphoribosyl-5-pyrophosphate (PRPP) synthetase genes. These genes encode a widely conserved protein that catalyzes the production of PRPP, a precursor in the nucleotide biosynthesis pathway. In contrast to the majority of orthologs that displayed fewer than two gained introns, the set of PRPP synthetase genes displayed a total of 22 raw gains ( Figure 5 A, blue boxes) that passed our alignment quality filters: six in N. crassa, 14 in M. grisea, and two in F. graminearum. The number of raw gains in the PRPP synthetase genes in M. grisea and N. crassa was significantly higher ( p < 3 × 10 −22 and p < 4 × 10 −9 , respectively) than the average for other genes analyzed, resulting in unusually large numbers of introns in these genes ( Figure 5 B). In comparison, the numbers of introns in PRPP synthetase genes in available animal genomes were within the typical range for the respective organisms, e.g., five in C. elegans, and six in fruitfly, human, mouse, rat, and Fugu . Thus the rate of intron gain for the PRPP synthetase gene in some fungi is unusually high. This gene represents an extreme example of the impact of intron gain and illustrates the variability of gain rates in different lineages. Figure 5 Intron Conservation in the PRPP Synthetase Gene (A) Alignment of PRPP synthetase putative orthologs MG07148, NCU06970, FG09299, and AN1965. 
A black-edged rectangle indicates an intron position passing our quality filters, whereas an unedged gray rectangle indicates an intron position that was removed by our filter. Blue boxes mark raw intron gains, red boxes indicate raw intron losses, and gray boxes within black-edged rectangles highlight all other introns. We manually corrected an annotation error in the first intron of the last row of the alignment. (B) Phylogenetic conservation pattern of introns in the PRPP synthetase gene. Each passing intron position was categorized as being present in A. nidulans (A), F. graminearum (F), M. grisea (M), N. crassa (N), A. nidulans and N. crassa (AN), F. graminearum and M. grisea (FM), or all four organisms (AFMN). There are no passing cases of conservation in three or four species. The number of introns in each category is shown with a purple line. The black error bar plot shows the mean and standard deviation for each category for all 2,008 ortholog sets after fitting to a Poisson distribution ( see Materials and Methods ). The number of introns in M. grisea and N. crassa is significantly higher, at the p < 1 × 10 −9 level. Fungal Introns Display Phase Bias, but Lack Observable Sequence Preference For each in-group lineage (M. grisea, N. crassa, F. graminearum), we determined the frequency of phase 0, 1, and 2 introns in the set of all intron positions ( Table 1 ). In contrast to recent reports based on a much smaller sample size indicating that phase frequencies for extant fungal introns do not differ significantly from a uniform distribution ( Qiu et al. 2004 ), our genome-wide dataset demonstrates a clear bias for phase 0 introns in each of the three fungal in-group lineages examined ( p < 4 × 10 −9 for N. crassa and p < 1 × 10 −12 for M. grisea and F. graminearum; in Table 1 , "all passing," and similar biases were seen in the unfiltered set).
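A significance check of the kind used for the phase distributions can be sketched with a plain chi-square statistic against a uniform expectation over the three phases (two degrees of freedom, critical value about 9.21 at p = 0.01). The counts in the usage note are invented for illustration, not the paper's data.

```python
def chi_square_uniform(counts):
    """Chi-square statistic for observed counts against a uniform expectation."""
    total = sum(counts)
    expected = total / len(counts)
    return sum((obs - expected) ** 2 / expected for obs in counts)

def biased_at_p01(counts, critical=9.21):
    """True if three phase counts deviate from uniform at p < 0.01 (df = 2)."""
    return chi_square_uniform(counts) > critical
```

For example, `biased_at_p01([500, 300, 300])` is true, while `biased_at_p01([340, 330, 330])` is not.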
The phase distributions of raw gains and raw losses for each of the three organisms are not significantly different from a uniform distribution at p < 0.01; however, the datasets for these subclasses were much smaller ( Table 1 ). Finally, we examined the exon sequences flanking gained introns, and observed no clear sequence bias ( Table 2 ). Table 1 Intron Phase Distribution for Filtering and Conservation Classes a Significantly different from uniform distribution, at p < 0.01 b One intron removed because of phase discrepancy across N. crassa, M. grisea, and F. graminearum. Fg, F. graminearum; Nc, N. crassa ; Mg, M. grisea. Table 2 Exonic Nucleotide Composition near Introns a Four nucleotides were extracted upstream and downstream of each intron in the specified organism b Four nucleotides were extracted upstream and downstream of the orthologous site in the other two organisms, consistent with method of Qiu et al. 2004 Fg, F. graminearum; Nc, N. crassa ; Mg, M. grisea. Absence of 3′ Bias in Intron Losses To determine whether the pattern of intron loss in these fungi might account for the observed bias in intron position, we examined the pattern of loss as a function of position within the gene (see Figure 3 E). Contrary to what would be expected if intron loss primarily involved homologous recombination of poly-adenosine-primed reverse transcripts, the rate of intron loss tends to be lower, rather than higher, at the 3′ ends of genes. Moreover, the highest rates of intron loss occur in the middles of genes in all three organisms. We found no evidence that this pattern was affected by our filtering methods. These findings suggest either other mutational mechanisms (e.g., reverse transcription primed internally) or the presence of selective pressure to preferentially conserve introns near the 5′ and 3′ ends of genes. Discussion We developed a system that automatically identifies evolutionary and positional patterns of intron conservation on a genome-wide scale. 
The core of the system is a process for stringently filtering alignments of orthologous genes to exclude potential annotation or alignment errors. The result of the filtering process is a high-confidence set of aligned intron positions. Differences in intron conservation at each individual position can be characterized as gains or losses (or ambiguous) based on parsimony. However, this does not accurately account for the possibility of multiple gain or loss events. We have developed a probabilistic model that allows for multiple events, providing a corrected estimate of the total number of gains and losses within the dataset. Our probabilistic method allows for a more accurate assessment of rates of gain and loss. In our dataset, allowing for multiple events results in only modest corrections to the rates estimated using parsimony. Our analysis demonstrates a significant role for intron gain over the past few hundred million years in the fungi analyzed. Previous analyses of specific gene families have provided evidence of specific instances of gained introns ( Logsdon et al. 1998 ; Robertson 2000 ; Hartung et al. 2002 ; Qiu et al. 2004 ). However, the relative importance of intron gain versus loss is not well understood. Recent large-scale analyses have suggested that intron gain may play a predominant role in shaping gene structures ( Qiu et al. 2004 ), although lineage-specific differences are apparent ( Rogozin et al. 2003 ). In particular, intron gain appears to occur rarely if at all in mammalian genes ( Roy et al. 2003 ). Our data suggest that intron gain is a significant driving force in the evolution of genes in fungi. In F. graminearum and M. grisea the number of introns gained was on par with the number lost and similar in magnitude to the number of introns gained in N. crassa . The mechanisms underlying intron gain are not known. We analyzed the set of predicted intron gains for possible signatures that might shed light on this process. 
No statistically significant bias was detected in the positions of gained introns along the coding sequence (Figure 3; data not shown). Similarly, no preferred insertion site sequence was detectable (Table 2), and no significant phase bias for gained introns was observed (Table 1). The lack of an insertion-site preference and the absence of a significant phase bias for gained introns in fungi are consistent with previous investigations and may set fungi apart from other organisms (Qiu et al. 2004). Our data further indicate that intron gain can vary substantially between different gene families in a lineage-specific fashion. The PRPP synthetase gene is a particularly striking example, exhibiting significant increases in gained introns in two of the four lineages investigated. Moreover, the paucity of intron positions shared between N. crassa and M. grisea suggests the possibility of independent increases in gain rate in the two species. Alternatively, the apparent high intron gain rate exhibited by this gene may have arisen just prior to the last common ancestor of N. crassa and M. grisea. Although it is premature to speculate about possible mechanisms, one possibility is that a factor or factors responsible for intron insertion evolved to associate with the PRPP synthetase gene locus, transcript, or message at this point, leading to a higher rate of intron insertion in this gene. Finally, our results do not support the mechanism commonly proposed to account for the 5′ positional bias of introns in intron-poor organisms (Mourier and Jeffares 2003). Contrary to what would be expected if intron loss primarily involved recombination of poly-adenosine-primed reverse transcripts, the rate of intron loss tends to be lower at the 3′ ends of genes. Instead, the highest rates of intron loss occur in the middles of genes in all three organisms. (This result is consistent with the results of Roy et al. (2003) in their analysis of intron evolution in mammals.
Although their report describes only six instances of loss, in each case it was an internal intron.) The preference for internal introns may reflect a process of reverse transcription primed internally. Alternatively, there may be pressure to preferentially conserve introns near the 5′ and 3′ ends of genes. In particular, there is strong evidence for a functional role for the 5′-most intron in many genes. What remains clear is that the pattern of loss in these fungi over the last 330 million years cannot be explained solely by a mechanism involving 3′-end-primed reverse transcription of spliced messages. Instead, fungal intron dynamics appear to reflect a more complex interplay between intron gain and loss, an interplay that is likely to shape intron evolution in other eukaryotes.

Materials and Methods

Sequences and annotations

All sequences and annotations were taken from the Broad Institute Fungal Genome Initiative website (http://www.broad.mit.edu/annotation/fungi/fgi). The following datasets were used: A. nidulans (Assembly 1, 18 February 2003), N. crassa (Assembly 3, 1 February 2001), F. graminearum (Assembly 1, 11 March 2003), and M. grisea (Assembly 2, 18 July 2002).

Ortholog identification

A group of four proteins, one from each organism, was considered an ortholog set if each pair was a pairwise best bidirectional BLAST hit in the respective genomes, and all the BLAST hits overlapped by at least 60% of the length of the longest protein. This yielded 2,073 sets of orthologs (out of an average of 10,500 genes in the four organisms). We repeated our analysis, requiring that each best bidirectional hit also be the only BLAST hit in each genome (spanning 60% of the length of the longest protein). This protocol yielded only 1,178 ortholog sets, but gave qualitatively similar results for intron gains and losses (Figure S1).

Ortholog alignment

The proteins in each ortholog set were aligned using ClustalW 1.82 (Chenna et al.
2003), and intron position characters were inserted into the alignments, using "0," "1," or "2" to indicate the intron phase. Phase 0 intron characters were inserted between the amino acids coded for by the codons adjacent to that intron, and phase 1 and 2 intron characters were inserted immediately following the amino acid coded for by the codon interrupted by the intron. If an intron was not present in all the sequences at a given position, special intron gap characters were inserted in the other sequences in order to maintain the downstream amino acid alignment. A total of 9,352 intron positions were aligned. At only 28 (0.3%) of these positions were introns of different phases aligned, making it reasonable to ignore "phase shifting" in our analysis.

Alignment filtering

Regions of low alignment quality were eliminated with a filter that required at least 30% identity and 50% similarity in a window of ten residues on each side of the intron position. These parameters were determined following manual classification of a set of 181 randomly selected intron positions as "clearly homologous," "ambiguous/possibly homologous," or "non-homologous" (see Figure 2B). Using these parameters, 92% of the homologous positions, 29% of the ambiguous positions, and only 2% of the non-homologous positions passed the filter. To further exclude likely annotation and alignment errors, intron positions were also filtered by eliminating positions adjacent to gaps in the amino acid alignment and by eliminating positions with nearby introns but low evidence of homology in the intervening sequence. It is possible that some of these positions may in fact reflect intron gain or loss events that occurred simultaneously with coding sequence insertion or deletion.
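The window filter described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the similarity groups, function names, and gap handling are assumptions, since the paper does not state which similarity matrix it used.

```python
# Minimal sketch of the alignment-quality filter described above
# (>= 30% identity and >= 50% similarity in a ten-residue window on
# each side of the intron position). The similarity groups below are
# an assumption; the paper does not specify its similarity matrix.

SIMILAR_GROUPS = ("AGST", "ILVM", "FYW", "DENQ", "KRH", "C", "P")

def same_group(a, b):
    """True if two residues fall in the same (assumed) similarity group."""
    return any(a in g and b in g for g in SIMILAR_GROUPS)

def window_ok(seq1, seq2, start, win=10, min_id=0.30, min_sim=0.50):
    """Test one flanking window of two aligned sequences; gaps count
    as mismatches."""
    pairs = list(zip(seq1[start:start + win], seq2[start:start + win]))
    if not pairs:
        return False
    n = len(pairs)
    ident = sum(a == b and a != "-" for a, b in pairs) / n
    simil = sum(a != "-" and b != "-" and same_group(a, b)
                for a, b in pairs) / n
    return ident >= min_id and simil >= min_sim

def intron_position_passes(seq1, seq2, col, win=10):
    """An intron column passes only if both flanking windows pass."""
    return (window_ok(seq1, seq2, max(0, col - win), win)
            and window_ok(seq1, seq2, col, win))
```

With the paper's thresholds, a position passes only if both flanking ten-residue windows clear the 30% identity and 50% similarity cutoffs, so a single poorly aligned flank is enough to reject the position.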
However, removing these positions did not significantly impact our results, as the number of positions adjacent to gaps was only about one-tenth of the number of positions that passed the quality filter, and the introns removed did not have an apparent positional bias (see Figure 3A).

Statistical significance of high gain rate in PRPP synthetase

We modeled the number of gains in a particular organism as a Poisson distribution under two different null hypotheses. One null hypothesis was that the gains were spread uniformly across all genes. The other was that the number of gains in each gene was proportional to the length of the gene. In the first case the Poisson parameter λ is given by the total number of raw gains observed in that organism divided by the total number of ortholog sets (p < 3 × 10^−22 for M. grisea, p < 4 × 10^−9 for N. crassa, and p < 0.007 for F. graminearum). In the second case λ is given by the total number of raw gains observed in that organism multiplied by the length of that gene in amino acids and divided by the total number of amino acids in all genes in that organism (p < 7 × 10^−25 for M. grisea, p < 3 × 10^−10 for N. crassa, and p < 0.003 for F. graminearum). We reported the less significant of the two p-values in the results.

Analysis of intron gain phase and sequence preference

For each of the three in-group lineages, the frequency of phase 0, 1, and 2 introns was determined for five different datasets: for each class of conservation (conserved, raw gains, and raw losses), for all introns passing our filter, and for all introns in the ortholog set. The p-value for the significance of phase 0 bias was determined by the χ² test with two degrees of freedom using equal expected phase frequencies. To detect sequence bias at intron insertion sites, we examined gained introns separately in F. graminearum, M. grisea, and N. crassa.
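The first (uniform) null model above can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the gain and gene counts in the example are made-up placeholders, and only the upper-tail Poisson probability is shown.

```python
# Sketch of the uniform null model described above: gains fall on genes
# as a Poisson process with lambda = (total raw gains) / (ortholog sets),
# and a gene's p-value is the upper tail P(X >= k). The counts below are
# illustrative placeholders, not the paper's totals.

from math import exp

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam), via the complement of the CDF."""
    if k <= 0:
        return 1.0
    term = exp(-lam)          # P(X = 0)
    cdf = term
    for i in range(1, k):
        term *= lam / i       # P(X = i)
        cdf += term
    return max(0.0, 1.0 - cdf)

# e.g. 120 raw gains spread over 2,073 ortholog sets (120 is made up):
lam = 120 / 2073
p_value = poisson_sf(4, lam)   # chance of >= 4 gains in one gene
```

The length-proportional null differs only in λ, which is scaled by the gene's share of the total amino acid count; the paper then reports the less significant of the two p-values.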
For each gained intron, we extracted four bases upstream and downstream of the orthologous sites in the other two sequences, consistent with Qiu et al. (2004). The results are shown in Table 2.

Supporting Information

Figure S1. Intron Gains and Losses Inferred from Best-Only BLAST Hit Orthologs. Positional biases in intron gain, loss, and current distribution in three fungal genomes, determined using orthologs predicted by a "bidirectional only hit" method. (A), (B), and (C) are roughly analogous to (D), (E), and (A), respectively, in Figure 3. (78 KB DOC)

Table S1. Database of Alignments of All 1,447 Ortholog Sets with at Least One Passing Intron Position. Also available at http://genes.mit.edu/NielsenEtAl/. (4.3 MB ZIP)
Presence and rehabilitation: toward second-generation virtual reality applications in neuropsychology

Abstract

Virtual Reality (VR) offers a blend of attractive attributes for rehabilitation. The most exploited is its ability to create a 3D simulation of reality that can be explored by patients under the supervision of a therapist. In fact, VR can be defined as an advanced communication interface based on interactive 3D visualization, able to collect and integrate different inputs and data sets in a single real-like experience. However, "treatment is not just fixing what is broken; it is nurturing what is best" (Seligman & Csikszentmihalyi). For rehabilitators, this statement supports the growing interest in the influence of positive psychological states on objective health care outcomes. This paper introduces a bio-cultural theory of presence linking the state of optimal experience defined as "flow" to the virtual reality experience. This suggests the possibility of using VR for a new breed of rehabilitative applications focused on a strategy defined as transformation of flow. In this view, VR can be used to trigger a broad empowerment process within the flow experience induced by a high sense of presence. The link between its experiential and simulative capabilities may transform VR into the ultimate rehabilitative device. Nevertheless, further research is required to explore in more depth the link between cognitive processes, motor activities, presence and flow.

Introduction

What is virtual reality (VR)? For many health care professionals, VR is first of all a technology. Since 1986, when Jaron Lanier used the term for the first time, VR has usually been described as a collection of technological devices: a computer capable of interactive 3D visualization, a head-mounted display and data gloves equipped with one or more position trackers. But is this definition enough to describe the potential of VR for rehabilitation?
If we look at its actual applications in rehabilitation, the answer is probably not [1]. VR can be considered the leading edge of a general evolution of present communication interfaces like television, computer and telephone [2]. The main characteristic of this evolution is the full immersion of the human sensorimotor channels into a vivid and global communication experience [3]. In fact, VR is used in rehabilitation as "an advanced form of human-computer interface that allows the user to interact with and become immersed in a computer-generated environment in a naturalistic fashion" [4]. Following this vision, Rizzo et al. [5] identify twelve assets that VR offers for neuropsychological applications:

• The capacity to systematically deliver and control dynamic, interactive 3D stimuli within an immersive environment that would be difficult to present using other means.
• The capacity to create more ecologically valid assessment and rehabilitation scenarios.
• The delivery of immediate performance feedback in a variety of forms and sensory modalities.
• The provision of "cueing" stimuli or visualization tactics designed to help guide successful performance to support an error-free learning approach.
• The capacity for complete performance capture and the availability of a more naturalistic/intuitive performance record for review and analysis.
• The capacity to pause assessment, treatment and training for discussion and/or integration of other methods.
• The design of safe testing and training environments that minimize the risks due to errors.
• The capacity to improve the availability of assessment and rehabilitation to persons with sensorimotor impairments via the use of adapted interface devices and tailored sensory modality presentations built into VE scenario design.
• The introduction of "gaming" features into VR rehabilitation scenarios as a means to enhance motivation.
• The integration of virtual human representations (avatars) for systematic applications addressing social interaction.
• The potential availability of low-cost libraries of VEs that could be easily accessed by professionals.
• The option for self-guided independent testing and training by clients when deemed appropriate.

In summary, VR provides a new human-computer interaction paradigm in which users are no longer simply external observers of images on a computer screen but are active participants within a computer-generated three-dimensional virtual world [6-8]: in the virtual environment (VE) the patient has the possibility of learning to manage a problematic situation related to his/her disturbance in a functionally relevant, ecologically valid experience [9,10]. This outline better clarifies the possible role of VR in rehabilitation: a communication interface based on interactive 3D visualization, able to collect and integrate different inputs and data sets in a single real-like experience [2,11]. This is possible because the key characteristic of VR, differentiating it from other media or communication systems, is the sense of presence [12], usually defined as the "sense of being there" [13], or the "feeling of being in a world that exists outside the self" [14]. This feeling is theorized to contribute to the efficacy of VR as a rehabilitation tool: the successful use of VR exposure therapy for phobias [15-18] and posttraumatic stress disorders [19-21], and the pain reduction obtained in burn patients during a VR session [22-25], underline the possible role that a high level of presence, elicited by the VR experience, may have in the rehabilitation process. Thanks to presence, not only is knowledge acquisition possible in VR, but this acquired knowledge can also be transferred to a real environment [26,27].
This evidence, coming also from different neuropsychological studies [1], adds value to the use of VR in the rehabilitation of cognitive functions whose impairment is highly socially disabling, highlighting how goals reached in controlled settings may be transferred to patients' everyday life. Now, the challenge within this area is the creation of new paradigms [28]. As clearly underlined by Morganti [1]: "More than a playing tool supporting cognitive or motor performances VR simulation has to provide a powerful chance to build personal meaning, map and strategies interacting with it." Within this context we propose to investigate the impact of VR on rehabilitation and subjective experience from a theoretical perspective that stresses the active role of individuals in interacting with their natural and cultural environment [29,30]. In this process a key role is played by the concept of "presence" and its link with our optimal experiences.

A bio-cultural theory of presence

Presence as separation between "external" and "internal"

What is presence? Answering this question is not a simple task [12,31,32]. In fact, if we check the present status of presence research, we can find two different but coexisting visions. A first group of researchers describes the sense of presence as a function of our experience of a given medium [33-41]. The main result of this approach is the definition of presence as the perceptual illusion of non-mediation [37], produced by means of the disappearance of the medium or its transformation into a social entity. Following this vision, the experience of presence is related only to our interaction with an external artifact. The main advantage of this approach is its predictive value: the level of presence is reduced by the experience of mediation during the interaction. This approach, however, does not address some broader questions: What is presence for? Is it a specific cognitive process? What is its role in our daily experience?
To answer these questions a second group of researchers considers presence as a neuropsychological phenomenon, evolved from the interplay of our biological and cultural inheritance, whose goal is the control of agency [2,12,42-49]. Within this paper, we will support the second vision, trying to link it with the outcome of the first one [50]. Here, presence is delineated as an evolved bio-cultural internal selection mechanism that helps the self in organizing the streams of sensory data: the more it can differentiate the self from the external world, the greater is our experience of presence. The main goal of this differentiation is the control of agency to improve the possibility of survival within the external environment. To fully understand the key ideas behind this vision, three points are critical:

– Presence has a simple but relevant role in our everyday experience: the control of agency through the unconscious separation of "internal" and "external". The meaning of "internal" and "external" is related not only to the body but also to the social and cultural space (situation) in which the self is situated;
– The presence-as-process (the separation mechanism) produces, but is different from, the presence-as-feeling (the experienced level of presence);
– The presence-as-feeling is experienced indirectly by the self through the characteristics of action and experience. In fact the self perceives directly only the variations in the level of presence-as-feeling: breakdowns and optimal experiences.

First, presence is described here as a defining feature of the self, related to the evolution of a key feature of any central nervous system: the embedding of sensory-referred properties into an internal functional space. As noted by Waterworth and Waterworth [47], the appearance of the sense of presence allows the nervous system to solve a key problem for its survival: how to differentiate between internal and external states.
Without the emergence of the sense of presence it is impossible for the nervous system to experience distal attribution – the referencing of our perception to an external space beyond our boundaries – and to effectively control its agency. In this vision it is important to distinguish between presence-as-process and presence-as-feeling. The presence-as-process is the continuous activity of the brain, organized around the three functionally and phylogenetically different layers discussed in the next paragraph, in separating "internal" and "external" within different kinds of afferent and efferent signals. A critical point here is to explain why we need to introduce a new cognitive process – presence-as-process – to monitor our activity. The answer comes from a recent paper by de Vignemont and Fourneret [51]. These authors discuss the position of Wittgenstein [52] about agency. According to Wittgenstein, agency involves a primitive notion of the self as subject, which does not rely on any prior perceptual identification and which is immune to error through misidentification. However, both the neuroscience of action and the neuropsychology of schizophrenia counter this position [53]. For instance, the analysis of the deficits underlying positive symptoms in schizophrenia has shown that it is not possible to reduce the sense of agency to action control or action awareness. To overcome this problem, de Vignemont and Fourneret distinguish within agency between the sense of initiation and the sense of one's own movements. As they underline [51], "the double sense of agency depends on the same mechanisms of action control: it results from the unconscious comparison between different kinds of afferent and efferent signals. Therefore, these monitoring systems allow one to automatically distinguish one's own actions and those of the other" (p. 15).
So, presence-as-process can be described as a sophisticated form of monitoring of action and experience, transparent to the self but critical for its existence. As clarified by Russell [54]: "Action-monitoring is a subpersonal process that enables the subjects to discriminate between self-determined and world-determined changes in input. It can give rise to a mode of experience (the experience of being the cause of altered inputs and the experience of being in control) but it is not itself a mode of experience." (p. 263). For this reason, the presence-as-feeling (level of presence) is not separated from the experience of the subject but is related to the quality of our actions. It corresponds to what Heidegger [55] defined as "the interrupted moment of our habitual standard, comfortable being-in-the-world". In fact, a higher level of presence-as-feeling is experienced by the self as a better quality of action and experience [46,56]. However, the self becomes aware of the presence-as-feeling as separate from our being-in-the-world only when its level is modified. More in detail, the self perceives directly only the variations in the level of presence-as-feeling: breakdowns and optimal experiences. The process of adaptation to the natural environment provided humans with specific biological features, such as the upright position, the opposable thumb and the increase in brain mass, that allowed survival and reproduction in any environmental niche. At the same time, by means of the differential investment of attention and psychic resources, the individual selects and organizes the information acquired from his/her context according to an emergent, autonomous criterion: the quality of experience [57,58]. In our view, another evolutionary goal of presence-as-process is to track the quality of experience, identifying highs and lows. On one side we have optimal experiences.
According to Csikszentmihalyi [59,60], individuals preferentially engage in opportunities for action associated with a positive, complex and rewarding state of consciousness, defined as optimal experience or flow. Here we suggest that flow is the result of the combination of the highest level of presence-as-feeling with a positive emotional state. In fact, it is also possible to experience the highest level of presence together with negative emotional states: e.g. on the battlefield during an attack by the enemy. On the other side we have breakdowns. Winograd and Flores [61] refer to presence disruptions as breakdowns: a breakdown occurs when, during our activity, an object or an environment becomes part of our consciousness. If this happens, we shift our attention from the action to the object/environment to cope with it: e.g., when a wall stops our movement. Why do we experience these breakdowns? Our hypothesis is that breakdowns are a sophisticated evolutionary tool used by the presence-as-process to control the quality of experience: the more severe the breakdown, the lower the level of presence-as-feeling, the lower the quality of experience, and the lower the possibility of surviving in the environment. The importance of breakdowns for understanding presence is well reflected by Slater's concept of the "break in presence" (BIP) [62]: a break in presence is the moment of switch between responding to signals with a source in environment X and responding to those with a source in environment Y. In a BIP the critical issues are: how will the actor act? To which set of signals will the actor respond? The answers to these questions are related to another important point of our vision: the meaning of "internal" and "external". In our vision, the boundaries are not only physical and related to our body (being there), but also social and cultural (making sense there).
As underlined by Slater [63], presence "is the total response (italics in the original) to being in a place, and to being in a place with other people. The 'sense of being there' is just one of many signs of presence – and to use it as a definition or a starting point is a category error: somewhat like defining humor in terms of a smile" (p. 7). If, in relatively simple organisms, this separation involves only a correct coupling between perceptions and movements, in humans it also implies the relation of the subject with a social and cultural space [44,64]. In fact, individuals actively interact with the environment, selecting and differentially replicating throughout their lives a subset of biological and cultural information, in terms of activities, interests and values. This vision has two important corollaries:

– what is not related to the subject's activities, interests and values is also "external" to the subject;
– to be more "present" in the situation (social and cultural space) defined by a symbolic system, the user has to be aware of its meaning. Only by "making sense there" does the user really experience a full sense of presence.

In giving sense to a situation an important role is usually played by narratives [56,65]. To make these concepts clearer, an example may help. I'm in a restaurant for a formal dinner with my boss and some colleagues, but I don't know how to use the many different strange forks arranged around my dish. In this situation I'm physically there, but the lack of knowledge puts me, at least partially, outside the social and cultural space of the "formal dinner". The result is a limitation in my agency: I don't use the forks, to avoid mistakes. This example shows clearly how both physical boundaries (walls, obstacles, etc.) and social and cultural boundaries have a strong influence on the possibility of action and the quality of experience of the subject.
At this point our conclusions are:

• In the real world, the feeling of presence is not the same in all situations but differs in relation to the characteristics of the social and cultural space the subject is in. For instance, if I'm attending a lecture at university, my level of presence can be lower or higher in relation to the interest I have in the topic discussed. If the lecture is totally boring I can be "absent" (totally internal): in absence, attention is mostly directed towards internally generated scenarios (in imagination) which are not currently present in the world [43]. The role of "absence" is critical for the survival of the subject. It is in fact in absence that the subject defines plans and organizes future behaviors.
• There are some exceptional situations in real life in which the activity of the subject is characterized by a higher level of presence. In these situations the subject experiences a full sense of control and immersion. When this experience is associated with a positive emotional state, it can create an optimal experience, usually defined as "flow". An example of flow is the case where a professional athlete is playing exceptionally well (positive emotion) and achieves a state of mind where nothing else matters but the game (high level of presence).

The layers of presence

At this point we have defined what presence is and its role in human experience. However, there is yet another open question: how can we achieve a high level of presence-as-feeling? To answer this question we have to analyze the neuropsychological nature of presence-as-process. Even if presence is a unitary feeling, on the process side it can be divided into three layers/subprocesses (for a broader and more in-depth description see [50]).
These layers are phylogenetically different and strictly related to the three levels of self identified by Damasio [66]:

• The proto self: a coherent collection of neural patterns that map, moment by moment, the physical state of the organism;
• The core self: a transient entity which is continuously generated through encounters with objects;
• The extended self: a systematic record of the more invariant properties that the organism has discovered about itself.

Each layer of presence solves a particular facet of the internal/external world separation and is characterized by specific properties. In particular we can make conceptual distinctions between proto presence (self vs. non-self), core presence (self vs. present external world), and extended presence (self relative to the present external world). More precisely, we can define proto presence as an embodied presence related to the level of perception-action coupling (self vs. non-self): the more the organism is able to correctly couple perceptions and movements, the more it differentiates itself from the external world, thus increasing its probability of surviving. Proto presence is based on proprioception and other ways of knowing bodily orientation in the world. In a virtual world this is sometimes known as "spatial presence" and requires the tracking of body parts and appropriate updating of displays. Core presence can be described as the activity of selective attention made by the self on perceptions (self vs. present external world): the more the organism is able to focus on its sensorial experience, leaving the remaining neural processes in the background, the more it is able to identify the present moment and its current tasks, increasing its probability of surviving. Core presence is based largely on the vividness of perceptible displays. This is equivalent to "sensory presence" (e.g. in non-immersive VR) and requires good-quality, preferably stereographic, graphics and other display features.
Finally, the role of extended presence is to verify the significance to the self of the events experienced in the external world (self relative to the present external world). The more the self is present in significant experiences, the more it will be able to reach its goals, increasing the possibility of surviving. Extended presence requires intellectually and/or emotionally significant content. In humans the sense of presence-as-feeling is a direct function of these three layers: the better they are able to separate "internal" from "external", the greater the feeling of presence and the better the quality of action and experience.

VR, presence and flow

A corollary of the proposed vision is critical for our goals: it is possible to design mediated situations that elicit exceptionally high presence [67-69]. In particular, here we argue that virtual reality is the medium able to support the highest level of presence because it can trigger all three of the layers discussed above at the same time. To understand this point, and in particular the difference between VR and other media, here are some examples [50]:

• In reading an engrossing book while sitting in a comfortable, safe place, extended presence will be engaged by the medium (engagement) but the other layers will not be involved.
• In watching a movie, you can activate a high level of core presence (vividness) and a high level of extended presence (engagement), but no proto presence (spatial presence).
• In an interesting immersive VR experience, proto presence (spatial presence), core presence (vividness) and extended presence (engagement) will all be activated by the medium.
• In an immersive VR experience, if you are preoccupied with personal worries and the mediated content is not very engaging, proto presence (spatial) and core presence (vividness) will be invoked by the medium, but not extended presence.

The possibility of activating all three layers at the same time reduces the occurrence of breakdowns.
As suggested by Marsh and colleagues [41], one of the main goals of VR is to maintain users' attention within the content/illusion of a VR system. The final result is the perceptual illusion of non-mediation [37], produced by means of the disappearance of the medium, which activates the highest level of presence. To achieve an optimal experience, however, a further step is required: the highest level of presence has to be linked to a positive emotional experience. Csikszentmihalyi [60,70] defines "flow" as an optimal state of consciousness characterized by a state of concentration so focused that it amounts to absolute absorption in an activity. According to Csikszentmihalyi [71], when people are in a flow state "[they] shift into a common mode of experience when they become absorbed in their activity. This mode is characterized by a narrowing of the focus of awareness, so that irrelevant perceptions and thoughts are filtered out; by loss of self-consciousness; by a responsiveness to clear goals and unambiguous feedback; and by a sense of control over the environment... it is this common flow experience that people adduce as the main reason for performing the activity" (p. 72). Starting from this definition, different authors have tried to define flow in an operational way. For Ghani and Deshpande [72] the two main characteristics of flow are (a) total concentration in an activity and (b) the enjoyment one derives from the activity. Moreover, these authors identified two other factors affecting the experience of flow: a sense of control over one's environment and the level of challenge relative to a certain skill level. In this paper we suggest that VR is the preferred medium for the activation of the flow experience. A number of recent experimental results can be considered to support this vision. A first support comes from the work of Hoffman and his group in the treatment of chronic pain [22-25].
Few experiences are more intense than the pain associated with severe burn injuries. In particular, the daily wound care – the cleaning and removal of dead tissue to prevent infection – can be so painful that even the aggressive use of opioids (morphine-related analgesics) cannot control the pain. However, it is well known that distraction – for example, having the patient listen to music – can help reduce pain for some people. Hoffman and colleagues therefore tested, in a controlled study, the efficacy of VR as an advanced form of distraction by comparing it with a popular Nintendo video game. The results showed dramatic drops in pain ratings during VR compared to the video game [73]. Further, using a functional magnetic resonance imaging (fMRI) scanner, they measured pain-related brain activity for each participant during conditions of no virtual reality and during virtual reality (order randomized). The team studied five regions of the brain known to be associated with pain processing – the anterior cingulate cortex, primary and secondary somatosensory cortex, insula, and thalamus – and found that during VR the activity in all these regions showed significant reductions. In particular, the results showed direct modulation of human brain pain responses by VR distraction: reductions in pain-related brain activity ranged from 50% to 97%. A second set of results comes from the work of Gaggioli [29, 30], who compared the experience reported by a user immersed in a virtual environment with the experience reported by the same individual during other daily situations. To assess the quality of experience, the author used a procedure called the Experience Sampling Method (ESM), which is based on repeated on-line assessments of the external situation and personal states of consciousness [74]. Results showed that the VR experience was the activity associated with the highest level of optimal experience (22% of self-reports).
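The ESM procedure described above can be illustrated with a small sketch. This is not the authors' instrument: it assumes the standard challenge/skill quadrant model commonly used in ESM research, in which "flow" corresponds to self-reports where both perceived challenge and perceived skill are above the respondent's own means.

```python
# Illustrative sketch (not the authors' code): classify ESM self-reports by
# the balance of perceived challenge and skill, relative to the person's own
# mean ratings. The quadrant labels follow the standard ESM model.

from statistics import mean

def classify_samples(samples):
    """samples: list of (challenge, skill) self-ratings for one person.
    Returns a parallel list of labels: flow / anxiety / boredom / apathy."""
    c_mean = mean(c for c, _ in samples)
    s_mean = mean(s for _, s in samples)
    labels = []
    for c, s in samples:
        if c >= c_mean and s >= s_mean:
            labels.append("flow")        # high challenge, high skill
        elif c >= c_mean:
            labels.append("anxiety")     # challenge exceeds skill
        elif s >= s_mean:
            labels.append("boredom")     # skill exceeds challenge
        else:
            labels.append("apathy")      # both below the personal mean
    return labels

reports = [(8, 8), (9, 3), (2, 9), (1, 2)]
print(classify_samples(reports))  # ['flow', 'anxiety', 'boredom', 'apathy']
```

In a study like Gaggioli's, the proportion of samples labeled "flow" per activity (VR, reading, TV viewing, and so on) would then be compared across activities.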
Reading, TV viewing and the use of other media – in the context of both learning and leisure activities – obtained lower percentages of optimal experiences (15%, 8% and 19% of self-reports, respectively). A final result is the preference of phobic patients for VR over traditional treatments, as shown by two studies from García-Palacios and colleagues [75, 76]. In their more recent study, which surveyed 102 patients who met DSM-IV criteria for specific phobias or panic disorder with agoraphobia, 70% of the patients, when asked to choose between in vivo exposure and VR exposure therapy, chose VR exposure. Further, 23.5% of the sample refused in vivo exposure, whereas only 3% refused VR treatment. Presence and optimal experience: towards second-generation VR applications in rehabilitation Authentic rehabilitation implies the active participation of patients in the cultural context, their exposure to opportunities for action and development, and their freedom to select the opportunities they perceive as the most challenging and meaningful [29, 77]. Following this vision, another important asset potentially offered by VR to the rehabilitation process is the possibility of triggering optimal experiences [78]. Optimal experiences promote individual development. As underlined by Massimini and Delle Fave [58], "To replicate it, a person will search for increasingly complex challenges in the associated activities and will improve his or her skill, accordingly. This process has been defined as cultivation; it fosters the growth of complexity not only in the performance of flow activities but in individual behavior as a whole." (p. 28). This process can also be activated after a major trauma. As noted by Delle Fave [79], to cope with dramatic changes in daily life and in access to environmental opportunities for action, individuals may develop a strategy defined as transformation of flow. Where possible, they keep cultivating former flow activities.
Otherwise, as often happens, they manage to identify new and unexpected sources of concentration and involvement, sometimes in areas very different from their previous interests. The vision behind the concept of transformation of flow is that of "Positive Psychology" [80]. According to this vision, existing professional treatments should include therapeutic factors related to positive experiences. These include increasing clients' positive expectations and hope about change, general sense of optimism, self-efficacy, and coping strategies. Numerous studies of patients with life-threatening diseases suggest that those who remain optimistic show symptoms later and survive longer than patients who confront reality more objectively [81]. That is, rehabilitative treatments should also be evaluated in terms of their ability to make life more fulfilling for clients. However, it is very difficult within traditional rehabilitative practice to cope with the sense of hopelessness and depression expressed by many patients. In this area VR may offer a critical advantage: the possibility for patients to successfully manage, in a VE, a problematic situation related to their disturbance. Using VR in this way, patients are more likely not only to gain an awareness of their need to do something to create change, but also to experience a greater sense of personal efficacy [82]. This approach was recently tested in the support of children with cerebral palsy. Specifically, Miller and Reid [83] investigated the personal experiences of 19 children aged 8–11 with cerebral palsy involved in a virtual-reality play intervention program. The results showed that the children experienced engagement and flow in virtual reality, and increased their self-competence and self-efficacy. Further, they experienced a sense of control and mastery over the virtual environment, and perceived physical changes and increased social acceptance from both peers and family.
In another case study, Riva [84] tested the possibility of using a VE experience – a stroll along a mountain path, reproducing the feeling of an excursion to the mountains – to support the rehabilitation of a person with spinal cord injury. The results revealed slightly improved levels of self-confidence, will, relaxation, and activity. The patient also reported subjective improvement in his sense of well-being, mood, and quality of sleep. Generally, these techniques can be used as triggers for a broader empowerment process within the flow experience induced by a high sense of presence. In the psychological literature, empowerment is considered a multi-faceted construct reflecting the different dimensions of being psychologically enabled, and is conceived of as a positive additive function of the following three dimensions [85]: – perceived control: includes beliefs about authority, decision-making skills, availability of resources, autonomy in the scheduling and performance of work, etc.; – perceived competence: reflects role-mastery, which, besides requiring the skillful accomplishment of one or more assigned tasks, also requires successful coping with non-routine role-related situations; – goal internalization: this dimension captures the energizing property of a worthy cause or exciting vision provided by the organizational leadership. Virtual reality may be considered the preferred environment for the empowerment process, since it is a special, sheltered setting where patients can start to explore and act without feeling threatened. In this sense the virtual experience can be described as an "empowering environment" that rehabilitation provides to patients: nothing the patient fears can "really" happen to them in VR. With such assurance, they can freely explore, experiment, feel, live, and experience feelings and/or thoughts. VR thus becomes a very useful intermediate step between the therapist's office and the real world.
Within this frame, therapists are encouraged to explore whether and how VR-induced optimal experiences may facilitate recovery [86]. The use of VR as an empowerment tool was recently tested in the support of HIV/AIDS patients [87]. The system implemented a VR walkthrough experience of a relaxing campfire in a forest. The scene contains four interactive avatars that relate narratives compiled from HIV/AIDS patients. These narratives cover the aspects of receiving an HIV+ diagnosis, intervention, and coping with living with HIV+ status. In terms of emotional impact, the participants found their experience with the system mostly encouraging, particularly the narratives relating to adjustment and coping. Challenges and Open Issues VR is an advanced communicative interface based on interactive 3D visualization. Its simulative capabilities allow for the precise presentation and control of dynamic multi-sensory 3D stimulus environments, as well as advanced methods for recording behavioral responses. Unlike other media, however, VR can induce a high sense of presence [12], usually defined as the "sense of being there" [13], or the "feeling of being in a world that exists outside the self" [14]. Thanks to presence, knowledge can not only be acquired in VR but can also be transferred to the real environment. This paper introduces a bio-cultural theory of presence linking the state of optimal experience, defined as "flow", to the virtual reality experience. The key ideas behind the proposed vision of presence are: – Presence has a simple but critical role in our everyday experience: the control of agency through the unconscious separation of "internal" and "external". The meaning of "internal" and "external" relates not only to the body but also to the social and cultural space (situation) in which the self is situated.
– The presence-as-process (the separation mechanism) produces, but is different from, the presence-as-feeling (the experienced level of presence); – The presence-as-feeling is experienced indirectly by the self through the characteristics of action and experience. In fact, the self directly perceives only the variations in the level of presence-as-feeling: breakdowns and optimal experiences; – The presence-as-process can be divided into three different layers/subprocesses. They are phylogenetically different, and closely related to the three levels of self identified by Damasio: proto presence (self vs. non-self), core presence (self vs. present external world), and extended presence (self relative to the present external world). A corollary of the proposed vision is the possibility of designing mediated situations that elicit exceptionally high presence. In particular, we argued here that virtual reality is the medium able to support the highest level of presence because it can activate all three layers at the same time. To achieve an optimal experience (flow), however, a further step is required: the highest level of presence has to be linked to a positive emotional experience. The link between presence, flow and VR suggests the possibility of using VR for a new breed of rehabilitative applications focused on a strategy defined as transformation of flow. In this view, VR can be used to trigger a broad empowerment process within the flow experience induced by a high sense of presence. Linking this possibility to its simulative capabilities may transform VR into the ultimate rehabilitative device. Applications of VR in rehabilitation address disturbances including memory disorders, planning and motor disabilities, and impairments of executive functions and spatial knowledge. For a full review, see Morganti [1]. However, the road is still long.
Although significant advances in computer and graphics technology have drastically improved the characteristics of a typical VE, VR is still limited by the maturity of the available systems. Today, no off-the-shelf solutions are available, so setting up a VR system usually requires much patience in dealing with conflicting hardware or missing drivers. Nearly every VR system requires a dedicated staff, or at least a computer technician, to keep the system running smoothly. Moreover, the introduction of patients and clinicians to VEs raises particular safety and ethical issues. In fact, despite developments in VR technology, some users still experience health and safety problems associated with the use of immersive headsets. Generally, for a large proportion of VR users these effects are mild and subside quickly. Further, even if the clinical rationale behind the use of VR in rehabilitation is now clear, much of this research has taken the form of feasibility studies and pilot trials. Hence, there is still limited convincing evidence from controlled studies. Why are there so few controlled trials in VR research? There are two possible answers. First, there is a lack of standardization in VR devices and software. To date, very few of the various VR systems available are interoperable, which makes their use difficult in contexts other than those in which they were developed. Second, there is a lack of standardized protocols that can be shared by the community of researchers. Clearly, building new and additional virtual environments is important so that therapists will continue to investigate the application of these tools in their day-to-day clinical practice. In fact, in most circumstances, the clinical skills of the rehabilitator remain the key factor in the successful use of VR systems. Future research should explore how to develop VR environments able to provide the degree of challenge required to elicit the optimal experience.
Further, research should also deepen the analysis of the links between cognitive processes, motor activities, presence and flow. This will enable a new generation of VEs in which the added value of VR goes beyond simulation and control.
Genome wide analysis of Arabidopsis core promoters (PMC554773)

Background Core promoters are the gene regulatory regions most proximal to the transcription start site (TSS), central to the formation of pre-initiation complexes and to combinatorial gene regulation. The DNA elements required for core promoter function in plants are poorly understood. To establish the sequence motifs that characterize plant core promoters, and to compare them to the corresponding sequences in animals, we took advantage of available full-length cDNAs (FL-cDNAs) and predicted upstream regulatory sequences to carry out an analysis of 12,749 Arabidopsis core promoters. Results Using a combination of expectation maximization and Gibbs sampling methods, we identified several motifs overrepresented in Arabidopsis core promoters. One of them corresponded to the TATA element, for which an in-depth analysis resulted in the generation of robust TATA Nucleotide Frequency Matrices (NFMs) capable of predicting Arabidopsis TATA elements with a high degree of confidence. We established that approximately 29% of all Arabidopsis promoters contain TATA motifs, clustered around position -32 with respect to the TSS. The presence of TATA elements was associated with genes represented more frequently in EST collections and with shorter 5' UTRs. No cis-elements were found over-represented in TATA-less promoters compared to TATA-containing promoters. Conclusion Our studies provide a first genome-wide illustration of the composition and structure of core Arabidopsis promoters. The percentage of TATA-containing promoters is much lower than commonly recognized, yet comparable to the proportion of Drosophila promoters containing a TATA element.
Although several other DNA elements were identified as over-represented in Arabidopsis promoters, they are present in only a small fraction of the genes, and they represent elements not previously described in animals, suggesting a distinct architecture of the core promoters of plant and animal genes. Background In eukaryotes, many cellular processes are regulated at the level of transcription. Initiation of transcription by RNA polymerase II requires the assembly of the basal transcription apparatus at the core promoter, a region of about 70 bp flanking the transcription start site (TSS) [1]. Interactions mediated by components of the basal machinery and transcription factors that recognize specific cis-regulatory elements, frequently located upstream of the core promoter, ensure efficient and regulated transcription by RNA polymerase II at Class II promoters [2]. Class II core promoters often contain conserved DNA elements recognized by components of the basal transcription machinery, the general transcription factors. The best-described core promoter DNA element is the TATA box, which is recognized by the TATA-binding protein (TBP). The TATA box is a T/A-rich sequence usually located 25–35 base pairs upstream of the TSS [3]. Recruitment of TBP and TBP-associated factors, all part of the TFIID complex, directs assembly of the pre-initiation complex (PIC), a highly regulated process that ensures precise initiation of transcription. The directionality of the PIC is likely provided by the presence of another conserved element, found in a large fraction of Class II promoters, the BRE (TFIIB recognition element) [4, 5]. In addition, Initiator (Inr) elements are often present at the site of initiation of transcription in a number of eukaryotic core promoters. The Inr is a loosely conserved element containing an adenosine at the TSS and a C as the preceding nucleotide (position -1), surrounded by a few pyrimidines [2].
The function of the Inr, and the components of the basal transcription machinery that recognize this element, remain poorly defined. In spite of the availability of a large number of computational programs that predict the presence of plant genes and their architecture (reviewed in [6]), accurately identifying core promoters solely on the basis of genome sequence analysis remains a daunting task. Although no known DNA-sequence motif is present in all plant core promoters, TATA and Inr motifs represent two elements that are often present [7]. A main limitation in the analysis of plant core promoters is the insufficient amount of information available regarding TSSs, and hence the location of core promoters in genomic sequences. Over the past few years, several efforts have initiated the high-throughput production and analysis of full-length (FL) Arabidopsis cDNAs [8, 9]. These FL-cDNAs have dramatically improved the annotation of the Arabidopsis genome [10], providing a powerful tool for the identification and analysis of core promoter elements. Here, we describe the analysis of the core promoters of ca. 12,750 Arabidopsis genes, using publicly available FL-cDNA sequences. Our objectives for this study were to i) identify motifs characteristic of Arabidopsis core promoters; ii) determine how often Arabidopsis core promoters contain a TATA box; and iii) compare the architecture of Arabidopsis core promoters with those of Drosophila, the only higher eukaryote for which such a genome-wide analysis has been performed. We examined the presence, distribution and consensus sequence of conserved motifs proximal to the TSS. In addition to TATA elements, we identified several other motifs, primarily representing microsatellite elements, some of them overrepresented in particular regions of core promoters. Using Nucleotide Frequency Matrices (NFMs), we carried out a genome-wide analysis of the presence and position of TATA-box elements.
Our studies show that only about 29% of all Arabidopsis genes contain a recognizable TATA element. The position of the TATA motif with respect to the TSS, and correlations between the presence of a TATA and EST abundance and 5' UTR length, are discussed. Results and discussion Obtaining core promoter and 5' UTR sequences for 12,749 Arabidopsis genes As a first step towards identifying core Arabidopsis promoters, we queried TAIR's Gene Search with the condition of a FL-cDNA entry. We retrieved a total of 13,964 non-redundant hits, derived from over 28,000 total FL-cDNAs deposited at TAIR. The locus IDs for these 13,964 FL-cDNAs were used to retrieve the 5' UTRs corresponding to 12,749 genes. The remaining 1,215 genes for which a 5' UTR was not retrieved corresponded to FL-cDNAs that differed between the annotations at TAIR and TIGR, sequences for which no 5' UTR was annotated, or sequences with 5' UTR regions corresponding to alternative gene models. The [-500, -1] and [-50, -1] regions of all 12,749 genes were directly retrieved from the TAIR 500 bp upstream dataset. To obtain the [+1, +50] regions, we first checked the length of the 5' UTRs, which was shorter than 50 bp for 2,649 genes and interrupted by introns in 2,179 genes. To include these cases in our analyses, three different strategies were followed. If the 5' UTR was longer than 50 bp, and no introns were present in the corresponding [+1, +50] region (10,100 genes), a direct retrieval of the [+1, +50] region was performed from the TAIR 5' UTR dataset. If the 5' UTR was shorter than 50 bp and no intron interrupted this region (2,617 genes), we extended the 5' UTR to 50 nt with a fragment of the immediately adjacent downstream coding sequence, using the TIGR cDNA dataset. Finally, if the 5' UTR was shorter than 50 nt and an intron interrupted this region (32 genes), we manually retrieved the [+1, +50] region from the genomic sequence using TAIR's SeqViewer.
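The three retrieval strategies above amount to a single decision per gene. The following is an illustrative reconstruction, not the authors' code: `utr5`, `cds` and `genomic_fallback` are hypothetical inputs standing in for the TAIR/TIGR dataset records, and the exact handling of the 50-bp boundary is an assumption.

```python
# Sketch of the three-way decision used to assemble the [+1, +50] region.
# utr5: annotated 5' UTR sequence; cds: downstream coding sequence;
# has_intron: True if an intron interrupts the first 50 nt of the 5' UTR;
# genomic_fallback: manually retrieved spliced sequence (32 genes).

def region_plus1_plus50(utr5, cds, has_intron, genomic_fallback=None):
    if has_intron:
        # Intron inside the region: use the manually retrieved sequence.
        return genomic_fallback
    if len(utr5) >= 50:
        # 5' UTR long enough (10,100 genes): use it directly.
        return utr5[:50]
    # Short 5' UTR (2,617 genes): extend with adjacent coding sequence.
    return (utr5 + cds)[:50]
```

Applied over all 12,749 genes, this yields the [+1, +50] dataset used in the subsequent motif analyses.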
After these analyses, we were able to generate datasets corresponding to the [-500, -1], [-50, -1] and [+1, +50] regions of a total of 12,749 genes. These datasets were used for all the subsequent analyses in this study. Identification of conserved motifs in core promoters To identify sequence motifs overrepresented in Arabidopsis core promoters, we first searched for DNA elements conserved in the [-50, -1] and [+1, +50] regions of the 12,749 Arabidopsis genes. The search was carried out using both MEME and AlignACE (see Methods). Motifs correspond to short sequences (6–10 bp), often recognized by a DNA-binding protein, that can be represented by a consensus sequence. While the total numbers of motifs retrieved per region with these algorithms were 16 and 32, respectively, only motifs detected in at least 50 sequences with either MEME or AlignACE are shown (Figure 1). A comprehensive list and the sequences of the remaining motifs are provided as Additional File 1. Of the 20 motifs present in 50 or more sequences in the [-50, -1] or [+1, +50] regions, seven were present in both regions (Motifs 1, 2, 4, 5, 8, 9 and 10; Figure 1), and were thus given the same numbers. Motifs 5 and 12 are reverse complements of each other, and are shown separately because they are over-represented in different regions of the core promoters (Figure 1). Overall, the expectation maximization method, MEME, appears to be a more robust motif search algorithm than the Gibbs sampling method, AlignACE, since MEME resulted in a significantly higher rate of identification for most of the motifs (Figure 1). Two motifs identified by MEME (Motifs 10 and 12, Figure 1) were not identified by AlignACE in any significant number of sequences. The distribution of the different motifs within the [-50, -1] and [+1, +50] regions was also investigated (Figure 1). In a few cases, there was a clear enrichment of motifs at particular positions.
For example, Motif 3, present only in the [-50, -1] region, was clustered in the -30 to -45 region; Motif 9, present in both regions, clustered closer to the TSS; and Motif 7 showed an enrichment in the vicinity of the -50 position (Figure 1). Figure 1 Analysis of motifs present in the [-50, -1] and [+1, +50] regions of 12,749 Arabidopsis genes. Motifs are numbered from 1 to 13 and ordered by the number of occurrences, indicated by the numbers under the motif name. The first numeral corresponds to the number of hits using MEME, the second to the number of hits using AlignACE. For example, 2417/1852 indicates a motif found 2,417 times using MEME and 1,852 times with AlignACE. The second column for each motif shows the nucleotide frequency distribution graphed using WebLogo, where the sizes of the characters represent the frequencies of occurrence. The third column provides a graphic representation of the frequency distribution (y-axis) of each motif in the [-50, -1] or [+1, +50] regions (x-axis). Overrepresentation of motifs in the [-50, -1] or [+1, +50] regions To investigate whether the number of sequences containing each of these motifs was accurately predicted by MEME or AlignACE, and to establish which of these 13 motifs were significantly overrepresented in the [-50, -1] or [+1, +50] regions, we retrieved nucleotide frequency matrices (NFMs) for each of these motifs from the results of the MEME search (see Methods). The NFMs for each of these motifs, provided as Additional File 2, were used to determine their presence in the [-50, -1] and [+1, +50] regions. To establish whether the motifs were overrepresented in these regions, we used two background models. The first background model corresponded to an identical number of random sequences (columns 4 and 6 in Table 1, labeled Random) with the same nucleotide composition as the [-50, -1] or [+1, +50] regions.
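The way an NFM is derived from aligned motif occurrences and then used to call the motif in a new sequence can be sketched as follows. This is a minimal illustration of the general technique, not the MEME/MotifScanner implementation: the pseudocount and the uniform 0.25 background are assumptions for the sketch.

```python
# Minimal NFM construction and scanning sketch (illustrative only).

import math

def build_nfm(instances, pseudocount=1.0):
    """Column-wise nucleotide frequencies from equal-length motif instances."""
    width = len(instances[0])
    nfm = []
    for i in range(width):
        counts = {b: pseudocount for b in "ACGT"}
        for seq in instances:
            counts[seq[i]] += 1
        total = sum(counts.values())
        nfm.append({b: counts[b] / total for b in "ACGT"})
    return nfm

def score(window, nfm, background=0.25):
    """Log-likelihood ratio of the window under the NFM vs. background."""
    return sum(math.log(col[b] / background) for b, col in zip(window, nfm))

def best_hit(seq, nfm):
    """Best-scoring (score, 0-based position) over all windows of the sequence."""
    w = len(nfm)
    return max((score(seq[i:i + w], nfm), i) for i in range(len(seq) - w + 1))

nfm = build_nfm(["TATAAATA", "TATATATA", "TATAAAAA"])
s, pos = best_hit("GGCGGCTATAAATAGGCC", nfm)  # hit at position 6
```

A motif is then "present" in a region when its best hit exceeds a significance threshold, which is the role MotifScanner plays in the study.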
Because biological sequences are not random, and intragenic sequences are richer in homopolymeric A/T than predicted by a random model of identical nucleotide composition, we used as the second background model the 12,749 non-core-promoter [-500, -450] regions. The results are shown in Table 1 (column 2, labeled Real).

Table 1 Motif frequency in the [-50, -1] and [+1, +50] regions of 12,749 Arabidopsis genes compared to background models

          [-500, -450]   [-50, -1]           [+1, +50]
          Real           Real     Random     Real*          Random
Motif 1   1388           3817     778        4379 (3769)    806
Motif 2   1558           2195     235        3018 (2312)    323
Motif 3   543            1899     289        314 (289)      243
Motif 4   241            1288     109        1665 (1427)    114
Motif 5   88             382      49         421 (361)      54
Motif 6   275            477      56         894 (690)      81
Motif 7   59             153      34         28 (27)        51
Motif 8   157            282      83         421 (279)      106
Motif 9   208            385      111        519 (416)      163
Motif 10  168            270      21         519 (340)      23
Motif 11  548            460      297        1352 (213)     362
Motif 12  137            253      57         346 (308)      61
Motif 13  175            183      113        343 (241)      111

*Numbers in parentheses indicate the frequency of the motif in the 10,100 [+1, +50] 5' UTR sequences without introns or coding regions.

Motifs 3 and 7 showed a clear overrepresentation in the [-50, -1] interval. Motif 3 has all the characteristics of a TATA box (Figure 1), and was detected in 1,899 genes using the NFM, representing approximately 15% of all the genes investigated. A more detailed characterization of this motif is described below. Motif 7 was detected in a much smaller number of genes (153), and the corresponding motif, with the A(A/G)GCCCA(T/A) consensus, was previously shown to be overrepresented in upstream regions versus coding regions of Arabidopsis genes [11]. Consistent with our finding of an increased accumulation of this motif towards the left border of the [-50, -1] interval (Figure 1), this motif was previously shown to have a strong positional preference for the [-250, -50] interval [11].
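The first, composition-matched background model can be sketched as follows. This is illustrative only: the study does not specify its exact generation procedure, so pooling the nucleotide frequencies over the whole set and drawing one random sequence per real sequence is an assumption.

```python
# Sketch of a composition-matched random background model (illustrative).

import random

def composition(seqs):
    """Pooled nucleotide frequencies over a set of sequences."""
    counts = {b: 0 for b in "ACGT"}
    for s in seqs:
        for b in s:
            counts[b] += 1
    total = sum(counts.values())
    return {b: counts[b] / total for b in "ACGT"}

def random_background(seqs, seed=0):
    """One random sequence per real sequence, with matching lengths and
    drawn from the pooled nucleotide frequencies of the real set."""
    rng = random.Random(seed)
    freqs = composition(seqs)
    bases, weights = zip(*freqs.items())
    return ["".join(rng.choices(bases, weights=weights, k=len(s)))
            for s in seqs]
```

Scanning this synthetic set with the same NFMs gives the "Random" columns of Table 1, against which the "Real" counts are compared.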
Interestingly, in Arabidopsis this motif is associated with dark-induced genes and is over-represented in genes under circadian regulation [12]. Three motifs were also found to be overrepresented in the [+1, +50] region. Motif 10 resembles the (GAA)n microsatellite, and is represented at least two-fold more frequently in the [+1, +50] region than in the [-50, -1] or [-500, -450] regions (Table 1). This overrepresentation cannot be explained by the modest difference in nucleotide composition between these regions, consistent with the comparable distribution in the randomly simulated datasets (Table 1). As described above, 2,649 of the [+1, +50] regions contain coding regions in addition to short 5' UTRs. To investigate whether the coding sequences contributed to the overrepresentation of this motif, we analyzed its presence in the 10,100 "clean" [+1, +50] 5' UTR regions, which do not contain any coding or intron sequences (shown in parentheses in Table 1 under [+1, +50] Real). In these 10,100 sequences, Motif 10 was found in 340 [+1, +50] sequences, essentially the same frequency as in the original dataset (519/12,749). Thus, this (GAA)n microsatellite is overrepresented in the [+1, +50] region, irrespective of whether it is coding or 5' UTR sequence. (GAA)n microsatellites have been extensively researched in humans [13], but have not yet been associated with any functional role in Arabidopsis. Motif 13, with the consensus (T/A)CCGGCGA (Figure 1), was detected by both MEME and AlignACE only in the [+1, +50] region (Table 1). This motif, however, was not identified as the binding site for any known transcription factor, as deduced from searching the PLACE [14], TRANSFAC [15] and AGRIS [16] databases (not shown). Finally, Motif 11, present in a significant number of sequences (Figure 1), fits the Kozak consensus (ACCATGG) for a translation-start ATG codon [17].
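Detecting a microsatellite such as (GAA)n in a region reduces to a simple repeat search. A minimal sketch follows; the minimum copy number is an assumption for illustration, not a threshold stated in the text.

```python
# Illustrative repeat search for microsatellite-like motifs such as (GAA)n.

import re

def find_repeat(seq, unit, min_copies=3):
    """Return the (start, end) span of the first run of `unit` repeated at
    least `min_copies` times, or None if no such run exists."""
    pattern = "(?:%s){%d,}" % (re.escape(unit), min_copies)
    m = re.search(pattern, seq)
    return m.span() if m else None

find_repeat("TTGAAGAAGAAGAACC", "GAA")  # run of four GAA copies at (2, 14)
```

The same helper covers the (CG)n and (CA)n microsatellites discussed here by changing the `unit` argument.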
Consistently, 1,139 of the 1,352 sequences in which we found Motif 11 have a short 5' UTR, reflected in the fact that this motif is present in just 213 of the [+1, +50] 5' UTR sequences (Table 1). While this motif is irrelevant to our analysis, it provides a good internal control for the sensitivity and comprehensiveness of our search for motifs in the [-50, -1] and [+1, +50] regions. Motifs 1, 2, 4, 6, and 9 correspond to microsatellites commonly found in Arabidopsis [18], displaying similar frequency distributions in the [-50, -1] and [+1, +50] regions. Of these five motifs, only Motif 2 does not appear to be significantly overrepresented in these two regions when compared to the [-500, -450] sequences (Table 1). The potential participation of microsatellites in the control of gene expression is unclear, but according to recent studies in rice and Arabidopsis, their distribution may follow a gradient in the direction of transcription [18]. Motif 8 conforms to a (CG)n microsatellite, frequent in monocots such as rice but not often found in Arabidopsis [18], which is consistent with its low but comparable frequency in all three regions studied here (Table 1). The apparently higher frequency of Motif 8 in the [+1, +50] region compared to the [-50, -1] region (421 versus 282, respectively) likely reflects the increased G/C content of the 5' UTR (see Methods), as indicated by the increased occurrence of this motif in a random simulation of sequences with the same nucleotide composition as the corresponding [+1, +50] region (Table 1). Motif 9, corresponding to a (CA)n microsatellite (with n = 5), was found to be only slightly overrepresented in the [-50, -1] region compared to the [-500, -450] background model (Table 1). Interestingly, however, this motif is significantly clustered in the [-35, -10] region (Figure 1).
A similar clustering was not observed in the [+1, +50] region, where this motif is significantly overrepresented compared to the background models (Table 1). Motif 5, with the consensus sequence AAACCCTA (Figure 1), and similarly overrepresented in the [-50, -1] and [+1, +50] regions compared to the random or [-500, -450] background models (Table 1), does not conform to a typical microsatellite sequence. Interestingly, however, the sequence of Motif 5 is precisely the reverse complement of Motif 12, which, with its TAGGGTTT consensus, matches the Arabidopsis telomeric repeat [19] and the telobox, the binding site for MYB-related telomeric DNA-binding proteins previously described in yeast, plants and animals [20]. This element, present in the 5' UTR or promoter region of many genes encoding products associated with the translational apparatus [21], was also shown to participate in the expression of Arabidopsis root meristem genes [22]. Our analysis suggests that the number of sequences containing the telobox motif, in either the forward or reverse-complement orientation, is much larger than previously reported [23]. Consistent with previous studies [23], only a few genes (8) contain Motif 5 or 12 in both the [-50, -1] and [+1, +50] regions. We also investigated the presence of motifs previously shown to be overrepresented in the [-60, +40] regions of Drosophila core promoters [24]. Using the corresponding NFMs, we searched our datasets for the DRE (DNA replication-related element) and the DPE (downstream promoter element), the latter usually found ~30 nt downstream of the TSS [25, 26]. Although the [-60, +40] region is shifted 10 bp towards the 5' end relative to our selection, the positional clustering of the DRE and DPE motifs [24] still falls within the [-50, +50] region investigated here. In our analyses, neither of the two motifs was represented at a level significantly higher than in the random models (not shown).
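Because Motif 5 (AAACCCTA) and Motif 12 (TAGGGTTT) are reverse complements of each other, checking a region for the telobox in either orientation only requires a reverse-complement helper. A minimal sketch:

```python
# Strand-aware search for the telobox motif (illustrative sketch).

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def has_telobox(region, motif="TAGGGTTT"):
    """True if the motif occurs on either strand of the region:
    TAGGGTTT (Motif 12) or its reverse complement AAACCCTA (Motif 5)."""
    return motif in region or revcomp(motif) in region
```

Counting genes for which `has_telobox` is true in the [-50, -1] and/or [+1, +50] regions corresponds to the tally discussed above.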
A CCAAT box NFM [7] did not yield any significant difference in distribution between the real and randomly generated datasets for either region (not shown). This was expected because CCAAT boxes usually cluster around the -75 position [27], which is outside the [-50, +50] interval investigated here, corresponding to what is generally recognized as the core promoter region. Similarly, none of the motifs identified here appeared to correspond to Inr elements. We conclude that, with the exception of the TATA box, the elements involved in the architecture of the core promoter in Arabidopsis and Drosophila are largely different. Distribution of TATA motifs in core Arabidopsis promoters According to our analysis of conserved core promoter elements, Motif 3 (Figure 1) is likely to represent the TATA box characteristic of many Class II promoters. Consistent with this idea, Motif 3 is significantly overrepresented in the [-50, -1] region (Table 1), with a clear clustering in the -30 to -45 region (Figure 1). Surprisingly, however, Motif 3 was detected in only 15% of the 12,749 core promoters investigated, lower than in previous studies, which suggested that 57% of plant genes have a TATA box [7]. To investigate this striking difference between previous estimates of the frequency of the TATA box in Arabidopsis promoters and our own analyses, we utilized the previously described TATA NFM [7]. With this NFM, MotifScanner identified 3,679 TATA motifs in the [-50, -1] region, significantly more than the number of hits in the [+1, +50] region or in the corresponding background models (Table 1). Thus, according to this analysis, 28.8% of all Arabidopsis genes contain a TATA, comparable to the fraction of Drosophila core promoters suggested to contain a TATA box (28–34%) [24], but still significantly lower than previously reported for the analysis of 305 plant promoters [7].
Interestingly, however, if these prior studies are restricted to just the 63 sequences from Arabidopsis, only 23 showed the presence of a TATA, representing a frequency of 36.5%, comparable to our own results. Previous studies also suggested that plant TATA-less promoters were the exception [28], and that TATA-less promoters were mainly restricted to photosynthetic [28] and plastid ribosomal genes [29]. Our results, however, indicate that TATA-less promoters are found more frequently than TATA-containing promoters. We cannot rule out that Arabidopsis is the exception among plants, a possibility to be considered given the much lower percentage of TATA-containing promoters in Arabidopsis compared to other plants [7]. More likely, however, poor knowledge of the position of the TSS may have led previous studies to significantly overestimate the presence of TATA elements. As an example, if the search for TATA elements is carried out on the 12,749 [-500, -1] regions, 6,316 sequences (using the MEME NFM) or 8,776 (using the expanded PlantProm NFM) are retrieved as containing a significant hit to a TATA element (Figure 2A), corresponding to 49.5% and 70% respectively, much closer to previous, yet likely incorrect, estimates [28]. Figure 2 Position of TATA motifs in Arabidopsis promoters. A, The analysis of the 12,749 [-500, -1] regions with the MEME-derived NFM (Table 3) resulted in 6,316 sequences containing a significant hit (indicated by the red curve), 1,768 of them clustered in the [-50, -1] region. A similar analysis with the expanded and improved PlantProm-derived NFM (Table 4) resulted in 8,776 hits (blue curve), 2,507 of them clustered in the [-50, -1] region. B, Expansion of the [-50, -1] region, indicating with a vertical green line that the average distance of the TATA motifs present in the [-50, -1] region is 31.7 nt from the TSS (using the first conserved T as the reference position).
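Scanning a promoter region with an NFM amounts to sliding a position-specific frequency matrix along the sequence and scoring each window. MotifScanner uses a higher-order background model (see Methods); the sketch below is a deliberately simplified version with a flat background, and the function names and pseudocount are illustrative rather than the actual implementation:

```python
import math

def score_window(window, nfm, background=0.25, pseudo=1e-3):
    """Log-likelihood ratio of a window under an NFM vs. a flat background.

    nfm: list of dicts, one per motif position, mapping 'A','C','G','T'
    to observed frequencies (i.e., the columns of Tables 3 and 4).
    """
    score = 0.0
    for base, column in zip(window, nfm):
        p = column.get(base, 0.0) + pseudo  # pseudocount avoids log(0)
        score += math.log(p / background)
    return score

def best_hit(sequence, nfm):
    """Slide the matrix along the sequence; return (best position, best score)."""
    w = len(nfm)
    hits = [(i, score_window(sequence[i:i + w], nfm))
            for i in range(len(sequence) - w + 1)]
    return max(hits, key=lambda h: h[1])
```

With the 16-column matrix of Table 3, `best_hit` would report where a [-50, -1] region scores most TATA-like; thresholding that score then decides whether a promoter is counted as TATA-containing.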
The sequences from all these putative TATA-containing promoters were retrieved and the NFMs were retrained with this new information. The new matrix, obtained from the 1,899 sequences gathered using our MEME NFM (Figure 1), is shown in Table 3. Similarly, the PlantProm TATA NFM was retrained with the 3,679 sequences, resulting in an improved and expanded NFM (Table 4). These NFMs provide robust tools for the identification of additional plant TATA motifs. The two NFMs are significantly better than previously available plant TATA NFMs, both because the added flanking sequences expand the TATA consensus and because of the much larger number of sequences used to build them. They have very similar nucleotide distributions, the biggest difference probably being at position 8, where the matrix derived from our MEME analysis has a much stronger requirement for an A (compare Tables 3 and 4).

Table 3 TATA NFM derived from 1,899 motifs
Derived from MEME
Position   -4     -3     -2     -1      1      2      3      4      5      6      7      8      9     10     11     12
A        0.227  0.259  0.244  0.245  0.003  0.997  0.001  0.994  0.408  0.994  0.358  0.906  0.241  0.439  0.302  0.393
C        0.244  0.262  0.230  0.398  0.001  0.001  0.002  0.003  0.001  0.001  0.001  0.003  0.294  0.228  0.269  0.204
G        0.125  0.180  0.113  0.153  0.002  0.001  0.000  0.001  0.001  0.002  0.001  0.090  0.193  0.153  0.160  0.161
T        0.403  0.300  0.413  0.203  0.994  0.001  0.997  0.002  0.590  0.003  0.641  0.002  0.272  0.180  0.270  0.242
Consensus   t      n      t      c      T      A      T      A     T/A     A     T/A     A      n      a      n      a

Table 4 TATA NFM derived from 3,679 motifs.
Derived from PlantProm
Position   -4     -3     -2     -1      1      2      3      4      5      6      7      8      9     10     11     12
A        0.246  0.262  0.248  0.243  0.058  0.917  0.000  0.998  0.493  0.943  0.417  0.655  0.197  0.399  0.340  0.383
C        0.246  0.247  0.242  0.434  0.030  0.000  0.049  0.001  0.000  0.001  0.020  0.093  0.349  0.286  0.244  0.212
G        0.118  0.184  0.126  0.111  0.000  0.001  0.000  0.000  0.000  0.038  0.000  0.100  0.221  0.159  0.141  0.141
T        0.391  0.308  0.384  0.213  0.911  0.083  0.951  0.001  0.507  0.018  0.563  0.153  0.232  0.156  0.275  0.264
Consensus   t      n      t      c      T      A      T      A     T/A     A     T/A     A      c      a      a      a

The new NFMs were used to scan the [-500, -1] region and establish where each of them localized a TATA with the highest probability. As shown in Figure 2A, both NFMs showed a significant peak in the [-50, -25] region, consistent with the position expected for TATA elements. To establish the average distance of TATA elements to the TSS, the MEME and PlantProm TATA NFMs were used to scan the 12,749 [-50, -1] regions, and the positions of the corresponding TATA boxes were recorded and graphed (Figure 2B). The average distance of a TATA (position 1 in Tables 3 and 4) to the TSS is 31.7 nt (indicated with a green line in Figure 2B). Thus, the position of the TATA box in Arabidopsis is more similar to what is typically the case in animal promoters, usually 25–30 nt from the TSS [2], than to what is found in yeast, where the TATA box has a variable position in the [-100, -40] region [30]. We investigated whether the presence of TATA motifs correlated with other properties of the corresponding genes. Based on our analysis of the 12,749 FL-cDNAs, we determined that the average size of the 5' UTR of Arabidopsis genes is 129 nt (Figure 3). Interestingly, when we compared the average length of the 5' UTRs of TATA-containing versus TATA-less genes, we found that TATA-containing genes had an average of 108 nt in their 5' UTRs, compared to 138 nt in TATA-less genes.
This difference in the length of the 5' UTRs between these populations of genes is evident in the shift towards shorter 5' UTRs in the TATA-containing population (Figure 3). The reason for this difference in 5' UTR length between TATA-containing and TATA-less promoters is not clear, although it is possible that the longer 5' UTRs provide additional features that contribute to PIC assembly. We also investigated whether the presence of a TATA element affected the number of times each gene was represented among ESTs, an approximate indication of the relative level of expression of the corresponding gene. While each Arabidopsis gene is represented on average by 9.48 ESTs (see Methods), the 12,749 sequences utilized here are represented on average by 13.02 ESTs, suggesting that the available FL-cDNAs are likely to correspond to genes expressed at a higher level than the average Arabidopsis gene. Interestingly, however, TATA-containing genes were represented on average by 17.6 ESTs (17.68 using the MEME NFM and 17.52 using the PlantProm NFM, Tables 3 and 4), whereas TATA-less genes were represented by just 11.23 ESTs. These results suggest that the presence of a TATA is generally associated with genes expressed at a higher level. Gene Ontology analyses (see Methods) did not provide any insights into possible cellular functions associated with these gene clusters (not shown). An analysis of the sequences flanking the TSS, and likely containing the Inr element, did not reveal any significant difference in nucleotide composition between TATA-containing and TATA-less promoters (data not shown). Thus, either the assembly of the PIC in Arabidopsis TATA-less promoters occurs solely through the Inr, or regulatory elements outside the [-50, +50] region investigated here also participate in the recognition of the core promoter by components of the basal transcriptional machinery. Figure 3 Length distribution of 5' UTRs in TATA-containing and TATA-less genes.
The length of the 5' UTR of all 12,749 genes (orange bars) shows an average of 129 nt. Promoters lacking a TATA box (TATA-less, red bars) have 5' UTRs that are on average 138 nt long. The 5' UTRs of TATA-containing genes (blue bars) are on average 108 nt long. Conclusion Understanding the architecture of core promoters is central to establishing the mechanisms by which the basal transcriptional machinery assembles and facilitates formation of the pre-initiation complex. We provide here the first genome-wide analysis of Arabidopsis core promoters. We have identified several motifs overrepresented in core promoters with respect to background models consisting of random sequences of identical nucleotide composition or of intergenic regions. With the exception of the microsatellites similarly distributed in the [-50, -1] and [+1, +50] regions and the TATA element, for which an in-depth analysis was carried out, most other overrepresented motifs were present in only a small subset of the sequences analyzed. Our studies provide robust NFMs corresponding to TATA elements and other conserved motifs, and show that only 29% of all Arabidopsis promoters contain a TATA element, located on average approximately 32 nt upstream of the TSS. The absence of a TATA correlates with a lower representation of the corresponding gene in public EST collections, as well as with longer 5' UTR sequences. However, the absence of a TATA is not compensated for by the overrepresentation of any one of several motifs present in Drosophila core promoters, suggesting significant differences in the organization of core promoters between animals and plants. Methods Retrieval of core promoter and 5' UTR sequences To obtain the sequences of the promoter region spanning the first 500 nt upstream of the TSS [-500, -1] and the corresponding 5' UTRs, we used the TAIR Gene Search web tool [31]. The TAIR database was queried for all genes having a full-length cDNA (FL-cDNA) entry.
The corresponding 5' UTR and [-500, -1] region datasets were downloaded from TAIR [32], last updated on February 28, 2004. The FL-cDNA sequences were obtained from the June 10, 2004 release of TIGR's cDNA dataset [33]. The locus IDs of the gene queries were checked against the 5' UTR, [-500, -1] and FL-cDNA files to reject erroneous annotations. We divided the 100 bp region flanking the TSS into upstream [-50, -1] and downstream [+1, +50] sub-regions of 50 bp each. The [-50, -1] and [+1, +50] intervals of the confirmed genes were directly retrieved from the downloaded TAIR files, when possible. In those cases where the 5' UTR region was shorter than 50 bp, the TIGR file was used to extend the region to the necessary length by appending a fragment of the immediately adjacent coding sequence. When an intron interrupted the 5' UTR, we manually extracted the 50 bp region from the Arabidopsis genomic sequences using the SeqViewer tool at TAIR. Motif discovery and motif search To characterize core promoters, we first investigated features represented by conserved regions or motifs. From the several algorithms available [34], we chose the expectation maximization method MEME (version 3.0.8) [35] and the Gibbs sampling method AlignACE [36]. MEME and AlignACE were run for the [-50, -1] and [+1, +50] regions separately for the entire set of genes. For MEME, a fixed minimum motif length of 5 and a maximum of 10 were set, and 20 motifs were requested using the zero-or-one-occurrence-per-sequence (ZOOPS) model. For AlignACE, only the background fractional GC content of the input sequences was supplied, and all other parameters were left at default values. MEME and AlignACE were run on the Itanium 2 Cluster at the Ohio Supercomputer Center. The results obtained with MEME were compared with those obtained with AlignACE. Motifs consisting of single-nucleotide repeats (i.e., (A)n) were manually parsed out, independent of the number of occurrences or positional preferences.
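The manual removal of single-nucleotide-repeat motifs can be illustrated with a small helper; the function names and the 80% dominance threshold are hypothetical choices made here for illustration, not the criterion actually applied by hand:

```python
def is_single_nt_repeat(consensus, min_fraction=0.8):
    """True if a motif consensus is essentially one repeated nucleotide
    (e.g. 'AAAAAA' or 'AAAAAT'), the kind of hit parsed out manually.

    min_fraction: fraction of positions the dominant base must occupy;
    0.8 is an illustrative threshold, not the authors' criterion.
    """
    seq = consensus.upper()
    if not seq:
        return False
    dominant = max(set(seq), key=seq.count)
    return seq.count(dominant) / len(seq) >= min_fraction

def filter_motifs(consensuses):
    """Drop homopolymer-like consensuses, keep everything else."""
    return [c for c in consensuses if not is_single_nt_repeat(c)]
```

A discovered-motif list would then be filtered in one pass before the positional analyses.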
The obtained motifs were plotted according to their positions within the regions, and their consensus sequences were graphed using WebLogo version 2.7 [37]. To find pre-defined motifs in the [-50, -1] and [+1, +50] regions, we used the higher-order probabilistic model MotifScanner from MotifSampler version 3.0 [38]. The searches were fed with the nucleotide frequency matrices (NFMs) of the selected motifs obtained from the MEME search, and a background model of order 1 accounting for single- and di-nucleotide distributions for each set. The prior probability of finding one instance of the motif was left at the default value of 0.2. We also ran the motif search with elements conserved in core promoters of other organisms. The first two corresponded to the TATA and CCAAT elements, obtained as NFMs from PlantProm [7]. The other two corresponded to the Downstream Promoter Element (DPE) and the DNA-replication Related Element (DRE) described for Drosophila core promoters [24]. For these four elements, we performed the same analysis as described before, using the [-50, -1] and [+1, +50] region datasets and the corresponding randomly generated datasets. Generation of random sequence models After establishing that the distributions of nucleotides in the Arabidopsis [-50, -1] and [+1, +50] regions are ~65% A/T to ~35% C/G and ~61% A/T to ~39% C/G, respectively, a pseudo-random set of 50 bp sequences was generated for each region to be tested with the matrices, as a way of determining the chances of finding the candidate motifs in a stochastic environment. This information was then used together with the search results obtained from the real data to assess the confidence of the findings. Analysis of TATA elements For the analysis of the TATA motif, the TATA NFM previously described [7] was used alongside the NFM derived from our own motif search. Using MotifScanner, the distribution of TATA elements in the upstream vicinity of the TSS was investigated.
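The pseudo-random background sets described under "Generation of random sequence models" can be sketched as follows; since the text specifies only the overall A/T versus C/G split, splitting each pair evenly (A = T, C = G) is an assumption of this sketch, and the function names are illustrative:

```python
import random

def random_region(length=50, at_fraction=0.65, rng=random):
    """One pseudo-random sequence with a fixed A/T vs. C/G composition.

    The text reports ~65% A/T for [-50, -1] and ~61% A/T for [+1, +50];
    splitting each pair evenly (A = T, C = G) is an assumption here.
    """
    weights = {'A': at_fraction / 2, 'T': at_fraction / 2,
               'C': (1 - at_fraction) / 2, 'G': (1 - at_fraction) / 2}
    bases, probs = zip(*weights.items())
    return ''.join(rng.choices(bases, weights=probs, k=length))

def random_background(n, **kwargs):
    """A background set of n sequences, one per real promoter region."""
    return [random_region(**kwargs) for _ in range(n)]
```

Scanning such a set with the same NFMs yields the "Random" hit counts against which the real counts are compared (e.g., Table 2).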
After determining the location of the putative TATA motifs in the [-50, -1] region, the NFMs were retrained with the newly retrieved TATA motifs. Analysis of gene ontology and expression level based on EST abundance To determine whether the occurrences of the discovered motifs were associated with specific gene functions or products, we retrieved the Arabidopsis Gene Ontology Database [39] (last updated July 20, 2004) and correlated the annotated molecular function, biological process or cellular component of Arabidopsis genes with those found in the motif clusters. Under the assumption that the contribution of a gene to transcription activity is related to the number of its detected ESTs, we downloaded a dataset from TAIR that accounts for the number of EST submissions per locus [40] (last updated July 23, 2004). With this, we then established relative expression levels by comparing the EST frequency of the genes containing a particular motif with the overall EST frequency per gene. List of abbreviations bp, base pair; EST, expressed sequence tag; FL-cDNA, full-length cDNA; Inr, Initiator element; NFM, nucleotide frequency matrix; nt, nucleotide; PIC, pre-initiation complex; TBP, TATA-binding protein; TSS, transcription start site; 5' UTR, 5' untranslated region. Authors' contributions C.M. carried out all the analyses and interpreted the results. E.G. was involved in the design and supervision of the project. C.M. and E.G. jointly wrote the manuscript. Both authors read and approved the final manuscript.

Table 2 Frequency of TATA motifs in the [-50, -1] and [+1, +50] regions of 12,749 Arabidopsis genes compared to background models
            [-500, -450]      [-50, -1]           [+1, +50]
            Real              Real      Random    Real      Random
MEME        543               1899      289       314       243
PlantProm   1526              3678      1431      1084      1209

Supplementary Material Additional File 1 Complete list of the motifs present in the [-50, -1] and [+1, +50] regions of 12,749 Arabidopsis genes.
The analysis was carried out as described for the results shown in Figure 1. Additional File 2 Nucleotide Frequency Matrices for all the motifs shown in Figure 1. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC554773.xml |
546410 | Video capture virtual reality as a flexible and effective rehabilitation tool | Video capture virtual reality (VR) uses a video camera and software to track movement in a single plane without the need to place markers on specific bodily locations. The user's image is thereby embedded within a simulated environment such that it is possible to interact with animated graphics in a completely natural manner. Although this technology first became available more than 25 years ago, it is only within the past five years that it has been applied in rehabilitation. The objective of this article is to describe the way this technology works, to review its assets relative to other VR platforms, and to provide an overview of some of the major studies that have evaluated the use of video capture technologies for rehabilitation. | Introduction Two major goals of rehabilitation are the enhancement of functional ability and the realization of greater participation in community life. These goals are achieved by intensive intervention aimed at improving sensory, motor, cognitive and higher-level cognitive functions on the one hand, and practice in everyday activities and occupations to increase participation on the other hand [1, 2]. Intervention is based primarily on the performance of rote exercises and/or of different types of purposeful activities and occupations [3, 4]. The client's cognitive and motor abilities are assessed throughout the intervention period so that therapy may be continually adjusted to the client's needs. For many injuries and disabilities, the rehabilitation process is long and arduous, and clinicians face the challenge of identifying a variety of appealing, meaningful and motivating intervention tasks that may be adapted and graded to facilitate this process. Clinicians also require outcomes that may be measured accurately.
Virtual reality-based therapy, one of the most innovative and promising recent developments in rehabilitation technology, appears to provide an answer to this challenge. Indeed, it is anticipated that virtual reality (VR) will have a considerable impact on rehabilitation over the next ten years [5]. Virtual reality typically refers to the use of interactive simulations created with computer hardware and software to present users with opportunities to engage in environments that appear to be and feel similar to real-world objects and events [6-8]. Users interact with displayed images, move and manipulate virtual objects, and perform other actions in a way that attempts to "immerse" them within the simulated environment, thereby engendering a feeling of presence in the virtual world [9, 10]. The objective of this article is to briefly describe the use of VR in rehabilitation, and then to emphasize the unique attributes of video capture VR for rehabilitation, including an overview of some of the major studies that have evaluated the use of this technology for rehabilitation. Virtual reality applied to rehabilitation Virtual reality has a number of well-known assets, which make it highly suitable as a rehabilitation intervention tool [11]. These assets include the opportunity for experiential, active learning and the ability to objectively measure behavior in challenging but safe and ecologically valid environments while maintaining strict experimental control over stimulus delivery and measurement. VR also provides the capacity to individualize treatment needs, while gradually increasing the complexity of tasks and decreasing the support provided by the clinician [5, 12]. During the mid to late 1990s, virtual reality technologies first began to be developed and studied as potential tools for rehabilitation assessment and treatment intervention [7]. The list of applications is long and diverse, and only a few examples are provided here.
VR has been used as a medium for the assessment and rehabilitation of cognitive and metacognitive processes, such as visual perception, attention, memory, sequencing and executive functioning [13]. Rizzo and colleagues [14, 15] developed a Virtual Classroom for the assessment and training of attention in children with Attention Deficit Hyperactivity Disorder. Piron et al. [16] used a virtual environment to train reaching movements, Broeren et al. [17] used a haptic device for the assessment and training of motor coordination, and Jack et al. [18] and Merians et al. [19] developed a force feedback glove to improve hand strength and a joint position glove to improve the range of motion and speed of hand movement. The studies cited above share a common goal of using virtual reality to construct a simulated environment aimed at facilitating the client's motor, cognitive or metacognitive abilities in order to improve functional ability. In some cases, the applications take advantage of the ability to adapt virtual environments to simulate real-life activities such as meal preparation [20] or crossing a street [21-25]. The ultimate goal of such applications is to enable clients to participate in their own real environments in a more independent manner. Attempting to achieve similar results via conventional therapy, when clinicians and clients must deal with real-world settings (e.g., a visit to a real supermarket), is fraught with difficulty. In contrast, virtual environments may be adapted with relative ease to the needs and characteristics of the clients under care. Given the variety of VR platforms and the diverse clinical populations that may benefit from VR-based intervention, it is helpful to view the VR experience as a multidimensional model that appears to be influenced by many parameters.
A conceptual model was developed within the context of the terminology established by the International Classification of Functioning, Disability and Health (ICF) [2] and the rehabilitation process [25, 26]. This model helps to identify the clinical rationale underlying the use of virtual reality as an intervention tool in rehabilitation, as well as to design research investigating its efficacy for achieving improved performance in the real world. The process of using VR in rehabilitation is modeled via three nested circles: the inner "Interaction Space", the intermediate "Transfer Phase" and the outer "Real World". The "Interaction Space" denotes the interaction that occurs when the client performs within the virtual environment, experiencing functional or game-like tasks of varying levels of difficulty, i.e., the activity component according to the ICF terminology. This interaction is influenced by user characteristics, which include personal factors (e.g., age, gender, cultural background), body functions (e.g., cognitive, sensory, motor abilities) and structures (e.g., the parts of the body activated during the task). It is also influenced by the characteristics of the VR platform and its underlying technology (e.g., point of view, encumbrance) that presents the virtual environment, and by the nature and demands of the task to be performed within the virtual environment. It is during the interaction process that sensations and perceptions related to the virtual experience take place; here the user's sense of presence is established, and the process of assigning meaning to the virtual experience as well as the actual performance of virtual tasks or activities occurs. The sense of presence enables the client to focus on the virtual task, temporarily separating himself from the real-world environment. This is an important requirement when motor and, especially, cognitive abilities and skills are trained or restored.
The concept of meaning is also thought to be an essential factor that enhances task performance and skills in rehabilitation in general [1, 3], and thus also in VR-based rehabilitation [27]. Environmental factors within the virtual environment may contribute information about issues that facilitate or hinder the client's performance, and may serve as facilitators of performance in the virtual environment, leading to improved performance in the real world. The two outer circles, the "Transfer Phase" and the "Real World", denote the goal of transferring skills and abilities acquired within the "Interaction Space" and eliminating environmental barriers in order to increase participation in the real world (i.e., participation in the natural environment according to the ICF terminology). The "Transfer Phase" may be very rapid and accomplished entirely by the client, or may take time and require considerable guidance and mediation from the clinician. The entire process is facilitated by the clinician, whose expertise helps to actualize the potential of VR as a rehabilitation tool. Virtual reality platforms Virtual environments are experienced with the aid of special hardware and software for input (transfer of information from the user to the system) and output (transfer of information from the system to the user). The selection of appropriate hardware is important since its characteristics may greatly influence what takes place in the Interaction Space, i.e., the way users respond (e.g., sense of presence, performance) to a virtual environment [28]. The output to the user generates different levels of immersion, which may be enhanced by different modalities including visual, auditory, haptic, vestibular and olfactory stimuli, although, to date, most VR platforms deliver primarily visual and auditory feedback. Visual information is commonly displayed by head mounted displays (HMD), projection systems, or flat-screen, desktop systems of varying size.
Input to a virtual environment enables the user to navigate and manipulate objects within it. Input may be achieved via direct methods such as an inertial orientation tracker, or by video sensing, which tracks user movement. Input may also be achieved via activation of computer keyboard keys, a mouse, a joystick, or even virtual buttons appearing as part of the environment. In addition to specialized hardware, application software is also necessary. In recent years, off-the-shelf, ready-for-clinical-use VR software has become available for purchase. However, more frequently, special software development tools are required in order to design and code an interactive simulated environment that will achieve a desired rehabilitation goal. In many cases, innovative intervention ideas may entail customized programming to construct a virtual environment from scratch, using traditional programming languages. Video capture VR Video capture VR consists of a family of camera-based, motion capture platforms that differ substantially from the HMD and desktop platforms in wider use. When using a video-capture VR platform, users stand or sit in a demarcated area viewing a large video screen that displays one of a series of simulated environments. Users see themselves on the screen, in the virtual environment, and their own natural movements entirely direct the progression of the task, i.e., the user's movement is the input. The result is a complete engagement of the user in the simulated task. A single video camera converts the video signal of the user's movements, wherein the participant's image is processed on the same plane as screen animation, text, graphics, and sound, which respond in real time. This process is referred to as "video gesture", i.e., the initiation of changes in a virtual reality environment through video contact.
The user's live, on-screen video image responds to movements in real time, lending an intensified degree of realism to the virtual reality experience. Video capture provides both visual and auditory feedback, with the visual cues being most predominant. Myron Krueger [29] was the first to investigate the potential of video capture technology in the 1970s with his innovative Videoplace installation. This was one of the first platforms that enabled users to interact with graphic objects via movements of their limbs and body, and it was used to explore a variety of virtual art forms. The quality of the video image in these applications was relatively primitive, consisting of silhouetted figures. Nevertheless, the immediate response of the virtual environment in real time to the user's movements presented compelling evidence of the possibility of using this technique for interactive simulation. The next major development occurred with the release of VividGroup's Mandala Gesture Extreme (GX) platform in 1996, together with a suite of interactive, game-type environments. This platform makes use of a chroma key-based setup so that the existing background is subtracted and replaced by a simulated background. GX VR has enjoyed considerable success around the world in numerous entertainment and educational facilities, including science museums and entertainment parks. During the past five years it has also begun to be adapted for use in rehabilitation and has generated great interest in clinical settings (see below). GX VR currently offers a wide variety of gaming applications including Birds & Balls, wherein a user is required to touch balls of different colors; if the touch is "gentle", the balls turn into doves, whereas an abrupt touch causes them to burst. In another application, a soccer game, the user sees himself as the goalkeeper whose task it is to prevent balls from entering the goal area (see Figure 1).
Figure 1 Individual with a stroke performing within the Soccer environment using the VividGroup GX system. In the late 1990s two other commercial companies developed video-capture gaming platforms, Reality Fusion's GameCam and Intel's Me2Cam Virtual Game System [30]. Both of these platforms aimed for the low-cost, general market, relying on inexpensive web camera installations that did not entail the use of the chroma key technique. For reasons that are not clear, Reality Fusion and Intel discontinued their products within the past two years. Somewhat later, Sony developed its very popular EyeToy application, designed to be used with the PlayStation 2 platform. This is an off-the-shelf, low-cost gaming application, which provides the opportunity to interact with virtual objects that can be displayed on a standard TV monitor [31]. As with VividGroup's GX platform, the EyeToy displays real-time images of the user. However, it does not require a chroma key blue/green backdrop behind the user, nor bright ambient lighting (see Figure 2). This makes for an easier setup of the platform in any location but, on the other hand, it means that the user sees himself manipulating virtual objects within a video image of his own physical surroundings rather than within different virtual environments. An additional difference between the cheaper EyeToy platform and the more expensive GX platform is that the former is capable of recognizing users or objects only when they are in motion. A user who remains stationary does not exist for EyeToy applications. In contrast, the GX VR is responsive to users whether they are in motion or not. Figure 2 Individual with a stroke performing the Wishy Washy application using the Sony EyeToy system. The EyeToy application includes many motivating and competitive environments, which may be played by one user or by more than one user sequentially in a tournament fashion.
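The contrast drawn here, between a system that responds only to motion and one that segments the user whether moving or not, can be illustrated with a toy frame-differencing sketch; the grayscale pixel representation and the threshold are illustrative only and bear no relation to Sony's actual implementation:

```python
def motion_mask(prev_frame, frame, threshold=30):
    """Per-pixel motion mask from two grayscale frames (nested lists).

    Pixels whose intensity changed by more than `threshold` are marked 1;
    a user who does not move produces an all-zero mask, which is why a
    purely motion-driven system cannot "see" a stationary player.
    """
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_prev, row_now)]
            for row_prev, row_now in zip(prev_frame, frame)]

def any_motion(mask):
    """True if any pixel registered motion between the two frames."""
    return any(any(row) for row in mask)
```

A chroma-key system such as GX instead compares each frame against a known backdrop color, so the user's silhouette is recovered even between movements.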
With GX VR, two users can compete together simultaneously (e.g., boxing, spinning plates) as well as combine their efforts to create different visual effects without a competitive component (e.g., painting a rainbow, mirror image distortions and popping bubbles). The potential of these platforms for rehabilitation was readily apparent despite the fact that they were originally developed for entertainment and gaming purposes. Indeed, VividGroup's GX platform was first applied without adaptations within a clinical setting by Cunningham and Krishack [ 32 ] who used it to treat elderly patients who were unstable and at high risk for falling. Unfortunately, the inability to grade these platforms to levels suited to patients with severe cognitive or motor impairments initially limited the application of these environments in clinical settings. In order to broaden the potential clinical applications of the platforms, our research group adapted the GX VR platform [ 33 , 34 ]. VividGroup developed, and now also markets, a version of the GX platform, known as the IREX (Interactive Rehabilitation EXercise) platform, which enables therapists to adapt levels of difficulty and record performance outcomes [ 35 ]. Characteristics of the Video-Capture Platforms Video-capture VR differs from other platforms in a number of ways that have great relevance for its use as a tool for rehabilitation evaluation and intervention. Some of these characteristics appear to be advantageous whereas others may limit the utility of video-capture VR. Point of View Video-capture VR provides users with a mirror image view of themselves actively participating within the environment. This contrasts with other VR platforms such as the HMD, which provides users with a "first person" point of view, or many desktop platforms in which the user is represented by an avatar. The use of the user's own image has been suggested to add to the realism of the environment and to the sense of presence [ 10 ].
It also provides feedback about a client's body posture and quality of movement, comparable to the use of video feedback in conventional rehabilitation during the treatment of certain conditions such as unilateral spatial neglect [ 36 ]. Freedom from encumbrance The user in video-capture VR does not have to wear or support extraneous devices such as an HMD, glove or markers in order to achieve a substantial intensity of immersion within the virtual environment. This eliminates a source of encumbrance that would likely hinder the motor response of patients with neurological or orthopedic deficits. Although the newer HMDs and stereoscopic glasses are considerably less cumbersome than previous models, little information is available regarding their use by individuals undergoing cognitive or motor rehabilitation. Interaction and Control This characteristic relates to how the user controls objects within the virtual environment. As indicated above, rather than relying on a pointing device or tracker, interaction within video-capture based environments is accomplished in a completely intuitive manner via natural motion of the head, trunk and limbs. Not only is the control of movement more natural, but, in the case of the chroma key GX VR, a "red glove" option (or any object with a distinct color) may be used to restrict system response to one or more body parts as deemed suitable for the attainment of specified therapeutic goals. For example, when it is appropriate to have the intervention directed in a more precise manner, a client may be required to repel projected balls via a specific body part (e.g., by the hand when wearing a red glove or by the head when wearing a red hat). Alternatively, when intervention is more global, the client does not use the red glove option and is thus able to respond with any part of the body.
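The "red glove" option described above amounts to a color filter on the captured frame: only pixels matching a distinctive marker color are allowed to trigger a system response. A minimal, hypothetical sketch follows; the thresholds and pixel values are invented and do not reflect the actual GX implementation.

```python
# Hypothetical sketch of the "red glove" idea: restrict system response to
# pixels whose color matches a distinctive marker. All thresholds are
# invented for illustration.

def is_red_marker(pixel, min_red=150, max_other=80):
    """True if a pixel is 'red enough' to count as the glove/hat marker."""
    r, g, b = pixel
    return r >= min_red and g <= max_other and b <= max_other

def interaction_pixels(frame, glove_mode):
    """In glove mode, only red-marker pixels can repel balls; otherwise any
    non-background (here: any non-black) pixel can respond."""
    hits = []
    for y, row in enumerate(frame):
        for x, pixel in enumerate(row):
            if glove_mode:
                if is_red_marker(pixel):
                    hits.append((x, y))
            elif any(channel > 0 for channel in pixel):
                hits.append((x, y))
    return hits

frame = [[(200, 30, 20), (90, 90, 90)]]  # a red-glove pixel, a grey body pixel

print(interaction_pixels(frame, glove_mode=True))   # [(0, 0)]
print(interaction_pixels(frame, glove_mode=False))  # [(0, 0), (1, 0)]
```

With the glove filter on, only the marker pixel can interact (precise intervention); with it off, any body pixel can respond (global intervention), matching the two therapeutic modes described above.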
The ability to direct a client's motor response to be either specific or global makes it possible to train diverse motor abilities such as the range of motion of different limbs and whole body balance. Feedback A limitation of currently available video capture platforms is the reliance on visual and auditory feedback and the absence of a haptic interface that would provide participants with real-time indications of contact with the virtual stimuli. Such feedback could serve as an important addition when used in therapy since the balls, for example, could be rendered to appear to have progressively greater mass, making the task more or less difficult. It would also add an element of realism to the gaming experience, and ensure that feedback to participants was more realistic. This could be accomplished to some degree via a quasi-haptic effect that might use vibration to simulate a true haptic interface (A.A. Rizzo, personal communication). For example, small buzzers may be affixed to the tips of the digits. Touching a virtual ball in the Vivid GX Birds & Balls application would generate a low amplitude, high frequency "buzz". In contrast, repelling a larger ball in the Soccer application would generate a high amplitude, low frequency "buzz". User position Video-capture VR may be implemented while users stand, sit, or even walk on a treadmill. For example, the same environment may thus be suitable for training standing balance of a patient who had a stroke, sitting balance of an individual with an incomplete quadriplegic spinal cord injury, and balance during treadmill locomotion of an individual with a paraplegic spinal cord injury. Multiple users One or more users may participate within the same environment. In some applications, the ability to have two "rival" users interact simultaneously within the same game or task adds an element of competitiveness that may be motivating.
Of greater importance is the ability of the therapist to support a client or use handling techniques in order to facilitate active movement while the client interacts with the virtual stimuli. The therapist can be concealed behind the client in order not to be seen in the VE, or can join the client within the virtual environment. Two-dimensional motion plane Another limitation of the currently available video capture VR platforms is that they may be operated with only one camera. This means that all tasks must be performed within a single plane. In the case of the typical coronal plane setup where the camera is positioned to face the user, any functional movement that takes place in the sagittal or transverse planes is disregarded. Virtual scenarios must therefore be carefully designed such that a meaningful task can be performed despite the restriction to uniplanar movement. Moreover, care must be taken when analyzing the kinematic trajectories since any out-of-plane motion will not be recorded. It is encouraging to note that three dimensional, functional environments will likely soon become available (I. Cohen and A.A. Rizzo, personal communication). Applications of video-capture VR in rehabilitation Although video-capture platforms have only begun to be used for rehabilitation applications within the last five years, there are already results from a number of research groups who have studied their utility with different patient populations. In this section we highlight the major studies that provide evidence that this technology appears to be suitable for use in rehabilitation. The evidence concerning participants' sense of presence, enjoyment, usability and performance is summarized as reported by studies of single platforms and by studies that compared different VR platforms. Side effects None of the studies carried out to date have reported any significant occurrence of cybersickness-type side effects when using video-capture VR. Rand et al.
[ 28 ] explicitly examined the incidence of side effects in a group of 89 healthy participants who experienced the GX platform. The occurrence of the side effects was very low, and no participants requested to terminate their participation in the study. To date, evidence from a smaller number of patients with spinal cord injury (SCI) or stroke indicates that they also are not disturbed by side effects when using video-capture VR [ 25 , 34 ]. Presence and enjoyment Several studies examined the influence of video capture platforms on the user's sense of presence and level of enjoyment. Rand et al. [ 28 ], in their study of 40 healthy young adult participants, compared two different VR platforms, the GX-monitor and a combination of GX environments viewed via an HMD. They found that the participants' sense of presence was significantly higher when using the GX monitor platform than when using the GX-HMD. In a companion study, which compared the GX-monitor with an HMD in two age groups, 33 young adults and 16 elderly participants, the older group felt a significantly higher sense of presence and enjoyment than did the younger group using the HMD. Lott et al. [ 37 ] used the IREX video capture platform and an HMD and found that the levels of presence reported by the young adult participants did not differ significantly for the two virtual reality conditions. The results of these studies showed that a high sense of presence and level of enjoyment can be achieved in a video capture VR platform. They also demonstrate that user characteristics such as age influence the sense of presence. In another study, Rand et al. [ 38 ] compared the sense of presence, performance and perceived exertion experienced by 30 healthy young participants when they engaged in two games performed within video-projected virtual environments that differed in their level of structure and spontaneity.
The non-structured application was applied using VividGroup's Gesture Xtreme (GX) VR platform, and the structured application was applied using the IREX platform, a rehabilitation-oriented application of GX, developed to train a specific movement (e.g., shoulder abduction) in order to increase range of motion or endurance. No main effect or interaction effect was found for the sense of presence (assessed using Witmer & Singer's [ 39 ] Presence Questionnaire (PQ)), although significant differences were found for several of the PQ sub-scales. It was concluded that it is possible to provide users with a satisfactory level of presence and enjoyment using both structured and non-structured paradigms. Therefore, both movement options, structured and non-structured, enhance the therapist's repertoire of VR intervention tools in order to maximize rehabilitation. Rand et al. [ 40 ] reported the results of another study, in which two different video-capture platforms, GX and EyeToy, were compared to determine their effect on users' sense of presence, level of enjoyment, perceived exertion and side effects. In this study, 18 healthy young adults experienced two games in each platform (Birds & Balls and Soccer in GX and Kung-Foo and Wishy-Washy in EyeToy) in a counter-balanced order. There was no significant difference in the sense of presence between the two platforms. However, the EyeToy Kung-Foo game, which encourages participants to eliminate successive invading warriors by hitting them, was found to be significantly more enjoyable than the other games. In a continuation of this study, Rand et al. [ 40 ] examined the feasibility of using the EyeToy with healthy elderly users. Ten healthy elderly participants, aged 59 to 80 years, found this platform easy to operate and enjoyable. The results for patients with stroke at a chronic stage (1–5 years post stroke) were similar to the healthy elderly.
They thought that it could contribute to their rehabilitation process, and were able to operate the platform independently. The responses of a third group of users, patients with stroke at an acute stage (1–3 months post stroke), were somewhat different. They also reported that they enjoyed the experience; however, they became frustrated while performing the EyeToy games, even when played at the easiest levels. This latter observation highlights a major limitation of the closed architecture of the EyeToy; to date, Sony has been unwilling to adapt the games to include a greater range of levels of difficulty, or to provide tools to external programmers to do so (R. Marks, personal communication). It also emphasizes the effect that user characteristics, in this case, time post onset of stroke, have on the sense of presence. The GX VR platform has consistently generated high levels of presence and enjoyment across a wide range of clinical populations and ages including adults with paraplegic spinal cord injury [ 34 ], stroke [ 25 , 33 ], and young adults with cerebral palsy and intellectual impairment [ 41 ]. A pilot study using the GX platform to determine its suitability for leisure time activities among older stroke survivors was carried out. These participants enjoyed the experience, and perceived it to be therapeutic [ 42 ]. Performance outcomes and sensitivity of video capture VR The measures of performance used by video-capture VR studies to date include response times to presented virtual stimuli, the percent success with which a given game is performed (e.g., how many balls are repelled by the user in the role of soccer game goal keeper), and a subjective report of how much effort the user felt while in the environment. The chroma key video capture platforms such as GX and IREX also provide a relatively gross measure of limb kinematics.
Whether these data have sufficient precision and resolution to warrant their inclusion in a research study remains to be investigated (F. MacDougal, personal communication). Sveistrup, McComas and colleagues have used the IREX platform for balance retraining. Following six weeks of training at an intensity of three sessions per week, improvement was found for all 14 participants in both the VR and control groups [ 35 ]. However, the VR group reported more confidence in their ability to "not fall" and to "not shuffle while walking". The same research group has also demonstrated that an exercise program delivered via video capture VR can improve balance and mobility in adults with traumatic brain injury [ 43 ] and the elderly [ 44 ]. Kizony et al. [ 34 ] performed a feasibility study of the GX-VR platform to train balance of people who had a paraplegic SCI. The study included 13 adult participants who had paraplegia. Results from the patient group were compared to data from a parallel study of a group of 12 healthy adult participants who performed a similar protocol, while sitting on a chair with hands supported. The results showed that the participants with SCI who had better balance function performed better within the virtual environments, and that the healthy participants performed significantly better than the participants with paraplegia. This platform appeared to be suitable for use with people who have paraplegia and it was able to differentiate between participants with different levels of balance function. In a second study Kizony et al. [ 25 ] examined the relationships between cognitive and motor ability and performance within the GX-virtual environments with people who have had a stroke. Thirteen older adult patients with stroke participated in the full study. Significant moderate positive correlations were found between VR performance and cognitive abilities suggesting that higher cognitive abilities relate to higher performance within the VR.
In contrast, almost no positive correlations were found with the motor abilities. Indeed, as pointed out by these authors, perhaps motor performance demands and their characteristics should not be expected to be identical within the real and the virtual worlds. It may be that differences in presence, motivation, or other factors influence the movement patterns differently in virtual versus natural environments. This result is in accordance with Lott et al.'s [ 38 ] findings which showed significant differences between functional lateral reach performed in a real versus virtual environment. They reported that the participants reached significantly further when virtual objects were presented within the virtual environment using a video capture VR platform than when they were asked to touch the hand of a person standing at their side. They suggest that embedding the reaching task in a game shifts the person's attention from the possibility of losing his balance thereby enabling him to achieve greater function. Rand et al. [ 28 ] used a virtual office environment which was developed by Rizzo et al. [ 15 ] and was displayed both via an HMD and via the GX-monitor platform. In this case, participants stood in front of the GX monitor and visually scanned the Virtual Office. Performance by both age groups was significantly higher when using the GX-monitor platform than when using an HMD, whereas the younger group's visual scan ability was better than the elderly's on both platforms. The results also demonstrated the effect that different user characteristics, such as age and gender, have on the VR experience and thus should be taken into consideration when considering which VR platform to use in rehabilitation. Weiss et al.
[ 41 ], in a study of five young male adults with physical and intellectual disabilities, explored ways in which virtual reality could provide positive and enjoyable leisure experiences during physical interactions with different game-like virtual environments and potentially lead to increased self-esteem and a sense of self-empowerment. The results of this study showed that the GX-VR platform was feasible for use with this population. The participants were able to use the platform and expressed considerable enjoyment of the virtual games. However, the authors raised several concerns, especially that some of the participants displayed involuntary movement synergies, increased reflexes and maladaptive postures because the levels of the games used in the study were too difficult. Thus, a more controlled study with the same population is currently in progress in order to examine more thoroughly the potential of the platform as a means of providing leisure opportunities to this population. Performance within two games (Kung-Foo and Wishy-Washy) was measured while three different groups, young adult participants, healthy senior participants and individuals who were several years post-stroke, used several of the EyeToy games [ 40 ]. Performance was scored for each game in terms of how much of a given activity (e.g., how many windows washed, how many warriors eliminated) was accomplished within a preset time limit. Higher scores were achieved when clients were able to perform these activities faster and/or more accurately. There were significant differences in performance between the young and stroke groups, with the young adults having greater success in both games than the stroke group. The older adult group performed as well as the younger group.
The performance results described above highlight the interplay between the user and VR platform characteristics, and emphasize the importance of taking these characteristics into consideration while using VR in rehabilitation. Moreover, they demonstrate the sensitivity of the VR performance measures in their capacity to differentiate between levels of participant ability. Due to the motivating nature of the game-like environments, it is important to determine how much effort healthy subjects and those with disabilities expend while engaged in these tasks. In a study of healthy young adults, the participants using the GX platform perceived the highest level of exertion while playing Soccer, less for Birds & Balls and still less for a third game, Snowboard, where only weight transfer was needed [ 28 ]. When differences between the age groups were assessed, the younger group perceived higher levels of exertion in comparison to the older group. There were also differences in the perceived level of exertion of the Birds & Balls game in GX as compared to comparable games in the EyeToy [ 40 ]. Overall, the level of perceived exertion was rated as "somewhat difficult", which is an ideal level to use in therapy. Initial comparisons of VR-based intervention to conventional therapy Using the IREX platform, Sveistrup et al. [ 35 ] performed two studies designed to compare VR-delivered therapy to conventional therapy. In their first study, patients suffering from frozen shoulder received exercise either via IREX applications or via conventional physiotherapy. In both cases, therapy was directed at improving the quality of three specific shoulder joint movements.
In the second study, individuals who suffered from post-traumatic brain injury were assigned to either VR-based (applications such as the virtual soccer game were used where patients were encouraged to reach towards the virtual stimulus in addition to weight transfer) or conventional therapy (e.g., stepping, picking up objects, reaching) for balance training for a total of 24 sessions. In their report on preliminary data from 14 patients, the authors concluded that both exercise programs resulted in improvement of patients' balance. However, additional benefits were identified for the VR group, including greater enthusiasm for the VR-delivered therapy program, increased enjoyment while doing the exercises, improved confidence while walking and fewer incidents of falling. Cunningham & Krishack [ 32 ] presented VR as it was used in occupational therapy to improve balance and dynamic standing tolerance with geriatric patients. They reported greater improvement in dynamic standing tolerance in a small group of older adults following VR therapy than in a small group following standard occupational therapy. More recently, Bisson et al. [ 44 ] demonstrated significant improvements in balance and functional mobility in community-living older adults following a VR exercise program delivered with the IREX platform. The comparison group completed a biofeedback exercise program and also demonstrated significant balance improvement. Analysis of conventional and video capture VR treatment for SCI by specialists in rehabilitation highlighted several key differences between the two methods of intervention [ 34 ]. First, control over delivery of the stimuli via the VR platform enabled the therapist to intervene more effectively, especially in terms of physical guidance and support. In addition, the VR platform allowed precise control over the number of stimuli simultaneously presented to the patient as well as their speed and direction.
These features appeared to increase the number of times a desired balance-recovery movement was performed by patients. Finally, the ease with which this platform elicited dynamic equilibrium recovery responses, an essential component in balance training, and encouraged weight transfer movements was remarkable. In contrast, the static presentation of stimuli during conventional therapy restricts intervention to focus almost exclusively on weight transfer. Towards functional video-capture environments One of the newest developments in video-capture VR is the simulation of more functional environments. Rand et al. [ 45 ] have created a Virtual Mall (VMall), using the GX platform. It has been designed to support intervention for patients following a stroke who have motor and/or executive function deficits that restrict their everyday activities. This environment enables participants to engage in tasks based on typical daily activities such as shopping in a supermarket. In the initial application, shown in Figure 3 , the user moves from aisle to aisle by activating icons located around a large monitor, thereby encouraging active movement, transfer of weight from side to side, and balance reactions. Virtual food items are manipulated (e.g., selected from a shelf and placed in a supermarket cart) in accordance with a shopping list selected in advance. The performance of the task provides multiple opportunities to make decisions, plan strategies and multitask, all in a relatively intuitive manner. Output measures record how well the user accomplishes the task (e.g., how many correct items are selected) and are saved, giving the option to monitor improvement over time. Initial performance measures and user feedback have been recorded from six patients who had a stroke more than two years since onset and suffer from residual motor and cognitive deficits.
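The output measure described above — checking the items a client placed in the cart against the pre-selected shopping list — might be scored along the following lines. This is a hypothetical sketch, not the actual VMall code; the item names and the (correct, missed, extra) breakdown are invented for illustration.

```python
# Hypothetical scoring sketch for a VMall-style shopping task: compare the
# items a client placed in the cart against the pre-selected shopping list.
# Item names and the scoring rule are invented for illustration.

def score_shopping(shopping_list, cart):
    """Return (correct, missed, extra) item counts for one session."""
    wanted, chosen = set(shopping_list), set(cart)
    correct = len(wanted & chosen)   # listed items that were selected
    missed = len(wanted - chosen)    # listed items never placed in the cart
    extra = len(chosen - wanted)     # off-list items placed in the cart
    return correct, missed, extra

shopping_list = ["milk", "bread", "apples", "cheese"]
cart = ["milk", "apples", "cookies"]

print(score_shopping(shopping_list, cart))  # (2, 2, 1)
```

Saving one such tuple per session would support the kind of monitoring of improvement over time that the text describes.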
The results suggest that the VMall provides a motivating task that requires active movement as well as the ability to plan and problem solve. Figure 3 Screen shots of the VMall showing clients with stroke selecting a shopping aisle (left panel), a food item (middle panel) and verifying the contents of the shopping cart (right panel). Sony's EyeToy Wishy Washy application involves the cleaning of successive dirty windows via wiping movements of the hands and arms. Most recently, VividGroup has developed a laundry application (V.J. Vincent, personal communication). These moves towards more functional applications are encouraging. Conclusions Evidence from the literature has demonstrated the feasibility, usability and flexibility of video-capture VR, and there is little doubt that this technology provides a useful tool for rehabilitation intervention. The results of presence questionnaires, reports of user satisfaction, and the sensitivity to differences in user ability as functions of age, gender and disability are all strong indicators of the suitability of this tool. A short video-clip, taken from a local news report of applications of video-capture VR for stroke, illustrates the extremely positive response of one user to the use of this technology (see Video 1). To date, as indicated by the studies reviewed above, video capture VR shows great promise for a variety of therapeutic goals including intervention for cognitive and motor rehabilitation, functional activities and leisure opportunities. The general assets of virtual reality summarized above, combined with several assets that are unique to video-capture VR, are compelling arguments for the inclusion of this technology in the repertoire of tools available in clinical settings. Market demand, user interest and improvements in technology have led to the availability of a number of different video-capture platforms.
There is no doubt that these platforms are valuable as intervention tools during the rehabilitation of patients with neurological and musculoskeletal disorders. Motivated patients would be encouraged to practice movements in a repetitive manner, thereby improving their condition, an achievement that is not easy to attain via conventional therapy [ 46 ]. Currently, the two main contenders for the rehabilitation market are VividGroup's GX and IREX platforms and Sony's PlayStation II EyeToy. Both use large monitors to display real-time images of users interacting with virtual objects in a simulated environment. The VividGroup platforms are considerably more expensive and require a more elaborate setup including a chroma key blue/green backdrop behind the user and bright, ambient lighting. Sony's EyeToy is an off-the-shelf, low-cost gaming application that may be run under almost any ambient conditions. Studies comparing these two platforms have shown that presence, enjoyment, usability and performance were equivalent under many conditions and for diverse users. Thus, despite the EyeToy's limitations, its low cost, user-friendly interface and simple setup requirements make it highly attractive to therapists. It may be readily acquired for use in any clinical setting, and even be purchased for use at home to provide regular, intensive therapy after discharge from hospital. Nevertheless, it is clear that the EyeToy is not suited for use with the most severely impaired users. The currently available games seem to have a broad appeal for users of different ages but an open architecture that permits adaptations of existing applications and development of new environments appears to be a basic requirement to make this platform truly functional as a clinical tool. A system for generating an outcomes report comparable to that of the IREX platform would also be of great benefit for clinicians. Additional low-cost video-capture platforms are currently under development (M.
Shahar, personal communication). Moreover, video-capture platforms that will provide three dimensional, functional environments will likely soon become available (Cohen and Rizzo, personal communication). In contrast to the EyeToy's closed architecture, VividGroup's IREX platform provides a user-friendly interface that a therapist may use to specify a much greater range of levels of difficulty. Their SDK (Software Development Kit) provides programmers with the ability to further adapt existing applications such as the standard set of games [ 33 ] and to design and implement novel applications such as the virtual mall described above [ 45 ]. The popular press has been generating a considerable amount of publicity about the EyeToy platform [ 31 ], and it is clear that low-cost video-capture systems such as these are poised to make VR available to a wide range of users. We anticipate that future developments in technology, such as low-cost virtual environments that are more functional, will enable clinicians to take advantage of the considerable benefits that VR has for rehabilitation. Supplementary Material Additional File 1 Video 1: This video clip shows a patient who had a stroke using the VividGroup VR system for cognitive and motor rehabilitation. Click here for file
554772 | Fuzzy species among recombinogenic bacteria | Background It is a matter of ongoing debate whether a universal species concept is possible for bacteria. Indeed, it is not clear whether closely related isolates of bacteria typically form discrete genotypic clusters that can be assigned as species. The most challenging test of whether species can be clearly delineated is provided by analysis of large populations of closely-related, highly recombinogenic, bacteria that colonise the same body site. We have used concatenated sequences of seven house-keeping loci from 770 strains of 11 named Neisseria species, and phylogenetic trees, to investigate whether genotypic clusters can be resolved among these recombinogenic bacteria and, if so, the extent to which they correspond to named species. Results Alleles at individual loci were widely distributed among the named species but this distorting effect of recombination was largely buffered by using concatenated sequences, which resolved clusters corresponding to the three species most numerous in the sample, N. meningitidis , N. lactamica and N. gonorrhoeae . A few isolates arose from the branch that separated N. meningitidis from N. lactamica leading us to describe these species as 'fuzzy'. Conclusion A multilocus approach using large samples of closely related isolates delineates species even in the highly recombinogenic human Neisseria where individual loci are inadequate for the task. This approach should be applied by taxonomists to large samples of other groups of closely-related bacteria, and especially to those where species delineation has historically been difficult, to determine whether genotypic clusters can be delineated, and to guide the definition of species. | Background The definition of bacterial species, and a concept of species applicable to all bacteria, are problems that have long exercised systematists and microbiologists [ 1 - 4 ]. 
While species names have been assigned to groups of organisms sharing many common phenotypic traits, and a certain minimum level of genomic similarity, attempts to define species using DNA sequences have been relatively unsuccessful. The existence of very different levels of sequence diversity among named species, and the variable extent of gene flow within and between bacterial taxa [ 5 ], complicates species concepts and definitions. Indeed, for many, bacterial species are constructs of the human mind, arising from our desire to impose order on the bacterial kingdom [ 6 , 7 ], rather than natural subdivisions imposed by underlying genetic processes, and a central question is not so much how species should best be assigned, but whether such entities exist and can be delineated. Molecular approaches to assigning bacteria to species began with the introduction of DNA-DNA hybridization, which allowed an objective assessment of the extent of sequence similarity among a set of genomes, and remains the systematist's gold standard, defining bacterial species as those isolates whose genomes show at least 70% hybridization under standardized conditions [ 3 ]. However, few laboratories now use this method and, in practice, novel isolates (and particularly those that presently are unculturable) are usually compared to each other, and to known taxa, by assessing the sequence similarities in their 16S rRNA genes. 16S rRNA sequences are highly conserved and do not provide sufficient resolution to explore the relatedness among closely related bacterial populations, and less conserved genes need to be used to delineate similar species. Individual isolates of a named species differ in gene content [ 8 ] and the distribution of these genes is key to understanding the variable properties of isolates of a species, particularly among bacterial pathogens.
These auxiliary loci exist alongside the set of genes that are present in all isolates of the named species (the core genome) and which include those that encode enzymes with house-keeping functions [ 9 ]. Besides being present in all isolates of a species, core house-keeping genes carry variation that is considered to be largely neutral, and thus provide a more reliable indication of genetic relatedness than genes that are subject to strong selection [ 10 ]. We would expect any reasonable definition of a species to delineate a cluster of isolates that have very closely-related house-keeping loci that are present in all isolates of a species (as has also been proposed for eukaryotes [ 11 ]). However, single house-keeping loci are unlikely to have sufficient variation to allow confident resolution of the different lineages. For recombinogenic bacteria, and arguably all bacteria, multi-locus approaches are required, as these provide increased resolution, and also reduce the impact of 'inter-species' recombination. Thus, a localized interspecies recombinational event at one locus, which distorts the true relatedness between species, is buffered by the more reliable indications of relatedness provided by the other loci. Furthermore, attempts to observe whether or not species exist, and how sharply they can be defined, require the analysis of large populations of each candidate species and not just one or a few reference isolates. A multilocus approach has recently been applied to small numbers of isolates of several relatively distantly related named species of enterobacteria [ 10 ], and other bacteria [ 11 ], and to larger numbers of isolates of related bacteria that are believed to have relatively low rates of recombination [ 12 - 14 ]. However, it is unclear whether species can be resolved using a multilocus approach in the more challenging case of highly recombinogenic bacteria colonising the same body site. 
Ideally, we would like to know if, in a large collection of such isolates that are believed to include examples of a number of closely related named species, we can resolve well delineated clusters, and the extent to which any clusters relate to the species names assigned by standard microbiological procedures. Can such populations diverge into distinct populations, and stay distinct, in the face of frequent and promiscuous recombination? In this study we have evaluated the ability of seven individual house-keeping gene sequences, and of the concatenated sequences of these genes, to resolve a large sample of human pathogenic and commensal Neisseria into genotypic clusters. We chose this example because Neisseria are naturally transformable, are among the most recombinogenic bacteria, and there is good evidence for relatively frequent localised recombination between the named Neisseria species [ 15 , 16 ] through transformation. We demonstrate that individual genes are incapable of identifying consistent clusters among the Neisseria isolates, but the tree based on the concatenated sequences effectively resolves the three major named species within the sample, although the boundaries are fuzzy due to the presence of a small number of intermediate genotypes. Results The widespread use of multilocus sequence typing (MLST) [ 17 ] for epidemiological purposes provides the sequences of seven house-keeping gene fragments from thousands of isolates of several bacterial pathogens. However, few of the available MLST databases include any substantial numbers of isolates of multiple closely related named species. An exception is the public Neisseria MLST database, which includes several thousand sequence types (STs) of N. meningitidis and smaller numbers assigned to several other named human Neisseria species [ 18 ] on the basis of standard phenotypic tests. The first 500 STs of N. meningitidis were compared with all STs assigned to the other human Neisseria species. 
The sequences of the seven gene fragments were concatenated in-frame and a tree was constructed (using third codon position sites) using Mr Bayes [ 19 ]. Figure 1 is the majority rule consensus of 10 000 trees generated from the posterior probability at stationarity. All 67 STs of N. gonorrhoeae , and all but two of the 171 STs of N. lactamica , descend from single well-supported nodes (the remaining two N. lactamica clustered very anomalously and have probably been incorrectly identified). The great majority of N. meningitidis also formed a single well-resolved cluster, but a few arise from the branch leading to the N. lactamica isolates. Very similar clustering of these three species was observed using other sets of 500 N. meningitidis STs from the database, and in a neighbour-joining tree constructed using all STs in the Neisseria MLST database (data not shown). The high levels of recombination in the Neisseria make the fine structure of the tree meaningless (Figure 1 ), and here we use the tree-building software first and foremost as a clustering tool. Analysis of the individual gene trees shows that these fail to resolve the named species and highlights many examples where interspecies recombination has resulted in anomalous clustering (Figure 2 ). The clear inability of single locus trees to resolve the named species, which are well resolved using the concatenated sequences, establishes that multiple loci are required to buffer against the distorting effect of inter-species recombination at the individual loci. Although the concatenated sequences resolve three named species, N. gonorrhoeae , N. meningitidis and N. lactamica , their boundaries are not perfectly defined and a number of isolates are placed on the branch between N. lactamica and N. meningitidis , representing intermediate genotypes. The small numbers of STs assigned to other human Neisseria species do not cluster clearly. 
A significant separation is observed between two subtrees (A and B in Figure 1 ), although these both contain isolates assigned as N. sicca , N. mucosa and N. subflava . Multiple minimum-evolution trees constructed using all STs of these other Neisseria species and randomly selected samples of ten STs from each of N. meningitidis , N. lactamica and N. gonorrhoeae , showed the same deep split between these subtrees, which was also observed in trees constructed from all Neisseria STs (all species) in the MLST database, using Neighbour-Joining, minimum evolution and UPGMA tree-building approaches (data not shown). Discussion Current molecular definitions of species use rules or cut-off values (e.g. ≥ 70% DNA-DNA hybridization) and rarely take account of the genotypic diversity within and between populations [ 3 ]. A more natural and pragmatic approach is to analyse large populations of related isolates, that are believed to cover multiple species, and to observe whether suitable molecular methods can resolve distinct clusters in sequence space that can be given appropriate names [ 11 ]. This approach has not yet been rigorously applied to bacteria. Consequently we have no idea whether large populations of related bacteria can invariably be divided into discrete clusters using suitable molecular methods or, alternatively, whether many groups of related bacteria fall into a genetic continuum where clear divisions do not exist. Sequence-based approaches should help us answer this question. However, most studies have focused on single loci and small numbers of isolates, whereas multilocus approaches with large populations are essential as the history of individual genes (including rRNA operons [ 20 ]) may be obscured by interspecies recombination, and clusters observed using a small number of isolates may merge when larger numbers of isolates are considered. 
Comparison of the tree based on the concatenated sequences with the individual gene trees clearly illustrates the inadequacy of single loci for resolving N. meningitidis and N. lactamica (Figure 2 ). The concatenation of the seven housekeeping loci shows that multiple loci can buffer against the distorting effects of inter-species recombination and that the boundaries between the three dominant species in the Neisseria MLST database can be resolved. Network based methods (e.g. Neighbor-Net [ 21 ], Splitstree [ 22 ]) applied to both the concatenates and individual loci produce output with numerous reticulations, indicating the conflicting signals in the data, such that the implied relationships between STs within clusters have no phylogenetic meaning. Nevertheless, the use of multiple loci enables us to observe the species clusters even in the presence of conflicting signals. The three main clusters coincide well with the species names derived by standard microbiological procedures and the present definitions of N. meningitidis , N. lactamica and N. gonorrhoeae are reasonably secure; the two N. lactamica that clustered highly anomalously probably represent species mis-identification. The most critical test of the multilocus approach is the ability to resolve N. lactamica from N. meningitidis since these colonise the same body site, the nasopharynx. Resolution of these named species was remarkably good, although the boundaries between N. lactamica and N. meningitidis are somewhat fuzzy, due to the existence of intermediate forms. This is to be expected as recombinogenic bacteria have mosaic genomes, resulting from the occasional replacement of chromosomal segments with those from related populations. Thus, in any large dataset, there may be isolates in which one or more of the loci used in a multilocus approach to species definition will have been recently introduced from a related population. 
Single unusually divergent replacements, or replacements at more than one of the multiple loci, may place isolates away from the majority of isolates of the species. However, only seven STs in Figure 1 fell into this category (of 667 STs from isolates identified as either N. meningitidis or N. lactamica ), and there was no overlap between these two named species (i.e. a region containing isolates identified as both species interspersed with one another). Sorting the human commensal Neisseria into species has been difficult, with frequent revisions of species names [ 23 ]. We gain some insight into the extent and source of this difficulty in Figure 1 , where isolates assigned as N. mucosa , N. sicca and N. subflava each fall in very different parts of the tree, and the subtree shown in Figure 1A contains several closely related isolates that have been assigned to these three different named species. Additional studies of the human commensal, Neisseria (and of other groups plagued with similar problems, such as viridans streptococci) using the multilocus approach with large datasets, should clarify whether they fall into distinct clusters, or whether the difficulties in defining species by phenotypic methods reflect an underlying genetic reality in which resolved clusters are not evident. If necessary, further resolution between apparent clusters may be attempted by increasing the numbers of loci sequenced. Provided that the alleles at these loci show a degree of specificity to a given species cluster, then the resolution of that cluster will be enhanced. If this cannot be demonstrated, then it is likely that the isolates under test do not genuinely form separate populations, and should not be considered to be distinct species. This approach lends itself to "electronic taxonomy", in which systematic classification may be evermore finely elucidated through the accumulation of online sequence databases. 
The work described here obviously raises the question of what forces or mechanisms could generate such separation among recombining bacteria. We offer a simple model for recombining organisms as follows: consider two populations freely recombining within themselves and with each other. New mutations arising in one population will readily spread to the other, and to an observer they appear to form one cluster of related strains. If a barrier to recombination should be erected between them, such that isolates are much more likely to undergo recombination with their own population, then the rate of generation of new genotypes within each population may increase beyond the rate at which such genetic innovation is shared and the two populations begin to diverge. As the populations diverge, decreasing sequence identity will further impede recombination, thus reinforcing the effect of the original genetic barrier and creating a permanent separation [ 24 , 25 ]. It is not difficult to suggest candidate mechanisms. Niche separation is one example, and almost certainly underlies the tight, well-defined cluster of N. gonorrhoeae . Unlike the other human Neisseria , which colonise the nasopharynx, the primary niche of the gonococcus is the genital tract, and it has been proposed that gonococci arose relatively recently due to the successful invasion of the genital tract by a nasopharyngeal Neisseria lineage [ 26 ]. Similarly, what appears to be a single body site (e.g. the human nasopharynx) may contain multiple niches that can be exploited, leading to opportunities for speciation. Restriction-modification systems [ 27 ], limitation of transformability by differences in pheromone-type [ 28 ] and similar processes are feasible alternatives. The point at which such a group is described as a species is a matter more of human interest and attention than any intrinsic evolutionary process. 
The properties of the species clusters we observe will be determined by the diversification of those strains sharing the speciation loci (i.e. those that determine gene flow). Because speciation is gradual, we should be able, using estimates of recombination within and between groups derived from multilocus data, to define nascent species which, if they continue to diversify in isolation, are expected to form distinct sequence clusters, i.e. species, in the future. Conclusion The bacterial domain of life is not uniform. Instead we see clumps of similar strains that share many characteristics, and with an innate human urge to classify, we have defined these as species. This work shows that by applying a simple approach using sequence data from multiple core housekeeping loci, we can resolve those clusters, provided such clusters exist. However, these species clusters are not ideal entities with sharp and unambiguous boundaries; instead they come in multiple forms and their fringes, especially in recombinogenic bacteria, may be fuzzy and indistinct. A multilocus approach using large numbers of isolates will provide data that help us to develop theoretical models of how species emerge, and relate these to the observed population genetic structure of bacteria. This should be enormously helpful to taxonomists, whose foremost duty will remain to provide us with pragmatic species designations which attempt to reflect the underlying genetic reality. Methods Strains The contents of the publicly accessible Neisseria MLST database [ 17 , 18 ] were used to explore the validity of the approach described here for other species. Alleles at the seven MLST loci of all isolates defined as Neisserial species other than N. meningitidis (67 isolates of N. gonorrhoeae , 171 of N. lactamica , 5 of N. sicca , 3 of N. mucosa , 5 of N. cinerea , 7 of N. polysaccharea , 3 of N. flava , 4 of N. perflava , 4 of N. subflava and 1 isolate of N. 
flavescens ) were concatenated as described below, and analysed together with the concatenated sequences of N. meningitidis strains with ST numbers from 1 to 500. Species definitions were as recorded at [ 17 , 18 ], and were according to standard clinical microbiological schema. The sequences of the individual alleles at the seven loci in the above Neisseria were also used to construct individual gene trees. Phylogenetics and population genetics MLST loci were concatenated in-frame to form a 3267 bp sequence, of which only third position sites were used in subsequent analyses. To illustrate clustering in this dataset, a tree was constructed using Mr Bayes 3.0b4 [ 19 ]. A starting tree was determined in PAUP (version 4 beta 10) [ 29 ] using the Neighbour-Joining method with distances corrected using the HKY85 model. The starting tree was input into Mr Bayes, and four Markov Chain Monte Carlo chains were run with default heating parameters until convergence and 10 000 trees were sampled from the posterior probability distribution. These were then used to produce a 50% majority rule consensus tree. Minimum evolution trees for individual loci were constructed in MEGA 2.1 [ 30 ]. Third position sites were used with the Kimura 2-parameter distance correction. List of abbreviations rRNA ribosomal RNA MLST Multi Locus Sequence Typing ST Sequence Type UPGMA Unweighted Pair Group Method with Arithmetic Mean Authors' contributions BGS conceived of the study and drafted the manuscript, CF participated in study design and analysis of results, WPH designed the study, carried out the analyses and interpreted the results, and drafted the manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC554772.xml |
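The Methods above describe concatenating the seven MLST loci in-frame, restricting the analysis to third codon positions, and correcting distances with the Kimura 2-parameter model. A minimal Python sketch of those two computational steps follows; it assumes aligned, ungapped, equal-length sequences, and the function names and toy sequences are illustrative, not from the study's pipeline.

```python
# Sketch (assumptions noted above): third-position extraction from an
# in-frame concatenated sequence, plus a Kimura 2-parameter distance.
import math

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def third_positions(seq):
    """Return every third base (codon position 3) of an in-frame sequence."""
    assert len(seq) % 3 == 0, "concatenate loci in-frame first"
    return seq[2::3]

def k2p_distance(s1, s2):
    """Kimura 2-parameter distance: d = -1/2 ln(1-2P-Q) - 1/4 ln(1-2Q),
    where P and Q are the proportions of transitions and transversions."""
    assert len(s1) == len(s2), "sequences must be aligned"
    transitions = transversions = 0
    for a, b in zip(s1, s2):
        if a == b:
            continue
        # A<->G or C<->T differences are transitions; the rest transversions
        if {a, b} <= PURINES or {a, b} <= PYRIMIDINES:
            transitions += 1
        else:
            transversions += 1
    p, q = transitions / len(s1), transversions / len(s1)
    return -0.5 * math.log(1 - 2 * p - q) - 0.25 * math.log(1 - 2 * q)
```

For example, two identical third-position strings give a distance of 0.0, and a single A-to-G change in four sites (P = 0.25, Q = 0) gives -0.5 ln(0.5), about 0.347, more than the raw 0.25 because the correction accounts for unobserved multiple hits.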
546405 | Homeostatic capabilities of the choroid plexus epithelium in Alzheimer's disease | As the secretory source of vitamins, peptides and hormones for neurons, the choroid plexus (CP) epithelium critically provides substances for brain homeostasis. This distributive process of cerebrospinal fluid (CSF) volume transmission reaches many cellular targets in the CNS. In ageing and ageing-related dementias, the CP-CSF system is less able to regulate brain interstitial fluid. CP primarily generates CSF bulk flow, and so its malfunctioning exacerbates Alzheimer's disease (AD). Considerable attention has been devoted to the blood-brain barrier in AD, but more insight is needed on regulatory systems at the human blood-CSF barrier in order to improve epithelial function in severe disease. Using autopsied CP specimens from AD patients, we immunocytochemically examined expression of heat shock proteins (HSP90 and GRP94), fibroblast growth factor receptors (FGFr) and a fluid-regulatory protein (Na-K-2Cl cotransporter isoform 1 or NKCC1). CP upregulated HSP90, FGFr and NKCC1, even in end-stage AD. These CP adjustments involve growth factors and neuropeptides that help to buffer perturbations in CNS water balance and metabolism. They shed light on CP-CSF system responses to ventriculomegaly and the altered intracranial pressure that occurs in AD and normal pressure hydrocephalus. The ability of injured CP to express key regulatory proteins, even at Braak stage V/VI, points to plasticity and function that may be boosted by drug treatment to expedite CSF dynamics. The enhanced expression of human CP 'homeostatic proteins' in AD dementia is discussed in relation to brain deficits and pharmacology. | Review Choroid plexus impact on Alzheimer's disease Accumulating evidence supports the idea that continually decreasing choroid plexus (CP) function in advanced ageing exacerbates Alzheimer's disease (AD). 
As part of a new paradigm to explain brain interstitium deterioration in age-related dementias, increasingly more attention is being paid to the role of compromised blood-CSF [ 1 ] and blood-brain [ 2 ] barriers. Structural alterations and functional failures in CP as well as brain capillary transport systems adversely affect fluid dynamics and composition [ 3 , 4 ]. This review treats mainly CP dysfunction and the compensatory reactions that occur in this epithelium in AD. Efficient CSF turnover is essential for a healthy brain. It depends upon an exquisite balance between CSF formation and reabsorption [ 5 ]. Compromised secretory phenomena at the CP 'upstream' predispose the brain to AD-type problems. On the other hand, defective clearance of CSF at the arachnoid membrane 'downstream' leads to normal pressure hydrocephalus (NPH) [ 6 ]. The pivotal role of the CP in CSF homeostasis and brain viability becomes more evident when the system fails. More information is needed to evaluate how diminishing choroidal functions affect the surrounding brain in the face of AD and other dementias. Brain fluid homeostasis: The role of CSF volume transmission In serving the brain's metabolic needs by supplying 'biochemical goods', the CP uses CSF as a conduit for convecting substances [ 7 ]. Numerous solutes ranging from ions to large proteins are entrained in the CSF that percolates through the ventricular axis (Fig. 1 ). CSF intimately contacts the periventricular brain tissue with which it exchanges materials bidirectionally by diffusion and bulk flow. Continually undergoing chemical modification as it flows downstream, the CSF completes the volume transmission process [ 7 ] by draining into venous blood at distal arachnoidal sites (Fig. 1 ). Figure 1 Schema for CSF convection of water and solutes: Arterial blood perfusing the choroid plexus continually provides water, ions and organic substrates for the CP epithelial cells to form the CSF that underlies volume transmission. 
Manufactured by the CP epithelium as the result of numerous transport processes, the CSF is actively secreted into the ventricles. In transit through the ventricular cavities to the arachnoid drainage sites, the CSF exchanges anabolites and catabolites with brain interstitial fluid. As a result, trophic and signaling molecules are delivered (blue arrow) to the neurons and, concurrently, toxic waste products and unneeded proteins are removed (red arrow) by CSF 'sink action' on the brain. The permeable ependymal membrane allows bi-directional diffusion of beneficial and harmful molecules. Bulk flow or volume transmission of CSF is thus essential in effecting the homeostasis of fluid composition. Reduced formation of CSF and stagnated flow in ageing and AD [ 4 ] limit the delivery of substances to neurons. This debilitates brain function [ 3 ]. Numerous proteins are synthesized and secreted by CP into CSF. Other substances are transported from blood. Table 1 overviews molecules normally distributed by the CP-CSF nexus. Arginine vasopressin (AVP) is a neuropeptide synthesized by CP epithelium and secreted into CSF [ 8 , 9 ]; it regulates CSF formation and modulates hippocampal memory mechanisms [ 10 , 11 ]. Growth factors foster cell growth, blood supply and water balance [ 1 ]. Trace element distribution across CP is complex and involves many mechanisms. Iron transport into CSF, for example, is regulated by several proteins [ 1 ], alterations in which may predispose to amyloidogenesis [ 12 ]. Vitamins B and C are actively transported at the blood-CSF barrier [ 13 ]. This stabilizes their concentration in CSF, except in late-onset AD [ 14 ]. The net entry of nucleoside bases into CSF is also determined by active transporters in CP [ 15 , 16 ]. Cysteine protease inhibitors like cystatin C are synthesized in CP epithelium for transport into CSF [ 17 ]. Hormones such as leptin and prolactin are translocated from plasma to CSF by saturable choroidal receptors [ 18 , 19 ]. 
Overall it is striking that the CP-CSF interface engages in such prolific transport. Disease-associated interference with CP transporters and volume transmission limits the availability of CSF molecules for the brain [ 1 , 20 ].
Table 1. Substances distributed to brain by transport at the blood-CSF gateway (class or group: example; function [references])
Neuropeptides: Arginine vasopressin; regulation of CSF formation [8–11]
Growth factors: Basic fibroblast growth factor; integration of water balance [55, 104]
Trace elements: Iron; enzyme function in neurons/glia [1, 12]
Vitamins: Ascorbate & folate; antioxidants & co-factors for brain [13, 14]
Nucleoside bases: Thymidine; nucleic acids & drug transport [15, 16]
Protease inhibitors: Cystatin C; neuroprotection after ischemia [1, 17]
Hormones: Prolactin & leptin; modulation of hypothalamus [18, 19]
Structural and functional damage to the choroid plexus in AD Structure intimately relates to function in various epithelia. Therefore it is useful to analyze histopathological damage to CP at various stages of AD in order to clarify the onset of functional losses at the blood-CSF barrier. Ageing and AD cause similar degeneration in CP. Structural changes in AD, though, are usually greater than in non-demented control counterparts [ 21 ]. Serot and colleagues have delineated modifications in the choroidal epithelium, stroma and vessels in AD subjects [ 22 - 24 ]. There are substantial alterations in all tissue compartments. Several features characterizing the CP in AD are highlighted in Table 2 . Epithelial cells are typically truncated, with an average volume 70% of that at birth. Such epithelial atrophy likely affects cellular functions. Particularly prominent is an increase in the number of Biondi bodies in AD [ 25 ]. These bodies are fibrillar inclusions in the cytoplasm of very old epithelial cells. Lipofuscin vacuoles also occur frequently in the cytoplasm. 
The basement membrane underlying the cell often thickens to 350 nm, compared to a much thinner membrane (ca. 100 nm) in neonates [ 21 ]. Another liability in AD is immunological deposition of C1q, IgG and IgM along the epithelial basement membrane. As fibrosis intensifies with age and disease, the stroma attains a thickness of a few tenths of a micron [ 21 ]. At the inner core of the choroidal villus, the blood vessel walls thicken. This coincides with the appearance of amyloid, hyaline bodies, psammomas and calcifications. Altogether, the histopathologic changes in CP compartments point to grossly-declining secretory functions in AD. Such structural abnormalities coincide with diminished CSF production in ageing [ 26 ] and AD patients [ 4 ].
Table 2. Pathological changes in choroid plexus in Alzheimer's disease (a)
Epithelial atrophy (↓ cell size by 1/3)
↑ Biondi bodies
Lipofuscin vacuoles
Basement membrane thickening (3-fold ↑)
↑ Stromal fibrosis
IgG and IgM depositions
(a) Described by Serot and colleagues in refs. 22–24
Due to the functional nexus of the CSF with brain interstitial fluid, the neuronal microenvironment in AD is impacted by markedly altered transport and permeability in CP. When neurodegenerative diseases, ischemia [ 27 , 28 ] and elevated pressure [ 29 , 30 ] inflict damage on the choroidal epithelium, there are resultant adverse changes in CSF composition and volume. Because neurons are sensitive to instabilities in CSF dynamics and constituents, it is important to assess the nature and progression of disrupted CP function in chronic diseases. In view of the expanding number of AD victims, it is timely to consider patterns of expression of chaperone proteins, receptors and transporters in CP at various Braak stages. Such information may abet future attempts to stabilize choroid plexus functional integrity perturbed in early AD dementia. 
Choroid plexus defense against insults in ageing and degeneration The CP is multifunctional, performing a wide range of homeostatic functions for the CNS [ 5 ]. CSF homeostasis is mediated mainly by CP. It involves the activity of many protein transporters and receptors at the basolateral and apical surfaces of the epithelial cells [ 31 ]. In addition to providing organic solutes for nutritive and trophic support of the brain, the CP secretions into the ventricles adjust the pH, osmolality, [K + ] and immune molecule content of the CNS extracellular fluid [ 7 ]. Accordingly, healthy neurons are fundamentally dependent upon transport at the blood-CSF and blood-brain interfaces. Moreover to provide steady neuroprotection by regulating the extracellular milieu, the CP must protect itself against various stressor agents that build up during ageing and disease. There have been few investigations of the homeostatic systems within CP that stabilize choroidal functions in the face of ageing and AD. We have explored some candidate systems for compensatory responses by CP. Cytoplasmic heat shock proteins chaperone and perform housekeeping to maintain a healthy steady-state intraepithelial milieu [ 1 , 32 ]. Growth factors critically minimize cell morbidity and mortality [ 27 ]. Fluid-regulating proteins correct ion and water imbalances that occur in neurodegenerative diseases. Consequently our hypothesis is that the upregulation of certain proteins in CP (such as those discussed above) thwarts certain untoward effects of ageing and disease progression. The protein expression aspect of the hypothesis was tested by analyzing immunostaining patterns in human CP specimens from patients with varying severity of AD. Analyses of human choroid plexus: Usefulness and challenges It is difficult to functionally assess the CP in vivo [ 30 , 33 ], particularly in man. 
Alternatively one can evaluate the status of human CP in disease by analyzing autopsied tissues for variable protein expression [ 34 ]. Such findings are compared to appropriate age-matched controls. Immunocytochemical and biochemical data gleaned from human CP highlight directions to pursue with living animal models: transgenic mice with an AD phenotype or aged rats with CSF pathophysiology. Because protein expression relates to disease progression, it is essential to standardize grading for the severity of AD in subjects. We use a modified Braak & Braak staging system [ 35 ]: stages I/II (mild; disease involves hippocampus and entorhinal cortex); stages III/IV (moderate; AD spreading to the rest of the limbic lobe, e.g., amygdala); and stages V/VI (severe; further AD spreading to the prefrontal neocortex). Information about functional proteins in human CP is scarce. However protein expression was recently analyzed in autopsied lateral ventricle CP from normal adult brains and in those with confirmed AD [ 36 , 37 ]. The ages investigated were generally between 65 and 90 yr for all subjects. Control individuals typically died from cardiac disease or tumors. AD specimens covered all Braak stages. Causes of the AD deaths were usually cardiac complications or pneumonia [ 37 ]. Fortunately the postmortem intervals (mainly between 2 and 18 hr) did not affect antibody staining [ 37 ]. For specimen quality it is desirable to procure choroidal tissues quickly after death. Human CP banks are needed to systematically catalog tissues from various stages of AD. Regulation of CP 'homeostatic proteins' in health and disease Ageing and AD dementia tax the CP and other CNS transport interfaces. In late life the deteriorating brain presents many potentially-destructive metabolites to the CSF for multi-site excretion into blood. The greater burden of macromolecule disposal in AD occurs when CP and arachnoid membrane, due to ageing debilities, are less able to transfer solutes. 
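The modified Braak & Braak grading described above maps six stages onto three severity groups. A minimal Python sketch of that lookup follows; the function name is illustrative and the groupings follow the text, not any published implementation.

```python
# Hedged sketch: map a Braak stage (Roman numeral I-VI) to the severity
# grouping used in the text.
def braak_severity(stage):
    """Return the severity label for a Braak stage string."""
    groups = {
        ("I", "II"): "mild",        # hippocampus and entorhinal cortex
        ("III", "IV"): "moderate",  # spread to the rest of the limbic lobe
        ("V", "VI"): "severe",      # spread to the prefrontal neocortex
    }
    for stages, label in groups.items():
        if stage in stages:
            return label
    raise ValueError(f"unknown Braak stage: {stage}")
```

For example, a stage V/VI specimen would be graded "severe", matching the end-stage AD cases discussed below.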
Nevertheless the CP seemingly attempts to maintain its 'epithelial soundness' when challenged to perform additional cleansing acts for the brain extracellular fluid (CSF). In healthy, young adults the CP epithelium sensitively acclimates to chemical and physical distortions in blood, CSF and parenchyma. Following cortical stabbing in rats, the CP upregulates TGFβ presumably to provide this CSF-borne growth factor for repairing the injury [ 38 ]. A similar phenomenon occurs with IGF-II [ 39 ], which is manufactured in the CNS mainly by CP. In diabetes there is enhanced expression of the NKCC1 cotransporter in CP for adjusting CSF dynamics and water distribution [ 40 ]. Such compensatory responses at the blood-CSF barrier in early adulthood raise the question about CP's ability in later life, when besieged by the deficits of aging and AD, to adequately respond by expressing certain 'homeostatic proteins'. Accordingly, to assess pathophysiologic consequences of AD we investigated human CP's ability to upregulate certain functional proteins (as distinguished from structural ones) in advanced states of AD dementia. With advancing knowledge about intricate cell physiology (coordinated interactions among organelles, cytoplasm and membrane-bound proteins) it is relevant to evaluate components of cellular homeostasis. The term 'homeostatic proteins' refers to chaperones, receptors and transporters that stabilize the internal environment of the cell. CSF stability depends in large part upon CP epithelial homeostasis. Our group has analyzed several 'homeostatic proteins' involved in CP intracellular milieu stabilization: Heat shock proteins A wide spectrum of protection against neurodegeneration is provided by HSPs. 
The list of protective effects bestowed by HSP molecules is diverse and includes: accelerated degradation of misfolded proteins, maintenance of membrane lipid integrity, prevention of deleterious protein aggregation, and preclusion of damage to the translational apparatus [ 41 , 42 ]. HSPs are also known as 'stress proteins' due to their role in shepherding adaptive responses to stressors such as ischemia, trauma, fever, dehydration, hydrocephalus and other brain disorders. To evaluate human CP expression, we selected HSPs overexpressed in AD brains: GRP94 and HSP90. In contrast to the previously found upregulation in brain, GRP94 was downregulated in lateral ventricle CP. In aged controls there was abundant staining in the epithelial cytoplasm and stroma (Fig. 2 , top left). However in AD there was a striking decrease in immunostaining of GRP94 choroidal tissues (Fig. 2 , top right). GRP94 is an atypical HSP in responding specifically to glucose deprivation rather than to generalized intracellular oxidative stress. In AD the opposite responses in GRP94 expression by CP vs. brain are interesting but not unexpected because secretory epithelium has biochemical characteristics fundamentally different from neurons. GRP94 chaperones protein folding, especially in endoplasmic reticulum [ 43 ]. Underexpressed GRP94 in CP of AD subjects may render the reticulum vulnerable to unfolded proteins.

Figure 2 Heat shock (stress) protein expression in human CP: Formalin-fixed, paraffin-embedded specimens (8–10 micrometers thick) were de-paraffinized and rehydrated. Sections were incubated overnight with antibodies against HSP90 (1/500; SPA830) and GRP94 (1/500; SPA850) and stained by the ABC technique (Vectastain Elite ABC peroxidase). Deposition of the brown chromogen (diaminobenzidine) reaction product was either substantially reduced by preabsorption blocking or virtually eliminated by omission of either primary or secondary antibodies.
Slides were assessed blindly for staining intensity and distribution [37]. See text for description of localization and interpretation. For each HSP, images are representative of 10 AD specimens and 5 age-matched controls.

In the case of HSP90, the reverse pattern of GRP94 was observed. There was faint staining of non-AD tissues (Fig. 2 , bottom left) but strong expression in epithelial cytoplasm in AD CP (Fig. 2 , bottom right). Multiple effects can be induced by a particular HSP. By complexing with several intracellular protein kinases, HSP90 could alter CSF secretion; and by inducing the heme-regulated eIF-2 alpha kinase, the overexpressed HSP90 may downregulate gene transcription [ 44 ]. Another possible effect of HSP90 is to beneficially accelerate clearance (reabsorption) of Aβ peptide by CP. Such facilitation occurs in the microglial handling of Aβ [ 45 ]. It would be worthwhile to pursue the role of CP HSPs in removing Aβ from CSF.

FGF peptides and receptors

The FGF superfamily of peptides and its multiple receptors in CP, ependyma, and brain, modulate many actions on neurons and non-neural cells [ 30 ]. FGF2, or basic FGF, is prototypic of the family. CP synthesizes and releases FGF2 into CSF. Choroidally-secreted FGF2 stimulates receptors (FGFr) nearby in the CP apical membrane [ 9 ] and at more distant sites in the brain parenchyma [ 46 ]. FGF/FGFr is apparently unique among growth factors in directly effecting balance in the brain fluids, including the formation of CSF. This is relevant to AD in which brain FGF is increased [ 47 ] and CSF turnover declines [ 1 , 4 , 6 ]. FGF and FGFr are also fundamentally important in fostering neuron generation from stem cells in the subventricular zone (SVZ). This requires coordination between the CP-CSF and periventricular regions [ 46 , 48 ]. Pharmacological manipulation of SVZ stem cells is potentially important at all stages of life.
For pathological and therapeutic reasons, therefore, it is important to delineate FGFr expression patterns and their significance in i) ontogeny, ii) normal adult maintenance, and iii) neurodegeneration.

i) Ontogeny

Receptor plasticity in aging and AD is better seen in light of information on FGF/FGFr expression dynamics in early life. In the fetus the formation of CP, neuronal stem cells and brain is promoted by CP growth factor secretion and CSF distribution [ 46 , 49 ]. Intense activity of FGF/FGFr figures prominently in CNS viability and expansion. Expression of FGFr-2 and -3 in murine CP is maintained prenatally, whereas FGFr-1 and -4 are present during the 2nd but not 3rd gestational week [ 49 ]. FGF peptides released from CP use autocrine and paracrine mechanisms to stimulate various forms of FGFr expressed by CP epithelium. Specific functions of the four different receptor isoforms need elucidation. Despite limited data for CP FGFr expression patterns in aging, the genetic regulation of FGFr during embryonic life [ 49 ] suggests the potential to pharmacologically enhance FGFr expression in AD. The goal in filling these knowledge gaps about FGFr is to attain more efficacious treatment of injuries to the brain interior. FGF2 derived from CP is also conveyed by CSF bulk flow to the fetal germinal matrix where it acts on stem cell FGFr to promote neuronal maturation [ 46 ]. By this endocrine-like mechanism, the CSF-mediated distribution of FGF2 and other peptides plays a prominent role in 'spawning' new neurons in the periventricular regions. Distorted CSF volume and flow in hydrocephalus interferes with the CSF provision of FGF2 to FGFr on stem cells in the SVZ [ 46 ]. Brain malformation ensues. Clearly the orderly function of the CP-CSF system, e.g., the programmed secretion and distribution of growth factors, is essential to normal CNS development.
ii) Adult maintenance and response to stressors

The FGF/FGFr system also has a key role in adult CNS fluid homeostasis. CP helps the brain adapt to alterations in blood composition and flow. In otherwise healthy young adults, the imposition of dehydration or sudden ischemia upon the CNS elicits striking adaptive changes in CP epithelium. To endure insults by chemical or physical stressors [ 29 ], it is critical that CP viability be maintained so that the brain can continue to benefit from 'homeostatic adjustments' in transport phenomena at the blood-CSF barrier [ 50 ]. Growth factors are an integral part of these adaptive responses to stress. Dehydration and ischemia are common to aging and AD. Elucidation of CP-CSF growth factor responses to these disorders in normal adults should enhance our perspective on homeostatic capabilities in AD. Dehydration seriously threatens CNS functions. Adjustments to dehydration in the healthy adult brain feature ion and water redistribution among fluid compartments. These compensatory responses to plasma hyperosmolality stabilize neuronal and interstitial volumes. Brain 'barriers' or transport interfaces are sites for the fluid homeostatic mechanisms. A working model for the restoration of fluid balance is offered: Dehydration or hyperosmolality upregulates the FGF2 and AVP peptides in CP [ 51 , 52 ]. FGF2 released by CP binds in an autocrine manner to FGFr. Such FGFr stimulation likely promotes AVP release from CP epithelium [ 9 ]. The extruded AVP then binds V1 receptors in CP to regulate ion transport [ 53 ] and fluid production [ 10 , 54 ]. FGF2 works in concert with AVP to control fluid movement across the blood-CSF barrier [ 51 , 55 ]. Interestingly, in AD there are upregulated receptors for FGF (Fig. 3 ) and AVP [ 56 ] in human CP, presumably in response to fluid imbalance. Cumulative evidence points to co-localized growth factors and neuropeptides jointly stabilizing brain fluids after perturbed osmolality and volume.
Figure 3 FGF receptor expression in human CP: Lateral ventricle plexuses obtained from pathologically-confirmed AD subjects were immersion-fixed in paraformaldehyde, cryoprotected and stored at -70°C. CP segments were free-floated for 72–96 hr in a 1/500 polyclonal antibody against the FGFr, which recognizes all FGF receptor subtypes. Specific staining was established by antibody omission/preabsorption [36]. ABC peroxidase/diaminobenzidine technique was used. Arrowhead points to small punctate dots of immunoreactivity. Localization of FGFr is described in text. Images are typical of those obtained from 8 controls and 8 AD specimens, respectively.

Ischemia is another disorder with neuropathological consequences that are mitigated by growth factor upregulation or administration [ 57 ]. Bilateral carotid artery occlusion in young adult rats for 6–10 min wreaks damage to tissues surrounding the lateral ventricles [ 58 , 59 ]. Hence severe transient forebrain ischemia (TFI) injures the lateral plexus as well as the hippocampus [ 60 ]. However peptides such as FGF2 and TGFβ defend the forebrain interior against ischemic and hypoxic insults [ 57 , 58 , 61 ]. Although TFI with hypotension (40 mmHg) destroys many choroidal epithelial cells [ 60 ], growth factors play a role in efficiently repairing the breached blood-CSF barrier [ 60 ]. Restitution of the epithelial lining of the choroidal villi within several hours post-stroke [ 60 ] implies the importance of a functional CP for CNS viability. Upregulated secretion of FGF2, TGFβ [ 58 ] and other growth factors by CP [ 27 ] undoubtedly protects the blood-CSF barrier and periventricular brain against compromised blood flow. A time-course analysis of FGF2-FGFr expression in the aging vs. diseased CP will reveal how the blood-CSF interface responds to reduced blood flow in NPH and AD [ 62 , 63 ].
iii) Neurodegeneration

FGF2 titers and FGFr receptor densities in degenerating CNS compartments provide insight into adaptive responses. FGF2 concentration is augmented in the AD brain [ 47 ]. Moreover with immunostaining and ELISA it was demonstrated that FGF2 levels in CP are sustained in AD [ 36 ]. It is thus probable that FGF2 and other factors secreted into CSF of aged adults are essential to forestalling harm to neurons in ischemia. A CSF feedback control system for the choroidal production of FGF2 has been suggested by FGFr identification in young adult rat CP [ 9 ]. An elevated level of FGF2 in AD brains [ 47 ] is interpreted as peptide sequestration from CSF [ 36 ], thereby lowering CSF concentration. Diminished CSF FGF2 could cause a compensatory increase in CP FGFr expression. Testing this postulate, we found enhanced staining for FGFr in AD CP epithelium (Fig. 3 ). This observation supports a role of the CP-CSF system in responding to increased demands by brain for FGF2. To build this model, information is needed for FGF2 and FGFr isoforms in various regions of the CNS and CSF at specific stages of dementia. FGF2 has an interesting relationship with amyloid. A worthwhile goal is to probe mechanisms of FGF2 interaction with amyloid in neuronal networks, extracellular matrix and CP-CSF. FGF2 co-localizes with several chemical forms of amyloid. Neuronal coexistence of FGF2 and amyloid precursor protein (APP) [ 64 , 65 ] intimates a functional relationship between FGF2 and APP, perhaps in post-injury regeneration. FGF2 also minimizes metabolic injury caused by Aβ peptides. It was initially observed that FGF2 applied to cultured neurons reduced neurotoxicity of aggregated Aβ [ 66 ]. More recent findings confirm FGF2's benefit in abolishing neurotoxicity produced by Aβ1-43 [ 67 ] and in attenuating oxidative stress induced by Aβ peptides in hippocampal neurons [ 68 ].
In the extracellular matrix FGF2 competes with Aβ and APP for binding sites on heparan sulfate proteoglycans [ 69 , 70 ]. This competitive binding by FGF2 may suppress interstitial amyloid plaque formation. Because the interstitium receives FGF2 from CSF, we predict that pharmacological boosting of CP secretion of FGF2 would relieve AD. FGF2-mediated protective regulation of CP transport phenomena potentially affects the course of neurodegeneration. Considerable evidence points to CP's ability to remove Aβ from CSF [ 71 - 73 ]. This implicates reabsorptive transport at the blood-CSF barrier to reduce CSF Aβ burden in advanced AD. In clearing Aβ from the CNS, the CP epithelium is exposed to substantial amounts of Aβ with the potential to curtail energized ion transport and fluid formation. CP Na-K-ATPase, a key enzyme in CSF production [ 74 ], also enables the transport of organic compounds [ 13 ]. Significantly, FGF2 lessens the toxicity of Aβ on Na-K-ATPase activity and mitochondrial function in cultured hippocampal neurons [ 68 ]. Toxic Aβ loads on CP transporters might impair secretion. The resultant decrease in CSF volume transmission and turnover would further destabilize the CNS. However treatment of AD with FGF analogs and IGF-1 [ 75 ] holds promise for countering Aβ toxicity [ 68 ] by creating a better CSF 'metabolic environment' for CP and brain.

Fluid-regulating proteins

The diminished ability of CP to form fluid in advanced ageing and AD raises the question of how epithelial ion transport proteins are altered by distorted neurochemistry in senescence. In very old laboratory mammals the Na-K-ATPase activity of CP and the CSF generated by it are cut in half [ 26 , 76 ]. Another ion-translocating protein coupled to CSF formation is the apical NaK2Cl cotransporter isoform 1 (NKCC1). NKCC1 transports Na, K and Cl into and out of the choroidal epithelium [ 77 ], depending upon ion gradients and hormonal modulation.
Versatile bidirectional transport via NKCC1 confers flexibility for regulating ion movements and concentrations in the CP-CSF. Fluid secretion at the blood-CSF interface is linked to ion fluxes mediated by the loop-diuretic sensitive NKCC1 [ 78 , 79 ]. NKCC1 information is plentiful for laboratory animal CP-CSF [ 78 - 84 ] but scarce for the human counterpart. The cation-Cl superfamily of cotransporters includes NKCC1 and consists of 7 isoforms that actively transport Na and/or K electroneutrally with Cl [ 85 ]. NKCC1 in CP has several functions, i.e., to regulate epithelial [Cl], stabilize CSF [K] and control fluid secretion [ 77 ]. The T4 antibody differentially stains the NKCC1 secretory isoform in the apical membrane but not the K-Cl cotransporter at the basolateral surface of CP. In brain fluid homeostasis the expression of NKCC1 in CP is likely sensitive to perturbations in CSF osmolality, choroidal epithelial cell volume/ion concentrations, intracranial pressure and ventricular volume. To shed light on compensatory responses by the CP to disease, our group analyzed NKCC1 expression in congenital, high-pressure hydrocephalus; and in adult chronic, closer-to-normal pressure hydrocephalus in AD/NPH syndromes. The NKCC1 helps cells and organs adjust to disrupted fluid balance. One thus expects homeostatic upregulation of this CP cotransporter in AD with its altered CSF dynamics. For delineating NKCC1 expression we used the T4 antibody to immunostain CP at various stages of AD dementia. Robust staining of the lateral ventricle plexus (even at Braak stage V/VI) occurred in the apical membrane and cytoplasm (Fig. 4 ). In an earlier study of CP specimens from various mammalian species (unpublished data), we observed uniform and consistent staining of the apical membrane NKCC1. T4 staining of the cytoplasm was more variable. In the human CPs analyzed, however, there was consistent cytoplasmic staining in AD and age-matched controls (Fig. 4 ).
This may represent cytoplasmic NKCC1 protein available for insertion into the apical membrane.

Figure 4 NaK2Cl cotransporter expression in human CP: Lateral ventricle plexuses were incubated with T4 (not thyroxine) antibody, which stains the secretory isoform 1 of the NaK2Cl (NKCC1) cotransporter protein. The T4 antibody (mouse monoclonal; 1:100) was from the University of Iowa Developmental Studies Hybridoma Bank (Iowa City, IA); the biotinylated secondary was a rat-adsorbed horse antibody. Diaminobenzidine was used to develop the brown reaction product. Controls (negative staining results; not shown) involved omission of secondary and/or primary antibody. AD tissues were from patients at Braak stage V/VI (top right) and III/IV (bottom right). Images are representative of 6 CPs analyzed for AD (mean age of 76 yr) and 6 for age-matched controls (mean age of 76 yr). On average, the staining intensity of AD specimens was 50% greater than controls. The text describes staining localization. All photographs are at the same magnification.

AD choroidal tissues had greater T4 staining than controls (Fig. 4 ). The apical NKCC1 (Fig. 5 ) is strategically positioned to sense physical changes in CSF resulting from AD deterioration. Changes in pressure [ 86 ] or volume represent potential stimuli for inducing NKCC1 in CP. Ventriculomegaly and transient elevations in ICP in AD and NPH may elicit a compensatory response in CP to downregulate CSF formation by promoting ion reabsorption via the NKCC1 (Fig. 5 ). This scheme fits the enhanced expression of NKCC1 in CP of HTx congenital hydrocephalus rats with ventriculomegaly [ 87 ]. ANP, AVP, angiotensin II, serotonin and catecholamines all reduce CSF formation rate [ 99 ]. Table 3 recapitulates the stimulating effect of these same agents on NKCC1 transport activity in various tissues [ 88 - 94 ].
This prompts the hypothesis that CSF formation is decreased secondarily to enhanced reabsorptive uptake of Na, K and Cl by CP from CSF (Fig. 5 ). Information about protein levels and mRNA for NKCC1 should relate CP cotransporter expression with CSF dynamics, ICP and ventricular volume.

Figure 5 A working model for hydrocephalus-induced alterations in NKCC1 cotransporter expression in CP: Ion transport across basolateral and apical surfaces is driven by transmembrane ion gradients [74, 83]. Normally the net transport of Na, K, Cl, and HCO3 from choroid cell into CSF is integral to CSF production [5, 7]. However, we postulate that when CSF formation is inhibited by various neurohumoral agents there is stimulated (+) inward NaK2Cl flux from CSF into the cell; this would increase cytoplasmic Na and Cl concentration [77] thus creating a less favorable ion gradient for basolateral uptake of Na and Cl from plasma. Consequently, there is inhibited (-) basolateral ion uptake, and sequentially, reduced apical extrusion of Na into CSF [40]. Net effect = decreased CSF formation. Consistent with this idea are observations of enhanced expression of CP NKCC1 in congenital hydrocephalus [87] and AD (Fig. 4), both of which are generally associated with lower rates of CSF formation. Agents in Table 3 (e.g., Ang II) simultaneously stimulate the inward and outward arms of NKCC1, but the former three times the latter, resulting in net inward flux of ions [discussed in ref. 93]. The model thus vectorially emphasizes the inward arm of the NKCC1 (large arrowheads) as the one primarily stimulated by agents that suppress CSF formation. We hypothesize that in hydrocephalus, with increased intracranial pressure and/or ventriculomegaly, there is an associated attenuation of fluid output by CP.
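The directional asymmetry in this working model (the inward arm of NKCC1 stimulated roughly three times as strongly as the outward arm) can be illustrated with a toy calculation. The baseline flux values below are hypothetical placeholders, not measured data; the sketch only shows why asymmetric stimulation of a bidirectional transporter yields net inward ion movement.

```python
# Toy arithmetic for the Fig. 5 working model: an agent stimulates both arms of
# NKCC1, but the inward (CSF-to-cell) arm three times as strongly as the outward
# arm, so any stimulation > 0 yields net inward flux. Baseline values are
# hypothetical placeholders, not measurements.
def net_nkcc1_flux(baseline_inward, baseline_outward, stimulation=1.0):
    """Return net inward flux (positive = net ion movement from CSF into cell)
    when stimulation scales the inward arm 3x as much as the outward arm."""
    inward = baseline_inward * (1 + 3 * stimulation)
    outward = baseline_outward * (1 + stimulation)
    return inward - outward
```

With equal baseline arms and no agent (stimulation = 0) the net flux is zero; adding any stimulation tips the balance inward, consistent with the model's prediction of reduced apical extrusion and decreased CSF formation.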
Table 3 Stimulation of NaK2Cl cotransport by hormones, neurotransmitters and peptides that inhibit choroid plexus-CSF formation

Active agent | Model | Species | Cotransport activity (a) | Reference
Atrial natriuretic peptide | Neuroblastoma | Human | 82% | 88
Angiotensin II | Aortic endothelium | Cow | 38% | 94
Arginine vasopressin | Medullary TAL (b) | Mouse | 66% | 89
Serotonin agonist (c) | Fibroblasts | Cell line | 49% | 90
Adrenergic agonist (e) | Parotid gland epithelium | Rat | R5 staining (d) | 91
Adrenergic agonist (e) | Skeletal muscle (plantaris) | Rat | 1700% | 92
Basic fibroblast growth factor | Aortic endothelium | Cow | 40% | 93

(a) Bumetanide-sensitive 86Rb uptake by cells
(b) Thick ascending limb of kidney tubule
(c) (-)-2,5-dimethoxy-4-bromoamphetamine
(d) R5 antibody-detected phosphorylation of NKCC1 proportional to functional activity
(e) Isoproterenol

Other factors must be considered in interpreting the enhanced NKCC1 expression (Fig. 4 ). An alternative but not mutually exclusive explanation is that upregulated NKCC1 in AD helps to counter cell shrinkage [ 95 ] in CP (Table 2 ). Moreover, a rise in CSF [K] resulting from neuronal damage would be buffered by the NKCC1 [ 77 ] and Na pump [ 96 ] in CP. Therefore it is possible that NKCC1 in CP is concurrently carrying out several physiologic responses to stresses imposed by aging and disease. The ventriculomegaly and reduced CSF formation rate observed in AD and NPH [ 4 ] are consistent with the upregulated NKCC1 in human CP. To corroborate the model, however, more data are needed for humans and animals to tightly link alterations in CP transport and fluid turnover with AD progression. Because HSP90 binds to NKCC1 and modulates its function [ 97 ], it is also of interest to explore how upregulated HSP90 in AD (Fig. 2 ) mechanistically relates to enhanced expression of NKCC1 (Fig. 4 ).

The choroid plexus as a 'bioreactor' to brain diseases

CP is highly equipped, homeostatically speaking, to help the brain adapt to the metabolic distortions of dementia.
Injured neurons undergo compositional changes. These cellular perturbations are transmitted to the extracellular space and distort the interstitial fluid composition. Many catabolites in the interstitium eventually gain access to the ventricles where they contact the CPs. By accepting a host of CSF-borne molecules, for either transport or receptor stimulation, the CP mediates a wide scope of renal- and hepatic-like activities [ 13 , 98 ]. This epithelial interface also integrates many neurohumoral activities of the endocrine and immune systems [ 98 , 99 ]. Diseases afflicting the CNS generate injury metabolites and cytokines that are conveyed to the ependyma, pia-glia, arachnoid membrane and CP epithelium. These interfaces handle catabolites and peptide fragments [ 100 ] by reabsorbing them into the systemic circulation for clearance or by sequestering toxic substances in lysosomes for metabolic conversion. Harmful substances are thereby effectively removed from the brain-CSF system. Signaling molecules are also carried by CSF from diseased regions to the CP. There they bind to specific receptors in the apical membrane and consequently elicit a variety of bioreactive responses. Binding sites for a wide array of peptides, proteins and other organic substances abound in the mammalian CP [ 73 ]. Choroidal epithelial cells can thus be regarded as 'bioreactors' that respond to chemical changes in extracellular fluid. Their response includes the synthesis of peptides, growth factors and sundry molecules for homeostatically repairing injured neurons. Currently, little is known about the spectrum of responses by CP to the disrupted CNS homeostasis in AD. It would be informative to compare AD stages for expression abilities of CP vs. the ependyma and meninges. Differences as well as similarities are anticipated. The initial studies of gene expression in human CP reported herein reveal that in Braak stages V/VI there is still strong expression of HSP90 and NKCC1 (Figs. 
2 & 4 ). Therefore even though the CP in advanced AD shows extensive histopathology (Table 2 ) and has reduced enzymatic activities and fluid formation [ 4 , 6 , 26 ], it evidently retains the ability to react to biochemical perturbations by expressing housekeeping proteins. This stabilizing effect on the blood-CSF barrier epithelium enables regulatory phenomena that ultimately support AD-stressed neurons. Insight can be gained by investigating CSF neurochemical composition as a function of AD severity. Additionally it would be instructive to analyze how cultured CP epithelium reacts to 'pathological' CSF from patients at progressive Braak stages. The Z310 cultured cell line and primary cultures of CP are useful for such analyses [ 101 , 102 ]. Given the significance of CP in facilitating repair of CNS structures, it should also be fruitful to analyze in vivo gene expressions that reflect pathophysiological interactions between CP-CSF and brain.

How can CP and the regions it nourishes be protected from AD?

Therapeutic strategies to halt CNS deterioration should include ways to defend the CP epithelium against the oxidative ravages of ageing and AD. Prolonging CP viability and work efficiency may be important in maintaining the well-being of geriatric patients. Even without a cure for AD, if deleterious changes in the brain interstitium were minimized in the elderly by stabilizing the CP-CSF (as well as the BBB), then it might be feasible to prevent the early manifestations of AD (Braak stages I/II) from intensifying into the debilitating pathology of V/VI. One class of agents with considerable potential for AD therapeutics is the growth factor group. A useful paradigm for CSF growth factors and neuroprotection has evolved from experiments on transient forebrain ischemia in rats [ 59 , 60 ].
The TFI experiments characterized the destructive effects of acute ischemia on the lateral ventricle CP and the nearby CA1 region of hippocampus [ 50 ]; and the time course of cell recovery (or death) in these adjacent regions protected (or not) by supplemental infusion of FGF2 via CSF prior to the ischemic insult [ 61 ]. Exogenous FGF2 administered before or after the induced ischemia lessened cell death in CA1, probably in part by stabilizing CP functions [ 28 , 50 , 61 ]. Moreover the CP substantially repaired its epithelial cell barrier by 24 hr post-TFI, even without supplemental FGF2 [ 60 ]. Collectively these findings manifest the impressive plasticity of CP to reconstitute itself after disruption; and reveal the potential therapeutic value of pharmacologically-administered growth factors to reduce harm to CP and hippocampus. Several other growth factors synthesized by CP and secreted into CSF should be explored in translational research dealing with the interplay of ischemia, hydrocephalus and AD [ 1 , 4 , 6 , 28 , 36 , 50 , 55 , 103 , 104 ]. Following TFI the CP upregulates TGFβ, another peptide that helps the CNS adjust to injury [ 58 ]. Consequently choroidally-manufactured growth factors in disease states can benefit the plexus locally (by autocrine and paracrine mechanisms) and the brain more globally (by endocrine-like bulk flow of CSF). Our ischemia findings for the CP-CSF-hippocampus compartments [ 27 , 58 , 60 ] relate to AD in that reduced blood flow to the ageing CNS exacerbates AD progression. Growth factor supplements (e.g., VEGF, NGF, IGF-II & HGF) could augment viability in AD-vulnerable regions by enhancing vascularization, preventing programmed cell death or promoting stem cell conversion in the subventricular zone (SVZ). Newly-formed neurons in the SVZ might then migrate to atrophic regions to replace destroyed cells.
One pharmacologic approach to stall AD onset is to administer a combination of growth factors designed to increase neuroprotection while minimizing fibrosis [ 55 ]. We theorize that an optimal regimen of growth factors and neuropeptides would restore CP function or prevent further loss of homeostatic capabilities.

A look towards pharmacologic manipulation of CP in AD dementia

In searching for agents to modify CP epithelial protein expression, the route of delivery of the active drug is of primary importance. Unlike the brain with its impermeable microvessels, the CP readily takes up water-soluble drugs from the plasma due to the highly-permeable choroidal capillaries. Consequently water-soluble agents freely diffuse to receptors or binding sites at the basolateral surface of the epithelium. Access of blood-borne hydrophilic compounds to the CSF-side of CP, however, is problematic because the tight junctions and basolateral membrane impede diffusing molecules as small as mannitol [ 105 ]. Molecular sieving at the basolateral membrane restricts the permeation of hydrophilic molecules as small as urea (m.w. = 60) into the CP-CSF compartments [ 106 ]. To circumvent the blood-CSF barrier, therapeutic agents are delivered into CSF by lateral ventricle catheters in experimental animals [ 55 ] or hydrocephalic patients. Gene therapy offers the additional challenge of finding a viral vector that selectively targets the CP epithelium for transduction [ 107 ]. Timely, innovative strategies are in order to find specific ways to target and improve CP function in neurodegenerative states. A compelling aspect of CSF translational research is to identify agents that effectively regulate fluid formation by CP. Whereas the difficulty in congenital hydrocephalus is to downregulate CSF formation, the challenge in AD is to enhance CSF turnover, perhaps by accelerating fluid production as well as outflow. Augmented flow of CSF enhances 'sink action' [ 108 , 109 ].
This would expedite clearance of toxic molecules like Aβ out of the brain interstitial fluid into the ventricles. New agents that stimulate CP to secrete CSF more rapidly should be tested in aged animals and those with AD-phenotypes. A consistent finding of decreased CNS burdens of Aβ in animals with increased CSF turnover could spur the development of drugs for brain 'cleansing' in AD. Another CP pathology meriting pharmacological attention is the massive fibrosis that progressively envelops the interstitium in old age. This interstitial fibrosis is even more extensive in AD [ 21 ]. Fibrosis undoubtedly impedes efficient movement of molecules between blood and CSF. It is pertinent to ascertain if therapeutic minimization of CP fibrosis in later life would permit a brisk turnover of CSF to be sustained. Attenuating the formation of Biondi bodies and other choroidal cellular inclusions (Table 2 ) would be a unique approach to prevent age- and disease-related curtailment of CSF production. Optimal vascular-interstitial-epithelial interactions in the CP are foundational for vigorous CSF dynamics. The longer the blood-CSF interface retains its epithelial secretory capabilities, the more successfully it can conduct homeostatic activities to ward off AD.

Recapitulation and projections

The CP has the main responsibility for CSF homeostasis. Therefore the functional status of the blood-CSF barrier is of great consequence to the CNS. Maintaining the CSF at a stable, specialized composition is of the utmost importance to neurons. CSF is prominent in regulating brain interstitial fluid with which it exchanges nutrients and waste products. Diseases markedly affect these molecular exchanges. Maintaining healthy bidirectional transport across the CP epithelium (CSF-blood) and ependyma (CSF-brain) is thus integral to a sound brain fluid environment.
CSF macrocirculation through the ventriculo-subarachnoid system together with CSF microcirculation in the perivascular Virchow-Robin spaces [ 110 , 111 ] perform distributive as well as collective functions. By gathering waste products, the CSF is a quasi-lymphatic system with critical functions in excreting harmful peptides and proteins. In early life the upregulated secretions of CP play a central role in brain ontogeny by furnishing growth factors to the germinal matrix. At the end of life with disease onset or aging consequences, the CP transporters are upregulated again to rescue failing neurons by providing neurotrophic materials to CSF. Equally important, peptides in excess such as amyloid beta (Aβ) fragments in AD must be eliminated from CNS by perivascular pathways [ 112 ]. To facilitate clearance of Aβ, the continual production of CSF by CP sustains 'sink action' on the brain interstitium [ 109 ]. As the main generator of CSF, the CP has a pivotal role in helping the brain cope with the twin stressors of ageing and disease. More attention should be focused on the blood-CSF interface for pharmacologic opportunities to stave off CP dysfunction. Our findings on human CP expression of HSP90, FGFr and NKCC1 demonstrate that this epithelium in AD reacts to metabolic insults by upregulating certain proteins. This suggests that even diseased CP could respond to therapeutic agents, thus opening new vistas for treating CSF dysfunction in age-related dementias.

Conclusions

Investigation of CP in AD is an emerging area. Translational research can now intensely focus on molecular factors that disable the CP to the point of reducing its ability to preserve brain integrity. Systematic CSF analyses using mass spectrometry and other cutting-edge biotechnology should generate neurochemical data specific for disease stages. New imaging approaches are essential to provide much needed functional data for CP, CSF and periventricular regions in AD patients.
This should expedite the modeling of CP-CSF malfunctions and their resolution. Deeper insight into the pathophysiology of the blood-CSF transport interface will help to realize the development of novel therapeutic regimens for the AD family of diseases. Abbreviations AD, Alzheimer's disease; APP, amyloid precursor protein; Aβ, beta amyloid; AVP, arginine vasopressin; BBB, blood-brain barrier; CP, choroid plexus; FGF2, basic fibroblast growth factor 2; FGFr, receptor for fibroblast growth factor; GRP94, glucose regulatory protein 94; HSP90, heat shock protein 90; NGF, nerve growth factor; NKCC1, Na-K-2Cl cotransporter secretory isoform 1; NPH, normal pressure hydrocephalus; TGFβ, transforming growth factor beta; SVZ, subventricular zone; TFI, transient forebrain ischemia; VEGF, vascular endothelial growth factor; IGF-II, insulin-like growth factor II; HGF, hepatocyte growth factor. Declaration of Competing Interests The author(s) declare that they have no competing interests. Authors' contributions CJ had the primary responsibility of organizing and writing the review, and had NIH support (NS 27601) to do the NaK2Cl cotransporter experiments. PM carried out the NKCC1 immunostaining runs with the T4 antibody and provided interpretation of the regional stainings. RT conducted the experimental analyses of the heat shock proteins and generated figures. AS did the image processing analyses of the cotransporter expression and assisted with the literature analysis. JD contributed ideas for the altered CP-CSF dynamics in hydrocephalus and ventriculomegaly. GS has developed the model that CP-CSF malfunction exacerbates AD progression, and helped to revise the manuscript. ES was responsible for the FGFr experiments and interpreted the human CP data. All authors read and approved the final manuscript.
552303 | Using purine skews to predict genes in AT-rich poxviruses | Background Clusters or runs of purines on the mRNA synonymous strand have been found in many different organisms including orthopoxviruses. The purine bias that is exhibited by these clusters can be observed using a purine skew and in the case of poxviruses, these skews can be used to help determine the coding strand of a particular segment of the genome. Combined with previous findings that minor ORFs have lower than average aspartate and glutamate composition and higher than average serine composition, purine content can be used to predict the likelihood of a poxvirus ORF being a "real gene". Results Using purine skews and a "quality" measure designed to incorporate previous findings about minor ORFs, we have found that in our training case (vaccinia virus strain Copenhagen), 59 of 65 minor (small and unlikely to be real genes) ORFs were correctly classified as being minor. Of the 201 major (large and likely to be real genes) vaccinia ORFs, 192 were correctly classified as being major. Performing a similar analysis with the entomopoxvirus Amsacta moorei (AMEV), it was found that 4 major ORFs were incorrectly classified as minor and 9 minor ORFs were incorrectly classified as major. The purine abundance observed for major ORFs in vaccinia virus was found to stem primarily from the first codon position with both the second and third codon positions containing roughly equal amounts of purines and pyrimidines. Conclusion Purine skews and a "quality" measure can be used to predict functional ORFs and purine skews in particular can be used to determine which of two overlapping ORFs is most likely to be the real gene if neither of the two ORFs has orthologs in other poxviruses.
| Background In 1966, Szybalski first discovered that the mRNA synonymous strand of DNA contained a predominance of purine-rich clusters [ 1 ]; by convention, the top strand of a linear dsDNA molecule is viewed 5'→3', therefore when transcription of a gene is to the right, the top strand is considered the mRNA synonymous strand and if transcription is to the left, the top strand is the template strand. Chargaff's second parity rule states that for single-stranded DNA %A ≅ %T and %C ≅ %G [ 2 , 3 ] and implies that for regions with clusters of purines there must be local deviations from Chargaff's second parity rule favoring purines [ 4 ]. These local deviations from Chargaff's second parity rule also known as Chargaff differences have been seen in a variety of organisms including vaccinia virus; Bell et al. determined that Chargaff differences do correlate with direction of transcription and that the number of A nucleotides is greater than the number of T nucleotides in 83 of 92 vaccinia genes [ 4 ]. Many programs have been designed to predict genes, but few actually rate the "quality" or significance of the prediction and leave researchers to evaluate this themselves. In poxviruses, predicting which ORFs are likely to be expressed (genes) without the use of biochemical analysis usually involves simply choosing a minimum ORF length cut-off and excluding all ORFs that are smaller than the cut-off. Analysis may be extended to include manual inspection of each predicted ORF for the presence of promoter consensus sequences. 
Excluding ORFs that are smaller in size than the cut-off, however, risks missing genes that are unusually short; during annotation of vaccinia virus strain Copenhagen (VACV-COP) at least three recently verified genes (ranging from 162 to 231 bp) were not included in the initial annotation of the complete genome; these genes, VACV-COP A2.5L [ 5 , 6 ], A14.5L [ 7 ] and G5.5R [ 8 ] have now been included in our Poxvirus Orthologous Clusters (POCs) database [ 9 ]. Poxvirus genes are transcribed from both DNA strands and so far have never been shown to overlap by more than a few nucleotides. Despite this knowledge, some poxvirus genomes have been liberally annotated so as to include all ORFs above a certain size, irrespective of whether they overlap larger well-characterized genes. Thus, the current GenBank file for VACV-COP contains 202 major (large and likely to be real genes) ORFs and 64 minor (small and unlikely to be real genes) ORFs [ 10 , 11 ]. The majority of these minor ORFs in VACV-COP overlap larger, major ORFs on the opposite DNA strand. In this paper, it is shown that for the AT-rich poxviruses, purine skews can be used to help predict the synonymous (coding) strand, particularly in regions where smaller ORFs overlap each other on opposite strands of the genome and neither has orthologs in other poxvirus genomes. Furthermore, it is shown that the majority of minor ORFs found in VACV-COP are unlikely to be functional genes and that, based on purine content, two of the three genes initially excluded from the annotation of the vaccinia virus genome due to their small size fit our definition of a major ORF. Results and discussion Figure 1 shows the genomic purine skew (Figure 1a ) and the direction of transcription (Figure 1b ) for the major ORFs (genes) in VACV-COP.
Since the major ORFs of VACV-COP are spread out evenly across the genome, and Figure 1b was created using only the major VACV-COP ORFs, the two figures (Figure 1a and 1b ) follow very similar trends. A characteristic "W" shaped plot can be seen for both graphs; in Figure 1b , this is the result of a trend for large blocks of genes to be transcribed in the same direction (see arrows in Figure 1b ). These data indicate a good correlation between the purine content of the genomic DNA and the direction of transcription; for example, for genes that are transcribed in the leftward direction, the bottom/synonymous strand is purine rich and the opposite is true for genes that are transcribed to the right. The correlation between purine content and the likelihood that an ORF is major is further supported by the fact that 180 of the 202 major ORFs of VACV-COP have a purine content greater than or equal to 50%. In this way, purine skews can be used to help annotate newly sequenced genomes by aiding in the determination of the mRNA synonymous strand. Figure 1 Correlation between purine skew and direction of transcription of VACV-COP genome, excluding the non-coding terminal inverted repeats. (a) Purine skew drawn using DNAGrapher. Regions of the top strand that exhibit a purine bias will trend in the upward direction whereas regions that exhibit a pyrimidine bias will be drawn in the downward direction. Two example regions of changes in strand bias are shaded in green and marked (i) and (ii). (b) VACV-COP major ORFs drawn according to the strand of the genome on which each ORF is located. Beginning with a value of zero for the first major ORF of the genome, a numerical value of +1 or -1 is added to the value of the previous ORF depending on whether the ORF is located on the top or bottom strand, respectively. (c) Gene orientation in two example regions demonstrating a change in strand bias.
(i) Strand bias changes from a purine bias on the bottom strand to a purine bias on the top strand that encompasses 1 gene on the top strand. (ii) Strand bias changes from a purine bias on the bottom strand to a purine bias on the top strand that encompasses 4 genes located on the top strand. When the purine skew (Figure 1a ) slopes in the downward direction, this is due to a pyrimidine bias on the top strand and a commensurate purine bias on the bottom strand, indicating that the major ORFs are located on the bottom strand. Regions where the purine skew changes direction from a downward slope to an upward slope, or vice versa, are regions of the genome where the direction of transcription changes. For example, the purine skew appears to change direction from a downward slope to an upward slope at position 32,800 bp and then changes again from an upward slope to a downward slope at position 33,500. Figure 1c (i) shows that within this region (32,800–33,500 bp), there is one gene (VACV-COP K7R) that is located on the top strand (upward slope on purine skew) and is flanked by genes that are located on the bottom strand (downward slope on purine skew). A second example can be seen in Figure 1c (ii), where an upward slope in the purine skew occurs between positions 52,400 and 57,000. In this case, the upward-sloping region encompasses four genes (VACV-COP E5R, E6R, E7R and E8R) and the two downward-sloping regions flanking each side of this region encompass genes that are located on the bottom strand. It was previously shown that minor ORFs in VACV-COP tend to have higher than average serine content as well as lower than average aspartate and glutamate content [ 12 ]. Based on these observations and our current finding that the synonymous DNA strand is usually purine rich, we created a simple mathematical equation designed to provide a "quality" measure of each ORF.
The results of the formula [Ser%-Asp%-Glu%+(50-AG%)], which essentially sums the trends in amino acid composition (3 amino acids) and purine content, are shown in Figure 2 . If peptides are translated from ORFs on the non-synonymous strand, they tend to have a higher than average Ser%, but lower than average Asp% and Glu% (due to properties of the genetic code), and have a lower than average purine content. Because the ORF's purine percentage is subtracted from the VACV-COP genome average (50%), a major ORF yields a negative value for the equation, whereas a positive value predicts that the ORF is minor. Figure 2 Results of the "quality" measure for VACV-COP. Y-axis plots results of the "quality" calculation (Ser%-Asp%-Glu%+[50%-AG%]) and X-axis depicts rank of each ORF. Plotting the results of this equation, we found that of the 266 ORFs originally predicted in VACV-COP, 6 ORFs (VACV-COP A ORF G, VACV-COP A ORF T, VACV-COP B ORF G, VACV-COP C ORF F, VACV-COP E ORF D, and VACV-COP F ORF A) were incorrectly classified as being major and 9 ORFs (VACV-COP A9L, VACV-COP A13L, VACV-COP A14L, VACV-COP A14.5L, VACV-COP A38L, VACV-COP A43R, VACV-COP C3L, VACV-COP I5L, VACV-COP I6L) were incorrectly classified as being minor. The majority of incorrectly classified major ORFs were misclassified because they are small membrane proteins with a lower aspartate and glutamate content than other major ORFs; the majority of incorrectly classified minor ORFs were misclassified because they have a lower serine and higher purine percentage than other minor ORFs, despite the fact that all but one minor ORF (VACV-COP A ORF T) overlap a major ORF on the opposite strand (Table 1 ). There were three genes that had initially been excluded from the annotation of VACV-COP due to their small size. Two of these genes (VACV-COP A2.5L and VACV-COP G5.5R) have a negative "quality" measure value, indicating that they are major.
One of these genes (VACV-COP A14.5L) was misclassified as minor, likely because it is a small membrane protein (Table 1 ).
Table 1 List of VACV-COP ORFs that were incorrectly classified.
Major ORFs incorrectly classified as minor (ORF name | ORF size (bp) | Serine content (%) | Aspartate content (%) | Glutamate content (%) | Purine content (%) | Explanation):
VACV-COP A13L | 210 | 11.43 | 1.43 | 2.86 | 48.82 | Small, membrane protein
VACV-COP A14L | 270 | 11.11 | 3.33 | 0 | 45.79 | Small, membrane protein
VACV-COP A14.5L | 159 | 7.55 | 1.89 | 1.89 | 44.45 | Small, membrane protein
VACV-COP A38L | 831 | 7.94 | 3.97 | 2.53 | 47.25 | Membrane protein
VACV-COP A43R | 582 | 10.31 | 5.67 | 1.55 | 51.11 | Membrane protein
VACV-COP C3L | 789 | 13.31 | 4.18 | 3.8 | 52.27 | High Ser%, low Asp% and Glu%
VACV-COP I5L | 237 | 5.06 | 2.53 | 1.27 | 49.58 | Small, membrane protein
VACV-COP I6L | 1146 | 10.99 | 4.45 | 4.45 | 49.7 | High Ser%, low Asp% and Glu%
Minor ORFs incorrectly classified as major (same columns):
VACV-COP A ORF G | 225 | 6.67 | 4 | 8 | 54.39 | Low Ser%, high Asp% and AG%
VACV-COP A ORF T | 243 | 1.23 | 3.7 | 2.47 | 51.63 | Overlaps on same strand as major ORF
VACV-COP B ORF G | 273 | 1.1 | 3.3 | 1.1 | 53.26 | Low Ser%, high AG%
VACV-COP C ORF F | 273 | 1.1 | 3.3 | 1.1 | 53.26 | Low Ser%, high AG%
VACV-COP E ORF D | 198 | 9.09 | 4.55 | 6.06 | 55.72 | High Asp%, Glu%, AG%
VACV-COP F ORF A | 201 | 4.48 | 4.48 | 0 | 50.49 | Low Ser%
A similar analysis was repeated for the genome of Amsacta moorei (AMEV), an extremely AT-rich (82%) entomopoxvirus [ 13 ]. The AMEV genome was chosen for two reasons: (1) it is not closely related to any known poxviruses, and therefore its genome contains a large number of genes with unknown function, and (2) its genome was liberally annotated, and therefore it is questionable which ORFs are likely to be functional genes. Thus, the "quality" measure was used to predict which AMEV ORFs are most likely to be minor. Figure 3 graphically depicts the results of the "quality" measure calculation for AMEV.
Due to the extreme AT-richness of the AMEV genome, it was necessary to modify the "quality" measure to the following formula: [Ser%-Asp%-Glu%+(49%-AG%)]. 49% was chosen instead of 50% for the purine portion of this equation since the average purine content of the entire AMEV genome is 49%. As was the case with VACV-COP, if the ORF is minor, the results of the "quality" measure will be positive. Figure 3 Results of the "quality" measure for Amsacta moorei virus (AMEV). Y-axis plots results of the "quality" calculation (Ser%-Asp%-Glu%+[49%-AG%]) and X-axis depicts rank of each ORF. It was found that 51 ORFs had a positive "quality" value and are therefore considered minor. Of these 51 ORFs, 41 further fit our definition of a minor ORF because they overlap another, larger ORF on the opposite strand. Four major ORFs (AMEV-161, AMEV-164, AMEV-171, and AMEV-183) were incorrectly classified as minor even though they each have orthologs in other poxviruses and are therefore major (Table 2 ). The remaining 6 ORFs (AMEV-001, AMEV-089, AMEV-148, AMEV-198, AMEV-ITR02, and AMEV-ITR08) that were classified as minor using our "quality" measure were found not to overlap any ORFs on the opposite or same DNA strand, and were further analyzed using the AMEV purine skew to try to determine the correct coding strand in each of these 6 regions (Table 3 ). For 5 of these 6 ORFs (AMEV-001, AMEV-089, AMEV-148, AMEV-198, AMEV-ITR02), the purine skew indicates a coding strand opposite to the strand on which these ORFs are located; in other words, these ORFs are minor. For the remaining ORF (AMEV-ITR08), the purine skew indicated a coding strand identical to the strand on which the ORF is located, and therefore this ORF may actually be major.
AMEV-ITR08 does not have any orthologs in other poxviruses, but it does show 73.6% amino acid identity with the AMEV-ITR07 ORF, which was classified as major by the "quality" calculation, further supporting that AMEV-ITR08 is likely major. AMEV-ITR08 was predicted to contain a transmembrane domain [ 13 ], which could explain why it was misclassified.
Table 2 List of AMEV ORFs that were incorrectly classified.
Major ORFs incorrectly classified as minor (ORF name | ORF size (bp) | Serine content (%) | Aspartate content (%) | Glutamate content (%) | Purine content (%) | Explanation):
AMEV-161 | 243 | 11.11 | 2.47 | 1.23 | 47.56 | Membrane protein
AMEV-164 | 708 | 7.63 | 2.12 | 2.97 | 47.97 | High Ser%, low Asp% and Glu%
AMEV-171 | 276 | 3.26 | 1.09 | 1.09 | 48.39 | Low Asp% and Glu%
AMEV-183 | 675 | 6.67 | 3.11 | 1.33 | 51.18 | Low AG% and low Glu%
Minor ORFs incorrectly classified as major (same columns):
AMEV-152 | 225 | 0 | 12 | 1.33 | 60.97 | Overlaps on same strand as major ORF
AMEV-189 | 180 | 1.67 | 8.33 | 1.67 | 43.17 | Low Ser%, high Asp%
AMEV-191 | 228 | 0 | 2.63 | 10.53 | 61.9 | Overlaps on same strand as major ORF
Table 3 List of 6 AMEV ORFs classified as minor that do not fit the definition of a minor ORF, and conclusions as to whether or not they are minor (ORF name | DNA strand on which ORF is located | Direction of purine skew | Conclusion):
AMEV-001 | Top | Down | Minor
AMEV-089 | Top | Down | Minor
AMEV-148 | Bottom | Up | Minor
AMEV-198 | Bottom | Up | Minor
AMEV-ITR02 | Top | Down | Minor
AMEV-ITR08 | Top | Up | May be major
There were three ORFs that had been classified as major (negative value for the "quality" measure) yet overlapped a larger gene on the opposite or same DNA strand (Table 2 ).
Two of these ORFs (AMEV-152 and AMEV-191) overlap a larger ORF on the same strand, and therefore neither the purine skew nor the "quality" measure is capable of determining which ORF is major; one ORF (AMEV-189) overlaps the much larger spheroidin gene on the opposite strand and was likely misclassified due to its lower than average serine content and higher than average aspartate content. For the analyses shown in Figures 2 and 3 , the cut-off value used in both cases was zero. The value of zero was chosen in the training case (VACV-COP) because it represented a reasonable cut-off between genes that were known to be major and ORFs that were known to be minor, with minimal misclassification of genes. With our test case (AMEV), since it was not known which ORFs were major or minor, a cut-off of zero was initially used with the presumption that the cut-off might need to be adjusted due to the extreme AT-richness of the AMEV genome. Analyzing the "quality" measure data obtained for AMEV with a cut-off of zero yielded satisfactory results: the number of overlapping, and therefore likely minor, ORFs that were misclassified was relatively low, so we decided to maintain the zero cut-off. A cut-off of zero likely worked well with AMEV despite its extremely AT-rich genome because the "quality" measure that was used reflected the average AG% of the genome. Other poxvirus genomes analyzed using our method would likely also use a cut-off of zero, provided the "quality" measure is changed to reflect the average AG content of the genome, although we have yet to test whether this cut-off is universal throughout all poxviruses. Thus far we have shown that purine skews can be used to predict the coding strand of poxvirus genomes and that major ORFs in VACV-COP and in AMEV usually contain greater than 50% and 49% purines, respectively.
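The cumulative-walk construction behind these purine skews can be sketched in a few lines (an illustrative reimplementation, not the DNA Grapher code itself; the example sequence is invented):

```python
def purine_skew(seq):
    """Cumulative purine skew: +1 for each purine (A/G), -1 for each pyrimidine (C/T).

    An upward trend marks a purine-rich top strand (genes likely transcribed
    rightward); a downward trend marks a pyrimidine-rich top strand.
    """
    walk, y = [], 0
    for base in seq.upper():
        y += 1 if base in "AG" else -1
        walk.append(y)
    return walk

# Hypothetical example: a purine-rich stretch followed by a pyrimidine-rich one.
skew = purine_skew("AGGAAG" + "CTTCCT")
print(skew)  # rises to +6 over the first half, then falls back to 0
```

A real genome would be scanned the same way, with a sliding window (as described in the Methods) used to smooth the plot before looking for changes in slope.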
In order to explain this purine richness in genes, the purine (R) to pyrimidine (Y) ratio (R:Y) was calculated for each codon position of each coding and non-coding ORF in VACV-COP. A Student's T-test was used to compare the mean R/Y ratio values for the coding (genes) and non-coding ORFs at each codon position; means were considered statistically different when the p-value was less than 0.05. At the first nucleotide position in the codon, both VACV-COP major and minor ORFs are rich in purines, but the major ORFs (genes) have significantly (p < 0.05) higher levels of purines at this position (Table 4 ). At the second nucleotide position, the major ORFs have an R:Y ratio of approximately 1 and the minor ORFs have a significantly lower R:Y ratio (p < 0.05), indicating that minor ORFs are pyrimidine rich at the second codon position whereas major ORFs contain roughly equal amounts of purines and pyrimidines at this position. At position 3, no statistical difference was found, with both major and minor ORFs being rich in pyrimidines. Thus, for the first and second nucleotide positions of the codons, the major ORFs (genes) have significantly higher purine content than the minor ORFs.
Table 4 Mean purine to pyrimidine (R/Y) ratios for each codon position of vaccinia virus Copenhagen major and minor ORFs. Positions marked with an asterisk (*) are statistically different.
(ORF class | Position 1* | Position 2* | Position 3):
Major ORFs | 1.77 | 0.99 | 0.93
Minor ORFs | 1.21 | 0.75 | 0.96
It is important to remember that purine content of the coding strand and amino acid content of the predicted protein are just two measures that can be used to help predict whether an ORF is likely to be a functional gene, and that usually they are only useful in discriminating between coding and non-coding strands.
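The per-codon-position R/Y ratio can be computed for any ORF sequence as follows (a minimal sketch, not the codontree program used in the Methods; the example ORF is invented):

```python
def codon_position_ry_ratios(orf):
    """Purine/pyrimidine (R/Y) ratio at each of the three codon positions.

    Position 0 takes every third base starting at the first base of the ORF,
    position 1 starts at the second base, and so on.
    """
    ratios = []
    for pos in range(3):
        bases = orf.upper()[pos::3]
        r = sum(b in "AG" for b in bases)  # purines
        y = sum(b in "CT" for b in bases)  # pyrimidines
        ratios.append(r / y if y else float("inf"))
    return ratios

# Hypothetical 3-codon ORF (ATG CTT AGC): purine-rich at position 1 only.
print(codon_position_ry_ratios("ATGCTTAGC"))  # [2.0, 0.5, 0.5]
```

Computing these ratios for every major and every minor ORF yields the two samples compared by the t-test at each position.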
Occasionally, ORFs that are fragments of bona fide genes are also flagged as non-functional; this is probably because of unusual amino acid content in small protein sub-domains. An example of this is the A25L ORF of VACV-COP, which was flagged as non-functional by this method even though it is a fragment of the ATI protein. In a similar way, fragmentation of genes into smaller ORFs can also lead to unusual isoelectric points in the resulting predicted proteins; the 14 ORFs with a predicted pI of >9.6 are all minor ORFs or gene fragments. Thus, multiple approaches, which may also include promoter analysis, must be applied to attempt to correctly annotate small orphan ORFs in these genomes, and there is no guarantee that the process will be 100% successful. Conclusion We have shown that in the case of AT-rich poxviruses, purine skews can be used to help predict the coding regions of the genome. This is particularly useful if predicted ORFs overlap each other and it is not immediately obvious which ORF is major (when neither ORF has an ortholog in another poxvirus genome). A second method that can be used in conjunction with purine skews is to calculate the "quality" of each predicted ORF using information from amino acid composition and purine content. For a given ORF, if the result of this calculation is negative the ORF is predicted to be a functional gene, and if the result is positive, the ORF is predicted to be minor. By comparing purine to pyrimidine (R/Y) ratios at each codon position of major and minor vaccinia virus ORFs, it was found that the purine abundance seen for major ORFs stems primarily from the first codon position, with both the second and third codon positions containing roughly equal amounts of purines and pyrimidines. The software used to create the purine skews (DNAGrapher) and the VOCs database are both available for public use via the web [ 14 , 15 ].
Methods Purine skews Purine skews were created using the DNA Grapher feature in VOCs [ 9 ]. The DNA Grapher program implements the algorithm originally developed by Lobry [ 16 ]. The algorithm assigns a direction to each base encountered in the sequence. In the case of purine skews, the graph begins at position (0,0), moves upwards one unit if the base encountered is a purine (A or G), and moves downwards one unit if the base encountered is a pyrimidine (C or T). The plot continues in this fashion until the end of the sequence is reached. A variable window size can also be set. In this case, the plot trend will be either upwards or downwards, depending on the average number of purines or pyrimidines in the window. The window then slides over the number of bases defined by the window size; for example, with a window size of 10 bp, the window slides to the eleventh base and the average is computed again. The DNAGrapher program is integrated into the VOCs software and is also accessible as a Java WebStart program [ 14 ]. Graphing ORFs by strand The 202 major ORFs (genes) of VACV-COP were ordered by ascending start position on the genome and then plotted, using Microsoft Excel, according to the strand on which they are located. The first gene was plotted at position 0 of the y-axis of the graph, and a value of either -1 or +1 was added for each subsequent gene on the genome depending on whether it was on the bottom or top strand, respectively. ORF "quality" calculation The analysis of VACV-COP ORFs was performed by plotting the results of the following equation: Ser%-Asp%-Glu%+(50%-AG%), where Ser% is serine percentage, Asp% is aspartate percentage, Glu% is glutamate percentage, AG% is purine percentage, and the value of 50% is the average purine content of the VACV-COP genome.
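The "quality" calculation defined above is straightforward to implement (a direct transcription of the published formula; the composition values in the example below are invented, not taken from a real ORF):

```python
def quality_measure(ser_pct, asp_pct, glu_pct, ag_pct, genome_ag_pct=50.0):
    """Ser% - Asp% - Glu% + (genome average AG% - ORF AG%).

    Negative result => predicted major (functional gene);
    positive result => predicted minor. VACV-COP uses a genome average
    of 50% purines; AMEV would use 49%.
    """
    return ser_pct - asp_pct - glu_pct + (genome_ag_pct - ag_pct)

# Illustrative values resembling a purine-rich, Asp/Glu-rich major ORF:
q = quality_measure(ser_pct=6.0, asp_pct=7.0, glu_pct=8.0, ag_pct=54.0)
print(q)  # 6 - 7 - 8 + (50 - 54) = -13.0 -> predicted major
```

Ranking all ORFs by this value and plotting value against rank reproduces the shape of Figures 2 and 3, with the zero cut-off separating predicted major from predicted minor ORFs.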
The "quality" measure for AMEV ORFs used the following formula: Ser%-Asp%-Glu%+(49%-AG%), where the only modification of this formula from VACV-COP was the value of 49%, which reflects the average purine content of the AMEV genome. The amino acid composition and purine data were obtained from the VOCs database, which is available on the internet as a Java Web Start program [ 9 , 15 ]. The results of the equation for each ORF were tabulated, sorted in ascending order and assigned a rank, from 1 for the ORF with the most negative value to either 266 (VACV-COP) or 292 (AMEV) for the ORF with the most positive value. The results of the calculation were plotted on the y-axis and the rank of each ORF was plotted on the x-axis using Microsoft Excel. Purine/pyrimidine ratio comparison To analyze the ratio of purines to pyrimidines at each codon position, the total number of each nucleotide at each codon position was first calculated using the codontree program with the BC=A option (calculate the base composition at all 3 codon positions) selected [ 17 , 18 ]. Once the base composition at each codon position was calculated, the purine to pyrimidine ratio (R/Y) was calculated for each ORF of the dataset. The mean values of the R/Y ratio for each dataset were compared using Student's T-test to determine if the mean R/Y ratio for each dataset was statistically different. The null hypothesis for the Student's T-test was that the means were equal, and the null hypothesis was rejected if the p-value was < 0.05. The two datasets used for this portion of the paper consisted of (1) all ORFs classified as major in VACV-COP and (2) all ORFs classified as minor in VACV-COP. Authors' contributions MD performed all analyses and wrote the manuscript. CU conceived of, and supervised, the study, and edited the manuscript.
544561 | Duloxetine for the long-term treatment of Major Depressive Disorder in patients aged 65 and older: an open-label study | Background Late-life depression is a common, chronic and recurring disorder for which guidelines recommend long-term therapy. The safety and efficacy of duloxetine for the treatment of major depressive disorder (MDD) were evaluated using data from elderly patients (age ≥ 65 years; n = 101) who participated in a large, multinational, open-label study. Methods Patients meeting DSM-IV criteria for MDD received duloxetine 80 mg/d (40 mg twice daily (BID)) to 120 mg/d (60 mg BID) for up to 52 weeks. Efficacy measures included the Clinical Global Impression of Severity (CGI-S) scale, the 17-item Hamilton Rating Scale for Depression (HAMD 17 ), the Beck Depression Inventory-II (BDI-II), the Patient Global Impression of Improvement (PGI-I) scale, and the Sheehan Disability Scale (SDS). Safety and tolerability were evaluated using discontinuation rates, spontaneously reported adverse events, and changes in vital signs, ECG, and laboratory analytes. Results Mean changes in HAMD 17 total score at Weeks 6, 28, and 52 were -13.0, -17.4 and -17.5 (all p-values <.001). Significant improvement (p < .001) on both clinician-rated (CGI-S) and patient-rated (PGI-I) measures was observed at Week 1 and sustained throughout the study. Observed case response rates at Weeks 6, 28, and 52 were 62.9%, 84.9%, and 89.4%, respectively, while the corresponding rates of remission were 41.4%, 69.8%, and 72.3%. Adverse events led to discontinuation in 27 (26.7%) patients. Treatment-emergent adverse events reported by >10% of patients included dizziness, nausea, constipation, somnolence, insomnia, dry mouth, and diarrhea. Most events occurred early in the study. Mean changes at endpoint in blood pressure and body weight were less than 2.0 mm Hg and -0.1 kg, respectively.
Conclusions In this open-label study, duloxetine was effective, safe, and well tolerated in the long-term treatment of MDD in patients aged 65 and older. | Background Late-life depression is a common and disabling condition which represents a substantial public health concern [ 1 ]. The prevalence of major depressive disorder (MDD) in the community-dwelling elderly population is estimated at 1–3%, with depressive symptoms being present in approximately 15% [ 2 ]. The rate of occurrence of MDD is even higher among institutionalized older patients. In long-term care patients the incidence has been estimated to be 12% to 25%, with subsyndromal depressive symptoms present in an additional 18% to 30% [ 3 ]. Despite advances in available antidepressant treatments, limitations still exist in both efficacy and safety. Tricyclic antidepressants (TCAs) generally provide robust efficacy, but a number of side effects associated with this class of medications are of particular concern in older patients (e.g. anticholinergic adverse events, orthostatic hypotension, and sedation). Selective serotonin reuptake inhibitors (SSRIs) have provided an improved tolerability profile compared to the TCAs through lower rates of adverse events, and substantially lower toxicity in overdose [ 4 ]. Furthermore, SSRIs do not appear to exhibit age-related increases in occurrence of adverse events [ 5 ]. However, these newer selective antidepressants appear, in general, to achieve equivalent or lower remission rates compared with the older tricyclics [ 6 ]. Duloxetine is a potent dual reuptake inhibitor of serotonin (5-HT) and norepinephrine (NE) [ 7 ]. The efficacy of duloxetine in the acute treatment of MDD has been established in randomized, double-blind, placebo-controlled studies in patients aged 18 and older [ 8 - 11 ]. 
A subsequent post-hoc analysis of efficacy data from these studies, focusing upon those patients aged 55 and older receiving once-daily duloxetine (60 mg), supported the findings in the general patient population [ 12 ]. The safety and tolerability of duloxetine have also been demonstrated under double-blind conditions. In placebo-controlled trials of duloxetine in patients aged 18 and older (doses from 40 – 120 mg/d) the most frequently reported adverse events were nausea, headache, dry mouth, fatigue, insomnia, and dizziness, while the overall safety profile of duloxetine was comparable to that of available SSRI medications [ 11 ]. A comparable safety and tolerability profile was observed following a post-hoc analysis of data from those patients aged 55 and older, including a low incidence of cardiovascular adverse events and minimal effects upon blood pressure and heart rate [ 12 ]. However, these acute placebo-controlled trials of duloxetine were of 9 weeks duration or less. An NIH consensus panel has recommended that geriatric patients be given continuing antidepressant treatment for at least 6 months for a first episode and for at least 1 year for recurrent episodes [ 13 ], while some investigators suggest that maintenance treatment in the elderly be extended to 2 years [ 14 ]. In order to evaluate the long-term tolerability, safety, and efficacy of duloxetine, a one-year open-label trial in depressed patients was undertaken. This report examines the subset of patients aged 65 and older who participated in the study. While patients in this study received doses of 80 mg/d or 120 mg/d, it should be noted that the approved dose range for duloxetine for the treatment of MDD is 40–60 mg/d. Methods Study design This was a 52-week, open-label, single-arm study of outpatients (aged ≥ 18 years) meeting Diagnostic and Statistical Manual of Mental Disorders, 4th Edition (DSM-IV) [ 15 ] criteria for MDD. 
The study included a total of 1279 patients at 52 investigative sites in Argentina, Brazil, Canada, Colombia, Mexico, the United States, and Venezuela. The primary objective of the study was to evaluate the safety of duloxetine (80 or 120 mg/d given as two equal doses per day, i.e. 40 or 60 mg BID) for up to 52 weeks. During the first week of therapy, all patients received duloxetine 40 mg BID. Patients unable to tolerate 40 mg BID could have their dose decreased to 20 mg BID, but were required to increase the dose to 40 mg BID at Week 2. Patients still unable to tolerate 40 mg BID were discontinued from the study. During the remainder of the study, the patient's dose could be adjusted up to 60 mg BID or down to 40 mg BID, based upon the physician's clinical evaluation of tolerability and efficacy. This report focuses upon data taken from the subset of patients aged 65 years and older (n = 101) within the larger study described above. Patients The study protocol was approved by the ethics committee at each site in accordance with the principles of the Declaration of Helsinki. All patients provided written informed consent prior to the administration of any study procedures or study drug. All patients were required to have a Clinical Global Impression of Severity (CGI-S) score ≥ 3 at the screening and baseline study visits. Patients were excluded for the following reasons: a previous or current diagnosis of schizophrenia, schizophreniform disorder, schizoaffective disorder, or bipolar disorder; presence of an Axis II disorder that would interfere with protocol compliance; serious medical illness; taking benzodiazepines on a daily basis for ≥ 2 weeks prior to enrollment; a history of substance dependence within the last year; or a positive urine drug screen. Subjects judged to be at risk for suicide were also excluded. Concomitant medications Patients were not permitted to receive other antidepressant, antimanic, or antipsychotic agents during the study.
Episodic use (≤ 3 consecutive days, and no more than 100 total days) of benzodiazepines was permitted. The use of Benadryl, chloral hydrate, cough and cold medications, and narcotics was allowed on an episodic basis only. Subjects were permitted to take antihypertensives, antiarrhythmics, antibiotics, and multivitamins, among other medications, while in the study. Efficacy measures Efficacy was assessed using the CGI-S scale [ 16 ] (a priori specified as the primary outcome), the HAMD 17 total score [ 17 ], HAMD 17 subscales (core – Items 1, 2, 3, 7, and 8; Maier – Items 1, 2, 7, 8, 9, and 10; anxiety/somatization – Items 10, 11, 12, 13, 15, and 17; retardation – Items 1, 7, 8, and 14; sleep – Items 4, 5, and 6), the Beck Depression Inventory-II (BDI-II) [ 18 ], and the Patient Global Impression of Improvement (PGI-I) scale [ 16 ]. Patient-rated quality of life was evaluated using the Sheehan Disability Scale (SDS) [ 19 ], which is a composite of 3 self-rated 10-point Likert response subscales (0 = no disability, 1–3 = mild, 4–6 = moderate, 7–9 = marked, 10 = extreme) assessing work, family, and social functioning during the past month. All outcomes were assessed at Weeks 6, 28, and 52, or upon early discontinuation, except for the PGI-I and CGI-S scales, which were collected at all visits. Patients were defined as responders if they had a decrease from baseline of at least 50% in HAMD 17 total score, and as remitters if they had a HAMD 17 total score ≤ 7. Safety measures Safety measures included spontaneously reported adverse events, serious adverse events (events resulting in death, inpatient hospitalization, cancer, severe or permanent disability, congenital abnormality, or a life-threatening condition), vital signs, electrocardiograms (ECGs), and laboratory analyses. Adverse events and vital signs were collected at each visit.
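The subscale item groupings and the responder/remitter definitions given above translate directly into scoring rules. The following is a minimal illustrative sketch of those rules as stated in the text, not the study's analysis code; the function names and the example item scores are hypothetical:

```python
# HAMD-17 subscale item groupings as listed in the Methods (items are 1-based).
SUBSCALES = {
    "core": [1, 2, 3, 7, 8],
    "maier": [1, 2, 7, 8, 9, 10],
    "anxiety_somatization": [10, 11, 12, 13, 15, 17],
    "retardation": [1, 7, 8, 14],
    "sleep": [4, 5, 6],
}

def subscale_score(item_scores, subscale):
    """Sum the HAMD-17 items belonging to one subscale.

    item_scores: dict mapping item number (1..17) -> score.
    """
    return sum(item_scores[i] for i in SUBSCALES[subscale])

def is_responder(baseline_total, endpoint_total):
    # Response: decrease from baseline of at least 50% in HAMD-17 total score.
    return (baseline_total - endpoint_total) >= 0.5 * baseline_total

def is_remitter(endpoint_total):
    # Remission: HAMD-17 total score <= 7.
    return endpoint_total <= 7
```

Note that under these definitions a patient can remit without formally responding (e.g., a mildly ill patient moving from 12 to 7), which is why the paper reports the two rates separately.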
Lilly reference ranges were used to define limits for abnormal laboratory values [ 20 ] and potentially clinically significant (PCS) changes in selected laboratory analytes [ 21 ]. PCS changes in blood pressure were defined as follows: (i) low supine (or standing) systolic BP: ≤ 90 mm Hg and a decrease from baseline of ≥ 20 mm Hg; (ii) high supine (or standing) systolic BP: ≥ 180 mm Hg and an increase from baseline of ≥ 20 mm Hg; (iii) low supine (or standing) diastolic BP: ≤ 50 mm Hg and a decrease from baseline of ≥ 15 mm Hg; (iv) high supine (or standing) diastolic BP: ≥ 105 mm Hg and an increase from baseline of ≥ 15 mm Hg. Patients were considered hypertensive at baseline if they had a historical diagnosis, secondary condition, or adverse event at the baseline visit consistent with a clinical diagnosis of hypertension or high blood pressure. ECGs were collected at baseline and Weeks 4, 28, and 52, or at early discontinuation. Patients at 2 sites in Mexico and 1 site in Colombia also had ECGs over-read by a cardiologist at a central location. For these ECGs, QT intervals were corrected (QTc) using Fridericia's correction (QTcF). All other patients had ECGs read by the site for classification as either normal or abnormal. Limits for PCS QTc values were an increase in QTcF of ≥ 30 msec and any postbaseline value ≥ 450 msec for males or ≥ 470 msec for females [ 22 ]. Statistical analyses Mean changes from baseline to last observation in laboratory analytes, vital signs, and ECG intervals were assessed using ANOVA with models that included investigator. Longitudinal mean changes and categorical changes (temporal patterns) were assessed via a likelihood-based repeated measures approach. Models for mean changes included investigator, visit, baseline value, and baseline-by-visit interaction.
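The PCS blood-pressure thresholds and the Fridericia-corrected QT limits defined in this section can be expressed as simple predicates. This is an illustrative sketch only; the text reads as requiring both the ≥ 30 msec QTcF increase and the absolute post-baseline limit, and that conjunctive reading is what is implemented here:

```python
def pcs_systolic_bp(baseline, value):
    """PCS supine or standing systolic BP, per the thresholds above (mm Hg)."""
    low = value <= 90 and (baseline - value) >= 20
    high = value >= 180 and (value - baseline) >= 20
    return low or high

def pcs_diastolic_bp(baseline, value):
    """PCS supine or standing diastolic BP, per the thresholds above (mm Hg)."""
    low = value <= 50 and (baseline - value) >= 15
    high = value >= 105 and (value - baseline) >= 15
    return low or high

def qtcf(qt_msec, rr_sec):
    """Fridericia's correction: QTcF = QT / RR^(1/3), with RR in seconds."""
    return qt_msec / rr_sec ** (1.0 / 3.0)

def pcs_qtcf(baseline_qtcf, postbaseline_qtcf, male):
    """PCS QTcF: increase >= 30 msec and a post-baseline value >= 450 msec
    (males) or >= 470 msec (females), reading the two criteria as conjoined."""
    limit = 450 if male else 470
    return (postbaseline_qtcf - baseline_qtcf) >= 30 and postbaseline_qtcf >= limit
```

At a heart rate of 60 bpm (RR = 1 s) the correction leaves QT unchanged, which is why corrected and uncorrected intervals coincide at that rate.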
Mean change in CGI-S score was compared between younger (age <65) and elderly (age ≥ 65) patients using the repeated measures analysis as previously described, with age group and age group-by-visit interaction added to the model. Differences between younger and elderly patients in rates of treatment-emergent adverse events were assessed using Fisher's exact test. Results Patient disposition This report was based on data from 101 patients aged 65 and older. The oldest patient was 87 years of age, while the median age was 70. Patient characteristics at baseline are summarized in Table 1.

Table 1. Summary of patient demographics and psychiatric history. Values are mean (SD) unless otherwise stated; duloxetine 80–120 mg/d, administered as 40 mg BID or 60 mg BID (n = 101).
  Gender, n (%): Female 72 (71.3); Male 29 (28.7)
  Age, y: 71.9 (5.4)
  Age range, y: 65–87
  Weight, kg: 66.5 (14.5)
  Ethnicity, n (%): Caucasian 43 (42.6); Hispanic 55 (54.5); Other 3 (3.0)
  Age at onset, y: 63.5 (13.3)
  Duration of current episode, wks: 86.0 (161.0)
  Number of previous episodes: 1.1 (2.1)
  Duration of last episode, wks: 57.6 (110.2)

Efficacy Mean changes from baseline for all efficacy outcomes were highly significant (p < .001, t-test for mean change) at all assessment times (Table 2). In the case of the CGI-S and PGI-I scales, significant improvements were observed at Week 1 and at all subsequent visits (p < .001, t-test for mean). Observed case response rates at Weeks 6, 28, and 52 were 62.9% (44/70), 84.9% (45/53), and 89.4% (42/47), respectively, while the corresponding rates of remission were 41.4% (29/70), 69.8% (37/53), and 72.3% (34/47), respectively.
Table 2. Efficacy outcome measures. Values are mean change (SE) from baseline; ** p < .001 from t-test for mean change.
  Outcome measure           Baseline   Week 6           Week 28          Week 52
  CGI-Severity              4.51       -2.08 (0.11)**   -2.93 (0.12)**   -3.15 (0.12)**
  PGI-Improvement           N/A        2.33 (0.14)**    1.83 (0.16)**    1.84 (0.16)**
  HAMD 17 total score       21.8       -13.0 (0.7)**    -17.4 (0.8)**    -17.5 (0.8)**
    Anxiety subscale        6.70       -3.46 (0.30)**   -4.89 (0.33)**   -4.90 (0.34)**
    Core subscale           8.83       -5.65 (0.33)**   -7.50 (0.36)**   -7.61 (0.38)**
    Maier subscale          10.7       -6.64 (0.38)**   -8.90 (0.42)**   -9.06 (0.44)**
    Retardation subscale    7.84       -4.49 (0.27)**   -6.58 (0.30)**   -6.49 (0.31)**
    Sleep subscale          3.68       -2.34 (0.21)**   -2.84 (0.23)**   -2.83 (0.24)**
    HAMD 17 Item 1          2.64       -1.73 (0.11)**   -2.30 (0.13)**   -2.30 (0.13)**
    HAMD 17 Item 3          0.74       -0.58 (0.06)**   -0.59 (0.06)**   -0.61 (0.06)**
  BDI-II total score        29.5       -15.8 (1.0)**    -22.3 (1.1)**    -22.0 (1.1)**
  Sheehan Disability Scale
    Work item               6.91       -3.01 (0.32)**   -4.60 (0.37)**   -4.27 (0.39)**
    Family item             6.82       -3.63 (0.32)**   -4.88 (0.35)**   -4.95 (0.37)**
    Social item             7.27       -3.45 (0.34)**   -4.57 (0.38)**   -4.85 (0.40)**
  CGI-Severity = Clinical Global Impression of Severity; PGI-Improvement = Patient Global Impression of Improvement; HAMD 17 = 17-item Hamilton Rating Scale for Depression; BDI-II = Beck Depression Inventory-II.

A comparison of visitwise mean changes in CGI-S score between elderly patients (age ≥ 65, n = 101) and those patients in the study aged <65 years (n = 1178; Figure 1) revealed a somewhat more rapid onset of efficacy in younger patients, with differences between age groups being statistically significant at Weeks 2, 3, and 4. At subsequent visits the differences between age groups became progressively smaller, and mean changes were essentially equal at the study endpoint. Figure 1 Comparison of mean change in CGI-Severity score for duloxetine-treated patients aged ≥ 65 years (n = 98) and age 18–64 years (n = 1121). * p ≤ .05 for between-group comparison.
Treatment discontinuation The most common reasons for study discontinuation were adverse event (26.7%), personal conflict/other reasons (9.9%), and noncompliance (5.0%). The adverse events leading to discontinuation in >1.0% of enrolled patients at a duloxetine dose of 80–120 mg/d were somnolence (4.0%), dizziness (3.0%), diarrhea (2.0%), hypertension (2.0%), and vomiting (2.0%). Two-thirds of the discontinuations due to adverse events (18/27) occurred within 2 weeks of initiation of therapy. Serious adverse events A total of 9 enrolled patients reported serious adverse events during the study. Most of these events were considered by the investigator to be unrelated to duloxetine exposure. The serious adverse events reported by more than 1 patient were hip fracture (2) and confusion (2), while there were single reports of agitation, angina pectoris, cerebrovascular disorder, coronary artery atherosclerosis, dementia, dizziness, hypomania, and myocardial ischemia. Individual occurrences were few; thus, no clear temporal pattern in the incidence of each event could be determined. Treatment-emergent adverse events Treatment-emergent adverse events occurring in >5% of patients during the open-label therapy phase (Weeks 1 through 52) are summarized in Table 3. The incidences of these events during Weeks 1 to 8 and Weeks 9 to 52 are also listed in Table 3. During Weeks 1 through 52, adverse events reported by more than 10% of patients were dizziness, nausea, constipation, somnolence, insomnia, dry mouth, diarrhea, headache, and increased sweating. Over 75% of occurrences of these events were rated as mild or moderate in severity. The incidence of treatment-emergent adverse events was lower during the latter 44 weeks of the study (Weeks 9 to 52) than during the first 8 weeks. Each event with an incidence of at least 5% during Weeks 9 to 52 was also present at the same or a higher rate during the first 8 weeks.
Table 3. Treatment-emergent adverse events occurring in >5% of patients in Weeks 1–52, n (%).
  Event                   Weeks 1–8    Weeks 9–52   Weeks 1–52
  Nausea                  29 (28.7)    0 (0.0)      29 (28.7)
  Dizziness               27 (26.7)    5 (5.0)      31 (30.7)
  Somnolence              22 (21.8)    1 (1.0)      23 (22.8)
  Constipation            20 (19.8)    5 (5.0)      23 (22.8)
  Dry mouth               16 (15.8)    4 (4.0)      18 (17.8)
  Insomnia                15 (14.9)    8 (7.9)      22 (21.8)
  Headache                11 (10.9)    6 (5.9)      16 (15.8)
  Increased sweating      11 (10.9)    4 (4.0)      15 (14.9)
  Diarrhea                11 (10.9)    6 (5.9)      17 (16.8)
  Tremor                  7 (6.9)      2 (2.0)      9 (8.9)
  Anxiety NEC             7 (6.9)      3 (3.0)      10 (9.9)
  Fatigue                 7 (6.9)      4 (4.0)      9 (8.9)
  Decreased appetite      7 (6.9)      1 (1.0)      7 (6.9)
  Vomiting                7 (6.9)      3 (3.0)      10 (9.9)
  Anorexia                6 (5.9)      3 (3.0)      8 (7.9)
  Back pain               5 (5.0)      2 (2.0)      6 (5.9)
  Abdominal pain, upper   4 (4.0)      2 (2.0)      6 (5.9)

Rates of occurrence of other adverse events of importance in an elderly population were low: 2 patients experienced a fall, while there were single reports of syncope and postural hypotension. When analyzed by age group, patients aged 65 and older were found to report a significantly lower incidence of insomnia and headache than those patients aged <65 (Table 4). No other significant differences were observed between age groups.

Table 4. Treatment-emergent adverse events by age group (events with an occurrence >10% in Weeks 1–52), n (%).
  Event                 Age 18–64 (n = 1178)   Age ≥ 65 (n = 101)   p-value
  Nausea                406 (34.5)             29 (28.7)            .274
  Insomnia              378 (32.1)             22 (21.8)            .034
  Headache              373 (31.7)             16 (15.8)            <.001
  Somnolence            358 (30.4)             23 (22.8)            .114
  Dry mouth             282 (23.9)             18 (17.8)            .180
  Dizziness             267 (22.7)             31 (30.7)            .085
  Constipation          250 (21.2)             23 (22.8)            .705
  Increased sweating    177 (15.0)             15 (14.9)            1.00
  Anxiety               176 (14.9)             10 (9.9)             .188
  Diarrhea              157 (13.3)             17 (16.8)            .363
  Fatigue               125 (10.6)             9 (8.9)              .735
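The between-group comparisons in Table 4 used Fisher's exact test, which can be reproduced from the table's raw counts. The sketch below is a self-contained two-sided implementation (summing hypergeometric probabilities of all tables, with the same margins, that are no more likely than the observed one); it is illustrative and is not the statistical software used in the study:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same row
    and column totals whose probability does not exceed that of the observed
    table (the usual minimum-likelihood two-sided rule).
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p_table(x):
        # P(X = x) for the hypergeometric distribution with fixed margins.
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Small relative tolerance guards against float round-off at the boundary.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-7))

# Counts taken from Table 4 (event count vs. patients without the event).
p_headache = fisher_exact_two_sided(16, 101 - 16, 373, 1178 - 373)
p_insomnia = fisher_exact_two_sided(22, 101 - 22, 378, 1178 - 378)
```

The resulting p-values land in the ranges Table 4 reports (headache well below the .05 threshold, insomnia near .034), though two-sided exact tests can differ slightly between software packages in how the opposite tail is accumulated.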
Cardiovascular profile Mean changes from baseline to last observation for standing and supine systolic and diastolic blood pressures were less than 2 mm Hg and not significantly different from zero: supine systolic BP -1.5 mm Hg (p = .364), supine diastolic BP -1.8 mm Hg (p = .141), standing systolic BP -1.9 mm Hg (p = .269), standing diastolic BP -0.1 mm Hg (p = .907). Using repeated measures analysis, mean changes in blood pressure were <4 mm Hg at every visit from baseline to endpoint. A mean change analysis was utilized to compare blood pressure in patients who were hypertensive (n = 40) versus non-hypertensive (n = 58) at baseline. Baseline hypertensive patients exhibited small mean decreases (<4 mm Hg) in both standing and supine systolic and diastolic blood pressures from baseline to endpoint, while patients who were not hypertensive at baseline demonstrated mean changes in these same measures of 0.3 to -1.1 mm Hg (Figure 2 ). Figure 2 Mean change from baseline to endpoint in blood pressure (mm Hg) for baseline hypertensive and non-hypertensive patients aged ≥ 65 years receiving duloxetine (80–120 mg/d). p > .10 for all between-group comparisons. Mean baseline-to-endpoint increases were observed for supine pulse (mean change = 1.6 bpm, p = .105) and standing pulse (mean change = 1.1 bpm, p = .338) but these values did not differ significantly from zero. Rates of occurrence of potentially clinically significant (PCS) values for systolic and diastolic blood pressures were generally low. The incidence of PCS low standing systolic blood pressure was 5/96 (5.2%), while all other assessed blood pressure and pulse readings had incidences of PCS values <2.5%. There were no significant changes in cardiac intervals detected by ECG. Mean changes from baseline to last observation were: PR -3.3 msec (p = .363), QRS -2.5 msec (p = .420), QT 5.0 msec (p = .730), and QTcF 6.2 msec (p = .553). No patient experienced a PCS QTcF value during the course of the study. 
Body weight After 52 weeks of treatment, the mean change in weight from baseline to last observation was -0.1 kg (p = .741), while a mean weight change of +0.3 kg was determined using mixed-effects model repeated measures (MMRM) analysis (p = .386 for t-test for mean change at endpoint; Figure 3 ). Mean changes in weight at early visits were negative (weight loss), mean changes at intermediate visits were near zero, and mean changes at later visits were positive (weight gain). A total of 3/98 patients (3.1%) experienced PCS weight loss while 6/98 (6.1%) reported PCS weight gain (PCS weight change is defined as a change of ≥ 10% of baseline body weight). The 3 patients displaying PCS weight loss had baseline body mass indices (BMI) of 24.9, 28.5, and 32.1, while those experiencing weight gain had baseline BMIs ranging from 19.9 to 26.7. Figure 3 Mean change in weight (kg) for duloxetine-treated patients aged ≥ 65 years (dose 80–120 mg/d, n = 98). *p ≤ .05 from t-test for mean change. Laboratory analytes Statistically significant mean changes were observed in some laboratory analytes. Despite the statistical significance, the magnitudes of the mean changes were generally small and, in light of the low incidence of potentially clinically significant (PCS) values, not considered clinically relevant. Discontinuation-emergent adverse events Patients who completed treatment through Week 52 had study drug discontinued abruptly (no taper) and were followed off drug for 2 weeks, until Week 54. Discontinuation-emergent adverse events occurring in ≥ 5% of patients were dizziness (8.9%), anxiety (7.9%), headache (5.0%), and insomnia (5.0%). Discussion The current analysis focused upon 101 depressed patients aged 65 years and older who received long-term, open-label treatment with duloxetine (80 mg/d or 120 mg/d). Efficacy was demonstrated on all assessed outcome measures, both clinician- and patient-rated.
Highly significant improvements were seen in both patient- and clinician-rated depression and health outcome scales (CGI-S, HAMD 17 , BDI-II, PGI-I, SDS) at all visits. By way of comparison, significantly greater improvements for duloxetine compared with placebo were observed in HAMD 17 total score, HAMD 17 subscales and CGI-S score in two 9-week, placebo-controlled studies of duloxetine (60 mg once daily (QD)) in patients aged 55 years and older [ 12 ]. Onset of efficacy is an important consideration in antidepressant trials, but in the absence of a placebo arm it is especially difficult to define and assess [ 23 ]. However, the significant improvements from baseline in CGI-S and PGI-I scales at Weeks 1 and 2 are consistent with results from double-blind, placebo-controlled trials in which duloxetine demonstrated significant superiority over placebo as early as Week 1 on core emotional symptoms of depression (HAMD 17 Maier subscale), and global improvement (CGI-S scale) [ 8 ]. It has also been suggested that treatment response may be slower and/or less robust in an elderly population compared with a younger cohort [ 24 ]. Indeed, in the present study a more rapid onset of efficacy was observed in duloxetine-treated patients aged 18–64 when compared with those patients aged ≥ 65. However, the magnitude of treatment differences between age groups progressively diminished and was not significant at any visit after Week 4. This result may have substantial clinical relevance for long-term treatment. It suggests that, although those patients aged 65 and older may exhibit a somewhat less rapid onset of antidepressant action than a younger cohort, elderly patients are able to reach and sustain a level of depressive symptom improvement equal to that observed in younger patients. 
In this study, observed case response and remission rates following 6 weeks of open-label duloxetine therapy (62.9% and 41.4%, respectively) were comparable to the response and remission rates (52.8% and 44.1%, respectively) observed in older patients in two 9-week double-blind, placebo-controlled trials of duloxetine (60 mg QD) [ 12 ]. Furthermore, remission rates at 52 weeks in the present study were only slightly lower than response rates (72.3% and 89.4%, respectively), implying that those patients who responded had a high probability of achieving complete symptom resolution. A growing body of evidence suggests that remission, rather than response, should be the goal of antidepressant treatment [ 25 ]. Responders who do not remit may have appreciable residual symptomatology, and patients with residual symptoms have been found to be at higher risk for relapse or recurrence [ 26 ]. Given the high rates of relapse and recurrence observed among elderly patients, achievement of remission assumes an added degree of importance. In light of the recommendation that elderly patients receive at least 12–18 months of antidepressant therapy [ 27 ], the long-term safety and tolerability of these medications are of considerable importance. Duloxetine was safely administered and well tolerated in this long-term study. While the discontinuation rate due to adverse events (26.7%) was somewhat higher than that observed in older patients (aged ≥ 55) in two 9-week, placebo-controlled trials of duloxetine 60 mg QD (21.0%), the small difference between these rates suggests that relatively few patients stopped taking medication during the continuation and maintenance phases of treatment. The discontinuation rate is also comparable to that observed in a 54-week study of fluoxetine in elderly patients [ 28 ], and is only slightly higher than that obtained from a meta-analysis of acute-phase (≤ 8 week) trials of SSRIs in elderly patients (14.3%–22.8%) [ 29 ].
Given the one-year duration of this study, and the administration of duloxetine at the upper end of its studied dose range (80–120 mg/d) throughout the trial, the long-term tolerability of duloxetine in elderly patients appears to be comparable to that of SSRIs. The incidence and pattern of treatment-emergent adverse events during Weeks 1 to 8 of this study were generally similar to those observed in acute-phase, placebo-controlled trials in older patients [ 12 ]. The most frequently reported adverse events were nausea, dizziness, somnolence, constipation, and dry mouth. Most of the events were either mild or moderate in severity and transient in nature. During the last 44 weeks of the study, no adverse event occurred in more than 8% of the patient population and the incidence of each specific event was generally lower in the entire period from Weeks 9–52 than in the initial 8 weeks of the study. Thus, patients who tolerated duloxetine during the early period of the trial were likely to tolerate long-term dosing. Administration of medication to elderly patients necessitates consideration of the physiological changes which accompany aging. Such changes can result in substantial differences in adverse event profiles between older and younger patient populations [ 30 ]. In this study, comparisons between age groups (18–64 years vs. ≥ 65 years) of the most commonly reported treatment emergent adverse events revealed significant differences only in the rates of insomnia and headache. Furthermore, in each of these cases the higher rates were observed in the younger age group. In the absence of a placebo control arm these results must be viewed with an appropriate degree of caution, but they provide an indication that the adverse event profile for duloxetine in the elderly may be similar to that observed in younger patients. 
Antidepressants with benign cardiovascular profiles may be particularly suitable for the treatment of an elderly population, in which heart disease is more prevalent than in younger patients [ 31 ]. In this study, duloxetine-treated patients exhibited small (less than 2 mm Hg) mean changes in blood pressure from baseline to endpoint and low rates of PCS blood pressure values. Furthermore, patients with baseline hypertension demonstrated mean decreases in blood pressure, while normotensive patients showed essentially no change. Consistent with the profile of duloxetine as an NE reuptake inhibitor, small mean increases (less than 2 bpm) were observed in heart rate. Mean changes in corrected QT interval were small and not significantly different from zero, suggesting that duloxetine did not prolong QT intervals. Collectively, these data indicate that duloxetine exhibits a favorable cardiovascular profile in elderly patients. Weight change is an important consideration in older patients being treated with antidepressants [ 32 ], especially during long-term treatment. Following 52 weeks of open-label duloxetine treatment, mean change in weight from baseline to last observation was -0.1 kg. Repeated measures analysis was used to derive a longitudinal profile of weight change. This revealed a small (<1 kg) decrease in weight at early visits, consistent with the weight change of -0.2 kg observed in older patients in two 9-week acute trials of duloxetine [ 12 ]. However, mean changes at intermediate visits approached zero, while mean changes at the last 2 visits were positive (weight gain). A total of 3/98 patients (3.1%) reported a PCS weight loss while 6/98 (6.1%) reported a PCS weight gain.
By way of comparison, a recent study of weight change among depressed nursing facility residents aged >65 who received ≥ 6 months of antidepressant treatment found rates of clinically important weight loss and weight gain (defined as ≥ 10% change in body weight or Minimum Data Set-Plus weight loss or weight gain marker) of 14.7% and 14.4%, respectively [ 33 ]. It is important to consider all of the safety findings described here in light of the dosing and design requirements of the study. The doses used in this open-label study were up to 2-fold greater than the once-daily 60 mg duloxetine dose which has been shown to provide robust efficacy in older patients in placebo-controlled trials [ 12 ]. The dosing and other design features of the study (e.g. the intensive visit schedule) were specifically included to maximize the probability of uncovering adverse reactions to duloxetine. Furthermore, no special dosing guidelines were implemented for these elderly patients. While lower doses of many antidepressants are recommended in the elderly [ 34 ], especially due to concerns of adverse events among the TCAs, this can lead to the use of subtherapeutic doses and corresponding reductions in efficacy [ 35 ]. In this study, however, the comparable adverse event profiles observed for elderly and younger age groups suggest that a duloxetine dose which has been shown to provide robust efficacy may be safely administered in depressed patients regardless of age. Only in particularly sensitive elderly patients may dosing adjustments be required. Conclusions Results from this open-label study of depressed patients aged 65 and older suggest that duloxetine is safe and well tolerated in long-term use. Statistically significant and clinically relevant improvements in all assessed efficacy measures were observed at each patient visit. Furthermore, the efficacy and adverse event profile of duloxetine appears to be comparable in older (age ≥ 65) and younger patients (age 18–64). 
These results, together with those obtained from acute phase, double-blind, placebo-controlled trials, support the efficacy of duloxetine in the treatment of major depression in older patients. Competing interests Drs. Wohlreich, Mallinckrodt, and Watkin are employees of Eli Lilly and Company. Dr. Hay was employed by Eli Lilly and Company at the time of the study. Authors' contributions MMW, CHM, JGW, and DPH participated in interpretation of data and drafting of the manuscript. CHM carried out the statistical analyses. All authors read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC544561.xml |
532391 | A Transcriptional Profile of Aging in the Human Kidney | In this study, we found 985 genes that change expression in the cortex and the medulla of the kidney with age. Some of the genes whose transcripts increase in abundance with age are known to be specifically expressed in immune cells, suggesting that immune surveillance or inflammation increases with age. The age-regulated genes show a similar aging profile in the cortex and the medulla, suggesting a common underlying mechanism for aging. Expression profiles of these age-regulated genes mark not only age, but also the relative health and physiology of the kidney in older individuals. Finally, the set of aging-regulated kidney genes suggests specific mechanisms and pathways that may play a role in kidney degeneration with age. | Introduction Aging affects nearly all organisms and is a major risk factor in most human diseases. Recent work has begun to uncover molecular mechanisms that specify lifespan and to identify alterations in cellular physiology that occur at the end of life ( Tissenbaum and Guarente 2002 ). For example, oxidative damage caused by the generation of free radicals in the mitochondria has been found to hasten aging by causing an accumulation of damaged cellular components ( Droge 2003 ). Telomere shortening may also play a role in aging by preventing DNA replication and cell division in later years ( Hasty et al. 2003 ). Genetic studies have identified many genes that play a role in specifying lifespan. For example, mutations in yeast sir2 (chromatin regulator), worm daf-2 (insulin-like growth factor receptor), fly methuselah (tyrosine kinase receptor), mouse p53, and the human Werner's syndrome gene (DNA helicase) cause dramatic changes in lifespan ( Guarente and Kenyon 2000 ). Several aging mechanisms alter longevity in multiple organisms. 
For example, mutations in the gene encoding insulin-like growth factor receptor alter lifespan in worms, flies, and mice, indicating that an endocrine signaling pathway has a conserved role in aging ( Hekimi and Guarente 2003 ). Genetic studies have shown that aging can be slowed in mutants that are defective in a wide range of cellular processes (such as mitochondrial function, chromatin regulation, insulin signaling, transcriptional regulation, and genome stability). This indicates that aging is a complex process driven by diverse molecular pathways and biochemical events. As such, a powerful approach to study aging is to use systems biology, which allows a multitude of factors affecting aging to be analyzed in parallel. For example, DNA microarrays and gene expression chips have been used to perform a genome-wide analysis of changes in gene expression in old age. Extensive studies in Caenorhabditis elegans and Drosophila melanogaster have identified hundreds of age-regulated genes ( Hill et al. 2000 ; Zou et al. 2000 ; Lund et al. 2002 ; Pletcher et al. 2002 ; Murphy et al. 2003 ). Several studies have described age-regulated genes in the muscle and brain of mice ( Lee et al. 1999 , 2000 ) and the retina and muscle of humans ( Yoshida et al. 2002 ; Welle et al. 2003 , 2004 ). These age-regulated genes may serve as markers of aging, enabling one to assess physiological age independently of chronological age. Analysis of the functions of these age-regulated genes has identified specific biochemical mechanisms that change toward the end of life. A key question still unresolved is to what extent the mechanisms of aging are conserved between species with vastly different lifespans. Some studies suggest that similar mechanisms are involved in aging in many species. For example, caloric restriction extends lifespan in yeast, worms, flies, mice, and primates ( Weindruch 2003 ). 
Additionally, signaling through the insulin-like growth factor pathway, chromatin regulation by sir2, and oxidative damage have each been shown to affect lifespan in diverse model organisms ( Tissenbaum and Guarente 2002 ). Other studies emphasize that changes occurring at the end of life are unlikely to be evolutionarily conserved ( Kirkwood and Austad 2000 ). In the wild, very few animals (including humans) survive to their maximal biological lifespan. Thus, the changes in physiology that occur in very old animals have minimal effects on the fitness of individuals, and are unlikely to be evolutionarily conserved. On this view, aging is likely to be species-specific, and studies of old age in model organisms are unlikely to be relevant to humans. We have begun our studies of human aging by focusing on the kidney, an organ that shows a quantifiable decline in function with age. One of the primary functions of the kidney is to remove toxins from the blood, which involves filtering plasma through specialized capillary beds (glomeruli) in the renal cortex. The primary function of the tubules within the medulla is to concentrate or dilute the urine so as to maintain fluid balance. The major age-related change in kidney function is a 25% decline in the glomerular filtration rate starting at age 40 ( Hoang et al. 2003 ). The ability of the medulla to concentrate urine declines progressively with age. In this study, we present a molecular portrait of the aging process in the human kidney by analyzing gene expression as a function of age on a genome-wide scale. We show that age regulation is similar in the cortex and the medulla, and that age-regulated genes in the kidney are broadly expressed. We show that the expression profiles of age-regulated genes correlate well with the morphological and physiological state of the kidney in old age.
Finally, we analyze the set of age-regulated genes to identify specific metabolic processes and cellular functions that change as a function of age, and discuss their possible roles in specifying the functional lifespan of the kidney. Results To procure material for analyzing changes in gene expression with age in the human kidney, we obtained kidney samples from normal tissue removed at nephrectomy for either removal of a tumor or for transplantation from 74 patients ranging in age from 27 to 92 y (Tables S1 and S2 ). We dissected each of the 74 kidney samples into cortex (72 samples) and medulla (62 samples) sections, isolated total RNA from each section, synthesized biotinylated complementary RNA (cRNA), and hybridized the labeled cRNA to Affymetrix high-density oligonucleotide arrays (HG-U133A and HG-U133B, containing a total of 44,928 probe sets corresponding to approximately 33,000 well-substantiated human genes). The level of expression for each gene was determined using DChip ( Zhong et al. 2003 ), and the gene chip data were entered into the Stanford Microarray Database ( http://genome-www5.stanford.edu/ ) . Using our dataset, the expression level for every gene as a function of age could be plotted. For example, the expression of CDO1 (which encodes a cysteine dioxygenase type 1 protein) tended to increase with age. There was also variation between subjects and between the cortex and the medulla ( Figure 1 A). Nearly all of the variation represents true differences between samples, as very little variation was observed when we performed repeat hybridizations using the same tissue sample (data not shown). Figure 1 Age-Regulated Genes (A) Shown are expression levels for gene CDO1 . White and black circles represent expression from cortex and medulla, respectively. The y-axis indicates log 2 (expression level), and the x-axis indicates age of patient (years). Dotted and solid lines indicate best fit slopes for the cortex and medulla values, respectively. 
(B) For every gene, we calculated a one-sided p˜-value that its expression changes with age. Shown is a histogram for all of the genes represented on the Affymetrix DNA chips. Genes that decrease with age have p˜-values near zero, and genes that increase with age have p˜-values near one. If there were no age-regulated genes (i.e., the true β_kj = 0 for every gene j), then the histogram of p˜-values would be flat (i.e., have a uniform distribution on the interval from zero to one). The x-axis shows the p˜-value, and the y-axis shows the number of genes with that p˜-value. There are 985 genes with a p-value less than 0.001.

We used a linear regression model to identify genes that showed a statistically significant change in expression with age (i.e., were age-regulated). We saw large differences in expression between tissue types and between the sexes. These differences were of similar magnitude for both young and old subjects, so that aging in one tissue (or sex) typically ran parallel to aging in the other (as seen in Figure 1A). Our linear regression model allowed for these parallel trends; reasons for arriving at such a model are given below. Mathematically, our model takes the form

Y_ij = β_0j + β_1j × Age_i + β_2j × Sex_i + β_3j × Tissue_i + ɛ_ij (1)

In equation 1, Y_ij is the base 2 logarithm of the expression level for gene j in sample i, Age_i is the age in years of the subject contributing sample i, Sex_i is one if sample i came from a male subject (and zero for female), Tissue_i is one if sample i was a medulla sample (and zero for cortex), and ɛ_ij is a random error term. The coefficients β_kj for k = 0, 1, 2, and 3 are values to be estimated from data. Our primary interest is in β_1j, which describes how quickly the expression of gene j changes with age, with β_1j = 0 for genes with no linear age relationship. In model 1 and others that we considered, the coefficients were estimated by least squares. The estimated values β̂_kj can differ from zero, even when the true coefficient is zero. We judged statistical significance through p-values, where a p-value near zero corresponds to a large absolute value |β̂_kj| unlikely to have arisen by chance. Such p-values do not distinguish genes that increase with age from those that decrease with age. We also use one-tailed p-values, written p˜_kj, taking values near zero for significantly decreasing trends and values near one for significantly increasing trends (see Materials and Methods). To make p-values comparable over genes, it is essential to use the same model for all genes.

Before settling on the common model 1, we considered an alternative that allowed a quadratic trend in age. The p˜-values for the quadratic coefficient (not shown) gave no reason to suspect that a curved relationship was needed. Similarly, a piecewise linear age relationship (with bends at ages 50 and 70) was not significantly better than a linear one. Large and statistically significant differences in expression were found between the two tissue types, and so tissue type was included in equation 1. Incorporating tissue type into the model reduces the estimate of the noise variance, leading to greater power for detecting an age relationship. Similarly, a small number of genes were found to have significantly different expression between the sexes. Seven genes were found to differ at p < 0.001 for both sex and age.

We performed a genome-wide scan for genes that changed expression with respect to age. Age-regulated genes can be identified by plotting p˜-values for age based on model 1 (Figure 1B). Genes that significantly decrease in expression with age appear in a peak on the left, while those whose expression increases with age appear in a peak on the right. Using model 1, we found 985 genes that change with respect to age (p < 0.001), which is considerably greater than would be expected by chance (approximately 45 from a total of 44,928 genes). Of these, 742 genes increase expression with age and 243 decrease expression with age (Table S3).

Most of our samples were taken from patients who underwent nephrectomy for various medical reasons (see Table S1). We evaluated whether pathology, medical history, or medication might be factors that confounded our aging analysis. For example, if old people tend to be hypertensive more often than young people, then genes that respond to hypertension may appear to be age-related. We identified 20 different medical and other factors that might potentially confound our study, including race, blood pressure, diabetes, and type and size of tumor present in the kidney (see Table S1). Fourteen factors (such as diabetes or proteinuria) affected fewer than ten patients, making it unlikely that they could account for age-related changes in gene expression in the 74 patients analyzed. Six factors occurred in ten or more patients (non-white race, two types of tumors, size of tumor, and hypertension), but it is unlikely that these affected our aging study, for the following reasons. First, with the exception of transitional cell carcinoma, none of the other factors were skewed with respect to age, and would not be expected to bias gene expression in an age-related fashion (Figure S1). Second, the two types of tumors (renal cell carcinoma and transitional cell carcinoma) were localized to an isolated region of the kidney.
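The per-gene least-squares fit and one-sided p˜-values described above can be sketched as follows. This is a minimal illustration in Python with simulated data, not the authors' actual code; all variable names are our own:

```python
import numpy as np
from scipy import stats

def age_regression(Y, age, sex, tissue):
    """Fit Y_ij = b0 + b1*Age_i + b2*Sex_i + b3*Tissue_i + e_ij by least
    squares, one fit per gene (column of Y). Returns the age slopes b1 and
    one-sided p-values: near 0 = decreasing with age, near 1 = increasing."""
    n = len(age)
    X = np.column_stack([np.ones(n), age, sex, tissue])
    beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)        # shape (4, n_genes)
    resid = Y - X @ beta
    df = n - X.shape[1]
    sigma2 = (resid ** 2).sum(axis=0) / df                   # per-gene noise variance
    se_age = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])  # SE of the age slope
    t_stat = beta[1] / se_age
    return beta[1], stats.t.cdf(t_stat, df)                  # slopes, one-sided p

# Simulated check: one gene that rises with age, one flat gene.
rng = np.random.default_rng(0)
n = 134
age = rng.uniform(27, 92, n)
sex = rng.integers(0, 2, n).astype(float)
tissue = rng.integers(0, 2, n).astype(float)
Y = np.column_stack([
    5 + 0.02 * age + rng.normal(0, 0.3, n),   # age-induced gene
    7 + rng.normal(0, 0.3, n),                # gene with no age trend
])
slopes, p_tilde = age_regression(Y, age, sex, tissue)
```

Repeating such a fit over all probe sets and histogramming the resulting one-sided p-values would reproduce the shape of Figure 1B, with peaks near zero and one marking age-repressed and age-induced genes.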
Our normal samples were obtained from the region of the kidney furthest from the carcinoma, were not directly contaminated with cancer cells, and appeared normal histologically (see Materials and Methods ). This procedure for obtaining kidney samples has been used previously to profile gene expression in normal kidney ( Higgins et al. 2004 ) and as a normal control in a kidney cancer study ( Higgins et al. 2003 ). Third, we used regression models to directly test whether our aging studies were affected by seven medical factors: renal cell carcinoma, transitional cell carcinoma, size of tumor, hypertension, systolic blood pressure, diastolic blood pressure, and diabetes mellitus. For renal cell carcinoma, we used a regression model predicting expression from age, sex, tissue type, and a zero/one variable indicating whether the sample came from a patient with renal cell carcinoma or not. The result gave a p -value for whether renal cell carcinoma affected each of the 44,928 genes present on the Affymetrix DNA chip. The smallest p -value was 0.00013. We would expect to see almost six such p- values by chance alone. This result indicates that the presence of renal cell carcinoma does not significantly affect the expression of any gene in the normal tissue from the same kidney, compared to normal tissues taken from kidneys without renal cell carcinoma. Next, we plotted the results using only the age-regulated genes, to investigate whether adjustments for renal cell carcinoma could affect their change in expression with respect to age. We used one regression model that included a renal cell carcinoma term and another model that did not have the term. We then selected genes that showed statistically significant ( p < 0.001) age regulation using either of these models. Renal cell carcinoma does not significantly affect the age slopes for these genes ( Figure S2 A), indicating that this medical factor has little effect on age-related gene expression. 
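The with-and-without-factor comparison described above can be sketched as follows, using simulated data; the zero/one factor column is hypothetical, not patient data:

```python
import numpy as np

def age_slopes(Y, age, extra_columns=()):
    """Per-gene least-squares age coefficient, with optional extra covariates
    (e.g. a zero/one medical-factor indicator)."""
    X = np.column_stack([np.ones(len(age)), age, *extra_columns])
    beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
    return beta[1]

rng = np.random.default_rng(1)
n = 134
age = rng.uniform(27, 92, n)
factor = rng.integers(0, 2, n).astype(float)   # hypothetical yes/no diagnosis
# 50 simulated age-regulated genes with a common true slope of 0.015.
Y = (4 + 0.015 * age)[:, None] + rng.normal(0, 0.3, (n, 50))

plain = age_slopes(Y, age)
adjusted = age_slopes(Y, age, [factor])
# When the factor is not confounded with age, the slopes barely move,
# which is the pattern Figure S2 shows for the real medical factors.
shift = np.abs(plain - adjusted).max()
```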
We repeated the regression analysis for six other factors that might confound our results (transitional cell carcinoma, size of tumor, hypertension, systolic blood pressure, diastolic blood pressure, and diabetes mellitus). The regression slopes changed very little with and without these factors, indicating that these factors do not strongly affect our analysis of age regulation ( Figure S2B–S2G ). Fourth, five of the samples were from kidneys that did not have tumors, and two of these were from donor kidneys used for transplantation that had no associated pathology at all. The expression profile from these five patients was similar to the profile from other samples used in our study. In summary, it is unlikely that these disease and medical factors have confounded our analysis of age-regulated changes in gene expression. Changes in the expression for some of the 985 age-regulated genes may directly reflect the aging process in the kidney; these genes would serve both as aging markers and provide clues about molecular mechanisms for aging in the kidney. Other changes may result from an age-related change in the relative proportion of cell types within the kidney, such as would result from increased infiltration of immune cells with age. Finally, the expression changes may reflect the downstream response of the kidney to an age-related process elsewhere, such as would result from age-related changes in blood pressure or vascular supply. Common Mechanisms of Aging in the Cortex and Medulla of the Kidney Since the cortex and medulla contain different cell types and have distinct functions, it was of interest to test whether they age similarly. It is possible that the pattern of degeneration in a particular cell type reflects those metabolic pathways that are used most heavily by that cell. 
For example, there could be deterioration in cell adhesion in glomerular epithelial cells that form part of the filtration barrier in the cortex, while there could be an age-related decline in ion traffic or water flow across the apical or basolateral membranes of tubular epithelial cells in the medulla. Alternatively, distinct cell types could show a common pattern of age-related decline involving pathways common to all cells, such as protein synthesis and mitochondrial function. This degeneration of core cellular processes would affect every cell function, including filtration by glomerular epithelial cells and water and solute reabsorption by tubular epithelial cells. To test whether age-related gene expression changes differ between cortex and medulla, we considered a model in which a term of the form β_4j × Tissue_i × Age_i was added to the model in equation 1. In such a model, the change in expression with age is linear within each tissue type, but the slope in the medulla is larger than that in the cortex by β_4j. Figure 2A shows the histogram of the p˜_4j-values. Genes showing tissue-specific slopes would appear in peaks on the left and right. The figure shows neither of these peaks, indicating there is no statistically significant difference in aging between the two tissue types.

Figure 2 Similar Age-Regulation in Cortex and Medulla (A) For every gene, we calculated a p˜-value that there is a Tissue_i × Age_i effect, and plotted the results in a histogram. Genes that show different age regulation in the cortex and the medulla would be contained in peaks on the left and right parts of the histogram. The figure shows that the number of genes that have different expression levels in the cortex and medulla is about the same as or less than would be expected by chance. The x-axis shows one-sided p˜-values for Tissue_i × Age_i, and the y-axis shows the number of genes with that p˜-value. There is a systematic under-representation of the edge regions compared to a random sample of uniform random variables because of correlations among the 44,928 p˜-values computed from 133 samples. (B) To show whether aging in the cortex and the medulla is similar, we selected age-regulated genes in the cortex and calculated the one-tailed p˜-value for age effects in the medulla. The histogram shows these selected p˜-values. The spike at the right shows genes that increase with age in the medulla. Those genes also increased with age in the cortex. (C) Shown is a scatterplot of all 684 genes that are age-regulated in either the medulla or the cortex (p < 0.001). The y-axis is the slope for the medulla of the expression change with respect to age, and the x-axis is the slope for the cortex. The solid line is the least squares line, with a slope of 0.58. The dotted line has a slope of one and passes through the origin. (D) Same as (C) but for 22 genes that are age-regulated in both the cortex and the medulla (p < 0.001).

To further investigate coordinate aging in the cortex and medulla, we searched for age-regulated genes in each of these tissues independently, and then tested whether age-regulated genes in one were also age-regulated in the other. Specifically, to find age-regulated genes in the cortex, we fit the model using the cortex samples only (model 2). To find age-regulated genes in the medulla, we fit the model using only the medulla samples (model 3). We found 634 genes in the cortex samples and 72 genes in the medulla samples that showed significant changes in expression with age (p < 0.001). Having identified age-regulated genes in the cortex, we next examined whether they were also age-regulated in the medulla. Figure 2B shows the p˜-values for change with age in the medulla samples, for those genes that are age-regulated (p < 0.001) in the cortex samples. If aging in the medulla were unrelated to aging in the cortex, we would expect to see a flat histogram. The actual histogram has a strong peak of genes on the right, indicating that genes significantly age-regulated in the cortex tend also to be significantly age-regulated in the medulla. Of the 634 genes that increased expression with age in the cortex, 22 also increased expression with age in the medulla, compared with the 0.6 genes expected by chance at p = 0.001. We obtained similar results when we took the converse approach, first selecting the 72 age-regulated genes in the medulla and then testing whether they were also age-regulated in the cortex (data not shown). Next, we compared the slope of expression with respect to age in the cortex to that in the medulla (Figure 2C). The results show a strong correlation between age coefficients in cortex and medulla. For the 684 genes age-regulated in at least one of the tissue types, the age coefficients had a correlation of r = 0.487. Models 2 and 3 allow us to investigate whether the cortex and medulla age at the same rate, as specified in model 1. For the 22 genes that are significantly age-regulated in both tissues, the age coefficients have a high correlation (r = 0.96), and the slopes themselves are numerically close (Figure 2D). We found a small mean absolute difference in slopes of 0.00185 (log2 expression per year), corresponding to only a 6% divergence in expression over 50 y. Given the strong similarities in the aging profiles of these two tissue types, we are able to increase the statistical power of our analysis by pooling the cortex and medulla datasets (resulting in model 1).

Increased Expression of Immune Genes in the Kidney in Old Age

We examined the list of 985 age-regulated genes, and immediately found evidence for increased expression of genes from immunocytes.
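The Tissue × Age interaction test used above can be sketched for a single gene as follows; the data are simulated and purely illustrative:

```python
import numpy as np
from scipy import stats

def interaction_p(y, age, tissue):
    """One-sided p-value for the beta4 * Tissue_i * Age_i term: a value near
    0 or 1 means the medulla age slope differs from the cortex slope."""
    n = len(y)
    X = np.column_stack([np.ones(n), age, tissue, tissue * age])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    df = n - X.shape[1]
    se = np.sqrt(resid @ resid / df * np.linalg.inv(X.T @ X)[3, 3])
    return stats.t.cdf(beta[3] / se, df)

rng = np.random.default_rng(2)
n = 134
age = rng.uniform(27, 92, n)
tissue = rng.integers(0, 2, n).astype(float)   # 1 = medulla, 0 = cortex
# Same aging rate in both tissues -> the interaction p-value is not extreme;
# over all genes this gives the flat histogram of Figure 2A.
y_same = 5 + 0.01 * age + 0.5 * tissue + rng.normal(0, 0.3, n)
# Faster aging in the medulla -> the p-value is pushed toward one.
y_diff = 5 + 0.01 * age + 0.03 * tissue * age + rng.normal(0, 0.3, n)
p_same = interaction_p(y_same, age, tissue)
p_diff = interaction_p(y_diff, age, tissue)
```

The chance-overlap arithmetic in the text follows the same logic: at a threshold of p = 0.001, only about 634 × 0.001 ≈ 0.6 of the cortex age-regulated genes would be expected to pass in the medulla by chance, against the 22 observed.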
Many of the 985 age-regulated genes are expressed specifically in B cells (e.g., immunoglobulin mu, kappa, and lambda), T cells (e.g., T cell receptor beta), or neutrophils (e.g., neutrophil cytosolic factors 1 and 4) (see Table S3 ). Nearly all of these immune genes increase expression with age. These results suggest that there are increased numbers of immune cells in the kidney in old age, resulting in an age-related increase in abundance in all genes that are expressed specifically in these cells. Immune function is known to decline with age, and the increased numbers of immunocytes in the kidney might compensate for decreased function in individual immune cells, either for immune surveillance or for responding to low levels of inflammation occurring normally. In addition to increased cell numbers, the apparent increase in expression of the immune genes could also be due to increased expression within the immune cells themselves. Immunohistochemical experiments using antibodies directed against markers specific for B cells, T cells, or neutrophils showed that the kidney samples contained a small proportion of immune cells (less than 1%) in sporadic clusters scattered throughout each section (data not shown). The number of immune cells varied greatly from section to section, and thus it was not possible to use immunohistochemistry to confirm a quantitative increase in the numbers of immune cells in the kidney with age. If the number of immune cells increases with age in our kidney samples, then any gene showing an age-related increase in expression might do so because it is expressed in immune cells and not because it is age-regulated in the kidney. As immune cells comprise only a small fraction of the kidney sample, age-regulated genes that are expressed at higher levels in the kidney than the blood are likely to be expressed in kidney cells themselves. 
To compare gene expression levels between the blood and the kidney, we purified RNA from whole blood from five new individuals, prepared labeled cRNA, and then hybridized it to Affymetrix gene chips in the same manner as before. We computed the log2 of the expression level for each gene, and then calculated an average expression level for the blood (five samples) and the kidney (134 samples). Of the 985 genes that change expression with age, 538 are expressed at higher levels in blood cells than in the kidney samples. Age-related changes in the RNA abundance of these genes may reflect either changes in the fraction of immune cells in the kidney or age-related changes in expression in kidney cells. The remaining 447 are expressed at higher levels in the kidney than in whole blood, and age regulation of these genes is likely to reflect expression changes in kidney cells themselves (Table S4). Of these 447 genes, 257 have increased expression levels in old age (age-induced) and 190 have decreased expression levels (age-repressed) (Figure 3).

Figure 3 Expression of the 447 Genes as a Function of Age Rows correspond to age-regulated genes, ordered from most highly induced to most highly repressed. Columns correspond to individual patients, ordered from youngest to oldest. The age of certain patients is shown for reference. The left panel shows data from cortex samples, and the right panel shows data from medulla samples. The first row shows the chronicity index (ChI; the morphological appearance and physiological state of the kidney), from blue (healthiest) to yellow (least healthy), as indicated in the scale bar. Key genes discussed in the text are marked. Scale shows log2 of the expression level (Exp). A navigable version of this figure can be found at http://cmgm.stanford.edu/~kimlab/aging_kidney/explorer.html .
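The blood-versus-kidney filtering step described above reduces to a comparison of mean log2 levels per gene; a toy sketch with made-up numbers:

```python
import numpy as np

def kidney_enriched(kidney_log2, blood_log2):
    """Boolean mask over genes (columns): True where mean log2 expression is
    higher in the kidney samples than in the whole-blood samples, i.e. where
    an age trend likely reflects kidney cells rather than immune infiltrate."""
    return kidney_log2.mean(axis=0) > blood_log2.mean(axis=0)

# Rows are samples, columns are genes (toy log2 values).
kidney = np.array([[8.0, 3.0, 6.5],
                   [7.5, 2.5, 6.0]])
blood = np.array([[4.0, 9.0, 6.2],
                  [4.5, 8.0, 6.0]])
mask = kidney_enriched(kidney, blood)   # gene 0 kidney-high, gene 1 blood-high
```

Applied to the real averages, a split of this kind is what separates the 447 kidney-enriched genes from the 538 blood-enriched genes among the 985 age-regulated genes.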
Age Regulation Compared to Developmental Regulation

Aging is thought to be caused by slow degeneration of the transcriptome (the entire set of genes expressed in a tissue), rather than by a qualitative change in expression such as occurs during tissue specification. As such, changes in gene expression associated with aging should be smaller than expression differences between different types of tissues. To confirm this idea, we compared the magnitude of gene expression differences due to differentiation (cortex versus medulla) to those due to aging. We used the same approach as before to evaluate differences in expression in cortex versus medulla on a genome-wide scale. For every gene, we calculated the p˜-value for differential expression in the cortex and the medulla, and plotted the results in a histogram (Figure 4). Genes contained in the peak on the right are more abundant in the medulla, whereas genes in the peak on the left are more abundant in the cortex. There were 23,322 genes that were differentially expressed between the cortex and medulla (p < 0.001), indicating that regulation of expression due to differentiation (between the cortex and medulla) is much greater than that related to aging. This result is consistent with the idea that aging results from a slow degeneration of a core transcriptome in the cortex and the medulla of the kidney.

Figure 4 Differential Expression in the Cortex and the Medulla For each gene, we calculated a p˜-value for expression differences in the cortex versus the medulla. Shown is a histogram of these p˜-values. Genes enriched in the cortex are in a peak on the left, and genes enriched in the medulla are in a peak on the right. The x-axis indicates the p˜-value, and the y-axis indicates the number of genes.

Majority of Age-Regulated Genes in the Kidney Are Expressed Broadly

To address whether different organs have distinct or common aging profiles, we analyzed whether the 447 age-regulated genes in the kidney were expressed specifically in the kidney or broadly in many tissues. If the kidney has its own specific pattern of aging, one might expect the set of 447 age-regulated genes to be enriched for genes expressed specifically in the kidney, such as genes with direct roles in forming the filtration barrier or in regulating ion or water reabsorption. If there is a common profile for aging shared among tissues, one might expect most of the 447 age-regulated genes to be expressed in many tissues. We determined the level of expression of the age-regulated genes in different tissues using data from a previous study reporting a genome-wide profile of gene expression in 26 different human tissues with Affymetrix gene arrays (Su et al. 2002). Of the 447 age-regulated kidney genes, 227 are represented in the previous work. Nearly all of these have general, rather than kidney-specific, expression patterns; specifically, we calculated the median expression level across all tissues and compared this to the average expression level from the kidney samples. We found that only seven of the 227 age-regulated genes were enriched in the kidney more than 2-fold compared to the median level from all tissues (Figure 5). The observation that nearly all of these 227 age-regulated genes are expressed in many tissues suggests that they act in common cellular pathways. Altered expression of these genes in old age may weaken these common functions, subsequently leading to physiological decline of kidney-specific functions.

Figure 5 Developmental Profile of the Age-Regulated Genes Shown are the log2 of the expression levels for 227 age-regulated genes in 26 human tissues, using data from Su et al. (2002).
Rows correspond to genes, columns correspond to human tissues. a, kidney; b, cerebellum; c, whole brain; d, cerebral cortex; e, caudate nucleus; f, amygdala; g, thalamus; h, corpus callosum; i, spinal cord; j, whole blood; k, testis; l, pancreas; m, placenta; n, pituitary gland; o, thyroid gland; p, prostate; q, ovary; r, uterus; s, salivary gland; t, trachea; u, lung; v, thymus; w, spleen; x, adrenal gland; y, liver; z, heart. Scale shows log 2 of the expression level. A navigable version of this figure can be found at http://cmgm.stanford.edu/~kimlab/aging_kidney/explorer.html . Molecular Markers for Physiological Aging The expression levels of these 447 age-regulated genes constitute a molecular profile of aging, and we can examine the expression profile of individual patients to observe how they compare to the average for their age group. Older individuals tended to express age-induced genes at higher levels and age-repressed genes at lower levels than younger individuals. However, certain individuals had unusual expression profiles, in which genes were expressed at levels more typical of a different age group. For example, patient 81 was 78 y old but had an expression profile as though she were older (see Figure 3 ). Her kidney showed very high levels of age-induced genes and very low levels of age-repressed genes. Patient 95 was 81 y old, with an expression profile similar to patients 30 or 40 y younger. Do the molecular gene expression profiles correlate with the physiological ages of the kidney samples? That is, does patient 81 have a kidney showing excessive age-related damage and does patient 95 have a kidney with unusually good health? To answer these questions, we determined the morphological and physiological states of the kidneys from each of the patients by examining histological stains. 
As people grow older, there is a general decline in the morphological appearance of the kidney: (1) the glomeruli lose their structure and their capillaries are replaced with fibrous tissue (glomerular sclerosis), (2) the tubules collapse and atrophy, and the interstitial space between them widens and scars (tubular atrophy/interstitial fibrosis), and (3) there is a thickening of the innermost layer of the arteriole wall due to the accumulation of hyaline material (arterial intimal hyalinosis). We gave three scores to each kidney section corresponding to the appearance of the glomeruli, the tubules, and the arterioles. Scores ranged from zero for normal appearance for youthful patients to four for an advanced state of glomerular sclerosis, tubular atrophy/interstitial fibrosis, or arterial intimal hyalinosis (see Table S1 ). We then added the glomerular, tubular, and arteriolar scores together to form a combined score ranging from zero (best) to 12 (worst), termed the chronicity index. The chronicity index is a quantitative estimate of the morphological appearance and physiological state of the kidney for each of the patients (see Table S1 ). Figure 6 shows an example of a kidney in good condition from patient 40 (29 y old with a chronicity score of zero) and a kidney showing age-related morphological decline from patient 62 (84 y old with a chronicity score of ten). As expected, the chronicity index shows a strong positive correlation with age showing that morphology and function tend to be worse for older subjects ( Figure 7 ). Figure 6 Chronicity Index of Kidney Samples Histology from patient 40 is shown on the left, demonstrating a normal glomerulus (G), tubules and interstitial space (T), and arteriole (A), respectively (chronicity score of zero). Histology from patient 62 is shown on the right, demonstrating glomerulosclerosis (g), tubular atrophy and interstitial fibrosis (t), and arterial intimal hyalinosis (a), respectively (chronicity score of ten). 
Hematoxylin and eosin staining of paraffin-embedded sections.

Figure 7 Chronicity Index Increases with Age Shown is the chronicity index versus age for most of the kidney samples used in this study. The line shows the least-squares fit through the data points.

We then compared the chronicity index to the gene expression profiles of the 447 age-regulated genes as a function of age (see Figure 3). In general, we found that the gene expression profiles correlated well with the chronicity index. Patients with expression profiles normally associated with much older people also had a high chronicity index; for example, the expression profile of patient 81 was similar to that of patients who were much older, and her chronicity index was also unusually high for her age. Conversely, patients with expression profiles normally associated with younger people tended to have a low chronicity index for their age, such as patient 95. Although the 447 age-regulated genes were selected solely on the basis of their change with chronological age, these results indicate that their expression profiles can identify patients whose kidneys exhibit unusually good health or unusually advanced degeneration for their age. Thus, the 447 age-regulated genes can be used as molecular markers for physiological decline in the kidney during aging.

Age-Regulated Genes in the Kidney

Some of the 447 age-regulated genes may be involved in either causing or preventing aging in the kidney, whereas expression changes for others may be a consequence of age-related cellular changes. A candidate from our list that might promote age-related decline is mortalin-2 (which encodes Heat Shock Protein 70), which decreases expression in the kidney in old age. Heat shock proteins act as protein chaperones, and likely function to counteract cell senescence by alleviating the accumulation of damaged proteins in old cells. In human fibroblasts, overexpression of mortalin-2 extends lifespan in vitro (Kaul et al. 2003). In the nematode C. elegans, overexpression of mortalin or HSP-16 (a related heat shock protein) extends longevity, and several genes encoding heat shock proteins decrease expression in old age (Lund et al. 2002). Reduced expression of mortalin-2 in old human kidneys could increase the accumulation of denatured proteins and thereby reduce general cellular function. A gene from our list that might function to prevent aging is the gene encoding the insulin-like growth factor receptor, which decreases expression in old age. Loss-of-function mutations in this gene result in extended longevity in worms, flies, and mice (Tissenbaum and Guarente 2002). This observation suggests that decreased expression of this gene during normal aging might help prolong the functional lifespan of human kidneys.

We examined the list of 447 age-regulated genes for functional groups showing a consistent change with age. One group includes genes involved in the formation of the extracellular matrix, which show a consistent increase in expression in old age. Seven age-regulated genes encode proteins known to play key roles in maintaining epithelial polarity (three types of claudins, two cadherins, occludin, and a cell adhesion molecule), all but one of which increase expression in old age (see Table S4). Forty-nine age-regulated genes encode protein components of the extracellular matrix, all but four of which increase expression in old age. In the kidney, the extracellular matrix could play a key role in governing the filtration of blood via the basement membrane, a capacity that declines with age. The observation that genes involved in forming the extracellular matrix increase expression in the kidney with age may be directly relevant to the age-related decline in glomerular filtration rate. Another functional group is a set of 11 genes encoding ribosomal proteins, all of which increase expression with age.
Protein synthesis rates are known to decline as animals grow older, and increased expression of these ribosomal protein genes may serve to offset this. Changes in the expression of regulatory genes with age may have particularly strong effects on kidney metabolism and function, since these changes are likely to initiate cascades of changes in downstream genes. We examined our list of 447 age-regulated genes for those that are likely to function as regulatory genes. Of the 447 age-regulated genes, 15 encode transcription factors and 51 encode proteins that are part of signaling pathways. Age-Regulated Genes Enriched in the Glomeruli As filtration of the blood takes place in the glomerulus, age-regulated genes that are enriched in the glomerulus may be especially important for understanding how kidney function declines with age. We identified genes enriched in the glomerulus using data from a previous study, in which cDNA microarrays were used to compare expression levels in the glomeruli relative to the rest of the kidney ( Higgins et al. 2004 ). Of the 447 genes identified in our study, 213 were represented on the cDNA microarrays in the previous experiment, and 19 were enriched greater than 2-fold in the glomeruli relative to total kidney ( Table S5 ). These included four genes that encode proteins involved in the formation of the extracellular matrix (a type 5 collagen, alpha-2 macroglobulin, and two tissue inhibitors of metalloproteinase), all of which increase expression with age. Discussion Old age is associated with a functional decline in a myriad of molecular and cellular processes. To gain a global perspective of the diverse pathways that change with age, we performed a whole-genome analysis of gene expression as a function of age for kidney samples from 74 patients ranging in age from 27 to 92 y.
Many factors affect gene expression in addition to age, including variability between individuals, between different tissues within the kidney, and between sexes. The large number of samples in our dataset provided good power for identifying age-regulated genes in noisy data despite small changes in expression, and allowed us to use a statistical linear regression model to identify 985 genes that change expression with age. The results from this work show that transcriptional differences between young and old individuals involve an accumulation of small changes in expression from many genes, rather than resulting from large expression changes in a small number of genes. This observation suggests that functional decline in old age is not the result of the complete failure of a small number of cellular processes. Rather, it is the slight weakening of many pathways that cumulatively causes a significant decrease in cell function. Studying aging by analyzing one pathway at a time is difficult, because any single pathway might show only a small change with respect to age and might contribute only a small amount to the overall functional decline in old age. By contrast, functional genomics is a powerful approach to study aging, because many genes can be scanned in parallel for small changes in expression. Although the cortex and medulla are composed of different types of cells and perform different physiological functions, our results suggest that they share a common mechanism for aging. Previous experiments have characterized changes in expression for human fibroblasts, muscle, and the retina with age ( Ly et al. 2000 ; Yoshida et al. 2002 ; Welle et al. 2003 , 2004 ). We plotted the expression levels of the 985 age-regulated genes found in this work against the dataset of aging in muscle ( Welle et al. 2003 ), and found that these genes did not show much age regulation in muscle.
Specifically, the Pearson correlation (r) of the regression slopes for these 985 genes was only 0.085 between the kidney and muscle aging experiments and hence accounts for only 0.0072 (0.72%) of the variance between these two tissues ( Figure S3 ). It is unclear whether this amount of correlation is biologically relevant. The small sample size used in the study of aging in human muscles might have limited our ability to detect similarities in aging in the two organs. It will be important to use a larger sample size of muscle tissues in future experiments to discern common patterns of age regulation in the kidney and the muscle with higher resolution. Aging has been best studied in model organisms, and it is thus of great interest to discern whether aging in these species is similar to the aging process in humans. Previous studies have reported gene expression changes associated with old age for worms, flies, and several tissues from mice ( Lee et al. 1999 , 2000 ; Hill et al. 2000 ; Zou et al. 2000 ; Lund et al. 2002 ; Pletcher et al. 2002 ; Murphy et al. 2003 ). We found no correlation between age regulation in human kidney and age regulation in either worms or flies ( Figure S4 ). Although our analysis did not show evidence for evolutionary conservation of age regulation, a previous study suggested that there is a small overlap in age-regulated gene expression between flies and worms ( McCarroll et al. 2004 ). However, most of the similarities occurred in young or middle-aged animals, rather than old animals. There is thus little evidence for evolutionary conservation of changes in gene expression in old age, emphasizing the need to elucidate mechanisms of aging using human subjects themselves and not model organisms. Many of the age-regulated genes in the kidney may change in response to declining kidney function.
Functional decline of the kidney with age varies between individuals, and these genes could be used as diagnostic markers to evaluate levels of kidney function in older patients. This could provide invaluable information in understanding the clinical course of kidney aging and the suitability of using older kidneys in organ transplants. Other genes may be directly regulated by aging per se, and these genes could pinpoint mechanisms that play key roles in the aging process itself. Materials and Methods Samples Normal kidney samples were obtained either from biopsies of donor kidneys for transplantation or from nephrectomy patients (with informed consent) in which the pathology was localized and did not involve the part of the kidney sampled. Key factors from the medical record for each patient used in this study are listed in Table S1 , and include sex, race, age, blood pressure, pathology, medications, serum creatinine, and urinary protein concentrations. Kidney tissue was harvested meticulously with the intention of gathering normal tissue uninvolved in the tumor. Tissue was taken from a point as far away from the tumor as possible. Any samples that showed evidence of pathological involvement or in which there was only tissue in close proximity to the tumor were discarded. Kidney sections were immediately frozen on dry ice and stored at −80 °C until use. The same harvesting sources and techniques have been used previously to profile expression in normal kidney ( Higgins et al. 2004 ) and to provide normal controls in a study on kidney cancer ( Higgins et al. 2003 ). Histology Frozen tissues were placed in cryomolds, embedded in Cryo Tissue Tek O.C.T. Compound (Sakura Finetek, Torrance, California, United States) and cut into 4-μm sections (Leica Microsystems, Wetzlar, Germany). Sections were stained with hematoxylin and eosin, and then histologically evaluated to exclude samples showing abnormal histology.
Histology slides were also marked into two main functional sections, the cortex and medulla, to aid accurate dissection of these two areas. We reviewed radiological findings for all tumors and histology for all slides. We excluded any cases in which radiological imaging, gross examination at the time of resection, or histological review of the removed tissue indicated that there might be tumor involvement of the normal areas. Cases with incomplete or unclear medical records were excluded from this study. RNA isolation Frozen kidney tissue samples were dissected into cortex and medulla sections. Portions were weighed (0.05–0.75 g), cut into small pieces on dry ice, and then placed in 1 ml of TRIzol Reagent (Invitrogen, Carlsbad, California, United States) per 50–100 mg of tissue. The tissue was homogenized using a PowerGen700 homogenizer (Fisher Scientific, Pittsburgh, Pennsylvania, United States), and the total RNA was isolated according to the TRIzol Reagent protocol. High-density oligonucleotide arrays A standard protocol designed by Affymetrix (Santa Clara, California, United States) for their HG-U133A and HG-U133B high-density oligonucleotide arrays was slightly modified by the Stanford Genome Technology Center (Stanford, California, United States), and all samples were processed in their facility (see Protocol S1 ). Eight micrograms of total RNA was used to synthesize cRNA for each sample, and 15 μg of cRNA was hybridized to each DNA chip. The samples were processed in random order with respect to tissue type and age. Microarray data normalization and analysis Using the dChip program ( Zhong et al. 2003 ), microarray data (.cel files) were normalized according to the stable invariant set, and gene expression values were calculated using a perfect match model. All arrays passed the quality controls set by dChip.
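The per-gene regression and the one-/two-tailed p-value machinery described later under "Regression models and p-values" can be illustrated in miniature. The sketch below is not the authors' code: the data are simulated, the slope of 0.02 log2 units per year is an invented example, and a normal null stands in for the t distribution for brevity.

```python
import math
import random

def norm_sf(z):
    # upper-tail probability of a standard normal (stand-in for the t null)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def age_slope_and_pvalues(log_expr, age):
    """Least-squares fit of log2 expression = a + b*age; returns the age
    slope b, the one-tailed p~ (chance of a slope this large under the
    null) and the two-tailed p."""
    n = len(age)
    ma = sum(age) / n
    me = sum(log_expr) / n
    sxx = sum((a - ma) ** 2 for a in age)
    sxy = sum((a - ma) * (e - me) for a, e in zip(age, log_expr))
    b = sxy / sxx
    resid = [e - (me + b * (a - ma)) for a, e in zip(age, log_expr)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    z = b / se
    p_one = norm_sf(z)                     # p~: significance of an increase
    p_two = 2.0 * min(p_one, 1.0 - p_one)  # numerically, p = 2*min(p~, 1-p~)
    return b, p_one, p_two

rng = random.Random(0)
age = [rng.uniform(27, 92) for _ in range(133)]
# simulate one gene rising 0.02 log2 units per year, with noise
expr = [8.0 + 0.02 * a + rng.gauss(0, 0.5) for a in age]
slope, p_one, p_two = age_slope_and_pvalues(expr, age)

# back-of-envelope false-positive count at the paper's 0.001 threshold
n_genes, threshold = 44_928, 0.001
expected_false_positives = n_genes * threshold  # ~44.9 genes by chance alone
```

The identity p = 2·min(p̃, 1 − p̃) holds for any null distribution that is symmetric about zero, which is why one-tailed and two-tailed significance can be converted freely.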
All of the Affymetrix data are available at the Stanford Microarray Database ( http://genome-www5.stanford.edu/ ) and at the Web site http://cmgm.stanford.edu/~kimlab/aging_kidney/ . The Affymetrix probe IDs and the locus link IDs for the genes discussed in the paper are in Tables S3–S5 . The accession numbers for all genes on the Affymetrix arrays can be obtained from the Stanford Microarray Database. Regression models and p-values The p-values we use are based on t-tests from standard linear regression theory. Under the hypothesis $H_0$ that $\beta_{kj} = 0$, the estimated coefficient $\hat\beta_{kj}$ is a random variable. The least squares value is a particular number, $\hat\beta^{LS}_{kj}$. The p-value measures the extent to which the least squares value is surprisingly large assuming $H_0$ holds. Specifically, the two-tailed p-value is $p_{kj} = \Pr_{H_0}(|\hat\beta_{kj}| \ge |\hat\beta^{LS}_{kj}|)$, and the one-tailed p-value we use is $\tilde{p}_{kj} = \Pr_{H_0}(\hat\beta_{kj} \ge \hat\beta^{LS}_{kj})$. Sometimes $\tilde{p}_{kj}$ is employed to test $H_0$ against an alternative hypothesis of $\beta_{kj} < 0$. We use it because it distinguishes between significant increasing and significant decreasing coefficients. Under $H_0$, the distribution of $p$ is $U(0,1)$, and so is that of $\tilde{p}$. Numerically, the equation $p_{kj} = 2\min(\tilde{p}_{kj}, 1 - \tilde{p}_{kj})$ holds. The t-test is derived under an assumption of normally distributed errors. The data showed estimated errors with heavier than normal tails. The t-test is known to be robust against heavy-tailed errors. A linear regression is more appropriate for these data than is an analysis of variance (ANOVA) on age groups, because the latter is aimed at piecewise constant expression patterns, and it is not plausible that expression should change sharply at a given age. A genome-wide ANOVA (data not shown) did, however, find a similar group of age-related genes. Unlike ANOVA, regression summarizes the age effect in one coefficient. This is advantageous for interpretation and for statistical power when there is little nonlinearity. The decision of whether to include a variable in model 1 was based on the collection of p-values for all the genes. If the histogram of $\tilde{p}$ values differed sharply from uniform, and if the smallest p-values were small compared to 1/44,928, then the coefficient was included. Gene lists were made using a threshold p-value of 0.001. Such a gene list can be expected to have about 44 genes in it by chance, even if all of the coefficients are really zero. Thus, of the 985 age-related genes, it is plausible that about 44 of them are false positives. We have chosen to work with a fixed significance level, instead of attempting to fix the false discovery rate, because our test statistics are strongly correlated. We were concerned that intra-subject correlations might have affected our results. For each of 59 subjects with both cortex and medulla samples, we subtracted log 2 expression in the cortex from that in the medulla, and fit a regression of the difference versus age and sex. Such an analysis removes intra-subject correlations. There was again no evidence of genes aging differently in the two tissue types (data not shown). Supporting Information Figure S1 Age Distribution of Medical and Related Factors Each row shows the presence of a medical or related factor. Age of patients is shown on the y-axis. Only transitional cell carcinoma showed a strong age bias. We have identified over 20 different factors that might potentially confound our study on aging, such as race, blood pressure, diabetes, and type and size of tumor adjacent to the normal section (see Table S1 ). (221 KB PDF). Figure S2 Medical Factors Do Not Affect Age Regulation We used regression models to directly test whether our aging studies were affected by seven medical factors: renal cell carcinoma, transitional cell carcinoma, size of tumor, hypertension, systolic blood pressure, diastolic blood pressure, or diabetes mellitus.
Scatterplots show age-related slopes using a regression model that includes a term for the medical factor compared to slopes from a regression model that does not include that medical factor. (A) Effect of renal cell carcinoma (RCC) on age-related expression. We selected genes that showed statistically significant (p < 0.001) age regulation using either a model with a renal cell carcinoma term or without a renal cell carcinoma term. The vertical and horizontal axes show the slope from a model with and without the renal cell carcinoma term, respectively. The slopes change very little with and without the renal cell carcinoma term. As one might expect, many of the genes that are significant at the 0.001 level are just barely so. There were 866 genes significant in both models, 119 significant only when renal cell carcinoma was not in the model, and 86 significant only when renal cell carcinoma was in the model. The overall picture of age relationship changes very little whether a term for renal cell carcinoma is included in the model or not. We also used a regression model predicting expression from age, sex, tissue type, and a zero/one variable indicating whether the sample came from a patient with renal cell carcinoma or not. The result gave a p-value for whether renal cell carcinoma affected each of the 44,928 genes present on the Affymetrix DNA chip. The smallest p-value we saw was 0.00013. We would expect to see almost six such p-values by chance alone. This result indicates that the presence of renal cell carcinoma does not significantly affect the expression of any gene in the normal tissue from the same kidney, compared to normal tissues taken from kidneys without renal cell carcinoma. (B) Effect of transitional cell carcinoma (TCC) on age-related expression. Scatterplot showing age-related slopes with and without a term for transitional cell carcinoma. Transitional cell carcinoma was present in 13 patients, all of whom were old.
Thus if transitional cell carcinoma affected gene expression in adjacent normal tissue, then it might bias our results on aging. (B) shows data for presence or absence of transitional cell carcinoma in the model. The gene with the smallest p-value for transitional cell carcinoma had a p-value of 8.8 × 10⁻⁶. The expected number of p-values this small in 44,928 trials is 0.4, so the presence of this gene is not particularly compelling evidence that transitional cell carcinoma biased our results. The histogram of p-values looks uniform, as we would expect if transitional cell carcinoma were very weakly related, or not related, to expression changes with age (data not shown). We have not used false discovery rate techniques for this problem, because the age coefficients for different genes are far from independent. The scatterplot shows that transitional cell carcinoma does not affect age-related slopes very much. (C) Tumor size does not affect age regulation. (D) Hypertension (HTN) does not affect age regulation. (E) Systolic blood pressure (SBP) does not affect age regulation. (F) Diastolic blood pressure (DBP) does not affect age regulation. (G) Diabetes mellitus (DM) does not affect age regulation. (307 KB JPG). Figure S3 Comparison of Age Regulation of Gene Expression between Kidney and Muscle Tissue in Humans We obtained the muscle dataset from the GEO database ( Welle et al. 2003 ). To compare age regulation in the kidney and muscle, we queried whether the 447 genes identified as age-regulated in the kidney were similarly age-regulated in the muscle. We determined regression coefficients for the 447 genes in the muscle dataset using multiple regression, in a manner similar to the kidney dataset. For each of the 447 genes, we plotted regression slope in kidney against regression slope in muscle, and found an overall weak Pearson correlation of 0.085 (p < 0.004).
A Pearson correlation value of 0.085 implies that 0.72% of the variance in the muscle regression coefficients is due to variance in the associated kidney regression coefficients. We note that the muscle dataset had a small sample size (n = 16), which may not be large enough to sufficiently detect similarity in age regulation with the kidney. (59 KB XLS). Figure S4 Comparison of Age Regulation of Gene Expression between Humans, Flies, and Worms Reveals No Correlation We compared patterns of gene expression in the aging time course data from C. elegans ( Lund et al. 2002 ) and D. melanogaster ( Pletcher et al. 2002 ) to those in the data for the human kidney. We identified orthologous genes using the criterion that they exhibit best reciprocal BLAST hits between species. Beginning with the set of 447 age-regulated genes in the human kidney, we identified 119 worm and 142 fly orthologs. From the set of 167 age-regulated genes in the worm, we identified 60 human orthologs. From 1,264 age-regulated genes in the fly, we identified 465 human orthologs. (A) Regression slopes of age-regulated genes from human kidney and D. melanogaster . Open triangles denote age-regulated genes in humans and their orthologs in flies. Open circles denote age-regulated genes in flies and their orthologs in humans. The scatterplot shows the regression slopes from the human kidney and the fly aging datasets ( Pletcher et al. 2002 ). Specifically, the age-regulated human genes paired with fly orthologs show a Pearson correlation r = −0.05 (p = 0.27) for human and fly, while the age-regulated fly genes paired with human orthologs show a Pearson correlation r = −0.05 (p = 0.12). (B) Regression slopes of age-regulated genes from human kidney and C. elegans . Open circles denote age-regulated genes in humans and their orthologs in worms. Open triangles denote age-regulated genes in worms and their orthologs in humans.
The scatterplot shows the regression slopes from the human kidney and C. elegans aging datasets ( Lund et al. 2002 ). The age-regulated human genes paired with worm orthologs show a Pearson correlation r = 0.05 (p = 0.54). The age-regulated worm genes paired with human orthologs show a Pearson correlation r = −0.01 (p = 0.08). These results show no evidence for overlap in the aging process between different species. (509 KB PDF). Protocol S1 Affymetrix HG-U133 Set Gene Chip Protocol (40 KB DOC). Table S1 Medical History of Patients (33 KB XLS). Table S2 Patients Recruited by Age Group (13 KB XLS). Table S3 Age-Related Genes (p < 0.001) Arranged by p-Value (135 KB XLS). Table S4 Age-Related Genes (p < 0.001) Excluding Those with Higher Expression Levels in Blood than in Kidney, Arranged by Fold Change (75 KB XLS). Table S5 Age-Related Genes by Location within the Kidney (53 KB XLS).
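The cross-species comparison in Figure S4 rests on the best reciprocal BLAST hit criterion for calling orthologs. A minimal sketch of that criterion follows; the gene IDs and scores are invented for illustration, and real input would come from parsed BLAST output rather than hand-written dictionaries.

```python
def reciprocal_best_hits(hits_ab, hits_ba):
    """Orthologs as best reciprocal hits: gene a's top-scoring hit in
    species B is b, and b's top-scoring hit back in species A is a.
    Each hits_xy[query] is a list of (target, score) pairs."""
    best_ab = {a: max(hs, key=lambda h: h[1])[0] for a, hs in hits_ab.items() if hs}
    best_ba = {b: max(hs, key=lambda h: h[1])[0] for b, hs in hits_ba.items() if hs}
    return {(a, b) for a, b in best_ab.items() if best_ba.get(b) == a}

# toy example with hypothetical human (hsa*) and worm (cel*) gene IDs
hits_ab = {"hsa1": [("cel1", 90), ("cel2", 40)], "hsa2": [("cel2", 70)]}
hits_ba = {"cel1": [("hsa1", 88)], "cel2": [("hsa1", 65), ("hsa2", 60)]}
pairs = reciprocal_best_hits(hits_ab, hits_ba)
```

Here hsa2 and cel2 are each other's hits, but cel2 scores higher against hsa1, so only the (hsa1, cel1) pair survives the reciprocity requirement.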
Yeast Use Dual Gain Controls to Amplify Protein Processing

Machinery within the endoplasmic reticulum (ER) of eukaryotic cells modifies, folds, and assembles proteins as needed to suit their functions at or past the cell membrane. When this system is hampered or overtaxed, a buildup of unfolded or misfolded proteins within the ER triggers the “unfolded protein response,” which alerts the nucleus to boost production of protein-processing machinery that helps proteins fold. This system for adjusting manufacturing capacity is similar in organisms from yeast to human. If the unfolded protein response cannot be turned on when needed, cells die. Prior study suggested that, in yeast cells, the response to unfolded protein buildup is binary: either off or on. In this month's PLoS Biology , biochemist Peter Walter and his colleagues from the University of California at San Francisco demonstrate two new signaling mechanisms that appear to give the yeast unfolded protein response the means for amplitude adjustment. In yeast, a transcription regulator called Hac1p activates the genes required for the unfolded protein response. A cytoplasmic pool of HAC1 messenger RNA waits in readiness for ER emergencies, each molecule locked against translation into protein by intronic RNA sequences that interrupt mRNA translation. Accumulation of unfolded or misfolded proteins in the ER releases this translational block, triggering production of Hac1p and activating the unfolded protein response. Previously, this binary HAC1 signal was the only known regulator of the unfolded protein response in yeast. In two complementary papers, Walter and colleagues now present evidence that the repertoire includes new factors and regulators that amplify the unfolded protein response under conditions of ER stress. Wiring the unfolded protein response Leber et al. stressed yeast cells by exposing them to substances that cause protein misfolding or buildup in the ER.
In response, the cells ratcheted up transcription levels of HAC1 severalfold. Primed with high levels of HAC1 mRNA, the cells were ready to produce a bumper crop of Hac1p and to induce a supercharged unfolded protein response. In the accompanying paper by Patil et al., the authors show that Hac1p is not working alone. A second regulator of transcription called Gcn4p is required to activate most of the genes associated with the unfolded protein response. The regulatory elements of these genes now appear far more diverse than previously appreciated. The authors propose that cells adjust the levels of Hac1p and Gcn4p to drive a continuum of transcriptional programs equipped to deal with incoming challenges. Together, these two papers demonstrate that the control of the unfolded protein response is far from a simple on/off mechanism, but exhibits complex fine-tuning through a network of signaling pathways that interpret and respond to the cell's needs. (Figure: Control elements of UPR target genes)
Initial development and testing of a novel foam-based pressure sensor for wearable sensing

Background This paper provides an overview of initial research conducted in the development of pressure-sensitive foam and its application in wearable sensing. The foam sensor is composed of polypyrrole-coated polyurethane foam, which exhibits a piezo-resistive reaction when exposed to electrical current. The use of this polymer-coated foam is attractive for wearable sensing due to the sensor's retention of desirable mechanical properties similar to those exhibited by textile structures. Methods The development of the foam sensor is described, as well as the development of a prototype sensing garment with sensors in several areas on the torso to measure breathing, shoulder movement, neck movement, and scapula pressure. Sensor properties were characterized, and data from pilot tests was examined visually. Results The foam exhibits a positive linear conductance response to increased pressure. Torso tests show that it responds in a predictable and measurable manner to breathing, shoulder movement, neck movement, and scapula pressure. Conclusion The polypyrrole foam shows considerable promise as a sensor for medical, wearable, and ubiquitous computing applications. Further investigation of the foam's consistency of response, durability over time, and specificity of response is necessary.

Background We live in a world of information, and emerging technologies compel us to look for new ways to collect, process, and distribute information. Today we are faced with a significant information overload problem as users struggle to locate the right information in the right way at the right time.
In response, a number of researchers have suggested that adaptive information technologies may hold the key to the next generation of ubiquitous information systems, systems that automatically adapt to changes in their environment and usage in order to deliver users a more intelligent, proactive and personalized information service. In this paper we provide an overview of initial research conducted as part of the Adaptive Information Cluster, a multi-disciplinary research cluster that brings together researchers in areas such as wearable computing, sensor technologies, information retrieval, and artificial intelligence, with a view to developing the next generation of intelligent, sensor-based wearable computing technologies. Sensing in the wearable environment is crucial for many applications, but existing sensor technologies pose significant wearability problems when integrated into the user's peri-personal space. One of the most compelling needs for wearable technology is in the continuous monitoring of the human body, be that for medical monitoring or to inform the operation of a context-aware computerized application. While many technologies that are often made wearable (such as music players or telephones) function nearly as well (or sometimes better) as portable devices, almost all continuous body-sensing technologies must be worn to be effective. However, because of their ubiquitous, constant-wear nature, such technologies must prioritise the effects of the technology on the user's physical comfort as well as social comfort. Traditional sensing technologies are rarely designed for continuous, on-body use: those that require skin contact are generally designed to be used in a hospital or doctor's office, and those that do not are generally designed for use in stationary devices. Consequently, the achievement of certain design goals for existing sensors (such as durability) is ultimately detrimental to the user's comfort when applied to the wearable environment.
For example, durability often equals stiffness, which results in a solid device that can cause discomfort by localizing pressure. Textile-based sensors offer a compromise solution to this problem, by retaining the characteristics associated with comfort and wearability (properties of standard, non-electronic garments). Many textile-based sensors are actually sensing materials used to coat a textile [ 1 ] or sensing materials formed into fibres and woven or knitted into a textile structure [ 2 ]. The properties sought by textile-based sensors can include flexibility, surface area, washability, stretch, and hand (texture of textile). However, they must also include the properties required for the electronic device, including durability, power consumption, and ease of connection into a circuit. Metallic components, designed to function in rigid environments, often do not satisfy these needs. For instance, a metallic element in a high-flex environment (such as a garment) will soon break. However, the recently discovered conducting electroactive polymers (CEPs) [ 3 ] offer a potential solution to this problem. CEPs such as polypyrrole (PPy), polyaniline and polythiophene constitute a class of polymeric materials which are inherently able to conduct charge through their polymeric structure. They can be reversibly switched from the doped conducting state to the undoped insulating state upon chemical or electrochemical treatment. In particular, polypyrrole has attracted much interest because it is easily prepared as films, powders and composites, has a relatively high conductivity and is relatively stable in the conducting state. However, once the black precipitate of PPy has formed it is insoluble in all known solvents and is non-processable. To overcome this, PPy can be simultaneously polymerised and deposited onto the substrate [ 3 ].
The result is that the substrate is covered with a thin layer of PPy, rendering the whole object conducting without compromising the mechanical properties of the substrate. Methods Sensor Development In previous work [ 4 ], a novel polymer synthesis methodology was developed to create a textile-like structure capable of sensing changes in planar or perpendicular pressure, by coating an open-cell polyurethane (PU) foam with a CEP (polypyrrole). The method used for sensor fabrication is described in [ 4 ]. The method involved soaking the substrate, the PU foam, in an aqueous monomer and dopant solution. An aqueous oxidant solution was then introduced into the reaction vessel to initiate polymerisation. This led to the precipitation of doped PPy, which subsequently deposited onto the PU substrate. Sensor Characterization Characterisation of the PPy-coated PU foam was carried out using a number of methods as described in [ 4 ]. It was found that increasing the weight placed upon the PPy-PU foam or shortening the overall length of the foam resulted in a linear, proportional decrease in the electrical resistance measured across the foam. Results from tests carried out using the Instron™ tensile testing instrument, courtesy of the University of Bath, England, showed that the stress-strain profile of the unadulterated PU foam sample and that of the PPy-coated PU foam sample were similar, showing regions of elastic and inelastic responses to force. Problems such as repeatability and long-term aging of the foam were identified. The issue of repeatability was due to hysteresis effects observed during the tensile testing of the foam. These effects were observed for the coated and uncoated samples, thus originating from the PU substrate. The effect of the PPy coating was to make the entire foam conducting without compromising the soft, compressible mechanical properties of the foam substrate.
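The linear weight-resistance relationship described above lends itself to a simple two-parameter calibration. The sketch below is illustrative only: the calibration readings are hypothetical values consistent with the reported linear trend, not measured data from the paper.

```python
def fit_line(x, y):
    """Least-squares line y = m*x + c, capturing the reported linear
    decrease of resistance with applied weight."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
    return m, my - m * mx

# hypothetical calibration data: applied weight (g) vs. resistance (kOhm)
weights = [0.0, 100.0, 200.0, 300.0, 400.0]
resistance = [4.10, 3.60, 3.05, 2.55, 2.10]

m, c = fit_line(weights, resistance)

def weight_from_resistance(r_kohm):
    # invert the calibration to estimate applied weight from a reading
    return (r_kohm - c) / m
```

In practice such a calibration would need to be redone periodically, since the hysteresis and aging effects noted above shift the absolute resistance over time.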
Torso Garment Once a predictable reaction was observed from the foam, it was applied to the wearable environment to explore its utility in garment systems. It was integrated into a torso garment in several ways to investigate the ability of the foam sensor to monitor specific body changes and physiological signals. The test garment contained foam sensors in 6 locations: the top outer edge of each shoulder, the back of the neck, the superior protrusion of each scapula, and the right side rib cage under the bust (Figure 1 ). Sensor positions were chosen to test the foam reaction to 4 different actions: breathing, shoulder movement, neck movement, and shoulder-blade pressure. Figure 1 Garment Structure and Sensor Layout The test garment was a sleeveless, collared shirt, closely fitted and nonextensile. The outer garment layer was a 100% polyester satin weave, and the inner layer was a 100% acrylic satin weave. The collar was 80% nylon, 20% elastane jersey knit. The structure of the garment was crucial to the quality of data obtained, as its textile composition, design, and fit moderated the amount of force present between the body and the sensors. In this study, the prototype garment was fitted to one test subject, to eliminate inter-subject anthropometric variation. Sensors were sewn between the two garment layers, allowing them to be easily removed and interchanged. In each test two wire leads were attached to the foam sensors and to a constant-current digital multimeter (HP, Leixlip, Ireland). Data was collected at a rate of 3 points per second. The finished prototype garment is shown in Figure 2 . Figure 2 Prototype pressure-sensitive torso garment Breathing The breathing sensor was attached on the subject's left-side rib cage, under the bust. The sensor measured 2.75 × 1.5 × 0.5 cm. Data was gathered with the subject standing, and the subject was instructed to breathe deeply for a period of approximately one minute.
Shoulder Movement Two shoulder movement sensors were attached at the outer edge of the garment at the apex of each shoulder (above the subject's axilla). The sensors measured 1.5 × 2.0 × 0.5 cm. Data was gathered with the subject seated, and the subject was instructed to raise one shoulder repeatedly to its maximum height. Neck Movement The neck motion sensor was attached vertically along the subject's spine, at the back of the neck, extending from 4 cm below the top of the collar (approximately the 2nd vertebra) to 2.5 cm below the neckline of the garment (approximately the 4th vertebra). The sensor measured 1.5 × 5.5 × 0.5 cm. Data was gathered with the subject seated, and the subject was instructed to perform four full neck extensions (backwards movement) and three full neck flexions (forward movement). Shoulder Blade Pressure Two pressure pads were attached, one over the superior edge of each scapula. The sensors measured 8 × 4 × 0.5 cm. Data was collected with the subject alternately supine and seated, on a hard surface. Results Sensor Characteristics The sequential coating of PU foam with conducting polymers increased the overall weight of the foam and converted it from an insulating material to a conductive one (ca. 1.41 mS/cm). The conductivity of the modified foam depends on the weight of conducting polymer deposited, which in turn depends on the number of coating layers deposited onto the foam substrate. It has been shown previously [ 4 ] that by coating the PU foam substrate a total of three times with PPy an electrical resistance of 1 kΩ/cm can be achieved. The PPy-PU foam was rubbed vigorously and rinsed with cold Milli-Q water to remove any loosely bound PPy. The adhesion of the bound PPy to the PU substrate was excellent, and the resistance of the foam did not change with subsequent hand washings in cold Milli-Q water. The electrical resistance remained in the kΩ/cm region for up to 3 months.
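As a rough consistency check (an illustration, not a calculation from the paper), the quoted bulk conductivity and per-length resistance can be related through R = L / (σA), assuming the coated foam conducts approximately as a uniform bulk material:

```python
sigma_S_per_cm = 1.41e-3   # ca. 1.41 mS/cm, as reported
r_per_cm_ohm = 1000.0      # 1 kOhm per cm after three PPy coats, as reported

# R = L / (sigma * A)  =>  A = 1 / (sigma * R_per_length)
# Uniform-bulk conduction is a simplifying assumption for a coated foam.
area_cm2 = 1.0 / (sigma_S_per_cm * r_per_cm_ohm)
print(f"implied cross-section ~ {area_cm2:.2f} cm^2")
```

The implied cross-section (about 0.71 cm²) is of the same order as the sensor cross-sections quoted elsewhere in the paper (e.g. 1.5 × 0.5 cm = 0.75 cm²), so the two reported figures are at least mutually plausible.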
Torso Garment Integrating the foam sensors into the torso garment caused little alteration in the visual or tactile properties of the garment. The largest sensors, the scapula pressure pads, caused the only visible change to the appearance of the garment, as these were the only sensors that possessed enough volume to change the surface topology of the garment. Although comfort was not a measured variable, there appeared to be no change in the tactile comfort of the garment when the sensors were added. In demonstration, both the test subject and other viewers had difficulty locating the sensors within the garment without direction. Breathing As seen in Figure 3a, deep breathing resulted in a sinusoidal resistance curve, varying between approximately 2 kΩ and 4 kΩ. These are absolute values and a low total change compared to the other sensors, a result of the age of the foam: the breathing sensor was replaced with week-old foam prior to the test, while the other sensors were 2 months old. The sensor foams are composites of PPy and PU, so the absolute resistance of the foam is affected by each of these components. First, the absolute resistance of the PPy may vary over time due to gradual oxidation of the polymeric backbone. Second, hysteresis in the PU foam substrate, as observed during tensile testing, affects the measured absolute resistance. This hysteresis effect of the PU foam during use appears as the gradual upward drift in the measured resistance visible in Figure 3a. This drift was calculated as a 26.67% change of resistance per minute. If the foam sensor is allowed to relax, unused, for 2 hours, the resistance returns to its initial value. Nevertheless, the sensor output appears to be sufficiently robust, even in its unfiltered state, for a reliable determination of the wearer's respiratory rate, for example.
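A minimal sketch of how such a drifting breathing trace might be processed (the 3 Hz sampling rate and 26.67%/min drift are taken from the text; the baseline, amplitude, and breathing rate are illustrative assumptions): fit and subtract a least-squares line to remove the drift, then count upward zero crossings of the residual to estimate respiratory rate.

```python
import math

FS = 3.0                 # sampling rate: 3 points per second (from the text)
DRIFT = 0.2667 / 60.0    # 26.67% of baseline per minute, expressed per second
BASE_KOHM = 3.0          # assumed baseline resistance (illustrative)
BREATH_HZ = 0.2          # assumed 12 breaths/min (illustrative)

# Simulate one minute of a drifting, sinusoidal breathing trace.
t = [i / FS for i in range(180)]
r = [BASE_KOHM
     + math.sin(2 * math.pi * BREATH_HZ * ti - math.pi / 2)  # breathing component
     + BASE_KOHM * DRIFT * ti                                # hysteresis drift
     for ti in t]

# Least-squares linear detrend.
n = len(t)
tbar, rbar = sum(t) / n, sum(r) / n
slope = sum((ti - tbar) * (ri - rbar) for ti, ri in zip(t, r)) / \
        sum((ti - tbar) ** 2 for ti in t)
resid = [ri - (rbar + slope * (ti - tbar)) for ti, ri in zip(t, r)]

# Each upward zero crossing of the detrended signal is one breath.
breaths = sum(1 for lo, hi in zip(resid, resid[1:]) if lo <= 0 < hi)
print(f"estimated rate: {breaths} breaths/min")
```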
In order to normalise the data so that the sensitivity of the sensor could be determined, the relative resistance of the foam sensor was plotted as in Figure 3b. This was calculated by dividing the absolute resistance at a given time t , R t , by the initial baseline resistance, R 0 . It can be seen in Figure 3b that there was an approximate 20% change in the relative resistance of the foam sensor between inhalation and exhalation. Figure 3 a) Absolute resistance response to Deep Breathing, b) relative resistance response (R t /R 0 ) to Deep Breathing Shoulder Movement The response of the foam to shoulder movements was an approximate 100% decrease in relative resistance, as seen in Figure 4 . Once again the data appears sufficiently robust to reliably detect each shoulder movement; however, no test was performed to detect the foam reaction to shoulder movements of varying magnitudes. Figure 4 Resistance Response to Shoulder Lift Neck Movement The foam responded to full neck extensions, section A in Figure 5 , with an 80% decrease in the relative resistance. Full flexion of the neck, section B in Figure 5 , involved a smaller body movement, which was detected as a smaller decrease (30%) in the relative resistance of the sensor. This data indicates that the dorsal neck sensor placement exhibits a response of greater magnitude for extension than for flexion. Since the sensor provides no additional qualitative information, it is hypothesized that a second sensor would be required to distinguish between a small extension and a large flexion. Figure 5 Resistance Response to Neck Movement Shoulder-Blade Pressure The foam responded with a 60% increase in the relative resistance when the subject moved from supine (applying pressure to the scapula area) to a seated position (no pressure), as seen in Figure 6 .
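The movement responses above are large, discrete drops in relative resistance (R t /R 0 ), so a simple threshold with edge logic is enough to count events. A sketch on a synthetic trace (the 0.5 threshold and the trace values are illustrative assumptions):

```python
# Synthetic relative-resistance trace: baseline ~1.0 with three dips,
# e.g. three shoulder lifts. Values are illustrative, not measured data.
rel = ([1.0] * 10 + [0.10] * 5 + [1.0] * 10 + [0.15] * 5 +
       [1.0] * 10 + [0.05] * 5 + [1.0] * 10)

def count_events(trace, threshold=0.5):
    """Count excursions below threshold (one count per contiguous dip)."""
    events, below = 0, False
    for v in trace:
        if v < threshold and not below:
            events += 1
            below = True
        elif v >= threshold:
            below = False
    return events

print(count_events(rel))
```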
The response time of this sensor, that is, the time taken for the resistance to stabilise after the subject moved to a seated position, was approximately 8 seconds. The response time was shown previously [ 4 ] to be inversely related to the applied force and is also influenced by the size of the sensor. The foam sensor in this position measured 32 cm² versus 2–12 cm² for the other sensors, so the response time for the shoulder-blade foam sensor would be slightly slower than that for the other sensor positions, e.g. 4 seconds for the shoulder-lift foam sensor. Figure 6 Resistance Response to Constant Scapula Pressure Discussion As demonstrated, pressure sensing in the wearable environment can provide useful descriptive information about the physical state of the user. Conducting electroactive polymers are attractive for sensing in a garment-integrated context because of their ability to retain the tactile and mechanical properties of a textile-based structure. In the garment integration, the foam sensors had little effect on the comfort or wearability of a standard garment. However, more investigation is necessary to determine the accuracy of the foam sensor responses, particularly the repeatability of response. As seen in the torso sensor evaluation, the age of the sensor had a significant impact on the absolute resistance of the sensors. It has been shown previously that if PPy is left open to the atmosphere, there is a gradual increase in the electrical resistance due to oxidation of the polymeric backbone [ 5 ]. However, the coating itself did not delaminate from the foam substrate, even during hand-washing of the foam sensors. This indicates that if the oxidation were prevented, the sensor would be durable and washable over an indefinite period of time. In a garment-integrated context, washability of components is important to the preservation of normal user patterns of care and maintenance of clothing.
In the torso integration, the raw pilot test data indicates that foam sensors can provide detectable responses to all of the body signals investigated, although careful sensor placement is important to the quality of data gathered. In this study, inter-subject anthropometric variation was controlled by limiting the number of subjects to one, and by custom-engineering the garment to fit that subject precisely. However, in a real-world scenario such control would not be possible, and sensor locations across a broad variety of body shapes and sizes would be hard to predict. Similar issues would arise with sizing, fit, and sensor locations on the foot. Because of the increased number of sensors and the precision of their locations, this variable would become even more difficult to control. However, if the number and locations of sensors were increased still further to create a uniform grid of pressure sensors, the fit issue could be avoided. An additional problem is the hysteresis caused by the PU foam substrate, which results in a gradual upward drift in the resistance of the foam sensor. Since the position and the relative resistance of the PPy-coated PU sensors are crucial to their sensitivity, calibration of the sensors would be required on a regular basis. This calibration would involve setting the baseline resistance and the range of measured resistance of the sensors, as determined through a series of standard repeatable exercises by the subject. Once these parameters are set, subject monitoring can commence. There are many applications of wearable sensing for which this type of sensor is particularly well suited. For example, in the monitoring of high-pressure body areas for individuals with reduced tactile sensation (such as diabetics suffering from neuropathy), the foam sensor would allow pressure points to be monitored without introducing a solid sensor element into a pressurized area close to the skin that could create more irritation.
Rigid sensors in such an area could easily create more irritation and exacerbate the problem, but a foam sensor not only would not create irritation, it could actually protect the body from irritants by providing an additional layer of cushioning on key pressure points. Outside of medical applications, knowledge of the state of the body is essential in many wearable, mobile, and ubiquitous computing applications. It is common in wearable and ubiquitous computing applications for a system to make decisions based on its perception of the needs and wants of the user. A subtle, comfortable sensor that demands no attention or adaptation from the user can allow the application to function invisibly, reducing the cognitive load on the user. Conclusion Based on these preliminary data, polypyrrole-coated conductive foam shows considerable promise as a basic sensing technology, and for use in detecting body movements, physiological functions, and body state from body-garment interactions. Importantly, the sensor maintains the attractive structural properties of foam, consistent with the objectives of wearability and comfort in a smart garment. Further study is necessary to fully understand the ability of the foam to serve as a reliable sensor over time and under the hostile conditions that garments must usually face. For instance, further work is required to understand and determine the effects of oxidation on baseline drift, the influence of variable conductance responses, calibrations of these responses and the optimal locations for sensors. In addition, processing algorithms for extraction of patterns from gathered data are required, as well as wearable and wireless hardware to allow the data to be used in real-time. Future work includes in-depth analysis of foam responses in controlled environments, and evaluation of optimal sensor location for monitoring of specific activities and conditions. 
Competing interests The author(s) declare that they have no competing interests. Authors' contributions LED created the garment prototypes, participated in the prototype pilot evaluations, and drafted the manuscript. SB created the foam sensors, participated in the prototype pilot evaluations, and drafted the manuscript. BS participated in the project organization and supervised the research. DD participated in the project organization and supervised the research. All authors read and approved the final manuscript.
PMC546404: Virtual reality and physical rehabilitation: a new toy or a new research and rehabilitation tool?

Abstract. Virtual reality (VR) technology is rapidly becoming a popular application for physical rehabilitation and motor control research. But questions remain about whether this technology really extends our ability to influence the nervous system or whether moving within a virtual environment just motivates the individual to perform. I served as guest editor of this month's issue of the Journal of NeuroEngineering and Rehabilitation (JNER) for a group of papers on augmented and virtual reality in rehabilitation. These papers demonstrate a variety of approaches taken for applying VR technology to physical rehabilitation. The papers by Kenyon et al. and Sparto et al. address critical questions about how this technology can be applied to physical rehabilitation and research. The papers by Sveistrup and Viau et al. explore whether action within a virtual environment is equivalent to motor performance within the physical environment. Finally, papers by Riva et al. and Weiss et al. discuss the important characteristics of a virtual environment that will be most effective for obtaining changes in the motor system.

Prevalence of virtual reality technology Virtual reality (VR) technology has been used for several decades for a variety of psychosocial interventions. But since the early 1990s there has been an explosion of laboratories and clinics promoting the use of virtual technology for physical rehabilitation [ 1 - 4 ]. Presently, combining the words virtual reality and rehabilitation brings up 132 articles in PubMed. I served as guest editor of a group of six papers on augmented and virtual reality in rehabilitation that appear this month in the Journal of NeuroEngineering and Rehabilitation (JNER). These papers demonstrate a variety of approaches taken for applying VR technology to physical rehabilitation.
VR describes a computer-generated scenario (a virtual world) with which the user can interact in 3 dimensions so that the user feels that he or she is part of the scene [ 6 ]. Currently, there are 4 forms of virtual environments: head-mounted display, augmented, Fish Tank, and projection-based [see [ 5 - 7 ] for a review]. A totally immersive VR system is the head-mounted display (HMD), in which the subject sees only the computer-generated image and the rest of the physical world is blocked from view. With augmented VR systems, both computer-generated images and the physical world are visible to the subject. Hence, the computer world is overlaid on the physical world. With "Fish Tank" VR, the stereo images are produced on a monitor in front of the subject [ 8 ]. These systems have a limited field of view (FOV) and space in which one can interact with the scene. Consequently, the resulting FOV is smaller than that available with other VR systems, but the accompanying pixel visual angle is also smaller and, therefore, better. With projection-based VR, the computer-generated imagery is projected on a screen or wall in front of the user, much like that in a theater [ 9 ]. Back-projection is often used instead of front-projection to ensure that the projected scene is not obscured by the subject's body. These systems usually have a wide field of view and can be multi-walled and floor systems, as with the CAVE™ technology. Among the papers published this month in JNER, Sparto et al. present studies using a monocular projection-based virtual environment to determine if patients with vestibular disorders will tolerate wide-FOV environments. Also, Kenyon et al. explore emerging VR technologies and the application of a stereo projection-based VR system to research in a posture laboratory. Why use a virtual world for rehabilitation? Many people question why we don't just have subjects perform motor tasks in the real world.
The answer to this question is that VR offers us the opportunity to bring the complexity of the physical world into the controlled environment of the laboratory. VR gives us the potential to move away from reductionism in science and towards the measurement of natural movement within natural, complex environments. In general, VR allows us to create a synthetic environment with precise control over a large number of physical variables that influence behavior while recording physiological and kinematic responses [ 10 ]. The papers by Sveistrup and by Viau et al., also published in JNER this month, relate to this topic. Viau et al. compare the kinematic strategies of reach, grasp, and place movements performed with physical and virtual objects by healthy adults and those with hemiparesis. Sveistrup presents current work on motor rehabilitation using virtual environments and virtual reality and, where possible, compares outcomes with those achieved in controlled real-world applications. There are numerous strengths underlying the use of VR in rehabilitation [ 11 , 12 ]. Among these are that VR provides the opportunity for ecological validity, stimulus control and consistency, real-time performance feedback, independent practice, stimulus and response modifications that are contingent on a user's physical abilities, a safe testing and training environment, the opportunity for graduated exposure to stimuli, the ability to distract or augment the performer's attention, and, perhaps most important to therapeutic intervention, motivation for the performer. In the group of papers that I guest-edited for JNER, the application of Fish Tank VR as a rehabilitation tool for patients with spinal cord injury is explored by Weiss et al. Another question that has arisen at meetings and in the review of the papers for JNER is under what circumstances a computer-generated environment should be considered virtual reality.
Factors that differ among many of the laboratories claiming to use virtual reality, and that also emerge amongst this group of papers, include field of view, the presence of stereo vision, and real-time feedback of head position so that the scene can be updated to reflect natural movement of the visual world. There is evidence demonstrating that transfer of training from the virtual to the physical environment is greater if the learner is immersed in the training environment [ 13 ]. Perhaps, then, the most important and defining factor for VR is the sense of presence of the performer in the environment. Thus, the first paper by Riva et al. that appears in JNER this month focuses on the meaning of presence and its importance to the use of VR for rehabilitation.
PMC549051: The changes of CD4+CD25+/CD4+ proportion in spleen of tumor-bearing BALB/c mice

Abstract. CD4+CD25+ regulatory T lymphocytes (T R ) constitute 5–10% of peripheral CD4+ T cells in naive mice and humans, and play an important role in controlling immune responses. Accumulating evidence shows that T R cells are involved in physiological processes and pathologic conditions such as autoimmune diseases, transplantation tolerance and cancer, and might be a promising therapeutic target for these diseases. To evaluate the change of CD4+CD25+ T R cells in mouse tumor models, the CD4+CD25+ subset in peripheral blood and spleen lymphocytes from normal or C26 colon-carcinoma-bearing BALB/c mice was analyzed by flow cytometry using double staining with CD4 and CD25 antibodies. The proportion of CD4+CD25+/CD4+ in spleen lymphocytes was found to be higher than that in peripheral blood lymphocytes in normal mice. No difference in the proportion in peripheral blood lymphocytes was observed between tumor-bearing mice and normal mice, while there was a significant increase in the proportion in spleen lymphocytes in tumor-bearing mice as compared with normal mice. Moreover, the proportion increased in accordance with the increase in tumor sizes. The increase in the proportion was due to the decrease in CD4+ cells in lymphocytes, which in turn results from a decreased CD4+CD25- subset in lymphocytes. Our observations suggest the CD4+CD25+/CD4+ proportion in spleen lymphocytes might be a sensitive index to evaluate T R in tumor mouse models, and our results provide some information on strategies of antitumor immunotherapy targeting CD4+CD25+ regulatory T lymphocytes.

Background Early in the 1970s, the concept of suppressor T cells was developed and it was envisioned that this subset of lymphocytes was responsible for the active control, and ultimately the termination, of immune responses [ 1 ].
But the characteristics of this subset had not been well studied, mainly because its distinct phenotype had not been identified. In the 1990s, Sakaguchi et al. found that a subset of CD4+ lymphocytes in the peripheral blood of normal mice expressed the IL-2R-α (CD25) and down-regulated the immune response to self and non-self antigens [ 2 ]. Soon the CD4+CD25+ lymphocytes were verified as a group of suppressor T cells and termed thymus-derived "naturally occurring" regulatory T cells (T R ). T R cells represent a minor (5–10%) component of peripheral CD4+ T cells but play an important role in controlling immune responses [ 3 ]. Accumulating evidence shows that T R cells possess potent suppressive activity both in vivo and in vitro and are involved in autoimmune diseases, transplantation tolerance and tumor immunity [ 2 - 5 ]. The transfer of CD4+CD25- cells into nude mice resulted in autoimmune diseases; reconstitution with CD4+CD25+ cells after transfer of CD4+CD25- cells prevented the development of autoimmunity [ 2 ]. Similarly, depletion of these cells induced gastritis and late-onset diabetes [ 6 ], and impaired development or dysfunction of these cells increased susceptibility to experimental autoimmune encephalomyelitis [ 7 ], multiple sclerosis [ 8 ] and other autoimmune diseases [ 9 , 10 ]. Conversely, an increased percentage of CD4+CD25+ T R cells among total CD4+ T cells was found in the peripheral blood of cancer patients [ 11 - 14 ], and depletion of CD25+ cells, alone or in combination with other strategies, might cause tumor regression [ 4 , 15 , 16 ]. All these studies indicate the importance of T R cells in controlling immune responses. The mechanism by which T R cells control the immune response is still unclear.
Previous studies show that activated T R cells strongly inhibit proliferative responses of CD4+ or CD8+ T cells in vitro [ 17 , 18 ]; moreover, they down-regulate co-stimulatory molecules on dendritic cells (DC) [ 19 ], inhibit the maturation and antigen-presenting function of DC [ 20 ], and suppress responses driven by activated, mature DC [ 21 ]. The important role of T R cells in immunoregulation has made them recognized as an attractive therapeutic target for immune-related diseases. In our animal experiments of antitumor immunotherapy targeting CD4+CD25+ T R cells, to our surprise, we did not find an increase of CD4+CD25+/CD4+ in the peripheral blood of tumor-bearing BALB/c or C57BL/6 mice; this is not in accordance with the increase of the proportion in cancer patients reported by Wolf et al. [ 11 ]. In order to find a way to evaluate CD4+CD25+ T R cells in tumor-bearing mice, we analyzed the CD4+CD25+ subset in peripheral blood and spleen lymphocytes from normal or C26 colon-carcinoma-bearing mice by flow cytometry. Methods Mice and tumor model 6- to 8-week-old BALB/c mice were purchased from the Laboratory Animal Center of Sun Yat-sen University. The mouse C26 colon carcinoma cell line was a gift from Prof. Li-Jian Xian (Cancer Center, Sun Yat-sen University). The C26 cells were cultured in RPMI 1640 medium (Gibco Invitrogen Corporation) supplemented with 10% fetal calf serum (FCS; Gibco Invitrogen Corporation, Carlsbad, CA), 100 U/ml of penicillin G and 100 μg/ml of streptomycin, and the medium was renewed every 2 to 3 days. After growing to confluency, the cells were detached with trypsin-EDTA, resuspended in serum-free RPMI 1640 medium and inoculated subcutaneously at the right axilla with 1 × 10⁵ to 1 × 10⁷ live tumor cells per mouse. Reagents PE-conjugated anti-mouse CD4 and Cychrome-conjugated anti-mouse CD25 antibodies were purchased from eBioscience.
Red blood cell lysis buffer is composed of 0.155 M ammonium chloride, 0.01 M potassium bicarbonate, and 0.1 mM EDTA. Fixation solution contains 1% paraformaldehyde in PBS. Sample preparation and flow cytometry Mouse peripheral blood was collected from the orbital plexus and anticoagulated with 20 U/ml sodium heparin. Single-cell suspensions of splenocytes were prepared by grinding the spleen with the plunger of a disposable syringe, passing the ground spleen through nylon mesh, and suspending the cells in PBS. Mouse peripheral blood or spleen single-cell suspensions were stained with PE-conjugated anti-mouse CD4 and Cychrome-conjugated anti-mouse CD25 antibodies at 4°C for 30 minutes. Then, erythrocytes were lysed with red blood cell lysis buffer. After washing with PBS, the samples were fixed with fixation solution and analyzed on a FACSCalibur™ flow cytometer (BD Biosciences) with CELLQuest™ software. Statistical Analysis The data are summarized as the mean ± standard error. Statistical analysis was performed using the Student t test; statistical significance was accepted at the P < 0.05 level. Results CD4+CD25+/CD4+ in peripheral blood and spleens from normal BALB/c mice To evaluate the normal proportion of CD4+CD25+/CD4+ in mice, 6- to 8-week-old normal BALB/c mice (n = 10) were sacrificed, and the proportion in spleen and peripheral blood lymphocytes was measured by flow cytometry using anti-mouse CD4 and CD25 antibodies. The total CD4+ lymphocytes, the CD4+CD25+ subset, and CD4+CD25+/CD4+ in spleen or peripheral blood lymphocytes are shown in Table 1 . In normal mice, CD4+CD25+ T R cells appear in spleen and peripheral blood at relatively stable percentages. The proportion of CD4+CD25+/CD4+ in peripheral blood was 6.19 ± 0.86%, which is in accordance with the results reported by others [ 3 ]. In contrast, the CD4+CD25+/CD4+ proportion in spleen was higher than that in peripheral blood (10.23 ± 1.88% vs 6.19 ± 0.86%, P < 0.001).
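The double-staining analysis reduces to a two-channel gating computation. A sketch on synthetic events (the gate thresholds and event values are illustrative assumptions, not instrument settings from the paper):

```python
# Each event is (CD4-PE signal, CD25-Cychrome signal). Synthetic data:
# 40 CD4+CD25- events, 4 CD4+CD25+ events, 56 CD4- events.
events = [(800, 50)] * 40 + [(800, 600)] * 4 + [(30, 50)] * 56

CD4_GATE, CD25_GATE = 200, 200   # hypothetical gate thresholds

cd4_pos = [e for e in events if e[0] > CD4_GATE]
cd4_cd25_pos = [e for e in cd4_pos if e[1] > CD25_GATE]

pct_cd4 = 100.0 * len(cd4_pos) / len(events)                 # CD4+ % of lymphocytes
pct_treg_of_cd4 = 100.0 * len(cd4_cd25_pos) / len(cd4_pos)   # CD4+CD25+/CD4+
print(pct_cd4, pct_treg_of_cd4)
```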
The higher proportion in spleen is due to a lower level of total CD4+ lymphocytes (CD4+CD25+ plus CD4+CD25-) in spleen than in peripheral blood (37.06 ± 5.76 vs 56.80 ± 6.38, P < 0.001). Representative figures of peripheral blood and spleen lymphocytes double stained with CD4 and CD25 antibodies are shown in Figure 1 .

Table 1 The percentages of CD4+CD25+ and CD4+ cells, and the proportions of CD4+CD25+/CD4+, in peripheral blood and spleen lymphocytes from normal BALB/c mice.

                            CD4+           CD4+CD25+     CD4+CD25+/CD4+   total lymphocytes
peripheral blood (n = 10)   56.80 ± 6.38   3.50 ± 0.45   6.19 ± 0.86      6.73 ± 0.84 (× 10⁹/L)
spleen (n = 10)             37.06 ± 5.76   3.79 ± 0.93   10.23 ± 1.88     1.54 ± 0.23 (× 10⁸)
P value                     <0.001         0.38          <0.001           -

Figure 1 The proportions of the CD4+CD25+ subset in peripheral blood and spleen lymphocytes from normal BALB/c mice. Mouse peripheral blood (A) or spleen single-cell suspensions (B) were collected or prepared and stained with PE-conjugated anti-mouse CD4 and Cychrome-conjugated anti-mouse CD25 antibodies; after lysis of erythrocytes, the samples were analyzed by flow cytometry. CD4+CD25+/CD4+ in peripheral blood and spleens from C26 tumor-bearing BALB/c mice To investigate possible changes of the proportion in tumor-bearing mice, 1 × 10⁵ to 1 × 10⁷ live C26 colon carcinoma cells were inoculated subcutaneously at the right axilla of BALB/c mice (n = 12). 20 days later, tumor nodules of various sizes (7 to 40 mm in diameter) had formed. The mice were sacrificed, and peripheral blood and spleen lymphocytes were prepared for double staining with anti-mouse CD4 and CD25 antibodies. In peripheral blood, we did not find an increase in CD4+CD25+/CD4+ in tumor-bearing mice compared with normal mice. In contrast, an increased proportion of CD4+CD25+/CD4+ in spleen lymphocytes was observed in tumor-bearing mice; moreover, the proportion increased in accordance with the increase in tumor sizes, as shown in Figure 2A .
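As a sketch, the blood-vs-spleen comparison of total CD4+ in Table 1 can be reproduced approximately from the summary statistics alone. This assumes the ± values are standard deviations (although the text says standard error, with n = 10 per group only the standard-deviation reading yields a t statistic consistent with the reported P < 0.001):

```python
import math

def t_from_summary(m1, sd1, n1, m2, sd2, n2):
    """Two-sample t statistic (Welch form) computed from summary statistics."""
    return (m1 - m2) / math.sqrt(sd1**2 / n1 + sd2**2 / n2)

# Total CD4+ % in lymphocytes, peripheral blood vs spleen (Table 1, n = 10 each);
# the ± values are treated as standard deviations (an assumption, see above).
t_cd4 = t_from_summary(56.80, 6.38, 10, 37.06, 5.76, 10)
print(f"t = {t_cd4:.2f}")  # a t this large at ~18 df is consistent with P < 0.001
```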
Representative double-staining figures of peripheral blood and spleen lymphocytes from tumor-bearing mice are shown in Figure 2B–C . Considering the short tumor-bearing duration, we prolonged the observation to 50 to 60 days; the increase in the proportion was still not observed in peripheral blood (data not shown). Figure 2 The relationship between tumor sizes and the CD4+CD25+/CD4+ proportions in peripheral blood or spleen lymphocytes in tumor-bearing mice. (A) 1 × 10⁵ to 1 × 10⁷ C26 colon carcinoma cells were inoculated subcutaneously at the right axilla of BALB/c mice (n = 12). 20 days later, after tumor sizes were measured, the mice were sacrificed and peripheral blood lymphocytes (○) and spleen lymphocytes (■) were stained with anti-mouse CD4 and CD25 antibodies. The x-axis represents the diameters of tumors; the y-axis represents the proportion of CD4+CD25+/CD4+. The absolute total lymphocyte counts were 9.85 ± 2.34 (× 10⁹/L) in peripheral blood and 2.37 ± 0.77 (× 10⁸) in spleen. Representative figures of the CD4+CD25+ subset in peripheral blood (B) or spleen (C) lymphocytes from tumor-bearing mice are also shown. The changes of the percentages of CD4+CD25+, CD4+CD25- and total CD4+ cells in spleen lymphocytes from tumor-bearing mice The proportion of CD4+CD25+/CD4+ is determined by two factors: CD4+CD25+ in lymphocytes (numerator) and total CD4+ in lymphocytes (denominator). The increase in the proportion may be due to an increase of the CD4+CD25+ subset, a decrease of the CD4+ subsets, or both. To investigate the possible reason that the proportion increased in spleen lymphocytes of tumor-bearing mice, we analyzed the CD4+CD25+ and total CD4+ cells in spleen lymphocytes. We found no obvious change in CD4+CD25+ in spleen lymphocytes from tumor-bearing mice; however, total CD4+ lymphocytes decreased with increasing tumor size, and the decrease was mainly due to the decrease of the CD4+CD25- subset, as shown in Figure 3 and Figure 2C .
Figure 3 The relationship between tumor sizes and the percentages of total CD4+ (■), CD4+CD25- (□) or CD4+CD25+ (○) cells in spleen lymphocytes in tumor-bearing mice. Samples were prepared and analyzed as in Figure 2. The x-axis represents the diameter of tumors; the y-axis represents the percentages in spleen lymphocytes. Discussion The identification of CD4+CD25+ as the phenotype of regulatory T lymphocytes is one of the highlights of recent immunological progress. These cells have been proven to be involved in autoimmune diseases, transplantation tolerance, tumor immunity, etc. [ 3 ]. The relationship between cancer and the immune system has been studied and debated for a long time. We now know that immunodeficient or immunosuppressed humans or animals show greater incidences of cancer [ 22 ]; at the same time, immune function in cancer patients is often compromised by the tumor itself or by related treatment, which often puts patients at a disadvantage. Restoring immune function in cancer patients is an important element of cancer treatment. The identification of CD4+CD25+ T R cells provided a new way to study the relationship between tumor development and immune suppression. A higher proportion of CD4+CD25+ T R cells was found in the peripheral blood of cancer patients and was related to poor prognosis [ 11 , 12 ]. Depletion of CD4+CD25+ T R cells using anti-CD25 mAb could promote anti-tumor immunity [ 4 , 15 , 16 ]. All these findings indicate that CD4+CD25+ T R cells may be an attractive target for restoring or improving immune function in cancer treatment. In our animal experiments of antitumor immunotherapy, we did not find an increase of CD4+CD25+/CD4+ in the peripheral blood of tumor-bearing BALB/c mice; this is not in accordance with the results in cancer patients reported previously [ 11 ].
To find a way to evaluate CD4+CD25+/CD4+ in antitumor immunotherapy targeting CD4+CD25+ T R cells, we analyzed the proportion in peripheral blood and spleen lymphocytes of normal or C26-colon-carcinoma-bearing mice by flow cytometry. In the present study, the CD4+CD25+/CD4+ proportion in the peripheral blood of normal mice was about 6.19%, compatible with previously reported results (5–10%). In spleen lymphocytes from normal mice, however, we found a higher proportion (around 10%); this higher proportion is due to a lower level of total CD4+ lymphocytes in spleen compared with peripheral blood, whereas the percentages of CD4+CD25+ cells are similar. In C26-colon-carcinoma-bearing BALB/c mice, we found an increase of CD4+CD25+/CD4+ in spleen but not in peripheral blood; furthermore, the proportion in spleen lymphocytes increased with tumor size. The dissociation between the increase in spleen and the lack of one in peripheral blood may be due to the following: 1) the spleen is a dedicated immune organ, which may be more sensitive to changes in immune status than peripheral blood; 2) in this study we used a transplanted rather than a spontaneous tumor model, and the tumors grew so quickly that mice became moribund or died before an increase could appear in peripheral blood. Observing an increase in the peripheral blood of tumor-bearing mice may require a longer observation period, or preferably a spontaneous tumor model. In our experiments, we found that the increase of CD4+CD25+/CD4+ is due to a decrease of CD4+ cells among lymphocytes, which results from a decreased CD4+CD25- subset. Our results support the observations reported by Sasada [ 12 ], in which the relative increase in the proportion of CD4+CD25+ T cells in patients with gastrointestinal malignancies was due to a selective reduction in the number of CD4+CD25- T cells.
A possible explanation is that the CD4+CD25- subset is more sensitive to clonal deletion or apoptosis than CD4+CD25+ T cells [ 12 , 23 , 24 ]. Furthermore, it is possible that some factors, such as tumor-derived antigens or molecules, can induce apoptosis selectively in the CD4+CD25- subset but not in the CD4+CD25+ subset [ 12 ]. The relationship between cancer and the immune system has been debated for a long time. Our results provide direct evidence that a tumor can compromise immune function, since our tumor model was established in BALB/c mice with normal immune function. It is known that tumor cells secrete immunosuppressive cytokines such as IL-10 and TGF-β [ 25 - 27 ], and these cytokines may induce CD4+CD25- lymphocytes to convert into CD4+CD25+ T R cells [ 28 , 29 ]. All of these observations support the idea that tumors may compromise immune function. Conclusions In normal BALB/c mice, the CD4+CD25+/CD4+ proportion in spleen lymphocytes is higher than that in peripheral blood lymphocytes. In C26-colon-carcinoma-bearing mice, no difference from normal mice was found in the proportion in peripheral blood lymphocytes; in contrast, the proportion in spleen lymphocytes increased markedly, and it increased in accordance with tumor size. The increase in the proportion is due to a decrease of total CD4+ cells among lymphocytes, which results from a decreased CD4+CD25- subset. Our observations suggest that the CD4+CD25+/CD4+ proportion in spleen lymphocytes, rather than in peripheral blood lymphocytes, might be a sensitive index for evaluating T R cells in tumor mouse models, and our results provide some information for strategies of antitumor immunotherapy targeting CD4+CD25+ regulatory T lymphocytes.
554996 | Correction: Lipid phosphate phosphatases dimerise, but this interaction is not required for in vivo activity

After the publication of this work [ 1 ] it was brought to our attention that the concentrations of the reagents listed for the cell lysis buffer are those of the original stock solutions and not the required final concentrations. The correct final concentrations for this buffer are as follows: HEPES 50 mM; NaCl 100 mM; NaF 10 mM; EDTA 5 mM; Na3VO4 0.5 mM; NEM 2 mM; Triton 0.1%; Complete protease inhibitors (Roche). We regret any inconvenience that this inaccuracy may have caused, and thank Dr. Bräuer for bringing it to our attention.
544560 | The product of the split ends gene is required for the maintenance of positional information during Drosophila development

Background The Drosophila split ends ( spen ) gene encodes a large nuclear protein containing three RNP-type RNA binding motifs and a conserved transcriptional co-repressor-interacting domain at the C-terminus. Genetic analyses indicate that spen interacts with pathways that regulate the function of Hox proteins, the response to various signaling cascades, and cell cycle control. Although spen mutants affect only a small subset of morphological structures in embryos, it has been difficult to find a common theme in spen mutant structural alterations, or in the interactions of spen with known signaling pathways. Results By generating clones of spen mutant cells in wing imaginal discs, we show that spen function is required for the correct formation and positioning of veins and mechanosensory bristles, both on the anterior wing margin and on the notum, and for the maintenance of planar polarity. Wing vein phenotypic alterations are enhanced by mutations in the crinkled ( ck ) gene, encoding a non-conventional myosin, and correlate with abnormal spatial expression of Delta, an early marker of vein formation in third instar wing imaginal discs. Positioning defects were also evident in the organization of the embryonic peripheral nervous system, accompanied by abnormal E-Cadherin expression in the epidermis. Conclusions The data presented indicate that spen function is necessary to maintain the correct positioning of cells within a pre-specified domain throughout development. Its requirement for epithelial planar polarity, its interaction with ck , and the abnormal E-Cadherin expression associated with spen mutations suggest that spen exerts its function by interacting with basic cellular mechanisms required to maintain multicellular organization in metazoans.
This role for spen may explain why mutations in this gene interact with the outcome of multiple signaling pathways.

Background The morphological complexity of metazoans is achieved through the regulation of multiple genes in an orchestrated spatial and temporal manner. One of these genes, split ends ( spen ), was initially identified in a screen for mutations affecting axonal outgrowth in the nervous system of Drosophila [ 1 ]. Additional mutations in spen were isolated in a screen for genetic modifiers of Deformed ( Dfd ) function. Dfd encodes a Hox transcription factor that specifies maxillary segment identity during development [ 2 ]. spen was subsequently found to enhance embryonic thoracic defects resulting from loss of function mutations in the Hox gene Antennapedia [ 3 ]. Other studies have identified mutations in Drosophila spen as modifiers of mutations in components of Ras/MAP kinase pathways, including Raf kinase [ 4 ], kinase suppressor of Ras [ 5 ], loss of function mutations in the gene encoding the protein tyrosine phosphatase Corkscrew [ 6 ], and the ETS family transcription factor Aop/Yan [ 7 , 8 ]. Mutations in the spen gene have also been identified as enhancers of gain of function phenotypes caused by overexpression of E2F or Cyclin E in eye cells [ 9 , 10 ], both of which are required for progression through the S phase of the cell cycle, as well as of Dacapo , a cyclin dependent kinase inhibitor [ 9 ]. Overexpression of Spen may interfere with Notch signaling during the development of adult external sensory organs [ 11 ], and spen function is required for the maternal expression of the Notch pathway transcription factor encoded by Suppressor of Hairless (Su(H)) [ 12 ]. Recent evidence also suggests that spen may participate in the transduction of the Wingless (Wg) signal within a subset of cells in the wing imaginal disc [ 13 ]. The Spen protein is ubiquitously expressed throughout embryogenesis.
Differential splicing of spen results in isoforms encoding at least two proteins of ~5500 amino acids containing three tandem RNP-type RNA binding domains and a SPOC (Spen Paralogous and Orthologous C-terminal) domain at the carboxy terminus [ 3 ]. These domains are highly conserved in both the mouse and human orthologs, called Msx-2 Interacting Nuclear Target (MINT) and SMRT/HDAC1 Associated Repressor Protein (SHARP), respectively. There is increasing evidence indicating that Spen-related polypeptides play a role in transcriptional repression. MINT may participate in bone development by binding to the osteocalcin promoter, via its RNP motifs, and repressing transcription in a binding complex with the homeodomain protein Msx-2 [ 14 ]. The interaction between SHARP and Silencing Mediator for Retinoid and Thyroid-hormone receptors (SMRT) can lead to the recruitment of histone deacetylase complexes through the conserved SPOC domain [ 15 , 16 ]. Both SHARP and MINT have also been proposed as negative regulators of the Notch signaling pathway in mammals. SHARP has been shown to bind directly to RBP-Jκ and repress the HES-1 promoter in an HDAC-dependent manner [ 17 ]. Although deletion of MINT coding sequences in mice results in embryonic lethality around E14.5 due to multiple abnormalities, the analysis of hematopoiesis derived from MINT-/- precursors reveals a defect in B cell development that could be attributed to defects in Notch signaling [ 18 ]. Despite the sum of genetic and biochemical evidence, a selective role for Spen-like proteins in a particular pathway in mammals or Drosophila remains unclear. Because wing development is a well characterized system for the study of primary pattern formation, diverse signaling pathways, and cell cycle control [ 19 , 20 ], we have used mitotic recombination in the wing disc to analyze spen mutant mosaics.
An additional advantage is that, because wings are not essential for adult viability, the study of a large number of specimens is possible. In this study, we show that the function of spen is necessary for the maintenance of planar polarity, and for the correct formation and positioning of veins and mechanosensory bristles on the anterior wing margin and the notum. Alterations in vein formation in spen clones correlate with abnormal spatial expression of Delta, an early marker of vein formation in third instar wing imaginal discs. All wing phenotypes are enhanced in a crinkled ( ck ) mutant background, a gene encoding the non-canonical myosin VIIa. The abnormal position of sensory organs is also observed during embryonic PNS development. In contrast with previous reports, we show that spen is not essential for the determination of specific cell fates nor for cell survival, nor is it directly required for the outcome of the Ras, Notch, and Wingless signaling pathways. Based on our observations, we propose that spen is required for cells to maintain positional identity and tissue cohesiveness during the organized growth of wing and notum epithelia. Results spen function is essential throughout development spen has been shown to be involved in many processes during embryonic development. Mutant modifier and gain of function screens indicate that spen participates in a variety of developmental processes in Drosophila , including Hox gene function [ 2 , 3 ], cell cycle control [ 9 , 10 ] and the modulation of signal transduction pathways such as Ras [ 4 - 8 ], Notch [ 11 , 12 ], and Wingless [ 13 ]. Given these varied interactions, it is unclear how spen functions through a common mechanism of action. To better understand a role for spen , we have generated genetic mosaics in adult tissues by using Flp1 -mediated mitotic recombination [ 21 ]. The spen poc 361 and spen poc 231 mutant alleles have been previously described [ 3 ]. 
Although not molecularly characterized, there is evidence indicating that spen poc 361 is a null allele. First, in maternal and zygotic spen poc 361 mutant embryos, Spen protein cannot be detected with a polyclonal antibody raised against the region encoding amino acids 3203–3714 [[ 3 ], and not shown], suggesting that this mutant is either a protein null or that it is truncated before this region. Second, the spen poc 361 allele displays nearly the same strength as the TE21A deficiency, which deletes the entire locus [ 3 ]. Both spen alleles were recombined onto chromosomes containing an FRT sequence at 40A [ 3 ], and were subjected to heat shock driven Flp1-mediated mitotic recombination with a M(2) 36 F FRT 40 A chromosome. Heat shocks delivered at different times during larval and pupal development rendered few escapers (<1%), which presented with little or no mosaicism as revealed by the absence of the M dominant marker (not shown), indicating that the function of spen is essential during all stages of development. spen mutant mosaics affect vein morphology and planar polarity in adult wings Because the studies described above did not provide any information about the nature of this lethality, mosaics were generated through expression of the Flp1 recombinase in wing imaginal discs, where spen mRNA is expressed ubiquitously as revealed by RNA in situ hybridization (not shown). Using the MS1096-GAL4 driver [[ 22 ], Figure 1A ], the expression of a UAS-Flp1 transgene [ 23 ] induced mitotic recombination in wing discs between the FRT chromosome bearing a spen mutant allele and an FRT chromosome containing a ubiquitously expressed GFP transgene on 2L. The homozygous viable 2piM FRT 40 A chromosome [ 21 ] was used as a control in all crosses. 
Initially, to avoid non-specific interference of additional mutations in the analysis (see below), clones were not phenotypically marked in adults, although their formation could be followed in discs by looking at the expression of the GFP transgene. In order to prevent the evaluation of non-specific abnormalities caused by the MS1096-GAL4 insertion on the X chromosome of hemizygous males, only heterozygous females were analyzed. Figure 1 Generation of spen mutant clones in wing imaginal discs . (A) Expression of a UAS-GFP transgene driven by MS1096 GAL4 in third instar wing imaginal discs. As described previously [22], the transgene is expressed mainly in the dorsal wing pouch, with weaker expression in the ventral side and in the prospective notum. (B, C) Virgin females with the genotype { MS1096 GAL4; 40 2piM FRT 40 A }, or { MS1096 GAL4; spen poc 361 FRT 40 A / ln (2LR) CyO }, were crossed to { w*; GFP FRT 40 A ; UAS-Flp1/TM6b } males. Shown are wing imaginal discs isolated from the resulting third instar larvae with the genotypes { MS1096 GAL4/+; 40 2 piM FRT 40 A /GFP FRT 40 A ; UAS Flp1/ + } (B), and { MS1096 GAL4/+; spen poc 361 FRT 40 A /GFP FRT 40 A ; UAS-Flp1/ + }(C). Green fluorescence reveals either compound heterozygous cells (not subjected to mitotic recombination) or GFP homozygotes (brightest), while dark spots indicate 2 piM (B) or spen poc 361 (C) homozygous clones. Dorsal is up and anterior is to the left. To know the relative area covered by either 2 piM or spen poc 361 homozygous clones versus wt clones, regions of equal intensity within the images were artificially colored in Adobe Photoshop using the Paint Bucket tool (D, E). Colors corresponding to mutant (red) or wt (blue) areas were extruded independently, and the total number of pixels contained within the regions of interest were calculated using the Kodak 1D Image Analysis Software (Eastman Kodak Company, Rochester, NY). 
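The pixel-counting procedure described above (color regions of equal intensity, then total the pixels per genotype) can be sketched with a simple numpy thresholding step in place of the Photoshop/Kodak 1D workflow the authors used; the intensity array and thresholds here are illustrative assumptions:

```python
import numpy as np

def clone_area_fraction(gfp, dark_thresh, bright_thresh):
    """Fraction of the scored (recombinant) area occupied by mutant clones.
    Dark (GFP-negative) pixels mark homozygous mutant clones; the brightest
    pixels mark homozygous GFP twin spots; intermediate pixels are
    heterozygous cells and are not scored."""
    mutant = gfp < dark_thresh
    twin_spot = gfp > bright_thresh
    scored = mutant.sum() + twin_spot.sum()
    return mutant.sum() / scored

# Toy 4x4 "disc" image: 4 dark, 4 bright, and 8 intermediate pixels.
gfp = np.array([[10, 10, 200, 200],
                [10, 10, 200, 200],
                [100, 100, 100, 100],
                [100, 100, 100, 100]])
fraction = clone_area_fraction(gfp, dark_thresh=50, bright_thresh=150)  # 0.5
```

Comparing this fraction between mutant and control crosses is the logic of the quantification: if mutant and control clones cover similar fractions of the disc, loss of the gene does not autonomously impair cell viability during disc growth.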
The results are represented as the fraction covered by each genotype for each cross in a total of three discs for the crosses involving 2 piM , and 5 discs for spen poc 361 (F). Using this approach, clones were generated on the dorsal side of the wing pouch, haltere imaginal discs, and to a minor extent on the ventral side of the wing pouch and on the prospective scutellum (Figure 1A-C ). In third instar imaginal discs, the relative area covered by spen mutant cells was comparable in size to that obtained in the control crosses (Figure 1F ), indicating that loss of function of the spen gene did not appear to autonomously affect cell viability during growth of the disc. The generation of spen mutant mosaics during wing development caused phenotypic abnormalities that included both the formation of ectopic patches of vein material and the loss of vein material (mostly distally), as well as subtle mis-localization of both longitudinal and cross veins, frequently accompanied by thickening of the veins (Figure 2 ). The ectopic vein material was always observed around veins, and was never detected in the middle of intervein regions. Additional abnormalities included the disruption of cell polarity, as evidenced by the abnormal orientation of wing blade trichomes (Figure 3 ) and the mis-placement of bristles at the wing margin (see below on Figure 4 ), whereas their morphology appeared normal. No disruptions in any of the main axes of the wing (A/P, D/V and P/D) were observed in wings containing spen mutant clones. Figure 2 The presence of spen mutant clones affects wing vein morphology . 
Virgin females with the genotype { MS1096 GAL4; 40 2piM FRT 40 A }, { MS1096 GAL4; spen poc 231 FRT 40 A / ln (2LR) CyO }, or { MS1096 GAL4; spen poc 361 FRT 40 A / ln (2LR) CyO }, were crossed to { w*; GFP FRT 40 A ; UAS-Flp1/TM6b } males, and adult wings were isolated from progeny females with the following genotypes: { MS1096 GAL4/+; 40 2 piM FRT 40 A /GFP FRT 40 A ; UAS Flp1/ + } (A), { MS1096 GAL4/+; spen poc 361 FRT 40 A / GFP FRT 40A; TM6b/ + } (B), { MS1096 GAL4/+; spen poc 231 FRT 40 A / GFP FRT 40 A ; UAS-Flp1/ + } (C, D), and { MS1096 GAL4/+; spen poc 361 FRT 40 A / GFP FRT 40 A ; UAS-Flp1/ + } (E, F). Arrows indicate gain (C, E, F), loss (D, F), or misplacement (E) of vein material. Figure 3 spen mutations affect wing hair polarity . Crosses were performed as described in Figure 1. Shown are details (see inset in A) of adult wings isolated from progeny females with the genotypes: { MS1096 GAL4/ +; 40 2piM FRT 40 A / GFP FRT 40 A ; UAS-Flp1/ + } (A), { MS1096 GAL4/ +; spen poc 361 FRT 40 A / GFP FRT 40 A ; TM6b/ + } (B), { MS1096 GAL4/ +; spen poc 231 FRT 40 A /GFP FRT 40 A ; UAS-Flp1/ + } (C), and { MS1096 GAL4/ + ; spen poc 361 FRT 40 A / GFP FRT 40 A ; UAS-Flp1/ + } (D). Arrows indicate the direction of the bristles. Figure 4 Incorrect positioning of wing elements in mosaic spen mutant wings is enhanced by mutations in the crinkled ( ck ) gene . Virgin females with the genotype { MS1096 GAL4; ck FRT 40 A / ln (2LR) CyO }, or { MS1096 GAL4; spen poc 361 ck FRT 40 A / ln (2LR) CyO }, were crossed to { w*; GFP FRT 40 A ; UAS-Flp1/ TM6b } males, and adult wings were isolated from progeny females with the following genotypes: { MS1096 GAL4/+; ck FRT 40 A / GFP FRT 40 A ; UAS-Flp1/ + } (A, F), { MS1096 GAL4/+; spen poc 361 ck FRT 40 A / GFP FRT 40 A ; TM6b/ + } (B-E, G). Red lines separate either wt or compound heterozygous cells (indicated as wt) from spen mutant cells. 
All phenotypes were more penetrant when the spen poc 361 allele was used instead of the spen poc 231 , confirming previous results indicating that spen poc 361 represents a stronger allele [ 2 , 3 ]. None of the phenotypes described above were observed when a control chromosome was subjected to mitotic recombination (Figures 2A , and 3A ), or when either the driver ( MS1096-GAL4 ), or the UAS-Flp1 transgene, were independently present in a spen heterozygous background (Figures 2B , 3B , and not shown). Morphological alterations in mosaic spen mutant wings are enhanced by mutations in crinkled ( ck ) To determine whether the phenotypes observed in spen mosaic mutant wings were cell autonomous, spen mutant chromosomes were marked with crinkled ( ck ), a commonly used recessive marker on 2L. While the presence of ck homozygous clones had no effect on wing vein patterning and morphology (Figure 4A ), the presence of ck on the spen mutant chromosome markedly enhanced the severity of the phenotypes previously observed with spen mutants alone (Figure 4B ). Likewise, we observed an excess (Figure 4C ) or absence (Figure 4D ) of vein material, and misplacement of both longitudinal and cross veins (Figure 4E ). In most cases, these abnormalities correlated with the presence of the ck marker phenotype. However, there were cases in which clones on the dorsal side affected vein morphology on the ventral side and vice versa . Therefore, the phenotypic effects caused by spen mutations are not exclusively cell autonomous, although it appears autonomous in cells within a given cell layer. A "misplacement" effect was also observed at the wing margin, most frequently affecting the dorsal bristles. In a normal wing, the bristles are evenly spaced in a row along the dorsal side of the anterior wing margin (Figure 4 ). In the presence of spen mutant clones, the spacing between these bristles was altered (Figure 4G ). 
Additionally, a mis-alignment and occasional tufting of the thick trichomes that are found along the wing margin was observed. Again, these phenotypes were not exclusively cell autonomous, as was the case with aberrations in wing vein morphology. Alternatively, it is plausible that spen mutations alter the phenotypic manifestation of ck mutants, therefore leading to an incorrect conclusion about the autonomy of the spen mutations. spen mutant clones disrupt the expression of Delta and Cut in wing imaginal discs Veins are generated in specific domains within the wing field and require the action of early patterning genes that establish basic positional values composing the main axes (D/V, A/P). This process is followed by the initiation of vein formation, and finally, vein differentiation [ 19 ]. The phenotypes observed upon generation of spen mutant clones in wing imaginal discs are consistent with a role for spen at later stages when vein formation takes place, as the establishment of compartment boundaries appears unaffected (Figure 2 ). One gene product that is involved in early vein patterning and differentiation is Delta (Dl), which participates in delimiting vein boundaries along prospective vein forming domains through lateral inhibition [ 19 ]. In third instar wing imaginal discs, expression of Dl correlates with the prospective L3, L4, and L5 veins (see Figure 5B ). Figure 5 Delta protein expression is abnormally distributed in the presence of spen mutant clones . Crosses were performed as described in Figure 1, and third instar imaginal discs were isolated from progeny female larvae with the following genotypes: { MS1096 GAL4/+; 40 2piM FRT 40 A / GFP FRT 40 A ; UAS-Flp1/ + } (A-C), and { MS1096 GAL4/ +; spen poc 361 FRT 40 A / GFP FRT 40 A ; UAS-Flp1/ + } (D-I). Delta expression (in red) was detected by using a mouse anti-Delta MAb followed by a Cy3-conjugated anti-mouse antibody.
The area indicated by the arrows in E and F is shown at higher magnification in G to I. Dorsal is up and anterior is to the left. Dl protein expression was normal in third instar wing discs containing homozygous clones for both the 2 piM , and the GFP FRT marker chromosomes, which confer nuclear GFP expression (Figure 5A-C ). In the presence of spen mutant clones, Dl expression was inconsistently abnormal: In some cases we observed ectopic expression of Dl within spen mutant clones, although this was not the norm. Similarly, Dl expression could be absent from normal regions adjacent to spen mutant cells, suggesting a non-autonomous effect (Figure 5D-I ). Most frequently, however, the abnormal expression of Dl within a spen clone was consistent with a shift or misplacement of Dl-expressing cells. As shown in Figure 5 (Panels E, F and H, I), Dl-positive cells are located away from the position where they are expected, at the intersection of the prospective L4 vein at the wing margin. One of the genes whose expression delineates the D/V boundary at the wing margin beginning at mid to late third instar is the homeodomain transcription factor Cut (Ct) [ 24 , 25 ]. The analysis of Ct expression is of particular interest for our study because both the N and wg signaling pathways cooperate to maintain its expression at the margin [ 26 , 27 ], and both signaling pathways have been reported to be affected by spen mutations [ 11 - 13 ]. As shown (Figure 6 ), the expression of Ct was not subjected to major alterations at the wing margin in the presence of spen mutant clones. Occasionally, Ct expression was broader, or detected in cells a few cell diameters away from the margin (Figure 6G-L ), coinciding with the presence of spen mutant cells. This observation is consistent with our previous results showing abnormal spatial expression of Dl, which is in turn required to restrict Ct expression to the margin [ 27 ]. 
Figure 6 Analysis of Cut protein expression in the presence of spen mutant clones . Crosses were performed as described in Figure 1, and third instar imaginal discs were isolated from progeny female larvae with the following genotypes: { MS1096 GAL4/+; 40 2piM FRT 40 A / GFP FRT 40 A ; UAS-Flp1/ + } (A-C), and { MS1096 GAL4/ +; spen poc 361 FRT 40 A / GFP FRT 40 A ; UAS-Flp1/ + } (D-L). Cut expression (in red) was detected with an anti-Cut MAb as described in Figure 5 for Dl, and the absence of GFP (in green) defines mutant cells as explained in Figure 5. G to I show a magnification of the spot indicated by an arrow on E and F. Note that the Cut protein is present in spen mutant cells. Panels J to L show the margin of another disc not shown in the figure. The arrow indicates a group of heterozygous cells that are away from the margin, surrounded by a group of spen mutant cells. Dorsal is up and anterior is to the left. The organization of the PNS is abnormal in both adult spen mutant clones and maternal and zygotic spen mutant embryos It was unclear whether the effect of spen mutant clones on the spacing of dorsal sensory bristles at the anterior wing margin was due to abnormal positioning or to a defect in the correct specification of sensory organ precursors. To determine if this effect could be generalized to other adult structures, expression of the Flp1 recombinase was directed to the anterior compartment of the wing disc with a dpp DISC -GAL4 driver [ 28 ]. Although in this case the clones were not marked in adults, analysis of GFP expression in third instar imaginal discs showed that spen mutant clones were generated throughout the anterior compartment of the wing disc, including most of the prospective notum (data not shown). The generation of large spen mutant clones in the notum severely affected the final adult pattern of both the macro and microchaetae (Figure 7 ).
This patterning defect was easily observed in the alignment of microchaetae along the dorsal midline, and the overall phenotype was more penetrant in males than in females. The number and position of macrochaetae were also affected, and the observed defects included their loss (Figure 7C ), gain (Figure 7B,C,E , and 7F ), and abnormal positioning (Figure 7B,C,E,F ). In each case, these abnormalities were never associated with the appearance of double trichogens (bristles) or tormogens (sockets), which would indicate a defect in the specification of particular cell fates during the formation of the external sensory organs. Figure 7 Sensory bristle number and position are abnormal in notums containing spen mutant clones . Virgin females with the genotype { spen poc 231 FRT 40 A / ln (2LR) CyO; dpp DISK GAL4/ TM6b } or { spen poc 361 FRT 40 A / ln (2LR) CyO; dpp DISK GAL4/ TM6b }, were crossed to { w*; GFP FRT 40 A ; UAS-Flp1/ TM6b } males, and adult notums were isolated from progeny females (A-C), or males (D-F) with the following genotypes: { spen poc 361 FRT 40 A / GFP FRT 40 A ; dpp DISK GAL4/ TM6b } (A, D), { spen poc 231 FRT 40 A / GFP FRT 40 A ; dpp DISK GAL4/ UAS-Flp1 } (B, E), and { spen poc 361 FRT 40 A / GFP FRT 40 A ; dpp DISK GAL4/ UAS-Flp1 } (C, F). Lines delineate the bristles at the dorsal midline. Empty circles indicate loss of bristles (in C), and arrows indicate either gain or abnormal location of macrochaetae (B, C, E, and F). Previous reports have linked spen to the organization of the embryonic peripheral and central nervous systems [ 1 , 8 , 12 ]. In agreement with these studies, and with our findings in adults, we observed an abnormal distribution of neurons in spen maternal and zygotic embryos, as evidenced by immunodetection of the pan-neural marker Elav (Figure 8 ). Similar to the previously described phenotypes, these defects showed variable penetrance, and a change in the number of Elav-positive cells (either more or fewer) was not consistently observed.
Figure 8 Abnormal positioning of PNS neurons in spen maternal and zygotic mutant embryos . Maternal and zygotic spen mutant embryos were obtained as described in Materials and Methods, and at stage 14–15 were stained for the neuron specific marker Elav, together with an anti β-galactosidase antiserum to reveal the presence of the CyO , wg-lacZ balancer in heterozygotes (not shown), followed by biotinylated secondary antibodies, and streptavidin conjugated horseradish peroxidase. Brown staining reveals the nuclei of PNS neurons in wt (A, B), maternal and zygotic spen 231 (C, D), or spen 361 (E, F) embryos. Abnormal PNS distribution correlates with altered epidermal expression of E-cadherin in spen mutant embryos Peripheral neurogenesis starts at the epidermis, where sensory organ precursors are specified and give rise to sensory organ cells through carefully polarized cell divisions. Aside from these divisions, additional cell types might be recruited from the epidermis to take part in the formation of internal sensory organs [reviewed in [ 29 ]]. Increased epithelial activity at the sites of sensory organ formation may reveal defects in cell adhesion, such as those caused by mutations in the E-Cadherin gene shotgun ( shg ), whose loss of function results in holes in the epithelium. These holes, which later appear in the cuticle, presumably arise from a failure to re-establish the status quo at sites of high morphogenetic activity [ 30 ]. The fact that spen mutant embryos die at the end of embryogenesis with sclerotic patches and holes in their epidermis in the ventrolateral thorax and lateral abdomen [[ 3 ]; K. Mace, J. Pearson, W. McGinnis, submitted], together with the evidence that the embryonic PNS is disorganized in these embryos, may suggest a defect in cell adhesion at these sites.
As shown in Figure 9 , embryos lacking spen function display abnormal PNS neuron positioning and morphology compared to wild type as revealed by 22C10 staining (compare Figures 9A and 9D ). The surrounding epidermal cells show a dramatic upregulation of E-cadherin (compare Figures 9B and 9E ). The placement of these abnormal neurons is precisely within this field of epidermal cells (see merge, Figure 9F ) that have been shown to be undergoing a wound response due to a failure of epidermal epithelial integrity [K Mace, J Pearson, W McGinnis, submitted]. Figure 9 Increased epidermal expression of E-cadherin correlates with abnormal positioning of embryonic PNS neurons . Stage 16 wild type embryos (A-C) or maternal and zygotic spen mutant embryos (D-F) were generated as described in Materials and Methods. The expression of the PNS neuronal marker 22C10 (green) and E-cadherin (red), were detected with specific MAbs followed by fluorescein conjugated anti-mouse antiserum and a Cy3 conjugated anti rat as described earlier in Figure 4, and in the Material and Methods section. Because E-cadherin is an important intercellular adhesion molecule, we wanted to test whether the defects seen in the developing adult wing were also associated with changes in its expression. Additionally, we assayed the expression of Crumbs, another component of the adherens junction, and β-tubulin, a marker of cell polarity. None of these proteins showed any detectable changes in expression or localization within spen mutant clones in the developing wing disc (not shown). It is possible that, because the wing disc epithelium is not subject to the same level of stress as the embryonic epidermis at the sites of internal sensory organ formation, upregulation of E-Cadherin expression is specific to the embryonic phenotype, and does not occur in wing discs. Therefore, it remains unclear how spen could affect, if at all, cell adhesion in this tissue. 
Discussion

spen mutant cells are viable

The generation of large spen mutant clones in adults, using heat-shock driven expression of FLP recombinase in a Minute background, indicated that spen function was essential during all stages of development. This result is consistent with the pleiotropic effects previously described for spen . However, when clones were generated specifically in wing imaginal discs, we observed that mutations in the spen gene did not affect the viability or the size of mutant cells during growth of the disc, as evidenced by normal cell size and number within the adult clones. These observations are intriguing given previous reports linking the function of spen to cell cycle progression [ 9 , 10 ]. In these studies, spen mutations were shown to enhance morphological abnormalities upon overexpression of wild type E2F [ 9 ] and Cyclin E [ 10 ], leading to the conclusion that spen has a negative regulatory role on these cell cycle regulators. If spen had a negative role in cell cycle progression in the wing imaginal disc, we would expect spen mutant clones to be larger than their twin spots after mitotic recombination. However, other authors have observed that spen mutant clones are indeed smaller than their wild type counterparts [ 13 ]. If that were the case, we would expect the spen mutant cells to increase their size in order to offset a decrease in cell division rates, as is the case for E2F mutants [ 20 ]. However, as stated above, both cell size and total area coverage were similar between spen mutant and wild type cells. This is in agreement with our observations in spen maternal and zygotic mutant embryos, which do not show differences in cell cycle progression relative to wt embryos as assessed by 5-bromodeoxyuridine (BrdU) incorporation, string transcript expression, or Proliferating Cell Nuclear Antigen (PCNA) protein expression (data not shown).
Likewise, there is no evidence of increased cell death in spen mutant cells, as revealed by Acridine Orange staining or reaper mRNA expression (data not shown). Therefore, we conclude that the function of the spen gene per se is not essential for cellular viability and normal progression through the cell cycle.

spen and Ras signaling

Recently, there has been increasing experimental evidence suggesting that the product of the spen gene might be an integral component of the Ras signaling cascade. Such an interaction has been found in the search for genes interacting with a viable allele of corkscrew [ 6 ], and in gain-of-function screens utilizing the overexpression of components of the Ras pathway during eye development. These include activated Raf [ 4 ], kinase suppressor of ras ( ksr ) [ 5 ], and a constitutive repressor form of Anterior Open (Aop)/Yan [ 7 ]. Wing vein formation in Drosophila provides an amenable system to analyze mutations affecting the MAPK pathway, as it depends on the function of the EGF receptor (DER), as well as other genes encoding components of the pathway, such as Star (S) , rhomboid (rho) , and the DER ligand, vein (vn) [ 19 , 31 ]. While loss of rho and S function results in non-autonomous loss-of-vein phenotypes, gain of function of DER and rho has the opposite effect [ 19 , 31 , 32 ]. If the product of the spen gene were an integral component of the Ras pathway, we would expect the loss of function of spen to generate phenotypes similar to those obtained with other genes acting in the DER pathway. However, this was not the case: spen mutant clones generated during wing development showed both indiscriminate loss and gain of vein material, resulting in a vein phenotype that could not be directly correlated with any mutant in the Ras pathway known to date. Similarly, the phenotypes observed in spen mutant embryos do not clearly correlate with mutations in components of the Ras signaling pathway.
For instance, the expression of orthodenticle ( otd ) mRNA at the ventral midline, which is dependent on DER function and is abnormal in mutants defective in this signaling pathway [ 33 ], was indistinguishable between spen maternal and zygotic mutant embryos and wild type embryos (data not shown). Thus, our data do not support a direct correlation between the loss of spen function and a specific defect in DER/Ras-dependent signaling during embryogenesis or during imaginal disc development.

spen and Notch signaling

A relationship between spen and Notch (N) function has been previously suggested, where the former appears to be necessary to maintain the embryonic expression of Suppressor of Hairless ( Su(H) ) [ 12 ], a downstream effector of N [ 34 ]. The observation that the gain of function of spen may also interact with N signaling [ 11 ] further strengthens the relationship between N and spen function. Additionally, there is supporting biochemical evidence indicating that both the human and murine orthologs of Spen interact with RBP-Jκ/CBF-1, a mammalian ortholog of Su(H) [ 17 , 18 ]. The interaction of both SHARP and MINT with RBP-Jκ/CBF-1 prevents the interaction of the latter with the intracellular fragment of activated N, thus suggesting that both SHARP and MINT are negative regulators of N signaling in mammals. On the other hand, inactivation of the murine MINT gene does not clearly reflect a defect in N signaling. Loss of function of MINT in hematopoietic precursors revealed that splenic B cells differentiated more efficiently toward the marginal zone type than toward the follicular type. This phenotype, attributed by the authors to a defect in N signaling, is in conflict with the fact that the selection of T versus B cell fates in the lymphoid lineage, also dependent on N signaling, appears unaffected [ 18 ].
Indeed, variations in the numbers of follicular and marginal zone B cells, as reported for the MINT-deficient B cells, may also be attributed to migration defects leading to their abnormal distribution within the spleen [ 35 ]. Thus, it seems that a specific role for MINT in the mammalian N pathway has not yet been clearly defined. The displacement of veins observed in adult wings in the presence of spen mutant clones, frequently accompanied by widening of the veins, is a phenotype consistent with defects in the N signaling pathway [ 19 ]. Furthermore, this phenotype correlates with abnormal (diffuse and/or ectopic) expression of Delta (Dl) in third instar imaginal discs, both in spen mutant clones and in cells adjacent to these clones. However, the role of the N signaling pathway has been well established as crucial for the determination of cell types during the development of external sensory organs. Defects in the N pathway alter sense organ cellular composition by affecting alternative cell fate decisions [ 36 , 37 ]. In our experimental system, the generation of spen mutant clones did not interfere with bristle formation per se , but with their spatial distribution. Both micro- and macrochaetae were incorrectly positioned throughout the notum and, in some cases, chaetae were absent, but ectopic supernumerary bristles were also seen. Nevertheless, all external sensory organs appeared to have normal morphology, suggesting that there were no mis-specifications of cell fates, as would be expected for N signaling defects. Further supporting evidence is that the expression of Cut at the wing margin, which depends upon both Su(H) and N function [ 26 , 27 ], was detected within spen mutant clones. Therefore, we conclude that the function of spen is not essential for N signaling in wing imaginal discs.
spen and Wg signaling

Recent evidence suggests that spen is required to transduce some aspects of the Wingless (Wg) signal in the wing imaginal disc, showing a requirement for spen for the expression of senseless , a downstream target of the Wg pathway [ 13 ]. The loss of Senseless in spen mutant clones would be predicted to lead to the absence of external sensory organs in adult wings [ 38 ], a phenotype that we did not observe with the alleles tested. Furthermore, we could not consistently correlate any of the abnormalities observed in adult wings containing spen mutant clones to known defects in the Wg signaling pathway. Clonal analysis in third instar imaginal discs did not reveal specific defects in the Wg pathway either: Wg signaling is required for Dl expression at the wing margin [ 27 ], and Dl expression was topologically affected but not absent in spen mutant clones. Taken together, it appears that spen function is required for one Wg signaling target (Senseless) [ 13 ], but not for others (Dl). These results, which could be explained by the differential penetrance of the spen alleles used, do not seem to support a principal role for Spen in Wg signaling during wing imaginal disc development.

A general role for Spen?

Based on previous reports and the data presented herein, the available evidence does not support a specific role for spen in any particular signaling pathway. We propose that the common theme that best defines spen function at the morphological level is that it appears necessary for the correct spatial organization of individual cells within a specific group during growth and development. How could Spen instruct cells to maintain a specific position without affecting their fate directly? A plausible explanation is that it could affect cell adhesion.
In fact, in spen mutant embryos, the expression of E-cadherin was up-regulated at sites of high epithelial morphogenetic activity, generating a phenotype similar to that of E-cadherin mutant embryos, as has been observed in other studies [ 39 ]. It is plausible that the increase in E-cadherin expression is the result of a wound response to a defect in epithelial integrity caused by spen mutations [K. Mace, J. Pearson, and W. McGinnis, submitted]. A defect in cell adhesion and/or cytoskeletal rearrangements could also explain specific aspects of the spen embryonic phenotype. The holes that result in abnormal cuticle deposition in the embryonic epidermis are due to a failure of epidermal epithelial integrity. These cells subsequently undergo a wound response at the end of embryogenesis. Additionally, some of the phenotypes resulting from the loss of spen are indeed similar to those seen in mutants for the gene encoding Dachsous, a cadherin involved in cell adhesion [ 40 ]. However, blistering of the wings, a phenotype that is often found in cell adhesion mutants, was never observed in any of the spen mutant clones. A role for Spen in cell adhesion and/or cytoskeletal rearrangements could also be inferred from its genetic interaction with crinkled (Myosin VIIA), and from the planar cell polarity phenotype observed in spen mutant cells in the wing blade. Myosin VIIA is associated with the cadherin-catenin complex and participates in the creation of a tension force between the actin cytoskeleton and adherens junctions, which is predicted to strengthen cell-cell adhesion [ 41 ]. Furthermore, ck acts downstream of Drosophila Rho-associated kinase ( Drok ), which links Frizzled-mediated planar cell polarity signaling to the actin cytoskeleton [ 42 ]. Myosin VIIA mutations have been described in vertebrates, including those causing Usher syndrome in humans [ 43 ], the shaker-1 mutation in mice [ 44 ], and the mariner mutation in zebrafish [ 45 ].
Interestingly, these mutations, among other symptoms, cause splaying and abnormal distribution of sensory hair cells in the inner ear, leading to deafness in mice and humans, and to mechanosensory defects in zebrafish. It seems plausible that spen may regulate the expression or function of components affecting the outcome of pathways involved in cytoskeletal rearrangements and epithelial planar polarity and, hence, affect cell positioning. However, a direct requirement for spen function in the Ck or Drok pathways is unlikely, since mutations in these genes result in different phenotypes than those observed in spen mutants. An influential role for spen in mechanisms of intercellular adhesion and/or cytoskeletal rearrangements may also be relevant to understanding its suggested role in human cancer. A search of public human sequence resources [ 46 , 47 ] reveals one spen ortholog (SHARP) and three putative Short Spen-like Protein (SSLP) orthologs in the human genome (Figure 10 ). At least one of these genes (OTT/RBM15) is involved in a recurrent translocation detected in acute megakaryocytic leukemia [ 48 , 49 ], and a potentially aberrant transcript for another human SSLP ortholog at 3p21 has been identified in cDNA isolated from human cancer cells (Figure 10 ). Despite the presence of common domains, the functional relationship between large and small Spen-related polypeptides is still unknown. It is plausible that in Drosophila , SSLP might rescue some of the functions of spen during early embryonic development, as suggested by the incomplete penetrance of phenotypes seen in spen maternal and zygotic mutant embryos. Complementation at this level has been suggested by others to explain the incomplete penetrance of spen mutations in wing discs [ 13 ], although it should be noted that the region required for Spen to interact with transcription factors such as Msx-2 or nuclear receptors is apparently missing in SSLP proteins.
Therefore, the potential redundancy of Spen and SSLP remains to be determined.

Figure 10 Human spen -related genes . The figure shows all related Drosophila and human sequences putatively encoding polypeptides with three RNP-type RNA binding motifs at the N terminus (purple boxes), and the SPOC domain at the C terminus (yellow boxes). The gray box on Hs SHARP indicates the region that contains motifs required for the interaction of SHARP/MINT with Msx-2 (residues 2070 to 2394 [14]), nuclear receptors (residues 2201 to 2707 [15]), and RBP-Jκ (residues 2803 to 2817 [17], and 2638 to 2777 [18]). The expected size of each peptide is shown on the right, while names and chromosomal localizations in humans are on the left. Asterisks indicate that the peptides have been predicted from the genomic sequence, either because there are no known full-length ESTs corresponding to the genomic regions analyzed (the case for Hs SSLP at 3p21), or because there are no reported ESTs at all (Hs SSLP at 5q23). In the case of the SSLP at 5q23, there are also stop codons in frame with the putative ORF, so it is likely that this sequence represents a pseudogene. Accession numbers for the sequences likely to represent full-length cDNAs are: AAF13218 (Dm SPEN), NP_055816 (Hs SHARP), AAF59160 (Dm SSLP), NP_073605 (OTT/RBM15), and CAC38829 (OTT-MAL fusion). The putative full-length ORF for the Hs SSLP at 3p21 was predicted using GENESCAN [54] on the genomic sequence AC092037. The truncated cDNA arising from this gene is found under AAA72367 or NP_037418. The Hs SSLP at 5q23 was predicted with GENESCAN from the genomic segment AC005915, and the assembly was completed with ENSP322787, a predicted peptide from the Ensembl Database [42].

Conclusions

We have shown that the function of the spen gene is essential for all stages of development.
The experimental evidence indicates that Spen participates in processes that regulate planar cell polarity and may influence cytoskeletal organization, and its loss results in specific phenotypes that cannot be explained solely by defects in a single signaling pathway. In order to unify our observations with those previously reported by others, we propose that the function of Spen is necessary for the maintenance of correct cell positioning during growth, ensuring that structures that are determined early during development are correctly positioned in the adult (Figure 11 ). As cells are determined early during development to become part of a specific structure, their position has to be maintained during growth according to a pre-established pattern. If cells were unable to maintain their position, we would expect phenotypes similar to those obtained in spen mutant clones. Structures would be misplaced and, in some cases, would be absent if the cells that were predetermined to adopt a specific fate fell within "forbidden" positions (Figure 11 ). This mechanistic model could explain why spen interacts genetically with signaling pathways that require and/or specify precise spatial organization during metazoan development.

Figure 11 A mechanistic model for Spen function . The cartoon illustrates our conclusions regarding the defects seen in the presence of spen mutant clones. Wild type or heterozygous cells are depicted with green nuclei, and spen mutant cells with gray nuclei. The appearance of spen mutant cells in fields that will give rise to specific structures, such as bristles or veins, would imply the lack of an instructive signal to remain in place during growth of the disc. This would ultimately result in a progressive mislocalization of cells, leading to the abnormal positioning of structures after development is completed.
Such a model would explain how, in some cases, as in the proneural clusters, a change of fate is generated because negative instructive signals that depend on cell-to-cell contact are lost, resulting in the formation of two sensory organ precursor (SOP) cells within the same proneural cluster (shown as red cells in B and C). The same situation may occur in the formation of wing veins, where similar Notch-dependent regulatory mechanisms take place (E). Loss of veins (or bristles) may occur when vein-forming cells move into intervein regions after their commitment has taken place, leaving a gap where they should have been that is filled with cells unable to form vein at that spot.

Methods

Drosophila stocks

All flies were grown at 21°C in standard medium, and were obtained from the Bloomington Stock Center, unless indicated otherwise. The stocks { y , w; spen poc 361 , ck, FRT 40 A /In(2LR)O, Cy }, { MS1096-GAL4; spen poc 361 , ck, FRT 40 A /In(2LR)O, Cy , wg - lacZ }, { MS1096-GAL4; spen poc 231 , FRT 40 A /In(2LR)O , Cy , wg - lacZ }, { MS1096-GAL4; ck, FRT 40 A /In(2LR)O, Cy , wg-lacZ }, { MS1096-GAL4; 2 piM FRT 40 A / In(2LR)O , Cy , wg-lacZ }, { ck FRT 40 A / In(2LR)O , Cy; dpp DISC - GAL4/TM6B }, { spen poc 361 , ck FRT 40 A /In(2LR)O, Cy; dpp DISC - GAL4/TM6B }, { spen poc 361 , FRT 40 A /In(2LR)O , Cy; dpp DISC - GAL4/TM6B }, { spen poc 231 , FRT 40 A /In(2LR)O , Cy; dpp DISC - GAL4/TM6B }, and { y , w; GFP(2L), FRT 40 A ; UAS-Flp1/TM6B } were generated with the following lines: { spen poc 231 /In(2LR)O , Cy , wg-lacZ }, { spen poc 361 /In(2LR)O , Cy , wg-lacZ } [ 3 ], { ck, FRT 40 A /In(2LR)O , Cy } (obtained through recombination from the line { S X 155 , ck, FRT 40 A /In(2LR)O , Cy } [ 31 ]), { w*; P { w + mC = UAS-Flp1.D } JD2 /TM3 , Sb 1 }, { w 1118 ; P { w + mC = Ubi-GFP(S65T)nls } 2L P { ry + t 7.2 = neoFRT 40 A } /In(2LR)O , Cy }, { w 1118 ; P { w + mC = piM } 21C P { w + mC = piM}36F P{ry + t 7.2 = neoFRT 40 A }, {w 1118
; +; dpp DISC -GAL4/In(3LR)TM6B } [ 28 ], and { MS1096-GAL4 X } [ 22 ]. The line { hsFlp1; M(2)36F, FRT 40 A } was generated with { hsFlp1; noc Sco /In(2LR)O, Cy }, { y , w; FRT 40 A }, and { M(2)36F /SM5 }. Other stocks used were { ck 14 /In(2LR)O, Cy }, { ck 16 / In(2LR)O, Cy }, and { y, w*; P { w + mC = UAS-GFP::lacZ.nls } 15:3 }. The ovo D technique [ 50 ] was used to generate maternal germline clones as previously described [ 3 ]. These females were crossed to { Df (2L)TE21A/In(2LR)O, Cy, wg-lacZ } males, which carry a deletion spanning the spen locus, to obtain maternal and zygotic mutant embryos that were collected at 25°C.

Immunodetection

To preserve GFP fluorescence, third instar imaginal discs were collected in cold PBS and fixed for 10–20 minutes on ice with methanol-free 10% formaldehyde (Polysciences, Inc, Warrington, PA). After washing thoroughly with PBT (PBS with 0.1% Tween 20), and preincubating in the same buffer containing 10% Bovine Serum Albumin (BSA), the fixed discs were incubated with antibodies in PBT with 1% BSA. The purified mouse anti-Drosophila Delta 594-9B monoclonal antibody (MAb) [ 51 ] was used at a 1:1000 dilution. The 22C10 [ 52 ] and anti-Cut 2B10 [ 53 ] MAbs were obtained from the Developmental Studies Hybridoma Bank (DSHB, University of Iowa Department of Biological Sciences, Iowa City, IA), and used as recommended. The E-Cadherin rat MAb [ 54 ] was used at a 1:20 dilution. For fluorescent detection, FITC- or Cy3-conjugated donkey anti-mouse, or goat anti-rat (Jackson Immunoresearch, West Grove, PA) antisera were used. Discs were whole-mounted in mounting medium (Vector Laboratories, Burlingame, CA). Fluorescent images were captured with a Spot Digital Camera (Diagnostic Instruments, Inc, Sterling Heights, MI) using a Nikon Microphot-FXA microscope. Confocal images were acquired on a Leica TCS SP2 confocal microscope (Mannheim, Germany).
The embryonic expression of the neuron-specific marker Elav was immunodetected as described [ 55 ] with the MAb 9F8A9 [ 56 ], obtained from the DSHB, which was used as a culture supernatant at 1:100, followed by incubation with a biotinylated goat anti-mouse (Jackson Immunoresearch, West Grove, PA) and a streptavidin-horseradish peroxidase (HRP) conjugate (DAKO, Carpinteria, CA). Similarly, β-galactosidase (LacZ) was detected with a rabbit antiserum (Cappel-ICN Biomedicals, Irvine, CA) at a 1:100 dilution, followed by a biotinylated goat anti-rabbit as above. Peroxidase activity was detected with the Immunopure Metal Enhanced DAB Substrate Kit (Pierce Biotechnology, Rockford, IL).

Adult structures

Adult flies were collected in 70% ethanol, and stored in isopropanol. Wings were detached from the dehydrated adults and mounted with DPX (Fluka, Buchs, Switzerland). Notums were dissected, embedded in Lactic Acid:Hoyer's (1:2) [ 57 ], and photographed in the same medium after clearing (usually 24 hours) using the equipment described above.

Computer-aided sequence analysis

Human genomic, and human and Drosophila cDNA sequences were retrieved from the Ensembl Genome Server [ 46 ], and from databases at the National Center for Biotechnology Information [ 47 ]. Sequence searches were performed using BLAST [ 58 ]. Composite multiple alignments were performed with MACAW [ 59 ] and Clustal X [ 60 ]. Genomic DNA sequences coding for putative spen -related cDNAs in the human genome were analyzed using GENESCAN [ 61 ].
Abbreviations used

DAB: Diaminobenzidine
GFP: Green Fluorescent Protein
HRP: Horseradish peroxidase
LacZ: β-galactosidase
MAb: Monoclonal antibody
SSLP: Short Spen-like protein

Authors' contributions

KM performed the experiments shown in Figure 9 , contributed to the generation and analysis of maternal and zygotic spen mutant embryos, and examined the expression of cell adhesion and polarity markers in spen mutant clones in wing discs, most of which are not shown. She also actively participated in the writing and the elaboration of the conclusions of this work. AT did the rest. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC544560.xml |
554766 | Impact of tetrachloroethylene-contaminated drinking water on the risk of breast cancer: Using a dose model to assess exposure in a case-control study | Background A population-based case-control study was undertaken in 1997 to investigate the association between tetrachloroethylene (PCE) exposure from public drinking water and breast cancer among permanent residents of the Cape Cod region of Massachusetts. PCE, a volatile organic chemical, leached from the vinyl lining of certain water distribution pipes into drinking water from the late 1960s through the early 1980s. The measure of exposure in the original study, referred to as the relative delivered dose (RDD), was based on the amount of PCE in the tap water entering the home and was estimated with a mathematical model that involved only characteristics of the distribution system. Methods In the current analysis, we constructed a personal delivered dose (PDD) model that included personal information on tap water consumption and bathing habits so that inhalation, ingestion, and dermal absorption were also considered. We reanalyzed the association between PCE and breast cancer and compared the results to the original RDD analysis of subjects with complete data. Results The PDD model produced higher adjusted odds ratios than the RDD model for exposures >50th and >75th percentile when shorter latency periods were considered, and for exposures ≤50th and >90th percentile when longer latency periods were considered. Overall, however, the results from the PDD analysis did not differ greatly from the RDD analysis. Conclusion The inputs that most heavily influenced the PDD model were initial water concentration and duration of exposure. These variables were also included in the RDD model.
In this study population, personal factors such as bath and shower temperature, bathing frequencies and durations, and water consumption did not differ greatly among subjects, so including this information in the model did not significantly change subjects' exposure classification. | Background In 1988, an unusually high incidence of cancer in the Cape Cod region of Massachusetts prompted a series of epidemiological studies to investigate possible environmental risk factors associated with the region, including tetrachloroethylene-contaminated drinking water [ 1 - 7 ]. Tetrachloroethylene (or perchloroethylene, PCE) entered the drinking water when it leached from vinyl liners of water distribution pipes introduced in the late 1960s. When the contamination was discovered in 1980, the Massachusetts Department of Environmental Protection began flushing and bleeding the pipes. At that time, the suggested limit set by the Environmental Protection Agency (EPA) was 40 ppb [ 8 ], but it has since been lowered to a mandatory Maximum Contaminant Level (MCL) of 5 ppb. A population-based case-control study was undertaken to investigate the association between tetrachloroethylene exposure from public drinking water and breast cancer [ 5 ]. The study defined exposure using a cumulative measure that Webler and Brown termed the relative delivered dose (RDD) [ 9 ]. Calculations for the RDD use the rate at which PCE leached from the pipe liner, the surface area of the interior of the pipe, and the upstream load. The RDD is proportional to the total delivered mass of PCE entering each residence over time; constants, and variables assumed to be constant, were dropped from the calculation. While this allowed the population to be grouped into exposure categories, the computed RDD value is not an actual water concentration. Refer to Webler and Brown for a detailed description of the RDD model [ 9 ].
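The RDD bookkeeping described above (leach rate × liner surface area, accumulated with the load arriving from upstream along the flow path) can be sketched in a few lines of code. This is only an illustration of the idea under simplifying assumptions, not Webler and Brown's actual model: in particular, the leach rate is held constant here, whereas the real leach rate declined after installation, and all field names and values are hypothetical.

```python
def relative_delivered_dose(path_segments, years_exposed):
    """Toy RDD: PCE mass delivered to a tap accumulates down the flow
    path as (leach rate x vinyl-liner surface area) plus the load from
    upstream segments. The leach rate is held constant for brevity."""
    upstream_load = 0.0
    for seg in path_segments:  # ordered from the source toward the residence
        upstream_load += seg["leach_rate_mg_per_m2_yr"] * seg["liner_area_m2"]
    # cumulative relative mass over the years of residency
    return upstream_load * years_exposed
```

A residence served through more vinyl-lined pipe, or occupied for longer, accumulates a proportionally larger relative dose, which is all the RDD is meant to capture.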
Because PCE is a volatile organic chemical that readily escapes from water into air, the amount of PCE inhaled during showers and baths, as well as the amount ingested and dermally absorbed, was relevant. The RDD measure does not consider these exposure pathways, which could potentially result in bias from exposure misclassification. Using personal exposure factors such as tap water consumption and bathing habits, we constructed a dose model to quantify the relative amount of PCE taken in by each subject, which we refer to as the personal delivered dose (PDD). The dose values calculated by the PDD model were subsequently used to measure the strength of the association between PCE exposure and the risk of breast cancer. The objective was to determine whether the additional information contained in individual survey data affected associations between breast cancer and PCE exposure.

Methods

Study Population

The population-based case-control study was designed to evaluate the association between breast cancer and tetrachloroethylene (PCE) exposure from public drinking water [ 5 ]. During the period 1987–1993, the Massachusetts Cancer Registry recorded 672 incident cases of female breast cancer among permanent residents of the Massachusetts towns of Barnstable, Bourne, Brewster, Chatham, Falmouth, Mashpee, Provincetown, and Sandwich, where pipes with PCE-containing vinyl liners had been installed. Female controls were chosen to represent the underlying population that gave rise to the cases. Selection criteria required controls to be permanent residents of the same towns during 1987–1993. Controls were frequency matched to cases on age and vital status.
Because many of the cases were elderly or deceased, three different sources of controls were used: (1) random digit dialing identified living controls less than 65 years of age; (2) the Centers for Medicare and Medicaid Services, formerly the Health Care Financing Administration, identified living controls 65 years of age or older; and (3) death certificates identified controls who had died from 1987 onward. The resulting 616 controls provide an estimate of the exposure distribution in the underlying population. Subjects or their next-of-kin completed extensive interviews, which provided information on demographics (e.g., age, sex, marital status, education), a 40-year residential history, and potential confounders (e.g., age, family history of breast cancer, age at first live birth or stillbirth, oral contraceptive use). Next-of-kin served as proxies for cases and controls who were deceased or too ill to participate in the interview. "Index years" were randomly assigned to controls to achieve a distribution similar to that of cases' diagnosis years, and only exposures before the diagnosis year (for cases) and index year (for controls) were counted. The analysis considered a range of latent periods: 0, 5, 7, 9, 11, 13, 15, 17, and 19 years. For a detailed description of the methods, see Aschengrau et al. [ 5 ].

Dose Model

If individual behavior in water use is an important element in a person's exposure, using the relative delivered dose (RDD) could bias the results. The RDD quantifies the amount of PCE in the drinking water, but does not consider exposure from inhalation, dermal absorption, and ingestion. PCE is a volatile organic compound, and daily indoor inhalation exposure to contaminated water from showering can be up to six times greater than exposure from ingestion [ 10 ]. To further quantify dose and reduce exposure misclassification, a number of personal factors (e.g., bottled water consumption, duration and frequency of showers and baths) were considered.
Non-proxy cases and controls were interviewed about many of these factors: the number of glasses of tap water consumed per day, including drinks made with tap water, such as coffee or lemonade; the use of bottled water; and the temperature, frequency, and duration of showers and baths. Information on a subject's physical characteristics, such as height and usual weight, was also obtained. Certain model parameters not provided by the questionnaire were obtained from the current scientific literature (e.g., inhalation rate, water flow rate, air exchange rate). We used this information to construct a personal delivered dose (PDD) model that considered three exposure routes: inhalation, dermal absorption, and ingestion. The RDD value was converted into an annual concentration (mg/L) and used as the initial water concentration for the PDD model. The amount of PCE contributed by inhalation is a function of the temperature, frequency, and duration of baths and showers, and the concentration of PCE in the bathtub/shower stall air. To determine the amount of PCE that volatilized from the water, the two-resistance theory was applied to temperature-dependent physical and chemical properties of PCE [ 11 ]. The dermal absorption component of the model estimated each subject's surface area (from her height and weight) and determined the amount of PCE absorbed during baths and showers using Fick's first law [ 12 ]. The amount of PCE that a subject ingested was dependent on the volume of tap water consumed. By summing the total amount of PCE from the three exposure routes over all exposed residences, we arrived at a personal delivered dose (PDD) for each subject. A detailed description of the dose model is provided in Additional file 1 : Dose Model Appendix.

Data Analysis

Questions regarding tap water use and bathing habits were not asked in proxy interviews, so the PDD analysis was restricted to non-proxy subjects (n = 885, Table 1 ).
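The three-route summation at the heart of the PDD model can be sketched as follows. This is an illustrative skeleton only: the actual model derives the volatilized fraction from two-resistance theory and the dermal flux from Fick's first law with temperature-dependent parameters (see the Dose Model Appendix), and the unit handling here is deliberately simplified. All field names and coefficients are hypothetical.

```python
def personal_delivered_dose(residences):
    """Toy PDD: sum ingestion, inhalation, and dermal contributions over
    all exposed residences. Coefficients are placeholders, not the
    paper's transfer/permeability constants; units are only schematic."""
    total = 0.0
    for r in residences:
        c = r["water_conc_mg_per_L"]       # annual PCE water concentration
        days = 365 * r["years_exposed"]
        # ingestion: concentration x daily tap-water volume
        ingestion = c * r["tap_water_L_per_day"] * days
        # inhalation: volatilized fraction x breathing rate x bathing time
        inhalation = (c * r["volatilized_fraction"]
                      * r["inhalation_rate"]
                      * r["bath_shower_min_per_day"] * days)
        # dermal: Fick's-law-style flux x skin surface area x bathing time
        dermal = (c * r["skin_permeability"]
                  * r["surface_area_cm2"] * 1e-3
                  * r["bath_shower_hr_per_day"] * days)
        total += ingestion + inhalation + dermal
    return total
```

The design point is simply that two subjects served by identical water can still receive different doses if their tap-water consumption or bathing habits differ, which is the information the RDD ignores.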
To accurately compare results from the RDD and PDD analyses, we first recalculated associations using the original RDD exposure measure for only the non-proxy subjects. Women with cumulative RDD exposures were compared with never-exposed women, defined as women who never lived downstream of vinyl-lined pipes.

Table 1. Number of subjects by proxy/non-proxy, PCE-exposed/unexposed, and case/control status.

                 Non-Proxy Subjects   Proxy Subjects   Total Subjects
PCE-exposed              189                102              291
  Cases                  101                 54              155
  Controls                88                 48              136
Unexposed                696                301              997
  Cases                  360                157              517
  Controls               336                144              480
Total                    885                403             1288
  Cases                  461                211              672
  Controls               424                192              616

We defined a series of four exposure levels based on the exposure distribution of exposed controls. The lowest exposure level included all exposed subjects with RDD values less than or equal to the 50th percentile. The remaining exposure levels were nested and included all RDD values greater than the 50th percentile, greater than the 75th percentile, and greater than the 90th percentile. Therefore, a subject exposed at the >90th percentile level was also considered exposed at the >75th and >50th percentile levels. We chose to nest exposure categories because there were too few subjects for mutually exclusive categories. There are no previous studies comparing nested exposure categories to mutually exclusive exposure categories. Exposure groups were further categorized for latent periods that ranged from 0 to 19 years. Each exposure level was treated as a binary variable in separate multiple logistic regression models. Odds ratios (ORs) were calculated for each exposure level relative to never-exposed cases (n = 360) and controls (n = 336). The adjusted analysis controlled for a group of core confounders: age at diagnosis or index year, family history of breast cancer, personal history of breast cancer (before current diagnosis or index year), age at first live birth or stillbirth, and occupational exposure to PCE.
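The nested exposure indicators described above can be sketched as follows, assuming a nearest-rank percentile convention (the text does not state which percentile definition was used):

```python
import math

def nested_exposure_levels(dose, exposed_control_doses):
    """Nested binary exposure indicators relative to the exposed-control
    dose distribution. Nearest-rank percentiles are an assumption."""
    vals = sorted(exposed_control_doses)
    cuts = {}
    for p in (50, 75, 90):
        k = max(0, math.ceil(p / 100 * len(vals)) - 1)  # nearest-rank cut point
        cuts[p] = vals[k]
    return {
        "le_p50": dose <= cuts[50],   # lowest exposure level
        "gt_p50": dose > cuts[50],    # nested: also true for all higher levels
        "gt_p75": dose > cuts[75],
        "gt_p90": dose > cuts[90],
    }
```

Each indicator would then enter its own logistic regression model as a binary variable, exactly because the categories are nested rather than mutually exclusive.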
These factors were chosen as confounders a priori based on the current scientific literature. Additional potential confounders were added to the logistic regression models along with the core confounders, including history of benign breast disease; past use of diethylstilbestrol, oral contraceptives, and menopausal hormones; cigarette smoking history; alcohol drinking history; history of ionizing radiation treatment; Quetelet index (a measure of obesity); race; marital status; religion; education level; and physical activity level. None of these additional variables changed the adjusted estimates by more than 10%, and so the final models included only the core confounders. Adjusted analyses were not performed if there were fewer than three exposed cases and three exposed controls in an exposure level [ 5 ]. We calculated 95% confidence intervals (CIs) for the adjusted ORs using maximum likelihood estimates of the standard errors [ 13 ]. We then repeated the crude and adjusted analyses using each subject's personal delivered dose (PDD) as an exposure measure. The PDD distributions of the exposed controls were used to define the same four exposure levels: less than or equal to the 50th percentile, greater than the 50th percentile, greater than the 75th percentile, and greater than the 90th percentile. The referent category remained never-exposed cases and controls. We also conducted a goodness-of-fit analysis to compare the RDD and PDD exposure measures and to determine which model performed better [ 14 ]. We compared the deviance of the models at different exposure levels and latencies. Lastly, we performed a nonparametric rank test to determine if the ranks of the subjects' PDD exposures differed significantly from the ranks of their RDD exposures.

Results

RDD analysis

We were interested in comparing the results of the Aschengrau et al. RDD analysis using all subjects to the restricted analysis performed on only non-proxy subjects.
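As a concreteness check, a crude odds ratio and a Wald-type confidence interval can be computed directly from the 2x2 counts. Using the non-proxy, no-latency counts reported in Table 3 (101/88 exposed cases/controls vs. 360/336 never-exposed) gives a crude OR of about 1.07, in line with the adjusted 1.1. The Wald formula is a standard textbook sketch, not necessarily the exact variance estimator of reference [13].

```python
import math

def crude_or(exp_cases, exp_controls, unexp_cases, unexp_controls):
    # Cross-product ratio of the 2x2 exposure-by-outcome table.
    return (exp_cases * unexp_controls) / (exp_controls * unexp_cases)

def wald_ci(exp_cases, exp_controls, unexp_cases, unexp_controls, z=1.96):
    # 95% CI: exp(ln OR +/- z*SE), SE = sqrt of summed reciprocal cell counts.
    or_ = crude_or(exp_cases, exp_controls, unexp_cases, unexp_controls)
    se = math.sqrt(1/exp_cases + 1/exp_controls + 1/unexp_cases + 1/unexp_controls)
    return or_ * math.exp(-z * se), or_ * math.exp(z * se)

# Non-proxy, no-latency counts from Table 3.
or_np = crude_or(101, 88, 360, 336)       # ~1.07
ci_np = wald_ci(101, 88, 360, 336)        # roughly (0.78, 1.48)
```

The crude interval closely brackets the reported adjusted 1.1 (0.8–1.5), consistent with the core confounders having only a modest effect at this exposure level.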
The distributions of core confounders were similar among non-proxy and all subjects, except non-proxy subjects were younger than all subjects (Table 2). The number of exposed subjects was reduced by 35% when proxies were removed (from 291 to 189) in the no-latency analysis. The number of unexposed subjects used as a common reference group for all analyses was reduced by 30% (from 997 to 696). The median, 75th percentile, and 90th percentile RDD values for the non-proxy exposed controls were similar to the values for the exposed controls among all subjects (Table 2). We compared analyses for ever vs. never PCE-exposed and found that the odds ratios were similar for the non-proxy subjects and all subjects (Table 3) [ 5 ].

Table 2. Distribution of selected confounders of breast cancer subjects, % (n), and RDDs of PCE-exposed controls.

Characteristic                     Non-Proxy Cases  Non-Proxy Controls  All Cases    All Controls
                                   (n = 461)        (n = 424)           (n = 672)    (n = 616)
Age at diagnosis or index year
  1–49 years                       19.7 (91)        20.6 (87)           16.5 (111)   16.7 (103)
  50–59 years                      13.7 (63)        17.0 (72)           12.2 (82)    13.6 (84)
  60–69 years                      33.0 (152)       31.1 (132)          31.5 (211)   29.9 (184)
  70–79 years                      28.8 (133)       25.2 (107)          28.4 (191)   26.0 (160)
  80+ years                        4.8 (22)         6.1 (26)            11.4 (77)    13.8 (85)
Age at first birth or stillbirth
  < 30 years                       60.6 (279)       65.7 (278)          61.0 (410)   66.8 (411)
  30+ years                        13.8 (64)        12.4 (53)           14.5 (97)    12.8 (79)
  Nulliparous                      25.6 (118)       21.9 (93)           24.5 (165)   20.4 (126)
Prior breast cancer                5.6 (26)         3.3 (14)            5.4 (36)     4.8 (30)
Family history of breast cancer    24.3 (112)       15.8 (67)           25.6 (172)   15.5 (95)
Occupational exposure to PCE       16.3 (75)        16.0 (68)           15.5 (104)   14.8 (91)
RDD exposure (for no latency)
  Minimum                          ---              0.001               ---          0.001
  Maximum                          ---              206.9               ---          243.8
  Median                           ---              2.9                 ---          2.5
  75th percentile                  ---              11.9                ---          12.1
  90th percentile                  ---              31.0                ---          29.2

Table 3. Tetrachloroethylene exposure history of breast cancer subjects, adjusted^a odds ratios, and 95% confidence intervals.
Latency (years)   Subjects    Exposed Cases   Exposed Controls   Adjusted OR (95% CI)
0                 Non-proxy   101             88                 1.1 (0.8–1.5)
                  All         155             136                1.1 (0.8–1.4)
5                 Non-proxy   87              69                 1.2 (0.9–1.8)
                  All         129             107                1.2 (0.9–1.6)
7                 Non-proxy   71              61                 1.1 (0.8–1.6)
                  All         111             96                 1.1 (0.8–1.5)
9                 Non-proxy   63              57                 1.1 (0.7–1.6)
                  All         97              85                 1.1 (0.8–1.5)
11                Non-proxy   49              43                 1.1 (0.6–1.7)
                  All         79              65                 1.2 (0.8–1.7)
13                Non-proxy   43              32                 1.3 (0.7–2.1)
                  All         61              45                 1.3 (0.9–2.0)
15                Non-proxy   30              21                 1.4 (0.7–2.6)
                  All         44              31                 1.4 (0.9–2.3)
17                Non-proxy   15              15                 1.0 (0.4–2.2)
                  All         21              21                 1.0 (0.6–2.0)
19                Non-proxy   6               6                  1.1 (0.3–3.5)
                  All         9               9                  1.1 (0.4–2.9)

^a The OR was calculated relative to never-exposed cases (n = 360 for non-proxy, n = 517 for all subjects) and controls (n = 336 for non-proxy, n = 480 for all subjects). Controlled for age at diagnosis or index year, family history of breast cancer, personal history of breast cancer (before current diagnosis or index year), age at first live birth or stillbirth, occupational exposure to PCE, and vital status at interview (for the all-subjects analysis only).

PDD analysis

The distributions of cumulative RDD and PDD values spanned five orders of magnitude, equivalent to a range from micrograms to hundreds of milligrams. Of the 189 exposed subjects in the no-latency analysis, the personal delivered dose model changed the exposure categories of 39 subjects. However, a non-parametric signed rank test indicated that the subjects' RDD and PDD ranks did not differ significantly (p = 0.81). In general, odds ratios from the PDD analysis were slightly higher than those from the RDD analysis for exposure levels above the 50th and 75th percentiles at shorter latency periods (see Additional file 2 : Table 4).
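The paired rank comparison reported above (p = 0.81) is a Wilcoxon signed-rank test; a from-scratch illustration using the normal approximation, without tie or continuity corrections (the authors' exact procedure may differ), looks like this:

```python
import math

def signed_rank_test(x, y):
    """Wilcoxon signed-rank test via the normal approximation -- a
    minimal sketch of the paired RDD-vs-PDD rank comparison."""
    diffs = [b - a for a, b in zip(x, y) if b != a]  # drop zero differences
    n = len(diffs)
    if n == 0:
        return 0.0, 1.0
    # Rank differences by absolute size (1-based; ties not averaged here).
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    p_two_sided = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p_two_sided
```

Applied to each subject's paired RDD and PDD values, a large p (such as the reported 0.81) indicates that the two dose measures order the subjects similarly.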
At longer latencies, the ORs for the lowest and highest exposure groups in the PDD analysis were slightly higher than the RDD analysis, but small numbers of exposed subjects limited the adjusted analyses. The odds ratios for breast cancer increased with increased latency and higher exposure categories, although the odds ratios were not statistically significant. The confidence intervals were generally the same width for both the RDD and PDD analyses; they included the null value in both analyses, and grew wider as the exposure level and latency increased. Overall, the results from the PDD analysis did not differ greatly from the RDD analysis and any differences were well within the variation present in the RDD data, which formed the "input" to the PDD analysis. The best fitting model is often but not always the one that produces the higher odds ratio [ 14 ]. The deviance measure of goodness-of-fit was smaller for the PDD than the RDD model at shorter latencies and lower exposure levels and larger at longer latencies and higher exposure levels. However, the close agreement between the goodness-of-fit measures suggests that there is little difference between the two models (see Additional file 3 : Table 5). Further evidence of this is provided by the results of the nonparametric rank test, which indicated the two exposure rankings were not statistically different. Discussion The dose model was constructed to reduce nondifferential exposure misclassification due to variations in personal behavior. In the RDD analysis, exposure was based solely on subjects' RDD values and did not take into consideration factors such as bathing habits and bottled water consumption. Nondifferential exposure misclassification should bias results towards the null when the exposure is dichotomous. Based on this reasoning, we expected the moderate elevations in risk observed in the RDD analysis by Aschengrau et al. [ 5 ] to increase further in the current PDD analysis. 
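The deviance criterion used in the goodness-of-fit comparison above is minus twice the binomial log-likelihood of a fitted model, with smaller deviance indicating better fit. A minimal sketch (the logistic model fitting itself is omitted; the fitted probabilities below are hypothetical):

```python
import math

def deviance(outcomes, fitted_probs):
    # Residual deviance: -2 x binomial log-likelihood of the fitted model.
    # Smaller deviance indicates a better-fitting model.
    return -2 * sum(math.log(p) if y == 1 else math.log(1 - p)
                    for y, p in zip(outcomes, fitted_probs))

# Hypothetical illustration: a model whose fitted probabilities track the
# observed outcomes fits better (lower deviance) than an uninformative one.
y = [1, 0, 1, 0]
d_informative = deviance(y, [0.9, 0.1, 0.8, 0.2])
d_flat = deviance(y, [0.5, 0.5, 0.5, 0.5])
```

Comparing deviances of the RDD- and PDD-based logistic models in this way is what underlies the statement that the two fits were in close agreement.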
The results show that, in general, this was not the case. Overall, the risks calculated from the PDD analysis differed only slightly from the RDD analysis, if at all. The fact that the PDD model did not increase the odds ratios may be due to a number of reasons. A possible explanation is that no association exists between exposure to PCE and breast cancer, but there is a fairly large body of literature now that supports a carcinogenic effect for PCE in humans. The biologic rationale for a breast cancer effect stems from a hypothesis described by Labreche and Goldberg that organic solvents such as PCE may act either directly as genotoxic agents or indirectly through their metabolites to increase the risk of breast cancer [ 15 ]. More likely, the impact of variations in personal habits was small in comparison to variations in characteristics of the drinking water distribution system, or the questionnaire information did not accurately account for individual variations. Errors in estimating the RDD values used in the dose model may explain why the model made little difference in determining risk. Improper assumptions or incorrect input variables in the Webler-Brown model led to errors in the RDD values [ 5 ]. The resulting exposure misclassification would not be corrected using the dose model. As a result, the dose model would still be biased. Furthermore, both RDD and PDD are measures of cumulative exposure, where exposure was summed over a subject's residences on Cape Cod. One subject may have been exposed at a high intensity for two or more short residency durations while another subject with the same exposure value may have been exposed at a low intensity for one long residency duration. The exposure pattern can influence cancer risk if, for example, a threshold intensity of PCE must be reached in order to cause breast cancer or if breast cancer induction requires prolonged continuous exposure [ 16 ]. 
Another limitation of the analysis was the restriction to subjects with non-proxy interviews, which reduced the sample size by 31%. When all subjects were included in the RDD analysis, small to moderate increases were observed among women whose exposure level was greater than the 90 th percentile [ 5 ]. When only non-proxy subjects were included, we no longer observed moderate increases. This difference may be due to the fact that the maximum RDD value was higher for all subjects than for non-proxies. Therefore, the use of only non-proxy subjects may not accurately reflect population risk. Imputing values for proxy subjects is a possible option for future analyses. Faulty recall in the behavioral data is another possible reason why the PDD model did not strengthen the association between breast cancer and PCE. Subjects were asked to remember details about bathing habits and drinking water that occurred up to forty years before the interview. As a result, the exposure data obtained at interview may not be accurate. The inputs that most heavily influenced the PDD model were initial water concentration and duration of exposure. These variables were also included in the RDD model. In this study population, personal factors like bath and shower temperature, bathing frequencies and durations, and water consumption did not differ greatly among subjects. Therefore, including these characteristics in the PDD model did not significantly improve the exposure measure or change which subjects were considered exposed and to what level they were exposed. Conclusion In an attempt to characterize PCE exposure more precisely, we constructed a dose model that considered exposure from inhalation, ingestion, and dermal absorption. The model incorporated personal information on tap water use and bathing habits obtained from study interviews. 
The dose values calculated by the model were subsequently used to measure the strength of the association between PCE exposure and the risk of breast cancer. Although our results from the PDD analysis did not differ greatly from the RDD analysis, it remains important to assess exposure as accurately as practical in an epidemiological investigation. Many factors such as tap water use and bathing habits could be considered when determining exposure to volatile chemicals in domestic water supplies, but our analysis suggests that the use of such ancillary data does not always result in an improvement in exposure accuracy if the ancillary data are inaccurate or if they have little effect on an individual's exposure level.

Abbreviations

PCE, Tetrachloroethylene; RDD, Relative Delivered Dose; PDD, Personal Delivered Dose; EPA, Environmental Protection Agency; MCL, Maximum Contaminant Level; ORs, Odds Ratios; CIs, Confidence Intervals

Competing Interests

The author(s) declare they have no competing interests.

Authors' Contributions

VV created the dose model, conducted the statistical analyses, and drafted the manuscript. AA provided the data and assisted in epidemiologic analysis and editing. DO participated in the design of the study and the editing of the manuscript. All authors read and approved the final manuscript.

Supplementary Material

Additional File 1: This document describes the dose model in more detail.
Additional File 2: This document provides a table of adjusted odds ratios for breast cancer by tetrachloroethylene exposure levels in RDD and PDD analyses.
Additional File 3: This document provides a table of deviance measures for logistic regression models by tetrachloroethylene exposure levels in RDD and PDD analyses.
544549 | Whisker Velocity Patterns Tell Rats What They're Feeling

Whiskers don't fossilize, so it's hard to say when they first evolved. But it's quite likely they emerged along with mammals, over 200 million years ago. To elude the eye (and feet) of ungainly dinosaurs, it's thought these shrew-like prototypes foraged at night and sought refuge underground, where the sensory advantages of whiskers would come in handy. Nocturnal animals use whiskers much like the blind use walking sticks: to navigate their surroundings, explore close objects, and avoid running into things. Whiskers, or vibrissae, connect to nerves, blood vessels, and muscles. These special connections allow rats, for example, to actively “whisk” the surface of objects and discern fine differences in texture, just as we move our fingertips along a surface to pick up details. In the wild, whisking helps rats navigate unfamiliar terrain to find food. But how does the brain know what the animal is touching? Rat whiskers scan surfaces in a rhythmic motion that excites sensory receptor cells embedded in their whisker pad. Receptors in each whisker shaft are innervated by several hundred “first-order neurons” that relay sensory signals to second-order neurons in the brain stem, then on to third-order neurons in the thalamus, and finally on to the cortex, where sensory stimuli are integrated in cell clusters called barrels. Ehsan Arabzadeh, Erik Zorzin, and Mathew Diamond work with rats to investigate how sensory receptors extract fundamental features from complex and diverse stimuli to encode texture. Not much is known about how receptor and cortical neurons respond to active whisking along irregular surfaces, though responses to simple stimuli (like sinusoidal vibrations) suggest that neurons might represent texture by encoding kinetic features of whisker vibrations, in particular, velocity.
In a new study, Diamond and colleagues investigate the connection between textures, whisker vibrations, and neural codes: do distinct textures produce distinct vibrations? If so, how are these vibrations encoded and reported? Timing of neuronal activity captures sensory information The authors first collected kinetic data of whiskers moving across different textured surfaces. Stimulating cranial nerve VII of anesthetized rats (the motor nerve) generated whisking movements akin to those seen in conscious rats; the kinetics of these movements and the vibrations of the whisker shafts were measured under different conditions, including no contact with objects (“free whisk”), contact with smooth objects, and contact with various grades of sandpaper. These vibrations were then “played back” to other rats, while measuring the neuronal activity at two critical stages in the sensory pathway: the first-order neurons that innervate the whiskers and the barrel cortex neurons that integrate the incoming signal. Altogether, the authors collected a neural dataset consisting of first-order recordings, barrel cortical cluster recordings, and simultaneous paired recordings from both sites, all in response to playback of the library of texture-related vibrations. This approach afforded the opportunity to directly compare encoding of information at both levels in the sensory pathway. These recordings show, the authors argue, that temporally distinct firing patterns in the trigeminal ganglion (the cell bodies of the first-order neurons) and cortex captured the kinetic features of the texture-induced vibrations. Each texture's “kinetic signature” is encoded by a characteristic, temporally precise firing pattern associated with whisker movement. Compared to free whisking, coarse sandpaper produced irregular bursts of high and low velocity, and both first-order and cortical neurons fired far more impulses for coarse sandpaper than for free whisks. 
The authors then used stimuli consisting of random velocities to uncover the “tuning curves” of neurons, and simulations showed that these neuronal tuning curves accurately predicted the real neural responses to textures. Noting the close match between the simulated and natural responses, Diamond and colleagues conclude that the texture-induced firing patterns observed in the first-order and cortical neurons suggest that neurons selectively encode elemental kinetic features—namely, high velocity—to tell rats what they're whisking. This selectivity allows even a single whisker to transmit significant bits of texture-specific information to the brain. Interesting as rat whisking may be, these findings have relevance beyond the world of whiskered beings, shedding light on the underlying neural processes that translate touch into recognition.
509306 | Gcn4p and Novel Upstream Activating Sequences Regulate Targets of the Unfolded Protein Response

Abstract

Eukaryotic cells respond to accumulation of unfolded proteins in the endoplasmic reticulum (ER) by activating the unfolded protein response (UPR), a signal transduction pathway that communicates between the ER and the nucleus. In yeast, a large set of UPR target genes has been experimentally determined, but the previously characterized unfolded protein response element (UPRE), an upstream activating sequence (UAS) found in the promoter of the UPR target gene KAR2, cannot account for the transcriptional regulation of most genes in this set. To address this puzzle, we analyzed the promoters of UPR target genes computationally, identifying as candidate UASs short sequences that are statistically overrepresented. We tested the most promising of these candidate UASs for biological activity, and identified two novel UPREs, which are necessary and sufficient for UPR activation of promoters. A genetic screen for activators of the novel motifs revealed that the transcription factor Gcn4p plays an essential and previously unrecognized role in the UPR: Gcn4p and its activator Gcn2p are required for induction of a majority of UPR target genes during ER stress. Both Hac1p and Gcn4p bind target gene promoters to stimulate transcriptional induction. Regulation of Gcn4p levels in response to changing physiological conditions may function as an additional means to modulate the UPR. The discovery of a role for Gcn4p in the yeast UPR reveals an additional level of complexity and demonstrates a surprising conservation of the signaling circuit between yeast and metazoan cells.

Introduction

The vast majority of all cellular secretory and membrane proteins are folded and modified in the endoplasmic reticulum (ER), from which they are transported to their final destination in the secretory pathway.
When the protein folding capacity of the ER is exceeded or experimentally impaired, unfolded proteins accumulate in the ER and activate the unfolded protein response (UPR). The UPR allows the ER to communicate with the nucleus ( Patil and Walter 2001 ), where a comprehensive gene expression program is induced to adjust the protein folding capacity of the cell according to need. In the yeast S. cerevisiae, unfolded ER proteins stimulate the ER-resident bifunctional transmembrane kinase/endoribonuclease Ire1p ( Cox et al. 1993 ; Mori et al. 1993 ; Sidrauski and Walter 1997 ). When activated, Ire1p excises a 252-nucleotide intron from the mRNA encoding Hac1p, a bZIP transcription factor required for induction of all UPR target genes ( Cox and Walter 1996 ; Mori et al. 1996 ; Sidrauski and Walter 1997 ). Removal of the HAC1 intron and subsequent ligation of the two liberated exons by tRNA ligase ( Sidrauski et al. 1996 ) produces a spliced mRNA that is efficiently translated ( Kawahara et al. 1997 ). In the absence of splicing, the intron blocks translation of the mRNA ( Rüegsegger et al. 2001 ). Splicing is therefore a prerequisite for Hac1p production and thus serves as the key regulatory step in the UPR. When it is produced, Hac1p binds an upstream activating sequence (UAS), the unfolded protein response element (UPRE), found in the promoters of UPR target genes ( Mori et al. 1992 ; Kohno et al. 1993 ), thereby stimulating the transcriptional response to protein unfolding. Several salient features of the UPR are conserved between yeast and metazoans. In metazoans, Ire1p orthologs Ire1-α and Ire1-β remove a short intron from the XBP-1 mRNA, which encodes a bZIP transcription factor analogous to Hac1p ( Wang et al. 1998 ; Miyoshi et al. 2000 ; Urano et al. 2000 ; Calfon et al. 2002 ). 
The metazoan UPR, however, is implemented by at least two additional ER-resident sensors, which are thought to act in parallel and induce multiple downstream transcriptional activators not known to exist in yeast. A second branch of ER-to-nucleus signaling is mediated by ATF-6, a bZIP transcription factor that is synthesized as an integral ER transmembrane protein ( Haze et al. 1999 ). Upon UPR induction, ATF-6 is proteolytically cleaved, liberating a soluble fragment that moves to the nucleus to induce transcription in association with XBP-1 ( Wang et al. 2000 ; Ye et al. 2000 ; Steiner et al. 2001 ; Yoshida et al. 2001 ; Lee et al. 2002a ). A third branch of the metazoan UPR provides translational control by the ER transmembrane kinase PERK ( Harding et al. 1999 ; Liu et al. 2000 ). When activated in response to protein misfolding in the ER, PERK phosphorylates the translation initiation factor eIF-2α, thereby down-tuning translation of many mRNAs (and decreasing the translocational load on the ER) ( Harding et al. 2000a , 2000b ). Under conditions of limiting eIF-2α activity, however, some mRNAs containing short upstream open reading frames (ORFs) in their 5′ UTR are preferentially translated. One of these mRNAs encodes a third bZIP transcription factor, ATF-4, which collaborates with XBP-1 and other cellular stress signaling factors to activate UPR targets ( Harding et al. 2000a , 2003 ; Ma et al. 2002 ). The UPR target genes of yeast have been comprehensively defined by microarray expression profiling, where they comprise a significant fraction of the yeast genome (381 genes, more than 5% of the ORFs) ( Travers et al. 2000 ). The UPR target genes encode many proteins that play critical roles in the ER, the Golgi apparatus, and throughout the secretory pathway. Hence, the UPR can be thought of as a means of homeostatic control, serving to remodel the secretory pathway according to the cell's need. 
The set of 381 genes was defined by microarray hybridization expression profiling, using a stringent quantitative “filter” that required the expression profile of each target gene to closely match that of previously known and well-characterized UPR target genes. In particular, the filter demanded that the expression profile of a target gene closely correlate to that of canonical UPR targets over a time course of UPR induction, and that induction be significantly greater in wild-type (WT) than in either Δ ire1 or Δ hac1 cells. The identification of this vast set of target genes poses an enigma in light of the previously characterized UPRE. The UPRE was originally defined as a 22-bp sequence element of the KAR2 /BiP promoter ( Mori et al. 1992 ) and subsequently carefully refined to nucleotide precision as a semipalindromic seven-nucleotide consensus, CAGNGTG ( Mori et al. 1998 ). Point mutations in any one of the six conserved nucleotides or deletion of the central nucleotide was shown to have severely detrimental effects on the ability of the element to function as an autonomous UAS when placed into an otherwise silent promoter. Yet, inspection of the 381 promoter sequences of the experimentally defined set of target genes failed to reveal a recognizable UPRE in most of them. This observation is particularly surprising given that the UPRE is thought to be the Hac1p binding site, and HAC1 has been shown to be required for activation of all UPR target genes. One possible resolution to this paradox is that additional, heretofore unrecognized UPREs exist that are required for the activation of the genes lacking the “classical” UPRE. A requirement for new cis -activating sequences in the promoters of UPR target genes raises the possibility that such sequences could be bound by other trans -acting factors, alone or in combination with Hac1p, and thus contribute to the transcriptional complexity of the UPR. 
Results Computational Identification of Target Motifs To identify sequence motifs shared by the set of UPR target genes, we employed a bioinformatics approach to build a “dictionary” of putative regulatory elements from the promoters of these genes. In this approach, DNA sequence is considered as a “text” (a long string of nucleotides), which is modeled as having been composed by concatenating “words” (short oligonucleotides) drawn from a probabilistic “dictionary” according to their frequencies. To infer the dictionary from the observed text, we employed the previously developed computational algorithm, MobyDick, which was developed based on a probabilistic segmentation model ( Bussemaker et al. 2000a , 2000b ). MobyDick has been used previously to identify regulatory sites in large sets of promoters activated during sporulation or by specific cell-cycle stage. We first constructed a dictionary from the UPR target gene promoters. To this end, we compiled a text from the promoters of all 381 UPR target genes as previously defined ( Travers et al. 2000 ). We defined the promoter region for each ORF as the 600 nucleotides upstream of the initiation codon. Probabilistic segmentation analysis using the MobyDick algorithm indicated that the target gene promoters are best modeled by a dictionary of about 300 words of eight nucleotides or less (for details of this and subsequent calculations, see Materials and Methods ; a complete report of the dictionary with associated statistics appears in Table S1 ). These words represent the sequences that are most frequent in the target gene promoters. Because words with similar sequences are likely to possess similar biological activity, we considered groups of related words as units in our subsequent analysis. We grouped the dictionary into motifs by performing every possible pairwise alignment between all words, and then clustering words with high mutual alignment scores. 
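The word-grouping step can be illustrated as follows. The alignment score used here (best ungapped overlap match count) and the greedy single-link clustering are simplified stand-ins for the authors' pairwise alignment and clustering procedure, whose exact scoring scheme is not specified in the text.

```python
def alignment_score(w1, w2):
    # Best ungapped overlap: max number of matching positions over all
    # relative offsets of w2 against w1 (a simplified alignment score).
    best = 0
    for off in range(-len(w2) + 1, len(w1)):
        matches = sum(1 for i, c in enumerate(w2)
                      if 0 <= i + off < len(w1) and w1[i + off] == c)
        best = max(best, matches)
    return best

def cluster_words(words, threshold):
    # Greedy single-link clustering: a word joins the first cluster that
    # contains a word it aligns with at or above the threshold score.
    clusters = []
    for w in words:
        for cl in clusters:
            if any(alignment_score(w, v) >= threshold for v in cl):
                cl.append(w)
                break
        else:
            clusters.append([w])
    return clusters
```

Words differing at a single position cluster together as one motif, while unrelated words remain in separate clusters, mirroring the multiword/single-word motif distinction described below.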
A motif may contain two or more words, or just a single word. For a multiword motif, the words defining the motif are similar to one another and share common core sequences ( Figure 1 B; Table S2 ). The clustering procedure yielded about 100 motifs, about half of which contain multiple words. Figure 1 Computational Selection of Candidate Regulatory Motifs (A) Candidate regulatory motifs are overrepresented in UPR target promoters. Sequence motifs were ranked in order of overrepresentation, i.e., on the number of observed appearances in target promoters relative to the expectation from the total appearances in all promoters. −log 10 P, a metric of overrepresentation, is plotted against rank (circles). Eight motifs were chosen for experimental characterization (open circles). (B) Best words grouped into eight candidate motifs. The eight most overrepresented motifs from Fig. 1 A, aligned to illustrate common core sequences. The example of each motif chosen for experimental characterization is underlined. We reasoned that motifs that are likeliest to represent bona fide regulatory elements will be nonrandomly distributed in the genome and appear more often in the UPR target gene promoters than expected by chance. Therefore, we counted the number of times each motif (i.e., a sequence match to any of the words the motif comprises) appeared in the approximately 6,000 promoters in the genome, and computed from this figure the frequency with which each motif would be expected to appear in a promoter if it were distributed randomly throughout the genome. We then counted the number of times the motif was actually found in the 381 target promoters and calculated the probability P of this many or more appearances occurring by chance. A small P value (high −log 10 P ) indicates that the motif is overrepresented relative to the expectation. Figure 1 A shows the motifs ranked in order of decreasing overrepresentation, with −log 10 P for each motif plotted against this rank. 
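The overrepresentation statistic plotted in Figure 1A can be sketched as a binomial upper-tail probability. This is an illustrative reconstruction: the paper does not state its exact null model, and motifs can occur more than once per promoter, so a per-promoter Bernoulli rate is only an approximation.

```python
import math

def binom_upper_tail(n, k, rate):
    # P(X >= k) for X ~ Binomial(n, rate): probability of k or more
    # motif-bearing promoters among n, under the genome-wide rate.
    return sum(math.comb(n, i) * rate**i * (1 - rate)**(n - i)
               for i in range(k, n + 1))

def overrepresentation_logp(n_target_promoters, observed, genome_rate):
    # -log10 P of the observed (or greater) motif count in the target set.
    return -math.log10(binom_upper_tail(n_target_promoters, observed, genome_rate))

# E.g., a motif present in 5% of all ~6,000 promoters is expected in about
# 19 of the 381 target promoters; observing it in 40 would rank as
# strongly overrepresented (large -log10 P).
```

Ranking all motifs by this quantity reproduces the shape of the plot: a few strongly enriched motifs stand out above a long tail of unremarkable ones.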
We chose the eight highest-ranking motifs (open circles) as candidates for experimental testing ( Figure 1 B), analyzing a single example of each (underlined sequences). Experimental Verification of Novel UPREs To determine whether any of the eight candidate motifs would function as bona fide UPREs, we introduced three tandem repeats of a single representative sequence of each motif into a lacZ reporter construct that contains a crippled version of the CYC1 promoter that is transcriptionally silent in the absence of a UAS ( Guarente and Mason 1983 ). Analogous constructs containing the “classical,” KAR2 -derived UPRE inserted upstream of the core promoter have been shown to drive transcription of this reporter gene under ER stress ( Mori et al. 1992 ; Cox et al. 1993 ). As a positive control for UPR-dependent gene expression, we used a construct containing a triple repeat of the KAR2 -derived UPRE ( Cox and Walter 1996 ). We transformed the resulting plasmids into yeast and assayed for β-galactosidase activity in response to ER stress. Of the eight reporter constructs, the two containing Motif 1 and Motif 8 were transcriptionally activated when cells were treated with tunicamycin (Tm) ( Figure 2 A), or dithiothreitol (DTT) (unpublished data), both inducers of the UPR. The other six motifs showed no activity above baseline (unpublished data). Neither Motif 1 nor Motif 8 showed any activity in the absence of ER stress, and no activation was observed upon UPR induction in either Δ ire1 or Δ hac1 strains. Hence, as with the “classical” UPRE, these two motifs are sufficient to confer transcriptional activation upon a promoter in an IRE1 - , HAC1 - , and ER stress-dependent manner. We therefore conclude that the bioinformatics analysis has identified two novel UPREs present in target gene promoters; hereafter, we refer to Motif 1 and Motif 8 as UPRE-2 and UPRE-3, respectively. Correspondingly, we shall refer to the classical, KAR2 -derived UPRE as UPRE-1. 
Figure 2 Identification of Two Novel Sequence Motifs Necessary and Sufficient for UPR Activation (A) Motif 1 and Motif 8 are sufficient to confer UPR-responsive transcription on an artificial promoter . Single representative sequences of the KAR2- derived UPRE and candidate regulatory motifs Motif 1 and Motif 8 were cloned into a crippled promoter driving lacZ, transformed into yeast (WT, Δ ire1, and Δ hac1 ), and β-galactosidase activity monitored in response to Tm treatment. (B) UPRE-2 (Motif 1) is necessary for UPR-dependent activation of the ERO1 promoter. lacZ was placed under the control of the WT ERO1 promoter (+ UPRE-2) or a mutant (− UPRE-2), and β-galactosidase activity monitored in response to DTT treatment. (C) UPRE-3 (Motif 8) is necessary for UPR-dependent activation of the DHH1 promoter. As in (B), except using the DHH1 promoter, in which UPRE-3 appears once. (D) Novel motifs explain a greater fraction of UPR target gene activation. Sets of genes whose promoters contain UPR-responsive UASs UPRE-1, UPRE-2, UPRE-3, or a combination, are here depicted in Venn diagram format as subsets of the 381-gene UPR target set. To test whether these motifs are also necessary for transcriptional activation, we designed lacZ reporter constructs derived from two native promoters in which the motifs appear. We chose for UPRE-2 the promoter of ERO1, encoding an ER resident redox protein, and for UPRE-3 the promoter of DHH1, encoding an RNA helicase. Both genes are robust targets of the UPR ( Travers et al. 2000 ) and lack a recognizable UPRE-1. First, we verified that the reporters responded to ER stress in a UPR-dependent manner. WT but not Δ ire1 or Δ hac1 cells bearing the UPRE-2-containing ERO1 -promoter-driven reporter expressed higher levels of β-galactosidase after treatment with DTT ( Figure 2 B, “+ UPRE-2” columns). 
In a mutant version of this reporter construct, in which the UPRE-2 was ablated and replaced by an unrelated sequence of identical length, inducibility of the ERO1 promoter was decreased by approximately 4-fold (“− UPRE-2” columns). Similarly, WT but not Δ ire1 or Δ hac1 cells bearing the UPRE-3-containing DHH1 -promoter-driven reporter expressed higher levels of β-galactosidase after treatment with DTT ( Figure 2 C, “+ UPRE-3” columns); ablation of UPRE-3 from the DHH1 promoter entirely eliminated induction by ER stress ( “− UPRE-3” columns). Taken together, the data presented so far indicate that, as with the classical UPRE-1, UPRE-2 and UPRE-3 are both sufficient ( Figure 2 A) and necessary ( Figure 2 B and 2 C) to confer UPR inducibility on a target promoter. The addition of UPRE-2 and UPRE-3 to the repertoire of UPREs triples the number of genes in the UPR target set whose induction we can explain by invoking the presence of a well-defined UAS ( Figure 2 D). Identification of High-Copy Activators of UPRE-2 The existence of functional cis -regulatory elements that differ in sequence from the canonical UPRE-1 suggests that trans -activating factors other than Hac1p may bind these elements. Alternatively, Hac1p, alone or accompanied by another factor or factors, may be able to recognize multiple sequences. To distinguish between these possibilities and potentially reveal novel regulatory factors, we attempted to identify genes which, when overexpressed, activate transcription of the UPRE-2 reporter plasmid in the absence of an ER stress signal. The design of this screen recapitulates the approach which identified HAC1 as a high-copy activator of the UPRE-1 ( Cox and Walter 1996 ). We transformed a strain bearing the UPRE-2- lacZ reporter with a 2-μm-plasmid-derived ( high-copy) genomic DNA library ( Miller et al. 1984 ). A Δ ire1 strain was used in order to focus the screen on genes acting downstream of IRE1. 
Use of the Δ ire1 strain also avoided a background of false positives resulting from library plasmids encoding secretory proteins whose overexpression might activate Ire1p. Transformants were plated on synthetic defined media and, after appearance of colonies, overlaid with soft agar containing the β-galactosidase substrate X-gal. Colonies that turned significantly more blue than control (untransformed) colonies were recovered and rescreened by the same assay. Plasmids from positively rescreened clones were retransformed into the Δ ire1 UPRE-2- lacZ strain to verify plasmid linkage of the activator phenotype. We screened a total of 112,000 transformants, representing a predicted genomic coverage of approximately 50x. Thirty-eight positive transformants passed through repetition and plasmid linkage tests, and 18 of these stably maintained the activator phenotype over many generations. Positive plasmids fell into two classes, as defined by the minimal region of overlap of their insert sequences. One class of inserts (ten plasmids) shared the IRE1 locus and surrounding sequences; IRE1 has been previously shown to be activated by overexpression and is a high-copy activator of UPRE-1 ( Cox et al. 1993 ). Recovery of this locus demonstrates that the screen was able to capture genes of physiological relevance to the pathway. The second class of positive inserts (eight plasmids) shared the GCN4 locus. GCN4 encodes a bZIP transcription factor, which has been well-characterized as a component of the cellular response to amino acid starvation and other stresses ( Natarajan et al. 2001 ; reviewed in Hinnebusch 1997 ) but has not been previously demonstrated to play a role in the UPR. We constructed a 2-μm plasmid bearing only GCN4, transformed it into WT, Δ ire1, and Δ hac1 strains carrying UPRE-1- lacZ, UPRE-2- lacZ, and UPRE-3- lacZ reporters, and assayed for β-galactosidase activity ( Figure 3 A). 
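The reported coverage figure (112,000 transformants representing approximately 50x genomic coverage) can be reproduced with back-of-envelope arithmetic; the genome size and the average insert size of the library are assumptions here, chosen only to be consistent with the reported figure:

```python
# Sketch of the screen's coverage arithmetic. Only the transformant count
# is from the text; genome and insert sizes below are assumed values.
GENOME_BP = 12_000_000        # assumed S. cerevisiae haploid genome size
INSERT_BP = 5_400             # assumed average genomic insert in the library
N_TRANSFORMANTS = 112_000     # reported number of transformants screened

fold_coverage = N_TRANSFORMANTS * INSERT_BP / GENOME_BP  # ~50x
# Clarke-Carbon estimate: probability a given locus appears at least once
p_locus_covered = 1 - (1 - INSERT_BP / GENOME_BP) ** N_TRANSFORMANTS
```

At this depth the chance of missing any particular locus is vanishingly small, which is consistent with the screen's recovery of the physiologically relevant IRE1 locus.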
GCN4 overexpression stimulated UPRE-2-driven reporter activity in all three genotypes ( “+ GCN4 2μ” columns), indicating that overexpression of GCN4 is sufficient to stimulate transcription from the UPRE-2-driven reporter gene in the absence of ER stress, Ire1p activity, or Hac1p production. We also starved cells for histidine by administering 3-aminotriazole (3-AT), which induces translation of Gcn4p ( Albrecht et al. 1998 ). As when cells expressed high levels of GCN4, amino acid starved cells exhibited a significant increase of UPRE-2 transcription in the absence of ER stress ( “+3-AT, −Tm” columns). GCN4 overexpression alone did not activate transcription from either UPRE-1 or UPRE-3 reporter genes, emphasizing that these motifs are not synonymous with UPRE-2. Figure 3 GCN4 Encodes a Novel Transcription Factor in the UPR (A) Overexpression of GCN4 is sufficient for activation of UPRE-2, but not UPRE-1 or UPRE-3. UPRE-driven transcriptional activity as a function of Gcn4p levels, elevated either as a result of overexpression (+ GCN4–2μ ) or amino acid starvation (+ 3-AT), in the presence or absence of ER stress (Tm). (B) GCN4 and GCN2 are necessary for ER stress-dependent activation of UPRE-1 and UPRE-2. UPRE-driven transcriptional activity as a function of GCN4 pathway genes (WT, Δ gcn4, and Δ gcn2 ) in the presence or absence of ER stress (Tm). (C) GCN4 and GCN2 are required for UPR-dependent transcriptional activation of a subset of target genes. Fold changes in mRNA levels were determined by microarray for DTT-treated vs. -untreated WT, Δ ire1, Δ gcn4, and Δ gcn2 strains (columns). Histograms show distribution of log 2 -fold changes for non-UPR target genes (light bars) and for UPR target genes (dark bars), which contain UPRE-1, UPRE-2, UPRE-3, or still unidentified UPREs (rows) in their promoters. (D) Target gene regulation differs significantly in WT and Δgcn4/Δgcn2 mutants. 
Means (μ) and standard deviations (σ) for log 2 -fold change in gene expression for non-UPR target genes, and for genes that fall inside the UPR target gene set and contain UPRE-1, UPRE-2, or UPRE-3 in their promoters. Z statistic (z) and P value (P) : higher z reflects a greater difference between the distribution for UPRE-containing target genes and nontarget genes; lower P indicates a more highly significant difference. For detailed calculations, see Materials and Methods . GCN4 and GCN2 Are Required for Activation of All Three UPREs Having demonstrated that GCN4 overexpression is sufficient to activate transcription from a UPRE-2 reporter, we next asked whether GCN4 is also necessary to activate transcription in response to ER stress. We deleted GCN4 from strains bearing UPRE-1, UPRE-2, and UPRE-3 reporter constructs and assayed β-galactosidase activity in response to UPR activation. Upon UPR induction, HAC1 mRNA was spliced normally, and Hac1p was produced at WT levels in Δ gcn4 mutants (unpublished data). However, Δ gcn4 cells failed to induce transcription, not only of the UPRE-2-driven reporter but also of the UPRE-1- and UPRE-3-driven reporters ( Figure 3 B). Hence we conclude that GCN4 is required for ER stress responsiveness of all three UPREs. Consistent with the genetic requirement for GCN4, high levels of Gcn4p potentiate transcription from all UPREs when the UPR is activated. GCN4 overexpression increases the level of reporter activation in WT cells when the UPR is induced ( Figure 3 A, compare “ GCN4 +Tm” to “ GCN4 − Tm” data), suggesting that GCN4 activity is limiting for UPR-dependent transcription from all three UPREs. Similarly, stimulation of Gcn4p production by amino acid starvation also increases the magnitude of the transcriptional response ( Figure 3 A, “+3-AT, +Tm” data). In its role in the transcriptional response to amino acid starvation, GCN4 is activated at the translational level. 
Uncharged tRNAs are detected by the kinase Gcn2p, which phosphorylates initiation factor 2α (eIF-2α); when eIF-2α is phosphorylated, scanning ribosomes fail to initiate at upstream ORFs encoded by the GCN4 5′ UTR and are able to initiate translation at the GCN4 ORF itself ( Hinnebusch 1997 ). We therefore asked whether GCN2 is also required for GCN4 activity in the context of the UPR. As with Δ gcn4 cells, Δ gcn2 strains were also unable to mount a transcriptional response from any of the reporter constructs ( Figure 3 B). Given that GCN4 and GCN2 are necessary for ER stress-dependent transcription in an artificial promoter context, we next asked whether these genes are required for upregulation of the target genes of the UPR. To this end, we measured steady-state mRNA levels by microarray hybridization, comparing WT, Δ ire1, Δ gcn4, and Δ gcn2 cells treated with DTT for 30 min (by which time the UPR is qualitatively complete; Travers et al. 2000 ) to untreated samples of the same genotype. WT cells were taken as a positive control for UPR induction, and Δ ire1 cells as a negative control. Fold change in expression of a given gene was computed as the ratio of mRNA level in the treated sample to the level in an untreated sample of the same genotype. In our analysis, we considered five subsets of genes: the sets of UPR target genes containing a UPRE-1, UPRE-2, or UPRE-3 in their promoter, the set of UPR target genes without an identified UPRE in their promoters (“no UPRE”), and the set of genes previously identified as UPR-independent (“nontargets”) ( Travers et al. 2000 ). The distributions of the log 2 -fold changes for each subset of genes in each genotype relative to the set of nontarget genes are illustrated in Figure 3 C. For each gene set in each genotype, we determined the difference between the distributions of log 2 -fold changes in UPRE target genes and those in nontarget genes. 
The statistical significance of these differences is represented by the z scores and P values enumerated in Figure 3 D; higher z and lower P indicate a greater difference between distributions and higher significance (for details see Materials and Methods ). The majority of the genes in the nontarget set ( Figure 3 C, all histograms, light bars) are not differentially regulated by ER stress in the WT and mutant strains. As previously shown, however, genes of the UPR target set are significantly more upregulated in the WT than in Δ ire1 cells ( Figure 3 C, compare dark bars versus light bars between histograms a and b, e and f, i and j, and m and n ). This is the case both for target genes bearing any UPRE in the promoter ( Figure 3 C, histograms a–l ) as well as the remainder of the target set for which a UPRE has not been identified ( Figure 3 C, histograms m–p ). For those genes with an identified UPRE in their promoters, expression patterns in both Δ gcn4 ( Figure 3 C, histograms c, g, and k ) and Δ gcn2 mutants ( Figure 3 C, histograms d, h, and l ) show trends similar to those in Δ ire1 . In both mutants, the sets of genes whose promoters contain a UPRE are significantly less upregulated relative to their induction in the WT. Some UPR target genes exhibit residual upregulation in Δ gcn4 and Δ gcn2, suggesting that these promoters have only a partial requirement for GCN4 / GCN2 . This effect is most prominent for genes containing UPRE-1 in the Δ gcn2 mutant ( Figure 3 C, histogram j ), where the residual induction crosses the threshold into marginal statistical significance ( Figure 3 D, “Δ gcn2, UPRE-1”; p = 3.4 × 10 −4 ); it is possible that the residual levels of Gcn4p present in a Δ gcn2 mutant are sufficient to allow UPR transcription from these promoters, or alternatively that UPRE-1 promoters are relatively less sensitive to Gcn4p levels (and concomitantly, relatively more reliant on Hac1p) for induction (see Discussion). 
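The fold-change and significance computations above can be sketched as follows; the ratio definition follows the text, while `z_compare` assumes a standard two-sample z statistic with a two-sided normal tail, since the exact formula is given only in the paper's Materials and Methods:

```python
import math

def log2_fold_change(treated, untreated):
    """Per-gene log2 fold change: treated vs. untreated mRNA level within
    the same genotype (gene order assumed matched across the two lists)."""
    return [math.log2(t / u) for t, u in zip(treated, untreated)]

def z_compare(mu1, sd1, n1, mu2, sd2, n2):
    """Two-sample z statistic comparing mean log2-fold change of a
    UPRE-containing target set (1) against the nontarget set (2).
    Sketch: higher z and lower P indicate a more significant difference."""
    z = (mu1 - mu2) / math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    return z, p
```

As in Figure 3 D, a target set whose mean induction clearly exceeds that of the large nontarget population yields a high z and a small P, whereas residual induction near the nontarget mean yields only marginal significance.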
In contrast, induction of the “no UPRE” genes is quite high in Δ gcn4 and Δ gcn2 cells ( Figure 3 C, histograms o and p versus m ). As a population, these genes are not significantly less upregulated in the mutants than in the WT. It would appear that the UPREs identified to date define a special subset of UPR target genes that are responsive not only to IRE1 and HAC1 but that are particularly sensitive to the GCN4 / GCN2 branch of the pathway. Overall, in both Δ gcn4 and Δ gcn2 mutants, the pattern of gene regulation during the UPR is similar to that in the Δ ire1 mutant: Mean fold changes of UPRE-containing target genes are lower in these mutants than in the WT. We conclude that GCN4 and GCN2 play a broad role in the UPR, contributing significantly to the upregulation of a large subset of UPR target genes. Gcn4p Is Upregulated in Response to ER Stress Given the requirement for GCN4 in UPR-dependent transcription, and in particular the observation that Gcn4p appears to be limiting for the magnitude of the transcriptional response ( Figure 3 A), we asked next whether Gcn4p levels would be subject to posttranscriptional regulation under conditions of ER stress. We discounted the possibility that GCN4 would be regulated at the transcriptional level, as our previous studies showed that GCN4 mRNA levels are unchanged over the course of the UPR ( Travers et al. 2000 ). We constructed strains expressing a C-terminally myc-epitope-tagged allele of Gcn4p, which complements the slow growth phenotype of a Δ gcn4 mutant and is inducible by amino acid starvation resulting from 3-AT treatment ( Figure 4 A, “Gcn4p” lanes, compare “wt, +3-AT” to “wt, 0 min”). Over a time course of UPR induction, Gcn4p-myc levels exhibited a transient increase of 2.5-fold, peaking after 15 min and gradually decaying to uninduced levels after 60–120 min ( Figure 4 A, “WT” lanes; quantitated in Figure 4 B). 
This temporary increase in Gcn4p was not observed in UPR-deficient mutants: neither Δ ire1 nor Δ hac1 mutants exhibited increased levels of Gcn4p over the time course of UPR induction. Figure 4 Gcn4p Protein Levels Are Upregulated during the UPR (A) Gcn4p levels, but not eIF-2α phosphorylation, rise under ER stress in a UPR-dependent manner. Cells bearing a C-terminally myc -tagged allele of GCN4 were treated with Tm for 0, 15, 30, 60, or 120 min. Western blots probed with anti-myc recognizing Gcn4p-myc (top panels) or phospho-specific anti-eIF-2α antibody (bottom panel). Gcn4p blot for the Δ gcn2 mutant is 5x overexposed so that the bands are visible. (B) Quantitation of the Gcn4p-myc protein levels in Figure 4 A. Data reflect an average of four experiments, normalized against Gcn4p levels in the WT t = 0 samples. In the context of other stress responses (e.g., amino acid starvation), Gcn4p levels are regulated via phosphorylation of eIF-2α by Gcn2p ( Dever et al. 1993 ; Hinnebusch 1993 ; Diallinas and Thireos 1994 ). Because GCN2 is required for induction of UPR-dependent transcription, we asked whether GCN2 was required for the rise in Gcn4p levels we observed during Tm treatment. Basal levels of Gcn4p are low in a Δ gcn2 strain (less than 10% of WT), as previously reported ( Hinnebusch 1993 ; Tavernarakis and Thireos 1996 ). We observed no increase in Gcn4p levels during the time course in this mutant ( Figure 4 B). These data are consistent with two possibilities: first, that Gcn2p is responsible for both basal levels of Gcn4p and its induction upon ER stress; or second, that Gcn2p is responsible only for maintaining basal levels of Gcn4p, while another pathway mediated by Ire1p/Hac1p further elevates Gcn4p levels during the UPR. If Gcn2p is responsible for upregulation of Gcn4p during the UPR, we should observe a concomitant increase in the level of eIF-2α phosphorylation. 
We did not observe such an increase ( Figure 4 A, “eIF-2α-P” lanes), which is consistent with the idea that Gcn2p's role in the UPR is primarily to maintain basal levels of Gcn4p, not to upregulate Gcn4p via increased eIF-2α phosphorylation. Other workers have observed a transient increase in phospho-eIF-2α under Tm treatment ( Cherkasova and Hinnebusch 2003 ). It is possible that strain differences or the significantly greater doses of Tm used in the previous study (4 and 20 μg/ml versus our 1 μg/ml) explain this disparity. Consistent with our findings, Cherkasova and Hinnebusch (2003) predict derepression of GCN4 by ER stress mediated by increased phospho-eIF-2α. Here, we observe increased Gcn4p levels under ER stress conditions even when phospho-eIF-2α levels are not detectably altered. Epistasis of HAC1 and GCN4 GCN4 plays an essential role in the UPR, with a knockout phenotype closely resembling that of Δ ire1 and Δ hac1: the absence of any of these genes prevents transcriptional activation by ER stress. This observation could be a consequence of one of several different mechanisms: Gcn4p might act upstream or downstream of Hac1p in the same linear pathway, or act in a parallel pathway that converges at target promoters. Two lines of evidence from data already introduced argue that Gcn4p does not act upstream of Hac1p. First, GCN4 overexpression is sufficient to activate transcription from UPRE-2 in a Δ hac1 mutant (see Figure 3 A), indicating that Gcn4p's influence on target promoters can occur by a Hac1p-independent mechanism. Second, the transient upregulation of Gcn4p levels observed under ER stress is absent in the Δ hac1 mutant (see Figure 4 A), indicating that Hac1p levels determine Gcn4p levels. Further evidence that Gcn4p does not act upstream of Hac1p is provided by the observation that expression of Hac1p cannot activate transcription in a Δ gcn4 mutant ( Figure 5 ). 
In a WT cell, expression of Hac1p produced from a HAC1 gene lacking the intron is sufficient to activate transcription from the UPRE-1 ( Cox and Walter 1996 ; Figure 5 , “UPRE-1” columns). Constitutive expression of Hac1p is also sufficient to activate UPRE-2, and to a lesser extent UPRE-3, in the absence of ER stress ( Figure 5 , “WT, +Hac1p” columns) and in the absence of Ire1p ( Figure 5 , “Δ ire1 , +Hac1p” columns). In the absence of GCN4, however, the constitutive expression of Hac1p does not activate transcription from any of the three reporter constructs ( Figure 5 , “Δ gcn4, +Hac1p” columns), suggesting that Hac1p's function at promoters containing any one of the three UPREs requires the presence of Gcn4p. Thus, Gcn4p must act at the same point as or downstream of Hac1p. Following the same line of reasoning, for UPRE-1 and UPRE-3, GCN4 overexpression alone is insufficient to activate transcription in the absence of HAC1 (e.g., see Figure 3 A, Δ hac1 mutants), indicating that at UPRE-containing promoters Hac1p must act at the same point as or downstream of Gcn4p. Thus, the observations enumerated here are consistent with the interpretation that Gcn4p and Hac1p act together at target gene promoters. Figure 5 GCN4 Acts with or Downstream of HAC1 UPRE reporter activity as a function of Hac1p expression and UPR pathway genes. To express Hac1p in the absence of ER stress, we used an intron-less allele of HAC1, which is constitutively translated. A Gcn4p/Hac1p Complex Binds Both the UPRE-1 and UPRE-2 To explore this possibility directly, we performed gel-retardation assays with the UPRE-1-containing segment of the KAR2 promoter (oligo 1), used in previous experiments demonstrating direct binding of Hac1p to UPRE-1 ( Cox and Walter 1996 ), and the UPRE-2-containing segment of the ERO1 promoter (oligo 2). 
32 P-labeled oligonucleotides were incubated with cell extracts and subjected to native (nondenaturing) polyacrylamide gel electrophoresis, and visualized by autoradiography ( Figure 6 ). Figure 6 Hac1p and Gcn4p Directly Interact with UPRE-1 and UPRE-2 32 P-labeled oligos bearing either the UPRE-1 or the UPRE-2 promoter segment were incubated with crude cell extracts, and subjected to nondenaturing polyacrylamide gel electrophoresis. (A) Extract: Samples were of the WT, or bore deletions in IRE1 ( Δ ire1), GCN4 ( Δ gcn4), or GCN2 ( Δ gcn2), and were treated with Tm (+) or mock treated (−). Labeled oligos contained either UPRE-1 (1) or UPRE-2 (2). Binding reactions were incubated with no unlabeled competitor (−) or with 100x excess of unlabeled WT UPRE-1 (1), an inactive mutant version of UPRE-1 (1*), UPRE-2 (2), or an inactive mutant version of UPRE-2 (2*). (B) Extract: Samples from a strain overexpressing GCN4 (2μ- GCN4; lanes 1 and 2) or from a strain expressing myc-tagged Gcn4p and HA-tagged Hac1p (GCN4-myc and HA-HAC1). Binding reactions were incubated with no antibody (−), anti-myc recognizing Gcn4p-myc (myc), anti-HA recognizing HA-Hac1p (HA), or both antibodies simultaneously (myc/HA). Bands represent the following: a, Gcn4p + Hac1p + anti-myc + anti-HA; b, Gcn4p + Hac1p + anti-HA; c, Gcn4p + Hac1p + anti-myc; d, Gcn4p + Hac1p; e, Gcn4p. *, an unidentified band that appears only when extracts include both Gcn4-myc and HA-Hac1p and when both antibodies are included in the binding reaction. As previously observed, oligo 1's mobility was retarded when incubated with crude extracts from UPR-induced cells, but not extracts from untreated cells ( Figure 6 A; compare lane 2 to lane 1). Likewise, oligo 2 was specifically shifted by extracts from UPR-induced cells (compare lane 6 to lane 5). 
The binding activity is specific: for both oligos, the mobility shift was competed out by 100-fold excess of an unlabeled identical sequence (lanes 3 and 7) but not by a transcriptionally inactive point mutant of the same sequence (lanes 4 and 8). The binding activity is dependent on an intact UPR. No gel retardation was observed for either sequence in a Δ ire1 mutant (lanes 9 and 12), in which Hac1p cannot be synthesized. Likewise, in both Δ gcn4 and Δ gcn2 mutants, the binding activity observed in WT cells was absent. In both Δ gcn4 and Δ gcn2 mutants, however, a faster migrating complex appeared, which likely represents Hac1p alone binding the oligos (lanes 10, 11, 13, and 14). To demonstrate Gcn4p and Hac1p binding conclusively, we performed supershift analyses of the WT complex by addition of antibodies to either protein. We constructed a strain expressing both HA-epitope-tagged Hac1p and myc-tagged Gcn4p. Extracts from Tm-treated cells were incubated with antibodies against either or both tagged proteins. Antibodies recognizing either the tagged Gcn4p-myc ( Figure 6 B, lanes 5 and 6) or HA-Hac1p (lanes 7 and 8) supershifted the bound complex to different extents (compare lanes 7 and 8 to lanes 3 and 4). Hence, both Gcn4p and Hac1p can bind to sequences containing UPRE-1 and UPRE-2. Addition of both antibodies to the same binding reaction resulted in an ultrashifted band, migrating more slowly than the bands in either of the single antibody reactions (lanes 9 and 10). If Hac1p and Gcn4p bound DNA in distinct, separate complexes, we would expect to see two bands of identical mobility to those seen in lanes 5–8. We therefore conclude that the mobility-shifted complex observed in UPR-induced WT cells must contain both transcription factors, since no ultrashift would occur if the proteins were bound in separate complexes, and that Hac1p and Gcn4p act together at the same site to activate transcription upon UPR induction. 
Similar gel-shift experiments performed with an oligonucleotide representative of UPRE-3 failed to yield a detectable complex, suggesting that transcription factor binding may be of reduced affinity at this sequence. This interpretation is consistent with the overall lower activity of the UPRE-3 reporter constructs (see Figure 2 A). Further evidence that Gcn4p can bind UPRE-2 is provided by the observation that overexpression of GCN4 alone in an otherwise WT cell, in the absence of ER stress, resulted in a mobility shift for oligo 2 ( Figure 6 B, lane 2). This complex migrated faster than the WT complex (e.g., Figure 6 B, lane 4). Because the extract was made from untreated cells, no Hac1p was present, indicating that the complex contains Gcn4p alone. The GCN4 -dependent shift is not observed for oligo 1, consistent with observations above that Gcn4p overproduction is sufficient to activate transcription of a UPRE-2 reporter but not a UPRE-1 reporter (see Figure 3 A). Reciprocally, Hac1p is present in the Δ gcn4 and Δ gcn2 mutants, but Gcn4p is absent; it therefore seems likely that the faster migrating bands in Δ gcn4 /Δ gcn2 mutants ( Figure 6 A, lanes 10, 11, 13, and 14) represent oligonucleotides bound to Hac1p alone. Discussion Identification of Novel UASs Beginning only with the set of genes induced by the UPR and the promoter sequences of all genes in the genome, we computationally identified candidate motifs that obeyed the statistical properties we would expect of regulatory sequences, i.e., high frequency in UPR target promoters, and enrichment in the target promoters relative to the rest of the promoters in the genome. Two of these motifs, UPRE-2 and UPRE-3, are both necessary and sufficient to confer ER stress responsiveness in an IRE1 - and HAC1 -dependent manner on promoters which contain them. These novel sequences are activated under the same conditions as UPRE-1. 
Functional non-synonymy of these sequences, however, is illustrated by the activation of UPRE-2 by GCN4 overexpression alone, a condition under which UPRE-1 and UPRE-3 are silent, and by the quantitative difference with which the motifs respond to UPR activation (UPRE-2 > UPRE-1 > UPRE-3). Although the two new UPRE sequences look at first glance entirely different from the well-characterized UPRE-1, one of them may share “half-site” similarity: UPRE-2 has a three-base identity with UPRE-1 at the 3′ end (TACGTG versus CAGNGTG); whether these bases make equivalent contacts with the bound transcription factors remains to be determined. Taken together, the sequence diversity of the motifs conferring similar transcriptional control upon binding of the same transcriptional activators illustrates the difficulty of predicting biological regulation from promoter sequences alone, even if binding sites in one context are well defined experimentally. The identification of these novel sequences allows a greater proportion of UPR target gene regulation to be explained within the paradigm of modular transcriptional control, i.e., in which a “portable” sequence module (a UAS) located within a promoter confers pathway responsiveness on the gene in question. The two novel motifs described triple the number of target genes whose regulation can be described in terms of a modular control mechanism, thus adding significantly to the repertoire of cis -acting elements known to act in the UPR. And yet, the resulting description of UPR transcription remains incomplete, as approximately 50% of the target genes still lack a recognizable UPRE. It may be that more biologically active motifs exist among the 109 motifs that emerged from the overrepresentation analysis, as many of the untested motifs are overrepresented relative to chance in the UPR target set by many orders of magnitude. 
For the eight motifs tested, we tested whether a motif was necessary for promoter induction only if it had already been shown to be sufficient in the artificial promoter system. Because of this experimental approach, it remains possible that some motifs not found to be sufficient are dependent for their activity on some contextual parameter (e.g., particular nearby flanking sequences). Thus it may be that some UPREs are not generally portable to other contexts, but are nonetheless necessary for UPR responsiveness of the native promoters in which they reside. Also, particularly rare motifs would have been omitted from the dictionary; thus, it is possible that complementary computational approaches might allow detection of uncommon motifs that this analysis missed. Finally, some UASs may remain ultimately undiscoverable within the paradigm of modular regulation. Motifs that are particularly sensitive to chromatin structure or position relative to the transcription initiation site would not be detected by an approach that neglected these parameters. It might be argued that the approach here enjoys no relative advantage over testing random oligonucleotides from UPR promoters. If every sequence from each target promoter were to be tested for activity, it is possible that additional elements not revealed by the bioinformatic approach would be discovered. For example, the residual upregulation of ERO1 after removal of UPRE-2 (see Figure 2 B) suggests that at least one cryptic element exists in that promoter. On the other hand, the DHH1 promoter shows no residual upregulation after removal of UPRE-3 (see Figure 2 C). If the average number of sites (candidate plus cryptic) per promoter is similar (1–2) throughout the target gene set, our computational approach represents a highly efficient means of identifying a subset of regulatory motifs. 
On the other hand, if the average is significantly higher, it is possible that testing random subsequences of target promoters would also be efficient. From the small number of promoters we studied in depth, it is not possible to calculate a meaningful upper bound for the average number of undiscovered regulatory sites per promoter. Nonetheless, within the sample size of our study, the yield of active regulatory sites per candidate tested (two of eight) is much higher than any reasonable a priori estimate of the density of regulatory elements in the UPR target promoters. One indication of a possible shortcoming of our computational approach is the finding that the probabilistic segmentation did not return the classical UPRE-1 as a significant “word,” i.e., the approach failed to generate a comprehensive list of all known active UPREs. The absence of UPRE-1 from the dictionary indicates that no sequence matching the experimentally defined degenerate consensus CAGNGTG is intrinsically overrepresented in the target promoters, i.e., this motif does not occur in the “text” of target gene promoters with a higher frequency than that with which its component subsequences would appear together by chance. Neither is this sequence overrepresented in the target promoter set relative to the promoters of the nontarget genes. The motif CAGNGTG has an overrepresentation score −log 10 P of 0.37, far beneath the enrichment of any of the 109 motifs assembled from dictionary words (see Figure 1 A). Hence, among genes that possess a UPRE-1 in their promoters, there are more instances of unresponsiveness to the UPR than instances of regulation, even though UPRE-1 has been experimentally demonstrated to be necessary and sufficient for upregulation in response to ER stress. A plausible resolution to this paradox may be that the UPRE-1 is heavily dependent on context. 
The experiments that defined the key core nucleotides proceeded by single point mutation at each position while holding constant the identity of all other nucleotides from the source 22-bp stretch of the KAR2 promoter; thus the seven-nucleotide “core sequence” may only specify those bases which are necessary for activity, but not define a module which is generally functional outside its original context of flanking sequence. If this were the case, we would not expect to recover UPRE-1 in a bioinformatic analysis of all target genes. Indeed, alignment of the KAR2 promoter from S. cerevisiae and three related budding yeasts reveals that UPRE-1 lies in the middle of a highly conserved 21-bp sequence which is 100% identical across three of the species ( Figure 7 A). This conserved stretch may represent a context that is essential for the transcriptional function of the core sequence. We speculate that recognition of the extended context may be performed by Hac1p without the collaboration of Gcn4p, as suggested by the observation that promoters which contain a UPRE are more dependent on GCN4 / GCN2 than are those genes in which a short modular UAS has not been identified (see Figure 3 C, histograms o and p ). Figure 7 Multiple Alignment of UPRE-1 and UPRE-2 from Three Budding Yeasts Alignment of partial promoter sequences from S. cerevisiae and homologous sequences in related yeasts. Numerical coordinates reflect the distance from the first nucleotide of the initiation codon in the S. cerevisiae promoter. (A) A segment of the KAR2/YJL034W promoter and homologs. The core sequence of UPRE-1 is indicated. (B) A segment of the ERO1 / YML130C promoter and homologs. The core sequence of UPRE-2 is indicated (above). The consensus binding site of Gcn4p is aligned (below). Despite these qualifications, the approach has successfully uncovered novel information about how the UPR is regulated. 
The appealing aspect of the strategy described here is that such studies are not limited to the UPR but can be generally employed in the study of any transcriptional response in any organism for which promoter sequences for all genes are known and in which the comprehensive genomic output of the response can be measured by expression profiling. The sole requirement of the probabilistic segmentation/overrepresentation computations is that a partition of the genome (into “target genes” and “nontarget genes”) be made on the basis of some meaningful difference in expression levels under the conditions of interest; the analysis thereafter proceeds by comparing the distribution of candidate motifs in the target gene set and the remainder of the genome. Further refinement of the mathematical tools therefore promises to be of invaluable help in our quest for a comprehensive understanding of the logic and complex interactions of transcriptional programs in eukaryotic cells. GCN4 Is an Essential Transcription Factor of the UPR The overexpression screen for activators of UPRE-2 revealed a role for the transcription factor Gcn4p, which we show to be required not only for activity of UPRE-2 but for all three known UPREs. Gcn4p and its upstream activator Gcn2p thus join Ire1p, Hac1p, and Rlg1p in the list of essential players in the yeast UPR. GCN4 encodes a well-characterized transcription factor acting in several distinct stress responses including amino acid starvation, glucose limitation, and ultraviolet irradiation ( Hinnebusch 1997 ; Yang et al. 2000 ; Natarajan et al. 2001 ; Stitzel et al. 2001 ), but has not previously been demonstrated to play any role in the UPR. Here, we demonstrate that GCN4 is required for normal induction of UPR transcription, both in the context of artificial promoters containing any of the known UPREs and in the context of the native promoters of most target genes. 
GCN2, a gene implicated in regulating GCN4 in other stress responses, is similarly required for a normal UPR, perhaps because GCN2 function is required to maintain the basal level of Gcn4p in a cell even under normal growth conditions. Our gel-mobility shift studies demonstrate a direct physical association between Hac1p and Gcn4p and the sequence motifs UPRE-1 and UPRE-2. Gcn4p and Hac1p are bZIP proteins, a family whose members bind DNA as dimers ( Ransone et al. 1993 ; Hsu et al. 1994 ). It therefore seems likely that Gcn4p and Hac1p stimulate transcription by binding promoter DNA as a heterodimer, although we cannot rule out higher order complexes. The promoter sequences UPRE-1 and UPRE-2 have identical genetic requirements for activation, but their behavior in response to genetic perturbations is not strictly identical. UPRE-2 can be activated by high levels of GCN4 alone (see Figure 3 A), but UPRE-1 cannot. This can be explained by the binding studies, which demonstrate that UPRE-2 (but not UPRE-1) can bind Gcn4p in the absence of Hac1p (see Figure 6 B, lanes 1 and 2); indeed, Gcn4p is known to bind DNA as a monomer as well as a dimer ( Cranz et al. 2004 ) and can bind DNA sequences containing even a consensus half-site ( Hollenbeck and Oakley 2000 ). The basis for this differential affinity for Gcn4p is strongly suggested by a refined consensus sequence for UPRE-2, and is illustrated by multiple species alignment of the ERO1 promoter ( Figure 7 B). We searched for examples of UPRE-2 core sequences that were conserved in UPR target genes across five yeast species, and extracted core and flanking sequences to derive a generalized consensus (see Materials and Methods ). The resulting consensus was revealed to be T(C/T)ACGTGT(C/T)(A/C), which differs from the experimentally established UPRE-1 consensus by two nucleotides essential for activity in the KAR2 promoter context. 
The conserved extended context of UPRE-2 in this promoter aligns with a consensus binding site for Gcn4p defined by computational analysis of the set of promoters that bind Gcn4p in a genome-wide chromatin immunoprecipitation assay (analysis by W. Wang and H. Li, unpublished data; chromatin immunoprecipitations in Lee et al. 2002b ). Comparison of multiple alignments of the extended contexts of UPRE-1 and UPRE-2 in the KAR2 and ERO1 promoters (compare Figure 7 A and 7 B) reveals that the two sequence contexts share a six-nucleotide segment, CGTGTC. The match between UPRE-2 and the Gcn4p consensus is imperfect (five of seven positions), suggesting that the association with Gcn4p and UPR promoters is not identical to the binding of Gcn4p to its “classical” amino acid starvation targets. Rather, these observations suggest that the proposed Gcn4p/Hac1p heterodimeric complex binds to a composite site, of which UPRE-1 and UPRE-2 represent different forms with stronger relative affinities to Hac1p and Gcn4p, respectively. Such a model would explain the residual upregulation of UPRE-1-containing genes in a Δ gcn2 mutant (see Figure 3 C, histogram j ), which retains some expression of Gcn4p. In the absence of Hac1p but in the presence of high concentrations of Gcn4p (e.g., when GCN4 is overexpressed), Gcn4p can bind the UPRE-2 on its own, either as a homodimer or a monomer. Upregulation of Gcn4p by ER Stress The transient upregulation of Gcn4p levels, which we observe upon UPR induction, may therefore serve to increase the transcriptional output of the response, especially early in the response. Most UPR target genes are robustly induced after 15 min of ER stress ( Travers et al. 2000 ); hence, the increase in Gcn4p levels occurs at a time suggestive of a role in the initial response. Gcn4p itself mediates a broad transcriptional program in response to a diverse set of cellular conditions and stresses ( Natarajan et al. 2001 ). 
The recruitment of Gcn4p therefore provides an opportunity for crosstalk between regulatory pathways and fine-tuning of the magnitude of the UPR. For example, under amino acid starvation, Gcn4p levels are high relative to the baseline of normal growth. In this state, cells with accumulated unfolded ER protein might wish to upregulate ER-associated protein degradation (one output of the UPR; Casagrande et al. 2000 ; Friedlander et al. 2000 ; Travers et al. 2000 ) beyond the level normally provided by the UPR. Such a mechanism might provide for an additional source of amino acids through protein catabolism. Elevated Gcn4p levels and the concomitant increased induction of UPR target genes would serve this need. This view raises the possibility that those genes that most stringently require GCN4 for normal UPR induction are those that are most urgently required by the cell under specific conditions, under which UPR is induced and Gcn4p levels are high for reasons unrelated to ER stress. The relationship between the cellular stress responses that regulate Gcn4p and the potentiation of UPR transcription will therefore be an important subject for future study. The mechanism by which IRE1 and HAC1 mediate the transient increase in Gcn4p remains to be elucidated. Given that Hac1p and Gcn4p are observed in the same complex with DNA, one intriguing possibility is that association with Hac1p serves to stabilize Gcn4p. GCN4 and the Super-UPR: Two Ways to Modulate the UPR We propose a model of UPR transcriptional activation that is illustrated in Figure 8 . According to the circuit diagram in Figure 8 A, HAC1 mRNA splicing retains its role as the “switch” that turns the UPR on or off. Gcn4p, whose levels appear to be limiting for the extent of gene regulation, would therefore play a role in setting the “gain” or “volume” of the response, perhaps allowing communication from other stress response pathways in the cell. 
Such a gain control could serve as an adjunct to the “Super-UPR” (S-UPR) gain control described in the accompanying paper ( Leber et al. 2004 ), whereby an IRE1- independent ER surveillance mechanism regulates the transcription of the HAC1 mRNA in response to compound stresses on the secretory pathway. S-UPR induction proceeds unimpaired in Δ gcn4 cells, indicating that the S-UPR is mechanistically distinct from the regulation described here ( Leber et al. 2004 ). Whereas the S-UPR monitors conditions of the ER, the GCN4 branch would integrate information gleaned from the cytosol. Both of these gain controls have the potential to act not only as modulators of the magnitude of the response but also as a tuning dial: UPR targets respond differentially to increased level of HAC1 during the S-UPR (see the Class 1, 2, and 3 genes in Figure 6 of Leber et al. [2004] ). Likewise, different UPR targets exhibit differential dependence on Gcn4p, as is apparent from the variable upregulation of UPR targets in Δ gcn4 and Δ gcn2 mutants (see Figure 3 C). The observations suggest that increased levels of Gcn4p might serve to differentially upregulate a subset of target genes. Figure 8 Model of Gcn4p/Hac1p Action in the UPR (A) The expanded circuitry of the UPR . The classical UPR (red), the S-UPR (blue), and regulated Gcn4p levels (green) are integrated at target promoters. Transcriptional regulation of HAC1 mRNA levels, providing one level of gain control, is depicted as a rheostat under supervision of a logical AND gate informed by multiple inputs from the ER. Splicing of HAC1 mRNA by Ire1p, providing a binary on/off control, is depicted by a switch. Regulation of Gcn4p levels by Gcn2p under changing cellular conditions adds an additional layer of gain control. Together, activity levels of Hac1p, Gcn4p, and the proposed UPR modulatory factor ( Leber et al. 2004 ) collaborate to determine the magnitude of the transcriptional output signal. 
(B) Mechanism of Gcn4p/Hac1p action at target promoters. In the absence of Hac1p, Gcn4p is present in the cell as a consequence of baseline activity of Gcn2p. At normal concentrations, Gcn4p is unable to bind or activate a target UPRE, but it may bind when Gcn4p levels are elevated. Upon induction of the UPR, Ire1p is activated and Hac1p is synthesized. Hac1p can bind, but not activate, target UPREs. Binding of target DNA by a Gcn4p/Hac1p heterodimer results in a transcriptionally active complex. Gcn4p levels are upregulated under UPR induction, perhaps as a consequence of stabilization by interaction with Hac1p. From a mechanistic standpoint, ER stress activates Ire1p, which, through nonconventional splicing, induces Hac1p production ( Figure 8 B). Hac1p can bind to the known UPREs, but by itself forms a protein–DNA complex that is not competent to upregulate transcription. Gcn4p, which is present at a basal level in cells under normal growth conditions as a result of baseline Gcn2p activity, is unable to bind UPREs in the absence of Hac1p. Gcn4p may bind some UPRE sequences, providing a weak bypass of Hac1p, when it is present at physiologically elevated levels. When Hac1p is produced, Gcn4p is recruited to the UPRE, presumably forming a more stable ternary complex containing promoter DNA, Gcn4p, and Hac1p, and transcription is induced. This ternary complex could be established serially, in which case an inactive Hac1p/UPRE complex would be recognized by Gcn4p, or by recognition of the UPRE by a preformed heterodimer of Gcn4p and Hac1p. Conservation between Yeast and Mammalian UPR Advances in the understanding of the metazoan UPR system have been richly informed by the study of yeast. The elucidation of a role for Gcn4p in the yeast UPR allows us to draw even stronger parallels between the yeast and metazoan systems. In higher eukaryotes, the ER-resident transmembrane kinase PERK is activated by protein unfolding.
PERK's cytosolic domain is homologous to Gcn2p and likewise phosphorylates eIF-2α, thereby downregulating general translation but also promoting the selective translation of mRNAs containing upstream ORFs in their 5′ UTR sequences. One of these mRNAs encodes ATF-4, a bZIP transcription factor that represents the metazoan ortholog of Gcn4p. Intriguingly, and in strict analogy to the joint action of Gcn4p and Hac1p proposed here, ATF-4 in metazoan cells collaborates with the Hac1p ortholog XBP-1 to stimulate UPR target gene transcription. The analogies between the roles of Gcn4p/Hac1p/Gcn2p and ATF-4/XBP-1/PERK suggest that the function of these proteins has been amazingly conserved in the UPR, although the nature of the connections between pathway components may have been adapted over evolutionary time: Yeast does not have an identified PERK ortholog that feeds ER-derived information into the GCN4 branch of the network. Another parallel concerns S-UPR regulation. In the accompanying paper, Leber et al. (2004) demonstrate that compound secretory stress upregulates HAC1 mRNA. The mode of modulation of the UPR by the superimposed control of the S-UPR bears a resemblance to the known function of another metazoan transcription factor, ATF-6, which is activated by regulated proteolysis in response to ER stress and in turn upregulates XBP-1 transcription. In comparison to the metazoan UPR, where multiple ER-resident proteins communicate in a seemingly parallel way with multiple downstream transcription factors, Ire1p and Hac1p remain the central players in the yeast UPR. GCN4 and the S-UPR provide modulatory functions. Nonetheless, the addition to the repertoire of the yeast UPR effectors of an additional transcription factor (Gcn4p) and of a mechanism for transcriptional regulation of Hac1p (S-UPR; Leber et al. 2004 ) suggests that the UPR functions as a regulatory network, with its opportunities for crosstalk with other pathways and regulation by cellular state. 
But most importantly, both the central players and the connectivity of the circuits involved appear to be conserved among eukaryotes and evolutionarily ancient. Materials and Methods Computational and quantitative methods To build the dictionary of putative regulatory elements for UPR target genes, we first extracted the 600-bp upstream regions of all UPR target genes. To get rid of simple repeats unlikely to be regulatory elements (such as AT-rich repeats and transposable elements), we removed exact repeats of lengths 15 bp or longer, and kept the remaining fragments of lengths longer than 50 bp. What remained was the input sequence for the dictionary construction. We used the MobyDick algorithm based on probabilistic segmentation ( Bussemaker et al. 2000b ) to build a dictionary of putative regulatory elements. MobyDick builds the dictionary by iterating through fitting and testing steps. Starting with the frequencies of single bases, the algorithm finds overrepresented two-nucleotide pairs (testing step), adds them to the dictionary, determines their probabilities by maximizing the likelihood of observing the sequence data (fitting step), and continues to build larger fragments iteratively. Adjustable parameters were as follows: L, the maximum word length, was set to 8, and MaxP, the probability of seeing at least one false positive at each testing step when all words of length up to L are tested, was set to 0.999 (relaxed cutoff). MobyDick generated a dictionary of 328 words. We filtered out words that were too short, appeared in too many copies (such as AT-rich short repeats), or were of low quality (the algorithm calculates a quality factor for each word describing how likely it is that the word can be made by chance from shorter words). With the filters number_of_copies < 200, length > 4, and quality_factor > 0.2, we obtained 201 words. Using the filtered dictionary, we grouped similar words into motifs using the clustering algorithm CAST ( Ben-Dor et al. 
1999 ), as follows: We first computed pairwise alignment scores for all the words in the dictionary, using gapless alignment with a scoring scheme derived from a simple mutation model. The model assumes that a base x mutates to any other given base y with probability p /3, and remains the same base with a probability (1 − p ). The score for a pair x–y is given by the log-odds-ratio of observing the pair under the mutation model versus observing the pair at random. With the choice of p = 0.5 (the result is insensitive to the actual p value chosen as long as p is much smaller than 0.75), a matching pair scores ln(2), and a mismatch scores ln(2/3). We normalized the scores to fall between 0 and 1 by the largest score. We then used the CAST algorithm to group words into clusters, with the threshold parameter set at 0.7 (the lower bound of the normalized score averaged over all pairs in a cluster). This procedure generated 109 motifs. To test which motifs are significantly overrepresented in the promoters of UPR target genes, we counted for each motif the total number of occurrences in all promoters, and calculated the expected number of occurrences N exp in the UPR target gene promoters based on the genome-wide frequencies. We then counted the observed number of occurrences N obs of the motif in the promoters of UPR target genes. We used Poisson statistics to calculate the probability P of observing a number of occurrences equal to or greater than N obs by chance, based on N exp . The test based on Poisson statistics is a very good approximation of the more rigorous test based on the binomial distribution, where the probability P is the probability of seeing a specific instance of the motif in the UPR gene set and the total number of trials N t is the total number of copies of the motif in the genome. 
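The overrepresentation test just described can be sketched in a few lines of Python (a minimal illustration; function and variable names are ours, and a single expected-count argument stands in for the paper's calculation of N exp from genome-wide motif frequencies):

```python
import math

def poisson_upper_tail(n_obs, n_exp):
    """P(X >= n_obs) for X ~ Poisson(n_exp): one minus the lower tail."""
    lower = sum(math.exp(-n_exp) * n_exp ** k / math.factorial(k)
                for k in range(n_obs))
    return max(1.0 - lower, 0.0)  # guard against rounding slightly below zero

def overrepresentation(n_obs, n_exp):
    """-log10 P of seeing n_obs or more motif occurrences by chance,
    given n_exp expected occurrences in the target promoters."""
    p = poisson_upper_tail(n_obs, n_exp)
    return math.inf if p == 0.0 else -math.log10(p)
```

A motif observed far more often than expected yields a large −log 10 P, while a motif observed at or below expectation scores near zero; in practice a library routine such as a Poisson survival function would replace the explicit sum.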
Since P is small (0.059) and N t is large (ranging from approximately ten to approximately 1000) but the product P × N t = N exp is finite, the resulting distribution is well approximated by a Poisson distribution with mean = N exp . To derive a general consensus for UPRE-2 that includes context information beyond the core motif, we took the five-nucleotide core ACGTG from the Motif 1 alignment (see Figure 1 B) and searched the promoters of UPR target genes for the occurrences of this core motif that are conserved across five yeast species. We first took the sequence data for S. cerevisiae, S. bayanus, S. mikatae, S. paradoxus, and S. kudriavevii ( Cliften et al. 2003 ; Kellis et al. 2003 ) and performed multiple sequence alignment on all the orthologous promoters. We then searched for conserved blocks on both strands where ACGTG occurs in all species and is correctly aligned. We found 60 instances of conserved blocks in UPR target gene promoters for which multiple sequence alignment data were available. We then extracted ACGTG plus 10-bp flanking sequences on both sides in S. cerevisiae and performed a multiple local sequence alignment of the S. cerevisiae sequences from each of the 60 conserved blocks using the Consensus algorithm ( Hertz and Stormo 1999 ), setting matrix_width to 15. The result of the alignment was a position-specific frequency matrix. We derived a consensus sequence from the matrix using the convention by Cavener (1987) . The alignment matrix and raw sequence data are available in Table S3 . Plasmids and recombinant DNA DNA manipulations, cloning, and yeast culture were performed as previously described ( Sherman et al. 1986 ; Ausubel 1988 ; Guthrie and Fink 2002 ) unless otherwise noted. UPRE reporter constructs (used in Figures 2 A, 3 A, 3 B, and 5 ) were based on the plasmid pPW344/pJC104 ( Cox et al. 1993 ), which contains a triple repeat of the KAR2 -derived UPRE; this plasmid was used as the UPRE-1 reporter in all experiments.
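The conserved-block search used above to refine the UPRE-2 consensus can be illustrated with a minimal sketch (function names are ours; it assumes pre-aligned, equal-length promoter sequences and, for brevity, scans only one strand, whereas the paper also searches the reverse complement):

```python
def conserved_core_positions(aligned, core="ACGTG"):
    """Return alignment columns at which `core` begins, ungapped and
    identical, in every species of a multiple alignment.

    `aligned` maps species name -> aligned promoter sequence (equal lengths).
    """
    seqs = list(aligned.values())
    length = len(seqs[0])
    hits = []
    for i in range(length - len(core) + 1):
        if all(seq[i:i + len(core)] == core for seq in seqs):
            hits.append(i)
    return hits

def extract_with_flanks(sequence, start, core_len=5, flank=10):
    """Pull the core plus up to `flank` bases of context on each side
    (the paper uses 10-bp flanks from the S. cerevisiae sequence)."""
    left = max(0, start - flank)
    return sequence[left:start + core_len + flank]
```

In the actual analysis the alignments would come from a multiple-sequence aligner, and the extracted S. cerevisiae fragments would then be fed to the Consensus algorithm to build the position-specific frequency matrix.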
To construct UPRE reporters used to test Motifs 1–8, we removed the UPRE-1 repeat from pPW344 by digestion with BglII and XhoI, and replaced it with a triple repeat of a 15-nucleotide sequence encompassing the motif in question and the flanking sequence context. Source sequences were chosen from promoters that exhibited robust induction by the UPR ( Travers et al. 2000 ) and, if possible, did not contain a match to the canonical ( KAR2 -derived) UPRE. Intact promoter reporter constructs (pPW668–pPW671) used in Figure 2 B and 2 C were also based on plasmid pPW344. Here the promoter of pPW344 (BamHI/BglII fragment) was replaced by a single PCR fragment spanning the approximately 600 nucleotides immediately upstream of either the ERO1 or DHH1 initiation codon, or by two fragments spanning the same sequence but with the UPRE motif replaced by a restriction site. The high-copy GCN4 plasmid (pPW672) used in Figure 3 A consists of the GCN4 ORF plus 1,000 nucleotides on either side. Source sequence contexts, oligonucleotide sequences, and select PCR primers are compiled in Table S4 . The plasmids expressing the activated allele of HAC1 used in Figure 5 (pPW322/pRC43) and the N-terminally HA-tagged allele of HAC1 (pPW353/pJC316) used in Figure 6 B were as previously described ( Cox and Walter 1996 ). Knockouts of GCN4 and GCN2 and the integrated GCN4-myc were constructed by PCR cassette/generic primer mutagenesis ( Longtine et al. 1998 ). Yeast strains All base strains used in this study are enumerated in Table 1 . As appropriate, these strains were transformed with plasmids from Table 2 for use in experiments. Table 1 Yeast Strains Table 2 Yeast Plasmids Cell culture and growth conditions For all experiments, samples were diluted from saturated overnight cultures and regrown to midlog phase (OD600 = 0.5) prior to addition of drug. DTT (Sigma, St. Louis, Missouri, United States) was added to cultures to a final concentration of 2 mM.
Tm (Boehringer Mannheim, Indianapolis, Indiana, United States) was added to cultures to a final concentration of 1 μg/ml. 3-AT (Sigma) was added to cultures to a final concentration of 10 mM. All 3-AT treatments were performed on strains WT for the HIS3 gene; for histidine-deprived cultures, overnight cultures were washed three times in SD-histidine, then diluted to low density in SD-histidine and grown to midlog phase before the addition of the drug. To assay β-galactosidase activity on solid growth media, we overlaid plates with buffered soft agar containing X-gal (Sigma) as described previously ( Cox and Walter 1996 ). For liquid cultures, we used a colorimetric ONPG assay ( Holley and Yamamoto 1995 ). Gene expression profiling. Strains were grown in YPD (pH 5.4) as in Travers et al. (2000) to midlog phase (OD = 0.5) and then either treated with 2 mM DTT or left untreated. RNA was extracted as described by Ruegsegger et al. (2001) , and mRNA was purified with a PolyATtract kit (Promega, Madison, Wisconsin, United States). Microarray analysis used yeast spotted-cDNA ORF arrays printed at the University of California, San Francisco, Core Center for Genomics and Proteomics ( http://derisilab.ucsf.edu/more ) and was performed as described previously ( Carroll et al. 2001 ). Measurements reported are the average of three independent experiments. We tested the statistical significance of the induction for the three gene sets (UPRE-1, UPRE-2, and UPRE-3 genes) in four different strains (WT, Δ ire1, Δ gcn4, and Δ gcn2 ) using a z -score scheme. For a given gene set and a given strain, we calculated the average fold induction for genes in the set and compared it to the value for the genome overall. The null hypothesis was that the selected gene set was no different from a randomly selected set (same total number) from the genome overall. 
Under this hypothesis, the average μ has a distribution well approximated by a normal distribution (due to the central limit theorem) with mean μ genome and standard deviation σ/√ N set , where N set is the total number of genes in the test set. We computed a z -score, z = √ N set (μ − μ genome )/σ, which should have a standard normal distribution (zero mean and unit variance) under the null hypothesis. The P value was calculated by integrating the standard normal curve from z to infinity. Isolation and detection of protein. Protein preparation, electrophoresis, and Western blotting proceeded as described in the accompanying paper ( Leber et al. 2004 ). Gcn4p-myc (see Figure 5 A) was detected using a mouse anti-myc monoclonal antibody (Molecular Probes, Eugene, Oregon, United States); eIF-2α-phosphate was detected by a commercial phospho-specific mouse polyclonal (Upstate Biotechnology, Lake Placid, New York, United States). Gel retardation analysis. Gel shifts were performed as previously described ( Cox and Walter 1996 ) except that we found it important to elevate the acrylamide concentration to 5% and lower the in-gel glycerol concentration to 4%. UPRE-1 oligo and UPRE-1 mutant are based on sequences previously described ( Cox and Walter 1996 ). UPRE-2 oligo is a fragment of the ERO1 promoter centered around the UAS. UPRE-2 mutant is a point mutation that does not support transcription in an artificial promoter context (unpublished data). For sequences, see Table S4 . Competition experiments used a 100-fold excess of unlabeled oligonucleotide. Supporting Information Table S1 Dictionary of “Words” Compiled by MobyDick This table contains an alphabetical list of the dictionary “words” compiled by the MobyDick algorithm from the “text” comprising the promoters of UPRE target genes.
Associated statistics for each word are as follows: N, the average number of times the string is delimited as a word among all segmentations of the data; Xi, the number of matches of the word anywhere in the text; p, the frequency of drawing the word from the dictionary, optimized over all words to give the maximum likelihood of observing the text; Z = p + p s , where p s is the probability with which the word can be made by combining shorter words from the dictionary; sig = significance = Np /sqrt( N [Z − p ]). (10 KB TXT). Table S2 Ranked Listing of the Motifs Assembled by Clustering from the Dictionary Words N tot is the number of times a given motif appeared in the promoters of the genome overall; N exp is the number of times one would expect a given motif to appear in the 381 promoters of UPR target genes if the motif were distributed randomly throughout all promoters; N obs is the number of times a given motif actually appears in the target gene promoters; and −log 10 P is a measure of overrepresentation based on Poisson statistics ( P is the likelihood that a given observed distribution would occur by chance). (3 KB TXT). Table S3 UPRE-Containing Promoter Alignments This table contains CLUSTALW alignments for the KAR2 and ERO1 promoters, derived from S. cerevisiae and related budding yeasts. Asterisk indicates 100% conserved residues. Scer, S. cerevisiae; Skud, S. kudriavevii; Spar, S. paradoxus; Smik, S. mikatae; Sbay, S. bayanus . (8 KB TXT). Table S4 Oligonucleotide Sequences and Cloning Schemes This table contains the sequences of primers and oligonucleotide sequences used in construction of plasmids for this study, as well as oligonucleotide sequences used for probes in the gel-shift analysis. (38 KB DOC).
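The z-score scheme described under gene expression profiling above can be written compactly (a sketch with our own function names; the standard-normal upper tail is computed via the complementary error function):

```python
import math

def gene_set_z_test(set_inductions, genome_mean, genome_sd):
    """z-score and one-sided P value testing whether a gene set's mean fold
    induction differs from a randomly drawn set of the same size (the null
    hypothesis of the test described in the text)."""
    n = len(set_inductions)
    set_mean = sum(set_inductions) / n
    z = math.sqrt(n) * (set_mean - genome_mean) / genome_sd
    # Integral of the standard normal curve from z to infinity:
    p = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p
```

For example, a four-gene set whose mean induction sits one genome standard deviation above the genome mean gives z = 2, i.e., a one-sided P of roughly 0.02.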
Accession Numbers The GenBank accession numbers of the gene products discussed in this paper are Dhh1p (NP_010121), Ero1p (NP_013576), Gcn4p (NP_010907), Gcn2p (NP_010569), Hac1p (NP_011946), and Ire1p (NP_116622). Microarray data can be accessed at the Gene Expression Omnibus (GEO) at the National Center for Biotechnology Information (NCBI) database as platform number GPL1001 and sample numbers GSM16985–GSM1988.
524256 | Where to Start? Alternate Protein Translation Mechanism Creates Unanticipated Antigens

In the spirit of good health, cells are constantly subjecting their protein contents to immunological surveillance by cytotoxic (killer) T cells. Tens of thousands of major histocompatibility complex (MHC) class I molecules cradle peptides (bits of proteins) on cell surfaces, and T cells detect any suspicious peptides with extreme sensitivity. If a cell is infected with a virus, peptides created from viral DNA will end up on the cell's surface as antigens, triggering immunological red flags. Most—but not all—of the peptides presented by MHC class I molecules are created by conventional cellular mechanisms: with the help of a ribosome, three mRNA nucleotides (a codon) are decoded into a corresponding amino acid, which is strung as the next link on an elongating peptide. Most peptides begin with the amino acid methionine, coded by the mRNA nucleotide triplet A-U-G (AUG). But some peptides are “cryptic,” arising from normally untranslated regions of mRNA or initiated with codons other than AUG. Previous studies suggested that an unconventional translation mechanism creates some cryptic peptides. But how? And why? Only one type of translation initiator, transfer RNA (tRNA), specific for AUG and loaded with a methionine molecule, is known. Protein synthesis beginning at alternate codons has been attributed to imprecise pairing between the methionine translation initiator and mRNA. This, however, does not explain proteins that do not begin with methionine. Only two mechanisms for building non-methionine-initiated peptides have been discovered. In a new study, Susan R. Schwab et al. characterize one of them, the CUG-initiated translation of a peptide starting with leucine instead of methionine. The authors explored cellular translation by engineering cells to create peptides of interest and present them through matching MHC molecules on the cells' surfaces.
Then, by harnessing the exquisite sensitivity of T cells to probe for antigens on MHC molecules, they could identify which peptides were created under different experimental conditions. Their findings point to a unique translation mechanism. In the other known example of a non-methionine-initiated peptide, translation beginning at GCU or CAA is guided by a specific folded structure of mRNA nucleotides called the internal ribosome entry site. Schwab et al. have found that no similar structure is necessary for CUG-initiated translation. However, similar to the standard mechanism of AUG initiation, they found that ribosomes do scan for CUG. Additionally, the presence of a specific ribosome-binding sequence in mRNA (the “Kozak context”) near a CUG site can enhance the efficiency of initiation there. Schwab et al. have also suggested a possible purpose for this translation mechanism. Under stress, cells can down-regulate conventional translation, which curbs the production of viral proteins in the event of an infection but also suppresses the creation of antigens needed to flag down T cells for an immune response. Here, Schwab et al. report that peptides starting with leucine were produced in the absence of the protein eIF2, which normally aids in AUG-initiated peptide synthesis. Cells under stress slow conventional translation by restraining the function of eIF2. Therefore, CUG-initiated translation, which works without eIF2, might provide an out for stressed cells needing to create peptides. This alternative could be a great way to avoid pumping out viral proteins and still create antigens for T cell surveillance—unless, of course, viruses take advantage of the loophole for their own peptide production.
549045 | Signalling crosstalk in FGF2-mediated protection of endothelial cells from HIV-gp120 | Background The blood brain barrier (BBB) is the first line of defence of the central nervous system (CNS) against circulating pathogens, such as HIV. The cytotoxic HIV protein, gp120, damages endothelial cells of the BBB, thereby compromising its integrity, which may lead to migration of HIV-infected cells into the brain. Fibroblast growth factor 2 (FGF2), produced primarily by astrocytes, promotes endothelial cell fitness and angiogenesis. We hypothesized that treatment of human umbilical vein endothelial cells (HUVEC) with FGF2 would protect the cells from gp120-mediated toxicity via endothelial cell survival signalling. Results Exposure of HUVEC to gp120 resulted in dose- and time-dependent cell death; whereas, pre-treatment of endothelial cells with FGF2 protected cells from gp120 angiotoxicity. Treatment of HUVEC with FGF2 resulted in dose- and time-dependent activation of the extracellular regulated kinase (ERK), with moderate effects on phosphoinositol 3 kinase (PI3K) and protein kinase B (PKB), also known as AKT, but no effects on glycogen synthase kinase 3 (GSK3β) activity. Using pharmacological approaches, gene transfer and kinase activity assays, we show that FGF2-mediated angioprotection against gp120 toxicity is regulated by crosstalk among the ERK, PI3K-AKT and PKC signalling pathways. Conclusions Taken together, these results suggest that FGF2 may play a significant role in maintaining the integrity of the BBB during the progress of HIV associated cerebral endothelial cell damage. | Background Maintenance of blood brain barrier (BBB) integrity is critical to prevent the passage of potentially harmful factors, such as pathogens or toxins into the brain. During the progression of central nervous system (CNS) infectious disease, pathogens might gain access to the brain by compromising the integrity of the BBB. 
In the course of AIDS, HIV enters the brain at early stages, disrupting the components of the BBB, resulting in a chronic state of inflammation known as HIV encephalitis (HIVE) [ 1 , 2 ]. HIVE is characterized by the presence of HIV-infected microglia and/or macrophages in the brain, the formation of multinucleated giant cells and microglial nodules, astrogliosis and myelin pallor, the combined effects of which could result in cognitive impairment [ 3 ]. Because endothelial cells of the BBB provide the first point of contact between blood-borne viral products and the brain, they form the front line of defence against viral entry into the CNS. Alterations in signalling between components of the BBB and either HIV proteins or factors produced in response to HIV infection, such as cytokines and chemokines, may disrupt BBB integrity, resulting in a compromise that could promote transmigration of activated monocytes or HIV-infected cells into the brain. Toxic viral products released by HIV-infected cells, such as gp120, Tat or Nef, together with cytokines and chemokines from activated monocytes, can act to increase BBB permeability [ 4 - 8 ]. Cell-free gp120 is found in the serum of HIV-infected patients, crosses the BBB by absorptive endocytosis [ 9 ] and has been detected in the perivascular regions of the brain [ 10 ]. Gp120 is toxic to uninfected cells such as cerebral endothelial cells [ 8 ], and induces numerous signalling alterations in glial cells leading to indirect neuronal dysfunction and death [ 11 , 12 ]. Huang et al. have shown that gp120 promotes apoptosis in human umbilical vein endothelial cells (HUVEC) by acting through CXCR4 and CCR5 chemokine receptors to increase activation of protein kinase C (PKC) [ 13 , 14 ]. Furthermore, these studies show that the toxic effects of gp120 were blocked by PKC antagonists, sphingosine, phorbol esters and fibroblast growth factor 2 (FGF2) [ 13 ]. 
While viral products and inflammatory response proteins may damage components of the BBB, other factors, such as growth factors, may work to preserve BBB integrity by maintaining endothelial cell fitness. In this context, FGF2 is of particular interest for several reasons. First, FGF2 is produced primarily by astrocytes that are in proximity to cerebral endothelial cells in the blood brain barrier [ 15 ]. Among the known astrocyte-derived growth factors, only FGF2 mimics the signalling actions of astrocytes to the BBB [ 15 , 16 ]. Second, of the four FGF receptors (FGFR), FGFR1 is mainly expressed on neurons and endothelial cells while FGFR2 and FGFR3 are found on glial cells [ 17 - 19 ]. FGF2, which binds to FGFR1, exhibits a wide range of angiotrophic effects [ 15 , 16 ] and promotes the survival of cortical and hippocampal neurons [ 15 , 16 , 20 - 22 ]. Third, FGF2 signals through FGFR1 and activates phosphoinositol 3 kinase (PI3K), protein kinase C (PKC), extracellular regulated kinase (ERK), and p38 pathways [ 23 - 25 ]. Both ERK and p38 belong to the mitogen-activated protein kinase (MAPK) signalling pathways and have been shown to be involved in regulating endothelial cell survival [ 15 , 16 ]. FGF2 protection of HUVEC from gp120 is proposed to occur by preventing the gp120-mediated increase in PKC activity [ 13 ]; however, protective signalling mechanisms directly induced by FGF2 have not been addressed. Therefore, we investigated the signalling pathways involved in FGF2-mediated protection against gp120 toxicity in HUVEC. Our studies indicate that FGF2 protects endothelial cells from gp120-mediated toxicity by crosstalk among several signalling pathways downstream of the tyrosine kinase FGFR. These pathways include the ERK, PI3K/AKT and PKC signalling cascades. 
Likewise, other studies have suggested that signalling pathways that inhibit cell death (e.g., p38, MAPK/ERK) and survival pathways (e.g., AKT/PKB) may represent the next investigational step in inhibition of HIV-related CNS toxicity [ 26 ]. In this context, FGF2-mediated signalling may play an important role in maintaining BBB integrity during HIV trafficking into the brain and/or cell-free gp120 interactions with cerebral endothelial cells. Results FGF2 protects endothelial cells from gp120-mediated toxicity Consistent with previous reports [ 13 , 14 , 27 , 28 ], our results showed that gp120 (25 ng/ml) increased cell death of HUVEC above control by approximately 27.5% (average of results from all viability assays) after 24 h exposure (Fig. 1 ) as determined by Trypan Blue Exclusion, TUNEL, and FA/PI staining (Fig. 1E, J, O , respectively). However, cells pre-treated with FGF2 (20 ng/ml) for 24 h and then exposed to gp120 displayed essentially the same percentage of cell death as untreated control cells (Fig. 1D, E, I, J, N, O ). Although FGF2 treatment of HUVEC most likely improved overall cell fitness [ 15 , 16 ], no significant differences in the total numbers of cells (Fig. 2A ) or in cell viability were observed between control and FGF2 treated cultures (Fig. 1E, J, O and 2B ). Furthermore, time course experiments indicated that simultaneous treatment (data not shown) or pre-treatment with FGF2 up to 24 h was effective at protecting cells from gp120 toxicity (Fig. 2B ). These results indicate that FGF2 is protective against gp120-mediated toxicity in HUVEC. FGF2 activates ERK in HUVEC To explore mechanisms involved in the angio-protective effects of FGF2 against gp120, we first investigated FGF2-stimulated signalling mechanisms that are involved in cell survival pathways. 
The binding of FGF2 to its receptor (FGFR1) induces several signalling cascades, such as MAPK-mediated ERK activation and AKT-mediated GSK3β inactivation, both of which regulate cell survival. We first determined the effects of FGF2 stimulation on phosphorylation of ERK and GSK3β in time course experiments in HUVEC (Fig. 3 , lanes 1–3). Western blot analysis showed that treatment of HUVEC with FGF2 (20 ng/ml) resulted in maximum ERK phosphorylation 5–10 min after stimulation (Fig. 3A , lanes 1–3), followed by a progressive decrease reaching undetectable levels at 60 min (data not shown), with no effect on levels of total ERK (Fig. 3B , lanes 1–3). Neither GSK3β (Fig. 3C ) nor PKC (data not shown) phosphorylation was affected by FGF2 treatment. To test the specificity of FGF2 on signalling, HUVEC were exposed to pharmacological inhibitors for PI3K (LY294002), ERK (U0126), and PKC (Bis I and Gö6983) for 30 min prior to FGF2 treatment (Fig. 3A , lanes 4–7, respectively). ERK phosphorylation was inhibited by blocking ERK and PKC (Fig. 3 , lanes 5–7). Interestingly, blocking the PI3K/AKT/GSK3β pathway resulted in a dramatic increase in ERK phosphorylation (Fig. 3A , lane 4). Neither FGF2 nor inhibitors affected levels of total ERK (Fig. 3B ). With regard to GSK3β, blocking PI3K with LY294002 and PKC with Bis I or Gö6983 also inhibited GSK3β phosphorylation (Fig. 3C , lanes 4, 6, 7), albeit to a lesser degree. Treatment with the ERK inhibitor U0126 increased GSK3β phosphorylation (Fig. 3C , lane 5). Neither FGF2 nor inhibitors affected total levels of PI3K or GSK3β (Fig. 3D, E ). These inhibitor studies suggest that FGF2 signalling involves crosstalk between PI3K/AKT/GSK3β and ERK that is possibly mediated by PKC (Fig. 3A , lane 4 and 3C , lane 5). To further confirm that these changes in kinase signalling are mediated by FGF2, immuno-complex kinase assays were performed (Fig. 4A, B ). As indicated by an asterisk (*) in Fig. 
4A , lane 2, FGF2 treatment increased ERK activity significantly above levels observed in untreated control cells (Fig. 4A ). Likewise, and as shown in Figure 3 , FGF2-mediated ERK activity was significantly greater than control in the presence of the PI3K inhibitor LY294002 (Fig. 4A ). The ERK inhibitors PD98059 and U0126, and the PKC inhibitors Bis I and Gö6983 significantly blocked FGF2-mediated ERK activity (Fig. 4A, D ) as shown in Figure 3 . Conversely, FGF2 alone or in the presence of the inhibitors LY294002, Bis I and Gö6983 had minimal effects on GSK3β activity (Fig. 4B, D ). However, the ERK inhibitor U0126 significantly decreased GSK3β activity (Fig. 4B ). PD98059 also decreased GSK3β activity, although the decrease was not statistically significant (Fig. 4B ). Cell viability was not significantly affected by FGF2 or inhibitor treatments (Fig. 4C ), ensuring that effects of inhibitors on kinase activity were not due to cell death. Taken together, these data show that FGF2 activates ERK signalling in HUVEC but has little effect on GSK3β activity unless FGF2-mediated ERK phosphorylation is blocked. Furthermore, independently of FGF2, PI3K/AKT and PKC signalling is necessary for GSK3β phosphorylation. However, once GSK3β is phosphorylated, the kinase activity of GSK3β is independent of PI3K/AKT and PKC downstream signalling. On the other hand, GSK3β phosphorylation is influenced, to some degree, by FGF2-mediated ERK phosphorylation since blocking ERK phosphorylation results in a significant increase in the phosphorylation of GSK3β. Likewise, the kinase activity of GSK3β also appears to require ERK phosphorylation for maximal activation. In summary, the FGF2-mediated kinase activity of ERK and GSK3β appears to involve crosstalk between these pathways and possibly PKC. The potential roles of ERK and GSK3β phosphorylation and activity in FGF2-mediated protection from gp120 were investigated. 
FGF2 angioprotection in HUVEC against gp120 toxicity is mediated, in part, by ERK signalling To investigate the potential role of ERK and PI3K/AKT/GSK3β signalling in FGF2-mediated angioprotection against gp120, HUVEC were treated with LY294002, U0126, Bis I, or Gö6983 for 30 min prior to FGF2 and gp120 exposure (Fig. 5 ). Results from cell toxicity assays determined by Trypan blue exclusion (Fig. 5A ) support our previous data (Fig. 1 ) showing that exposure to gp120 alone significantly increased cell death above control and FGF2 treated cells; whereas, cells pre-treated with FGF2 before exposure to gp120 were protected (Fig. 5 ). The protective effects of FGF2 against gp120 were significantly blocked by U0126, which inhibits MEK to block ERK phosphorylation (Fig. 5A ). Blocking PI3K with LY294002 partially blocked FGF2 protection, although the difference from control was not statistically significant. FGF2 protection from gp120 was not affected by blocking PKC with Bis I or Gö6983 (Fig. 5A ). Treating cells with U0126 to block ERK phosphorylation, and gp120 in the absence of FGF2, resulted in significant cell death compared to untreated cells (Fig. 5B ). Moreover, pre-incubation of FGF2 with anti-FGF2 antibody completely neutralized FGF2-mediated angioprotection against gp120 (Fig. 5B ). These results indicate that ERK phosphorylation is significantly involved in FGF2-mediated angioprotection from gp120. PI3K/AKT/GSK3β signalling is partially involved in FGF2 protection from gp120; whereas, PKC signalling in the presence of FGF2 is not necessary for protection from gp120. These results suggest that FGF2 protects endothelial cells from gp120 largely by ERK stimulation with a partial contribution by GSK3β phosphorylation. To further confirm the contribution of these signalling pathways in FGF2 protection against gp120, HUVEC infected with caERK or caAKT were exposed to gp120 and assayed for cell viability. 
As expected, endothelial cells infected with caERK and exposed to gp120 were significantly protected from gp120 toxicity (Fig. 6 ). caAKT conveyed only partial protection from gp120 toxicity, less than either caERK or FGF2 treatment (Fig. 6 ). In control experiments where HUVEC were infected with GFP adenovirus, no protective effects against gp120 were observed (Fig. 6 ). Furthermore, none of the adenoviral constructs alone promoted significant cell toxicity (Fig. 6 ). In agreement with our previous data, these results suggest that ERK activation plays a significant role in protection of endothelial cells from gp120, and that AKT/GSK3β may also be involved. To confirm that the gene transfer approach resulted in ERK and AKT phosphorylation and kinase activation, Western blot (Fig. 7A–D ) and immuno-complex assays (Fig. 7E, F ) were performed. ERK phosphorylation was detected using an antibody that recognizes only the phosphorylated form of ERK1/2. Consistent with our previous experiments (Fig. 3A ), FGF2 stimulation resulted in an increase of both ERK1 (44 kDa) and ERK2 (42 kDa) phosphorylation (Fig. 7A ). Levels of FGF2-mediated phosphorylation of ERK2 were greater than ERK1 (Fig. 7A , lane 2). Infection with the GFP adenoviral construct alone had no effect on ERK1/2 phosphorylation (Fig. 7A , lane 3). In contrast, infection with caERK resulted in a significant increase in ERK1 phosphorylation with no effect on ERK2 (Fig. 7A , lane 5). FGF2 treatment in combination with caERK induced high levels of ERK1 phosphorylation with only moderate increases in ERK2 phosphorylation (Fig. 7A , lane 6). These results indicate that FGF2 stimulation results in phosphorylation of mainly ERK2; whereas gene transfer of caERK or the combination of FGF2 and caERK mainly increased ERK1 phosphorylation. Importantly, total ERK activity levels were similar in caERK with or without FGF2 (Fig. 7E and 7F ). 
Moreover, the level of protection conveyed by FGF2 alone was similar to protection by caERK or caERK plus FGF2. On the other hand, caAKT alone had no effect on ERK1/2 phosphorylation (Fig. 7A , lane 7), whereas, FGF2 treatment in combination with caAKT (Fig. 7A , lane 8) had similar effects on ERK1/2 phosphorylation as observed with FGF2 (lane 2) alone or with GFP and FGF2 (lane 4). Levels of total ERK were not affected by FGF2, GFP, caERK or caAKT (Fig. 7B ). Infection of HUVEC with caAKT resulted in a slight increase in baseline levels of AKT phosphorylation (Fig. 7C , lane 7). Levels of total AKT were not affected by FGF2, GFP, caERK, or caAKT (Fig. 7D ). Consistent with Western blot analyses, immunocomplex assays show that caERK and/or FGF2 increased levels of ERK activity (Fig. 7E , lanes 2–4, 6 and 7F ), whereas neither caAKT nor GFP resulted in increased ERK activity in the absence of FGF2 (Fig. 7E lanes 5, 7 and Fig. 7F ). Results from inhibitor studies (Fig. 5 ) and gene transfer experiments (Fig. 6 ) suggest that both ERK and PI3K/AKT (albeit to a lesser degree) are involved in FGF2-mediated protection against gp120 toxicity. Furthermore, blocking the ERK-mediated pathway results in an increase in GSK3β phosphorylation and vice versa: blocking the AKT/GSK3β pathway after FGF2 stimulation results in an increase in ERK phosphorylation. These results suggest that when endothelial cells are exposed to gp120, FGF2 may mediate protection that involves crosstalk between the ERK and PI3K pathways (Fig. 3A and 3C , and Fig. 6 ). Moreover, inhibitor studies suggest PKC may be involved in this signalling convergence, but a direct role of PKC in FGF2 protection against gp120 is unclear. PKC may be involved in crosstalk between ERK and AKT signalling pathways during FGF2 protection from gp120 Our studies using pharmacological inhibitors suggest that PKC may be involved in a crosstalk mechanism observed between the ERK and AKT/GSK3β pathways in FGF2 signalling. 
For example, when HUVEC were exposed to PKC inhibitors Bis I and Gö6983 prior to FGF2 treatment, ERK phosphorylation was inhibited to below baseline levels, showing that FGF2-mediated ERK phosphorylation is at least in part influenced by PKC phosphorylation (Fig. 3A , lanes 6 and 7). Likewise, PKC inhibitors partially inhibited GSK3β phosphorylation after FGF2 stimulation (Fig. 3C , lanes 6 and 7). Furthermore, since Huang et al. have shown that total PKC phosphorylation increases with gp120 treatment in HUVEC and that FGF2 is protective [ 13 ], we explored the possibility that similar crosstalk might be involved in the FGF2-mediated protection from gp120. To address these signalling events, we determined which signalling pathways were initiated by FGF2 and which were initiated by gp120. To differentiate the effects of gp120 on ERK, GSK3β and PKC phosphorylation from those obtained in Fig. 3 where FGF2 alone was utilized, we treated endothelial cells with 1) gp120 alone (Fig. 8A, C ), 2) gp120 in combination with inhibitors (Fig. 8A, C ), and 3) inhibitors, FGF2 and gp120 (Fig. 8B, C ). Treatment of endothelial cells with gp120 alone (Fig. 8A , lane 2, 8C ) or with inhibitors alone (data not shown) did not change levels of ERK phosphorylation. However, when endothelial cells were treated with LY294002 and then exposed to gp120 for 30 min, a significant increase in ERK phosphorylation was observed (Fig. 8A , lane 3, 8C ). Furthermore, in the presence of both FGF2 and gp120 and the inhibitor LY294002, ERK phosphorylation also increased (Fig. 8B , lane 3, 8C ). Interestingly, in the presence of the PKC inhibitor that includes inhibition of the ζ isoform (Gö6983), ERK phosphorylation returned to approximately control levels (Fig. 8A–B , lane 5, 8C ). On the other hand, inhibition of the classic isoforms of PKC, α, β and γ, with Bis I almost completely blocks ERK phosphorylation in the presence of FGF2 and gp120 (Fig. 
8B , lane 4), as does inhibition of ERK phosphorylation with U0126 (Fig. 8B , lane 6). These results suggest that PKC signalling may be involved in FGF2-stimulated ERK phosphorylation that protects against gp120. Treatment of HUVEC with gp120 alone, or with gp120 and inhibitors to block ERK or PI3K/AKT/GSK3β, had little effect on GSK3β phosphorylation (Fig. 8A , lanes 1–3, 6, 8C ); whereas, blocking PKC decreased levels of GSK3β phosphorylation (Fig. 8A , lanes 4 and 5). Likewise, treatment of HUVEC with FGF2 alone or with FGF2, gp120 and inhibitors to block PI3K/AKT/GSK3β or ERK had little effect on GSK3β phosphorylation (Fig. 8B , lanes 1–3, 6, 8C ); whereas, blocking PKC decreased levels of GSK3β phosphorylation (Fig. 8B , lanes 4 and 5, 8C ). In summary, in the presence of FGF2 and inhibitors for FGFR and PI3K/AKT/GSK3β, ERK phosphorylation increases (Fig. 3A ). However, in the presence of FGF2 or FGF2 and inhibitors for PKC or ERK, ERK phosphorylation decreases (Fig. 8A, B, C ). Likewise, PKC inhibitors almost completely abolish GSK3β phosphorylation in the presence of gp120, independently of FGF2 stimulation (Fig. 8B, C ). Together, these findings point to PKC involvement with FGF2-stimulated signalling in HUVEC during challenge with gp120; however, further experimentation is needed to confirm any role of PKC in FGF2-mediated protection from gp120. Discussion The present study is the first to show that FGF2 protects HUVEC against the toxic effects of gp120 via crosstalk of the ERK-PI3K/AKT pathways (Fig. 9 ). Consistent with these findings, FGF2 has been shown to protect endothelial cells from oxidative stress [ 29 ] and radiation [ 30 , 31 ]. These studies suggest that PKC is involved in protection against ultraviolet radiation, since blocking PKC abrogates FGF2-mediated protection [ 31 ]. 
Similarly, a recent study showed that FGF2 also protected endothelial cells from gp120-mediated toxicity that was induced by dysregulation of PKC activity to promote apoptosis [ 13 , 28 ]; however, the pathways by which FGF2 protected endothelial cells from gp120 remained unclear and may involve independent mechanisms. Therefore, our study focused on signalling pathways involved in angioprotection upon exposure to gp120. gp120 has been reported to dysregulate PKC signalling but also to induce ERK phosphorylation in several systems by different pathways [ 32 - 34 ]. Likewise, our studies suggest that gp120 and FGF2 signalling in HUVEC may, in some aspects, overlap and involve primarily ERK and to a lesser extent AKT/GSK3β signalling. In this context, when HUVEC were treated with the ERK inhibitor U0126, then exposed to gp120, a significant increase in cell death above control was observed; however, the amount of cell death observed under these conditions was less than that observed in cells treated with gp120 alone. In HUVEC, PKC phosphorylation does not change when stimulated with FGF2 and PKC does not appear to be directly involved in FGF2-mediated protection from gp120 since inhibitors of this pathway had no effect on angioprotection. However, previous studies have shown that PKC may play a role in the MAPK signalling cascade, through upstream crosstalk with Ras (Figure 9 ) [ 35 , 36 ]. Moreover, in the presence of gp120 with or without FGF2, both ERK and PKC inhibitors completely block ERK phosphorylation, suggesting that while PKC is involved in ERK phosphorylation, the protective properties of ERK are not dependent on PKC. In support of these conclusions, the current study shows that inhibition of ERK, and to a lesser degree PI3K/AKT, blocks FGF2-mediated protection from gp120. Our data suggest that FGF2 signalling via ERK-PI3K/AKT crosstalk is responsible for protection of endothelial cells from gp120. 
Other mechanisms that could contribute to FGF2-mediated protection against gp120 may include, but are not limited to, interaction of FGF2 with heparan sulfate receptors and/or stimulation of alternative pathways not involving ERK [ 37 ]. Consistent with these findings, FGF2 protects cardiac myocytes from inducible nitric oxide synthase-induced apoptosis by the ERK signalling pathway [ 38 ], and in neuronal cells FGF2-mediated ERK activation is essential for survival signalling [ 39 ]. Our studies provide evidence for the first time that FGF2-mediated protection of endothelial cells against gp120 toxicity largely occurs through an ERK-dependent pathway. Our data also suggest crosstalk between the PI3K/AKT and ERK pathways, since blocking PI3K resulted in a significant increase in ERK phosphorylation in FGF2 treated endothelial cells. Likewise, blocking ERK caused an increase in phosphorylation of GSK3β, which is directly downstream of PI3K/AKT signalling. In this context, it is possible that upon stimulation by growth factors such as FGF2, endothelial cells utilize several signalling cascades that are capable of crosstalk to promote cell fitness and survival, as suggested by studies involving vascular endothelial growth factor (VEGF) signalling in the presence or absence of serum [ 40 ]. In these studies, it was shown that crosstalk between the AKT and p38 pathways may regulate cell survival during serum withdrawal and VEGF stimulation [ 40 ]. Our studies also point toward signalling crosstalk during FGF2 protection from gp120. Crosstalk between PI3K and p38 was shown to be mediated by MAPK kinase kinase (MEKK3) in VEGF signalling [ 40 ]. Likewise, in FGF2 signalling, crosstalk between PI3K/AKT and ERK might be mediated by PKC [ 41 ]. This is consistent with previous studies showing that in VEGF-stimulated endothelial cells, inhibition of PI3K resulted in an increase in ERK1/2 and p38 phosphorylation [ 42 ]. 
Together with the findings in this study, these reports emphasize the importance of different signalling pathways communicating to regulate intracellular signal transduction in endothelial cell survival [ 43 , 44 ]. The observations reported in this study have potential importance to the maintenance of BBB integrity in the host response during HIV infection. FGF2 is produced by astrocytes in close proximity to endothelial cells of the BBB and functions to improve cell fitness and barrier integrity. In in vitro models of the BBB, FGF2 treatment of endothelial cells mimics the effects of astrocyte co-culture by improving tight junction integrity [ 15 ]. Numerous studies have shown that disruption of this key component of the BBB is central to HIV infection of the CNS and is a hallmark of HIVE [ 45 ]. This is particularly important during HIV trafficking into the CNS because endothelial cells of the BBB are the first neural cells to come in contact with HIV-infected cells or HIV products. FGF2-mediated regulation of signalling crosstalk in these cells is therefore central to the initial response to HIV-infected and/or activated cells and HIV products such as gp120. Migration of HIV-infected and/or activated cells into the brain is largely regulated by endothelial cell integrity. During the progression of HIVE, activated and HIV-infected monocytes produce cytokines and chemokines and release HIV products that act in concert to compromise the integrity of the BBB [ 46 ]. This triggers a series of signalling events that may result in the alteration of tight junction proteins, such as zonula occludens, thereby promoting migration of HIV-infected cells into the brain parenchyma [ 45 , 47 , 48 ]. Alternatively, astroglial cells, which are also an important component of the BBB, might produce trophic factors such as FGF2 in response to endothelial cell distress in attempts to maintain BBB integrity. 
In this regard, factors produced by damaged endothelial cells, including tissue factor, can induce the early growth response-1 gene (Egr-1) transcription factor in astrocytes, which in turn directs expression of FGF2 [ 49 ]. Conclusions In summary, the present study shows that FGF2 is protective against gp120 toxicity via crosstalk of ERK-PI3K/AKT signalling pathways during compensatory signalling. This finding is important for understanding the pathogenesis of HIVE because factors produced by components of the BBB, such as FGF2 by astrocytes, in response to toxins such as HIV-gp120 may be responsible in part for angioprotection of endothelial cells of the brain microvasculature. Methods Cell culture HUVEC (Clonetics ® , BioWhittaker, Inc., Walkersville, MD) were grown in complete media (endothelial basal medium [EBM] supplemented with bovine brain extract (12 μg/ml), human epidermal growth factor (10 ng/ml), hydrocortisone (1 μg/ml), GA-1000 (Gentamicin and Amphotericin B, 1 μg/ml) (Clonetics) and 20% fetal bovine serum (Irvine Scientific, Irvine, CA)). Complete growth media were changed to minimal media (EBM, GA-1000, 1% serum, Clonetics) for 24 h prior to treatments. HUVEC were chosen because previous studies have characterized these cells with regard to FGF2-mediated signalling responses, and much of the work conducted in the present study complements and builds on data from those studies [ 13 , 27 ]. Furthermore, HUVEC mimic numerous characteristics of cerebral endothelial cells. Both short-term signalling events and long-term viability of HUVEC were addressed after treatment with a combination of inhibitors, FGF2, and gp120, or with each component alone, as described below. 
HUVEC treatments to determine viability For viability assays, HUVEC were treated with either 20 ng/ml FGF2 (Calbiochem, La Jolla, CA) or full-length recombinant HIV-1 BaL gp120 (25 ng/ml) (NIH Research and Reagent Program, Rockville, MD, and Bartels-Mardx, Carlsbad, CA) for 30 min, 1 h, 6 h, 12 h and 24 h. HIV-1 BaL , used in these experiments, is a macrophage-tropic strain whose gp120 binds to CD4 and signals via CCR5. For protection assays, HUVEC were treated either simultaneously with FGF2 and gp120 or pre-treated with FGF2 for 30 min, 1 h, 6 h, 12 h and 24 h before the addition of 25 ng/ml gp120. HUVEC were harvested 24 h after the addition of gp120 for viability assays. Viability assays For trypan blue exclusion assays, HUVEC were rinsed with warm PBS, harvested, collected by gentle centrifugation, resuspended in a PBS/trypan blue solution (1:1, vol:vol) and counted as previously described [ 50 ]. Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) staining was carried out essentially as described previously [ 51 , 52 ]. Cells were grown on coverslips, rinsed with PBS and fixed with 4% paraformaldehyde for 20 min at room temperature. After rinsing with PBS, cells were permeabilized with 1% H 2 O 2 in 1× PBS-Tween-20 for 10 min at room temperature, rinsed twice with PBS and air-dried for 2 min. TUNEL was conducted according to the manufacturer's instructions for staining (Roche Diagnostics, Indianapolis, IN) and counterstained with Eosin Y. TUNEL-positive cells were detected with 3,3'-diaminobenzidine (DAB) (Sigma) and counted with a computer-aided analysis system (Quantinet 570C, Leica, Bannockburn, IL). Cell death was also assayed by fluorescent staining with fluorescein diacetate (FA) and propidium iodide (PI) as previously described [ 27 ]. The FA (Sigma) working solution was prepared by adding 10 μl of stock FA (50 mg FA in 10 ml of acetone) to 2.5 ml PBS. 
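The dilution arithmetic behind the FA working solution, and the FA/PI cocktail described next, can be sanity-checked with a short script. This is only an illustrative sketch: volumes are treated as simply additive, and the ~20 μg/ml figures it reports are derived from the quantities quoted in the protocol, not from the original paper.

```python
# Sanity-check of the FA/PI staining dilutions quoted in the protocol.
# Assumption (for illustration only): mixed volumes are simply additive.

def dilute(stock_conc_ug_per_ml, stock_vol_ml, diluent_vol_ml):
    """Concentration after mixing a stock into a diluent (additive volumes)."""
    total_vol = stock_vol_ml + diluent_vol_ml
    return stock_conc_ug_per_ml * stock_vol_ml / total_vol

# FA stock: 50 mg in 10 ml acetone -> 5000 ug/ml
fa_stock = 50_000 / 10.0

# FA working solution: 10 ul stock into 2.5 ml PBS -> ~20 ug/ml
fa_working = dilute(fa_stock, 0.010, 2.5)

# PI solution: 1 mg in 50 ml PBS -> 20 ug/ml
pi_solution = 1_000 / 50.0

# Cocktail: 1 ul FA working solution into 300 ul PI solution
fa_final = dilute(fa_working, 0.001, 0.300)   # FA is highly diluted here
pi_final = pi_solution * 0.300 / 0.301        # PI barely changes

print(f"FA working solution: {fa_working:.1f} ug/ml")
print(f"Cocktail: FA {fa_final:.3f} ug/ml, PI {pi_final:.1f} ug/ml")
```

Running this shows that both the FA working solution and the PI component of the cocktail sit near 20 μg/ml, while the FA in the final cocktail is diluted roughly 300-fold further.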
The FA/PI (Sigma) cocktail was prepared by adding 1 μl of FA working solution to 300 μl of PI (1 mg PI in 50 ml PBS). After rinsing once in warm PBS, 20 μl of the FA/PI cocktail was added to cells on coverslips and incubated for 15 min in the dark. Coverslips were placed cell-side up on SuperFrost slides (Fisher Scientific, Pittsburgh, PA) under anti-fading media (Vector Laboratories, Inc., Burlingame, CA) and immediately imaged with a laser scanning confocal microscope (LSCM, MRC1024, Bio-Rad, Hercules, CA). HUVEC treatments for signalling events Signalling events mediated by FGF2 and/or gp120 were determined via Western blot (WB) analyses. Cells were treated with either 20 ng/ml FGF2 (Calbiochem, La Jolla, CA) or gp120 for 30 min, 1 h, 6 h, 12 h and 24 h and analyzed by WB. Additionally, HUVEC were treated with inhibitors alone. To test the effects of FGF2 stimulation or gp120 exposure on downstream signalling, cells were pre-treated, prior to FGF2 treatment, with inhibitors targeting different steps in the MAPK, PKC or AKT/glycogen synthase kinase 3-beta (GSK3β) pathways. For these experiments, cells were incubated for 30 min with: (i) the PI3K inhibitor LY294002 (10 μM) (Calbiochem), (ii) the PKC inhibitors Gö6983 (10 μM) (Calbiochem) or Bisindolylmaleimide I (2 μM) (Calbiochem), or (iii) the MEK inhibitors U0126 (10 μM) or PD98059 (20 μM) (Calbiochem). To test the specificity of FGF2-mediated protection against gp120, HUVEC were incubated with a 20-fold excess of anti-FGF2 neutralizing antibody prior to the addition of 20 ng/ml FGF2. Cells were incubated in the presence of anti-FGF2 antibody and FGF2 for 24 h, then exposed to 25 ng/ml gp120 for 24 h and assayed for viability, ERK phosphorylation and kinase activity. 
To determine the signalling events caused by gp120, with or without FGF2 and inhibitors, the following conditions were utilized: 1) cells were treated with inhibitors for 30 min as previously described and then exposed to 25 ng/ml gp120 for 30 min; 2) inhibitors for 30 min, then gp120 for 30 min; 3) inhibitors for 30 min, FGF2 for 10 min and gp120 for 30 min. After treatments, cells were immediately harvested for Western analyses. Western blot analysis in HUVEC Briefly, after treatments, cell monolayers were harvested and solubilized in HEPES homogenization buffer (1 mM HEPES, 5 mM benzamidine, 2 mM 2-mercaptoethanol, 3 mM EDTA, 0.5 mM magnesium sulfate, 0.05% sodium azide, Protease Inhibitor Cocktail III and Phosphatase Inhibitor Cocktail I) (Calbiochem). Protein concentration was determined by the method of Lowry, and 10–15 μg of protein were separated by electrophoresis on 10% Bis-Tris NuPAGE gels (Invitrogen, Carlsbad, CA). Samples were then electroblotted onto Immobilon-P membranes (Millipore, Bedford, MA). Proteins were immunolabeled with primary antibodies against phospho-ERK1/2 (1:2500) (Thr202/Tyr204, mouse monoclonal phospho-ERK antibody) (Cell Signalling Technology), total ERK1/2 (1:2500) (anti-mouse monoclonal ERK1/2 antibody) (Pharmingen), phospho-GSK3β (1:2500) (Ser9, anti-rabbit polyclonal phospho-GSK3β antibody) (Cell Signalling Technology), total GSK3β (1:2500) (anti-mouse monoclonal GSK3β antibody) (Transduction Laboratories, Lexington, KY), phospho-AKT (Thr308, anti-rabbit polyclonal phospho-AKT antibody) (Calbiochem), total AKT (1:2500) (anti-rabbit polyclonal AKT antibody) (Calbiochem), anti-mouse monoclonal PI3K antibody (1:1000) (Transduction Labs), anti-rabbit phospho-PKC (pan), which detects phosphorylation of PKC isoforms α, β, δ, ε, and η (Cell Signalling Technology, Beverly, MA), and anti-rabbit actin antibody (1:1000) (Chemicon, San Diego, CA).
Blots were incubated with an HRP-tagged secondary antibody and detected with the ECL reagent (DuPont NEN, Boston, MA), followed by autoradiography. As a control, HUVEC were pre-treated with one of the following pharmacological inhibitors: MTA, LY294002, Gö6983, Bisindolylmaleimide I, U0126 or PD98059 for 30 min, and then FGF2 and gp120 were added simultaneously. Cell viability was assayed 24 h later. Adenoviral constructs and transfection Recombinant adenoviral constructs encoding constitutively active (ca) forms of ERK and AKT were prepared as previously described [ 53 , 54 ] (kindly provided by Dr. Kazuhiko Namikawa, Asahikawa Medical College, Asahikawa, Japan, and Dr. Kenneth Walsh, Tufts University, Boston, MA, respectively). An adenovirus encoding the green fluorescent protein (GFP-Ad), as previously described [ 55 ], was used as a control to account for any effects that may be due to adenoviral infection itself. Briefly, for ca-ERK, cDNA fragments containing the entire coding region of human MAP/ERK kinase 1 (MEK1) were isolated from human embryonic kidney cells (HEK293) by PCR. ca-ERK, which lacks the nuclear export signal (amino acids 32–51) and carries glutamic acid substitutions at the two phosphorylation sites Ser218 and Ser222, was prepared by site-directed mutagenesis and fused to the hemagglutinin tag sequence, as previously described [ 56 ]. ca-AKT has the c-src myristoylation sequence fused in frame to the N-terminus of the FLAG-AKT coding sequence [ 54 ]. High-titer recombinant viral stocks (10^11 plaque-forming units) were generated in HEK293 cells and stored at -80°C. Endothelial cells were plated at approximately 50% confluency in complete media (20% serum) and grown for 24 h at 37°C, 5% CO2. HUVEC were changed to minimal media (1% serum) for 6 h, and then half of the media was removed from each sample, pooled and stored at 37°C, 5% CO2.
HUVEC were infected at a multiplicity of infection of 50 in pre-conditioned minimal media for 4 h, achieving 40–50% transduction efficiency (data not shown). Minimal medium containing adenovirus was replaced with the pooled pre-conditioned minimal media, and cell cultures were further incubated for 48 h at 37°C and 5% CO2. After 48 h, cells were treated with FGF2 (10 ng/ml) for 10 min, harvested in lysis buffer, stored at -20°C, and later used for ERK and AKT kinase assays. For immunocytochemistry, cells on coverslips were blocked overnight at 4°C in 10% horse serum and 5% BSA. Coverslips for ca-ERK were then labelled overnight at 4°C with primary anti-hemagglutinin antibody (1:150) (Roche Diagnostics), and those for ca-AKT with primary anti-FLAG antibody (1:50) (Sigma), followed by incubation with secondary biotinylated IgG (1:200) (Vector Laboratories) for 1 h at room temperature. Hemagglutinin and FLAG proteins were detected with DAB (Sigma) and visualized by light microscopy to assess HA production. Experiments were conducted at least three times to ensure reproducibility. Immunocomplex kinase assays ERK and AKT assays were performed essentially as previously described, with some modifications [ 57 ]. Briefly, cells were rinsed twice with cold phosphate-buffered saline and incubated for 20 min on ice in lysis buffer (1% Triton X-100, 10% glycerol, 50 mM HEPES, pH 7.4, 140 mM NaCl, 1 mM EDTA, 1 mM Na3VO4, 1 mM phenylmethylsulfonyl fluoride, 5 μg/ml aprotinin, 5 μg/ml leupeptin, 1 mM dithiothreitol). The cell lysates were then centrifuged for 10 min at 14,000 rpm, and protein concentration was determined using the BCA reagent (Pierce, Rockford, IL). Two hundred microliters of the supernatant were pre-absorbed with protein G-Sepharose (Amersham Pharmacia Biotech, Uppsala, Sweden) for 1 h at 4°C.
The pre-cleared lysates were incubated with 1 μg/sample of anti-ERK monoclonal antibody (1:50) (Pharmingen, San Diego, CA) or polyclonal anti-human AKT antibody (anti-PKB 88–100) (Calbiochem) overnight at 4°C, followed by incubation with protein G-Sepharose for 2 h at 4°C. After washing twice with the lysis buffer and twice with a kinase buffer (20 mM HEPES, pH 7.2, 0.1 mM Na3VO4, 10 mM glycerophosphate, 10 mM MgCl2, 1 mM dithiothreitol, 0.1 mM EGTA), the immune complexes were incubated in 30 μl of the kinase buffer containing 20 μg of myelin basic protein (Sigma) for ERK or 1 μg of GSK3β fusion protein (Cell Signalling, Beverly, MA) for AKT, plus 10 μCi of [γ-32P] ATP (6000 Ci/mmol; PerkinElmer, Boston, MA), for 30 min at 30°C. Reactions were terminated by the addition of 5 μl of 500 mM EDTA and 5 mM ATP. After adding 4× Laemmli SDS sample buffer and boiling for 5 min, samples were separated by 15% SDS-PAGE, followed by autoradiography. Quantification was performed with a PhosphorImager using ImageQuant software (Molecular Dynamics, Sunnyvale, CA). Statistical analysis All experiments were performed in a blind-coded fashion. After results were obtained, the code was broken and analysis was performed using one-way analysis of variance (ANOVA) with post hoc Dunnett's or Tukey-Kramer tests. Authors' contributions DL designed and conducted the FGF2, gp120 and inhibitor experiments, Western analyses and viability assays, and composed the manuscript. RH assisted in the design of, and conducted, FGF2, gp120 and inhibitor experiments, Western analyses and viability assays. MH designed and conducted activity assays and gene transfer experiments. MD obtained adenoviral constructs, designed experimental methods for infection and analysed output measures. EM performed light microscopy and immunocytochemical experiments and composed with DL the first draft of the manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC549045.xml |
506785 | Web GIS in practice: an interactive geographical interface to English Primary Care Trust performance ratings for 2003 and 2004 | Background On 21 July 2004, the Healthcare Commission released its annual star ratings of the performance of NHS Primary Care Trusts (PCTs) in England for the year ending March 2004. The Healthcare Commission started work on 1 April 2004, taking over all the functions of the former Commission for Health Improvement , which had released the corresponding PCT ratings for 2002/2003 in July 2003. Results We produced two Web-based interactive maps of PCT star ratings, one for 2003 and the other for 2004 , with handy functions like map search (by PCT name or part of it). The maps feature a colour-blind friendly quadri-colour scheme to represent PCT star ratings. Clicking a PCT on any of the maps will display the detailed performance report of that PCT for the corresponding year. Conclusion Using our Web-based interactive maps, users can visually appreciate at a glance the distribution of PCT performance across England. They can visually compare the performance of different PCTs in the same year and also between 2003 and 2004 (by switching between the synchronised 'PCT Ratings 2003' and 'PCT Ratings 2004' themes). The performance of many PCTs has improved in 2004, whereas some PCTs achieved lower ratings in 2004 compared to 2003. Web-based interactive geographical interfaces offer an intuitive way of indexing, accessing, mining, and understanding large healthcare information sets describing geographically differentiated phenomena. By acting as an enhanced alternative or supplement to purely textual online interfaces, interactive Web maps can further empower organisations and decision makers. | Background On Wednesday 21 July 2004, the Healthcare Commission released its annual star ratings of the performance of NHS Primary Care Trusts (PCTs) in England for the year ending March 2004. 
The Healthcare Commission started work on 1 April 2004, taking over all the functions of the former Commission for Health Improvement, which had released the corresponding PCT ratings for 2002/2003 in July 2003 [ 1 ]. A star rating scheme is adopted. PCTs with the highest levels of performance in the measured areas are awarded a rating of three stars. PCTs with mostly high levels of performance, but which are not consistent across all measured areas, are awarded a rating of two stars. PCTs where there is some cause for concern about particular areas of measured performance are awarded a rating of one star. PCTs that have shown the poorest levels of measured performance or little progress in implementing clinical governance receive a rating of zero stars [ 2 ]. The performance ratings Web pages of the Healthcare Commission and the former Commission for Health Improvement offer a very limited "geographic search" restricted to browsing results by Strategic Health Authority (SHA). This "geographic search" does not allow any visual appreciation of PCT performance levels, or any visual comparisons to be made between PCTs or between 2003 and 2004 result sets. Results We produced two Web-based interactive map sets of PCT star ratings, one for 2003 and the other for 2004. The maps use a yellow-green-blue quadri-colour scheme to represent PCT star ratings. Users can switch between the two map sets or themes, 'PCT Ratings 2003' and 'PCT Ratings 2004', within the same pane (the two themes are synchronised, so that when users switch themes the corresponding tile from the other theme is always displayed – Figure 1 ). Map zooming (100% to 800%), panning, MapTips (displaying PCT names), and legends are available. Dynamic overview maps are offered as navigational help (at zoom levels 200%-800%). Map search is also possible (by PCT name or part of it – Figure 2 ).
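Searching "by PCT name or part of it" amounts to a case-insensitive substring filter over Trust names. A minimal sketch of that idea (the PCT list below is a tiny illustrative sample, not the full dataset, and the function name is ours):

```python
pcts = [
    "Northumberland Care Trust",
    "Newcastle Primary Care Trust",
    "North Tyneside Primary Care Trust",
]

def search(query: str) -> list[str]:
    """Return the Trusts whose name contains `query`, ignoring case."""
    q = query.lower()
    return [name for name in pcts if q in name.lower()]
```

Selecting one of the returned names would then zoom the map to the corresponding tile, as described for the 'Result' list below.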
Clicking a PCT on any of the maps will display the detailed performance report of that PCT for the corresponding year. Printer-friendly versions of the maps can be generated for direct printing from the Web browser. Figure 1 Screenshots from our Web-based interactive maps of PCT star ratings for 2003 and 2004. When users switch between 'PCT Ratings 2003' and 'PCT Ratings 2004', the corresponding tile from the other theme is always displayed within the same pane, allowing instant 2003–2004 visual comparisons to be made. In this Figure, Northumberland Care Trust can be seen achieving lower ratings in 2004 (1 star–light green) compared to 2003 (3 stars–dark blue). The detailed 2004 performance report of Northumberland Care Trust (from the Web site of the Healthcare Commission) has been displayed by clicking the Trust shape on the 'PCT Ratings 2004' map. Note the overview maps displaying the position of the current map tile on a miniature complete map of England. Figure 2 Screenshot of the map search box, which allows users to locate a Trust on the maps by typing the Trust name or part of it. Selecting a PCT from the 'Result' list and clicking 'show' will zoom into and display the corresponding map tile for that PCT. The maps were successfully tested in both Microsoft Internet Explorer and Mozilla Firefox Web browsers. Discussion Many people are more visually oriented and find that spending long hours browsing the flat textual indices of the Internet leaves a lot to be desired, especially when it comes to navigating large online datasets and understanding the relationships, patterns and trends buried in them. Information resources and large textual datasets (like the detailed PCT performance reports in our case–more than 600 PCT reports for 2003 and 2004 combined) can be organised and navigated based on their geographical attributes [ 3 ].
These geographical aspects of textual information are sometimes very useful as an index to information, providing an intuitive way of accessing, mining, and understanding it. Some information types like PCT performance ratings lend themselves very well and naturally to geographical indexing and visualisation. In fact, PCT performance ratings describe a geographically differentiated phenomenon, which is the variation in the performance and quality of primary healthcare services between different areas across England. Using our Web-based interactive maps, users can quickly and intuitively locate any PCT and retrieve detailed performance information about it. They can also visually appreciate at a glance the distribution of PCT performance across England; for example, one can instantly note that there were no three star (dark blue) PCTs in the London region in 2002/2003 and that this has remained unchanged in 2003/2004. Users can visually compare the performance of different PCTs in the same year and also between 2003 and 2004 (by switching between 'PCT Ratings 2003' and 'PCT Ratings 2004' themes). The performance of many PCTs has improved in 2004, whereas some PCTs, e.g., Northumberland Care Trust – Figure 1 , achieved lower ratings in 2004 compared to 2003. Conclusions Web-based interactive geographical interfaces offer an intuitive way of indexing, accessing, mining, and understanding large healthcare information sets describing geographically differentiated phenomena, and can act as an enhanced alternative or supplement to purely textual online interfaces. Geographical interfaces enable instant visual comparisons to be made between different geographical areas and over time (when information sets and maps for successive periods of time are available), thus empowering organisations and decision makers. 
Methods Star ratings of English PCTs for the years 2002/2003 and 2003/2004 were obtained from the Web sites of the former Commission for Health Improvement and the Healthcare Commission respectively. The Internet addresses (URLs) of the corresponding detailed reports of PCT performance were also harvested from the same sources. The maps were created in ESRI ArcView GIS Version 3.1. We used the 2001 Census PCT (post April 2002 change) boundary dataset, which is the copyright of the Crown/Ordnance Survey, and is freely available to the UK academic community from the EDINA UKBORDERS service with the support of the ESRC and JISC. The names/boundaries and labels (codes) of a few PCTs changed between 2003 and 2004, but this was properly accounted for in our exercise. We inserted four new fields in the original PCT boundary dataset table to store the 2003 and 2004 star ratings and corresponding detailed report URLs for all English PCTs. The PCTs in the output maps are coloured according to the values in their star rating fields (0, 1, 2, or 3, corresponding to the number of stars awarded), with light colours for low ratings to dark colours for high ratings. We used ColorBrewer [ 4 ] to select a suitable colour scheme for our maps. Our chosen scheme is colour blind friendly, black and white photocopy friendly (for printed output), LCD projector friendly, laptop (LCD) friendly, CRT screen friendly, and colour printing friendly–all at the same time (Figure 3 ). Figure 3 Screenshot of the ColorBrewer online tool showing the colour scheme we have chosen for our maps. This yellow-green-blue quadri-colour scheme is colour blind friendly, black and white photocopy friendly (for printed output), LCD projector friendly, laptop (LCD) friendly, CRT screen friendly, and colour printing friendly–all at the same time.
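The rating-to-colour assignment described here is a simple lookup from the four rating values into a sequential scheme. A sketch, using the 4-class yellow-green-blue (YlGnBu) hex values as published by ColorBrewer; note the paper specifies its colours as HSV triplets, which are not reproduced in this text, so these particular hex codes are an assumption:

```python
# Assumed: ColorBrewer's 4-class YlGnBu scheme (light -> dark),
# indexed by the number of stars awarded (0-3).
YLGNBU_4 = ["#ffffcc", "#a1dab4", "#41b6c4", "#225ea8"]

def pct_colour(stars: int) -> str:
    """Fill colour for a PCT shape given its star rating (0-3)."""
    if stars not in (0, 1, 2, 3):
        raise ValueError("star ratings range from 0 to 3")
    return YLGNBU_4[stars]
```

A sequential (light-to-dark) scheme like this is what makes the "light colours for low ratings to dark colours for high ratings" convention work, and YlGnBu is one of the ColorBrewer schemes flagged as colour-blind safe.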
The corresponding Hue-Saturation-Value numerical triplets for the four colours in our scheme are also shown, ready for use in ArcView 3.x. The online interactive maps were then produced using the Demo version of the alta4 HTML ImageMapper 3.5 extension for ESRI ArcView GIS 3.x (Figure 4 ), and its companion tool alta4 ThemeBrowser 1.0. ThemeBrowser is used to combine separate HTML ImageMapper projects (in our case the 'PCT Ratings 2003' and 'PCT Ratings 2004' map sets) into a single ThemeBrowser Web page. Figure 4 Screenshot of the evaluation (Demo) version of the alta4 HTML ImageMapper 3.5 extension within ESRI ArcView GIS 3.1, showing the main settings we have used to generate our Web-based 'PCT Ratings 2004' interactive map set. The dialogue box on the right shows the 'MapTip Field' and 'Click Action/URL Field' settings associated with features (PCTs) on the output map. It is noteworthy that HTML ImageMapper does not require any server side software installation, and as such is much simpler to use than some other Internet GIS solutions like the client/server version of ALOV Map/TimeMap. The standalone versions of ALOV Map/TimeMap and JShape Java applets, which don't require any server side setup, are limited by the fact that they need to download the whole map shapefile from the Web server before they can start on the client side, and so are not suitable for large datasets (our PCT boundary dataset in ESRI shapefile format is about 50 MB in size). Other options for generating interactive Web maps from a desktop GIS are discussed in [ 5 ]. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC506785.xml |
549079 | Reference gene selection for quantitative real-time PCR analysis in virus infected cells: SARS corona virus, Yellow fever virus, Human Herpesvirus-6, Camelpox virus and Cytomegalovirus infections | Ten potential reference genes were compared for their use in experiments investigating cellular mRNA expression of virus infected cells. Human cell lines were infected with Cytomegalovirus, Human Herpesvirus-6, Camelpox virus, SARS coronavirus or Yellow fever virus. The expression levels of these genes and the viral replication were determined by real-time PCR. Genes were ranked by the BestKeeper tool, the GeNorm tool and by criteria we reported previously. The rankings of the genes tested were tool dependent. Overall, however, β-actin is unsuitable as a reference gene, whereas TATA-Box binding protein and peptidyl-prolyl-isomerase A are stable reference genes for expression studies in virus infected cells. | Background Quantitative real-time PCR (QPCR) has become the favoured tool in mRNA expression analysis and also in virus diagnostics [ 1 ]. Real-time PCR has outperformed classical and semi-quantitative PCR methods in terms of accuracy, reproducibility, safety and convenience for the precise monitoring of viral load in clinical material, as well as for the investigation of the expression of cellular genes in response to virus infection. However, the most prominent problem in quantitative mRNA expression analysis is the selection of an appropriate control gene. For years, the glyceraldehyde 3-phosphate dehydrogenase (GAP) gene and the β-actin (Act) gene were used as control genes in classical molecular methods for RNA detection. Recently, evidence has accumulated that these two genes in particular, GAP and Act, are unsuitable controls in quantitative mRNA expression analysis due to setting-dependent variations in expression [ 2 - 4 ].
Recently, we confirmed these results by investigating the expressional stability of 13 potential reference genes in 16 different tissues and presented more suitable genes, such as the RNA polymerase II gene [ 5 ]. However, an evaluation of reference genes in virus infected cells has not been performed so far. Therefore, the 10 most promising reference genes, GAP, Act, peptidyl prolyl isomerase A (PPI), glucose 6-phosphate dehydrogenase (G6P), TATA-Box binding protein (TBP), β2-microglobulin (β2M), α-tubulin (Tub), ribosomal protein L13 (L13), phospholipase A2 (PLA) and RNA polymerase II (RPII), were evaluated in cell lines infected with members of different virus families: coronavirus (SARS coronavirus), flavivirus (yellow fever virus (YF)), herpesvirus (Human herpesvirus-6 (HHV-6) and cytomegalovirus (CMV)) and orthopoxvirus (camelpox virus (CAMP)), covering both DNA and RNA viruses. Quantification of viral RNA was performed to confirm and monitor infection. Thereafter, the candidate reference genes were evaluated with the BestKeeper tool [ 6 ], the GeNorm tool [ 7 ] and the algorithm we described previously [ 5 ]. Results An efficient infection was evidenced by a significant increase of viral RNA or DNA for all 5 viruses over time (table 1 ). Despite progressing viral replication, the expression of some of the reference genes remained constant, while other genes varied in expression according to the accumulation of infected cells.

Table 1. Cell culture conditions and results of virus kinetics

                               CMV              HHV-6              CAMP           SARS         YF
cell line                      MRC-5            CCRF-HSB-2         HepG2          Huh-7D12     HepG2
multiplicity of infection      2.0              0.5                0.5            1.0          0.5
time to maximal infection /h   72               120                24             72           96
max. infected cells %          100              >70                >90            >70          >80
measuring points /h            0,6,12,24,48,72  0,24,48,72,96,120  0,1,3,6,12,24  0,2,4,22,42  0,24,48,72,96

The experimentally obtained data for each virus and each gene were analysed using three different methods.
The reference gene evaluation by the BestKeeper tool is shown in table 2 . A low standard deviation (SD) of the CT values is expected for useful reference genes, and a high SD for genes that are susceptible to virus replication. Consistent with previous estimations, the SD of the CT value was highest for Act in 4 of 5 viruses, indicating that Act is not a reliable reference gene in this setting. In contrast, TBP and PPI displayed the highest expressional stability in 4 of 5 viruses. To reach a general conclusion, the total of all SD values from all virus experiments (sum_V) was calculated for each reference gene. As shown in table 2 , TBP and PPI were the least regulated genes in this analysis (sum_V = 2.29 for both), followed by GAP (sum_V = 3.49) and β2M (sum_V = 3.96). All other genes showed moderate total SD values (sum_V > 4.58), except Act (sum_V = 11.28), which was confirmed as the most inappropriate reference gene. It is remarkable that the obtained BestKeeper index values are low despite the inclusion of Act in the calculation. Calculating BestKeeper vs. each reference gene using the Pearson correlation gave very inconsistent results (table 3 ). Act showed the highest SD values in all virus infections, but a significantly high correlation. In contrast, TBP displayed a low correlation that was statistically not significant in most cases. When summing up the SD values of all reference genes for each virus infection (sum_RGC), it appears that CAMP infection caused the highest variations in reference gene expression.
Table 2. Results from BestKeeper analysis, SD [±CT]

        RPII  Act    β2M   L13   PLA   TBP   GAP   PPI   G6P   Tub   BK    sum_RGC
CMV     0.59  2.70   0.51  0.36  0.72  0.41  0.66  0.43  0.71  0.69  0.56  7.78
HHV-6   2.77  1.09   0.50  0.87  0.88  0.35  0.59  0.26  0.92  0.78  0.63  9.02
CAMP    1.84  2.70   1.46  2.34  1.72  0.49  0.61  0.70  1.47  1.36  1.10  14.70
SARS    0.39  1.72   0.41  0.53  0.58  0.32  0.56  0.34  0.81  0.55  0.40  6.21
YF      1.36  3.06   1.07  0.67  1.64  0.71  1.08  0.56  0.80  1.19  0.98  12.16
sum_V   6.95  11.28  3.96  4.77  5.55  2.29  3.49  2.29  4.71  4.58

Table 3. Results from BestKeeper analysis, BestKeeper vs. reference gene candidate; coefficient of correlation r (p-value)

        RPII          Act           β2M           L13           PLA           TBP           GAP           PPI           G6P           Tub
CMV     0.75 (0.005)  0.79 (0.002)  0.76 (0.005)  0.13 (0.698)  0.89 (0.001)  0.10 (0.763)  0.92 (0.001)  0.91 (0.001)  0.75 (0.005)  0.95 (0.001)
HHV-6   0.79 (0.002)  0.73 (0.007)  0.54 (0.069)  0.30 (0.350)  0.93 (0.001)  0.79 (0.002)  0.94 (0.001)  0.75 (0.005)  0.82 (0.001)  0.97 (0.001)
CAMP    0.91 (0.002)  0.18 (0.662)  0.98 (0.001)  0.95 (0.001)  0.99 (0.001)  0.78 (0.022)  0.63 (0.092)  0.99 (0.001)  0.45 (0.268)  0.59 (0.127)
SARS    0.48 (0.162)  0.77 (0.010)  0.41 (0.236)  0.27 (0.452)  0.85 (0.002)  0.88 (0.001)  0.46 (0.177)  0.36 (0.307)  0.73 (0.017)  0.84 (0.002)
YF      0.90 (0.001)  0.91 (0.001)  0.96 (0.001)  0.25 (0.492)  0.98 (0.001)  0.94 (0.001)  0.99 (0.001)  0.92 (0.001)  0.93 (0.001)  0.92 (0.001)

Abbreviations: SD [±CT]: standard deviation of the CT; BK: BestKeeper; sum_V: sum of a gene's SD values over all virus infections; sum_RGC: sum of all reference gene SD values for a given virus infection.

Analysing the expression data with the GeNorm tool gave slightly different results (table 4 ). First, the value sum_V, here representing a reference gene's summed GeNorm stability values over all viruses, was lowest for PPI (sum_V = 6.08), confirming the results obtained with the BestKeeper tool. However, β2M (sum_V = 6.11), GAP (sum_V = 6.19) and TBP (sum_V = 6.29) turned out to be comparably reliable as reference genes. Second, the GeNorm tool also showed that Act is by far the worst reference gene (sum_V = 14.20).
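The overall ranking step is simply a matter of summing each gene's per-virus SDs and sorting. A minimal sketch using the sum_V row of Table 2 (the dictionary below just transcribes that row; "b2M" stands in for β2M):

```python
# sum_V from Table 2: per-gene sum of CT standard deviations over the five virus infections
sum_v = {
    "RPII": 6.95, "Act": 11.28, "b2M": 3.96, "L13": 4.77, "PLA": 5.55,
    "TBP": 2.29, "GAP": 3.49, "PPI": 2.29, "G6P": 4.71, "Tub": 4.58,
}

# Rank genes from most stable (lowest summed SD) to least stable
ranking = sorted(sum_v, key=sum_v.get)
```

TBP and PPI tie at the top (sum_V = 2.29 each) and Act ranks last, matching the conclusions in the text.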
Table 4. Results from GeNorm analysis (M ≤ 0.5)

        RPII  Act    β2M   L13   PLA   TBP   GAP   PPI   G6P    Tub   sum_RGC
CMV     1.41  3.41   1.42  1.63  1.45  1.69  1.38  1.37  4.79   1.54  20.09
HHV-6   2.82  1.38   1.15  1.55  1.19  1.03  0.95  1.08  1.15   0.96  13.27
CAMP    1.70  3.84   1.40  1.94  1.49  1.57  1.66  1.40  2.04   1.92  18.95
SARS    0.83  1.88   0.82  1.06  0.87  0.70  0.89  0.84  1.04   0.80  9.73
YF      1.65  3.69   1.32  1.87  2.02  1.30  1.31  1.39  1.31   1.48  17.34
sum_V   8.41  14.20  6.11  8.05  7.03  6.29  6.19  6.08  10.33  6.70

Abbreviations: sum_V: sum of a gene's GeNorm values over all virus infections; sum_RGC: sum of all reference gene GeNorm values for a given virus infection.

Applying the calculation mode presented previously [ 5 ], which is based on the calculation of ΔΔCT values (table 5 ), Act was most susceptible to virus infection for 3 of 5 viruses and displayed the highest ΔΔCT value over all viruses (sum_V = 45.23). The two genes with the lowest ΔΔCT values were TBP (sum_V = 9.82) and PPI (sum_V = 10.04), corresponding to the results of the BestKeeper and GeNorm tools.

Table 5. Results from ΔΔCT analysis

        RPII   Act    β2M    L13    PLA    TBP   GAP    PPI    G6P    Tub    sum_RGC
CMV     2.10   11.55  3.03   2.18   3.95   2.36  2.90   2.54   12.51  2.39   45.49
HHV-6   5.98   3.54   3.35   2.89   4.99   0.88  2.27   1.25   3.35   2.30   30.78
CAMP    3.59   14.19  3.94   3.17   2.71   1.23  3.19   1.78   2.22   3.33   39.33
SARS    1.19   1.71   2.14   1.93   2.52   1.11  2.75   1.34   4.14   1.78   20.58
YF      9.01   14.25  5.78   2.90   9.62   4.24  6.35   3.14   5.42   7.48   68.17
sum_V   21.87  45.23  18.22  13.07  23.78  9.82  17.45  10.04  27.62  17.27

Abbreviations: sum_V: sum of a gene's ΔΔCT values over all virus infections; sum_RGC: sum of all reference gene ΔΔCT values for a given virus infection.

Discussion To date, it is generally accepted that the selection of the ideal reference gene in gene expression analysis has to be done for each individual experimental setting by evaluating several genes and using the best two or three of them as reference. Obviously, there is no "one good gene for all experiments" recommendation. However, it is helpful to identify putative candidates that can be shortlisted when setting up a new experimental design.
Therefore, we determined the expression of previously tested reference genes in a setting of virus infected human cell lines. The candidate reference genes were evaluated using three independent methods, BestKeeper, GeNorm and the ΔΔCT method, and their results were compared. All three tools ranked actin at the last position, indicating that it is an unsuitable reference gene in virus infected cells. The actin gene shows significant variations with increasing degree of infection. The best genes identified by all three calculation tools were TBP and PPI. TBP seems to be a relatively stably expressed gene during the course of replication of different viruses in different cells. However, as previously shown [ 5 ], TBP is not expressed in all tissues, and therefore its use may be limited. Interestingly, classical reference genes like β2M and GAP were also acceptable with regard to stable expression in virus infected cells. All other genes showed moderate expression stability. The analysis of our data set with the BestKeeper tool revealed very good BestKeeper indices, even though actin was included in our gene panel. These findings demonstrate the usefulness of analysing a wide variety of reference gene candidates. The inconsistent results of the BestKeeper calculation of the coefficient of correlation and the corresponding p-values may be a consequence of the Pearson correlation. As described by Pfaffl et al., its use is limited to groups without heterogeneous variances, but the tested reference genes have very different expression levels, resulting in significant variances. Pfaffl et al. also described that new versions of BestKeeper should circumvent these problems by use of the Spearman and Kendall tau correlations. However, one problem still remains to be solved: neither tool, the BestKeeper nor the GeNorm, can compare paired probes. This is the great advantage of the ΔΔCT method, or any other method that directly compares paired samples.
From this point of view, a method like the ΔΔCT should be applied first, before considering additional tools for further elucidation of the acquired data. Conclusions In summary, TBP and PPI turned out to be the best reference genes in virus infected cells. These genes are a good starting point for reference gene selection in gene expression studies in virus infection experiments. Materials and Methods Virus culture and virus detection by real-time PCR Camelpox strain CP-19, CMV strain AD169, HHV-6 strain U1102, SARS coronavirus strain 6109 and YFV strain 17D were propagated according to standard procedures [ 8 - 10 ]. The respective MOI and times of cell culture are shown in table 1 and were chosen to allow maximal infection as determined by immunofluorescence and real-time PCR [ 8 - 11 ]. For kinetic studies, cells were harvested at several time points (table 1 ) and RNA was extracted. The RNA transcription level of putative reference genes was determined by quantitative real-time PCR as described below. Extraction of RNA Total RNA from 1 × 10^6 cells was prepared using the QIAamp RNA Blood Mini Kit and RNase-free DNase set (Qiagen, Hilden, Germany) according to the manufacturer's recommendations for cultured cells. The RNA solution was treated with DNA-free (Ambion, Huntingdon, United Kingdom). cDNA synthesis cDNA was produced using the Superscript III RT-PCR System (Invitrogen, Karlsruhe, Germany) according to the manufacturer's recommendations for oligo(dT)20-primed cDNA synthesis. cDNA synthesis was performed using 1 μg of RNA at 50°C. Finally, cDNA was diluted 1:5 before use in QPCR. Quantitative TaqMan PCR Primers, TaqMan probes and QPCR conditions for reference gene analysis were as previously described [ 5 ]. PCR was performed in a Perkin Elmer 7700 Sequence Detection System in 96-well microtiter plates using a final volume of 25 μl. Calculations Analyses were performed with the BestKeeper [ 6 ] and GeNorm [ 7 ] tools.
The ΔΔC T value was calculated as follows: first, the ΔC T between virus- and mock-infected cells was calculated for each time point of probe assessment. In a second step, the maximal difference between the time points was calculated as the ΔΔC T . Competing interests The author(s) declare that they have no competing interests. Authors' contributions AR conceived the study, carried out the HHV-6 experiments and real-time PCR assays and drafted the manuscript. ST carried out the CMV experiments. HB carried out the YF experiments. MM carried out the SARS experiments. WS participated in the design of the study. AN carried out the CAMP experiments, participated in the design and coordination of the study and helped to draft the manuscript. All authors read and approved the final manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC549079.xml |
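The two-step ΔΔC T stability measure described above can be sketched in a few lines of Python. The Ct values, function name and data layout are illustrative assumptions, not the authors' actual script:

```python
def delta_delta_ct(ct_virus, ct_mock):
    """ΔΔCt stability score for one reference gene.

    ct_virus, ct_mock: Ct values in virus- and mock-infected cells,
    indexed by time point (paired samples).
    """
    # Step 1: ΔCt between virus- and mock-infected cells at each time point.
    d_ct = [v - m for v, m in zip(ct_virus, ct_mock)]
    # Step 2: maximal difference between time points = ΔΔCt.
    # Smaller values indicate more stable expression during infection.
    return max(d_ct) - min(d_ct)

# Hypothetical example: a gene whose ΔCt drifts from 0.1 to 1.4 over the
# course of infection yields ΔΔCt = 1.3.
score = delta_delta_ct([20.1, 20.5, 21.4], [20.0, 20.0, 20.0])
```

Because each ΔCt pairs the infected sample with its mock control from the same time point, this measure directly compares paired samples, which is the advantage over BestKeeper and GeNorm noted above.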
544575 | Brief inactivation of c-Myc is not sufficient for sustained regression of c-Myc-induced tumours of pancreatic islets and skin epidermis | Background Tumour regression observed in many conditional mouse models following oncogene inactivation provides the impetus to develop, and a platform to preclinically evaluate, novel therapeutics to inactivate specific oncogenes. Inactivating single oncogenes, such as c-Myc, can reverse even advanced tumours. Intriguingly, transient c-Myc inactivation proved sufficient for sustained osteosarcoma regression; the resulting osteocyte differentiation potentially explaining loss of c-Myc's oncogenic properties. But would this apply to other tumours? Results We show that brief inactivation of c-Myc does not sustain tumour regression in two distinct tissue types; tumour cells in pancreatic islets and skin epidermis continue to avoid apoptosis after c-Myc reactivation, by virtue of Bcl-x L over-expression or a favourable microenvironment, respectively. Moreover, tumours progress despite reacquiring a differentiated phenotype and partial loss of vasculature during c-Myc inactivation. Interestingly, reactivating c-Myc in β-cell tumours appears to result not only in further growth of the tumour, but also re-expansion of the accompanying angiogenesis and more pronounced β-cell invasion (adenocarcinoma). Conclusions Given that transient c-Myc inactivation could under some circumstances produce sustained tumour regression, the possible application of this potentially less toxic strategy in treating other tumours has been suggested. We show that brief inactivation of c-Myc fails to sustain tumour regression in two distinct models of tumourigenesis: pancreatic islets and skin epidermis. These findings challenge the potential for cancer therapies aimed at transient oncogene inactivation, at least under those circumstances where tumour cell differentiation and alteration of epigenetic context fail to reinstate apoptosis. 
Together, these results suggest that treatment schedules will need to be informed by knowledge of the molecular basis and environmental context of any given cancer. | Background Various mouse models of tumourigenesis have been established using conditional systems to either induce or knockout particular genes (oncogenes and tumour suppressors, respectively) in a tissue-specific and time-dependent manner. The ability to switch expression of a given oncogene 'on' or 'off' in vivo has provided insight into the mechanisms by which certain oncogenes can initiate tumourigenesis either alone or in combination with other genetic lesions, and importantly, whether inactivation of the initiating oncogene is sufficient to cause tumour regression (reviewed in: [ 1 , 2 ]). Given the importance of oncogene activation in human cancers, specific targeting of oncogenic pathways provides a potentially effective therapeutic strategy. For example, targeting of the HER2/Neu receptor tyrosine kinase (which is overexpressed in up to 30% of primary human breast cancers) with the neutralizing antibody Trastuzumab has been used successfully in clinical trials, in combination with other agents, to slow disease progression (refs in [ 3 ]). Similarly, patients with chronic myelogenous leukaemia (CML) have been effectively treated with the ABL kinase inhibitor, Imatinib (Gleevec), inducing clinical remission whilst in the CML phase [ 4 , 5 ]. Several studies using conditional mouse models of various cancers have unexpectedly shown that inactivation of the initiating oncogene is sufficient for reversal not only of the primary tumour but also of invasive and metastatic lesions, many of which contain multiple genetic and epigenetic alterations [ 6 - 18 ]. 
The tumour regression observed in many of these models following sustained oncogene inactivation provides a powerful platform on which to build a deeper understanding of fundamental tumour biology and with which to preclinically evaluate novel therapeutics to target specific genes. A recent study has shown that brief inactivation (10 days) of c-Myc was sufficient for the sustained regression of c-Myc induced invasive osteogenic sarcomas in transgenic mice [ 19 ]; subsequent re-activation of c-Myc led to extensive apoptosis rather than restoration of the neoplastic phenotype. Possible explanations for this outcome include changes in epigenetic context that may have occurred within the cell type, that is, presumably between the immature cell in which c-Myc was originally activated and the more differentiated cell resulting from subsequent (brief) inactivation of c-Myc. In this tumour model, although c-Myc expression is initiated in immature osteoblasts during embryogenesis, subsequent inactivation of c-Myc in osteogenic sarcoma cells induces differentiation into mature osteocytes. Therefore, re-activation of c-Myc now takes place in a different cellular context and induces apoptosis rather than neoplastic progression. However, irrespective of the actual underlying mechanisms, these intriguing findings suggest the novel possibility of employing transient inactivation of c-Myc as a therapeutic strategy in certain cancers, thus limiting potential toxic effects resulting from prolonged therapeutic inactivation [ 1 , 2 ]. In fact, there is widespread interest in determining the optimal timing of existing therapies, including trials of pulsatile or 'metronomic' chemotherapy regimens in various cancers. Self-evidently, therefore, it was essential to determine if this phenomenon was unique to this mouse model or if sustained regression of tumours originating in different tissues and under differing circumstances could also be induced by transient c-Myc inactivation. 
Previously, we have shown that sustained c-Myc inactivation in locally invasive pancreatic islet tumours (induced by c-Myc activation in β-cells on a background of Bcl-x L overexpression) induced β-cell growth arrest and re-differentiation into mature β-cells, accompanied by the collapse of tumour vasculature and tumour cell mass resulting from apoptosis, despite the constitutive expression of Bcl-x L in the tumour cells [ 9 ]. Similarly, in pre-malignant skin epidermal tumours (papillomatosis) induced following activation of c-Myc alone, skin lesions completely regressed within 4 weeks after sustained inactivation of c-Myc [ 8 ]. However, given the continual outward migration and shedding of growth-arrested and re-differentiated keratinocytes from the skin surface, it was not established whether this action alone was responsible for the removal of skin tumour cells or if apoptosis also played a part. These conditional models, in which c-Myc-induced tumours of pancreatic islets and skin epidermis can be initiated at any given time in the adult animal, were chosen for the studies presented here and offer several advantages (reviewed in [ 20 ]). First, tumourigenesis after c-Myc activation is initiated and proceeds by different routes, with inherent pro-apoptotic activity avoided by an additional genetic alteration (expression of the anti-apoptotic protein, Bclx L ) as is the case in the islet tumourigenesis model, or by the presence of survival cues in the microenvironment, as seen in c-Myc-induced skin tumours. Second, we already know that sustained c-Myc inactivation leads to regression in both cases. Here we show the consequences of inactivating c-Myc transiently, for a period of 4 to 9 days, in these distinct tumour types in vivo . 
In contrast to the osteogenic sarcoma model, re-activating c-Myc in islet tumours does not lead to accelerated β-cell apoptosis, but rather restores the oncogenic properties of c-Myc, rapidly re-initiating β-cell proliferation, loss of differentiation, loss of E-cadherin, local invasion and angiogenesis. This occurs despite the re-differentiation of previously c-Myc-activated tumour cells to a more mature phenotype and the loss of some of the newly acquired vasculature, occurring during the period of c-Myc inactivation. Moreover, as no new β-cells arise during the period of c-Myc inactivation, replication is probably restored in those same cells that have previously experienced c-Myc activation. Similarly, in epidermis, reactivating c-Myc in suprabasal keratinocytes does not result in apoptosis, which remains confined to the shedding areas of parakeratosis at the skin surface, but restores the papillomatous phenotype, inducing cell proliferation and dysplasia. These results are in line with a very recent study in a different system, published while this manuscript was under consideration. Shachaf and colleagues demonstrated that invasive c-Myc-induced hepatocellular carcinomas regress when c-Myc expression is turned off but, interestingly, some tumour cells remain 'dormant' even for prolonged periods and contribute to cancer progression if c-Myc expression is subsequently reinitiated [ 21 ]. Taken together, these findings suggest that a cautious approach is required in considering cancer therapies aimed at transient oncogene inactivation. First, a more comprehensive understanding of the genetic basis and environmental context of any individual tumour would be required in order to predict the likely success of such a treatment schedule. 
Second, at least under those circumstances where tumour cell differentiation and alteration of epigenetic context would not be predicted to reinstate apoptosis and no alternative mechanism exists for tumour cell removal, sustained inactivation of the offending oncogene would seem the desired therapeutic goal. Results Brief inactivation of c-Myc in islet tumours does not sustain tumour regression Islet tumours were induced in nine mice by activation of c-MycER TAM (c-Myc) in β-cells of adult pIns-c-MycER TAM mice cross-bred with RIP7-Bcl-x L mice [ 22 ] by daily intraperitoneal (IP) administration of 4-OHT for 14 days as previously described [ 9 ]. Three mice from different litters were sacrificed for histological examination of the pancreas. c-Myc was then inactivated by 4-OHT withdrawal (see Methods) for 9 days in the remaining six littermates (two from each litter), after which time one mouse from each litter was sacrificed for collection of pancreata and analysis. In the remaining three mice, reactivation of c-Myc by daily IP injection of 4-OHT was carried out for 5 days before sacrifice and analysis. For each analysis, sections (5–10μm) were cut throughout the length of the pancreas and every tenth section was selected for histological and immunohistochemical examination. In the absence of 4-OHT, pIns-c-MycER TAM / RIP-Bcl-x L double transgenic mice exhibited normal islet morphology (Figure 1 ) with no measurable β-cell proliferation or apoptosis (Figure 2 ). As expected, activation of c-Myc for 14 days triggered the progression of angiogenic islet tumours (Figure 1 ) accompanied by β-cell proliferation, loss of differentiation (as demonstrated by down-regulation of insulin) and down-regulation of the intercellular adhesion molecule E-cadherin (Figure 2 ). 
Following inactivation of c-Myc for 9 days, histological analyses of pancreata showed signs of vasculature collapse particularly in larger islets (Figure 1 -extravasated erythrocytes), accompanied by cessation of β-cell proliferation, re-differentiation (up-regulation of insulin) and reestablishment of cell-cell contacts as E-cadherin expression was restored (Figure 2 ). Vascular endothelial cell apoptosis was detected in larger islets by co-immunostaining of vascular basal lamina with anti-laminin antibodies together with TUNEL (Figure 2 ), with only a small number of apoptotic β-cells detected at this time-point (average of 1 cell per islet section) following co-immunostaining of β-cells with anti-insulin antibodies together with TUNEL (Figure 2 ). Reactivation of c-Myc for 5 days in partially regressed islet tumours (9 days of inactive c-MycER TAM ) led to restoration of the oncogenic properties of c-Myc (Figure 1 and 2 ): β-cell proliferation, loss of differentiation (down-regulation of insulin), and loss of cell-cell contacts (down-regulation of E-cadherin). In contrast to the mouse model of osteogenic sarcoma [ 19 ], where reactivation of c-Myc in growth-arrested, re-differentiated osteocytes induced their rapid demise through apoptosis, reactivation of c-Myc in islet tumours did not lead to an increase in the number of cells undergoing apoptosis compared to the original islet tumours formed after 14 days of c-Myc activation (Figure 2 ). Importantly, in contrast to partially regressed islet tumours (9 days of c-Myc inactivation), where most apoptotic cells were found in the vasculature (Figure 2 ), the majority of apoptotic cells in both the original 14-day islet tumours and reactivated tumours were not present within the vasculature but rather found predominantly adjacent to blood vessels (averages of 3–4 cells per islet section c-Myc 'on' versus 3–5 cells per islet c-Myc 'on-off-on', both representing less than 0.1% of islet cells). 
A small number of these TUNEL-positive cells could be identified as β-cells, as deduced following co-staining with insulin (Figure 2 ) or Nkx6.1 (data not shown). Of the remaining apoptotic cells, some could be identified as leukocytes by co-staining with CD45 (data not shown). The identity of the remainder is unclear, but could include some β-cells with complete loss of normal differentiation markers. Despite the presence of apoptotic cells during c-Myc activation, levels are clearly insufficient to prevent islet tumour progression as c-Myc-induced β-cell proliferation far exceeds apoptosis and tumours rapidly and inexorably expand over time. In order to establish whether islet tumour progression was indeed being maintained following reactivation of c-Myc we examined a longer period of c-Myc reactivation. In this case, mice were treated daily with IP 4-OHT for 14 days (following an initial 2 week period of c-Myc activation and 9 day transient period of c-Myc inactivation). Sections were cut throughout the pancreas and selected sections at every 100 μm were stained with H&E for histological analysis as well as with immunohistochemical markers for proliferation (Ki-67), differentiation (insulin), apoptosis (TUNEL) and loss of cell-cell contact (E-cadherin). Importantly, pancreatic islets following 14 days of c-Myc reactivation showed no signs of tumour regression but rather progression of islet tumourigenesis, with the vast majority of islets showing more pronounced invasion (adenocarcinoma) (Figure 3 ). Brief inactivation of c-Myc in papillomatous lesions of the skin does not sustain tumour regression Development of papillomatosis (a pre-malignant lesion of skin epidermis resembling actinic keratosis in humans) was induced in nine mice following activation of c-MycER TAM (c-Myc) in suprabasal keratinocytes of adult Involucrin-c-MycER TAM transgenic skin by daily topical administration of 4-OHT for 14 days as previously described [ 8 ]. 
Three mice from different litters were sacrificed for histological examination of the skin. c-Myc was then inactivated by 4-OHT withdrawal (see Methods) for 5 days in the remaining six littermates (two from each litter), after which time one mouse from each litter was sacrificed for collection of skin and analysis. In the remaining three mice, reactivation of c-Myc by daily topical application of 4-OHT was carried out for 5 days before sacrifice and analysis. We chose to inactivate c-Myc for only 5 days to minimise the loss of neoplastic keratinocytes through outward migration and shedding from the skin surface prior to c-Myc reactivation, thereby maximising potential detection of any keratinocyte apoptosis. For each analysis, sections (5–10 μm) were cut throughout two 10 mm pieces of skin and every tenth section was selected for histological and immunohistochemical examination. In the absence of 4-OHT, Involucrin-c-MycER TAM transgenic mice exhibited normal skin morphology (Figure 4 ) with no detectable suprabasal keratinocyte proliferation or apoptosis (Figure 5 ). As expected, activation of c-Myc for 14 days triggered the progression of papillomatosis with marked epidermal hyperplasia, dysplasia, angiogenesis, and formation of nucleated cornified layers (parakeratosis) (Figure 4 ). These "parakeratotic tiers" gave the appearance of dry, scabby lesions, which were eventually lost through lifting or flaking from the surface. The hyperplastic phenotype resulted from c-Myc induced proliferation of suprabasal keratinocytes, as detected using antibodies specific for the proliferation marker Ki-67 (Figure 5 ). As demonstrated in our earlier studies [ 8 ], c-Myc alone in the absence of any ectopic anti-apoptotic lesion is sufficient to induce pre-malignant neoplastic lesions in the skin. This contrasts with results in the pancreatic islet β-cells, where c-Myc activation alone induces widespread apoptosis [ 9 ]. 
Activation of c-Myc in the epidermis, however, is associated with essentially no detectable apoptosis when examined by co-staining with the specific suprabasal keratinocyte marker keratin 1 together with TUNEL (Figure 5 ). The only detectable TUNEL positive cells were those present at the surface of the skin about to be shed and nucleated cells within the parakeratotic tiers (Figure 5 ). Subsequent inactivation of c-Myc for 5 days led to redifferentiation of suprabasal keratinocytes as evidenced by the appearance of granular cells and loss of dysplasia (Figure 4 ) concomitant with a marked reduction in the number of proliferating suprabasal keratinocytes (Figure 5 ). Importantly, there was no increase in the number of cells undergoing apoptosis, with TUNEL positive cells again confined to shedding keratinocytes (Figure 5 ), indicating that regression of these skin lesions occurs through loss of neoplastic keratinocytes by shedding. Although tumour vasculature was examined for endothelial apoptosis, there was no measurable increase in cell death at this particular time-point. It is likely, however, that a more extensive analysis would determine the particular stages at which vascular collapse occurs following c-Myc inactivation. Reactivation of c-Myc in skin lesions for 5 days led to restoration of papillomatosis (Figure 4 ). In contrast to the osteogenic sarcoma model, reactivation of c-Myc in growth arrested, redifferentiated keratinocytes did not result in increased apoptosis, despite the absence of an anti-apoptotic lesion (Figure 5 ), but again resulted in increased levels of suprabasal proliferation extending beyond the basal compartment (Figure 5 ), although, as during the original c-Myc 'on' period, no replicating cells were present in the granular layer. However, we cannot confirm what proportion of replicating suprabasal keratinocytes, following c-Myc reactivation, had previously experienced c-Myc activation. 
This question arises as new 'c-Myc naïve' keratinocytes are likely to have entered the suprabasal compartment as a result of on-going basal layer replication during the transient period of c-Myc inactivation. Discussion The potential to inactivate or mitigate the action of oncogenes is increasingly being exploited in the design of new therapeutic agents to reverse tumourigenesis in cancer therapy [ 1 , 23 - 25 ]. Importantly, despite the seeming genetic complexity of many cancers, a compelling body of evidence suggests that inactivation of single oncogenes can be sufficient for tumour regression. In several transgenic mouse models of cancer, the generation of invasive/metastatic tumours in which more than one genetic event is involved can be reversed following inactivation of a single oncogene [ 7 , 9 , 12 , 16 ]. In addition, c-Myc-induced lymphomas shown to be genomically complex and unstable regressed following c-Myc inactivation [ 15 ]. Strikingly, extensive lung metastases, arising in mice bearing Neu -induced mammary tumours, rapidly and fully regressed following inactivation of Neu , despite tumour cells acquiring additional mutations [ 14 ]. These findings suggest that some metastatic lesions may remain responsive to therapeutic intervention originally targeted to the primary lesion, although some tumours have been shown to escape dependence upon the initiating oncogene [ 6 , 7 , 11 , 14 , 16 , 19 ]. Recent data from a mouse model of osteogenic sarcoma showed that even transient c-Myc inactivation can result in sustained tumour regression [ 19 ]. In this model, reactivation of c-Myc after a brief period of tumour regression led to extensive apoptosis of osteoblasts rather than restoration of the tumour phenotype. This increased sensitivity to apoptosis may be due to epigenetic changes that have occurred within the newly differentiated cells (when c-Myc was inactivated). 
In light of these findings, it was important to establish whether reactivating c-Myc in other tumour models, consisting of other cell types, would also lead to apoptosis. In other words, would such cells behave differently from the original differentiated cell (in which c-Myc was first activated) and become more sensitive to Myc-induced apoptosis as a result of epigenetic changes? Here we show, in contrast to the osteogenic sarcoma model, that re-activating c-Myc in islet β-cell tumours restores the oncogenic properties of c-Myc, rapidly re-initiating β-cell proliferation, loss of differentiation, loss of E-cadherin, local invasion and angiogenesis. This occurs despite the re-differentiation of previously c-Myc-activated tumour cells to a more mature phenotype and the loss of some of the newly acquired vasculature, occurring during the period of c-Myc inactivation. Similarly, in epidermis, reactivating c-Myc in suprabasal keratinocytes does not result in apoptosis, which remains confined to the shedding areas of parakeratosis at the skin surface, but restores the papillomatous phenotype, inducing cell proliferation and dysplasia. Self-evidently, to restore vulnerability to the pro-apoptotic activity of c-Myc would necessitate the tumour cells losing their resistance to apoptosis. In this case the results of Jain et al. [ 19 ] may be explained by the origin of osteosarcomas in their model in bone progenitor cells, which we assume were able to avoid c-Myc-induced apoptosis. Subsequently, despite the retention of some features of immaturity, these cells progress to a differentiated phenotype upon transient c-Myc inactivation, which renders them susceptible to apoptosis when c-Myc is reactivated. The assumption we must make here is that whatever uncharacterised additional mutations these cells may have acquired along their journey to malignancy, these did not include mutations able to confer resistance to apoptosis in the more mature osteocyte. 
One may speculate about the underlying mechanisms, one possibility being the lack of a selective evolutionary advantage to acquisition of an anti-apoptotic lesion, given that some of these cells must already have been capable of avoiding apoptosis – perhaps due to their primitive developmental stage. In our pIns-MycER TAM mice, where mature β-cells are normally highly sensitive to the pro-apoptotic activity of c-Myc [ 9 ], invasive β-cell tumours only originate once apoptosis is prevented, in this case by constitutive over-expression of the protein Bclx L . Without such apoptosis suppression the majority of β-cells undergo apoptosis upon c-Myc activation, rendering mice diabetic within a few days. This may be one difference between our own results and those of Jain et al. [ 19 ], namely that in our system an acquired resistance to apoptosis is required from the outset, but once this is in place and stays in place, tumours develop and progress whatever the differentiation status of these cells. In contrast, in the system studied by Jain et al. [ 19 ], cells can lose their resistance to apoptosis whilst c-Myc is inactivated. This notion is supported by the rapid continuation of replication and tumour growth in our mice after a transient period of c-Myc inactivation despite this being sufficient to restore a differentiated phenotype in β-cells. In contrast to the osteogenic sarcoma model [ 19 ], it is less likely that epigenetic changes have occurred: c-Myc is originally activated in mature islet β-cells of the adult pancreas, and following inactivation of c-Myc in islet tumours there is a restoration to differentiated adult β-cells. Importantly, despite the beginning of vascular collapse during c-Myc inactivation, reactivating c-Myc appears to result not only in further growth of the tumour, but also re-expansion of the accompanying angiogenesis and more pronounced islet invasion (adenocarcinoma). 
The persistence of β-cell resistance to the pro-apoptotic activity of c-Myc is probably conferred by the continued presence of the anti-apoptotic protein Bclx L . Although the general consensus view is that human cancers largely arise from stem cells or precursor cells, it is equally plausible that some cancers might also originate from more mature cells. Intriguingly, recent studies in pancreatic islets of the adult mouse suggest that putative stem cells do not proliferate and produce new β-cells in adult animals, but rather β-cell turnover is maintained by proliferation of mature β-cells [ 26 ]. It is therefore quite plausible that islet tumours might also originate from these more mature cells, which as the major source of replicating cells in the adult would also be those most likely to acquire cancer-causing mutations. Another illuminating study from Bachoo et al. [ 27 ] shows that dysregulation of specific genetic pathways, rather than cell-of-origin, dictates the emergence and phenotype of high-grade gliomas. In order to investigate whether the absence of an anti-apoptotic mutation would result in a shift from resistance to apoptosis towards vulnerability following transient inactivation of c-Myc, we examined a different mouse tumour model. In suprabasal keratinocytes, c-Myc activation induces relentless replication leading to hyperplasia, angiogenesis and a premalignant phenotype resembling actinic keratosis. In this system, differentiated suprabasal keratinocytes presumably avoid apoptosis due to the permissive environment of the epidermis, as they readily undergo apoptosis when removed from this environment [ 8 ]. 
It was, however, possible that the dramatic increase in cell numbers following c-Myc-induced epidermal hyperplasia might overwhelm the survival signals within this tissue, which although sufficient to prevent any discernible apoptosis during sustained c-Myc activation might no longer be able to prevent it after a transient inactivation (with any accompanying restoration of normal differentiation). In fact, we see no obvious change in the behaviour of keratinocytes from before to after a period of c-Myc inactivation. Apoptosis remains confined to the area of parakeratosis, which accompanies c-Myc-induced hyperplasia and papillomatosis. At these time-points we do not see any prominent endothelial cell apoptosis, so the exact point at which the angiogenesis collapses is not known. Interestingly, a recent publication suggests that following a transient period of c-Myc inactivation, at least some previously c-Myc activated suprabasal keratinocytes may differentiate to an extent sufficient to render them unable to re-enter the cell cycle after c-Myc reactivation [ 28 ]. This is supported by our original work suggesting that more differentiated suprabasal keratinocytes in the granular compartment are generally refractory to c-Myc-induced replication [ 8 ]. In this case, restoration of papillomatous lesions in Involucrin-c-MycER TAM mice might instead result largely from the replication of 'c-Myc naïve' keratinocytes newly generated from the basal layer during the period of c-Myc inactivation [ 28 ]. Whatever the underlying explanation, it is difficult to be sure that all previously c-Myc activated cells have undergone irreversible growth arrest and therefore make no contribution to restoration of the tumour phenotype. This issue may only be resolved fully by the development of a suitable labelling technique, which could indelibly mark all c-Myc activated keratinocytes, but only up to the point at which c-Myc is inactivated and not when c-Myc is reactivated. 
Fortunately, with the islet model, there is no such element of doubt. In this case no new β-cells are formed during the period of c-Myc inactivation, with replication essentially absent within the pancreas during this period. The replication of β-cells after c-Myc reactivation must, therefore, be taking place in those same cells that have re-differentiated whilst c-Myc was deactivated. Therefore, one can confidently state that transient c-Myc inactivation in tumours originating in pIns-c-MycER TAM / RIP7-Bcl-x L double transgenic mice will not lead to either apoptosis or irreversible growth arrest in tumour cells. In a recently published study, Shachaf and colleagues demonstrate in a Tet-regulatable conditional mouse model that invasive c-Myc-induced hepatocellular carcinomas regress when ectopic c-Myc expression is turned off. Importantly, by employing a bioluminescence technique to label hepatocellular cancer cells, it was shown that some erstwhile tumour cells re-differentiate but avoid apoptosis and remain 'dormant' even for prolonged periods after c-Myc transgene expression is turned off. These labelled cells can then once more contribute to cancer progression if c-Myc transgene expression is subsequently restored [ 21 ]. Extrapolating from these various results one may assume that where avoidance of c-Myc induced apoptosis is a product of cellular immaturity (as may be the case in some stem cell populations), then as long as c-Myc inactivation induces differentiation, and, presumably, no anti-apoptotic mutation has been acquired, a transient period may suffice for sustained tumour regression. However, in many cases where an anti-apoptotic lesion is also present (loss of p53/p19ARF; upregulation of antiapoptotic Bcl2 family members etc), or potentially the microenvironment continues to prevent apoptosis, sustained inactivation would be essential for tumour regression. 
Moreover, partial reversal of angiogenesis, at least in the islet tumours, will have no lasting impact on tumour progression if angiogenesis simply resumes in step with further growth of the tumour. Finally, it seems likely that reacquisition of a differentiated phenotype does not preclude previously c-Myc activated cancer cells from re-exhibiting cancer behaviour once c-Myc is reactivated – removal of these cells by apoptosis or other means would seem necessary to remove the threat of cancer recrudescence. Identifying these key differences in behaviour between different cell types/developmental stages is not an academic exercise, but can give vital information about the mechanisms and context whereby oncogene activity may be determined, which in addition to the biological interest might also provide new knowledge of direct relevance to human cancer. Given that deregulation of c-Myc expression is one of the most frequently described abnormalities in human cancers and has been observed in β-cell derived tumours and in human skin epidermal tumours [ 29 - 32 ], our observations may have important ramifications for human cancers. It seems likely that therapies directed at oncogene targets will need to be individually tailored to fit individual tumour types. Thus, detailed knowledge of the molecular 'road map' to cancer for any individual tumour would be needed before determining the optimal treatment targets and therapeutic schedule. In some cases, where describing the molecular basis of the tumour suggests no inherent resistance to apoptosis, transient c-Myc inactivation may prove an effective part of the therapeutic strategy, whereas identifying the presence of lesions known to suppress c-Myc apoptosis would direct therapy at maintaining sustained c-Myc inactivation. 
Moreover, such detailed molecular information on the cancer cells would have to be interpreted in the context of the relevant microenvironments within which these cells exist. However, although we are still some distance from realising these goals of molecular fingerprinting and individualised therapy for cancer, the continually expanding literature surrounding successful tumour regression with various strategies aimed at oncogene inactivation and the knowledge gained strongly suggest that the journey is worth undertaking. Conclusions In several transgenic mouse models of cancer, the generation of invasive/metastatic tumours in which more than one genetic event was involved can be reversed following inactivation of a single oncogene. These findings suggest that some metastatic lesions may remain responsive to therapeutic intervention originally targeted to the primary lesion. Recent data from a mouse model of osteogenic sarcoma showed that even transient c-Myc inactivation can result in sustained tumour regression [ 19 ]. Here we show the consequences of inactivating c-Myc transiently in two distinct tumour types in vivo . In contrast to the osteogenic sarcoma model, re-activating c-Myc in islet β-cell tumours does not lead to accelerated β-cell apoptosis, but rather restores the oncogenic properties of c-Myc, rapidly re-initiating β-cell proliferation, loss of differentiation, loss of E-cadherin, local invasion and angiogenesis. This occurs despite the re-differentiation of previously c-Myc-activated tumour cells to a more mature phenotype and the loss of some of the newly acquired vasculature, occurring during the period of c-Myc inactivation. Similarly, in epidermis, reactivating c-Myc in suprabasal keratinocytes does not result in apoptosis, which remains confined to the shedding areas of parakeratosis at the skin surface, but restores the papillomatous phenotype, inducing cell proliferation and dysplasia. 
The differences between the conditional tumour models used by ourselves and Jain et al. [ 19 ], rather than detracting from the conclusions drawn, as is frequently the case, serve to highlight the importance of identifying the different cellular contexts in which transient inactivation of oncogenes may provide a valid therapeutic approach. These results are significant in that they suggest that epigenetic changes resulting in increased sensitivity to apoptotic stimuli determine the effects of altering Myc levels. Although it remains to be seen whether transient inactivation of other oncogenes can result in sustainable tumour regression, these studies begin to define the requirements necessary for transient c-Myc inactivation to be effective as a cancer therapy. Thus, we would question the potential of cancer therapies aimed at transient oncogene inactivation, at least under those circumstances where tumour cell differentiation and alteration of epigenetic context fail to reinstate apoptosis and no alternative mechanism exists for tumour cell removal. One would also have to be cautious about therapies that, instead of removing cancer cells, might rely largely on promoting re-differentiation – such 're-differentiated' cancer cells could all too readily reacquire their cancer potential. Together, these results suggest that treatment schedules will need to be informed by knowledge of the molecular basis and environmental context of any given cancer. Methods Transgenic mice pIns-c-MycER TAM and Involucrin-c-MycER TAM mice were generated by cloning a full-length human c- myc cDNA fused to the hormone-binding domain of a modified estrogen receptor (c-MycER TAM ) downstream of the rat insulin promoter and the human involucrin promoter, respectively, as previously described [ 8 , 9 ]. 
DNA constructs were injected into male pronuclei of day 1-fertilized (CBA × C57BL/6)F1 embryos and injected embryos were transferred into day 1-plugged pseudopregnant foster mice and the litters screened for presence of the transgene by Southern blotting. Heterozygous founder mice were backcrossed appropriately to establish transgenic lines. Heterozygous RIP7-Bcl-x L mice were obtained from Dr Doug Hanahan [ 22 ]. Litters from all transgenic mice and appropriate F1 crosses were routinely genotyped by PCR analysis on genomic DNA (1 to 5 μl) isolated from ear biopsies. DNA was extracted by incubating each ear disc in "Hotshot" reagent (25 mM NaOH, 0.2 mM disodium EDTA; pH12) for 10 minutes at 95°C. Following this, 75 μl of neutralizing agent (40 mM Tris-HCl, pH5) was added and the sample cooled to 4°C overnight. Primers used for the detection of c-MycER TAM cDNA: (forward) 5' CCA AAG GTT GGC AGC CCT CAT GTC 3'; (reverse) 5' AGG GTC AAG TTG GAC AGT GTC AGA GTC 3'. PCR program: 94°C 2 min 1 cycle, [94°C 1 min, 57°C 1 min, 72°C 2 min] 30 cycles, 72°C 10 min 1 cycle. PCR product size: 413 bp. Primers used for the detection of RIP7-Bcl-x L cDNA: (forward) 5' AGC ACT TTC TGC AGA CCT AGC AC 3'; (reverse) 5' CAG CTC CCG GTT GCT CTG AGA C 3'. PCR program: [94°C 1 min, 60°C 30 s, 72°C 2 min] 30 cycles, 72°C 3 min 1 cycle. Transgenic mice were housed under barrier conditions with a 12 hour light/dark cycle and access to food and water ad libitum . Activation and inactivation of c-MycER TAM protein Expression of the chimeric protein, c-MycER TAM , was targeted to pancreatic β-cells using a rat insulin promoter, or to suprabasal keratinocytes using the human involucrin promoter. As shown in our previous publications [ 8 , 9 ], the transgenically expressed c-MycER TAM protein remains inactive due to association of the cells' own hsp90 with the ER TAM . 
Upon administration of 4-hydroxytamoxifen (4-OHT), hsp90 is displaced, allowing association of c-Myc's partner, Max, to form transcriptionally active heterodimers [ 33 ]. To activate c-MycER TAM protein in pancreatic β-cells of adult transgenic mice, 1 mg of 4-OHT (Sigma) sonicated in peanut oil (1 mg/0.2 ml) was administered daily by IP injection. To activate c-MycER TAM protein in skin epidermis of adult transgenic mice, 1 mg of 4-OHT (Sigma) dissolved in ethanol (1 mg/0.2 ml) was administered daily by topical application to a shaved area of dorsal skin. Inactivation of c-MycER TAM protein was achieved following withdrawal of 4OHT. As c-MycER TAM RNA and protein levels remain unchanged in the presence or absence of 4OHT, Northern and Western blot analysis cannot confirm whether the protein is inactive. Thus, to confirm inactivity of the c-MycER TAM protein in pancreatic β-cells, we show by immunohistochemistry the reversal of several markers of Myc activation – growth arrest, re-differentiation, re-establishment of cell-cell contact – by day 4 of 4OHT withdrawal (see ref [ 9 ] and Results section). In addition, we have gene array data confirming the rapid normalisation of Myc-regulated gene expression upon tamoxifen withdrawal (e.g. insulin, pdx-1, Isl-1, cyclin D; data not shown). Similarly, inactivation of c-MycER TAM protein in skin epidermis was confirmed using immunohistochemistry for markers of re-differentiation (K1 and K14) and growth arrest (see Results section). The tight regulation of the c-MycER TAM protein in skin epidermis was also shown previously [ 8 ] using in situ hybridisation for detection of ODC RNA, a known c-Myc target gene; by day 5 following withdrawal of 4OHT, ODC RNA is no longer detected. Histological and immunohistochemical analysis of pancreatic tissue Pancreata or skin were excised from mice and 5–10 mm pieces of tissue were fixed overnight in neutral-buffered formalin, embedded in paraffin wax and sectioned (5–10 μm). 
Frozen sections were prepared from tissue embedded in OCT and frozen in foil on a bath of dry ice and ethanol. Prior to staining, frozen sections were air-dried and fixed in 4% paraformaldehyde for 15 minutes. Alternatively, for frozen sections, tissue was fixed in 4% paraformaldehyde for 2 hours followed by incubation in 30% sucrose overnight at 4°C. For pancreata analysis, sections (5–10μm) were cut throughout the entire pancreas and every tenth section was selected for histological and immunohistochemical examination. For skin, sections (5–10μm) were cut through two 10 mm pieces of tissue and every tenth section was selected for analysis. Primary antibodies were as follows: rabbit polyclonals Ki-67 (Novacastra) and Nkx6.1 (Ole Madsen, NovoNordisk); rabbit anti-mouse laminin (Sigma); guinea-pig anti-porcine insulin (Dako); rat anti-mouse E-cadherin, (Zymed); rabbit anti-mouse keratin 1 (BabCo); rat anti-mouse CD45 (AbCam). E-cadherin and laminin antibodies were found to label reliably only frozen tissue sections. Other antibodies were effective when used on both paraffin-embedded and frozen sections, although Ki-67 and Nkx6.1 required epitope retrieval by microwaving paraffin-embedded sections at 700 W for 2 × 10 minutes in 0.01 M citrate buffer, pH6.0 (Vector) followed by immersion in cold water. Antibodies were diluted in incubation buffer: PBS/0.5% Triton X-100 containing 1:25 dilution of serum from the same species as the secondary antibody. Primary antibodies for insulin and Ki-67 were applied together to sections for 1 hour. Sections were then incubated in Texas Red-conjugated goat anti-guinea pig Ig secondary antibody together with FITC-conjugated goat anti-rabbit secondary antibody (Vector). After washing, sections were mounted in Vectashield mounting medium (Vector). 
To detect cells undergoing apoptosis by co-staining (TUNEL/insulin, TUNEL/laminin and TUNEL/K1), immunofluorescent staining was performed by applying insulin, laminin or K1 antibodies to sections for 1 hour at room temperature, followed by Texas Red-conjugated goat anti-guinea pig (for insulin antibodies) or goat anti-rabbit (for laminin and K1 antibodies) Ig secondary antibody (Vector). TUNEL staining was subsequently performed using the ApopTag Fluorescein Direct kit (Chemicon) for frozen tissue sections and the ApopTag Fluorescein Indirect kit (Chemicon) for paraffin-embedded tissue sections. Authors' contributions SP participated in the design of the study, administered 4OHT to relevant mice, collected tissue, carried out immunohistochemical staining, coordinated and analysed data, drafted the manuscript, and provided part of the funds. SA carried out genotyping, assisted with the administering of 4OHT to mice, collected tissues, cut sections and assisted with capturing of images. LC assisted with genotyping, cutting of sections, and immunohistochemical staining. VI assisted with genotyping and immunohistochemical staining. SZ assisted with genotyping and immunohistochemical staining. MK participated in the design of the study, assisted with the coordination and analyses of data, helped draft the manuscript and provided funding.
522814 | Child-Pugh classification dependent alterations in serum leptin levels among cirrhotic patients: a case controlled study | Background As anorexia and hypermetabolism are common in cirrhosis, leptin levels may be increased in this disease. In this study, we investigated the relation between the severity of disease and serum leptin levels in post-hepatitis cirrhosis and the role of body composition, gender and viral aetiology of cirrhosis in this association. Methods Thirty-five cases with post-hepatitis cirrhosis and 15 healthy controls were enrolled in this study. Body composition, including body mass index, body fat percentage and body fat mass, was determined. Serum leptin levels were assayed. Results Leptin levels were significantly higher among cirrhotic patients than controls, independent of sex (p = 0.001). Female patients in both groups had higher leptin levels than males (in cirrhotics p = 0.029, in controls p = 0.02). Cirrhotic patients in each of the A, B and C subgroups according to the Child-Pugh classification revealed significantly different levels compared to controls (p = 0.046, p = 0.004, p = 0.0001, respectively). Male cirrhotics in Child-Pugh class B and C subgroups had significantly higher leptin levels compared to male controls (p = 0.006, p = 0.008). On the other hand, female patients only in the Child-Pugh class C subgroup had higher levels of serum leptin compared to controls (p = 0.022). By linear regression, the Child-Pugh classification was found to be the sole determinant of leptin levels in cirrhotics (beta: 0.435, p = 0.015). Conclusion Serum leptin levels increase in advanced liver disease independently of gender and body composition in post-hepatitis cirrhosis. The increase is more pronounced among patients in subgroup C of the Child-Pugh classification. | Background Leptin, a 16-kilodalton protein, is involved in the regulation of food intake and body composition [ 1 ]. 
It was discovered in 1994 by Friedman et al. [ 2 ] and has been proposed to physiologically regulate body weight by suppressing appetite and increasing energy expenditure [ 1 , 3 , 4 ]. In normal humans, circulating levels of leptin are higher in women than in men [ 3 , 5 ]. Besides this gender dependency, circulating leptin levels correlate with body fat mass (BFM) and body mass index (BMI) in healthy subjects [ 5 - 7 ]. Malnutrition is a common feature of cirrhotic patients [ 8 ]. A negative energy balance, and thus catabolism caused by energy expenditure, is considered to be of pathophysiological relevance in cirrhosis [ 9 ]. Several studies have shown that circulating leptin levels are modestly elevated in patients with alcoholic cirrhosis, suggesting that leptin might be involved in the malnutrition of cirrhosis [ 10 , 11 ]. While some studies have supported these findings, others have reported low serum leptin levels in post-hepatitis cirrhotic patients [ 10 , 12 , 13 ]. In addition, the nutritional status of cirrhotic cases ranges from normal to severe malnutrition, in connection with the severity of the disease [ 8 ]. The relationship between serum leptin levels and nutritional status in post-hepatitis cirrhosis has thus not yet been fully clarified. In this study, we investigated the relation between the severity of disease and serum leptin levels in post-hepatitis cirrhosis and the role of body composition, gender and viral aetiology of cirrhosis in this association. Methods Thirty-five cases with post-hepatitis cirrhosis (17 male, 18 female; mean age: 51.5 ± 12), diagnosed on the basis of clinical, laboratory, radiological, and/or histopathological findings, and 15 healthy controls (8 male, 7 female; mean age: 49.4 ± 8) were enrolled in this study. Cirrhotic cases were assigned into 3 groups on the basis of the Child-Pugh classification [ 14 ] as follows: Child A (n = 10), Child B (n = 14) and Child C (n = 11). 
Causative agents of cirrhosis were viral hepatitis B (n = 20) and hepatitis C (n = 15). As leptin is a gender-dependent peptide, the control and cirrhotic groups were each divided into male and female subgroups. Exclusion criteria were a history of cancer, diabetes mellitus or alcoholism; the existence of pleural effusion, gastrointestinal bleeding, acute infection or renal failure; and treatment with corticosteroids, immunosuppressive agents or oral contraceptives within the last 6 months. The control group consisted of healthy individuals with normal medical history, physical examination and blood biochemistry. None of them had restricted their diet to lose weight during the last three months. Subjects receiving any medication were not included in the control group. The local human institutional review committee approved the study and written consents were received from all participants. Body composition analysis, including BMI, skin fold thickness, body fat percentage (BFP) and BFM, was performed in both cirrhotic cases and controls. To avoid incorrect BMI determination and body composition analysis, cirrhotic cases with ascites and edema were put on a sodium-restricted diet of 51 mmol per day and received diuretics (spironolactone 100–200 mg and, if necessary, furosemide 40–80 mg per day) until ascites and edema had resolved. Cirrhotic cases with refractory ascites unresponsive to therapy, impaired renal function following diuretic therapy, or triceps skinfold thickness less than the 10 th percentile [ 15 ] were excluded. BMI was determined as the actual body weight relative to the square of the body height (BMI, kg/m 2 ). Measurements of skin fold thickness were conducted at four different sites on the left side of the body (triceps, biceps, sub-scapular and supra-iliac) using a Holtain skinfold caliper (Holtain, Crosswell, Crymych, Dyfed, UK). All the measurements were made by the same physician (FB). 
Two measurements were made at each site and the average values were obtained. The BFP was calculated using Jackson's formula [ 16 ]. BFM was calculated in kilograms from the BFP and body weight. A diet containing 1 g/kg body weight of protein and 30 kcal/kg body weight of non-protein calories was prescribed to both cirrhotics and controls for the 2 weeks before serum leptin level measurement was performed. Blood samples were obtained in the morning following 12 hours of fasting; they were centrifuged and serum was separated after storage for one hour at room temperature. Biochemical analyses were done during the same day. Serum samples for measurement of leptin levels were stored at -20°C until they were used. Serum leptin levels were measured in ng/ml via the immunoradiometric assay (IRMA) method using the Human Leptin IRMA DSL-23100 (Diagnostic Systems Laboratories, Inc. Texas, USA) kit. Following the test procedures, test tubes were assessed with a Gammabyt-CR gamma counter for one hour. Measurements for standards, controls and serums were repeated for confirmation. Sensitivity of the test was 0.10 ng/ml. Statistical analysis Data were presented as median and range. Qualitative variables were assessed by the Chi-square test. Comparisons between whole groups and sub-groups were performed by the non-parametric Kruskal-Wallis and Mann-Whitney U tests. A linear regression analysis was performed with serum leptin levels as the dependent variable and age, gender, BFM, aetiology of cirrhosis and Child-Pugh classification as independent variables in cirrhotics. A p value of <0.05 was considered statistically significant. Results Patient profiles and body composition Clinical and demographic characteristics of all and gender-based sub-groups are shown in table 1 . In male and female subjects, no statistically significant difference was observed in age, BMI, BFP and BFM between the controls and the cirrhotic group (both, p > 0.05). 
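The body-composition arithmetic and the nonparametric two-group comparison described in the Methods can be sketched in a few lines. This is a hypothetical illustration with made-up values, not the study data; a statistics package would normally supply p-values for the U statistic, and the Kruskal-Wallis test extends the comparison to the three Child-Pugh subgroups.

```python
# Hypothetical sketch of the Methods arithmetic: BMI, body fat mass from
# body fat percentage, and a hand-rolled Mann-Whitney U statistic for a
# two-group comparison. Values below are illustrative, not the study data.

def bmi(weight_kg, height_m):
    # BMI: actual body weight relative to the square of body height (kg/m^2)
    return weight_kg / height_m ** 2

def body_fat_mass(bfp_percent, weight_kg):
    # BFM (kg) = body fat percentage x body weight
    return bfp_percent / 100.0 * weight_kg

def mann_whitney_u(a, b):
    # U counts pairs (x in a, y in b) with x > y, ties scored 0.5;
    # the smaller of U and len(a)*len(b) - U is compared against
    # critical values (or a normal approximation) for a p-value.
    u = sum(1.0 if x > y else 0.5 if x == y else 0.0 for x in a for y in b)
    return min(u, len(a) * len(b) - u)

cirrhotic = [13.5, 15.5, 10.9, 20.1, 8.0]   # illustrative leptin, ng/ml
controls = [6.4, 7.2, 3.7, 5.6, 8.7]

print(round(bmi(70.0, 1.75), 1))            # 22.9
print(round(body_fat_mass(27.9, 70.0), 2))  # 19.53
print(mann_whitney_u(cirrhotic, controls))  # 1.0 (a small U suggests the groups differ)
```

In practice, scipy.stats.mannwhitneyu and scipy.stats.kruskal compute the same statistics together with exact or asymptotic p-values.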
Following the Child-Pugh classification, gender-based or not, there were no significant differences in terms of BMI, BFP and BFM between controls and cirrhotic patients in each group (Figure 1 ), for each sex (Figures 2 and 3 ). Table 1 Characteristics of cirrhotic patients and controls in whole and gender-based sub-groups.

                     Age (years)   BMI (kg/m 2)   BFP (%)          BFM (kg)        Leptin (ng/ml)
Cirrhotic (n = 35)   53 (28–73)    24 (18–33)     27.9 (18.5–39)   19.4 (9.6–34)   13.5 (1.6–41)*
  Female (n = 18)    48 (28–73)    24 (18–33)     32 (24–39)       20.1 (9.6–34)   15.5 (7.4–41)**
  Male (n = 17)      54 (35–66)    23 (19–27)     24.6 (18.5–28)   18.6 (11–26)    10.9 (1.6–36)***
Control (n = 15)     47 (37–65)    24 (20–26)     27.7 (22.4–37)   19.4 (14–26)    6.4 (0.14–16.3)
  Female (n = 7)     43 (37–61)    24 (22–25)     33.2 (30–37)     20 (18–26)      7.2 (5.58–16.3)
  Male (n = 8)       53 (42–65)    24 (20–26)     24.9 (22.4–28)   18.6 (14–21)    3.7 (0.14–8.7)

Note: Data are presented as median and range. Groups and subgroups did not differ in terms of age, BMI, BFP, and BFM (p > 0.05). *Cirrhotic vs. controls (p = 0.001); **cirrhotic females vs. control females (p = 0.025); ***cirrhotic males vs. control males (p = 0.002). BMI, body mass index; BFP, body fat percentage; BFM, body fat mass. Figure 1 Following the Child-Pugh classification, there were no significant differences in terms of body mass index (BMI), body fat percentage (BFP) and body fat mass (BFM) between controls and cirrhotic patients (both, p > 0.05). Figure 2 Following the Child-Pugh classification, there were no significant differences in terms of body mass index (BMI), body fat percentage (BFP) and body fat mass (BFM) between female controls and female cirrhotic patients (both, p > 0.05). Figure 3 Following the Child-Pugh classification, there were no significant differences in terms of body mass index (BMI), body fat percentage (BFP) and body fat mass (BFM) between male controls and male cirrhotic patients (both, p > 0.05). 
Leptin levels Serum leptin levels were significantly higher in the cirrhotic group than in controls (p = 0.001) (Table 1 ). There was a significant difference between the leptin levels of men and women in both the control and cirrhotic groups (p = 0.029, p = 0.02, respectively) (Table 1 ). Leptin levels were elevated in both female and male cirrhotics compared to controls (p = 0.025, p = 0.002, respectively) (Table 1 ). Cirrhotic patients in each of the A, B and C subgroups according to the Child-Pugh classification revealed significantly different leptin levels [ng/ml, median (range): 9.46 (1.6–30), 12.8 (4.2–18.8), 14.7 (8–41), respectively] compared to controls [ng/ml, median (range): 6.4 (0.14–16.3)] (p = 0.046, p = 0.004, p = 0.0001, respectively). Gender-based serum leptin levels of controls and of cirrhotic cases grouped according to the Child-Pugh classification are shown in Figure 4 . Male patients in the control group had significantly lower serum leptin levels compared to cirrhotic male cases belonging to classes B and C (p = 0.006, p = 0.008, respectively). However, the difference was not significant between the control males and Child-Pugh class A males (p = 0.234). On the other hand, among females a significant difference was found only between Child-Pugh class C patients and controls (p = 0.022). Figure 4 Leptin levels in controls and cirrhotic patients by gender and Child-Pugh class. Male patients in the control group had significantly lower leptin levels compared to cirrhotic male cases belonging to classes B and C (p = 0.006, p = 0.008, respectively). On the other hand, among females a significant difference was found only between Child-Pugh class C patients and controls (p = 0.02). In controls and Child-Pugh class B patients, females had higher leptin levels than males. * P < 0.02 vs. controls, in the same gender. ◆ P < 0.05 vs. different gender in the same group. 
When age, gender, BFM, hepatitis B or C virus as the etiologic factor of cirrhosis, and Child-Pugh class (A, B or C) were tested as independent variables for the determination of serum leptin levels (the dependent variable) by linear regression analysis in the cirrhotic group, the analysis showed that the Child-Pugh classification was the sole determinant of serum leptin levels in cirrhotic cases (beta: 0.435, p = 0.015) (Table 2 ). Table 2 Linear regression analysis (R 2 = 0.326) with serum leptin as dependent variable in the cirrhotic group (n = 35).

Independent variable                  Beta     p
Gender (M-F)                          -0.307   0.065
Age (years)                           -0.227   0.183
BFM (kg)                              0.006    0.974
Viral etiologic factor (HBV-HCV)      0.167    0.315
Child-Pugh classification (A-B-C)     0.435    0.015*

Beta, beta regression coefficient; M, Male; F, Female; BFM, Body fat mass; HBV, Hepatitis B Virus; HCV, Hepatitis C Virus; A, Child-Pugh Class A; B, Child-Pugh Class B; C, Child-Pugh Class C. Discussion Leptin regulates body weight by suppressing appetite and increasing energy expenditure [ 1 , 3 , 4 ]. Anorexia and increased energy expenditure usually accompany cirrhosis [ 17 ]. McCullough et al. reported modestly elevated circulating leptin levels in patients with alcoholic cirrhosis and suggested that elevated serum leptin levels in cirrhosis might be responsible for the high prevalence of malnutrition among cirrhotic patients [ 11 ]. In our study, we also observed that circulating leptin levels were increased in non-alcoholic cirrhosis caused by viral hepatitis, without a state of severe energy malnutrition. Leptin levels are higher in women than in men [ 3 , 6 ]. McCullough et al. found higher leptin levels among female cirrhotics than male cirrhotics, although the difference was not statistically significant [ 11 ]. These concepts are especially important in cirrhosis, because cirrhotics have gender-dependent alterations in body composition and sex steroids [ 18 , 19 ]. 
When we considered gender in our study, serum leptin levels were significantly higher among females than males in both controls and cirrhotics. In addition, cirrhotic females and males had higher levels of serum leptin than controls of the same gender. Since BMI and BFM values did not differ by sex or by the presence or absence of cirrhosis, the increased serum leptin levels cannot simply be attributed to BFM or malnutrition status in cirrhosis. In addition, the linear regression test in the present study showed that disease severity, as determined by the Child-Pugh classification, was the sole significant determinant of serum leptin levels in cirrhosis. In previous studies, the association between the severity of cirrhosis and serum leptin levels has been controversial [ 11 - 13 ]. Henriksen et al. suggested that the elevated circulating leptin in patients with alcoholic cirrhosis was most likely caused by a combination of decreased renal extraction and increased release from subcutaneous abdominal, femoral, gluteal, retroperitoneal, pelvic, and upper limb fat tissue areas [ 20 ]. For this reason, we excluded cases with impaired renal clearance to avoid accumulation of leptin in serum. In addition, by using 4 different sites of skinfold thickness measurement to calculate BFP and excluding cases with ascites that did not respond to diuretic therapy, we aimed to determine the relationship between body composition and serum leptin levels in controls and cirrhotics. In this study, BFM was found to be associated with serum leptin levels in controls, but not among cirrhotic patients. Therefore, we conclude that leptin production may differ between healthy and cirrhotic subjects. 
In an animal study, it has been shown that chronic ethanol consumption leads to increased serum concentrations of tumor necrosis factor and related cytokines such as leptin by inducing overproduction of these factors in the liver and peripheral adipose tissues [ 21 ]. Leptin secretion from adipocytes may be enhanced by cytokines released as part of the inflammatory or fibrogenic process. Alternatively, as suggested, cirrhotic patients may simply exhibit decreased hepatic clearance of this protein [ 22 ]. Conclusion Serum leptin levels increase in advanced liver disease independently of gender, body composition and viral etiologic factor in post-hepatitis cirrhosis. The increase is more pronounced among patients in subgroup C of the Child-Pugh classification. Abbreviations BMI, body mass index; BFP, body fat percentage; BFM, body fat mass Competing interests The authors declare that they have no competing interests. Authors' contributions Bolukbas FF conceived of the study, and participated in its design and coordination. Bolukbas FF, Bolukbas C, Erdogan M and Zeyrek F collected the samples and carried out the laboratory analysis. Bolukbas C conceived of the study and participated in the sequence alignment and drafted the manuscript. Horoz M participated in the design of the study, participated in the sequence alignment and drafted the manuscript. Gumus M collected the clinical data and performed the statistical analysis. Yayla A drafted the manuscript and revised it critically for important intellectual content. Ovunc O participated in study design and coordination and revised the manuscript critically for important intellectual content. All authors read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed online.
449904 | Training the Immune Response: B-cells' Master Regulator | null | Viruses, bacteria, and other pathogens betray their presence in the body through exterior proteins, distinct to each strain. To prepare for the multitude of potential infectious agents, developing B-cells shuffle their genes to produce as many as a billion different antibodies, one to match almost any foreign protein. Upon infection, a limited subset of these antibodies will recognize a particular pathogen and mobilize a larger, targeted immune response. B-cells producing the “recognizing” antibody refine and test genetic modifications, adjusting the antibody's fit to the foreign entity. B-cells compete for the best match, or highest affinity; the winners survive to produce more cells and more antibodies against the invader. Occurrence of Ig gene conversion and hypermutation on an evolutionary tree B-cells require an enzyme called activation-induced cytidine deaminase (AID) to develop the most effective antibody. AID generates mutations in the highly variable target-recognition region of an antibody. Removing the AID gene prevents antibody refinement in mature human and mouse B-cells—which use a process called somatic hypermutation to alter single nucleotides in the antibody gene—as well as chicken cells that use a different process called gene conversion to produce variation. Unlike the single nucleotide changes caused by hypermutation, gene conversion modifies an antibody by swapping part of its antigen-binding region for a replacement gene segment. Preference for hypermutation versus gene conversion varies across species, and can even vary within a species. B-cells in chickens use gene conversion through adolescence, when the cells move from a hindgut organ called the bursa into the spleen, where hypermutation takes over. It is unclear precisely how AID induces either somatic hypermutation or gene conversion, and how it chooses one over the other. 
Several recent studies suggest that AID's effectiveness may depend on damage to a single DNA base—specifically, changing a cytidine to uracil, which AID can do in either DNA or RNA. To test whether AID causes hypermutation and gene conversion through a common pathway, Jean-Marie Buerstedde and colleagues at the National Research Center for Environment and Health in Munich, Germany, deleted the donor genes that supply replacement segments for gene conversion in chicken bursa cells. The cells not only stopped performing gene conversion; they revved up single nucleotide mutations in a pattern that looked suspiciously like somatic hypermutation. The mutations targeted hotspots for gene conversion, suggesting that hypermutation and gene conversion share common starting points along antibody genes. This paper adds evidence that AID functions by swapping a single DNA base to induce multiple modes of gene shuffling and refinement in B-cells.
534092 | Alteration of T cell immunity by lentiviral transduction of human monocyte-derived dendritic cells | Background Dendritic cells (DCs) are professional antigen-presenting cells that play important roles during human immunodeficiency virus type 1 (HIV-1) infection. HIV-1 derived lentiviral vectors (LVs) transduce DCs at high efficiency but their effects on DC functions have not been carefully studied. Modification of DCs using LVs may lead to important applications in transplantation, treatment of cancer, autoimmune and infectious diseases. Results Using DCs prepared from multiple blood donors, we report that LV transduction of DCs resulted in altered DC phenotypes and functions. Lentiviral transduction of DCs resulted in down-regulation of cell surface molecules including CD1a, co-stimulatory molecules CD80, CD86, ICAM-1, and DC-SIGN. DCs transduced with LVs displayed a diminished capacity to polarize naive T cells to differentiate into Th1 effectors. This impaired Th1 response could be fully corrected by co-transduction of DCs with LVs encoding interleukin-12 (IL-12), interferon-gamma (IFN-γ), or small interfering RNA (siRNA) targeting IL-10. Conclusions DCs transduced with LVs in vitro displayed diminished Th1 functions due to altered DC phenotypes. Our study addresses an important issue concerning lentiviral infection and modification of DC functions, and provides a rational approach using LVs for immunotherapy. | Background During HIV-1 infection, an increase in DC-SIGN and CD40 has been reported, as has a decrease in the expression of CD80 and CD86 in dendritic cells (DCs) of lymphoid tissue [ 1 ]. 
Although some suggest that HIV-1 infection reduces the production of IL-12 by DCs [ 2 ], others have shown that DCs derived from HIV-1-infected individuals express both IL-12 and IL-10 at levels similar to those in non-infected individuals [ 3 ]. While these studies have explored the effects of wild-type HIV-1 on DC functions, the possible effects of HIV-1-derived lentiviral vectors (LVs) on DC functions have not been well characterized [ 1 ]. LVs are useful gene transfer tools that can efficiently target many types of cells including DCs. As important immune modulating cells for immunotherapy and vaccine applications, DCs play critical roles in activating the host immune response. DCs can capture, process, and present foreign antigens, migrate to lymphoid-rich tissues, and stimulate antigen-specific immune responses [ 4 ]. DCs present a variety of signals to stimulate T cells and initiate an immune response; these signals involve multiple signaling mediators, including MHC molecules harboring antigenic peptides (signal 1), the co-stimulatory molecules CD80, CD86, and ICAM-1 (signal 2), and cytokines such as IL-12, IL-4, and IL-10 (signal 3) [ 5 ]. Engagement between DCs and T cells not only stimulates T-cell proliferation, but also polarizes differentiation of naïve T helper (Th) cells into IFN-γ-producing Th1 or IL-4-producing Th2 effector cells [ 6 , 7 ]. Production of IL-12 by DCs early in an immune response is critical for polarization of CD4 + T cells toward Th1 function, which is essential for the clearance of intracellular pathogens. IL-10, on the other hand, suppresses IL-12 production from DCs and diminishes the commitment of Th1 differentiation. Besides cytokine signaling, there is accumulating evidence that co-stimulatory molecules and adhesion molecules such as CD80, CD86, and ICAM-1 not only engage in T-cell stimulation, but also direct the differentiation of naive T cells [ 8 - 10 ]. 
Efficient gene transfer into DCs without cytotoxicity has always been difficult [ 11 , 12 ]. LVs transduce DCs at high efficiencies with little to no cytotoxicity, and the transduced DCs retain their immature phenotype, are able to respond to maturation signals, and maintain immunostimulatory potential in both autologous and allogeneic settings [ 13 - 16 ]. In this study, we carefully analyzed the cellular response to LV transduction by evaluating changes in DC phenotypes using monocyte-derived DCs prepared from more than 40 blood donors. We investigated the ability of DCs to polarize naïve T cells into Th effectors after LV infection. Our results demonstrated altered DC functions after LV gene transfer. Most importantly, we illustrated effective modulation of DC immunity by LV expression of different cytokines or siRNA molecules. Materials and Methods Generation of monocyte-derived dendritic cells Peripheral blood mononuclear cells (PBMCs) from healthy donors (Civitan Blood Center, Gainesville, FL) were isolated from buffy coats by gradient density centrifugation in Ficoll-Hypaque (Sigma-Aldrich, St. Louis, MO) as previously described [ 17 ]. DCs were prepared according to the method of Thurner et al. [ 18 ], with the following modifications: On Day 0, five million PBMCs per well were seeded into twelve-well culture plates with serum-free AIM-V medium (Invitrogen Corp., Carlsbad, CA). The PBMCs were incubated at 37°C for 1 hr and the non-adherent cells were gently washed off; the remaining adherent monocytic cells were further cultured in AIM-V medium until Day 1. The culture medium was removed with care not to disturb the loosely adherent cells, and 1 ml per well of new AIM-V medium containing 560 u/ml of recombinant human GM-CSF (Research Diagnostic Inc., Flanders, NJ) and 25 ng/ml of IL-4 (R&D Systems, Minneapolis, MN) was added and the cells were cultured at 37°C and 5% CO 2 . 
On Day 3, 1 ml of fresh AIM-V medium containing 560 u/ml of GM-CSF and 25 ng/ml of IL-4 was added to the culture. On Day 5, the non-adherent cells were harvested by gentle pipetting. After washing, the DCs were frozen for later use or used immediately. Lentiviral vector construction and preparation MLV and LVs were constructed as described previously [ 19 , 20 ]. The self-inactivating pTYF vectors expressing CD80, CD86, GM-CSF, and IL-12 genes under the EF1α promoter control were constructed by inserting cDNAs that have been previously functionally characterized [ 21 - 23 ]. The cDNA of ICAM-1 was derived from pGEM-T-ICAM-1 kindly provided by Dr. Eric Long. The cDNAs of Flt3L, CD40L, and IL-7 were amplified by RT-PCR using the primers listed below with a modified eukaryotic translation initiation codon (CCACC-AUG): Flt3L sense 5'-TTT CTA GAC CAC CAT GAC AGT GCT GGC GCC AG-3' and antisense 5'-AAG GAT CCT CAG TGC TCC ACA AGC AG-3'; CD40L sense 5'-TTT CTA GAC CAC CAT GAT CGA AAC ATA CAA C-3' and antisense 5'-TTG AAT TCT TAT GTT CAG AGT TTG AGT AAG CC-3'; IL-7 sense 5'-AAG CGG CCG CCA CCA TGT TCC ATG TTT CTT-3' and antisense 5'-TTC TCG AGT TAT CAG TGT TCT TTA GTG CCC ATC-3'. The LVs were produced and concentrated as described previously [ 20 ]. Lentiviral siRNA vectors were generated as previously described, using four oligonucleotides. IL-10i#1: sense 5'-GAT CCC CAG CCA TGA GTG AGT TTG ACT TCA AGA GAG TCA AAC TCA CTC ATG GCT TTT TTG GAA A-3' and antisense 5'-AGC TTT TCC AAA AAA GCC ATG AGT GAG TTT GAC TCT CTT GAA GTC AAA CTC ACT CAT GGC TGG G-3'; IL-10i#2: sense 5'-GAT CCC CGG GTT ACC TGG GTT GCC AAT TCA AGA GAT TGG CAA CCC AGG TAA CCC TTT TTG GAA A-3' and antisense 5'-AGC TTT TCC AAA AAG GGT TAC CTG GGT TGC CAA TCT CTT GAA TTG GCA ACC CAG GTA ACC CGG G-3' [ 24 ]. 
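The siRNA oligos above follow the standard hairpin layout: a 19-nt sense arm matching the IL-10 target, a TTCAAGAGA loop, the reverse-complement antisense arm, and a poly-T terminator, flanked by fixed cloning ends. A minimal Python sketch (illustrative only, not part of the original protocol) that checks this structure for the IL-10i#1 sense oligo:

```python
# Sanity-check of the IL-10i#1 hairpin oligo listed above.
# The sense arm, loop, and full oligo sequence are taken from the text;
# the helper function itself is purely illustrative.

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

sense_arm = "AGCCATGAGTGAGTTTGAC"   # 19-nt IL-10 target, site #1
loop = "TTCAAGAGA"                  # standard hairpin loop
hairpin_core = sense_arm + loop + revcomp(sense_arm)

# Full IL-10i#1 sense oligo from the Methods, cloning ends included:
oligo = ("GATCCCCAGCCATGAGTGAGTTTGACTTCAAGAGA"
         "GTCAAACTCACTCATGGCTTTTTTGGAAA")

assert hairpin_core in oligo  # the two arms are exact reverse complements
```

The same check applied to IL-10i#2 would confirm its arms are complementary as well.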
Lentiviral transduction of immature DCs and DC maturation We plated Day-5 immature DCs at 5 × 10 5 per well in a 24-well plate containing 200 μl of medium supplemented with GM-CSF (560 u/ml) and IL-4 (25 ng/ml). DC infection was carried out by adding concentrated LVs to the cells at a multiplicity of infection (MOI) of 50–100 (~10 5 –10 6 transducing units/ng of p24) as previously described [ 25 ]. The infected cells were incubated at 37°C for 2 hr with gentle shaking every 30 min, then 1 ml of DC medium was added and the culture was incubated with the viral vectors for an additional 12 hr. DC maturation was induced by adding lipopolysaccharide (LPS) at a final concentration of 80 ng/ml and TNF-α at a final concentration of 20 u/ml, and the cells were incubated for 24 hr. To collect mature DCs, the cells were treated with AIM-V medium containing 2 mM EDTA at 37°C for 20 min, and washed three times with PBS. Antibody staining and flow cytometry For analysis of cell-surface marker expression by flow cytometry, we incubated DCs for 10 min with normal mouse serum and then 30 min with fluorochrome-conjugated anti-human monoclonal antibodies. In different experiments, these antibodies included HLA-ABC (Tu149, mouse IgG2a, FITC-labeled, Caltag Laboratories, Burlingame, CA); HLA-DR (TU36, mouse IgG2b, FITC-labeled, Caltag Laboratories); CD1a (HI49, mouse IgG1k, APC-labeled, Becton Dickinson Pharmingen, San Diego, CA); CD80 (L307.4, mouse IgG1k, Cychrome-labeled, Becton Dickinson); CD86 (RMMP-2, rat IgG2a, FITC-labeled, Caltag Laboratories); ICAM-1 (15.2, FITC-labeled, Calbiochem); DC-SIGN (eB-h209, rat IgG2a, APC-labeled, eBioscience, San Diego, CA); CD11c (Bly-6, mouse IgG1, PE-labeled, BD Pharmingen); CD40 (5C3, mouse IgG1, Cy-chrome-labeled, Becton Dickinson); CD123 (mouse IgG1, PE-labeled, BD Pharmingen); and CD83 (HB15e, mouse IgG1, R-PE-labeled, Becton Dickinson). We included the corresponding isotype control antibody in each staining condition. 
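For an infection like the one above, the stock volume needed follows directly from the cell number, the target MOI, and the vector titer. A quick back-of-envelope helper (the 1e8 TU/ml titer below is hypothetical, chosen only to illustrate the arithmetic; cell number and MOI come from the text):

```python
# Plan an LV infection at a target MOI.
# 5e5 DCs/well and MOI 50 are from the Methods above;
# the 1e8 TU/ml titer is an assumed concentrated-stock value.

def virus_volume_ul(n_cells, moi, titer_tu_per_ml):
    """Volume of viral stock (µl) delivering `moi` transducing units per cell."""
    tu_needed = n_cells * moi                     # total TU required
    return tu_needed / titer_tu_per_ml * 1000.0   # ml -> µl

vol = virus_volume_ul(5 * 10**5, 50, 1e8)
# 2.5e7 TU / 1e8 TU/ml = 0.25 ml, i.e. 250 µl of stock per well
```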
After two washes, the cells were resuspended and fixed in 1% paraformaldehyde in PBS and analyzed using a FACSCalibur flow cytometer and the CELLQUEST program (Becton Dickinson). The live cells were gated by forward- and side-light scatter characteristics and the percentage of positive cells and the mean fluorescence intensity (MFI) of the population were determined. FACS sort of lacZ-positive cells The lentiviral siRNA vector-transduced cells co-expressing the nuclear lacZ gene were separated from un-transduced cells by staining with a fluorescent LacZ substrate and sorted by FACS. To label the lacZ-positive cells, we resuspended cells in 100 μl of medium and added 100 μl of FDG (fluorescein di-beta-D-galactopyranoside) working solution (2 mM), which was diluted from a 10 × stock FDG solution (20 mM). The stock solution was made by dissolving 5 mg FDG (MW 657, Molecular Probes, Eugene, OR) in a 1:1 mixture of DMSO/ethanol and mixing with ice-cold ddH 2 O to make an 8:1:1 ddH 2 O/DMSO/ethanol solution. The cells were incubated in a 37°C water bath for 1–1.5 min, then diluted with a 10-fold volume of cold medium and kept on ice until FACS sorting. Preparation of naïve CD4+ T cells The CD4 + T cells from PBMCs were collected by negative selection, using a CD4 + T cell isolation Rosette cocktail (StemCell Technologies, Vancouver, BC) according to the manufacturer's instructions. Briefly, we centrifuged 45 ml of buffy coat (approximately 5 × 10 8 PBMCs) in a sterile 200-ml centrifuge tube with 2.25 ml of the CD4 + T cell-enrichment Rosette cocktail at 25°C for 25 min. Thereafter, 45 ml of PBS containing 2% FBS was added to dilute the buffy coat. After gentle mixing, we layered 30 ml of the diluted buffy coat on top of 15 ml of Ficoll Hypaque in a 50-ml centrifuge tube and centrifuged for 25 min at 1,200 g . Non-rosetting cells were harvested at the Ficoll interface and washed twice with PBS (2% FBS), counted, and cryopreserved in aliquots in liquid nitrogen for future use. 
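The FDG solution arithmetic above can be double-checked: 5 mg of FDG (MW 657) at 20 mM fixes the stock volume, and the working solution is a simple 10× dilution. A small sketch of that calculation (illustrative only; all input values are from the text):

```python
# Sanity-check the FDG stock described above: 5 mg, MW 657, 20 mM stock,
# diluted 10x to a 2 mM working solution.

def stock_volume_ul(mass_mg, mw_g_per_mol, conc_mM):
    """Solvent volume (µl) needed to dissolve `mass_mg` to `conc_mM`."""
    moles = mass_mg / 1000.0 / mw_g_per_mol   # mg -> g -> mol
    litres = moles / (conc_mM / 1000.0)       # C = n / V
    return litres * 1e6                       # L -> µl

stock_ul = stock_volume_ul(5, 657, 20)   # ~380 µl of 8:1:1 ddH2O/DMSO/ethanol
dilution_factor = 20 / 2                 # stock (20 mM) -> working (2 mM)
```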
The purity of the isolated CD4 + T cells was consistently above 95%. The CD4 + CD45RA naïve T cells were purified based on negative selection of CD45RO - cells using the MACS (Miltenyi Biotec, Auburn, CA) magnetic affinity column according to the manufacturer's instructions. In vitro induction of Th functions and intracellular cytokine staining The in vitro DC:T cell co-culture method was modified based on Caron et al. [ 26 ]. Briefly, we co-cultured purified naïve CD4 T cells with allogeneic mature DCs at different ratios (20:1 to 10:1) in serum-free AIM-V medium. On day 5, 50 u/ml of rhIL-2 was added and the culture was expanded and replenished with fresh AIM-V medium containing rhIL-2 every other day for up to 3 weeks. After day 12, we washed the quiescent Th cells and re-stimulated them with PMA (10 ng/ml or 0.0162 μM) and ionomycin (1 μg/ml, Sigma-Aldrich) for 5 hr, adding Brefeldin A (1.5 μg/ml) during the last 2.5 hr of culture. We then fixed, permeabilized, and stained the cells with FITC-labeled anti-IFN-γ and PE-labeled anti-IL-4 mAb (Pharmingen, San Diego, CA). The cells were analyzed in a FACSCalibur flow cytometer (BD Biosciences, San Diego, CA). DC-mediated mixed lymphocyte reaction We co-cultured serial dilutions of DCs, from 10,000 cells per well to 313 cells per well, with 1 × 10 5 allogeneic CD4 T cells in a 96-well U-bottomed plate in a total volume of 200 μl for 5 days. The proliferation of T cells was monitored by adding 20 μl of the CellTiter96 solution to each well according to the manufacturer's instructions (Promega). The cells were further cultured for 4 hr before reading the OD 490 value using a microplate reader (EL808, BIO-TEK Instrument Inc., Winooski, VT). Results LVs altered surface marker expression in peripheral blood monocyte-derived DCs To investigate the effects of lentiviral vectors (LVs) on DCs, we transduced monocyte-derived DCs with LVs encoding different reporter genes. 
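The MLR titration above (10,000 down to 313 DCs per well) is a two-fold serial dilution across six wells; 313 is the rounded value of 10,000/32. A quick reconstruction (illustrative only):

```python
# Reconstruct the two-fold DC dilution series used in the MLR above.

def twofold_series(start, steps):
    """Cell numbers for `steps` wells of a two-fold serial dilution."""
    return [start // (2 ** i) for i in range(steps)]

wells = twofold_series(10000, 6)
# [10000, 5000, 2500, 1250, 625, 312] -- the last well is the paper's ~313
```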
The efficiency of LV transduction of DCs is illustrated by a reporter gene assay (Fig. 1A ). The DCs were derived from healthy donors' PBMCs, and on day 5 (d5) of culture, the immature DCs (imDC) were infected with LV-PLAP (encoding placenta alkaline phosphatase). Analysis of PLAP activity on day 7 demonstrated a transduction efficiency of > 80% (Fig. 1B ). Figure 1 LV transduction of DCs and analysis of surface marker expression. (A) PBMC-derived DCs were infected with LV on Day 5 (d5) and, after maturation, co-cultured with naïve CD4 T cells for 1–2 weeks before intracellular cytokine staining (ICCS) and flow cytometry (FACS) analysis. The d5 DCs were transduced by LV-PLAP and, 48 hr later, analyzed by PLAP enzyme assay. (B) FACS analysis of DC surface markers after viral transduction. The d5 DCs were transduced with LV carrying PLAP or Cre as reporter gene, MLV vectors, empty LV, or no vector controls (mock) and treated with TNF-α plus LPS. The cell surface markers were stained with antibodies and analyzed by FACS. The numbers represent mean fluorescence intensity (MFI) and the results are representative of six experiments. DC functions through surface receptor signaling To see if LVs affected DC surface marker expression, we examined the expression profile of surface molecules on DCs by antibody immunostaining. We transduced PBMC-derived imDC with mock (control 293 supernatants) vectors, empty LV particles, LV, and MLV carrying a reporter gene. After induction of maturation with LPS plus TNF-α, we harvested the DCs for antibody staining and FACS. The results are shown in Fig. 1C and summarized in Table 1 . Among the surface molecules tested, CD1a, CD80, CD86, ICAM-1, and DC-SIGN were down-regulated after LV transduction, but not after transduction with empty LV or MLV. The same result was obtained using different preparations of LVs carrying either PLAP or Cre as the reporter gene. Table 1 Surface marker profile of DCs transduced with LVs or MLV vectors. 
Geometrical Mean Fluorescence ± SD (Mock / Empty LV / LV / MLV):
CD11c: 48.8 ± 3.2 / 47.2 ± 1.3 / 52.3 ± 2.3 / 55.3 ± 1.1
CD123: 13.0 ± 0.4 / 13.4 ± 0.8 / 14.9 ± 0.6 / 15.7 ± 0.1
CD1a: 27.3 ± 1.1 / 27.6 ± 2.9 / 21.5 ± 0.2* / 31.0 ± 0.3
CD40: 8.6 ± 0.1 / 8.9 ± 0.6 / 8.6 ± 0.1 / 9.0 ± 0.3
ICAM-1: 462.6 ± 57.5 / 376.5 ± 30.1 / 179.5 ± 3.4*** / 498.5 ± 6.9
CD62L: 3.3 ± 0.1 / 3.2 ± 0.03 / 3.7 ± 0.1 / 3.3 ± 0.4
CD80 (B7-1): 9.9 ± 0.9 / 10.6 ± 0.7 / 9.3 ± 0.2* / 11.3 ± 0.4
CD83: 5.8 ± 0.3 / 5.8 ± 0.1 / 6.4 ± 0.01 / 6.0 ± 0.3
CD86 (B7-2): 39.6 ± 3.5 / 39.6 ± 2.5 / 31.4 ± 0.4* / 47.3 ± 1.5
DC-SIGN: 62.7 ± 4.5 / 55.7 ± 0.4 / 50.6 ± 1.5* / 68.6 ± 4.1
HLA-ABC: 13.9 ± 1.3 / 15.8 ± 1.0 / 14.6 ± 0.3 / 17.2 ± 0.9
HLA-DR: 31.5 ± 0.8 / 28.6 ± 2.2 / 26.9 ± 0.4 / 33.2 ± 1.7
Results are presented as geometrical mean fluorescence after FACS. Asterisks (*) denote significance of difference by Student t-test (*P < 0.05, **P < 0.01, ***P < 0.001). LV transduction impaired DC-mediated Th1 immunity It has been reported that retroviral infection induces up-regulation of Th2 cytokines including IL-10 and impairs DC maturation [ 27 , 28 ]. Because HIV causes immune suppression and the preceding results showed that LV infection altered the surface marker expression profile of DCs, we suspected that LV infection might also affect DC activation of T cells. To test this, we set up an in vitro immunity assay using co-culture of human DCs and naïve T cells. We generated DCs from PBMCs and infected the d5 DCs with LV carrying a reporter gene. To characterize the function of DCs, we purified naïve CD4 + T cells from healthy donors' blood and co-cultured the T cells with allogeneic monocyte-derived DCs treated with TNF-α and LPS to induce maturation, as illustrated in Fig. 2 . The co-cultured T cells were allowed to expand and rest for more than 7 days after DC priming. To analyze Th response, on days 7 and 9 we reactivated the resting T cells with ionomycin and PMA, and subjected the T cells to ICCS using antibodies against IFN-γ and IL-4. 
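The comparisons in Table 1 can be approximately reproduced from the summary statistics alone. As one hedged sketch: a Welch's t statistic computed from mean ± SD, assuming n = 3 replicates per condition (the paper reports a Student t-test but does not state the replicate number, so both the test variant and the sample size here are assumptions):

```python
import math

# Welch's t statistic from summary statistics (mean, SD, n).
# The ICAM-1 values are from Table 1; n = 3 is an assumed replicate count.

def welch_t(m1, s1, n1, m2, s2, n2):
    """Two-sample t statistic with unequal variances (Welch)."""
    return (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

# Mock (462.6 ± 57.5) vs LV (179.5 ± 3.4) for ICAM-1:
t = welch_t(462.6, 57.5, 3, 179.5, 3.4, 3)
# t is large (~8.5), in line with the *** (P < 0.001) flag in the table
```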
We found that the IFN-γ-producing Th1 cell populations were dramatically reduced when incubated with DCs transduced with LVs, from 72% (day 7) and 75% (day 9) for the control to 27% (day 7) and 22% (day 9) for the LV-transduced DCs. The Th2 populations remained essentially unchanged (Fig. 2 ). In naïve T cells the Th1 response is regulated by the "master transcription regulators" T-bet and GATA-3 [ 29 ]. Analysis of T-bet and GATA-3 expression in T cells after coculture with LV-transduced DCs showed decreased expression of both T-bet and GATA-3 RNA, and the relative T-bet expression correlated with the Th differentiation according to ICCS of T cells after 8 days of co-culture (data not shown). Figure 2 Impaired Th1 response induced by LV-transduced DCs. We analyzed T helper function by using DC:T cell co-culture and IL-4 and IFN-γ ICCS. Immature DCs were infected with mock (293T supernatants) or LV on d5 and treated with LPS and TNF-α. The DCs were harvested and co-cultured with naïve CD4+ T cells at a DC:T cell ratio of 1:10. On Days 7 and 9 after co-culture, the cells were re-stimulated and the T helper cell populations were examined by IFN-γ and IL-4 antibody ICCS as described in the Materials and Methods. The percentages of cell populations are indicated in the FACS quadrants. The results are representative of four independent experiments. Up-regulation of CD80 and CD86 expression did not restore DC functions Because T cell co-stimulatory molecules are important mediators of DC functions, the down-regulation of CD80 and CD86 in DCs after LV transduction might contribute to the observed Th1 impairment. To examine this possibility, CD80 and CD86 were up-regulated in DCs using LVs encoding these two genes to see if the impaired Th1 response could be corrected. The LVs encoding human CD80 and CD86 were constructed as shown in Fig. 3A . The functions of these CD80 and CD86 genes have been previously demonstrated in an in vivo study [ 21 ]. 
DCs were transduced with LVs expressing a reporter, CD80, or CD86 gene, and then treated with LPS and TNF-α 12 hr later. Thirty-six hours after LV transduction, we analyzed the transduced DCs for CD80 and CD86 expression by FACS, using anti-CD80 and anti-CD86 antibodies. The results were consistent with our earlier findings: CD80 expression was reduced from 41% to 35% after LV-PLAP infection, while CD86 expression was reduced from 61% to 49% (Fig. 3B ). Their expression was up-regulated after transduction with LVs encoding CD80 and CD86; the expression of CD80 was up-regulated from 35% to 44%, and the expression of CD86 from 49% to 76%. Figure 3 LV modification of DC immune functions. (A) Diagram of LV constructs containing different immune modulatory genes. (B) Up-regulation of T cell costimulators in DCs transduced with LV-CD80 or LV-CD86. Immature DCs were transduced with mock, LV-PLAP, LV-CD80, or LV-CD86 for 12 hr, induced to mature, and analyzed 24 hr later using anti-CD80 and anti-CD86 antibodies. The mean fluorescence intensity and percentage of positive cells are shown. (C) Th1/Th2 assay of DCs with up-regulated CD80 or CD86. The T-cell activation function of DCs was analyzed by DC:T cell co-culture. ICCS and FACS for T helper function were performed 8 days after co-culture. The percentages of different T-cell populations are shown. (D) Th1/Th2 assay of DCs co-transduced with different LV immune modulatory genes. DCs were transduced with LV (LV-PLAP), and co-transduced with LVs encoding different immune modulatory genes, including IL-12, CD40L, IFN-γ, FL, GM-CSF, and ICAM-1, or incubated with soluble IFN-γ. DCs were then treated with TNF-α and LPS and co-cultured with naïve CD4 T cells. The T cells were analyzed for IL-4 and IFN-γ expression by ICCS and FACS 9 days after co-culture. The percentages of different T cell populations are shown in the quadrants. The results are representative of six independent experiments. 
To see if the up-regulation of the T-cell co-stimulatory molecules in DCs could restore the Th1 response, we co-cultured naïve CD4 T cells with DCs transduced with mock, LV-PLAP, LV-PLAP plus LV-CD80, or LV-PLAP plus LV-CD86. After 8 days, the T cells were reactivated and analyzed by ICCS and FACS using anti-IL-4 and anti-IFN-γ antibodies as described earlier. We found that after LV transduction the Th1 population was reduced from 24% to 13%. Moreover, this impairment could not be corrected by the up-regulation of CD80 and CD86 in DCs (from 13% to 12% and 13%, respectively, Fig. 3C ). Modification of DC immunity by LVs encoding immune modulatory genes Cytokine signaling is important in DC-mediated Th differentiation; for example, IL-12 is critical to Th1 development, and Flt3-ligand (FL) has been shown to enhance IL-12 production in DCs [ 30 ]. To overcome the impaired Th1 response after LV transduction, we investigated whether modification of the local cytokine environment in the DC:T cell synapse could promote a Th1 response. LVs expressing different cytokine and receptor genes, including FL, GM-CSF, IL-12 (a bi-cistronic IL-12A and IL-12B construct), CD40L, IFN-γ, and ICAM-1 were constructed (Fig. 3A ). Expression or function of these different immune modulatory genes has been previously demonstrated [ 21 - 23 ]. DCs were transduced with LVs carrying the reporter gene PLAP either alone or co-transduced with different immune modulatory genes. As a positive control, we treated DCs with soluble IFN-γ before maturation and DC:T-cell co-culture. The Th function of the LV-transduced DCs was analyzed by DC:T cell co-culture followed by ICCS and FACS analysis of IFN-γ and IL-4, as described earlier. The results showed that LV transduction alone reduced the IFN-γ-producing Th1 cell population, as found above, from 8.16% to 3.46%. 
However, co-transduction with LV encoding IL-12 enhanced the Th1 response from 3.46% to 9.38%, while co-transduction with LV encoding IFN-γ increased the response from 3.46% to 13.08%, an increase similar to that produced by soluble IFN-γ (Fig. 3D ). LVs expressing FL, GM-CSF, CD40L, or ICAM-1, on the other hand, exhibited no significant effect. Modulation of DC function by LVs expressing siRNA targeting IL-10 IL-10 is a critical immune modulatory gene and modulation of IL-10 gene expression may alter DC function. To test this, we constructed LVs encoding siRNA targeting IL-10. We chose two regions in the IL-10 mRNA as the siRNA target sites (Fig. 4A ). The siRNA expression was driven by a human H1 polIII promoter that was cloned into LVs as previously reported [ 24 ]. The LV-siRNA vector also carries a nlacZ reporter gene, convenient for vector titer determination and for the identification of transduced cells. To demonstrate the siRNA effects, we transduced B cells with IL-10-siRNA LVs or a control siRNA LV targeting the GFP gene, and after transduction, the B cells were expanded and the lacZ-positive cells were FACS-sorted using the fluorescent substrate FDG. The expression of IL-10 was quantified by ICCS and FACS using anti-IL-10 Ab. The result demonstrated IL-10 suppression in the lacZ-positive B cells that were transduced with LVs expressing the two IL-10-specific siRNAs but not the non-specific siRNA targeting the GFP gene (Fig. 4B ). Figure 4 Modification of DCs by LV-siRNA targeting IL-10. (A) LV-siRNA targeting IL-10. LV siRNAs targeting two different sites of the IL-10 mRNA are illustrated. The predicted hairpin siRNA structure is shown. (B) Illustration of efficient down-regulation of IL-10 in B lymphocytes after LV IL-10 siRNA transduction. Epstein-Barr virus (EBV)-transformed B cells were transduced with LV siRNA targeting IL-10 (#1 and #2) or the GFP gene. 
The siRNA LVs also carry a lacZ reporter gene which could be labeled with fluorescein di-β-D-galactopyranoside (FDG) to separate the transduced from un-transduced cells by FACS sorting. (C) Immature DCs were transduced with mock, empty LVs, LV-nlacZ, or LV-nlacZ plus LV-siIL-10 #1 or #2, treated with LPS, and analyzed by ICCS and FACS using anti-IL-10 antibody. (D) Enhanced Th1 response by DCs transduced with LV-siRNA targeting IL-10. DCs were transduced with LVs and either co-transduced with LV-siRNA targeting IL-10 (LV-IL10i#2) or GFP (GFPi) or treated with soluble IFN-γ as controls, and the DCs were then assayed for T-cell activation function by DC:T cell co-culture. The T cells were fully rested before reactivation with PMA and ionomycin after 10 days of co-culture. The numbers shown in the FACS quadrants are percentages of the total gated cell population. Results are representative of three independent experiments. The effect of the IL-10 LV siRNAs was then examined in DCs by co-transduction using a reporter LV and the IL-10 LV-siRNAs. The transduced DCs were then treated with LPS and analyzed for IL-10 expression as described above. Again, the empty LV had no effect and LV transduction alone up-regulated IL-10 expression. However, co-transduction with LV-siRNA targeting IL-10 down-regulated IL-10 expression (Fig. 4C ); the low level of IL-10 expression in DCs was expected, as the DC culture was derived and maintained in GM-CSF- and IL-4-supplemented medium. To examine whether co-transduction of DCs with LVs expressing the IL-10 siRNA could promote a Th1 response, we transduced DCs with LV alone or together with either an LV-siRNA (#2) or a control LV-siRNA (GFPi). As a positive control, we incubated DCs with soluble IFN-γ as previously described. After the DCs were co-cultured with naïve T cells for 10 days, the T cells were reactivated and analyzed for Th functions by ICCS to determine intracellular expression of IFN-γ and IL-4. 
The results clearly demonstrated that the IL-10 LV-siRNA vector, but not the GFPi LV-siRNA vector, enhanced the Th1 response at levels comparable to that of the positive control (DCs treated with soluble IFN-γ, Fig. 4D ). Discussion Although HIV-1 is an immunopathogen in humans, HIV-1-derived vectors do not contain viral genes and have been rendered replication-defective. In this study, we found that LV transduction of DCs resulted in altered DC surface marker phenotypes. These changes in DC phenotypes led to suppressed function in mediating Th1 immunity. DCs transduced by LVs did not lose the capacity to stimulate allogeneic T-cell proliferation, as reported by others [ 13 , 14 , 16 ]. However, in the DC:T cell co-culture functional assay, we showed that after LV transduction, DCs had significantly reduced ability to polarize naïve CD4+ T cells to differentiate into Th1 effectors. The changed gene-expression profile of DCs after LV transduction correlates with Th1 suppression. As demonstrated here, DC-mediated immunity requires antigen presentation, T cell co-stimulation, and cytokine production, all of which were down-regulated upon LV infection. These results are consistent with a recent study demonstrating that cultured immature DCs and DCs from 6 of 10 HIV-1 patients display reduced maturation function and diminished MLR in DC:T cell coculture [ 28 ]. Cytokines have critical roles in shaping the immune response [ 31 , 32 ]. We have detected up-regulation of IL-10 in HUVECs, B cells, and DCs after LV infection, suggesting possible immune suppression by LVs (data not shown). Earlier work has shown that IL-10 inhibits the expression of IL-12 and co-stimulatory molecules in DCs [ 32 ], a finding that correlates with its ability to inhibit the primary T-cell response and induce a state of anergy in allo- or peptide-antigen-activated T cells [ 33 ]. IL-10 has also been shown to down-regulate ICAM-1 in human melanoma cells [ 34 ]. 
Here we showed that LV transduction of DCs led to down-regulation of CD80, CD86, and ICAM-1. Many of these immune regulatory genes are activated through the transcription factor NF-κB. Using cDNA microarray analysis, we detected reduced NF-κB expression in DCs after LV infection (not shown), suggesting that LV infection may trigger a cascade of immune suppression through down-regulation of the NF-κB signaling pathway. It has been reported that HIV-1 Tat up-regulates IL-10 as a result of intranuclear translocation of NF-κB and activation of the protein kinases ERK1 and ERK2 [ 35 ]. However, the LVs used in this study do not carry a tat gene. The fact that empty LV particles did not induce the same effects as did intact LVs suggests that Tat or other virion-associated proteins do not play a role. Thus, it is plausible that events after retroviral attachment and fusion, such as reverse transcription and integration, might trigger the observed cellular response. It would be interesting to see if such immune suppression also occurs in vivo following LV gene transfer. DCs, during their interaction with T cells, provide multiple signals to polarize naïve T cells. These signals include the co-stimulatory molecules CD80, CD86, and ICAM-1, which are considered "signal 2" for T-cell stimulation. The roles of these co-stimulatory molecules in Th differentiation remain controversial. Many studies have shown that ICAM-1 promotes Th1 commitment [ 36 ]. CD80 and CD86 have been reported to polarize CD4 + T cells toward the Th2 subset through engagement with CD28 [ 37 - 39 ]. However, CD80 could also interact with CTLA-4 to induce Th1 polarization [ 40 ]. Moreover, CD86 has been reported to be a Th1-driving factor [ 41 ]. Further studies are needed to address the roles of co-stimulatory molecules in the development of DC and T-cell immunity. 
Nevertheless, the down-regulation of T cell co-stimulatory molecules in DCs after LV transduction could potentially have an impact on the DC-mediated Th1 response. The analysis of surface-marker expression profile also revealed down-regulation of CD1a and DC-SIGN in DCs after LV transduction. CD1a is a nonpolymorphic histocompatibility antigen associated, like MHC class I molecules, with beta-2-microglobulin, and is responsible for the presentation of lipid antigens. DC-SIGN (DC-specific, ICAM-3 grabbing nonintegrin) is a 44-kDa type I membrane protein with an external mannose-binding, C-type lectin domain [ 42 ]. It has been postulated that DC-SIGN interacts with ICAM-3 on T cells to allow sufficient DC-T cell adhesion and, in addition, that DC-SIGN is a new member of the co-stimulatory molecule family [ 5 , 43 ]. With these characteristics, the down-regulation of CD1a and DC-SIGN might also contribute to the impaired Th1 function of DCs. Polarization of naïve Th cells into Th1 cells is critical for the induction of cellular immunity against intracellular pathogens and cancer cells. The observed impairment of the Th1 response by LV-transduced DCs raises a potential issue with LV-based immunotherapy. We illustrated that co-transduction with LV encoding IL-12 or IFN-γ, but not CD80, CD86, or ICAM-1, in DCs effectively restored Th1 immunity. In addition, co-transduction with LVs expressing small interfering RNA targeting IL-10 could also promote DC-mediated Th1 immunity. In a step toward future generation of vaccines, LVs encoding IL-12 and IL-10-siRNA as potent Th1 adjuvants may be used to enhance the cellular immune response in the prime-and-boost vaccination regimen. In summary, our study has addressed an important immune suppression effect of LVs and presented a solution that is important for future LV-based DC immunotherapy applications. 
List of abbreviations used HIV-1: human immunodeficiency virus type 1; LV: lentiviral vector; DC: dendritic cell; Th: T helper; MLV: murine leukemia virus; RT-PCR: reverse transcription-polymerase chain reaction; FACS: fluorescence-activated cell sorter; ICCS: intracellular cytokine staining; IL: interleukin; PLAP: placenta alkaline phosphatase; siRNA: small interfering RNA; PBMC: peripheral blood mononuclear cells; MOI: multiplicity of infection; LPS: lipopolysaccharide; TNF-α: tumor necrosis factor alpha; IFN-γ: interferon-gamma. Competing interests The author(s) declare that they have no competing interests. Authors' contributions The study was conceived by LJC; XC and LJC participated in designing and coordinating the study; JH carried out some of the lentiviral constructions and siRNA designs and participated in result discussion; XC performed the statistical analysis; LJC and XC carried out detailed analysis of the results; XC drafted and LJC finalized the manuscript. All authors read and approved the final manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC534092.xml |
547915 | Brainstem levels of transcription factor AP-2 in rat are changed after treatment with phenelzine, but not with citalopram | Background Before a therapeutic effect is obtained after treatment with antidepressant drugs, such as serotonin selective reuptake inhibitors (SSRIs), tricyclic antidepressants (TCAs), and monoamine oxidase inhibitors (MAO-Is), there is an initial lag-period of a few weeks. Neuronal adaptations on a molecular level are supposed to be involved in the initiation of the antidepressant effect. Transcription factor AP-2 is essential for neuronal development and many genes involved in the brainstem monoaminergic systems have binding sites for AP-2 in their regulatory regions. The genotype of the AP-2β isoform has been associated with e.g. anxiety-related personality traits and with platelet MAO activity. In addition, previous studies have shown that the levels of AP-2α and AP-2β in rat whole brain were decreased after 10 days of treatment with citalopram (SSRI) and imipramine (TCA), and were increased with phenelzine (MAO-I). Results In the present study, we report that treatment with citalopram for 1, 7 or 21 days had no effect on the AP-2 levels in rat brainstem. However, after treatment with phenelzine for 1, 7 or 21 days, the levels of AP-2α and AP-2β had increased after 7 days, but had returned to control levels at day 21. Conclusion The decrease in AP-2 levels in rat whole brain previously seen after treatment with citalopram does not seem to be localised to the brainstem; it may rather occur in the monoaminergic terminal projection areas. The present data suggest that the increase in AP-2 levels previously seen in rat whole brain after subchronic treatment with phenelzine is located in the brainstem. It cannot, however, be excluded that other brain regions are involved. | Background There is a large number of pharmacological strategies for treating serotonin-related disorders like depression and anxiety. 
Well-known antidepressant drug effects are blockade of the serotonin (5-HT) and/or norepinephrine (NE) reuptake pumps, direct effects on the 5-HT and/or NE receptors, and inhibition of the monoamine oxidase A (MAO-A) enzyme. Irrespective of which drug or combination of drugs is used, there is a delay of a few weeks before any therapeutic effect is noticed. This initial lag-period is usually associated with several side-effects, many of which fade away with the appearance of the therapeutic effect. Considering the lag-period needed for the antidepressant effect to emerge, it has been suggested that the antidepressant mechanisms involve secondary molecular neuronal adaptations, rather than being a result of the primary actions of the drugs [ 1 , 2 ]. Some attempts to elucidate the molecular mechanisms involved in this lag-period have been reported; for example, both SSRIs and MAO-Is cause desensitization of somatodendritic 5-HT 1A autoreceptors [ 3 , 4 ]. Furthermore, SSRIs have also, during the initial lag-period, been shown to cause desensitization of terminal 5-HT 1B/1D autoreceptors [ 5 , 6 ]. Gaining more knowledge about molecular mechanisms involved in the initial lag-period may prove important for the discovery of new antidepressant drug targets and also for understanding the mechanisms of action of antidepressants. Transcription factors, with their specific ability to regulate gene expression, have been suggested as prominent novel drug targets [ 7 - 11 ]. Transcription factor AP-2 is a critical regulatory factor for neuronal gene expression and neuronal development, e.g., in the brainstem [ 12 - 14 ]. Five different AP-2 genes have been identified, i.e., AP-2α, AP-2β, AP-2γ, AP-2δ and AP-2ε [ 15 - 19 ]. The isoforms are expressed from different genes and have a molecular weight of around 50 kDa.
The cis-acting DNA sequences 5'-(G/C)CCCA(G/C)(G/C)(G/C)-3' and the palindromic sequence 5'-GCCNNNGGC-3' are considered as consensus AP-2 binding sites for all AP-2 proteins [ 20 ]. Several genes encoding proteins involved in the brainstem monoaminergic systems have multiple AP-2 binding sites in their regulatory regions [ 20 - 26 ], indicating an involvement of AP-2 in the expression of these genes. We have recently reported positive correlations between brainstem AP-2α and AP-2β levels and monoaminergic activity in rat frontal cortex [ 27 ], indicating a regulatory function of AP-2α and AP-2β not only in neuronal development, but also in neuronal adaptive mechanisms in the adult brain. In two independent studies it was shown that the AP-2β genotype is associated with anxiety-related personality traits [ 28 , 29 ]. The AP-2β genotype has also been linked to binge-eating disorder [ 30 ] and to platelet MAO activity [ 31 ], the latter of which is associated with personality traits. Furthermore, the AP-2β genotype has been associated with CSF levels of homovanillic acid (HVA) in women [ 32 ]. In a previous study, we reported that the levels of AP-2α and AP-2β were decreased in rat whole brain after treatment for 10 days with citalopram, imipramine and lithium, respectively [ 33 ]. We have also reported that citalopram changes the levels of AP-2α and AP-2β in rat whole brain in a time-dependent manner, i.e., AP-2 levels were decreased after 7 days of treatment but returned to control levels after 21 days of treatment [ 34 ]. Furthermore, AP-2α and AP-2β levels were shown to be increased in rat whole brain after 10 days of treatment with phenelzine [ 35 ]. In the present study, we report that treatment with citalopram for 1, 7 or 21 days did not affect the AP-2α and AP-2β levels in the rat brainstem.
Treatment with phenelzine, however, increased the levels of both AP-2α and AP-2β in the rat brainstem after 7 days of treatment, but after 21 days of treatment the levels had returned to control levels. Results The mean relative amounts of AP-2α and AP-2β protein ± standard deviation (SD) for each of the citalopram-, phenelzine- and saline-treated animal groups are shown in tables 1 and 2 , respectively. No significant differences in the amounts of AP-2α and AP-2β were found between any of the citalopram and saline treated groups. With regard to phenelzine treatment, there was a significant increase in AP-2α levels after 7 days of treatment (2.74 ± 0.57 vs 2.16 ± 0.14: mean relative amount of AP-2α ± SD, phenelzine vs saline, p = 0.037), which returned to control levels after 21 days. A similar result was obtained with regard to AP-2β levels, with a significant increase only after 7 days of treatment (2.25 ± 0.25 vs 1.86 ± 0.22: mean relative amount of AP-2β ± SD, phenelzine vs saline, p = 0.017).

Table 1 Relative amount of AP-2α protein ± SD for the different animal treatment groups.

             day 1          day 7           day 21
saline       2.29 ± 0.43    2.16 ± 0.14     2.45 ± 0.33
phenelzine   2.28 ± 0.61    2.74 ± 0.57 *   2.54 ± 0.43
citalopram   2.35 ± 0.31    2.33 ± 0.49     2.49 ± 0.47

Values are means ± SD, for each group of animals, n = 6. *p < 0.05 as compared to animals treated with saline for the same time-period.

Table 2 Relative amount of AP-2β protein ± SD for the different animal treatment groups.

             day 1          day 7           day 21
saline       2.00 ± 0.44    1.86 ± 0.22     2.10 ± 0.14
phenelzine   2.14 ± 0.61    2.25 ± 0.25 *   2.00 ± 0.28
citalopram   2.16 ± 0.34    2.07 ± 0.47     2.05 ± 0.28

Values are means ± SD, for each group of animals, n = 6. *p < 0.05 as compared to animals treated with saline for the same time-period.
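The day-7 comparisons reported above can be checked from the summary statistics alone. The sketch below assumes Student's two-sample t-test with pooled variance and equal group sizes (n = 6, hence df = 10), which is consistent with the unpaired t-test named in the Methods; the resulting t-statistics agree with the reported p-values of 0.037 and 0.017, and both exceed the two-tailed 5% critical value of 2.228 for 10 degrees of freedom.

```python
import math

def pooled_t(mean1, sd1, mean2, sd2, n):
    """Student's two-sample t statistic from summary statistics,
    assuming equal group sizes and pooled variance; df = 2n - 2."""
    sp2 = (sd1 ** 2 + sd2 ** 2) / 2.0              # pooled variance
    return (mean1 - mean2) / math.sqrt(sp2 * 2.0 / n)

# Day-7 phenelzine vs saline comparisons from Tables 1 and 2 (n = 6):
t_alpha = pooled_t(2.74, 0.57, 2.16, 0.14, 6)      # AP-2alpha
t_beta = pooled_t(2.25, 0.25, 1.86, 0.22, 6)       # AP-2beta

T_CRIT = 2.228   # two-tailed 5% critical value of Student's t for df = 10
print(round(t_alpha, 2), t_alpha > T_CRIT)         # ~2.42, significant
print(round(t_beta, 2), t_beta > T_CRIT)           # ~2.87, significant
```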
When comparing the untreated naive animals with the treated animal groups, no differences with regard to the levels of AP-2α (untreated animals: 2.49 ± 0.71, mean relative amount AP-2α ± SD) or AP-2β (untreated animals: 2.04 ± 0.64, mean relative amount AP-2β ± SD) were found. Moreover, no differences in the levels of AP-2α and AP-2β were observed between the groups of animals treated with saline for the different time periods. Discussion In two independent studies, we have shown that the levels of AP-2α and AP-2β in rat whole brain were decreased after subchronic (7 or 10 days) treatment with citalopram [ 33 , 34 ]. We have also shown that the AP-2α and AP-2β levels in rat whole brain were increased after treatment with phenelzine for 10 days [ 35 ]. For several reasons, we hypothesized that the brainstem should be of particular importance in this regard. Thus, many genes encoding proteins involved in the brainstem monoaminergic systems have binding sites for AP-2 in their regulatory regions. Furthermore, we have previously observed correlations between brainstem AP-2 levels and cortical monoamine activity [ 27 ]. The lag-period initially seen before the antidepressant therapeutic effect is obtained also made it interesting to study possible changes in AP-2 levels over time. In the present study, we found that treatment with citalopram for 1, 7 or 21 days did not have any effect on the brainstem levels of AP-2α and AP-2β. Treatment with phenelzine for 7 days, on the other hand, increased the brainstem levels of AP-2α and AP-2β, but after 21 days of treatment the levels had returned to control levels. The phenelzine data presented here are in line with our previous study showing an increase in AP-2α and AP-2β levels in rat whole brain after 10 days of treatment with phenelzine [ 35 ]. The different effects of citalopram and phenelzine on AP-2 are likely explained by the different molecular mechanisms of the two drugs.
Considering the specific drug targets for citalopram and phenelzine, respectively: the target for citalopram, 5-HTT, is specific for membranes of the serotonergic system, while the target for phenelzine, MAO-A, is not located in the serotonergic neurons [ 36 ]. It has been shown that some SSRIs, to some extent, are also able to inhibit MAO activity [ 37 ]. However, all SSRIs tested, including citalopram, had a much higher selectivity for MAO-B than MAO-A, and MAO-A, not MAO-B, is the enzyme considered to be involved in the antidepressant effect [ 38 ]. Thus, a MAO-inhibiting effect of citalopram should not be a confounding factor with regard to interpretation of the present results. The fact that citalopram treatment did not affect AP-2 levels in rat brainstem indicates that the decrease in AP-2 levels previously seen in rat whole brain after citalopram treatment takes place in some other brain region. It has been shown that postsynaptic 5-HT 1A receptors in hippocampus show an enhanced response to 5-HT after treatment with TCAs (which block both 5-HT and NE reuptake) for a time-period that corresponds to the time required for initiation of the antidepressant therapeutic effect [ 39 ]. An increased 5-HT responsiveness has also been demonstrated for serotonin receptors other than 5-HT 1A and in projection areas other than the hippocampus [ 40 ]. Thus, there are reasons to presume that the effect we have seen on AP-2 levels after subchronic citalopram treatment occurs in the 5-HT projection areas rather than in the 5-HT cell bodies in the brainstem. The increase in brainstem levels of AP-2 after 7 days of phenelzine treatment is in line with previous reports that MAO-Is partially enhance the 5-HT transmission by increasing the amount of 5-HT released per action potential [ 38 ], an effect which is likely to be regulated by presynaptic mechanisms located in the brainstem.
The temporary changes in AP-2 levels after administration of antidepressants (present data and [ 34 ]), coinciding in time with the appearance of side-effects, make it tempting to speculate that these two phenomena are somehow interrelated. In a previous study, we reported higher levels of AP-2α and AP-2β in rat whole brain in untreated naive animals compared to animals treated with citalopram or saline for different time periods [ 34 ]. In the present study, however, we did not see any differences in the brainstem levels of AP-2α and AP-2β between naive untreated animals and treated animal groups. This indicates that the changes in AP-2α and AP-2β levels in rat whole brain previously seen in animals treated with citalopram or saline compared to naive untreated animals are located in some AP-2-containing brain region other than the brainstem. As mentioned earlier, we have previously shown that the AP-2β genotype is associated with platelet MAO activity [ 31 ]. Thus, it seems likely that AP-2 is involved in the regulation of the expression of the MAO enzyme. A possible explanation for the elevated AP-2α and AP-2β levels during subchronic phenelzine treatment could be that they are part of a feedback mechanism to counteract the reduction in MAO activity. Conclusions Unraveling the molecular mechanisms involved in the initial phase of antidepressant treatment is essential for the development of new efficient antidepressant drugs with fewer side-effects. We find transcription factors, such as AP-2, with the ability to regulate expression of specific genes involved in the monoaminergic mechanisms, to be interesting candidates as novel antidepressant drug targets. Methods Animals and treatment paradigms Adult male Sprague-Dawley rats (10 weeks of age, B&K Universal AB, Sollentuna, Sweden) were housed in groups of three and maintained on a 12-hour light/dark cycle with food and water freely available.
Animals were administered phenelzine (n = 18, 10 mg/kg, Sigma, Sweden) or citalopram (n = 18, 10 mg/kg, Lundbeck AB, Helsingborg, Sweden) subcutaneously with daily injections. All drugs were dissolved in saline (NaCl, 9 mg/ml). Saline treated animals (n = 18) received saline injections in the same volume as that given to the citalopram and phenelzine treated animals. Each group of animals was treated for 1, 7 or 21 days, respectively. All animals were sacrificed by CO 2 inhalation 24 hours after their last injection. A group of untreated naive animals (n = 6) was sacrificed after 21 days. After sacrifice the brainstem was dissected and nuclear extracts were prepared for measurement of AP-2 levels by Enzyme-Linked Immunosorbent Assay (ELISA). This study was carried out with permission from the local animal ethics committee in Uppsala, Sweden. Extraction of nuclear extracts Rat brainstem was homogenized in 3 ml buffer A (10 mM HEPES, 10 mM KCl, 0.1 mM EDTA, 0.1 mM EGTA, 1 mM DTT, 0.5 mM PMSF, pH 7.9). The homogenate was incubated on ice for 15 minutes. To this, 125 μl Nonidet P40 was added, and the homogenate was centrifuged for 30 seconds at 14000 rpm at 4°C. The pellet was resuspended in 500 μl buffer C (20 mM HEPES, 0.4 M NaCl, 1 mM EDTA, 1 mM EGTA, 1 mM DTT, 1 mM PMSF, pH 7.9). Thereafter the tubes were put on a shaker for 15 minutes and centrifuged at 14000 rpm for 5 minutes (4°C). The supernatant, i.e., the nuclear proteins, was aliquoted and stored at -80°C. The protein concentration of all nuclear extracts was determined by the method of Lowry et al. (1951) [ 41 ]. The concentration of nuclear extracts was ~8 μg/μl. ELISA measurements 96-well microtiter plates were coated (50 μl/well) with nuclear extracts (10 μg/ml) diluted in 50 mM Carbonate-Bicarbonate buffer pH 9.0. The plates were covered with parafilm and incubated overnight at 4°C.
Antigen solution was then removed, 200 μl blocking buffer (PBS, 1 % BSA) was added to each well, and the plates were incubated for two hours at room temperature. Following this, the blocking buffer was removed and the plates were washed with PBS. Primary antibody (50 μl, goat polyclonal AP-2α and AP-2β, 15 μg/ml respectively, SDS Biosciences, Falkenberg, Sweden) diluted in blocking buffer was then added and the plates incubated overnight at 4°C. After incubation the antibody was removed and the plates were washed three times with Wash buffer I (PBS, 0.05 % Tween-20). Secondary antibody (Donkey anti-goat IgG AP conjugated, SDS Biosciences, Falkenberg, Sweden), diluted 1:350 in blocking buffer, was then added (50 μl) to each well and the plates were incubated for two hours at room temperature. After removal of the secondary antibody the plates were washed three times with Wash buffer I, and once with Wash buffer II (10 mM diethanolamine, 0.5 mM MgCl 2 , pH 9.5). Thereafter, 50 μl substrate (Phosphatase substrate, 5 mg tablets, Sigma, Sweden, diluted in 5 ml Wash buffer II) was added to each well. The reaction continued for 30 minutes and was terminated by adding 50 μl of 0.1 M EDTA, pH 7.5. The plates were analysed in an ELISA reader (Molecular Devices, Thermo Max) at optical density (OD) 405/490. The OD of the AP-2 isoforms for each rat was converted to a value on a standard curve, in which known concentrations of antibody were plotted against optical density. The value from the standard curve was then divided by the concentration of total protein in the nuclear extract. This quotient was used as a measure of the relative amount of AP-2α and AP-2β protein. Each rat was analysed twice for accuracy. Statistical analyses The statistical comparisons between drug treated and saline treated animals for each time-point were analysed using an unpaired t-test.
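The OD-to-relative-amount conversion described above amounts to interpolating each reading on the standard curve and normalizing by total protein. The sketch below uses linear interpolation between neighbouring curve points; the curve values and readings are invented placeholders, not data from this study.

```python
def od_to_relative_amount(od, curve, total_protein):
    """Interpolate an OD reading on a standard curve (a list of
    (known_amount, od) points) and normalize by the total protein
    concentration of the nuclear extract."""
    pts = sorted(curve, key=lambda p: p[1])
    for (a0, od0), (a1, od1) in zip(pts, pts[1:]):
        if od0 <= od <= od1:
            amount = a0 + (a1 - a0) * (od - od0) / (od1 - od0)
            return amount / total_protein
    raise ValueError("OD outside the range of the standard curve")

# Illustrative standard curve: (antibody amount, OD) pairs.
curve = [(1.0, 0.10), (2.0, 0.22), (4.0, 0.45), (8.0, 0.90)]
print(od_to_relative_amount(0.335, curve, total_protein=1.0))  # ~3.0
```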
When comparing the groups of untreated animals with all treatment groups we used analysis of variance (ANOVA) and Fisher's Protected Least Significant Difference (PLSD) test. To test whether any of the groups of saline treated animals differed in the amounts of AP-2α and AP-2β protein, ANOVA and the PLSD test were used. All calculations were performed using Stat View 5.0 software (SAS Institute Inc., Cary, NC, USA). Results were considered statistically significant when p < 0.05. Authors' contributions CB planned the experiments, carried out the molecular and statistical analyses and drafted the manuscript; MD participated in planning the experiments and writing of the manuscript; LO conceived of the study and participated in its design and coordination. All authors read and approved the final manuscript.
552317 | Efficient decoding algorithms for generalized hidden Markov model gene finders | Background The Generalized Hidden Markov Model (GHMM) has proven a useful framework for the task of computational gene prediction in eukaryotic genomes, due to its flexibility and probabilistic underpinnings. As the focus of the gene finding community shifts toward the use of homology information to improve prediction accuracy, extensions to the basic GHMM model are being explored as possible ways to integrate this homology information into the prediction process. Particularly prominent among these extensions are those techniques which call for the simultaneous prediction of genes in two or more genomes at once, thereby increasing significantly the computational cost of prediction and highlighting the importance of speed and memory efficiency in the implementation of the underlying GHMM algorithms. Unfortunately, the task of implementing an efficient GHMM-based gene finder is already a nontrivial one, and it can be expected that this task will only grow more onerous as our models increase in complexity. Results As a first step toward addressing the implementation challenges of these next-generation systems, we describe in detail two software architectures for GHMM-based gene finders, one comprising the common array-based approach, and the other a highly optimized algorithm which requires significantly less memory while achieving virtually identical speed. We then show how both of these architectures can be accelerated by a factor of two by optimizing their content sensors. We finish with a brief illustration of the impact these optimizations have had on the feasibility of our new homology-based gene finder, TWAIN. 
Conclusions In describing a number of optimizations for GHMM-based gene finders and making available two complete open-source software systems embodying these methods, it is our hope that others will be better able to explore promising extensions to the GHMM framework, thereby improving the state-of-the-art in gene prediction techniques. | Background Generalized Hidden Markov Models have seen wide use in recent years in the field of computational gene prediction. A number of ab initio gene-finding programs are now available which utilize this mathematical framework internally for the modeling and evaluation of gene structure [ 1 - 6 ], and newer systems are now emerging which expand this framework by simultaneously modeling two genomes at once, in order to harness the mutually informative signals present in homologous gene structures from recently diverged species. As greater numbers of such genomes become available, it is tempting to consider the possibility of integrating all this information into increasingly complex models of gene structure and evolution. Notwithstanding our eagerness to utilize this expected flood of genomic data, methods have yet to be demonstrated which can perform such large-scale parallel analyses without requiring inordinate computational resources. In the case of Generalized Pair HMMs (GPHMMs), for example, the only systems in existence with which we are familiar make a number of relatively restrictive assumptions in order to reduce the computational complexity of the problem to a more tolerable level [ 7 , 8 , 15 ]. Yet, even these systems are currently capable of handling no more than two genomes at once. If larger numbers of genomes are to be simultaneously integrated into the gene prediction process in a truly useful manner, then it is reasonable to suggest that new methods will be needed for efficient modeling of parallel gene structures and their evolution.
Assuming for now that these methods are likely to continue to build on the basic GHMM framework, we feel it is important that efficient methods of GHMM implementation be properly disseminated for the benefit of those who are to work on this next generation of eukaryotic gene finders. Modeling genes with a GHMM A Hidden Markov Model (HMM) is a state-based generative model which transitions stochastically from state to state, emitting a single symbol from each state. A GHMM (or semi-Markov model) generalizes this scenario by allowing individual states to emit strings of symbols rather than one symbol at a time [ 9 , 10 ]. A GHMM is parameterized by its transition probabilities, its state duration (i.e., feature length) probabilities, and its state emission probabilities. These probabilities influence the behavior of the model in terms of which sequences are most likely to be emitted and which series of states are most likely to be visited by the model as it generates its output. Eukaryotic gene prediction entails the parsing of a DNA sequence into a set of putative CDSs ( coding segments , hereafter referred to informally as "genes") and their corresponding exon-intron structures [ 11 ]. Thus, the problem of eukaryotic gene prediction can be approximately stated as one of parsing sequences over the nucleotide alphabet Σ = {A,C,G,T} according to the regular expression: Σ*( ATG Σ*( GT Σ* AG )*Σ* Γ )*Σ*, (1) where the signals (start and stop codons, donors, and acceptors) have been underlined for clarity, and where Γ = {TAG,TGA,TAA} represents a stop codon. (The actual nucleotides comprising these signals may differ between organisms; we have given the most common ones). 
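As a concrete (if deliberately naive) illustration, Formula 1 can be transcribed into a Python regular expression. This is only a well-formedness check: it does not enforce the reading-frame constraint discussed next, and a regex engine returns a single match rather than scoring the many ambiguous parses, which is precisely why a probabilistic decoder is needed.

```python
import re

# Formula 1 as a Python regular expression: a well-formedness check only.
SIGMA = "[ACGT]*"                            # Sigma*
INTRON = f"(?:GT{SIGMA}AG)"                  # GT Sigma* AG
STOP = "(?:TAG|TGA|TAA)"                     # Gamma (stop codons)
GENE = f"ATG{SIGMA}{INTRON}*{SIGMA}{STOP}"   # one CDS with 0+ introns

# Toy CDS: ATG, exon AAA, intron GTTTTTAG, exon AAA, stop TAA.
assert re.fullmatch(GENE, "ATGAAAGTTTTTAGAAATAA")
assert not re.fullmatch(GENE, "AAAGTTTTTAGAAATAA")  # no start codon
print("grammar checks passed")
```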
An additional constraint not explicitly represented in Formula 1 is that the number of non-intron nucleotides between the start and stop codons of a single gene must be a multiple of three, and furthermore, if these nucleotides are aggregated into a discrete number of nonoverlapping triples, or codons , then none of these codons may be a stop codon, other than the stop codon which terminates the gene. Note that the Σ* terms in Formula 1 permit the occurrence of pseudo-signals – e.g., an ATG triple which does not comprise a true start codon. Gene prediction with a GHMM thus entails parsing with an ambiguous stochastic regular grammar; the challenge is to find the most probable parse of an input sequence, given the GHMM parameters and the input sequence. In the case of simple Hidden Markov Models, this optimal parsing (or decoding ) problem can be solved with the well-known Viterbi algorithm , a dynamic programming algorithm with run time linear in the sequence length (for a fixed number of states) [ 12 ]. A modified Viterbi algorithm is required in the case of GHMMs, since each state can now emit more than one symbol at a time [ 2 ], resulting in the following optimization problem: φ max = argmax φ ∏ i =1.. n P e ( S i | q i , d i ) P t ( q i | q i -1 ) P d ( d i | q i ), (2) where φ is a parse of the sequence consisting of a series of states q i and state durations d i , 0≤ i ≤ n , with each state q i emitting subsequence S i of length d i , so that the concatenation of all S 0 S 1 ... S n produces the complete output sequence S (but note that states q 0 and q n are silent , producing no output). P e ( S i | q i , d i ) denotes the probability that state q i emits subsequence S i , given duration d i ; P t ( q i | q i -1 ) is the probability that the GHMM transitions from state q i -1 to state q i ; and P d ( d i | q i ) is the probability that state q i has duration d i . The argmax is over all parses of the DNA sequence into well-formed exon-intron structures; hence, the problem is one of finding the parse which maximizes the product in Equation 2.
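Equation 2 can be made concrete with a deliberately tiny semi-Markov Viterbi. The sketch below uses a toy two-state model (not the gene-structure GHMM of Figure 1): the states alternate strictly, so the single allowed transition has probability 1, and all emission and duration parameters are invented for illustration. It searches every (state, duration) segmentation in the naive O(L²) manner that the PSA machinery described later is designed to accelerate.

```python
import math

# Toy GHMM with two alternating variable-length states: "N" (noncoding,
# uniform composition) and "E" (a GC-rich, exon-like state).
STATES = ("N", "E")
INIT = {"N": 0.5, "E": 0.5}
EMIT = {"N": {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},
        "E": {"A": 0.05, "C": 0.45, "G": 0.45, "T": 0.05}}

def log_dur(d):
    return d * math.log(0.5)                 # P_d: geometric duration model

def decode(seq):
    """Naive evaluation of Equation 2: best[i][q] is the best log score
    of any parse of seq[:i] whose final segment is emitted by state q."""
    L = len(seq)
    best = [{q: -math.inf for q in STATES} for _ in range(L + 1)]
    back = [{q: None for q in STATES} for _ in range(L + 1)]
    for i in range(1, L + 1):
        for q in STATES:
            for j in range(i):               # segment seq[j:i] from state q
                seg = sum(math.log(EMIT[q][c]) for c in seq[j:i]) + log_dur(i - j)
                if j == 0:                   # first segment: initial probability
                    s, link = math.log(INIT[q]) + seg, None
                else:                        # strict N<->E alternation, P_t = 1
                    p = "E" if q == "N" else "N"
                    s, link = best[j][p] + seg, (j, p)
                if s > best[i][q]:
                    best[i][q], back[i][q] = s, link
    # Trace back the optimal parse into per-base state labels.
    q, i, labels = max(STATES, key=lambda x: best[L][x]), L, []
    while i > 0:
        link = back[i][q]
        j, p = (0, q) if link is None else link
        labels[:0] = q * (i - j)
        i, q = j, p
    return "".join(labels)

print(decode("ATTAGCCGCGCATTA"))             # NNNNEEEEEEENNNN
```

With these parameters the durations and transitions contribute the same total to every parse, so the optimum simply labels each base by its most probable state, which makes the result easy to verify by hand.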
Implementation The PSA decoding algorithm The approach commonly used in GHMM gene finders for evaluating Equation 2 is to allocate several arrays, one per variable-length feature state, and to evaluate the arrays left-to-right along the length of the input sequence according to a dynamic programming algorithm, which we will detail below. We refer to this approach as the Prefix Sum Arrays (PSA) approach, since the values in the aforementioned arrays represent cumulative scores for prefixes of the sequence. Without loss of generality, let us consider the GHMM structure depicted in Figure 1 . Although individual GHMMs will differ from this particular structure on specific points, the model in Figure 1 is general enough to serve as a concrete example as we illustrate the operation of the algorithm. The diamonds denote the states for fixed length features (ATG = start codon, TAG = stop codon, GT = donor, AG = acceptor) and the circles denote states for variable length features (N = intergenic, I = intron, E sng = single exon, E init = initial exon, E int = internal exon, E fin = final exon). This model generates genes only on the forward strand of the DNA; to obtain a double-stranded model one can simply mirror the structure and link the forward and reverse models through a single merged intergenic state. Associated with each diamond state is a signal sensor such as a weight matrix (WMM) or some other fixed-length model (e.g., a WAM, WWAM, MDD tree, etc.) [ 13 ], and with each circular state is associated a variable-length content sensor , such as a Markov chain (MC) or an Interpolated Markov Model (IMM) [ 14 ]. For the purposes of illustration, we will consider only the simplest of each model type, since the more complex model types commonly in use can in general be handled generically within the GHMM framework. The simplest fixed-length model is the WMM: P ( x h .. x h + n |θ) = ∏ i =0.. n P ( x h + i |θ[ i ]), (3) where x h .. x h + n denotes the subsequence currently within a sliding ( n + 1)-element window, called the context window , and P ( x |θ[ i ]) denotes the probability of nucleotide x occurring at position i within the window, for model θ. In practice, all of the probabilities described in all of these models are represented in log space (to reduce the incidence of numerical underflow on the computer), so that products of probabilities can be replaced with sums of their logs. The simplest variable-length model used in practice is the Markov chain. An n th -order Markov chain M for state q i would evaluate the probability P ( S i | q i , d i ) of a putative feature S i according to: P ( S i | q i , d i ) = ∏ j =1.. d i P M ( x j | x j - n .. x j -1 ), (4) where x j is the j th nucleotide in the sequence of the putative feature, d i is the length of that feature, and P M ( x j | x j - n .. x j -1 ) is the probability of nucleotide x j conditional on the identities of its n predecessor nucleotides, according to content model M . As with the fixed-length model described above, this computation is typically done in log space. In scoring the signals and content regions of a putative gene parse, it will be important for us to carefully differentiate between the nucleotides which are scored by a signal sensor and those which are scored by a content sensor in a putative parse. As shown in Figure 2 , the content and signal regions must partition the sequence into non-overlapping segments; allowing overlaps would result in double-counting of nucleotide probabilities, which can lead to undesirable biases in the decoding algorithm. The first step of the PSA algorithm is to compute a prefix sum array for each content sensor. For noncoding states (introns and intergenic) this can be formalized as shown in Figure 3 . In the case of exon states, it is important to capture the different statistical properties present in the three codon positions, referred to as phase 0 , phase 1 , and phase 2 .
We employ three Markov chains, M 0 , M 1 , and M 2 , corresponding to these three phases. Together, these three chains constitute a three-periodic Markov chain , M {0,1,2} . Exon states then require three arrays, each of which can be initialized using the procedure shown in Figure 4 . In this way, we can initialize the three arrays α i, 0 , α i, 1 , and α i, 2 for an exon state q i as follows: for ω ← 0 to 2 do init_phased(α i,ω , S, M {0, 1, 2} ,ω) ; The individual chains M 0 , M 1 , and M 2 comprising M {0,1,2} are applied in periodic fashion within the procedure init_phased() to compute conditional probabilities of successive nucleotides along the length of the array. The three arrays are phase-shifted by one from each other, with each element in the array storing the cumulative score of the prefix up to the current nucleotide. The first nucleotide is taken to be in phase ω for array α i ,ω. Initializing the arrays for reverse-strand states can be achieved by simply reverse-complementing the DNA sequence and then reversing the order of the resulting arrays (keeping in mind later that the reverse-strand arrays tabulate their sums from the right, rather than the left, and that ω is the phase of the last array entry rather than the first). Once the prefix sum arrays have been initialized for all variable-duration states, we make another left-to-right pass over the input sequence to look for all possible matches to the fixed-length states, via the signal sensors. In general, a signal sensor θ models the statistical biases of nucleotides at fixed positions surrounding a signal of a given type, such as a start codon. Whenever an appropriate consensus is encountered (such as ATG for the start codon sensor), the signal sensor's fixed-length window is superimposed around the putative signal (i.e., with a margin of zero or more nucleotides on either side of the signal consensus) and evaluated to produce a logarithmic signal score R S = log P ( x h .. 
x h + n -1 |θ), where h is the position of the beginning of the window and n is the window length. If signal thresholding is desired, R S can be compared to a pre-specified threshold and those locations scoring below the threshold can be eliminated from consideration as putative signals. The remaining candidates for signals of each type are then inserted into a type-specific signal queue for consideration later as possible predecessors of subsequent signals in a putative gene model. As each new signal is encountered, the optimal predecessors for the signal are selected from among the current contents of the signal queues, using a scoring function described below. In the example (forward strand) GHMM depicted in Figure 1 , the possible (predecessor→successor) patterns are: ATG→TAG ATG→GT GT→AG AG→GT AG→TAG TAG→ATG Associated with each of these patterns is a transition probability, P t ( q i | q i -1 ), which is included in the scoring of a possible predecessor; this probability can be accessed quickly by indexing into a two-dimensional array. The logarithmic transition score will be denoted R T ( q i -1 , q i ) = log P t ( q i | q i -1 ). The distance from a prospective predecessor to the current signal is also included in the evaluation in the form of P d ( d i | q i ) for distance (=duration) d i and signal type (=state) q i . This probability can usually be obtained relatively quickly, depending on the representation of the duration distributions. If the distributions have been fitted to a curve with a simple algebraic formula, then evaluation of the formula is typically a constant-time operation. If a histogram is instead maintained, then a binary search is typically required to find the histogram interval containing the given distance. 
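The signal-sensor pass just described can be sketched as follows. The WMM below is a toy (invented probabilities, a 5-base window with one base of context on either side of the ATG consensus) and the threshold is arbitrary; it shows R_S computed as a sum of per-column log probabilities and sub-threshold sites being discarded before they would enter a signal queue.

```python
import math

# A toy WMM signal sensor for start codons: 5 columns covering one base of
# left context, the ATG consensus, and one base of right context.
WINDOW = 5
CONSENSUS_OFFSET = 1     # the A of ATG sits at position 1 of the window
WMM = [
    {"A": 0.4, "C": 0.2, "G": 0.2, "T": 0.2},       # left context (A-rich)
    {"A": 0.97, "C": 0.01, "G": 0.01, "T": 0.01},   # A of the consensus
    {"A": 0.01, "C": 0.01, "G": 0.01, "T": 0.97},   # T of the consensus
    {"A": 0.01, "C": 0.01, "G": 0.97, "T": 0.01},   # G of the consensus
    {"A": 0.2, "C": 0.2, "G": 0.4, "T": 0.2},       # right context (G-rich)
]

def signal_score(seq, h):
    """R_S = log P(window | theta): sum of per-column log probabilities."""
    return sum(math.log(WMM[i][seq[h + i]]) for i in range(WINDOW))

def scan_starts(seq, threshold):
    """Slide the window along the sequence, keeping only ATG sites whose
    score reaches the threshold; yields (position_of_A, score) pairs."""
    hits = []
    for h in range(len(seq) - WINDOW + 1):
        pos = h + CONSENSUS_OFFSET
        if seq[pos:pos + 3] == "ATG":
            score = signal_score(seq, h)
            if score >= threshold:
                hits.append((pos, score))
    return hits

hits = scan_starts("CCATGAAATGGC", threshold=-3.0)
print(hits)   # only the well-flanked ATG at position 7 survives the threshold
```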
We denote the logarithmic duration score R D ( q i , q j ) = log P d ( d i | q i : j ) where d i is the length of the content region delimited by signals q i and q j , and q i : j is the variable-length state corresponding to that content region. Following Equation 2, the final component of the scoring function is the emission probability P e ( S i | q i , d i ). For a fixed-length state, this is simply the score produced by the signal sensor. For a variable-length state q i , P e can be evaluated very quickly by indexing into the prefix sum array α i ,γ for state q i and phase γ at the appropriate indices for the two signals and simply performing subtraction: R C ( s pred , s cur ,ω) ← α i ,γ [ wpos ( s cur ) - 1] - α i ,γ [ wpos ( s pred ) + wlen ( s pred ) - 1], (5) where wpos ( s ) is the 0-based position (within the full input sequence) of the first nucleotide in the context window for signal s , wlen ( s ) is the length of the context window for signal s , and s pred and s cur are the predecessor and current signals, respectively. In the case of coding features, γ is the phase of the array and ω = (γ + pos ( s cur )) mod 3 is the phase of s cur , for pos ( s cur ) the position of the leftmost consensus base of s cur . For reverse-strand features, since the prefix sum arrays tabulate their sums from the right instead of the left, the subtraction must be reversed: R C ( s pred , s cur ,ω) ← α i ,γ [ wpos ( s pred ) + wlen ( s pred )] - α i ,γ [ wpos ( s cur )], (6) and ω = (γ + L - pos ( s cur ) - 1) mod 3, for L the sequence length. For noncoding features, the phases can be ignored when computing R C , since there is only one array per noncoding state. The resulting optimization function is: pred ( s j ,γ j ) = argmax s i [ R I ( s i ,γ i ) + R T ( s i , s j ) + R D ( s i , s j ) + R C ( s i , s j ,γ j )], (7) for current signal s j and predecessor signal s i ; R I ( s i ,γ i ) denotes the logarithmic inductive score for signal s i in phase γ i .
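The O(1) content-scoring trick behind R_C can be sketched concretely: build one cumulative log-probability array per phase, then score any coding segment in any phase with a single subtraction (Equation 5). The chain below is a toy three-periodic model of order 0 with invented probabilities, and for simplicity segments are addressed directly by sequence coordinates rather than by signal window positions.

```python
import math

# A toy three-periodic content sensor: one 0th-order base distribution per
# codon phase. Real gene finders use higher-order chains, but the
# prefix-sum bookkeeping is identical.
M = [
    {"A": 0.4, "C": 0.1, "G": 0.4, "T": 0.1},      # phase 0
    {"A": 0.1, "C": 0.4, "G": 0.1, "T": 0.4},      # phase 1
    {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},  # phase 2
]

def init_phased(seq, omega):
    """One prefix sum array: alpha[k] holds the cumulative log probability
    of seq[:k], with the first base taken to be in phase omega."""
    alpha, total = [0.0], 0.0
    for k, c in enumerate(seq):
        total += math.log(M[(omega + k) % 3][c])
        alpha.append(total)
    return alpha

seq = "ATGGCCGTA"
arrays = [init_phased(seq, w) for w in range(3)]   # the three phased arrays

def content_score(begin, end, gamma):
    """R_C for the segment seq[begin:end] whose first base is in phase
    gamma: a single subtraction on the matching array (cf. Equation 5)."""
    g = (gamma - begin) % 3    # array whose phase at `begin` equals gamma
    return arrays[g][end] - arrays[g][begin]

# The O(1) lookup agrees with re-scoring the segment from scratch:
direct = sum(math.log(M[k % 3][c]) for k, c in enumerate(seq[3:9]))
assert abs(content_score(3, 9, 0) - direct) < 1e-9
```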
For forward-strand coding features, the phases γ i and γ j are related by: γ i = (γ j - Δ) mod 3, (8) for Δ the putative exon length, or, equivalently, γ j = (γ i + Δ) mod 3. (9) These relations can be converted to the reverse strand by swapping + and -. For introns, γ i = γ j . For intergenic features, the phase will always be 0 for a forward strand signal and 2 for a reverse strand signal (since on the reverse strand the leftmost base of a 3-base signal would be in phase 2). The result of Equation 7 is the optimal predecessor for signal s j . This scoring function is evaluated for all appropriate predecessor signals, which are readily available in one or more queues, as mentioned above. A pointer called a trellis link is then created, pointing from the current signal to its optimal predecessor. In the case of those signals that can terminate an exon or an intron, three optimal predecessors must be retained, one for each phase. The inductive score R I ( s j ,γ j ) of the new signal s j is then initialized from the selected predecessor s i as follows: R I ( s j , γ j ) ← R I ( s i , γ i ) + R T ( s i , s j ) + R D ( s i , s j ) + R C ( s i , s j , γ j ) + R S ( s j ), (10) where R S ( s j ) is the logarithmic score produced by the signal sensor for signal s j . A final step to be performed at each position along the input sequence is to drop from each queue any signal that has been rendered unreachable from all subsequent positions due to intervening stop codons. Except for the final stop codon of a gene, in-phase (i.e., in phase 0) stop codons are generally not permitted in coding exons; for this reason, any potential stop codon (regardless of its signal score) will eclipse any preceding start codon or acceptor site (or, on the reverse strand, stop codon or donor site) in the corresponding phase. The algorithm shown in Figure 5 addresses this issue by dropping any fully eclipsed signal (i.e., eclipsed in all three phases) from its queue.
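A simplified, forward-strand sketch of the eclipsing idea (this is not the eclipse() routine of Figure 5 itself): here the phase of a codon starting at position p is taken to be p mod 3, which is an assumption of this illustration.

```python
# Forward-strand stop codons.
STOPS = {"TAA", "TAG", "TGA"}

def eclipsed_phases(seq, signal_pos):
    """Phases in which a signal at signal_pos is eclipsed by downstream
    stop codons; once all three phases appear, the signal can be dropped
    from its queue."""
    phases = set()
    for p in range(signal_pos + 1, len(seq) - 2):
        if seq[p:p + 3] in STOPS:
            phases.add(p % 3)
            if len(phases) == 3:
                break
    return phases

seq = "ATGAAATAACCTAG"
assert eclipsed_phases(seq, 0) == {0, 1, 2}   # fully eclipsed: droppable
assert eclipsed_phases(seq, 8) == {2}         # only phase 2 eclipsed so far
```

In the real algorithm this check is performed incrementally as the scan advances, rather than by rescanning the downstream sequence for each signal.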
For the reverse strand, line 3 of eclipse() should be changed to: ω ← (p-pos(s)-len(s) - 1) mod 3; where len(s) is the length of the consensus sequence for signal s (e.g., 3 for ATG). Note that by x mod 3 we mean the positive remainder after division of x by 3; in some programming languages (such as C/C++), a negative remainder may be returned, in which case 3 should be added to the result. A special case of eclipsing which is not handled by eclipse() is that which occurs when a stop codon straddles an intron; this can be handled fairly simply by checking for such when considering each donor signal as a prospective predecessor for an acceptor signal (or vice-versa on the reverse strand). As each predecessor is evaluated, the bases immediately before the donor and immediately following the acceptor are examined, and if a stop codon is formed, the predecessor is no longer considered eligible for selection in the corresponding phase. As shown in Figure 5 , when a signal has been eclipsed in all three phases it can be removed from its queue. In this way, as a signal falls further and further behind the current position in the sequence, the signal becomes more and more likely to be eclipsed in all three phases as randomly formed stop codons are encountered in the sequence, so that coding queues (e.g., those holding forward strand start codons and acceptors, or reverse strand donors and stop codons) tend not to grow without bound, but to be limited on average to some maximal load determined by the nucleotide composition statistics of the sequence. Because of this effect, the expected number of signals which must be considered during predecessor evaluation can be considered effectively constant in practice. 
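The remark above about remainder semantics can be made concrete. Python's % operator already yields the positive remainder; C/C++ truncated division does not, which is why the text advises adding 3 to a negative result.

```python
def c_style_mod(x, m):
    """Remainder as C/C++'s % computes it (truncated division);
    negative when x is negative and m is positive."""
    return x - int(x / m) * m   # int() truncates toward zero, as C does

def pos_mod(x, m):
    """The positive remainder required by the phase arithmetic:
    add m whenever the C-style result is negative."""
    r = c_style_mod(x, m)
    return r + m if r < 0 else r

assert c_style_mod(-7, 3) == -1   # what the C expression (-7) % 3 yields
assert pos_mod(-7, 3) == 2        # the value the phase arithmetic needs
assert -7 % 3 == 2                # Python's builtin % already agrees
```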
In the case of noncoding queues (e.g., those holding forward strand donors or stop codons, etc.), the assumption that noncoding features follow a geometric (i.e., exponentially decreasing) distribution allows us to limit these queues to a single element (per phase). Once a noncoding predecessor has been selected in a given phase, no other noncoding predecessor already compared against it can ever become more attractive: not by virtue of its transition probability (all signals in a single queue are of the same type and therefore share the same transition probability), not by its duration probability (the geometric distribution ensures that the candidates' duration probabilities decrease at the same rate), and not by its sequence probability (any nucleotides encountered after both potential predecessors have been seen affect their sequence scores identically). Because the coding and noncoding queues are effectively limited to a constant load (as argued above), the expected processing time at each nucleotide is O(1) in practice and therefore the entire algorithm up to this point requires time O( L ) for an input sequence of length L and a GHMM with a fixed number of states. It will be seen that the traceback procedure described below also requires time O( L ), and so this is the time complexity of the PSA decoding algorithm for normal eukaryotic genomes (i.e., those not especially lacking in random stop codons). Once the end of the sequence is reached, the optimal parse φ can be reconstructed by tracing back through the trellis links. In order for this to be done, a set of virtual anchor signals (one of each type) must be instantiated at either terminus of the sequence (each having signal score R S = 0).
Those at the left terminus will have been entered into the appropriate queues at the very start of the algorithm as prospective targets for the first trellis links (and having inductive scores R I = 0), and those at the right terminus are the last signals to be evaluated and linked into the trellis. The highest scoring of these right terminal anchor signals is selected (in its highest-scoring phase) as the starting point for the traceback procedure. Traceback consists merely of following the trellis links backward while adjusting for phase changes across exons, as shown in Figure 6 . Modifications to Figure 6 for features on the reverse-strand include changing the AG on line 8 to GT, changing the subtraction on line 9 to addition, and changing the 0 on line 7 to 2. It should be clear from the foregoing that the space requirements of the PSA decoding algorithm are O ( L | Q |) for sequence length L and variable-duration state set Q . If, for example, array elements are 8-byte double-precision floating point numbers, then the GHMM depicted in Figure 1 would require 14 prefix sum arrays (4 exon states × 3 phases + 1 intergenic state + 1 intron state), resulting in a memory requirement of at least 112 bytes per nucleotide. Generalizing this GHMM to handle both DNA strands would increase this to 216 bytes per nucleotide, so that processing of a 1 Mb sequence would require at least 216 Mb of RAM just for the arrays. Adding states for 5' and 3' untranslated regions would increase this to 248 Mb of RAM for a 1 Mb sequence, or over 1 Gb of RAM for a 5 Mb sequence. For the purposes of comparative gene finding on multiple organisms with large genes, these requirements seem less than ideal, especially when one considers the possibility of adding yet other states. The memory requirements can be reduced in several ways. First, Markov chains can be shared by similar states. 
For example, the intron and intergenic states can share a single Markov chain trained on pooled noncoding DNA, and all the exon states can use the same three-periodic Markov chain trained on pooled coding DNA. To our knowledge, the extent to which this optimization affects the accuracy of the resulting gene finder has not been systematically investigated, though it is commonly used in practice. Second, the models for exons can be modified so as to utilize likelihood ratios instead of probabilities. If the models for exons are re-parameterized to compute: P e ( S i | q i , d i ) / P N ( S i ), (11) where P N denotes the shared noncoding model, and the noncoding models are modified to compute: P N ( S i ) / P N ( S i ), (12) then the latter can be seen to be unnecessary, since it will always evaluate to 1. Such a modification is valid and will have no effect on the mathematical structure of the optimization problem given in Equation 2 as long as the denominator is evaluated using a Markov chain or other multiplicative model, since the effect of the denominator on inductive scores will then be constant across all possible predecessors for any given signal. Using such ratios allows us to skip the evaluation of all noncoding states, so that the number of prefix sum arrays required for a double-stranded version of the GHMM in Figure 1 would be only 6 (assuming the previous optimization is applied as well), corresponding to the three exon phases on two strands. Furthermore, to the extent that these likelihood ratios are expected to have a relatively limited numerical range, lower-precision floating point numbers can be used, or the ratios could instead be multiplied by an appropriate scaling factor and then stored as 2-byte integers [ 2 ]. This is a significant reduction, though asymptotically the complexity is still O( L | Q |). An additional consideration is that the log-likelihood strategy makes unavailable (or at least inseparable) the raw coding and noncoding scores, which might be desired later for some unforeseen application.
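A quick sanity check of the savings just described: the array counts are reconstructed from the text, and the scale factor used for 2-byte storage is an assumption of this sketch.

```python
import math

BYTES_PER_DOUBLE = 8

# Forward-strand GHMM of Figure 1: 4 exon states x 3 phases + intron + intergenic.
full_forward = 4 * 3 + 1 + 1          # 14 arrays -> 112 bytes per nucleotide
# With shared noncoding models and likelihood ratios, only the exon arrays
# remain: 3 phases x 2 strands.
ratio_both_strands = 3 * 2            # 6 arrays

assert full_forward * BYTES_PER_DOUBLE == 112
assert ratio_both_strands * BYTES_PER_DOUBLE == 48

# Storing a scaled log-likelihood ratio as a 2-byte integer (scale invented).
SCALE = 1000
log_ratio = math.log(0.31) - math.log(0.27)   # coding vs noncoding, toy values
stored = int(round(log_ratio * SCALE))
assert -32768 <= stored <= 32767              # fits in a signed 16-bit integer
assert abs(stored / SCALE - log_ratio) <= 0.5 / SCALE
```

The quantization error is bounded by half the scale step, which is negligible relative to per-nucleotide log scores.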
A third method of reducing the memory requirements is to eliminate the prefix sum arrays altogether, resulting in what we call the Dynamic Score Propagation (DSP) algorithm. The DSP decoding algorithm Informally, the DSP algorithm is similar to the PSA algorithm except that rather than storing all nucleotide scores for all content sensors in a set of prefix sum arrays, we instead store only the specific elements of those arrays that are needed for assessing prospective predecessors during the trellis formation. Associated with each signal is a "propagator" variable which represents the log probability of the highest-scoring partial parse up to and including this signal. As processing proceeds left-to-right along the sequence, these propagators are updated so as to extend these partial parses up to the current position. In this way, the inductive score of each signal is incrementally propagated up to each potential successor signal that is encountered during processing; when a signal is eclipsed in all phases by stop codons (i.e., removed from its respective queue), propagation of that signal's inductive score halts, since further updates would be useless beyond that point. Because no prefix sum arrays are allocated, and because the signal queues are effectively limited in size (as argued previously), the expected memory requirements of DSP will be seen to be O( L +| Q |), where the constant factor associated with the L term is small, reflecting only the number of signals per nucleotide emitted by the signal sensors, as well as the memory required to store the sequence itself. Let us introduce some notation. We define a propagator π to be a 3-element array, indexed using the notation π[ i ] for 0≤ i ≤2; when dealing with multiple propagators, π j [ i ] will denote element i of the j th propagator. Each signal s i will now have associated with it a propagator, denoted π i . 
For signals which can be members of multiple queues (such as start codons, which can be members of both the initial exon queue and the single exon queue), the signal will have one propagator per queue, but it will be clear from the context to which propagator we refer. Each queue will also have a propagator associated with it, though for the sake of reducing ambiguity we will refer to these as accumulators and represent them with the symbol α. The purpose of the accumulators is to reduce the number of updates to individual signal propagators; otherwise, every signal propagator in every queue would need to be updated at every position in the input sequence. The accumulator for a given queue will accumulate additions to be made to the propagators of the signals currently in the queue. The update of signal propagators from their queue's accumulator is delayed as long as possible, as described below. Accumulator scores are initialized to zero, as are the propagator scores for the left terminus anchor signals; the general case of propagator initialization will be described shortly. Updating of a propagator π from an accumulator α is simple in the case of a noncoding queue: ∀ 0≤ω≤2 π[ω] ← π[ω] + α[0]. (13) For coding queues, the update must take into account the location of the signal s associated with the propagator π, in order to synchronize the periodic association between phase and array index: ∀ 0≤ω≤2 π[ω] ← π[ω] + α[(ω - pos ( s ) - len ( s )) mod 3], (14) or, on the reverse strand: ∀ 0≤ω≤2 π[ω] ← π[ω] + α[(ω + pos ( s ) + len ( s )) mod 3]. 
(15) Given a content sensor M , a coding accumulator can be updated according to the rule: ∀ 0≤ω≤2 α[ω] ← α[ω] + log P M [(ω+ f ) mod 3] ( x f ), (16) or, on the reverse strand: ∀ 0≤ω≤2 α[ω] ← α[ω] + log P W [(ω- f ) mod 3] ( x f ), (17) where f is the position of the current nucleotide x f , P M [ω] ( x f ) is the probability assigned to x f by the content sensor M in phase ω, and W is the reverse-complementary model to M which computes the probability of its parameter on the opposite strand and taking contexts from the right rather than from the left. This update occurs once at each position along the input sequence. Use of f provides an absolute frame of reference when updating the accumulator. This is necessary because the accumulator for a queue has no intrinsic notion of phase: unlike an individual signal, a queue is not rooted at any particular location relative to the sequence. For noncoding queues, only the 0 th element of the accumulator must be updated: α[0] ← α[0] + log P M ( x f ). (18) All that remains is to specify the rule for selecting an optimal predecessor and using it to initialize a new signal's propagator. We first consider new signals which terminate a putative exon. Let s i denote the predecessor under consideration and s j the new signal. Denote by Δ the length of the putative exon. Then on the forward strand, we can compare predecessors with respect to phase ω via the scoring function R CI + R D + R T , where R D and R T are the duration and transition scores described earlier and R CI includes the content score and the inductive score from the previous signal: ∀ 0≤ω≤2 R CI ( s i ,ω) ← π i [(ω - Δ) mod 3]. (19) On the reverse strand we have: ∀ 0≤ω≤2 R CI ( s i ,ω) ← π i [(ω + Δ) mod 3]. (20) For introns it is still necessary to separate the three phase-specific scores to avoid greedy behavior, though the phase does not change across an intron, so no Δ term is necessary: ∀ 0≤ω≤2 R CI ( s i ,ω) ← π i [ω]. 
(21) When the preceding feature is intergenic we need only refer to phase zero of the preceding stop codon: R CI ( s i ,ω) ← π i [0], (22) or, on the reverse strand, phase 2 of the preceding start codon (since the leftmost base of the reverse-strand start codon will reside in phase 2). Once an optimal predecessor with score R CI + R D + R T is selected with respect to a given phase ω, the appropriate element of the new signal's propagator can be initialized directly: π j [ω] ← R CI ( s i ,ω) + R D ( s i , s j ) + R T ( s i , s j ) + R S ( s j ), (23) where R S ( s j ) = log P ( context ( s j )|θ j ) is the logarithmic score assigned to the context window of the new signal s j by the appropriate signal sensor θ j . An exception to Equation 23 occurs when ω is not a valid phase for signal s j (e.g., phase 1 for a start codon), in which case we instead set π j [ω] to -∞. One final complication arises from the fact that the algorithm, as we have presented it, does not permit adjacent signals in a prospective parse to have overlapping signal sensor windows; to allow such would be to permit double-counting of nucleotide probabilities, thereby biasing the probabilistic scoring function. It is a simple matter to reformulate the algorithm so that signal sensors score only the two or three consensus nucleotides of the signals under consideration; this would allow adjacent signals in a prospective parse to be as close as possible without actually overlapping (i.e., a single exon consisting of the sequence ATGTAG would be permitted, even if the start codon and stop codon context windows overlapped).
However, doing so might be expected to decrease gene finder accuracy, for two reasons: (1) statistical biases occurring at fixed positions relative to signals of a given type can in general be better exploited by a signal sensor specifically trained on such positions than by a content sensor trained on data pooled from many positions at variable distances from the signal, and (2) in the case of Markov chains and Interpolated Markov Models, probability estimates for nucleotides immediately following a signal can be inadvertently conditioned on the few trailing nucleotides of the preceding feature (assuming the chain has a sufficiently high order), even though the models are typically not trained accordingly. For these reasons, we prefer to use signal sensors which impose a moderate margin around their respective signals, both to detect any biologically relevant biases which might exist within those margins, and to ensure that content sensors condition their probabilities only on nucleotides within the same feature. Given the foregoing, it is necessary to utilize a separate "holding queue" for signals which have recently been detected by their signal sensors but which have context windows still overlapping the current position in the DSP algorithm. The reason for this is that propagator updates via Equations 13–15 must not be applied to signals having context windows overlapping any nucleotides already accounted for in the accumulator scores, since to do so would be to double-count probabilities. It is therefore necessary to observe the following discipline. Associated with each signal queue G i there must be a separate holding queue , H i . When a signal is instantiated by a signal sensor it is added to the appropriate H i rather than to G i . 
As the algorithm advances along the sequence, at each new position we must examine the contents of each holding queue H i to identify any signal having a context window which has now passed completely to the left of the current position. If one or more such signals are identified, then we first update the propagators of all the signals in the main queue G i using Equations 13–15, then zero-out the values of the accumulator α i for that queue, and then allow the recently passed signals to graduate from H i to G i . Observe that at this point all the signals in G i have in their propagators scores which have effectively been propagated up to the same point in the sequence, and that point is immediately left of the current position; this invariant is necessary for the proper operation of the algorithm. All content sensors are then evaluated at the current position and their resulting single-nucleotide scores are used to update the accumulators for their respective queues. Finally, whenever it becomes necessary to evaluate the signals in some queue G i as possible predecessors of a new signal, we must first update the propagators of all the elements of G i as described above, so that the comparison will be based on fully propagated scores. Equivalence of DSP and PSA We now give a proof that DSP is mathematically equivalent to PSA, since it may not be entirely obvious from the foregoing description. We will consider only the forward strand cases; the proof for the reverse strand cases can be derived by a series of trivial substitutions in the proof below. To begin, we show by induction that the signal propagator π j [ω] for signal s j is initialized to the PSA inductive score R I ( s j ,ω). For the basis step, recall that the left terminus anchor signals were initialized to have zero scores in both PSA and DSP, regardless of whether a given signal began a coding or noncoding feature. 
In the case of coding features, substituting Equation 19 into Equation 23 yields: π j [ω] ← π i [(ω - Δ) mod 3] + R D ( s i , s j ) + R T ( s i , s j ) + R S ( s j ). (24) According to Equation 10, this initialization will result in π j [ω] = R I ( s j ,ω) only if: π i [(ω - Δ) mod 3] = R I ( s i ,γ i ) + R C ( s i , s j ,ω), (25) where γ i = (ω - Δ) mod 3 according to Equation 8. At the time that signal s j is instantiated by its signal sensor, π i has been propagated up to e = wpos ( s j ) - 1, the nucleotide just before the leftmost position of the context window for s j . By the inductive hypothesis, π i [γ i ] was initialized to R I ( s i ,γ i ). This initialization occurred at the time when the current DSP position was at the beginning of the predecessor's context window. Note, however, that π i effectively began receiving updates at position b = wpos ( s i ) + wlen ( s i ), the position immediately following the end of the signal's context window, at which point s i graduated from its holding queue. Thus, π i [γ i ] will have accumulated content scores for positions b through e , inclusive. In order to establish Equation 25, we need to show that these accumulations sum to precisely R C ( s i , s j ,ω). Substituting Equation 16 into Equation 14 we get the following formula describing propagator updates as if they came directly from content sensor M : ∀ 0≤ω≤2 π[ω] ← π[ω] + log P M [(ω+ Δ ) mod 3] ( x f ), (26) where Δ = f -( pos ( s i ) + len ( s i )) is the distance between the rightmost end of signal s i and the current position f in the DSP algorithm. Let us introduce the notation: F( i , j ,ω) = ∑ k = i .. j log P M [(ω+ k ) mod 3] ( x k ). (27) Using this notation, π i [γ i ] has since its initialization accumulated F( b , e ,γ i - pos ( s i ) - len ( s i )); this can be verified by expanding this expression via Equation 27 and observing that the result equals a summation of the log term in Equation 26 over f = b to e . 
Looking at init_phased(), it should be obvious that the effect of lines 5 and 8 will be that: α i ,γ [ h ] = ∑ k = 0.. h log P M [( k +γ) mod 3] ( x k ) = F(0,h,γ). (28) According to Equation 5, showing that π i [γ i ] has accumulated R C ( s i , s j ,ω) is therefore equivalent to: F ( b , e ,ψ) = F (0, wpos ( s j ) - 1,γ) - F (0, wpos ( s i ) + wlen ( s i ) - 1,γ), (29) where ψ = γ i - pos ( s i ) - len ( s i ) and γ = ω - pos ( s j ). Equivalently: F ( b , e ,ψ) = F (0, e ,γ) - F (0, b - 1,γ). (30) To see that ψ ≡ γ( mod 3), observe that pos ( s j ) - ( pos ( s i ) + len ( s i )) = Δ, the length of the putative exon (possibly shortened by three bases, in the case where s i is a start codon), and further that γ i - ω ≡ -Δ( mod 3) according to Equation 8, so that ψ - γ ≡ Δ-Δ ≡ 0( mod 3). Thus, Equation 30 is equivalent to: F ( b , e ,γ) = F (0, e ,γ) - F (0, b - 1,γ), (31) which can be established as a tautology by simple algebra after expansion with Equation 27. This shows that the signal propagator for signal s j is initialized to the PSA inductive score R I ( s j ,ω), and thus establishes the inductive step of the proof in the case of coding features. To see that the above arguments also hold for noncoding features, note that Equation 21 simplifies Equation 25 to: π i [ω] = R I ( s i ,ω) + R C ( s i , s j ), (32) that Equations 13 and 18 combine to simplify Equation 26 to: ∀ 0≤ω≤2 π[ω] ← π[ω] + log P M ( x f ), (33) and that lines 4 and 6 of init_nonphased() cause: α i [ h ] = ∑ k = 0.. h log P M ( x k ) = F NC (0, h ), (34) for F NC ( i , j ) = ∑ k = i .. j log P M ( x k ). We can thus reformulate Equation 29 as: F NC ( b , e ) = F NC (0, wpos ( s cur ) - 1) - F NC (0, wpos ( s pred ) + wlen ( s pred ) - 1), (35) or, equivalently: F NC ( b , e ) = F NC (0, e ) - F NC (0, b - 1), (36) which is again a tautology. 
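The phase bookkeeping at the heart of this argument can also be checked numerically: updating a coding accumulator in the absolute frame (Equation 16) and then shifting it into a signal's propagator (Equation 14) gives the same totals as updating the propagator directly, position by position (Equation 26). The model and signal coordinates below are invented for the check.

```python
import math

# Toy three-periodic 0th-order content model (invented probabilities).
P = [
    {"A": 0.5, "C": 0.2, "G": 0.2, "T": 0.1},
    {"A": 0.1, "C": 0.5, "G": 0.2, "T": 0.2},
    {"A": 0.2, "C": 0.2, "G": 0.5, "T": 0.1},
]

seq = "ATGACGTCAGGT"
pos, length = 2, 3            # hypothetical signal occupying seq[2..4]
start = pos + length          # propagation begins just right of the signal

# Accumulator updated once per position in the absolute frame f (Eq. 16).
alpha = [0.0, 0.0, 0.0]
for f in range(start, len(seq)):
    for w in range(3):
        alpha[w] += math.log(P[(w + f) % 3][seq[f]])

# Delayed update of the signal's propagator from the accumulator (Eq. 14).
pi_delayed = [alpha[(w - pos - length) % 3] for w in range(3)]

# Direct per-position propagator update (Eq. 26), for comparison.
pi_direct = [0.0, 0.0, 0.0]
for f in range(start, len(seq)):
    delta = f - (pos + length)
    for w in range(3):
        pi_direct[w] += math.log(P[(w + delta) % 3][seq[f]])

for w in range(3):
    assert abs(pi_delayed[w] - pi_direct[w]) < 1e-9
```

The delayed and direct totals agree because (w - pos - length) + f ≡ w + delta (mod 3), which is exactly the substitution used in the proof.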
In the interests of brevity, we leave it up to the reader to verify that the above arguments still apply when the noncoding features are intergenic, thereby invoking Equation 22 rather than Equation 21 in formulating Equation 31. To see that the selection of optimal predecessors is also performed identically in the two algorithms, note that the PSA criterion given in Equation 7 is equivalent to the argmax( R CI + R D + R T ) criterion of DSP as long as R CI ( s i ,ω) = R C ( s i , s j ,ω) + R I ( s i ,γ i ) at the time the optimal predecessor is selected, which we have in fact already shown by establishing Equation 25. Thus, DSP and PSA build identical trellises; application of the same traceback() procedure should therefore produce identical gene predictions. Fast decoding of Markov chains Markov chains are typically implemented in GHMM-based gene finders using hash tables, due to the simplicity of such an implementation. Thus, for a given Markov chain M we may utilize a hash table which associates the probability P M ( x j | x j - n .. x j -1 ) with the sequence x j - n .. x j . Although hash tables provide a relatively efficient solution for this task, they are wasteful in the sense that as we evaluate the chain on successive nucleotides in a sequence, we repeatedly manipulate preceding nucleotides in forming successive substrings to be indexed into the hash table. A much faster (and much more elegant) solution is to employ a Finite State Machine (FSM) in which states exist for all possible sequences of length n +1 or less, and where the state having label x j - n .. x j emits the probability P M ( x j | x j - n .. x j -1 ), for n th -order Markov chain M . In this way, the transition probabilities of the Markov chain become the state emissions of the FSM. 
During a single left-to-right scan of a sequence, each base requires only a single two-dimensional array indexing operation to access the desired probability, and a single integer value store operation to remember the identity of the new state. When compared to the typical regime of arithmetic and bit-shift operations over an ( n +1)-element string that would be required for a typical hash function, the difference can be significant. Implementing this optimization is fairly straightforward, both for conventional Markov chains and for Interpolated Markov Models, whether homogeneous or three-periodic. Central to the method is a means of mapping between state labels and integer state identifiers for use in indexing into the transition table. The base-4 number system can be utilized for this purpose, assuming a nucleotide mapping such as ∇ = {A↔0, C↔1, G↔2, T↔3}. To account for lower-order states, define: B ( L ) = ∑ i = 0.. L -1 4^ i = (4^ L - 1)/3, (37) which gives the total number of strings of length less than L . Converting a string S = x 0 .. x L -1 to base-4 can be accomplished as follows: λ( S ) = ∑ i = 0.. L -1 4^( L -1- i ) ∇( x i ). (38) Now a string S can be mapped to a state index using: state ( S ) = B (| S |) + λ( S ), (39) where | S | denotes the length of S . Given this integer↔label mapping and an n th -order Markov chain in hash table format, the FSM state emissions can be initialized by indexing state labels into the hash table to obtain the Markov chain transition probabilities. The transition table can be initialized fairly simply by noting that the successor of state x 0 .. x L -1 upon seeing symbol s is x 1 .. x L -1 s if L = n + 1, or x 0 .. x L -1 s for L < n + 1. A model for the reverse strand can be handled by applying this scheme in reverse, so that the state with label x j - n .. x j emits the probability P M ( x j - n | x j - n +1 .. x j ), and the lower-order states are reserved for the end of the sequence rather than the beginning.
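The integer↔label mapping and successor rule can be sketched directly from the description above (nucleotide mapping A=0, C=1, G=2, T=3; the function names are ours):

```python
NUC = {"A": 0, "C": 1, "G": 2, "T": 3}

def B(L):
    """Number of strings of length less than L: the sum of 4^i for
    i = 0..L-1, i.e. (4^L - 1)/3."""
    return (4 ** L - 1) // 3

def lam(S):
    """Base-4 value of string S under the nucleotide mapping."""
    v = 0
    for c in S:
        v = v * 4 + NUC[c]
    return v

def state(S):
    """Map a state label to an integer state index (Equation 39)."""
    return B(len(S)) + lam(S)

def successor(S, c, n):
    """Label of the successor state upon reading symbol c, for an
    order-n chain: full-length states drop their oldest base."""
    t = S + c
    return t[1:] if len(t) > n + 1 else t

# An order-2 chain has states for all strings of length <= 3: B(4) = 85.
assert B(4) == 85
assert state("") == 0 and state("A") == 1 and state("TT") == 20
assert successor("AC", "G", 2) == "ACG"
assert successor("ACG", "T", 2) == "CGT"
```

The empty string maps to state 0 and lower-order labels occupy the low indices, which is what lets the FSM handle the first n bases of the sequence before a full-length context is available.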
Results Table 1 shows the memory and time requirements for two GHMM gene finders, one using the PSA algorithm and the other the DSP algorithm, on a 922 Kb sequence. Note that the DSP gene finder has 31 states, while the PSA gene finder explicitly evaluates only 6 states, so that they both give a ratio of 2.8 seconds per state on this sequence, while the ratio of memory per state is 14 Mb for the PSA gene finder and 0.95 Mb for the DSP gene finder. Thus, the DSP and PSA algorithms appear to consume the same amount of time per state, while DSP requires only a fraction of the memory (per state) required by PSA. Table 2 shows the results of applying the FSM optimization to a DSP gene finder to accelerate its content sensors. As can be seen from the table, the FSM approach reduces execution time by more than half (as compared to a hash table implementation), while also reducing total RAM usage. The DSP/FSM configuration reported here utilized both conventional Markov chains and Interpolated Markov Models, each represented using FSMs. Note that the hashing software used for comparison was a very efficient implementation which used native C character arrays; in particular, we did not use the C++ Standard Template Library (STL) implementations of string and hash , due to efficiency concerns regarding the re-copying of string arguments to the hash function. Our custom string hashing implementation was found to be much faster than the STL implementation (data not shown). Accordingly, one can expect an FSM implementation to show even greater gains as compared to an STL-based hashing implementation. We utilized our DSP-based gene finder TIGRscan [ 5 ] in the construction of our syntenic gene finder TWAIN, a Generalized Pair HMM which performs gene prediction in two genomes simultaneously. TWAIN operates by invoking a modified version of TIGRscan to build a directed acyclic graph of all high-scoring parses of each of the two input sequences.
Early experiments indicated that these parse graphs could be quite large in practice and might therefore require a significant portion of available RAM for their storage. In addition, the dynamic programming matrix used by TWAIN promised to be large as well. In anticipation of this problem, we developed TIGRscan using the DSP architecture, to minimize the memory requirements of the underlying GHMM, thereby freeing the remaining available memory for use by the rest of the machinery within TWAIN. As a result of these and other optimizations (such as our use of a sparse matrix representation for TWAIN's dynamic programming algorithm) we were able to apply TWAIN's gene prediction component to a pair of fungal genomes ( Aspergillus fumigatus and A. nidulans ) while consuming under 50 Mb of RAM, whereas an earlier prototype of this system applied to the same input data routinely exhausted all available memory on a computer with 1 Gb of RAM. We hope that optimizations such as those described here will allow us to apply TWAIN to other pairs of genomes with longer genes, and possibly to extend the program to handle more than two species simultaneously. Conclusions In describing a number of optimizations for GHMM-based gene finders and making available two complete open-source software systems embodying these methods, we hope to enable others to explore promising extensions to the GHMM framework, thereby improving the state of the art in gene prediction techniques. Availability and requirements * Project name: TIGRscan, GlimmerHMM * Project home page: * Operating system(s): Linux/UNIX * Programming language: C/C++ * Other requirements: compiled using gcc 3.3.3 * License: Artistic License, see * Any restrictions to use by non-academics: terms of Artistic License Authors' contributions The DSP algorithm was devised by WHM, who also performed the computational experiments and wrote the manuscript.
The PSA gene finder GlimmerHMM was implemented by MP. MP, ALD, and SLS provided detailed insights into the PSA architecture and provided valuable comments on the manuscript. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC552317.xml |
524518 | Genetic alterations and in vivo tumorigenicity of neurospheres derived from an adult glioblastoma | Pediatric brain tumors may originate from cells endowed with neural stem/precursor cell properties, growing in vitro as neurospheres. We have found that these cells can also be present in adult brain tumors and form highly infiltrating gliomas in the brain of immunodeficient mice. Neurospheres were grown from three adult brain tumors and two pediatric gliomas. Differentiation of the neurospheres from one adult glioblastoma decreased nestin expression and increased that of glial and neuronal markers. Loss of heterozygosity of 10q and 9p was present in the original glioblastoma, in the neurospheres and in tumors grown into mice, suggesting that PTEN and CDKN2A alterations are key genetic events in tumor initiating cells with neural precursor properties. | Recent data have proposed that brain tumors contain a "core" of stem cells providing them with the potential to grow aggressively, escaping the effects of radiotherapy and chemotherapy [ 1 , 2 ]. These cancer stem cells were isolated from medulloblastomas or gliomas and grew in vitro as neurospheres, suspended clonal aggregates containing cells with different levels of commitment [ 3 ]. Such observations, derived from pediatric tumors only, did not include data on the in vivo tumorigenicity of cancer stem cells. We have found that neurospheres from an adult glioblastoma (GBM) have the potential to express glial and/or neuronal markers and form highly infiltrating gliomas into the brain of immune-deficient mice. The neurospheres were derived from three adult brain tumors and two pediatric malignant gliomas (BT1–BT5, see Additional file 1 ). The neurospheres of BT1, a glioblastoma multiforme (GBM) were studied by flow-cytometry and immunohistochemistry. 
Under differentiating conditions (EGF-bFGF-LIF withdrawal and FBS addition) nestin expression decreased and BT1 neurospheres expressed high levels of neuronal and astrocytic markers. Remarkably, most of the cells expressed both such markers, suggesting the altered function of a complete differentiation program (see Additional file 2 ). To test their neoplastic potential we injected BT1 and BT2 (a central neurocytoma) neurospheres into nude mice. All the mice injected intracerebrally (i.c.) with BT1 neurospheres, but none of those injected subcutaneously (s.c.), developed brain tumors that were lethal after 3, 5 and 6 months, respectively. After 4 months, however, none of the mice injected with BT2 neurospheres developed a tumor. Adherent cells from the same two patients were also injected i.c. and s.c. into nude mice. Two of three mice injected i.c. with BT1 adherent cells, but none of those injected with BT2 cells, developed brain tumors that were lethal 4 and 5 months after injection, respectively. All the brain tumors in nude mice appeared as large, infiltrating gliomas (Fig. 1A-B ) with features of a grade II-III oligoastrocytoma (Fig 1D-E ). Both the primary tumor (Fig 1F ) and the tumors in nude mice (Fig 1G-H ) expressed nestin. Figure 1 Histological analysis of BT1 and BT1-derived tumors in nude mice. BT1 neurospheres (1 × 10e5) were stereotactically injected into the left hemisphere of nude mice (Charles River Italia, Calco, Italy; n = 3) or subcutaneously (n = 3). Nude mice were also injected with 1 × 10e5 BT1 adherent cells into the brain (n = 3) or subcutaneously (n = 3). Cells from BT2 were injected with similar procedures into nude mice. Control mice (n = 3) were injected with 1 × 10e5 neural stem/progenitor cells obtained from C57BL6J mice with previously described methods [11]. Fig. 1A-B shows GFAP staining (brown) of coronal sections of the tumor derived from neurospheres (1A) or from adherent cells (1B).
The right part of the figures corresponds to the left hemisphere, where cells were injected. Fig. 1C-E show H-E staining of the primary tumor with features of a glioblastoma multiforme (1C) and of a tumor in mouse brain derived from neurospheres, showing an area with a prevailing aspect of oligodendroglioma (1D) or adherent cells, exhibiting anaplastic changes (1E). Fig. 1F-H show nestin staining of the primary tumor (1F) and of a tumor in mouse brain derived from neurospheres (1G) or adherent cells (1H). The five chromosomal regions showing frequent allelic imbalance in gliomas (1p, 9p, 10q, 17p and 19q) were investigated on six specimens obtained from BT1 surgery. No allelic loss was detected in specimen 1 (S1; frontal area); S2 and S3 (fronto-temporal area) showed LOH on chromosome 10q; S4 and S5 (temporal area) had LOH on 10q and 9p (Fig 2 ). Neurospheres were derived from S6 (temporal) and their analysis showed the same alterations as S5, i.e. LOH on 10q and 9p (Fig. 2 ). Adherent cells derived from S6 did not show any detectable LOH, and no alteration was found on 1p, 17p and 19q. In the primary tumor the allelic imbalance was partial; in the neurospheres, by contrast, it was complete. Interestingly, not only tumors derived from BT1 neurospheres but also the tumor from adherent cells showed LOH on 9p and 10q (Fig 2 ). Figure 2 Genetic analysis on BT1, BT1-neurospheres and adherent cells and BT1-tumors in nude mice. DNA was extracted from frozen tissues, cell cultures or lymphocytes, using standard protocols. Primers, microsatellite markers and PCR conditions for LOH analysis were described before [12]. We also investigated markers 9S157 and 9S171 flanking the CDKN2A gene on 9p21. Before performing microsatellite analysis on mouse tumors we confirmed that the PCR primers did not hybridize to mouse DNA. For cytogenetic analysis cells were harvested with 0.1 μg/ml Colcemid (Karyomax Colcemid, Life Technologies) overnight.
Hypotonic treatment, fixation and GTG banding of metaphase chromosomes were performed with standard methods. The karyotypes were described in accordance with ISCN guidelines. Spectral karyotyping was performed on metaphase cells according to the manufacturer's instructions (ASI, Carlsbad, CA) and to published procedures [13]. Spectral images were acquired and analyzed with an SD200 Spectral Bio-imaging System (ASI Ltd., MigdalHaemek, Israel) and a charge-coupled device camera (Hamamatsu, Bridgewater, NJ) connected to a Zeiss Axioskop 2 microscope (Carl Zeiss, Canada) and analyzed by the use of SKYVIEW (version 1.6.1; ASI) software. The upper panel shows the results of LOH analysis on 9p and 10q of the different samples outlined on the left. The lower panel illustrates a representative spectral karyotype of neurospheres obtained with the simultaneous hybridization of 24 combinatorially labeled chromosome painting probes. Karyotype display of chromosome banding (inverted DAPI) and SKY analysis (chromosomes were assigned a pseudo-color according to the measured spectrum) are shown. The number (7) next to the marker chromosome (der(3)) indicates the origin of inserted material. Cytogenetic analysis of BT1 neurospheres showed a pseudo-diploid karyotype with monosomy of chromosomes 9, 10 and 18, trisomy of chromosomes 19 and 20, and the presence of three marker chromosomes. A pseudo-tetraploid clone was also present, resulting from duplication of the pseudo-diploid clone and with the same numerical and structural abnormalities (Fig. 2 ). The G-banding karyotype of BT1 adherent cells was 46, XY. SKY analysis confirmed the numerical changes (monosomies and trisomies) shown by G-banding and allowed us to unravel the nature of one of the marker chromosomes as a der(3)ins(3;7)(3pter→3q11::7q11→7q22::3q11→3qter). Three observations are provided by the follow-up of nude mice injected with BT1 cells. First, tumors developed only in the brain and not subcutaneously.
Thus, in BT1 the cancer "stem" cells needed to be in their niche, i.e. the brain, to develop tumors, and the evolution of these tumors closely resembled that of "real" gliomas. The phenotype of such gliomas, however, appeared less aggressive than in the original tumor, possibly because the cancer "stem" cells were conditioned by in vitro passaging and by growth in the brain of immune-deficient mice. Second, the tumors obtained from neurospheres were completely different from those obtained from established cell lines like U87, 9L, C6 or F98: they grew more slowly, were highly infiltrating and showed a morphological pattern resembling that of an anaplastic, mixed glioma, but without the necrotic areas and palisade cells typical of a GBM (compare Fig. 1C with 1D-E ). LOH studies demonstrated the loss of a region of chromosome 10q where PTEN is located. PTEN is a critical tumor suppressor gene in GBM but also has an important role in the regulation of neural stem cell proliferation [ 4 - 6 ]. Its loss can therefore be a central event in the neoplastic derangement of brain cancer "stem" cells. We also found combined 9p LOH associated with 10q LOH in S4–5 and in the neurospheres, but not in S2–3, suggesting that 9p LOH is secondary to that on 10q. LOH on 9p suggests the alteration of the important tumor suppressor gene CDKN2A , encoding p16 and p14(ARF). p16 expression is absent or defective in glioblastomas [ 7 , 8 ] and p16 has an important role in the terminal differentiation of neural precursor cells [ 9 ]. Furthermore, p16 is the main target through which Bmi1 regulates neural stem cell differentiation and self-renewal [ 10 ]. Third, LOH on 10q and 9p was present not only in the original tumor and in neurospheres but also in neurosphere-derived gliomas in nude mice. Remarkably, even though adherent cells had a normal karyotype and no allelic imbalance, the derived tumors did show 10q and 9p LOH.
This suggests that a few adherent cells with these genetic abnormalities escaped our analysis and underwent a positive selection in vivo. These results, therefore, point to PTEN and CDKN2A alterations as critical events in tumor initiating cells, a definition synonymous with cancer stem cells. The identification of neurospheres from adult brain tumors, and specifically from an adult GBM, is strengthening the case for the importance of cancer "stem" cells in the genesis of these malignancies. A thorough genetic dissection of such cells on a larger scale should give new insights into the therapeutic targeting of these cancer "queen-bee" cells. Supplementary Material Additional File 1 Additional file 1 (Tunici et al-Additional file 1.doc) contains Methods with references, comments on in vitro data and the legend to the additional file 2. Click here for file Additional File 2 Additional file 2 (Tunici et al-Additional file 2.ppt) contains figures of brain tumor neurospheres, and flow cytometry and immunohistochemical data for their characterization. Click here for file | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC524518.xml
553974 | A class of models for analyzing GeneChip® gene expression analysis array data | Background Various analytical methods exist that first quantify gene expression and then analyze differentially expressed genes from Affymetrix GeneChip ® gene expression analysis array data. These methods differ in the choice of probe measure (quantification of probe hybridization), summarizing multiple probe intensities into a gene expression value, and analysis of differential gene expression. Research papers that describe these methods focus on performance, and how their approaches differ from others. To better understand the common features and differences between various methods, and to evaluate their impact on the results of gene expression analysis, we describe a class of models, referred to as generalized probe models (GPMs), which encompass various currently available methods. Results Using an empirical dataset, we compared different formulations of GPMs, and GPMs with three other commonly used methods, i.e. MAS 5.0, dChip, and RMA. The comparison shows that, on a genome-wide scale, different methods yield similar results if the same probe measures are chosen. Conclusion In this paper we present a general framework, i.e. GPMs, which encompasses various methods. GPMs permit the use of a wide range of probe measures and facilitate appropriate comparison between commonly used methods. We demonstrate that the dissimilar results stem primarily from different choices of probe measure, rather than other factors. | Background Microarray experiments are routinely conducted to assess associations of experimental factors (or disease outcomes) with gene expression profiles. The Affymetrix GeneChip ® gene expression analysis array, one of the most commonly used microarray technologies, uses multiple oligonucleotides (25-mers) to measure expression abundance of a single gene.
Recognizing that non-specific hybridization could significantly alter the accurate quantification of transcript abundance, Affymetrix designs the array to contain two types of probes. Probes that are perfectly complementary to the target sequence, called Perfect Matches (PM), are intended to measure mainly specific hybridization. A second set of probes identical to PM except for a single nucleotide in the center of the probe sequence (the 13 th nucleotide), called Mismatches (MM), are intended to quantify non-specific hybridization [ 1 ]. A PM and its corresponding MM constitute a probe pair, and multiple probe pairs, i.e. a probe set, are summarized to measure transcript abundance for a particular gene. "Probe measure" is used in this paper to refer to the manner in which probe hybridization is quantified based on a pair of PM and MM intensity values. For example, PM-MM is a probe measure, and PM only is another probe measure. A number of methods have been developed to quantify gene expression abundance from GeneChip ® expression analysis array data using different probe measures and summary schemes. Among them, Microarray Suite 5.0 (MAS 5.0) [ 1 ], dChip [ 2 ] and robust multiple-array average (RMA) [ 3 ] are the best known. Prior to MAS 5.0, the probe measure used in MAS 4.0 was PM-MM [ 4 ]. A problem arises because a significant proportion of MM values (~33% in the HuGeneFL array and ~25% in the Human Genome U133A array) is greater than the corresponding PM values, which makes PM-MM negative. To resolve this anomaly, in MAS 5.0, Affymetrix computes an "ideal mismatch" (IM) based on missing data theory such that PM-IM is always greater than zero [ 1 ]. Then, all probe pairs are used to estimate a gene expression value based on Tukey's Biweight algorithm. However, even with the use of IM, the variation among probes could be greater than that between samples.
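The negative PM-MM problem described above is easy to reproduce numerically. The sketch below uses simulated intensities (all values hypothetical), and the simple floor applied here is only a stand-in for Affymetrix's actual ideal-mismatch computation, which is derived from missing-data theory:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated probe intensities (hypothetical): MM can exceed PM when the
# specific signal is weak, making the naive PM-MM measure negative.
pm = rng.gamma(shape=2.0, scale=100.0, size=10_000)
mm = 0.5 * pm + rng.gamma(shape=2.0, scale=60.0, size=10_000)

naive = pm - mm
frac_negative = float(np.mean(naive < 0))   # the anomaly MAS 5.0 avoids

# Crude stand-in for the "ideal mismatch": never subtract more than a
# fixed fraction of PM, so the corrected measure stays strictly positive.
im = np.minimum(mm, 0.9 * pm)
corrected = pm - im
```

With these simulated values a nontrivial fraction of the naive PM-MM measures is negative, while every corrected value remains positive by construction.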
Li and Wong modelled probe level data to generate a model based expression index (MBEI) and implemented it in the dChip software [ 2 ]. Noting that probe specificity is significant, highly reproducible and predictable, Li and Wong used a hybridization rate parameter to account for the hybridization specificity of a probe. For a probe pair, hybridization rates are different for PM and MM; the former is always greater than the latter, and both are greater than zero. The rate was fixed for the same probe across all the samples. Both PM and MM together, or PM only, can be used in the Li and Wong model. Another approach, RMA, available from Bioconductor [ 5 ], summarizes probe intensities into a gene expression measure based on an additive model on the logarithmic scale of a background corrected PM (PM rma ) [ 3 ]. RMA estimates a common mean non-specific hybridization background (for an entire chip) from PM using a convolution model and then subtracts this background from PM to generate the PM rma . The gene expression values obtained from MAS 5.0, dChip or RMA can then be associated with experimental factors using an algorithm of the users' choice. Three main factors affect the analytical results of differential gene expression analysis: the probe measure chosen, the algorithm used to summarize probe level data into gene expression (called the summary algorithm in this paper), and the model used to associate gene expression with the experimental factors (called the association model). Direct comparisons of the various approaches proposed for analyzing GeneChip ® gene expression data are complicated by these three factors. Generalizing the various algorithms into one framework would facilitate comparisons. In this paper we propose a class of generalized probe models (GPMs) that includes various analytical approaches for GeneChip ® gene expression analysis array data as special cases.
Using an empirical dataset, we assess the impact of different processes on the analytical results by comparing different formulations of GPM as well as GPMs with three other methods, MAS 5.0, dChip, and RMA. Results We applied GPM to the analyses of data obtained from a study investigating gene response to ATRA (all trans retinoic acid) or drug diluent (ETOH, ethyl alcohol). Briefly, at twenty-four hours after treatment, total RNA was extracted from cells, processed and hybridized to the HuGeneFL GeneChip ® . The dataset consists of ten samples from the ATRA treatment group and ten samples from the control group (ETOH treated) in four medulloblastoma cell lines. We are interested in identifying genes that are differentially expressed between the two treatment groups. We used three different probe measures: PM-IM, PM only and PM rma , and compared the performance of different methods using standardized coefficients, defined as the estimated coefficient divided by its standard error. The reason for using this index is that the standardized coefficients, usually known as Z-score test statistics, are independent of scale, and may be used to make statistical inference. GPM-1 (2), GPM-2 (3) and GPM-3 (4) can be derived from the full GPM model (1) by making different statistical modeling assumptions. GPM-1 takes summarized gene expressions and associates them with experimental factors; GPM-2 and GPM-3 directly associate probe level data with experimental factors without first summarizing gene expressions (see Methods). Comparison of GPMs with three other commonly used methods We compared GPMs with three commonly used methods, i.e. MAS 5.0, dChip, and RMA. Each of these methods dictates its own specific probe measure, i.e. PM-IM in MAS 5.0, PM-MM, or PM only in dChip and PM rma in RMA. 
We found that all the methods were similar when the same probe measure was used, and the dissimilarity between MAS 5.0 and the other PM-based approaches most likely stems from the different probe measure used. We first computed the gene expression using the software available for MAS 5.0, dChip (using the PM only option) and RMA, and then estimated the standardized coefficients for each gene with the association model GPM-1. We refer to these analytical options hereafter as MAS 5.0 PM-IM, dChip PM and RMA PM rma , respectively (in MAS 5.0 PM-IM we omit the term GPM-1, which indicates the association model used, since GPM-1 is the only model in the GPMs that handles gene expression values). Figure 1 shows the pair-wise comparisons among MAS 5.0 PM-IM, dChip PM, and RMA PM rma . For each pair-wise comparison, we plotted the standardized coefficients for each pair in an XY plot. To assess the similarity between two methods, we computed the correlation coefficients (R) between the standardized coefficients generated from the two methods. In addition, we computed the mean squared error (MSE) between the two standardized coefficients, i.e. MSE = (1/N) Σ_j (Z_j1 - Z_j2)^2, where N is the total number of genes and Z_j1 and Z_j2 are the standardized coefficients for the j th gene under the two methods, respectively. When two methods are similar, the XY plot of their standardized coefficients will lie closely along the diagonal line. Correspondingly, the correlation coefficient will be closer to one and the MSE will be closer to zero. In Figure 1 , we see smaller R and larger MSE in the comparisons of MAS 5.0 PM-IM versus dChip PM, and MAS 5.0 PM-IM versus RMA PM rma , compared to dChip PM versus RMA PM rma . Figure 1 Comparison among MAS 5.0, dChip and RMA. Gene expression was computed from MAS 5.0, dChip, and RMA using the probe measure dictated in the methods. Standardized coefficients for each method were estimated using the association model GPM-1 and were plotted pair-wise.
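The two agreement measures used above are straightforward to compute. A minimal sketch (the Z-scores below are simulated, not from the study's dataset):

```python
import numpy as np

def compare_methods(z1, z2):
    """Correlation coefficient R and mean squared error between two sets of
    standardized coefficients (one value per gene), used to judge whether
    two analysis pipelines agree on a genome-wide scale."""
    z1 = np.asarray(z1, dtype=float)
    z2 = np.asarray(z2, dtype=float)
    r = np.corrcoef(z1, z2)[0, 1]
    mse = np.mean((z1 - z2) ** 2)   # (1/N) * sum_j (Z_j1 - Z_j2)^2
    return r, mse

# Two hypothetical pipelines that mostly agree: R near one, MSE near zero.
rng = np.random.default_rng(1)
z_a = rng.normal(size=1000)
z_b = z_a + rng.normal(scale=0.1, size=1000)
r, mse = compare_methods(z_a, z_b)
```

Dissimilar pipelines move R away from one and inflate the MSE, which is exactly the pattern reported for MAS 5.0 PM-IM versus the PM-based options.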
Next, using probe measures of PM only and PM-IM, we directly (in a single step) estimated standardized coefficients with GPM-2 and GPM-3 (referred to as GPM-2 PM, GPM-2 PM-IM, GPM-3 PM and GPM-3 PM-IM; we omit the term indicating the summary algorithm, since GPM-2 and GPM-3 directly associate the probe level data with the experimental factors without first summarizing across all probes). Figure 2 shows the pair-wise comparisons among MAS 5.0 PM-IM, GPM-2 PM, GPM-3 PM, GPM-2 PM-IM, and GPM-3 PM-IM. We see greater similarity between MAS 5.0 PM-IM and GPM-2 PM-IM or GPM-3 PM-IM (Figure 2 , second row) than between MAS 5.0 PM-IM and GPM-2 PM or GPM-3 PM (Figure 2 , first row). In the latter comparisons, only the probe measure is different, indicating that the probe measure plays a more significant role than the combined effect of the summary algorithm and the association model. Figure 2 Impact of probe measure on analytical results. Gene expression from MAS 5.0 was used to estimate standardized coefficients using the association model GPM-1. Either PM-IM or PM only was used directly in the GPM-2 or GPM-3 models. Standardized coefficients were plotted pair-wise. Comparison among the GPMs We also compared the results from GPM-1 (2), GPM-2 (3) and GPM-3 (4) to evaluate their differences, and found similar results when using the same probe measure. We selected the top eight candidate genes from the results of MAS 5.0 PM-IM and used them to compare the performance of GPM-2 PM-IM and GPM-3 PM-IM. In Table 1 , for the eight selected genes we list estimated coefficients, their standard errors and standardized coefficients, estimated under the three GPM models. From Table 1 , for these eight selected genes, the statistics generated from the three GPM formulations are similar using the PM-IM probe measure. Next, to compare the standardized coefficients on a genome-wide scale, Figure 3 panel A shows the pair-wise comparisons using the PM-IM probe measure.
Figure 3 panel B shows the pair-wise comparisons using PM only and PM rma . The six plots in Figure 3 demonstrate the similarity of standardized coefficients on a genome-wide scale among variants of GPMs when the same (or similar, in the case of PM versus PM rma ) probe measures are used in the analyses.

Table 1. Estimated coefficients (β), their standard errors (SE) and standardized coefficients (Z) for eight candidate genes from GPM-1 (using the MAS 5.0 expression measure), GPM-2 and GPM-3, all with the PM-IM probe measure:

Probe set ID       GPM-1 β/SE/Z         GPM-2 β/SE/Z         GPM-3 β/SE/Z
Y00291_at          3.17/0.48/6.65       2.79/0.36/7.74       3.05/0.35/8.60
L13738_at          0.80/0.18/4.50       0.65/0.13/4.89       0.71/0.12/5.69
D79990_at          3.04/0.68/4.47       2.92/0.67/4.30       3.10/0.63/4.89
M13666_at          1.07/0.25/4.24       1.11/0.24/4.60       1.31/0.22/5.93
X02158_rna1_at    -1.07/0.25/-4.24     -0.92/0.24/-3.86     -1.40/0.25/-5.73
X84002_at          0.63/0.15/4.18       0.33/0.12/2.63       0.25/0.11/2.26
L19605_at          0.51/0.12/4.16       0.51/0.09/5.53       0.52/0.09/6.02
M60503_at         -2.05/0.50/-4.13     -1.56/0.35/-4.44     -2.25/0.38/-5.96

Figure 3 3A: Comparison among GPMs using PM-IM. Gene expression from MAS 5.0 was used to estimate standardized coefficients using the association model GPM-1. Standardized coefficients from GPM-1, GPM-2 and GPM-3 with the same PM-IM probe measure were plotted pair-wise. 3B: Comparison of GPM-2, GPM-3, dChip and RMA using PM. Gene expression from dChip and RMA was used to estimate standardized coefficients using the association model GPM-1. Standardized coefficients from GPM-2, GPM-3, dChip and RMA with probe measures of PM or PM rma were plotted pair-wise. In summary, we conclude that the GPMs are similar to MAS 5.0, dChip and RMA on a genome-wide level when using the same probe measures, and that the choice of probe measure may be more important than the summary algorithm used to obtain gene expression or the model used to compute the coefficients.
Discussion In this paper, we have described a general framework that can be used to compare various methods and evaluate their similarities and differences. We found that various methods tend to generate similar results, on a genome-wide scale, when the same probe measure is chosen, and the probe measure seems to have a greater impact on the analytical results than other factors. In Figure 1 , we compared the standardized coefficients estimated with GPM-1 using gene expression computed from MAS 5.0, dChip and RMA with their own dictated probe measures. Since we consistently used GPM-1 as the association modeling machinery for each analysis, we assessed the combined impact of probe measure and summary algorithm. We found that the results obtained from dChip PM and RMA PM rma were similar to each other, but different from those obtained from MAS 5.0 PM-IM. Although dChip PM and RMA PM rma use different summary algorithms, their analytical results are similar due to the PM based probe measures used in both analyses. In Figure 2 , we see again that, on a genome-wide scale, results from MAS 5.0, GPM-2 and GPM-3 are more similar when the same probe measure is used than when the probe measures are different, indicating that the probe measure plays a key role in determining the similarity of results from two methods. Our preliminary analyses suggest that the choice of probe measure has a bigger impact on the results than the summary algorithm and association modeling. For the three variants of GPMs, we compared the standardized coefficients from GPM-2 or GPM-3 with those from GPM-1 using the gene expression values computed in MAS 5.0, dChip and RMA. From the high R values (and correspondingly, low MSEs) in the six plots shown in Figure 3 , we infer that the standardized coefficients obtained from variants of GPMs are similar when they use the same probe measure.
For seven of the eight candidate genes selected by GPM-1 using gene expression values generated by MAS 5.0, the gene-specific regression coefficients were similar among MAS 5.0 PM-IM, GPM-2 PM-IM and GPM-3 PM-IM. This indicates that for these seven genes it makes little difference whether one uses summary measures or models the probe level data directly in GPM-2 or GPM-3, when the same probe measure is used. In addition to the three factors we mentioned (i.e. choice of probe measure, summary algorithm and association modeling) that have an impact on analytical results, data pre-processing/normalization could also affect the analytical results. Some researchers combine the probe measure and pre-processing normalization together. Normalization matters most when the arrays in an experiment are not comparable to each other. In such cases, the normalization process could significantly affect the results. In our case, we normalized the data in the GPMs using a regression-based approach [ 6 ], either at the probe level in GPM-2 and GPM-3, or at the gene expression level in GPM-1. The expression measures obtained from dChip and RMA were normalized by their own normalization schemes. However, even with the different normalization schemes, probe measure appears to be the primary factor affecting the results in our data set. An important feature of the framework presented in this paper is that it accommodates various probe measures (see Table 2 ) to quantify the abundance of the transcript. A question arises: how does one combine results from analyses using different probe measures? This is the dilemma we face when we analyze thousands of genes simultaneously. On the one hand, microarray technology is still imperfect and it is prudent to evaluate a number of exploratory approaches. On the other hand, by the very nature of the problem, it is unlikely that a single approach will be equally appropriate for each gene.
The reality is that microarrays afford a rapid preliminary assessment of thousands of genes for future experimental validation. Ultimately, any scientific validation has to be drawn from further bench experiments.

Table 2. A list of selected probe measures (Z_jik denotes the probe measure for the i th probe of the j th gene on the k th chip; y_ji1k and y_ji0k are the PM and MM intensities, IM_jik is the ideal mismatch, and b_k is the mean background estimated from PM for the k th chip):
1. MAS 4.0-equivalent: Z_jik = y_ji1k - y_ji0k (direct difference between PM and MM)
2. MAS 5.0-equivalent: Z_jik = y_ji1k - IM_jik
3. PM only: Z_jik = y_ji1k (ignore MM)
4. RMA-equivalent: Z_jik = log(y_ji1k - b_k)
5. Log ratio: Z_jik = ln(y_ji1k / y_ji0k) (difference on the logarithmic scale)
6. Log difference: Z_jik = ln(y_ji1k - y_ji0k) (difference on the logarithmic scale)
7. Log PM: Z_jik = ln(y_ji1k) (PM only on the logarithmic scale)
8. Box-Cox on PM: Z_jik = (y_ji1k^ω - 1) / ω
9. Box-Cox on PM-IM: Z_jik = [(y_ji1k - IM_jik)^ω - 1] / ω

To facilitate the evaluation and use of GPMs, we have developed a software program, called ProbePlus, that implements our GPMs. This program will be made available to academic researchers through the website. Conclusions In this paper we describe a general framework to analyze GeneChip ® gene expression analysis array data. This framework is flexible enough to permit comparisons of different methods with respect to the choice of probe measure and the analytical models used. We found that different methods yield similar results when the probe measures are the same. Methods The generalized probe model Consider an experimental study with K chips. Each chip is engineered to assess levels of J gene expressions. Each gene has I probe pairs. Now let y_jilk denote the intensity value for the j th gene ( j = 1,2... J ), the i th probe pair ( i = 1,2... I ), PM ( l = 1) or MM ( l = 0), and the k th sample ( k = 1,2... K ). Table 3 displays the notation for a typical microarray dataset.
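Several of the probe measures listed in Table 2 can be sketched as simple array transformations. This is only an illustration, not the ProbePlus implementation; the IM-based variants are omitted because the ideal-mismatch computation is more involved:

```python
import numpy as np

def probe_measure(pm, mm, kind, omega=0.5):
    """Selected probe measures from Table 2, computed from PM (y_ji1k)
    and MM (y_ji0k) intensity arrays."""
    pm = np.asarray(pm, dtype=float)
    mm = np.asarray(mm, dtype=float)
    if kind == "diff":        # MAS 4.0-equivalent: PM - MM
        return pm - mm
    if kind == "pm_only":     # ignore MM
        return pm
    if kind == "log_ratio":   # ln(PM / MM)
        return np.log(pm / mm)
    if kind == "log_pm":      # ln(PM)
        return np.log(pm)
    if kind == "boxcox_pm":   # Box-Cox: (PM**omega - 1) / omega
        return (pm ** omega - 1.0) / omega
    raise ValueError(f"unknown probe measure: {kind}")

# Hypothetical intensities for one probe set on one chip.
pm = np.array([200.0, 150.0, 90.0])
mm = np.array([120.0, 160.0, 30.0])
z_diff = probe_measure(pm, mm, "diff")   # second entry is negative: 150 - 160
```

The negative entry in `z_diff` shows, on a tiny scale, why the MAS 4.0-equivalent measure motivated the IM correction discussed in the Background.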
The probe intensity y_jilk, quantifying the abundance of the RNA hybridized on a probe, is treated as a random variable, influenced by the effects of probe-specific hybridization, gene-specific hybridization, non-specific hybridization and random noise. In this paper we use Z_jik to denote the quantification of the signal of the i th probe of the j th gene from the k th sample. Z_jik could be based on any probe measure, such as PM only or PM-IM (some other selected probe measures are listed in Table 2 ). Table 3. A typical probe-level dataset generated from a GeneChip ® gene expression analysis array: rows are indexed by gene (ORF 1, ..., ORF j, ..., ORF J), probe (1, ..., N) and probe type (PM, l = 1; MM, l = 0), and columns by sample (1, ..., K) with covariates x_1, ..., x_K; the PM row of the i th probe of the j th gene is (y_ji11, y_ji12, ..., y_ji1K) and its MM row is (y_ji01, y_ji02, ..., y_ji0K). In a typical experiment as described above, it is frequently of interest to discover genes that are significantly associated with one or more experimental covariates x_k. For example, consider an experiment to discover genes that are differentially expressed between two groups: x_k takes binary values, x_k = 0 for the control group and x_k = 1 for the treatment group.
To achieve the scientific objective, the analytic procedure is to assess associations of Z_jk = (Z_j1k, Z_j2k, ..., Z_jNk)' with covariates x_k via the distribution function f(Z_jk | x_k). In essence, the Z_jik are treated as vectors of multivariate correlated outcome variables, and used to identify the probes/genes that are differentially expressed. Recognizing the high dimensionality of multiple probes and multiple genes, we propose to apply a marginal model that uses marginal means to describe the relationship of probes/genes with the covariates without the necessity of specifying the full distribution f(Z_jk | x_k). Our framework directly associates experimental factors with probe intensities and is referred to as the generalized probe model or GPM. We propose the following model (1) to describe the relationship between Z_jk and x_k: Z_jik = δ_k + λ_k v_jik + ξ_jik, with v_jik = τ_ji + β_ji x_k, (1) where (δ_k, λ_k) are chip-specific heterogeneity factors for the k th chip [ 7 ], τ_ji are gene- and probe-specific parameters quantifying the mean intensity value for the i th probe of the j th gene, β_ji are gene- and probe-specific parameters quantifying the difference between treated and control groups, and v_jik quantifies the expression value for an individual probe pair. Lastly, (ξ_j1k, ξ_j2k, ..., ξ_jNk) represents a vector of gene- and probe-specific random variations across K independent samples. Since probe pairs are selected to target the j th gene and are spatially arranged by a pre-selected design to eliminate common artifacts, they may be correlated because of cross-hybridization or spatial dependencies. From the biological perspective, specifying a joint distribution for (ξ_j1k, ξ_j2k, ..., ξ_jNk) would be difficult, if not impossible. It is thus preferable to leave it unspecified. The above GPM (1) includes a range of simpler models based on specific assumptions.
First, under the assumption that all probe-specific parameters are the same, i.e., τ_ji = τ_j and β_ji = β_j, the general model (1) simplifies to the following model: Z_jik = δ_k + λ_k (τ_j + β_j x_k) + ξ_jik, (2) which is equivalent to using a summarized gene expression to associate with the experimental factors [ 7 ]. For simplicity and comparison with other special models, we refer to this model as GPM-1. If one postulates that the probe-specific parameters are not all the same but follow an additive probe model, then the general model (1), under the modeling assumption that β_ji = β_j with probe-specific values (τ_j1, τ_j2, ..., τ_jN), may be written as Z_jik = δ_k + λ_k (τ_ji + β_j x_k) + ξ_jik, (3) in which estimating β_j is of primary interest. This variation of the general model is referred to as GPM-2. On the other hand, the probe parameters may follow a multiplicative model (in the spirit of Li and Wong's model); the third model, referred to as GPM-3, is derived under the assumption that τ_ji ≈ φ_ji τ_j and β_ji ≈ φ_ji β_j, and may be written as Z_jik = δ_k + λ_k φ_ji (τ_j + β_j x_k) + ξ_jik, (4) where the φ_ji denote the multiplicative probe-specific effects and can be uniquely determined by constraining their mean to be one. Estimation and inference Our estimation procedures do not require any assumptions with respect to the error distribution, since any distributional assumptions, which may be appropriate for some genes, are likely to be violated for other genes. To ensure the robustness of statistical inference, we propose to use generalized estimating equation theory, which has been fully described in a seminal paper [ 8 ]. In the current context, we choose the "working independence" assumption for modeling dependencies between probes [ 8 , 9 ], to avoid making any assumptions on dependence structures. The asymptotic variance matrix is estimated with the usual "sandwich" estimator [ 8 ]. Diagonal elements in the variance matrix are estimates of the marginal variances of all estimated parameters in the model.
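The working-independence fit with a sandwich variance can be illustrated for a single gene with a simple two-group design. This is a sketch under simplifying assumptions (one summarized expression value per sample, as in GPM-1; simulated data, not the ProbePlus code):

```python
import numpy as np

def wald_z_sandwich(x, z):
    """OLS fit of z = a + b*x with a robust 'sandwich' (HC0) variance for b.
    No distributional assumption on the errors is required."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = X.T @ (X * resid[:, None] ** 2)   # sum_i e_i^2 * x_i x_i'
    cov = bread @ meat @ bread               # the sandwich estimator
    se_b = np.sqrt(cov[1, 1])
    return beta[1], se_b, beta[1] / se_b     # coefficient, SE, Wald Z

# Hypothetical two-group design: x = 0 (control) versus x = 1 (treated),
# ten samples per group, true group difference of 2.0.
rng = np.random.default_rng(2)
x = np.repeat([0.0, 1.0], 10)
z = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=20)
b, se, zstat = wald_z_sandwich(x, z)
```

Applied per gene, the returned Wald Z plays the role of the standardized coefficient compared throughout the Results section.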
Both estimates can be used to construct test statistics, such as the ratio of an estimated parameter over its standard error, known as the Wald statistic. Under the null hypothesis, each statistic has an asymptotic normal distribution when the sample size is sufficiently large, and can therefore be used for making statistical inferences. When the sample size is small, this quantity is treated as a standardized regression coefficient.

List of abbreviations

ATRA: All Trans Retinoic Acid; ETOH: Ethyl Alcohol; GPM: Generalized Probe Model; IM: Ideal Mismatch; MAS: Microarray Suite; MM: Mismatch; ORF: Open Reading Frame; PM: Perfect Match; RMA: Robust Multiple Array average

Authors' contributions

WF carried out analyses and prepared the manuscript. JIP conducted microarray experiments. JMO conceived the study. NK prepared the manuscript. LPZ conceived the study, developed the GPM algorithms and prepared the manuscript.
A stochastic model for retinocollicular map development (PMC520742)

Background

We examine results of gain-of-function experiments on retinocollicular maps in knock-in mice [Brown et al. (2000) Cell 102:77]. In wild-type mice the temporal-nasal axis of the retina is mapped to the rostral-caudal axis of the superior colliculus. The established map is single-valued, which implies that each point in retina maps to a unique termination zone in superior colliculus. In homozygous Isl2/EphA3 knock-in mice the map is double-valued, which means that each point on retina maps to two termination zones in superior colliculus. This is because about 50 percent of cells in retina express Isl2, and two types of projections, wild-type and Isl2/EphA3 positive, form two branches of the map. In heterozygous Isl2/EphA3 knock-ins the map is intermediate between the homozygous and wild-type maps: it is single-valued in the temporal and double-valued in the nasal parts of retina. In this study we address possible reasons for such a bifurcation of the map.

Results

We study the map formation using a stochastic model based on Markov chains. In our model the map undergoes a series of reconstructions with probabilities dependent upon a set of chemical cues. Our model suggests that the map in heterozygotes is single-valued in the temporal region of retina for two reasons. First, the inhomogeneous gradient of endogenous receptor in retina makes the impact of exogenous receptor less significant in temporal retina. Second, the gradient of ephrin in the corresponding region of superior colliculus is smaller, which reduces the chemical signal-to-noise ratio. We predict that if the gradient of ephrin is reduced by a genetic manipulation, the single-valued region of the map should extend to a larger portion of temporal retina, i.e. the point of transition between single- and double-valued maps should move to a more nasal position in Isl2-EphA3 heterozygotes.
Conclusions

We present a theoretical model for retinocollicular map development, which can account for intriguing behaviors observed in the gain-of-function experiments by Brown et al., including the bifurcation in heterozygous Isl2/EphA3 knock-ins. The model is based on known chemical labels, axonal repulsion/competition, and stochasticity. Possible mapping in Isl2/EphB knock-ins is also discussed.

Background

Topographic ordering is an important feature of the visual system, which is conserved among many visual areas [ 1 ]. Thus, the projection from retina to superior colliculus (SC) is established in a way that retains neighbourhood relationships between neurons [ 2 - 4 ]. This implies that two axons of retinal ganglion cells (RGCs) that originate from neighbouring points in retina terminate proximally in SC. It is assumed that this facilitates visual processing, which involves wiring local to the termination zone [ 5 ]. The mechanisms responsible for topographic ordering have lately been under thorough examination. Following the original suggestion by Sperry [ 6 ], it was shown that chemical labels play an essential role in the formation of the map (reviewed in [ 3 , 7 ]). For the projection from retina to SC, the Eph family of receptor tyrosine kinases and their ligands, the ephrins, were shown to be necessary for establishing correct topographic maps [ 7 - 10 ]. The coordinate system is encoded chemically in retina through graded expression of the Eph receptors by the RGCs. Thus, in mouse retina, two receptors of the family, EphA5 and A6, are expressed in a low nasal – high temporal gradient [ 11 - 14 ]. The recipient coordinate system in the SC is established through a high caudal – low rostral gradient of ephrin-A2 and A5 ligands [ 15 ].
Since RGC axons expressing EphA receptors are repelled by high levels of ephrin-A ligands, this system of reciprocal gradients allows sorting of the projecting axons in the order of increasing density of receptors, thereby contributing to the formation of the topographic map [ 10 , 15 , 16 ] (Figure 1A ). Thus, the system of reciprocal gradients is involved in the formation of the topographic representation along the nasal-temporal axis, although some additional fine-tuning is provided by activity-dependent mechanisms [ 17 - 19 ]. In this study we address the results of gain-of-function experiments, in which the retinocollicular maps were modified by genetic manipulations [ 20 ]. RGCs of the wild-type mouse express the LIM homeobox gene Islet2 (Isl2) [ 21 ]. The retina of a single animal is composed of two types of cells with regard to their expression of the Isl2 gene, Isl2+ and Isl2-, which are intermixed in roughly equal proportion throughout the RGC layer (Figure 1B ). To test the mechanisms of retinocollicular map formation, Brown et al. [ 20 ] generated "knock-in" mice, in which the Isl2 and EphA3 genes are coexpressed. This implies that each Isl2+ RGC and its axon, in addition to EphA5 and A6, also express EphA3, which is not found in the wild-type RGCs. The Isl2- cells remain EphA3-, like the wild-type cells. By doing so, Brown et al. [ 20 ] increased the total level of EphA receptors in a given fraction of retinal cells. Since the overall level of EphAs is increased in Isl2+/EphA3+ cells, axons of two neighboring cells, knock-in and wild-type, should terminate in quite different places in SC (Figure 1B ). The knock-in cells, interacting more strongly with the repellent, should terminate at positions of decreased density of ephrins, i.e. more rostrally with respect to the wild-type cells. The neighborhood relationships between axons should be lost, the new map should lose its continuous nature, and it should split into two maps: one for wild-type RGCs and one for knock-in cells.
This prediction was confirmed by the experiments of Brown et al. [ 20 ] (Figure 2 ). In addition to the observation of the overall map doubling in homozygous knock-ins (Figure 2C ), Brown et al. discovered a curious behavior of the map in heterozygous animals. In these animals the exogenous levels of EphA3 were reduced roughly by a factor of two with respect to the homozygous knock-ins (Figure 2B ). In terms of the expression density of EphA3, these animals stand between the wild-type and knock-in animals. Accordingly, the structure of the map resembles a hybrid of the wild-type and homozygous maps. The more rostral part of the map is single-valued, similarly to the wild-type, whereas about 60% of the caudal-most part is double-valued, as in the homozygous animals. This observation suggests that the map bifurcates somewhere between the double- and single-valued regions. Although the overall doubling of the map in homozygotes is easy to understand, any true model for retinocollicular map formation should be able to account for the bifurcating behavior of the map in heterozygotes. Therefore, experiments in heterozygotes represent a powerful tool to falsify various theoretical models. Brown et al. [ 20 ] suggest that the bifurcating behavior of the map is consistent with the importance of relative rather than absolute values of the expression levels. Indeed, the relative difference of exogenous EphA3 to endogenous EphA5/6 is maximal in nasal retina (caudal SC), where the doubled map is observed (Figure 2B ). In the temporal retina (rostral SC) the EphA3 to EphA5/6 ratio is not as large, which may account for the fact that the map is single-valued there. Thus a model for the topographic map from retina to SC should rely on the relative rather than absolute levels of EphA signaling. The point we make in this study is that more experimental tests are needed to justify the suggestion about relative expression levels.
To make our point clear we present a model for retinocollicular map formation that is based upon differences in the absolute values of Eph/ephrin expression levels, rather than relative differences. Our model manages to reproduce all the essential features of the experiments described in Brown et al. [ 20 ], including the bifurcation of the map in heterozygotes. In the model presented here, the map is single-valued in the rostral part of heterozygous SC due to inhomogeneous gradients of ligand and receptor, rather than a reduced relative difference of EphA receptors. Below we suggest experimental tests that may distinguish these two classes of models. This model was presented previously at the Society for Neuroscience meeting in 2001 and on the arXiv preprint server [ 22 , 23 ]. To test various hypotheses we use a model for retinocollicular map formation employing a stochastic Markov chain process. Our model is based upon three principles: chemoaffinity, axonal competition, and stochasticity. Some features of our model are similar to the arrow model of Hope, Hammond, and Gaze [ 24 ]. The implementation of the model used here is available in [ 25 ].

Results

Markov chain model

Let us first describe the 1D version of the model. We consider a linear chain of 100 RGCs, each expressing an individual level of EphA receptors given by RA ( i ), where i = 1...100 is the RGC index, which also determines the discrete position of the cell in the retina. We have verified that the results presented below do not depend on the number of cells, as long as this number is large enough. Each RGC is attached by an axon to one and only one terminal cell in SC, which has an expression level of ligand given by LA ( k ), where k = 1...100 is the index in SC, also describing the terminal's topographic position. The receptor density RA is an overall increasing function of its index i , while the ligand density LA is decreasing when going from k = 1 (caudal) to k = 100 (rostral) positions (Figure 3 ).
This determines the layout of chemical "tags" used to set up the map's "topography". An additional feature is that no two cells can project to the same spot in SC, which is meant to mimic axonal repulsion/competition for positive factors in SC, described in detail by Ref. [ 10 ]. We start from a random map, in which the terminal positions of all RGC axons in SC are chosen randomly. We then modify the map probabilistically, using the following rule. We consider two axons projecting to neighboring points in SC (1 and 2 in Figure 3 ). We attempt to exchange these axons in SC with probability

P EXCHANGE = 1/2 + ( α /2) [ RA (1) - RA (2)] [ LA (1) - LA (2)]     (1)

Here α > 0 is the parameter of our model. The probability of the axons staying unchanged, P RETAIN , is determined from P EXCHANGE + P RETAIN = 1 and is therefore given by

P RETAIN = 1/2 - ( α /2) [ RA (1) - RA (2)] [ LA (1) - LA (2)]     (2)

Since the only difference between these probabilities is the sign in front of α , it is important to describe the nature of this sign. Assume that the product of gradients in Eq. (1) is negative, i.e. the gradients run in opposite directions, which corresponds to the correct order of axonal terminals in SC. Then P EXCHANGE < 1/2 and P EXCHANGE < P RETAIN , i.e. the probability of retaining the current ordering of the axonal pair is larger than that of changing it. This is consistent with the chemorepellent interactions of receptors and ligands. In the opposite case of the wrong order, i.e. when the product of gradients in Eq. (1) is positive and the gradients run in the same direction, P EXCHANGE > P RETAIN by the same reasoning. The described process will tend to exchange the order of gradients and therefore establish the correct order of topographic projections. By using the probabilities described by Eqs. (1) and (2) we incorporate the chemoaffinity principle into our stochastic model. This step is then repeated for another nearest-neighbor couple, chosen randomly, and so on, until a stationary distribution of projections is reached.
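A minimal sketch of this update rule and relaxation loop. Since the displayed equations are not reproduced in this text, the linear exchange probability below is an assumption consistent with the stated properties: it depends on the product of receptor and ligand differences, exchange and retention differ only in the sign in front of α, and a negative product (correct ordering) gives an exchange probability below 1/2. The clipping to [0, 1], the exponential label profiles (taken from the Methods section), and the axis orientation are our own choices:

```python
import numpy as np

def p_exchange(dR, dL, alpha):
    """Assumed form of the exchange rule: linear in the product of the
    receptor and ligand differences, clipped to a valid probability.
    P_RETAIN = 1 - P_EXCHANGE flips the sign in front of alpha."""
    return float(np.clip(0.5 + 0.5 * alpha * dR * dL, 0.0, 1.0))

def relax_map(alpha, n=100, iters=400_000, seed=0):
    """Start from a random map and attempt nearest-neighbour exchanges in SC
    until approximately stationary. term[k] is the retinal index of the axon
    terminating at SC position k (k = 0 caudal, k = n-1 rostral)."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n)
    RA = np.exp(-(1.0 - x))      # receptor increases with retinal index
    LA = np.exp(-x)              # ligand decreases from caudal to rostral
    term = rng.permutation(n)    # random initial map
    for _ in range(iters):
        k = rng.integers(n - 1)  # pick a random neighbouring pair in SC
        dR = RA[term[k]] - RA[term[k + 1]]
        dL = LA[k] - LA[k + 1]
        if rng.random() < p_exchange(dR, dL, alpha):
            term[k], term[k + 1] = term[k + 1], term[k]
    return RA, term
```

In the two limits discussed in the text this sketch behaves as expected: α = 0 leaves the map random, while a very large α sorts the axons in the order of increasing receptor density.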
Such a process belongs to the class of Markov chain processes, since transformations of the map are determined only by the present state of the mapping and are not otherwise affected by developmental history [ 26 ]. Let us first consider cases in which the final distribution can be understood without the use of a computer. The model described by (1) can be solved exactly for at least two limiting cases: when α = 0 and when α is very large. In the former case ( α = 0) the information about chemical labels cannot affect the solution, since it is multiplied by 0 in Eq. (1). Hence, the map is completely random (Figure 4B ). In the latter case ( α large) the molecular cues are very strong. They eventually produce a solution in which the axons are perfectly sorted in SC in the order of increasing density of receptor (Figure 4D ). An intermediate situation, with a certain finite value of the parameter α , is described by a compromise between noise and chemical cues, with the former randomizing the map on a finer scale and the latter inducing the overall correct ordering (Figure 4C ). We conclude that the mean position of projections is controlled by the chemical signal, while the spread of projections, or the size of the TZ, is determined by noise (Figure 4C ). It should be noted that in the case of large α (perfect sorting, Figure 4D ) our model is equivalent to the arrow model, introduced by Hope, Hammond, and Gaze in [ 24 ]. The arrow model uses exchanges between nearest axons if they terminate the wrong way in tectum/SC. It can also include stochastic steps, as described in [ 24 , 27 ]. The stochastic behaviors of the model described here [Eqs. (1) and (2)] and the arrow model are not the same (see Discussion).

Topographic maps in knock-in mice

Figure 5 summarizes the results obtained in our model. The top row shows distributions of chemical 'tags' corresponding to the wild-type, heterozygous, and homozygous knock-in conditions in Brown et al. [ 20 ].
The second row shows the corresponding probability distributions for axonal projections. The third row in Figure 5 displays the maxima of the probability distributions, shown to make the map structure more visible. These results qualitatively agree with Brown et al. [ 20 ] (see Figure 2 above) for all three cases. Maps in both the wild-type and homozygote conditions (Figure 5 , columns A and C) can be understood on the basis of axonal sorting in SC in the order of monotonically increasing levels of EphAs. Results of such sorting are displayed in the bottom row in Figure 5 . It is clear that maps resulting from simple sorting reproduce all essential features of the mapping observed in wild-types and homozygotes. At the same time, the bifurcation observed in heterozygotes (column B) is not captured by the simple sorting procedure. This observation led us to develop the version of noisy sorting, based on chemical cues, described here. Why does the blending of the two maps occur in rostral SC? This question is addressed below in the Discussion section. Here we display manipulations of the 'chemical tags' that can shift the bifurcation point in our model. These manipulations point to two factors that contribute to the location of the bifurcation point. The first factor is the inhomogeneous endogenous EphA5/6 density. It results in a smaller separation between the two branches of the map observed in rostral SC (Figure 5B and 5C , bottom). The second factor is a smaller gradient of ligand in rostral SC (the single-valued map region) than in caudal SC. Both these factors are addressed below. Let us now demonstrate the impact of an inhomogeneous EphA gradient on bifurcation. Figure 6 (column A) shows that if the endogenous gradient of receptor is made more inhomogeneous, the point of bifurcation is shifted caudally. This means that the single-valued part of the map becomes larger. To see this, notice that the density of receptor in Figure 5 (column B) has a minimum value of about 0.3.
The minimum value of the receptor density in Figure 6 (column A) is about half of that. Hence the endogenous receptor density in Figure 6A changes faster than in Figure 5 . The results of simple sorting of axons according to increasing level of receptor are also shown in Figure 6A (bottom). The two branches of the map approach each other more closely than in Figure 5B . This is consistent with the expanded single-valued part of the map observed in Figure 6A . But is the receptor distribution the sole determinant of the position of the bifurcation point? To demonstrate that the latter is also controlled by the gradient of ligand in SC, we reduce the density of ligand uniformly by 25% (Figure 6B ). This may mimic experiments in which an increment in RGC receptor is combined with a reduction in ephrin-A ligand density. As a result, the point of transition between the single-valued and double-valued parts of the map is located more caudally in Figure 6B than in Figure 5A . Hence, we can affect the point of transition by changing the densities of both receptor and ligand to a similar degree. These results are explained below in the Discussion section. Finally, we verify that increasing the gradient of ligand leads to a small expansion of the double-valued part of the map. This result is demonstrated in Figure 6C . This example shows once again that the ligand concentration can affect the position of the bifurcation point and that increasing the ligand profile inhomogeneity (Figure 6C , top) leads to a more pronounced bifurcation effect (compare to Figure 5B ).

Results for 2D model

We simulated 2D development using the hypothesis that another pair of chemical tags, the EphB family of receptors and their ligands, the ephrins-B, are responsible for establishing the topographic projection from the dorsal-ventral (DV) axis of retina to the lateral-medial axis of SC [ 9 ].
EphB2/3/4 are expressed in a high-ventral-to-low-dorsal gradient by RGCs [ 28 - 30 ], while ephrins-B are expressed in a high-medial-to-low-lateral gradient in tectum/SC [ 30 ]. Since dorsal/ventral axons project to lateral/medial SC, this implies attractive interactions between EphB+ axons and an ephrin-B rich environment [ 31 ] (see, however, [ 32 ]). In our model the attractive interactions are modeled by the following exchange probability for two axonal terminals in the DV direction:

P EXCHANGE = 1/2 - ( β /2) [ RB (1) - RB (2)] [ LB (1) - LB (2)]     (3)

Here RB (1), RB (2), LB (1), and LB (2) are EphB receptor and ephrin-B ligand densities at neighboring points 1 and 2 in SC. This probability is similar to Eq. (1). Notice the sign change compared to Eq. (1), which ensures that P EXCHANGE > P RETAIN if the order of gradients is wrong, i.e. if the gradients of receptor and ligand are antiparallel. By choosing this sign we therefore ensure attraction between axons and ligands. The details of our simulations are described in Methods. Our model allows not only the exploration of two-dimensional maps (Figure 7 ) but also observing and modeling temporal development (Figure 8 ). Videos with the detailed evolution of the map are available in [ 25 ].

Discussion

Why does the map in heterozygotes bifurcate?

In our model the map is formed through the interaction of three factors: Eph/ephrin-based chemorepulsion/attraction, competition between axons for space, and noise. It is the latter that fuses the two maps together in rostral SC (Figure 5B ) in this model. Therefore, to understand the position of the bifurcation point one has to consider the interplay between signal and noise at different positions in the map. As we have shown above, both the ligand and receptor distributions influence the range of the single-valued portion of the map independently (Figure 6 , columns A and B). Let us first address the impact of the ligand distribution. Figures 9 and 12A show that the gradient of ligand is the smallest in rostral SC. This leads to a larger impact of noise there.
Since noise drives the blending of the two branches of the map, this blending first occurs in rostral SC, in agreement with Brown et al. [ 20 ]. Interestingly, Brown et al. also show larger diameters of axonal TZs in the single-valued part of the map, which is consistent with the larger impact of noise there. The second factor contributing to bifurcation in heterozygotes is the inhomogeneity of the EphA gradient in retina. Consider the case of no noise. The mapping in this case is obtained by sorting axonal terminals in the order of increasing density of EphA (Figures 10 , 5 , 6 ). The separation between the two maps is the smallest in the rostral part (Figure 10B ). This is because of the inhomogeneous gradient of receptor in 'retinal' cells (Figure 10A ). Therefore, even if noise were the same in all parts of the map, the rostral part has the smallest signal in terms of separation between the two maps, and the largest potential to be blended by noise. We conclude that two factors, increased noise and reduced signal, cooperate in rostral SC in fusing the wild-type and knock-in maps. This leads to the formation of a single-valued map there. In the caudal part, noise is reduced and the distance between the maps is larger. Hence, the map is double-valued in caudal SC.

Mapping in Isl2/EphB knock-ins has two bifurcations

It is possible to spatially separate these two blending factors, increased noise and decreased signal, if one applies the same logic to the DV axis of the map. In our model this mapping is implemented by attractive interactions between EphB+ axons and an ephrin-B rich environment. Hence, the DV mapping is "flipped" with respect to the TN one in the sense that the high EphB gradient region of retina maps to a high ephrin-B gradient region in SC. The two blending effects described above (reduced signal and increased noise) are therefore spatially separated for the DV axis. To observe mapping in these conditions we performed a numerical "experiment" on the Isl2/ EphB knock-in conditions.
This may have relevance to mapping in the DV direction. The results are shown in Figure 11 . The two bifurcations observed in Figure 11 confirm the hypothesis about two factors operating in the numerical model. The ventral bifurcation is associated with receptor, since the separation between the two maps in perfectly ordered conditions is the smallest in medial SC. The second bifurcation, dorsal, occurs due to noise, since noise is maximal where the gradient of ligand is the smallest, i.e. in lateral SC. Thus, we suggest that experiments on Isl2/EphB knock-ins should make clear whether inhomogeneity in receptor density or noise is more important. It is also possible that activity-dependent mechanisms drive the blending of the two maps. Activity leads to focusing of projections, whereby axons with close locations in retina are effectively attracted to each other in SC. Activity-dependent attraction will blend axons positioned proximally in SC; therefore the ventral bifurcation, described above, may be robust with respect to these factors. The dorsal bifurcation, on the other hand, may or may not be observable if activity-dependent focusing of projections takes place. These questions will be addressed in future studies.

Absolute versus relative

Brown et al. [ 20 ] demonstrate that retinocollicular mapping is based on relative levels of EphA/ephrin-A expression in the broad meaning of this term. Indeed, the absolute value of EphA density does not determine where an axon terminates in colliculus. This is because an axonal TZ may shift in the presence of axons with altered expression of chemical tags. For example, wild-type axons terminate more caudally in the presence of Isl2+/EphA3+ axons. Hence, an important factor is the presence of other axons, relative to which a given axon establishes its termination point. This idea is also evident from retinal and collicular/tectal ablation experiments in rodents [ 33 , 34 ] and other species [ 2 ].
Can we take this idea to the next level and hypothesize that relative differences between neighboring retinal cells represent the chemical signal? This suggestion was used [ 20 ] to explain the blending of the two maps in heterozygous rostral SC, since relative differences in receptor levels are the smallest in the corresponding part of retina (temporal). In this study we present a model that uses differences in absolute values of chemical label, as seen from Eq. (1). Indeed, in our model adding a constant value to all densities does not change the resulting mapping, since (1) depends only on differences in expression levels. But this manipulation decreases the relative differences in the expression of EphAs between neighboring knock-in and wild-type cells. Hence, our model is not based on relative differences between receptor densities. Yet, we demonstrate that it can account for the experimental results in detail. Thus, we suggest that the existing experimental evidence is not sufficient to distinguish relative and absolute labeling in the narrower sense. Of course, our model also accounts for the caudal displacement of wild-type TZs, thus resulting in a relative labeling system in the broad sense. To a first approximation, this model performs a sorting procedure, understood mathematically, of the fibers based on the expression levels of EphA. Our procedure uses differences in absolute values of EphA densities rather than relative differences. We suggest that more quantitative evidence is needed to distinguish these two "relativity principles". Relative labeling in the narrow sense can be incorporated in our model too, if the coefficient α is made a function of label densities. Thus, the condition α ∝ 1/( RA · LA ) ensures Weber's law for axonal "perceptual thresholds", since the chemical signal is then proportional to the relative differences.

Comparison to other theoretical models

Theories based on the chemoaffinity principle are reviewed in [ 4 ].
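The absolute-versus-relative distinction above can be checked directly: adding a constant to all receptor and ligand densities leaves a difference-based signal unchanged, while a Weber-type coefficient α ∝ 1/(RA·LA) makes the signal track relative differences. The function names are illustrative, and the signal here stands in for the product of differences that enters the exchange probability; evaluating the Weber scaling at axon 1 is our arbitrary choice:

```python
def signal_absolute(RA1, RA2, LA1, LA2, alpha=30.0):
    """Chemical signal built from absolute differences: a constant offset
    added to all densities cancels exactly."""
    return alpha * (RA1 - RA2) * (LA1 - LA2)

def signal_weber(RA1, RA2, LA1, LA2, alpha=30.0):
    """Weber-law variant: alpha is scaled by 1/(RA*LA), so the same
    absolute differences produce a weaker signal at high overall levels."""
    return alpha / (RA1 * LA1) * (RA1 - RA2) * (LA1 - LA2)
```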
Some features of our approach are similar to the arrow models described in Refs. [ 24 , 27 ], which employ exchanges between neighbouring axons to establish ordered retinotectal/collicular maps. At the same time, some features of our model are different from the arrow model. First, we employ information about chemical labels, Ephs and ephrins. At the heart of our model are equations (1–3), which rely on the known distributions of chemical labels. These equations are unique to our approach. As noted above, in the absence of stochasticity ( α → ∞), when a perfectly ordered map is formed, our 1D model with nearest-neighbor exchanges is equivalent to the 1D version of the arrow model. However, in the stochastic regime, the description of developmental noise is different here. In particular, we relate features of the map to the distribution of chemical labels. We argue that this feature is important in understanding the experiments of [ 20 ], since the distribution of labels determines where TZs fuse to form bifurcations. Second, we consider both nearest-neighbour and distant-neighbour exchanges (see Methods for more detail). Indeed, Eqs. (1–3) can be applied to determine the exchange probability for a pair of distant axons too. This feature may be crucial, since the development of the map in the RC direction is determined by an original primary axonal overshoot with subsequent retraction of inappropriate projections [ 3 , 9 ]. In this process the interstitial branches of the same primary axon are eliminated and subsequently added non-locally. We show below in the Methods section that in many cases local and global exchanges produce the same results in terms of the final distribution of projections. But the process of development is different in the local and global exchange cases. Prestige and Willshaw [ 35 ] suggested dividing developmental mechanisms into two groups. In group I mechanisms each RGC axon has maximum affinity to a certain unique point in the target, even without other axons.
In group II mechanisms, the position of the TZ results from competitive interactions with other axons. Our model definitely belongs to the second group, since we assume that all axons experience maximum affinity to rostral-medial SC and are spread over the entire SC by competition. Our approach is similar to that described by Prestige and Willshaw in the way graded distributions of molecular tags are represented. The details of the map modifications are somewhat different in this study and are precisely defined by Eqs. (1–3). In a recent study, Honda [ 36 ] considered the results of the experiments of [ 20 ]. He used the servomechanism model to explain the overall structure of the maps in mutants. The servomechanism model is a hybrid between group I and II models in the terminology of Prestige and Willshaw, since it assumes that axons have equilibrium points in SC and are subject to competition with each other. Although Ref. [ 36 ] reproduces the doubling of the map in homozygotes, it does not succeed in obtaining the bifurcation observed in heterozygotes, which is one of the purposes of the present study.

On the biological realism

When dealing with numerical simulations one always faces the question of the degree of realism with which to model the data. Does one have to model the behaviors of individual atoms, or is a description at the level of axons sufficient? In this work we choose the level of description on the basis of what is known about this system. We realize that our model does not capture many behaviors, but we argue that the mechanisms involved are at the moment too unclear to be incorporated into a more detailed model. Our approach also fulfils its original goal, which is to reproduce the results of the experiments of [ 20 ] and to generate experimentally testable predictions, thus satisfying the requirement of parsimony. The model presented here does not describe the difference between development along the TN and DV axes.
The former mapping is controlled by an original axonal overshoot along the RC direction in SC, with subsequent elimination of topographically inappropriate projections [ 3 , 9 ]. In contrast, primary axons from the same DV retinal position enter SC in a broad distribution along the ML axis. Topographically precise termination is provided by the production of additional interstitial branches in the ML direction [ 31 , 32 , 37 , 38 ]. These findings cannot be reproduced by our model, since no distinction is made between the primary RGC axon and its branches. Instead, our model deals with the terminal points of interstitial branches produced by RGC axons.

Conclusions

We present a model for retinocollicular map development, which can account for intriguing behaviors observed in the gain-of-function experiments by Brown et al. [ 20 ], including the bifurcation in heterozygous Isl2/EphA3 knock-ins. The model is based on chemoaffinity, axonal repulsion/competition, and stochasticity. We discuss possible mappings in ephrinA-/Isl2+/EphA3+ knock-out/ins and Isl2/EphB knock-ins.

Methods

1D model

To find a stationary distribution of the RGC axons in the SC, we use the following computational procedure. We consider a linear chain of 100 RGCs, each connected to one and only one terminal cell in SC. The receptor and ligand expression level profiles used in the computations for the wild-type, heterozygote and homozygote are shown in Figure 5A,5B,5C . We start with a random map where the position of every axon in SC does not depend on the level of its receptor expression. Then we perform stochastic reconstructions through an exchange of the positions of neighboring axons in SC. Namely, at each step we randomly choose one pair of axons out of the 99 neighboring pairs and switch their positions with the probability given by Eq. (1). In both cases, whether the positions of the axons are exchanged or retained at their old locations, we proceed to the next step, where we choose a new pair of neighboring axons.
We repeat the process until a stationary distribution of the probabilities for the positions of the RGC axons in SC is reached. A typical stationary solution for one realization is shown in Figure 4 . Here the number of iterations is 10 6 (nearest-neighbor exchanges). The main parameter of our model is taken to be α = 30 throughout the paper. It is chosen to fit the experimental data from Brown et al. [ 20 ]. We have observed that the value of α is roughly equal to the inverse square of the relative diameter of the TZ. Thus, in our results, the TZ occupies roughly 20% of the entire SC, which corresponds to the value of α given above. The probability distributions and the positions of the maxima shown in Figures 5 and 6 are obtained by temporal averaging over 5 × 10 4 realizations of the stationary solution, separated in time by 10 3 iterations (nearest-neighbor exchanges).

The choice of receptor/ligand expression profiles

We base our choice of parameters for the distribution of molecular markers on experimental observations in mouse retina and SC. Thus, the distribution of ephrin-A2 and A5 is obtained in [ 10 ]. The total distribution of ligand in SC is shown in Figure 12A . It closely resembles the distribution used in this study, LA ( x ) = exp(- x ) (Figure 4A , etc). Note that the constant factor in front of the exponential is taken to be 1 in our model, since any non-unit factor is absorbed into the parameter α [cf. Eq. (1)]. The distribution of receptors in retina requires more thorough consideration. The distribution of the strength of EphA S-RNA hybridization signals is measured in [ 20 ] and is shown in Figure 12B . From this distribution one has to obtain the density of receptor expression in a single axon emanating from a given point in retina. To this end the overall strength of the hybridization signal is divided by the RGC density in cells/mm 2 , obtained in [ 39 ] (Figure 12C ). The resulting distribution of EphA receptors per cell is shown in Figure 12D .
This per-cell distribution is closely matched by the function used in this study, RA(x) = exp(-x) (see Figure 4A , etc.), over more than 95% of the retina. We estimate additional distortions introduced by the non-uniform linear magnification factor to be small (< 10%), based on data from [ 39 , 40 ]. Such distortions cannot be calculated directly, since a complete topographic map from retina to SC is not available. The errors introduced by the non-uniform map do not exceed the precision with which the density of receptors is originally measured, estimated from the noise in [ 20 ]. In the EphA3+ retina, the receptor density is increased in every second cell by 50% and 25% of the maximum value in homozygotes and heterozygotes, respectively. These parameters are chosen to match the overall map structure (Figure 5 ) to that observed experimentally in [ 20 ] (Figure 2 ). The particular parameter chosen for this comparison was the overall distance between the wild-type and knock-in cells, equal to approximately 40 and 20 percent in homozygotes and heterozygotes, respectively. Since this distance is approximately constant in the homozygotes, the effects of receptor dimerization, discussed in [ 7 ], are assumed to be negligible. This may occur due to saturated conditions (almost all receptors are in the dimerized state). The effects of ligand dimerization are impossible to estimate at the moment. To assess this effect in our model, we verify that our results do not change significantly if the ligand density is below the dissociation density for dimerization, i.e. if the effective ligand density interacting with the receptor is equal to the square of the actual density, LA(x) = exp(-2x) (Figure 6C ). The expression profiles of the EphB/ephrinB pair are measured in [ 31 ] similarly to EphA/ephrinA. They are taken to be LB(y) = exp(-y) and RB(y) = exp(-y). As with EphA/ephrinA, the non-unit overall factors in these distributions are absorbed into parameter β (see below).

2D model

Here we describe our 2D model in more detail.
We consider an array of 100 by 100 RGCs, which are connected to 100 by 100 different points in the colliculus. Each RGC is characterized by two levels of expression for two receptors, EphAs and EphBs, described in the text. The concentration profiles are taken to be the same for EphA and EphB receptors in the wild-type species. In the homozygote and heterozygote cases the concentration of EphA is taken as shown in Figure 5 , while the concentration of EphB is unchanged. RGCs do not express ligand in our model. The collicular receptacles are described by two ligand concentrations with the same profiles as shown in Figure 5 , but with the different gradient directions discussed in the text. The process of development is modeled as follows. We randomly choose a pair of axons in SC separated either in the RC or in the ML direction. We exchange their positions with the probability given by Eq. (1) or Eq. (3), respectively. We then repeat the process until a stationary distribution of probabilities is reached, in the same manner as in the 1D case. Note that, this time, a chosen pair of axons, say in the RC direction, need not be a neighboring pair, but may consist of two axons separated by any distance in SC. This procedure dramatically decreases the convergence time to the stationary distribution, which is the same as when only neighboring axons are chosen. The noise level is taken to be the same for both RC and ML directions, that is, α = β = 30. The spatial 2D distribution of the axons corresponding to labeled RGCs is shown in Figure 7 . The "labeling spot" in retina is a circle with radius R = 7.3; the coordinates of the centers are (15,50), (50,50) and (85,50) on the 100 × 100 grid. The distribution is obtained by averaging the positions of the labeled axons in SC over 1000 realizations after the map reached the stationary solution at 1 × 10^6 iterations. The temporal evolution of the map for the label in the central retina is shown in Figure 8 .
It corresponds to averaging over 1000 different realizations at each time interval. In both the 1D and 2D cases the calculations were performed on a Dell PowerEdge 1600SC server. The programs, written in Matlab (MathWorks, Inc.), are available for download in [ 25 ].

Limiting probabilities between 0 and 1

Equations (1) and (2) for the probability of switching two axons can yield probabilities below 0 or larger than 1. Thus, in the numerical implementation of our model, instead of (1) and (2) we use expressions with a soft cutoff at 0 and 1, Eqs. (4) and (5). These probabilities are restricted to be between 0 and 1. In addition, when differences in ligand and receptor densities between neighboring points are not large, (4) and (5) are equivalent to (1) and (2).

Local versus global transitions

One could use exchanges between nearest neighbors to implement map development, as described in the text above and in [ 24 ]. Alternatively, one could consider swaps between two distant axons chosen randomly. An exact statement, a proof of which we provide here, is that the final probability distribution for connectivities does not depend on whether the swaps are local or global. This statement is true, for any distribution of chemical labels, in 1D or 2D. It holds, however, only if Eq. (4) is used to calculate the probabilities of transitions. In particular, we show that Eq. (4) leads to a Boltzmann distribution of the probabilities of connections, which does not depend on the locality/globality of transitions. Thus, we can present final maps for both local and global transitions interchangeably, since results pertaining to the final state of the map, such as those in Figures 4 , 5 , 6 , 7 and 10 , 11 , do not depend on the choice of transitions. However, the temporal dynamics of map evolution does depend on this choice. Normally, the convergence of the map to the final distribution is faster with global transitions. Thus, Figure 8 shows the evolution of the map for the case of global swaps.
The sequence in Figure 8 would be different if swaps between nearest neighbors were used. Let us now derive the probability distribution of projections in the final map. We perform our derivation for the 1D case; in 2D it is similar. We proceed using the detailed equilibrium principle, frequently employed in statistical mechanics [ 41 ]. Consider two states of the map, symbolically denoted by A and B. These states are described by corresponding probabilities P_A and P_B. These probabilities satisfy the detailed equilibrium condition P_A P_(A→B) = P_B P_(B→A) (6) [ 41 ], where the transition probabilities are given by equation (4). After some algebra, it is possible to show that the transition probabilities are given by a form simpler than (4) (Eq. 7), where E_A and E_B are 'state' variables depending on the current arrangement of axons in the target (Eq. 8). Here the summation runs over all termination sites in SC, denoted by index i , with L(i) being the ligand concentration and R(i) the receptor concentration. The latter, of course, depends on the arrangement of axons corresponding to the state A. The transition probability P_(B→A) is given by the same expression, with the indexes A and B exchanged. The detailed equilibrium condition (6) then leads to a Boltzmann probability distribution of the states of the map (Eq. 9). Eq. (9) is instrumental in showing that the final distribution of projections in our approach does not depend on the method of reconstruction. Thus, both global and local transitions will lead to an identical final arrangement of the map. This property is well known from Metropolis Monte Carlo procedures. What does depend on the method of reconstruction is the time it takes to reach the final configuration. Thus, as mentioned above, global transitions lead to the final state much faster. With local transitions, on the other hand, the map can freeze in the original state, and it may take an exponential time to reach the final configuration.
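The displayed equations (6)–(9) did not survive extraction. A reconstruction consistent with the surrounding text, under the assumption that the transition probability takes the standard Glauber (logistic) form and that the state energy is the receptor-ligand overlap weighted by α, is:

```latex
% Detailed equilibrium (detailed balance) condition, Eq. (6):
P_A \, P_{A \to B} \;=\; P_B \, P_{B \to A}
% Assumed simpler form of the transition probability, Eq. (7):
P_{A \to B} \;=\; \frac{1}{1 + \exp\!\left(E_B - E_A\right)}
% 'State' variable for arrangement A, summed over termination sites i
% in SC, with L(i) the ligand and R_A(i) the receptor concentration
% at site i in state A, Eq. (8):
E_A \;=\; \alpha \sum_i R_A(i)\, L(i)
% Since P_{A \to B}/P_{B \to A} = \exp(E_A - E_B), conditions (6)-(8)
% yield the Boltzmann distribution, Eq. (9):
P_A \;\propto\; \exp\!\left(-E_A\right)
```

Under these assumed forms, the ratio of forward and backward transition probabilities depends only on the energy difference, so the stationary distribution (9) is independent of whether swaps are local or global, as the text argues.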
We thus conclude that our results presented in this study are universal in that they do not depend on the exact developmental mechanism, but only on the distribution of 'chemical' tags.
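As an illustration of the Methods just described, here is a minimal Python sketch of the stochastic map refinement (the authors' original programs are in Matlab, available from [ 25 ]; the logistic acceptance rule and the energy form below are assumptions consistent with the soft-cutoff description, and global swaps are used since, as argued above, they reach the same stationary distribution faster):

```python
import math
import random

random.seed(0)
N = 100          # number of RGCs / SC termination sites
ALPHA = 30.0     # noise parameter, as in the paper

# Exponential profiles as in the paper: R(x) = exp(-x), L(x) = exp(-x)
x = [k / (N - 1) for k in range(N)]
R = [math.exp(-xi) for xi in x]   # EphA level of RGC k
L = [math.exp(-xi) for xi in x]   # ephrinA level at SC site i

# pos[i] = index of the RGC whose axon terminates at SC site i
pos = list(range(N))
random.shuffle(pos)               # start from a random map

def delta_E(i, j):
    """Change in E = ALPHA * sum_i R[pos[i]] * L[i] if sites i, j swap."""
    return ALPHA * (R[pos[i]] - R[pos[j]]) * (L[j] - L[i])

for _ in range(200_000):
    i, j = random.randrange(N), random.randrange(N)   # global swap
    dE = delta_E(i, j)
    # Logistic (Glauber) acceptance: always within (0, 1), mimicking
    # the soft cutoff of Eqs. (4)-(5); the exact form is assumed.
    if random.random() < 1.0 / (1.0 + math.exp(dE)):
        pos[i], pos[j] = pos[j], pos[i]

# In the refined map, receptor level anti-correlates with ligand level.
def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

corr = pearson([R[p] for p in pos], L)
print(f"receptor-ligand correlation after refinement: {corr:.2f}")
```

With α = 30 the refined map is strongly ordered (high-EphA axons at low-ephrinA sites), so the correlation is strongly negative, while the stochastic term leaves a termination zone of finite width.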
Synergistic inhibition of human cytomegalovirus replication by interferon-alpha/beta and interferon-gamma (PMC554982)

Abstract

Background: Recent studies have shown that gamma interferon (IFN-γ) synergizes with the innate IFNs (IFN-α and IFN-β) to inhibit herpes simplex virus type 1 (HSV-1) replication in vitro . To determine whether this phenomenon is shared by other herpesviruses, we investigated the effects of IFNs on human cytomegalovirus (HCMV) replication.

Results: We have found that, as with HSV-1, IFN-γ synergizes with the innate IFNs (IFN-α/β) to potently inhibit HCMV replication in vitro . While pre-treatment of human foreskin fibroblasts (HFFs) with IFN-α, IFN-β or IFN-γ alone inhibited HCMV plaque formation by ~30- to 40-fold, treatment with IFN-α and IFN-γ or IFN-β and IFN-γ inhibited HCMV plaque formation by 163- and 662-fold, respectively. The generation of isobole plots verified that the observed inhibition of HCMV plaque formation and replication in HFFs by IFN-α/β and IFN-γ was a synergistic interaction. Additionally, real-time PCR analyses of the HCMV immediate early (IE) genes (IE1 and IE2) revealed that IE mRNA expression was profoundly decreased in cells stimulated with IFN-α/β and IFN-γ (~5- to 11-fold) as compared to vehicle-treated cells. Furthermore, decreased IE mRNA expression was accompanied by a decrease in IE protein expression, as demonstrated by western blotting and immunofluorescence.

Conclusion: These findings suggest that IFN-α/β and IFN-γ synergistically inhibit HCMV replication through a mechanism that may involve the regulation of IE gene expression. We hypothesize that IFN-γ produced by activated cells of the adaptive immune response may potentially synergize with endogenous type I IFNs to inhibit HCMV dissemination in vivo .

Background

Human cytomegalovirus (HCMV) is a ubiquitous beta-herpesvirus that affects 60–80% of the human population [ 1 ].
The lytic replication cycle of HCMV is a temporally regulated cascade of events that is initiated when the virus binds to host cell receptors. Upon entry into the cell, the viral DNA translocates to the nucleus, where expression of viral immediate early (IE), early and late genes occurs in a stepwise fashion [ 2 ]. While generally asymptomatic in immunocompetent individuals, primary HCMV infection may cause infectious mononucleosis and has been associated with atherosclerosis and coronary restenosis [ 3 , 4 ]. Furthermore, HCMV is the leading cause of congenital viral infections in the United States and Europe, causing cytomegalic inclusion disease, pneumonia and severe neurological anomalies in infected neonates [ 5 - 7 ]. Like other herpesviruses, HCMV establishes lifelong latency in its host, from which reactivation can occur and cause severe, sometimes fatal, disease in immunocompromised individuals [ 8 ]. Cellular immune responses (MHC class I-restricted T-cells and natural killer (NK) cells) appear to be an important factor in both the control of acute infections and the establishment and maintenance of viral latency in the host [ 9 - 14 ]; however, the mechanisms by which T-cells affect HCMV replication are currently undefined. While cytotoxic T-cell activity has been shown to correlate with recovery from HCMV infection in patients [ 15 , 16 ], recent studies suggest that immune cytokines such as tumor necrosis factor-α and interferons (IFNs) may have direct inhibitory effects on HCMV replication [ 17 , 18 ]. In particular, the involvement of IFNs as a means of curtailing viral replication without cellular elimination is consistent with the hypothesis that cytokines produced by activated immune cells play a direct role in the control of viral infections [ 19 - 21 ]. Type I IFNs (IFN-α and IFN-β) and type II IFN (IFN-γ) are important components of the host immune response to viral infections.
IFN-α and IFN-β are produced by most cells as a direct response to viral infection [ 22 - 24 ], while IFN-γ is synthesized almost exclusively by activated NK cells and activated T-cells in response to virus-infected cells [ 25 ]. Both types of IFNs achieve their antiviral effects by binding to their respective receptors (IFN-α/β or IFN-γ receptors), resulting in the activation of distinct but related Janus kinase/signal transducer and activator of transcription (Jak/STAT) pathways. The result is the transcriptional activation of IFN target genes and the synthesis of a number of proteins that interfere with viral replication (reviewed in [ 26 ]). Although IFNs are effective inhibitors of viruses such as vesicular stomatitis virus and encephalomyocarditis virus [ 26 ], almost all RNA and DNA viruses have evolved mechanisms to subvert the host IFN response [ 21 , 26 , 27 ]. For example, HCMV inhibits IFN-stimulated antiviral and immunoregulatory responses at multiple steps [ 24 , 28 - 32 ]. Likewise, the herpes simplex virus (HSV-1) protein ICP34.5 [ 33 ], the influenza A virus NS1 protein [ 34 ], the simian virus-5 V protein [ 35 ], the Sendai virus C protein [ 36 ], the hepatitis C virus (HCV) NS5A and E2 proteins [ 37 ] and the Ebola virus VP35 protein [ 38 ] have all been shown to block IFN-mediated responses in infected cells. However, several studies have shown that viruses normally resistant to the effects of type I or type II IFNs separately are susceptible to IFNs when used in combination. For example, IFN-α/β and IFN-γ synergistically inhibit the replication of HSV-1 both in vitro and in vivo [ 20 ]. In addition, recent reports have indicated that IFNs used in combination have a synergistic antiviral activity against severe acute respiratory syndrome-associated coronavirus (SARS-CoV) [ 39 ], HCV [ 40 ] and Lassa virus [ 41 ]. In the present study, we examined the effects of IFN-α, IFN-β and/or IFN-γ on HCMV replication in human foreskin fibroblasts (HFFs).
Treatment of HFFs with IFN-α, IFN-β or IFN-γ separately inhibited HCMV replication by ≤ 40-fold in both plaque reduction and viral growth assays. In contrast, treatment with IFN-α and IFN-γ or IFN-β and IFN-γ inhibited HCMV replication to a degree 10–20 times greater than that achieved by each IFN separately. This effect was synergistic in nature, and the mechanism of inhibition may involve, at least in part, the regulation of IE gene expression. As with HSV-1 [ 20 ], we have found that when used in combination, both type I and type II IFNs potently inhibit the replication of HCMV in vitro .

Results

IFN-α/β and IFN-γ synergistically inhibit HCMV plaque formation

The abilities of human IFN-α, IFN-β or IFN-γ to inhibit the replication of HCMV were initially compared in a plaque reduction assay on HFFs. Viral plaque formation was reduced by 9-, 37- or 29-fold in fibroblasts treated with 100 IU/ml of IFN-α, IFN-β or IFN-γ, respectively (Table 1 ). To test the effects of combination IFN treatments on viral plaque formation, HFFs were pre-treated with 100 IU/ml each of (1) IFN-α and IFN-β, (2) IFN-α and IFN-γ or (3) IFN-β and IFN-γ. As expected, the level of inhibition achieved with both IFN-α and IFN-β was not greater than the level of inhibition achieved by either IFN separately. In contrast, pre-treatment with a type I IFN (IFN-α or IFN-β) together with type II IFN (IFN-γ) reduced HCMV plaquing efficiency by 164- and 662-fold, respectively (Table 1 ). To eliminate the possibility that this effect was merely a result of doubling the total amount of IFNs per culture, we tested the inhibitory effects of 200 IU/ml of each IFN separately. Two hundred IU/ml of IFN-α, IFN-β or IFN-γ reduced HCMV plaque formation by only 11-, 37- or 30-fold, respectively (Table 1 ).
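The fold-inhibition values reported here follow directly from the log-transformed plaque counts, using the formula given in the Table 1 footnote; a quick check in Python (log means taken from Table 1) reproduces the tabulated values to within rounding:

```python
def fold_inhibition(log_plaques_vehicle, log_plaques_ifn):
    """Table 1 footnote (c): 10^(log10 plaques, vehicle - log10 plaques, IFN)."""
    return 10 ** (log_plaques_vehicle - log_plaques_ifn)

LOG_VEHICLE = 3.34  # log10(mean plaque count) in vehicle-treated HFFs

# log10(mean plaque count) for the 100 IU/ml treatments in Table 1
for label, log_ifn in [("IFN-alpha", 2.38), ("IFN-beta", 1.77),
                       ("IFN-gamma", 1.88), ("IFN-beta + IFN-gamma", 0.52)]:
    print(f"{label}: {fold_inhibition(LOG_VEHICLE, log_ifn):.0f}-fold inhibition")
```

This yields approximately 9-, 37-, 29- and 661-fold, matching the tabulated 9, 37, 29 and 662 to within rounding of the underlying means.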
The level of inhibition was not significantly greater than the level of inhibition achieved by each IFN at concentrations of 100 IU/ml (P > 0.05), suggesting that the degree of inhibition observed can be attributed to the presence of two distinct types of IFNs.

Table 1. Effect of IFN-α, IFN-β and/or IFN-γ on HCMV plaque formation

Treatment                 IU/ml(a)   Log (mean no. of plaques) ± sem   Fold-inhibition(c)
Vehicle                   ---        3.34 ± 0.02(b)                    ---
IFN-α                     100        2.38 ± 0.01*                      9
IFN-α                     200        2.30 ± 0.01*                      11
IFN-β                     100        1.77 ± 0.05*                      37
IFN-β                     200        1.77 ± 0.02*                      37
IFN-γ                     100        1.88 ± 0.03*                      29
IFN-γ                     200        1.85 ± 0.02*                      30
IFN-α and IFN-β           100        1.95 ± 0.04*                      25
IFN-α and IFN-γ           100        1.13 ± 0.09*                      164
IFN-β and IFN-γ           100        0.52 ± 0.05*                      662
IFN-α, IFN-β and IFN-γ    100        0.66 ± 0.15*                      512

(a) HFFs were treated with either 100 or 200 IU/ml each of IFN-α, IFN-β or IFN-γ (separately or in combination). (b) Mean ± sem of viral plaque formation on HFFs observed in 3 replicates per group. Cultures were infected with 2000 PFU/well of Towne-GFP, and plaque numbers were determined 14 d p.i. by fluorescent microscopy. (c) Fold-inhibition was calculated as: 10^([log plaques/PFU in vehicle-treated] - [log plaques/PFU in IFN-treated]). * Significant reduction in plaque numbers of IFN-treated groups as compared to vehicle-treated groups (P < 0.001, one-way ANOVA and Tukey's post hoc t test).

Figure 1 shows a representative micrograph of HCMV plaque formation on IFN-treated HFFs. Consistent with the results in Table 1 , HCMV plaquing efficiency was reduced and plaques were smaller in cultures treated with a combination of type I and type II IFNs (Figure 1E, F ). This phenotype was also observed in cultures treated with IFN-γ alone (Figure 1D ), although the overall inhibitory effect of IFN-γ was similar to that achieved in IFN-β-treated HFFs.

Figure 1. IFN-α, IFN-β and/or IFN-γ inhibit HCMV plaque formation on HFFs.
HFFs were pre-treated with (A) vehicle or 100 IU/ml each of (B) IFN-α, (C) IFN-β, (D) IFN-γ, (E) IFN-α and IFN-γ or (F) IFN-β and IFN-γ. Monolayers were subsequently infected with 1000 PFU of HCMV strain Towne-GFP, and plaque numbers were determined 11 d p.i. by fluorescence microscopy. Plaques were defined as foci containing a minimum of 10 GFP-positive cells. The antiviral activity of IFNs on HCMV plaque formation was further assessed by generating dose-response curves (Figure 2A ). The level of inhibition achieved with individual IFN treatments was ≤ 8-fold for IFN-α or IFN-β and ≤ 18-fold for IFN-γ at all concentrations tested. In contrast, combination IFN treatments achieved levels of inhibition 2–18 times greater than the sum of the individual IFN treatments. To determine whether the enhanced inhibition of HCMV observed in HFFs treated with both type I and type II IFNs was synergistic, we employed a standard analysis for determining the interaction of two drugs [ 42 , 43 ]. Interaction indexes were initially calculated from the data generated in the dose-response experiments (Figure 2A ) to assess the synergistic potential of type I and type II IFN treatment. An interaction index of 0.05 ± 0.03 for IFN-α and IFN-γ combined and 0.04 ± 0.01 for IFN-β and IFN-γ combined indicated a high degree of synergy (Table 2 ). Additionally, synergy was confirmed by generating isobolograms, in which concave isoboles are indicative of synergy while convex isoboles are indicative of an antagonistic effect (Figure 2B ). Inhibitory concentrations were determined from dose-response experiments, and IC95 isoboles were generated for HFFs treated with both IFN-α and IFN-γ (Figure 2C , concave plot) and HFFs treated with both IFN-β and IFN-γ (Figure 2D , concave plot).
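The interaction index can be computed directly. The sketch below assumes the standard Loewe-additivity combination index (index = d_a/D_a + d_b/D_b, consistent with the Table 2 footnotes); the combination doses d_a and d_b used here are hypothetical, since the measured combination doses are not listed in the text:

```python
def interaction_index(d_a, D_a, d_b, D_b):
    """Loewe combination index: < 1 synergy, = 1 additivity, > 1 antagonism.

    d_a, d_b: doses of drugs A and B that, in combination, reach the endpoint.
    D_a, D_b: doses of each drug alone that reach the same endpoint.
    """
    return d_a / D_a + d_b / D_b

# Single-agent IC90 values from Table 2 (IU/ml)
D_ALPHA, D_GAMMA = 300.0, 30.0

# Hypothetical combination doses reaching the same endpoint together
idx = interaction_index(d_a=10.0, D_a=D_ALPHA, d_b=0.5, D_b=D_GAMMA)
print(f"interaction index: {idx:.2f}")  # well below 1 -> synergy
```

An index near 0.05, as reported for IFN-α plus IFN-γ, means the combination reaches the endpoint with only ~5% of the single-agent doses that would be required under additivity.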
Consistent with the interaction indexes determined (Table 2 ), the concave isoboles shown in Figures 2C and 2D indicate a synergistic relationship between type I IFNs (IFN-α and IFN-β) and type II IFN (IFN-γ), suggesting action via distinct antiviral pathways.

Table 2. Degree of antiviral interaction between IFN-α/β and IFN-γ

IFN Treatment(a) (d_a + d_b)   IC90 D_a(b)   IC90 D_b(b)   Interaction index(c)
IFN-α + IFN-γ                  300 IU/ml     30 IU/ml      0.05 ± 0.03
IFN-β + IFN-γ                  100 IU/ml     30 IU/ml      0.04 ± 0.01

(a) HFFs were treated 12 h prior to infection with various combinations of type I IFNs (IFN-α or IFN-β) and type II IFN (IFN-γ). (b) D_a and D_b are the concentrations of each IFN separately that inhibit HCMV plaque formation on HFFs by 90% (IC90). (c) The interaction index is a measure of the divergence between the amounts of IFNs observed to produce an inhibitory effect in combination (d_a + d_b) and the amounts that would achieve the same effect separately (D_a and D_b). Indexes less than 1 indicate synergy, indexes greater than 1 indicate antagonism and indexes equal to 1 indicate additivity.

Figure 2. Type I IFNs (IFN-α and IFN-β) and type II IFN (IFN-γ) synergistically inhibit HCMV plaque formation on HFFs. (A) Viral plaque reduction assay. HFFs were treated with vehicle or increasing amounts of IFN-α (■), IFN-β (●), IFN-γ (▲), IFN-α and IFN-γ (□) or IFN-β and IFN-γ (○) prior to infection with 400 PFU of Towne-GFP (n = 3). Fold-inhibition in IFN-treated groups as compared to vehicle-treated groups is plotted as a function of IFN concentration (IU/ml). Significant differences in fold-inhibition for HFFs treated with combination IFNs relative to cells treated with individual IFNs are denoted by a single asterisk (P < 0.001, one-way ANOVA and Tukey's post hoc t test). (B) Illustration of a representative isobologram for a combination of two drugs. The solid line is the line of additivity.
When the isobole lies below the line of additivity, the combinatorial effect of drug A and drug B is synergistic. When the isobole lies above the line of additivity, the combinatorial effect of drug A and drug B is antagonistic. The combination effect of (C) IFN-α and IFN-γ and (D) IFN-β and IFN-γ on HCMV plaque formation on HFFs was plotted in an isobologram. Values used to generate the concave isoboles were derived from a dose-response curve and represent a combination dose required to elicit 95% (IC95) inhibition of viral plaque formation on HFFs. The dashed line represents the theoretical line of additivity.

IFN-α/β and IFN-γ synergistically inhibit HCMV replication

To further characterize the inhibitory effect of type I IFN (IFN-α or IFN-β) and type II IFN (IFN-γ) treatment, four-day viral growth assays were performed. In cultures treated with IFN-α, IFN-β or IFN-γ, viral replication was undetectable or below the lower limit of detection at 1 and 2 days (d) post-infection (p.i.). At 3 d p.i., however, HCMV replicated to average titers of 8350, 1050 or 985 PFU/ml in IFN-α-, IFN-β- or IFN-γ-treated cultures, respectively (Figure 3 ). While vehicle-treated cells replicated to average titers of 3.2 × 10^4 PFU/ml, viral titers recovered from cells treated with IFNs separately were reduced by 6-, 23- or 25-fold, respectively. Moreover, at 4 d p.i., viral titers in cells treated with IFNs separately were equal to viral titers recovered from vehicle-treated cultures. Consistent with our plaque reduction assays, we observed a similar enhanced inhibitory effect when HFFs were treated with a combination of type I and type II IFNs. In cultures treated with 100 IU/ml each of IFN-α and IFN-γ or IFN-β and IFN-γ, HCMV replication was detectable beginning at 3 d p.i., yielding titers at or below the lower limit of detection of the assay. Compared to HCMV titers of 1 × 10^5 PFU/ml at 4 d p.i.
in vehicle-treated HFFs, treatment with IFN-α and IFN-γ or IFN-β and IFN-γ inhibited HCMV replication in HFFs by an average of 3125- or 5000-fold, respectively. When compared to ganciclovir (GCV)-treated cells, a known DNA synthesis inhibitor of HCMV, the level of inhibition achieved in GCV-treated cultures was comparable to that in IFN-α and IFN-γ- or IFN-β and IFN-γ-treated cultures at 3 and 4 d p.i. (Figure 3 ). In addition, the potent inhibitory effect observed in the presence of IFN-β and IFN-γ was maintained up to 11 d p.i. (Figure 3 , inset), indicating that the effect was not merely a delay in viral replication. Figure 3 IFN-α, IFN-β and/or IFN-γ inhibit HCMV replication in HFFs. HFFs were treated with vehicle or 100 IU/ml of IFNs 12 h prior to infection with HCMV at a MOI of 2.5: (◆) vehicle, (■) IFN-α, (●) IFN-β, (▲) IFN-γ, (□) IFN-α and IFN-γ, (○) IFN-β and IFN-γ or (◇) GCV (100 μM). On the indicated d p.i., average viral titers (n = 3) were determined by a microtiter plaque assay. HFFs were inoculated for 2 h with serially diluted lysed cultures. Plaque numbers were determined 11 d p.i. by fluorescence microscopy. At 3 d p.i., all IFN treatments significantly reduced viral titers as compared to vehicle-treated cultures (P < 0.001, one-way ANOVA and Tukey's post hoc t test). At 4 d p.i., only cells treated with GCV or combination IFN treatments inhibited viral titers as compared to vehicle-treated HFFs (P < 0.001, one-way ANOVA and Tukey's post hoc t test). Significant reduction denoted by a single asterisk. Inset: Represents HCMV titers determined over 11 d for (◆) vehicle-treated and (○) IFN-β and IFN-γ-treated HFFs. The dashed line represents the lower limit of detection of the plaque assay (20 PFU/ml) used to measure viral titers. Treatment with IFN-α/β and IFN-γ does not prevent HCMV entry into HFFs The HCMV replication cycle is a multistep process, beginning with viral attachment and entry into the host target cell [ 2 ]. 
To investigate the mechanism(s) by which IFN-α/β and IFN-γ synergistically inhibit HCMV replication, we first examined the effect of IFNs on HCMV entry into HFFs. Cells were treated with vehicle or IFNs for 12 hours (h) prior to infection with HCMV. Two h after viral adsorption, DNA was isolated from the HCMV-infected cells and PCR was used to amplify a 373 bp fragment of the HCMV IE gene (Figure 4 ). For each treatment group, the PCR product yield increased as a function of viral multiplicity of infection (MOI). At all MOIs tested, the amount of PCR product amplified from HFFs treated with IFNs (Figure 4B–F ) was comparable to that of vehicle-treated HFFs (Figure 4A ). Co-amplification of a GAPDH 239 bp PCR product served as an internal loading control for normalization of PCR product between treatment groups (data not shown). The amplification of similar levels of PCR products from HFFs suggests that the synergistic inhibitory effect of IFN-α/β and IFN-γ does not occur at the level of viral entry.

Figure 4. Inhibition of HCMV by IFN-α, IFN-β and/or IFN-γ is not a result of decreased viral entry into cells. Ethidium bromide-stained IE exon 4 PCR products amplified from HCMV-infected HFFs pre-treated with either vehicle (A) or 100 IU/ml of IFN-α (B), IFN-β (C), IFN-γ (D), IFN-α and IFN-γ (E) or IFN-β and IFN-γ (F). From left to right, PCR products were amplified from an H2O control, 100 ng of uninfected (UI) HFF DNA or 100 ng of HCMV-infected HFF DNA harvested from cells inoculated for 2 h at MOIs of 0.3 to 30. GAPDH PCR products were run alongside IE exon 4 PCR products and served as internal loading controls (data not shown).

IFN-α/β and IFN-γ inhibit HCMV IE mRNA expression

HCMV gene expression is temporally regulated in that the IE genes (IE1 and IE2) are the first class of viral genes expressed after HCMV entry into the cell [ 44 ].
Although a limited number of studies have examined the effect of IFN-β or IFN-γ treatment on HCMV IE mRNA expression, the conclusions of these studies are conflicting, most likely due to differences in both IFN and cell type [ 45 , 46 ]. To assess the effect of IFN treatment on IE gene expression, real-time PCR analyses of IE1 and IE2 mRNA levels in IFN-treated cells were performed. Figure 5 summarizes the fold-repression in IE1 and IE2 mRNA levels in IFN-treated cultures as compared to vehicle-treated controls. At 6 h p.i., IE mRNA levels in HFFs treated individually with either IFN-α or IFN-γ were inhibited by < 2-fold, whereas in cells treated with both IFN-α and IFN-γ, IE1 or IE2 mRNA expression was inhibited by 6- or 5-fold, respectively. A more enhanced inhibitory effect was observed in HFFs treated with both IFN-β and IFN-γ. In these cultures, IE1 or IE2 mRNA expression was repressed by 11- or 8-fold, respectively. Interestingly, the degree of IE mRNA inhibition observed in HFFs treated with IFN-β alone was greater than that observed in cultures treated with IFN-α alone, suggesting that type I IFN-mediated inhibition of IE mRNA expression is achieved more effectively by IFN-β than by IFN-α.

Figure 5. IFN-α, IFN-β and/or IFN-γ inhibit HCMV IE mRNA expression. SYBR green real-time PCR analyses of IE1 and IE2 mRNA expression in vehicle- or IFN-treated HFFs 6 h p.i. (n = 3). Presented are the fold-inhibition values ± standard deviation for IE1 (■) and IE2 (□) mRNA expression in each treatment group. Differences in gene expression were determined as described in Methods.

IFN-α/β and IFN-γ inhibit HCMV IE protein expression

IE protein expression plays a pivotal role in controlling subsequent viral and cellular gene expression during productive HCMV infection [ 47 ], such that an inhibitory effect at this level would significantly impair viral replication.
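The paper defers the details of the real-time PCR fold-change calculation to its Methods. A common approach for SYBR green data is the comparative Ct (ΔΔCt) method, sketched here purely as an illustration: the Ct values and the use of GAPDH as the reference gene are hypothetical, not taken from the paper:

```python
def rel_expression_ddct(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method, assuming ~100%
    amplification efficiency. Values < 1 indicate repression."""
    ddct = ((ct_target_treated - ct_ref_treated)
            - (ct_target_control - ct_ref_control))
    return 2.0 ** -ddct

# Hypothetical Ct values: IE1 vs a GAPDH reference, IFN- vs vehicle-treated
rel = rel_expression_ddct(ct_target_treated=26.5, ct_ref_treated=18.0,
                          ct_target_control=23.0, ct_ref_control=18.0)
print(f"IE1 fold-repression: {1.0 / rel:.1f}")  # a ddCt of 3.5 -> ~11-fold
```

Under this method, the ~11-fold repression of IE1 reported for IFN-β plus IFN-γ would correspond to a ΔΔCt of roughly 3.5 cycles.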
To determine whether the inhibitory block in IE mRNA expression correlated with decreased IE protein expression in IFN-treated cultures, western blot analyses were performed (Figure 6A ). At 12 h p.i., a slight reduction in IE72 and IE86 protein expression was observed in HFFs treated with IFN-β, but not with IFN-α or IFN-γ. Moreover, IE72 and IE86 protein expression was decreased in cells treated with both type I and type II IFNs, with the greatest inhibitory effect observed in HFFs treated with both IFN-β and IFN-γ. This inhibitory block in IE protein expression was consistent throughout a 48 h time period (data not shown). Figure 6 IFN-α, IFN-β and/or IFN-γ inhibit HCMV IE protein expression. (A) HFFs were pre-treated with either vehicle (1) or 100 IU/ml of IFN-α (2), IFN-β (3), IFN-γ (4), IFN-α and IFN-γ (5) or IFN-β and IFN-γ (6) 12 h prior to infection with HCMV. At 12 h p.i., cells were harvested and equal amounts of total protein were examined for IE protein (IE72, IE86) expression by western blot analyses. (B-G) Vehicle- or IFN-treated cells were infected with HCMV and the nuclear proteins IE72/86 were detected by indirect immunofluorescence 5 d p.i. Representative images (100X) from cultures treated with (B) vehicle, (C) IFN-α, (D) IFN-β, (E) IFN-γ, (F) IFN-α and IFN-γ or (G) IFN-β and IFN-γ. Immunofluorescent labeling: HCMV IE72/86 – Alexa Fluor 568 (red), nucleus – DAPI (blue), overlaid (pink). If IFN-α/β and IFN-γ synergistically inhibit HCMV replication through inhibition of IE gene expression, we hypothesized that this inhibitory effect would be maintained after multiple rounds of viral replication. To address this question, IE protein expression was analyzed by indirect immunofluorescence over a 5-day period. For all treatment groups, IE protein expression was detected as early as 1 h p.i.; however, as viral replication progressed IE protein expression among IFN-treated groups varied (data not shown). 
Notably, by day 5 p.i., nearly 100% of the cells treated with vehicle, IFN-α or IFN-β alone stained positive for IE72/86, and approximately 87% of the cells treated with IFN-γ alone were expressing the IE proteins (Figure 6B–6E ). In contrast, the percentage of cells expressing IE proteins was significantly reduced (P < 0.001) in the treatment groups that received combination IFNs, with only 46% of IFN-α and IFN-γ-treated HFFs and 21% of IFN-β and IFN-γ-treated HFFs positive for IE72/86 (Figure 6F, 6G ). The observed differences suggest that in cells treated with both type I and type II IFNs, (1) IE expression is differentially regulated and/or (2) viral spread is severely hindered.

Discussion

The immune response to viral infection is responsible for preventing viral dissemination and uncontrolled replication within the host. Following viral infection, type I IFNs are secreted by infected cells and function to induce an antiviral state in neighboring uninfected cells. Infiltrating immune cells, such as NK cells and macrophages, secrete numerous chemokines and cytokines that contribute to the overall antiviral response. Upon activation of the adaptive immune response, T-cells can further add to the milieu of immune cytokines present at the site of viral infection by secreting additional cytokines, including IFN-γ. Although several studies have examined the effects of proinflammatory cytokines on HCMV replication in vitro , these studies are limited in that they examine the effect of only one type of cytokine on viral replication rather than examining cytokines in combination. In support of the latter, recent studies have shown that type I and type II IFNs function, in synergy, to inhibit both RNA and DNA viruses, including HCV [ 41 ], SARS-CoV [ 39 ], Lassa virus [ 40 ] and HSV-1 [ 20 ]. These studies may more accurately represent the in vivo inflammatory response that results after viral infection.
The results presented herein are consistent with this hypothesis and establish that type I (IFN-α and IFN-β) and type II (IFN-γ) IFNs synergistically inhibit the replication of HCMV. In the present study we have demonstrated that combination treatment with type I and type II IFNs renders cells non-permissive to HCMV replication in vitro . The inhibitory effect by IFN-α/β and IFN-γ was synergistic in nature (Table 2 , Figure 2C, 2D ) and the degree of inhibition was not matched by increasing the concentrations of each individual IFN (Table 1 , Figure 2A ). These results indicate that the observed IFN-induced antiviral effects are a direct result of the presence of two distinct types of IFNs. Moreover, inhibition of HCMV replication in cells treated with IFN-α/β and IFN-γ was observed in both HFF and embryonic lung fibroblasts (MRC5) (data not shown) infected with either Towne-GFP (see Methods) or another laboratory strain, AD169 (data not shown). The mechanism(s) by which HCMV replication is inhibited remains unclear. Type I and type II IFNs may synergize by acting on one or more different stages of the HCMV lytic cycle such as (1) viral attachment, (2) viral entry, (3) IE gene expression, (4) early gene expression, (5) DNA replication, (6) late gene expression, (7) virus assembly or (8) viral egress and maturation. To address the question of attachment and entry, PCR was used to amplify viral DNA from IFN-treated and vehicle-treated cultures shortly after infection. As previously observed [ 20 , 46 ], IFN treatment did not prevent viral entry into cells as indicated by equal PCR product yield from all treatment groups (Figure 4 ). These data indicate that IFNs exert their inhibitory effects at a step after viral attachment and entry. Previously, Yamamoto, et al. 
(1987) demonstrated that treatment of cells with both IFN-α and IFN-γ potently inhibits HCMV replication; however, this study neither determined whether the effect was synergistic nor identified the mechanism of inhibition. Nevertheless, the authors suggested that IFN-mediated inhibition of HCMV might occur at or prior to early gene expression [ 48 ]. Similarly, over the course of our experiments utilizing the Towne-GFP strain, it was noticed that very few cells expressed green fluorescent protein (GFP) when treated with IFN-α/β and IFN-γ together (data not shown). In this recombinant Towne strain, GFP expression is driven by the early promoter UL127. The lack of GFP-positive cells in IFN-α/β and IFN-γ-treated groups suggested to us that the synergistic antiviral activities mediated by type I and type II IFNs occurred at a stage prior to early gene expression. Previous studies have shown that type I or type II IFN treatment can inhibit HCMV IE mRNA expression [ 46 ] and/or HCMV IE protein expression [ 45 , 46 ]. Using real-time PCR, we showed that while IFN-α, IFN-β or IFN-γ treatment inhibited IE mRNA expression by 2–6 fold at 6 h p.i., combination IFN-α and IFN-γ or IFN-β and IFN-γ treatment inhibited IE mRNA expression by 6–11 fold. Of note, this inhibitory effect was abolished by 24 h p.i. (data not shown), suggesting that IE mRNA expression is delayed by IFN treatment. The observed decrease in viral IE mRNA expression was accompanied by a decrease in IE protein expression, as viral IE protein expression was reduced in HFFs treated with both type I and type II IFNs (Figure 6A ). Furthermore, immunofluorescent microscopy of IE protein expression revealed that nearly 100% of vehicle- and individual IFN-treated cells expressed IE72/86 5 d p.i., as compared to 46% or 21% of cells treated with IFN-α and IFN-γ or IFN-β and IFN-γ, respectively (Figure 6B–6G ). 
It appears that although individual IFN treatment results in a marginal inhibition in IE expression early in infection, the effect is not maintained as demonstrated by high viral titers at 4 d p.i. (Figure 3 ) and increased IE protein expression at 5 d p.i. (Figure 6A–6E ). Additionally, HCMV cytopathic effect, characterized by enlarged cells containing intranuclear and cytoplasmic inclusions, increased over time in vehicle- and individual IFN-treated groups, while morphology was unchanged in cells treated with IFN-α/β and IFN-γ (data not shown). Collectively, these data suggest that the synergistic inhibition of HCMV replication by IFN-α/β and IFN-γ may involve, at least in part, the regulation of IE gene expression. The significance of an inhibitory block at this level is evident when the phenotype of IE1 mutant viruses is considered. Greaves and colleagues have demonstrated that HCMV IE1 mutants exhibit a diminished replication efficiency and a reduced ability to form plaques, as well as defective early gene expression [ 47 , 49 , 50 ]. Interestingly, in the presence of both type I and type II IFNs, HCMV shows similar replication and gene expression defects. Although our data suggest that IE gene regulation contributes to the synergistic inhibition of HCMV replication by IFN-α/β and IFN-γ, other mechanisms may also affect this dramatic response. Accordingly, the decrease in IE protein levels exceeds that in IE mRNA levels in response to IFN-α/β and IFN-γ, suggesting that additional regulation at the level of translation, post-translational processing and/or protein stability may be involved. Delineating the other putative regulatory mechanisms that contribute to IFN-α/β and IFN-γ synergistic inhibition of HCMV replication is the focus of ongoing studies. Type I IFNs (IFN-α and IFN-β) and type II IFN (IFN-γ) activate distinct but related Jak/STAT signal cascades resulting in the transcription of several hundred IFN-stimulated genes [ 26 ]. 
Although similar genes are activated by all three IFNs, Der, et al. (1998) have identified numerous genes differentially regulated by IFN-α, IFN-β or IFN-γ [ 51 ]. In particular, IFN-β stimulation induces twice as many genes as compared to IFN-α. This differential regulation of IFN-induced genes may explain in part the fact that the level of inhibition observed in HFFs treated with both IFN-β and IFN-γ was consistently greater than that observed in cells treated with both IFN-α and IFN-γ, although both IFN-α and IFN-β bind to the same receptor. Similarly, when compared individually, IFN-β consistently inhibited HCMV replication and IE gene expression to levels greater than IFN-α. Therefore, to better understand the cellular factors involved in the synergistic inhibition of HCMV, the profile of IFN-stimulated genes present in cells treated with both type I and type II IFNs should be further examined. Conclusion Guidotti and Chisari have reported a model of noncytolytic control of viral infections by the innate and adaptive immune response, in which cytokines are implicated as having a direct role in viral clearance [ 21 ]. Here we demonstrate that IFN-γ, together with the innate IFNs (IFN-α/β) synergistically inhibits the replication of HCMV in vitro . We hypothesize that IFN-γ produced by activated cells of the adaptive immune response may potentially synergize with endogenous type I IFNs to inhibit HCMV dissemination and facilitate the establishment and/or maintenance of latency in the host. Further studies are required to evaluate the role(s) of both type I and type II IFNs in the regulation of HCMV replication. Methods Cells, viruses and interferons HFFs (Viromed, Minneapolis, MN) were maintained in minimal essential medium (MEM) supplemented with 10% fetal bovine serum, penicillin G (100 U/ml), streptomycin (100 mg/ml), 2 mM L-glutamine, 1 mM sodium pyruvate and 100 μM non-essential amino acids at 37°C in 5% CO 2 . 
HCMV strain RVdlMwt-GFP was propagated in HFFs as previously described [ 52 ]. RVdlMwt-GFP, referred to as Towne-GFP throughout this manuscript, is a recombinant of HCMV strain Towne that expresses GFP under the control of the early promoter UL127. This virus was kindly donated by Mark F. Stinski and has been previously described [ 53 ]. Recombinant human universal IFN-α, IFN-β and IFN-γ (PBL Biomedical Laboratories, New Brunswick, NJ) were added to cell cultures 12 h prior to HCMV infection and maintained after viral infection. Concentrations of 100 IU/ml of each IFN were used in all experiments unless stated otherwise. Plaque reduction and viral replication assays For plaque reduction assays, vehicle- and IFN-treated HFFs were infected with a fixed inoculum of Towne-GFP. After 2 h adsorption, the inoculum was removed and medium containing 1.0% methylcellulose (Fisher Scientific, Houston, TX) and the respective IFN(s) was added to the cells. Plaque numbers were determined 14 d later by fluorescent microscopy (Nikon TE300 inverted epifluorescent microscope, Nikon USA, Lewisville, TX). For viral replication assays, vehicle- and IFN-treated HFFs were infected with Towne-GFP at a MOI of 2.5. After 2 h adsorption, the inoculum was removed, monolayers were washed twice with 1X PBS, and fresh IFN-containing medium was returned to each well. For GCV-treated groups, 100 μM GCV (Sigma, St. Louis, MO) was added to culture medium immediately following infection. One, 2, 3 or 4 d p.i. cells and medium were harvested and titers of infectious virus were determined by a microtiter plaque assay on HFFs [ 20 ]. 
Synergy assays To determine the degree of antiviral interaction between type I and type II IFNs, interaction indexes were calculated as d_a/D_a + d_b/D_b, where d_a and d_b are the IFN concentrations needed to jointly produce the effect under consideration, and D_a and D_b are the IFN concentrations capable of producing the effect on their own, termed isoeffective doses [ 42 ]. Interaction index values of less than 1 indicate synergism, interaction index values greater than 1 indicate antagonism and interaction index values equal to 1 indicate additivity. Isobolograms were also generated to geometrically assess the degree of antiviral interaction between type I and type II IFNs, as previously described [ 43 ]. Using the guidelines described by Berenbaum [ 43 ], isoboles were generated for IC95 values at various concentrations of IFN-α or IFN-β in the presence of various concentrations of IFN-γ. Concave isoboles are indicative of synergy while convex isoboles are indicative of an antagonistic effect (Figure 2B ). For all synergy experiments, HCMV plaque reduction assays were conducted as described above. Viral entry assay Vehicle- and IFN-treated HFFs were inoculated with Towne-GFP at MOIs of 0.3, 1, 3, 10 or 30. After 2 h adsorption, the inocula were removed, cells were washed twice with 1X PBS, and subsequently treated with 0.05% trypsin for 5 minutes to ensure the release of virus that had adhered but had not entered the cells. Cells were pelleted and washed twice with 1X PBS to remove trypsin and non-adhered virus. DNA was isolated from each sample by a standard phenol:chloroform DNA extraction procedure [ 54 ], and HCMV-specific oligonucleotide primers were used to amplify a 373 bp product corresponding to exon 4 of the HCMV IE gene, as described previously [ 55 ]. 
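The interaction-index calculation can be sketched as a short helper; the dose values in the example are hypothetical, and only the classification thresholds (index < 1 synergism, > 1 antagonism, = 1 additivity) come from the text:

```python
def interaction_index(d_a, d_b, big_d_a, big_d_b):
    """Interaction index for two agents used in combination [42].

    d_a, d_b:         doses of each IFN that jointly produce the effect
    big_d_a, big_d_b: isoeffective doses, i.e. the dose of each IFN
                      that produces the same effect on its own.
    """
    return d_a / big_d_a + d_b / big_d_b


def classify(index, tol=1e-9):
    # < 1 -> synergism, > 1 -> antagonism, == 1 -> additivity
    if index < 1 - tol:
        return "synergism"
    if index > 1 + tol:
        return "antagonism"
    return "additivity"


# Hypothetical example: each IFN alone needs 100 IU/ml for the chosen
# effect, but 25 IU/ml of each suffices in combination.
idx = interaction_index(25, 25, 100, 100)
print(idx, classify(idx))  # 0.5 synergism
```

A concave isobole corresponds to index values below 1 across the tested dose pairs, so the same helper can be used to check each point of an isobologram.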
PCR products were resolved in a 2% agarose gel and imaged using an Alpha Innotech gel documentation system (Alpha Innotech, Corp., San Leandro, CA). Real-time PCR Vehicle- and IFN-treated HFFs were infected with Towne-GFP at a MOI of 2.5. Six h p.i., total RNA was prepared using an RNeasy Mini Prep kit (Qiagen, Inc., Valencia, CA) according to the manufacturer's instructions. Samples were treated with DNase I (Ambion, Inc., Austin, TX), RNA concentration and purity were determined spectrophotometrically (A260/A280) and 250 ng was reverse transcribed in a total volume of 20 μl using the iScript cDNA Synthesis Kit (Biorad, Hercules, CA) according to the manufacturer's instructions. For real-time PCR, 1 μl of cDNA was amplified in 1X iQ SYBR Green Supermix containing specific primer pairs using the iCycler iQ Real-Time PCR Detection System (Biorad). The optimal primer concentrations and sequences were as follows: 200 nM IE1, sense 5' CAAGTGACCGAGGATTGCAA 3', antisense 5' CACCATGTCCACTCGAACCTT 3'; 200 nM IE2, sense 5' TGACCGAGGATTGCAACGA 3', antisense 5' CGGCATGATTGACAGCCTG 3' [ 56 ]; 100 nM 18S rRNA, sense 5' GAGGGAGCCTGAGAAACGG 3', antisense 5' GTCGGGAGTGGGTAATTTGC 3'. All samples were run on the same plate where those for the internal control (18S rRNA) and those for the genes of interest were each run in triplicate, for each of 3 independent RNA preparations. PCR parameters were as follows: an initial step to denature at 95°C for 30 seconds followed by 40 cycles at 95°C for 15 seconds and anneal/extend at 60°C for 45 seconds. Following amplification, melt curves were generated to confirm the specificity of each primer pair with 80 cycles of increasing increments of 0.5°C beginning with 55°C for 30 seconds. Relative quantification of the target genes in comparison to the 18S reference gene was determined by calculating the relative expression ratio (R) of each target gene as follows: R = (E_target)^ΔCT(vehicle − sample) / (E_18S)^ΔCT(vehicle − sample), where E is the amplification efficiency of the respective primer pair and ΔCT is the threshold-cycle difference between vehicle- and IFN-treated samples [ 57 ]. 
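The efficiency-corrected relative expression ratio above [ 57 ] can be sketched numerically; the CT values and amplification efficiencies below are hypothetical, not study data:

```python
def relative_expression(e_target, e_ref,
                        ct_target_vehicle, ct_target_sample,
                        ct_ref_vehicle, ct_ref_sample):
    """Relative expression ratio R = E_target^dCT_target / E_ref^dCT_ref,
    where dCT = CT(vehicle) - CT(sample) for each gene and E is the
    amplification efficiency (2.0 = perfect doubling per cycle)."""
    d_ct_target = ct_target_vehicle - ct_target_sample
    d_ct_ref = ct_ref_vehicle - ct_ref_sample
    return (e_target ** d_ct_target) / (e_ref ** d_ct_ref)


# Hypothetical CTs: IE1 crosses threshold 3 cycles later in the
# IFN-treated sample while 18S rRNA is unchanged, so at perfect
# efficiency the target is 8-fold inhibited relative to vehicle.
r = relative_expression(2.0, 2.0, 24.0, 27.0, 12.0, 12.0)
print(r)      # 0.125
print(1 / r)  # 8.0, reported as fold-inhibition
```

Ratios below 1 correspond to inhibition relative to vehicle, which is why the study reports results as fold-inhibition (1/R).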
Differences in gene expression between the IFN-treated cells and the vehicle-treated control cells were expressed as fold-inhibition. Western blotting Vehicle- and IFN-treated HFFs were infected with Towne-GFP at a MOI of 2.5. Twelve h p.i., the cells were harvested in 500 μl of 1X RIPA buffer containing a protease inhibitor cocktail (Roche Applied Science, Indianapolis, IN) and 1 mM PMSF. Lysates were sheared 3X with a 27G 1/2 needle and cell debris was pelleted by centrifugation at 14,000 r.p.m. at 4°C. Total protein concentrations from cleared supernatants were estimated with a Micro BCA™ Protein Assay Kit (Pierce, Rockford, IL), 50 μg of total protein were resolved on 10% SDS-polyacrylamide gels and transferred by blotting to PVDF membranes (Amersham Biosciences, Piscataway, NJ). Non-specific reactivity was blocked with 5% nonfat dried milk in Tris-buffered saline containing 0.1% Tween-20 (TBST) for 1 h at room temperature and blots were incubated for 1 h at room temperature with a polyclonal antibody that recognizes the HCMV IE proteins (IE72/86), kindly provided by Daniel N. Streblow [ 58 ]. The blots were then washed in TBST and incubated with donkey anti-rabbit IgG conjugated to horseradish peroxidase (1:5000; Amersham Biosciences) for 1 h at room temperature. Antigen-antibody complexes were detected using an enhanced chemiluminescence system (Amersham Biosciences). Blots were subsequently washed in TBST and tested for immunoreactivity to a rabbit polyclonal antibody to human β-actin (Sigma; loading control). Indirect immunofluorescence Vehicle- and IFN-treated HFFs were infected with Towne-GFP at a MOI of 1.0. Five d p.i., cells were washed 3X with 1X PBS, fixed with 1:1 methanol/acetone for 10 minutes at room temperature, washed again with 1X PBS, and blocked with 4% BSA/PBS for 15 minutes at room temperature. Cells were incubated for 1 h at 37°C with a HCMV IE antibody (IE72/86 kD; Chemicon #MAB810, Temecula, CA) diluted 1:200 in 0.5% BSA/PBS. 
Cells were then stained with 1:50 Alexa Fluor 568-conjugated goat anti-mouse IgG F(ab')2 (Molecular Probes, Eugene, OR) for 30 minutes at 37°C, followed by a 2 minute incubation with 1 μM 4',6-diamidino-2-phenylindole, dihydrochloride (DAPI; Molecular Probes) at room temperature. Cells were coverslipped and mounted in Prolong Antifade mounting medium (Molecular Probes), visualized on a Zeiss Axio Plan II microscope (Thornwood, NY) and images were analyzed with deconvolution SlideBook™ 4.0 Intelligent Imaging software (Intelligent Imaging Innovations, Denver, CO). To determine the number of HCMV-infected cells, three fields of view (100X) for each treatment group were considered and the percent of IE-positive cells was calculated as: (average number of IE-stained cells/average number of DAPI-stained cells) × 100. Statistics Data are presented as the means ± standard error of the means (sem). Data from IFN-treated groups were compared to vehicle-treated groups and significant differences were determined by one-way analysis of variance (ANOVA) followed by Tukey's post hoc t test (GraphPad Prism, San Diego, CA). Competing interests The author(s) declare that they have no competing interests. Authors' contributions BS and HL conceived of the study, participated in the experimental design, performed all experiments and drafted the manuscript. RG and CM participated in the coordination and design of the study. All authors read and approved the final manuscript.
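The percent-IE-positive calculation in the immunofluorescence methods amounts to a one-line ratio of per-field averages; the counts below are hypothetical:

```python
def percent_ie_positive(ie_counts, dapi_counts):
    """Percent HCMV IE-positive cells from per-field counts.

    ie_counts / dapi_counts: cell counts from the three 100X fields
    (IE72/86-stained cells and DAPI-stained nuclei, respectively)."""
    avg_ie = sum(ie_counts) / len(ie_counts)
    avg_dapi = sum(dapi_counts) / len(dapi_counts)
    return avg_ie / avg_dapi * 100


# Hypothetical counts from three fields of view:
print(percent_ie_positive([42, 38, 40], [200, 190, 210]))  # 20.0
```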
Acute weight gain, gender, and therapeutic response to antipsychotics in the treatment of patients with schizophrenia

Background Previous research indicated that women are more vulnerable than men to adverse psychological consequences of weight gain. Other research has suggested that weight gain experienced during antipsychotic therapy may also psychologically impact women more negatively. This study assessed the impact of acute treatment-emergent weight gain on clinical and functional outcomes of patients with schizophrenia by patient gender and antipsychotic treatment (olanzapine or haloperidol). Methods Data were drawn from the acute phase (first 6 weeks) of a double-blind randomized clinical trial of olanzapine versus haloperidol in the treatment of 1296 men and 700 women with schizophrenia-spectrum disorders. The associations between weight change and change in core schizophrenia symptoms, depressive symptoms, and functional status were examined post-hoc for men and women and for each medication group. Core schizophrenia symptoms (positive and negative) were measured with the Brief Psychiatric Rating Scale (BPRS), depressive symptoms with the BPRS Anxiety/Depression Scale and the Montgomery-Asberg Depression Rating Scale, and functional status with the mental and physical component scores on the Medical Outcome Survey-Short Form 36. Statistical analysis included methods that controlled for treatment duration. Results Weight gain during 6-week treatment with olanzapine and haloperidol was significantly associated with improvements in core schizophrenia symptoms, depressive symptoms, mental functioning, and physical functioning for men and women alike. 
The conditional probability of clinical response (20% reduction in core schizophrenia symptoms), given a clinically significant weight gain (at least 7% of baseline weight), showed that about half of the patients who lost weight responded to treatment, whereas three-quarters of the patients who had a clinically significant weight gain responded to treatment. The positive associations between therapeutic response and weight gain were similar for the olanzapine and haloperidol treatment groups. Improved outcomes were, however, more pronounced for the olanzapine-treated patients, and more olanzapine-treated patients gained weight. Conclusions The findings of significant relationships between treatment-emergent weight gain and improvements in clinical and functional status at 6 weeks suggest that patients who have greater treatment-emergent weight gain are more likely to benefit from treatment with olanzapine or haloperidol regardless of gender.

Background Because antipsychotic drugs are considered the core treatment modality for schizophrenia [ 1 ], the differences among antipsychotics in terms of effectiveness, safety, and tolerability have expectedly become a topic of growing clinical and research interest [ 2 ]. The differences among antipsychotics in adverse events have garnered particular interest, with treatment-emergent weight gain becoming a focal point of attention and concern because weight gain can be associated with medical conditions such as type II diabetes, hypertension, and coronary artery disease [ 3 ]. Previous research has shown that there are variations with respect to the magnitude and the course of typical weight gain experienced during treatment with different antipsychotics [ 4 ]. Generally, the first generation antipsychotics, such as haloperidol, are associated with less weight gain than the second-generation antipsychotics. 
The newer atypical agents vary such that clozapine and olanzapine are associated with the greatest potential for weight gain, followed by risperidone, quetiapine, ziprasidone, [ 5 ] and aripiprazole. Although most of the literature on treatment-emergent weight gain tends to focus on this event as adverse, a growing body of research has demonstrated a significant link between beneficial therapeutic response and treatment-emergent weight gain. With the exception of a few studies that failed to find an association between weight gain and better clinical outcome [ 6 - 9 ], most studies, primarily on clozapine, suggest an association between weight gain and better clinical outcome [ 10 - 19 ]. This expanding body of evidence augments studies on first-generation antipsychotics predating the introduction of atypical antipsychotics by about 30 years, also suggesting a link between weight gain and improved therapeutic response [ 20 - 22 ]. One study [ 17 ] reported mixed findings, where the association between treatment-emergent weight gain and clinical outcome was found for patients treated with clozapine and olanzapine, but not with risperidone or haloperidol, suggesting that this phenomenon may be specific to particular antipsychotics. The study of gender differences in the relationship between treatment-emergent weight gain and therapeutic response has gained limited attention and provided conflicting results. A brief report on weight gain during clozapine therapy indicated that greater weight gain was associated with clinical improvement among women, but not among men [ 13 ]. In contrast, a more extensive analysis [ 16 ] demonstrated that clozapine-emergent weight gain predicted improvement in psychopathology among both men and women. 
It is unclear if there are gender differences in the association between treatment-emergent weight gain and therapeutic response, and if such gender differences exist, whether they are limited to a specific antipsychotic such as clozapine. Women in the general population appear to be vulnerable to the adverse emotional and psychosocial consequences of weight gain. For women, obesity has been linked to lower life satisfaction, increased social isolation [ 23 ], and lower levels of psychological and physical functioning [ 24 - 26 ]. Compared to men, women are more likely to perceive themselves as overweight [ 27 ], to diet [ 28 ], and to participate in weight loss programs [ 28 ]. Based on generalizations from studies on women in the general population, several authors have speculated that antipsychotic-emergent weight gain will be similarly accompanied by negative psychological consequences [ 29 - 31 ], which will negatively impact women's response to antipsychotic therapy. However, it is unclear if women who gain weight during treatment with antipsychotics tend to experience adverse emotional consequences similar to those noted among women who gain weight in the general population. It is also unclear whether the association between treatment-emergent weight gain and clinical response differs by patient gender and by type of antipsychotic. The primary objective of this study was to expand on prior research and investigate whether the relations between acute weight gain during antipsychotic therapy and treatment outcomes differ based on patient gender and the specific antipsychotic used in the treatment regimen, olanzapine or haloperidol. This study also aimed to broaden the definition of therapeutic response by extending beyond positive and negative symptoms of schizophrenia to depressive symptoms and levels of mental and physical functioning, because these domains tend to deteriorate with weight gain among women in the general population. 
Methods Subjects and study design We used data of 1296 men and 700 women who participated in a randomized, double-blind, multi-center, clinical trial comparing olanzapine to haloperidol [ 32 ]. Participants met DSM-III-R criteria for schizophrenia spectrum disorders (schizophrenia, schizoaffective disorder, or schizophreniform disorders), and were required to have a total score on the Brief Psychiatric Rating Scale (BPRS) [ 33 ] of ≥ 18 and/or intolerance to current antipsychotic therapy, excluding haloperidol. Following approval of institutional review boards, written informed consent was obtained from all participants. Participants were randomly assigned in a 2:1 ratio (2 olanzapine subjects for each haloperidol subject). Although randomization was not stratified on gender or any other patient characteristics, it resulted in a 2:1 ratio for males (870/426) and for females (467/233). The olanzapine group (N = 1337) included 467 women and 870 men, and the haloperidol group (N = 659) comprised 233 women and 426 men. Participants were randomly assigned to olanzapine, 5 to 20 mg/day, or haloperidol, 5 to 20 mg/day. The type of antipsychotic medication used prior to enrollment was not assessed in the current study, but the likelihood of previous treatment with an atypical antipsychotic drug was very low, because the study was initiated in 1994 when only clozapine was available in some of the sites. Further, randomization achieved balance on patient and illness characteristics [ 32 ], and there is no reason to expect that it failed to balance other characteristics such as type of prior antipsychotic medication. We used data from the acute phase, the first 6 weeks of the study, for several reasons. First and foremost, this study was a 6-week randomized double blind clinical trial with a 46-week "responder maintenance period", in which only patients who responded to the acute 6-week treatment per predetermined response criteria were eligible to continue. 
Consequently, the study design did not permit a longer-term analysis of the link between weight gain and improvement because only patients who improved during the first 6-week phase were followed up for a longer time period. Second, the 6-week period represents a relevant time frame often used in clinical practice to determine treatment outcome and decide on treatment discontinuation [ 15 ]. For many clinicians the initial 6 weeks of antipsychotic therapy is a minimal time period in which to critically evaluate how patients are responding to a new course of therapy. Third, the rate of weight gain previously reported on clozapine was greatest in the first 6 weeks and slowed thereafter [ 16 ], such that the increase between 6 weeks and 6 months was equivalent in magnitude to the increase between baseline and 6 weeks. This observation further enhanced the relevance of studying this phenomenon during the first 6 weeks of treatment. Lastly, the short duration of the current study is comparable to most previous studies of antipsychotic-emergent weight gain and clinical improvement, thus enabling more direct comparisons between the present and the previous findings. During the 6-week acute phase, the mean modal dose was 13.2 mg/day (SD = 5.8) for olanzapine and 11.8 mg/day (SD = 5.6) for haloperidol. There were no discontinuations due to weight gain as an adverse event for any treatment group during the 6-week study period, and the rate of discontinuation for any cause was similar for women (62.7%) and men (60.8%), with a significantly smaller proportion of the patients in the olanzapine group (33.5%) than in the haloperidol group (53.2%, p < .001). In addition, the percentage of patients who discontinued treatment because of an adverse event or a lack of efficacy was significantly higher in the haloperidol group than in the olanzapine group. Further details on the parent study design and primary findings are available elsewhere [ 32 ]. 
Measures This investigation used measures of positive and negative symptoms ("core schizophrenia symptoms"), depressive symptoms, functional status, and body weight. Core symptoms of schizophrenia were assessed by the Positive Symptom and the Negative Symptom subscales on the BPRS (scored on a scale of 0–6) extracted from the Positive and Negative Syndrome Scale (PANSS) [ 34 ]. Levels of depressive symptoms were assessed by the Depression/Anxiety subscale in the BPRS and total score on the Montgomery-Asberg Depression Rating Scale (MADRS) [ 35 ]. The Physical and the Mental Component scores on The Medical Outcome Survey – Short Form 36 (SF-36) [ 36 ] assessed physical and mental functioning. The SF-36 provides scores on eight functional scales: physical functioning, role limitations due to physical functioning, bodily pain, general health, social functioning, role limitations due to emotional problems, vitality and mental health. The first four scales can be summarized into a Physical Component Score (PCS) and the latter four constitute the Mental Component Score (MCS). PCS and MCS are often used alone because they account for 85% of reliable variance of the eight SF-36 domains, without losing information. It is notable that unlike the symptom measures, which were clinician-rated scales, the SF-36 is a patient-reported measure that provides patients' subjective appraisal of current functional status independently of clinicians' perceptions. Weight change (in kilograms) was measured from baseline to 6 weeks, or to endpoint for patients who dropped out of the study prior to the 6-week visit. BMI was calculated as weight in kilograms divided by the square of height in meters (BMI = kg/m 2 ). To enhance comparability of findings on different measures, the clinical measures were all standardized to z-scores. 
For the BPRS Core Symptoms, MADRS, and BPRS Anxiety and Depressive Subscale, this was done by subtracting the measure's overall mean and dividing by the measure's standard deviation at baseline. A single measure of depressive symptoms was calculated as the average of the standardized MADRS and BPRS Anxiety and Depressive Subscale. If a score was missing on either depression measure, the score on the available measure was used. The two depression measures were pooled because each is an independent and valid estimate of patients' level of depressive symptoms, and aggregating them should provide the best and most comprehensive estimate of depressive symptoms. Additionally, the pooling helped minimize loss of data, which are assumed not to be missing at random. The SF-36 Physical and Mental Component scores were converted from T-Scores to z-scores. Statistical analysis Baseline comparisons used independent samples t-tests for continuous variables and chi-square tests for categorical variables. Effects of treatment and gender on independent variables were assessed using ANCOVA, with the baseline score as well as the number of weeks in the study as covariates. The relationship between change in weight and change in each outcome variable was assessed using separate multiple linear regression analyses, each with corresponding clinical change score as a dependent variable, the corresponding baseline score and number of weeks in the study as covariates, and the following independent variables: weight change, treatment group assignment, and gender. In an additional analysis, the interactions of these three independent variables were added to the regression models. The analyses included measures from baseline and the 6-week visit. Missing data were handled by carrying forward the last observation for all patients with at least one post-baseline assessment. All analyses were performed using the Statistical Package for the Social Sciences (SPSS) version 11.0. 
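The standardization and pooling steps described above can be sketched as follows; the scores and baseline statistics are synthetic stand-ins, not study data:

```python
def to_z(scores, baseline_mean, baseline_sd):
    """Standardize raw scale scores against the baseline mean and SD."""
    return [(s - baseline_mean) / baseline_sd for s in scores]


def pooled_depression(madrs_z, bprs_anxdep_z):
    """Average the two standardized depression measures per patient;
    if one score is missing (None), fall back to the available one."""
    pooled = []
    for m, b in zip(madrs_z, bprs_anxdep_z):
        if m is None:
            pooled.append(b)
        elif b is None:
            pooled.append(m)
        else:
            pooled.append((m + b) / 2)
    return pooled


# Hypothetical: three patients, one missing a MADRS score and one
# missing a BPRS Anxiety/Depression score (synthetic mean 20, SD 5).
madrs_z = to_z([20.0, 25.0], 20.0, 5.0) + [None]  # [0.0, 1.0, None]
bprs_z = [0.2, None, -0.5]
print(pooled_depression(madrs_z, bprs_z))  # [0.1, 1.0, -0.5]
```

Pooling this way uses every available observation, which matches the paper's rationale of minimizing loss of data that are assumed not to be missing at random.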
Results Baseline characteristics Relative to men, women were older, more likely to be Caucasian, more likely to be overweight or obese, had less severe positive symptoms, lower levels of physical functioning, and higher levels of depressive symptoms (Table 1 ). Women were also more likely to be diagnosed with a schizoaffective disorder and less likely to be diagnosed with schizophrenia. At baseline, the weight and BMI of the haloperidol-treated women and men were significantly greater than the weight and BMI of olanzapine-treated women and men (Table 2 ). Further, men in either medication group weighed significantly more than women at baseline, on average, and their mean baseline BMI was significantly lower than that of women.

Table 1 Gender differences at baseline*

Characteristic                          All patients (N = 1996)  Women (N = 700)  Men (N = 1296)  P
Demographics
  Age                                   38.6 (11.4)              40.9 (12.8)      37.3 (10.4)     <0.001
  Race                                                                                            0.013
    Caucasian                           1600 (80.2%)             587 (83.9%)      1013 (78.2%)    0.002
    African descent                     220 (11.0%)              67 (9.6%)        153 (11.8%)     0.128
    Hispanic                            83 (4.2%)                19 (2.7%)        64 (4.9%)       0.018
    Other                               93 (4.7%)                27 (3.9%)        66 (5.1%)       0.211
  Diagnosis                                                                                       <0.001
    Schizophrenia                       1658 (83.1%)             526 (75.1%)      1132 (87.3%)    <0.001
    Schizoaffective disorder            300 (15.0%)              157 (22.4%)      143 (11.0%)     <0.001
    Schizophreniform disorder           38 (1.9%)                17 (2.4%)        21 (1.6%)       0.207
Core schizophrenia symptoms
  BPRS total                            33.4 (10.7)              33.5 (11.2)      33.3 (10.5)     0.660
  BPRS positive                         10.3 (4.1)               10.0 (4.1)       10.5 (4.0)      0.017
  BPRS negative                         6.7 (3.3)                6.7 (3.4)        6.7 (3.3)       0.791
Depressive symptoms
  MADRS                                 16.6 (8.8)               17.3 (9.3)       16.3 (8.5)      0.031
  BPRS anxiety and depression           7.5 (3.8)                8.0 (3.9)        7.3 (3.8)       <0.001
Functional status
  SF-36 physical component              43.6 (13.0)              41.4 (14.0)      44.5 (12.5)     0.004
  SF-36 mental component                34.6 (12.4)              34.6 (12.9)      34.6 (12.2)     0.958
Weight
  Weight, kg                            76.8 (17.1)              70.2 (16.4)      80.4 (16.3)     <0.001
  BMI                                   26.0 (5.2)               26.5 (5.8)       25.8 (4.9)      0.007
  BMI level                                                                                       <0.001
    Underweight to average (BMI < 25)   930 (48.9%)              319 (47.6%)      611 (49.6%)     0.418
    Overweight (BMI ≥ 25 and < 30)      609 (32.0%)              191 (28.5%)      418 (33.9%)     0.016
    Obese (BMI ≥ 30)                    364 (19.1%)              160 (23.9%)      204 (16.5%)     <0.001

* Data are presented as Mean (SD) or N (%). P-values refer to differences between women and men.

Table 2 Outcomes by gender and treatment group

                               Women: Olanzapine        Women: Haloperidol       Men: Olanzapine          Men: Haloperidol
Outcome measure                Baseline    Endpoint     Baseline    Endpoint     Baseline    Endpoint     Baseline    Endpoint
Core symptoms
  BPRS positive                10.1 (4.0)  6.5 (4.8)    9.9 (4.3)   6.9 (4.4)    10.3 (4.1)  7.0 (4.6)    10.7 (4.0)  8.0 (4.6)
  BPRS negative*               6.6 (3.3)   4.4 (3.1)    6.8 (3.4)   5.5 (3.3)    6.7 (3.2)   4.8 (2.9)    6.8 (3.5)   5.5 (3.2)
Depressive symptoms
  MADRS*                       17.8 (9.6)  11.0 (9.3)   16.1 (8.6)  13.5 (10.7)  16.0 (8.4)  10.5 (7.7)   17.0 (8.7)  13.7 (9.4)
  BPRS anxiety & depression*   8.1 (3.8)   5.0 (4.0)    7.8 (4.1)   6.0 (4.3)    7.1 (3.7)   4.5 (3.6)    7.7 (3.9)   5.7 (3.9)
Functioning
  SF-36 mental component*      34.7 (13.1) 41.8 (12.1)  34.2 (12.4) 36.5 (13.0)  34.6 (12.4) 40.6 (12.0)  34.7 (11.7) 37.8 (12.3)
  SF-36 physical component*    41.3 (14.3) 45.0 (13.9)  41.9 (13.5) 42.1 (13.8)  44.7 (12.2) 48.7 (11.6)  43.9 (13.2) 45.4 (13.1)
Weight
  Weight, kg* †                68.9 (15.5) 70.5 (15.8)  72.6 (17.8) 72.4 (17.6)  80.3 (15.9) 82.5 (16.2)  80.7 (17.1) 81.0 (17.4)
  BMI* †                       26.1 (5.4)  26.7 (5.5)   27.2 (6.6)  27.1 (6.5)   25.8 (4.8)  26.5 (4.9)   25.7 (5.0)  25.8 (5.0)

* Therapy effect (p < .05), reflecting significant differences between olanzapine and haloperidol-treated patients at baseline. † Gender effect (p < .05), reflecting significant differences between women and men on weight parameters within each treatment group at baseline, and between women and men when combined baseline values across treatment groups.

Weight gain by treatment group and gender In order to illustrate the differences in weight gain by treatment group and gender, the patients were grouped into thirds based on their percentage of change in weight from baseline. 
Approximately one-third of all patients (29.8%) lost weight (any decrease), one-third (36.6%) had relatively stable weight (0% to <3% increase), and one-third (33.6%) gained weight (≥ 3% increase). The corresponding mean weight changes in kilograms were -2.1 kg, 0.9 kg, and 4.6 kg, for the lost, stable, and increased weight groups, respectively. Figure 1 demonstrates that men and women had a similar weight gain pattern within each treatment group, and that 59% of olanzapine-treated patients and 82% of haloperidol-treated patients either lost weight or maintained stable weight. Further, 17.6% of the haloperidol treatment group and 41.4% of the olanzapine-treated patients gained at least 3% of their baseline body weight. Compared to the haloperidol-treated patients, the olanzapine treatment group had a greater increase in absolute weight (0.3 kg vs. 2.0 kg, F(1,1901) = 122.0, p < 0.001) and a significantly greater proportion of patients with a potentially clinically meaningful weight gain, defined as an increase of at least 7% from baseline body weight (3.0% vs. 13.6%, χ 2 (1, N = 1913) = 51.8, p < 0.001). Figure 1 Percentage of patients with different levels of weight change by gender and medication. Patients were placed in 3 equal groups based on their percent change in weight: "Lost" indicates any weight loss, "Stable" indicates ≥ 0% to <3% weight gain, and "Gained" indicates ≥ 3% weight gain. Olanzapine treatment group (N = 1337; 870 men, 467 women); Haloperidol treatment group (N = 659; 426 men, 233 women). Compared to women, men experienced greater increases in absolute weight (0.9 kg vs. 1.5 kg, F(1,1901) = 17.3, p < 0.001), were more likely to experience greater increases in BMI (0.35 vs. 0.48; F(1,1889) = 5.8, p = 0.016), and were more likely to have an increase of at least 7% from baseline body weight (8.1% vs. 11.2%; χ 2 (1, N = 1913), p = 0.032).
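The three weight-change groups used in Figure 1 are a simple function of percent change from baseline; a minimal sketch (the helper name is ours):

```python
def weight_change_group(baseline_kg, endpoint_kg):
    """Classify a patient by percent weight change from baseline:
    any decrease -> 'Lost', 0% to <3% gain -> 'Stable', >=3% gain -> 'Gained'."""
    pct_change = 100.0 * (endpoint_kg - baseline_kg) / baseline_kg
    if pct_change < 0:
        return "Lost"
    if pct_change < 3:
        return "Stable"
    return "Gained"
```

For instance, a patient going from 80 kg to 84 kg (a 5% increase) falls in the "Gained" group.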
Within the olanzapine treatment group, but not the haloperidol treatment group, significantly more men than women experienced a potentially clinically meaningful weight gain (11.0% vs. 15.0%; χ 2 (1, N = 1286) = 4.0, p = 0.045 for women and men in the olanzapine treatment group, and 2.3% vs. 3.5%; χ 2 (1, N = 627) = 0.7, p = 0.40, for women and men in the haloperidol treatment group). Outcomes by treatment group and gender Table 2 presents the outcome measures and BMI by gender and by treatment group at baseline and endpoint. As previously documented in the parent study [ 32 ], there were treatment effects on these outcome measures such that olanzapine-treated patients showed greater improvements than the haloperidol treatment group. There were no gender effects for any of the clinical (core symptoms of schizophrenia, depressive symptoms) or functional outcome measures (mental and physical functioning). Outcomes, weight change, and gender To assess the potential effects of gender on the relationship between outcomes and weight change, we performed a set of regression analyses predicting change in each of the outcome variables from weight change, treatment group, gender, and the interactions of these variables. The results indicated that gender was not a significant variable (i.e., the following components were not significant in any of the analyses: gender, gender by weight change, gender by treatment group, and gender by treatment group by weight change). Therefore, gender was dropped from subsequent analyses. Since men and women were not found to significantly differ on any of the clinical outcome measures and had a similar pattern of weight gain within each treatment group, we examined the association between weight change and change in treatment outcomes for all patients within each treatment group.
Regression analyses demonstrated that for both olanzapine- and haloperidol-treated patients, increases in weight were significantly associated with improvements in core schizophrenia symptoms ( B = -0.038, t(1899) = 5.6, p < 0.001), in depressive symptoms ( B = -0.030, t(1899) = 5.3, p < 0.001), in mental functioning ( B = 0.026, t(700) = 2.0, p = 0.047), and in physical functioning ( B = 0.028, t(700) = 2.3, p = 0.021). Because the level of depressive symptoms was based on two depression measures, the MADRS and the BPRS depression/anxiety subscale, we repeated the analysis using each of these measures separately. Results were unchanged. The regression coefficients ( B 's) indicated that every one-kilogram increase in weight at 6 weeks was associated with approximately 0.03 standard deviations of improvement in each clinical outcome parameter, when controlling for the effects of treatment group, gender, the treatment group-gender interaction, baseline weight, the corresponding baseline outcome measure, and the number of weeks in the study. In order to graphically illustrate the findings, the patients were grouped into thirds based on their percent change in weight as described above, resulting in lost, stable, and increased weight groups. Figure 2 demonstrates the similarity in the relationships between weight changes and changes in the four treatment outcome variables for the olanzapine and haloperidol treatment groups. Figure 2 Change in outcomes by change in weight and treatment group for all patients. Patients were grouped in thirds based on their percent change in weight: "Lost" indicates any weight loss, "Stable" indicates 0% to <3% weight gain, and "Gained" indicates weight gain of 3% or more. "Depression" as measured by the MADRS or BPRS anxiety and depression scale. "Schizophrenia Symptoms" as measured by the BPRS positive symptoms and BPRS negative symptoms scales. "Physical Functioning" as measured by the SF-36 physical component score.
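The regression structure described above (a change score regressed on weight change plus covariates) can be illustrated on synthetic data; here we build a noise-free outcome with a known -0.03 SD-per-kg weight-change effect and recover it by ordinary least squares. All numbers below are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
weight_change = rng.normal(1.2, 3.0, n)        # kg change at 6 weeks (illustrative)
baseline_score = rng.normal(0.0, 1.0, n)       # z-scored baseline outcome
weeks_in_study = rng.integers(1, 7, n).astype(float)
treatment = rng.integers(0, 2, n).astype(float)

# Noise-free change score with a known -0.03 SD-per-kg weight-change effect.
change_score = (-0.03 * weight_change - 0.5 * baseline_score
                + 0.01 * weeks_in_study + 0.1 * treatment)

X = np.column_stack([np.ones(n), weight_change, baseline_score,
                     weeks_in_study, treatment])
coef, *_ = np.linalg.lstsq(X, change_score, rcond=None)
print(round(coef[1], 3))  # recovered weight-change coefficient: -0.03
```

Because the synthetic outcome is an exact linear combination of the predictors, least squares recovers the -0.03 coefficient to machine precision.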
"Mental Functioning" as measured by the SF-36 mental component score. Error bars represent 95% confidence intervals. Consistent with a prior analytical approach by Czobor and colleagues [ 17 ], we also performed analyses (ANCOVA, controlling for baseline weight and weeks in the study) that specifically contrasted patients who demonstrated clinical improvement (reduction in BPRS Core Symptoms > 20%) with those who deteriorated by any amount. On core symptoms, improved olanzapine patients gained 2.49 kg, compared with 1.42 kg for those who deteriorated (F(1,1278) = 14.7, p < 0.001). Improved patients on haloperidol gained 0.08 kg while those who deteriorated lost 0.44 kg (F(1, 617) = 2.9, p = 0.087). In order to directly compare current findings with those previously reported by Czobor and associates, we repeated the analysis using their analytical variables (absolute weight change (kg) and PANSS total score), while covarying baseline bodyweight and baseline PANSS total score. This analysis demonstrated that improved patients on olanzapine gained 2.37 kg, compared with 0.59 kg for those who deteriorated (F(1,1278) = 53.1, p < 0.001). Improved patients on haloperidol gained 0.15 kg, while deteriorated patients on haloperidol lost 0.55 kg (F(1,618) = 6.7, p = 0.010). Results were similar when also controlling for weeks in the study. The partial correlations between weight gain (kg) and therapeutic response as measured by the PANSS total score (controlling for baseline weight, baseline PANSS total score, and weeks of treatment) were statistically significant for the olanzapine and haloperidol treatment groups (partial r = -0.15, N = 1279, p < 0.001 for olanzapine; partial r = -0.11, N = 618, p = 0.006 for haloperidol). 
When using the Czobor and associates method (controlling only for baseline weight and baseline PANSS total score), the partial correlations were more disparate across treatment groups (partial r = -0.24, N = 1280, p < 0.001 for olanzapine; partial r = -0.10, N = 619, p = 0.013 for haloperidol), highlighting the importance of controlling for weeks of treatment. These correlations were similar in direction but of smaller magnitude than the partial correlations reported by Czobor and associates (partial r = -0.57, df = 37, p < 0.001 for olanzapine; partial r = -0.30, df = 35, p = 0.060 for haloperidol). Although weight gain was identified as a prognostic marker of therapeutic response for both treatment groups, it was unclear whether this marker was stronger for the olanzapine than for the haloperidol treatment group, because the olanzapine-treated patients had greater weight gain and greater therapeutic improvements compared to the haloperidol treatment group. To address this question, we calculated the conditional probability of clinical response, defined as a reduction in BPRS core symptoms > 20%, given that the patient experienced various amounts of weight gain on olanzapine or on haloperidol. Results in Table 3 demonstrate that weight gain was a similar prognostic indicator for each treatment group, as patients who gained more weight were significantly more likely to respond to treatment in both treatment groups. About half of the patients who lost weight responded to treatment, whereas three-quarters of patients who had a clinically significant weight gain (≥7%) responded to treatment.
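The partial correlations discussed above can be computed by residualizing both variables on the covariates and correlating the residuals; a minimal sketch (not the authors' code):

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Partial correlation of x and y controlling for covariates:
    regress each variable on the covariates (with an intercept) and
    correlate the two residual vectors."""
    Z = np.column_stack([np.ones(len(x)), covariates])
    resid_x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    resid_y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(resid_x, resid_y)[0, 1])
```

In the study's setting, `x` would be weight change (kg), `y` the PANSS total change, and the covariates baseline weight, baseline PANSS total score, and (in the authors' preferred analysis) weeks of treatment.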
Table 3 Conditional probability of response given different amounts of weight gain for all patients and by medication

                        Olanzapine (N = 1283) a                     Haloperidol (N = 622) b
                        Did not respond    Responded     P(R|W) c   Did not respond    Responded     P(R|W) c
Weight change           N        %         N       %                N        %         N       %
Lost weight (< 0%)      210    16.4%       187   14.6%      .47     176    28.3%       166   26.7%      .49
Gained 0 to < 3%        118     9.2%       234   18.3%      .67      88    14.2%        83   13.3%      .49
Gained 3 to < 7%        111     8.7%       248   19.3%      .69      39     6.3%        51    8.2%      .57
Gained ≥ 7%              43     3.4%       132   10.3%      .75       5     0.8%        14    2.3%      .74

Note. Response was defined as a greater than 20% decrease in BPRS Core Symptoms.
a Mantel-Haenszel test of linear-by-linear association, χ 2 (1, N = 1283) = 52.3, p < 0.001.
b Mantel-Haenszel test of linear-by-linear association, χ 2 (1, N = 622) = 4.1, p = 0.044.
c Probability of response given level of weight change.

Discussion Like women in the general population, women with schizophrenia were more likely than men to be obese [ 25 ], to be depressed [ 37 ], and to function at a poorer physical level [ 26 ]. Despite these similarities, which would be expected to bode poorly for the effects of acute weight gain on women's treatment outcomes, women and men who gained weight during antipsychotic therapy demonstrated significant improvements in core schizophrenia symptoms, depressive symptoms, and mental and physical levels of functioning. Overall, weight gain was found to be linked to better clinical response among men and women treated with olanzapine or haloperidol. This link impacted olanzapine-treated patients more than those treated with haloperidol, because improved clinical and functional outcomes were more pronounced for the olanzapine-treated patients, who were also more likely to experience weight gain than patients treated with haloperidol.
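The conditional probabilities P(R|W) in Table 3 are simple row proportions of the response counts; a minimal sketch using the olanzapine counts (recomputed values may differ from the table in the last rounded digit):

```python
# (did_not_respond, responded) counts by weight-change level,
# olanzapine arm, taken from Table 3.
olanzapine_counts = {
    "Lost weight (< 0%)": (210, 187),
    "Gained 0 to < 3%":   (118, 234),
    "Gained 3 to < 7%":   (111, 248),
    "Gained >= 7%":       (43, 132),
}

def response_probability(did_not_respond, responded):
    """P(response | weight-change level) as a row proportion."""
    return responded / (did_not_respond + responded)

for level, (nr, r) in olanzapine_counts.items():
    print(f"{level}: {response_probability(nr, r):.2f}")
```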
The current study adds to the literature by demonstrating a positive association between treatment-emergent weight gain and better clinical outcomes that extends beyond positive and negative symptoms to depressive symptoms and functional status. Depressive symptoms in schizophrenia are known to be a distinctive clinical dimension of prognostic significance [ 38 ] that is associated with compromised quality of life [ 39 ], increased risk of psychotic relapse [ 40 ], suicidal tendencies, work impairment, lower activity, worse daily functioning, and poorer life satisfaction [ 41 ]. In this study, weight gain during antipsychotic therapy was linked to improvements in both core symptoms of schizophrenia and in depressive symptoms, two distinct and clinically meaningful dimensions of outcome in the treatment of schizophrenia. Current findings are consistent with previous research [ 10 - 19 ] and provide further support to the hypothesis [ 16 ] that a positive link between treatment-emergent weight gain and improved clinical response may be a generalized phenomenon across antipsychotic medications. Although a recent study [ 17 ] demonstrated this phenomenon for clozapine and olanzapine-treated patients but not in the haloperidol or risperidone treatment groups, examination of its findings revealed great similarity to the current results, with a moderate association between weight gain and therapeutic response for the haloperidol treatment group despite a small sample size. Interestingly, the size of the effects reported by Czobor and associates (partial correlations of -0.57 for olanzapine and -0.30 for haloperidol) were numerically larger than those found in the current study (partial correlations of -0.24 for olanzapine and -0.10 for haloperidol). Our study supports the findings of Czobor and associates but with sufficient statistical power to produce statistically significant results for both the olanzapine and the haloperidol treatment groups. 
Although the current results are consistent with those reported in a number of previous prospective studies, our findings are incongruent with two retrospective surveys. In the more recent study [ 42 ], self-administered surveys were distributed to schizophrenia patients through chapters of the National Alliance for the Mentally Ill to assess their perceptions about the negative impact of treatment-emergent weight gain on psychosocial functioning. The authors concluded that weight gain is directly associated with reduced quality of life. Several limitations of this study were previously noted [ 43 ], pointing particularly to a major confounding factor: most of the respondents had started their antipsychotic medications several years before the survey. Because antipsychotics differ in the magnitude and in the trajectory of weight gain over time [ 44 ], the reported differences may reflect differences between a group of patients whose illness is well managed, and who thus report a sense of relative psychological well-being, and a group of distressed patients who are adjusting to a new antipsychotic regimen [ 43 ]. The other survey [ 45 ] queried depressed psychotic patients who called a mental health crisis line about the impact of eight adverse events, including weight gain, on their emotional distress and satisfaction with treatment. Although weight gain was the least frequently reported adverse event, it was viewed as the most distressing, particularly for women, and was linked to lower satisfaction with treatment. This survey, which was noted for its lack of rigorous design [ 2 ], did not report the treatment duration on the antipsychotic drugs. As a result, the respondents may have started the antipsychotic regimens years before the survey, obscuring the findings in a manner similar to that in the survey by Allison and colleagues [ 42 ].
In essence, the two retrospective self-reports appear to have assessed patients' treatment satisfaction and perceptions rather than objective parameters of clinical change and treatment progress. Studies that measure weight change and clinical response prospectively are more desirable, as they provide more objective information; this is especially important because retrospective self-reports may capture the social climate rather than objective changes in clinical outcomes. It is noteworthy, however, that there are four prospective studies reporting findings that are inconsistent with ours [ 6 - 9 ]. The reasons for the inconsistencies are not clear but may be due to small sample sizes: the sample size needed to detect a correlation of .20 with 80% power is 194, while the sample sizes of these four studies ranged from 30 to 82. Although the current study found a link between treatment-emergent weight gain and better therapeutic response, its correlational nature does not allow for discerning the underlying causes. There are numerous factors and poorly understood mechanisms that may impact patients' weight gain during treatment, including environmental, behavioral, neurochemical, genetic, and clinical factors [ 17 ]. It was previously noted, for example, that the association between weight gain and therapeutic improvement may reflect, for some patients, the restoration of body weight lost during an acute episode, because patients were found to restore their original body weight upon recovery even prior to the introduction of antipsychotics [ 46 ]. While the association between treatment-emergent weight gain and therapeutic response may be due to specific pharmacological pathways, it is also possible that non-pharmacological pathways play an important role.
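The quoted sample size of 194 follows from the standard Fisher z approximation for correlation tests; a sketch:

```python
import math
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate N needed to detect correlation r with a two-sided
    test at the given alpha and power, via the Fisher z transformation:
    N = ((z_{1-alpha/2} + z_{power}) / atanh(r))^2 + 3."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(((z_alpha + z_beta) / math.atanh(r)) ** 2 + 3)

print(n_for_correlation(0.20))  # -> 194
```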
The link between weight gain and therapeutic response may be an epiphenomenon that accompanies clinical and functional improvements by influencing patients' increased motivation, pleasures, and specific behaviors that enhance weight gain [ 16 ]. Future studies will be needed to evaluate the correlations reported here in order to better understand the underlying mechanisms of treatment-emergent weight gain, and help differentiate pharmacological from non-pharmacological pathways to weight gain in the treatment of schizophrenia with antipsychotics. A promising research strategy [ 47 ] may involve the use of placebo-controlled trials of antipsychotics to contrast weight change between patients who improved on placebo with those who deteriorated on placebo. It is important to note that regardless of the pathways to weight gain, the link between excess weight and greater morbidity and mortality calls for careful clinical attention during the treatment of patients with schizophrenia. Another important issue that needs addressing is the growing focus on the associations between obesity and poorer quality of life among schizophrenia patients [ 48 ]. Such research helps highlight the need to distinguish between treatment-emergent weight gain and obesity. Although these terms are not mutually exclusive, they are not synonymous either. Gaining weight during treatment with antipsychotics should not be equated with becoming obese. For example, thin individuals may gain a potentially clinically meaningful proportion of their baseline body weight (≥ 7%) and attain an average BMI, whereas obese individuals may gain the same proportion of their baseline weight but maintain their initial obese status per BMI categorization. 
There are numerous permutations to this phenomenon, suggesting the need to recognize its complexity and pursue further studies that may help clarify the causes and the consequences of treatment-emergent weight gain among individuals with schizophrenia who differ in their baseline body weight. The current study has its limitations. First, it examined weight change post hoc and only during the acute phase of the illness, which was confined to the first 6 weeks of treatment, and the findings may not generalize to long-term treatment-emergent weight gain. It is noteworthy that this study assessed treatment-emergent weight gain at 6 weeks, although patients continue to accrue weight beyond the acute treatment phase. For olanzapine-treated patients, the mean weight gain observed at 6 weeks (2.0 kg) was about a third of the 6.26 kg mean weight gain found at 39 weeks, when weight gain tends to plateau on olanzapine [ 49 ]. Similarly, the haloperidol-treated patients had a 0.3 kg mean weight gain at 6 weeks, which was less than half of the 0.69 kg mean weight gain observed for these patients after 39 weeks of treatment. This observation highlights the need to assess the association between weight gain and treatment outcomes in longer-term studies. The choice of 6 weeks was driven not only by the design of the study, in which treatment responders in the acute phase were followed up in the 46-week maintenance period, but also by the clinical relevance of the acute treatment phase. Clinicians often use the first 6 weeks of treatment to assess the tolerability and effectiveness of a new antipsychotic regimen and to decide whether to continue or discontinue that course of therapy [ 15 ]. Further, the study of the first 6 weeks of treatment enabled comparisons of the current findings with other studies, which were typically of short-term duration.
Although weight gain appears to be greatest and most rapid during the first 6 weeks of treatment with clozapine [ 16 ], and during the first 12 weeks for olanzapine, with a trend toward a plateau after approximately 39 weeks of treatment [ 49 ], longer-term studies will be needed to determine the validity of the current findings in longer-term treatment. This may, however, be difficult to study. Patient attrition from studies is not random: those experiencing poor treatment efficacy or poor tolerability are more likely to discontinue the study, leaving a relatively homogeneous group of study completers who are also treatment responders. Such a reduction in the variability of treatment outcomes may diminish the likelihood of finding this phenomenon in long-term randomized double-blind studies. Further, if this phenomenon were to be investigated in long-term naturalistic observational studies, one would likely face another problem, namely the prevalent use of polypharmacy [ 50 ] and the dynamic nature of treatment for schizophrenia [ 51 ], with frequent changes in antipsychotic regimens and in concomitant psychotropic medications. Such complexity may increase the difficulty of identifying which treatment at what time was associated with which weight gain and treatment outcome. It is of interest to note, however, that despite rapid weight gain during the 6-week period in our study, when weight gain is more likely to be noticed by the patients and their clinicians and thus may elicit a negative emotional response, the weight gain in this study was not only linked to improved clinical and functional status but also to reduced emotional distress as measured by the depression scales. Another limitation of the study is its lack of assessment of patients' adherence with medication. It is possible that weight gain and improvement occurred together because improvements occur mostly in patients who are medication adherent.
Although this was not assessed in the present study, this possibility was previously studied by Meltzer et al. [ 16 ], who found a significant association between clozapine-emergent weight gain and improved psychopathology. In their study, non-adherent patients were expected to have lower or absent plasma clozapine levels, but there was no relationship between plasma clozapine levels and weight gain or clinical response. Meltzer and colleagues also monitored adherence closely during weekly visits to determine white blood count and found no evidence of intermittent or poor adherence in their study patients. Additionally, the associations between weight gain and improved outcomes in our study were similarly found within the haloperidol and within the olanzapine treatment groups when controlling for treatment duration. Treatment duration is a proxy for time on the medication in randomized double-blind trials, where treatment discontinuation for any reason results in the patient's discontinuation from the study. Thus, if patients were more adherent with one antipsychotic drug than with the other, and medication adherence influenced the associations between weight gain and outcomes, then one would expect to find the association between weight gain and improved outcomes only in the more adherent treatment group, not in both treatment groups as found in the present study. Another study limitation is the correlational nature of the analyses, which precludes inference of a cause-effect relationship and allows for the possibility that the observed associations might be due to an unobserved variable or set of variables. Further, the relatively low correlations suggest that the association explains only a small proportion of the variance in treatment outcomes. Response to antipsychotic medications is a complex phenomenon that is associated with numerous relatively independent components [ 16 ], and weight gain is only one of them.
Nonetheless, this link was demonstrated using several statistical approaches: contrasting weight gain between responders and patients who did not respond, identifying the degree of improvement associated with every 1 kg gained at 6 weeks, and calculating the conditional probability of therapeutic response given various amounts of weight gain. These findings are important as they suggest that acute weight gain is a valuable prognostic marker in the treatment of schizophrenia. Next, because the study included patients with a moderately severe level of symptomatology, the current findings may not generalize to patients with milder or residual symptoms of schizophrenia. However, the relationships among the severity of patients' baseline symptomatology, treatment-emergent weight gain, and therapeutic response are currently unclear. And lastly, this study used the SF-36, a self-report measure of functional status, which was not designed to assess the potential impact of weight gain on patients' functional status or quality of life. Preliminary information on the first measure designed to specifically capture the impact of antipsychotic-emergent weight gain on patients' psychosocial functioning was only recently published [ 52 ]. One would have expected, however, to detect a decline in patients' mental or physical levels of functioning if the experienced weight gain had an adverse impact during the acute treatment phase. Conclusions Women (and men) with schizophrenia who gained weight during treatment with olanzapine or haloperidol did not experience worsening of clinical or functional status. To the contrary, they had significant improvements in core symptoms of schizophrenia, depressive symptoms, and mental and physical levels of functioning.
Although excessive weight gain, regardless of origin, is of concern due to its association with physical health problems, the current findings suggest that patients who have greater treatment-emergent weight gain are more likely to benefit from treatment with olanzapine or haloperidol. The findings highlight the complexity inherent in the medication management of schizophrenia patients and the need to balance treatment risks and benefits for each patient. In addition, further prospective studies will be required to assess the effects of weight gain, in both psychiatric and medical terms, on individuals treated for schizophrenia with various antipsychotic medications. Competing interests The authors are employees of Eli Lilly and Company, Indianapolis, Indiana. Authors' contributions • HAS conceived of the study, participated in its design, the analytical plan, and the interpretation of the results, and drafted the manuscript. • MS participated in the design of the study, the analytical plan, and the interpretation of the results, and performed the statistical analysis. • ZZ and BK participated in the design of the study, the interpretation of the results, and the drafting of the manuscript. Pre-publication history The pre-publication history for this paper can be accessed here:
520756 | Logistics of community smallpox control through contact tracing and ring vaccination: a stochastic network model | Background Previous smallpox ring vaccination models based on contact tracing over a network suggest that ring vaccination would be effective, but have not explicitly included response logistics and limited numbers of vaccinators. Methods We developed a continuous-time stochastic simulation of smallpox transmission, including network structure, post-exposure vaccination, vaccination of contacts of contacts, limited response capacity, heterogeneity in symptoms and infectiousness, vaccination prior to the discontinuation of routine vaccination, more rapid diagnosis due to public awareness, surveillance of asymptomatic contacts, and isolation of cases. Results We found that even in cases of very rapidly spreading smallpox, ring vaccination (when coupled with surveillance) is sufficient in most cases to eliminate smallpox quickly, assuming that 95% of household contacts are traced, 80% of workplace or social contacts are traced, and no casual contacts are traced, and that in most cases the ability to trace 1–5 individuals per day per index case is sufficient. If smallpox is assumed to be transmitted very quickly to contacts, it may at times escape containment by ring vaccination, but could be controlled in these circumstances by mass vaccination. Conclusions Small introductions of smallpox are likely to be easily contained by ring vaccination, provided contact tracing is feasible. Uncertainties in the nature of bioterrorist smallpox (infectiousness, vaccine efficacy) support continued planning for ring vaccination as well as mass vaccination. If initiated, ring vaccination should be conducted without delays in vaccination, should include contacts of contacts (whenever there is sufficient capacity) and should be accompanied by increased public awareness and surveillance. 
| Background Concerns about intentional releases of smallpox have prompted extensive preparations to improve our ability to detect and respond to an outbreak of smallpox [ 1 - 4 ]. Many factors contribute to the public health challenge of understanding and preparing for smallpox, including the age and quality of epidemiological data on native smallpox and the smallpox vaccine, the difficulty of extrapolating those data to current populations, the possible terrorist use of altered smallpox, our ignorance of terrorist methods of release, and the relatively high risk of adverse events caused by the smallpox vaccine. The Centers for Disease Control and Prevention (CDC) established ring vaccination (selective epidemiological control [ 5 ]), a strategy in which contacts of cases are identified and vaccinated, as the preferred control measure in the event of a smallpox outbreak (interim plan). The successful use of ring vaccination during the smallpox eradication campaign and its logical emphasis on case-contacts for immediate vaccination support its use (though the attribution of the success of the eradication program to ring vaccination has been challenged [ 6 ]). Health officers should initiate ring vaccination upon identification of the first cases of smallpox. However, there are legitimate concerns regarding the ability of public health practitioners to mount a quick, comprehensive and successful ring vaccination program, particularly in the face of a moderate-sized or large smallpox outbreak. To guide preparation efforts and inform incident decision-making, we attempt to identify outbreak characteristics and response capacities that significantly impact the ability of ring vaccination to control a smallpox outbreak, and to determine whether ring vaccination is useful in the presence of a mass vaccination campaign. Our analysis uses a newly developed mathematical model: a continuous-time, event-driven network simulation model of smallpox ring vaccination.
Mathematical models can advance our understanding of how a smallpox outbreak might progress. Several mathematical and computer models address the question of smallpox transmission [ 7 - 13 ]. The first model to appear [ 8 ] concluded that ring vaccination would be effective, but did not treat response logistics in detail; the model was linear and did not treat the depletion of susceptibles as the epidemic progressed (appropriate, however, for assessing control early in an epidemic, when the number infected is small compared to the number of susceptibles, e.g. [ 14 ]). The innovative model by Kaplan et al. [ 9 ] emphasized the importance of resource limitation and the logistics of smallpox response, but assumed that full infectiousness began before the onset of symptoms (and the subsequent identification and removal), and did not separately monitor close epidemiological contacts of patients (who are at greatest risk, but are also easiest to find and vaccinate); its conclusions were highly critical of ring vaccination. The model by Halloran et al. [ 11 ], a stochastic, discrete-time network model, omitted the explicit inclusion of response logistics while otherwise using parameter values similar to those in Kaplan et al. [ 9 ]; the inclusion of residual immunity from individuals vaccinated prior to the discontinuation of routine vaccination, however, led to a more favorable view of ring vaccination. The model by Bozzette et al. [ 12 ] assumed that ring vaccination would reduce the number of transmissions and focused on health care workers (but did not explicitly include the network structure of the population nor the response logistics of ring vaccination). 
The model by Eichner [ 15 ] did not explicitly include the network structure of the population nor the logistics of ring vaccination, but did use parameters based on data from an outbreak in Nigeria, and did distinguish close and casual contacts, case isolation, and surveillance of contacts; it concluded that case isolation and contact tracing could prevent the spread of smallpox. Finally, the individual-based model by Epstein et al. [ 16 ] presented scenarios illustrating certain alternatives to pure mass vaccination and ring vaccination of contacts of cases in preventing smallpox transmission in small populations of 800 individuals; this model includes no homogeneity assumptions, but did not analyze tracing of contacts of contacts. Because none of the available models includes both network structure (with explicit contact tracing) and response logistics limited by the number of available disease control investigators [ 9 ], we included these features in a continuous-time event-driven network simulation model of smallpox ring vaccination. Specifically, the model we developed includes the following features:

Network structure. Smallpox was primarily a disease of close contact, especially household contacts [ 5 ]. Such contacts are both the most important epidemiologically, and also the easiest to identify.

Post-exposure vaccination. Some evidence suggests that vaccination soon after exposure may lessen the severity of the resulting case of smallpox or possibly prevent disease entirely [ 17 - 20 ].

Second ring. Ring vaccination may involve not only vaccinating contacts of cases, but also contacts of contacts of cases [ 21 , 22 ] – potentially allowing the public health authorities to "outrun" the chain of transmission.

Response capacity. Limited case-finding and vaccination capabilities lead to the possibility that it may be impossible to find newly exposed individuals and vaccinate them in time, resulting in a "race to trace" [ 9 ]. 
Heterogeneity in natural history. Mild, ambulatory cases of smallpox may spread disease because such cases may be harder to recognize.

Prior vaccination. Vaccination of individuals prior to the discontinuation of routine vaccination may provide some, possibly considerable, protection against infection [ 11 , 23 , 24 ], although it may also result in more mild cases which may be harder to detect.

Public awareness. Public awareness may lead to more rapid detection of cases.

We use this model to determine what factors promote or hinder the success of ring vaccination during a smallpox outbreak, and whether ring vaccination is useful in the presence of a mass vaccination campaign. In particular, the goal of this paper is to examine the control of smallpox by contact tracing and ring vaccination using a network model which includes response logistics [ 9 ].

Methods

Model structure

Natural history of smallpox

We briefly review relevant features of the natural history and epidemiology of smallpox [ 17 , 25 , 8 , 28 ]. Following infection by the variola virus, individuals exhibited an incubation period of approximately 7–19 days with 10–14 being most typical. Sudden onset of fever and malaise, often with accompanying headache and backache, began the initial (or pre-eruptive) phase of smallpox. After 2–3 (or perhaps 4) days, individuals with the most common form, ordinary type smallpox, developed the characteristic focal rash, preceded in many cases by oropharyngeal lesions. In fatal cases of ordinary smallpox, death often occurred between the tenth and sixteenth day of symptoms; among survivors, most scabs had separated by day 22–27 of illness [ 26 ]. The course of smallpox varied widely between individuals, and several different clinical classifications were developed [ 29 - 31 , 17 , 26 ]. 
Consideration of the clinical features and severity of smallpox is important from the standpoint of mathematical transmission modeling because (1) the clinical features affect the ease of diagnosis (and thus of case identification), (2) more severe forms of smallpox may result in more transmission, and (3) vaccinated individuals may develop less severe disease. We utilize a modified or simplified version of the classification system developed by Rao [ 32 , 31 , 26 ]; for the mathematical model, we classify smallpox into five categories: early hemorrhagic, flat and late hemorrhagic, ordinary, modified, and mild. However, the clinical features and severity of smallpox in different populations may have been affected by underlying host factors, differences in viral strains, or differences in the infectious dose owing to different prevailing modes of transmission; robust and precise quantitative estimates of the effects of (pre- or post-exposure) vaccination on the resulting smallpox severity, or of the infectivity differences between individuals exhibiting different forms of smallpox, are therefore not available. The significance of such differences will be revealed through sensitivity analysis. Further details are given in Appendix 1 [see Additional file: 1 ]. Vaccination with vaccinia virus provided substantial protection against infection. Dixon assessed the risk of infection for an individual successfully vaccinated 3 years prior to exposure to be 0.1% of the infection risk of an unvaccinated individual [ 17 ]. However, smallpox vaccination did not always take, and moreover, in many instances, individuals who experienced repeated vaccination failure developed severe smallpox upon exposure. The probability of a successful take depended on the vaccination method used; we assume that the take rate is between 95% and 100% [ 22 , 28 ]. 
In addition to protection against infection, vaccination could in many cases modify the course of infection and reduce its severity. Vaccine protection waned over time, but individuals vaccinated 20 years prior to exposure were believed to still have half the infection probability of an unvaccinated person [ 17 ], and to have some protection against the most severe manifestations of smallpox. Dixon [ 17 ] believed that vaccine protection had at least three components, which decayed at different rates; for the purpose of this paper, we will assume that the severity of smallpox in any previously vaccinated individual (recently or otherwise) follows the same distribution as for the vaccinated subjects seen in the case series observed by Rao in Madras [ 26 ], except that anywhere from 0 to 5% of vaccinated subjects develop smallpox too mild to diagnose without special surveillance or awareness. Observe that the vaccinated cases studied by Rao were vaccinated (at some point in their lives) before exposure, rather than after exposure to smallpox. Smallpox was largely a disease of close contacts [ 17 , 26 , 33 ], spread primarily through face to face contact with an infected person (or occasionally through contaminated clothing). Individuals in the incubation period of smallpox were not infectious, and long term carriers did not exist. Patients were believed to be infectious following the development of oropharyngeal lesions, which could precede the rash by 24 hours [ 26 ]. However, patients were believed to be most infectious during the first week of the rash [ 26 ]; Dixon (1962) believed that patients could be infectious from the onset of acute viremia, but most evidence suggested that little transmission occurred prior to the development of the rash [ 26 , 33 ]. The more severe the case, the more infectious the patient appeared to be [ 34 ]; mild cases were believed to have very little infectiousness. 
While scabs contained infectious material and patients were considered to be infectious until the last scab fell off, in practice patients were not highly infectious during the scabbing phase. Importantly, patients who had been vaccinated were found to cause fewer secondary cases [ 34 ]. Very severe cases, such as hemorrhagic or flat smallpox, occasionally resulted in considerable transmission, owing to diagnostic difficulties; mild cases, in which the patient remained ambulant during the course of the disease, could cause considerable spread as well [ 35 , 36 ]. Within a household or family dwelling, the secondary attack rate among unvaccinated susceptibles depended on time and place, occasionally falling below 50% [ 29 ] but often approaching 100% [ 37 ]. Drier conditions were often believed to favor transmission [ 17 , 27 ], so that lower rates of transmission derived from tropical regions may not be applicable to the temperate zone [ 38 ]. The number of secondary cases resulting from a given importation into Europe varied widely [ 39 ], with most importations yielding few cases but with occasional large outbreaks. Mathematically, we represent the course of smallpox according to Figure 1 . 
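The staged course of disease sketched above can be illustrated by sampling a waiting time for each stage. The bounds below are illustrative assumptions for a condensed five-stage version, not the paper's calibrated eight-state values from Appendix 2 (which also draw the incubation period from an empirical importation-data distribution rather than a uniform one):

```python
import random

# Hypothetical (min, max) dwell-time bounds in days for a condensed set of
# stages; the paper's model uses eight states with ranges given in its
# Appendix 2, and an empirical incubation distribution.
STAGE_BOUNDS = {
    "incubation": (10.0, 14.0),    # infection to onset of fever/malaise
    "pre_eruptive": (2.0, 4.0),    # febrile, not yet infectious
    "oropharyngeal": (0.5, 1.0),   # lesions precede the rash by ~24 h
    "rash_week1": (6.0, 8.0),      # period of greatest infectiousness
    "rash_late": (10.0, 16.0),     # declining infectiousness to scab separation
}

def sample_course(rng):
    """Sample one patient's dwell time in each stage.

    Uniform sampling guarantees a strictly positive minimum waiting time
    in every stage, the stated rationale for preferring uniform waiting
    times over, say, exponential ones.
    """
    return {stage: rng.uniform(lo, hi) for stage, (lo, hi) in STAGE_BOUNDS.items()}
```

In a continuous-time, event-driven simulation such as the one described here, each sampled stage boundary would be placed on an event queue rather than stepped through day by day.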
We distinguish eight epidemiologically relevant states: (1) just following exposure, during which time vaccination could afford complete protection against disease, (2) a period of several days during which vaccination will not prevent disease, but may still reduce the severity of disease, (3) still prior to the development of symptoms, but too late for vaccination, (4) the beginning of the pre-eruptive period, during which the patient exhibits fever, malaise, and possibly other symptoms, but is not yet infectious, (5) a short period prior to the appearance of the rash, during which the appearance of oropharyngeal lesions will permit variola transmission, (6) the first week of the rash, during which time the patient is most infectious, (7) and (8), succeeding stages of the rash, during which time the patient is less infectious. For each of these states, we assume that conditional on surviving, the waiting time until the next stage is chosen from a uniform distribution as indicated in Appendix 2 [see Additional file: 2 ], except that the incubation period (the time from infection until Stage 4) is derived from estimates of the incubation distribution of smallpox based on importation cases in Europe [ 26 ] (see Appendix 2 [see additional file 2] for details). We chose to sample from a uniform distribution as a simple way to ensure a minimum waiting time in each state; many alternatives to this choice are possible.

Figure 1 Smallpox stages used in the simulation model. Flat and ordinary smallpox rashes are indicated with more dots than modified and "mild" smallpox, suggesting potentially greater infectiousness. Hemorrhagic smallpox is indicated by horizontal line shading. Further details are provided in Table 6.

Network structure

We simulate the transmission of smallpox on a "small-worlds" network (highly clustered, but with short characteristic path lengths) [ 40 ]. 
Specifically, we assume that each person is located in a single household, and that the transmission rates were greatest in the household. We also assume that a fraction of the population are grouped into workplace or social groups, in which transmission may also occur, but with a lower rate per unit time than for household contacts. Finally, we assume that with a still smaller probability, any individual may transmit infection to any other individual in the population (casual contacts). In general, in a network-structured model, the number of secondary cases caused by an index case in a completely susceptible population is not a useful index of epidemic potential [ 41 , 42 ] (for a simple example, see [ 43 ]), since (for instance) an individual could infect everyone in his or her household, and not cause a widespread epidemic unless between-household transmission were sufficiently frequent. Rather than constructing the appropriate generalized basic reproduction number for our model (leading to highly cumbersome expressions), we chose an alternative (ad hoc) index of epidemic potential. For any given scenario of interest, we simulated the introduction of 10 index cases at random into a population of size 10000, and operationally defined "containment" to occur whenever the final size of the epidemic was less than 500 cases within 250 days (we showed, in the discussion of Figure 5 A below, that in nearly all cases, the 250-day window differs very little from a 1000-day window). Because we simulate a disease with a finite duration on a finite and non-renewing population, epidemic extinction always occurs in finite time. Figure 5 5A - The mean containment probability increases as the number of ring vaccinations per day is increased. For this figure, the 1000 "calibrated" parameter sets were chosen, and for each parameter set, 100 realizations were simulated and the fraction of these for which the epidemic was contained to fewer than 500 cases was determined. 
The average of these 1000 containment fractions is plotted on the vertical axis. We assumed a household contact finding probability of 95% and that the diagnosis rates double after community awareness of the epidemic. We considered high levels of workplace/social (w/s) contact finding (0.9), as well as moderate levels (0.8). We also considered two levels of diagnosis of smallpox among investigated (alerted) contacts: high levels (corresponding to a 3 hour mean delay, indicated by "high contact isolation"), and moderate levels (corresponding to a one day delay, and indicated by "less contact isolation"). The figure shows four such conditions, a . high workplace/social contact finding probability and high contact isolation, b . moderate workplace/social contact finding probability and high contact isolation, c . high workplace/social contact finding probability and less contact isolation, and d . moderate workplace/social contact finding probability and less contact isolation. All other parameter values were chosen from the uncertainty analysis (the 1000 "calibrated" parameter sets). In this figure, "contact isolation" refers to the monitored diagnosis rate, i.e. the rate at which previously asymptomatic contacts who subsequently develop disease will be diagnosed (φ, Table 1, Table 8). 5B - The minimum containment probability out of the same 1000 scenarios chosen in Figure 5A. Whereas in Figure 5A, we averaged the simulated containment frequency (out of 100 realizations for each scenario), in this figure we determined which of the 1000 scenarios led to the lowest containment frequency, and we plotted this single worst (out of 1000) containment frequency, at different levels of ring vaccination capacity, for the same four conditions as in Figure 5A: a . high workplace/social contact finding probability (0.9) and high contact isolation (effective 3 hour delay following symptoms), b . 
moderate workplace/social contact finding probability (0.8) and high contact isolation, c . high workplace/social contact finding probability (0.9) and less contact isolation (effective one day delay), and d . moderate workplace/social contact finding probability (0.8) and less contact isolation. All parameters are the same as in Figure 5A (the household contact finding probability is 0.95 for all scenarios, and the diagnosis rates are doubled after the onset of community awareness). In this figure, "contact isolation" refers to the monitored diagnosis rate, i.e. the rate at which previously asymptomatic contacts who subsequently develop disease will be diagnosed (φ, Table 1, Table 8).

Medical and public health intervention

We assume that even in the absence of specific case investigations, the presence of smallpox symptoms will prompt patients to be diagnosed; we assume, however, a higher diagnosis rate for all forms of ordinary smallpox than for the severe flat and hemorrhagic forms, or for the mildest form. We assume that once an individual is diagnosed, their household and workplace contacts are investigated and detected with some probability; we assume that a high fraction (such as 95%) of household contacts is traceable (see below). We assume that the fraction of workplace/social contacts that are traceable is less than the fraction of household contacts that are traceable; we assume that no casual contacts are traceable. High contact-finding rates may be plausible: we examined San Francisco Department of Public Health records of contact investigations for meningococcal disease (like smallpox, a potentially fatal disease for which rapid intervention may prevent mortality and morbidity). Records were available from December 2001 to April 2002; 13 such investigations during this period resulted in identification of 62 household contacts, all of whom were contacted; out of 38 workplace/social contacts identified, 32 were contacted (84%). 
In our model, we assume that identified asymptomatic contacts are vaccinated, quarantined, and monitored for symptom development, while symptomatic patients are isolated and treated as necessary [ 9 ]; thus, the modeled interventions include more than ring vaccination alone. Finally, we include the possibility that all contacts (of both symptomatic and asymptomatic individuals) are traced and the same procedure applied, so that all contacts of contacts would be investigated. We assume that uninfected or asymptomatic individuals who are visited or traced will be diagnosed more rapidly than if they had not been traced; in fact, such individuals would be isolated and would not be able to continue a chain of transmission. We follow previous models [ 9 ] in assuming a limited vaccination capability of K r per day for ring vaccination. We assumed one of two strategies for contact tracing: (1) tracing only of direct contacts of diagnosed cases, and (2) tracing of contacts of contacts of diagnosed cases as well as direct contacts. The contact structure of the network is illustrated in Figure 2 . Observe that individuals b and c are household contacts of individual a , so that if individual a were a smallpox case, an attempt would be made to find and vaccinate individuals b and c as household contacts of a case. If individuals a and b were both cases, then two attempts could be made to find individual c . We have modeled the effect of multiple contact-finding attempts conservatively, in the sense that if the first attempt to find an individual as a household contact (of a case or of a contact) is determined to fail, no further attempts will be made. This maintains the failure rate of contact tracing (looked at from the standpoint of finding individuals) even in large households. Similar considerations apply to workplace/social groups. 
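The capacity-limited tracing just described can be sketched as a round of investigation over household and workplace/social neighbours. This is a simplified illustration, not the paper's continuous-time scheduler; the function names, the daily-round framing, and the group dictionaries are our own assumptions, chosen to mirror the text (a capacity K_r, a single finding attempt per individual, untraceable casual contacts, and an optional second ring):

```python
import random

def _neighbours(person, households, workplaces, p_household, p_workplace):
    """Yield (contact, finding-probability) pairs for one individual."""
    for other in households.get(person, set()) - {person}:
        yield other, p_household
    for other in workplaces.get(person, set()) - {person}:
        yield other, p_workplace

def daily_tracing(diagnosed, households, workplaces, rng,
                  capacity=40, second_ring=False,
                  p_household=0.95, p_workplace=0.8):
    """One round of contact tracing under a fixed capacity (the model's K_r).

    Returns the set of contacts found (and, in the model, vaccinated).
    Casual contacts are never traceable. A failed first attempt to find an
    individual is never retried, matching the conservative rule in the text.
    """
    attempted, found, frontier = set(), set(), []

    def try_find(person):
        for contact, p in _neighbours(person, households, workplaces,
                                      p_household, p_workplace):
            if contact in attempted:
                continue                  # one finding attempt per individual
            attempted.add(contact)
            if rng.random() < p:
                found.add(contact)
                frontier.append(contact)
            if len(found) >= capacity:
                return True               # capacity exhausted for this round
        return False

    for case in diagnosed:
        if try_find(case):
            return found
    if second_ring:                       # trace contacts of contacts
        for person in list(frontier):
            if try_find(person):
                return found
    return found
```

With ample capacity and perfect finding probabilities, a single case's entire household and workplace/social group is returned; lowering `capacity` truncates the round, which is the mechanism behind the "race to trace".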
Figure 2 Network structure shown for households (joined by thick lines) of size 3 and workplace/social groups of size 4 (joined by thin lines); a small portion of the network is shown. Individual a has two household contacts ( b and c ), and three workplace/social contacts ( d , e , and f ). If individual a were a smallpox case, the household contacts would be at highest risk for acquiring smallpox, followed by workplace/social contacts; all individuals in the population are at a low risk of casual transmission from individual a . Case investigation of individual a would identify the direct contacts b-f with probabilities that depend on whether the contact is household or workplace/social; if such individuals are identified, they will be vaccinated. If contacts of contacts are being traced, the investigation will subsequently identify individuals g-p .

Analysis

We analyzed the model in three ways. First, we selected a Latin Hypercube sample [ 44 - 46 ] of parameters chosen uniformly from the parameter ranges given in Appendix 2 [see additional file 2], and simulated the transmission and control of smallpox to determine which parameters were most important for contact tracing and ring vaccination to be effective. Second, we used the same Latin Hypercube sample of input parameters, but assumed that all disease control efforts were inactive. We used these parameters to simulate smallpox transmission, but then iteratively selected transmission parameters so that (1) between 1% and 10% of new infections resulted from casual (random) transmission, and (2) each index case resulted in between two and five secondary cases (thought to be plausible for historic smallpox; [ 8 ] suggest three secondary cases). For each of the resulting smallpox parameter sets, using 100 stochastic simulations per set, we determined the daily ring vaccination/case tracking capacity needed to contain all simulated smallpox epidemics (i.e., keep the total number of cases below 500 within 250 days). 
Third and finally, we chose parameter values to yield a moderately large smallpox epidemic (with each index case causing approximately six secondary cases), and present illustrative scenarios for ring vaccination. These scenarios are intended to complement the simulations which were calibrated to historic smallpox, since the characteristics of smallpox that may be used in a deliberate release are not known. It is important to realize that in our model, the case finding time determines the fraction of contacts that will become infected, and that our model parameters have been chosen to yield quite rapid transmission to close contacts; in reality, much transmission of natural smallpox occurred through "sickbed" routes which would not occur in a modern setting [ 47 ], so that in this sense our model errs considerably on the side of caution and pessimism.

Results

Most important parameters (sensitivity analysis)

To determine which of the input parameters were most important in determining the total number of smallpox cases, we selected a Latin Hypercube sample of size 1000 from the input parameter ranges indicated in Appendix 2 [see additional file 2] and simulated the mean number of cases within 250 days in a population of 10000. We then computed the partial rank correlation coefficient [ 46 ] (PRCC; see Appendix 2 [see additional file 2]) between each input parameter and the number of smallpox cases; when the PRCC is close to zero, the value of the parameter has little relation to the simulation output; when the PRCC is close to +1.0 or -1.0, the value of the parameter is highly important in determining the simulation output. Neglecting the number of index cases (which is directly related to the number of new cases), those parameters whose PRCC exceeds 0.1 are shown in Table 2 . 
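The two ingredients of this uncertainty analysis, Latin hypercube sampling and the partial rank correlation coefficient, can be sketched in pure Python. This is a minimal illustration, not the paper's code; all helper names are ours, and the PRCC is computed by rank-transforming, regressing out the other parameters, and correlating the residuals:

```python
import random

def latin_hypercube(n_samples, n_params, rng):
    """One LHS draw on the unit cube: each parameter's range is split into
    n_samples equal strata, sampled once per stratum, then shuffled."""
    cols = []
    for _ in range(n_params):
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        cols.append(col)
    return [list(row) for row in zip(*cols)]   # n_samples x n_params

def ranks(xs):
    """0-based ranks of a sequence (ties broken by position)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def _residuals(y, X):
    """Residuals of y after least-squares regression on X (with intercept),
    via the normal equations solved by Gaussian elimination."""
    Xa = [[1.0] + row for row in X]
    n, k = len(Xa), len(Xa[0])
    A = [[sum(Xa[i][a] * Xa[i][b] for i in range(n)) for b in range(k)]
         for a in range(k)]
    b = [sum(Xa[i][a] * y[i] for i in range(n)) for a in range(k)]
    for col in range(k):                       # forward elimination, pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):             # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return [y[i] - sum(beta[j] * Xa[i][j] for j in range(k)) for i in range(n)]

def prcc(samples, output, j):
    """Partial rank correlation of parameter j with the output,
    controlling for all other parameters."""
    n_params = len(samples[0])
    rp = [ranks([row[p] for row in samples]) for p in range(n_params)]
    ry = ranks(output)
    controls = [[rp[p][i] for p in range(n_params) if p != j]
                for i in range(len(output))]
    ex = _residuals(rp[j], controls)
    ey = _residuals(ry, controls)
    sx = sum(v * v for v in ex) ** 0.5
    sy = sum(v * v for v in ey) ** 0.5
    return sum(a * b for a, b in zip(ex, ey)) / (sx * sy)
```

Applied to a simulation output that increases monotonically in one parameter and ignores the others, this yields a PRCC near +1 for the influential parameter and near 0 for the rest, which is how Table 2 should be read.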
Most of the parameters identified as important are related to the density of available contacts (mean household size, prior vaccination fraction, and protection due to prior vaccination) or to the transmission rate and infectivity (including the length of the pre-eruptive infectious period, stage 5 in Figure 1 ). Note, however, that the speed of ring vaccination (household tracing delay) and faster diagnosis due to awareness of the outbreak are important parameters. Additionally, the infectivity of mild cases appears as an important parameter as well.

Table 2 Most important parameters. PRCC: partial rank correlation coefficient (see Appendix 2 [see additional file 2] for definition and references).

Parameter | PRCC
Mean Household Size | 0.575
Transmission Rate from Close Contacts | 0.520
Infectivity prior to rash | 0.309
Ring Vaccination Capacity | -0.296
Casual Transmission Probability | 0.244
Pre-eruptive infectious period (lower bound) | 0.224
Number of Casual Contacts per Day | 0.210
Relative Infectiousness of Social/Workplace Contacts | 0.200
Fraction of Individuals in Social/Workplace Groups | 0.183
Faster Diagnosis due to Awareness of Outbreak | -0.175
Household Tracing Delay | 0.104
Pre-eruptive Diagnosis Probability | -0.103
Diagnosis Probability after Rash | -0.103

Illustrative scenarios

To explore factors which contribute to the success of ring vaccination, we chose smallpox scenarios which resulted in severe and fast-moving epidemics in the absence of disease control; these simulated epidemics are considerably more severe than is believed likely under present circumstances.

Effect of contact tracing and ring vaccination

We used these parameters to simulate smallpox epidemics beginning with 10 cases, for a variety of levels of ring vaccination capacity per day (contact tracing capacity per day), as shown in Figure 3 A. In this figure, we assume that the population size is 10000, and that the epidemic began with 10 infected individuals. 
The mean household size is assumed to be 4, the mean size of the workplace/social contact group is 8, and contacts of contacts are traced. We assume that each day, the number of contacts that can be traced and vaccinated as a result of case investigation is 0, 10, 20, 30 and 40 per day; the probability of finding a workplace/social contact is assumed to be 80%. The figure shows the average number of infected individuals each day (based on 100 stochastic simulations) for each of these scenarios. Selected parameter values are indicated in the caption for Figure 3 A and in Table 1 . Figure 3 3A - Expanding severe smallpox epidemic beginning with 10 initial cases, assuming 0, 10, 20, 30, and 40 possible ring vaccinations per day. The household size is 4 and the workplace/social group size is 8; we assume 95% of household contacts are traceable (with a mean delay of 1 day) and 80% of workplace/social contacts are traceable (with a mean delay of 2 days). We also assume that 25% of the population have 50% protection from infection resulting from vaccination prior to the discontinuation of routine vaccination. We assume that infection will be transmitted to close contacts with a mean time of 0.2 days, and that each person while infective causes on average 0.15 casual (untraceable) infections per day. We assume that individuals are 20% as infectious in the day just before the appearance of the rash as they will be during the first week of the rash, and that individuals are 20% as infectious as this (4% as infectious as during the first week of the rash) during the prodromal period. We assume that diagnosis rates will increase by 50% after smallpox becomes known to the community; we assume that each individual contacted during an investigation has an additional diagnosis or removal rate of 0.75 per day following the onset of symptoms (reflecting enhanced surveillance or contact isolation). 
Important parameters are summarized in Table 1; the full set of parameter choices is outlined in Tables 8-11 in Appendix 2 [see additional file 2]. Diagnosis times are discussed in Appendix 2 [see additional file 2]. 3B - An expanding severe smallpox epidemic under inadequate ring vaccination is shown for parameters identical to Figure 3A, except that workplace/social group sizes are 12 (instead of 8), and the probability of tracing workplace/social contacts is 0.6 (instead of 0.8). 3C - A severe smallpox epidemic is controlled by ring vaccination despite the large number of initial cases. The parameters are identical to Figure 3A, except that 1000 index cases inaugurate the attack in these scenarios (and ring vaccination capacity is much greater, as indicated). While not recommended, ring vaccination may ultimately halt epidemics beginning with many index cases if sufficient vaccination capacity were available, contact finding feasible, and follow-up sufficient. 3D - Tracing contacts of contacts (red) is beneficial when sufficient contact tracing/ring vaccination capacity exists (dotted lines). In these scenarios, all parameters are the same as in Figure 3A; the number of contact tracings possible per day is either 20 or 40 per day. Contacts of contacts are traced in two scenarios; in the other two, only direct contacts of cases are traced. For low levels of ring vaccination (20 per day), tracing contacts of contacts is harmful; for high levels (40 per day) of ring vaccination, it is beneficial to trace contacts of contacts. When the contact tracing/ring vaccination capacity is too small to adequately cover contacts of the cases themselves, diversion of resources to contacts of contacts is harmful; however, provided that sufficient capacity exists, tracing contacts of contacts helps outrun the chain of transmission. Each line corresponds to the average of 100 realizations. Table 1 Selected parameter values for Figure 3A and other illustrative scenarios. 
The notations "Other" or "Other factors" in the column "See also" refer to the text section "Other factors". The symbols are defined in Appendix 2 [see additional file 2] and are included for reference.

Description | Values | See also | Symbol
Number of index cases | 10–1000 | Figure 3C | A
Mean household size | 4 | | H
Workplace/social group size | 8 | Figure 3B | W
Ring vaccinations per day | 0–200 | Fig. 3A, 3B, Other | K r
Monitored diagnosis rate | 1–8 day -1 | Figure 5A, 5B | φ
Prob. of finding household contact | 0.95 | Table 4 | υ 1
Prob. of finding workplace/social contact | 0.8 | Fig. 3B, 4AB; Tb. 4 | υ 2
Delay, tracing household contacts | 1–5 days | Figure 6 | δ 1
Delay, tracing workplace/social contacts | 2–10 days | Figure 6 | δ 2
Relative diagnosis rate after 1st diagnosed case | 1.5 | Figure 7 | a 1
Infectivity, stage 4 relative to stage 5 | 0.2 | Figure 8 | k
Infectivity, stage 5 relative to stage 6 | 0.2 | Figure 8 | k '
Infection hazard for close contacts | 5 day -1 | Table 3 | λ
Relative hazard for workplace/social contacts | 1/3 | Table 3 | h 2
Casual transmission rate | 0.15 day -1 | | β
Prior vaccination fraction | 0.25 | Other factors | f
Fraction of mild cases | 0.03 | Other factors |
Vaccine success rate (for very recent vaccination) | 0.667 | Other factors | α 1
Vaccine success rate (vaccination prior to discontinuation of routine vaccination) | 0.5 | Other factors | α 2
Vaccine success rate (full protection) | 0.999 | Other factors | α 3

Because we assumed nonzero diagnosis probabilities during the prodromal period for all individuals in Figure 3 A, we repeated the simulation assuming no diagnosis in the prodromal period unless individuals were under specific surveillance. 
The results were nearly identical: assuming 30 contact tracings (ring vaccinations) per day, we found that 26% of the scenarios in Figure 3 A exhibited decontainment, versus 28% assuming no diagnosis during the prodromal period; assuming 40 contact tracings per day, we found that 1 out of 100 scenarios showed loss of containment, both in Figure 3 A and when we repeated the scenario assuming no diagnosis during the prodromal period. In Figure 3 B, we illustrate control of an epidemic for which all parameters are identical to Figure 3 A, except that the workplace/social group size is 12 (instead of 8, as in Figure 3 A) and the probability of finding workplace/social contacts is 60% (instead of 80%, as in Figure 3 A). In this case, the larger size of the workplace/social groups and the lower contact finding probability make it necessary to have a higher ring vaccination capacity to attain a high probability of containing the epidemic, and on average it takes longer for eradication to finally occur. Finally, in Figure 3 C, we show control of an epidemic in a population of 100,000, beginning with 1000 initial infectives, keeping all other parameters the same as in Figure 3 A. Each curve corresponds to the indicated number of possible ring vaccinations per day. This figure shows that, assuming sufficient capacity, ring vaccination is in principle capable of containing even epidemics beginning with very many infected individuals. However, mass vaccination in such cases is justified because of the far larger number of individuals at risk and the inability to perform such extensive contact tracing. In Figure 3 D, we compare the effect of tracing contacts of contacts (as described in Appendix 2 [see additional file 2]) at different levels of ring vaccination capacity. Thin lines in red correspond to tracing of contacts of contacts; thick lines in black correspond to tracing direct contacts of cases only. 
Each simulation was performed 100 times, with 10 initial infectives, and with 20 or 40 ring vaccinations possible per day (as indicated). The average number infected on each day is plotted in the figure. When ring vaccination capacity is low, tracing contacts of contacts (as modeled) yields a more severe average epidemic, whereas when capacity is large it yields a less severe one: if the contact tracing/ring vaccination capacity is too low to cover the contacts of contacts in addition to the contacts of cases, extending tracing to the second ring is harmful; if there is sufficient capacity, it is beneficial. Finally, in Figure 4, we illustrate the considerable variability that may be seen from simulation to simulation. This figure shows twenty simulations in which contacts of contacts are not traced. Stochastic variability between realizations is considerable even when all parameters are held constant; this variability is expected to limit the ability to make inferences from a single realization of the process.
Figure 4 Stochastic variability is illustrated by plotting the number of infectives over time over multiple replications. In this example, most simulations exhibit rapid containment of smallpox. The mean number of cases (averaging over simulations) is influenced by a small number of simulations exhibiting an uncontained epidemic. The parameters are the same as in Figure 3A, except that contacts of contacts are not traced in these replications.
Because our baseline hazard for infection of individuals may be larger than would be expected for naturally occurring smallpox, we examined the effect of more realistic values of this hazard.
In particular, we chose different levels of ring vaccination capacity (10 and 20) and of the relative hazard for workplace/social contacts, and then chose values of the baseline hazard for infection varying from 0.5 per day (for a mean time to infection of 2 days) to 2 per day (for a mean time to infection of one half day), and introduced 10 index cases into a population of 10000. We then repeated this 100 times, and reported the fraction of scenarios in which the number of infections ultimately exceeded 500 (as before, chosen as a cutoff to indicate the ultimate "escape" of containment of the epidemic). These results, shown in Table 3, support the idea that ring vaccination can easily control introduced smallpox provided there is sufficient capacity and efficacy of tracing.
Table 3 Estimated decontainment probability for different levels of ring vaccination capacity (K_r) and relative hazard for infection due to workplace/social contacts (h2), for different levels of the baseline hazard for infection from household contacts λ (based on 100 simulated replications for each level). For each scenario, 10 index cases were introduced into a population of size 10000. All other parameters were the same as for Figure 3A. As before, we define decontainment to mean that the total number of cases from 10 index cases eventually exceeded 500 by day 250.
h2 = 1/3:
λ | K_r = 10 | K_r = 20
0.5 | 0 | 0
0.75 | 0.02 | 0
1 | 0.26 | 0
1.25 | 0.73 | 0
1.5 | 0.96 | 0.02
2 | 1 | 0.16
h2 = 2/3:
λ | K_r = 10 | K_r = 20
0.5 | 0 | 0
0.75 | 0.46 | 0
1 | 0.82 | 0.02
1.25 | 1 | 0.11
1.5 | 1 | 0.26
2 | 1 | 0.49
h2 = 1:
λ | K_r = 10 | K_r = 20
0.5 | 0.14 | 0
0.75 | 0.86 | 0
1 | 0.99 | 0.07
1.25 | 1 | 0.22
1.5 | 1 | 0.49
2 | 1 | 0.85
Because of considerable uncertainty in the model parameters, we chose a collection of parameter values, and for each, estimated the containment probability (operationally defined as fewer than 500 total cases as a result of 10 index cases, within 250 days).
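An operational containment probability of this kind is straightforward to estimate by Monte Carlo. The sketch below uses a toy branching process in place of the paper's full network model; the geometric offspring distribution and the mean number `r` of secondary cases per case are illustrative assumptions, not the model's parameters, but the containment criterion (fewer than 500 total cases) matches the definition above.

```python
import random

def toy_outbreak(r, n_index=10, cutoff=500, rng=random):
    """One realization of a toy branching process.

    Each case independently produces a geometrically distributed
    number of secondary cases with mean r (an illustrative stand-in
    for the network model).  Returns the total number of cases,
    stopping early once `cutoff` is reached.
    """
    active, total = n_index, n_index
    while active and total < cutoff:
        new = 0
        for _ in range(active):
            while rng.random() < r / (1.0 + r):  # geometric offspring, mean r
                new += 1
        active, total = new, total + new
    return total

def containment_probability(r, n_runs=100, cutoff=500, seed=0):
    """Fraction of runs ending with fewer than `cutoff` total cases."""
    rng = random.Random(seed)
    contained = sum(toy_outbreak(r, rng=rng) < cutoff for _ in range(n_runs))
    return contained / n_runs
```

With a subcritical mean (r < 1) nearly every run is contained, while a strongly supercritical mean (r > 1) makes escape past the 500-case cutoff the typical outcome, mirroring the qualitative behavior summarized in Table 3.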
We estimated this containment probability by simulating the smallpox epidemic 100 times for the same parameter values, and computing the frequency, out of these 100 realizations, with which fewer than 500 total cases resulted within 250 days. (Using a 1000-day window produces slightly smaller containment estimates; for 3 out of 1000 parameter set choices this difference was greater than 0.06, the maximum difference seen was 0.23, and the mean absolute difference was 0.0029; in only one case out of 1000 did we see containment in all 100 realizations for the 250-day window but not for the 1000-day window.) One thousand scenarios chosen from a Latin hypercube sample were analyzed, and as indicated before, we chose the hazard for close contact transmission and the hazard for random transmission to guarantee that between 2 and 5 secondary cases per case occur, and that no more than 5% of cases are attributable to random transmission (we refer to this set as the "calibrated" scenarios below). Having chosen this collection of 1000 parameter sets, we considered two levels of each of two different control parameters, applied to each (so that each of the 1000 parameter sets was simulated under four different control conditions). The first of the two control parameters was the probability of workplace/social group contact finding; we chose values of 0.8 and 0.9 for this parameter (the household contact finding probability was 0.95 in all cases).
The second control parameter was the rate of diagnosis (and effective removal from the community) of cases developing among previously identified and traced contacts who were initially asymptomatic (we refer to this as the monitored diagnosis rate); we assumed either a low level corresponding to a mean diagnosis time of one day from the onset of symptoms, or a high level corresponding to a mean time of 3 hours from the onset of symptoms (high levels of the monitored diagnosis rate correspond effectively to isolation of contacts). Finally, we assumed a doubling of the diagnosis rate after the beginning of widespread community awareness of smallpox. We then computed the containment fraction at different levels of ring vaccination capacity (contact tracing capacity per day). Thus, for each of 1000 scenarios (parameter set choices), we assigned the workplace/social group contact tracing success probability (υ2), the monitored diagnosis rate φ (Appendix 2 [see additional file 2]), and the contact tracing/ring vaccination capacity per day (K_r). We then performed 100 realizations beginning with 10 index cases, and computed the containment fraction (the fraction showing fewer than 500 cases in 250 days). Thus, for each of the two choices of υ2 and of φ, and for each value of K_r we examined, we obtained 1000 values of the containment fraction. We use the resulting distributions in Figure 5A (averaging over these 1000 containment fractions) and Figure 5B (displaying the minimum of the 1000 containment fractions). In Figure 5A, we plot the mean containment fraction (averaging over all 1000 scenarios) as ring vaccination capacity varies, for the two levels of workplace/social group contact finding probability (0.8 and 0.9) and the two levels of monitored diagnosis rate among initially asymptomatic contacts (1 day⁻¹ and 8 day⁻¹).
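The experimental design just described (a Latin hypercube sample over uncertain parameters, then a per-scenario containment fraction summarized by its mean and its minimum) can be sketched as follows. The parameter names, ranges, and the dummy single-realization function are placeholders for illustration, not the paper's actual model or ranges.

```python
import random

def latin_hypercube(n, ranges, seed=0):
    """Latin hypercube sample: each parameter's range is cut into n
    equal strata, one uniform draw is taken per stratum, and each
    parameter's column is shuffled independently."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in ranges.values():
        col = [lo + (hi - lo) * (i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return [dict(zip(ranges, row)) for row in zip(*cols)]

def run_once(scenario, rng):
    """Placeholder for one stochastic realization; returns total cases.
    (A dummy draw that grows with 'transmissibility' -- illustration only.)"""
    mean = 20.0 + 2000.0 * scenario["transmissibility"] * rng.random()
    return rng.expovariate(1.0 / mean)

def containment_fraction(scenario, n_runs=100, cutoff=500):
    rng = random.Random(0)
    return sum(run_once(scenario, rng) < cutoff for _ in range(n_runs)) / n_runs

# Hypothetical parameter ranges, for illustration only
ranges = {"transmissibility": (0.2, 1.0), "prior_vaccination": (0.0, 0.5)}
scenarios = latin_hypercube(1000, ranges, seed=42)
fractions = [containment_fraction(s) for s in scenarios]

mean_fraction = sum(fractions) / len(fractions)   # summarized as in Figure 5A
worst_fraction = min(fractions)                   # summarized as in Figure 5B
```

The mean-versus-minimum distinction matters because, as the next paragraph notes, a high average containment fraction can conceal individual scenarios that remain nearly uncontrollable.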
For low levels of ring vaccination (traceable contacts per day), the epidemic is almost never contained, but for ring vaccination levels near 50–60 per day (5–6 per index case per day), the average containment fraction becomes close to 1. However, this average conceals the fact that for some scenarios (parameter sets chosen from the calibrated uncertainty analysis), control remains difficult or impossible even at high levels of ring vaccination. Therefore, in Figure 5B, we plot the single lowest containment fraction seen out of the 1000 computed; focusing on the single worst scenarios reveals a different picture, and shows that isolation of asymptomatic contacts and very high probabilities of finding workplace or social contacts would be needed to control smallpox under these most pessimistic parameter choices.
Effect of contact tracing speed
Rapid contact tracing may play an important role in suppressing the epidemic, since the longer it takes to trace a contact, the less likely the vaccine is to be efficacious, and the more opportunities the infected individual has to transmit disease before being located, isolated, and vaccinated if appropriate. We illustrate this possibility in Figure 6 by examining the same scenario shown earlier in Figure 3A (households of size 4, workplace/social groups of size 8, 95% of household contacts traceable, 80% of workplace/social groups traceable, and a mean time to infection of 0.2 days for a household contact of an infective). We assume in one case that contacts are traced quickly (1 day on average for a household contact, 2 days for a workplace/social contact), and in the other that contacts are found slowly (5 days on average for a household contact, 10 days for workplace/social contacts); we assumed 30 ring vaccinations (traceable contacts) possible per day.
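The mechanism can be quantified directly from the parameter table: with a constant infection hazard λ for a household contact of an infective (5 day⁻¹ above), scaled by h2 = 1/3 for workplace/social contacts, the probability that a contact has already been infected by the time it is traced after a delay δ is 1 − e^(−λδ). The quick check below uses the fast and slow delays of this scenario; it is a simplification in that it assumes the contact is exposed for the entire delay, whereas in the model the source case may be isolated earlier.

```python
import math

def prob_infected_before_traced(hazard, delay_days):
    """P(contact already infected when traced), for a constant
    infection hazard (per day) acting over the whole tracing delay."""
    return 1.0 - math.exp(-hazard * delay_days)

lam, h2 = 5.0, 1.0 / 3.0   # household hazard (day^-1); workplace/social scaling

fast_household = prob_infected_before_traced(lam, 1)        # ~0.993
slow_household = prob_infected_before_traced(lam, 5)        # ~1.000
fast_workplace = prob_infected_before_traced(lam * h2, 2)   # ~0.964
slow_workplace = prob_infected_before_traced(lam * h2, 10)  # ~1.000
```

At these hazards even fast tracing finds most household contacts already infected, so the benefit of speed comes mainly from vaccinating contacts earlier after exposure (when the vaccine is more likely to succeed) and from cutting off onward transmission sooner.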
In this scenario, the epidemic is more severe, and containment (as we have been defining it) less likely, when contact tracing is slow: in the fast scenario, 238 infections occurred on average and the estimated containment probability was 99%; in the slow scenario, 3587 infections occurred on average and the estimated containment probability was only 1%.
Figure 6 Faster contact tracing may improve the efficacy of ring vaccination. We assume the same baseline parameters as in Figure 3A (e.g. households of size 4, workplace/social groups of size 8, 95% of household contacts traceable, 80% of workplace/social contacts traceable), and 30 ring vaccinations available per day (with contacts of contacts not traced). The fast scenario corresponds to an average one-day delay for household contacts and two days for workplace/social contacts (as in Figure 3A); the slow scenario corresponds to an average five-day delay for household contacts and a ten-day delay for workplace/social contacts. This figure shows the average of one hundred realizations starting with ten index cases.
While Figure 6 illustrates the possibility that rapid contact tracing may be of decisive importance in some scenarios (parameter set choices), this is not always the case. For some parameter sets, the probability of tracing contacts (household or workplace/social) may be too low, or the transmission rate too high, for more rapid contact tracing to make any difference. Conversely, for other parameter sets, the smallpox transmission rate may be so low that smallpox is easily contained even with slow contact tracing. Rapid contact tracing is never harmful, but overall, how typical are the results of Figure 6, in which rapid contact tracing was important in ensuring the efficacy of ring vaccination? To address this question, we simulated the growth of smallpox for the 1000 "calibrated" scenarios used in Figure 5A and 5B.
As before, we assumed ten initial cases and (as in Figure 6) that 30 ring vaccinations were possible per day; we then simulated 100 epidemics assuming one day to find a household contact (and 2 days to find a workplace/social contact), and another 100 epidemics assuming five days to find a household contact and 10 days to find a workplace/social contact (as in Figure 6). For each of these 1000 scenarios, we calculated the fraction of simulations for which the total number of cases was 500 or less within 250 days, i.e. the containment fraction. For nearly all scenarios (parameter set choices), the containment fraction was smaller (sometimes much smaller) when contact finding was slow (since faster contact finding, all else being equal, improves smallpox control, as illustrated in Figure 6). However, for 64.5% of the scenarios examined, the difference was less than 2.5% in absolute terms (smallpox was either contained or not contained depending on other factors, and rapid contact tracing did not make the difference). On the other hand, for 18.7% of the scenarios examined, the absolute difference in the containment probability was 20% or more; thus, a substantial difference in containment probability is occasionally attributable to the difference between fast and slow contact tracing.
Effect of more rapid diagnosis
Public awareness of smallpox, leading to more rapid isolation and identification of cases, may play an important role in eliminating the epidemic, as illustrated by the scenarios in Figure 7. We assumed 20 ring vaccinations possible per day, a capacity too small to contain the epidemic in the absence of increased surveillance or diagnosis; the black line in the figure shows the steeply rising average number of cases over the first 100 days.
If, however, surveillance or public awareness of the symptoms of smallpox increases the diagnosis rate by 50% (multiplies the baseline diagnosis rates by 1.5), containment becomes possible (blue line); with a doubling of the diagnosis rate (red line), the peak number of cases is lower still. In these scenarios, increased diagnosis rates markedly improve the ability of ring vaccination to control the epidemic; this suggests that any ring vaccination effort should be accompanied by increased public awareness and surveillance.
Figure 7 More rapid diagnosis due to public awareness or increased surveillance may lead to far more effective epidemic control. We assume the same baseline parameters as in Figure 3A, and averaged 100 realizations of the epidemic beginning with 10 index cases, assuming a ring vaccination capacity of 20 per day (and contacts of contacts not traced). For the black line, the diagnosis rate of cases does not change after the first case is identified (the multiplier is 1.0); for the blue line, the diagnosis rate increases by 50% (multiplier 1.5) after the first case is identified (as in Figure 3A), resulting in substantially fewer cases; and for the red line, the diagnosis rate is doubled (multiplier 2.0) after the first case is identified, resulting in still fewer cases.
In many cases, however, more rapid diagnosis was not required for ring vaccination to be effective. As before, we simulated smallpox epidemics for each of 1000 calibrated scenarios, performing 100 realizations each beginning with 10 index cases, and computed the fraction of scenarios for which the epidemic was always contained (as defined earlier), assuming no change in diagnosis rates. We assumed 80 ring vaccinations per day and contact finding probabilities of 0.95 for households and 0.8 for workplace/social contacts (as in Figure 3A).
Under these assumptions, for 83.4% of the scenarios, the epidemic was contained within 500 total cases in each of the 100 realizations, even with no change in diagnosis rates. Uncertainty analysis (using the 1000 calibrated scenarios, and based on the fraction of 100 replications showing decontainment) revealed that the most important parameters predicting the failure of ring vaccination without more rapid diagnosis were the same as those found in the earlier uncertainty analysis: a higher fraction vaccinated before the epidemic, smaller households or workplace/social groups, less transmissibility, lower infectivity prior to the rash, more rapid diagnosis, and a higher rate of diagnosis for alerted individuals all contribute to a greater containment probability even without an overall increase in the diagnosis rate.
Effect of continued surveillance of contacts
We have been assuming that whenever an individual is contacted during an investigation, the individual will be diagnosed more quickly should they subsequently develop symptoms. When transmission is assumed to be very rapid (smallpox is assumed to be highly contagious), most individuals may already be infected when identified through contact tracing from an infective. Using the scenario we examined in Figure 3A, we see that continued surveillance of contacts is an essential component of effective ring vaccination against rapidly spreading smallpox: if smallpox in a contact is not diagnosed any more quickly than in a non-contact, containment by ring vaccination requires contact finding probabilities over 98% for both household and workplace/social contacts, even if unlimited numbers of ring vaccinators are available; containment cannot be guaranteed by adding ring vaccination capacity if the contact finding rates are too low and/or the follow-up of contacts is insufficient.
Smallpox that is transmitted less rapidly to contacts would, however, be containable with a lower contact finding probability (results not shown). Finally, we used the "calibrated" scenarios (parameter set choices) to explore the levels of contact finding probability needed to contain the epidemic (as before, defined to mean 500 or fewer cases ultimately resulting from ten initial cases) (Table 4). In these scenarios, we assumed that all traceable contacts were followed up very quickly (1/a = 1 hour, so that cases arising in previously contacted persons almost never transmit the infection further). We chose different levels of household and workplace/social contact finding probabilities and different levels of ring vaccination capacity, and performed 100 replications of each of the 1000 different scenarios. In Table 4 we report the fraction of scenarios for which all 100 replications exhibited containment. Scenarios in which smallpox is highly contagious require a high contact finding probability to ensure containment of the epidemic.
Table 4 Containment of severe smallpox at different levels of contact finding. The first three columns give the assumed probability of finding a household contact, the probability of finding a workplace/social (W/S) contact, and the number of contact tracings/ring vaccinations possible per day; the last two columns express (as percentages) the resulting probability of containment when only contacts of cases are traced ("Contacts") and when contacts of contacts of cases are traced in addition ("Contacts of contacts"). All other parameters are the same as in Figure 3A.
Household contacts | W/S contacts | Ring vacc. per day | Contacts | Contacts of contacts
0.95 | 0.85 | 50 | 99.1% | 97.9%
0.95 | 0.85 | 100 | 99.3% | 100.0%
0.95 | 0.85 | 200 | 99.1% | 100.0%
0.9 | 0.8 | 50 | 95.7% | 95.8%
0.9 | 0.8 | 100 | 95.6% | 99.9%
0.9 | 0.8 | 200 | 95.4% | 100.0%
0.85 | 0.75 | 50 | 86.0% | 93.3%
0.85 | 0.75 | 100 | 86.1% | 99.1%
0.85 | 0.75 | 200 | 86.3% | 99.2%
0.75 | 0.6 | 50 | 52.1% | 72.0%
0.75 | 0.6 | 100 | 51.5% | 78.5%
0.75 | 0.6 | 200 | 53.0% | 78.6%
Transmission prior to rash
Transmission prior to the rash makes epidemic control more difficult. In Figure 8, we show an expanding smallpox epidemic assuming differing levels of infectivity prior to the rash (adding increased infectivity prior to the rash while keeping the infectivity after the rash constant). We assume all parameters are the same as in Figure 3A (and that the ring vaccination capacity is 40 per day). Infectivity prior to the rash is modeled by the relative infectivity during the short (1-day) period of oropharyngeal lesions just prior to the rash (compared to the infectivity during the first week of the rash), and the relative infectivity during the prodromal period (relative to the period just prior to the rash). We consider three scenarios: (a) the relative infectivity during the entire period is one (i.e., infectivity during the prodromal period and just prior to the rash is the same as during the first week of the rash); (b) the relative infectivity just prior to the rash is the same as during the first week of the rash, but during the prodromal period is 4% (as in Figure 3A) of this value; and (c) the relative infectivity just prior to the rash is 20% of the infectivity during the first week of the rash, and during the prodromal period is 20% of this value.
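These three scenarios can be expressed compactly in terms of the two relative-infectivity parameters from the parameter table (k' for the pre-rash period relative to the first week of the rash, and k for the prodromal period relative to the pre-rash period). The helper below is a sketch of that bookkeeping only, not of the transmission model itself; the baseline rash-period infectivity is left symbolic (set to 1.0).

```python
def stage_infectivities(k, k_prime, beta_rash=1.0):
    """Absolute infectivity by stage, from the relative multipliers:
    pre-rash = k' x (first week of rash); prodromal = k x (pre-rash)."""
    pre_rash = k_prime * beta_rash
    prodromal = k * pre_rash
    return {"prodromal": prodromal, "pre_rash": pre_rash, "rash": beta_rash}

scenarios = {
    "a": stage_infectivities(k=1.0, k_prime=1.0),   # full infectivity throughout
    "b": stage_infectivities(k=0.04, k_prime=1.0),  # pre-rash = rash; prodrome 4%
    "c": stage_infectivities(k=0.2, k_prime=0.2),   # as in Figure 3A
}
```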
The figure shows that increased infectivity just prior to the rash leads to a larger epidemic (comparing b and c): in case b (high infectivity just prior to the onset of the rash), loss of containment occurred 36% of the time, but in none of the 100 realizations shown for case c (low infectivity prior to the rash). Scenario a (full infectivity during the entire prodromal period) showed loss of control in every realization. Increasing the ring vaccination capacity from 40 per day to 80 per day (results not shown) led to containment in all of the realizations with high infectivity just prior to the rash and low infectivity during the prodromal period (case b), but made no difference if the infectivity was as high during the prodromal period as during the rash (case a). While, intuitively, additional infectiousness must increase the number of secondary cases and make control more difficult, these results illustrate that even a small amount of increased infectiousness prior to the rash (when diagnosis is more difficult) may substantially increase the difficulty of smallpox control.
Figure 8 Transmission prior to the rash makes epidemic control more difficult. The figure shows an expanding smallpox epidemic assuming differing levels of infectivity prior to the rash. We assume all parameters are the same as in Figure 3A (and that the ring vaccination capacity is 40 per day). Infectivity prior to the rash is modeled by the relative infectivity during the short (1-day) period of oropharyngeal lesions just prior to the rash (compared to the infectivity during the first week of the rash), and the relative infectivity during the prodromal period (relative to the period just prior to the rash).
For scenario a, the relative infectivity during the prodromal period and just prior to the rash is the same as during the first week of the rash; for scenario b, the relative infectivity just prior to the rash is the same as during the first week of the rash, but during the prodromal period is 4% (as in Figure 3A) of this value; and for scenario c, the relative infectivity just prior to the rash is 20% of the infectivity during the first week of the rash, and during the prodromal period is 20% of this value (these two parameters are the same as in Figure 3A).
Other factors
Finally, in Figure 9, we present scenarios in which each of four other parameters is modified in turn from the baseline values of Figure 3A, assuming 40 contact tracings (ring vaccinations) are possible per day (line a in the figure). Specifically, we assume that severe smallpox (hemorrhagic and flat) on average takes four times longer to diagnose and isolate than ordinary smallpox (case b); that no one in the population has prior vaccination protection from before the discontinuation of routine vaccination (case c); that 10% more smallpox is too mild to diagnose, but still contagious, compared to baseline (case d); and finally that the vaccine is completely ineffective (case e). Each of these scenarios is discussed further below.
Figure 9 Additional scenarios, assuming 40 ring vaccinations or contact tracings possible per day, and that contacts of contacts are traced; all parameters are identical to those in Figure 3A unless otherwise indicated. The figure shows the average of 100 replications of five scenarios (case a repeats the result from Figure 3A for reference); the numbers in parentheses in the legend are the corresponding fractions of the 100 replications for which decontainment occurred.
For case b, we assumed that flat and hemorrhagic smallpox cases took four times as long on average to diagnose as ordinary cases; for case c, we assumed that no one in the population had prior protection (as opposed to 25% in Figure 3A); for case d, we assumed that an additional 10% of individuals (13% instead of 3%) would develop mild smallpox (with 75% developing ordinary smallpox instead of 85%, as in Figure 3A); and for case e, we assumed that the vaccine is completely ineffective and provides no protection against infection. Scenario b was motivated by the possibility that individuals with severe forms of smallpox may be more difficult to diagnose, and thus remain infectious in the community longer (despite the much greater degree of illness of such patients), or that such patients may be more infectious. In this particular case, quadrupling the mean diagnosis time led to one additional replication out of 100 in which containment was not achieved (2/100, compared to the baseline of 1/100). However, we assumed that community awareness of smallpox leads to the same relative increase in diagnosis rates for severe cases as for ordinary cases, and that the most severe forms are relatively rare. In addition to the scenario shown in the figure, we also repeated each of the 1000 "calibrated" scenarios 100 times, assuming 40 contact tracings per day were possible, both with and without the assumption that the diagnosis time for severe cases was four times that of ordinary cases; the difference in the decontainment fraction was not large (results not shown). Scenario c illustrates that vaccination prior to the discontinuation of routine vaccination does play a role in smallpox control by ring vaccination; there were more decontainment scenarios (5/100) when no prior protection exists in the population.
The results suggest that prior vaccination aids in the control of smallpox, but that it is not strictly necessary for control (in this scenario, 95% of the replications exhibited containment). In Figure 3A, we assumed 25% of individuals had protection due to vaccination prior to the discontinuation of routine vaccination; in scenario c of Figure 9, we assumed this fraction was zero. Scenario d demonstrates that if 10% more smallpox infections (in absolute terms, i.e. 13% compared to 3% in Figure 3A) lead to mild cases among individuals with no prior protection, the epidemic is more difficult to contain (13/100 replications showed loss of containment). Finally, scenario e demonstrates that containment is still possible even when the vaccine is completely ineffective in everyone, because of case isolation and isolation of contacts (and of contacts of contacts). Here, with 40 contact tracings possible per day, 55% of the replications nevertheless exhibited containment even with a vaccine offering no protection whatever. With 90 contact tracings possible per day, all replications exhibited containment even assuming no vaccine protection.
Effect of mass vaccination
Although less efficient than ring vaccination, in the sense that more vaccinations must be delivered to eliminate infection, comprehensive mass vaccination following the introduction of smallpox is sufficient to eliminate the infection. In Figure 10, we show the probability of achieving containment (defined to be fewer than 500 total cases resulting from 10 index cases) for different levels of ring vaccination (0, 5, 10, and 20 vaccinations per day) and mass vaccination (0%, 0.5%, 1%, and 2% of the population per day; compare the 10%–20% per day that many jurisdictions in the United States plan to vaccinate). Specifically, for each level of ring vaccination and mass vaccination, we used the same 1000 parameter sets used in Figure 5, and performed 100 simulated epidemics for each parameter set.
On the vertical axis, we plot the fraction of the 1000 scenarios for which each of the 100 simulated epidemics was contained. We also computed the fraction of scenarios for which none of the 100 simulated epidemics was contained; this is indicated by the colored segment in the small pie chart at each symbol. When the mass vaccination rate was 2% per day, the mean number of deaths (averaging over all scenarios and all simulations within each scenario) was 47.7, 33.7, 26.4, and 20.1 for ring vaccination levels of 0, 5, 10, and 20 per day, respectively, out of a population of 10000. Moreover, when we increased the mass vaccination level to 3%, an average of 28.9 deaths occurred when no ring vaccination was used, but this fell to 22.3 deaths when only 5 ring vaccinations per day were available (again assuming a population of 10000 and 10 index cases). With a mass vaccination level of 5% per day, an average of 18.8 deaths occurred without ring vaccination, and 15.8 deaths when only 5 ring vaccinations per day were possible. (At a mass vaccination rate of 3% per day, containment as defined above was achieved in all 100 replications for 95% of the scenarios even without ring vaccination; at a mass vaccination rate of 5% per day, containment was achieved in all replications for all scenarios.) These results show that over a wide range of simulated epidemics, even seemingly small levels of ring vaccination (coupled with follow-up) may have a substantial effect in preventing epidemic spread and reducing deaths from smallpox, even during a mass vaccination campaign. Many jurisdictions in the United States are planning mass vaccination campaigns that could reach 10%–20% of the population per day, far greater than the mass vaccination levels examined here; notably, mass vaccination may be effective in preventing a widespread epidemic even at much lower levels than are being planned for.
Where feasible, such rapid mass vaccination rapidly eliminates smallpox transmission in our model; vaccination of contacts is still beneficial, since we assume that earlier vaccination yields a greater probability of preventing or ameliorating infection (results not shown).
Figure 10 Mass and ring vaccination together. Low-level mass vaccination programs are improved substantially by the addition of ring vaccination. The shaded pie segments represent the fraction of 1000 scenarios for which containment (as defined in the text) was never achieved; the vertical position of each pie chart represents the fraction of the 1000 "calibrated" scenarios for which containment was always achieved. As the fraction of the population mass vaccinated or the ring vaccination capacity increases, the probability of containment increases.
Discussion
We constructed a simple network model of smallpox transmission, and addressed the question of what circumstances contribute to the success of a ring vaccination campaign designed to control smallpox. Our analysis focused on the use of contact tracing/ring vaccination to prevent a widespread epidemic following a deliberate release. We conducted a sensitivity analysis based on particular, but reasonable, ranges for the unknown parameters. Our results are consistent with previous smallpox vaccination models in identifying prior vaccination and ring vaccination capacity as significant factors in determining the spread of smallpox. Unsurprisingly, we also find that household size and ring vaccination speed are particularly important parameters; these results are intuitively plausible. The contact finding probability did not appear important in this analysis only because a narrow range of values was chosen. We illustrated smallpox control by presenting scenarios based on control of moderately severe smallpox epidemics.
We find that swift, aggressive contact tracing and ring vaccination is usually sufficient to bring the infection under control. Provided that there is sufficient capacity, vaccination of contacts of contacts is beneficial, resulting in fewer infected individuals and more rapid elimination of infection; investigating contacts of contacts allows the chain of transmission to be outrun to some extent. When ring vaccination capacity is small, diversion of crucial resources away from contacts is harmful; contacts of contacts should be traced and vaccinated only if no resources are thereby diverted away from contacts of cases. Increased surveillance (or isolation) of contacts, together with improved rates of diagnosis due to community awareness, plays an important role in smallpox control; we note that in some cases, lowered diagnosis rates among severe cases contributed to a small extent to loss of epidemic control, and suggest that any public awareness campaign include information to help the public recognize the full spectrum of the clinical features of smallpox. One limitation of our analysis is that we chose not to explicitly incorporate the specific epidemiology of health care workers (or mortuary workers), who are likely to be exposed to infected individuals during any smallpox epidemic (e.g. [17, 22]), and who may then infect further members of the community [22] (as was also seen in the recent outbreak of SARS, e.g. [48]). Transmission to health care workers may be considered to amplify the initial attack, or simply to be accounted among the exposures we considered (and thus approximated by the behavior of our model), since health care workers and their household contacts are in all likelihood traceable contacts, and ring vaccination/contact tracing would identify and halt these chains of transmission as in our model.
The disruption of smallpox control and patient care that may occur is not accounted for in our analysis, however, causing our model in this sense to err on the side of optimism. The appropriateness of pre-event vaccination of health care workers or other first responders has been addressed by other analyses [ 12 , 49 ], and is beyond the scope of our model. While we analyzed the effect of contact tracing, case and contact isolation, and ring vaccination (together with mass vaccination), in a real smallpox epidemic control efforts are unlikely to be limited strictly to vaccinating contacts (and health care workers, as likely contacts) and isolating cases. Indeed, making vaccine available to individuals who believe they live near cases or to others on a voluntary basis occurred in smallpox control efforts in the past [ 22 ]. Vaccination of such individuals can only harm the disease control effort if it hinders or delays the diagnosis of cases or the investigation and vaccination of contacts; our results show that even relatively low levels of vaccination of the general population may have a beneficial effect in preventing the epidemic from escaping control. More serious is the possibility that individuals who should be vaccinated or isolated would be missed; this could occur either because individuals or institutions did not cooperate with the disease control effort, or because the individuals simply could not be found. Our analysis suggests that ring vaccination need not be perfect to successfully contain the epidemic, and yet, under conditions where there is a high rate of infection among contacts, or a relatively high rate of casual transmission, high rates of contact finding (in excess of 90%), together with increased surveillance and contact isolation, are needed to contain the epidemic.
Finally, the vaccination of individuals at low risk of contracting smallpox will cause harm due to adverse events of the vaccine; in our model, the assumed death rate due to vaccination was small compared to the probability of death from smallpox, and played essentially no role in the analysis. In practice, individuals suspected to be at high-risk for vaccine complications, but at relatively low risk for contracting smallpox, might simply be isolated or closely monitored even during an outbreak; while the presence of individuals in the population at higher risk for vaccine complications would increase the death rate during an outbreak, such individuals are unlikely to impair the containment of the epidemic (the primary focus of this analysis). Our results support ring vaccination against epidemics of smallpox (even assuming high rates of transmission to close contacts), but do note that stochastically, for severe (rapidly transmissible) smallpox, scenarios of loss of control are seen, with resulting widespread epidemics. In scenarios in which the transmission potential of smallpox is smaller, such loss-of-control scenarios occur less frequently (results not shown). Mass vaccination campaigns, when conducted quickly and with very high coverage, do not result in loss of control in our model. Nevertheless, fewer deaths due to smallpox result when ring vaccination is conducted along with mass vaccination. Conclusion Simulated smallpox epidemics with ring vaccination suggest that aggressive, fast ring vaccination can control epidemics of smallpox. To do so, however, smallpox must be identified quickly and contacts vaccinated promptly. We also identify public awareness of smallpox – leading to prompt identification of cases – as a major factor in smallpox control; in some simulations, it may play a role as significant as ring vaccination itself [ 15 ]. 
However, we also found that uncertainty in (1) transmission from mild cases, (2) the household size, and (3) casual transmission contributed to the overall uncertainty in the epidemic size. Other parameters to which the number of infections was highly sensitive were the prior vaccination fraction, parameters related to infectiousness, and parameters related to transmission prior to the rash. Because our model combines network structure with response logistics, our results support and complement the results of other investigators. Our results support the notion that prior vaccine protection may play an important role in slowing the epidemic [ 11 ], despite the possibility that some vaccinated individuals may develop mild cases which are harder to identify, but which nevertheless transmit disease. Likewise, our results provide support for the view that ring vaccination should play a central part in smallpox control. If initiated, ring vaccination should be conducted without delays in vaccination, should include contacts of contacts (whenever there is sufficient capacity to cover all contacts of cases), and should be accompanied by a vigorous campaign of public awareness which can facilitate more rapid identification and isolation of cases. We assumed that ring vaccination could be fast (little delay between identification of a case and vaccination of the contacts), effective (nearly all household contacts can be found, and most workplace and social contacts), and available (there is sufficient capacity).
To be effective, ring vaccination planning must yield a system capable of meeting these benchmarks; we should not only be able to assess the number of contact vaccinations that will be possible per day, but should have a plan in place to (1) identify contacts by working with individuals, employers, schools, community representatives, and authorities or businesses who may have access to information facilitating contact tracing, and (2) rapidly investigate and vaccinate such individuals, perhaps using field teams managed by central dispatch. It is important to realize that for high-risk, transient, or unstably housed populations where reliable contact tracing is impossible, the conclusions of the model we present cannot be applied. It is important to note that while our model suggests that ring vaccination together with contact tracing and isolation is likely to be successful, we found that for some scenarios (where smallpox was more transmissible, or was relatively more transmissible before the rash), epidemic containment required not only ring vaccination, but increased public awareness, the isolation of contacts, and tracing of contacts of contacts. For scenarios in which the smallpox was less transmissible, epidemic containment was possible at lower contact finding probabilities. Thus, while our simulations suggest that contact tracing/ring vaccination need not be perfect to succeed, because of uncertainties in our knowledge of the behavior of bioterrorist smallpox, it is impossible to know in advance how good it will have to be. It follows that high contact finding rates, mass public awareness leading to early identification of cases, isolation of contacts, and investigation of contacts of contacts should all be pursued with maximum effectiveness to reduce the probability of a widespread epidemic.
While the possibility of smallpox uncontrollable by ring vaccination has made mass vaccination preparations wise, and while mass vaccination may be unavoidable in the event of a deliberate release of smallpox, we believe that ring vaccination is essential in any case. This is not only because individuals recently exposed to smallpox may be protected if they are vaccinated promptly, but because each contact identified potentially lies in the immediate future of the transmission chain. From the standpoint of epidemic control, it is far more valuable to vaccinate individuals next in the transmission chain than to vaccinate other persons. Our results support the idea that ring vaccination/case isolation may in many, if not most cases, eliminate smallpox even without mass vaccination, but also support planning for mass vaccination (so that the vastly more costly and difficult policy of mass vaccination will be available in the event of an explosive epidemic). When faced with the unknown, multiple redundant preparations are appropriate; case investigation/isolation may control smallpox even if the vaccine does not work at all, but mass vaccination is useful in the event of an explosive epidemic for which case tracking becomes impossible. Competing interests None declared. Authors' contributions TP, KH, SF, TA, RR, and DP performed the literature review (and parameter evaluation), TP developed and implemented the model and simulation, TP performed the analysis of the simulation model and drafted the manuscript, DP performed analysis of contact tracing data, and KH conceived of the study. All authors contributed to, read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: Supplementary Material Additional file 1 For consistency, all references are included in the bibliography of the main text. 
Additional file 2 For consistency, all references are included in the bibliography of the main text.
549535 | Development of a cDNA array for chicken gene expression analysis | Background The application of microarray technology to functional genomic analysis in the chicken has been limited by the lack of arrays containing large numbers of genes. Results We have produced cDNA arrays using chicken EST collections generated by BBSRC, University of Delaware and the Fred Hutchinson Cancer Research Center. From a total of 363,838 chicken ESTs representing 24 different adult or embryonic tissues, a set of 11,447 non-redundant ESTs were selected and added to an existing collection of clones (4,162) from immune tissues and a chicken bursal cell line (DT40). Quality control analysis indicates there are 13,007 useable features on the array, including 160 control spots. The array provides broad coverage of mRNAs expressed in many tissues; in addition, clones with expression unique to various tissues can be detected. Conclusions A chicken multi-tissue cDNA microarray with 13,007 features is now available to academic researchers from genomics@fhcrc.org. Sequence information for all features on the array is in GenBank, and clones can be readily obtained. Targeted users include researchers in comparative and developmental biology, immunology, vaccine and agricultural technology. These arrays will be an important resource for the entire research community using the chicken as a model. | Background The chicken is an important experimental model for evolutionary and developmental biologists, immunologists, cell biologists, geneticists, as well as being an important agricultural commodity. The recent release of a draft of the chicken genome sequence, as well as the development of a large (531,351) collection of expressed sequence tags (ESTs) has dramatically changed the landscape for biologists wishing to use genomic tools to study the chicken. DNA microarrays are well accepted as an essential part of functional genomics. 
Several small chicken cDNA arrays have been fabricated and used in studies focused on the chicken immune system [ 1 - 4 ]. To enhance the utilization of existing resources and further develop the chicken as a model organism, a consortium was formed to produce microarrays using clones from the Biotechnology and Biological Sciences Research Council (BBSRC), University of Delaware (UD) and Fred Hutchinson Cancer Research Center (FHCRC). The BBSRC chicken cDNA project generated a large (>300,000) collection of ESTs that represents a wide range of adult and embryonic tissues [ 5 ]. The UD Chick EST project has focused on tissues important in agricultural production, with a heavy emphasis on the immune system [ 6 ]. The FHCRC EST collection was generated from DT40 cells (a transformed bursal cell line) [ 1 , 2 ], along with clones from the bursal EST project [ 7 , 8 ] and the UD activated T cell library [ 9 ]. By combining resources and clones from these projects, we have established a collection that encompasses a variety of tissues, and generated microarrays with 13,007 usable features. This paper describes the array with respect to clone selection and quality control parameters. Results and discussion Selection of clones for the array A compilation of 363,838 chicken ESTs from the BBSRC, UD, and FHCRC collections was sorted into contigs (33,323), singlets (27,235), and singletons (8,794), using the default parameters of the phrap assembly program [ 10 ]. The phrap singletons contain sequences represented in the contig group that could not be assembled, and these were eliminated from further consideration. Both the contig and singlet groups were analyzed by using BlastX to compare to GenBank (nr) and BlastN to compare to human dbEST. Because of the evolutionary divergence between chicken and the majority of the sequences that populate GenBank, a Blast score >50 was considered a significant hit, and clones with scores <50 were excluded.
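The Blast-score cut-off described above is a simple filter over best-hit scores. The consortium's actual pipeline scripts are not given in the text, so the sketch below uses invented sequence identifiers and scores purely to illustrate the shape of the operation:

```python
# Minimal sketch of the Blast-score filter described above.
# The (sequence_id, best_blast_score) pairs are hypothetical; the real
# pipeline compared contigs and singlets against GenBank (nr) and human dbEST.

def filter_by_blast_score(hits, threshold=50):
    """Keep sequences whose best Blast score exceeds the threshold."""
    return [seq_id for seq_id, score in hits if score > threshold]

hits = [("contig_001", 212), ("contig_002", 37), ("singlet_417", 88)]
print(filter_by_blast_score(hits))  # contig_002 falls below the cut-off
```

In practice this filter would run over parsed Blast tabular output rather than an in-memory list, but the selection rule is the same.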
Clones belonging to the existing chicken immunology collection (4,162 cDNAs from DT40 cells, bursa and lymphoid tissues) were sorted from the entire contig/singlet set, and after screening for E. coli, mitochondrial and ribosomal RNA contaminants, and identical Blast hits, a total of 2,248 singlets and 13,584 contigs remained as candidates from which to choose cDNAs for the final array. About half of the clones in the contig group were expressed in 4 or more libraries, indicating wide tissue expression (Figure 1 ). The remaining half was found in 3 or fewer libraries, indicating more restricted expression. For clones belonging to contigs, the most 5' clone was selected for inclusion on the array. This potentially introduces a 5' bias in the sequence available for hybridization; however, since the average insert size for all clones is approximately 1.2 kb and most cDNAs were made by oligo dT priming, clones should contain the entire downstream sequence. Figure 1 Library coverage in clones assembled into contigs. Clones from the BBSRC, UD, and FHCRC collections were assembled into 13,584 high scoring (BlastX >50) contigs using phrap software [10], and the number of different tissue libraries represented in each contig was scored. There were 6,832 contigs that had clones from 4 or more libraries, while the remaining contigs consisted of clones from 3 or fewer libraries. The library representation of the clones in the singlets group is shown in Figure 2 . The numbers tended to reflect the depth of sequencing of the individual libraries [ 5 , 11 ]. The chondrocyte, ovary and stage 20–21 whole embryo libraries have more singlets; more than 25,000 ESTs were sequenced from each of these libraries, as opposed to 7–15,000 from the other libraries.
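The per-contig clone choice described above (taking the most 5' clone) amounts to selecting the clone with the smallest start coordinate on the contig consensus. A minimal sketch, with invented contig names and clone positions:

```python
# Pick, for each contig, the clone whose insert starts furthest 5'
# (smallest start coordinate). Contig and clone names are invented.

def most_5prime_clones(contigs):
    """contigs: dict mapping contig id -> list of (clone_id, start_position)."""
    return {cid: min(clones, key=lambda c: c[1])[0]
            for cid, clones in contigs.items()}

contigs = {
    "contig_12": [("cloneA", 140), ("cloneB", 5), ("cloneC", 220)],
    "contig_98": [("cloneD", 0), ("cloneE", 310)],
}
print(most_5prime_clones(contigs))  # cloneB and cloneD are the most 5'
```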
The correlation is not perfect, however, and the lack of correspondence likely reflects similarities of some libraries to others in the collection, or relative specialization of the tissue, or a combination of these factors. Figure 2 Library coverage of the singlets. Clones that are only represented once in the ESTs assembled with the phrap software [10], and with Blast scores >50, were analyzed for their distribution in the different libraries. The final selection of clones for the array was made by randomly choosing about 4,800 ESTs expressed in a wide range (>3) of tissues, and about 4,800 with a more narrow (1–3 tissues) expression profile, in addition to 1,735 singlets. The library distribution of the final clone selection is shown in Figure 3 . However, it is important to note that because >50% of the clones were represented in multi-library contigs, the potential tissue representation on the array is greater than that depicted by library representation. Figure 4 shows the minimal expected tissue coverage of the 11,447 clones chosen from the BBSRC collection. Note, for example, that while only 724 clones from the stage 36 trunk library were selected for the array, at least 2,000 mRNAs from that tissue are represented by clones from various libraries. Figure 3 Library distribution in final clone set. The clones in the original immunology collection and clones randomly selected from the contigs and singlets were scored for library of origin. Figure 4 Minimal expected tissue coverage. The libraries represented in each of the contigs or singlets from which a clone was chosen for the array were scored to give an estimation of the expected coverage for a given tissue. Only clones in the BBSRC collection were included in this analysis. Annotation A list of the clones can be accessed on-line [ 12 ]. The clones represented in the list total 15,769. PCR product quality was assessed using gel electrophoresis and the results were scored and recorded.
After identifying poor quality PCR products (e.g., no detectable product, detection of multiple products), the number of useable features totals approximately 13,000, including control features. The annotation file contains accession numbers, source clone name, and source assigned annotation or Blast derived annotation. In addition the EST identification assigned by The Institute for Genome Research (TIGR) and found in TIGR's Gallus gallus Gene Index (GgGI) [ 13 ] is provided, as is the identifier for TIGR's consensus (TC) sequence and TIGR annotation. An analysis of the TC identifiers for clones on the array revealed that 1,184 mRNAs are represented by more than one clone. This is due to clones in non-overlapping contigs and some redundancy in the original immune collection. A more detailed annotation file, as well as a database for array data is under development and will be accessible on line [ 6 ]. Clone selection and array fabrication predated the sequencing of the chicken genome. An analysis of the sequence of the clones on the array indicates that 10,168 of the 21,447 predicted or annotated chicken genes in the GenBank chicken Unigene collection are present on the array. The remaining clones match cDNAs not yet included in Unigene, or other portions of the chicken genome, or are redundant. Clones are available from their original source: the BBSRC collection, distributed by the MRC gene service [ 14 ]; the DKFZ collection at Heinrich-Pette-Institute maintained by Dr. Jean-Marie Buerstedde [ 15 ]; the DT40 collection at Fred Hutchinson Cancer Research Center, maintained by Dr. Paul Neiman [ 16 ]; the T-cell and lymphoid libraries, maintained by Dr. Joan Burnside of the Delaware Biotechnology Institute [ 6 ]. Chicken 13K array performance An image of the 13K array hybridized to RNA extracted from chicken brain and myc-transformed embryo fibroblast samples and independently labeled with Cy3™ or Cy5™ fluorescent dyes is shown in Figure 5 . 
There is good discrimination between the two samples, as well as many commonly expressed genes. Particularly striking is the difference in signal intensities associated with the spots located near the bottom of each block. These spots correspond to clones represented in the DT40/UD/DKFZ immune collection, which were originally selected with a bias towards highly expressed genes. Since the BBSRC clones are predominantly from highly normalized libraries and were chosen as non-overlapping with the original immune system set, this resulted in a survey of lower abundance and more tissue-specific transcripts. Figure 5 Image of 13K array hybridized to brain (Cy5™) and myc-transformed fibroblasts (Cy3™). The array layout is 32 blocks in a 4 × 8 configuration and each PCR product is represented once, with the exception of negative controls, which are replicated in each block. Reproducibility Labeled samples were co-hybridized to the array for 16 hrs using standard protocols [ 12 ]. The same brain and fibroblast RNA extracts were also labeled by reversing the dye orientation and then co-hybridized to a second array. After image analysis, modest signal-to-noise (S/N) filtering, log base-2 ratio transformation, loess normalization, and corrections for the inverted dye orientations, the results from the two hybridizations were compared and were shown to be highly correlated (Figure 6 ; Pearson correlation coefficient, r = 0.972). The high correlation is indicative of a very high level of technical reproducibility in array performance. Rare outlying data points and the slight deviation from a slope of 1 may reflect the influence of the different dyes used in the amino-allyl labeling. Figure 6 Correlation of signals from chicken 13k array hybridized to brain and fibroblast RNA. Samples were each labeled with Cy3™ and Cy5™ to perform a dye swap comparison. Signal-to-noise, specificity and sensitivity We randomly chose one of the "myc-transformed embryo fibroblast vs.
brain" array comparisons and determined the signal-to-noise (S/N) values for each channel using the background-corrected feature signals and the variation in the local background signal. Table 1 contains the results for the individual channels/samples. Of note is the high number of features with a S/N > 3.0, a value commonly used for defining the lower-bound threshold of detection. The mean S/N is also provided in Table 1 for each channel. These results reflect the significant detection capabilities obtainable in using the array. For example, the data from this representative comparison spanned the maximum fluorescent dynamic range of detection, from over 65,000 counts down to background count levels. In addition, the average local background signal for both channels was consistently low across the entire array, with no appreciable spatial block-level differences (see Table 1 ). Furthermore, the variation in the local background signal was less than 38%. Taken collectively, the array provides a significant level of sensitivity for expression profiling.

Table 1. Chicken 13K cDNA Microarray Performance Metrics

Label / Sample      Mean BG Signal   Spot-Level S/N >3   Mean Spot-Level S/N
Cy3 / Fibroblast    118 ± 6          88.0%               35.1
Cy5 / Brain         48 ± 3           86.3%               38.3

(± denotes the standard deviation of the mean)

Figure 7 is a box plot of a "brain vs. brain" and a "myc-transformed embryo fibroblast vs. brain" comparison using the array. The y-axis is the log 2 -transformed (Cy3™/Cy5™) values for each comparison. The bar inside the box is the median value, the upper and lower dimensions of the box define the inter-quartile range, and the crossbars demark the 10th to 90th percentile range. The difference in the log 2 ratio distributions between comparisons highlights the capabilities of the array to detect transcript-level differences between the fibroblast and brain samples.
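A spot-level S/N of the kind reported above is commonly computed as the background-corrected spot signal divided by the standard deviation of the local background. The exact GenePix formula is not given in the text, so the following sketch (with invented intensities) assumes that common definition:

```python
# Sketch of a spot-level S/N computation: (spot - mean background) / sd(background).
# This is one common definition, assumed here; intensities are invented.
import statistics

def spot_snr(spot_signal, local_bg):
    """S/N of one spot given its raw signal and local background pixel values."""
    bg_mean = statistics.mean(local_bg)
    bg_sd = statistics.stdev(local_bg)
    return (spot_signal - bg_mean) / bg_sd

def fraction_detectable(spots, threshold=3.0):
    """Fraction of spots whose S/N exceeds the detection threshold (S/N > 3)."""
    snrs = [spot_snr(sig, bg) for sig, bg in spots]
    return sum(s > threshold for s in snrs) / len(snrs)

# Two bright spots and one near background level.
spots = [(900, [110, 120, 115, 125]),
         (130, [110, 120, 115, 125]),
         (4000, [40, 50, 45, 55])]
print(fraction_detectable(spots))  # only the middle spot fails S/N > 3
```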
Figure 7 Box plot of log 2 ratios from arrays hybridized to brain labeled independently with Cy5™ and Cy3™ and brain versus fibroblasts. The y-axis is the log 2 -transformed values for each comparison. The bar inside the box is the median value, the box upper and lower dimensions define the inter-quartile range, and the crossbars demark the 10th to 90th percentile range. The Venn diagram in Figure 8 (A,B) indicates the sample-specific "detectable signals" (spot-level S/N >3.0) from bursa, liver, brain, and myc-transformed embryo fibroblast. Note that signals were obtained for 7,422 spots with RNA from bursa, suggesting that this array provides wide coverage for experiments with lymphoid tissues. Excellent coverage of liver, brain and fibroblast transcripts was obtained as well. The identification of tissue-specific transcripts is noteworthy and reflects the clone selection process, which was designed to provide detection of mRNAs in a wide range of tissues, as well as low abundance, unique transcripts. It is of interest that the myc-transformed fibroblasts are a quail-derived cell line; these results indicate that these arrays will be useful for studies in other gallinaceous birds. Figure 8 Performance of amplified RNA (aRNA) and mRNA from different tissues. Common and tissue specific expression is illustrated in panels A and B. Panel C shows fair concordance of hybridizing spots between aRNA and mRNA. In a separate experiment, T7-amplified, random-primer labeled RNA was compared with random-primer labeled poly A RNA (from the same preparation). Figure 8C shows a fair concordance with about 80% of the same spots showing hybridization with each sample. However, this comparison reveals that amplification loses some signals detected with mRNA but picks up others, presumably from low abundance messages which amplify better (with respect to the cDNA sequences on the chip) than average.
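The concordance between amplified and unamplified samples can be quantified as the overlap of the two sets of detected spots. The text does not specify the exact measure, so this sketch (with invented spot IDs) uses the Jaccard fraction, i.e. the share of spots detected by either method that are detected by both:

```python
# One way to score aRNA/mRNA concordance: Jaccard overlap of detected-spot sets.
# The measure choice and spot IDs are illustrative, not taken from the paper.

def concordance(detected_a, detected_b):
    """Fraction of spots detected by either method that are detected by both."""
    a, b = set(detected_a), set(detected_b)
    return len(a & b) / len(a | b)

arna = {"spot1", "spot2", "spot3", "spot4"}
mrna = {"spot2", "spot3", "spot4", "spot5"}
print(concordance(arna, mrna))  # → 0.6
```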
In another experiment (not shown) repeat amplifications of the same RNA prep gave satisfactorily consistent results (correlation coefficient >0.9). These results emphasize that it is important to use the same method of RNA preparation and labeling to obtain reliable comparisons. Conclusions An international consortium of researchers interested in using the chicken as both a model biological system and as an important agricultural commodity has consolidated resources to produce a microarray containing 13,000 features representing approximately 12,000 different mRNAs. These are now available to academic researchers through genomics@fhcrc.org. This array overlaps previous chicken immunology arrays and extends the coverage to 24 different tissues or cell types. In conjunction with the recent release of the chicken genome sequence, this tool will have wide application to studies in developmental biology, immunology, vaccine application, as well as identification of well-characterized complex traits. The availability of genomics tools will enhance the further development of the chicken as a powerful biological model. Materials and methods Libraries and array construction BBSRC and UD clones were shipped to the FHCRC core genomics lab. Information on the libraries joined to produce this collection is available at individual web sites and previous publications [ 1 , 6 , 11 , 15 ]. Microarrays were constructed using protocols modified from those described by DeRisi et al . [ 17 ]. Individual PCR products were verified as unique via gel electrophoresis and purified using the Millipore Multiscreen-PCR filtration system. Purified PCR products were mechanically "spotted" in 3X SSC (1X = 150 mM sodium chloride, 15 mM sodium citrate, pH 7.0) onto poly-lysine coated microscope slides using a GeneMachines OmniGrid high-precision robotic gridder (Genomics Solutions, Ann Arbor, MI).
The array layout consists of 32 blocks in a 4 × 8 configuration and each PCR product is represented once on the array. In addition, each array sub-grid (i.e., "block") contains spots representing 4 different Arabidopsis genes (negative controls) and 1 spot consisting of sheared chicken (white leghorn) genomic DNA. A GenePix scanner-compatible file (chicken 13k_v1.0.gal) is available on-line [ 12 ]. For other scanners, this file can be opened in a text editor and used to construct a similar file that meets other image analysis software's format specifications. RNA preparation, labeling and hybridization Total RNA was prepared using Qiagen (Chatsworth, CA) RNeasy kits and amplified using a linear T7 promoter-based mRNA amplification method incorporating amino-allyl dUTP followed by random primer labeling with Cy3™ or Cy5™ (amplification and labeling kits are available from Ambion, Inc., Austin, TX). For hybridization, 0.6 μl of 10% sodium dodecyl sulfate (SDS) was added to the labeled RNA, which was then heated at 99°C for 2 min. RNA was then centrifuged at 14,000 rpm for 3 min, and the sample cooled to room temperature. After placing an array slide in a hybridization chamber, 10 μl 3X SSC was added to the slide, away from the spotted area. The RNA sample was then added to the array area and the cover slip promptly positioned over the array. The sealed hybridization chamber was incubated in a water bath at 63°C for 16 h. The slide was then washed for 2 min in a standard slide washing container, first in 1X SSC/0.03% SDS, then in 1X SSC, followed by a 20 min wash with agitation (60 rpm) in 0.2X SSC and a 10 min wash with agitation in 0.05X SSC. The slide was protected from light during the prolonged washes. The slide was then centrifuged (500 rpm × 5 min) to dry.
Fluorescent array images were collected for both Cy3™ and Cy5™ using a GenePix 4000A fluorescent scanner (Axon Instruments, Inc., Foster City, CA) and image intensity data were extracted and analyzed using GenePix Pro 3.0 microarray analysis software. Authors' contributions JB and PN generated UD and FHCRC clones, respectively. DB provided the BBSRC clones. JB and JT performed the analysis for clone selection. JD and RB fabricated the microarrays. PN, JD, and RB performed the analysis and validation of the microarray. MA generated the annotation file.
524524 | Overdispersed logistic regression for SAGE: Modelling multiple groups and covariates | Background Two major identifiable sources of variation in data derived from the Serial Analysis of Gene Expression (SAGE) are within-library sampling variability and between-library heterogeneity within a group. Most published methods for identifying differential expression focus on just the sampling variability. In recent work, the problem of assessing differential expression between two groups of SAGE libraries has been addressed by introducing a beta-binomial hierarchical model that explicitly deals with both of the above sources of variation. This model leads to a test statistic analogous to a weighted two-sample t -test. When the number of groups involved is more than two, however, a more general approach is needed. Results We describe how logistic regression with overdispersion supplies this generalization, carrying with it the framework for incorporating other covariates into the model as a byproduct. This approach has the advantage that logistic regression routines are available in several common statistical packages. Conclusions The described method provides an easily implemented tool for analyzing SAGE data that correctly handles multiple types of variation and allows for more flexible modelling. | Background The nature of SAGE The Serial Analysis of Gene Expression (SAGE) methodology introduced by Velculescu et al. [ 1 ] is a sequencing-based approach to the measurement of gene expression. Briefly, mRNA transcripts are converted to cDNA and then processed so as to isolate a specific subsequence; starting from the poly-A tail, the subsequence is the 10 (normal SAGE) or 14 (long SAGE) bp immediately preceding the first occurrence of a cleavage site for a common restriction enzyme. Ideally, this subsequence, or "tag" is sufficiently specific to uniquely identify the mRNA from which it was derived. 
Tags are sampled, concatenated and sequenced, and a table consisting of the tag sequences and their frequency of occurrence is assembled. The complete table derived from a given biological sample is referred to as a SAGE "library". As most tags are sparse within the entire sample, most libraries contain numbers of tags in the tens of thousands to allow the expression levels to be estimated. Due to the current costs of sequencing, however, the total number of libraries assembled for a given experiment is typically small: often in the single digits and occasionally in the tens. While the type of information, gene expression, being investigated in a SAGE experiment is the same as that in a cDNA or oligonucleotide microarray experiment, there are some qualitative differences in the approaches. First, SAGE uses sequencing as opposed to competitive hybridization. Second, while the expression value reported for an array experiment is a measure of fluorescence and is loosely continuous, SAGE supplies data on gene expression in the form of counts, potentially allowing for a different type of "quantitative" comparison. Third, SAGE is an "open" technology in that it can provide information about all of the genes in the sample. Microarrays, by contrast, are "closed" in that we will only get information about the genes that have been printed on the array. Mathematically, the information pertaining to the abundance of a particular tag in a sample is summarized in two numbers: Y , the number of counts of that tag in the library, and n , the total number of tags in the library. In analyzing SAGE data across a series of libraries, interest typically centers on assessing how the underlying true level of gene expression is changing as we move from one library to the next.
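Conceptually, assembling a SAGE library from its sequenced tags is a counting exercise. A minimal sketch (with invented 10-bp tags) producing the per-tag count Y and library size n used in the formulation below:

```python
# Build a SAGE "library" table from a list of sequenced tags.
# The tag sequences are invented; real tags are 10 bp (or 14 bp for long SAGE).
from collections import Counter

def build_library(tags):
    """Return (per-tag counts, library size n = total tags sampled)."""
    counts = Counter(tags)
    n = len(tags)
    return counts, n

tags = ["CATGAAACCC", "CATGAAACCC", "CATGTTTGGG", "CATGAAACCC"]
counts, n = build_library(tags)
print(counts["CATGAAACCC"], n)  # Y = 3 for this tag, n = 4 tags total
```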
Mathematical formulation of the differential expression problem When surveyed across a series of libraries, the sufficient statistics containing all of the information about the change in expression for a single tag are the set of counts { Y i } and the set of library sizes { n i }, where the subscript i denotes the specific library. Unless otherwise specified, we will restrict our assessment of differential expression to the case of a single tag. This approach is common to all of the procedures described below. In a real analysis the chosen test is applied to all tags individually and a list of those tags showing differential expression is reported. Different tests will provide differing assessments of significance for individual tags, and hence the list provided will depend on the test employed. In most problems of interest, there is also covariate information X i describing properties of library i . The most common case involves comparing two groups of libraries, such as cancer and control. In this case the information X i simply defines which group library i belongs to. If there are more than two groups, X i can have more levels or can even be vector valued, but as before interest centers on assessing how and whether the expected proportion changes with X . Much work has been done on the problem of comparing expression between two groups. Most of the approaches [ 2 - 9 ] deal with comparing one library with another. Of these, [ 2 , 6 , 7 ] extend their consideration to the case of two groups of libraries by pooling the libraries within a group, effectively reducing the sufficient statistics to the summed counts (Σ i Y i , Σ i n i ) within each group. This approach, while it captures the count nature of the data, loses information in that variation of the proportions within a group is ignored. As noted by both Man et al. [ 9 ] and Ruijter et al. [ 10 ], most of the above tests give equivalent results in terms of assessing significant differences.
By contrast, the two-sample t -test used to compare two groups of samples in [ 11 ] reduces the sufficient statistics to the set of proportions { p i } = { Y i / n i }, capturing the variation between members of a group but losing track of the inherent count sampling nature and variability of the data. The two-sample t -test results can be dramatically different from the pooled test results, as they focus on two different types of variation. The effects of these two approaches on a single group of four libraries are shown in Table 1 . Pooling reduces the data to the summed counts at the right, and focusing on proportions reduces the data to the proportions on the bottom. In both cases, this reduction results in a loss of information. When pooling is used, we can't tell that one of the group proportions was large and the other small, indicating instability. When proportions are used, we can't tell that one library was much smaller than the other, so that proportion should be "trusted less" than the other. Baggerly et al. [ 12 ] proposed a beta-binomial hierarchical model for SAGE data in an attempt to simultaneously model both types of variation. This model leads to a test statistic called a weighted two-sample t -test, t w . Computing the value of this test statistic requires all 8 of the numbers in the main body of Table 1 ; there is no reduction of the sufficient statistics. This test statistic exhibits different behaviors depending on which type of variation is larger for a given tag. When the within-library sampling variation is much larger than the between-library variation, t w gives results close to those supplied by pooling tests, which focus on within-library variation. Conversely, when the between-library variation is much larger than the within-library variation, t w gives results very similar to those of a two-sample t-test, which focuses on between-library variation. 
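The contrast between the two reductions can be sketched numerically. The counts and library sizes below are hypothetical (not the Table 1 values), chosen so that the pooled test and the proportions-based t -test disagree sharply:

```python
import math

# Hypothetical counts Y_i and library sizes n_i for two groups of two libraries.
y1, n1 = [30, 2], [50000, 50000]
y2, n2 = [5, 6], [50000, 50000]

# Pooling: reduce each group to (sum Y_i, sum n_i) and apply a 2x2 chi-square test.
Y1, N1, Y2, N2 = sum(y1), sum(n1), sum(y2), sum(n2)
p_pool = (Y1 + Y2) / (N1 + N2)
chi2 = ((Y1 - N1 * p_pool) ** 2 / (N1 * p_pool * (1 - p_pool))
        + (Y2 - N2 * p_pool) ** 2 / (N2 * p_pool * (1 - p_pool)))

# Proportions: reduce each library to p_i = Y_i / n_i and apply a two-sample t-test.
def t_stat(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

p1 = [y / n for y, n in zip(y1, n1)]
p2 = [y / n for y, n in zip(y2, n2)]
t = t_stat(p1, p2)
```

On these numbers the pooled χ 2 easily clears the 3.84 cutoff while the t -statistic is nowhere near significant, because the unstable within-group proportions that the pooled test discards are exactly what the t -test sees.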
The t w model also allows the relative contributions of the two types of variation to be assessed. Baggerly et al. [ 12 ] found that for high-count tags, between-library heterogeneity is the much larger source of variation and that pooling methods which do not allow for heterogeneity are biased towards finding high count tags to be significantly different. This can potentially lead to large fractions of false positives, as becomes apparent when the results for several different tags are plotted. Extensions to multiple groups While cases with more than two groups have been described in the literature [ 2 , 13 - 15 ], the means of analysis is currently something of a hybrid. Methods explicitly attacking the multi-library problem have been proposed [ 16 , 17 ], but the most common approach at present [ 13 , 15 ] seems to involve coupling hierarchical clustering of the data with pairwise tests for differential expression [ 2 ] between one group and another. This hybrid approach can indirectly capture both types of variability, with the hierarchical clustering focused on the variation between proportions within a group, and the pairwise test focusing on sampling variation. Clustering has other benefits for clarifying thought apart from assessing differential expression, and we definitely recommend it for exploring the structure of the data. However, clustering tends not to provide a numerical summary, so combining the clustering results with those of the pairwise comparisons can be something of an art. An additional drawback is that the pairwise comparisons may miss useful information about variability by focusing only on a subset of the libraries available. For the purposes of assessing differential expression we believe more efficient tests are available. 
Our approach: Overdispersed logistic regression We seek to construct a method that takes the count nature of the data into account, deals with multiple groups simultaneously, and allows for variability in the proportions beyond that due to sampling alone. Fortunately, this is not the first time such a problem has arisen. The problem of assessing differential expression for multiple groups corresponds to the classical statistical problem of the analysis of variance (ANOVA). When the values of interest are continuous (e.g., microarray log ratios), the test statistics become F-tests, higher-dimensional generalizations of the two-sample t -test. When the data are counts (SAGE data), and sampling variability needs to be dealt with, the ANOVA test can be adapted to give logistic or Poisson ANOVA. The multi-library test for differential expression proposed by Stekel et al. [ 17 ] corresponds to Poisson ANOVA, but without allowance for overdispersion. ANOVA deals with the extension from two to a larger number of distinct groups, but this can be viewed as a special case of the situation where the covariate information is continuous. One common way of modelling the dependence of proportions upon covariates is through logistic or Poisson regression, both of which are special cases of generalized linear models [ 18 , 19 ]. Such models incorporate the form of the sampling variability directly. For example, the logistic model for proportions, logit( p i ) = log( p i /(1 - p i )) = β 0 + β 1 X i with V ( Y i ) = n i p i (1 - p i ), defines both the function of the data that is to be modeled in terms of the covariates (the logit of the proportions) and the precision of each of the measurements. The maximum likelihood estimates of the parameters of this model can be found through iteratively reweighted least squares (IRLS). Excess variation within a level, or overdispersion, can be introduced into a logistic regression framework in a number of ways.
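As a sketch of how IRLS fits this model, the following minimal implementation (with hypothetical counts and a two-group indicator design; a real analysis would use a packaged routine such as R's glm) recovers the pooled group proportions:

```python
import numpy as np

def irls_logistic(Y, n, X, iters=25):
    """Fit logit(p_i) = X beta to binomial counts by iteratively
    reweighted least squares (IRLS)."""
    Y, n = np.asarray(Y, float), np.asarray(n, float)
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))
        w = n * p * (1 - p)                 # binomial working weights
        z = eta + (Y - n * p) / w           # working response
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return beta

# Hypothetical two-group example: intercept plus a 0/1 group indicator.
Y = np.array([45, 40, 20, 18])
n = np.array([50000, 48000, 52000, 47000])
X = np.column_stack([np.ones(4), [0, 0, 1, 1]])
beta = irls_logistic(Y, n, X)
p_hat = 1 / (1 + np.exp(-(X @ beta)))      # fitted group proportions
```

With a saturated group-indicator design the fitted proportions equal the pooled group proportions, here 85/98000 and 38/99000, which is one way to check the fit.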
The most common and most widely implemented approach is to replace the binomial likelihood function being maximized above with a "quasi-likelihood" which differs from the initial formulation solely through the introduction of a scale term, σ 2 , into the variance equation, so that V ( Y i ) = σ 2 n i p i (1 - p i ). This approach has the advantage that it inflates the variance of each of the observations by a like amount, so that the estimated coefficient values will be the same; just the associated standard errors will be inflated. Logistic regression with quasi-likelihood overdispersion is implemented in a wide variety of statistical packages, including S-PLUS, R, GLIM, and SAS. Another method of introducing overdispersion is to assume a hierarchical model in which the proportions at a given level of the covariate are drawn from a nondegenerate distribution, and the distribution of the observed counts is binomial conditional on the value of the drawn proportion. When a beta distribution is assumed for the proportions, the final unconditional distribution of the observed counts is beta-binomial. This is the model suggested by Baggerly et al. [ 12 ] for modelling overdispersion in SAGE data, and is also the model used by Crowder [ 20 ] in generalizing ANOVA to deal with proportions subject to overdispersion. It can be shown (e.g., Collett [ 18 ], p. 201) that the variance of beta-binomial counts is of the form V ( Y i ) = n i p i (1 - p i )[1 + ( n i - 1) φ ], which is equivalent to the quasi-likelihood formulation (with σ 2 = 1 + ( n - 1) φ ) when all of the library sizes n i are the same. While approximate equality may suffice, even this assumption may be questionable for SAGE data, particularly if some of the libraries are drawn from experiments conducted at different times. Williams [ 21 ] shows how IRLS can be adapted to deal with this type of overdispersion, and notes that estimation involves φ only and need not assume further structure from the beta distribution, making the procedure slightly more general.
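A common moment estimate of the quasi-likelihood scale σ 2 is the Pearson chi-square statistic divided by its residual degrees of freedom. A small sketch, with hypothetical counts and fitted group proportions standing in for a real fit:

```python
import math

# Hypothetical counts, library sizes, and fitted group proportions
# (a two-group fit with pooled proportions 70/98000 and 35/99000).
Y = [60, 10, 15, 20]
n = [50000, 48000, 52000, 47000]
p_hat = [70/98000, 70/98000, 35/99000, 35/99000]

# Pearson residuals r_i = (Y_i - n_i p_i) / sqrt(n_i p_i (1 - p_i))
r = [(y - ni * p) / math.sqrt(ni * p * (1 - p))
     for y, ni, p in zip(Y, n, p_hat)]

# Moment estimate of the scale: Pearson chi-square over residual df
# (4 observations minus 2 fitted parameters).
sigma2_hat = sum(ri ** 2 for ri in r) / (len(Y) - 2)
```

A value of sigma2_hat far above 1, as here, signals overdispersion: the first group's two libraries have wildly different proportions, which binomial sampling alone cannot explain.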
This form of overdispersion is implemented in R as part of the dispmod package. In the logistic regression framework, assessing differential expression reduces to a case of deciding whether a set of regression coefficients is different from zero. This can lead to slightly different inferences than models such as t -statistics applied to the proportions, in that approximate normality is assumed to hold for the β values rather than the proportions themselves. When we have worked with a beta model for the p i 's, we have been led to choices of parameters which yield quite skewed distributions, suggesting that the logit scale may be more appropriate. Working with the β values has the additional advantage that confidence intervals are naturally interpreted in terms of fold changes. Results Comparing two groups We begin by comparing the counts of the tag ATTTGAGAAG in 8 colon libraries initially described in Zhang et al. [ 2 ]. These 8 libraries include two normal colon (NC1 and NC2), two primary tumors (TU98 and TU102) and four cell lines (CACO2, HCT116, RKO, and SW837). For now, we focus on comparing normal colon with all tumors, primary or cell line. Counts of the tag and the corresponding library sizes are given in Table 2 . A χ 2 test applied to the pooled counts from each of the two groups yields a test statistic of 444.27; the 95% cutoff for the null distribution is 3.84, with values above this being deemed "significant". The two-sample t -test applied to the two groups of proportions yields 1.60; the 95% cutoffs for the null t 6 distribution are ± 2.45, so this test suggests that the difference is not significant, showing the possibility of stark disagreements between tests focusing on different portions of the variability. The t w statistic proposed by Baggerly et al. [ 12 ], which incorporates both types of variance, yields a test statistic of 1.60. 
The null distribution of this test statistic in this case is approximately a t 6 distribution, and the qualitative results are far closer to those of the t -test than those of the pooled tests, reflecting the relative dominance of patient heterogeneity in driving the total variation for this tag. We note in passing that this disagreement between the two types of tests is not an isolated incident. When we surveyed all of the tags in this group of libraries we found 10 tags with | t | < 2 and χ 2 > 200, and 48 tags with | t | < 2 and χ 2 > 50. In Baggerly et al. [ 12 ] it was found that most high-count tags appeared significantly different when a pooled test was used and not significant when a t -test was tried, and that in this case the t -test was more likely correct. The results from three logistic regression model fits to the data are shown in Table 3 . In the first model there is no allowance for overdispersion, in the second the quasi-likelihood approach to overdispersion is employed, and in the third the hierarchical approach to overdispersion is used. Here, the values of the covariate X are 0 or 1 as the library is in the first or second group, respectively. In models 1 and 2, the fitted proportions are p̂ 1 = e -4.66 /(1 + e -4.66 ) = 0.94% and p̂ 2 = e -4.66 - 0.89 /(1 + e -4.66 - 0.89 ) = 0.39% for the first and second groups, respectively, and the proportions are only slightly altered in model 3. We note that the estimated coefficient values are exactly the same for the first two models, and this is true for these two approaches in general. Fitting the model with no allowance for overdispersion gives a z -value of β 1 / s.e. ( β 1 ) = -20.42, which is definitely significant. Note that the square of this value is of the same order as the value found by the χ 2 test. The Pearson residuals from this model, r i = ( Y i - n i p̂ i )/√( n i p̂ i (1 - p̂ i )), however, show a problem.
If the model fits well, these should be approximately distributed as a standard normal, with extreme values from a set of 8 observations around 3 or 4 in magnitude. The actual values, -14.6 and 19.0, are far too extreme. When the model is fit with allowance made for overdispersion, the point estimate of the dispersion parameter is σ̂ 2 = 187.57; this value should be close to 1 if there is no overdispersion. With this allowance made the t -value of -1.49 is no longer significant. This t -value can be found from the first z -value (-20.42) by dividing by σ̂ = 13.70. Similarly scaling the residuals yields values far more commensurate with a standard normal. We note that due to the differences in the models employed, the presumed distributions of the test statistics have changed. If we assume that the standard logistic model with no overdispersion holds, the test statistic has an approximately normal distribution. This is because the number of total successes is driving the binomial distributions to approximate normality. When we shift to a model where we presume the existence of overdispersion, the test statistic now has a t distribution. This is because our estimate of the variance is now strongly dependent on the precision with which we can estimate the overdispersion parameter, and this precision depends on the number of libraries, not the number of successes. Fitting this model with the hierarchical type of overdispersion, model 3 in Table 3 , yields slightly different answers, but β̂ 1 is still not significant. The difference in coefficient values from those found before is due to the fact that in this model the amount of overdispersion attributed to each proportion changes slightly with library size, thus altering the weights used in the regression model. The point estimate for the hierarchical dispersion parameter φ is φ̂ = 3.399 × 10 -3 , so the multipliers for the binomial variances are 1 + ( n i - 1) φ̂ = (169.62, 165.78, 141.62, 190.32, 207.26, 190.12, 175.35, 208.84).
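The quoted numbers can be checked directly: the t -value is the z -value scaled by the square root of the dispersion estimate, and the first two beta-binomial multipliers follow from φ̂ together with the first two library sizes (49610 and 48479, quoted later in the paper):

```python
import math

# Quasi-likelihood rescaling: z / sqrt(sigma2_hat) gives the t-value.
sigma2_hat = 187.57
z = -20.42
t = z / math.sqrt(sigma2_hat)   # about -1.49, as in the text

# Hierarchical (beta-binomial) inflation: 1 + (n_i - 1) * phi_hat.
phi_hat = 3.399e-3
n = [49610, 48479]              # first two library sizes from the text
mult = [1 + (ni - 1) * phi_hat for ni in n]   # reproduces 169.62, 165.78
```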
Averaging these gives 181.11, which is close to the value found for the quasi-likelihood dispersion parameter. We note that the differences in coefficient values for models 2 and 3 are largely cosmetic, but the differences in significance between model 1 and the others are not. Choosing to account for overdispersion is more important than the precise model used to achieve this. We note that the overdispersed logistic regression approaches give t -values of about -1.49, whereas the two-sample t -test and the modified version t w suggested by Baggerly et al. [ 12 ] both give t -values of about -1.6 (as noted earlier, agreement between t and t w suggests that for this tag, the between-library variation is much larger than the within-library variation). There are two reasons for this difference. First, the t statistic works on the proportion scale, and logistic regression works on the β scale, which is roughly the log proportion scale. Second, the t w statistic used here does not assume that the overdispersion factor is the same in the two groups being compared; the variance estimate is not pooled. The latter difference is actually the more important for this contrast, particularly as the variance estimate from the first group of size 2 is very unstable. This effect is not always subtle; if we consider instead the tag GCGAAACCCT, with counts given in Table 2 , the two-sample t -test and the weighted t w test both give -1.57, and the logistic regression t -value is -4.16. Of the two answers, we tend to prefer the one given by the logistic regression fit, for two reasons. First, when we have fit the parameters of the beta distributions for the proportions directly, we have found the distributions to be quite skewed. As such we find it better to assume rough normality on the β coefficient scale. Second, when the number of libraries in a group is quite small, which will often be the case with SAGE data, we prefer the pooled estimate of the variance.
This preference is due in large part to its greater stability through the use of more degrees of freedom. It is possible to explicitly incorporate levels of overdispersion that change with the covariates in logistic regression, but we have not pursued this here. Comparing three or more groups Above, we treated the colon libraries as if they came from two groups, but it is more natural to view them as coming from three: normal samples, primary tumors, and cell lines. When we have data from multiple groups, there are two different ways in which this changes the nature of the problem. First, if we are only interested in comparing two of the groups, it is often nonetheless worthwhile to incorporate the data from the other groups into the model. The reason for this is that when overdispersion is driving the variance, the significance of our results depends strongly on the precision with which we can estimate the overdispersion parameter. The libraries in the groups not directly involved in the comparison of interest can still supply information about the overdispersion parameter and increase the degrees of freedom of the associated t -test. Second, by examining the fitted proportions for all groups, the relative sizes of the transitions can be assessed. We begin by looking at the results for a single tag flagged as interesting in the paper by Zhang et al. [ 2 ], namely TGCTGCCTGT, where we presume that the contrast of most interest is between normal colon and primary tumors. The counts for this tag and the corresponding library sizes are given in Table 2 . We first compare the levels in normal colon and primary tumor while ignoring the cell lines (i.e., using just four libraries), and then fit a model incorporating all three groups. The results using logistic regression with hierarchical overdispersion are shown in Table 4 .
In the model with only two groups, we have a single covariate vector x 1 = (0,0,1,1) denoting which of the two groups the library belongs to. This model produces an overdispersion estimate of φ̂ = 8.938 × 10 -5 , for inflation factors of 1 + ( n i - 1) φ̂ = (5.43, 5.33, 4.70, 5.98). The fact that these factors are substantially larger than one suggests that the within-group heterogeneity is the dominant component of the variance not explained by the model. In the model with three groups, we cannot use a single covariate vector x 1 , as this is not suited to indicating 3 or more groups in an unordered fashion (using 0, 1, and 2 for the three groups respectively would force an ordering by saying that primary tumors are intermediate betwixt normal samples and cell lines). In general, if we have k groups, we need to use k - 1 covariate vectors. Here, we use x 1 = (0, 0, 1, 1, 0, 0, 0, 0) and x 2 = (0, 0, 0, 0, 1, 1, 1, 1). The set of all 0s ( x 1 = 0, x 2 = 0) corresponds to the first group, here normal colon, and the other groups are defined by which one of the other covariates is nonzero: Group 2 (primaries), ( x 1 = 1, x 2 = 0); Group 3 (cell lines), ( x 1 = 0, x 2 = 1). As we are still focused on the difference between normal colon and primary tumors, for which the logit values are β 0 and β 0 + β 1 respectively, the main interest remains on whether β 1 is significantly different from zero, and the predicted logit for the cell line group, β 0 + β 2 , does not enter the problem directly. Fitting this model produces an overdispersion estimate of φ̂ = 1.160 × 10 -4 , for inflation factors of 1 + ( n i - 1) φ̂ = (6.76, 6.62, 5.80, 7.46, 8.04, 7.45, 6.95, 8.09). In neither case (considering two groups or three) does the contrast between normal colon and primary tumor, represented as the magnitude of β̂ 1 , appear significant once allowance is made for overdispersion, but there is an interesting point to note.
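The k - 1 indicator covariates for unordered groups can be built mechanically; the sketch below reproduces x 1 and x 2 for the 8 colon libraries (group labels as in the text, with normal colon as the baseline):

```python
import numpy as np

# Group labels for the 8 colon libraries: 2 normal, 2 primary, 4 cell line.
labels = ["NC", "NC", "TU", "TU", "CL", "CL", "CL", "CL"]
levels = ["NC", "TU", "CL"]     # first level serves as the baseline

# Design matrix: intercept column plus one 0/1 indicator per non-baseline level.
X = np.column_stack(
    [np.ones(len(labels))]
    + [[1.0 if lab == lev else 0.0 for lab in labels] for lev in levels[1:]]
)
```

The all-zeros pattern in the indicator columns marks the baseline (normal colon); each other group is marked by exactly one nonzero indicator, matching the x 1 and x 2 vectors in the text.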
Even though the point estimate of overdispersion increases when the cell lines are included, and the value of the t -statistic ( β̂ 1 / s.e.( β̂ 1 )) associated with the difference declines, the associated p-value indicates an increase in significance. Without using the cell lines, we have just 4 libraries, and after estimating the mean proportions in each group just 2 degrees of freedom for estimating φ . When we use the cell lines, we have 8 libraries and 5 degrees of freedom for estimating φ . Thus, the degrees of freedom in the t -tests shift from 2 to 5. The t 2 distribution has very wide cutoffs, and the t 5 is much closer to normal. In general, the inclusion of related groups can improve estimation by increasing the precision of our estimate of overdispersion. In fitting the model with three groups, of course, we have also gained the ability to look at other contrasts. For example, we can look at normal colon versus cell lines, for which the logits are β 0 and β 0 + β 2 respectively, by checking the significance of β̂ 2 . Likewise, we can look at the difference between primary tumors and cell lines, for which the logits are β 0 + β 1 and β 0 + β 2 , by testing the significance of the difference β̂ 1 - β̂ 2 . While this significance is not listed in the table directly, we can compute the standard error of this contrast, s.e.( β̂ 1 - β̂ 2 ), divide the estimate by its standard error to get a t -statistic with the degrees of freedom listed (here 5), and compute a p-value accordingly. It is also possible to perform an omnibus test of whether there exists any significant difference among the groups, which is logistic ANOVA for proportions. The regular ANOVA test looks at the amount of variance explained by the terms of interest in the model and compares this to the amount of residual variance. Adjusting for the degrees of freedom in each group gives an F -test.
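The contrast test can be sketched as follows; the coefficient estimates and their covariance entries are hypothetical stand-ins for a fit's actual output:

```python
import math

# Hypothetical coefficient estimates and covariance entries from a fit.
b1, b2 = -0.50, -1.10
var_b1, var_b2, cov_b12 = 0.09, 0.07, 0.03

# Contrast beta1 - beta2 (primary tumors vs cell lines) and its standard error:
# Var(b1 - b2) = Var(b1) + Var(b2) - 2 Cov(b1, b2).
diff = b1 - b2
se_diff = math.sqrt(var_b1 + var_b2 - 2 * cov_b12)
t = diff / se_diff          # compare with a t distribution on 5 df
```

Note that the covariance term cannot be dropped: the two estimates share the baseline group, so they are correlated, and ignoring Cov(b1, b2) would misstate the contrast's precision.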
When dealing with generalized linear models, the quantity -2 * log(likelihood ratio), known as the deviance, plays a role analogous to the variance in ANOVA, and thus we can speak of the analysis of deviance. The analysis of deviance is complicated by the inclusion of overdispersion in the model, requiring a multi-step approach in which several different models are fit in succession. These models are listed in Table 5 . First, the model is fit using all available covariates and the overdispersion parameter is estimated. Here, the available covariates are x 1 and x 2 , and fitting the full model with both present, β 0 + β 1 + β 2 , gives φ̂ = 1.160 × 10 -4 as noted above. Second, submodels are fit with the value for overdispersion fed in as fixed. In this case, the submodels are β 0 + β 2 , using x 2 as the only covariate, β 0 + β 1 , using x 1 as the only covariate, and β 0 , using no covariates and simply fitting a single proportion to all of the data. The results are shown in Table 5 , from which the significance of a given model can be assessed by comparing the scaled reduction in deviance with the scaled residual deviance to the appropriate F distribution. Here, for example, testing whether the overall model including β 1 and β 2 explains the data significantly better than just fitting the same proportion throughout ( β 0 ) reduces to comparing the reduction in deviance, scaled by its 2 degrees of freedom, against the residual deviance, scaled by its 5 degrees of freedom, with the F 2,5 distribution, indicating that the overall difference between groups is not significant at the 5% level. It may be noted that the submodel including just β 1 in addition to the constant appears to explain very little; this is due to the way in which we have chosen the entries of X , so that including β 1 isolates the effect of the primary tumor group, but excluding β 2 still combines the normal colon group with the cell line group for the contrast. This latter grouping blurs the normal colon vs primary tumor distinction found to be somewhat larger earlier.
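The mechanics of the scaled analysis-of-deviance F -test can be sketched with hypothetical deviance values standing in for the Table 5 entries:

```python
# Hypothetical deviances (not the Table 5 values) for the full model
# (beta0 + beta1 + beta2) and the constant-only model (beta0), both fit
# with the overdispersion parameter held fixed at the full-model estimate.
D_full, df_full = 6.1, 5
D_null, df_null = 12.4, 7

# Scaled reduction in deviance over scaled residual deviance.
F = ((D_null - D_full) / (df_null - df_full)) / (D_full / df_full)
# Compare F with the F(2, 5) distribution; its 95% cutoff is about 5.79.
```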
Incorporating other covariates It is possible to use the logistic regression approach to partition the variance amongst multiple effects of interest. For example, in the above section we considered a case with colon libraries taken from both primary tumors and cell lines. Such data are also available for other organs, e.g., pancreas. If we are interested in identifying consistent differences between primary tumors and cell lines, it would be natural to use libraries from both organ types. However, if these were then compared as two groups, primary vs cell lines, the differences would be difficult to isolate due to the large differences between tissue types within both the primary and cell line groups. The solution is to fit a model with two covariates, with x 1 being 0 or 1 as the sample is colon or pancreatic, respectively, and x 2 being 0 or 1 as the sample is a primary tumor or a cell line, respectively. Inference reduces to testing the significance of β 2 , with the scale of natural variation being assessed only after the effects of the change in tissue type, β 1 , have been factored out. In the above example, we allowed for one other effect, tissue type. In principle, multiple factors can be allowed for through the inclusion of other covariates. Likewise, though the two covariates in the above example were both "factors" having a finite number of unordered levels, it is possible to include continuous covariates in the modelling process as well. To illustrate this, we give two hypothetical examples using the counts for the GCGAAACCCT tag from Table 2 . In the first example, we posit that we are trying to assess the differences between normal tissue and primary tumors, that the first 4 libraries come from normal colon and primary tumors as indicated, and that the remaining 4 libraries come not from cell lines but rather from normal tissue (libraries 5 and 6) and primary tumor (libraries 7 and 8) from some other organ.
As noted above, this leads to a scenario where we want to fit a model with two covariates: x 1 = (0, 0, 1, 1, 0, 0, 1, 1), indicating whether the library is normal (0) or primary tumor (1), and x 2 = (0, 0, 0, 0, 1, 1, 1, 1), indicating the organ from which the library was derived. In the second example, we posit that in addition to the above information, we have access to the levels of a biomarker potentially predictive of survival. These levels are supplied as the values of a third covariate vector, x 3 = (0.89, 0.35, 0.66, 0.23, 0.30, 0.54, 0.90, 0.90). The values for x 3 were generated as random draws from a uniform distribution. In terms of fitting the models, the mechanics are similar to those presented earlier. The model fits are presented in Table 6 . When logistic regression breaks The logistic regression fitting procedure can break down, or exhibit lack of convergence. Typically this means that all of the proportions in one of the groups are zero or one; only the former is realistic in the context of SAGE data. This is natural, in that the maximum likelihood point estimate for the group proportion is 0, and inference for β involves the fold change to the proportion in the second group, leading to division by zero. When the proportions are this small, the binomial variability dominates the heterogeneity and the values are completely noninformative with respect to the estimation of overdispersion. We propose a fix that requires the logistic fitting routine to be run three times. To illustrate this procedure, we will use the data from tag ATTTGAGAAG in Table 2 , with the first two tag counts, those from group 1, set to zero. The first run of the fitting procedure serves to estimate the overdispersion parameter. This fit uses just the groups that have nonzero counts, omitting the problematic group(s). Here, this involves fitting a single proportion to the six libraries in group 2.
The fitted proportion is 0.40%, and the overdispersion estimate is φ̂ = 3.71 × 10 -3 . The second run of the fitting procedure takes the overdispersion parameter as given, and fits the data after replacing the zero proportions in a group with the same small nonzero proportion, giving us a hopefully conservative estimate of the fold change. This type of replacement is commonly used, and is most often justified via the assumption of a vague prior distribution for the proportions, with the point estimate being derived as the posterior mean or mode. A common assumption for a prior in dealing with proportions is the uniform distribution. The posterior mean after 0 successes are observed out of n i trials is 1/( n i + 1); with multiple trials, it is 1/((∑ n i ) + 1). This is the value we use. This value is actually quite conservative here, for two reasons. First, the uniform distribution places far too much chance on the possibility of proportions greater than a few percent, which will never be observed with SAGE data. Restricting the distribution to be uniform over the range [0, 0.02] should be more than adequate. Second, the presence of overdispersion means that pooling the samples underemphasizes the evidence of a small proportion being supplied by the zero variance of the observed proportions. While we could pursue a more optimal proportion, we choose in this case to simply use the simplistic bound noted above. Here, as the library sizes in the first group are 49610 and 48479, the proportion is 1/(49610 + 48479 + 1) and the faked counts are 0.506 = 49610/(49610 + 48479) and 0.494, respectively. Some reformatted results from this fit are shown in Table 7 (Model 1). The results for this fit are ridiculously "insignificant". The problem lies in the fact that the use of a t -value (a Wald test) relies on the approximate normality of the likelihood function in the vicinity of the maximum, and this shape assumption breaks down severely if the number of counts in one group is small.
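The zero-count replacement step can be reproduced from the numbers in the text:

```python
# Library sizes of the zero-count group, as given in the text.
n = [49610, 48479]

# Uniform-prior posterior mean proportion after observing 0 successes
# across both libraries.
p_fake = 1.0 / (sum(n) + 1)

# Faked counts n_i * p_fake, which split the single pseudo-success in
# proportion to library size (about 0.506 and 0.494, as in the text).
fake_counts = [ni * p_fake for ni in n]
```

The two faked counts plus the pseudo-success's remainder sum to exactly one success spread across the group, which is what makes the replacement conservative.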
Tests based on changes in the scaled deviance, corresponding to likelihood ratio tests, are better. The third run of the fitting procedure fits a simpler submodel, in this case a single proportion for all eight libraries, using the same overdispersion estimate so as to measure the change in deviance. The results of this fit are shown in Table 7 (Model 2). The analysis of deviance test for significance compares the scaled change in deviance between the two fits with the F 1,5 distribution. Here, we cannot conclude (given the level of overdispersion) that the difference is real. Note that the degrees of freedom used in the denominator is 5; this follows from the fact that only 6 libraries were used to estimate the overdispersion parameter, and one of those 6 degrees of freedom was needed to estimate the proportion. In general, when any of the groups has very small counts, checking the change in deviance is a good idea. Discussion Logistic regression with overdispersion addresses three issues with SAGE data: simultaneously modelling multiple types of variance, dealing with multiple groups at once, and allowing for the incorporation of covariates. This procedure is widely implemented in available software. Further, and most importantly, viewing SAGE data in the logistic regression setting supplies the framework for thinking of models that describe such data. Dealing with multiple types of variance yields significance estimates we believe to be superior to those derived from pooled counts or from t -tests. The regression setting carries with it other benefits, such as a well-developed body of work regarding model checking, residual analysis, and detection of outliers. For example, the influence of any given library tag count on the overall analysis can be assessed, and methods can be made more robust by bounding these influence functions so that no single library drives the results. There are some areas in which we can identify difficulties and see room for improvement. First, the model that we are using for the error may be improved.
For SAGE data, the proportion associated with a specific tag is rarely on the order of a percent, so logit(p_i) ≈ log(p_i) and we can speak of working with the log rather than the logit transform if we prefer. Assuming variance stability on the log scale then leads to the lognormal distribution often assumed in dealing with microarray data. Assuming a lognormal distribution is equivalent to introducing overdispersion in yet another way, namely as a random effect acting on the β scale. Here, the true proportion for library i is of the form logit(p_i) = β_0 + β_1 x_i + ε_i, where ε_i is a normal random variable with mean 0 and variance σ_ε². The model described here is a special case of a generalized linear mixed model (GLMM), where "mixed" refers to the fact that we have both fixed effects of interest, the changes with the covariates, and random shocks whose variance needs to be estimated and allowed for. Williams [21] suggests how this model might be fit using a Taylor-series type expansion, again invoking IRLS. However, as noted in Collett [18], p. 272, "This approach is not entirely satisfactory for fitting such models to binary data, since the estimates can be biased in certain circumstances. Moreover, the deviance calculated for such models is often very approximate and cannot be recommended for general use in comparing alternative models." There are maximum-likelihood-based approaches for fitting GLMMs available in SAS and S-PLUS, but there are known problems with fitting mixed-effects models to binary data with small numbers of clusters or libraries. One way of addressing this issue more precisely is via simulation (for example via BUGS [22]). We are exploring these different error models now. Second, the approach developed above works on one tag at a time. In doing so, it is not exploiting to the fullest the unique features of SAGE data.
Examples of such exploitation include correcting for sequencing errors by looking at neighbors, where sequence similarity is used to define a neighborhood network, and borrowing strength across genes by using common estimation of parameters such as φ over like groups. Work on these issues is ongoing (e.g., Colinge and Feger [23]; N. Blades (2002), unpublished dissertation, Johns Hopkins) and we think these features could be usefully combined with the approach presented here. Methods Data The data used here were initially described in Zhang et al. [2]. The actual numerical libraries used were downloaded from the SAGE Genie web resource introduced by Boon et al. [24, 25]. These libraries have had the linker tags removed. Overdispersed logistic regression Only a cursory description of the approach is given here; more detailed treatments are given in Collett [18] and McCullagh and Nelder [19], among others. We want to fit the observed proportions, p_i = Y_i/n_i, as a function of the covariates X_i. The first step in this process is to specify what form the relationship will take. If the relationship is linear, so that p_i = β_0 + β_1 X_i + ε, then we can potentially get fitted proportions outside of the interval [0, 1], so we typically choose to fit a transformed version of the p_i's as being linear in the covariates. A common choice for proportions is the logistic transformation, logit(p_i) = log(p_i/(1 - p_i)) = β_0 + β_1 X_i + ε. This particular choice is suggested by the form of the likelihood function for binomial data (see McCullagh and Nelder [19], p. 28–32), and we shall take it as assumed here, save to note that while the logit can range over all real values, the corresponding proportions are all between 0 and 1.
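As a small numeric check of the logistic transformation just defined (a Python sketch for illustration; the paper's analyses use R): for proportions of the size seen in SAGE data the logit is essentially a log, and the inverse transform always returns a value in (0, 1).

```python
import math

def logit(p):
    # logit(p) = log(p / (1 - p)); the inverse is 1 / (1 + exp(-x)).
    return math.log(p / (1.0 - p))

p = 0.004  # a typical SAGE tag proportion, well under a percent
x = logit(p)
print(x, math.log(p))  # nearly identical at this scale

# The inverse transform maps the whole real line back into (0, 1).
p_back = 1.0 / (1.0 + math.exp(-x))
```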
At this point we are fitting a straight line to a transformed version of the data; this is akin to standard linear regression, which is fit by minimizing the sum of squared deviations between the observations and their fitted values: the method of least squares. Now, the default assumption in least squares is that all of the observations are known with equal precision, and hence receive equal weight. This is not the case here, as the variance of a proportion is V(p_i) = p_i(1 - p_i)/n_i, so that the precision with which an observation is known depends on both the value of that observation and on the size of the total n_i from which the proportion was derived. When the observations are known with differing precisions, the standard adjustment is to fit a weighted version of least squares, minimizing a weighted sum of the squared differences between the observations and their fitted values, where the weights are inversely proportional to the variances of the observations. Thus, at the first step we fit a logistic curve using weighted least squares where the weights are inversely proportional to the variances associated with our initial estimates of the proportions, (Y_i + 0.5)/(n_i + 1). After this first fit, we now have predicted values for each of the observations, and these predicted values in turn suggest new values for the variances and hence the weights. Thus, the second step is to refit the data using the new weights. This process is iterated (iteratively reweighted least squares, IRLS) until the changes in the predicted values from one fit to the next are small enough that the procedure is said to have converged. Even after the process has converged, it is often the case that the sizes of the squared deviations will be substantially larger than might be expected if the variances were of exactly the form given above.
In this case, the data are said to exhibit overdispersion relative to the postulated model, and we seek to estimate the scale of the overdispersion. We deal with the quasi-likelihood case of overdispersion here, where the variance is really of the form V(p_i) = φ p_i(1 - p_i)/n_i, for φ > 1. The added mechanics for computing the hierarchical form are somewhat involved and we refer the reader to Williams [21] for details. Using the quasi-likelihood model for overdispersion, the actual parameters of the best fitting model will not change, as the weights used in the weighted least squares routine are all proportional to the inverses of the variances, and scaling all of the variances by the same factor leaves the relative sizes of the weights unchanged. What does change is the presumed precision associated with these parameters; the variances of the parameters will likewise be multiplied by φ, and significance tests need to be adjusted accordingly. In order to estimate φ, we return to the weighted squared deviations between observations and predictions noted above. Ideally, the sum of the squared weighted residuals will have a chi-squared distribution with k - p degrees of freedom, where k is the total number of libraries and p is the number of β terms being estimated. As the mean of a chi-squared distribution is equal to its degrees of freedom, we get our initial estimate of φ by dividing the sum of squared weighted residuals by the posited degrees of freedom: the estimate of φ is X²/(k - p), where X² is the sum of the squared weighted (Pearson) residuals. Given the estimated value of φ, the test statistics are divided by √φ and the significances recomputed. In the cases below, we outline the procedure and couple the descriptions with scripts for the freeware package R. In each case, the approach begins by loading the data corresponding to the tag counts Y_i and the library sizes n_i, which are used to supply the observed proportions. The main distinction between the cases resides in how the covariate X values are defined.
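As a compact illustration of the two steps just described, the IRLS fit and the moment estimate of φ, the sketch below works through the two-group tag example in plain Python (an illustrative re-implementation; the paper's own analyses use R's glm). With a single binary covariate the model is saturated per group, so the fitted proportions should match the pooled within-group proportions.

```python
import math

# Tag counts (ATTTGAGAAG) and library sizes from the paper's example;
# x is the group indicator (0 = normal, 1 = cancer).
y = [320, 600, 312, 549, 246, 65, 41, 52]
n = [49610, 48479, 41371, 55700, 60682, 55641, 51294, 61148]
x = [0, 0, 1, 1, 1, 1, 1, 1]

# Iteratively reweighted least squares for logit(p) = b0 + b1 * x.
b0, b1 = 0.0, 0.0
for _ in range(25):  # ample iterations for convergence here
    eta = [b0 + b1 * xi for xi in x]
    p = [1.0 / (1.0 + math.exp(-e)) for e in eta]
    w = [ni * pi * (1.0 - pi) for ni, pi in zip(n, p)]   # binomial weights
    z = [e + (yi - ni * pi) / wi                          # working response
         for e, yi, ni, pi, wi in zip(eta, y, n, p, w)]
    # Closed-form weighted least squares for the two parameters.
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swz = sum(wi * zi for wi, zi in zip(w, z))
    swxz = sum(wi * xi * zi for wi, xi, zi in zip(w, x, z))
    det = sw * swxx - swx * swx
    b0 = (swxx * swz - swx * swxz) / det
    b1 = (sw * swxz - swx * swz) / det

p_hat = [1.0 / (1.0 + math.exp(-(b0 + b1 * xi))) for xi in x]

# Moment estimate of the overdispersion: the sum of squared Pearson
# residuals divided by the residual degrees of freedom (k = 8, p = 2).
x2 = sum((yi - ni * pi) ** 2 / (ni * pi * (1.0 - pi))
         for yi, ni, pi in zip(y, n, p_hat))
phi = x2 / (len(y) - 2)
print(p_hat[0], phi)  # phi is far above 1: heavy overdispersion for this tag
```

Note that this moment estimate is the simple quasi-likelihood version; the paper's fits use the Williams procedure, which inflates the variances by library-size-dependent factors and gives a different numerical value.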
All of the models assume the presence of a constant vector X_0 of all ones; this produces the corresponding estimate for β_0. Our discussion will likewise treat this covariate as present in all modelling steps. Annotated R code

# Source code for models used in the paper
# "Overdispersed Logistic Regression for
# SAGE: Modelling Multiple Groups and
# Covariates", by Baggerly et al.

##########################################
# First, we deal with the case of two
# groups, and introduce the methods for
# fitting the logistic regression models.
##########################################
if(0){
  # Load the tag counts for ATTTGAGAAG (y)
  # from the 8 libraries in Zhang et al. [2],
  # the associated library sizes (n), and the
  # covariate vector indicating which of two
  # groups the libraries belong to, normal
  # or cancer (x).
  y <- c(320, 600, 312, 549, 246, 65, 41, 52);
  n <- c(49610, 48479, 41371, 55700, 60682, 55641, 51294, 61148);
  x <- c(0, 0, 1, 1, 1, 1, 1, 1);

  # Now fit a standard logistic regression
  # model to the data, with no allowance for
  # overdispersion. This is done through a
  # call to the generalized linear model
  # (glm) routine; help(glm) provides more
  # information about the nature of the
  # arguments here.
  fit1 <- glm(cbind(y, n - y) ~ x, family = binomial);
  # check the results
  summary(fit1);

  # Next, we refit the model while allowing
  # for overdispersion of the quasi-
  # likelihood type; all variances are
  # inflated by a common factor. This call
  # differs from the first only in the
  # definition of the glm "family" to be
  # used.
  fit2 <- glm(cbind(y, n - y) ~ x, family = quasibinomial);
  # check the results
  summary(fit2);

  # Ideally, the sum of the squared Pearson
  # residuals should have a chi-squared
  # distribution, with mean equal to its
  # degrees of freedom. Dividing the sum by
  # the degrees of freedom gives our initial
  # estimate of the overdispersion
  # parameter.
  varQL <- sum(residuals(fit2, "pearson")^2)/fit2$df.residual;

  # Finally, we refit the model using the
  # overdispersion method suggested by
  # Williams [21], where the variances are
  # inflated by factors that are slightly
  # different depending on the underlying
  # library sizes. This routine is
  # implemented in the R package "dispmod",
  # which is available at
  #
  library("dispmod");
  fit3 <- glm.binomial.disp(fit1);
  # check the results
  summary(fit3);
  phi <- fit3$dispersion;

  # Note that the reported p-values from
  # this fit are incorrect. This is due to
  # the assumption that the test statistics
  # have normal distributions, even though
  # we have had to estimate the
  # overdispersion parameter. When we have
  # to perform this estimation, the correct
  # test is a t-test, with a number of
  # degrees of freedom corresponding to the
  # number of libraries less the number of
  # estimated parameters. As the number of
  # libraries is typically not large, this
  # can create a large difference.
  sumfit3 <- summary(fit3);
  t.values <- summary(fit3)$coefficients[, "z value"];
  p.values <- 2 * pt(-abs(t.values), fit3$df.residual);
}

##########################################
# Next, we deal with three groups
##########################################
if(0){
  # We begin by focusing on gains available
  # when multiple groups are present, even
  # if the other groups are not directly
  # part of the contrast of interest, due to
  # the additional information that the
  # added groups can provide about the scale
  # of the overdispersion.

  # Here, we use the data from the tag
  # TGCTGCCTGT, and this time we note that
  # there are 3 groups of libraries:
  # normals (libraries 1-2), primary tumors
  # (libraries 3-4), and cell lines
  # (libraries 5-8). If we are interested in
  # the contrast between normals and primary
  # tumors, we can fit this using only the
  # data from those two groups, or using the
  # data from all three.

  # First, fit the model as if there were
  # just two groups present.
  y <- c(0, 1, 1, 15);
  n <- c(49610, 48479, 41371, 55700);
  x <- c(0, 0, 1, 1);
  fit1 <- glm(cbind(y, n - y) ~ x, family = binomial);
  fit2 <- glm.binomial.disp(fit1);
  # get the correct p-values
  fit2.t.values <- summary(fit2)$coefficients[, "z value"];
  fit2.p.values <- 2 * pt(-abs(fit2.t.values), fit2$df.residual);

  # Next, fit the model assuming that there
  # are three groups. In this case, we
  # cannot use a single covariate vector x,
  # as this is not suited to indicating 3 or
  # more groups in an unordered fashion
  # (using 0, 1, and 2 for the three groups
  # respectively would force an ordering by
  # saying that primary tumors are
  # intermediate betwixt normal samples and
  # cell lines). In general, if we have k
  # groups, we need to use k-1 covariate
  # vectors. Here, we use
  #   x1 <- c(0, 0, 1, 1, 0, 0, 0, 0);
  #   x2 <- c(0, 0, 0, 0, 1, 1, 1, 1);
  # The set of all 0s (x1 = 0, x2 = 0)
  # corresponds to the first group, here the
  # normals, and the other groups are
  # defined by which one of the other
  # covariates is nonzero:
  #   Group 2 (primaries),  (x1 = 1, x2 = 0)
  #   Group 3 (cell lines), (x1 = 0, x2 = 1)
  y <- c(0, 1, 1, 15, 9, 1, 12, 27);
  n <- c(49610, 48479, 41371, 55700, 60682, 55641, 51294, 61148);
  x1 <- c(0, 0, 1, 1, 0, 0, 0, 0);
  x2 <- c(0, 0, 0, 0, 1, 1, 1, 1);
  fit3 <- glm(cbind(y, n - y) ~ x1 + x2, family = binomial);
  fit4 <- glm.binomial.disp(fit3);
  # get the correct p-values
  fit4.t.values <- summary(fit4)$coefficients[, "z value"];
  fit4.p.values <- 2 * pt(-abs(fit4.t.values), fit4$df.residual);

  # The above approach has fit the model
  # with all of the covariates available,
  # but in order to perform an analysis of
  # deviance we want to fit various
  # submodels using the same estimate of
  # overdispersion as found here.
  # In this case, there are 3 submodels:
  fit5 <- glm(cbind(y, n - y) ~ x1, family = binomial, weights = fit4$disp.weights);
  fit6 <- glm(cbind(y, n - y) ~ x2, family = binomial, weights = fit4$disp.weights);
  fit7 <- glm(cbind(y, n - y) ~ 1, family = binomial, weights = fit4$disp.weights);

  # Alternatively, the anova function can be
  # used, but this only considers the
  # submodels obtained by adding terms
  # sequentially. Thus, we get the deviances
  # for beta_0 (the null model), beta_0 +
  # beta_1 (adding the x1 covariate only),
  # and beta_0 + beta_1 + beta_2 (adding the
  # x2 covariate to what we already have).
  fit4.anodev <- anova(fit4);
}

##########################################
# Next, we deal with the case of other
# covariates, possibly continuous.
##########################################
if(0){
  # Here, we are using the counts from the
  # GCGAAACCCT tag, but we are treating the
  # 8 libraries as coming from tissue type 1
  # (libraries 1-4) and tissue type 2
  # (libraries 5-8), with normal tissue of
  # both types (libraries 1-2, 5-6) and
  # primary tumor of both types (libraries
  # 3-4, 7-8). In this hypothetical example,
  # we are able to partition the changes
  # into effects associated with
  # normal/primary differences (x1) or
  # tissue 1/tissue 2 differences (x2).
  y <- c(167, 566, 64, 98, 33, 47, 40, 27);
  n <- c(49610, 48479, 41371, 55700, 60682, 55641, 51294, 61148);
  x1 <- c(0, 0, 1, 1, 0, 0, 1, 1);
  x2 <- c(0, 0, 0, 0, 1, 1, 1, 1);
  fit1 <- glm(cbind(y, n - y) ~ x1 + x2, family = binomial);
  fit2 <- glm.binomial.disp(fit1);
  # get the correct p-values
  fit2.t.values <- summary(fit2)$coefficients[, "z value"];
  fit2.p.values <- 2 * pt(-abs(fit2.t.values), fit2$df.residual);

  # Next, again using the tag as above, we
  # posit that we also have access to the
  # levels of a biomarker potentially
  # predictive of survival, supplied as the
  # levels of another covariate x3.
  # The values supplied here were generated
  # as random draws from a uniform (0,1)
  # distribution.
  x3 <- c(0.89, 0.35, 0.66, 0.23, 0.30, 0.54, 0.90, 0.90);
  fit3 <- glm(cbind(y, n - y) ~ x1 + x2 + x3, family = binomial);
  fit4 <- glm.binomial.disp(fit3);
  # get the correct p-values
  fit4.t.values <- summary(fit4)$coefficients[, "z value"];
  fit4.p.values <- 2 * pt(-abs(fit4.t.values), fit4$df.residual);
}

Authors' contributions KAB, LD and JSM developed the main ideas and the methodology; LD did most of the coding. CMA supplied SAGE data and provided practical feedback on aspects of earlier approaches found to be wanting, thus guiding further development. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC524524.xml
521689 | Safety assessment of inhaled xylitol in mice and healthy volunteers | Background Xylitol is a 5-carbon sugar that can lower the airway surface salt concentration, thus enhancing innate immunity. We tested the safety and tolerability of aerosolized iso-osmotic xylitol in mice and human volunteers. Methods This was a prospective cohort study of C57BL/6 mice in an animal laboratory and healthy human volunteers at the clinical research center of a university hospital. Mice underwent a baseline methacholine challenge, exposure to either aerosolized saline or xylitol (5% solution) for 150 minutes, and then a follow-up methacholine challenge. The saline and xylitol exposures were repeated after eosinophilic airway inflammation was induced by sensitization and inhalational challenge with ovalbumin. Normal human volunteers underwent exposures to aerosolized saline (10 ml) and xylitol, with spirometry performed at baseline and after inhalation of 1, 5, and 10 ml. Serum osmolarity and electrolytes were measured at baseline and after the last exposure. A respiratory symptom questionnaire was administered at baseline, after the last exposure, and five days after exposure. In another group of normal volunteers, bronchoalveolar lavage (BAL) was done 20 minutes and 3 hours after aerosolized xylitol exposure to measure levels of inflammatory markers. Results In naïve mice, methacholine responsiveness was unchanged after exposure to xylitol compared to inhaled saline (p = 0.49). There was no significant increase in Penh in antigen-challenged mice after xylitol exposure (p = 0.38). There was no change in airway cellular response after xylitol exposure in naïve or antigen-challenged mice. In normal volunteers, there was no change in FEV1 after xylitol exposures compared with either baseline or normal saline exposure (p = 0.19). Safety laboratory values were also unchanged.
The only adverse effect was a stuffy nose, reported by half of the subjects during the 10 ml xylitol exposure; it promptly resolved after the exposure was complete. BAL cytokine levels were below the detection limits after xylitol exposure in normal volunteers. Conclusions Inhalation of aerosolized iso-osmotic xylitol was well tolerated by naïve and atopic mice, and by healthy human volunteers. | Background Human airway surface is covered by a thin layer of liquid (airway surface liquid [ASL]) that contains many antimicrobial substances, including lysozyme, lactoferrin, human β-defensins, and the cathelicidin LL-37 [1-4]. The antibacterial activity of most of these innate immune mediators is salt-sensitive; an increase in salt concentration inhibits their activity [5]. An equally interesting feature of these antimicrobial factors is that their activity is increased by low ionic strength [6-9]. Lowering the ASL salt concentration might therefore increase the efficacy of the innate immune system and thereby decrease or prevent airway infections. The airway epithelium is water-permeable [10]. When large volumes of ionic, isotonic liquid are placed on the apical surface, active salt and liquid absorption occurs [11, 12]. If water were added to the airway surface, the salt concentration would quickly return to starting values. Thus, lowering of the ASL salt concentration is best accomplished using a nonionic osmolyte with low transepithelial permeability. The osmolyte should not provide a ready carbon source for bacteria, and should be safe in humans. One such promising osmolyte is xylitol, a five-carbon sugar that has low transepithelial permeability, is poorly metabolized by bacteria, and can lower the salt concentration of both cystic fibrosis (CF) and non-CF epithelia in vitro [13].
Xylitol is an artificial sweetener that has been used successfully in chewing gum to prevent dental caries [14, 15]; it has been used as an oral sugar substitute without significant adverse effects [16]. It has also been used in lozenges and syrup and has been shown to decrease the incidence of acute otitis media by 20–40% [17]; nasal application in normal human subjects was found to decrease colonization with coagulase-negative staphylococci [13]. There are no studies, to our knowledge, examining the effects of inhalation of aerosolized xylitol by experimental animals or humans. Osmotic agents such as hypertonic saline, which is ionic, and the nonionic agents mannitol, dextran, and lactose have been used in human subjects to increase mucus clearance [18-23]. However, some of these agents can serve as a carbon source for bacteria, and their tonicity can cause bronchospasm. Nebulization of distilled water has been shown to increase airway resistance significantly in asthmatic subjects, leading to its subsequent use as a bronchoprovocative agent [24-26]. Both hypotonic and hypertonic saline solutions can provoke bronchospasm (a 20% drop in forced expiratory volume in 1 second, FEV1) in asthmatic subjects but not in normal volunteers [26]. Furthermore, inhalation of 20% dextrose in the same study produced bronchospasm similar to exposure to water or hypertonic saline, raising the possibility that the osmolarity of the solution is the important determinant of bronchial reactivity. In subjects with bronchiectasis, inhalation of dry powdered mannitol can increase the clearance of mucus without affecting lung function [27]. However, in a different study of subjects with CF, inhaled mannitol caused a small but significant decline in FEV1 (7.3%, p = 0.004) from baseline immediately after inhalation, which returned to baseline by the end of the study [28]. We hypothesized that aerosolized iso-osmolar xylitol is safe and well tolerated by normal subjects.
We compared the safety and tolerability of aerosolized xylitol with normal saline, and carried out additional exposure studies using mice. Methods Safety in normal mice All experiments were reviewed and approved by the animal care and use committee of the University of Iowa. Except during exposures and evaluation, mice were allowed access to food and water ad libitum. C57BL/6 mice (Jackson Lab, Bar Harbor, ME) underwent a baseline methacholine challenge test using a whole-body plethysmograph (Buxco Electronics, Troy, NY) as previously described [29]. Respiratory pattern changes were expressed as enhanced respiratory pause (Penh), which correlates with changes in airway resistance. Penh was calculated as follows: Penh = ([Te/(0.3 × Tr)] - 1) × [2Pef/(3Pif)], where Penh equals enhanced pause, Te equals expiratory time (in seconds), Tr equals relaxation time (in seconds), Pef equals peak expiratory flow (in milliliters per second), and Pif equals peak inspiratory flow (in milliliters per second). Mice (6/group) were exposed to aerosolized saline (0.9% NaCl) or aerosolized xylitol (5% solution in water, equimolar to the NaCl) for 150 minutes in an exposure chamber; all mice were evaluated for bronchial hyperreactivity to inhaled methacholine (using the Buxco whole-body plethysmography system) before and after the exposures; other mice were monitored periodically during exposure by whole-body plethysmography. All mice underwent whole lung lavage the next day for cell count and differential. After euthanasia, the trachea was cannulated, and the lungs were lavaged with 3.0 mL of sterile normal saline (0.9% NaCl). The lavage samples were immediately processed for total and differential (with Diff Quick stain; Baxter Scientific, Miami, FL) cell counts.
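The Penh formula above translates directly into a one-line function. The sketch below is a Python rendering with made-up, illustrative waveform values; it is not part of the Buxco system's software:

```python
def penh(te, tr, pef, pif):
    """Enhanced pause: Penh = (Te/(0.3*Tr) - 1) * (2*Pef/(3*Pif)).

    te: expiratory time (s), tr: relaxation time (s),
    pef: peak expiratory flow (ml/s), pif: peak inspiratory flow (ml/s).
    """
    return (te / (0.3 * tr) - 1.0) * (2.0 * pef) / (3.0 * pif)

# Hypothetical example values, for illustration only:
print(round(penh(0.45, 1.0, 3.0, 2.0), 2))  # → 0.5
```

Note that when expiratory time equals 0.3 of the relaxation time, the "pause" term vanishes and Penh is zero; longer expirations relative to relaxation time drive Penh up.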
In a separate group of naïve mice, whole-body plethysmography was used to monitor Penh, respiratory rate, and tidal volume periodically during exposure to xylitol and saline for 10, 20, 40, and 80 minutes, for a cumulative total exposure of 150 minutes. Safety in hypersensitive mice We repeated the saline and xylitol exposure protocol in two more groups of six mice each after they were sensitized to and challenged with an antigen [30]. Mice were sensitized to OVA (10 μg with 1 mg alum, i.p.) on days 0 and 7, then challenged with aerosolized OVA (1% solution, 30 minutes) on days 14 and 16. Filtered air was passed at 6 L/min through an Aero-Tech nebulizer (CIS-US Inc) to generate an aerosol. The size distribution of the aerosol was determined using a particle counter (Aerodynamic Particle Sizer, TSI Incorporated). The aerosol sizes were distributed log-normally with a count median aerodynamic diameter of 0.82 microns and a geometric standard deviation (GSD) of 1.46. A mean OVA concentration of 3.8 ng/ml was measured in the chamber during the exposures. The mice underwent a baseline methacholine challenge on day 17 and subsequently underwent exposures to saline and xylitol using the same protocol described for the naïve mice. Three mice per group underwent whole lung lavage 24 hours after exposure for cell count and differential. Given the concerns that have been raised about the reliability of airway resistance measurement by Buxco equipment, in a select number of mice we confirmed airway hyperresponsiveness using invasive measurement. Airway responsiveness was measured 24 hours after xylitol exposure in OVA-challenged mice and compared to measurements made on naïve mice and OVA-challenged mice without any exposure. Mice were anesthetized with ketamine at 90 mg/kg and pentobarbital at 50 mg/kg and attached to a small-animal ventilator (Flexivent, SCIREQ). Animals were ventilated at 150 breaths/min.
Positive end-expiratory pressure (PEEP) was maintained between 2–3 cmH2O, with the computer setting the tidal volume from the entered weight of each animal. Central airway resistance (R) was measured at baseline and after 10 seconds of nebulized methacholine at doses of 12.5, 25, and 50 mg/ml. Safety in normal volunteers The study was approved by the University of Iowa Institutional Review Board as well as the Food and Drug Administration. Since this was a pilot study and the first time xylitol was being used as an aerosol, there was no information available on expected complications. Ten subjects aged 18 or greater were studied. Pregnancy or any chronic medical condition such as asthma, atopy, or diabetes was grounds for exclusion. After giving written informed consent, subjects underwent a screening spirometry (all subjects demonstrated FEV1 >85% of predicted). Baseline measurements of serum electrolytes, and serum and urine osmolarity, were carried out. Baseline oxygen saturation was measured using a pulse oximeter. A brief questionnaire of respiratory symptoms, developed using a visual analog scale (VAS), was administered at baseline [31, 32]. Human exposures Subjects received 10 ml of aerosolized saline (generated using a Pari LC Plus nebulizer with Proneb Ultra compressor system, Pari Inc, Monterey, CA) [33]. The particle size of the aerosol was measured using both a 7-stage cascade impactor (Mercer, Inc., Albuquerque, NM) and an Aerosol Monitor (Grimm Technologies, Inc.). The mass median aerodynamic diameter of the aerosol was 1.63 microns with a GSD of 1.71. Mean breathing times for the exposures were as follows: normal saline, 37 min (range 22–49); 1 ml xylitol, 4.2 min (range 2–10); 5 ml xylitol, 22 min (range 15–33); 10 ml xylitol, 36 min (range 30–49). Thirty minutes after the exposures, subjects completed a follow-up questionnaire and underwent spirometry and O2 saturation measurement.
The procedure was repeated after exposure to 1, 5, and 10 ml of 5% xylitol (Danisco Cultor, USA). The xylitol solution was prepared by adding 5 g of crystalline xylitol to every 100 ml of sterile water (Abbott Laboratories, IL). The solution was sterilized using FDA-approved techniques, and its osmolarity was confirmed to be 292 mOsm using a 5500 vapor pressure osmometer (Wescor, Inc., Logan, UT). After completing the exposures, repeat blood and urine tests for electrolytes and osmolarity were carried out. Finally, subjects repeated the symptom questionnaire five days after the first visit, over the telephone. The pre-established criterion for discontinuing study participation was a decline in FEV1 of greater than 20% from baseline. Measurement of lung function Spirometry was performed using a Vmax V6200 Autobox (Sensor Medics Corp., Yorba Linda, CA), according to guidelines published by the American Thoracic Society [34]. The spirometer was calibrated prior to each visit. Spirometry was performed on seated subjects wearing nose clips. Respiratory symptom score Symptoms were assessed at baseline and after each exposure. Subjects scored chest tightness, shortness of breath, cough, headache, chills, muscle soreness, phlegm, nausea, stuffy nose, sneezing, and fatigue on a visual analog scale from 0–10 cm (0 being symptom-free and 10 an extreme amount) [31, 32]. Bronchoscopy and bronchoalveolar lavage (BAL) We also examined the effect of aerosolized xylitol on markers of inflammation in the airways. A separate group of subjects underwent bronchoscopy and bronchoalveolar lavage (BAL) according to American Thoracic Society standards at 30 minutes (n = 6) and 3 hours (n = 5) after exposure to 10 ml of aerosolized iso-osmolar xylitol [35]. BAL was performed by instilling two 20-ml aliquots of sterile normal saline into the lingula. The second aspirate was used for cytokine measurements.
BAL fluid was filtered through two layers of sterile gauze to remove mucus and centrifuged for 10 minutes at 1500 rpm to separate cells. The cell pellet was washed twice in Hank's Balanced Salt Solution without Ca++ and Mg++ and suspended in complete medium, Roswell Park Memorial Institute (RPMI) tissue culture medium (Gibco/BRL, Gaithersburg, MD). Differential cell counts were determined on cytospin (Shandon, Pittsburgh, PA) slide preparations using Wright-Giemsa stain. The cell-free fluid was frozen at -70°C until required for cytokine assay. Cytokine measurements were performed using enzyme-linked immunosorbent assays for IL-6 and LTC4. IL-6 levels were determined with a Quantikine Human IL-6 ELISA kit (R&D Systems; Minneapolis, MN); the limit of detection for IL-6 is 0.70 pg/ml. LTC4 (leukotriene C4) levels were determined with a leukotriene C4 EIA kit (Cayman Chemical; Ann Arbor, MI); the limit of detection for LTC4 is 10 pg/ml. LTC4 in the BAL samples was extracted and concentrated with Cysteinyl-Leukotriene Affinity Sorbent (Cayman Chemical; Ann Arbor, MI). Statistical analysis We studied ten subjects with a gradual increase in exposure dose in this pilot safety study. Differences were analyzed using the t-test, the Wilcoxon signed rank test, and one-way and two-way repeated measures analysis of variance (ANOVA), as indicated. Ninety-five percent confidence intervals were calculated where appropriate. All analyses were performed using SAS version 8.2 (SAS Institute, NC) at a 5% significance level. Results Safety in mice Mice tolerated the exposures well without any visible distress. The nebulized volume corresponding to the 150-minute exposure was approximately 45 ml. Among naïve mice, exposure to xylitol resulted in no significant change in bronchial hyperresponsiveness compared to saline (Figure 1; n = 6/group; p = ns at baseline and at all concentrations of methacholine).
A similar lack of difference between the saline- and xylitol-exposed mice was noted in their tidal volume and respiratory frequency responses (data not shown). In a separate group of naïve mice that underwent Penh measurements periodically during exposure to saline or xylitol, no significant change was seen in Penh (Figure 2). We carried out similar studies on mice that had been sensitized to, and challenged with, ovalbumin, a common murine model of asthma. No significant changes in methacholine responsiveness were observed (data not shown). Figure 3 shows airway resistance measured invasively using the Flexivent system in naïve mice, OVA-sensitized/OVA-challenged mice after saline exposure, and OVA-sensitized/OVA-challenged mice after xylitol exposure. Figure 1 Effect of saline and xylitol exposure on methacholine responsiveness in naïve mice (n = 6/group). Panel A reflects methacholine responsiveness before and after saline exposure. Panel B reflects methacholine responsiveness before and after xylitol exposure. Error bars = SD. P-values of all comparisons are non-significant. Figure 2 Effect of saline vs. xylitol exposure on Penh of naïve C57BL/6 mice (n = 6). The figure shows mean Penh values for mice exposed to saline (circles) and xylitol (squares). Error bars = SD. p = 0.21. Figure 3 Invasive airway resistance measurement in response to methacholine challenge in naïve and ova-challenged C57BL/6 mice (n = 2/group) using the Flexivent system. The figure shows mean airway resistance for naïve mice (squares) and ova-challenged mice (triangles). Whole lung lavage showed no significant differences in lavage fluid cell count and differential due to xylitol exposure. Naïve mice exposed to saline or xylitol demonstrated, as expected, a macrophage-predominant response. In contrast, OVA-sensitized/-challenged mice were characterized by airway eosinophilia in both the saline- and xylitol-exposed groups (Table 1).
In summary, aerosolized xylitol was well tolerated by naïve and hypersensitive mice with no significant effects on the airway physiology or composition of airway inflammatory cells. Table 1 Whole Lung Lavage Cell Count and Differential in Naïve and Ova-challenged Mice Experimental Group Total Cell Count (×10 6 ) Mean (SD) Differential Count (%) Macrophages Lymphocytes Neutrophils Eosinophils Naïve mice-saline 0.26 (0.8) 99.6 0.17 0.17 0.0 Naïve mice-xylitol 0.25 (0.7) 99.0 0.34 0.0 0.66 Ova-challenged mice – saline exposed 0.96 (0.1) 20.0 3.6 14.0 62.2 Ova-challenged mice – xylitol exposed 0.78 (0.08) 21.3 9.0 9.0 61.0 Safety in human volunteers Table 2 shows the baseline characteristics of the ten subjects who underwent graded exposure to aerosolized xylitol as a part of the pilot study. Mean age was 29.1 yrs, and equal numbers of males and females were studied. None of the subjects dropped their FEV1 by ≥ 20%. The mean baseline FEV1 was 92% predicted (SD = 6.9% predicted). There was no significant change in FEV1 % predicted after any exposure in comparison with baseline (Figure 4 ). Table 2 Baseline Characteristics in Normal Volunteers Subject No. Age Years Gender M/F Ethnicity Baseline FEV1 (% predicted) 1 41 F Caucasian 92 2 34 M Caucasian 85 3 48 M African American 87 4 22 M Caucasian 106 5 25 M Asian 95 6 20 F Asian 85 7 22 M Caucasian 91 8 20 F Caucasian 86 9 28 F Caucasian 100 10 31 F Caucasian 89 Mean 29 92 SD 9.5 6.9 Figure 4 Effect of exposure to nebulized saline and xylitol on spirometry in normal volunteers (n = 10). The figure shows mean FEV1 (% predicted) at baseline, after exposure to saline (10 ml), and xylitol (1, 5, and 10 ml). Error bars = SD. p = 0.19. As shown in Table 3 , xylitol exposure did not induce any significant change in electrolytes and osmolarity. No changes in vital signs or oxygen saturation were noted throughout the study. 
The most common symptom reported was stuffy nose after xylitol exposure, which occurred in five (50%) subjects after the 10 ml dose (Table 4 ). The mean VAS score among the five subjects for stuffy nose was 3.5 cm. This symptom resolved within minutes after exposure was complete. Other less frequent side effects reported include cough by two subjects (mean VAS score, 0.5), chest tightness by two subjects (mean VAS score, 1.0), and phlegm production by three subjects (mean VAS score, 1.5). All of these symptoms had resolved by day five of telephone follow-up. One subject noted hiccups halfway through the final xylitol exposure, which resolved soon after the exposure was complete. Table 3 Laboratory Results pre and post Xylitol Exposure (n = 10) Serum test Baseline Mean ± (SD) After 10 ml xylitol Mean ± (SD) p value Glucose, mg/dL 89 (3.8) 89 (9.1) 0.98 Osmolarity, mOsm/kg 292 (5.2) 292 (3.9) 0.98 Sodium, mEq/L 141 (1.4) 141 (2.6) 0.75 Bicarbonate, mEq/L 25 (1.2) 24 (1.9) 0.41 Anion gap, mEq/L 13 (1.2) 13 (1.2) 0.69 Table 4 Adverse Events Score (centimeters, mean ± SD) using Visual Analog Scale (1–10)* Symptom Baseline VAS score Change Post-saline Change Post-10 ml xylitol Change on day 5 follow-up Chest tightness 0 0 0.2 ± 0.4 0 Shortness of breath 0 0 0 0 Cough 0.25 ± 0.8 0.05 ± 0.15 0 0 Headache 0 0 0.2 ± 0.6 0 Chills 0 0 0 0 Muscle soreness 0.2 ± 0.6 0 0 -0.2 ± 0.6 Phlegm 0.2 ± 0.6 0 0.25 ± 0.4 0 Nausea 0 0 0 0 Stuffy/Runny Nose 0 0 0.65 ± 0.9† 0 Sneezing 0 0 0 0 Fatigue 0.1 ± 0.3 0 -0.1 ± 0.3 0 *P-values of all changes from baseline are >0.05 except for stuffy nose after xylitol exposure. † P-value = 0.03. An additional 11 subjects underwent bronchoscopy and bronchoalveolar lavage following xylitol inhalation. The mean cell count in the BAL fluid at 20 minutes (n = 6) and 3 hours (n = 5) after xylitol exposure was 1.2 ± 0.07 million cells/ml and 2.94 ± 1.48 million cells/ml respectively. All cell preparations had between 95–100% alveolar macrophages. 
BAL IL-6 and LTC-4 levels after xylitol exposure were below 0.70 pg/ml and 10 pg/ml respectively at all time points. Discussion Lower respiratory tract colonization is an important step in the pathogenesis of pulmonary manifestations of chronic diseases such as CF and dyskinetic cilia syndrome and certain acute clinical entities such as ventilator-associated pneumonia. There is a continuing need for simple, cost-effective, and safe interventions to decrease colonization of the lower airways. Studies have shown that lowering the salt concentration of airway surface liquid can enhance innate immunity by increasing the potency of the natural antimicrobial peptides. In addition to increasing the activity of individual ASL factors, lowering the NaCl concentration also independently enhances synergistic interactions [ 36 ]. Thus, lowering the salt concentration could improve the antimicrobial activity of the ASL in two ways: increasing the individual action of the factors and augmenting synergism between them. This double effect could amplify the impact of relatively modest reductions in salt concentrations. The mechanism of this low-salt augmentation of killing remains unclear. The most popular hypothesis is that in low salt concentrations, charged particles become less shielded, increasing the interaction between the cationic proteins and the negatively charged bacteria [ 6 , 7 , 37 , 38 ]. Irrespective of the mechanism, this effect suggests a therapeutic strategy: lowering ASL salt concentrations should enhance bacterial killing. Xylitol, when applied to airways as an iso-osmolar agent, can potentially lower airway salt concentration and therefore lower bacterial colonization in chronic infections. In addition to having low transepithelial permeability, it has the added advantage of being poorly metabolized by bacteria. In recent years, many osmotic agents have been aerosolized into human airways for mucus clearance. 
However, there are reports of bronchospasm associated with their use. This is the first study to our knowledge to use xylitol in an aerosolized form. The main adverse effect reported from oral xylitol use was diarrhea when the dose exceeded 40–50 gm/day [ 39 ]. Intravenous xylitol has also been used in parenteral nutrition in the post-operative period for many decades. There have been no major changes in serum electrolytes with xylitol infusion [ 40 ]. Parenteral xylitol can cause minimal hyperuricemia, but without any pathophysiological consequences [ 41 ]. Though tolerated well in modest doses, large doses of xylitol administered intravenously have been reported to cause renocerebral oxalosis, with renal failure [ 42 - 45 ]. Before xylitol use in humans for prevention or reduction of airway colonization can be attempted, animal studies on safety as well as studies on healthy volunteers are required. We calculated the amount of xylitol to be delivered to the airway surface of an adult. Mercer, et al . [ 46 ] measured a total surface area from trachea to bronchioles of 2,471 cm 2 . The depth of ASL may vary from the trachea to the small bronchioles; if an average depth of 10 μm is estimated, the total ASL volume would be ~2.5 mL. Thus, if we assume a uniform aerosol distribution, administration of a total volume of 2.5 mL of 300 mM xylitol to the airways would be expected to cut the salt concentration in half simply by a dilutional effect. If the mean ASL depth were 20 μm, then 5 mL of delivered solution would be required. Because the solution is iso-osmotic, immediate, major osmotic shifts of water across the epithelium should not occur, so the dilution of the salt concentration persists. Moreover, with time, the volume and salt concentration may decrease due to Na + -dependent salt absorption, the osmotic effects of which are counterbalanced by xylitol in the ASL [ 13 ]. 
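The ASL dilution arithmetic above, together with the mouse-dosing chain derived in the next paragraph, can be reproduced in a few lines. This is an illustrative sketch using only the figures quoted in the article; the function and variable names are our own.

```python
# Sketch of the airway surface liquid (ASL) dosing arithmetic described in the
# text. All numbers come from the article; the names here are illustrative.

def asl_volume_ml(surface_area_cm2, depth_um):
    """ASL volume (mL) = surface area (cm^2) x depth converted to cm."""
    return surface_area_cm2 * depth_um * 1e-4  # 1 um = 1e-4 cm; 1 cm^3 = 1 mL

# Human: 2,471 cm^2 of airway surface at an assumed mean ASL depth of 10 um.
# Delivering an equal volume of iso-osmotic xylitol halves the salt
# concentration by dilution; at a 20 um depth, ~5 mL would be needed instead.
human_asl_ml = asl_volume_ml(2471, 10)  # ~2.5 mL

# Mouse: scale the rat airway surface (27.2 cm^2) down 12-fold by body weight,
# then correct for ~10% lung retention and ~5% nebulizer-to-chamber delivery.
mouse_asl_ul = asl_volume_ml(27.2 / 12, 10) * 1000  # mL -> uL, ~2.25 uL
inhalable_ul = mouse_asl_ul / 0.10                  # ~22.5 uL must be aerosolized
nebulized_ul = inhalable_ul / 0.05                  # ~450 uL must be nebulized
```

The cumulative 84 ml exposure reported for the mice is then roughly 84,000 / 450 ≈ 187 times this calculated dose, matching the "2-log increase" quoted in the text.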
Our preliminary calculations for dosing for mice experiments were derived as follows: Mercer, et al . [ 46 ] also estimated the total airway surface area in rats, which was 27.2 cm 2 . Assuming an average depth of 10 μm, the total ASL volume would be ~27 μl. For a mouse, given an average weight of 25 gm, which is 1/12th of the weight of a rat, the ASL volume is approximately 2.25 μl. For a 50% dilution we have to deliver 2.25 μl of xylitol solution. Mice have an approximate 10% lung retention rate for the particle size we generated [ 47 ], which will require us to aerosolize 22.5 μl of xylitol. However, we do not have data on the airborne concentration of xylitol to which the mice were exposed. For the generation and exposure system employed, a reasonable approximation is that 5% of the solution nebulized into the mixing chamber was available for inhalation in the exposure chamber. Thus, we would need to deliver approximately 450 μl of xylitol solution to provide the desired 50% dilution of ASL. We exposed both normal and hypersensitive mice to a cumulative volume of 84 ml of iso-osmotic xylitol, which is at least a 2-log increase (187×) over the proposed dose. There was no significant change in airway resistance or in bronchial hyperresponsiveness after xylitol exposure in naïve or hypersensitive mice. This study shows that aerosolization of iso-osmotic xylitol is likely to be safe and well tolerated by human volunteers. There was no change in spirometry, laboratory test results, or BAL cytokine levels after xylitol exposure. Earlier studies have reported bronchial hyperresponsiveness with aerosolization of hypotonic and hypertonic solutions. Thus, aerosolization of iso-osmotic xylitol could be tested for prevention and treatment of airway colonization. There are several potential limitations with this study. The validity of body plethysmography as a measure of respiratory physiology in mice has been recently questioned [ 48 , 49 ]. 
However, several studies have shown good correlation between airway inflammation and changes in Penh [ 50 - 52 ]. Since the human study is a true pilot study, we did not have preliminary data on adverse events for the aerosolized route on which to base our sample size calculation; given its relatively small size, we do not have the power to detect rare complications. Our human study was unblinded due to the sweet taste of xylitol, which all the subjects experienced. However, our main outcome, FEV1, is unlikely to be biased by knowledge of the exposure. Finally, this was a brief exposure study. Inhalational toxicology studies of long-term exposure in animals, examining histopathology and laboratory data in addition to pulmonary function testing, are required before clinical use can be instituted. Conclusions In summary, our data indicate that iso-osmotic xylitol can be safely delivered by aerosol to normal volunteers. Studies of safety with long-term exposure in animals are required before human use can be attempted. This could lead to exciting interventions to enhance the innate immunity of airway epithelia. Abbreviations ANOVA Analysis of Variance ASL Airway Surface Liquid CF Cystic Fibrosis FEV1 Forced Expiratory Volume in 1 second GSD Geometric Standard Deviation Penh Enhanced Pause VAS Visual Analog Scale BAL Bronchoalveolar Lavage | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC521689.xml 
314469 | Protein Interaction Networks by Proteome Peptide Scanning | A substantial proportion of protein interactions relies on small domains binding to short peptides in the partner proteins. Many of these interactions are relatively low affinity and transient, and they impact on signal transduction. However, neither the number of potential interactions mediated by each domain nor the degree of promiscuity at a whole proteome level has been investigated. We have used a combination of phage display and SPOT synthesis to discover all the peptides in the yeast proteome that have the potential to bind to eight SH3 domains. We first identified the peptides that match a relaxed consensus, as deduced from peptides selected by phage display experiments. Next, we synthesized all the matching peptides at high density on a cellulose membrane, and we probed them directly with the SH3 domains. The domains that we have studied were grouped by this approach into five classes with partially overlapping specificity. Within the classes, however, the domains display a high promiscuity and bind to a large number of common targets with comparable affinity. We estimate that the yeast proteome contains as few as six peptides that bind to the Abp1 SH3 domain with a dissociation constant lower than 100 μM, while it contains as many as 50–80 peptides with corresponding affinity for the SH3 domain of Yfr024c. All the targets of the Abp1 SH3 domain, identified by this approach, bind to the native protein in vivo, as shown by coimmunoprecipitation experiments. Finally, we demonstrate that this strategy can be extended to the analysis of the entire human proteome. We have developed an approach, named WISE (whole interactome scanning experiment), that permits rapid and reliable identification of the partners of any peptide recognition module by peptide scanning of a proteome. 
Since the SPOT synthesis approach is semiquantitative and provides an approximation of the dissociation constants of the several thousand interactions that are simultaneously analyzed in an array format, the likelihood of each interaction occurring in any given physiological setting can be evaluated. WISE can be easily extended to a variety of protein interaction domains, including those binding to modified peptides, thereby offering a powerful proteomic tool to help complete a full description of the cell interactome. | Introduction Protein–protein interactions govern cell physiology, and the disruption of some sensitive connections in the network can have pathological effects. Once a genome has been sequenced, one of the goals of functional genomics is the elucidation of the protein interaction network supporting biochemical and genetic pathways. Eventually, the aim is to study the consequences on cell physiology of disrupting the specific interaction between any two given proteins. Over the past few years, a number of high-throughput strategies have been proposed to achieve this goal ( Uetz et al. 2000 ; Ito et al. 2001 ; Gavin et al. 2002 ; Ho et al. 2002 ). These endeavors demonstrated the feasibility of a proteomic approach to the protein interaction problem. However, the lack of a substantial overlap between the results of projects designed to cover the entire interactome of Saccharomyces cerevisiae emphasized the importance of confirming any interaction by different methods ( von Mering et al. 2002 ). An in vitro strategy that has received considerable attention is based on the production of proteins in a high-throughput fashion and on their analysis in an array format ( Zhu and Snyder 2003 ). This approach is not limited to the study of protein interactions, and various other protein functions, including enzymatic activities, can be tested in the array format. 
However, although several experimental strategies are being explored, it is not yet clear what percentage of a eukaryotic proteome can be produced in a folded form in conventional expression systems and still remain functional once printed onto a solid support. High-density arrays of relatively short peptide chains, on the other hand, can be efficiently synthesized by a positionally addressable synthesis of peptides on cellulose membranes (SPOT synthesis) and have been used to facilitate mapping of antibody epitopes and more generally to study protein binding specificity ( Frank 1992 ; Kramer and Schneider-Mergener 1998 ; Reineke et al. 2001 ). The clear advantage of the array format could then be fully exploited to study protein interactions in those cases in which one of the partners participates in complex formation by docking a relatively short peptide into a receptor protein. In fact, a fairly large set of protein interactions is mediated by families of protein-binding domains (SH2, WW, SH3, PDZ, etc.) that act as receptors to accommodate, in their binding pockets, short peptides in an extended conformation ( Pawson and Scott 1997 ; Pawson et al. 2002 ; Pawson and Nash 2003 ). We have recently shown that the peptide sequences, obtained by panning phage-displayed random peptide libraries with SH3 domains, can be used to derive position-specific scoring matrices to computationally scan the entire proteome in search of putative partners ( Tong et al. 2002 ). This approach is affected by relatively low accuracy and/or coverage, depending on the threshold score that is set in the predictive algorithm. As a consequence, reliable inferences are only achieved by considering the intersection of the network obtained by phage display and the one obtained by a completely unrelated (orthogonal) technique, such as the yeast two-hybrid method ( Tong et al. 2002 ). 
We reasoned that an alternative strategy whereby the domain of interest is challenged with the entire collection of peptides that the domain is likely to encounter in the cell could eliminate one source of error. However, this straightforward approach is technically not feasible because the number of short peptides, even in a proteome as simple as the one of baker's yeast, is on the order of 10 7 . This figure is far beyond the limits of the current technology for peptide synthesis. On the other hand, one could use the information obtained from screening random peptide repertoires to filter out the amino acid sequences that are highly unlikely to bind, thereby decreasing the peptide sequence space to be tested experimentally. We will refer to this approach by the acronym WISE (whole interactome scanning experiment) ( Figure 1 ). Figure 1 Schematic Representation of the WISE Strategy It should be pointed out that WISE only addresses the problem of identifying natural peptides with the potential for binding to any given recognition domain. Although we use this information to infer the formation of protein complexes in vivo, there are a number of reasons why this inference could turn out to be incorrect. For instance, a peptide could be unavailable for interaction in the native protein structural context. Alternatively, the two inferred partners could be located in different cell compartments or expressed in different tissues or at different times during development. Finally, all the interactions that are mediated by an extended region of a protein surface and that cannot be supported by a relatively short peptide will be missed by this approach. Results To assess the feasibility of this strategy, we have chosen eight S. cerevisiae proteins that contain SH3 domains belonging to five different specificity classes, as determined by phage display experiments ( Tong et al. 2002 ). 
The SH3 domains of Rvs167 (P39743), Yfr024c (P43603), and Ysc84 (P32793) bind to peptides that conform to typical class 1 (RxxPxxP) and class 2 (PxxPxR) motifs. The SH3 domains of Boi1 (P38041) and Boi2 (P39969) bind to peptides that also match class 1 or class 2 motifs but that display a somewhat higher complexity and variability. By contrast, the SH3 domains of Sho1 (P40073) and Myo5 (Q04439) were found to bind only to class 1 peptides, with a preference for positively charged or large hydrophobic sidechains at position P-2, respectively. The SH3 binding motifs' residue nomenclature (P-0 being the first Pro in the PxxP motif) is according to Lim et al. (1994). Finally, the SH3 domain of Abp1 (P15891) was poorly defined by the phage display experiments, possibly because peptides longer than nine amino acids are required for efficient binding. For each SH3 domain, we have defined a “relaxed pattern,” less selective than the pattern identified by comparing the most frequent ligands discovered by phage display. We have then used this pattern to scan the whole S. cerevisiae proteome in search of matching peptides. The yeast proteome was searched with the program PatMatch at the Saccharomyces Genome Database ( http://genome-www.stanford.edu/Saccharomyces/ ). A detailed description of the relaxed patterns can be found in Table S1 . For instance, for class 1 peptides bound by the Rvs167 SH3 domain, instead of using the strict consensus motif RxFPxPP, we have defined a relaxed consensus by allowing either Arg or Lys at position P-3 and any amino acid at P-1 and P+2 (R/KxxPxxP). Standard conventions are used for representing consensus sequences of peptide ligands and protein modules ( Aasland et al. 2002 ) and for the nomenclature of residue positions in SH3 ligands ( Mayer 2001 ). 
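The relaxed-consensus search described above maps naturally onto pattern matching over protein sequences. The following sketch shows the idea with standard regular expressions as a stand-in for PatMatch's own pattern syntax; the toy "proteome" sequences and all names here are hypothetical, for illustration only.

```python
import re

# Illustrative sketch of the "relaxed pattern" proteome scan described above.
# The relaxed consensi translate directly into regular expressions:
#   class 1, (R/K)xxPxxP  ->  [RK]..P..P
#   class 2, PxxPx(R/K)   ->  P..P.[RK]
CLASS1 = re.compile(r"[RK]..P..P")
CLASS2 = re.compile(r"P..P.[RK]")

def scan(proteome, pattern, flank=3):
    """Yield (protein, match_start, peptide) for each non-overlapping match,
    padding the match with flanking residues, as a synthesized SPOT peptide
    would carry sequence context around the core motif."""
    for name, seq in proteome.items():
        for m in pattern.finditer(seq):
            lo, hi = max(0, m.start() - flank), min(len(seq), m.end() + flank)
            yield name, m.start(), seq[lo:hi]

# Toy "proteome" with hypothetical sequences.
proteome = {"protA": "MAGPALPSRDF", "protB": "MKRSLPTLPQ"}
class2_hits = list(scan(proteome, CLASS2))  # matches only in protA
class1_hits = list(scan(proteome, CLASS1))  # matches only in protB
```

The matching peptides, with their flanking residues, are then the candidates selected for SPOT synthesis and membrane binding tests.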
We have to emphasize that this strategy is only suitable for identifying SH3 partners whose ligand domain can be confined to a short peptide detectable by the phage display approach. Although this is often the case, we need to realize that some SH3 domain interactions require more extended binding surfaces. As a consequence, they will not be identified by this approach ( Barnett et al. 2000 ). This approach was repeated for the eight SH3 domains. For each domain, approximately 1,500 peptides, matching the relaxed patterns, were selected for synthesis (see Datasets S1–S8 ). The peptides were synthesized at high density on cellulose membranes by SPOT synthesis technology, and the membranes were probed with the corresponding SH3 domain fused to glutathione S-transferase (GST). Finally, the bound domains were revealed by an anti-GST antibody and by a secondary anti-immunoglobulin G (IgG) antibody coupled to horseradish peroxidase (POD). The intensity of each SPOT was measured quantitatively in Boehringer light units (BLUs) (arbitrary light intensity units measured by a Lumi-Imager TM instrument). Figure 2 and Datasets S1–S8 report the results of these experiments, in which the pattern of reactive spots forms a sort of fingerprint that defines the recognition specificity of the specific SH3 domain. The differences and similarities in recognition specificity are better appreciated in the representation of Figure 3 , where the red hue of the small horizontal bars indicates the intensity of the binding reaction of a specific peptide for each SH3 domain column. As expected from the phage display experiment, the SH3 domains of Rvs167, Yfr024c, and Ysc84 have overlapping specificities, with Rvs167 proving more selective and Yfr024c more promiscuous. By contrast, the peptides that bind to the Abp1 (P15891) and Myo5 SH3 domains are characterized by different motifs. 
The results reported in Figure 3 also point out that peptide recognition patterns inferred from the phage display experiments (green in Figure 3 ) overlap only partially with the SPOT recognition patterns. These differences are particularly apparent in the case of Boi1 and Boi2. For these domains, the data obtained by phage display have proven insufficient for the target peptides to be inferred with sufficient accuracy, since only three out of 15 peptides have been predicted correctly. This comparison underlines the danger of using regular expressions or position-specific scoring matrices, derived from a relatively small number of peptide sequences, for inferring new peptide targets. Figure 2 WISE Screening of the Binding Potential of Yeast SH3 Domains Seven GST–SH3 domain fusion proteins were challenged with peptides that match different relaxed consensi: class 1 (R/K)xxPxxP and class 2 PxxPx(R/K). The Myo5 SH3 domain was also tested with peptides matching (F/P/L/W/A/E)xx(W/Y/L/M/F/H)xxPxxP, while the Abp1 membrane contains peptides matching either xxPx(K/R)P or Pxxx(K/R)P. In the design of these relaxed patterns, we first aimed at defining regular expressions that could retrieve from the proteome all the peptides that had been demonstrated to bind to the domain under consideration. Whenever the number of matching peptides did not exceed an arbitrarily chosen threshold of 1,500, we used subjective considerations about sidechain similarities to further relax the search pattern. The three spots near the membrane corners contain peptides that bind to the anti-GST antibody. The intensity of these spots was used for normalization. Figure 3 Comparison of the Phage Display Prediction and the Results of the SPOT Binding Test by the WISE Approach The quantitative results of the experiments in Figure 1 are visualized with a graphical representation obtained with the tool EPCLUST available at http://ep.ebi.ac.uk/EP/EPCLUST . 
The PepSpot data, represented in red in a semiquantitative scale, is compared to the phage display prediction. Only peptides with BLUs (measured on a Lumi-Imager TM ) higher than 25K are included in the representation. The red intensity scale corresponds to BLU values in the ranges 25K–35K, 35K–45K, 45K–55K, 55K–85K, and larger than 85K, where higher BLU values correspond to a brighter red. Peptides that obtained a high score with the phage display-derived position-specific scoring matrix ( Tong et al. 2002 ) are in brighter green. Peptides with a lower score are represented with a correspondingly lighter green according to an arbitrary linear scale. Correlation of the SPOT Quantitative Output and Dissociation Constant The sensitivity of the SPOT interaction experiment is such that even peptides with a dissociation constant as high as 10 −4 M or above give a positive signal in the assay ( Kramer et al. 1999 ). To establish a correlation between affinity and the BLU signal, we have measured, by surface plasmon resonance, the dissociation constants of a number of peptides that were positive in the membrane assay ( Figure 4 A). The dissociation constants ranged from 9.4 × 10 −7 M to values that, being larger than 10 −4 M, could not be confidently measured. As previously observed for antibodies ( Kramer et al. 1999 ), in these experiments signal intensity also correlated inversely with the dissociation constant (correlation coefficient of –0.4; Figure 4 B). This correlation was obtained by comparing experiments performed with different probes and different membranes and can be further improved through more careful standardization (C. Landgraf and R. Volkmer-Engert, unpublished data). Thus, this approach, in contrast with other high-throughput approaches, is accompanied by a quantitative output that correlates, albeit partially, with the dissociation constant. As such, it can be used to assign figures to the edges of the inferred interaction network. 
This is illustrated in Figure 5 A, where the inferred SH3-mediated interaction network is represented with different colors to differentiate interactions mediated by different SH3 domains and different edge thicknesses to distinguish interactions with different affinities. Figure 4 Measurement of Dissociation Constants and Correlation with SPOT Intensities (A) Dissociation constants were measured with a BIAcoreX instrument as described in the Materials and Methods. The experiments with the Abp1 SH3 domain were carried out in triplicate. (B) Normalized BLU intensities plotted as a function of the log of the dissociation constant. Figure 5 Inferred Protein Interaction Networks (A) Protein interaction network mediated by the SH3 domains of the proteins characterized in this study. The SH3-containing proteins are represented as blue dots, while the prey partner proteins are represented as black dots. The interactions mediated by each SH3 are represented in a different color, and the edge thicknesses are proportional to the BLU intensity of the corresponding interaction, according to the scale described in Figure 3 . (B) The graph represents the interaction network mediated by the SH3 domains of Rvs167, Ysc84, Yfr024c, Abp1, Myo5, Sho1, Boi1, and Boi2 as determined by the two-hybrid approach ( Tong et al. 2002 ). The interactions (edges) that were confirmed by our WISE method (BLU value higher than 25K) are colored in red or magenta. The interactions in magenta, unlike the ones in red, were not correctly inferred by the phage display approach. The interaction in orange was inferred by the phage display approach, but not confirmed by the WISE method. The network was visualized by the Pajek package ( http://vlado.fmf.uni-lj.si/pub/networks/pajek/ ). Inferred Protein Ligands Share Common Functions Interacting proteins often share similar functions and participate in common processes. 
Hence, we examined whether the proteins found by our approach to bind to a specific SH3 domain could be preferentially associated with a biological process. For this analysis we considered, as putative ligands, all the proteins containing at least one peptide with an intensity higher than an arbitrarily chosen threshold of 20,000 (in BLUs, corresponding to a dissociation constant of approximately 100 μM). We then used the FunSpec software ( Robinson et al. 2002 ) to identify the Gene Ontology (GO) terms significantly enriched in the list of proteins interacting with any specific SH3 domain. FunSpec uses a hypergeometric distribution to evaluate the probability ( p ) that the intersection of a protein list with any given functional category occurs by chance. The inferred ligands of the Ysc84, Yfr024c, and Rvs167 SH3 domains were significantly enriched for the GO biological process term “actin cytoskeleton organization and biogenesis” ( p < 5.46 × 10 −7 , p < 7.06 × 10 −6 , and p < 5.50 × 10 −5 , respectively). By contrast, the partners of the Abp1 and Myo5 SH3 domains were found to be enriched for the GO terms “actin cortical patch assembly” ( p < 3.49 × 10 −7 ) and “actin cytoskeleton” ( p < 7.14 × 10 −6 ). These results are in accord with the available information about the participation of these bait proteins in the organization of the yeast cytoskeleton, whereas arbitrarily selected gene groups of similar size showed no comparable enrichments for any of the GO terms (best result, p < 10 −3 ). We have previously shown that the information obtained by panning random peptide libraries can be used to draw an interaction network that recapitulates a fraction of the SH3-mediated interaction network determined by the two-hybrid approach ( Tong et al. 2002 ). By using inferred networks of comparable size, the intersection with the two-hybrid network was larger for the WISE than for the phage display network, including three more proteins and six new edges ( Figure 5 B). 
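The hypergeometric test that FunSpec applies to each GO category can be sketched in a few lines. This is our own minimal reimplementation of the underlying statistic, with toy, hypothetical numbers; FunSpec itself is a web tool that additionally handles annotation databases and multiple-testing concerns.

```python
from math import comb

def enrichment_p(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric: the chance of drawing at least k
    annotated proteins in a hit list of size n, sampled without replacement
    from a proteome of N proteins of which K carry the GO term."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Toy numbers (hypothetical): a 6,000-protein proteome, 100 proteins annotated
# with a GO term, 30 inferred SH3 ligands, 8 of which carry the annotation.
# Only ~0.5 annotated hits are expected by chance, so p is very small.
p = enrichment_p(6000, 100, 30, 8)
```

A small p for a category, as in the actin-related terms reported above, indicates that the overlap between the ligand list and the category is unlikely to arise by chance.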
Furthermore, as shown below, at least some of the WISE interactions, not present in the two-hybrid network, can be shown to occur under physiological conditions in yeast. The Tightest SH3 Peptide Ligands Mediate Complex Formation In Vivo The Abp1 SH3 domain, compared to most of the remaining SH3 domains that we have studied, has a narrower peptide recognition specificity and, as a consequence, fewer inferred partners. Our analysis confirmed that Srv2 and Ark1, previously identified Abp1 SH3 functional partners ( Lila and Drubin 1997 ; Fazi et al. 2002 ), contain peptides that bind with an affinity in the 1–100 μM range. Furthermore, fragments of Prk1 (P40494), Yir003w (P40563), and Ynl094w (P53933) were reported to bind to the Abp1 SH3 domain in vitro ( Fazi et al. 2002 ). Surprisingly, we could not identify any tetradecapeptide in the Ynl094w protein with affinity better than 100 μM. We noticed, though, that if we extend the peptide RRPPPPPIPSTQKP (predicted to be a ligand of the Abp1 SH3 domain by a variety of approaches) to include three more residues at the C-terminus, the affinity rises to approximately 40 μM. Finally, we identified Scp1 (Q08873), the yeast homolog of calponin, as a putative new Abp1 partner. In order to assess how accurately peptide binding affinity permits us to infer physiological partners, we investigated whether the putative partners can be copurified with Abp1 in vivo. We used the tandem affinity purification (TAP) technology ( Rigaut et al. 1999 ) to tag the Prk1, Ynl094w, Scp1, and Yir003w proteins, and we initially asked, by pulldown assays, whether the putative peptide targets of the Abp1 SH3 domain were available for interaction in the protein's natural context. As seen in Figure 6 A, the four proteins identified by our approach can be affinity-purified on a sepharose resin containing the Abp1 SH3 domain, thus indicating that the target peptides can bind in their native protein context. 
Figure 6 Characterization of Abp1 Ligands (A) The dissociation constants of the 11 peptides that bound most efficiently to the Abp1 SH3 domain in the SPOT synthesis assay were measured by BIAcore experiments. (See also Table S1 .) The results for the peptides with the highest affinity are reported here. (B) The genes encoding the putative Abp1 ligands (Prk1, Yir003w, Scp1, and Ynl094w) were modified by the TAP technology to produce tagged proteins. A strain expressing the “tapped” Bmh1 protein was used as a control. Yeast extracts containing the tagged proteins were used in pulldown experiments in the presence of 100 μg of GST–Abp1 SH3 or GST alone as a negative control. The “Ext.” lane was loaded with 1/20 of the extract used in the pulldown experiment. (C) The same extracts were affinity-purified on an IgG affinity resin, and the protein A affinity tag was then released by cleavage with the TEV protease. The proteins that copurified with the “tapped” baits were revealed with an anti-Abp1 serum. To establish whether Abp1 forms a complex with these proteins in vivo, we next affinity-purified the four tagged proteins on an IgG resin. We then probed the purified complexes with an anti-Abp1 antibody. As shown in Figure 6 B, Abp1 could be copurified with Prk1, Scp1, Ynl094w, and Yir003w, but not with Bmh1, used as a negative control. In conclusion, at least in the case of Abp1, the search for the tightest binding peptides in the whole yeast proteome led to the identification of proteins that form a complex with the bait domain when expressed at physiological levels in vivo. We have also investigated whether the fraction of coimmunoprecipitated Abp1 protein correlates with either the BLU intensity or the dissociation constant of the corresponding SH3–peptide interaction.
The observed lack of correlation indicates that other factors, such as local protein concentration (mediated by different interactions), are important in determining the efficiency of complex formation. WISE Scanning of the Human Proteome Finally, we asked whether this approach could be extended to the analysis of a mammalian proteome that is approximately five to six times more complex than the yeast one. To this end, we selected two proteins involved in membrane recycling, amphiphysin-1 (P49418) and endophilin-1 (Q99962), whose SH3 domains we had previously characterized by phage display ( Cestra et al. 1999 ). These two SH3 domains are also known to have overlapping recognition specificity, although their preferred target peptides are different and their overall recognition specificity differs from those of the yeast SH3 domains characterized so far by this approach (see Figure 2 ). We screened with the amphiphysin and endophilin SH3 domains all the peptides in the SwissProt/TREMBL database that contain the (P/F/L/I)XRPXX(R/K), the (P/F/L/I)(K/R)RP, or the (P/L/R/F/S/I/V/K/G)PX(R/K)PP motifs. Because of the redundancy of the SwissProt/TREMBL database and because the peptide families matching the three motifs overlap, some of the 3,774 peptides were synthesized several times, thereby providing an internal control of the approach's reproducibility ( Figure 7 ; Datasets S9 and S10 ). Figure 7 Scanning of the Human Proteome in Search of Ligands for the Amphiphysin and Endophilin SH3 Domains The relaxed target peptide consensi (right) were derived from the available phage display experimental data and used to search the human proteins contained in the SwissProt/TREMBL database with the software ScanProsite, found at http://us.expasy.org/tools/scanprosite/ . Dynamin and synaptojanin, two proteins involved in endocytosis, form an SH3-mediated complex with amphiphysin and endophilin in vivo ( McPherson et al. 1996 ; de Heuvel et al.
1997 ; Ringstad et al. 1997 ). Our approach identified in both proteins at least one peptide that is a ligand for the amphiphysin and endophilin SH3 domains. Interestingly, other proteins that have already been implicated in endocytosis and its control (but not yet described as physiological partners of amphiphysin and endophilin) contain peptides that rank among the highest affinity ligands in our approach. Several other proteins of unknown function are predicted to bind to the SH3 domain of these two proteins. Discussion The WISE strategy described here has the merit of combining the strengths of a selective approach (such as panning combinatorial peptide libraries displayed on phage) with the quantitative analysis that can be achieved by screening a large number of peptides arrayed at high density on a solid support. This makes it possible to identify rapidly and directly the tightest ligands of a peptide-binding receptor among all the peptides in an entire proteome. We have demonstrated the approach by applying it to the family of SH3 domains. However, WISE can also be extended to all those domain families (WW, PDZ, EH, GYF, VHS, SH2, PTB, 14-3-3, FHA, WD40, etc.) that mostly recognize short peptides in their partner proteins. Our approach, like any in vitro approach, involves some simplifications when it comes to inferring physiological partners from the domain–peptide interaction data. A naive strategy would liken the cell to a cellulose membrane, where all the peptides are equally represented and accessible to the bait domain, and would conclude that all the proteins containing the identified peptide ligands are likely to be physiological partners. In the real cell, however, the target peptides may be hidden inside the core of the folded proteins, and the protein partners may not be equally represented.
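The consensus-motif scan of SwissProt/TREMBL described earlier can be sketched as a regular-expression search. The translation of the relaxed consensi into regex character classes is my assumption (ScanProsite uses its own PROSITE pattern syntax), and the motif labels and example sequences are illustrative:

```python
import re

# Hypothetical regex renderings of the three relaxed consensi:
# class members become character classes, X becomes '.'.
MOTIFS = {
    "amph/endo-1": r"[PFLI].RP..[RK]",      # (P/F/L/I)XRPXX(R/K)
    "amph/endo-2": r"[PFLI][KR]RP",         # (P/F/L/I)(K/R)RP
    "amph/endo-3": r"[PLRFSIVKG]P.[RK]PP",  # (P/L/R/F/S/I/V/K/G)PX(R/K)PP
}

def scan(sequence):
    """Return (motif name, start, matched peptide) for every hit.
    A zero-width lookahead is used so overlapping hits are all reported."""
    hits = []
    for name, pat in MOTIFS.items():
        for m in re.finditer(f"(?=({pat}))", sequence):
            hits.append((name, m.start(), m.group(1)))
    return hits
```

Running this over every database entry, then synthesizing each matching peptide as a SPOT, reproduces the redundancy noted in the text: a peptide matching two motifs is emitted (and synthesized) twice.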
Furthermore, the partner proteins may be expressed in different cell types or segregated in different macromolecular complexes or cell compartments. In order to obtain more reliable inferences, the peptide interaction information obtained by a WISE approach should be complemented by information about peptide accessibility obtained with structural predictors ( Garner et al. 1998 ; Linding et al. 2003 ) and by data about mRNA and protein concentrations in different physiological and subcellular contexts ( Simpson et al. 2000 ; Kumar et al. 2002 ). Nevertheless, the average number of peptides in the yeast proteome with the potential to bind SH3 domains at affinities of possible physiological relevance was surprisingly high, ranging from a few peptides, in the case of the Abp1 and Boi2 SH3 domains, to several tens, in the case, for instance, of the Yfr024w SH3 domain. Under the hypothesis that all (or most of) these peptides are equally expressed inside the cell and exposed to the solvent in the folded protein structure, as most Pro-rich peptides are, these findings raise the question of whether the observed binding promiscuity has any physiological implication. Recent proteome-wide analyses of yeast protein complexes have revealed that many proteins are organized in discrete complexes ( Gavin et al. 2002 ; Ho et al. 2002 ). Yet this approach has failed to identify a large number of interactions whose physiological relevance was validated by traditional single (or few) protein studies, implying that many physiologically relevant protein interactions do not lead to the formation of stable complexes. SH3-mediated interactions may belong to this latter class. This is consistent with the observation that SH3-containing proteins have a connectivity significantly lower than average (2.33 vs. an average of 4.00) in the yeast complexosome ( Gavin et al. 2002 ; Ho et al.
2002 ), in contrast with the connectivity observed in the interaction network derived from high-throughput two-hybrid experiments (average connectivity of SH3-containing proteins, 5.05; average connectivity for all proteins, 1.53) ( Uetz et al. 2000 ; Ito et al. 2001 ; L. Montecchi-Palazzi and G. Cesareni, unpublished data). SH3-mediated interactions are much less likely to be detected by coimmunoprecipitation assays than by solid-state (or two-hybrid) assays, because relatively weak interactions are almost certainly lost in the extensive washing needed for coimmunoprecipitation experiments. Our approach has made it possible to rediscover most of the SH3-mediated protein interactions that were previously described for these proteins. Admittedly, though, few clearly characterized protein interactions of this type have yet been reported in the literature. The few failures of our approach (false negatives) are due to weaknesses in the design of the relaxed consensus used to search for matching peptides in the protein databases. All the same, we have identified a larger number of target peptides that bind with affinities comparable to those of the validated physiological targets. Some of these peptides may never encounter the cognate SH3 domain, while some will only meet their partners in specific physiological conditions. Others may add specificity to the formation of a complex by cooperating with other associated low-specificity binding domains. Finally, we have to consider a new scenario in which proteins, even when not forming stable complexes, are seldom isolated in solution, but navigate in the cell by moving from one weak partner to another. These weak interactions may be important in modulating cell architecture even when they are not instrumental in the nucleation of a stable complex.
Although this is difficult to prove, the semiquantitative data provided by our approach, complemented with the results of large-scale expression and localization studies, may eventually allow one to model these different settings. The in vitro approach that we have described, albeit limited to interactions in which one of the partners can be reduced to a relatively short peptide, presents a number of interesting features that complement other strategies aimed at revealing the details of the protein interaction network within cells. First, it takes full advantage both of the genomic information that is being accumulated and of the array format, in which all the possible targets are equally represented. Second, it is comprehensive and provides a high level of detail on the interaction topology. Third, it is not affected by protein concentrations inside the cell and is very sensitive (interactions with dissociation constants up to 100 μM can be detected). Fourth, interactions that depend on peptide modifications, for instance phosphorylation, can also be studied. Fifth, the output is semiquantitative. Finally, the identified target peptide can be used as a lead to develop tighter binding molecules in order to interfere with complex formation in vivo. We have shown that the current implementation of the SPOT synthesis technology is sufficient to carry out a WISE screening of a proteome as complex as that of a mammalian organism. Foreseeable technological improvements of the SPOT synthesis technology will permit the assembly of relatively cheap microarrays containing up to 15,000 peptides. This will extend the approach's power by relieving, in some cases, the requirement for an experimental filtering step, performed here by the phage display approach, thereby allowing more freedom in the design of the relaxed pattern. Materials and Methods Genome tagging Yeast PJ694a strains expressing TAP-tagged ORFs were constructed as described ( Rigaut et al. 1999 ).
Primers bearing a sequence identical to the C-terminal part of the ORF were used to amplify the TAP cassette. Primers for Yir003 are forward: GACGTTGATTCTGCCTTACATTCAGAAGAAGCGTCTTTTCACTCCCTTTCCATGGAAAAGAGAAGATG and reverse: CCATTATTATTAATAACACCTCTAGTTTGCTCGTCATTCACATATTTCTACGACTCACTATAGGGCGA. Primers for Scp1 are forward: TCTCAGGCTACTGAAGGAGTGGTGTTAGGACAACGGAGAGATATAGTTCCATGGAAAAGAGAAGATG and reverse: GGAAAACTAAAATATATCAAAGGAACTTTGGTTGCGTATATAGGGTTCTACGACTCACTATAGGGCGA. Primers for Prk1 are forward: GTAGATGATTTAGAAGCCGATTTTAGAAAAAGGTTTCCCAGCAAAGTTTCCATGGAAAAGAGAAGATG and reverse: AAAAATTTCAAATGATTGACGAAAGAAAATTTGTACATTTTGTATGACTACGACTCACTATAGGGCGA. Primers for Ynl094w are forward: TTAAGTTTGGAAGACAGTATTCGCAGAATTAGGGAGAAGTATTCAAACTCCATGGAAAAGAGAAGATG and reverse: CACTCTAAAACGTTGAAAATGGCTCCAATTCATAAGGTCACTTTAGTGTACGACTCACTATAGGGCGA. The polymerase chain reaction (PCR) fragments were used to transform the yeast strain. Positive clones were selected on selective plates and checked by PCR and Western blot analysis. For the PCR analysis, we used a new forward primer together with the reverse primers used for the construction: forward for Yir003, AGCAGATGGAGGACCAAATGGAGGTTG; forward for Scp1, CGGTTATATGAAAGGTGCATCTCAGGC; forward for Prk1, CGTTTACAATCAAAGAAACTGCCGATTG; and forward for Ynl094w, GGACTCAATTCAAAAATTGAGCAATCAAG. Pulldown assay Yeast strains expressing TAP-tagged Yir003w, Scp1, Prk1, Ynl094c, or Bmh1 as a control were cultured at 30°C in 5-l flasks containing 2 l of YPD medium, collected in the exponential growth phase, and lysed mechanically with glass beads in 5 ml of IPP-150 buffer (10 mM Tris–HCl [pH 8.0], 150 mM NaCl, 0.1% NP-40) in the presence of protease inhibitors (2 mM benzamidine, 0.5 mM PMSF, 1 mM leupeptin, 2.6 mM aprotinin). Half of the extract was incubated for 2 h at 4°C with 100 μg of GST–Abp1SH3 (bound to glutathione–sepharose), while the remaining half was incubated with 100 μg of GST as a control.
The resins were washed four times with 5 ml of IPP-150 buffer, and the bound proteins were recovered (by boiling in SDS–BLU–dye), analyzed on a 10% SDS–polyacrylamide gel, and transferred onto nitrocellulose membranes. Filters were blocked overnight at 4°C in PBS containing 5% milk powder (blocking solution), then incubated with the peroxidase (POD)–anti-POD (PAP) antibody (Sigma P-2026; Sigma, St. Louis, Missouri, United States) diluted 1:1,000 for 2 h at room temperature (RT), washed five times for 15 min with PBS–0.05% Tween, and revealed by chemiluminescence. GST fusion proteins were expressed and purified by standard procedures. Coimmunoprecipitation Yeast cultures expressing TAP-tagged Yir003, Scp1, Prk1, Ynl094c, or Bmh1 were cultured, collected, and lysed as described for the pulldown experiments. Each extract was incubated with 500 μl of IgG–sepharose (Pharmacia Biotech 17–0969-01; Amersham Pharmacia, Uppsala, Sweden) for 2 h at 4°C. The resins were washed four times in 5 ml of IPP-150 buffer, resuspended in 300 μl of 50 mM Tris–HCl (pH 8), 0.5 mM EDTA, 5 mM DTT, transferred to Eppendorf tubes, and incubated with 30 U of recombinant TEV protease (Invitrogen 10127–017; Invitrogen, Carlsbad, California, United States) for 1 h at 20°C. After centrifugation for 2 min at 2,300 rpm, the supernatants were loaded on 10% SDS–polyacrylamide gels and then transferred onto nitrocellulose membranes. Filters were blocked overnight at 4°C in blocking solution, incubated for 2 h at RT with anti-Abp1 antibody (diluted 1:1,000), and washed five times with PBS–0.05% Tween. They were then incubated for 1 h at RT with a POD-conjugated anti-rabbit antibody, washed ten times with PBS–0.05% Tween, and detected by chemiluminescence.
Peptide array synthesis Cellulose membrane-bound peptides were automatically prepared according to standard SPOT synthesis protocols ( Frank 1992 ) using a Spot synthesizer (Abimed, Langenfeld, Germany) as described in detail ( Kramer and Schneider-Mergener 1998 ). For generation of the sequence files, the software LISA (Jerini AG, Berlin, Germany) was used. To exclude false-positive spots in the incubation experiment, all Cys residues were replaced by Ser. The arrays of 15mer peptides were synthesized on cellulose-(3-amino-2-hydroxy-propyl)-ether (CAPE) membranes, because of their better signal-to-noise ratio in the incubation experiments. Preparation of CAPE membranes An 18 × 28 cm Whatman 50 paper (Whatman, Maidstone, United Kingdom) was immersed in a stainless steel dish containing a solution of 400 mg of p -toluenesulfonic acid in methanol (50 ml) and shaken for 3 min. The membrane was removed from the tray and air-dried. Meanwhile, a solution of 7.8 g of N-(2,3-epoxypropyl)-phthalimide in dioxane (60 ml) was heated to 80°C in a covered stainless steel dish placed on a shaking platform using a heater mat. Then, a solution of 400 mg of p -toluenesulfonic acid in 5 ml of dioxane was added. Immediately, the membrane was placed in this solution and shaken at 80°C for 3–5 h. Afterwards, the membrane was washed three times with 50 ml of dioxane and with ethanol (twice, 50 ml each) and subsequently incubated with a 10% (v/v) solution (50 ml) of hydrazine hydrate (80%) in ethanol for approximately 6 h. Finally, the membrane was washed twice with ethanol, three times with dimethylacetamide, and once again with ethanol (twice, 50 ml each), and dried. The loading of this type of amino-functionalized cellulose membrane is about 120–200 nmol/cm 2 . SH3 domain binding studies of cellulose-bound peptides Generally, all incubations and washing steps were carried out under gentle shaking.
After washing the membrane once with ethanol (10 min) and three times for 10 min with Tris-buffered saline (TBS: 50 mM Tris-(hydroxymethyl)-aminomethane, 137 mM NaCl, 2.7 mM KCl, adjusted to pH 8 with HCl), the membrane-bound peptide arrays were blocked (3 h) with blocking buffer (blocking reagent [CRB, Northwich, United Kingdom] diluted 1:10 in TBS containing 5% [w/v] sucrose). After washing with TBS (10 min), 10 μg/ml of the corresponding GST-fused SH3 domain (in blocking buffer) was added and incubated overnight at 4°C. After washing three times for 10 min with TBS, the anti-GST monoclonal antibody (mAb) (G1160; Sigma) was added at a concentration of 1 μg/ml in blocking buffer for 2 h at RT. Subsequently, the membrane was washed three times with TBS (10 min each) and the POD-labeled anti-mouse mAb (1 μg/ml in blocking buffer) was applied for 1.5 h at RT, followed by washing three times with TBS. Analysis and quantification of peptide-bound SH3 domains were carried out using a chemiluminescence substrate and the Lumi-Imager™ instrument (Roche Diagnostics, Basel, Switzerland). For quantification, the SPOT signal intensities were measured in BLUs. To exclude false-positive results in the SH3 incubation experiment, each membrane was pre-examined with GST/anti-GST mAb/anti-mouse mAb. The data obtained with the different membranes were normalized using as a reference the intensity of three control peptides that bind to the anti-GST antibody. The sequences of these peptides were QRALAKDLIVPRRP, LAKDLIVPRRPEWN, and DLVIRPPRPPKVLGL. BIAcore analysis Surface plasmon resonance measurements were carried out with a BIAcoreX instrument (BIAcore, Uppsala, Sweden). Experiments were carried out on sensor chips with GST-fused SH3 domains and GST as a control.
GST-fused SH3 domains and GST were coupled to CM5 sensor chips using the EDC/NHS (N-(3-dimethylaminopropyl)-N′-ethylcarbodiimide and N-hydroxysuccinimide) amine-coupling kit, yielding approximately 4,350 resonance units in the case of the GST-fused SH3 domain and 4,330 resonance units for GST. Interaction analysis was performed at 25°C with the peptides dissolved in 10 mM HEPES, 150 mM NaCl, 3.4 mM EDTA, and 0.005% surfactant P20 (pH 7.4), at a 15 μl/min flow rate, using six to seven dilutions ranging from 500 μM to 65 nM. Dissociation constant values were evaluated by applying the steady-state model using the BIAcore evaluation 3.1 software. Supporting Information Dataset S1 Results of the SPOT Analysis Experiments for the Abp1 SH3 Domain (102 KB TXT). Dataset S2 Results of the SPOT Analysis Experiments for the Boi1 SH3 Domain (104 KB TXT). Dataset S3 Results of the SPOT Analysis Experiments for the Boi2 SH3 Domain (103 KB TXT). Dataset S4 Results of the SPOT Analysis Experiments for the Myo5 SH3 Domain (86 KB TXT). Dataset S5 Results of the SPOT Analysis Experiments for the Rvs167 SH3 Domain (104 KB TXT). Dataset S6 Results of the SPOT Analysis Experiments for the Yfr024c SH3 Domain (104 KB TXT). Dataset S7 Results of the SPOT Analysis Experiments for the Yhr016c SH3 Domain (104 KB TXT). Dataset S8 Results of the SPOT Analysis Experiments for the Sho1 SH3 Domain (76 KB TXT). Dataset S9 Results of the SPOT Analysis Experiments for the Amphiphysin SH3 Domain (183 KB TXT). Dataset S10 Results of the SPOT Analysis Experiments for the Endophilin SH3 Domain (184 KB TXT).
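The steady-state model used above for the dissociation constants is the 1:1 Langmuir isotherm, Req = Rmax·C/(Kd + C), fitted over the dilution series. A minimal sketch follows; the brute-force grid search and all numbers are illustrative assumptions, not the BIAcore evaluation software's actual algorithm:

```python
def steady_state_response(c, rmax, kd):
    # 1:1 Langmuir steady-state model: equilibrium response at analyte conc. c
    return rmax * c / (kd + c)

def fit_kd(concs, responses, kd_grid, rmax_grid):
    """Brute-force least-squares fit of (Rmax, Kd) over candidate grids --
    a stand-in for the software's steady-state fit."""
    best = None
    for kd in kd_grid:
        for rmax in rmax_grid:
            sse = sum((r - steady_state_response(c, rmax, kd)) ** 2
                      for c, r in zip(concs, responses))
            if best is None or sse < best[0]:
                best = (sse, rmax, kd)
    _, rmax, kd = best
    return rmax, kd
```

With six to seven dilutions spanning 500 μM down to 65 nM, as in the text, the curve is sampled on both sides of a micromolar-range Kd, which is what makes the steady-state fit well determined.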
Table S1 Design of Relaxed Consensi (66 KB PDF).
549521 | Marked alveolar apoptosis/proliferation imbalance in end-stage emphysema | Background Apoptosis has recently been proposed to contribute to the pathogenesis of emphysema. Methods In order to establish whether cell fate plays a role even in end-stage disease, we studied 16 lungs (9 with smoking-associated and 7 with α1-antitrypsin (AAT)-deficiency emphysema) from patients who had undergone lung transplantation. Six unused donor lungs served as controls. Apoptosis was evaluated by TUNEL analysis, single-stranded DNA laddering, and electron microscopy, and cell proliferation by an immunohistochemical method (MIB1). The role of the transforming growth factor (TGF)-β1 pathway was also investigated and correlated with epithelial cell turnover and with the severity of the inflammatory cell infiltrate. Results The apoptotic index (AI) was significantly higher in emphysematous lungs than in the control group (p ≤ 0.01), particularly when only lungs with AAT-deficiency emphysema were considered (p ≤ 0.01 vs p = 0.09). The proliferation index was similar in patients and controls (1.9 ± 2.2 vs 1.7 ± 1.1). A higher number of T lymphocytes was observed in AAT-deficiency lungs than in smoking-related cases (p ≤ 0.05). TGF-β1 expression in the alveolar wall was higher in patients with smoking-associated emphysema than in cases with AAT-deficiency emphysema (p ≤ 0.05). A positive correlation between TGF-βRII and AI was observed only in the control group (p ≤ 0.005, r 2 = 0.8). A negative correlation was found between the TGF-β pathway (particularly TGF-βRII) and the T lymphocyte infiltrate in smoking-related cases (p ≤ 0.05, r 2 = 0.99). Conclusion Our findings suggest that apoptosis of alveolar epithelial cells plays an important role even in end-stage emphysema, particularly in AAT-deficiency disease. The TGF-β1 pathway does not seem to directly influence epithelial turnover in end-stage disease.
Inflammatory cytokines other than TGF-β1 may differentially orchestrate cell fate in AAT-deficiency and smoking-related emphysema. | Background Pulmonary emphysema, a significant global health problem, is a pathological condition characterized by enlargement of the airspaces distal to the terminal bronchiole and destruction of the alveolar walls, with no or only mild fibrosis [ 1 ]. To date the pathogenesis remains enigmatic. The prevailing hypothesis since the 1960s has been the elastase/antielastase imbalance theory of inflammation [ 2 ]. Briefly, the concept is that activated inflammatory cells release large quantities of elastases, overwhelming local antiprotease activity with consequent damage to the alveolar wall matrix [ 3 ]. However, the emphasis on alveolar matrix destruction by a combination of inflammation and excessive proteolysis has failed to fully explain the loss of lung tissue, particularly when compared to the alterations seen in other inflammatory lung diseases. Recently, more attention has been paid to alveolar epithelial injury in addition to alveolar matrix destruction. Apoptosis has recently been described in animal models of emphysema [ 4 , 5 ] and in a few studies of human disease [ 6 - 9 ]. Most investigations have focused on smoking-related emphysema, on the premise that cigarette smoking is the main cause of apoptotic cell death. Cigarette smoke may induce alveolar cell apoptosis either directly, by a cytotoxic effect on pneumocytes, or indirectly, by decreasing the production of vascular endothelial growth factor (VEGF) via altered epithelial cells [ 7 ]. To date, smoking-associated centrilobular emphysema is the only form of emphysema in which apoptosis, and more recently also proliferation, has been investigated [ 9 ].
Alterations of lung epithelial cell turnover in end-stage emphysema, whether smoking-associated or α1-antitrypsin (AAT)-deficiency emphysema, have so far not been well characterized. Moreover, apoptosis has previously been investigated in moderate/severe smoking-related emphysematous lungs obtained almost exclusively from lung volume reduction surgery [ 6 , 7 , 9 ]. Whether cell turnover is a stable, progressive, or declining process in end-stage disease is to date unknown. Among the growth factors, transforming growth factor (TGF)-β1 could play a crucial role in the remodeling process occurring in emphysematous parenchyma. TGF-β1, in addition to its known profibrogenic [ 10 ] and anti-inflammatory effects [ 11 , 12 ], has an important influence on epithelial cell growth [ 14 ]. It has been shown to inhibit the growth of lung epithelial cells, particularly airway epithelium [ 14 , 15 ]. The cytokine has been shown to be over-expressed in patients with a history of smoking and chronic obstructive pulmonary disease (COPD) [ 16 , 17 ]. Paracrine (mainly macrophage-derived) and autocrine (released by epithelial cells) activity of this growth factor could play an important role in the loss of the alveolar walls by inducing apoptotic cell death. In the present work, the degree of apoptotic cell death and epithelial proliferation was studied in the lungs of patients with different types of end-stage emphysema. The severity of the inflammatory cell infiltrate (ICI) was also quantified and correlated with epithelial cell turnover. Further, the TGF-β1 pathway was examined and correlated with the apoptotic index (AI), the proliferative index (PI), and the ICI. Methods Lung tissue preparation Lung tissue used in the present study comprised material from 16 patients undergoing lung transplantation for end-stage emphysema at the Thoracic Surgery Unit of the University of Padua Medical School.
Cold ischemia preservation was 60 minutes and 120 minutes, respectively, for single and double lung transplantations. Small pieces from all lobes were cut and immediately fixed in Karnovsky's solution for electron microscopy. The lungs were then gently fixed in 10% phosphate-buffered formalin by airway perfusion and processed for sectioning (3 μm). Samples were selected from specimens that showed features of excellent tissue preservation and adequate lung inflation. In particular, large thin blocks of approximately 30 × 25 mm were cut from the subpleural areas of the apical anterior and lingular segments of the upper lobes, as well as the apical and basal segments of the lower lobes. A more centrally placed block was taken to sample the segmental airways and blood vessels. The right lung was sampled in the same way, with the middle lobe being treated in the same way as the lingula [ 18 ]. Adult control lungs were obtained from unused donor lungs for transplantation (6 cases). The Local Research Ethics Committee approved the study. TUNEL analysis The terminal deoxynucleotidyl transferase-mediated dUTP-biotin nick end-labeling (TUNEL) method was used to investigate the presence of apoptosis. Sections were processed in accordance with Gavrieli et al.'s method [ 19 ]. Briefly, after deparaffinization and rehydration, sections were digested with proteinase K (Boehringer Mannheim, Mannheim, Germany) at a concentration of 20 μg/ml for 15 minutes. The slides were then incubated with TdT/biotinylated dUTP diluted in buffer (Boehringer Mannheim, Mannheim, Germany). The slides were developed using diaminobenzidine and 30 ml hydrogen peroxide. For negative controls, some slides were incubated in buffer without TdT or biotinylated dUTP. For positive controls, some slides were incubated with 1 μg/ml DNAse (Sigma-Aldrich, Milan, Italy).
Electron microscopy Lung specimens fixed in Karnovsky's solution (2% paraformaldehyde, 2.5% glutaraldehyde in Millonig, pH 7.3) for 24 hours were post-fixed with 1% osmium tetroxide (Millonig, pH 6.8) for 1 hour, then progressively dehydrated in alcohol and embedded in epon. Semi-thin sections were stained with 0.1% toluidine blue for light microscopic examination. Ultra-thin sections were stained with uranyl acetate and lead citrate for transmission electron microscopy, performed using a Hitachi H-7000 (Hitachi Ltd., Tokyo, Japan). Oligonucleosomal-length DNA laddering The presence of oligonucleosomal-length DNA cleavage was investigated with APO-DNA1 (Maxim Biotech Inc, San Francisco, CA, USA) in 12 cases (4 AAT-emphysema patients, 4 smoking-related emphysema patients, and 4 controls) for which frozen tissue was available. Briefly, DNA was obtained from lung tissue samples by proteinase K-phenol extraction. Dephosphorylated adaptors were ligated with T 4 DNA ligase to the 5' phosphorylated blunt ends of 500 ng of lung sample DNA (for 16 h at 16°C). These then served as primers in LM-PCR under the following conditions: hot start (72°C for 8 min), 30 cycles (94°C for 1 min and 72°C for 3 min), and extension (72°C for 15 min). Every reaction set included thymus DNA as a positive control and for normalization of the amount of reaction products. Amplified DNA was subjected to electrophoresis on 1.2% agarose gel containing ethidium bromide. Images were scanned and the DNA fragmentation levels were based on the density of the bands ranging between 1,000 base pairs (bp) and 300 bp. The percentage of DNA fragmentation was quantified by scanning densitometry.
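The densitometric percentage above is not fully specified in the methods; one plausible reading (an assumption on my part, including the normalization to total lane density and the function name) is the summed density of ladder bands in the 300–1,000 bp window expressed as a fraction of the whole lane:

```python
def fragmentation_percent(bands):
    """`bands` maps band size in bp -> scanned density (arbitrary units).
    Returns the 300-1,000 bp ladder signal as a percentage of total lane
    density. Normalizing to the whole lane is an assumption; the original
    methods only state that bands between 300 and 1,000 bp were measured."""
    total = sum(bands.values())
    ladder = sum(d for bp, d in bands.items() if 300 <= bp <= 1000)
    return 100.0 * ladder / total if total else 0.0
```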
Immunohistochemistry for TGF-β1, TGF-βRII and MIB1 All lung sections were subjected to antigen retrieval by heating in a microwave oven on high power for 8 minutes in 0.01 mol/l citrate buffer (pH 6.0) and then incubated with a mouse monoclonal primary antibody against active TGF-β1, -β2, and -β3 (150 μg/ml; dilution 1:20, Genzyme Diagnostics, Cambridge, MA), with a polyclonal antibody against TGF-β receptor type II (200 μg/ml, dilution 1:200, Santa Cruz Biotechnology Inc., Santa Cruz), and with the monoclonal MIB-1 antibody (1:50, Dako, Santa Barbara, CA, U.S.A.), which recognizes the Ki-67 antigen in paraffin-embedded tissue sections. Immunohistochemical investigations were done on sections from the same paraffin-embedded specimens processed for TUNEL analysis. The detection system was the Vectastain ABC kit (Vector, Peterborough, UK) with 3-amino-9-ethylcarbazole (for TGF-β1 and TGF-βRII) or with a mixture of 3,3'-diaminobenzidine tetrahydrochloride (Dako) and hydrogen peroxide as the chromogenic substrates. Sections were counterstained with Mayer's hematoxylin. Immunohistochemistry for inflammatory cell infiltrate (ICI) In all samples, immunohistochemistry for the characterization of the ICI was carried out using the following antibody panel: CD20 (1:100), CD45RO (1:100), CD4 (1:20), CD8 (1:50), CD3 (1:100), CD68 (1:50) (Dako, Santa Barbara, CA, U.S.A.). The detection system was the Vectastain ABC kit, as described above. For all immunohistochemistry experiments, negative controls were performed by incubating the sections with the omission of the primary antibody, using the antibody diluent alone or the appropriate non-immune IgG in each case. Double immune-labeling For simultaneous detection of DNA fragmentation and cell proliferation, double labeling was also performed. The TUNEL technique was performed first, with diaminobenzidine as the chromogen.
For MIB1 immunolocalization in the second staining sequence, the sections were stained with 5-bromo-4-chloro-3-indoxyl phosphate/nitro blue tetrazolium (BCIP/NBT Alkaline Phosphatase Kit II, Vector Laboratories, Peterborough, UK).

Image analysis

Immunostaining for TGF-β1 and TGF-βRII was quantified by digital image analysis (Image Pro Plus software version 4.1, Media Cybernetics, Silver Spring, MD) as previously described [ 13 ]. Quantification of TUNEL- and MIB1-positive cells and of ICI was restricted to the alveolar wall. Images for each lung section from the upper and lower lobes were acquired with a 40× lens. In each case at least 50 randomly chosen microscopic fields were analyzed. A total of 5,000 epithelial cells were counted for AI and PI, and the values were expressed as percentages.

Statistical analysis

To avoid observer bias the cases were coded and measurements were made without knowledge of clinical data. Differences between groups were detected using analysis of variance for clinical data and the Kruskal-Wallis test for histological data. The Mann-Whitney U test was performed after the Kruskal-Wallis test when appropriate. The statistical tests used were two-sided. Correlation coefficients were calculated using Spearman's rank method. Probability values of 0.05 or less were accepted as significant. Group data were expressed as means and SD or as medians and range when appropriate.

Results

Clinical data and histological findings

Major clinical data for patients with emphysema are shown in Table 1.
Table 1. Subject characteristics

Case  Sex  Age  Emphysema type  Packs/year  FEV1*  FEV1/FVC*  Transplantation
1     M§   49   AAT deficiency  27          27     27         BSLT*
2     M    59   Smoking         36.5        31     38         RtSLT†
3     F||  62   Smoking         7           17     55         BSLT
4     F    62   Smoking         36.5        13     30         LtSLT‡
5     M    62   Smoking         108         15     45         BSLT
6     F    49   Smoking         73          12     33         BSLT
7     M    47   Smoking         54          22     60         BSLT
8     M    59   AAT deficiency  36.5        20     56         BSLT
9     M    64   Smoking         54          6      42         LtSLT
10    M    63   Smoking         54          24     37         BSLT
11    M    51   AAT deficiency  54          11     25         BSLT
12    M    53   AAT deficiency  108         8      14         BSLT
13    M    45   AAT deficiency  73          17     24         BSLT
14    M    56   Smoking         36.5        15     32         RtSLT
15    F    41   AAT deficiency  54          35     38         BSLT
16    M    51   AAT deficiency  36.5        34     36         BSLT

*BSLT: bilateral single lung transplantation; †RtSLT: right single lung transplantation; ‡LtSLT: left single lung transplantation; §M: male; ||F: female. FEV1 and FVC are given as percentages of predicted values.

Average patient age was 54.4 ± 7.5 years. Mean FEV1 was 19 ± 8.9% of the value predicted for sex, age, and body weight. Bilateral single lung transplantation was performed in 12 of 16 patients. All patients had been heavy smokers: 7 had smoking-associated emphysema only (51 ± 28 pack-years) and 9 had both AAT-deficiency emphysema and a smoking history (55 ± 27 pack-years). For brevity, the abbreviation AAT-deficiency emphysema will be used throughout the manuscript for smoking patients with AAT-deficiency. All patients had quit smoking at least 1 year before undergoing surgery. The average control patient age was 34 ± 16.8 years, and cerebral trauma was the cause of death. All the donors stayed less than two days in intensive care without evidence of lung infection or other complications. During artificial ventilation, airway pressure (Paw) was 20.9 ± 1.5 mmHg and the inspiratory oxygen fraction (FI,O2) was 0.4 ± 0.1. All the samples showed various degrees of emphysematous changes.
In particular, all the patients with AAT-deficiency showed diffuse destruction of alveolar tissue, consistent with panlobular emphysema. In contrast, relatively preserved lower portions of the lungs were observed in patients with smoking-associated emphysema, consistent with centrilobular emphysema.

Immunophenotype analysis

Emphysema patients had an increased number of ICI (CD20, CD3, CD8, CD68, CD45RO, CD4 and PMN) compared with controls (p ≤ 0.01). Increased numbers of CD3 (p ≤ 0.05), CD8 (p ≤ 0.05) and CD45RO (p ≤ 0.001) cells were seen in AAT-deficiency emphysema compared to smoking-related emphysema (Table 2).

Table 2. Inflammatory cells (total cells/mm alveolar wall)

        AAT    Smokers  Controls*  P†
CD20    4.77   2.1      0.0        ns
CD3     25.39  17.84    2.5        <0.05
CD8     12.32  5.07     0.99       <0.05
CD68    4.35   6.88     0.75       ns
CD45RO  25.42  8.34     2.31       0.001
CD4     12.1   12.67    1.89       ns
PMN     18.07  20.78    0.0        ns

Definition of abbreviation: PMN, polymorphonuclear cells. *Control group values were all statistically significantly different from both emphysema groups. †Significant differences between AAT-deficiency and smoking patients.

Analysis of epithelial apoptosis and proliferation

Labeling of DNA breaks by TUNEL demonstrated positive cells localized to peribronchiolar, intra-alveolar and septal sites in both normal and emphysematous lungs. Quantification was limited to the alveolar wall. Apoptotic bodies that were very close to each other were counted as one dying cell. Intra-alveolar apoptotic cells were not included in the cell count. In emphysematous lungs AI ranged from 0.68 to 11.92 (mean 6.3 ± 3.5). TUNEL-positive cells were more frequently detected within the more enlarged alveolar walls. Apoptotic cells and/or bodies were frequently seen in the intra-alveolar lumen, presumably representing apoptotic cells detached from the alveolar wall (Fig. 1). AI was significantly higher in patients than in controls (6.5 ± 3.5 vs 2.7 ± 2.6, p ≤ 0.01) (Fig. 2).
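The apoptotic index itself is simply TUNEL-positive cells over the total epithelial cells counted (5,000 per case), expressed as a percentage. A minimal sketch with hypothetical counts; it assumes the counting rules described above (clustered apoptotic bodies collapsed to one event, intra-alveolar cells excluded) have already been applied upstream:

```python
def apoptotic_index(tunel_positive, total_counted=5000):
    # AI as a percentage of counted epithelial cells.
    return 100.0 * tunel_positive / total_counted

print(apoptotic_index(315))  # 6.3 -- hypothetical count, near the reported mean
```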
When each group was compared separately with the control group, only AAT-deficiency emphysema showed a statistically significant difference (p ≤ 0.01 vs p = 0.09). Increased levels of oligonucleosomal-length DNA fragments were also detected in emphysema patients, particularly in AAT-deficiency emphysema, compared with control lungs (Fig. 3a,b). The PI of patients ranged from 0.19% to 4.81% (mean 1.9 ± 2.2). Similar numbers of MIB1-positive alveolar septal cells were observed in both types of emphysema and in control lungs (1.7 ± 1.1).

Figure 1. AAT-deficiency emphysema, case 1: micrograph showing strong specific staining for DNA strand breaks in the alveolar epithelial cells and in two cells detaching from the wall. TUNEL (original magnification 160×).

Figure 2. AI in controls vs emphysema patients: a significantly higher AI was found in emphysema patients (6.5 ± 3.5 vs 2.7 ± 2.6, p ≤ 0.01). ◊ = AAT-deficiency emphysema; ○ = smoking-related emphysema.

Figure 3. Gel electrophoresis: a) oligonucleosomal-length DNA laddering in emphysematous and control lungs. Lane 1: DNA marker; lanes 2–5: control donor lungs (4 cases); lanes 6–9: AAT-deficiency emphysema patients (4 cases); lanes 10–13: smoking-related emphysema patients (4 cases); lane 14: positive control. b) Quantification of DNA laddering based on scanning densitometry of bands approximately between 1000 bp and 300 bp (arrowhead), followed by normalization to the density obtained with the equivalent band of the thymus DNA positive control (lung sample/control = densitometric ratio), which was included in every oligonucleosomal DNA laddering assay.

TUNEL-positive/MIB1-negative nuclei detected by double staining were seen in all cases, whereas MIB1 was never expressed in any of the TUNEL-positive nuclei (Fig. 4a,b).
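The Figure 3b normalization reduces to a single ratio: each sample's integrated laddering signal (bands between roughly 1000 bp and 300 bp) divided by the thymus DNA positive control run in the same assay. A trivial sketch with made-up densitometry values:

```python
def densitometric_ratio(sample_band_density, thymus_control_density):
    # Laddering signal normalized to the same-gel positive control,
    # as in Figure 3b (values here are hypothetical).
    return sample_band_density / thymus_control_density

print(densitometric_ratio(8400.0, 12000.0))  # 0.7
```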
Figure 4. Smoking-related emphysema, case 3: a) double labeling TUNEL/MIB1 (marker of cell proliferation) showing two apoptotic cells (dark) and one MIB1-positive cell (blue) on the surface of the same alveolar wall (original magnification 160×). b) Note an alveolar cell in proliferation (blue) close to an apoptotic pneumocyte (dark) (original magnification 160×).

In each group, no statistically significant correlations were found between AI and PI, or between AI/PI and ICI. At electron microscopy, typical features of early apoptosis with margination of chromatin at the nuclear membrane and of late apoptosis with completely dense nuclear chromatin, including apoptotic bodies in various stages of degradation, were seen in pneumocytes, endothelial cells and fibroblasts. Typical features of reduplication of vessel basal membranes were frequently seen in cases with more evident apoptosis (Fig. 5a–f). Ultrastructural analysis showed more frequent mitotic features in type II pneumocytes.

Figure 5. AAT-deficiency emphysema (case 1): electron micrograph showing (a) early apoptosis with perinuclear chromatin condensation (arrow) and (b) late apoptosis with dense nuclear chromatin of pneumocytes (arrow). (c) An endothelial cell with condensed chromatin is clearly visible (arrow). (d) Note reduplication of the vessel basal membrane (arrow). In (e) and (f), apoptotic bodies in various degrees of degradation close to a macrophage and an intraluminal apoptotic body are clearly visible (arrows).

TGF-β1 and TGF-βRII receptor analysis

In emphysema patients and controls, TGF-β1 and TGF-βRII were localized in bronchiolar and alveolar epithelial cells and macrophages. Quantitative analysis of TGF-β1 measured in the alveolar wall showed no statistically significant difference between emphysema patients and controls. A higher cytokine expression was noted in patients with smoking-associated emphysema compared with AAT-deficiency disease (mean 8.8 ± 1.7 vs 5.2 ± 3.9, p ≤ 0.05) (Fig. 6).
A significant positive correlation between TGF-βRII and AI (p = 0.005; r2 = 0.8) was seen in control lungs (Fig. 7). A significant negative correlation was found between the TGF-β pathway (particularly TGF-βRII) and the T lymphocyte infiltrate (CD3+) (p ≤ 0.05, r2 = 0.99) in smoking-related cases. No correlation was noted between the TGF-β1 pathway (TGF-β1 and its RII) and the AI/PI of emphysematous lungs.

Figure 6. TGF-β1 expression in smoking-related vs AAT-deficiency emphysema: the graph shows the different cytokine expression in the two types of emphysema. A significantly higher TGF-β1 expression was found in smoking-related emphysema versus AAT-deficiency emphysema (mean value 8.8 ± 1.7 vs 5.2 ± 3.9, p ≤ 0.05).

Figure 7. Correlation between TGF-βRII and AI: the graph shows the correlation between TGF-βRII expression and AI in controls and emphysema patients. The degree of TGF-βRII expression is linearly related to the extent of apoptosis in the control group (p ≤ 0.005, r2 = 0.8).

Discussion

In the present study we have analyzed, for the first time, apoptosis and proliferation in different types of end-stage emphysema. The detection of a high AI in emphysematous lungs even in end-stage disease emphasizes the importance of this phenomenon in the development, and above all the progression, of emphysema. In general there are two main forms of cell death: oncosis and apoptosis. The latter process results in characteristic biochemical features and cellular morphology, such as cell shrinkage and condensation and fragmentation of the nucleus into well contained fragments called apoptotic bodies. Perturbation of normal rates of apoptosis has been implicated in many pathologic conditions such as neurodegenerative, cardiovascular and liver disorders and cancer [ 20 - 22 ].
As stated in Tuder's recent review on apoptosis and its role in emphysema, cell damage, apoptosis, apoptotic cell removal, and cellular replacement are ongoing and presumably highly regulated in order to maintain homeostasis of the entire alveolar unit. The concept of irreversible destruction of alveolar walls due to loss of homeostasis of the alveolar unit is a critical point. Lung inflammation, protease/antiprotease imbalance, oxidative stress and apoptosis could work together to produce the irreversible changes seen in emphysema [ 23 ]. Over-induction of apoptosis and inefficient cellular replenishment, by modifying alveolar homeostasis, would both be expected to disrupt the alveolar wall and thus induce the development of emphysema. Recently, a causal role of apoptosis has been increasingly recognized in the destruction of alveolar walls and airspace enlargement [ 6 - 9 ]. Among the constitutive cell populations of the alveolar wall, epithelial cells are the most frequently susceptible to programmed cell death [ 6 , 9 ]. In our study the AI of epithelial cells was significantly higher in end-stage emphysema cases compared to the control group (p ≤ 0.01), and this was particularly evident in those with AAT-deficiency. To avoid under- or over-estimation of alveolar cell apoptosis and proliferation due to regional disease activity, we analyzed large specimens taken from different lung regions (upper and lower lobes). The lower AI detected in our control lungs underlines an important concept: in non-emphysematous lungs apoptosis is a marginal process adequately balanced by proliferation. The increased number of apoptotic cells in patients with emphysema, not adequately replaced by new epithelial cells, suggests a new mechanism, namely "epithelial apoptosis/proliferation imbalance", in the pathogenesis of the disease.
In our study, in contrast to a recent clinical study by Yokohori et al [ 9 ], a PI similar to that of the control group was detected in the alveolar epithelial cells of emphysema patients. The discrepancies between the two studies can be attributed to several factors: 1) a different monoclonal antibody used for detection of cell proliferation (MIB1 vs PCNA); 2) different case series, ours including patients with end-stage emphysema of different types (smoking-associated and AAT-deficiency emphysema); and 3) analysis of more extensive areas (upper and lower lobes) of emphysematous lung parenchyma. Regarding the monoclonal antibody used for proliferation detection, Ki-67 is now well recognized as the most reliable immunohistochemical marker for the analysis of cell proliferation in formalin-fixed, paraffin-embedded tissue [ 24 ]. Immunostaining for proliferating cell nuclear antigen (PCNA) can also be used in paraffin-embedded tissue, but it may overestimate the proliferation rate given the long half-life of this antigen [ 25 ]. Moreover, simultaneous positive staining for TUNEL and PCNA in the same cells has been reported [ 26 ]. In fact, it has also been demonstrated that PCNA expression can increase without a corresponding increase in S-phase DNA synthesis [ 27 ]. DNA nicks may be seen in cells undergoing DNA synthesis/repair, sometimes producing false TUNEL-positive cells. False positive TUNEL staining can also be generated through non-apoptotic mechanisms: RNA synthesis and splicing, necrosis, as well as artifacts due to preservation methods. Consequently, some authors have stressed the importance of associating other techniques, such as Taq polymerase-based DNA in situ ligation, DNA gel electrophoresis or electron microscopy, in order to avoid false positive labeling and to assess the reliability of apoptosis detection [ 28 ]. Our TUNEL findings were corroborated by employing an additional quantitative apoptosis assay.
Moreover, the presence of different stages of apoptosis was confirmed, and the cells involved in programmed cell death were well characterized, by electron microscopy, which is considered the gold-standard technique for apoptotic cell detection. In our work, double immune-labeling showed that all TUNEL-positive cells were consistently negative for MIB1, thus supporting true epithelial DNA fragmentation. Although the high AI detected in our patients could be mainly explained by a high rate of apoptotic cell death, an impaired clearance mechanism for apoptotic cells/bodies should also be considered. The frequent finding of apoptotic bodies in alveolar walls and within the lumen may support the latter as an important contributing factor to the high AI. The principal trigger of the epithelial injury leading to apoptotic cell death remains unclear. The cytotoxic effects of cigarette smoke, one of the most clearly proven etiologic factors in the development of emphysema and of COPD in general, have been suggested to suppress epithelial proliferation and to induce cell death. In particular, oxidants and aldehydes, major constituents of the volatile phase of cigarette smoke, have been reported to induce apoptosis of lung cells [ 29 ]. Different DNA and RNA viruses have been identified as viral pathogens associated with the disease. Double-stranded DNA viruses such as adenovirus have the ability to persist in airway epithelial cells long after the acute infection has cleared. The expression of adenoviral trans-activating proteins has been demonstrated in the airway epithelium of both human and animal lungs and is associated with an amplification of the cigarette smoke-induced inflammatory response [ 30 ]. Different adenovirus early proteins, in particular E4orf4, have been implicated in the shutoff of host protein synthesis and in the promotion of a p53-independent cell death program [ 31 ].
It is likely that many and various noxious agents come together to play an important role in the progression of cell death in end-stage disease, accounting for the high AI in the alveolar wall detected in our study. The specific molecular pathogenetic pathways that regulate both cell fate and proliferation are also under investigation. Previous studies demonstrated an inhibitory effect of the TGF-β1 pathway on the growth of lung epithelial cells [ 14 , 15 ]. As the TGF-β1 pathway is well known for its anti-inflammatory activity, the higher epithelial expression of TGF-β1 in patients with smoking-related emphysema compared with AAT-deficiency may partially explain the different patterns of inflammation in the two types of emphysema found in our study. A significantly greater degree of inflammation, particularly due to T lymphocytes, was found in AAT-deficiency emphysema (panlobular type) than in smoking-related disease (centrilobular type), with the latter displaying an increased expression of the TGF-β pathway (as demonstrated by the negative correlation with T lymphocyte infiltrate). Similar findings have been previously reported in both in vitro and in vivo studies [ 32 , 33 ]. An increased pro-apoptotic milieu of inflammation-related cytokines may contribute to the higher cell death rate detected in AAT-deficiency emphysema. Moreover, additional cigarette smoke-mediated damage should also be considered in AAT-deficiency emphysema patients, since in our study they were all heavy smokers. In our work a direct correlation between TGF-βRII and AI was found in the control group, showing that this cytokine could play a role in alveolar homeostasis under physiologic conditions. In contrast, no correlation was found between the AI and the TGF-β1 pathway in either type of emphysema, suggesting that this TGF-β1-regulated mechanism is lost in the disease.
Other cytokines besides TGF-β1 could be involved in uncontrolled programmed cell death, inducing the progressive disappearance of the alveolar unit. Decreased expression of VEGF and VEGF R2 has been demonstrated to correlate significantly with apoptosis of both epithelial and endothelial cells in cigarette smoking-induced emphysema [ 7 ]. It has been shown that VEGF receptor signaling is extremely important for the maintenance of alveolar structures. Hence an impairment of its trophic endothelial activity may be one of several factors facilitating alveolar septal cell apoptosis [ 4 , 7 ]. Significantly reduced levels of VEGF have also been detected in the induced sputum of emphysema patients compared to that of normal individuals and patients with asthma [ 34 ]. More recently, in an experimental model, some authors have shown that over-expression of placental growth factor (PlGF) causes a phenotype and pulmonary dysfunction similar to human lung emphysema by inducing apoptotic events in the alveolar septa [ 35 ]. Although epithelial cells have been demonstrated to be more susceptible to apoptosis [ 7 ], endothelial cells are also an important target of programmed cell death. Our ultrastructural analysis showed evidence of endothelial cell apoptosis mainly in those cases with greater alveolar programmed cell death. The presence of a multi-layered vessel basement membrane, as found in many of our emphysematous lungs, may provide additional support for an increased apoptotic rate of endothelial cells. In summary, our work has demonstrated for the first time that apoptosis is extensive even in end-stage emphysema patients. This unique case series, and above all the large variety of lung tissue samples examined (not only subpleural emphysematous regions such as those from lung volume reduction surgery, in which apoptosis could already be switched off), may account for the differences between our AI findings and those of other studies [ 9 ].
The higher rate of apoptotic cell death in patients with AAT-deficiency emphysema, partially influenced by the higher degree of inflammation, may allow us to consider this peculiar emphysema subtype as an additional modifier of apoptosis. Whether the "apoptosis/proliferation imbalance" occurs before, after or at the same time as the "elastase/antielastase imbalance" is still unknown and should be the subject of future studies.

Limitations of the study

Our study had a few limitations. Firstly, the patients with AAT-deficiency cannot be considered pure AAT-deficiency emphysema cases, because these patients were also smokers. Panlobular emphysema occurs at a younger age in alpha-1-antitrypsin patients, especially if the patients smoke cigarettes, as in our case series (49.8 ± 5.7 yrs AAT-deficiency vs 58.2 ± 6.3 yrs smokers, p ≤ 0.01). Patients with AAT-deficiency who smoke develop impaired lung function earlier and in a more severe form than their non-smoking counterparts. Thus, it is extremely rare to have non-smoking patients with AAT-deficiency as candidates for lung transplantation. Secondly, the clinical characterization of the donors was limited and, according to the guidelines for the selection of donor lungs, smokers were not excluded [ 36 ]. Smokers could therefore have been included in the control group, and it is well known that smoking itself may induce apoptosis. However, if this was the case, the AI difference between emphysema and control patients would have been even higher because of the lower AI in healthy patients. A third potential bias is that all the donors were mechanically ventilated before lung transplantation, and it is known that mechanical ventilation may induce lung apoptosis [ 37 ]. Again, the difference observed in our study would have been even greater against non-ventilated controls, further supporting the finding that enhanced apoptosis may act as a leading mechanism in the pathogenesis of emphysema.
Conclusions

Our study analyzed apoptosis and proliferation in end-stage emphysema. In particular, the work described for the first time a high AI in patients with AAT-deficiency emphysema. Ultrastructural investigation, TUNEL analysis and oligonucleosomal-length DNA laddering, performed on different lung regions, were all used for detection of the apoptotic phenomenon. The increase in apoptotic cells in patients with emphysema, not adequately replaced by new epithelial cells, suggests a new mechanism, namely "epithelial apoptosis/proliferation imbalance", in the pathogenesis of the disease. More inflammation, particularly due to T lymphocytes, was observed in AAT-deficiency emphysematous lungs. An increased pro-apoptotic milieu of inflammation-related cytokines may contribute to the higher cell death rate detected in AAT-deficiency emphysema. While a direct correlation between TGF-βRII and AI was found in the control group, no relation was found between the AI and the TGF-β1 pathway in end-stage emphysema, suggesting that the influence of the TGF-β pathway on epithelial turnover is lost in the disease. Knowledge of the mechanisms responsible for activation and progression of the apoptotic cascade could, in the near future, inform more appropriate stratification and treatment of the disease.
Authors' contributions

FC: conceived of the study and participated in its design and coordination. CG: substantial contribution to study design and data interpretation. BB: acquisition of clinical data and critical revision for important intellectual content. FR: thoracic surgeon providing lung specimens and critical revision for important technical aspects. ML: thoracic surgeon providing lung specimens and critical revision for important technical aspects. RZ: critical revision for important intellectual content. GP: acquisition of clinical data. SB: acquisition of clinical data and performance of the statistical analysis. MS: participated in the design of the study and gave critical revision for important intellectual content. MV: substantial contribution to study design and data interpretation. All the authors read and approved the final manuscript.
Tumour Suppressor Genes—One Hit Can Be Enough

A paper published in 1998 showed that loss of only one copy of the p53 tumor suppressor gene is sometimes enough to initiate carcinogenesis.

For people who received their introduction to cancer genetics in college in the first half of the 1990s, everything looked simple and straightforward. It was the stuff you could explain to sincerely interested relatives who wanted to know what you were spending your time on. There were oncogenes and there were tumour suppressor genes. Oncogenes were overactive genes and proteins that somehow caused cancer because they were overactive; therefore, they were dominant. Tumour suppressor genes were genes that would normally prevent a tumour from happening and that needed to be inactivated for a tumour to start to form; both copies of a tumour suppressor gene had to be inactivated, and the mutation was recessive. If inactivation of these genes is a random process, it was understandable that people who inherit an inactivated copy of a tumour suppressor gene had a higher risk of developing the associated form(s) of cancer than people born with two normal copies, as postulated in Alfred Knudson's (1971) two-hit model. And, indeed, it was shown that in the tumours in these predisposed patients, the remaining wild-type copy of the tumour suppressor gene was lost, a process referred to as loss of heterozygosity. For me, in 1998 things started to change. Venkatachalam et al. (1998) published a paper in the EMBO Journal describing a detailed study of tumours in mice lacking one copy of the p53 tumour suppressor gene ( Trp53 ). This gene is known to be the most mutated gene in human cancer and its function to be central to many processes that are involved in the cellular prevention of cancer. Mice lacking both copies of this gene are for the most part viable, but succumb to cancer (mainly thymic lymphomas) at three to five months of age ( Donehower et al. 1992 ).
Mice born with one copy of the Trp53 gene start to develop cancer at around nine months, and incidence increases with age. In their study, Venkatachalam and colleagues analysed an impressive group of 217 Trp53 +/− mice of controlled genetic background and followed the fate of the Trp53 wild-type allele in the tumours. According to the two-hit model, it was expected that in these tumours this copy would have been lost or inactivated. However, this turned out not to be the case. Half of the tumours from mice younger than 18 months were found to have retained the wild-type copy of Trp53 , a number that increased to 85% in mice older than 18 months. In two tumours, the researchers sequenced the complete coding region of the remaining wild-type allele and showed it was structurally intact. To exclude the possibility of downregulation or inactivation at the level of protein expression, they irradiated tumour-bearing mice prior to sacrifice, a treatment known to increase p53 protein levels via posttranslational mechanisms. Their data showed the retained wild-type allele in these tumours was expressed normally and suggested it had a normal wild-type conformation. Next, the authors did a rigorous test of different functions of the p53 protein. They first tested whether the tumours showed amplification of Mdm2. This protein, whose expression is regulated by p53, stimulates breakdown of p53, thereby forming a negative feedback mechanism that keeps p53 levels low. Some tumours therefore amplify the Mdm2 gene as a means of inactivating p53. However, this was not found in the tumours from the Trp53 +/− mice. Subsequently, the researchers tested to what extent the retained Trp53 copy behaved normally. Irradiation of many tissues leads to p53-dependent apoptosis, and, indeed, in tumours that had retained the wild-type allele, irradiation did lead to an increase in apoptosis, whereas in tumours that had lost the wild-type allele, it did not. 
The p53 protein is known to function as a transcriptional regulator by either up- or downregulating target genes in response to different forms of cellular stress, including irradiation-induced DNA damage. The authors studied the expression of two p53-upregulated genes ( Cdkn1a , which encodes p21, and Mdm2 ) and one downregulated gene ( Pcna ) in p53-positive tumours after irradiation and showed responses indicative of normal p53 function. Furthermore, it was shown that the p53 protein from the tumours was able to bind to a p53-binding DNA sequence in an in vitro setting. Finally, since it is known that p53 absence in tumours is correlated with chromosomal instability, Venkatachalam et al. (1998) used comparative genomic hybridisation to compare this feature between p53-negative and p53-positive tumours and found a 5-fold greater stability in the latter. In short, this paper clearly showed that, at least in mice, in many Trp53 +/− tumours the wild-type allele of Trp53 is not only retained, but also appears to function normally. This strongly suggested that a decrease in p53 dosage is already sufficient for tumourigenesis, a phenomenon referred to as haploinsufficiency. Shortly before, the group of Moshe Oren ( Gottlieb et al. 1997 ) had shown, using a p53-responsive lacZ reporter gene in transgenic mice, that a Trp53 +/− background leads to a greater than 50% reduction in p53 activity. Venkatachalam and colleagues suggested the strong concentration dependence of p53 function could be explained by the fact that p53 functions as a tetramer. A 50% decrease in p53 monomers can easily be imagined to result in a greater than 50% decrease in functional tetramers, which in turn increases the chances of these cells becoming cancerous. This paper by Venkatachalam et al. (1998) made me realise how important it is to remain critical, even of long-established theories and models.
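The tetramer argument above can be made concrete with a mass-action toy model: if tetramer abundance scales roughly as the fourth power of the free-monomer concentration, then halving the monomer pool cuts tetramer levels far more than 50%. This is an illustrative simplification (real p53 assembly is cooperative and regulated), not the paper's measured result:

```python
def relative_tetramer_level(monomer_fraction, subunits=4):
    # Mass-action approximation: [tetramer] ~ [monomer] ** 4,
    # normalised so that a full monomer pool gives 1.0.
    return monomer_fraction ** subunits

print(relative_tetramer_level(0.5))  # 0.0625 -> ~6% of normal, a >50% drop
```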
Since then, haploinsufficient mechanisms have been described for more tumour suppressor genes in humans and mice (reviewed in Fodde and Smits 2002 ). For instance, in a recent paper in PLoS Biology , Trotman et al. (2003) used mouse models to describe how the dosage of the Pten tumour suppressor gene influences the occurrence of prostate cancer. Further genes have been described with other unexpected roles in the tumourigenic process. There is a long-standing debate in the literature about the number and role of mutations in a tumour, and without going into the details, it is clear that haploinsufficient mechanisms for tumour suppressor genes will greatly influence the statistics on which these discussions are based. At a time when microarray analysis has become a standard experiment and the many thousands of changes in tumour cells are analysed across the whole genome, it is important to keep in mind that the correct interpretation of this wealth of information might be more complicated than the widely accepted models would have us believe.

Figure 1. Initiating Genetic Aberrations in Tumourigenesis. (A) According to the two-hit model, the first hit at the rate-limiting tumour suppressor gene provides no selective advantage for the cell. Only after the loss of the second allele of this gene is tumour formation initiated. Extra genetic changes are needed for complete transformation of the cell. (B) In a haploinsufficient mechanism, the first hit on the rate-limiting tumour suppressor gene already provides the cell with sufficient selective advantage to initiate tumour formation. Further events are necessary for complete transformation. These events might or might not include the loss of the wild-type allele of the rate-limiting tumour suppressor gene.
A unique dedifferentiated tumor of the retroperitoneum. Background Dedifferentiated liposarcomas represent heterogeneous tumors with lipomatous and nonlipomatous elements starkly juxtaposed. It is thought that the high grade nonlipomatous elements of the tumor portend a worse prognosis. Case Presentation A 19.8 kg heterogeneous retroperitoneal tumor was successfully and completely resected. Because of its extent, no additional treatment modalities were practicable. The tumor soon recurred. The recurrent tumor differed from the primary tumor in that it was more homogeneous, consisting mainly of nonlipogenic, calcific tissue. Conclusions Dedifferentiated liposarcomas are known to have a very high recurrence rate. The biological behavior of dedifferentiated liposarcomas is likely dictated by the most aggressive element of these heterogeneous tumors. Background Sarcomas arising from the retroperitoneum are rare tumors, accounting for 10–15% of all soft tissue sarcomas [ 1 ]. Liposarcoma is the single most common soft tissue sarcoma and accounts for at least 20% of all sarcomas in adults [ 1 ]. Classification of liposarcoma into four types, based on morphologic features and cytogenetic aberrations, is now widely accepted [ 2 ]. These four types are (a) well differentiated; (b) dedifferentiated; (c) myxoid/round cell and (d) pleomorphic. The extent of differentiation, as reflected by histological grade, remains the most important determinant of clinical course and of ultimate prognosis for patients with liposarcoma after resection. The following case illustrates the great morphological and biological heterogeneity of these tumors. A very rapid recurrence was observed, and this recurrence was considerably less heterogeneous than the primary tumor, consisting mainly of the calcific, nonlipomatous component. Case presentation A 65-year-old male presented with a three-week history of progressively worsening abdominal distension.
He denied any abdominal pain but stated that he noticed an increased frequency of bowel movements. His past medical history was unremarkable. On examination, he was afebrile and had a hugely distended abdomen with an immobile, nontender mass occupying all four quadrants of the abdomen. Computed tomographic (CT) scan revealed a large, heterogeneous lobulated mass occupying most of the abdomen (Figure 1 ). The peripheral component appeared lipomatous and the margins of this component were difficult to estimate accurately. There was also a heterogeneous nonlipomatous component that contained areas of lesser density, as well as a central stellate region of calcifications. The preoperative differential diagnosis included a retroperitoneal sarcoma (especially dedifferentiated liposarcoma), desmoid tumor, undifferentiated carcinoma, carcinoid or sclerosing mesenteritis; lymphoma was also considered. Figure 1 CT appearance at multiple cuts (A – E) of a huge retroperitoneal mass with lipomatous and nonlipomatous components. The nonlipomatous component (arrow X, Panel D) contains calcific elements. Note the posterior extension of the lipomatous component (arrow Y, Panel D), which extends superiorly (Panels A – C). The exact boundaries of this component are difficult to appreciate on CT. While neoadjuvant chemotherapy and radiation comprise a frequent approach for retroperitoneal sarcomas at our institution, the extent of the tumor made this approach unfeasible. Resection was therefore planned unless an intraoperative biopsy revealed lymphoma. Resection was accomplished through a T-type incision (Figure 2 ), and entailed removal of the right kidney, terminal ileum, ascending colon, sigmoid colon and the left spermatic cord structures, all of which were intimately attached to the mass. Encasement of the external iliac artery and vein was also encountered near the end of the procedure. 
This was not fully appreciated preoperatively, as that component of the tumor was so much less conspicuous on CT than the rest of the tumor, given its fatty consistency (Figure 1E ). The mass was split in half to facilitate dissection from the iliac vessels. An anastomosis was constructed from descending colon to rectum. A transverse colon mucous fistula and an ileostomy were brought out, as the ileum was dusky at the end of the procedure. Figure 2 Operative exposure of the tumor through a T-type abdominal incision. The tumor was submitted for histological examination in two parts measuring 30.0 × 10.0 × 6.5 cm and 32.0 × 20.0 × 10.0 cm, weighing 19.8 kg in total (Figure 3 ). On gross examination the mass was variegated, with fleshy and solid cystic degeneration containing areas of osseous consistency. Microscopic examination revealed a juxtaposition of well differentiated liposarcoma and a spindle cell sarcoma with heterologous chondrosarcomatous elements consistent with a dedifferentiated liposarcoma (Figure 4 ). Figure 3 En bloc resection specimen of heterogeneous tumor with attached organs. Note the lipomatous regions (A), the calcified areas (B), and the remaining nonlipomatous component (C). Figure 4 Histologic appearance of various elements of the tumor as sampled in different regions. A. Well differentiated liposarcoma. B. Low grade spindle cell component. C. Cellular spindle cell component. D. Chondrosarcomatous component. The patient was discharged from hospital on the seventh postoperative day. He was followed in the surgical oncology outpatient clinic monthly. Four months after resection the patient had a follow-up CT scan, which demonstrated an intra-abdominal recurrence consisting almost completely of a calcified, nonlipomatous tumor (Figure 5 ). The patient died one month later. Figure 5 CT appearance at multiple cuts (A – D) of the recurrent tumor.
The recurrence was more homogeneous than the primary tumor, consisting almost completely of the calcific component of the nonlipomatous portion of the tumor. Discussion The patient in our case manifested the dedifferentiated variant of liposarcoma. The term "dedifferentiated liposarcoma" refers to the development of a high grade nonlipogenic sarcoma juxtaposed to a well differentiated liposarcoma [ 3 ]. The majority (80 – 90%) occur primarily de novo , although secondary dedifferentiation can occur with multiple recurrences of a well differentiated liposarcoma [ 4 ]. CT and magnetic resonance imaging scans typically reveal well defined nonlipomatous masses associated with fatty tumor; the transition between the two components is characteristically abrupt, although blended transitions are seen in about 20% of cases [ 5 ]. Calcifications appear in about 30% and usually correspond to osseous metaplasia, although they may represent osteosarcomatous or chondrosarcomatous elements. The most frequent phenotype of dedifferentiation is that of a high grade pleomorphic malignant fibrous histiocytoma-like sarcoma [ 4 , 6 ]. Other phenotypes observed include leiomyosarcomatous, rhabdomyosarcomatous, osteosarcomatous and angiosarcomatous elements, as well as other nonlipogenic elements [ 3 , 7 ]. A further distinctive pattern in some cases is the presence of micronodular spindle cell whorls, often associated with ossification [ 8 ]. Among liposarcomas, the presence of features of the dedifferentiated variant strongly portends a worse prognosis. The overall 5-year survival of dedifferentiated liposarcomas is 20%; the 5-year survival of well differentiated liposarcomas is 83% [ 9 ]. Dedifferentiated liposarcomas recur locally in 40 – 83% and distant metastases appear in 15 – 30% [ 4 , 7 , 9 ]. Therefore, histomorphologic features impact outcomes related to retroperitoneal liposarcomas.
While generally the phenotype of the nonlipogenic component does not impact prognosis of dedifferentiated liposarcomas, the presence of calcifications has been identified as an adverse prognostic factor [ 5 ]. In the present case, the biologically most aggressive element was clearly the calcified (chondrosarcomatous) component. That is, the recurrence was less heterogeneous than the primary tumor, as it had widespread and dense calcifications, but no obvious lipomatous elements. Complete resection of the tumor is perhaps the most important factor determining long-term survival. Unfortunately, the rate of complete resectability is only about 53% [ 10 ]. As illustrated in the present case, in addition to the limitations imposed by the retroperitoneal anatomy, another obstacle to successfully obtaining margins is the difficulty in distinguishing normal retroperitoneal fat from the lipogenic component of the tumor [ 9 ]. This was illustrated by the underestimation of the extent of the tumor around the iliac vessels. Moreover, the intraoperative decision to remove the kidney was made in view of the difficulty in distinguishing normal perinephric fat from neoplastic fat; on microscopic examination, the kidney was not involved by tumor. Indeed, in a series of retroperitoneal liposarcomas from Memorial Sloan-Kettering Cancer Center, nephrectomy was performed in 38% of patients, although the number in which the kidney was actually involved on pathology was not reported [ 9 ]. Thus, anatomical constraints and difficulty distinguishing more differentiated fatty tumor from normal fat limit the surgeon's ability to confidently and completely remove all neoplastic elements. Conclusions Dedifferentiated liposarcomas represent aggressive variants of liposarcomas. Each morphological element of these heterogeneous tumors may manifest completely different biology.
The overall biological behavior of dedifferentiated liposarcomas is likely dictated by the most aggressive element, which typically resides in the nonlipomatous portion of the tumor. Competing Interests None declared. Authors' Contributions SK, HB, WT, and OB made substantial contributions to the intellectual content of the paper, in the interpretation of data, and in drafting the manuscript. All authors read and approved the final manuscript.
Cytoskeletal influences on nuclear shape in granulocytic HL-60 cells. Background During granulopoiesis in the bone marrow, the nucleus differentiates from ovoid to lobulated shape. Addition of retinoic acid (RA) to leukemic HL-60 cells induces development of lobulated nuclei, furnishing a convenient model system for nuclear differentiation during granulopoiesis. Previous studies from our laboratory have implicated nuclear envelope composition as playing important roles in nuclear shape changes. Specifically noted were: 1) a paucity of lamins A/C and B1 in the undifferentiated and RA treated cell forms; 2) an elevation of lamin B receptor (LBR) during induced granulopoiesis. Results The present study demonstrates that perturbation of cytoskeletal elements influences nuclear differentiation of HL-60 cells. Because of cytotoxicity from prolonged exposure to cytoskeleton-modifying drugs, most studies were performed with a Bcl-2 overexpressing HL-60 subline. We have found that: 1) nocodazole prevents RA induction of lobulation; 2) taxol induces lobulation and micronuclear formation, even in the absence of RA; 3) cytochalasin D does not inhibit RA induced nuclear lobulation, and prolonged exposure induces nuclear shape changes in the absence of RA. Conclusions The present results, in the context of earlier data and models, suggest a mechanism for granulocytic nuclear lobulation. Our current hypothesis is that the nuclear shape change involves factors that increase the flexibility of the nuclear envelope (reduced lamin content), augment connections to the underlying heterochromatin (increased levels of LBR) and promote distortions imposed by the cytoskeleton (microtubule motors creating tension in the nuclear envelope). Background Granulopoiesis, the differentiation of peripheral blood granulocytes, involves dramatic nuclear and cytoplasmic structural changes [ 1 ].
Committed bone marrow progenitor cells possess ovoid-shaped nuclei with prominent nucleoli and a paucity of heterochromatin. The mature terminally differentiated human neutrophil (polymorphonuclear granulocyte) exhibits a distinctly lobulated (segmented) nucleus with shrunken nucleoli and extensive peripheral heterochromatin. The mature neutrophil is released into the bloodstream, where it circulates as a round unpolarized cell. Responding to chemotactic agents produced by infection and tissue damage, the circulating neutrophil changes cell shape, converting to a rapidly migrating polarized cell. Mature granulocytes have a limited lifespan, succumbing to apoptosis within a few days following release into the bloodstream. Several established tissue culture cell lines have been investigated as model systems for understanding the events and mechanisms of granulopoiesis [ 2 ]. Previous studies from our laboratory have employed the HL-60 cell system to examine nuclear lobulation and cytoskeletal polarization [ 3 - 5 ]. HL-60 cells exhibit granulocytic differentiation in response to added retinoic acid (RA) [ 6 ], eventually undergoing apoptotic death [ 7 ]. Our studies on RA induced granulocytic differentiation of HL-60 cells implicated two major factors in the nuclear lobulation process: 1) very low cellular levels of lamins A/C and B1; 2) a significant increase in cellular levels of lamin B receptor (LBR). Ultrastructural studies suggested that during nuclear differentiation, nuclear envelope surface area was increased [ 3 ]. In addition to nuclear lobulation, these studies demonstrated the formation of extensive nuclear envelope outgrowths, denoted "nuclear envelope-limited chromatin sheets" or ELCS. The suspected role of LBR was confirmed in subsequent studies [ 8 , 9 ], which demonstrated that a genetic deficiency of LBR correlates with hypolobulated granulocyte nuclei in the human Pelger-Huet anomaly [ 8 ] and the murine Ichthyosis mutation [ 9 ].
LBR is an integral membrane protein of the nuclear envelope inner membrane, with putative interactions to lamin B, chromatin and HP1α [ 10 ]. The mechanistic relationships between changes in nuclear envelope composition and granulocytic nuclear lobulation are currently unknown. The position of the interphase nucleus within the cell appears to be regulated by microtubules and associated dynein [ 11 ] and actin and associated spectrin-like proteins [ 12 - 15 ]. However, only fragmentary data exist on the relationship, if any, between cytoskeletal elements and nuclear shape. Absence of intermediate filaments (vimentin) has been correlated with nuclear envelope folds or invaginations [ 16 ]. Employing HL-60 cells, evidence has been published that neither microtubules (MTs) nor the actin microfilament system are essential for the establishment of nuclear lobulation [ 17 ]. Our laboratory chose to investigate these conclusions concerning HL-60 cells in greater detail. Employing the same cell subline (HL-60/S4), we demonstrated that brief (i.e., 2 and 4 hour) treatments of undifferentiated or granulocytic (RA treated) cells with various cytoskeletal modifying chemicals had no obvious effects upon nuclear shape, although cell shape was strongly affected [ 4 ]. However, prolonged (i.e., 2 day) exposure of undifferentiated HL-60/S4 cells to nocodazole (NC, disrupts MTs) or taxol (TX, stabilizes MTs), but not cytochalasin D (CD, disrupts actin microfilaments), resulted in rapid apoptotic death. For this reason, the present study emphasizes prolonged exposure of Bcl-2 overexpressing HL-60 cells [ 18 , 19 ] to cytoskeleton-modifying chemicals. These cells are more refractory to undergoing apoptosis, but can still be induced with RA to exhibit granulocytic differentiation.
Employing HL-60- bcl -2 cells [ 18 ], we demonstrate that the integrity of the MT system, but not the actin microfilament system, is essential for the nuclear lobulation process during in vitro granulopoiesis. Prolonged exposure of HL-60- bcl -2 cells to CD or TX does lead to perturbations of nuclear shape, independently of RA induced nuclear differentiation. Results HL-60/S4 cells undergo retinoic acid induced nuclear lobulation in the presence of cytochalasin D, but do not survive nocodazole or taxol treatment HL-60/S4 cells were cultivated for up to four days in medium with (or without) 1 μM RA and with (or without) 1 μM CD. Daily samples of these four cultivation conditions were cytospun on microscope slides and Wright-Giemsa stained for examination. The image data from day 3 are presented (Figure 1a,1b,1c,1d ). Control cells (Figure 1a ; no RA or CD) possessed single ovoid nuclei (>99% of cells) throughout the entire period; CD treatment alone (Figure 1b ) exhibited primarily multinucleated cells. RA treated controls (Figure 1c ; no CD) revealed clear nuclear indentation and lobulation by day 3 [ 3 ]; RA plus CD treated cells revealed indented and lobulated multiple nuclei (Figure 1d ). HL-60/S4 cells exposed to 1 μM CD, with or without RA, became mostly multinucleated by day 3 (approximately 4% mono-, 56% bi-, 20% tri-, and 18% tetranucleated). Evidence of apoptosis became apparent by day 4. In addition, immunostaining of RA plus CD treated cells (day 4) with anti-lamin B (data not shown) clearly indicated stained "patches" between nuclear lobes, interpreted as ELCS [ 3 ]. These results are in complete agreement with a previous publication [ 17 ], while extending the monitoring of HL-60/S4 nuclear differentiation over a longer period. Figure 1 Wright-Giemsa stained HL-60 cells following exposure to RA and CD. HL-60/S4 cells (Panels a-d): a, undifferentiated; b, 1 μM CD; c, 1 μM RA; d, 1 μM RA plus 1 μM CD. Three days of drug exposure.
HL-60- bcl -2 cells (Panels e-h): e, undifferentiated; f, 1 μM CD; g and h, 1 μM RA plus 1 μM CD. Seven days of drug exposure. Scale bar: 10 μm. The previous publication [ 17 ] also examined the role of MTs on nuclear lobulation in HL-60/S4 cells. The authors exposed RA treated cells to 0.1 μM (0.03 μg/ml) NC for up to two days, concluding that the integrity of MTs was not important to the formation of granulocytic nuclear lobes. We repeated this experiment, including a separate comparison of HL-60/S4 cells exposed to 0.1 μM TX. The results clearly indicated ~100% apoptotic cell death by day 2 with either NC or TX, preventing any firm conclusion as to whether MT integrity is important for nuclear lobulation. We concluded that exploration of this issue would be better accomplished using HL-60- bcl -2 cells, which are more refractory to undergoing rapid apoptosis. HL-60- bcl -2 cells exhibit nuclear lobulation in response to retinoic acid Bcl-2 overexpressing HL-60 cells remain viable for longer periods than the parent cell line, following exposure to RA [ 18 , 19 ]. A significant fraction of RA treated cells survive for up to two weeks; some even for three weeks, exhibiting granulocytic characteristics including: nuclear lobulation, surface antigen expression and nitroblue tetrazolium reduction [ 18 , 19 ]. Figure 2 presents a morphological analysis of Wright-Giemsa stained HL-60- bcl -2 cells for up to three weeks following addition of 1 μM RA. For the sake of analysis, we distinguished four nuclear morphology categories: ovoid (Figure 2a ), indented (Figure 2b ), lobulated (Figures 2c and 2d left) and multilobed (Figures 2d right, 2e and 2f ). A multilobed granulocytic nucleus is one that displays five or more lobes, a diagnostic hematological criterion observed in blood smears from humans with megaloblastic anemia [ 20 ]. Figure 3 is a graphical representation of nuclear morphological changes during RA induced granulocytic differentiation.
At one week following addition of RA, ~80% of the cells reveal nuclear differentiation, with ~95% viability (trypan blue exclusion). From 12 to 21 days post-addition of RA, small changes in the distribution of nuclear morphologies are apparent. Also during this period, cellular debris from dying cells becomes increasingly evident. These observations are in agreement with detailed viability measurements [ 18 ], which indicate a significant loss of viability after ~14 days post addition of RA. Figure 2 Nuclear differentiation during RA induced granulopoiesis of HL-60- bcl -2 cells monitored by Wright-Giemsa staining. Examples of the four nuclear morphological states: a, ovoid; b, indented; c, lobulated; d, lobulated (left) and multilobed (right); e and f, multilobed. Scale bar: 10 μm. Figure 3 Time course of nuclear differentiation in RA treated HL-60- bcl -2 cells over a three-week period. The percentage of cells in each of the four nuclear categories from day 0 to day 21 (following addition of RA): ovoid; indented; lobulated; multilobed. An immunoblotting analysis from total cell extracts of RA differentiating HL-60- bcl -2 cells is presented in Figure 4 , focusing upon nuclear envelope components. For a detailed comparison with total cell extracts from differentiating HL-60/S4 cells, see [ 5 ]. Based upon densitometric analyses from three separate immunoblotting experiments, LBR showed an increased amount for days 7 to 14 (~2 to 3-fold, compared to the level at day 0). This increase was seen with both the expected ~70 kD protein band and a ~53 kD band, which may represent a proteolytic product [ 4 ]. By contrast, other nuclear proteins (lamin B2, LAP2 α, LAP2 β and emerin) exhibited little-or-no increase in levels, but revealed decreases (especially after day 12). LAP2 β and emerin consistently revealed double bands, which might represent protein modification (e.g., phosphorylation). 
The low levels of lamins A/C and B1 revealed a relative constancy throughout the period of RA-induced granulocytic differentiation, consistent with earlier observations on HL-60/S4 cells [ 5 ]. The loss of proteins on (or after) day 12 may reflect a combination of programmed gene expression and programmed protein degradation. Therefore, experiments on the effects of various cytoskeleton-modifying drugs upon HL-60- bcl -2 cells were confined to a 10-day period following the addition of RA. Figure 4 Immunoblot of total cell extracts of RA treated HL-60- bcl -2 cells for up to three-weeks of differentiation. Nuclear antigens: LBR, lamin B receptor; LMNB2, lamin B2; LMNB1, lamin B1; LMNA/C, lamin A/C; LAP2β; LAP2α; Emerin. The lanes contained comparable amounts of total proteins as judged by Ponceau S staining of the PVDF membrane. Days following addition of RA are indicated. Confocal immunofluorescence studies of HL-60- bcl -2 reveal the existence of micronuclei Immunofluorescent staining was performed on undifferentiated and RA treated HL-60- bcl -2 cells to better document nuclear shape and antigen localization. In the course of the experiments on undifferentiated cells, we discovered that ~10% of the cells exhibited one or more micronuclei, in close proximity to the main nucleus. Furthermore, co-localization experiments revealed that the vast majority of micronuclei contained lamin B, but reduced levels of LBR, in comparison to the adjacent main nucleus. Figure 5 presents a gallery of anti-LBR and anti-lamin B stained cells with a single main nucleus and adjacent micronuclei. As revealed by TO-PRO-3 staining, the micronuclei also contained DNA. During granulocytic differentiation the round micronuclei continue to be visualized, even as the main nuclei are undergoing indentation and lobulation (Figure 6 ). Again the majority of micronuclei maintain a relative deficiency of LBR, compared to lamin B. 
The nuclear envelope and patches of ELCS (bright yellow regions) appear to possess significant local concentrations of both LBR and lamin B. Also shown in Figure 6 (right column, arrow) is a cell containing apoptotic bodies, exhibiting persistence of LBR staining, with reduced lamin B reactivity, in agreement with observations that lamin B is degraded prior to LBR [ 21 ]. Figure 5 Gallery of confocal immunofluorescent images of stained nuclei from undifferentiated HL-60- bcl -2 cells. Cells were selected that demonstrate micronuclei adjacent to a main nucleus: LMNB, lamin B; LBR, lamin B receptor; DNA, TO-PRO-3 stain. Notice that micronuclei frequently exhibit a relative deficiency of LBR compared to the main nuclei, but show comparable amounts of LMNB. Scale bar: 10 μm. Figure 6 Gallery of confocal images of LMNB and LBR stained nuclei during RA induced granulocytic differentiation. Note the increased lobulation and the persistence of micronuclei (frequently exhibiting a relative deficiency of LBR). The arrowhead points to an apoptotic cell with apoptotic bodies. Scale bar: 10 μm. In an effort to better define the nature of the nuclei in differentiating HL-60- bcl -2 cells, we explored their immunostaining properties (Figure 7 ). Our results indicate that main nuclei and many micronuclei contain centromeres (reactivity with CREST antisera), heterochromatin (reactivity with anti-dimethyl H3K9) and nucleolar staining. Figure 7A also indicates that the Golgi apparatus (anti-p58) retains its discrete juxtanuclear form in undifferentiated and granulocytic cell types. In addition, the ER (anti-calreticulin) is present in the spaces between nuclear lobes of granulocytic HL-60- bcl -2 cells. A confocal stereo image of immunostained HL-60- bcl -2 cells is also presented in Figure 7B , demonstrating a micronucleus deficient in LBR and containing multiple centromeres. 
Micronuclei have not been reported to occur in granulocytes or during granulopoiesis, underscoring that HL-60 cells are clearly abnormal in certain aspects of nuclear and cellular physiology (see [ 3 ] for other differences, compared to normal granulocytes). Figure 7 Confocal immunofluorescent images of HL-60- bcl -2 cells. A. Gallery of undifferentiated and RA treated (7 days) cells. The columns (left to right) are: anti-centromere (CREST); anti-dimethyl H3K9; anti-nucleolus; anti-Golgi p58; anti-calreticulin. LMNB is always shown in green; the different antigens in red. Scale bar: 10 μm. B. Stereo image of undifferentiated HL-60- bcl -2 cells stained with anti-centromere (red), anti-LMNB (blue) and anti-LBR (green). Note the presence of centromeres within a micronucleus, which exhibits a deficiency of LBR. Scale bar: 10 μm. Retinoic acid induced nuclear differentiation of HL-60- bcl -2 cells occurs in the presence of cytochalasin D HL-60- bcl -2 cells were exposed to 1 μM CD with (or without) 1 μM RA for up to 7 days, cytospun onto microscope slides, methanol-fixed, air-dried and stained with Wright-Giemsa. The results were generally similar to those described above for HL-60/S4 cells. Several examples of Wright-Giemsa stained HL-60- bcl -2 cells treated with CD or with RA plus CD are shown in Figure 1e,1f,1g,1h . Viability of both the CD-only and the RA plus CD cells was ~60% by day 7. By day 2 following addition of CD with (or without) RA, ~70% of the cells were binucleated. A slow progression to tri-, tetra- and >tetranucleated cells was observed, leading to a population with ~20% mono-, ~50% bi- and ~30% multinucleated cells by day 7. Lobulation and ELCS formation were evident in the cells exposed to RA for 7 days; but nuclei within the CD-only cells also revealed distorted and folded nuclear shapes. This was best observed when cytospun HCHO-fixed cells were immunostained and viewed by confocal microscopy (Figure 8 ).
Therefore, it seems reasonable to conclude that CD does not prevent nuclear lobulation and ELCS formation induced by RA. Furthermore, prolonged exposure to CD alone exerts direct (or indirect) effects upon nuclear shape in undifferentiated HL-60 -bcl- 2 cells. Figure 8 Gallery of confocal images of CD treated undifferentiated and RA differentiated HL-60- bcl -2 cells. CD and RA treatment were for 7 days. The columns (left to right) are: anti-nucleolus; anti-Golgi p58; anti-α-tubulin. LBR is always shown in green; the other antigens in red. Nocodazole prevents nuclear lobulation in retinoic acid treated HL-60- bcl -2 cells HL-60 -bcl- 2 cells were exposed to 0, 0.1 or 1.0 μM NC, with 1 μM RA for up to 10 days. Samples were harvested for Wright-Giemsa staining at 1, 2, 3, 4, 7 and 10 days; for immunostaining at day 8. Figure 9 summarizes the distribution of nuclear profiles in the cytospun, methanol-fixed, air-dried and Wright-Giemsa stained preparations. The progressive disappearance of ovoid and increase in indented and lobulated nuclear forms is clear when NC is not present. On the other hand, the presence of 0.1 or 1.0 μM NC blocked the appearance of any significant numbers of indented or lobulated nuclear forms. Ovoid-shaped nuclei were present in at least 80% of the NC-treated cells. Immunostained preparations of 0.1 μM NC treated cells (HCHO-fixed and not air-dried, to preserve 3-D structure) are presented in Figures 10 and 11 . The ovoid nuclei give a slightly wrinkled appearance, with considerable surface infolding; but clearly not lobulations. Short tufts of MTs could be visualized in juxtanuclear positions, consistent with a general depolymerization of MTs. The nuclear envelope exhibited clear staining with anti-lamin B and anti-LBR. Nuclear interiors were stained with anti-centromere, anti-nucleolar and anti-dimethylated H3K9 antisera. Anti-Golgi antibodies revealed some dispersion of the p58 antigen. 
Dispersal of the Golgi resulting from MT disruption is well documented [ 22 ]. In one set of experiments (data not shown), NC was added to HL-60- bcl -2 cells at day 2 or day 4 after the addition of RA and nuclear morphology was observed on day 8. NC had a clear inhibitory effect upon RA induced nuclear lobulation, even when added 4 days after RA. Thus the inhibitory effect of NC is not an early event in granulocytic differentiation. As noted earlier, we have observed essentially 100% cell death of HL-60/S4 cells exposed to 0.1 μM NC by day 2. Viability of HL-60 -bcl- 2 cells was much better; but protection was not complete. By day 8 of 0.1 μM NC, viability had dropped to ~23%. This could also be observed as increased levels of cellular debris in the Wright-Giemsa stained slides. The cells shown in Figures 10 and 11 were judged to have been viable prior to fixation and staining, based upon the intactness of their nuclear envelopes. Prolonged exposure of cells to NC has clear detrimental effects. But within this limitation, it can be concluded that exposure to NC prevents RA induced nuclear lobulation in HL-60 -bcl- 2 cells. Figure 9 Nuclear differentiation in RA treated HL-60- bcl -2 cells exposed to varying concentrations of NC. Panels: 0, 0.1 and 1.0 μM NC. The percentage of cells in each of the four nuclear categories from day 0 to day 10 (following addition of RA): ovoid; indented; lobulated; multilobed. Figure 10 Stereo confocal images of HL-60- bcl -2 cells exposed to 1.0 μM RA and 0.1 μM NC. Treatment was for 8 days. The rows are: anti-centromere (CREST); anti-nucleolus; anti-α-tubulin. LMNB is always shown in green; the different antigens in red. Scale bar: 10 μm. Figure 11 Confocal images of HL-60- bcl -2 cells exposed to 1.0 μM RA and 0.1 μM NC. Treatment was for 8 days. Left column: top, anti-LBR; middle, anti-dimethyl H3K9; bottom, anti-LMNB. Right column: top, anti-Golgi p58; middle, anti-LMNB; bottom, merge. LMNB is always shown in green. 
Scale bar: 10 μm. Taxol treatment of HL-60- bcl -2 cells results in nuclear lobulation and micronuclei, independently of exposure to retinoic acid As described earlier, addition of 0.1 μM TX to HL-60/S4 cells resulted in ~100% cell death by day 2, independently of the presence (or absence) of RA. Apparently the cytotoxicity of TX is potentiated in cells with elevated levels of c- myc [ 23 ]. HL-60 cells are well known to possess c- myc amplification [ 24 , 25 ]. Preliminary studies with HL-60- bcl -2 cells indicated that as early as day 2, addition of 0.1 or 1.0 μM TX resulted in the appearance of indented and lobulated nuclei in the absence of RA. Consequently, Wright-Giemsa staining analysis of TX treated cells (in the absence of RA) was performed for up to 10 days (Figure 12 ; identical results were obtained in the presence of RA). There was a rapid decline of cells with ovoid nuclei and a corresponding increase of cells with indented and lobulated nuclear forms. There was also a progressive increase in cell death and debris in the stained preparations. Indeed by day 4, HL-60- bcl -2 cells exposed to 0.1 μM TX displayed only ~20% viability. Even so, there were a sufficient number of viable cells to perform immunofluorescent staining. Besides the evident nuclear lobulation, considerable numbers of cells revealed formation of micronuclei. This, combined with the extensive bundling of MTs, resulted in dramatic images of highly perturbed cells (Figure 13 ). There was no obvious spatial relationship between the positions of the MT bundles and the nuclear lobulations and micronuclei. This may signify that the action of TX on cells occurs earlier (e.g., disruption of the mitotic spindle), with later rearrangements during interphase. TX induced micronuclei revealed heterochromatin (anti-dimethylH3K9) and nucleolar materials (Figure 14 ). Clearly, the dramatic effects of TX upon nuclear shape do not depend upon the presence of RA. 
It remains to be demonstrated whether these effects occur by perturbation of interphase and/or mitotic MTs. Figure 12 Nuclear shape changes in undifferentiated HL-60-bcl-2 cells exposed to varying concentrations of TX. Panels: 0, 0.1 and 1.0 μM TX. The percentage of cells in each of the four nuclear categories from day 0 to day 10: ovoid; indented; lobulated; multilobed. Figure 13 Stereo confocal images of undifferentiated HL-60-bcl-2 cells exposed to 1.0 μM TX. Treatment was for 4 days. Rows: top, control cells, unexposed to TX; middle and bottom, 4 days of TX. LMNB is shown in green; α-tubulin in red. Scale bar: 10 μm. Figure 14 Stereo confocal images of undifferentiated HL-60-bcl-2 cells exposed to 1.0 μM TX. Treatment was for 4 days. Rows: top, anti-nucleolus; middle, anti-dimethyl H3K9; bottom, anti-Golgi p58. LMNB is shown in green; other antigens in red. Scale bar: 10 μm. The centrosomal region retains its proximity to the nucleus in HL-60-bcl-2 cells under all conditions The position of the centrosomal region was visualized with monoclonal antibodies against γ-tubulin (Figure 15 ). In all the conditions tested (undifferentiated; RA treated; RA and CD treated; TX treated), the centrosomal region appeared to be near the nuclear envelope, often surrounded by nuclear lobes in the RA treated cells. The proximity of the centrosomal region to the nucleus in RA treated HL-60-bcl-2 cells contrasts with earlier observations on RA treated "polarized" HL-60/S4 cells [ 4 ]. Light microscope observations of living cells revealed fewer polarized HL-60-bcl-2 cells than previously observed for HL-60/S4 cells (data not shown). It is possible that the HL-60/S4 cells are more readily "activated", like normal neutrophils, than are HL-60-bcl-2 cells. HL-60/S4 cells also differentiate to the granulocytic form faster (~3 to 4 days) than HL-60-bcl-2 cells (~7 days). Figure 15 Confocal images of HL-60-bcl-2 cells immunostained for γ-tubulin.
Cell treatments: 0, undifferentiated; RA, 7 days; RA + NC, 7 days with 1 μM RA and 0.1 μM NC; TX, 3 days with 1 μM TX. γ-tubulin is shown in red; LBR in green. Scale bar: 10 μm. Discussion HL-60-bcl-2 cells and granulocytic nuclear differentiation Prolonged exposure of HL-60 cells to various cytoskeleton-modifying chemicals (i.e., nocodazole and taxol) is very harmful to cell viability, inducing rapid apoptosis. The present study was conducted primarily on a Bcl-2 overexpressing subline of HL-60 cells, which is more refractory to apoptosis and exhibits the chemically induced differentiation properties of the parent cell line [ 18 ]. Lethal effects of NC or TX are delayed in HL-60-bcl-2 cells, compared to HL-60/S4 cells, allowing a window of time for determining the effects of these MT modifying chemicals on nuclear shape and nuclear differentiation. There are two additional differences between HL-60-bcl-2 and HL-60 cells observed in this study. The first difference: ~10% of the undifferentiated HL-60-bcl-2 cells exhibit micronuclei, compared to ~0.8% of HL-60 cells [ 26 ]. Micronuclei are generally regarded as the products of abnormal mitoses, where the enclosed chromosomes or chromosome fragments fail to congress at the mitotic plate, but are still surrounded by a post-mitotic nuclear envelope [ 27 - 30 ]. In normal human cells growing in culture (e.g., lymphocytes), they occur in less than 0.5% of the cells [ 31 ]. They can be induced in cells by treatment with a variety of DNA breakage conditions (e.g., irradiation) or spindle disrupting agents (e.g., colchicine). Micronuclei in HL-60 cells are described as representing amplified acentric euchromatic genes (such as c-myc) and appear to form dynamically during S phase [ 32 ].
We observed that micronuclei in HL-60- bcl -2 cells exhibited immunostaining of centromeres, heterochromatin and nucleolar antigens (Figure 7 ), which suggests differences from the earlier interpretation of the nature of micronuclei in HL-60 cells. Especially puzzling was our observation that the majority of micronuclei possessed comparable amounts of lamin B, but reduced amounts of LBR, in comparison to the companion main nucleus. A recent study employing MCF-7 cells observed micronuclei containing lamins A/C and B1 and a relative deficiency of LBR [ 33 ]. The same study demonstrated that when significant numbers of micronuclei were induced by prolonged exposure to the spindle disrupting chemical curcumin, the resulting nuclear envelopes contained LBR. Current views on the sequence of protein additions to post-mitotically reformed nuclei agree that LBR enters the nascent nuclear envelope well before lamin B [ 34 , 35 ]. It is possible that untreated HL-60- bcl -2 and MCF-7 micronuclear envelopes form later than the main nucleus, after the cytoplasmic pool of LBR is exhausted, whereas curcumin "induced" micronuclei form at about the same time as main nuclei. The second difference: RA treated HL-60- bcl -2 cells exhibit a small population (up to 20%) with multilobed nuclei, which we almost never observed with HL-60/S4 cells. There may be some clinical significance to this latter observation. A small percent of granulocyte nuclei with 5 or more nuclear lobes in human blood smears is considered diagnostic for megaloblastic anemia (vitamin B12 or folic acid deficiency) [ 20 ]. The present data with RA treated HL-60- bcl -2 cells suggests that delayed or dysfunctional apoptosis might play a role in these human diseases. 
It is of interest that neutrophil nuclear multilobulation (hypersegmentation) has been described in two circumstances that delay apoptosis: 1) Glucocorticoid administration to patients induces hypersegmentation [ 36 ], and in vitro glucocorticoid treatment of neutrophils prolongs their survival [ 37 - 39 ]. 2) Granulocyte colony-stimulating factor (G-CSF) administration to rats induces hypersegmentation in mature neutrophils [ 40 ], and in vitro treatment of neutrophils with G-CSF prolongs their survival [ 38 , 41 ]. Major cytoskeletal influences on nuclear shape in HL-60-bcl-2 cells The most important present observation is the suppression of RA induced nuclear lobulation in HL-60-bcl-2 cells by simultaneous exposure to NC (Figure 9 ). This observation implies that MTs must be intact during nuclear differentiation. Furthermore, we have observed (data not shown) that HL-60-bcl-2 cells can be exposed to 0.1 μM NC on day 2 or 4 after the addition of RA and still exhibit inhibition of nuclear lobulation. This observation suggests that the requirement for intact MTs is not an early event in the nuclear differentiation process. When RA and NC treated HL-60-bcl-2 cells were examined by confocal immunofluorescent staining with anti-lamin B, the ovoid nuclei revealed extensive "wrinkling" of the nuclear envelope (Figures 10 and 11 ). We suggest that this "wrinkling" reflects expansion (growth) of the nuclear envelope in the absence of nuclear lobulation. The present study demonstrates a lack of requirement of an intact actin microfilament system for RA induced granulocytic nuclear differentiation (Figure 1 ). This conclusion is based upon the observation that incubation of HL-60/S4 and HL-60-bcl-2 cells with 1 μM CD in the presence of RA does not inhibit nuclear lobulation or the formation of ELCS. Our results agree with an earlier study [ 17 ]. In addition, we extended the incubation (with CD and RA) to times where nuclear differentiation is more definitive.
Our data also demonstrated that prolonged incubation with CD alone produces significant nuclear envelope folding, which is best appreciated by confocal immunostaining of the nuclei (Figure 8 ). The mechanism for the nuclear shape changes in undifferentiated cell nuclei exposed to prolonged incubation with CD is presently unknown. There is increasing evidence that actin microfilament interactions with the nuclear envelope exist, mediated via actin binding spectrin-like proteins, variously named Syne-1 and -2/nesprin-1 and -2/ANC-1/NUANCE [ 12 - 15 ]. However, a counter argument for the relevance of this actin interacting system to nuclear differentiation in the HL-60 cell system can be made. Nesprin-1 binds directly to lamin A and emerin [ 14 ]; but undifferentiated and granulocytic HL-60 cells possess negligible amounts of lamin A, with emerin primarily localized in the cytoplasm [ 5 ]. Furthermore, published [ 13 ] and unpublished data (A. Karakesisoglu, A. Olins and D. Olins) indicate that HL-60 cells, undifferentiated or RA treated, possess only trace amounts of NUANCE. The finding that TX treatment of cells has dramatic consequences for interphase nuclear structure has been reported before. Exposure of human carcinoma cells (Ishikawa and HeLa) to 0.01–0.1 μM TX for up to 20 hours, followed by incubation in drug-free media for up to 72 hours, led to nuclear envelope "unraveling" and clustering of nuclear pores [ 42 ]. The authors observed lobulated nuclei and micronuclei, much as observed here. In the case of HL-60-bcl-2 cells, the nuclear structural changes superficially mimic the effects of RA treatment, but occur much faster (Figure 12 ). The bundling of MTs in HL-60 cells in response to exposure to 1.0 μM TX for 24 hours has also been reported [ 43 ], but no mention was made of the nuclear envelope changes. Furthermore, the authors noted that at 0.1 μM TX (or greater) the HL-60 cells showed clear apoptosis by 24 hours.
In a later paper by the same group [ 44 ], increased expression of Bcl-2 or Bcl-x L yielded HL-60 cells with considerably greater resistance to TX induced apoptosis. It seems to us that the dramatic cell and nuclear changes that we observe have no obvious relationship to normal granulocytic nuclear lobulation, but underscore an effect of MT integrity upon nuclear shape. The formation of micronuclei, in particular, suggests that TX may exert its effects on nuclear shape by interfering with normal mitotic chromosome distribution. A model for granulocytic nuclear lobulation Although far from a complete understanding of all the molecular forces involved in shaping the granulocytic nucleus, a working model that incorporates existing data and emerging concepts provides a useful perspective for future experimentation (Figure 16 ). The process of granulocytic nuclear lobulation can be viewed as a dynamic balance of stabilizing and distorting forces. The working model contains the following assumptions: 1) the flexible nuclear envelope (due to the paucity of lamins A/C and B1) is "tacked down" to the underlying heterochromatin (enhanced by the elevated LBR); 2) the nuclear envelope undergoes invaginations in the region of the centrosome due to dynein movement along MTs; 3) new membrane materials are added to the nuclear envelope via lateral diffusion from the ER, resulting in net membrane growth [ 3 ]; 4) constraints on nuclear shape by actin and spectrin-like proteins are weak, due to the paucity of lamins A/C and NUANCE and the cytoplasmic localization of emerin; 5) constraints on nuclear shape by vimentin-envelope interactions are minimal in the differentiating HL-60 cell system. Figure 16 Model for granulocytic nuclear lobulation. A. Postulated changes in nuclear envelope flexibility arising from changes in nuclear envelope composition. HL-60 cell states: undifferentiated; granulocyte, RA treated; monocyte, TPA treated. 
Abbreviations: inm, inner nuclear membrane; LMNB2, lamin B2; LMNA/C, lamins A/C; LMNB1, lamin B1; HP1, heterochromatin protein 1; LBR, lamin B receptor. Due to a paucity of lamins A/C and B1, the nuclear envelope is believed to be more flexible in the undifferentiated and granulocytic cell states. B. Balance of forces postulated to be affecting granulocytic nuclear shape. Microtubules (green) and affiliated dynein motors (red circles) are assumed to produce nuclear envelope invaginations (bent arrows). Actin with affiliated spectrin-like proteins and vimentin are assumed to be pulling outwards on the nuclear envelope (thin arrows). Current evidence does not favor a major contribution by actin; any significant contribution by vimentin is presently unknown. The large straight arrows indicate continued influx of nuclear envelope components from the endoplasmic reticulum, allowing sustained membrane growth. Abbreviations: MTs, microtubules; IFs, intermediate filaments; ER, endoplasmic reticulum; Eu, euchromatin; H, heterochromatin. The nuclear compartment is colored blue; the cytoplasm, pink. We suggest that nuclear envelope deformability is an important factor and depends upon the amount of underlying lamins: the less lamin protein, the more pliable the nuclear envelope. Granulocytic forms of HL-60 exhibit deficiency of lamins A/C and B1, whereas both types of lamins are present in monocyte/macrophage forms [ 5 ]. The absence of lamins A/C in normal granulocytes and their presence in macrophages has been previously noted [ 45 ]. The present data on granulocytic differentiation in HL-60- bcl -2 demonstrates that nuclear lobulation correlates with low levels of lamins A/C and B1 coupled with a rise in LBR levels (Figure 4 ). The pivotal role of LBR in determining granulocytic nuclear lobulation was demonstrated with the human Pelger-Huet anomaly and murine Ichthyosis mutations [ 8 , 9 ]. 
These studies demonstrated that LBR functions in a dose-dependent manner: homozygous mutants present a more severe phenotype and lower amounts of LBR than in the heterozygous state. The influence of LBR on granulocytic nuclear lobulation is consistent with its known properties [ 46 ]. LBR is embedded within the nuclear envelope inner membrane via 8 transmembrane domains (the C-terminal ~400 aa), and associates with lamin B, chromatin and HP1α through its N-terminus (~200 aa). In the absence of sufficient LBR, nuclear lobulation is prevented and the normally peripheral heterochromatin is redistributed into a more centrally condensed form [ 8 , 9 ]. A role for MT integrity is implicit in our present observation that exposure of HL-60-bcl-2 cells to NC prevents nuclear lobulation during exposure of the cells to RA. Direct interactions between MTs and the interphase nuclear envelope in mammalian cells have not been documented. However, a recent model for mitotic nuclear envelope breakdown [ 47 , 48 ] can be adapted to the situation of nuclear lobulation. The nuclear envelope breakdown model proposes that cytoplasmic dynein attaches MTs to the nuclear envelope, pulling the envelope towards the centrosomal region. The excess envelope near the centrosome produces nuclear invaginations; the tension on the non-growing nuclear envelope generates tears and the mixing of nuclear and cytoplasmic materials. If we assume that the nuclear envelope is still growing in the case of RA differentiating HL-60 cells, invaginations and lobulations might be expected to accumulate within the intact nuclear envelope. The present study demonstrates proximity of the centrosome to the nuclear lobulations (Figure 15 ). But as yet, there is no direct evidence for cytoplasmic dynein playing a role in granulocytic nuclear differentiation. Our present data suggests that an intact actin microfilament system does not play a major role in granulocytic nuclear lobulation (Figure 1 ).
The best described mechanism of actin interacting with the nuclear envelope involves spectrin-like proteins, which may bridge cytoplasmic actin to nuclear envelope proteins, such as lamin A and emerin [ 12 - 15 ]. But undifferentiated and granulocytic HL-60 cells possess very little lamin A/C, and emerin is primarily cytoplasmic [ 5 ], suggesting that this bridging system may not be functional in these cell forms. In SW-13 cells the absence of intermediate filaments (vimentin) has been correlated with nuclear envelope folds or invaginations [ 16 ]. We have observed a decrease in vimentin during granulocytic differentiation of HL-60/S4 cells [ 3 - 5 ], suggesting that reduced vimentin concentrations may contribute to granulocytic nuclear lobulation. The proposed model for granulocytic nuclear lobulation yields several testable predictions: 1) Expression of lamins A/C and B1 in HL-60 cells should strengthen the nuclear envelope, minimizing nuclear lobulation following exposure of the cells to RA; 2) "Knock-down" or expression of a "dominant negative" form of LBR in HL-60 cells would be expected to suppress granulocytic lobulation; 3) Overexpression of dynamitin in HL-60 cells should inhibit dynein activity [ 49 ], preventing nuclear lobulation following RA induced differentiation. A number of these experiments are already in progress. Conclusions Employing Bcl-2 overexpressing HL-60 cells, which are more refractory to induced apoptosis than the parent cell line, we demonstrated that disruption of the MTs by nocodazole prevented nuclear lobulation in response to RA treatment. These results implicate the necessity of an intact MT system for the granulocytic nuclear shape change. Cytochalasin D, on the other hand, did not suppress RA induced nuclear lobulation. Combined with the decreasing levels of intermediate filaments (vimentin) during differentiation of HL-60 cells to granulocytic forms, these results suggest that the role of the MT system is quite central to the nuclear shape change.
Recent models on the role of a MT bound motor (dynein) in facilitating mitotic nuclear envelope breakdown suggest that similar tension forces on the differentiating granulocyte nucleus, combined with continued influx of nuclear envelope components, could explain nuclear invaginations and lobulation. Methods Cells and chemicals Two HL-60 cell sublines were employed in this study: HL-60/S4 [ 50 ], which achieves maximum nuclear lobulation in 4 days following addition of RA; HL-60- bcl -2 [ 18 ], which achieves maximum lobulation in ~7 days, comparable to the parent cell line. Cultivation conditions were exactly as described previously [ 3 , 4 ]. The following chemicals were purchased from Sigma-Aldrich (St. Louis, MO): all-trans retinoic acid (RA), nocodazole (NC), taxol (TX), cytochalasin D (CD). Stock solutions of these chemicals were stored at -20°C as described earlier [ 3 , 4 ]. Antibodies Guinea pig antisera were the generous gifts of two Ph. D. students in the laboratory of H. Herrmann (German Cancer Research Center, Heidelberg): anti-LBR and anti-emerin, from C. Dreger; anti-lamin A and anti-lamin B1, from J. Schumacher. Goat anti-lamin B was obtained from Santa Cruz Biotechnology Inc. (Santa Cruz, CA). Rabbit anti-4 × dimethyl H3K9 (lysine 9 on histone 3) was generously provided by T. Jenuwein (Biocenter, Vienna). Human auto-antisera anti-centromere (CREST) and anti-nucleolus (antigens unknown) were purchased from Antibodies Inc. (Davis, CA). Rabbit anti-calreticulin was from Calbiochem (San Diego, CA). Mouse monoclonal antibodies against α-tubulin, γ-tubulin and Golgi p58 were all purchased from Sigma-Aldrich. Mouse monoclonal anti-vimentin (3B4) was a gift of H. Herrmann and has been described before [ 51 ]. FITC-, Cy3-, Cy5- and HRP-conjugated donkey secondary antibodies were all purchased from Jackson ImmunoResearch Laboratory, Inc. (West Grove, PA). TO-PRO-3 and SlowFade were obtained from Molecular Probes, Inc. (Eugene, OR). 
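The working concentrations above (1.0 μM RA; 0.1 or 1.0 μM NC, TX and CD) are reached by diluting the frozen DMSO stocks described under Cells and chemicals. A minimal sketch of that dilution arithmetic; the 10 mM stock concentration and the 10 ml culture volume are assumptions for illustration, not values stated in the paper:

```python
def stock_volume_ul(final_uM, culture_ml, stock_mM=10.0):
    """Volume of DMSO stock (μl) needed to bring `culture_ml` of medium
    to `final_uM` of drug; C1*V1 = C2*V2 rearranged for V1."""
    stock_uM = stock_mM * 1000.0                      # 10 mM -> 10,000 μM
    return final_uM * (culture_ml * 1000.0) / stock_uM

# Treatments used in this study: 1.0 μM RA; 0.1 or 1.0 μM NC/TX/CD.
for name, final_uM in [("RA", 1.0), ("NC", 0.1), ("TX", 1.0)]:
    vol = stock_volume_ul(final_uM, culture_ml=10.0)
    print(f"{name}: add {vol:.1f} μl of 10 mM stock per 10 ml of culture")
```

Working from a concentrated stock keeps the solvent added to the culture in the sub-μl-per-ml range, which matters because DMSO itself perturbs cells at higher fractions.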
Fixation and staining For Wright-Giemsa staining, cells were cytospun onto ethanol-cleaned microscope slides, fixed in room temperature methanol for 15 min, air-dried and stained as described earlier [ 3 ]. For analysis of the percentage of cells in the various nuclear morphology categories (Figures 3 , 9 and 12 ), approximately 150 cells were observed and classified in each experiment. In the majority of immunostaining experiments the procedure followed that described previously [ 4 ], with some modifications, as follows: 1) Microscope slides were soaked overnight in 1/1 ethanol/ether and freshly coated with poly-L-lysine (MW ~150–300,000; Sigma-Aldrich), just before centrifugation of the cells. 2) No coverslip was used during antibody incubations, to minimize loss of cells. 3) Prior to the application of primary antibodies, slides were incubated with 5% normal donkey serum (Jackson ImmunoResearch Laboratory) in PBS for 15–30 min at 37°C in a moist chamber. 4) In the cases of monoclonal anti-γ-tubulin and anti-vimentin, where antigenicity appeared to be destroyed by HCHO fixation, slides were fixed in methanol (-20°C, 10 min), acetone (-20°C, 1 min) followed by three washes in PBS (5 min each). The cells were not excessively flattened by either procedure, maintaining sufficient 3-D structure to justify stereo viewing. Confocal images were collected on a Zeiss 510 Meta. Stereo images were obtained as ± 15° projections through the stack of confocal images. Immunoblot analysis SDS total cell extracts, obtained from undifferentiated and RA treated HL-60- bcl -2 cells over a three-week period (simultaneously with preparation of slides for Wright-Giemsa staining), were analyzed by immunoblotting as previously described [ 5 ]. No attempt was made to separate viable cells from debris prior to SDS extraction. Comparable amounts of total cell protein were loaded into each lane of the SDS-PAGE, as judged by Ponceau S staining of the PVDF membrane after protein transfer. 
Several different ECL exposures were collected on X-ray film and subsequently scanned with a Bio Rad Chemi Doc for densitometric analyses. Abbreviations RA, retinoic acid; MT, microtubules; NC, nocodazole; TX, taxol; CD, cytochalasin D; ELCS, nuclear envelope-limited chromatin sheets Authors' contributions ALO performed the microscopy and prepared the figures. DEO performed the tissue culture, immunostaining and immunoblotting. Both authors were involved in the conception of the study and have read and approved the final manuscript.
Inducible cytochrome P450 activities in renal glomerular mesangial cells: biochemical basis for antagonistic interactions among nephrocarcinogenic polycyclic aromatic hydrocarbons Background Benzo(a)pyrene (BaP), anthracene (ANTH) and chrysene (CHRY) are polynuclear aromatic hydrocarbons (PAHs) implicated in renal toxicity and carcinogenesis. These PAHs elicit cell type-specific effects that help predict toxicity outcomes in vitro and in vivo . While BaP and ANTH selectively injure glomerular mesangial cells and CHRY targets cortico-tubular epithelial cells, binary or ternary mixtures of these hydrocarbons markedly reduce the overall cytotoxic potential of the individual hydrocarbons. Methods To study the biochemical basis of these antagonistic interactions, renal glomerular mesangial cells were challenged with BaP alone (0.03 – 30 μM) or in the presence of ANTH (3 μM) or CHRY (3 μM) for 24 hr. Total RNA and protein were harvested for Northern analysis and for measurements of aryl hydrocarbon hydroxylase (AHH) and ethoxyresorufin-O-deethylase (EROD) activity, respectively, to evaluate cytochrome P450 mRNA and protein inducibility. Cellular hydrocarbon uptake and metabolic profiles of PAHs were analyzed by high performance liquid chromatography (HPLC). Results Combined hydrocarbon treatments did not influence the cellular uptake of individual hydrocarbons. ANTH or CHRY strongly repressed BaP-inducible cytochrome P450 mRNA and protein expression, and markedly inhibited oxidative BaP metabolism. Conclusion These findings indicate that antagonistic interactions among nephrocarcinogenic PAHs involve altered expression of cytochrome P450s that modulate bioactivation profiles and nephrotoxic/nephrocarcinogenic potential. Background The biological effects of PAHs are often mediated by oxidative metabolism of the parent hydrocarbon to reactive intermediates that adduct DNA and induce oxidative stress [ 1 ].
In the kidney, PAHs elicit cell type-specific effects that differentially influence glomerular versus tubular epithelial cell structure and function. BaP and ANTH selectively injure glomerular mesangial cells, while CHRY preferentially targets cortico-tubular epithelial cells [ 2 , 3 ]. The study of single chemical effects has provided fundamental information on the nephrotoxic potential of specific PAHs, but human exposure to this group of chemicals is rarely limited to a single agent, and most often involves exposure to PAH mixtures [ 4 ]. Thus, a more realistic approach is to evaluate the cellular, biochemical, and molecular mechanisms by which PAHs interact to produce additive, synergistic or antagonistic interactions. Such studies have demonstrated that binary and ternary mixtures of PAHs yield paradoxical antagonistic interactions in vitro [ 3 ]. A toxicological interaction is a circumstance in which exposure to two or more chemicals results in qualitative or quantitative modulation of the biological response elicited by individual agents. Toxicological interactions may be mediated by changes in the absorption, distribution, metabolism and excretion of one or more of the chemicals present in the mixture. Since the ability of PAHs to compromise cellular and genomic integrity often requires bioactivation by cytochrome P-450 enzymes (CYPs) to reactive intermediates, the role of these enzymes in PAH-induced environmental diseases is profound [ 5 ]. The interaction of PAHs with CYPs is unique in that the expression of the genes encoding CYP-associated activities is itself regulated by the PAH substrates they metabolize. Shimada et al. [ 6 ] have shown that BaP and CHRY induce Cyp1a1 and Cyp1b1 through the aryl hydrocarbon receptor (Ahr), and that the enzymes encoded by these genes mediate toxicity and tumorigenicity. The Ahr belongs to the basic helix-loop-helix/PAS family of proteins [ 7 ].
The activation of cytoplasmic complexes containing the Ahr depends on ligand binding to the receptor, nuclear translocation and formation of active heterodimers with a nuclear protein called Arnt [ 8 ]. The AhR-Arnt complex binds to specific cis-acting responsive elements known as xenobiotic responsive elements, located in the promoters and enhancers of target genes, including CYPs themselves [ 7 ]. PAHs or halogenated aromatic hydrocarbons function as ligands of the Ahr. The present studies were conducted to evaluate profiles of Cyp1a1 and Cyp1b1 inducibility in binary PAH mixtures, and their impact on BaP bioactivation. Evidence is presented that chemical-specific differences in the regulation of Cyp1a1 and Cyp1b1 contribute to differential metabolic activation of PAHs in binary mixtures. On the basis of these findings it is concluded that interactions between BaP, ANTH and CHRY involve altered expression of cytochrome P450s that modulate bioactivation profiles and nephrotoxic/nephrocarcinogenic potential. Materials and Methods Materials BaP, ANTH and CHRY were purchased from Sigma Chemical Co. (St. Louis, MO). RPMI 1640 and M199 were purchased from GIBCO-BRL (Grand Island, NY, USA). All other chemicals were from Sigma Chemical Co. Cell culture/chemical treatments Rat glomerular mesangial cells in serial culture were seeded on 6-well plates at a density of 200 cells/mm2. At least three replicates were used for each chemical concentration tested in multiple experiments. The concentrations examined are similar to those used in previous studies and representative of those encountered in the environment. Cultures were challenged with selected PAHs for 24 hr at concentrations ranging from 0.03 to 30 μM. Stock solutions of PAHs were dissolved in DMSO, with final DMSO concentrations never exceeding 0.1%. Cells, RNA, or protein were harvested after chemical challenge and processed for biochemical measurements.
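The plating and dosing figures above imply some routine bench arithmetic: how many cells are needed to seed a plate at 200 cells/mm2, and how concentrated a DMSO stock must be so that even the top 30 μM dose keeps final DMSO at or below 0.1%. A minimal sketch; the ~9.6 cm2 growth area per well of a 6-well plate is a typical vendor value assumed here, not a number from the paper:

```python
WELL_AREA_MM2 = 9.6 * 100.0   # assumed ~9.6 cm^2 growth area per well of a 6-well plate
SEED_DENSITY_PER_MM2 = 200    # seeding density stated in the paper

def cells_to_seed(n_wells=6):
    """Total cells needed to seed `n_wells` wells at 200 cells/mm^2."""
    return SEED_DENSITY_PER_MM2 * WELL_AREA_MM2 * n_wells

def min_stock_mM(final_uM, max_dmso_fraction=0.001):
    """Minimum PAH stock concentration (mM, in DMSO) so that dosing the
    medium to `final_uM` keeps DMSO at or below 0.1% (a 1:1000 dilution)."""
    return final_uM / (max_dmso_fraction * 1000.0)

print(f"cells per 6-well plate: {cells_to_seed():.0f}")          # ~1.15 million
print(f"stock for the 30 μM dose: {min_stock_mM(30.0):.0f} mM")  # 30 mM
```

The same constraint explains why the low end of the dose range (0.03 μM) is unproblematic: any stock strong enough for the 30 μM dose is three orders of magnitude more than sufficient there.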
RNA extraction and analysis Total RNA was extracted using Tri reagent (Molecular Research Center, Inc., Cincinnati, OH) according to the manufacturer's specifications. Cells were scraped using 1.0 ml of Tri reagent and allowed to sit at room temperature for 5 minutes to dissociate nucleoprotein complexes, then combined with 0.2 ml chloroform, vortexed and allowed to sit at room temperature for 2 minutes. After centrifugation at 12,000 × g (4°C) for 15 minutes, the upper aqueous layer was mixed with an equal volume of isopropanol and stored at -20°C overnight. This solution was then centrifuged for 15 minutes at 12,000 × g (4°C) and the pellet washed with 70% ethanol, dried, and resuspended in 20 μl of RNase free water. RNA concentration was determined spectrophotometrically at 260 nm. Northern analysis Ten μg of total RNA were dissolved in RNase free water, mixed with 3.5 μl formamide, 1 μl of 37% formaldehyde, 1.0 μl of MOPS buffer and 15% 6X gel loading buffer, and denatured by heating at 55°C for 10 minutes. Total RNA was separated by electrophoresis on a formaldehyde denaturing gel (1.2% agarose, 1 M formaldehyde and 10X MOPS) in 1X MOPS buffer and transferred onto a nylon membrane by capillary transfer. Membranes were dried at room temperature, UV crosslinked and hybridized with 32P-labeled cDNA probes synthesized using High Prime (Boehringer Mannheim, Germany). The β-tubulin probe was obtained from a 1.6 kb fragment cloned into a pBluescript plasmid at the EcoRI site. The Cyp1a1 probe was obtained from a 1.2 kb PstI fragment from a pUC18 vector (ATCC), and the 1 kb Cyp1b1 probe was kindly provided by Dr. Colin Jefcoate (University of Wisconsin, Madison, WI). Following hybridization the blots were subjected to stringent washes, dried at room temperature, and exposed to X-ray film at -80°C for 24 hours. Cyp-related enzyme activities Confluent subcultures of mesangial cells were grown in 100 mm dishes and treated with selected PAHs or their mixtures for 24 hr.
Cells were then scraped and collected after the addition of 5 ml of ice-cold Tris-sucrose buffer (pH 8.0), then centrifuged for 5 min at 50 × g (4°C). The supernatants were removed and the pellet resuspended in 300 μl of Tris-sucrose buffer. Two 100 μl aliquots containing cellular protein were processed for fluorometric enzyme analysis, while 50 μl of sample was used to measure protein concentration [ 9 ]. Aryl hydrocarbon hydroxylase (AHH) assay Cultures were processed for measurements of AHH activity as described by Nebert and Gelboin [ 10 ]. A 100 μl aliquot was combined with 1 ml of reaction mixture containing 0.1 M HEPES (pH 8.0) and 0.4 mM NADPH. Samples were pre-incubated at 37°C for 2 minutes, and the reaction initiated by addition of 80 μM BaP dissolved in 40 μl of methanol. Samples were incubated for 15 minutes and the reaction terminated by addition of 1 ml of ice-cold acetone and 3.25 ml of hexane. After vortexing, 2 ml of the organic layer was collected and extracted with 5 ml of 1 N NaOH. Samples were vortexed and the NaOH fraction read on a spectrofluorometer at wavelengths of 396 nm excitation and 522 nm emission. The spectrofluorometer was calibrated using authentic 3-OH BaP standards. Ethoxyresorufin-O-deethylase activity Cultures were processed for measurements of EROD activity as described by Burke and Mayer [ 11 ] with modifications. Briefly, 1.2 ml of 0.1 M HEPES buffer (pH 7.5) containing 0.1 mg of NADH, 0.1 mg of NADPH, 1.5 mg of magnesium sulfate and 1.1 mg of BSA was added to 100 μl of sample. The tubes were incubated for 2 minutes at 37°C prior to substrate addition; 50 μl of 100 mM ethoxyresorufin was then added to each tube and allowed to incubate at 37°C for 15 minutes. The reaction was terminated by addition of 2.5 ml of methanol. Samples were incubated for an additional 2 minutes to allow for protein flocculation and centrifuged for 10 minutes at 1500 × g.
EROD activity in the supernatant was measured fluorometrically at wavelengths of 550 nm excitation and 585 nm emission, as described by Pohl and Fouts [ 12 ]. The spectrofluorometer was calibrated using authentic resorufin standards. HPLC analysis PAH metabolism was analyzed according to the method of Selkirk et al. [ 13 ]. Chemical separation, identification, and quantification were performed on a high-performance liquid chromatograph (Beckman model 334) fitted with a Rainin Microsorb C18 reverse-phase column (4.6 × 250 mm), using a 22.5 min linear gradient of 75–100% methanol at a flow rate of 1.0 ml/min. A 20 μl sample was injected into the column. The PAH metabolites were monitored by ultraviolet absorption at 254 nm. Identification of metabolites was made by comparison of retention times to those of known standards. Results CYP expression profiles and toxicological interactions in renal cells after challenge with PAH mixtures Antagonistic interactions among nephrotoxic/nephrocarcinogenic PAHs may be mediated by modulation of metabolic activation profiles in mesangial cells. To test this hypothesis, steady state levels of Cyp1a1 and Cyp1b1 mRNAs were evaluated by Northern analysis in cultured mesangial cells treated with 0.03 – 30 μM BaP alone, or in combination with 3 μM ANTH or CHRY (Figs 1 and 2 ). Cyp1a1 mRNA was not expressed constitutively, but was highly inducible by BaP in a concentration-dependent manner (Fig 1A ). ANTH alone (0.03 – 30 μM) did not induce Cyp1a1 at any concentration, but modestly enhanced (at 3 μM) the mesangial cell response to BaP (3 or 30 μM). In contrast to the Cyp1a1 gene, Cyp1b1 was constitutively expressed in mesangial cells (Fig 1B ). Treatment with BaP induced concentration-dependent increases in steady state Cyp1b1 mRNA levels. ANTH alone induced Cyp1b1 mRNA by 5–6 fold at the higher concentrations examined.
Combined treatment of glomerular mesangial cells with BaP (0.03 μM) and ANTH (3 μM) slightly enhanced the response to individual hydrocarbons at the 0.03 μM concentration, but this enhancement was dissipated at the higher concentrations. Figure 1 Northern analysis of P-450 induction in rGMCs treated with BaP, ANTH and their binary mixtures. mRNA expression of Cyp1a1 in rGMCs challenged with BaP, ANTH and their binary mixtures for 24 hr (A). mRNA expression of Cyp1b1 in rGMCs challenged with BaP and ANTH and their binary mixtures for 24 hr (B). RNA extraction and analysis were performed as described in methodology. β-tubulin was analyzed to assess loading and transfer efficiency. The results shown are representative of three separate experiments. Figure 2 Northern analysis of P-450 induction in rGMCs treated with BaP, CHRY and their binary mixtures. mRNA expression of Cyp1a1 in rGMCs challenged with BaP and CHRY and their binary mixtures for 24 hr (A). mRNA expression of Cyp1b1 in rGMCs challenged with BaP and CHRY and their binary mixtures for 24 hr (B). RNA extraction and analysis were performed as described in methodology. β-tubulin was analyzed to assess loading and transfer efficiency. The results shown are representative of three separate experiments. The metabolic interaction between BaP and CHRY was examined next. As expected, BaP induced Cyp1a1 and Cyp1b1 mRNA levels in a concentration-dependent manner (Figure 2 , panels A and B). CHRY was also a potent Cyp inducer, and in fact elicited greater induction of Cyp1a1 and Cyp1b1 mRNA than BaP (Fig 2 , panels A and B). CHRY induction of Cyp1a1 at the 3 and 30 μM concentrations was 2-fold higher than the response elicited by BaP at the same concentration (Figure 2A ). As with Cyp1a1 , CHRY markedly induced Cyp1b1 steady state mRNA levels at all concentrations examined (Fig 2B ).
Combined treatment of mesangial cells with BaP (0.03 – 30 μM) and CHRY (3 μM) yielded modest antagonistic responses for both Cyp1a1 and Cyp1b1 , particularly at the highest concentrations examined. For Cyp1b1 , the induction response in cells treated with 30 μM BaP and 3 μM CHRY was reduced by 30% relative to either hydrocarbon alone. CYP enzymatic activities in renal cells after challenge with PAH mixtures Because mRNA expression does not always correlate with changes in protein levels, measurements of EROD and AHH activity were completed in mesangial cells treated with PAHs alone, or in binary mixture. EROD, an enzyme activity encoded by both Cyp1a1 and Cyp1b1 , was inducible by all hydrocarbons in a concentration-dependent manner (Fig 3 ). Basal and inducible EROD activities in mesangial cells were considerably lower than those in cultured rat hepatocytes (not shown). A greater than 6-fold enhancement of EROD activity was observed in cells treated with 0.3 μM of BaP, with significant decreases observed as the BaP concentration increased. ANTH and CHRY also induced EROD, but induction patterns for these hydrocarbons were remarkably different. ANTH was a weak inducer of EROD, with only a 3-fold induction observed at 30 μM, while CHRY elicited a greater than 14-fold increase in enzymatic activity. As with BaP, reductions in activity were observed at the highest CHRY concentrations. Different profiles were observed in cells treated with BaP in combination with either ANTH or CHRY. Combined treatment of mesangial cells with BaP and ANTH completely inhibited EROD inducibility. In the case of CHRY, co-treatment with BaP yielded an erratic response, with less than additive interactions observed at the lowest concentrations, and modest inhibition observed as hydrocarbon concentrations increased. Figure 3 EROD activity in rGMCs treated with PAHs. EROD activity in rGMCs challenged with BaP, ANTH and CHRY alone or the binary mixtures of BaP with ANTH or CHRY. 
The results shown are representative of three separate experiments. AHH activity is also encoded by the Cyp1a1 and Cyp1b1 genes. BaP was a potent inducer of AHH activity in mesangial cells, with concentration-dependent increases observed over the full concentration range examined. A greater than 40-fold induction in enzymatic activity was observed at 30 μM BaP (Fig 4 ). Individual treatment with ANTH or CHRY did not modulate AHH activity. Combined treatment of mesangial cells with BaP and ANTH repressed AHH activity at the lower BaP concentrations, but the negative interaction was dissipated at higher concentrations. Likewise, CHRY inhibited AHH inducibility by BaP, with greater than 50% reduction observed when compared to cells treated with BaP alone. Figure 4 AHH activity in rGMCs treated with PAHs. AHH activity in rGMCs challenged with BaP, ANTH and CHRY alone or the binary mixtures of BaP with ANTH or CHRY. The results shown are representative of three separate experiments. Cellular hydrocarbon uptake and metabolic profiles of PAHs To further evaluate cellular mechanisms of toxicological interactions in PAH mixtures, BaP metabolism was examined in mesangial cells treated with 30 μM BaP alone, or in combination with 3 μM ANTH or 3 μM CHRY. At least 50% of the parent compound was reproducibly detected in mesangial cells treated with BaP (Table 1 ). ANTH was readily taken up by mesangial cells, and did not influence the cellular uptake of BaP. 3-hydroxy-BaP was the primary oxidative metabolite detected in mesangial cells treated with BaP, with other metabolites representing less than 1% of the total metabolite pool detected (not shown). Combined treatment of cells with ANTH significantly reduced BaP metabolism, with greater than 80% reduction in detectable metabolite levels (Table 1A ). Mesangial cells readily took up CHRY, with greater than 77% of the parent compound detected at the end of the treatment.
As with ANTH, CHRY did not influence the cellular uptake of BaP. However, a 46% reduction in measurable 3-hydroxy-BaP levels was observed in cells subjected to binary hydrocarbon treatment (Table 1B ).

Table 1 Oxidative metabolism of benzo(a)pyrene (BaP) alone or in combination with ANTH (A) and CHRY (B) in rat glomerular mesangial cells (rGMCs).

A
Chemical    Conc. (μM)   Calc. BaP (μM)   Calc. ANTH (μM)   3-OH BaP (μM)
BaP         30           14.39 ± 0.18     –                 0.026 ± 0.0
BaP/ANTH    30/3         13.72 ± 1.41     3.7 ± 0.49        0.005 ± 0.003

B
Chemical    Conc. (μM)   Calc. BaP (μM)   Calc. CHRY (μM)   3-OH BaP (μM)
BaP         30           17.3 ± 0.29      –                 0.013 ± 0.002
BaP/CHRY    30/3         17.4 ± 1.2       2.31 ± 0.12       0.007 ± 0.002

Discussion Cyp mRNA expression profiles and toxicological interactions PAHs elicit a broad spectrum of toxic and carcinogenic effects in multiple organ systems, including the kidney [ 14 ]. To study the complexity of chemico-biological interactions following exposures to multiple PAH carcinogens, we evaluated the renal cell-specific response to binary and ternary mixtures of BaP, ANTH and CHRY [ 3 ]. Challenge of renal mesangial and cortico-tubular epithelial cells with BaP in combination with ANTH or CHRY yielded unexpected antagonistic interactions that may be partly explained by differential regulation of enzymes involved in PAH metabolism. Renal Cyp1a1 and Cyp1b1 are particularly relevant since these enzymes mediate the conversion of PAHs to intermediates that induce oxidative stress and bind covalently to DNA in the kidney, and are the primary enzymes responsible for bioactivation of carcinogenic PAHs in other tissues [ 6 ]. In mesangial cells, Cyp1a1 mRNA was undetectable under constitutive conditions, highly inducible by BaP and CHRY, and refractory to ANTH (Figs 1 and 2 ). In contrast, Cyp1b1 mRNA was constitutively expressed and highly inducible by all three hydrocarbons.
BaP and CHRY were more potent inducers of Cyp1a1 and Cyp1b1 than ANTH (Figs 1 and 2 ), a profile consistent with their relative abilities to activate Ahr signaling in mammalian cells [ 15 ]. Shimada et al. [ 16 ] have shown that liver and lung Cyp1a1 and Cyp1b1 mRNAs are highly induced in AhR(+/+) mice by a single intraperitoneal injection of carcinogenic PAHs, and that 6-aminochrysene, chrysene, benzo[e]pyrene, and 1-nitropyrene weakly induced Cyp1a1 and Cyp1b1 mRNAs, while non-carcinogenic hydrocarbons, such as anthracene, pyrene, and fluoranthene, were poor or inactive enzyme inducers. These findings indicate that the toxicity and carcinogenicity profiles of PAHs may be defined on the basis of their ability to regulate Cyp1a1 and Cyp1b1 at either the mRNA or protein level. The tissue-specific induction of Cyp1a1 and Cyp1b1 mRNAs by PAHs and polychlorinated biphenyls (PCBs) has been investigated in wildtype and AhR-deficient C57BL/6J mice [ 17 ]. While expression of Cyp1a1 is AhR-dependent, Cyp1b1 is constitutively expressed in various organs in male and female Ahr (+/+) and Ahr (-/-) mice. Cyp1b1 is of interest because it encodes AHH and EROD, the predominant PAH-metabolizing activities in rodent embryos [ 17 ]. Important roles of Cyp1b1 in PAH carcinogenesis have been proposed by Gonzalez and co-workers [ 18 - 20 ], who observed that Cyp1b1 knock-out mice expressing significant levels of Cyp1a1 are highly resistant to lymphoma formation by 7,12-DMBA. In vitro human studies with recombinant enzymes have shown that Cyp1b1 is more active (about 10-fold) than Cyp1a1 in the formation of BaP-7,8-diol [ 21 ]. CYP enzymatic activities and toxicological interactions Measurements of EROD and AHH were used to monitor kidney microsomal catalytic activities (Figs 3 and 4 ). The profile of enzyme induction was significantly different between BaP, ANTH and CHRY.
The basal activities of EROD and AHH were very low in control cultures, but highly inducible in response to BaP and CHRY. BaP was considerably more potent than ANTH as an inducer of EROD and AHH, while CHRY only induced EROD activity. Interestingly, CHRY was a better inducer of Cyp1a1 mRNA, but induction at the mRNA level was not associated with corresponding increases of enzymatic activity. The greater potency of BaP as an inducer of enzymatic activities compared to ANTH and CHRY is consistent with previous observations showing that BaP is the most toxic PAH to renal mesangial cells [ 3 ]. The profiles of mRNA and protein induction elicited by single exposures to BaP, ANTH, and CHRY alone were markedly different from those following challenge in binary mixture. ANTH enhanced both Cyp1a1 and Cyp1b1 inducibility by BaP, but EROD and AHH activities were reduced in binary mixture. CHRY, on the other hand, modestly inhibited Cyp1a1 and Cyp1b1 mRNA inducibility by BaP and also reduced AHH activity. Because both ANTH and CHRY antagonize the mesangial cell response to BaP [ 3 ], these findings implicate AHH as the primary enzymatic target for antagonistic interactions among nephrotoxic PAHs. Oxidative metabolic profiles and toxicological interactions The above interpretation is supported by the marked inhibition of 3-OH BaP formation seen in cells co-treated with BaP and ANTH or CHRY (Table 1 ). A role for EROD in mesangial cell injury cannot be ruled out, however, since ANTH selectively antagonized EROD activity and inhibited hydrocarbon metabolism. Thus, patterns of Cyp inducibility and oxidative metabolism may account for differences in the responses to BaP, ANTH and CHRY in kidney cells, and their interactions in simple and complex mixtures.
Conclusion A major implication of our findings is that despite similarities in gene and protein inducibility among structurally-related PAHs, their behavior may be influenced by the reactivity of oxidative intermediates generated during the course of cellular metabolism. As such, PAH substrates in complex mixtures may compete for, and inhibit, the same metabolizing enzymes that act upon them to give rise to antagonistic interactions that protect against further chemical toxicity. Interactions of this nature are among the most commonly encountered in environmental mixtures [ 22 ], but the magnitude and capacity of these interactions for most relevant environmental nephrocarcinogens is unknown.
Handling multiple testing while interpreting microarrays with the Gene Ontology Database (PMC518975)

Abstract Background The development of software tools that analyze microarray data in the context of genetic knowledgebases is being pursued by multiple research groups using different methods. A common problem for many of these tools is how to correct for multiple statistical testing, since simple corrections are overly conservative and more sophisticated corrections are currently impractical. A careful study of the nature of the distribution one would expect by chance, such as by a simulation study, may be able to guide the development of an appropriate correction that is not overly time consuming computationally. Results We present the results from a preliminary study of the distribution one would expect for analyzing sets of genes extracted from Drosophila , S. cerevisiae , Wormbase, and Gramene databases using the Gene Ontology Database. Conclusions We found that the estimated distribution is not regular and is not predictable outside of a particular set of genes. Permutation-based simulations may be necessary to determine the confidence in results of such analyses.

Background With technological improvements and decreasing costs, microarrays are quickly becoming an affordable analytical tool for genetic analysis. Additionally, the arrays being used are of increasing spot density, allowing for more genes to be tested at once. One impact of the resulting increase in data flow is that a researcher using microarrays will have greater difficulty making sense of results from preliminary statistical analyses without further computational exploration. In other words, once the researcher has obtained, by whatever statistical means, a list of differentially expressed genes, the task of determining the biological implications of that gene list will need to be performed by statistical methods utilizing computers.
Numerous research groups are developing software tools to perform an interpretation of the list of differentially expressed genes, generally by mapping against previously developed knowledgebases such as the Gene Ontology (GO) [ 1 , 2 ] or GenMAPP [ 3 ] as a reference data set (reviewed briefly in [ 4 ]). Some tools, such as DAVID [ 5 ] and FatiGO [ 6 ], examine the percentage of the gene list that is directly associated with a node of the knowledgebase. This method is extremely fast because of its simplicity, but that same simplicity brings disadvantages. For example, in some of these tools, information about how nodes (biological terms or steps in pathways) of the knowledgebase are related to each other is ignored. Additionally, in hierarchical structures such as GO, genes with a less precise functional definition will be associated with a node closer to the root than a gene with a more precise definition. In such a case, the information content about the two genes is split into different nodes, reducing the power of the analytical method. Other tools such as GOMiner [ 7 ] and MAPPFinder [ 8 ] analyze the gene list in a broader context of the knowledgebase, looking for patterns of a larger scale than a single node. MAPPFinder searches for whole pathways (MAPPs) over-represented by the gene list. GOMiner performs analyses using genes associated with a node in GO or genes associated with any children of that node, sometimes called "inclusive analysis". In this way, GOMiner minimizes the power reduction of some simpler methods. These tools provide a powerful way for the researcher to quickly get a summarization of the gene list within a biological context. One common problem for the inclusive analytical methods, especially those using knowledgebases with polyhierarchical structures (individual nodes can have multiple parents) like GO, is correcting for multiple statistical tests, usually thousands.
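The "inclusive analysis" described above (counting genes annotated to a node or to any of its descendants) can be sketched as a recursive union over the DAG. This is an illustrative sketch, not GOMiner's implementation; the data structures, term names, and gene identifiers are hypothetical. The multiple-testing problem below arises because a test like this is run at every one of thousands of nodes.

```python
def inclusive_gene_set(term, children, gene_assoc, _cache=None):
    """Return the set of genes annotated to `term` or any of its
    descendants in a GO-like DAG (inclusive analysis)."""
    if _cache is None:
        _cache = {}
    if term in _cache:
        return _cache[term]
    genes = set(gene_assoc.get(term, ()))
    for child in children.get(term, ()):
        genes |= inclusive_gene_set(child, children, gene_assoc, _cache)
    _cache[term] = genes
    return genes

# Toy DAG (hypothetical terms/genes): root -> {metabolism, transport};
# metabolism -> {lipid_metab}
children = {"root": ["metabolism", "transport"], "metabolism": ["lipid_metab"]}
gene_assoc = {"metabolism": ["g1"], "lipid_metab": ["g2", "g3"], "transport": ["g4"]}
print(sorted(inclusive_gene_set("metabolism", children, gene_assoc)))  # → ['g1', 'g2', 'g3']
```

Because GO is polyhierarchical (a node can have several parents), the set union naturally avoids double counting genes reachable along multiple paths, and the cache keeps the traversal linear in the size of the DAG.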
In such a case, a Bonferroni correction is overly conservative to the point of being counterproductive since few if any results of the interpretation remain significant [ 7 ]. As of June 2003, GODB had >13,000 DAG nodes that may be tested, meaning a correction factor of greater than four orders of magnitude would be needed in a Bonferroni correction. Other standard methods used include controlling the Family-Wise Error Rate (FWER) using a numerical correction of the p-value (discussed in [ 9 ]) or controlling the False Discovery Rate (FDR, discussed in [ 10 ]). In both cases the methodology should again be overly conservative since, when using inclusive analysis, the p-values for each GO term are not independent [ 11 ]. Here we present, in the context of the program GOArray [ 4 ], a preliminary analysis of the feasibility of using permutation-based simulations to provide an alternate method of handling the multiple-testing problem. GOArray analyzes the gene list in the context of GO. Permutations of the differentially expressed gene list are generated from the total list of genes represented on the microarray to estimate the distribution of significant GO terms expected by chance. We analyze the nature of the distribution of significant terms in reference to varying p-values and numbers of differentially expressed genes using publicly available data sets. We then compare the list of significant terms between data sets. Finally, we discuss the implications of this distribution to provide one solution to the multiple-test problem when analyzing microarray data in the context of GO. Results Four of the test data sets analyzed were extracted from the National Center for Biotechnology Information's (NCBI) Gene Expression Omnibus (GEO) [ 12 ]. The first is an array of Drosophila markers used by Arbeitman et al. [ 13 ] (GEO accession GPL218) for a time-series study of the Drosophila life cycle.
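For a sense of scale, the two standard corrections mentioned above can be sketched on hypothetical p-values from ~13,000 tests (Bonferroni for the FWER, the Benjamini-Hochberg step-up procedure for the FDR). The p-values are invented for illustration, and, as noted above, dependence among inclusive GO tests makes even these procedures conservative in practice.

```python
def bonferroni(pvals, alpha=0.05):
    """Reject where p <= alpha / m (controls the FWER; very strict for large m)."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def benjamini_hochberg(pvals, alpha=0.05):
    """BH step-up: find the largest rank k with p_(k) <= (k/m) * alpha
    and reject the k smallest p-values (controls the FDR under independence)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    reject = [False] * m
    for i in order[:k]:
        reject[i] = True
    return reject

# With m = 13,000 tests, Bonferroni demands p <= ~3.8e-6 for any rejection.
pvals = [1e-7, 1e-6, 5e-6, 1e-5] + [0.5] * 12996
print(sum(bonferroni(pvals)), sum(benjamini_hochberg(pvals)))  # → 2 4
```

Even this toy example shows why a flat Bonferroni divisor of >10^4 discards results (here, two of the four small p-values) that a step-up FDR procedure would retain.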
This array represents 5081 microarray spots, from which there are 2825 genes as represented by unique FlyBase [ 14 ] accession numbers. The estimation of the distribution took ~16.6 hours. The mean numbers of significant terms for each combination of p-value cutoff and "Gene of Interest" (GOI) count are presented in Figure 1 (full tables of values for all figures are present in Additional File 1 ). A "trough" (a region with fewer significant terms than at the two surrounding GOI counts for the same term-significance p-value) can be observed in the topology diagonally from 500 GOI and a p-value of 0.05 down to 250 GOI and a p-value of 0.0031. There are additional, similarly "wave-shaped" features, although of lesser degree. For example, there is one with a slower rate of change running diagonally from 250 GOI in the vicinity of p-values 0.0002 and 0.000098 (285.9 and 283.9 significant terms respectively), and from 500 GOI between p-values 0.0063 and 0.0031 (660.4 and 653.3 significant terms respectively). Overall, however, there is an increase in the number of significant terms with both increasing GOI and p-value cutoff. The increase is sharp from 50 to 100 GOI, and then more gradual with increasing GOI. The increase in significant terms with increasing p-value cutoff, however, is much more gradual. The second data set is for an array of Drosophila markers used by Meiklejohn et al. [ 15 ] (GEO accession GPL356) for a study of interspecies variation. The array represents 5928 cDNA probes, from which there are 5375 unique FlyBase accession numbers. The estimation of the distributions took ~23.8 hours. The mean number of significant terms for each combination of p-value and GOI count was estimated by simulation (Figure 2 ). Again, there is a general increasing trend in the number of significant terms with increasing numbers of GOI and p-value cutoffs.
There is another observed trough, however, starting from 500 GOI and a p-value cutoff of 0.0063 diagonally to 350 GOI and a p-value cutoff of 0.00078. As with the first data set, there are also "wave-like" structures in the topology such as from approximately 250 GOI and a p-value of 0.0016 to 400 GOI and a p-value of 0.013. Given the similarity in topology to the first data set, the possibility that these two sets of FlyBase accessions have large overlap was considered. Indeed, ~90% (2560) of the FlyBase accessions from the Arbeitman data set also appear in the Meiklejohn data set, where they constitute ~50% of its accessions. That the two data sets would not be independent is to be expected, since one goal of both studies was to examine as many of the known Drosophila genes as possible. This non-independence will probably be observed for most pairs of Drosophila microarrays. Because of this, we extracted the 2815 FlyBase accession numbers from the Meiklejohn data set that did not overlap with the Arbeitman data set, and estimated the distribution of significant terms for just those genes as a comparison to the other two data sets. The simulation took ~18.3 hours and the results are presented in Figure 3 . As with the previous two data sets, there is generally an increase in the number of significant terms with increasing numbers of GOI and p-value cutoffs. There is also another trough extending from 450 and 500 GOI with a p-value cutoff of 0.05 to 200 GOI with a p-value cutoff of 0.0016. Since the two real Drosophila data sets and one simulated Drosophila data set all had a trough in the distribution, it was possible that this is due to inherent structure in GO specifically for Drosophila . Therefore, we extracted three other data sets for different species. The first of these non- Drosophila sets of genes was for S. cerevisiae (GEO accession GPL205), a set of 6084 genes. The overall topology is quite regular (Figure 4 ).
Unlike the Drosophila data sets, within the range of GOI and p-values considered there was no evidence of a trough (a region where some points would be predicted to have more significant terms than neighboring points but instead have fewer) in the distribution. There was one data point (500 GOI, p = 0.000391) with a mean number of significant terms (437.3) less than that for the same p-value and the next lower number of GOI (450 GOI, 438.7 terms). The difference between the two means is minute, and may not be meaningful. There are a few regions with leveling (little change in significant terms between points), but these were not large and the overall pattern appears somewhat predictable. The second and third non- Drosophila data sets were constructed by taking all Wormbase [ 16 ] and Gramene [ 17 ] accessions from GODB. In the case of the Wormbase data set (8224 genes), the distribution again appears somewhat regular (Figure 5 ), with just a few regions of leveling, but no major trough. There was, however, more "wave-like" structure with increasing numbers of GOI and more stringent p-values. The same was noted for the Gramene data set (4798 genes), although the leveling was considerably more apparent (Figure 6 ). This was especially true in the region from 500 GOI and p = 0.00156 to 300 GOI and p = 9.8 × 10^-5. Even with this region, however, the distribution appears somewhat smoother than that observed for the Drosophila data sets. The region in question is located near the edge of the explored space, however, and a pattern may emerge with higher numbers of GOI. Finally, to see if the same terms were consistently appearing as significant in the Drosophila data sets, we compared the actual number of significant occurrences for each term for the two data sets extracted from GEO (GPL218 and GPL356). Genes with a five-fold or greater change in expression were chosen as GOI. A p-value cutoff of 0.001 was chosen.
The list of terms that came up as significant, and the number of permutations out of 1000 that were significant, was recorded and presented as a scatter plot (Figure 7 ). From the plot, it can clearly be seen that there is a lack of correlation between the number of times a term appears as significant in one data set compared to the second data set, even accounting for a different maximum number of significant terms in the two data sets. A handful of terms were significant a similar number of times in each data set relative to the maximum count of significant terms for the respective data sets. In other words, a handful of terms mapped near the line extending from the origin to the point marked by the maximum value along each axis, which would mark roughly equivalent relative occurrences of the term as significant between the two data sets. However, these terms were near the origin and the vast majority of points were along the axes, showing a clear lack of correlation in how often terms were observed as significant between these two closely related lists of genes. Discussion Based on this set of simulations, predictability appears to be limited to specific data sets. One method of correcting our expectations after performing multiple tests would be to calculate a factor by which to modify α based on the DAG of GO terms. In other words, one could use an adjusted p-value to control the FWER or FDR. Controlling for these two types of error by use of adjusted p-values, however, assumes independence of the tests [ 11 ]. Since there is currently no practical method for directly untangling the interdependence of terms in the GO hierarchy to generate a less conservative correction, adjusted p-values are limited to overly strict results. Another method would be to determine a formula that conservatively approximates the simulated distribution.
Unfortunately, the only commonality between the distributions is that, with the exception of the Drosophila data sets, the number of significant terms increases with an increasing number of GOI and an increasing p-value cutoff. The magnitude and detailed shape of the distribution varies between all tested data sets. Even in the more regular non- Drosophila data sets, there were some fluctuations in the distribution, and a smooth surface was not observed. Since neither of the two methods of correcting expectations is currently feasible, it appears that, for now, we are forced to rely on simulation-based methods to estimate the expected distribution of significant terms for each set of genes being examined. While it would be desirable to have a smooth topology that allows for a simple formulaic calculation of the number of significant terms one would expect by chance, it is unfortunately not observed for the Drosophila data sets examined here. The trough that disrupts the Drosophila data sets was not observed, however, in the data sets for other species. The cause of this trough is undetermined, but may be due to structure within the graph of GO terms associated with FlyBase accessions. Alternatively, there could be structure within the chosen genes that is more evident with smaller data sets, since the trough appears to be deepest for the two smaller data sets. One way to approach the question of cause would be to examine which, if any, terms are observed disproportionately in the permuted sets. Based on the frequency of terms it may be possible to observe a pattern in either the genes tested or the set of associated GO terms. We have been unable to observe such a pattern, but that does not mean it does not exist. If one could be found, it may give insights into how to dissect the structure, possibly leading to a more elegant solution to the multiple test problem than a simulation-based approach. 
Though we were unable to find hints of an easy formulaic way to correct our expectations, we may be able to find a practical (e.g., efficient) method of correction through simulations. There are several ways in which simulated estimates of the distribution could be implemented to provide a less conservative, yet still statistically appropriate, alternative to a Bonferroni correction to handle the problem of correcting our expectations after performing multiple statistical tests. The simplest to implement, and likely the most accurate, would be to perform a permutation-based simulation for each analysis of a microarray data set in the context of GO. The primary problem with this approach is that it is computationally intensive since the GOI would need to be permuted and scored a thousand or more times for every analysis of a microarray. While tools such as parallel processing can reduce the absolute time necessary to perform the simulations, it is not the most elegant way to solve the problem. Another method would be to simply generate the estimated distribution, again using a permutation-based simulation, once for each set of accession numbers (e.g., each microarray design) for a range of GOI counts and p-value cutoffs, similar to what we have done here but in finer detail, and storing the results. The most conservative simulation distribution neighboring the experimental combination of p-value and GOI count could then be extracted from the stored table to provide an estimated distribution. One problem with this approach is determining how fine a table to design (e.g., the number of values to simulate for each of the two primary parameters). With a simple 10 × 10 matrix, the simulation took ~16–24 hours on a single 2.4 GHz Xeon processor. A finer matrix of parameter values will result in a better estimation of the topology, but consumes more time to compute in a non-linear fashion.
However, if a large number of microarray experiments is to be conducted with a single geometry, this method would reduce the total time to estimate significance across all experiments since the simulation would only need to be performed once. Additionally, it will be necessary to determine what range of values should be considered. For the smallest data set tested here (>2500 FlyBase accessions), GOI lists representing less than 20% (500) of the accession numbers were used. The amount of computation time that should be dedicated to simulating the distribution of significant terms expected by chance will likely be a balance determined by the computing resources available, estimates of how many experiments will use the array design, and minimal p-value cutoffs and maximal GOI parameter values determined by the predicted user needs. Conclusions Based on the large simulations performed here, it appears that the rate at which terms are observed as significant is not predictable between sets of genes for a given GOI count and p-value cutoff. Even within a particular species, there is no correlation in relative frequency at which particular terms are significant. Therefore, permutation-based simulations appear to be the most reliable way to generate an estimate of the expected distribution of significant terms. As a result, we plan to extend the confidence tests in the next version of GOArray (version 2.0) by implementing a "false positive frequency estimation" for individual terms based on simulation results. Also, since which terms are observed as significant appears to be highly dependent on the structure of the gene list, and possibly the list of GOI, we plan to examine the merits of bootstrap methods (e.g. in the simulations choosing GOI from the original list of GOI with replacement) rather than a strict permutation method (e.g. choosing GOI from the total list of genes without replacement). 
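The strict permutation scheme discussed above (GOI drawn from the total gene list without replacement, GOI count held fixed) can be sketched as follows. The gene names, term sets, and the placeholder significance rule are hypothetical; a real run would score each term with the z-score test described in Methods.

```python
import random

def simulate_null(all_genes, n_goi, term_genes, is_significant, n_perm=1000, seed=1):
    """For each permutation, relabel a random subset of n_goi genes as GOI
    (keeping the GOI count fixed) and count the terms called significant."""
    rng = random.Random(seed)  # user-supplied seed, for reproducibility
    counts = []
    for _ in range(n_perm):
        goi = set(rng.sample(all_genes, n_goi))
        counts.append(sum(is_significant(goi, g) for g in term_genes.values()))
    return counts

# Toy setup: 50 genes, 10 GOI, three overlapping term gene sets; a term is
# "significant" here if more than half of its genes are GOI (placeholder rule).
genes = [f"g{i}" for i in range(50)]
terms = {t: set(genes[i::7]) for t, i in [("a", 0), ("b", 1), ("c", 2)]}
null = simulate_null(genes, 10, terms,
                     lambda goi, g: len(goi & g) / len(g) > 0.5, n_perm=200)
print(len(null), max(null) <= len(terms))  # → 200 True
```

The returned list is the estimated null distribution of significant-term counts; an observed count from the real GOI list can then be compared against its upper quantiles.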
In the best case, it appears feasible to pre-generate the estimated distribution of the number of significant terms through a permutation-based simulation, then use a lookup table during analyses of experimental data sets. In the worst case, one would need to generate the distribution for each experimental data set, possibly testing various p-value cutoffs to determine where power is maximal. Even in the worst case, currently available processing power allows the test for a single set of genes and a single p-value cutoff to be performed in well under an hour. While near-instant results would be desirable to end users, the worst case scenario is still quite practical and will only improve over time alongside general computer performance. Thus, relying on permutation-based methods may not be a serious inconvenience, and in fact a highly accurate method of assessing our confidence in the results of the analysis. Methods Test System All tests were performed on a single processor of a dual Xeon 2.4 GHz CPU system with 2 gigabytes of RAM. The operating system was RedHat Linux 7.3 with an SMP kernel. All time calculations were determined using the Linux command time . GOArray GOArray is a Perl script that maps genes of interest (GOI) and non-GOI (NGOI), where the difference between the two gene lists is determined by the researcher, from a microarray experiment to terms in GO and all of that term's parent terms. The GO rooted-DAG is represented in a hash table using the GODB field terms.id as the keys. A z-score is assigned to each term based on the number of genes associated with that term or any of its children relative to the total numbers of GOI and NGOI. Z-scores were used to calculate p-values since they are easy and efficient to compute, and they approximate the hypergeometric p-values when the number of NGOI and GOI for the entire data set is large compared to the NGOI and GOI for the individual nodes.
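The approximation relied on here can be illustrated with standard-library arithmetic. This is a sketch of the normal approximation to the hypergeometric test, not GOArray's exact Perl formula, and the counts (5000 genes on the array, 200 annotated to a node, 100 GOI, 12 of them on the node) are hypothetical.

```python
import math

def hypergeom_sf(k, M, n, N):
    """Exact P(X >= k) for X ~ Hypergeometric(population M, n successes, N draws)."""
    denom = math.comb(M, N)
    return sum(math.comb(n, i) * math.comb(M - n, N - i)
               for i in range(k, min(n, N) + 1)) / denom

def z_pvalue(k, M, n, N):
    """Two-sided normal-approximation p-value for the same count."""
    p = n / M                                      # background annotation rate
    mu = N * p
    var = N * p * (1 - p) * (M - N) / (M - 1)      # hypergeometric variance
    z = (k - mu) / math.sqrt(var)
    return math.erfc(abs(z) / math.sqrt(2))        # = 2 * upper-tail normal prob

# Hypothetical counts: 12 of 100 GOI fall on a node annotating 200 of 5000 genes
print(hypergeom_sf(12, 5000, 200, 100), z_pvalue(12, 5000, 200, 100))
```

With these numbers both routes return a p-value far below 0.001, and the two agree within an order of magnitude, which is the regime the text describes (node counts small relative to the data set totals).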
Terms with only one gene in the numerator (GOI) are not given a z-score since it is not possible to have an overrepresentation of GOI with a single gene. P-values are determined using twice the value (i.e., a two-sided test) returned by the routine "uprob($z)" (where "$z" is the z-score) from the Perl module Statistics::Distributions available from the Comprehensive Perl Archive Network (CPAN) [ 18 ]. The June 2003 GODB data set is used in this analysis. Simulations of the GOI list are performed by permuting the status of each gene, keeping the total number of GOI constant. For example, in the case of an experiment examining 5000 total genes with 100 GOI, in each simulation 100 of the 5000 genes are assigned the status of GOI, and 4900 genes are assigned the status of NGOI. The only modifications to the GOArray source code in this analysis are the addition of loop structures to iterate over the numbers of GOI and to count the number of significant terms under the different p-value cutoffs used to determine when a term is significant, the use of a user-determined random number seed rather than a computer-determined one for reproducibility, and the addition of a routine to summarize the simulation data. The source code for both GOArray and the modifications discussed here is available on the Web [ 19 ]. Distributions Using the modified GOArray code, the number of significant terms was determined for p-values (determining which terms were significant, not which genes were GOI) from 0.05 down to ~0.000098 (starting with 0.05 and decreasing the p-value by a factor of 2 with each iteration), and GOI counts from 50 to 500 in increments of 50. This generates the number of significant terms for each of 1000 permutations for all combinations of ten different p-values and ten different GOI counts, for a total of 100 distributions of 1000 permutations for each data set. Authors' Contributions MVO conceived of the study, performed the analyses, and drafted the manuscript.
HYZ participated in the statistical design. KHC participated in the study design and coordination. Supplementary Material Additional File 1 A Microsoft Word document containing the data tables used to generate Figures 1 through 6. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC518975.xml |
340951 | Quality Information for Improved Health | The Medical Library Association converts access to information into access to knowledge in a networked environment of digital resources | “I look forward to such an organization of the literary records of medicine that a puzzled worker in any part of the civilized world shall in an hour be able to gain the knowledge pertaining to a subject of the experience of every other man in the world.” —George Gould, first president of the Association of Medical Librarians (now the Medical Library Association), May 1898 For over 100 years, the Medical Library Association (MLA) has upheld the belief that quality information is essential for improved health and has worked to ensure that health sciences librarians have the skills, knowledge, and leadership necessary for the delivery of information in a biomedical setting. The association has also promoted the concept of unrestricted, affordable, and permanent access to health information worldwide. For example, MLA's peer-reviewed Journal of the Medical Library Association (JMLA), formerly the Bulletin of the Medical Library Association (BMLA), has been made available since January 2000 on PubMed Central (PMC), a digital archive of the life sciences journal literature developed and managed by the National Center for Biotechnology Information (NCBI) and the United States National Library of Medicine (NLM). Access to PMC is free and unrestricted. Recently, NLM, working with MLA headquarters, made the full-text archives of BMLA from 1911 onward available online through PMC. This is an excellent resource for the study of health information sciences and the management of knowledge-based information, putting into practice MLA's belief in open access. The association has supported open access to information in several other ways, including memberships in the Scholarly Publishing and Academic Resources Coalition (SPARC) and the Information Access Alliance (IAA).
MLA's statement on open access, found at http://www.mlanet.org/government/info_access/openaccess_statement.html, defines the association's position on this important topic. However, open access increases the need for more sophisticated information management tools and systems, such as quality filtering and customization of clinical and research information at the point of need and decision-making. MLA is pursuing a number of initiatives that address the specific information needs of clinicians, healthcare students, biomedical researchers, and institutional leaders. Our members are excited to be in a unique position to develop tools, resources, and advice on how to find relevant information on the Internet. For example, MLA members have developed a User's Guide to Finding and Evaluating Health Information on the Web for the Pew Internet and American Life Project. The guide provides access and evaluation guidelines and MLA's top ten most useful Web sites, as well as lists of top Web sites for cancer, diabetes, and heart disease. MLA is furthering the concept of evidence-based medicine through its exploration and definition of expert searching techniques (see http://www.mlanet.org/resources/expert_search/) and the provision of continuing education opportunities in this area (see http://www.mlanet.org/education/telecon/ebhc/index.html). These techniques identify best practices and cutting-edge clinical and research knowledge and cull through a sometimes overwhelming amount of medical literature that continues to grow exponentially. MLA's work in the area of expert searching was prompted by the increased emphasis on evidence-based practice by the Institute of Medicine.
This, along with the publicity following the unfortunate death of a healthy research volunteer at Johns Hopkins, which highlighted the need for more vigilance in maintaining the quality of literature searching, has created a renewed interest in the knowledge base and skill set required for expert literature searching and expert consultation. The use of evidence- or knowledge-based information retrieved through the expert searching process can help ensure the clinical, administrative, educational, and research success and positive performance of the individual healthcare provider as well as the hospital or academic health center. In addition to retrieving the best evidence, it is also important to deliver knowledge and services within the specialized context of patient care, research, and learning. MLA's exploration, along with NLM, of the informationist concept, i.e., specialist librarians who blend the knowledge and skills of both the clinical and information sciences, is defining new roles for librarians in providing filtered and customized clinical/research information at the point of need and decision-making (for more information, see http://www.mlanet.org/research/informationist/). Librarians are being recruited to join clinical and research teams as clinical medical librarians and information specialists in context and to provide expert consultation on issues ranging from informatics literacy to evidence-based medicine classes. Besides healthcare providers, millions of consumers search for health information on the Web every year. Recognizing the documented difficulties and frustrations health professionals and consumers face in coping with the barrage of available information in a way that results in informed healthcare decisions, MLA has established its health information literacy program (see http://www.mlanet.org/resources/healthlit/index.html) to stress the importance of “information” in health literacy.
The association defines health information literacy as the set of abilities needed to recognize a health information need; to identify likely information sources and use them to retrieve relevant information; to assess the quality of the information and its applicability to a specific situation; and to analyze, understand, and use the information to make good health decisions. MLA has also developed a resources Web site for health consumers at http://www.mlanet.org/resources/consumr_index.html and http://caphis.mlanet.org/consumer/index.html that helps them find quality health information on the Web. These tools are publicly available to anyone in the world at any time. MLA recognizes that this is a time of rapid change in our society in which the availability of digital resources in a networked environment provides unprecedented opportunities for more open access to the scientific and medical literature. As health sciences librarians, we are excited about the potential to serve a much wider group of international consumers, ranging from medical researchers to patients and their relatives. We will continue to work to convert access to information into access to knowledge. www.mlanet.org | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC340951.xml |
516794 | The Genome Assembly Archive: A New Public Resource | With the genome assembly archive, it is possible to examine the raw data that underlies the DNA sequence in any sequenced genome | Scientists have dedicated considerable effort to decoding the genomes of an ever-growing list of species, ranging from small viruses, whose genomes may be just a few thousand nucleotides in length, to large mammalian genomes, three billion nucleotides and larger. Many aspects of life science research have benefited from the accumulation of these data, but decoded genomes could be even more valuable if important information about the genome sequence, currently being lost, were preserved. Occasionally, questions arise about a specific position in the sequence—or a variant in the sequence is observed in a new sample. At times like these, it would be helpful to be able to go back to the experimental evidence that underlies the genome sequence at that position, to see if there is any ambiguity or uncertainty about the sequence. As things stand, that's almost impossible. To understand why this is the case, it is necessary to know a bit more about how a genome sequence is put together. Current sequencing technology can only generate 700–800 nucleotides at a time; genomes must therefore be shattered into many small fragments (in what is known as the “shotgun” approach), which are then sequenced. The sequences are assembled to generate a consensus sequence that, if all steps work perfectly, matches the original DNA molecule. Since the sequencing of Haemophilus influenzae in 1995 (Fleischmann et al. 1995), most bacterial and archaeal species have been sequenced by fragmenting the entire genome, sequencing the pieces, and assembling the result (the whole-genome shotgun, or WGS, strategy).
In recent years, ever-larger sequencing projects have followed the WGS approach, requiring teams of computer experts and the use of increasingly sophisticated assembly algorithms in order to put together the huge number of sequence fragments. Without really being aware of it, the bioinformaticians who assemble genomes have for years been discarding the valuable information on how all of the individual sequence fragments align to the assembled chromosomes. This loss has gone largely unremarked because the scientific community has focused its attention primarily on the end product: the final genome sequence itself. It is only natural to regard the genome sequence, which is the basis for gene discovery and for functional understanding of the biology of the organism, as the primary result of a WGS project. In reality, though, a WGS project is an experiment in which large numbers of sequencing reactions are run, followed by a combination of computational work and additional sequencing to complete the genome. Three years ago, the Trace Archive (at The National Center for Biotechnology Information and The Wellcome Trust Genome Campus in Hinxton, United Kingdom) was developed to store the raw sequence data and to facilitate dissemination of this data, but currently there is no database that captures the alignment of these reads to the published genome sequence. Many scientists would be surprised to hear that genome assemblies are unavailable. One might infer that the assembly of a genome could be reconstructed from the genome sequence and the associated traces. However, aligning the traces to the genome will generally not reproduce the assembly, both because many of the traces will have alternate possible alignments and because, in some cases, parts of the assembly are manually refined based on additional experimental data. 
Furthermore, only a small number of large-scale centers have the computing hardware, software, and bioinformatics expertise to allow them to assemble a large genome. To bridge this gap, we have developed the Assembly Archive (http://www.ncbi.nlm.nih.gov/projects/assembly). The archive has been developed to store both an archival record of how a particular assembly was constructed and the alignments of any set of traces to a reference genome. Assemblies contained in this archive will be available in the GenBank (http://www.ncbi.nlm.nih.gov/Genbank/index.html), DDBJ (http://www.ddbj.nig.ac.jp/), and EMBL (http://www.ebi.ac.uk/embl) databases, and all underlying traces are required to be deposited in the Trace Archive. The Assembly Archive's first entries are a set of seven closely related strains of Bacillus anthracis (the causative agent of anthrax), which have been sequenced as part of an effort to understand the detailed variation of that species. This includes the completed reference genome of the Ames strain, sequenced from a sample kept frozen since 1981, when it was originally isolated in West Texas (J. Ravel, personal communication). For the first time, the evidence behind each polymorphism in these assembled genomes will be directly accessible to the scientific community. Microbial Forensics Recently, heightened awareness of the threat of bioterrorism has spurred efforts to sequence genomes of multiple strains and isolates of a number of microbial pathogens, with the goal of cataloging all sequence differences between genomes. These efforts began with the study of the B. anthracis bacterium (the bacterium sent through the United States mail in late 2001) in order to determine if there were any differences between it and a reference laboratory sample (Read et al. 2002).
This and subsequent studies have prompted many scientists to focus much greater attention on the assembly of a genome, and to regard the assembly rather than the genome as the object of greatest interest. In these forensic studies, we sequence whole genomes in order to discover every possible genetic difference between two bacteria or viruses. These genomes may differ in just one or two nucleotides out of millions that are identical; for example, the study referenced above uncovered just four single nucleotide polymorphisms (SNPs) in a chromosome of 5.23 million base pairs. The close similarity between the sequences forces us to consider all the facts behind each individual nucleotide that appears different. For studies that might be used as evidence in criminal investigations, it is essential to produce this information, and furthermore to quantify our confidence in each nucleotide in the genome. Regions of a genome with deep coverage are much more accurate than those with light coverage (i.e., regions with just one or two sequence reads). Figure 1 shows one of the interfaces in the Assembly Archive, covering a small region of the multiple alignment of sequences and traces to one of the newly deposited anthrax genomes. It also shows how it is possible to examine the evidence underlying a specific base in the DNA sequence. Figure 1: Snapshot of the Underlying Sequences and Traces from an Assembly of B. anthracis. The consensus sequence shown across the top of the figure contains multiple sequences that validate each nucleotide in the window. Runs of a single base (monomer runs) are common causes of base-calling errors, because the peaks in the underlying trace data sometimes merge together. The sequence shown includes several monomer runs; several of the underlying traces are shown as well.
For example, the run of six As at the far left of the figure is supported by several reads in which all six peaks are distinct, as well as other reads in which the six nucleotides appear as one broad peak. By examining data such as these, one can easily verify (or disprove) putative SNPs in this genome. Human SNP Research Human polymorphism studies (e.g. Sachidanandam et al. 2001) are a tremendously active and important area of research today. SNPs are directly implicated in a large number of diseases and inherited traits (Risch 2000, Chakravarti 2001). Within “haplotypes,” they describe individual variation for drug response (McLeod and Evans 2001) and provide a genetic framework for understanding disease phenotype (Hoehe 2003). In contrast with prokaryotic genomes, the human genome (like the genomes of many other animals, plants, and a broad range of eukaryotes) is diploid, and as a result many SNPs can be discovered within a single assembly, which contains the chromosomes representing the two parent organisms. SNPs can also be found through population studies in which the same locus is sampled from multiple individuals. In either case, the evidence for a SNP begins with the alignment of two different genomes. Despite the clear need for it, the original evidence for the genome itself—the assembly—is not available, and is not linked to the evidence in the Trace Archive. If it were available, many of the polymorphisms already reported could be validated, and many more SNPs might be discovered. Assemblies will also allow centers to better coordinate their gap-closing and finishing efforts, as has been recently noted (Schmutz et al. 2004). We hope that the availability of the Assembly Archive will encourage human genome sequencers, and sequencers of other genomes, to begin depositing their assemblies into this public resource, where they can be shared by all. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC516794.xml
516780 | An outbreak of Salmonella Enteritidis phage type 34a infection associated with a Chinese restaurant in Suffolk, United Kingdom | Background On 30th July 2002, the Suffolk Communicable Disease Control Team received notifications of gastrointestinal illness due to Salmonella Enteritidis in subjects who had eaten food from a Chinese restaurant on 27th July. An Outbreak Control Team was formed, resulting in extensive epidemiological, microbiological and environmental investigations. Methods Attempts were made to contact everybody who ate food from the restaurant on 27th July and a standard case definition was adopted. Using a pre-designed proforma, information was gathered from both sick and well subjects. Food specific attack rates were calculated and the two-tailed Fisher's exact test was used to test the difference between the type of food consumed and health status. Using a retrospective cohort design, univariate relative risks and 95% confidence intervals were calculated for specific food items. Results Data were gathered on 52 people, of whom 38 developed gastrointestinal symptoms; 16 male and 22 female. The mean age was 27 years. The mean incubation period was 30 hours, with a range of 6 to 90 hours. Food attack rates were significantly higher for egg, special and chicken fried rice. The relative risks (95% confidence intervals) for these food items were 1.97 (1.11–3.48), 1.56 (1.23–1.97) and 1.48 (1.20–1.83), respectively. Interviews with the chef revealed that many eggs were used in the preparation of egg-fried rice, which was left at room temperature for seven hours and was used in the preparation of the other two rice dishes. Of the 31 submitted stool specimens, 28 tested positive for S. Enteritidis phage type 34a and one for S. Enteritidis phage type 4.
Conclusion In the absence of leftover food available for microbiological examination, epidemiological investigation strongly suggested the eggs used in the preparation of the egg-fried rice as the vehicle for this outbreak. This investigation highlights the importance of safe practices in the cooking and handling of eggs in restaurants. | Background Infection due to Salmonella is a major public health problem in England and Wales, with reports of over 14,400 infections due to Salmonella in the year 2003 [ 1 ]. The most common serotypes responsible for human infection are S. Enteritidis, S. Typhimurium and S. Virchow [ 2 ]. Although Salmonella enterica serovar Enteritidis phage types 4, 21 and 6 have been reported in previous outbreaks [ 3 ], phage type 34a is rare and reports of outbreaks due to this serotype are scarce in the literature. Apart from one report from Wales [ 4 ], we are not aware of any other outbreaks due to S. Enteritidis PT 34a infection reported in the literature from the United Kingdom. In this report, we present the results of an epidemiological investigation of an outbreak due to this rare phage type associated with a Chinese restaurant in Suffolk, United Kingdom. Methods On 30th July 2002, the Suffolk Communicable Disease Control team (SCDC) was informed by the consultant microbiologist that S. Enteritidis had been isolated from stool samples of five patients. All had recently eaten a meal in a local Chinese restaurant. Further enquiries revealed that there were more patients with similar food histories and gastrointestinal symptoms. An Outbreak Control Team was convened on 31st July and it was decided that a full investigation should be carried out to identify the extent of the outbreak and the probable vehicle of infection, and to advise on the appropriate control measures. Epidemiological The environmental health department (EHD) staff initially gathered information from people who had become ill on a standard data collection form.
In the initial stages of the investigation, it became apparent that all those who became ill had eaten at, or had bought a takeaway from, the restaurant on 27th July 2002. The information collected included name, address, sex, symptoms and date of onset. The restaurant provided the list of food items that were served/sold on the day in question. This menu extended to 40 food items. This list was shown to the restaurant patrons and they were asked to state the food items they had eaten. A variety of ways was used to identify further cases, including the technique of snowball sampling. This involved asking the patrons whether they were aware of any others who had similar symptoms and had eaten in the restaurant. General Practitioners providing primary care in the area were contacted and were requested to check for patients with gastrointestinal symptoms. The presenting symptoms of the patrons were diarrhoea, headache, abdominal pain and fever. The next step involved interviewing all those who had eaten/bought food on 27th July, whether they became ill or not. The restaurant provided table-booking details. The following case definition was adopted for the outbreak: "Symptoms of acute gastroenteritis including one of the following: diarrhoea, vomiting or abdominal pain up to 96 hrs after having had a meal from the said restaurant including takeaway between 22 and 30 July 2002 and/or individuals who have a positive stool sample for S. Enteritidis up to 96 hrs after having a meal from the restaurant including a takeaway between 22 and 30 July 2002". An analytical investigation was carried out using a retrospective cohort design. Efforts were made to identify anyone who ate or bought food at the restaurant on 27th July. Eligibility for membership of the cohort was defined as a person having had the opportunity to eat any of the food items available on the day. Statistical methods Data were entered into the Statistical Package for the Social Sciences (SPSS) version 10 [ 5 ].
Food specific attack rates and the corresponding two-tailed p values were derived by Fisher's exact test [ 6 ]. Univariate relative risks (RR) and 95% Confidence Intervals (CI) were calculated using standard cohort analysis [ 7 ]. Microbiological Stool samples were requested from all who had eaten food from the restaurant on 27th of July. Environmental sampling was not carried out as this was considered to be of limited value. There was no food left over from 27th July, but three food samples were taken on 30th July and sent for analysis to the food laboratory at Chelmsford Public Health Laboratory. Stool specimens were sent to Ipswich Hospital microbiology laboratory and were cultured for the presence of Salmonella sp. Isolates of Salmonella were forwarded to the Laboratory of Enteric Pathogens at the Central Public Health Laboratory, Colindale for phage typing. Standard procedures were adopted for phage typing at the laboratory [ 8 ]. Environmental The EHD staff inspected the premises, including verifying the procedures for hazard analysis and critical control point (HACCP). Egg storage and the preparation of egg items were also investigated during the visit. Efforts were made to trace the egg trail back to the supplier. Results Epidemiological Data were gathered from 52 subjects who had eaten food from the restaurant on 27th July, of whom 38 developed symptoms and 14 were free of symptoms. Of the 38 who became ill, 16 were male and 22 were female. The mean age was 27 years. The mean incubation period was 30 hours, with a range of 6 to 90 hours, suggesting a point source outbreak. Two patients received hospital care and there were no deaths. No gastrointestinal illness was reported among the kitchen staff of the restaurant in the weeks before or during the outbreak. On investigation of food preparation practices at the restaurant, it appeared that dishes containing egg were the most likely vehicle for this outbreak.
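The food-specific comparisons described under Statistical methods can be reproduced with a small two-sided Fisher's exact test. This is a generic sketch (not the SPSS routine the authors used), and packages differ slightly in how ties are handled for the two-sided p-value, so the last digit may not match the published table exactly:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    # 2x2 table: a, b = ill / not ill among those who ate the item;
    #            c, d = ill / not ill among those who did not.
    # Sums hypergeometric probabilities of every table with the same
    # margins that is no more likely than the observed table.
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, row1)

    def p_table(x):
        return comb(col1, x) * comb(n - col1, row1 - x) / denom

    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    p_obs = p_table(a)
    # Small tolerance so floating-point ties count as "equally likely".
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Egg fried rice (Table 1): 31 ill / 5 well among eaters, 7 ill / 9 well otherwise.
p = fisher_exact_two_sided(31, 5, 7, 9)
```

With these counts the result is comfortably below the 0.05 significance threshold, consistent with the association reported for egg fried rice.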
However, this information was not discussed when gathering data from the subjects. Data were gathered in a standardised format from all subjects to avoid any interviewer or recall bias. We had a strong "a priori" hypothesis, formed before looking at the data, that people who had eaten egg or food that had come into contact with egg were at an increased risk, and these items were analysed first. During analysis it became apparent that illness was significantly associated with the three food items that contained egg or the egg rice mixture, as shown by the food specific attack rates (Table 1) and the increased relative risks (Table 2). When many other food items eaten on the day, including pork-fried rice and a variety of fish dishes, were analysed, none showed an increased attack rate or was significant in the cohort analysis.
Table 1. Specific attack rates of suspected foods (p values by two-tailed Fisher's exact test)
  Egg fried rice:     eaten: 31 ill, 5 not ill (attack rate 86.1%); not eaten: 7 ill, 9 not ill (43.8%); p = 0.002
  Special fried rice: eaten: 13 ill, 0 not ill (100.0%); not eaten: 25 ill, 14 not ill (64.0%); p = 0.009
  Chicken fried rice: eaten: 9 ill, 0 not ill (100.0%); not eaten: 29 ill, 14 not ill (67.4%); p = 0.04
Table 2. Relative risk and 95% Confidence Intervals of suspected foods
  Egg fried rice:     RR 1.97 (95% CI 1.11–3.48)
  Special fried rice: RR 1.56 (95% CI 1.23–1.97)
  Chicken fried rice: RR 1.48 (95% CI 1.20–1.83)
Microbiological A total of 31 stool specimens were submitted to the laboratory, from which S. Enteritidis was isolated in 29. Twenty-eight of these 29 isolates were confirmed to be PT 34a and one as PT4. No pathogens were isolated from food samples taken from the restaurant on 30th July. Environmental During the visit to the restaurant, the EHD staff reported that there was no evidence of hazard analysis and noted many cleaning and maintenance issues. Hand wash facilities were inadequate. The chef explained that after preparing the egg rice mixture, it was left out at room temperature for the rest of the evening and reheated when ordered.
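The relative risks in Table 2 follow directly from the 2x2 counts in Table 1. A minimal sketch using the usual log relative-risk (Katz) interval, which we assume is what "standard cohort analysis" refers to, reproduces the egg fried rice row:

```python
import math

def relative_risk_ci(a, b, c, d, z=1.96):
    # a, b: ill / not ill among the exposed (ate the food item);
    # c, d: ill / not ill among the unexposed.
    rr = (a / (a + b)) / (c / (c + d))
    # Katz log-interval: exp(log(RR) +/- z * SE(log RR)).
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Egg fried rice (Table 1): 31 ill / 5 well among eaters, 7 ill / 9 well otherwise.
rr, lower, upper = relative_risk_ci(31, 5, 7, 9)  # ~1.97 (1.11-3.48)
```

The same function applied to the special fried rice counts (13/0 vs. 25/14) yields 1.56 (1.23–1.97), matching Table 2.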
This mixture was used for some of the other fried rice items. It was estimated that on the evening of the 27th of July the egg rice mixture was left at room temperature for seven hours. The restaurant received eggs from a supplier in London every week and they were not refrigerated. Attempts to trace the egg trail were not successful. Control measures The restaurant closed voluntarily on 31st July and the EHD staff reassessed the situation on the evening of 1st of August. As they were satisfied with the arrangements, the restaurant was allowed to reopen. The owner and restaurant staff were provided with information on proper cooking methods and the importance of undertaking HACCP. Discussion The epidemiological investigation showed that the eggs used in the preparation of the egg-fried rice, which in turn was used in the preparation of some of the other rice items, were the vehicle of infection. Isolation of the unusual phage type 34a strengthened the conclusion that eating from the restaurant was linked to a point source outbreak. In general, by the time investigations are initiated, often no food material is available for laboratory analysis and the investigator has to rely on epidemiological evidence. The first step in identifying the source of an outbreak is the calculation of attack rates, and the responsible food should have a significantly higher attack rate [ 9 ]. In this study, three types of food were found to have higher attack rates and were considered responsible for the outbreak (Table 1). All these items contained egg or the egg rice mixture, which was left at room temperature for a long time. Cohort analysis also showed elevated RRs, which were significant (Table 2). Seven sick patrons did not eat egg-fried rice. Descriptive analysis showed that all except one gave a history of eating chicken and/or special fried rice. Statistical methods have better power when there is an "a priori" hypothesis, as shown in the study of summer excess of leukaemia [ 10 ].
We had an "a priori" hypothesis that food items containing eggs increased the risk of illness. To our knowledge, this is the third report of an outbreak due to phage type 34a and the first of its kind in England published in the literature. In the UK, this phage type has been associated with travel abroad, especially to southern Spain [ 11 ], and indigenous infections are rare. The restaurant received eggs from two sources, one of which was a packaging firm. Hence, it was not possible to determine the origin of the eggs. An outbreak due to the closely related phage type 34, associated with an egg-containing dish in a Mexican restaurant in the United States, has also been described [ 12 ]. There have been earlier reports of S. Enteritidis outbreaks associated with Chinese food businesses in England [ 13 ], Scotland [ 14 ] and the United States [ 15 ], although it is not clear whether any shortfalls in specific food handling techniques were responsible. In one instance [ 15 ], egg roll batter was made from pooled shelled eggs which were left at room temperature throughout the day. The proportion of eggs infected with S. Enteritidis has been reported to be low [ 16 ] and hence the risk of acquiring infection from consuming a single raw egg is much lower. However, the practice of pooling shelled eggs together with storage at room temperature, as happened in our outbreak, promotes bacterial multiplication, and a single contaminated egg can contaminate different types of food. The role of S. Enteritidis in causing food borne outbreaks is well known, as it has the ability to contaminate eggs without causing discernible illness in the birds affected [ 17 ]. Eggs have been implicated as the source of Salmonella infection in many previous outbreaks [ 18 - 22 ]. Hayes et al [ 23 ], in their case control study in Wales, found that undercooked hens' eggs are an important risk factor for sporadic Salmonella infections.
We could not find any veterinary data on phage type 34 in British flocks. We searched the literature to determine whether there is any molecular relationship between phage type 34a and phage type 4. Hudson et al [ 24 ], based on the results of pulsed-field gel electrophoresis, concluded that different S. Enteritidis phage types appear to be genetically related or clonal. Discussion of the stability of phage types of S. Enteritidis can also be found in the literature. Conversions of phage type 4 to 24, phage type 23 to 8 and phage type 4 to 7 have been reported. We could not find reports linking phage type 34a and phage type 4. In a recent public health investigation [ 25 ] of S. Enteritidis in raw eggshells, various serotypes of Salmonella were isolated from 23 out of 449 (5.1%) pooled samples labelled as originating from Spain. These sero/phage types included S. Enteritidis PT6a, PT5c, 13a, 14b, 58, PT6d, PT1, PT1c and PT12. A few limitations of this study are to be noted. The origin of the suspected contaminated eggs could not be traced. Although trace-back exercises are key in epidemic investigations, often they are not successful due to logistic and practical reasons. In our outbreak, one of the suppliers to the restaurant turned out to be a packaging firm. During our investigation, we found that there were problems with the distribution system, which prevented us from pinpointing the origin of the contaminated eggs. A possibility always exists that we missed a few subjects from this investigation, and not all could be persuaded to provide a stool sample. There was no single list of all the patrons who ate/purchased food on the evening. However, all efforts were made to contact the patrons, and the outbreak caused considerable publicity in the local media. Hence, we are confident that we have included most patrons. Although the precise number of patrons who were not included will never be known, we are confident that their number is small, perhaps in the region of 10 to 15.
Palmer [ 26 ] has pointed out the need to undertake outbreak investigations rapidly but at the same time with sound methodology. We tried to adopt the standard approach to investigating an outbreak, including a retrospective cohort study. However, we did not attempt multivariate analysis due to the small number of subjects involved in the investigation. S. Enteritidis PT4 was isolated from one of the subjects. Further investigation revealed that this subject had recently returned from holiday in continental Europe and had suffered mild symptoms before the meal. In response to this and other outbreaks associated with eggs, a Public Health Investigation was launched in October 2002 in the UK to determine the rate of Salmonella contamination in eggs. Tests of nearly 4000 eggs showed that Salmonella was recovered from 5.3% of pooled eggs [ 25 ]. The Food Standards Agency has also produced a leaflet titled "Eggs – what caterers need to know" [ 27 ] which emphasises the importance of thoroughly cooking eggs, buying eggs from reputable suppliers and using pasteurised eggs when serving vulnerable individuals. Conclusions Investigation of this outbreak was greatly facilitated by the close cooperation between the local EHD, the Communicable Disease Control Team, the microbiological laboratory and local health care providers. Although food samples from this point source outbreak were not available for microbiological culture, epidemiological evidence pointed to egg-containing dishes as the most likely source of the outbreak. This outbreak highlights the continuing hazards of raw eggs. It is likely that the use of pasteurised eggs and the adoption of safe food preparation practices would have prevented this outbreak. Competing interests The first author (PB) served as the expert witness for the prosecution during the court proceedings. Authors' contributions PB and TS conceived and designed the study and drafted the manuscript.
PB analysed the data. RK oversaw the microbiological investigation. PB, TS, RK and HM all interpreted the results of the analysis and critically reviewed the manuscript. All authors read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here:
A DNA-Binding Protein Helps Repair Breaks in DNA Double Helix

One of the central problems for much of the 20th century was how to reconcile genetic stability with evolutionary change. Genomic fidelity was thought to arise from an inherent invariability in the DNA structure itself. Biologists now know that DNA constantly undergoes modifications as it unwinds, replicates, condenses, twists, and untwists. This dynamic interplay produces both stability and variation—and occasionally genetic damage. If DNA damage goes unrepaired, it can disrupt chromosomal integrity and may lead to cancer and other diseases. When the DNA double helix breaks, the cell must enlist a number of proteins to repair the broken DNA ends, but much remains to be learned about the molecular mechanisms involved. Tracking a protein that binds to single strands of DNA during replication and recombination in living yeast cells, Xuan Wang and James Haber report that this protein plays a role in at least two key steps in the repair of double-strand breaks in DNA. When double-strand breaks occur, the cell mounts a search for similar (homologous) sequences that can be used as a template to repair the damaged sequence. If successful, the broken DNA molecule base-pairs with the homologous region and forms a complex, ultimately replacing the damaged sequence with a similar sequence. In yeast—which serves as a stand-in for higher eukaryotes, including humans—this "strand invasion" process requires both an exchange protein, called Rad51, and a single-stranded DNA-binding protein, called RPA (replication protein A). Single-stranded binding proteins bind to regions of DNA that are opened up during replication. They also bind to strands when broken ends of DNA are cut by enzymes that leave long single-stranded tails. RPA proteins are thought to facilitate the formation of Rad51 polymers, or filaments, on single-stranded DNA by clearing away structures that block Rad51's path.
The growing filament searches for homologous DNA sequences and promotes the invasion of the single strand, preparing it to copy the homologous template by "repair DNA synthesis," which patches up the lesion. To investigate how RPA functions in double-strand break repair in a living organism, Wang and Haber created cells with a double-strand break at a specific site and monitored the activity of proteins recruited to repair the damage. With this approach, the researchers could observe these interactions in living yeast to determine what role RPA plays in repairing DNA damage and how it works with the Rad51 protein. The authors show that as soon as a double-strand break occurs, the RPA protein binds to the exposed strand ends, before the Rad51 protein does. This is not unexpected, because this binding order supports the model that RPA prepares the way for Rad51, perhaps by stabilizing the strand long enough for Rad51 filaments to establish themselves. The surprise was that RPA appears to be necessary even after Rad51 binds to the DNA strand, perhaps by stabilizing the interaction with homologous DNA sequences. That RPA is required for successful repair is supported by evidence that a particular mutated form of RPA can stimulate Rad51 DNA binding normally, but inhibits strand exchange and template copying, thus preventing repair of DNA damage. Wang and Haber's work highlights the complex repertoire of DNA–protein and protein–protein interactions that manage and manipulate the genome in the service of genomic stability. The study of DNA repair mechanisms in living cells—a daunting task—promises to lend valuable insights into the truly dynamic nature of maintaining genome stability. Repair of double-strand breaks involves invasion of the homologous region, displacement, and DNA synthesis to fill in the gap.
Proposed methods for reviewing the outcomes of health research: the impact of funding by the UK's 'Arthritis Research Campaign'

Background External and internal factors are increasingly encouraging research funding bodies to demonstrate the outcomes of their research. Traditional methods of assessing research are still important, but can be merged into broader multi-dimensional categorisations of research benefits. The onus has hitherto been on public sector funding bodies, but in the UK the role of medical charities in funding research is particularly important and the Arthritis Research Campaign, the leading medical charity in its field in the UK, commissioned a study to identify the outcomes from research that it funds. This article describes the methods to be used. Methods A case study approach will enable narratives to be told, illuminating how research funded in the early 1990s was (or was not) translated into practice. Each study will be organised using a common structure, which, with careful selection of cases, should enable cross-case analysis to illustrate the strengths of different modes and categories of research. Three main interdependent methods will be used: documentary and literature review; semi-structured interviews; and bibliometric analysis. The evaluative framework for organising the studies was previously used for assessing the benefits from health services research. Here, it has been specifically amended for a medical charity that funds a wide range of research and is concerned to develop the careers of researchers. It was further refined in three pilot studies. The framework has two main elements. First, a multi-dimensional categorisation of benefits going from the knowledge produced in peer reviewed journal articles through to the health and potential economic gain. The second element is a logic model, which, with various stages, should provide a way of organising the studies.
The stock of knowledge is important: much research, especially basic, will feed into it and influence further research rather than directly lead to health gains. The cross-case analysis will look for factors associated with outcomes. Conclusions The pilots confirmed the applicability of the methods for a full study which should assist the Arthritis Research Campaign to demonstrate the outcomes from its funding, and provide it with evidence to inform its own policies.

Background The growing concern for the benefits from health research to be studied Health research funding bodies are under increasing pressure to demonstrate the outcomes, or benefits, of the research that they fund [ 1 - 6 ]. Traditional peer review of research focussed on the outputs in terms of journal articles, the training of future researchers and the development of careers. These are still seen as important, but in some analyses they have been merged into broader multi-dimensional categorisations of the benefits from health research [ 7 , 8 ]. The onus hitherto has been on public sector funding bodies. There is a general recognition in the UK, however, of the importance of the role of the medical charities: they fund approximately one third of UK medical research – a level 'unparalleled elsewhere in the world and nor is it found in other areas of science' [ 9 ]. Therefore, in an era of accountability, public involvement in research issues and growing competition for contributions, some medical charities see the virtue in being able to demonstrate the outcomes of the research they fund. Not all the pressures, however, are external and some funding bodies, including the Wellcome Trust, which is an endowment- and not a collection-based charity, are being pro-active in their attempts to identify and track the outcomes of the research they fund.
One factor relevant for both public sector and charity funding bodies is the recognition that assessing the benefits from their research may assist in identifying research strategies most likely to produce benefits [ 2 , 7 , 10 , 11 ]. Concerns such as those above led a UK medical charity, the Arthritis Research Campaign (ARC), to approach RAND Europe with the idea of conducting an assessment of the long-term outcomes from research that they have funded. The purpose of this paper is to set out the aims of the study and the methods being adopted. In particular, it will show how an existing generic approach to the assessment of benefits from health research [ 7 , 12 ] has been adapted to meet the needs of this specific study. After providing the background to the study, the paper describes the methods to be adopted. These were initially agreed after a consultative phase in which the evaluative framework was refined on the basis of interviews with six key actors who have played various roles within ARC and advice from ARC's Development Committee, which acts as the steering group for this project. They were then confirmed following a pilot stage in which three case studies were conducted; some examples from the pilot studies, which endorsed the feasibility of the proposed approach, will be given to illustrate the account of the methods. The increasing attention on musculoskeletal conditions and the role of the Arthritis Research Campaign (ARC) Attention is being drawn to the increasing scale of the burden of musculoskeletal conditions, and the associated costs, in various ways including through the establishment of the Bone and Joint Decade 2000–2010 and the recent collaborative report with the World Health Organization (WHO) [ 13 ]. At the same time, there is a realisation that the benefits of research in this area are sometimes less immediately apparent than in some other fields. 
For example, two classic studies of the economic benefits from biomedical research [ 14 , 15 ] both highlighted arthritis as an area where research and higher medical care expenditure may have comparatively little impact on mortality. Furthermore, one recent attempt to put a monetary value on the benefits from health research in Australia [ 16 ] adapted a method developed in the USA [ 17 ] and again demonstrates the difficulties of undertaking such analysis in the musculoskeletal field. These observations might suggest that a more careful and wide-reaching assessment of benefits from research is particularly needed in the field of arthritis. ARC is the leading medical charity in this field in the UK and one of the largest collection-based medical charities in the UK. The Research Outputs Database (ROD) records the funding acknowledgements on all UK biomedical papers contained on the citations indices of the Institute for Scientific Information [ 18 ]. Analysis conducted on ROD reveals ARC to be, 'in a dominant position within the UK in the arthritis subfield' [ 19 ]. ARC's funding is associated with more arthritis publications than that from either the Medical Research Council (MRC) or the Wellcome Trust. It spent almost £22 million in the year 2001–2002; its major aim 'is to support the highest quality research into the cause, cure and treatment of arthritis and musculoskeletal diseases' [ 20 ]. It adopts a variety of funding modes for a range of types of research including its support of two research centres. The preliminary interviews, described above, highlighted the importance of the work by Marc Feldmann and Sir Ravinder Maini at one of these, the Kennedy Institute of Rheumatology in London, in developing anti-Tumour Necrosis Factor (anti-TNF) therapy as an effective treatment for rheumatoid arthritis and other autoimmune diseases. The pair won the 2003 Albert Lasker Award for Clinical Medical Research for this discovery. 
Objectives of the evaluation/research questions Within the general climate of an increased emphasis on the outcomes from research, four main objectives were specified for the particular study described in this article:
• Review and document outcomes for ARC research grants
• Illustrate the strengths and weaknesses of different modes of research funding
• Identify factors associated with translation of research, and attempt to develop 'early indicators' of likely successful translation
• Identify 'good news stories' and vignettes of the research process for use by ARC in public engagement and fund raising activities.
Methods Rationale for using a case study approach Traditional methods of peer review have long been favoured by medical research funding bodies for evaluating research, but bibliometric methods have had a variable history. An early move by the National Institutes of Health (NIH) to establish a publications' database was cancelled by the 1980s 'as too expensive for the management information it produced' [ 21 ]. The MRC in the UK reviewed the possibility of making greater use of bibliometric measures to inform peer review of major long term programmes, but the steering group established to oversee the review concluded that, 'bibliometric analysis would not add sufficient value to peer review to be worthwhile routinely and should not be introduced into MRC procedures' [ 22 ]. Nevertheless, there are circumstances where bibliometric analysis can provide research funding bodies with useful information [ 19 ] and, as discussed below, it can be incorporated into broader case studies. On its own, however, it is unlikely to provide much information about the longer-term outcomes from research funding. The Economic and Social Research Council (ESRC) in the UK commissioned a project to identify the impact of their research on non-academic audiences.
It involved tracing the activity of the participating researchers after their projects ended and mapping the networks of researchers and relevant non-academic users and potential beneficiaries [ 23 ]. The study concluded that the preferable way to determine and assess the existence of impacts of socio-economic research on non-academic audiences 'is through detailed, project-by-project qualitative analysis' [ 23 ]. Such an approach probably entails adopting a case study approach, and there is a long history of applying the case study approach to examine the utilisation of research [ 24 ]. Indeed, where the emphasis is on demonstrating the outcomes from health research, a case study approach has mainly been used [ 7 , 25 ] and been recommended for use in future studies [ 4 , 26 , 27 ]. Case studies will enable narratives or stories to be told to illuminate how the research funded in the early 1990s was translated (or not) into practice; each case, therefore, could potentially provide an illustrative example of the outcomes from ARC research. Furthermore, the planned 16 case studies will be based on a variety of modes of ARC research funding and types of research. They will also be organised using a common structure. This should enable cross-case study analysis to demonstrate (via illustrative case studies) the strengths and weaknesses of different modes of funding and categories of research. It should also facilitate the identification of factors associated with the translation of research, perhaps through various phases, into policies, products, and clinical practice that produce a health gain. The evaluation framework described below was developed in a way that incorporates previous experience and knowledge on these issues [ 7 , 12 , 24 , 26 , 28 ]. This should ensure that questions are asked about a range of factors that previous experience suggests are likely to be related to the translation of research. 
Additionally, because the evaluation framework includes a multi-dimensional categorisation of benefits from research, the full range of outputs and outcomes relevant to different types of research, and modes of funding, will be looked for in the studies. The case studies will, in part, be conducted to see if they produce evidence consistent with existing hypotheses about factors linked to the translation of research and the role of different modes of research funding and types of research. But they will also be exploratory and should allow the generation of new hypotheses, particularly ones specifically relevant for research funding from a medical charity. Timescale In deciding the time window to use for selecting case studies, a compromise usually has to be made between the quality of records/likely ability of researchers to recall their activities and the selection of grants whose outputs have had sufficiently long to develop [ 29 ]. The latter point was important in this study because the aim was to move beyond considering traditional outputs and also examine outcomes such as health gains. ARC instituted a new computerised database during the early 1990s and all their grants awarded since 1990 are held on this database. Prior to this, only paper records of unknown completeness were available. As an appropriate compromise between the various factors, we therefore decided to select grants that were awarded between 1990 and 1994. Selection of cases Within a case study approach it is unlikely that the selection of cases will follow a straight-forward sampling logic in which those selected are assumed to be representative of a larger group [ 30 ]. Nevertheless, in adopting a multi-case approach the project aims to ensure not only that the benefits from the full range of modes of funding and types of research can be illustrated, but also that there is scope for some cross-case analysis. The selection of cases will, therefore, be somewhat purposive. 
Case studies based on four modes of funding will be included: institute grant, programme grant, project grant and fellowships. ARC-funded researchers will also be divided into three groups on the basis of their qualifications: basic researchers, clinical researchers, and Allied Health Professionals (AHP) such as physiotherapists. In their classic case study analysis of research utilisation, Yin and Moore [ 24 ] went to considerable lengths to ensure that they were including only studies where it was thought there had been utilisation. We do not propose to go that far, but, given that the idea is to illuminate the outcomes, it is considered desirable to concentrate on studies where it is thought there is a reasonable chance that there will be something to show. When examining the outcomes of research, even a stratified sampling approach is not thought to be sufficient because most impact usually comes from a small number of studies [ 23 ]. As a first step, we shall identify all publications in the relevant period from the principal investigators awarded ARC funds. Then, the researchers will be classified according to the journal impact factors (see below) of the journals in which their articles appear. The aim will be to draw up shortlists of possible researchers to include in the study: those in the top decile and those in the middle of the range, with the final selection made on the basis of advice from ARC's Development Committee. Organisation of data collection For case studies it is appropriate to use multiple sources of evidence converging on the same issues [ 4 ] and adopt a process of triangulation [ 27 , 30 ]. Three main interdependent methods will be used: documentary and literature review; semi-structured interviews with key informants; and bibliometric analysis. They will be applied in a partially overlapping way.
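The shortlisting step described above can be sketched in Python: score each principal investigator by the mean impact factor of the journals carrying their articles, rank them, and take the top decile plus the middle of the range. All names, impact factors and the 10% cut-off below are invented for illustration; the study's actual selection also depends on advice from ARC's Development Committee.

```python
# Hypothetical sketch of decile-based shortlisting by mean journal
# impact factor (JIF). Names and values are invented for illustration.

def mean_jif(jifs):
    # Mean impact factor of the journals carrying a researcher's articles.
    return sum(jifs) / len(jifs)

def shortlist(scores, fraction=0.10):
    # scores: list of (name, mean JIF). Returns (top decile, middle of range).
    ranked = sorted(scores, key=lambda s: s[1], reverse=True)
    k = max(1, round(len(ranked) * fraction))
    mid = len(ranked) // 2
    return ranked[:k], ranked[mid : mid + k]

pis = {
    "PI-01": [9.0, 8.0, 7.0],
    "PI-02": [6.0, 5.0],
    "PI-03": [4.0, 4.5, 3.5],
    "PI-04": [3.0, 2.0],
    "PI-05": [1.5, 1.0],
}
scores = [(name, mean_jif(jifs)) for name, jifs in pis.items()]
top, middle = shortlist(scores)
print(top)     # top decile by mean journal impact factor
print(middle)  # middle of the range
```

Sampling both strata, rather than only the top decile, is what allows the cross-case analysis to compare high-profile and more typical grants.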
Documentary and literature review We will read key project documents including the original research grant proposals, referees' reports and end of project reports. On the basis of the end of year reports from researchers, and the interviews (see below), we will also identify and read the core publications attributed to the research grant and any subsequent publications such as key citing papers, relevant clinical guidelines etc. Semi-structured interviews with key informants There will be about three interviews per case study. They will be based on a semi-structured interview schedule informed by the evaluation framework described below. They will, therefore, explore the origins of the research and the primary outputs such as the publications. In this way the initial list of publications identified as being related to the project will be refined. Furthermore, there will be a full exploration not only of the contribution to research training and career development, but also of any translation of the research findings into product development, policy and practice. In each case study the initial interviews will be with members of the relevant research team. Then snowballing techniques will be used to identify the people who might be able to provide most information about how the research has influenced subsequent research or been translated into product development, policy and practice. Bibliometric analysis Bibliometric approaches can play a useful role in the analysis of the research funded by specific biomedical research-funding bodies [ 19 , 31 ]. In the current analysis, the list of research papers published as a result of the project will first be refined as described above. Following that, bibliometric analysis will be conducted to record various matters including: the full funding acknowledgements; number of authors; citation counts; and comparison of number of citations with the journal impact factor of the publishing journals. 
This analysis will be conducted by a further part of the research team: those responsible for maintaining the ROD described above. Clearance and validation In every case a draft copy of the case study report will be sent to the principal investigator for comment. Such a step is an important part of the validation process and not just a matter of professional courtesy [ 24 ]. Evaluation framework for ARC case studies There are two elements in the evaluation framework adopted to organise the case studies being conducted in the assessment of the outcomes from ARC-funded research. Building on the framework developed by Buxton and Hanney [ 7 , 12 ], the two elements consist of a multi-dimensional categorisation of benefits from health research, and a model of how best to assess them. A logic model such as this helps facilitate assessment rather than pretending to be a precise model of how research utilisation occurs. The framework has been developed in various ways to meet the particular circumstances of ARC-funded research, which is often basic and investigator-led. There are many steps involved in assessing outcomes from research. One of the key advantages in taking a detailed approach, such as that described below, is that it enables the issue of the counter-factual to be addressed. In other words, what would the world have looked like without the specific research being examined? The categories of payback The multi-dimensional categorisation of payback provides the evaluation criteria for the outputs and outcomes from ARC funding. The 5 main categories are:
a) Knowledge production
b) Research targeting, capacity building and absorption
c) Informing policy and product development
d) Health benefits
e) Broader economic benefits.
Each can be considered in turn, with various sub-categories explored and possible measures described. Knowledge production The knowledge produced by research is the first output and is contained in various publications and patent applications.
Any type of publication can be considered, but it is generally thought that peer reviewed articles are the most important and, at least for biomedical research in industrialised countries, it is thought reasonable to assume that the overall output of research publications is fairly represented by peer-reviewed papers in international journals [ 19 ]. In addition to counting the number of publications, their quality and their impact can be assessed in various ways. The quality of knowledge production has traditionally been assessed by peer review, but various other methods can be applied. Papers that are accompanied by an editorial are often seen as being of particular significance. For those studies that are included in a systematic review there are now formal quality assessment techniques [ 32 ], as there are for reviews appearing in an overview [ 28 ]. Citation analysis can be applied to assess the impact the specific article is having within the research community [ 33 , 34 ]. Previous experience suggests that knowledge production will be particularly important for basic research, and certainly, on average, papers in basic research journals tend to be cited more frequently than ones in clinical journals [ 19 , 35 ]. A journal's 'impact factor' is based on the average number of times an article in the journal is cited; it can provide a short-hand version of citation analysis by giving some indication of the importance of the journal in which an article appears. The use of impact factors in analysis of biomedical research has been criticised [ 36 ] but, provided care is taken [ 37 ], it has been shown to be of some value [ 19 ]. Particularly when considering research that might be aimed at potential users outside the research community, it is often desirable to use a range of publication outlets including those journals with the highest readership among the groups at whom the research is targeted. 
In some fields these might well be journals that do not have an impact factor but are, nevertheless, significant as vehicles for dissemination of the knowledge produced [ 38 - 40 ]. Research targeting, capacity building and absorption The better targeting of future research is frequently a key benefit from research, especially from research that is more basic and/or methodologically oriented. An indication of this comes from citation analysis. The enhanced targeting can be of the research conducted both by others and by the original researcher(s). Where follow-on research, especially by members of the original research team, is clearly associated with the original research, it can be useful to obtain information on the source and amount of such funding [ 39 ]. As is developed in the paragraph below, one of the key roles of a medical charity can be to fund research in its field that will help to open up questions/issues that will then attract further funding from the general research funders such as the MRC and the Wellcome Trust. Research training can be provided both as a result of the employment of staff on research projects and programmes, and through explicit funding for research training and career development [ 1 ]. One measure of research training, which may appear crude but has nevertheless been used in previous studies, is the number and level of higher or research degrees resulting, either totally or in part, from the research funding [ 1 , 14 , 39 , 41 ]. The career development of arthritis researchers goes much wider than specific training and is of considerable importance to ARC, which aims to ensure that the pool of researchers in this field is as strong as possible. The reasoning is that this, in turn, should help ensure that arthritis as a topic is able to gain an appropriate share of the research funding available from general medical research funders.
Some of ARC's funding schemes aim explicitly to provide career development, and for other researchers the receipt of a project grant from ARC can be important in advancing their career in research. Interviews can address this. Furthermore, they may also enable us to consider how far career development based on ARC funding helps propel some researchers into positions within the health-care system where they can play a role in ensuring that the later stages of translating research findings into outcomes are achieved. Informing policy and product development Research can be used to inform policymaking in a wide range of circumstances and the key issue is that policymaking involves those in positions of authority making choices that have a special status within the group to which they apply [ 27 ]. Policymaking is interpreted very broadly here and refers not just to national policies of the government, but also includes: policies made by managers at many levels within a health service; policies agreed at national or local level by groups of health-care practitioners in the form of clinical or local guidelines; policies developed by those responsible for training/education/inspection in various forms including training packages, curricula and audit and evaluative criteria [ 3 ]; and policies about media campaigns run by health-care providers. Research by basic scientists is less likely than that by clinical researchers or AHPs to be used to inform policy. Various methods have been proposed for analysing the impact of research on health policymaking, including documentary review and interviews [ 26 , 27 ]. The position of systematic reviews is a little complex. They are themselves a form of research, but inclusion of a study in a systematic review is a form of secondary output and might lead on to further use. At a similar level, although involving very different processes, research can also be used to inform product development [ 38 ].
Informing policies and product development are conceptually similar in that there generally has to be some subsequent adoption of the policy, or product, before the health and economic benefits can accrue [ 7 ]. Health benefits Benefits in terms of health gains might be viewed as the 'real' payback or outcomes from health research. Greater effectiveness of health-care resulting from research-informed drugs or procedures should lead to increased health. Various measures of health gain exist, but for arthritis the emphasis, in most cases, is likely to be on those that assess reduction in pain or disability, and increase in mobility. While the benefits from arthritis research will not generally be measured in terms of life years gained, in some circumstances they might be captured by using Quality Adjusted Life Years (QALYs). This is often seen, in countries such as the UK, as a more appropriate approach than using Disability Adjusted Life Years (DALYs) [ 42 ]. There have been recent attempts to put a monetary valuation on the reduction in mortality and morbidity as a result of health research [ 16 , 43 ], but that is not being proposed for this study. At an overall level, it is possible that figures for the potential population who could benefit from the new drug or procedure could be identified, along with information about the level of benefit that individual patients might receive. If knowledge about adoption levels was then also taken into consideration it might be possible to indicate overall levels of benefit. This category of benefits can be thought of as going wider than health gain, and some aspects can be seen as benefits to the health sector more generally. Cost savings in the provision of health-care may result from research-informed changes in the organisation of services or in the particular therapies delivered. It might be necessary to consider various issues here. 
These include whether potential savings have in practice been realised – either as cash savings or as the release of resources for other valuable uses [ 44 ]. Furthermore, it would be important to check whether costs are not simply being transferred elsewhere. Improvements could also arise in the process of health-care delivery and these could be measured by techniques such as patient satisfaction surveys [ 7 ]. Broader economic benefits A range of benefits can accrue to the national economy from the commercial exploitation of research. These can take the form of employment and profits resulting from the manufacture and sale of drugs and devices [ 45 ]. The national economy could also benefit from exports and/or import substitution [ 46 , 47 ]. Whilst there is a danger of double counting, it is probably also important to adopt a human capital approach and focus on the value of production gained from having a healthy workforce. This can be measured by examining the reduction in days off work. Typically, in a human capital approach, potential future earnings are calculated for people who, as a result of advances in medical research, can continue to contribute to national production [ 14 , 15 , 48 ]. Those who use it, however, share the concerns that such an approach to assessing the benefits from research could have equity implications in that it would seem to favour research relevant for those of working age. This concern might be relevant here, in that many who suffer most from arthritis are retired, but reducing the days off work caused, for example, by low back pain, could be important. The economic burden of low back pain has been identified [ 49 ] and the potential role of research in reducing it was recently highlighted in a wide-ranging discussion of the benefits from medical research in the USA [ 50 ]. Model for assessing the outputs and outcomes The second element of the evaluation framework is the logic model. 
Its various stages are shown on Figure 1 and provide a way of organising the case studies. At least seven stages and two interfaces are identified and although they are presented in a linear form, the reality is much more complicated and there is also considerable feedback [ 7 , 12 ]. Figure 1 Model for Organising the Assessment of the Outcomes of Health Research. Sources: Adapted from previous versions of the Buxton/Hanney model for assessing the payback from health research [12,27]. Stage 0: Topic/issue identification Interface A: Project specification and selection Stage 1: Inputs to research Stage 2: Research processes Stage 3: Primary outputs from research Interface B: Dissemination Stage 4: Secondary outputs – policymaking and product development Stage 5: Adoption by practitioners and public Stage 6: Final outcomes While it is not possible totally to tie the categories of benefits to certain stages of the model, it is possible to identify broad correlations: categories a) and b) (knowledge and research benefits respectively) are together considered to be the primary outputs from research; category c) (informing policy and product development) relates to the secondary outputs; and categories d) and e) (health and broader economic benefits respectively) are the final outcomes. This approach can be incorporated into the analysis of each stage in turn as is set out below, where a few examples, drawn from the pilot studies, are used to illustrate how the framework seemed to be working in practice but could be refined in certain ways. Stage 0: Topic/issue identification The topic or issue identification stage covers the generation of the original ideas for the research. Its nature can vary considerably depending on whether the main driving force is internally generated by the researcher, or externally generated [ 27 ]. 
Most ARC funding falls into the former category: for many researchers the topics will be curiosity-driven and based on examination of the existing stock or pool of knowledge and opinions about where gaps, and/or opportunities, exist and further research could advance understanding. Such factors will also inform more clinical and AHP researchers, but here consideration of clinical needs could also be a factor and might be based on personal experience of treating patients, as became clear in the interview with the principal investigator in one of the case studies. Where research topics are externally generated, the identification of the issue comes from a process of needs assessment that could involve analysis either just within the scientific community or more widely. In the latter case, many groups could be involved. These include not only members of the wider research community and representatives of research funding bodies, but also potential users and beneficiaries of the research drawn from some combination of the wider political, professional, industrial and societal environment. Interface A: Project specification and selection The nature of the activities at Interface A will vary depending on the type of issue identification. Where the topics are externally generated, there are potential difficulties in ensuring both that the research community is actively engaged with the priorities that have been identified and that the project specification meets the needs as identified [ 27 ]. Where the issues are internally generated, the interface involves traditional processes of the researcher developing a detailed proposal and submitting it for peer review. Most of the issues are internal to the scientific world, but there is still a key interface between individual researchers and ARC as the research-funding body. 
Documentary analysis of ARC files provided information in the pilots that sometimes highlighted issues about how far the proposal was subject to changes as a result of the review process. It also proved useful, however, to supplement this with questions in the interviews. Stage 1: Inputs to research It can be important to consider not only the financial inputs, including any beyond the specific ARC funding, but also the experience of the research team and the knowledge base on which they built. Part of the idea behind examining any other funding brought in to support ARC research is again to see how far ARC funding is helping to facilitate the funding of arthritis research by general funders of health research: is ARC funding studies that produce findings that others believe are worth further investigation? The pilot studies confirmed that the complexities of identifying the exact funding streams behind any piece of research were best addressed by using a case study approach involving initial documentary review and following up issues in interviews. The pilots included a case where other funders contributed to what was clearly an ARC project, and therefore little attempt was made to apportion credit for outcomes to any funder other than ARC. In another case, however, the research was part of a stream of ARC-funded work and an effort was made to draw boundaries around what would be appropriate to include in the case study. Stage 2: Research processes Consideration can be given to how appropriate the proposed methods for a study turned out to be, and whether any difficulties were encountered. In some cases it could be relevant to explore how far potential users were involved at this stage. It is possible that difficulties identified at this stage could explain later problems with translation or uptake of the research findings.
Stage 3: Primary outputs from research Knowledge production, as represented by the various types of publications, is a major primary output from the research. Various ways of measuring this were discussed above. The pilots also showed that the interviews used to refine the lists of publications from the specific funding in question could sometimes help to identify where non-conventional sources were being used as outlets for publications. Most of the primary outputs will feed into the stock of knowledge. The research benefits in terms of targeting future research represent either feedbacks to further research conducted by team members, or findings that feed into the stock of knowledge and help target future research of others. An example from one pilot study showed not only how the principal investigator used her project to inform her own further work, but also how she was able to contribute to a much larger collaborative project. Interviews in another study showed that the research had informed considerable further work in industry, but as yet this had not led to any product development. Under the framework being used, it is possible to give that ARC-funded work considerable credit for informing the further research, but record its limited impact at the subsequent stages. Capacity building can also be seen as a primary output. Accounts were given, in pilot study interviews, of the research training and higher degrees that resulted from the research. Interface B: Dissemination Dissemination is usually seen as being somewhat more active than the mere production of academic publications containing the knowledge. There are, however, clear overlaps between some activities. Sometimes it is possible to record not just dissemination activities but also the successful transfer of research findings to potential users in the political, industrial, professional environment and wider society.
Previous analyses of how to increase the implementation of research findings [ 28 ] will help inform the issues being examined in the case studies at the dissemination and later stages. Presentations to potential academic and user groups, and media activities, are major ways of disseminating findings, as is the production of brief summaries of findings targeted at specific user groups. In previous case studies, attention has also focused on the way some researchers conduct study days, or training, based on the approach developed by their research, and these can be highly effective dissemination mechanisms [ 51 ]. The pilots provided an example of the importance of this and, indeed, of the role of individual researchers in networking and disseminating information. Stage 4: Secondary outputs – policymaking and product development As noted above, policymaking and product development activities can result in a wide range of secondary outputs, and various methods are needed to identify research-informed policies. In one case study, a review of a database revealed that one project had been cited in a clinical guideline unbeknown to the research team, whereas in another pilot it took interviews to identify that the research was informing local guidelines and care pathways. The use of the research in systematic reviews was also revealed in various ways in the pilot studies. Where the research seems to have resulted in secondary outputs it is useful to explore the factors that have led to this. In relation to product development, if research findings are incorporated into the process of developing a product, for example a new drug for arthritis, this can be seen as an important secondary output. In the preliminary set of interviews, most people referred to how ARC-funded research had played a key role in the production of anti-TNF therapy for arthritis.
In a pilot study, interviews revealed the extent to which industry's attempts to use one stream of research for product development had not, so far, been successful. Stage 5: Adoption by practitioners and public For the research findings incorporated into secondary outputs to result in final outcomes there usually has to be some behavioural change by practitioners, and/or the public. This may involve take-up of new drugs or procedures as set out in a secondary output such as a guideline from the National Institute for Clinical Excellence (NICE). Sometimes the adoption comes as a direct result of the primary outputs, as when clinicians – often at the cutting edge – decide to implement research findings even prior to the development of clinical guidelines. Either way, it is important to try to establish the adoption or take-up rates and to explore how far the behavioural change can be attributed to the specific research findings, as opposed to other factors such as a more general change in climate of opinion in relation to, for example, the importance of exercise. In one pilot study where interventions based on research filtered into practice, a series of interviews was used to attempt to identify both the precise role of the specific ARC-funded project and possible levels of uptake. The role of the public in responding to informed advice – often research-based – is seen as increasingly important, especially in a field such as arthritis [ 52 ]. Various factors can be explored here. These include the extent to which patient behaviour might change as a result of interactions with health-care providers who promote research-based messages, and how far the public might respond directly to publicity about research findings when they are used, for example, in media campaigns encouraging participation in preventative activities [ 28 ]. Stage 6: Final outcomes The final outcomes are the health and broader economic benefits identified in categories d) and e) above. 
These are increasingly seen as being the ultimate goal of health research funding, but their precise estimation in practice often remains difficult [ 5 - 7 ]. In one pilot study, it was possible to produce audit figures from one area where there is known to have been local implementation of the research findings. Planned analysis and synthesis Each of the 16 cases will be written up as a narrative organised according to a common structure based on the various stages of the logic model. Each study should potentially, therefore, provide illumination as to the processes that could lead to outcomes and illustrations of such outcomes. In addition, the common structure of each case should facilitate some cross-case analysis that will not only look for common factors associated with research that has led to outcomes, but also see how far such outcomes are associated with different modes of funding and types of research. Some of this analysis should be based on the previous findings that are embedded into the evaluation framework: for example, basic research might be expected to produce a reasonable number of knowledge outputs but be less likely than clinical or AHP research to inform policies. Some other aspects of the analysis, however, are likely to be exploratory: detailed analysis of the factors related to the role of medical charity research in contributing to outcomes appears, as yet, not to be well established. Conclusions This paper sets out the aims and methods to be adopted in an innovative study to review the outcomes of the research funded by the Arthritis Research Campaign, one of the leading medical charities in the UK. At a time of growing emphasis on both accountability and evidence-based policy making, it is important for research-funding bodies to be able to show the results of their funding and base their policies on analyses of the processes involved in producing outcomes [ 53 ].
Based on the results of the piloting, a decision was made to go ahead with the full study. Finally, one of the challenges for the future will be to operationalise such analysis in a regular, and therefore less resource-intensive, manner. It is hoped that the study will also shed light on these practical considerations, and do so in a way that will enable a system to be developed that meets the specific needs of the particular research funding body [ 4 ], in this case ARC. Competing interests The work was funded by ARC. Authors' contributions JG led the design of the study, with all authors making contributions. SH drafted the article with contributions from all authors.
512294 | Field assessments in western Kenya link malaria vectors to environmentally disturbed habitats during the dry season | Background Numerous malaria epidemics have occurred in western Kenya, with increasing frequency over the past 20 years. A variety of hypotheses on the etiology of these epidemics have been put forth, with different implications for surveillance and control. We investigated the ecological and socioeconomic factors promoting highland malaria vectors in the dry season after the 2002 epidemic. Methods Investigations were conducted in Kisii District during the dry season. Aquatic habitats were surveyed for presence of malaria vectors. Brick-making pits were further investigated for co-associations of larval densities with emergent vegetation, habitat age, and predator diversity. Indoor spray catches were completed in houses near aquatic habitats. Participatory rural appraisals (PRAs) were conducted with 147 community members. Results The most abundant habitat type containing Anopheles larvae was brick-making pits. Vegetation and habitat age were positively associated with predator diversity, and negatively associated with mosquito density. Indoor spray catches found that houses close to brick-making sites had malaria vectors, whereas those next to swamps did not. PRAs revealed that brick-making has grown rapidly in highland swamps due to a variety of socioeconomic pressures in the region. Conclusion Brick-making, an important economic activity, also generates dry season habitats for malaria vectors in western Kenya. Specifically, functional brick-making pits contain less than 50% as many predator taxa and greater than 50% more mosquito larvae when compared with nearby abandoned brick-making pits. Further evaluations of these disturbed, man-made habitats in the wet season may provide information important for malaria surveillance and control.
| Background The World Health Organization estimates that 300 to 500 million people are diagnosed with malaria annually, causing 1.1 to 2.7 million deaths. Approximately 1 million of these deaths are among children in sub-Saharan Africa, where 90% of all malaria cases occur [ 1 ]. During the 1950s and 1960s, a coordinated world-wide effort succeeded in eliminating malaria transmission within countries with temperate climates, and dramatically reduced malaria transmission in many other countries. Since the collapse of this campaign, malaria has resurged, surpassing pre-campaign infection rates in many places, and entering previously unaffected locations [ 2 ]. The global resurgence of malaria has been attributed to a number of factors: drug-resistant parasites, insecticide-resistant vectors, population shifts, war-damaged infrastructures, altered meteorological conditions, and drastic ecological transformation [ 1 , 2 ]. The first recorded epidemic of malaria in the highlands of western Kenya occurred in 1918/19, with additional epidemics occurring periodically until 1950 [ 3 ]. These initial epidemics were associated with both population movements and progressive construction of roads and a railway through the highlands. With these human activities, new aquatic habitats were created, facilitating a gradual spread of parasites and vector mosquitoes into the highlands from the low-lying hyperendemic-disease areas [ 4 ]. Adoption of an extensive malaria control program [ 5 ] temporarily kept the highlands free from epidemics. Since the 1980s, however, the incidence of highland malaria and frequency of epidemics have been increasing, with severe outbreaks in 1995, 1998/99, [ 6 - 9 ] and most recently in May through July, 2002. Fifteen districts are now considered epidemic-prone by the Kenyan government [ 10 ]. Climate changes have poorly predicted epidemics in the East African highlands [ 11 - 14 ]. Hay et al.
[ 11 ] suggest that intrinsic population dynamics are more likely the cause of vector fluctuations. Investigators reviewing Kericho District tea plantation records [ 9 , 15 , 16 ] found no obvious change in average temperature or rainfall over the previous 20–30 years, stable tribal/ethnic composition in the study area, and a properly maintained health care system. These studies suggest a failure of medications was the most likely cause for the recurrence of malaria epidemics in the highlands. However, due to the retrospective approach, these studies were unable to investigate environmental/ecological changes in the highlands or evaluate the role of malaria vectors, as had been done for earlier epidemics [ 4 ]. Following the 2002 epidemic, this study was conducted to identify the ecological and socioeconomic variables affecting malaria vector densities and distributions. In this paper we connect specific man-made aquatic habitats present during the dry season with malaria vector larvae in western Kenya. Methods Study area The highlands of western Kenya are an ecologically diverse and densely populated region situated east of Lake Victoria. Detailed investigations have been conducted in Mosocho Division, a sub-section of Kisii Central District with a predilection for epidemics and high rates of malaria (Ministry of Health Kisii, personal communication). Kisii District covered 648 km² and contained 491,786 people in 2001 (759 people/km²) [ 17 ]. Mosocho Division covered 97 km² and contained 105,309 people (1,086 people/km²). Agriculture was the primary industry in the area, conducted on family plots which surrounded smaller industries, such as quarrying, and extended to the borders of wetlands. Survey of aquatic habitats in Mosocho Division All sites with standing water within Mosocho Division were evaluated by standard dipping for presence of Anopheles mosquito larvae over the course of two weeks in the dry season (September 2002).
Standing water was identified by canvassing Mosocho Division by vehicle and by foot, with the assistance of local inhabitants in the areas investigated. This technique especially relied on information from inhabitants when locating the more remote habitats, which may therefore be under-represented in the survey. Once identified, habitats were evaluated for presence of mosquito larvae using standard aquatic dippers. Dipping was performed around the perimeter of the habitat, with three dips performed at approximately one meter intervals. No effort was made to determine larval densities due to the patchiness of larval distribution within each habitat. Dipping was not performed more than one meter into the habitats, which may have led to an under representation of less abundant species. Cross-sectional survey of brick-making pits in Kisii District At each of five brick-making sites in Kisii District, three functional brick-making pits and three abandoned pits were evaluated for pit size, larval densities, percent emergent vegetation and predator taxa present in the habitat. (See Figure 1 .) Functional pits were chosen by asking brick makers to identify sites in which bricks were currently being produced that had been in use for at least two weeks. Abandoned pits were chosen near functional pits that had not been used for brick-making during the previous four months according to brick makers. Surface area of the rectangular pits was estimated by multiplying the measurements of the habitat at maximum and minimum lengths. Larval densities were obtained by standard dipping method, averaging the yields of five dips per pit. Percent emergent vegetation was estimated visually. An estimation of predator diversity was obtained by adding the number of Vertebrata orders, Odonata suborders, Hemiptera families, and adult Dytiscidae (Coleoptera) size-categories collected from the pits with 20 cm diameter fine-mesh sieves capable of collecting organisms down to 1 mm in size[ 18 ]. 
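The per-pit measurements described above reduce to three simple calculations: a rectangular surface area (maximum length × minimum length), a mean larval count over five dips, and a predator diversity index summing four taxon counts. A minimal sketch follows; the function names and all field values are hypothetical illustrations, not data from the study.

```python
# Sketch of the per-pit metrics from the Methods text.
# All numeric inputs below are hypothetical, not study data.

def surface_area(max_length_m, min_length_m):
    """Rectangular pit area, estimated as maximum x minimum length (m^2)."""
    return max_length_m * min_length_m

def larval_density(dip_counts):
    """Average larvae per dip over the dips taken in one pit (five here)."""
    return sum(dip_counts) / len(dip_counts)

def predator_diversity(vertebrate_orders, odonata_suborders,
                       hemiptera_families, dytiscid_size_classes):
    """Diversity index: simple sum of the four taxon counts collected."""
    return (vertebrate_orders + odonata_suborders
            + hemiptera_families + dytiscid_size_classes)

print(round(surface_area(4.5, 1.8), 2))   # 8.1 m^2 for a hypothetical pit
print(larval_density([4, 2, 3, 1, 5]))    # 3.0 larvae/dip
print(predator_diversity(1, 2, 3, 2))     # 8 taxa
```

The additive diversity index deliberately mixes taxonomic levels (orders, suborders, families, size classes), so it is a rough richness count rather than a formal diversity measure.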
Predators were surveyed after obtaining larval dips, with one minute spent in each pit. Figure 1 Brick-making site in the Kenyan highlands. Assessment of the shallow pools dug by brick-makers takes place in the foreground as brick makers continue to excavate soil, combine the soil with water, and mould the resulting mud into bricks, which are then left to dry before firing. Spatial data To assess the spatial relationship between aquatic habitats and the presence of malaria vectors inside houses, a series of indoor spray catches (described by [ 19 ]) was performed on 5 September 2002. Nine houses were chosen along a transect from a valley used for brick-making up to a maximum elevation (1920 meters), and then down into the next valley, containing swamps. Houses were chosen for possession of a thatched roof, not having had a fire inside since the previous evening, and permission of the owner. Socioeconomic survey Forty-nine Chiefs and ninety-eight brick makers were interviewed at four locations within Kisii and Gucha Districts to determine the social and economic factors promoting brick-making in the highlands, as well as the historic development of the industry. Standard Participatory Rural Appraisal methodology was used to obtain qualitative data [ 20 ]. Statistical analysis The means of the variables in the cross-sectional survey were compared between the 15 functional and 15 abandoned brick-making pits using independent-samples t-tests (SPSS 11.0). GPS coordinates taken for correlating presence of Anopheles in a house with distance to brick-making pits and swamp were converted into decimal degree coordinates and integrated as map layers within a Geographical Information System (ArcView 8.2). The distances from each house to the nearest brick-making pit and swamp boundary were generated using the distance tool within the ArcView software.
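The paper does not show the ArcView distance computation itself; as a rough stand-in, the house-to-nearest-habitat step can be sketched as a great-circle (haversine) distance over decimal-degree coordinates, taking the minimum over the habitat points. All coordinates below are hypothetical illustrations, not the surveyed locations.

```python
# Sketch of the house-to-nearest-habitat distance step.
# Coordinates are hypothetical; ArcView's distance tool is approximated
# here with a great-circle (haversine) distance.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two decimal-degree points."""
    r = 6_371_000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def distance_to_nearest(house, habitats):
    """Distance from one house to the nearest habitat point."""
    return min(haversine_m(*house, *h) for h in habitats)

house = (-0.6770, 34.7300)                       # hypothetical house
pits = [(-0.6785, 34.7310), (-0.6740, 34.7350)]  # hypothetical pit locations
print(round(distance_to_nearest(house, pits)))   # metres to the nearest pit
```

At the small distances involved here (hundreds of metres), a planar approximation on projected coordinates, as a GIS would use, gives essentially the same answer.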
Spearman's correlation coefficient (SPSS 11.0) was calculated to correlate presence of Anopheles in a house with distance to swamp, distance to brick-making pits, and elevation. Results Survey of aquatic habitats in Mosocho Division A dry-season survey of aquatic habitats in Mosocho Division was conducted to identify potentially important larval habitats of Anopheles mosquitoes. (See Table 1.) A total of 53 standing aquatic habitats were assessed, with 16 (30.2%) containing Anopheles larvae. 37.5% of Anopheles-positive habitats were functional brick-making sites containing only An. gambiae s.l. larvae, and 18.8% were abandoned brick-making habitats containing only An. gambiae s.l. larvae (combined for 56.3%). Of the four quarries positive for Anopheles larvae, three were functional (created in the previous year) and one was abandoned (unused > two years), both with An. gambiae s.l. and An. funestus. A portion of a drainage canal dug near a brick-making site constituted an additional Anopheles-positive habitat, containing An. gambiae s.l. larvae. Of natural habitats sampled, only two contained malaria vectors. One swamp had An. funestus larvae and one tree hole in an ornamental Flamboyant tree ( Delonix regia ) had An. gambiae s.l. larvae; 87.5% of Anopheles-positive habitats were of direct human origin.

Table 1 Dry season survey of aquatic habitats

Habitat       Habitats assessed   With Anopheles larvae   % with Anopheles larvae   % of total Anopheles larval habitats
BMS (F)               6                     6                     100                          37.5
Quarry (F)            4                     3                      75                          18.8
BMS (A)              14                     3                      21.4                        18.8
Quarry (A)           10                     1                      10                           6.3
Tree hole             5                     1                      20                           6.3
Swamp                 6                     1                      16.7                         6.3
Drainage              7                     1                      14.3                         6.3
Fish pond             1                     0                       0                           0
Stream pool           2                     0                       0                           0
Total                53                    16                      30.2                       100.3

BMS (F): Functional brick-making site. BMS (A): Abandoned brick-making site. (The final column sums to 100.3 because of rounding.)

Cross-sectional survey of brick-making pits in Kisii District Mean mosquito larvae densities were higher in functional than abandoned pits for both An. gambiae s.l.
(2.87/dip in functional, 0.91/dip in abandoned, p = 0.002) and Culex spp. (3.77/dip in functional; 1.32/dip in abandoned, p = 0.025). No An. funestus larvae were found in this study. This corresponded with an increase in predator biodiversity found in the abandoned sites (9.07 taxa in abandoned; 5.13 taxa in functional, p = 0.001) and increased percent emergent vegetation (0.27% in functional; 94.27% in abandoned, p < 0.001). (See Table 2.) Functional brick-making pits had an average surface area of 7.8 square meters (standard deviation = 6.38; range = 1.65 to 22.5 square meters).

Table 2 Ecological survey of brick-making pits

Variable                              Use          N    Mean    Std. deviation   t-statistic     p
Percent vegetation                    Functional   15    0.27        0.46          -47.116     <0.001
                                      Abandoned    15   94.27        7.71
Habitat age (months)                  Functional   15    0.62        0.17           -5.734     <0.001
                                      Abandoned    15   24.20       15.92
Predator biodiversity                 Functional   15    5.13        2.36           -3.586      0.001
                                      Abandoned    15    9.07        3.53
Average number of Anopheles/dip       Functional   15    2.87        2.04            3.402      0.002
(5 dips per pit)                      Abandoned    15    0.91        0.90
Average number of Culex/dip           Functional   15    3.77        3.60            2.435      0.025
                                      Abandoned    15    1.32        1.51

Spatial data Spray catches in the nine houses during the dry season yielded a single Anopheles mosquito in each of five houses. Distance from brick-making sites was negatively correlated with presence of Anopheles in houses (p = 0.002, r = -0.868), while distance from swamp was positively correlated with presence of Anopheles mosquitoes (p = 0.018, r = 0.758). Elevation was not correlated with presence of Anopheles in a house (p = 0.829, r = -0.085). Brick-making and socioeconomic survey According to brick-makers, making bricks is predominantly a dry-season activity due to the damage caused by heavy rains to drying bricks. Although variation in technique exists, universally used stages in the brick-making process are: Excavation, Fermentation, Moulding, Drying, and Kilning. During the Moulding stage, water is brought into the excavated clay pits and mixed with soil.
During this stage, which lasts from several days to one month, water is continually supplied to the pit through irrigation systems, ground water, or is brought by bucket from a nearby water source. Once abandoned during the Kilning stage and afterwards, the pit accumulates rain water and ground water, which can be subsequently used as a water source for newly excavated pits. Over years, abandoned brick-making pits degrade until they are continuous with the surrounding swamp. Interviews with Chiefs and brick makers revealed different scales of brick-making operations, from individuals working on their own land to large-scale industries (Figure 1 ) where wetland plots are rented to brick-makers who employ a large number of casual laborers for the mass production of bricks. Brick-making is an increasingly popular means of obtaining income, spreading to new communities when skilled brick makers are hired to work new land for short-term projects. While brick-making has been passed through multiple generations, the large-scale industries have developed steadily over the previous 20 years. A primary socioeconomic factor described by brick-makers that promotes brick-making is the desire to make use of agriculturally unfavorable wetlands as human population densities increase. Nearly all brick makers work in wetlands, leading to progressive deforestation of the highland swamps. Money from brick-making was used principally for paying school fees and malaria medication. According to brick-makers, brick houses are considered to be more prestigious than traditional houses, are associated with lower long-term costs than traditional mud-thatched houses, and are less permeable to mosquitoes. Discussion Our study shows that man-made larval habitats were the predominant (87.5%) source of malaria vectors in Mosocho Division during the dry season. In particular, the continuously disturbed, functional brick-making pits contained high densities of malaria-vectoring mosquitoes. 
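Assuming the SPSS analysis was the standard pooled two-sample t-test (a reasonable reading of the Methods, with n = 15 per group), the t-statistics in Table 2 can be approximately reproduced from the reported means and standard deviations alone:

```python
# Reproducing Table 2's t-statistics from summary statistics,
# assuming a pooled-variance two-sample t-test with n = 15 per group.
import math

def pooled_t(mean1, sd1, mean2, sd2, n=15):
    """Two-sample t-statistic from summary statistics (pooled variance)."""
    sp2 = ((n - 1) * sd1 ** 2 + (n - 1) * sd2 ** 2) / (2 * n - 2)
    return (mean1 - mean2) / math.sqrt(sp2 * (2 / n))

# Functional vs abandoned pits, values taken from Table 2:
print(round(pooled_t(2.87, 2.04, 0.91, 0.90), 2))   # 3.4   (reported 3.402)
print(round(pooled_t(5.13, 2.36, 9.07, 3.53), 2))   # -3.59 (reported -3.586)
print(round(pooled_t(0.27, 0.46, 94.27, 7.71), 2))  # -47.14 (reported -47.116)
```

The small discrepancies from the published values are expected, since the tabulated means and standard deviations are themselves rounded.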
PRAs revealed that small brick-making groups have developed into large-scale industries over the past two decades, and brick-making is now dispersed throughout the highlands in unfarmable wetlands. Because brick-making occurs predominantly in the dry season, it may help maintain vector populations year-round. Additionally, houses closest to brick-making pits had malaria vectors present within them. While the number of mosquitoes captured during a single transect of spray-catches was only one per house, this represents a real, if low, possibility of malaria transmission during the dry season at these locations. Thus brick-making areas may function as refugia for malaria parasites and their vectors over the dry season, facilitating the spread of malaria when habitats become more plentiful in the wet season. In this study we found emergent vegetation to be negatively associated with the presence of malaria vectors in man-made habitats. This may reflect the association of both emergent vegetation and predator diversity with habitat age. Continually disturbed habitats (such as functional brick-making pits) are kept at a low stage of biological succession and possess fewer species of both plants and animals. There may also be a direct effect of vegetation on the trophic dynamics of ground pools. In structurally simple habitats, intraguild predation has been shown to suppress the diversity of important predators [ 21 - 23 ]. Whether due to co-associated variables, direct effects, or some combination, there was a sizable 57% increase in predator taxa in abandoned, vegetated pits and a >50% reduction in Anopheles larval densities. More research into the ecology of small aquatic pools will help clarify the interrelationships of these variables. In ground pools, predators are typically thought to regulate the density of mosquito larvae [ 24 , 25 ].
Predators may exert an effect by consuming larvae or by deterring oviposition into otherwise suitable habitats [ 26 ]. During the dry season, disturbed, man-made habitats (such as functional brick-making pits and quarries) provide a developmental habitat in which Anopheles larvae escape the high degree of predation found in the natural environment. These habitats should be specifically targeted during larval control programs. With substantial socioeconomic motivation for brick-making in the highlands, traditional source reduction (eliminating standing water) is unsustainable, and larvicides should be employed. Further research is needed on the use of these habitats by mosquito larvae during the wet season. The negative correlation between predator diversity and mosquito density suggests that pesticide application may exacerbate epidemics by decreasing predation pressure if used in natural habitats. Pesticide applications can be selectively harmful to larger predators such as dragonflies and large dytiscid beetles, which take months to years to develop [ 27 ]. In contrast, larvicides have a smaller impact on rapidly developing insects such as mosquitoes, which can reach maturity in a week. Rapid recolonization of predator-free habitats by mosquitoes would lead to vector resurgence [ 8 , 28 ]. Thus, targeting larvicide application to disturbed aquatic habitats should lead to better mosquito control than treating all available habitats. The social and economic benefits accompanying disease reduction through vector control were demonstrated in the Zambian copperbelt [ 29 ]. As the population density of the highlands of western Kenya grows, the social and economic costs of malaria are likely to grow as well, unless vector-centered interventions (including environmental management, larvicide application, and vector surveillance systems) are used to confront the disease.
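The group comparisons in Table 2 can be reproduced from the reported summary statistics alone. The sketch below assumes a pooled-variance (Student's) two-sample t-test, which is consistent with the published t-statistics; the function name is illustrative, not from the paper.

```python
from math import sqrt

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t-statistic with pooled variance (equal-variance assumption)."""
    sp2 = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    se = sqrt(sp2 * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se, n1 + n2 - 2  # t-statistic and degrees of freedom

# Predator biodiversity, functional vs. abandoned pits (Table 2):
t_pred, df = pooled_t(5.13, 2.36, 15, 9.07, 3.53, 15)  # close to the reported -3.586

# Average Anopheles larvae per dip, functional vs. abandoned pits:
t_anoph, _ = pooled_t(2.87, 2.04, 15, 0.91, 0.90, 15)  # close to the reported 3.402
```

Small discrepancies from the published t-statistics arise because the tabulated means and standard deviations are themselves rounded.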
Competing interests

None declared.

List of abbreviations

PRA: Participatory Rural Appraisal; Km: Kilometer; BMS: Brick-making site; NGO: Non-governmental organization

Authors' contributions

JCC designed and conducted evaluations of larval habitats and houses, performed statistical analysis, and wrote the initial draft of the manuscript. BDB performed GIS analysis. FXO identified BMS as sources of vector mosquitoes, designed and conducted PRAs, and aided in the production of the manuscript's final draft.

Pre-publication history

The pre-publication history for this paper can be accessed here:
516031 | Expression profiling of serum inducible genes identifies a subset of SRF target genes that are MKL dependent | Background Serum Response Factor (SRF) is a transcription factor that is required for the expression of many genes including immediate early genes, cytoskeletal genes, and muscle-specific genes. SRF is activated in response to extra-cellular signals by its association with a diverse set of co-activators in different cell types. In the case of the ubiquitously expressed immediate early genes, the two sets of SRF binding proteins that regulate its activity are the TCF family of proteins, which includes Elk1, SAP1 and SAP2, and the myocardin-related MKL family of proteins, which includes MKL1 and MKL2 (also known as MAL, MRTF-A and -B, and BSAC). In response to serum or growth factors these two classes of co-activators are activated by different upstream signal transduction pathways. However, it is not clear how they differentially activate SRF target genes. Results To identify the serum-inducible SRF target genes that are specifically dependent on the MKL pathway, we performed microarray experiments using a cell line that expresses dominant negative MKL1. Twenty-eight of 150 serum-inducible genes were found to be MKL-dependent. The promoters of the serum-inducible genes were analyzed for SRF binding sites and other common regulatory elements. Putative SRF binding sites were found at a higher rate than in a mouse promoter database but were only identified in 12% of the serum-inducible promoters analyzed. Additional partial matches to the consensus SRF binding site were found at a higher than expected rate in the MKL-dependent gene promoters. The analysis for other common regulatory elements is discussed. Conclusions These results suggest that a subset of immediate early and SRF target genes are activated by the Rho-MKL pathway.
MKL may also contribute to the induction of other SRF target genes; however, its role is not essential, possibly due to other activation mechanisms such as MAPK phosphorylation of TCFs. | Background Quiescent cells exposed to growth factors respond by expressing a variety of immediate early genes (IEGs) that do not require new protein synthesis for their expression [ 1 ]. Serum or growth factor induced expression of many of these immediate early genes, such as c-fos, egr1, cyr61 and pip92, is dependent on a sequence element in their promoter termed the Serum Response Element (SRE). This sequence element contains an A/T rich core flanked by an inverted repeat and is also known as the CArG box (CC(A/T)6GG). The CArG box is specifically bound by Serum Response Factor (SRF) [ 2 - 4 ]. Both the SRE and SRF are required for the serum inducibility of these genes, since microinjection of SRE oligonucleotides or anti-SRF antibodies blocked induction in NIH3T3 cells [ 5 ]. In addition, mutation of the SRE blocked serum induction of reporter genes containing immediate early gene promoters, and SRF null ES cells were defective for immediate early gene activation [ 6 , 7 ]. Although the immediate early genes are so named because of their rapid inducibility after growth factor treatment, different kinetics of expression have been observed among the immediate early genes. Expression of the proto-oncogene c-fos peaks at around 30 minutes after stimulation, whereas peak expression of SRF mRNA occurs after 90–120 minutes [ 8 , 9 ]. Thus SRF has been characterized as a "delayed" IEG, although its expression is still independent of new protein synthesis. Activation of SRF by growth factors occurs through at least two mechanisms – the TCF and RhoA pathways [ 10 , 11 ]. Serum or growth factor induction leads to the phosphorylation of p62TCF by MAP kinases. TCF is a ternary complex factor that binds to both SRF and flanking sequences of the SRE.
TCF binding to the SRE requires the prior binding of SRF as well as an adjacent TCF binding site. TCF is encoded by three ets-related genes, Elk1, SAP1 and SAP2/Net [ 12 ]. An additional pathway that activates SRF is through activation of the small GTPase RhoA [ 11 ]. Activated RhoA induces the expression of SRE reporter genes while inhibition of RhoA blocks serum induction. RhoA also causes the formation of stress fibers and the use of actin filament inhibitors and actin mutants suggests that actin treadmilling can control SRE activation [ 13 , 14 ]. The RhoA effectors mDia and ROCK appear to be involved in regulating both actin treadmilling and SRF activation [ 15 , 16 ]. This has led to a model whereby free G-actin inhibits SRF activation and this inhibition is relieved when G-actin levels are depleted by their polymerization into actin filaments. However, mutants of RhoA have been identified that are defective for SRF activation but still cause the formation of stress fibers and vice versa, suggesting that a pathway exists for RhoA activation of SRF independent of stress fiber formation [ 17 , 18 ]. The use of pathway specific inhibitors has suggested the presence of two types of SRF target genes – those that are largely dependent on the Rho-actin pathway and those that are largely dependent on the MAP kinase pathway [ 19 ]. Use of reporter genes suggests that some promoters may be activated by both pathways, but that activation by the Rho pathway is only apparent after mutation of the TCF pathway [ 20 ]. We and others have recently identified a family of SRF-specific transcriptional co-activators – the MKL family, comprised of MKL1 and MKL2, that function downstream of the RhoA pathway [ 21 - 26 ]. MKL1 and 2 are widely expressed genes related to the heart and smooth muscle-specific SRF co-activator myocardin [ 21 , 22 , 24 - 29 ], thus making them candidate proteins in the signal transduction pathway of immediate early genes. 
Indeed, experiments using dominant negative proteins and RNA interference have shown that they are required for the induction of the immediate early genes srf, vinculin and c-fos [ 21 , 23 ]. Serum induction of SRF and vinculin was dependent on MKL1 at both the early and late time points of induction, while serum induction of c-fos showed a modest MKL-dependence at the later time points and was largely independent of MKL at the early time points (30 minutes) [ 21 ]. Consistent with this, Gineitis et al. identified c-fos as a gene whose serum induction is largely dependent on the MAPK-TCF pathway [ 19 ]. Overexpression of MKL1 has different effects on specific SRF target gene promoters, as some were strongly activated (e.g. smooth muscle α-actin, SM22) while others were poorly induced (e.g. c-fos, egr-1) [ 21 ]. The lower activation of the c-fos promoter is not due to the c-fos SRE per se, since reporter genes containing the c-fos SRE on a minimal promoter were strongly activated by MKL1 [ 21 , 27 ]. Chromatin immunoprecipitation experiments have shown that MKL1 binds to the promoters of the cyr61, srf and vinculin genes but not egr-1 or c-fos after swinholide treatment, which leads to the activation of the Rho-actin pathway [ 23 ]. Moreover, SAP-1 (a TCF factor) bound to the egr-1 promoter but not to the srf, vinculin or cyr61 promoters, suggesting that TCF and MKL1 binding to target promoters might be mutually exclusive. Indeed, in vitro gel mobility shift assays suggest that MKL1 and Elk-1 bind to the same region of SRF [ 23 ]. Nevertheless, it is still unclear how MKL family factors distinguish SRF target genes. It is possible that there are specific CArG box sequences, that TCF sites adjacent to SREs inhibit activation, or that other flanking elements control sensitivity to MKL activation.
As a first step towards a better understanding of this differential target gene selection and to demonstrate the importance of the MKL coactivators in immediate early gene induction, we sought to identify the group of SRF target genes that were dependent on the RhoA-MKL pathway. We performed microarray analysis using cell lines expressing dominant negative MKL1 (DN-MKL1) to identify MKL-dependent and -independent serum-induced SRF target genes. The DN-MKL1 protein is effective in blocking both MKL1 and 2 activation of SREs, thus solving the problem of redundancy of these proteins in the cell [ 21 , 24 ](unpublished data). As a control, DN-MKL1 did not affect TCF-pathway induction of reporter genes [ 21 ]. Hence, cell lines expressing DN-MKL1 can be used to elucidate the target genes of SRF that require MKL1/2 for their activation. We have further searched the target promoters for common regulatory elements, in particular for perfect or variant CArG box sequences to identify promoters that are more likely to be direct targets of SRF-MKL activity. Results Dominant negative MKL1 inhibits serum induction of SRF target genes We have utilized a C-terminal deletion mutant of MKL1 (a.a. 1-630) as a dominant negative mutant. We previously found that this mutant can prevent activation of SRE reporter genes by MKL1 and MKL2 overexpression [ 21 ](unpublished data). This DN-MKL1 mutant can bind to SRF but lacks a transcriptional activation domain. Expression of DN-MKL1 reduced serum induction of a c-fos reporter gene that lacks a TCF site and strongly blocked RhoA activation of this reporter [ 21 ]. We generated a cell line stably expressing DN-MKL1 in NIH3T3 cells in order to more easily look at its effect on endogenous gene expression. We found a strong effect on serum-induced expression of the known SRF target genes vinculin and the SRF gene itself [ 21 ]. 
However, binding of dominant negative MKL1 to SRF could titrate other SRF complexing proteins such as the TCF factors Elk1 and SAP-1/2, since they all bind to the MADS box DNA binding domain of SRF [ 23 ]. Thus there is a possibility that DN-MKL1 could block the serum-induced expression of target genes by indirectly blocking the TCF pathway. We previously found that dominant negative MKL1 does not significantly inhibit tetradecanoyl phorbol acetate (TPA) activation of SRE reporter genes, which functions through MAPK phosphorylation of TCF [ 12 , 21 ]. We sought to further confirm this for endogenous gene targets. Serum-starved cells were induced with serum or TPA for 60 minutes, and the mRNA from these cells was assayed for the expression of the endogenous target genes c-fos and junB by real-time PCR (Fig. 1 ). The DN-MKL1 cell line showed a modest decrease in serum-induced expression of c-fos, consistent with our previous RNase protection measurements [ 21 ]. A larger decrease was observed for serum induction of junB; however, there was no significant change in TPA-induced expression of either c-fos or junB in the DN-MKL1 cell line (Fig. 1 ). Hence, DN-MKL1 does not block the TCF pathway and effectively distinguishes the MKL and TCF pathways. Figure 1. Effect of serum and TPA on gene expression in WT and DN-MKL1 cell lines. Serum-starved cells were treated with newborn calf serum (20%) or TPA (100 ng/ml) for 1 hour and relative mRNA levels for c-fos and junB were measured by quantitative real-time PCR using the SYBR green method. Data are represented as the relative fold activation ± the standard deviation of the induced cells compared to the serum-starved WT cells.
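The "relative fold activation" values in Figure 1 come from SYBR-green real-time PCR, but the quantification scheme is not spelled out in this excerpt. A widely used approach for such data is the 2^-ddCt method, sketched below under that assumption; all Ct values in the example are invented for illustration.

```python
def fold_activation(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative fold change by the 2^-ddCt method (assumed; not stated in the paper).

    The target gene Ct is normalized to a reference gene within each sample,
    then the induced sample is expressed relative to the control
    (serum-starved WT) sample.
    """
    d_ct = ct_target - ct_ref                 # normalize induced sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # normalize control sample
    return 2 ** -(d_ct - d_ct_ctrl)

# Invented Ct values: the target amplifies 4 cycles earlier after induction,
# against an unchanged reference gene, i.e. a 16-fold induction.
fold = fold_activation(20.0, 18.0, 24.0, 18.0)
```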
Microarray analysis identifies MKL dependent genes with varying kinetics of expression In order to more fully characterize the temporal program of serum induced gene expression and the role of MKL, we performed microarray analysis on wt and DN-MKL1 NIH3T3 cells that had been serum-starved and induced with serum for 0, 30, 60 or 120 minutes. These times should distinguish early vs. 'delayed' immediate early genes. Mouse Affymetrix oligonucleotide arrays (MOE430A) containing 14,824 non-redundant probes were used for hybridization. All data points were done in triplicate and hierarchical clustering was performed using dChip software analysis. We found 229 genes that showed a significant variation across the 8 samples and used the results with these genes for unsupervised clustering [ 30 ]. The hierarchical clustering analysis shows classes of differentially regulated genes which we have designated classes 1 to 10 (Fig. 2 ). We were particularly interested in serum-induced genes. However, one class was constitutively repressed in DN-MKL1 cells (#1) and another was constitutively activated (#5). Further, one class was repressed by serum induction (#4) though there was no significant effect of DN-MKL1. This leaves seven classes of genes that are serum-induced with different kinetics and dependency upon MKL1. Classes 2, 3 and 8 were induced early at the 30 minute time point while classes 6 and 7 had maximal induction at 120 minutes. Classes 9 and 10 had peak induction at the intermediate time point of 60 minutes. Serum induced expression of classes 2, 7 and 9 as well as many of the genes in class 8 were particularly MKL-dependent while the other serum-induced genes (classes 3, 6 and 10) were predominantly MKL-independent. The positions of known SRF target genes that we previously studied are indicated (Fig. 2 ). The vinculin and srf genes fall into the MKL-dependent classes 7 and 9 consistent with our previous results [ 21 ]. 
Serum-induced expression of the c-fos gene was predominantly independent of MKL1. Figure 2. Hierarchical clustering of gene expression. RNA from WT and DN-MKL1-expressing cells induced with serum for the indicated times was used to probe an Affymetrix mouse chip with 14,824 non-redundant probes. dChip software was used to identify genes with significant variation across samples. 229 such genes were used for clustering analysis. Different classes of similarly regulated genes are indicated to the right and discussed in the text. The respective positions where the previously characterized MKL target genes srf, vinculin and c-fos fall are indicated by the arrows. Expression scales ranging from -3 (blue) to +3 (red) fold are indicated at the bottom of the figure. To analyze the serum-inducible genes more carefully, we further dissected the data using more stringent criteria to identify MKL-dependent genes at each time point. We filtered for genes that were serum-induced ≥2-fold in WT cells and whose expression in DN-MKL1 cells was at least 35% less than in WT cells (Tables 1, 2 and 3). We picked points that fell within the 90% confidence intervals for both fold serum induction and decrease in DN-MKL1 cells. At the 30 minute time point, 12 out of the 52 serum-inducible genes were MKL-dependent. Similarly, 6 out of 59 and 18 out of 123 genes were MKL-dependent at the 60 and 120 minute time points, respectively. The genes previously shown to contain SRF binding sites are indicated. Consolidating these results to eliminate multiple listings of genes induced at different time points, we have identified 28 out of 150 serum-inducible genes as MKL-dependent (Table 4 ). A list of the 122 serum-inducible, MKL-independent genes is shown in the supplemental table. Table 1. List of genes induced by serum at 30 minutes in WT cells and reduced in DN-MKL1 cells.
Gene | Locus Link | Fold change | Lower CI | Upper CI | % decrease in DN | Known SRE
1. similar to Hs Mitogen inducible gene 6 | 74155 | 18.67 | 12.81 | 26.3 | 54 |
2. coagulation factor III | 14066 | 8.05 | 5.44 | 14.29 | 47 |
3. retinoblastoma inhibiting gene 1 | 19649 | 6.45 | 4.97 | 8.90 | 63 |
4. tropomyosin 1, alpha | 22003 | 5.92 | 4.18 | 8.55 | 63 | YES
5. pleckstrin homology-like domain | 21664 | 5.82 | 3.67 | 11.48 | 65 |
6. epiregulin | 13874 | 5.77 | 3.15 | 9.27 | 71 |
7. leukemia inhibitory factor | 16878 | 4.84 | 2.79 | 7.65 | 57 |
8. serum response factor | 20807 | 4.7 | 3.25 | 6.40 | 60 | YES
9. tribbles homolog 1 | 211770 | 3.87 | 2.81 | 5.10 | 50 |
10. aortic alpha actin-2 | 68377 | 3.77 | 2.94 | 5.06 | 52 | YES
11. fos-like antigen 1 | 14283 | 3.02 | 2.27 | 3.89 | 44 | YES
12. CDC42 effector protein 3 | 260409 | 2.47 | 2.19 | 2.81 | 46 |

Table 2. List of genes induced by serum at 60 minutes in WT cells and reduced in DN-MKL1 cells.

Gene | Locus Link | Fold change | Lower CI | Upper CI | % decrease in DN | Known SRE
1. serum response factor | 20807 | 6.26 | 4.66 | 8.20 | 59 | YES
2. enigma homolog | 56376 | 4.75 | 3.63 | 6.28 | 61 |
3. adrenomedullin | 11535 | 4.15 | 3.28 | 5.17 | 57 |
4. retinoblastoma inhibiting gene 1 | 19649 | 3.92 | 2.68 | 5.75 | 50 |
5. CDC42 effector protein 3 | 260409 | 3.18 | 2.40 | 4.38 | 48 |
6. adenosine A2b receptor | 11541 | 2.56 | 2.00 | 3.24 | 40 |

Table 3. List of genes induced by serum at 120 minutes in WT cells and reduced in DN-MKL1 cells.
Gene | Locus Link | Fold change | Lower CI | Upper CI | % decrease in DN | Known SRE
1. leukemia inhibitory factor | 16878 | 32.79 | 16.31 | 54.48 | 82 |
2. epiregulin | 13874 | 21.48 | 13.61 | 32.57 | 54 |
3. interleukin 6 | 16193 | 7.84 | 4.11 | 12.98 | 90 |
4. inhibitor of DNA binding 3 | 15903 | 7.34 | 5.86 | 9.45 | 51 |
5. snail homolog 1 | 20613 | 6.14 | 4.17 | 9.00 | 55 |
6. serum response factor | 20807 | 5.83 | 4.39 | 7.61 | 53 | YES
7. hexokinase 2 | 15277 | 5.43 | 3.52 | 7.51 | 52 |
8. TGFB inducible early growth response | 21847 | 5.39 | 3.88 | 7.42 | 53 |
9. enigma homolog | 56376 | 5.37 | 4.26 | 6.93 | 57 |
10. Jun-B oncogene | 16477 | 5.25 | 3.12 | 7.93 | 61 | YES
11. vinculin | 22330 | 4.54 | 3.32 | 6.10 | 52 | YES
12. expressed sequence AA939927 | 99526 | 4.38 | 3.35 | 5.79 | 76 |
13. B-cell translocation gene 2, anti-proliferative | 12227 | 3.58 | 2.34 | 6.09 | 63 |
14. adrenomedullin | 11535 | 3.30 | 2.79 | 3.91 | 47 |
15. methionine adenosyltransferase II, alpha | 232087 | 3.28 | 2.70 | 3.95 | 43 |
16. ELL-related RNA polymerase II | 192657 | 3.15 | 2.34 | 4.56 | 39 |
17. zyxin | 22793 | 2.70 | 2.09 | 3.37 | 46 |
18. transmembrane 4 superfamily member 10 homolog | 109160 | 2.32 | 2.11 | 2.56 | 39 |

Table 4. List of genes induced by serum at 30, 60 or 120 minutes whose induction is MKL-dependent or -independent.
MKL-dependent genes (28)

Known SRF target genes: 1. Jun-B; 2. serum response factor; 3. fos-like antigen 1; 4. tropomyosin 1, alpha; 5. vinculin

Other genes: 6. adrenomedullin; 7. B-cell translocation gene 2; 8. CDC42 effector protein 3; 9. enigma homolog; 10. epiregulin; 11. hexokinase 2; 12. inhibitor of DNA binding 3; 13. interleukin 6; 14. pleckstrin homology-like domain, family A; 15. similar to Hs Mig6; 16. snail homolog 1; 17. zyxin; 18. adenosine A2b receptor; 19. coagulation factor III; 20. leukemia inhibitory factor; 21. retinoblastoma inhibiting gene 1; 22. TGFβ inducible early growth response; 23. tribbles homolog 1; 24. aortic alpha actin-2; 25. expressed sequence AA939927; 26. transmembrane 4 superfamily member 10 homolog; 27. methionine adenosyltransferase II, alpha; 28. ELL-related RNA polymerase II, elongation factor

MKL-independent genes (122)

Known SRF target genes: 1. cysteine rich protein 61 (Cyr61); 2. thrombospondin 1; 3. FBJ osteosarcoma oncogene (c-fos); 4. FBJ osteosarcoma oncogene B (FosB); 5. cysteine rich protein 1 (Crp1); 6. prostaglandin-endoperoxide synthase 2; 7. early growth response 1 (Egr1); 8. early growth response 2 (Egr2)

Correlation of microarray and real-time PCR data

To verify the results obtained from the microarray experiments, we checked seven genes by real-time quantitative PCR using the SYBR Green method. There was a good, but not perfect, correlation between these methods (Fig. 3 ). Four MKL-dependent genes, junB, srf, interleukin 6 and epiregulin, were also found to be MKL-dependent by real-time PCR. Similarly, two MKL-independent genes, cyr61 and egr-1, were also MKL-independent by real-time PCR. Finally, c-fos expression was only moderately reduced in DN-MKL1 cells whether measured by microarray or real-time PCR (Fig. 3G ). This inhibition was not high enough to meet the cut-off of 35% reduction for our list of MKL-dependent genes (Fig. 4 ). This result contrasts somewhat with our measurements of c-fos mRNA by RNase protection.
We previously found that c-fos mRNA was significantly reduced in DN-MKL1 cells at the 60 minute time point, though not at the 30 minute point. Since there was inhibition noted by each method, this difference may reflect the sensitivity of each method to detect precise changes in expression. There were also some differences between the microarray and real-time PCR results, such as inhibition of junB expression at the 30 minute point by the PCR method but not by microarray (Fig. 3A ) and the greater serum induction of egr-1 mRNA measured by real-time PCR (Fig. 3F ). Nevertheless, the general levels of induction and DN-MKL1 inhibition were similar by both methods. This is reflected in the Pearson correlation coefficient of 0.92 for a comparison of all of the data obtained by each method shown in Fig. 3. The correlation coefficient ranged from 0.85 to 0.99 for the data for any one gene. Figure 3. Correlation of microarray and real-time PCR data. The expression pattern of select serum-induced MKL-dependent and -independent genes was determined by quantitative real-time PCR (right) and compared to the microarray results (left) for the indicated genes. WT or DN-MKL1 cells were induced with serum for the indicated times before isolation of RNA. The results derived from the microarray hybridizations are the averages of triplicates, while the real-time PCR measurements are the averages of at least duplicates ± the standard deviation. Figure 4. Known CArG boxes in serum-inducible genes. The upper panel lists the positions and sequences of the known CArG boxes of the MKL-dependent or -independent genes. The bases that differ from the CArG box consensus sequence are in bold. The bottom panel shows the multilevel consensus sequence that was derived from each of these groups of CArG boxes.
Below the consensus sequence is the simplified position-specific probability matrix that specifies the probability of each possible base appearing at each position in an occurrence of the motif, multiplied by 10. 'a' denotes a probability that is almost or equal to 1. The consensus sequence is the best match to the CArG boxes oriented on either strand.

Promoter analysis of MKL-dependent and -independent serum-inducible genes

We compared the serum-inducible genes to known SRF target genes to identify those genes with SRF binding sites. Comparing with a list of SRF target genes [ 31 ] (J.M. Miano, personal communication), we found that five of 28 MKL-dependent genes had known CArG box sites while eight of 122 MKL-independent genes had known sites (Table 4 ). We further analyzed the promoters of the MKL-dependent genes to identify SRF binding sites that would potentially be direct targets of MKL. Promoter sequences for the serum-inducible MKL-dependent or -independent genes were extracted from the Database of Transcription Start Sites (DBTSS) [ 32 ]. This database contains exact information on the genomic positions of the transcriptional start sites (based on full-length cDNAs) and the adjacent promoters for 6,875 mouse genes. The upstream sequence (-1000 to +200 relative to the transcription start site) of each of the candidate genes was searched for the consensus SRF binding site (CCWWWWWWGG, where W is A or T) with or without one base mismatch allowed. Of the 28 MKL-dependent genes, upstream sequences were available for 20 genes in the DBTSS database, while sequence was extracted for 75 of the 122 MKL-independent genes. There were 17 exact matches to the CArG box in these 95 serum-inducible promoters (18%) (Table 5A ). Given multiple matches in some genes, this resulted in 11 genes (12%) having exact matches.
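The consensus scan just described (CCWWWWWWGG over the -1000 to +200 window, with or without one base mismatch allowed) can be sketched in a few lines. The sketch below also checks the reverse strand, since a CArG box can function in either orientation; function names are illustrative, not from the paper.

```python
CONSENSUS = "CCWWWWWWGG"  # W = A or T
ALLOWED = {"A": {"A"}, "C": {"C"}, "G": {"G"}, "T": {"T"}, "W": {"A", "T"}}
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def mismatches(window):
    """Number of positions where a 10-mer deviates from the CArG consensus."""
    return sum(base not in ALLOWED[sym] for sym, base in zip(CONSENSUS, window))

def carg_hits(seq, max_mismatch=0):
    """Return (strand, 0-based offset) pairs matching the consensus on either strand."""
    hits = []
    reverse = seq.translate(COMPLEMENT)[::-1]
    for strand, sequence in (("+", seq), ("-", reverse)):
        for i in range(len(sequence) - len(CONSENSUS) + 1):
            if mismatches(sequence[i:i + len(CONSENSUS)]) <= max_mismatch:
                hits.append((strand, i))
    return hits

# The c-fos SRE core CCATATTAGG is an exact match; its reverse complement
# also matches because the consensus is quasi-palindromic.
hits = carg_hits("GGATGTCCATATTAGGACATCTG")
```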
These frequencies were similar in the MKL-independent and -dependent classes of serum-inducible genes but significantly higher than the frequencies found in searching the entire DBTSS collection of promoters, as indicated by the p values in Table 5A. The p value for the MKL-dependent promoters is high because of the small sample size; however, the frequency of exact CArG box matches is similar to that of the other serum-inducible genes.

Table 5. Search of MKL-dependent and -independent serum-inducible promoters for SRF binding sites.

A. Exact matches to the CArG box

Group | Genes in microarray | Genes in DBTSS | Matches | Freq. | p | Genes with matches | Freq. | p | Genes with >1 match | Freq. | p
Total genes | 14824 | 6875 | 419 | 0.061 | - | 401 | 0.058 | - | 15 | 0.002 | -
All serum-inducible | 150 | 95 | 17 | 0.179 | 0.0001 | 11 | 0.116 | 0.0218 | 3 | 0.032 | 0.0010
MKL-indep, serum-inducible | 122 | 75 | 14 | 0.187 | 0.0002 | 9 | 0.120 | 0.0293 | 2 | 0.027 | 0.0101
MKL-dep, serum-inducible | 28 | 20 | 3 | 0.150 | 0.1193 | 2 | 0.100 | 0.3245 | 1 | 0.050 | 0.0392

B. Matches to the CArG box allowing one base mismatch

Group | Genes in microarray | Genes in DBTSS | Matches | Freq. | p | Genes with matches | Freq. | p | Genes with >1 match | Freq. | p
Total genes | 14824 | 6875 | 6707 | 0.98 | - | 4230 | 0.615 | - | 1744 | 0.254 | -
All serum-inducible | 150 | 95 | 103 | 1.08 | 0.155 | 63 | 0.663 | 0.1958 | 25 | 0.263 | 0.4576
MKL-indep, serum-inducible | 122 | 75 | 72 | 0.96 | 0.839 | 47 | 0.627 | 0.4682 | 15 | 0.200 | 0.8886
MKL-dep, serum-inducible | 28 | 20 | 31 | 1.55 | 0.005 | 16 | 0.800 | 0.0666 | 10 | 0.500 | 0.0155

The promoters from the indicated groups were searched for exact matches to the CArG box sequence CC(A/T)6GG (A) or allowing one base mismatch (B). Frequency indicates the number of matches divided by the number of DBTSS genes in each category. The p values were calculated based on a binomial distribution, except for the value for matches in (B) (column 6 from the left), where the numbers were too large and the Poisson distribution of the frequency of matches per base was used.

We also searched for promoters with more than one CArG box match since multiple SREs have been reported in several SRF target genes and this could provide a mechanism to distinguish responses.
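The Table 5 legend states that the enrichment p values follow a binomial model; the tail probability can be computed directly. The sketch below reproduces the test for exact matches among all serum-inducible promoters (11 of 95 genes with matches, against a background of 401 matching promoters among the 6,875 in DBTSS); the function name is illustrative.

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more matching promoters."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# 11 of 95 serum-inducible promoters had an exact CArG match; the DBTSS
# background rate is 401 matching promoters out of 6,875.
p_value = binom_tail(11, 95, 401 / 6875)  # close to the 0.0218 reported in Table 5A
```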
While there were more genes with greater than one CArG box match in the serum-inducible genes than in the DBTSS promoters, the numbers of MKL-independent vs. -dependent genes with greater than one match were too low to show statistical significance (Table 5A ). The search for SRF binding sites allowing a one-base mismatch was more problematic since there was a high rate of matches in the DBTSS data set, an average of 0.98 hits per gene (Table 5B ). There was not a significant increase in this rate in the MKL-independent promoters; however, there was an increase in the MKL-dependent promoters to 1.55 hits per gene, which was statistically significant (p = 0.005; Table 5B ). This difference in the MKL-dependent set was also apparent when considering the increased frequency of genes with matches and of genes with greater than one match (Table 5B ). The increased frequency of the exact CArG box sequence in the upstream region of many of the MKL-dependent genes gives further confidence that SRF coordinately regulates many immediate early genes. However, the absence of a CArG box does not preclude a gene from being a direct target, due to the flexibility of enhancer positioning. The regulatory site may be upstream or downstream of the 1200 bp we have analyzed. For instance, in the case of the jun B gene, it has been shown that a CArG box downstream of the gene mediates its response to serum [ 33 ]. In addition, not all SRF target sites are perfect CArG boxes. Several of the known SRF target sites contain single base changes (Fig. 4 ). While our search allowing a single mismatch resulted in an overly high frequency of matches, there was a significantly higher rate of matches in the MKL-dependent promoters, suggesting that many of these sites may be used as direct SRF targets. We compared the CArG boxes more closely since previous work with altered SREs suggested that changes in the A/T core of the SRE can affect sensitivity to the TCF-independent (i.e.
RhoA-MKL) pathway [ 34 ]. The known CArG boxes were aligned from each class of genes, either MKL-dependent or -independent (Fig. 4 ). We chose not to analyze the predicted CArG boxes since there were few exact matches in the MKL-dependent promoters and because there was too high a rate of matches allowing one mismatch, such that many of the matches are likely false positives. The MEME motif discovery tool (see Methods) was used to arrive at a multilevel consensus sequence based on a position-specific probability matrix. The matrix specifies the probability of each possible letter appearing at each position in an occurrence of the motif. MEME also takes into account both orientations of the sequence to arrive at the consensus sequence. The consensus sequences for the two sets are quite similar, although there are some minor differences and the sample size is relatively small. We have compared the sequences of the known SRF target genes to determine whether flanking sequences can explain the differential sensitivity to DN-MKL1. We first compared sequence flanking the known CArG boxes. The MEME motif discovery tool was used to identify common elements in 30 bp flanking each side of the CArG boxes. There were no statistically significant matches in the sequences from MKL-dependent or -independent genes. We might have expected to see a TCF site; however, this site is very short and flexible. TCF binds to a short site near an SRE and its binding is stabilized by binding to SRF. A consensus of (C/A)(C/A)GGA(A/T) was found; however, the orientation and position relative to the SRE were flexible and only the GGA sequence was absolutely required [ 35 ]. Visual inspection of the sequence flanking the known CArG boxes suggested a potential TCF site in all of the sequences except for that of cyr61. We used two methods to compare the full promoter sequences of the MKL-dependent and -independent, serum-inducible genes for common elements that might be required for their induction.
We first used the MEME tool to compare the promoters (-1000 to +200) derived from the DBTSS database. As in Table 4 , 20 MKL-dependent promoter sequences and 75 MKL-independent promoter sequences were extracted. The MEME tool could only compare 50 of these promoters at a time such that 50 MKL-independent promoters and 50 random promoters were compared. While MEME identified some common elements within each of these three classes of promoters, none of the elements identified among the MKL-dependent or -independent promoters appeared to be specific since similar elements were identified in the random promoter set. A second method we used to look for common regulatory elements was oligonucleotide analysis, which has been used in yeast to identify regulatory sites [ 36 ]. This method looks for enriched oligonucleotide frequencies in a group of genes. We used the entire set of 6875 DBTSS mouse promoter sequences to calculate the expected background frequencies of hexamers. The most specific matches we found were in the set of 75 MKL-independent promoters. The related oligonucleotides GGAGGG, CCGGAG and CGGAGA were enriched with significance values (E-value) of 1.4 × 10^-4 to 6.9 × 10^-3 . These oligonucleotides were not enriched in a test case of 60 random promoter sequences. The related sequence GGGAGG, however, was similarly enriched in the MKL-dependent promoters albeit with less statistical significance (E = 0.8). It is intriguing that these oligonucleotides are similar to the TCF consensus binding site (C/A)(C/A)GGA(A/T). The most significantly enriched oligonucleotide in the 20 MKL-dependent promoters was CCGCGC with an E-value of 0.078. This lower significance may partially reflect the smaller size of this data set, but may also indicate weaker enrichment. 
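The oligonucleotide-analysis step amounts to counting k-mer occurrences in a promoter set and asking whether any are enriched over background. A minimal counting sketch with invented sequences (a real analysis would compute expected counts from all 6875 DBTSS promoters and assign E-values):

```python
# Minimal hexamer-counting sketch for oligonucleotide analysis.
# The promoter sequences here are invented; real background frequencies
# would come from the full DBTSS promoter set.
from collections import Counter

def kmer_counts(seqs, k=6):
    """Count every overlapping k-mer across a set of sequences."""
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    return counts

promoters = ["AAGGAGGGTT", "CCGGAGGGAA"]
obs = kmer_counts(promoters)
print(obs["GGAGGG"])  # → 2 (once per sequence)
```

Enrichment is then the observed count of a hexamer relative to its expected count under the background frequencies, with a significance value attached to the excess.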
Given the relatively small number of serum inducible genes and the limits of present computational tools, it appears that more careful experimental mapping of the sequence elements in MKL-dependent and -independent genes will be required to identify the elements that determine their sensitivity to the MKL pathway. Discussion Serum inducible genes have far-ranging effects in many physiological processes such as proliferation, wound healing, migration, and tissue remodeling [ 37 ]. SRF has been implicated in the expression of many serum inducible genes, particularly immediate early transcription factors, but is also required for the expression of many muscle-specific genes [ 31 ]. SRF is activated by different co-activators to control its diverse set of target genes in different tissues [ 27 , 38 - 45 ]. Here, we have elucidated the target genes that are dependent on one particular SRF co-activator family – the MKL family. Since dominant negative MKL1 can block activation by all of the members of the MKL family, MKL1 and 2, as well as myocardin (unpublished data) [ 21 , 24 ], we have used cell lines expressing dominant negative MKL1 to determine the role of the MKL family in serum-induced gene expression patterns. Cluster analysis of the microarray data identified different classes of genes that showed significant variation across the samples. Two classes of genes were either constitutively activated or constitutively repressed. These could be indirect targets of the SRF-MKL pathway or could suggest functions for MKL apart from its role as a transcriptional co-activator of SRF. Among the serum inducible genes, we identified a significant fraction, 28 of 150, that are MKL-dependent. This is not to say that the other 122 genes may not be sensitive to MKL under certain conditions. Rather, they must at least have additional mechanisms for serum induction in NIH3T3 cells. 
c-fos was identified as an MKL-independent gene although we previously found by RNase protection that its induction was significantly reduced in cells induced with serum for 60 minutes [ 24 ]. We did observe a small decrease in c-fos expression in our microarray experiments, consistent with the RNase protection results, however it did not pass our stringent statistical criteria for MKL-dependence. Serum induction of some of the MKL-independent immediate early genes may also be independent of SRF. For example, the c-jun gene utilizes other sequence elements for serum induction [ 46 ]. The TCF factors, elk-1, SAP1 and SAP2, are the main candidates besides MKL factors for mediating serum induction of SRF target genes. Further expression profiling will be required with inhibitors of the SRF and TCF pathways to better characterize the pathways used by the 150 serum inducible genes described here. Many of the serum-inducible genes identified here have been previously described as immediate early genes either as single genes or in microarray experiments [ 37 , 47 , 48 ]. The groups of genes include immediate early transcription factors and cytokines, as previously noted [ 37 ], but also a broad array of other types of genes. SRF null mice die in utero due to the defective formation of a mesodermal layer and SRF null ES cells are defective in focal adhesion and migration [ 49 , 50 ]. We have also observed that the DN-MKL1 cell line is significantly less adherent than the control line (unpublished data). It is therefore interesting that a number of our serum inducible genes are involved in cytoskeletal structure and adhesion. These include vinculin, tropomyosin, zyxin, thrombospondin, tenascin, integrin α5, and transgelin. It will be interesting to determine whether lowered expression of these genes in SRF- or MKL-deficient cells leads to changes in their adhesive properties. Of these proteins, only vinculin is a previously known MKL and SRF target gene [ 19 , 21 , 51 ]. 
Our microarray analysis found that zyxin is also an MKL-dependent gene and zyxin expression was previously found to be decreased in SRF null cells [ 50 ]. Sequences required for MKL-dependent and -independent genes We analyzed the promoter sequences of the MKL-dependent and -independent genes to identify elements that might determine this sensitivity. Exact matches to a simple CArG box consensus, CC(A/T)6GG, resulted in matches in only about 12% of the serum-inducible genes with similar proportions in the MKL-dependent and -independent classes. While some functional SRF binding sites contain a single base mismatch to the consensus, a broader search for CArG boxes allowing a mismatch resulted in a high number of matches in a broad promoter database. There was a modest, statistically significant increase in the matches in the MKL-dependent promoters suggesting that some of these sites are real SRF targets. Other SRF target sites may lie outside of the promoter regions we have searched (-1000 to +200). This still leaves the possibility that induction of many of the immediate early genes is independent of SRF such as we have found for the c-jun gene [ 46 ]. Microarray analysis of serum-inducible genes with inhibition of SRF activity is required to show which genes are SRF dependent. Possible mechanisms for determining sensitivity to MKL are the sequence of the CArG box, flanking sequence or the context of other promoter elements. Our comparison of the known CArG boxes from MKL-dependent and -independent promoters did not show a strong difference. Similarly, we were not able to identify differences in the flanking sequence. Analysis of the full promoter sequences has also not yielded clear common regulatory elements to date. One set of oligonucleotides, containing the sequence GGAG, was found in both the MKL-dependent and -independent promoters at a significantly higher frequency than in the full promoter database. 
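The CArG box search described here, exact matches to CC(A/T)6GG with an optional single mismatch, can be expressed as a simple sliding-window scan. A sketch with invented promoter sequences:

```python
# Sliding-window CArG box scan against the consensus CC(A/T)6GG,
# optionally tolerating one mismatched base. Promoter sequences invented.
CONSENSUS = "CC" + "W" * 6 + "GG"  # W = A or T

def mismatches(window):
    """Number of positions where a 10-mer deviates from the consensus."""
    m = 0
    for base, pat in zip(window, CONSENSUS):
        if pat == "W":
            m += base not in "AT"
        else:
            m += base != pat
    return m

def carg_matches(seq, max_mismatch=0):
    """Start positions of windows within max_mismatch of the consensus."""
    k = len(CONSENSUS)
    return [i for i in range(len(seq) - k + 1)
            if mismatches(seq[i:i + k]) <= max_mismatch]

print(carg_matches("GGGCCATATTTGGAAA"))  # → [3] (exact match)
print(carg_matches("CCGTATTTGG", 1))     # → [0] (one mismatch: the G after CC)
```

A scan over both strands would also test the reverse complement of each window, as the papers' tools do.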
The significance of these sites remains to be determined though it is notable that they are similar to the TCF consensus site. One possible mechanism for distinguishing MKL target genes is the presence of a TCF site next to a CArG box. This would allow for TCF binding and activation by a MAP kinase pathway instead of the MKL pathway. Since the binding sites for TCF and MKL on SRF are similar [ 23 ], TCF binding could preclude MKL activation. In fact, mutation of the TCF site in the c-fos promoter allows it to be activated by a serum induced, actin filament-dependent pathway presumably through RhoA and MKL [ 20 ]. In addition, we found that mutation of the TCF site in the c-fos SRE results in significantly higher activation by MKL1 (Bo Cen and R.P., unpublished results). Nevertheless, c-fos SRE elements with TCF sites are still activated by MKL1 and myocardin [ 21 , 27 ] such that there may be additional elements that determine sensitivity to MKL1 activation. Chromatin immunoprecipitation experiments have shown that MKL1 can bind to the SRF, vinculin and cyr61 promoters in cells treated with the actin inhibitor swinholide A [ 23 ]. This is consistent with our identification of SRF and vinculin as MKL target genes, but we found that induction of cyr61 was MKL-independent. It is possible that this difference is due to our inducing with serum rather than swinholide A, since serum will induce alternative pathways such as the MAP kinase pathway. Chromatin immunoprecipitations showed that the TCF factor SAP1 bound to the egr1 promoter, possibly explaining its independence of MKL, but no binding of SAP1 was observed for two other MKL1-independent genes egr-1 and cyr61 [ 23 ]. Thus, TCF binding may not always explain the lack of requirement for MKL factors. Conclusions Our results indicate that a subset of serum inducible genes is dependent upon the MKL family for its induction. 
This genomic classification of MKL-dependent and -independent serum-inducible genes is a significant step for characterizing which pathways are required for induction of each cellular immediate early gene. Methods Gene expression analysis NIH3T3 cells stably expressing dominant negative MKL1 (a.a. 1-630) (DN-MKL1) or containing the vector, pBabePuro, were previously described [ 21 ]. The cells were serum starved in Dulbecco's modified Eagle's medium (DMEM) with 0.2% newborn calf serum (NCS) for 24 hours and then induced with 20% NCS for 30, 60 and 120 minutes. Total RNA from these cells was prepared using Trizol (Invitrogen) and then purified using RNeasy columns (Qiagen). Total RNA (8 μg) was used for first-strand cDNA synthesis using T7-Oligo-dT primers and Powerscript reverse transcriptase (Invitrogen) for 1 hour at 42°C. This was followed by second-strand synthesis for 2 hours at 16°C using RNase H, E. coli DNA polymerase I, and E. coli DNA ligase (Invitrogen). The obtained double-stranded cDNA was then blunted by the addition of 20 units of T4 DNA polymerase and incubation for 5 min at 16°C. The material was then purified by phenol:chloroform:isoamyl alcohol extraction followed by precipitation with ammonium acetate and ethanol. The cDNA was then used in an in vitro transcription reaction for 6 hours at 37°C using a T7 in vitro transcription kit (Affymetrix) and biotin-labeled ribonucleotides. The obtained cRNA was purified on an RNeasy column. The eluted cRNA was then fragmented by incubation of the products for 30 min in fragmentation buffer (40 mM Tris-acetate, pH 8.1, 100 mM KOAc, 30 mM MgOAc) at 95°C. The fragmented labeled RNA (15 μg) was hybridized to an Affymetrix mouse MOE430A Genechip at 45°C according to the manufacturer's protocol [ 52 ] and stained with streptavidin-phycoerythrin. The chips were then scanned with an Affymetrix Genechip Scanner GS300. 
All the data sets were done in triplicate from serum stimulation of cells to scanning of the microarrays. The percent of genes with significant expression ("present" calls) ranged from 50.4 to 56.5% for each microarray. The 3':5' ratio for a control actin gene ranged from 1.0 to 1.31 for each of the 24 microarrays scanned. Each microarray was scaled to a target intensity of 250 (using Affymetrix Microarray Suite 5.0 software) to account for differences in probe labeling and general changes in hybridization; the scaling factors ranged from 1.273 to 5.542. dChip analysis The dChip software [ 30 , 53 ] was used for the normalization and calculation of model based expression indexes (MBEI) after pooling the replicate arrays and for the hierarchical clustering analysis of the data from the Affymetrix Gene chips. Normalization was based on a large set of probes determined iteratively to be invariant across the different microarrays. After normalization each array has a similar overall signal [ 30 ]. The initial analysis was performed using the perfect match (PM)-only model and genes were filtered according to the criteria of a) coefficient of variation (standard deviation/mean) across the eight conditions must be between 0.50 and 10.00, b) a gene must be called 'Present' in ≥ 20% of the arrays used, c) the variation of the standard deviation/mean for replicate arrays for a single condition must be between 0 and 0.5, and d) the expression level must be ≥ 100 in at least one of the time points. Hierarchical tree clusters were then generated using this filtered gene list. Expression values (MBEIs) at each time point were determined with a standard error of the mean. These values were used to calculate the fold change with 90% confidence intervals. The confidence intervals were derived from the standard errors and fold changes based on a χ^2 distribution model with one degree of freedom [ 30 ]. 
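The four filtering criteria a) through d) above translate directly into a predicate over a gene's expression summaries. A sketch under stated assumptions: a gene is represented by its expression values across the eight conditions plus precomputed present-call and replicate-variation figures. This mirrors the stated thresholds, not dChip's internal implementation.

```python
# Sketch of the gene filter described in the text, criteria (a)-(d).
# Inputs are assumed summaries, not dChip's internal representation.

def passes_filter(values, present_fraction, replicate_cv):
    """values: expression across the eight conditions;
    present_fraction: share of arrays with a 'Present' call;
    replicate_cv: standard deviation/mean across replicate arrays."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    cv = sd / mean if mean else 0.0
    return (0.50 <= cv <= 10.00            # (a) variation across conditions
            and present_fraction >= 0.20   # (b) 'Present' in >= 20% of arrays
            and 0 <= replicate_cv <= 0.5   # (c) replicate consistency
            and max(values) >= 100)        # (d) expressed at some time point

print(passes_filter([50] * 7 + [500], 0.5, 0.3))  # → True
print(passes_filter([100] * 8, 1.0, 0.1))         # → False (no variation)
```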
For the lists of serum inducible genes the low confidence interval value was required to be greater than two. For the inhibition in the DN-MKL cells, the low confidence interval value was required to be at least 35% less than in the WT cells for the gene to be designated MKL-dependent. Real time PCR Total RNA was isolated from the DN-MKL1 and vector containing NIH3T3 cells as described for the microarray analysis and 1 μg was used for first strand cDNA synthesis (Powerscript Reverse Transcriptase, BD Biosciences) using oligo-dT primers according to the manufacturer's protocols. One fiftieth of the reverse transcription reaction was included in a 20 μl PCR reaction. For a quantitative analysis, SYBR green PCR technology was used (Applied Biosystems). Real-time detection of the PCR product was monitored by measuring the increase in fluorescence caused by the binding of SYBR green to double-stranded DNA with an ABI PRISM 7000 Sequence Detector. To calculate relative quantification values a threshold cycle (Ct), the cycle at which a statistically significant increase in fluorescence occurs, was derived from the resulting PCR profiles of each sample. Ct is a measure of the amount of template present in the starting reaction. To correct for different amounts of total cDNA in the starting reaction, Ct values for an endogenous control (acidic ribosomal phosphoprotein P0) were subtracted from those of the corresponding sample, resulting in ΔCt. The ΔCt value of the serum-starved wt sample was chosen as the reference point and was subtracted from the ΔCt values of all the other samples, resulting in ΔΔCt. The relative quantification value is expressed as 2^-ΔΔCt, giving the relative difference of the serum induced points compared to the serum-starved cells for expression of a particular gene. All real time PCR data sets shown are the results of two independent mRNA preparations and amplifications. 
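The ΔΔCt arithmetic described above is compact enough to state as code. A sketch with invented Ct values; the control argument stands in for the endogenous control gene and the reference sample is the serum-starved wt sample:

```python
# The comparative Ct (2^-ddCt) calculation from the text.
# Ct values below are invented for illustration.

def relative_expression(ct_gene, ct_control, ct_gene_ref, ct_control_ref):
    """Fold change of a gene in a sample relative to the reference sample,
    each normalized to an endogenous control gene."""
    d_ct_sample = ct_gene - ct_control        # dCt of the induced sample
    d_ct_ref = ct_gene_ref - ct_control_ref   # dCt of the serum-starved reference
    dd_ct = d_ct_sample - d_ct_ref
    return 2 ** -dd_ct

# A 3-cycle drop in the gene's Ct (control unchanged) = 8-fold induction.
print(relative_expression(22.0, 18.0, 25.0, 18.0))  # → 8.0
```

Note the sign convention: a lower Ct means more starting template, so a negative ΔΔCt yields a fold change above one.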
Amplification of only a single species in each PCR reaction was confirmed by checking for a dissociation curve with a single transition. Promoter sequence analysis The Database of Transcriptional Start Sites (DBTSS) [ 32 , 54 ] was used to extract the promoter sequences (-1000 to +200) of all the available full length cDNAs and RefSeq entries based on the start site of the cDNA as +1. This mouse database was searched for SRF binding sites using the consensus CCWWWWWWGG, where W is A or T, with or without allowing for one base mismatch. The CArG box sequences of the known SRE regulated genes were derived from the literature. The motif discovery tool MEME (Multiple EM for Motif Elicitation) [ 55 , 56 ] was then used to derive a multilevel consensus sequence based on a position specific probability matrix. The matrix specifies the probability of each possible nucleotide appearing at each possible position in an occurrence of the motif. MEME takes into account both orientations of the sequence to arrive at the consensus sequence. Sequences flanking the known SREs (30 bp on each side) were extracted from Genbank and MEME was also used to compare these sequences. A control set of 10 upstream sequences from the same genes (70 bp in length) was also searched for comparison. Promoter sequences (-1000 to +200) of the serum inducible genes were also searched for common elements with MEME. Promoters for 20 MKL-dependent, 50 MKL-independent and 50 random promoters were searched. These promoter sequences were additionally searched by oligonucleotide analysis for enriched hexamers [ 36 , 57 ], except that 20 MKL-dependent and 75 MKL-independent promoters were searched. The background frequency of oligonucleotides was set using promoter sequence (-1000 to +200) for all 6875 mouse promoters in DBTSS. A control searching 60 random promoters also did not show significant enrichment of specific oligonucleotides. Authors' contributions AS carried out all experimental sections of the paper. 
AS and RP conceived of the study and participated in its design and coordination. Both authors read and approved the final manuscript. Supplementary Material Additional File 1 List of serum inducible MKL-independent genes This Microsoft Excel spreadsheet file contains a list of genes that were serum inducible (>2-fold) at either the 30, 60 or 120 minute time points and that satisfied the 90% confidence interval criteria for fold-change using the dChip software. The expression of these genes was reduced by less than 35% in the DN-MKL1 cells at each time point. The MKL-dependent genes (reduced by more than 35%) shown in Tables 1 to 3 were removed from this list to give 122 MKL-independent genes. The Affymetrix probe set information, the gene name, Genbank Accession number and Locus link identifier are shown for each of the genes. For each time point the fold induction in serum-treated wt NIH3T3 cells compared to serum-starved wt cells ('fold change') and its upper and lower confidence intervals are shown. The time points where serum induction was deemed significant for each gene are marked by asterisks (*) in the 'filtered' columns. Click here for file | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC516031.xml |
523840 | Prescription for a Healthy Journal | Why does the world need a new medical journal? In this first editorial the PLoS Medicine Editors set out their vision for the journal | Today the possibilities for a medical journal are almost limitless. The first medical journals reflected the needs of a closed group of doctors. But medicine, its place in the world, and the dissemination of information have changed utterly. So in starting afresh, what should a new medical journal retain, and what should it ditch? Most obviously, we should throw out the old way of disseminating information. In today's electronic age, it is no more difficult, and it is only minimally more costly, to provide access to one million people than it is to one person. So the revolutionary idea of anyone being able to read any article is possible. This idea—open access—which completely challenges the old subscription-based publishing model, is the driving force behind the launch of PLoS Medicine . You can download and distribute articles without restrictions (feel free to make a thousand copies, translate articles into other languages, put articles into books—just give the author proper credit). We have also changed the way we involve the academic community in our journal. Our large global editorial board reflects the diversity of medicine today and is intimately involved in what we do. In particular, members of the editorial board are a crucial part of our peer review process. As academic editors they, along with a senior editor at the journal, take research papers through the peer review process in a way that we believe provides the most constructive and fair review. We are delighted that members of our editorial board have also shown their support for our journal by submitting papers to us, even before we launched. What will we publish? The research article on malaria in this issue reflects our priority of publishing papers on diseases that take the greatest toll on health globally. 
But we will also publish papers reporting a substantial advance in any specialty, whether that advance is in public health, such as the paper on the global burden of disease; drug effects, such as the paper on the effect of HIV drugs on lipids; or the molecular understanding of disease, such as the paper dissecting out the immune responses in lung disease caused by smoking. A good general medical journal should also be a place where the global medical community can discuss together what matters to them. The magazine section of PLoS Medicine will be devoted to comment, lively debate, and diverse opinions, in particular giving neglected voices and diseases a place in the limelight. In this issue's magazine section you will see articles from five continents that cover a huge range of topics, from basic sciences (such as the pathology of emphysema) to global public health (such as palliative care in developing countries). You will find diverse opinions—for example, on whether President Bush is helping or hindering Africa's progress towards tackling HIV, and on whether health professionals should routinely screen women for domestic violence (tell us what you think by taking our poll at www.plosmedicine.org ). And you'll find case-based learning materials on meningitis linked to an online video and an online quiz. The revolutionary idea of anyone being able to read any article is possible. Interpretation of results is an essential part of a medical journal's job. Although we expect that many of our readers will be doctors, we hope readers will range from patients wanting to learn about the latest research on their illness, to teachers wanting to use an article in the classroom, to policymakers. Hence, we have several levels of comment on original research. Perspectives, written by an expert, are aimed at readers who are already familiar with the topic. 
Synopses, written by PLoS Medicine 's professional editors, should provide any health professional with a quick introduction to an article. Patient summaries provide a starting point for patients to assess the relevance to them of a research paper. We have decided not to be part of the cycle of dependency that has formed between journals and the pharmaceutical industry, an industry that focuses overwhelmingly on the most profitable drugs, thus sidelining many of the world's health problems. Medical journals have allowed their interests to become aligned with those of the pharmaceutical industry by printing advertisements for drugs, publishing trials designed by drug companies' marketing departments, and making profits on reprints used as marketing tools. PLoS Medicine will not accept advertisements for pharmaceutical products or medical devices. Our open-access license allows free distribution of articles, so PLoS cannot benefit from exclusive reprint sales. And we consider as the lowest priority for publication papers that are simply aimed at increasing a drug's market share without obvious benefit to patients. We will aim to have the highest levels of transparency in our published papers. We require authors to tell us of any possible competing interests; we in turn will tell readers about them. But, information flow should not be just one-way. Our editorial doors (or at least our E-mail boxes) are always open. We want your feedback on the journal: send us an E-mail or submit an eLetter about any article in the journal, take part in our polls, contribute ideas for the magazine section and submit original research. PLoS Medicine is a journal for the global medical community; we invite you to join in. | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC523840.xml |
550676 | Jacov Tal (1940 – 2005): remembrances of a friend | An obituary commemorates the life and works of Jacov Tal. | A friend's passing elicits a set of emotions. Collecting and sharing remembrances are steps of closure in bidding farewell. Here, we honor and remember Jacov Tal (Fig. 1 ) who passed away on February 8th, 2005. At the time of his passing, Jacov was the Head of Virology at Ben-Gurion University Medical School, Israel. In brief, four of us, who befriended Jacov in different capacities, write our remembrances. It is appropriate to recall the words of a colleague who on another occasion, upon the passing of a giant in American science, remarked, "Well, ghosts can't make men do anything!" Thus, the true reflection of a person is not what (s)he through his/her prestige, wealth, and position makes others do in life, but how (s)he is remembered by others in death. Jacov started as a young retrovirologist in J. M. Bishop and H. Varmus' laboratory; and it is fitting that he is remembered by friends in Retrovirology . Figure 1 Jacov Tal, circa 2004. "I remember Jacov as my actual mentor in my first year as a graduate student in the laboratory of Uri Littauer that Jacov had joined a year earlier. Jacov took similar care of other newcomers interested in nucleic acids and molecular biology including Hiroshi Inouye, Inder Verma and Jacque Beckman, even to the point of neglecting his own work. I believe that Jacov's selfless involvement in public affairs and later decision to become a virologist date back to a childhood experience; namely, only the ruling elite and diplomatic corps in a country where Jacov's parents were stationed knew about the polio epidemic, considered then an ideological insult. Jacov Tal studied at the Hebrew University in Jerusalem where he obtained his Bachelor's degree in Biochemistry and Microbiology in 1964 and his Master's degree in Biochemistry in 1966. 
He continued his studies at the Weizmann Institute of Science where he obtained his PhD degree in Biochemistry in 1971. Jacov Tal did his postdoctoral training first with Hershel Raskas in Washington University where he investigated adenovirus gene expression and later with Harold Varmus and Michael Bishop at UCSF where he studied the relation between the retroviral genome and the genome of its cellular host. After these formative years as a molecular virologist, Jacov Tal joined the newly founded Ben-Gurion University in the Israeli desert city Beer Sheva where he initiated and led the Medical School's Virology Department." (Gabriel Kaufmann) "Professor Jacov Tal will be best remembered by the scientific community for his extensive studies of the parvoviruses adeno-associated virus (AAV) and minute virus of mice (MVM). One of his significant achievements was the determination of parameters of site-specific integration of AAV, leading to development of potential vectors for gene therapy. Other important contributions were his insights into MVM's ability to kill cancerous cells, while leaving normal cells unaffected. The students of Professor Tal will also remember him for his dedication to training them to be rigorous and discerning scientists, and his concern for their well-being. His colleagues will miss his sharp wit and analytic acumen." (Maureen Friedman) "Jacov Tal and I collaborated closely to his last days. Together, we found that during embryonic development MVM somehow senses a differentiation signal, and we suggested a relation between this observation and MVM's anti-tumor cell activity. I recall Jacov's 'freshman' enthusiasm in private, and his public posture as one who speaks his mind without hesitation, standing up against any perceived injustice." (Claytus Davis) "I met Jacov towards the latter years of his career. In 1993, Jacov came to the US to do a sabbatical in Peter Chiang's laboratory. 
By and by, he drifted into my laboratory and actually spent the entire year working with me. Jacov, by then already a senior scientist for many years, not unexpectedly struggled heroically (and largely unsuccessfully) at the bench; and certainly it was not his bench-skills that impressed me. What did impress me was Jacov's common sense and his very human and generous attitudes. I recall an incident during Jacov's first week when he had not yet gotten to know all the members of my lab. At that time, there was a tall, curly-haired, darkly-handsome and academically gifted young man, working as a post-doc with me, who had graduated from Yale, obtained his MD degree from Duke, and received house staff and infectious diseases training from the University of Virginia. This person also has four siblings who are MDs. Jacov, upon meeting this young man, whispered to me excitedly, 'Now here is a nice Jewish boy who is going to make the mother of a Jewish daughter very happy!' Surprise, surprise...that person turned out not to be Jewish, but a Catholic Lebanese-American of Arabic descent. Afterwards, a sheepish Jacov explained to me that it is very difficult, nearly impossible, to tell an Arab from a Jew in Israel; and as far as he was concerned, it made no difference whether Arab-American or Jewish-American. I was struck by his frankness and openness. In his typically thoughtful and 'dovish' ways, over the next many years, Jacov would email me from Israel his periodic 'roadmaps' for peace in the Middle East, accompanied by his incisive commentaries. I will deeply miss my friend's common sense advice and humor." (Kuan-Teh Jeang) | /Users/keerthanasridhar/biomedlm/data/PMC000xxxxxx/PMC550676.xml |