The wonders of astronomy in a comfortable seat regardless of the time of day—or the weather!
On Friday 10 May, Australia will be treated to a special solar eclipse.
Comets Lemmon and PANSTARRS together in the south.
Celebrating Pluto Day.
A close encounter with 2012 DA14.
Remembering the crew of Columbia.
Bushfire at one of Australia's top astronomy research centres.
Answers to common astronomy questions.
A monthly series that provides a guide to the stars and constellations.
View 2013 moon phases or choose another year from the menu at left.
Times for the Sun, Moon and five bright planets as seen from Melbourne.
Years 3-8: This package supports Year 3-8 school groups who are studying astrono...
Saturn is the pick of the planets this month, looking lovel...
Our annual Discover the Night Sky evenings will be running Thursday nights in August 2013.
You can subscribe to Skynotes to keep up to date with the Plane...
To read the latest tweets from @museumvictoria, follow Planetarium on Twitter.
Our Living Climate is one of four shows currently running at the Planetarium - the others are called Tilt, Tycho to the Moon, and Black Holes: Journey...
fwe2-CC-MAIN-2013-20-29378000
A chord does not have to be made up of thirds. A chord is by definition two or more notes heard as if sounded simultaneously. Not all chords have three notes either. There are dyads (two notes), triads (three), tetrachords (four), pentachords (five), and hexachords (six). There is no limit on the number of notes and, by definition, no limit on which notes. C - E - G is a chord. D - E - F - C is a chord. However, the most common triads are the major, minor, augmented, and diminished (there is also the suspended). All of these are composed of a root, a third, and a fifth (except the suspended, which uses the root, perfect fourth, and perfect fifth). So, now to your question: why thirds? First, realize there are two types of thirds: the major and the minor. The major third consists of four semitones and the minor third of three semitones. Quoting Wikipedia: The major third is classed as an imperfect consonance and is considered one of the most consonant intervals after the unison, octave, perfect fifth, and perfect fourth. In the common practice period, thirds were considered interesting and dynamic consonances along with their inverses, the sixths. After the major third became established as such, it became pretty standard. Every classical piece makes use of it in some way. The other reason the major third is so widely used is that it is found in the harmonic series (between the fourth and fifth harmonics). Early brass instruments (e.g., posthorn, natural trumpet) had no valves or slides and were limited to the harmonic series. This encouraged use of and familiarity with the major third. However, I'd say the most important of all these reasons is the first: it is highly consonant. The minor third has the same level of consonance as the major third, but is found higher up in the harmonic series (between the fifth and sixth harmonics). Also, there are many common transposing instruments which sound a minor third higher or lower from where they are written. For example, the Eb clarinet and the Eb trumpet both sound a minor third higher than written. The oboe d'amore, popular in the eighteenth and twentieth centuries, and the soprano clarinet in A sound a minor third lower than written. Of these reasons, I'd say the first (again) is the most important. As for other intervals, any interval can be used, but some are more common than others. The perfect fifth, octave, unison, and seventh (in no particular order) are very common. All major, minor, and suspended chords have a perfect fifth. Also important are the major second, perfect fourth, and major sixth. To learn more about different chords and the intervals that make them up, read this article on intervals and this one on chords. They are both very informative. Thirds are the most consonant intervals (after the unison, octave, perfect fifth, and perfect fourth). Are other intervals sometimes used? Many other intervals are used. See here for a list of the main ones. They include the perfect fifth, the seventh, the octave, the major sixth and the perfect fourth.
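The semitone arithmetic above is easy to make concrete. Below is a minimal Python sketch (the note spelling and helper function are ours, for illustration only) that builds the triads named in the answer from stacked intervals: a major triad is a major third (4 semitones) plus a minor third (3), a minor triad is 3 + 4, a diminished triad 3 + 3, an augmented triad 4 + 4, and a suspended (sus4) triad uses the root, perfect fourth, and perfect fifth.

```python
# Pitch classes by semitone, C = 0. Sharps chosen for simplicity;
# proper enharmonic spelling (E# vs. F, etc.) needs more logic than this sketch has.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Semitone offsets from the root for the triads discussed above.
TRIADS = {
    "major":      [0, 4, 7],   # major third (4) + minor third (3)
    "minor":      [0, 3, 7],   # minor third (3) + major third (4)
    "diminished": [0, 3, 6],   # two minor thirds
    "augmented":  [0, 4, 8],   # two major thirds
    "sus4":       [0, 5, 7],   # root, perfect fourth, perfect fifth
}

def spell_triad(root, quality):
    """Return the pitch classes of a triad built on `root`."""
    start = NOTE_NAMES.index(root)
    return [NOTE_NAMES[(start + offset) % 12] for offset in TRIADS[quality]]

if __name__ == "__main__":
    print(spell_triad("C", "major"))   # ['C', 'E', 'G']
    print(spell_triad("D", "minor"))   # ['D', 'F', 'A']
    print(spell_triad("C", "sus4"))    # ['C', 'F', 'G']
```

Running it prints ['C', 'E', 'G'] for C major, matching the C - E - G example above.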
fwe2-CC-MAIN-2013-20-29380000
Florida is an important place for the endangered and threatened sea turtles of the world. Sea turtles nest on our beaches, forage for food in our estuaries, and all too often wash up dead on our shoreline. Florida Fish and Wildlife Conservation Commission staff are dedicated to protecting sea turtles in Florida and learning as much as possible about the biology and life history of these animals. The unusually long spell of cold weather in Florida in January 2010 has had a big impact on sea turtles. The FWC has been working with staff from county, state, and federal agencies as well as numerous volunteers on a mass rescue effort for sea turtles throughout the state. Five species of sea turtles are found swimming in Florida's waters and nesting on Florida's beaches. All sea turtles found in Florida are protected under state statutes. The Florida Fish and Wildlife Conservation Commission's Fish and Wildlife Research Institute coordinates nesting beach survey programs around the state. FWRI staff members coordinate the Florida Sea Turtle Stranding and Salvage Network (FLSTSSN), which is responsible for gathering data on dead or debilitated (i.e., stranded) sea turtles found in Florida. Debilitated turtles are rescued and transported to rehabilitation facilities. FWRI marine turtle program staff conduct research on the distribution, abundance, life histories, ecology, migrations, and threats to marine turtles in Florida and contiguous western Atlantic and Caribbean waters. Illegal harvesting, habitat encroachment, and pollution are only some of the threats sea turtles must fight against to stay alive. Researchers at FWRI are studying these threats and finding ways to help the population survive.
fwe2-CC-MAIN-2013-20-29384000
Yogurt, a fermented dairy food, has been around for thousands of years and is an important part of the diet in the Middle East, Asia, Russia and Eastern European countries, such as Bulgaria. The beneficial bacteria in yogurt, also known as probiotics, improve your body’s ability to absorb essential nutrients, particularly calcium. Yogurt is also a potent inflammation-fighter, with well-documented curative effects for the pain and stiffness of arthritis, making it an important part of The Arthritis Healing Diet.

1. Yogurt blocks inflammation. People with arthritis should eat yogurt often because it blocks inflammation. A study in the World Journal of Gastroenterology found that the probiotics in yogurt trigger a decrease in C-reactive protein (CRP), a blood marker for inflammation that can go sky high in people with arthritis. Even more compelling, researchers noted that the bacteria strains Lactobacillus and Propionibacterium exert an especially strong effect on CRP. This means that certain beneficial bacteria have “strain-specific” anti-inflammatory abilities, so look for yogurt that contains these two when shopping. The same study found that yogurt’s beneficial bacteria caused a reduction in the body’s production of cytokines, body chemicals that turn on the inflammation response in joints. For more scientific studies on probiotics go to: http://www.ncbi.nlm.nih.gov/pubmed

2. Calcium is critical for joint health. Ounce for ounce, yogurt contains more assimilable calcium than an equal amount of milk. Calcium is an essential mineral for bone and cartilage health. Just 1 cup of plain nonfat yogurt provides 414 mg of calcium — 25% more than the same amount of nonfat milk. On top of that, the milk sugar (lactose) in yogurt has been predigested by beneficial bacteria, which greatly improves calcium absorption. Bottom line? Eating yogurt regularly is vital for everyone with arthritis. More calcium also makes weight loss easier, giving relief to over-stressed joints. A study published in the American Journal of Clinical Nutrition found that eating a diet high in calcium boosts fat burning in the body. Women aged 18-30 whose weight was normal were put on either a high-calcium or a low-calcium meal plan for a full year. The high-calcium women took in 1000-1400 mg of calcium daily from food sources, while the low-calcium group got less than 800 mg of calcium daily. Results? The high-calcium group burned fat at 20 times the rate of the low-calcium group! This is a testament to calcium’s fat-burning power and its importance for people with arthritis.

3. Not all yogurts are arthritis-friendly. Steer clear of any yogurt containing sugar, artificial sweeteners, artificial coloring, or candy and cookie pieces. These bear little resemblance to real yogurt. Most also have been pasteurized, which kills the live bacteria, rendering the yogurt ineffective in treating arthritis. When shopping, read yogurt labels carefully. Look for organic yogurt with labels marked “living active cultures” or “live active cultures.” Some of the most beneficial bacteria to look for include L. bulgaricus, B. bifidus, L. casei, L. rhamnosus and L. reuteri. If you find plain yogurt to be too sour for your taste, just add a little fresh fruit, cinnamon and ground flaxseed for a sweet, healthful and delicious meal or snack any time of day.
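As a quick arithmetic check of the calcium comparison in point 2 (our back-of-envelope calculation, not from the article; the milk figure is implied rather than stated):

```python
# Quick check of the article's calcium comparison. The milk value below is
# implied by the "25% more" claim, not stated in the article itself.
yogurt_mg = 414                       # mg calcium per cup of plain nonfat yogurt (article's figure)
implied_milk_mg = yogurt_mg / 1.25    # "25% more" than nonfat milk
print(f"Implied calcium in 1 cup nonfat milk: {implied_milk_mg:.0f} mg")
# -> about 331 mg, broadly in line with the ~300 mg per cup commonly cited for milk.
```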
fwe2-CC-MAIN-2013-20-29385000
Is participation in adult learning increasing? Adult education activities are formal activities including basic skills training, apprenticeships, work-related courses, personal interest courses, English as a Second Language (ESL) classes, and part-time college or university degree programs. This indicator examines the participation rates of individuals age 16 or older in adult education activities. Overall participation in adult education among individuals age 16 or older increased from 40 percent in 1995 to 46 percent in 2001 and then declined to 44 percent in 2005. In 2005, among the various types of adult education activities, individuals age 16 or older participated most in work-related courses (27 percent), followed by personal interest courses (21 percent), part-time college or university degree programs (5 percent), and other activities (3 percent). Participation rates varied by sex, age, race/ethnicity, employment/occupation, and education in 2005. For example, a greater percentage of females than males participated in personal interest courses (24 vs. 18 percent) and work-related activities (29 vs. 25 percent). Individuals ages 16–24 had a higher overall participation rate in adult education activities than their counterparts age 55 or older. Blacks and Whites had higher rates of overall participation in adult education than their Hispanic peers. Among those employed in the past 12 months, the overall participation rate in adult education was higher for those in a professional or managerial occupation (70 percent) than for those employed in service, sales, or support jobs (48 percent) or those in trade occupations (34 percent). In addition, the overall participation rate in adult education for bachelor’s degree recipients or higher was greater than for those individuals who had some college or less education. SOURCE: U.S. Department of Education, National Center for Education Statistics. (2007). The Condition of Education 2007 (NCES 2007-064), Indicator 10.
fwe2-CC-MAIN-2013-20-29390000
October 9th is the 50th anniversary of the death of Pope Pius XII. Although his memory has been shrouded in controversy over his actions, or lack thereof, during the Second World War and specifically during the Holocaust, he is also remembered for his many encyclicals and messages that laid the groundwork for future developments within the Catholic Church. His encyclicals on the Mystical Body of Christ and the renewal of Biblical studies, both published in 1943, provided inspiration for a more biblically-based ecclesiology and for a scientifically critical study of Sacred Scripture, both of which had a major influence on the Second Vatican Council, convened two decades later by his successor, John XXIII. In 1947 Pius XII issued yet another encyclical on liturgical renewal, followed almost a decade later by his full-scale reform of the Holy Week liturgies. Both of these also fed into Vatican II, which promoted the active participation of the laity in the Church’s worship. Earlier, in 1944, with the war still raging, the pope issued a Christmas message on democracy and the need for a lasting peace. Reading that message today, one is struck by its florid style. What was unprecedented at the time, however, was its near-endorsement of democracy as the form of government best suited to insure justice for all. The call for “democracy and more democracy,” he wrote, “cannot have any other meaning than to place the citizen ever more in the position to hold his own opinion, to express it and to make it prevail in a fashion conducive to the common good” (para. 20). Nearly a half-century later, Pope John Paul II gave voice to the Catholic Church’s strongest support for democracy thus far. In his 1991 encyclical Centesimus annus, marking the one-hundredth anniversary of Leo XIII’s landmark encyclical Rerum novarum, John Paul II wrote: “The Church values the democratic system inasmuch as it ensures the participation of citizens in making political choices, guarantees to the governed the possibility both of electing and holding accountable those who govern them, and of replacing them through peaceful means when appropriate” (n. 46). Those values are tested regularly in democratic elections held at various levels and intervals. Indeed, the United States is currently in the midst of a campaign to elect a new President and Vice President. Like the pontificate of Pius XII, the campaign is shrouded in controversy. The candidate for Vice President on the Republican ticket, Governor Sarah Palin of Alaska, is what we used to refer to as a “fallen-away” Catholic. Baptized and raised as a Catholic, she began attending a Pentecostalist church as a teenager and later joined, and retains active membership in, a fundamentalist Bible church in Wasilla, Alaska, where she formerly served as mayor. She may, however, escape criticism from the vocal group of bishops who tend to be more upset with practicing Catholic candidates like Governor Palin’s counterpart on the Democratic ticket, Senator Joseph Biden of Delaware. They disparage him as pro-choice on the issue of abortion, and therefore pro-abortion, because he does not favor the path of criminalization. In a recent interview on “Meet the Press” (9/7/08), Senator Biden made clear that he accepted the teaching of the Catholic Church that human life begins at the moment of conception. But he also pointed out that this is a “religiously-based view,” a matter of faith, not scientific evidence that every reasonable person would have to accept. 
Senator Biden noted that many American citizens–Protestants, Jews, Muslims, and others–have a different view, even though they “believe in God as strongly as I do. They’re intensely as religious as I am....For me to impose that judgment (of faith) on everyone else who is equally and maybe even more devout than I am seems to me is inappropriate in a pluralistic society.” When Tom Brokaw, the interviewer, asked why Senator Biden had voted for abortion rights, Biden objected. He said that he had voted against “curtailing the right, criminalizing abortion. I voted,” he continued, “against telling everyone else in the country that they have to accept my religiously-based view that (life begins at the) moment of conception.” He pointed out that he has not voted in favor of public funding of abortion “because that flips the burden. That’s then telling me that I have to accept a different view.” What we all need to do, he said, is “reduce considerably the amount of abortions...by providing the care, the assistance and the encouragement for people to be able to carry to term and to raise their children.” Such are the ways of democracy, as Pope Pius XII noted in 1944.
fwe2-CC-MAIN-2013-20-29392000
In addition to the above types of problems, considerable research is directed to basic questions such as, Do we understand how quasars form and evolve? Can we connect theories of galaxy and black hole formation with the observations of quasars at high redshift and the incidence of black holes in galaxies at low redshift? Here I mention briefly some recent theoretical work that demonstrates progress in our understanding of quasars and ties in with present and future observational work. Haiman, Madau, and Loeb (1998) point out that the scarcity of quasars at z > 3.5 in the Hubble Deep Field implies that the formation of quasars in halos with circular velocities less than 50 km/s is suppressed (on the assumption that black holes form with constant efficiency in cold dark matter halos). They note that the Next Generation Space Telescope should be able to detect the epoch of formation of the earliest quasars. Cavaliere and Vittorini (1998) note that the observed form for the evolution of the space density of quasars can be understood at early times, when cosmology and the processes of structure formation provide material for accretion onto central black holes as galaxies assemble. Quasars then turn off at later times because interactions with companions cause the accretion to diminish. Haehnelt, Natarajan, and Rees (1998) show that the peak of quasar activity occurs at the same time as the first deep potential wells form. The Press-Schechter approach provides a way to estimate the space density of dark matter halos. But the space density of z = 3 quasars is less than 1% that of star-forming galaxies, which implies the quasar lifetime is much less than a Hubble time. For an assumed relation between quasar luminosity and lifetime, together with the Eddington limit, it is possible to connect the observed quasar luminosity density with dark matter halos and the numbers of black holes in nearby galaxies. The apparently large number of local galaxies with black holes implies that accretion processes for quasars are inefficient in producing blue light.
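For reference, the Eddington limit invoked in such estimates is the standard cap on the luminosity of steady, spherically symmetric accretion onto a mass $M$ (a textbook result, not derived in the passage above):

\[
L_{\rm Edd} = \frac{4\pi G M m_{\rm p} c}{\sigma_{\rm T}} \approx 1.26\times10^{38}\left(\frac{M}{M_\odot}\right)\ \mathrm{erg\ s^{-1}},
\]

so a quasar observed at luminosity $L$ requires a black hole of mass $M \gtrsim L/(1.26\times10^{38}\ \mathrm{erg\ s^{-1}})\,M_\odot$, which is how an observed quasar luminosity density translates into a minimum black hole mass density.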
fwe2-CC-MAIN-2013-20-29399000
Repetitive transcranial magnetic stimulation (rTMS) is a new tool to study brain function and is being investigated as a treatment modality for depression and other neuropsychiatric disorders.1,2 To perform rTMS, a powerful electromagnetic coil is placed on the scalp. The coil produces a rapidly changing focal magnetic field that induces an electrical current that depolarizes neurons. Because the magnetic field produced by the coil decreases exponentially with distance,3 only superficial structures are directly stimulated. In most current applications, rTMS is dosed for each individual according to the amount of stimulation required to cause contraction of the contralateral (right extremity) abductor pollicis brevis. This is called the motor threshold (MT), and it is commonly expressed as a percentage of the maximum stimulator output of each machine. Surprisingly, despite more than a decade of research using TMS as a tool to investigate the motor system, the relationship between the motor threshold for each individual and the distance from the individual's scalp to cortex is not well understood. Because the motor threshold is inexpensively determined and appears to relate to seizure risk,4 most studies using TMS over nonmotor regions such as the prefrontal cortex have stimulated with the intensity determined by the motor threshold over motor cortex. An untested assumption is made that the motor cortex variables apply to the prefrontal cortex. Initial open studies5-7 and later crossover8,9 and now double-blind parallel studies10 all suggest that rTMS has antidepressant properties. Not all studies have been positive.11 For example, psychotically depressed patients appear not to respond to rTMS as currently applied.12 Moreover, in all studies, older subjects have not responded as well as younger subjects.7,13 Imaging studies have shown that the prefrontal cortex atrophies with age in depressed subjects.14,15 Accordingly, the trend of nonresponse in elderly patients in the TMS antidepressant trials prompted us to wonder if the degree of brain atrophy, particularly prefrontal, might play a role in the relative nonresponse in older depressed subjects. We thus carried out the following magnetic resonance imaging (MRI) study in adult depressed subjects participating in a parallel-design randomized placebo-controlled trial of left prefrontal rTMS for the treatment of depression. (For full details of this clinical trial, see Nahas et al.10)

Thirty-two depressed adults enrolled in a 2-week double-blind placebo-controlled trial. Two subjects who had been randomized to receive active rTMS did not tolerate the procedure and dropped out after fewer than three treatments. They were excluded from final analysis. Prior to treatment, all subjects had an MRI scan of the head. One subject could not tolerate the initial MRI scan. Included for this MRI study, therefore, were 29 patients (11 men) who met DSM-IV criteria for either major depressive disorder (n=21; 7 men) or bipolar disorder, most recent episode depressed (n=8; 4 men). They were randomized into one of three cells, in each of which the subjects received 10 days of prefrontal stimulation over 2 weeks. The cells and the subject means (±SD) for Hamilton depression score (Ham-D) and years of age were as follows: high frequency (Active, 20 Hz, n=12; Ham-D=30±5.86, age=42.6±14); low frequency (Active, 5 Hz, n=8; Ham-D=26.3±5.98, age=42.4±7); and placebo (n=9; Ham-D=23.8±4.1, age=48.5±8.8).
Following full explanation of the procedures, all subjects signed a written informed consent document according to the Declaration of Helsinki and as approved by the Medical University of South Carolina Institutional Review Board and the U.S. Food and Drug Administration Devices Section. Subjects were free of antidepressant medications for at least 2 weeks prior to study entry. Three exceptions were bipolar patients who required ongoing mood stabilizers or benzodiazepines for anxiety. No subjects were currently abusing substances. Two subjects, however, had a history of alcohol dependence, and one had a history of heroin and butane abuse. Subjects underwent an MRI scan of the brain at the beginning and end of the study16 and had three regional cerebral blood flow single-photon emission computed tomography (SPECT) scans: at baseline, during the fifth rTMS session, and 3 days after completion of the trial prior to restarting medications. This report discusses the initial MRI scans only; the SPECT results have been discussed elsewhere.17,18 Qualitative and quantitative analysis of MRI scans done before and after the 2 weeks of treatment showed that rTMS produced no MRI changes.16

Transcranial Magnetic Stimulation

A medical doctor (A.M.S., Z.N., or M.S.G.) trained in the proper use of rTMS used a Cadwell Magnetic Stimulator (Cadwell; Kennewick, WA) equipped with a figure-8-shaped coil and a continuous water cooling system to prevent overheating. Subjects were seated upright in a comfortable chair with eyes open during rTMS. On the initial treatment visit, motor threshold was determined at rest in the contralateral (right extremity) abductor pollicis brevis (APB) muscle, as described previously,13 by using visible twitch. The left prefrontal cortex stimulation site was defined as the location 5 cm rostral and in a parasagittal plane from the site of maximal APB stimulation. Subjects were randomly assigned to receive stimulation over 20 minutes each weekday morning for 2 weeks as active (5 Hz or 20 Hz; see Nahas et al.10 for discussion of the role of stimulation frequency) or placebo (coil held tangentially off the head). The left prefrontal cortex was stimulated at 100% MT, with an equal total number of 16,000 stimulations across all cells.

Ratings and Response Classification

Before entering the study, subjects were screened and diagnosed by trained clinicians using the Schedule for Affective Disorders and Schizophrenia.19 In addition, the Ham-D (21 items)20 was administered at baseline, on the fifth day of treatment (end of week 1), and at the end of the study (week 2). Trained psychiatric nurses, blind to treatment arm, performed all ratings. Ham-D scores were used to calculate percentage improvement from the beginning to the end of treatment (2 weeks). Subjects who showed ≥50% improvement in the 21-item Ham-D at 2 weeks from baseline were classified as antidepressant treatment responders.

One day prior to the beginning of treatment, a T1-weighted 3D volumetric MRI sequence was obtained with a 1.5-tesla Picker MRI scanner (Picker International; Cleveland, OH). Scans in this study were 142 1-mm-thick sagittal slices covering the entire brain (128×128, FOV=20 cm, TE=4.4, TR=13; voxel size 1.2×1.2×1 mm). Most, but not all, subjects (21 of 29) had vitamin E capsule fiducials (approximately 7 mm×12 mm oval) placed at the site of rTMS prefrontal stimulation.
MRI Scan Reformatting and Distance Calculations

The T1-weighted MRI scans were reformatted from sagittal to coronal plane by a blinded reader (F.A.K. or C.D.) using the Analyze Mayo Clinic Image Processing System version 7.5.2 on a SUN UltraSPARC 20 station. A line was drawn on the midsagittal view to bisect the anterior commissure and posterior commissure. The corresponding transverse image was resliced and corrected for problems in roll or yaw. The roll-correction was done by equalizing the structures of both eyes in the transverse plane, and yaw was corrected such that the midsagittal sulcus appeared vertical in that slice. The image was reformatted to the coronal plane, with 250 1-mm-thick slices from the caudal to the rostral aspect of the skull. MRI distance measurements were performed by a trained observer (F.A.K.), blind to all patient variables, using MedX 2.1 (Sensor Systems; Alexandria, VA) on a DEC Alpha workstation with a 21-inch high-resolution screen. The coronal MRI images were enlarged so that there were three slices per screen (approximately 7×12 cm image size on the screen). MedX has a semi-automated function in which, when a line is drawn from one point to another, the distance (in mm) and a reference angle (e.g., moving from one point vertically upward to another gives a value of -90.00 degrees) are computed based on image voxel sizes (see Figure 1).

Prefrontal Cortex Distance Measurement (D-PFC)

A trained and blinded investigator (F.A.K.) assessed the shortest distance from the TMS coil to the nearest prefrontal cortex by using three different techniques. After determining the D-PFC with all three methods, we elected to use the standard method for testing our hypotheses because it allowed the inclusion of the 8 subjects in whom the fiducial was not visible. In addition, there were concerns that the fiducials, even when visible, might have been moved from the true site because of hair, gravity, or displacement by the inflatable head holder used while scanning.

Motor Cortex Distance Measurement (D-MC)

Fiducials were not placed at the scalp position where the motor cortex threshold was determined. Because of the difficulty of determining the motor cortex region for thumb control from a structural MRI scan, particularly in the coronal plane,21 we chose not to identify the motor cortex directly. In the clinical trial, however, we determined the prefrontal stimulation location by empirically finding the area of motor cortex for APB and then moving in a parasagittal plane 5 cm forward. Thus, the motor cortex stimulation site would theoretically be 5 cm caudal from the prefrontal stimulation site. We therefore decided to use the prefrontal site as the point of reference from which to determine the motor cortex site, using the following algorithm. In order to measure the distance from the coil to the motor cortex, we first counted 16 slices rostral from the corpus callosum (which was the first slice measured using the standard method) and copied the vertical line from the lateral left eye socket to the skull that was created by the standard method for measuring the prefrontal cortex. We then moved 50 slices (5 cm) caudally, and the vertical line was copied unchanged onto this slice (the first presumed motor cortex slice). This provided a point of intersection between the skull and the line. A measurement was taken from that intersection to the nearest cortex. This was recorded as the distance from the coil to the motor cortex in millimeters (D-MC).
The line was copied onto the next eight rostral contiguous slices, and distance measurements were taken as above.

First, using a paired Student's t-test, we compared whether D-MC significantly differed from D-PFC. Correlational hypotheses were tested by using StatView 4.5, where bivariate plots were performed and a Pearson's correlation with a Fisher's r-to-z transformation (P-values) was calculated to determine if significant relationships existed between our hypothesized variables (P<0.05). We analyzed the relationship of distance from coil to motor cortex (D-MC) with 1) percentage output to reach motor threshold; 2) age; 3) distance from coil to prefrontal cortex (D-PFC); and 4) percentage antidepressant clinical response. Next, we analyzed the correlation of D-PFC with 1) percentage clinical response; 2) age among responders and nonresponders; and 3) percentage output to reach motor threshold. Finally, potential relationships between the nondistance measures were tested (age, percentage output to reach motor threshold, and percentage antidepressant clinical response).

For a full description of this patient sample, see Nahas et al.10 As noted above, there were 11 men and 18 women; 20 subjects with MRI scans were in the active group and 9 in the placebo group. This antidepressant trial had 7 responders and 22 (9 placebo) nonresponders.

Motor Distance and Prefrontal Distance

D-PFC was significantly greater than D-MC (D-PFC=14.4±2.7 mm, D-MC=12.7±2.6 mm, t=-3.6, P<0.01; mean±SD and Student's paired t-test).

Distance and Motor Threshold

MT significantly increased with increasing D-MC (P<0.01, Fisher's r to z, 29 subjects; see Figure 2). There was no significant relationship between D-PFC and MT (P=0.0525). D-MC and D-PFC significantly cross-correlated (r=0.562, P<0.01). D-MC (r=0.525, P<0.01) as well as D-PFC (r=0.611, P<0.001) significantly increased with age. Interestingly, a trend for D-PFC to increase more with age than D-MC was found in this depressed cohort (see Figure 3). In this small group, there was not a significant correlation (P=0.5746) between age and D-MC minus D-PFC. There was not a significant correlation between age and MT (P=0.1340) or between age and percentage antidepressant response (P=0.1271).

Correlation with Clinical Antidepressant Response

There was no correlation between D-MC or D-PFC and percentage clinical response. D-PFC did not correlate with percentage clinical response with any of the three measuring methods: fiducial to cortex (P=0.1216), skull under fiducial to cortex (P=0.4885), or standard (P=0.2029). When we examined D-PFC and age, analyzed separately for responders and nonresponders (see Figure 4), the responders were significantly younger (t=-2.430, P=0.0258), but response did not significantly correlate with D-PFC. There does seem to be a maximum threshold of age and distance, with the responders being younger than 55 years of age and having a D-PFC of less than 17.00 mm. There was not a significant correlation between percentage output to reach MT and percentage clinical antidepressant response (P=0.1693).

This is the first study addressing the complicated area of whether and to what degree the distance from coil to motor or prefrontal cortex interacts with motor threshold. This study also examines the relationships between age, prefrontal cortex distance, and clinical antidepressant response. As the first study exploring these questions, it suffers from several methodological shortcomings outlined below. Nevertheless, there were several important findings.
The estimated distance from TMS coil to prefrontal cortex (D-PFC) was greater than the distance from coil to motor cortex (D-MC). The motor threshold (MT) significantly correlated with D-MC, whereas it did not correlate with D-PFC. Both motor and prefrontal cortex distances increased with age in this depressed cohort, with D-PFC showing a trend to increase at a faster rate than D-MC. Finally, there was not a linear correlation between D-PFC and clinical antidepressant response. All subjects who responded, however, were below a critical threshold of age and prefrontal cortex distance.

There are several limitations of this study that bear on proper interpretation. The most important of these was difficulty in determining the TMS sites on the MRI scans. One method used was to place a fiducial at the site of the rTMS. When designing the study, we reasoned that this fiducial would enable a relatively precise localization of the actual site of stimulation. Using this method alone, however, would have limited an already small sample size, because only about three-quarters of the subjects studied and scanned (21 of 29) had fiducials that could be seen on the MRI scan. Complicating things further, during scanning several of the fiducials appeared to have moved off the scalp, resulting in a placement that no longer represented the true site of rTMS. Because of these confounding factors, we felt that using the actual fiducial to measure the distance to nearest cortex in this study was not an accurate method. While trying to develop a more accurate method, we noticed that the prefrontal fiducial was often directly in line with a vertical (-90 degree) line from the lateral center point of the left eye. We found that the fiducial was commonly 20 slices anterior from the corpus callosum and therefore adopted this as the "virtual" location of the rTMS coil from which to start measuring the eight slices. By performing the measurement 4 mm in front of and behind this chosen intersection, we averaged the scalp-to-cortex distance over the likely prefrontal spot. Thus, even if we did not measure directly under the actual TMS spot, we were able to obtain a distance measurement that likely represented the average distance of skull to cortex in this region. Further, this averaging system of a virtual spot enabled us to utilize all of the scans available for analysis, and the technique often correlated well with the position of the fiducial. The limitation was that we could not be sure how close to the actual coil stimulation the measurements were performed. Despite these difficulties, the standardized method of measuring eight slices and averaging the result does appear to give a rough approximation of the distance to the nearest cortex in the area stimulated. This corresponds to either the left medial frontal gyrus or the left superior frontal gyrus (Talairach coordinates: x plane from -25 to -40, y from 50 to 58, z from 20 to 40). Future studies with more precise fiducial placement, or even MRI phase maps of the actual magnetic field3 in all subjects, would improve on the current study. Similarly, there was no fiducial over the motor cortex APB area where MT was actually determined. Thus, we were forced to empirically determine this spot as well by measuring backward from the prefrontal site. We again measured the motor cortex distance over a relatively large area (8 mm) in order to compensate for the imprecision of our location.
Because we measured motor cortex distance on 8 slices, the number used as a dependent variable is more likely a rough measure of motor cortex atrophy rather than the exact distance from coil to motor cortex. Despite all these factors, the robust correlation of motor threshold with distance to motor cortex is surprising, especially given these limitations in spatial location. Future studies with fiducials directly over the true site of optimum APB stimulation are needed. The motor threshold was determined in this study by using visible movement, which is not standard practice but which our group has shown on a different machine to correspond to MT determined by motor evoked potentials.13 Future studies exploring these issues might use electrophysiologically determined MT. Similarly, our choice of the percentage change in Ham-D as the dependent variable for clinical response is relatively imprecise, but a standard practice. Future studies using other behavioral, neuroendocrine, or even brain metabolism measures might better address the correlation between distance and clinical response. Finally, the small sample size could have reduced the power of the study such that the relationship between distance to the cortex and percentage response was not statistically significant. Our sample was especially small at the extremes of age (5 subjects >55 years old) and distance to the prefrontal cortex (3 subjects >17 mm). On a more theoretical note, the magnetic field declines logarithmically with distance, and we tested for correlations that assumed linear relationships. Future studies with larger samples might explore whether nonlinear relationships exist between the distance variables and the other factors examined in this study. Despite these important limitations, these data provide intriguing results that will require further investigation combining rTMS and imaging.

Motor Cortex Distance and Motor Threshold: The distance to motor cortex correlated strongly with the motor threshold, while the prefrontal distance did not (although there was a nonsignificant trend). This would imply that the distance from the coil to the nearest cortex is critical in determining the amount of energy required to depolarize the pyramidal tract neurons in the motor cortex. Another interpretation, however, is that brain atrophy by neuronal degeneration of cortical neurons may disproportionately alter the excitatory and inhibitory balance, requiring a higher MT. In this interpretation, the increased distance is not the most important variable and instead reflects another process that also alters the motor threshold. The current study cannot distinguish which of these, or even other, explanations lies behind the observed relationship between motor cortex distance and motor threshold. Regardless of mechanism, a greater distance to cortex would indicate a higher motor threshold. Further studies will be needed to address this question of mechanism. In this study, the skull-to-motor-cortex distance alone accounts for 49.7% (r-correlation value) of the variance in MT across individuals. Presumably other factors such as gyral orientation and intrinsic neuronal excitation (particularly inhibition) account for the rest of the variance across individuals.
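As an aside for readers unfamiliar with the statistics used in this paper, the sketch below reproduces the style of analysis reported above: a Pearson correlation with a Fisher r-to-z significance test, plus the r-squared "variance explained" conversion. The distance and threshold numbers are hypothetical, invented for illustration; they are not the study's data.

```python
# Minimal sketch of the correlation analysis described above, using
# hypothetical scalp-to-cortex distances (mm) and motor thresholds
# (% of maximum stimulator output). Not the study's data.
import math

d_mc = [10.1, 11.4, 12.0, 12.8, 13.5, 14.2, 15.0, 16.3]   # hypothetical D-MC values
mt   = [52.0, 55.0, 54.0, 58.0, 61.0, 60.0, 66.0, 70.0]   # hypothetical MT values

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def fisher_z_p(r, n):
    """Two-sided P-value for H0: rho = 0, via Fisher's r-to-z transformation."""
    z = math.atanh(r) * math.sqrt(n - 3)   # z-statistic: atanh(r) has SE 1/sqrt(n-3)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability

r = pearson_r(d_mc, mt)
print(f"r = {r:.3f}, r^2 (variance explained) = {r * r:.3f}")
print(f"Fisher r-to-z P = {fisher_z_p(r, len(d_mc)):.4f}")
```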
This finding of the importance of distance in determining MT, if confirmed, would imply that the distance is an important variable that might be measured and used as a covariate in studies where the motor threshold is used to examine pharmacology or other questions.22

Lack of Direct Correlation Between Prefrontal Distance and Antidepressant Response: The antidepressant mechanisms of action of rTMS are unknown. Positive clinical effects have been found over both left and right prefrontal cortex, at intensities from 80% to 110% MT, and at frequencies from 0.5 to 20 Hz. Some have suggested that the studies to date show a trend toward larger antidepressant effects with greater intensity, although this has not been directly examined.1 An assumption is made, but not formally tested, that stimulation with an intensity sufficient to cause neuronal depolarization is necessary and that low-intensity stimulation would not cause cortical cell depolarization with trans-synaptic effects. In light of these working assumptions in this new field of TMS and depression, we hypothesized that increasing skull-to-cortex distance might correlate with clinical antidepressant response. Although we failed to find a direct linear relationship, the many limitations of the current study preclude any broad interpretation of this negative result. Future clinical trials in conjunction with imaging are needed to directly test the assumptions above about the antidepressant effects of TMS, intensity, and distance to prefrontal cortex.

Increase in Distance With Age: Although the motor and prefrontal measurements both increase with age and do correspond to each other, the distance to prefrontal cortex appears to increase faster with age (though not significantly) than the distance to motor cortex. This finding of greater prefrontal atrophy with age in a depressed cohort is similar to findings in other studies that have examined depressed subjects compared with age-matched healthy control subjects.14,15 Although in this study MT also increased with age (though not significantly), it may be the case that there is a greater D-PFC in older depressed subjects that is not accounted for by the more modest increase in MT with age. Our finding that no individuals older than 55 years or with a prefrontal distance greater than 17 mm responded to rTMS is consistent with this idea, although larger studies in elderly depressed subjects are needed to directly test it. Some have suggested that older depressed patients do not respond as well to medication therapy. An age-related variable such as prefrontal atrophy may therefore itself confer a resistance to antidepressant response independent of D-PFC. Further study is indicated to understand the relationships between distance, age, and antidepressant action.

We have found that the motor threshold measurement used in TMS studies is highly dependent on the distance from cortex to skull under the TMS coil. Further, this distance increases with age, and in a depressed cohort there is prefrontal cortical atrophy that may outpace the motor cortex declines. These distances do not directly correlate with antidepressant clinical response, although TMS did not work in older subjects with large prefrontal distances. Further work combining TMS with imaging will likely expand knowledge of TMS brain effects.

The authors thank Drs. James C. Ballenger, Jeremy Young, George Arana, Eric Wassermann, Sarah Lisanby, and Ulf Ziemann for helpful reviews and comments. Andrew M. Speer assisted in the scanning and rTMS for the study. The National Alliance for Research on Schizophrenia and Depression and the Ted and Vada Stanley Foundation provided grants to Dr. George. An abstract of this work was presented at the New Research Session of the 152nd annual meeting of the American Psychiatric Association, Washington, DC, May 1999.23
fwe2-CC-MAIN-2013-20-29401000
His parents were Tescelin, lord of Fontaines, and Aleth of Montbard, both belonging to the highest nobility of Burgundy. Bernard, the third of a family of seven children, six of whom were sons, was educated with particular care, because, while he was yet unborn, a devout man had foretold his great destiny. At the age of nine years, Bernard was sent to a much renowned school at Chatillon-sur-Seine, kept by the secular canons of Saint-Vorles. He had a great taste for literature and devoted himself for some time to poetry. His success in his studies won the admiration of his masters, and his growth in virtue was no less marked. Bernard's great desire was to excel in literature in order to take up the study of Sacred Scripture, which later on became, as it were, his own tongue. "Piety was his all," says Bossuet. He had a special devotion to the Blessed Virgin, and there is no one who speaks more sublimely of the Queen of Heaven. Bernard was scarcely nineteen years of age when his mother died. During his youth, he did not escape trying temptations, but his virtue triumphed over them, in many instances in a heroic manner, and from this time he thought of retiring from the world and living a life of solitude and prayer. St. Robert, Abbot of Molesmes, had founded, in 1098, the monastery of Cîteaux, about four leagues from Dijon, with the purpose of restoring the Rule of St. Benedict in all its rigour. Returning to Molesmes, he left the government of the new abbey to St. Alberic, who died in the year 1109. St. Stephen had just succeeded him (1113) as third Abbot of Cîteaux when Bernard, with thirty young noblemen of Burgundy, sought admission into the order. Three years later, St. Stephen sent the young Bernard, at the head of a band of monks, the third to leave Cîteaux, to found a new house at Vallée d'Absinthe, or Valley of Bitterness, in the Diocese of Langres. This Bernard named Claire Vallée, or Clairvaux, on the 25th of June, 1115, and the names of Bernard and Clairvaux thence became inseparable. During the absence of the Bishop of Langres, Bernard was blessed as abbot by William of Champeaux, Bishop of Châlons-sur-Marne, who saw in him the predestined man, servum Dei. From that moment a strong friendship sprang up between the abbot and the bishop, who was professor of theology at Notre Dame of Paris, and the founder of the cloister of St. Victor. The beginnings of Clairvaux were trying and painful. The regime was so austere that Bernard's health was impaired by it, and only the influence of his friend William of Champeaux, and the authority of the General Chapter, could make him mitigate his austerities. The monastery, however, made rapid progress. Disciples flocked to it in great numbers, desirous of putting themselves under the direction of Bernard. His father, the aged Tescelin, and all his brothers entered Clairvaux as religious, leaving only Humbeline, his sister, in the world; and she, with the consent of her husband, soon took the veil in the Benedictine Convent of Jully. Clairvaux becoming too small for the religious who crowded there, it was necessary to send out bands to found new houses. In 1118, the Monastery of the Three Fountains was founded in the Diocese of Châlons; in 1119, that of Fontenay in the Diocese of Autun (now Dijon); and in 1121, that of Foigny, near Vervins, in the Diocese of Laon (now Soissons). Notwithstanding this prosperity, the Abbot of Clairvaux had his trials.
During an absence from Clairvaux, the Grand Prior of Cluny, Bernard of Uxells, sent by the Prince of Priors, to use the expression of Bernard, went to Clairvaux and enticed away the abbot's cousin, Robert of Châtillon. This was the occasion of the longest and most touching of Bernard's letters. In the year 1119, Bernard was present at the first general chapter of the order convoked by Stephen of Cîteaux. Though not yet thirty years old, Bernard was listened to with the greatest attention and respect, especially when he developed his thoughts upon the revival of the primitive spirit of regularity and fervour in all the monastic orders. It was this general chapter that gave definitive form to the constitutions of the order and the regulations of the "Charter of Charity", which Pope Callixtus II confirmed on 23 December, 1119. In 1120 Bernard composed his first work, "De Gradibus Superbiae et Humilitatis", and his homilies, which he entitled "De Laudibus Mariae". The monks of Cluny had not seen, with satisfaction, those of Cîteaux take the first place among the religious orders for regularity and fervour. For this reason there was a temptation on the part of the "Black Monks" to make it appear that the rules of the new order were impracticable. At the solicitation of William of St. Thierry, Bernard defended himself by publishing his "Apology", which is divided into two parts. In the first part he proves himself innocent of the invectives against Cluny which had been attributed to him, and in the second he gives his reasons for his attack upon averred abuses. He protests his profound esteem for the Benedictines of Cluny, whom he declares he loves equally as well as the other religious orders. Peter the Venerable, Abbot of Cluny, answered the Abbot of Clairvaux without wounding charity in the least, and assured him of his great admiration and sincere friendship. In the meantime Cluny established a reform, and Suger himself, the minister of Louis le Gros and Abbot of St. Denis, was converted by the "Apology" of Bernard. He hastened to terminate his worldly life and restore discipline in his monastery. The zeal of Bernard did not stop here; it extended to the bishops, the clergy, and the faithful, and remarkable conversions of persons engaged in worldly pursuits were among the fruits of his labours. Bernard's letter to the Archbishop of Sens is a real treatise, "De Officiis Episcoporum". About the same time he wrote his work on "Grace and Free Will". In the year 1128, Bernard assisted at the Council of Troyes, which had been convoked by Pope Honorius II and was presided over by Cardinal Matthew, Bishop of Albano. The purpose of this council was to settle certain disputes of the bishops of Paris and regulate other matters of the Church of France. The bishops made Bernard secretary of the council and charged him with drawing up the synodal statutes. After the council, the Bishop of Verdun was deposed. There then arose against Bernard unjust reproaches, and he was denounced even in Rome as a monk who meddled with matters that did not concern him. Cardinal Harmeric, on behalf of the pope, wrote Bernard a sharp letter of remonstrance. "It is not fitting," he said, "that noisy and troublesome frogs should come out of their marshes to trouble the Holy See and the cardinals". Bernard answered the letter by saying that, if he had assisted at the council, it was because he had been dragged to it, as it were, by force.
"Now illustrious Harmeric", he added, "if you so wished, who would have been more capable of freeing me from the necessity of assisting at the council than yourself? Forbid those noisy troublesome frogs to come out of their holes, to leave their marshes . . . Then your friend will no longer be exposed to the accusations of pride and presumption". This letter made a great impression upon the cardinal, and justified its author both in his eyes and before the Holy See. It was at this council that Bernard traced the outlines of the Rule of the Knights Templars who soon became the ideal of the French nobility. Bernard praises it in his "De Laudibus Novae Militiae". The influence of the Abbot of Clairvaux was soon felt in provincial affairs. He defended the rights of the Church against the encroachments of kings and princes, and recalled to their duty Henry Archbishop of Sens, and Stephen de Senlis, Bishop of Paris. On the death of Honorius II, which occurred on the 14th of February, 1130, a schism broke out in the Church by the election of two popes, Innocent II and Anacletus II. Innocent II having been banished from Rome by Anacletus took refuge in France. King Louis le Gros convened a national council of the French bishops at Etampes, and Bernard, summoned thither by consent of the bishops, was chosen to judge between the rival popes. He decided in favour of Innocent II, caused him to be recognized by all the great Catholic powers, went with him into Italy, calmed the troubles that agitated the country, reconciled Pisa with Genoa, and Milan with the pope and Lothaire. According to the desire of the latter, the pope went to Liège to consult with the emperor upon the best means to be taken for his return to Rome, for it was there that Lothaire was to receive the imperial crown from the hands of the pope. From Liège, the pope returned to France, paid a visit to the Abbey of St. Denis, and then to Clairvaux where his reception was of a simple and purely religious character. The whole pontifical court was touched by the saintly demeanor of this band of monks. In the refectory only a few common fishes were found for the pope, and instead of wine, the juice of herbs was served for drink, says an annalist of Cîteaux. It was not a table feast that was served to the pope and his followers, but a feast of virtues. The same year Bernard was again at the Council of Reims at the side of Innocent II, whose oracle he was; and then in Aquitaine where he succeeded for the time in detaching William, Count of Poitiers, from the cause of Anacletus. In 1132, Bernard accompanied Innocent II into Italy, and at Cluny the pope abolished the dues which Clairvaux used to pay to this celebrated abbey--an action which gave rise to a quarrel between the "White Monks" and the "Black Monks" which lasted twenty years. In the month of May, the pope supported by the army of Lothaire, entered Rome, but Lothaire, feeling himself too weak to resist the partisans of Anacletus, retired beyond the Alps, and Innocent sought refuge in Pisa in September, 1133. In the meantime the abbot had returned to France in June, and was continuing the work of peacemaking which he had commenced in 1130. Towards the end of 1134, he made a second journey into Aquitaine, where William X had relapsed into schism. This would have died out of itself if William could have been detached from the cause of Gerard, who had usurped the See of Bordeaux and retained that of Angoulême. 
Bernard invited William to the Mass which he celebrated in the Church of La Couldre. At the moment of the Communion, placing the Sacred Host upon the paten, he went to the door of the church where William was, and pointing to the Host, he adjured the Duke not to despise God as he did His servants. William yielded, and the schism ended. Bernard went again to Italy, where Roger of Sicily was endeavouring to withdraw the Pisans from their allegiance to Innocent. He recalled the city of Milan, which had been deceived and misled by the ambitious prelate Anselm, Archbishop of Milan, to obedience to the pope, refused the Archbishopric of Milan, and returned finally to Clairvaux. Believing himself at last secure in his cloister, Bernard devoted himself with renewed vigour to the composition of those pious and learned works which have won for him the title of "Doctor of the Church". He wrote at this time his sermons on the "Canticle of Canticles". In 1137 he was again forced to leave his solitude by order of the pope to put an end to the quarrel between Lothaire and Roger of Sicily. At the conference held at Palermo, Bernard succeeded in convincing Roger of the rights of Innocent II and in silencing Peter of Pisa, who sustained Anacletus. The latter died of grief and disappointment in 1138, and with him the schism. Returning to Clairvaux, Bernard occupied himself in sending bands of monks from his too-crowded monastery into Germany, Sweden, England, Ireland, Portugal, Switzerland, and Italy. Some of these, at the command of Innocent II, took possession of Three Fountains Abbey, near the Salvian Waters in Rome, from which Pope Eugenius III was chosen. Bernard resumed his commentary on the "Canticle of Canticles" and assisted, in 1139, at the Second General Lateran Council, the tenth oecumenical council, in which the surviving adherents of the schism were definitively condemned. About the same time, Bernard was visited at Clairvaux by St. Malachy, metropolitan of the Church in Ireland, and a very close friendship was formed between them. St. Malachy would gladly have taken the Cistercian habit, but the sovereign pontiff would not give his permission. He died, however, at Clairvaux in 1148. In the year 1140, we find Bernard engaged in other matters which disturbed the peace of the Church. Towards the close of the eleventh century, the schools of philosophy and theology, dominated by the passion for discussion and a spirit of independence which had introduced itself into political and religious questions, became a veritable public arena, with no other motive than that of ambition. This exaltation of human reason and rationalism found an ardent and powerful adherent in Abelard, the most eloquent and learned man of the age after Bernard. "The history of the calamities and the refutation of his doctrine by St. Bernard", says Ratisbonne, "form the greatest episode of the twelfth century". Abelard's treatise on the Trinity had been condemned in 1121, and he himself had thrown his book into the fire. But in 1139 he advocated new errors. Bernard, informed of this by William of St. Thierry, wrote to Abelard, who answered in an insulting manner. Bernard then denounced him to the pope, who caused a general council to be held at Sens. Abelard asked for a public discussion with Bernard; the latter showed his opponent's errors with such clearness and force of logic that he was unable to make any reply, and was obliged, after being condemned, to retire.
The pope confirmed the judgment of the council, Abelard submitted without resistance, and retired to Cluny to live under Peter the Venerable, where he died two years later. Innocent II died in 1143. His two successors, Celestine II and Lucius II, reigned only a short time, and then Bernard saw one of his disciples, Bernard of Pisa, Abbot of Three Fountains, and known thereafter as Eugenius III, raised to the Chair of St. Peter. Bernard sent him, at his own request, various instructions which compose the "Book of Consideration", the predominating idea of which is that the reformation of the Church ought to commence with the sanctity of the head. Temporal matters are merely accessories; the principal are piety, meditation, or consideration, which ought to precede action. The book contains a most beautiful page on the papacy, and has always been greatly esteemed by the sovereign pontiffs, many of whom used it for their ordinary reading. Alarming news came at this time from the East. Edessa had fallen into the hands of the Turks, and Jerusalem and Antioch were threatened with similar disaster. Deputations of the bishops of Armenia solicited aid from the pope, and the King of France also sent ambassadors. The pope commissioned Bernard to preach a new Crusade and granted the same indulgences for it which Urban II had accorded to the first. A parliament was convoked at Vézelay in Burgundy in 1146, and Bernard preached before the assembly. The King, Louis le Jeune, Queen Eleanor, and the princes and lords present prostrated themselves at the feet of the Abbot of Clairvaux to receive the cross. The saint was obliged to use portions of his habit to make crosses to satisfy the zeal and ardour of the multitude who wished to take part in the Crusade. Bernard passed into Germany, and the miracles which multiplied almost at his every step undoubtedly contributed to the success of his mission. The Emperor Conrad and his nephew Frederick Barbarossa received the pilgrims' cross from the hand of Bernard, and Pope Eugenius, to encourage the enterprise, came in person to France. It was on the occasion of this visit, in 1147, that a council was held at Paris, at which the errors of Gilbert de la Porée, Bishop of Poitiers, were examined. He advanced, among other absurdities, that the essence and the attributes of God are not God, that the properties of the Persons of the Trinity are not the persons themselves, and, in fine, that the Divine Nature did not become incarnate. The discussion was warm on both sides. The decision was left for the council which was held at Reims the following year (1148), at which Eon de l'Etoile was condemned. Bernard was chosen by the council to draw up a profession of faith directly opposed to that of Gilbert, who concluded by stating to the Fathers: "If you believe and assert differently than I have done, I am willing to believe and speak as you do". The consequence of this declaration was that the pope condemned the assertions of Gilbert without denouncing him personally. After the council the pope paid a visit to Clairvaux, where he held a general chapter of the order and was able to realize the prosperity of which Bernard was the soul. The last years of Bernard's life were saddened by the failure of the Crusade he had preached, the entire responsibility for which was thrown upon him. He had accredited the enterprise by miracles, but he had not guaranteed its success against the misconduct and perfidy of those who participated in it.
Lack of discipline and the over-confidence of the German troops, the intrigues of the Prince of Antioch and Queen Eleanor, and finally the avarice and evident treason of the Christian nobles of Syria, who prevented the capture of Damascus, appear to have been the cause of disaster. Bernard considered it his duty to send an apology to the pope and it is inserted in the second part of his "Book of Consideration". There he explains how, with the crusaders as with the Hebrew people, in whose favour the Lord had multiplied his prodigies, their sins were the cause of their misfortune and miseries. The death of his contemporaries served as a warning to Bernard of his own approaching end. The first to die was Suger (1152), of whom the Abbot wrote to Eugenius III: "If there is any precious vase adorning the palace of the King of Kings it is the soul of the venerable Suger". Thibaud, Count of Champagne, Conrad, Emperor of Germany, and his son Henry died the same year. From the beginning of the year 1153 Bernard felt his death approaching. The passing of Pope Eugenius had struck the fatal blow by taking from him one whom he considered his greatest friend and consoler. Bernard died in the sixty-third year of his age, after forty years spent in the cloister. He founded one hundred and sixty-three monasteries in different parts of Europe; at his death they numbered three hundred and forty-three. He was the first Cistercian monk placed on the calendar of saints and was canonized by Alexander III, 18 January 1174. Pope Pius VIII bestowed on him the title of Doctor of the Church. The Cistercians honour him as only the founders of orders are honoured, because of the wonderful and widespread activity which he gave to the Order of Cîteaux. St. Bernard left numerous works and sermons. Many other letters, treatises, etc., falsely attributed to him are found among his works, such as the "l'Echelle du Cloître", which is the work of Guigues, Prior of La Grande Chartreuse, les Méditations, l'Edification de la Maison intérieure, etc.
Dangerous levels of climate change could be reached in just over 20 years if nothing is done to stop global warming, a WWF-UK study claims. At current rates, the Earth will be 2C above pre-industrial levels some time between 2026 and 2060, says a paper by Dr Mark New of Oxford University. Temperatures in the Arctic could rise by three times this amount, he says. It would lead to a loss of summer sea ice and tundra vegetation, with polar bears and other animals dying out. It would also mean a fundamental change in the ways Inuit and other Arctic residents live. Dr New said: "A very robust result from global climate models is that warming due to greenhouse gases will reduce the amount of snow and ice cover in the Arctic, which will in turn produce an additional warming as more solar radiation is absorbed by the ground and the ocean." Ice and snow reflect more solar radiation back to space than unfrozen surfaces. According to the WWF, the perennial ice, or summer sea ice, is currently melting at a rate of 9.6% per decade and will disappear completely by the end of the century if present trends continue. Boreal forests would spread north and overwhelm up to 60% of dwarf shrub tundra, a critical habitat and vital breeding ground for many birds. "If we don't act immediately, the Arctic will soon become unrecognisable," said Dr Catarina Cardoso, head of climate change at WWF-UK. "Polar bears will be consigned to history, something that our grandchildren can only read about in books." Dr New's paper - Arctic Climate Change with a 2C Global Warming - is one of four papers contributing to a report by WWF. The papers will be presented at the Avoiding Dangerous Climate Change conference in Exeter between 1 and 3 February. The conference has been organised by the UK's Met Office.
The research team includes first author Bradley Bernstein, recipient of a Howard Hughes Medical Institute (HHMI) physician postdoctoral fellowship who works in the Harvard University laboratory of HHMI investigator Stuart L. Schreiber. Other co-authors are from the Broad Institute of MIT and Harvard, and Affymetrix. Their findings are published in the January 28, 2005 issue of Cell. "Now that the human genome has been sequenced, it is vital to learn how the genome is translated to make living cells and organisms, and how we can use that information to improve human health," said Bernstein, who is an instructor of pathology at Brigham & Women's Hospital and Harvard Medical School. "Every one of our cells has the same genome, yet is completely different. Muscle cells are different from neurons. They are different because different genes are on." Many scientists believe changes in the regulatory scaffolding surrounding the genome may be as important as changes in the genome itself in causing diseases such as cancer. This regulatory structure, called chromatin, is a key regulator of gene expression in healthy and diseased cells, Bernstein said. Chromatin is composed of DNA spooled around bundles of histone proteins, and resembles a chain of beads which is then compressed into a working chromosome. Chemical tags placed on the histones alter the way chromatin is organized, thus allowing the right combination of genes to be turned on. In their study, the researchers analyzed the chromatin structure of the two shortest human chromosomes, numbers 21 and 22, containing about two percent of the human genome. They also sampled additional regions in both the human and mouse genomes.
Cure for eczema comes closer to reality
Published On: Thu, Nov 24th, 2011 | Skin care | By BioNews
An effective cure for inflammatory skin conditions like eczema is a step closer to reality, researchers say. Scientists have found that a strain of yeast implicated in skin conditions like eczema can be killed by certain peptides and could provide a new treatment for these debilitating skin conditions. Twenty percent of children in the UK suffer from atopic eczema, and whilst this usually clears up in adolescence, 7 percent of adults will continue to suffer throughout their lifetime. Furthermore, this type of eczema, characterized by dry, itchy, flaking skin, is increasing in prevalence. Whilst the cause of eczema remains unknown, one known trigger factor is the yeast Malassezia sympodialis. This strain of yeast is one of the most common skin yeasts in both healthy individuals and those suffering from eczema. The skin barrier is more fragile and often broken in those suffering from such skin conditions, and this allows the yeast to cause infection, which then further exacerbates the condition. Scientists at Karolinska Institute in Sweden looked for a way to kill Malassezia sympodialis without harming healthy human cells. The researchers looked at the effect on the yeast of 21 peptides that had either cell-penetrating or antimicrobial properties. Cell-penetrating peptides are often investigated as drug delivery vectors and are able to cross the cell membrane, although the exact mechanism for this is unknown. Antimicrobial peptides, on the other hand, are natural antibiotics and kill many different types of microbe including some bacteria, fungi and viruses. Tina Holm and her colleagues added these different peptide types to separate yeast colonies and assessed the toxicity of each peptide type to the yeast. They found that six of the 21 peptides they tested successfully killed the yeast without damaging the membrane of keratinocytes, human skin cells. “Many questions remain to be solved before these peptides can be used in humans,” Holm said. “However, the appealing combination of being toxic to the yeast at low concentrations whilst sparing human cells makes them very promising as antifungal agents.” “We hope that these peptides in the future can be used to ease the symptoms of patients suffering from atopic eczema and significantly increase their quality of life,” she added. The study was recently published in the Society for Applied Microbiology’s journal, Letters in Applied Microbiology.
Why champagne is so bubbly
Published On: Sat, Dec 31st, 2011 | Food & Nutrition | By BioNews
The unique bubbly fizz and taste that comes on popping the champagne cork is because of trapped carbon dioxide in the drink, a new study has suggested. A New Year’s themed video produced by the American Chemical Society explained Henry’s Law, a law of physics stating that the pressure of a gas above a solution is proportional to the concentration of the gas within the solution. For champagne, carbon dioxide is the gas that forms those delightful bubbles. And, in an unopened bottle of champagne, there is equilibrium between the CO2 inside the liquid and the gas in the spaces of the cork, Discovery News reported. Popping the cork disturbs this equilibrium, which is only regained as the CO2 bubbles out. To raise a perfect toast, make sure to pour at an angle, which preserves up to twice as much CO2 compared to pouring into the middle of the glass, according to a 2010 paper in the Journal of Agricultural and Food Chemistry. The video demonstrated that “as the bubbles ascend the length of the glass in tiny trains, they drag along molecules of flavor and aroma which explode out of the surface, tickling the nose and stimulating the senses.” The champagne-making process includes two fermentations that must be done absolutely accurately to ensure the correct concentration of bubbles in the final product. During the first fermentation, just as for any other kind of wine, yeast eats up sugar molecules in grape juice and releases CO2 and ethanol. The second fermentation traps CO2 inside the liquid. This procedure is not easy: during the 1600s, when Dom Perignon is rumoured to have discovered champagne (or at least helped perfect it), bottles sometimes ended up with no bubbles, while on other occasions CO2 levels were so high that they exploded. (ANI)
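Henry's Law can be made concrete with a few lines of arithmetic. Below is a minimal sketch in Python; the Henry's-law constant and the pressures are illustrative assumptions, not measured values for any particular wine.

# Henry's Law: c = kH * p, where c is the dissolved CO2 concentration,
# p is the partial pressure of CO2 above the liquid, and kH is a
# temperature-dependent constant for the gas/liquid pair.
K_H = 1.2  # grams of CO2 per litre per bar -- illustrative value only

def dissolved_co2(pressure_bar):
    """Equilibrium CO2 concentration (g/L) at a given CO2 partial pressure."""
    return K_H * pressure_bar

# A sealed bottle might hold roughly 6 bar of CO2; an open glass sits under
# a tiny atmospheric CO2 partial pressure (about 0.0004 bar).
print(dissolved_co2(6.0))     # equilibrium inside the sealed bottle
print(dissolved_co2(0.0004))  # equilibrium after opening

The gap between the two numbers is the CO2 that must leave the wine as bubbles, which is why the fizz continues long after the cork is popped.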
Just as the Obama administration ditches NASA plans to return to the moon, a group in Japan is vowing to send humanoid robots there by 2015. Call it a giant leap for droidkind. The Space Oriented Higashiosaka Leading Association (SOHLA), a satellite-manufacturing consortium in the Osaka area, has vowed to put bipedal humanoid bots on the moon in the next five years, according to a Jiji Press report. SOHLA is now developing a prototype astro-bot called "Maido-kun" that it hopes will follow in the steps of Neil Armstrong and Buzz Aldrin (minus the "Dancing with the Stars" part). The robot will be smaller than a person and, if it makes it onto the moon, may do things like record astronomical observations and take geological surveys (and maybe do a bit of robot moonwalking). Development costs for Maido-kun are estimated at $10.6 million, but the idea is being floated in part as an economic stimulus project for small and midsize tech firms in the Osaka region. SOHLA has already worked with Japan's New Energy and Industrial Technology Development Organization (NEDO) and the Japan Aerospace Exploration Agency (JAXA). In 2009, it launched the Maido 1 weather observation microsatellite aboard a JAXA HII-A rocket. SOHLA wants its robot to hitch a ride on a JAXA rocket bound for the moon in five years. "Humanoid robots are glamorous, and they tend to get people fired up," SOHLA board member Noriyuki Yoshida was quoted as saying by Pink Tentacle. "We hope to develop a charming robot to fulfill the dream of going to space."
Slavery is a system under which people are treated as property and are forced to work. Slaves can be held against their will from the time of their capture, purchase or birth, and deprived of the right to leave, to refuse to work, or to demand compensation. Conditions that can be considered slavery include debt bondage, indentured servitude, serfdom, domestic servants kept in captivity, adoption in which children are effectively forced to work as slaves, child soldiers, and forced marriage. Slavery predates written records and has existed in many cultures. The number of slaves today is higher than at any point in history, estimated at 12 million to 27 million, though this is probably the smallest proportion of the world's population in history. Most are debt slaves, largely in South Asia, who are under debt bondage incurred by lenders, sometimes even for generations. Human trafficking is used primarily to force women and children into sex industries.
Facing criticism, biofuels industry forms new lobby group to influence lawmakers
July 25, 2008
The group, known as the Alliance for Abundant Food and Energy, was created by Archer Daniels Midland Co, DuPont Co, Deere & Co, Monsanto Co and the Renewable Fuels Association. Its initial budget is "in the multimillions", according to the group's executive director Mark Kornblau. "There are critics who are trying to create an either-or decision between food and fuel," Kornblau was quoted as saying by Reuters. "We believe this is a false choice. Today, more than 90 percent of crops in the United States and around the world are used exclusively for food." The group will promote genetically modified crops to improve crop yields as a solution to meeting global food needs. It does not aim to curtail biofuel production and will lobby Congress to keep subsidies for ethanol and biodiesel production in place. The alliance says that the current run-up in food prices is linked to high energy prices, not production of biofuels from feedstocks such as corn and soy. The U.S. Agriculture Department estimates that one-third of the U.S. corn crop this year will be used to make ethanol. The UN's Food and Agriculture Organization says that biofuel production has consumed roughly 100 million tons of grains. Food prices have doubled in the past three years according to the World Bank. The International Food Policy Research Institute estimates that biofuels account for more than 30 percent of the increase. Environmentalists say ethanol and biodiesel subsidies in Europe and the United States have caused market distortions that have displaced biofuel feedstock production into rainforests, tropical savannas, and other biologically-rich ecosystems.
Biofuels can reduce emissions, but not when grown in place of rainforests (7/22/2008) Biofuels meant to help alleviate greenhouse gas emissions may in fact be contributing to climate change when grown on converted tropical forest lands, warns a comprehensive study published earlier this month in the journal Environmental Research Letters. Analyzing the carbon debt for biofuel crops grown in ecosystems around the world, Holly Gibbs and colleagues report that "while expansion of biofuels into productive tropical ecosystems will always lead to net carbon emissions for decades to centuries... [expansion] into degraded or already cultivated land will provide almost immediate carbon savings." The results suggest that under the right conditions, biofuels could be part of the effort to reduce humanity's carbon footprint.
Beyond high food prices, little to show for $11B/yr in biofuel support, says OECD report (7/17/2008) Government support of biofuel production in rich countries is squandering vast amounts of money while exacerbating the global food crisis and failing to meaningfully curb greenhouse gas emissions and improve energy security, alleges a new report from the OECD, the club of industrialized nations.
Palm oil industry moves into the Amazon rainforest (7/9/2008) Malaysia's Land Development Authority FELDA has announced plans to immediately establish 100,000 hectares (250,000 acres) of oil palm plantations in the Brazilian Amazon. The agency will partner with Braspalma, a local company, to form Felda Global Ventures Brazil Sdn Bhd. FELDA will have a 70 percent stake in the venture. The announcement had been expected. Last month Najib said Malaysia would seek to expand its booming palm oil industry overseas. The country is facing land constraints at home.
Britain urges 'cautious approach' on biofuels (7/7/2008) Britain and the E.U. should exercise caution in pushing for wider use of biofuels, warns a new study commissioned by the U.K. government.
Biofuel production on abandoned lands could meet 8% of global energy needs (6/23/2008) Using abandoned agricultural lands for biofuel production could help meet up to 8 percent of global energy needs without compromising food supplies or diminishing biologically-rich habitats, reports a new study published in the journal Environmental Science and Technology.
U.S. may allow corn farming on conservation land (6/23/2008) The U.S. Department of Agriculture may allow farmers to plant corn on millions of acres of conservation land to bolster the food supply in response to flooding in the Midwest and record high prices spurred by demand for domestic ethanol production, according to a report in the New York Times.
Global Commodities Boom Fuels New Assault on Amazon (6/20/2008) With soaring prices for agricultural goods and new demand for biofuels, the clearing of the world's largest rain forest has accelerated dramatically. Unless forceful measures are taken, half of the Brazilian Amazon could be cut, burned or dried out within 20 years.
Nestle Chairman: Biofuels are "ethically indefensible" (6/14/2008) The emergence and expansion of biofuels produced from food crops has exacerbated the world's agriculture and water crisis and is a bigger short-term threat than global warming, argued Peter Brabeck-Letmathe in an editorial published Thursday in the Wall Street Journal Asia.
Biofuels expansion in Africa may impact rainforests, wetlands (5/28/2008) Biofuel feedstock expansion in Africa will likely come at the expense of ecologically-sensitive lands, reports a new analysis presented by Wetlands International at the Convention on Biological Diversity in Bonn.
Half of oil palm expansion in Malaysia, Indonesia occurs at expense of forests (5/20/2008) More than half of the oil palm expansion between 1990 and 2005 in Malaysia and Indonesia occurred at the expense of forests, reports a new analysis published in the journal Conservation Letters. Analyzing data from the United Nations Food and Agriculture Organization, Lian Pin Koh and David S. Wilcove of Princeton University found that 55-59 percent of oil palm expansion in Malaysia and at least 56 percent of that in Indonesia occurred at the expense of forests. Given that oil palm plantations are biologically impoverished relative to primary and secondary forests, the researchers recommend restricting future expansion to pre-existing cropland and degraded habitats.
Global ban on biofuels would lead to immediate decline in food prices (5/16/2008) A global moratorium on biofuels produced from food crops would result in a significant decline in the price of corn, sugar, cassava and wheat by 2010, reports the International Food Policy Research Institute (IFPRI).
Record food prices to climb through 2010 (3/6/2008) The U.N. expects record high food prices to continue through 2010, driving hunger and poverty in the world's poorest countries, said a top U.N. official Thursday.
UN: biofuels are starving the poor by driving up food prices (2/14/2008) Echoing sentiments increasingly expressed by politicians, scientists, and advocates for the poor, the U.N. Food and Agriculture Organization warned that the world's poorest people are suffering as a result of the push to use food crops for biofuel production.
by Leigh MacMillan | Posted on Thursday, Jan. 31, 2013 — 9:29 AM
In a wide-ranging lecture that moved from plants to nematode worms to human leukemia, Nobel laureate Andrew Fire, Ph.D., outlined his vision for a genomics-based understanding of how organisms respond to novel information. Biological responses to foreign information involve an “immune response” — mediated in some organisms by RNA and in others by proteins and cells, said Fire, professor of Pathology and Genetics at Stanford University School of Medicine. Fire and Craig Mello, Ph.D., were awarded the 2006 Nobel Prize in Physiology or Medicine for their discovery of RNA interference — an RNA-based immune response that allows cells to selectively silence certain genes, for example those of a pathogenic virus. Immune responses like RNA interference can be “positive” and directed against a pathogen, or they can be “negative” and directed against the organism itself (generating autoimmune disorders in humans). “We live at this interface between having the immunity good enough to target as many viruses as possible coming from outside, and having it not so effective that it starts to target our own natural products and turn off processes that are very important,” Fire said. In the course of their studies of RNA interference, Fire and his colleagues began to use high-throughput DNA sequencing as a tool to probe all of the RNAs produced in response to foreign information. They wondered if DNA sequencing might also be applied to the human immune response, in particular to the production of antibodies and T cell receptors. DNA sequencing revealed “lots of sequences” in a healthy individual – a rich diversity and repertoire of antibodies and receptors. In individuals with leukemia or lymphoma, the sequencing detected the amplification of single clonal receptors. The findings may be useful clinically, Fire said, to track the recurrence of such clonal cells and improve monitoring of residual disease after treatment. Fire also described using the approach to follow responses to the flu vaccine and to dengue virus infection. “As we get more sophisticated about this, we should be able to classify antibodies by similarity to each other and … build a way to track diseases,” Fire said. “I think it’s an opportunity to develop sequence-based diagnostics.” Fire was the Department of Cell and Developmental Biology Distinguished Faculty Speaker. For a complete schedule of the Flexner Discovery Lecture series and archived video of previous lectures, go to www.mc.vanderbilt.edu/discoveryseries.
A team of amateurs has discovered evidence for 42 alien planets, including a Jupiter-size world that could potentially be habitable, by sifting through data from a NASA spacecraft. Forty volunteers with the crowd-sourcing Planet Hunters project discovered the new planet candidates, which include 15 potentially habitable worlds and PH2 b, a Jupiter-size planet that the team confirmed to be in the habitable zone of its parent star. This is the second time the Planet Hunters project, which is overseen by Zooniverse, has confirmed a new exoplanet discovery. What's more, several candidate planets found by the project may be in the habitable zones of their parent stars. These candidates are awaiting confirmation by professional astronomers. Researchers suggested this bonanza of planets in the so-called Goldilocks zone around a star, a habitable zone in which conditions allow liquid water to exist on a planet's surface and potentially support life, could mean there is a "traffic jam" of worlds where life could exist, project officials said. “These are planet candidates that slipped through the net, being missed by professional astronomers and rescued by volunteers in front of their web browsers,” said the University of Oxford's Chris Lintott, who helms the Zooniverse, in a statement. “It's remarkable to think that absolutely anyone can discover a planet.”
Life on an 'Avatar'-like moon
The planet PH2 b was found using data from NASA's prolific Kepler Space Telescope and confirmed with 99.9 percent confidence by observations at the W. M. Keck Observatory in Hawaii. PH2 b is considered much too large to host life. However, any moons orbiting the planet could be strong candidates, astronomers said. The atmospheric temperature on the planet would range between 86 and minus 126 degrees Fahrenheit (30 and minus 88 degrees Celsius) in the habitable zone. “Any moon around this newly discovered, Jupiter-sized planet might be habitable,” stated Ji Wang, a postdoctoral researcher at Yale University. He is lead author of a paper about the discoveries, which has been submitted to the Astrophysical Journal and is available on the pre-publishing website Arxiv. If a theoretical moon were to host life, it would likely have a rocky core, plus a greenhouse atmosphere of some sort that could have liquid water on its surface, the researchers said. "It’s very similar to what was depicted in the movie ‘Avatar’ – the habitable moon Pandora around a giant planet, Polyphemus," Wang added.
A telltale dim
Volunteers spotted PH2 b by watching its parent star. As the planet passed in front of the star, the apparent brightness from Earth dimmed. This is one of two commonly used techniques for finding exoplanets; the other is looking for wobbles in a star's motion as an orbiting planet's gravity tugs on it. Excluding PH2 b, citizen scientists recently discovered 42 planetary candidates, with 20 of those likely in their respective stars' habitable regions. "These detections nearly double the number of gas giant planet candidates orbiting at habitable zone distances," the paper stated. Planet Hunters includes participation from Oxford, Yale and several other institutions. Volunteers pore over data from Kepler. Once the strongest candidates are identified, professional astronomers take a look at them. Planet Hunters has found 48 candidate planets so far. The first confirmed planet, PH1, was revealed in October 2012.
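The dimming the volunteers look for can be quantified with one line of geometry: for a central transit, the fractional drop in starlight is roughly the ratio of the planet's disk area to the star's. Here is a minimal sketch in Python, assuming a Sun-like host star (the article does not give the star's actual radius):

R_SUN_KM = 696_340.0     # radius of the Sun
R_JUPITER_KM = 69_911.0  # radius of Jupiter

def transit_depth(r_planet_km, r_star_km):
    """Fraction of the star's light blocked by a transiting planet:
    (R_planet / R_star) squared."""
    return (r_planet_km / r_star_km) ** 2

# A Jupiter-size planet such as PH2 b crossing a Sun-like star:
print(transit_depth(R_JUPITER_KM, R_SUN_KM))  # about 0.01, i.e. a ~1% dip

A dip of about 1 percent is well within Kepler's photometric precision, which is why volunteers can pick out Jupiter-size candidates by eye in the light curves.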
To learn how to participate in the Planet Hunters project, visit: http://www.planethunters.org/
FIRST PERSON | New research directly ties a deficiency of Vitamin D in older adults to mobility limitations and other disabilities. The findings of the six-year study put a name on the culprit behind some of the most significant physical problems that plague American seniors. Results from the Wake Forest Baptist Medical Center research effort are among the earliest that looked at insufficient levels of Vitamin D and the onset of mobility limitations in older adults, according to Medical News Today. The North Carolina researchers published their conclusions in the Journal of Gerontology: Medical Sciences. Their project utilized data from the National Institute on Aging's Health, Aging, and Body study. Researchers defined subjects' limitations as any difficulty walking several blocks or climbing a flight of stairs. They considered disability to mean the inability to perform these activities. Denise K. Houston, Ph.D., of Wake Forest University served as study director. The school indicates that the initial goals included a pilot study to find cost-effective ways to identify individuals at elevated risk for functional decline with insufficient vitamin D levels and gathering data useful for an eventual full-scale randomized trial. The more than 3,000 subjects consisted of black and white men and women between ages 70 and 79. Researchers noted around a 30 percent elevated risk of limitations in mobility for those with low levels of vitamin D. The same group had nearly a two-fold greater risk of mobility disability. Vitamin D is crucial to muscle function. Having an insufficient amount has already been linked to disorders such as high blood pressure, bone-density thinning, cardiovascular disease, and lung disease. Individuals get vitamin D from sun exposure, from foods rich in the vitamin, or from supplements. The Mayo Clinic suggests that as little as 10 minutes of daily sun exposure can prevent deficiencies. However, older adults tend to spend less time outdoors than average. Houston recommends that individuals older than 70 get 800 International Units of vitamin D each day, either through diet or by taking supplements. For several years, I have struggled with getting adequate levels of vitamin D. Although I undergo blood work every few months due to having Crohn's disease, until I experienced significant bone thinning, the tests never included vitamin D levels. When the doctor ordered the measurement, the extent of the deficiency was shocking. The Crohn's & Colitis Foundation of America indicates a vitamin D deficiency can result in increased disease activity and a reduced quality of life for patients with Crohn's disease and ulcerative colitis, the most common inflammatory bowel diseases. Due to extreme deficiency, I had to take 50,000 International Units of this vitamin weekly for six months. Now that my levels are barely normal, I take 5,000 units a day of vitamin D to help maintain maximum mobility in my senior years. Vonda J. Sines has published thousands of print and online health and medical articles. She has a special interest in diseases and other conditions that affect quality of life.
223 years ago this weekend, Fletcher Christian and 17 other sailors held the domineering Captain Bligh at bayonet point against the mast of His Majesty’s Armed Vessel Bounty in the most famous mutiny in history. One month ago, National Geographic embarked on a journey in their footsteps, but with the very different goal of studying the pristine coral reefs of the area (read blogs). Bligh was set adrift in the ship’s small launch with 18 loyal shipmates, a compass, his journals, some tools, supplies, cutlasses, and food, rum, wine, and water. He navigated the castaways through the open sea some 3000 miles to safety in Timor, and then continued to Britain to begin his attempts to bring all the mutineers to justice at the gallows. Fletcher Christian led the Bounty back to “Otaheite” where they once again enjoyed laid-back island life (and women) until fear of discovery drove them to find a new home where they’d never be discovered by British law. That island was Pitcairn. 50 descendants of the mutineers and their Tahitian wives live there to this day. In 1957 National Geographic’s Luis Marden voyaged to Pitcairn and discovered the last remnants of the Bounty in the waters of the island’s bay (read original article, see photos). Now, over the past several weeks, NG Explorer-in-Residence Enric Sala has led an expedition to survey the sea-life in the area’s nearly untouched waters (read blogs, see photos). In the gallery above, see photos from this most recent expedition, meet some of the locals, see some of the sights, and get a sense of what remains on Pitcairn Island more than two centuries after the legendary mutiny.
The Reason Behind West Nile’s Appearance
Cases of West Nile Virus are on the rise throughout the United States. In the Garden State alone there have been fifteen confirmed cases of the virus afflicting people, one of which resulted in the death of a Burlington County man. So why is the virus making so much noise this late into the summer? Robert Kent, Administrator of the Department of Environmental Protection’s Office of Mosquito Control Coordination, says the age of insects is one of the factors for the spread of West Nile. “We’re dealing with an old mosquito population that’s been flying for a great portion of the summer, and this gives West Nile Virus the opportunity to accumulate in the mosquito population. And as the mosquitoes continue to feed on birds the virus is amplified.” Once a mosquito takes blood from a bird infected with West Nile, it’s able to metabolize the virus into something that is transferable and could be spread through other blood meals (humans). “From our stepped up surveillance it’s suggesting that one in ten mosquitoes might be positive for West Nile Virus.” While it may sound scary, Kent says that compared to the average seasonal flu, West Nile’s effects aren’t as prevalent. He notes symptoms don’t appear in everyone. “Well believe it or not about 80% of the people who have West Nile Virus don’t know they have it. There’s no symptoms whatsoever, but of the remaining 20 percent it could be quite severe.” Kent says those who do experience symptoms could have anything from a mild fever and headache all the way to serious ones such as extreme fever, paralysis and neuro-invasive disease. Kent points out that though it’s usually the very young and very old who are vulnerable to West Nile, they have had middle-aged and young adult individuals who fell ill. He suggests anyone who suspects they have symptoms contact their physician, who will decide if samples should be sent to Trenton for testing.
Organic Foods are Mainstream
What does organic mean? Why does organic cost more? Is organic better for my health? These questions are becoming more common now that "organics" have hit the mainstream supermarkets, delis, and "fast food" chains. Organic foods no longer solely exist in health-food stores. According to the United States Department of Agriculture (USDA), organic agricultural products like fruits, vegetables, and grains must be grown without the use of pesticides, synthetic fertilizers, radiation, or bioengineering. Organic meats, poultry, eggs, and dairy products are manufactured from livestock that are not fed or injected with antibiotics or growth hormones, live in natural living conditions appropriate for their species, and are fed only organic feed. October 2002 was the rollout of the new national standards for organics. To receive the USDA Organic Seal, a product's label must contain 95-100% organic ingredients. Labels that state 100% Organic contain only organic ingredients, whereas labels stating Organic contain at least 95% organic ingredients. "Made with organic ingredients" are food products containing at least 70% organic ingredients. If the product is made with less than 70% organic ingredients, these ingredients may be listed on the side of the package, but "organic" claims may not be on the front of the package. When making these claims, not only must the ingredients be certified organic but all processing and handling must also follow organic protocols. The checkbook is often a driving factor when making food purchases. Costs of organic items vary from pennies above to double a conventional item's typical price. This price difference is a result of increased levels of labor and management required to comply with organic certifications mandated by the USDA. Keep your refrigerator stocked and your pocket full by purchasing organics in bulk and from your local farmers market. Twelve produce items nicknamed the "Dirty Dozen" have been shown to contain significantly higher levels of pesticide residues than other produce items even after thorough washing. Pesticides may have harmful effects on children's developing bodies. The "Dirty Dozen" foods include: apples, cherries, grapes, nectarines, peaches, pears, raspberries, strawberries, bell peppers, celery, potatoes, and spinach. Conventional meat, poultry, and dairy products have been linked to increased bacterial resistance in humans. Organic foods are shown to have higher levels of phytonutrients. Phytonutrients are linked to many health benefits ranging from battling the common cold to improving cardiovascular health. Organic foods have made their way into the mainstream food markets and are here to stay. A survey conducted in August 2005 for Whole Foods Market found 65 percent of Americans saying they had tried organic foods and beverages. This is up from 54 percent in similar surveys conducted in 2003 and 2004. As science reveals more about the health benefits of organics, the demand for these foods will increase, and the prices at food markets will likely go down. In the meantime, when you find yourself with a few extra pennies for food shopping, consider using the change to purchase organic produce from the "Dirty Dozen" list. Reasons to Buy Organic...
- Organic farming practices do not contaminate our water supply.
- Organic foods have higher levels of some nutrients.
- Organic farming methods help prevent soil erosion.
- Animals are treated more humanely under organic conditions.
- Organic farmers help cultivate nutrient rich soil.
- Organic farming practices are better for the health of farmers and their families.
- Buying organic supports small family farms across the country.
- Organic farming promotes biodiversity.
Google has celebrated Jorge Luis Borges’s birthday with one of their iconic homepage images. We’ll join in the party with a quotation from his prologue to The Invention of Morel, by his close friend and frequent collaborator Adolfo Bioy Casares (they wrote detective stories together under the name H. Bustos Domecq): Detective stories—another popular genre in this century that cannot invent plots—tell of mysterious events that are later explained and justified by reasonable facts. In this book [The Invention of Morel] Adolfo Bioy Casares easily solves a problem that is perhaps more difficult. The odyssey of marvels he unfolds seems to have no possible explanation other than hallucination or symbolism, and he uses a single fantastic but not supernatural postulate to decipher it. My fear of making premature or partial revelations restrains me from examining the plot and the wealth of delicate wisdom in its execution. Let me only say that Bioy renews in literature a concept that was refuted by St. Augustine and Origen, studied by Louis-Auguste Blanqui, and expressed in memorable cadence by Dante Gabriel Rossetti. Above is a picture of Borges (left) and Bioy Casares together.
The government's being urged to set targets to eradicate child poverty following New Zealand's poor ranking in a new Unicef report. The report, Measuring Child Poverty, ranks New Zealand 20th out of 35 OECD countries based on the percentage of children living in relative poverty. That means children living in a household where disposable income is less than 50 per cent of the national median income.
Farmed Salmon Escapes
Fish raised in aquaculture production can cause serious harm when unintentionally or intentionally released from aquaculture facilities. Escaped fish can harm wild fish populations, other species and the ecosystem. Fish in open net pens escape in small numbers even during normal operations, and can escape in large quantities when nets are damaged by storms or predators, such as sharks and sea lions. Atlantic salmon escapes on the U.S. and Canadian West Coasts are common; there were 350,000 known escapes in 1997, and farmed Atlantic salmon have been found thousands of miles away from the closest salmon aquaculture facilities. In the Pacific Ocean, escaped non-native Atlantic salmon have already been found breeding near aquaculture operations in both British Columbia and South America. Escapes are a significant concern because they occur on a regular basis. Escaped fish potentially travel great distances and are a threat to the long-term health and fitness of native populations. In early 2009, Oceana publicized a massive escape that took place on December 31, 2008. We revealed that the escape involved about 750,000 salmon and trout and that some of the escaped salmon were infected with the ISA virus. Moreover, reports of salmon escapes in Chile range upwards of 10 million a year. The escape of farmed salmon from their cages is one of the most serious environmental problems resulting from open-water aquaculture operations. Escaped salmon generate various ecological effects, including predation and competition with native species, hybridization and transmission of diseases to native wild fish. Also, many of the native species affected by escaped salmon are the target species for artisanal fishing, causing economic losses in this sector estimated at $5 million annually. Currently, regulation of salmon escapes in Chile is very weak. Essentially the only requirement is that farming companies prepare a contingency plan. This has proved to be ineffective in mitigating escapes, and even in ensuring they are reported. Some companies have insured themselves against escapes, which some people believe has led the companies to seek reimbursement (when market prices are down) by negligently permitting massive escapes.
The samurai dominated Japanese society for 700 years, and the vision of this class permeates Japanese culture. Ever present is the samurai's sword — as a tool, a companion, and a symbol. The samurai sword is both a technical marvel and a significant cultural object. As a technology, it involves a large system of craftsmen, distinct stages of and for the materials, and a long apprenticeship to develop the necessary skills. Culturally, the sword is surrounded by a history of legend, prescribed behaviors, and complex status relationships. Like a many-faceted diamond, a close examination of this one tool can give us a wide perspective on Japanese culture. To study the relationship between the samurai and his sword, we will study the whole of samurai history. The role the sword has played has changed over time, but different times have also brought out different aspects of that many-layered relationship. The sword makes the samurai, makes him its wielder as much as he makes it his weapon. How was the technology of the sword appropriate to the samurai, and what roles did it play? We will spend considerable time exploring the psyche of the samurai, particularly with respect to Zen anti-ideology. How is Zen reflected in the samurai and in his sword? We will also follow Japanese history as it revolves around samurai. By what fire is the samurai's identity forged, and what of today's society can its gleaming edge cut apart?

Session 1: Introduction to Japan
Readings:
- Storry, Richard. Fig. 1-2 and "The Silent Warrior." The Way of the Samurai. London, England: Orbis Books, 1978, pp. 7-17. ISBN: 9780856134043.
- Suzuki, Daisetz. "Zen and Swordsmanship I." Chapter V in Zen and Japanese Culture. Princeton, NJ: Princeton University Press, 1970, pp. 89-93. ISBN: 9780691017709.

Session 2: The katana
Readings:
- Kapp, Leon, Hiroko Kapp, and Yoshindo Yoshihara. "A Craft Reborn" and "The Sword." The Craft of the Japanese Sword. New York, NY: Kodansha International, 1987, pp. 17-27, 53-55 and 61-94. ISBN: 9780870117985.

Session 3: The samurai's cultural origins
Readings:
- Tsunoda, Ryusaku, William T. de Bary, and Donald Keene. Sources of Japanese Tradition, Vol. I: From Earliest Times to 1600. New York, NY: Columbia University Press, 1964. Please read one of: pp. 21-26; pp. 14-17 and 27-29; or pp. 17-18, 29-30 and 274-276.
- Beasley, W. G. "Buddhism and Shinto." In The Japanese Experience: A Short History of Japan. Berkeley, CA: University of California Press, 2000, pp. 42-47. ISBN: 9780520225602.
- Storry, Richard. "The Samurai Emerges." The Way of the Samurai. London, England: Orbis Books, 1978, pp. 18-41. ISBN: 9780856134043.

Session 4: The code of the samurai
Readings:
- Heike Monogatari [The Tale of the Heike]. Translated by Kitagawa Hiroshi and Bruce T. Tsuchida. Tokyo, Japan: University of Tokyo Press, 1975. Chapter 1, p. 5; Chapter 9, pp. 519-523; Chapter 11, pp. 676-689. ISBN 1: 9780860081883 and ISBN 2: 9780860081890.
- Tsunetomo, Yamamoto. Hagakure: The Book of the Samurai. Vol. I. Tokyo, Japan: Hokuseido Press, 1980, sections 2-5, 9 and 12, pp. 35-40. ISBN: 9780893461690.

Session 5: Zen and the samurai
Readings (please read one of):
- Hoffman, Yoel. "The Haiku," "Death Poems and Zen Buddhism," and poems by Kozan Ichikyo, Suzuki Shosan, Taigen Sofu, Takuan Soho, Zoso Royo, and Bashō. Japanese Death Poems. Rutland, VT: C. E. Tuttle, 1986, pp. 22-27, 65-76, 108, 117-19, 129 and 143. ISBN: 9780804831796.
- Storry, Richard. "Zen and the Sword." The Way of the Samurai. London, England: Orbis Books, 1978, pp. 43-61. ISBN: 9780856134043.
- Suzuki, Daisetz. "What is Zen?" and "Zen and the Samurai." Chapters I, IV in Zen and Japanese Culture. Princeton, NJ: Princeton University Press, 1970, pp. 7-15 and 70-85. ISBN: 9780691017709.

Session 6: Civil war and unification
Readings:
- Yoshikawa, Eiji. Taiko. New York, NY: Kodansha International, 1992, pp. 653-663. ISBN: 9784770015709.
- Please read one of: Beasley, W. G. "The Unifiers." Chapter 7 in The Japanese Experience: A Short History of Japan. Berkeley, CA: University of California Press, 2000, pp. 116-127. ISBN: 9780520225602; or Berry, Mary Elizabeth. "The Sword Hunt" and "Freezing the Social Order." Chapter 5 in Hideyoshi. Cambridge, MA: Council on East Asian Studies at Harvard University, 1989, pp. 102-111. ISBN: 9780674390263.

Session 7: Giving up the gun
Readings:
- Perrin, Noel. Chapters 1-4 in Giving Up the Gun: Japan's Reversion to the Sword. Boston, MA: D. R. Godine, 1988. ISBN: 9780879237738.

Session 8: The Tokugawa state
Readings:
- Storry, Richard. "The Armed Mandarins." The Way of the Samurai. London, England: Orbis Books, 1978, pp. 63-77. ISBN: 9780856134043.
- Sadler, A. L. "The Legacy of Ieyasu." In The Maker of Modern Japan: The Life of Shogun Tokugawa Ieyasu. Rutland, VT: C. E. Tuttle, 1978, pp. 387-398. ISBN: 9780804812979.

The lab for this module is blacksmithing (forging) in the MIT forge. After a demonstration, students sign up for four time slots to work on projects they choose themselves — from small household items to attempts at Samurai swords (the latter is discouraged, though, as it takes about six months to complete). The MIT crest features a blacksmith, because MIT is dedicated to the intricacies of practice, but few students have a chance to get so close to such a concrete engineering problem. Forging is delicate work, requiring both skill and an understanding of the chemistry of iron. We hope that after this lab, students have a greater appreciation for the "learning" necessary for, and part of, "doing."
After you've entered data, you may find that you need another column to hold additional information. For example, your worksheet might need a column after the date column, for order IDs. Or maybe you need another row, or rows. You might learn that Buchanan, Suyama, or Peacock made more sales than you knew. That's great, but do you have to start over? Of course not. To insert a single column, click any cell in the column immediately to the right of where you want the new column to go. So if you want an order-ID column between columns B and C, you'd click a cell in column C, to the right of the new location. Then, on the Home tab, in the Cells group, click the arrow on Insert. On the drop-down menu, click Insert Sheet Columns. A new blank column is inserted. To insert a single row, click any cell in the row immediately below where you want the new row to go. For example, to insert a new row between row 4 and row 5, click a cell in row 5. Then in the Cells group, click the arrow on Insert. On the drop-down menu, click Insert Sheet Rows. A new blank row is inserted. Excel gives a new column or row the heading its place requires, and changes the headings of later columns and rows. Click Play to watch the process of inserting a column and a row in a worksheet. In the practice you'll learn how to delete columns and rows if you no longer need them.
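The same insertions can be scripted when you process workbooks in bulk. Here is a minimal sketch using the openpyxl Python library; the file name and the insert positions are hypothetical, chosen to mirror the order-ID example above.

from openpyxl import load_workbook

# Hypothetical workbook whose active sheet has dates in column B
# and sales figures from column C onward.
wb = load_workbook("orders.xlsx")
ws = wb.active

# Insert one blank column at position 3 (column C) for order IDs;
# existing columns C, D, ... shift right, as in the Excel commands above.
ws.insert_cols(3)
ws["C1"] = "Order ID"

# Insert one blank row at position 5; rows 5 and below shift down.
ws.insert_rows(5)

wb.save("orders-with-ids.xlsx")

Note that while Excel itself renumbers headings and adjusts formulas when you insert columns or rows, openpyxl shifts cells but leaves formula strings untouched, so it is worth checking any formulas before overwriting an original file.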
Two hundred years ago this week, during the War of 1812, an Ohio fort was front and center as American forces battled the British for control of Lake Erie and the region around it. If the British and their allies had prevailed, places like Toledo, Sandusky, Vermilion, Lorain, Cleveland and Conneaut might be on maps of Canada today. This weekend in Perrysburg, historic Fort Meigs observes the 200th anniversary of the bloodiest day of fighting there during the three-year War of 1812 -- May 5, 1813, the First Siege of Fort Meigs. Visit the fort this Friday, May 3, through Sunday, May 5, 2013, and meet hundreds of War of 1812 re-enactors from the United States and Canada portraying those who fought there. Explore authentic 1812-era military camps and see battle re-enactments, fife and drum concerts, musket and cannon firings and more.
Many immigrants came to the US via Canada, as fares were generally much cheaper that way. In 1895 Canada and the USA established a joint inspection system. Passengers arriving in Canada who intended to go on to the United States were inspected by US Officials at the Canadian Port of Arrival, then enumerated on US immigration lists. Immigrants were also given inspection cards, which they turned in to US Officials once they were on board trains going to the United States. Two sets of records were created - passenger lists and compiled inspection cards. These CANADIAN BORDER CROSSING records were microfilmed by INS. They cover 1895-1954 and are indexed. They do not include Canadians before 1906. After September 30, 1906 both Canadians and non-Canadians are included on these lists. See more information on the St Albans (Canadian Border Crossing) Lists (including film numbers).
Based on Koch et al. 2002 (PMID 12124289). As shown in this picture, DNA fragments with a complementary 3' (three prime) overhang can be ligated to the unzipping construct. If DNA is attached to a surface (e.g. a coverglass) at one end and to another surface (e.g. a microsphere) in the middle, the DNA can be unzipped if there is a nick between the two attachments. This was first shown by Bockelmann, Essevaz-Roulet, and Heslot in the mid-1990s (PMID 9342340). We describe here a versatile adaptation first described in Koch et al. 2002 (PMID 12124289). Compared to DNA constructs for end-to-end DNA stretching (see, e.g., labeling DNA by PCR), unzipping constructs are more challenging to produce. The method we describe here has a significant stretch of double-stranded DNA (dsDNA) between the first and second attachment labels (dig and biotin). A simpler construct can be made by directly hybridizing two end-labeled DNA oligos, producing a fork construct. (See, e.g. Koch_Lab:Protocols/Fork unzipping constructs.) However, this produces shorter tethered particles, which was disadvantageous for the optical tweezers systems we were using. The key to this method is that unzipping of a variety of downstream DNA molecules can be carried out with very little modification to the protocol. This versatility was leveraged by Jiang et al. 2004 (PMID 16337600), Shundrovsky et al. 2006 (PMID 16732285), and Johnson et al. 2007 (PMID 17604719).
- The anchoring segment (typically about 1 kilobase pair (kb) in length) is convenient for producing long initial tethers. However, the structural stability of this dsDNA anchoring segment sets an upper limit of below 60 pN (to ensure that the dsDNA anchoring segment does not undergo force-induced melting or "overstretching").
- As shown in the figure above, the complementary overhang is provided by the biotin-labeled strand of the "insert duplex" (or "adapter duplex"). This is for a 3' overhang. For a 5' overhang, the bottom strand will be longer. Different sticky ends can be created by switching out the top or bottom strand; a small sketch of the complementarity check follows below.
- Protein binding sites can be engineered directly into the insert duplex, and some experiments can be carried out without need for ligation of a downstream unzipping segment.
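As a small illustration of the complementary-overhang requirement described above, the Python sketch below checks whether two single-stranded sticky ends can anneal; the 4-base sequences are made up for the example and are not taken from the actual construct.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Reverse complement of a DNA sequence (read 5'->3' in and out)."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def overhangs_anneal(overhang_a, overhang_b):
    """Two single-stranded overhangs can base-pair (and then be ligated)
    when one is the reverse complement of the other."""
    return overhang_b == reverse_complement(overhang_a)

# Hypothetical 4-base sticky ends:
print(overhangs_anneal("AAGG", "CCTT"))  # True: CCTT reverse-complements AAGG
print(overhangs_anneal("AAGG", "TTCC"))  # False: complementary but not reversed

In practice the overhang on the downstream fragment typically comes from a restriction digest or a tailed PCR primer, so the function above is just the design rule stated in code.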
Disunion follows the Civil War as it unfolded. In a violent denouement to the enormous set-piece battles of Second Bull Run and Antietam in the summer of 1862, cavalry clashes raged across Northern Virginia that fall. In dozens of farm villages and crossroads communities, roving bands of Union and Confederate horsemen engaged in a series of brief and often bloody skirmishes. One such fight took place near the hamlet of Little Washington on Nov. 8, 1862. The action began when a squadron of Union Army regulars collided with an enemy picket composed of a Georgia lieutenant and 10 of his men. The Yankees charged the outnumbered rebels, but the rebels “gallantly met the onset, falling back slowly to a narrow lane, stubbornly contesting the ground,” reported an unidentified Confederate in a pamphlet published during the war. The mingled sounds of battle were heard by the main body of Georgia troops at their nearby camp, who hurriedly formed a column and rode to the relief of their endangered comrades. Among the first to saddle up was one of Georgia’s most admired horse soldiers, Will Delony. Raw aggression coursed through the veins of this beau idéal of a Southern cavalryman. “His full brown or mahogany beard and high massive forehead, intellectual face and eagle eyes, marked him as a man among men, resembling the finer full-bearded engravings I have seen of Stonewall Jackson,” noted one Georgian soldier, Wiley C. Howard. That day Delony was to prove himself worthy of such a comparison. Indeed, what occurred next was “one of the bloodiest little fights that the history of our great struggle for right and liberty will ever record,” Howard wrote. The unidentified Confederate observer reported that “Delony, putting spurs to his horse, left the column behind and dashed up into the melee, and hand to hand with his own boys, nearly all of whom had been cut down, was delivering his blows right and left.” Howard remembered that Delony “was fighting like a mad boar with a whole pack of curs about him, having his bridle hand dreadfully hacked, his head gashed and side thrust.” The bluecoats called on him to surrender, but Delony barked back at the federals to lay down their own arms instead: “Surrender! By God! I am the best man!” and felled one enemy soldier with a blow of his sword. Suddenly Delony was attacked by another saber-swinging federal. “His new antagonist’s blows were dexterously dealt, and an instant parry saved his head; a quick, heavy blow, partially warded off, fell broadside and deadened his sword arm, causing it to fall by his side,” Howard reported. But just then the column of Georgians thundered upon the scene, led by Pvt. Jimmie Clanton, mounted “on a little keen black charger.” He made a beeline for the federal cavalryman, who was raising his sword to send the vulnerable Delony to his maker. Clanton, “with upraised gleaming sabre, arrests the fatal blow by cleaving the confident antagonist’s head in twain, and half raising it for another stroke, a pistol shot sends the noble lad, too, reeling from his saddle dangerously wounded.” The rebel column tore into the Yankees. The unidentified Confederate reported that the federals “began to yield and give ground, when a body of our dismounted men gained their flanks.” He added, “Here our artillery came dashing up and completed the success and sent them scampering down the road at a most inconvenient speed.” The next day, Delony sat on a log in camp with his head and hand bandaged.
Howard recalled that he “showed me a small metallic flask, which he carried in his inside coat pocket, near the region of the heart and lungs, which showed an entire saber point thrust nearly a quarter of an inch wide clear through the metal.” Delony remarked “that he had sometimes felt that he would hate for his wife, in case he fell in battle, to know that it was there; but, with a humorous smile said he now thought it a good idea for every man to have one on him at the vulnerable spot where the cold steel struck with such force.” Neither Howard nor Delony mentioned whether the flask was full or empty before the enemy saber pierced it. Chances are Delony had taken a deep draught before the fight; his habitual drinking had prompted several officers to express concern to Delony’s colonel, Thomas R.R. Cobb. Delony’s fondness for the flask disturbed Cobb very much. “I don’t know what to do about it,” he confessed in a letter to his wife. No evidence exists that Delony drank before he joined the military. An honor graduate of the University of Georgia and a successful attorney in Athens, he raised a cavalry company known as the Georgia Troopers in 1861. The rank and file elected him captain, a common practice in the volunteer army. Delony and his men then joined the “Georgia Legion,” a force of artillery, cavalry and infantry designed loosely around a Roman legion and organized by Colonel Cobb, a popular and charismatic leader (in fact, the unit became better known as “Cobb’s Legion”). The concept of a legion proved impractical, and it was not used as such during the war. The cavalry from Cobb’s Legion served with Gen. Robert E. Lee’s Army of Northern Virginia, where Delony proved himself a caring leader. Howard exclaimed, “How his men loved him, and how he stood by them, contending always for their rights and looking after their comforts, when others would treat them indifferently! His heart and purse were ever open to their needs.” As he showed that day in Little Washington, Delony was at his very best in combat. He could always be found in the hottest part of a battle, and inspired the ranks by his deeds. “He was a game fighter and dared to attempt anything,” Howard said, “even though it seemed impossible to others.” Howard recalled Delony’s actions on June 9, 1863, at Brandy Station — the largest cavalry battle of the war. By this time Delony had advanced to lieutenant colonel, and Pierce M.B. Young had replaced Cobb as colonel. At one point during the engagement, the Georgians charged federal cavalry, “and soon their splendid line was all broken and each man of us was fencing and fighting for the time his individual foe, the fiery and impetuous onslaught of the Southron was too much for the steady courage of the Northman, and quick and fast as the blows fell and the cold steel slashed, the most of the enemy were making to their rear.” Howard observed Delony “smiting Yankees right and left as he charged along in advance. He sat on his charger grandly, his fine physique and full mahogany beard flowing, he looked a very Titan war god, flushed with the exuberance and exhilaration of victory. He called to me to rally with others of his old company about him and on he led us pressing the retreating foe.” On they charged until caught in a devastating cross-fire by dismounted federals. Colonel Young ordered Delony to withdraw.
“But,” Howard wrote, “shaking his head and lion-like beard Delony said, ‘Young, let’s charge them,’ and in two or three minutes five horses fell and a number of our men had been shot. By this time, however, the enemy’s whole line in sight were giving way and on we went, those not unhorsed or crippled. So fierce and fast was the fighting, we had not time to accept surrender offered by many Yankees — just rode on and left them behind.” Several weeks later, in Pennsylvania during the Gettysburg Campaign, Delony led a similar charge mounted on his bay, Marion. This time he went up against Union forces led by the newly minted brigadier general George Armstrong Custer in a cavalry fight at Hunterstown on July 2, 1863. Federal lead struck Marion, and the horse toppled onto Delony. He extricated himself with great difficulty and barely managed to escape the enemy. His luck ran out at the Battle of Jack’s Shop, in Virginia, on Sept. 22, 1863. A Minié ball struck Delony in the left thigh, and, in a Gettysburg repeat, his horse was also hit and fell on top of him. This time, though, he could not get away and fell into federal hands. Transported to nearby Culpeper for a brief stay, he was then carried by ambulance to Washington. He was admitted to Stanton General Hospital and given a bed in a ward full of Union boys, where he befriended one of the convalescing soldiers, John A. Wright of the 140th Pennsylvania Infantry. Delony’s wound turned gangrenous. On Oct. 2, 1863, surgeons informed him that his condition was mortal. Wright recalled that Delony then asked him to read from the Bible. “The 14th Chapter of John was selected, and the reader began: ‘Let not your heart be troubled…’” Delony broke down. “‘Oh, I could die in peace, I could die in peace,’ he sobbed, ‘if only I were home with my wife and children. But it is so hard to die away from home and among strangers.’” Delony was transferred to another hospital later that day, and died that night. He was 38 years old. Union authorities buried his remains in a numbered grave in the hospital cemetery. They were later disinterred and returned to his family in Athens. Wright survived the war and became a minister, perhaps the last man touched directly by the charismatic Delony. Sources: Ulysses R. Brooks, “Stories of the Confederacy”; Wiley C. Howard, “Sketch of Cobb Legion Cavalry and Some Incidents and Scenes Remembered”; William B. McCash, “Thomas R.R. Cobb: The Making of a Southern Nationalist”; William G. Delony military service record, National Archives and Records Service; The War of the Rebellion: A Compilation of the Official Records of the Union and Confederate Armies; George F. Price, “Across the Continent With the Fifth Cavalry”; John F. Stegeman, “These Men She Gave: Civil War Diary of Athens, Georgia”; Robert L. Stewart, “History of the One Hundred and Fortieth Regiment Pennsylvania Volunteers”; Francis S. Reader, “Some Pioneers of Washington County, Pa.: A Family History.” Ronald S. Coddington is the author of “Faces of the Civil War” and “Faces of the Confederacy.” His new book, “African American Faces of the Civil War,” was published in August. He writes “Faces of War,” a column for the Civil War News.
“From outside in the fields came a sickening smack of an axe on a tree. Then we heard the tree fall. The very last Truffula tree of them all.” –From The Lorax, Dr. Seuss This spring, a motion picture version of Dr. Seuss’s The Lorax hit the big screen with a not-so-subtle environmental message about the threat timber harvesting poses to the environment. Published in 1971, the book tells the story of a business, led by the “Once-ler,” that cuts down all the trees in the Truffula forest, destroying wildlife habitat, the air, and water in the process. The Lorax, a friendly, furry creature that “speaks for the trees,” announces what he thinks has caused this catastrophe, scolding the businessman, “Sir, you are crazy with greed.” Forty years after the book was published, however, a different story has been written in forests across the globe. Rather than being at odds, the Once-ler and the Lorax have found a common interest in making sure forests grow and expand―and many of the world’s forests have benefitted. In the industrialized world, instead of the scarcity Seuss predicted, forests are plentiful. Last year was the International Year of the Forest, and the United Nations offered some good news. For the last two decades, total land area covered by forest in the Northern Hemisphere―where forestry is particularly active―has increased. Despite the implication that economic growth, or as Seuss has the Once-ler say, “biggering, and biggering, and biggering,” would lead to environmental destruction, the nations where growth has been most steady are the ones enjoying the best environmental outcomes. Not only are nations in the Northern Hemisphere seeing forestland expand, but wood is increasingly recognized as one of the most environmentally friendly building materials. At the University of Washington, researchers compared the environmental impact of building with either wood, concrete, or steel. The hands-down winner for lower energy use, less waste and less water use was wood. While concrete and steel can be mined only once, trees are constantly replacing themselves. One thing Seuss got right was that once the Once-ler cut all the trees down, his business went down with them. Foresters understand this. Destroying a forest by cutting down every last tree makes no sense, and so there are more trees in American forests today than there were just a few decades ago. Indeed, the economic value of the trees ensures forests are replanted and available for wildlife and future generations. Even companies not planning on harvesting in 60 years recognize that land with 20-year old trees is more valuable than land with no trees at all. Replanting isn’t just good for the environment, it’s good for business. This is not to say the world’s forests are forever safe, or to dismiss the impact deforestation has on the environment. The enemy in these areas, however, is more likely to be poverty than industry. Few people realize the most common use for trees across the globe is as firewood to heat a home and cook a meal. These trees are not cut down by machines, but by people struggling to meet the needs of daily living. It is true that government regulation of forestry is stricter today than it was forty years ago. It is also true, however, that we are still harvesting a significant amount of wood in the Northern Hemisphere, while preserving vast areas for future generations. Sawmills are making the most of every part of the tree, literally using lasers to measure the best way to saw the log. 
Technology has also helped make effective regulation workable, ensuring that every tree is used wisely and limiting short-term pressures to overharvest. Forty years after he sprang from the imagination of Dr. Seuss, the Lorax would be happy to see that, far from disappearing, many forests today are thriving. They are there because the real story of the forests has not been about an unending battle between the fictional Lorax and the hard-hearted Once-ler, but a friendship that understands that both benefit from healthy forests that future generations can enjoy. Todd Myers is the environmental director at Washington Policy Center. He has more than a decade of experience in environmental policy and is the author of the book Eco-Fads: How the Rise of Trendy Environmentalism Is Harming the Environment. He is a guest contributor for Cascade Policy Institute, Oregon’s free market public policy research center.
Book of Philemon
The Epistle to Philemon is a book of the Bible in the New Testament. Philemon is generally regarded as one of the undisputed works of Paul, and it was most likely written in Rome, around 61-63 AD. It is the shortest of Paul's extant letters, consisting of only 25 verses.
Purpose of the epistle
Paul addressed the epistle specifically to Philemon, whom he greets as his dear friend and fellow worker. Paul appeals directly to Philemon's Christian conscience in asking him to accept the return of Onesimus, a runaway slave of Philemon's. Paul indicates that he converted Onesimus to Christianity (1:10-11), therefore making him "profitable" (or "useful"). Paul implores Philemon to treat Onesimus not as a slave but, like Paul, as a brother in Christ. Additionally, Paul offers to take on all debts and transgressions that Onesimus owed to Philemon, just as Christ took on the sins of Man.
Gregorian chant is the central tradition of Western plainchant, a form of monophonic liturgical music within Western Orthodoxy that accompanied the celebration of Mass and other ritual services. It is named after Pope Gregory I, Bishop of Rome from 590 to 604, who is traditionally credited with having ordered the simplification and cataloging of music assigned to specific celebrations in the church calendar. The resulting body of music is the first to be notated in a system ancestral to modern musical notation. In general, the chants were learned by the viva voce method, that is, by following the given example orally, which took many years of experience in the Schola Cantorum. Gregorian chant originated in monastic life, in which celebrating the 'Divine Office' eight times a day at the proper hours was upheld according to the Rule of St. Benedict. Singing psalms made up a large part of the life in a monastic community, while a smaller group and soloists sang the chants. In its long history, Gregorian chant has been subjected to many gradual changes and some reforms. Gregorian chant was organized, codified, and notated mainly in the Frankish lands of western and central Europe during the 10th to 13th centuries, with later additions and redactions, but the texts and many of the melodies have antecedents going back several centuries earlier. Although popular belief credited Pope Gregory the Great with having personally invented Gregorian chant (in much the same way that a biblical prophet would transmit a divinely received message), scholars now believe that the chant bearing his name arose from a later Carolingian synthesis of Roman and Gallican chant, and that at that time the attribution to Gregory I was a "marketing ruse" to invest it with a sanctified pedigree, as part of an effort to create one liturgical protocol that would be practised throughout the entire Holy Roman Empire. Gregorian chants are organized into eight modes (scales). Typical melodic features include characteristic incipits and cadences, the use of reciting tones around which the other notes of the melody revolve, and a vocabulary of musical motifs woven together through a process called centonization to create families of related chants. Although the modern major and minor scales are strongly related to two of these church modes (the Ionian and Aeolian, respectively), they function according to different harmonic rules.
The church modes are based on six-note patterns called hexachords, the main notes of which are called the dominant and the final. Depending on where the final falls in the sequence of the hexachord, the mode is characterized as either authentic or plagal. Modes with the same final share certain characteristics, and it is easy to modulate back and forth between them; hence, the eight modes fall into four larger groupings based on their finals. The oldest manuscripts of Gregorian chants were written using a graphic notation which uses a repertoire of specific signs called neumes; each neume designates a basic musical gesture (see musical notation). As books, made of vellum (prepared sheepskins), were very expensive, the text was abbreviated wherever possible, with the neumes written over the text. This was a notation without lines and no exact melodic contour could be deciphered from it, which implies that the repertoire was learnt by rote. Gregorian chant was traditionally sung by choirs of men and boys in churches, or by monastics in their chapels, and is commonly heard in celebrations of the Western Rite liturgies. It is the music of the Roman Rite, performed in the Mass and the monastic Office.

Development of earlier plainchant

Singing has been part of the Christian liturgy since the earliest days of the Church. Until the mid-1990s, it was widely accepted that the psalmody of ancient Jewish worship significantly influenced and contributed to early Christian ritual and chant. This view is no longer generally accepted by scholars, due to analysis that shows that most early Christian hymns did not have Psalms for texts, and that the Psalms were not sung in synagogues for centuries after the Destruction of the Second Temple in AD 70. However, early Christian rites did incorporate elements of Jewish worship that survived in later chant tradition. Canonical hours have their roots in Jewish prayer hours. "Amen" and "alleluia" come from Hebrew, and the threefold "sanctus" derives from the threefold "kadosh" of the Kedusha. The New Testament mentions singing hymns during the Last Supper: "When they had sung the hymn, they went out to the Mount of Olives" (Matthew 26:30). Other ancient witnesses such as Pope Clement I, Tertullian, St. Athanasius, and Egeria confirm the practice, although in poetic or obscure ways that shed little light on how music sounded during this period. The 3rd-century Greek "Oxyrhynchus hymn" survived with musical notation, but the connection between this hymn and the plainchant tradition is uncertain. Musical elements that would later be used in the Roman Rite began to appear in the 3rd century. The Apostolic Tradition, attributed to the theologian Hippolytus, attests the singing of Hallel psalms with Alleluia as the refrain in early Christian agape feasts. Chants of the Office, sung during the canonical hours, have their roots in the early 4th century, when desert monks following St. Anthony introduced the practice of continuous psalmody, singing the complete cycle of 150 psalms each week. Around 375, antiphonal psalmody became popular in the Christian East; in 386, St. Ambrose introduced this practice to the West. Scholars are still debating how plainchant developed during the 5th through the 9th centuries, as information from this period is scarce. Around 410, St. Augustine described the responsorial singing of a Gradual psalm at Mass. At ca. 520, Benedictus of Nursia established what is called the rule of St.
Benedict, in which the protocol of the Divine Office for monastic use was laid down. Around 678, Roman chant was taught at York. Distinctive regional traditions of Western plainchant arose during this period, notably in the British Isles (Celtic chant), Spain (Mozarabic), Gaul (Gallican), and Italy (Old Roman, Ambrosian and Beneventan). These traditions may have evolved from a hypothetical year-round repertory of 5th-century plainchant after the western Roman Empire collapsed.

Origins of the new tradition

According to James McKinnon, the core liturgy of the Roman Mass was compiled over a brief period in the 8th century in a project overseen by Chrodegang of Metz. Other scholars, including Andreas Pfisterer and Peter Jeffery, have argued for an earlier origin for the oldest layers of the repertory. Scholars debate whether the essentials of the melodies originated in Rome, before the 7th century, or in Francia, in the 8th and early 9th centuries. Traditionalists point to evidence supporting an important role for Pope Gregory the Great between 590 and 604, such as that presented in Heinrich Bewerunge's article in the Catholic Encyclopedia. Scholarly consensus, supported by Willi Apel and Robert Snow, asserts instead that Gregorian chant developed around 750 from a synthesis of Roman and Gallican chant commissioned by Carolingian rulers in France. During a visit to Gaul in 752–753, Pope Stephen II had celebrated Mass using Roman chant. According to Charlemagne, his father Pepin abolished the local Gallican rites in favor of the Roman use, in order to strengthen ties with Rome. In 785–786, at Charlemagne's request, Pope Hadrian I sent a papal sacramentary with Roman chants to the Carolingian court. This Roman chant was subsequently modified, influenced by local styles and Gallican chant, and later adapted into the system of eight modes. This Frankish-Roman Carolingian chant, augmented with new chants to complete the liturgical year, became known as "Gregorian." Originally the chant was probably so named to honor the contemporary Pope Gregory II, but later lore attributed the authorship of chant to his more famous predecessor Gregory the Great. Gregory was portrayed dictating plainchant inspired by a dove representing the Holy Spirit, giving Gregorian chant the stamp of holy authority. Gregory's authorship is popularly accepted as fact to this day.

Dissemination and hegemony

Gregorian chant appeared in a remarkably uniform state across Europe within a short time. Charlemagne, once elevated to Holy Roman Emperor, aggressively spread Gregorian chant throughout his empire to consolidate religious and secular power, requiring the clergy to use the new repertory on pain of death. From English and German sources, Gregorian chant spread north to Scandinavia, Iceland and Finland. In 885, Pope Stephen V banned the Slavonic liturgy, leading to the ascendancy of Gregorian chant in Eastern Catholic lands including Poland, Moravia, Slovakia, and Austria. The other plainchant repertories of the Christian West faced severe competition from the new Gregorian chant. Charlemagne continued his father's policy of favoring the Roman Rite over the local Gallican traditions. By the 9th century the Gallican rite and chant had effectively been eliminated, although not without local resistance. The Gregorian chant of the Sarum Rite displaced Celtic chant. Gregorian coexisted with Beneventan chant for over a century before Beneventan chant was abolished by papal decree (1058).
Mozarabic chant survived the influx of the Visigoths and Moors, but not the Roman-backed prelates newly installed in Spain during the Reconquista. Restricted to a handful of dedicated chapels, modern Mozarabic chant is highly Gregorianized and bears no musical resemblance to its original form. Ambrosian chant alone survived to the present day, preserved in Milan due to the musical reputation and ecclesiastical authority of St. Ambrose. Gregorian chant eventually replaced the local chant tradition of Rome itself, which is now known as Old Roman chant. In the 10th century, virtually no musical manuscripts were being notated in Italy. Instead, Roman Popes imported Gregorian chant from the German Holy Roman Emperors during the 10th and 11th centuries. For example, the Credo was added to the Roman Rite at the behest of the German emperor Henry II in 1014. Reinforced by the legend of Pope Gregory, Gregorian chant was taken to be the authentic, original chant of Rome, a misconception that continues to this day. By the 12th and 13th centuries, Gregorian chant had supplanted or marginalized all the other Western plainchant traditions. Later sources of these other chant traditions show an increasing Gregorian influence, such as occasional efforts to categorize their chants into the Gregorian modes. Similarly, the Gregorian repertory incorporated elements of these lost plainchant traditions, which can be identified by careful stylistic and historical analysis. For example, the Improperia of Good Friday are believed to be a remnant of the Gallican repertory.

Early sources and later revisions

The first extant sources with musical notation were written around 930 (Graduale Laon). Before this, plainchant had been transmitted orally. Most scholars of Gregorian chant agree that the development of music notation assisted the dissemination of chant across Europe. The earlier notated manuscripts are primarily from Regensburg in Germany, St. Gall in Switzerland, Laon and St. Martial in France. Gregorian chant has in its long history been subjected to a series of redactions to bring it up to changing contemporary tastes and practice. The most recent redaction, undertaken in the Benedictine Abbey of St. Pierre, Solesmes, has turned into a huge undertaking to restore the allegedly corrupted chant to a hypothetical "original" state. Early Gregorian chant was revised to conform to the theoretical structure of the modes. In 1562–63, the Council of Trent banned most sequences. Guidette's Directorium chori, published in 1582, and the Editio medicea, published in 1614, drastically revised what was perceived as corrupt and flawed "barbarism" by making the chants conform to contemporary aesthetic standards. In 1811, the French musicologist Alexandre-Étienne Choron, as part of a conservative backlash following the liberal Catholic orders' inefficacy during the French Revolution, called for returning to the "purer" Gregorian chant of Rome over French corruptions. In the late 19th century, early liturgical and musical manuscripts were unearthed and edited. Earlier, Dom Prosper Gueranger revived the monastic tradition in Solesmes. Re-establishing the Divine Office was among his priorities, but no proper chantbooks existed. Many monks were sent out to libraries throughout Europe to find relevant Chant manuscripts. In 1871, however, the old Medicea edition was reprinted (Pustet, Regensburg), which Pope Pius IX declared the only official version.
In their firm belief that they were on the right way, Solesmes increased its efforts. In 1889, after decades of research, the monks of Solesmes released the first book in a planned series, the Paléographie Musicale. The incentive of its publication was to demonstrate the corruption of the 'Medicea' by presenting photographed notations originating from a great variety of manuscripts of one single chant, which Solesmes called forth as witnesses to assert their own reforms. The monks of Solesmes brought in their heaviest artillery in this battle, as indeed the academically sound 'Paleo' was intended to be a war-tank, meant to abolish once and for all the corrupted Pustet edition. On the evidence of congruence throughout various manuscripts (which were duly published in facsimile editions with ample editorial introductions) Solesmes was able to work out a practical reconstruction. This reconstructed chant was academically praised, but rejected by Rome until 1903, when Pope Leo XIII died. His successor, Pope Pius X, promptly accepted the Solesmes chant — now compiled as the Liber usualis — as authoritative. In 1904, the Vatican edition of the Solesmes chant was commissioned. Serious academic debates arose, primarily owing to stylistic liberties taken by the Solesmes editors to impose their controversial interpretation of rhythm. The Solesmes editions insert phrasing marks and note-lengthening episema and mora marks not found in the original sources. Conversely, they omit significative letters found in the original sources, which give instructions for rhythm and articulation such as speeding up or slowing down. These editorial practices have placed the historical authenticity of the Solesmes interpretation in doubt. Ever since the restoration of Chant was taken up in Solesmes, there have been lengthy discussions of exactly what course was to be taken. Some favored a strict academic rigour and wanted to postpone publications, while others concentrated on practical matters and wanted to supplant the corrupted tradition as soon as possible. Roughly a century later, there still exists a breach between a strict musicological approach and the practical needs of church choirs. Thus the established performance tradition since the onset of the restoration is at odds with musicological evidence. In his motu proprio Tra le sollecitudini, Pius X mandated the use of Gregorian chant, encouraging the faithful to sing the Ordinary of the Mass, although he reserved the singing of the Propers for males. While this custom is maintained in traditionalist Catholic communities, the Catholic Church no longer persists with this ban. Vatican II officially allowed worshipers to substitute other music, particularly sacred polyphony, in place of Gregorian chant, although it did reaffirm that Gregorian chant was still the official music of the Catholic Church, and the music most suitable for worship. Gregorian chant is, of course, vocal music. The text, the phrases, words and eventually the syllables, can be sung in various ways. The most straightforward is recitation on the same tone, which is called "syllabic" as each syllable is sung to a single tone. Likewise, simple chants are often syllabic throughout with only a few instances where two or more notes are sung on one syllable. "Neumatic" chants are more embellished and ligatures, a connected group of notes, written as a single compound neume, abound in the text.
Melismatic chants are the most ornate chants in which elaborate melodies are sung on long sustained vowels as in the Alleluia, ranging from five or six notes per syllable to over sixty in the more prolix melismas. Gregorian chants fall into two broad categories of melody: recitatives and free melodies. The simplest kind of melody is the liturgical recitative. Recitative melodies are dominated by a single pitch, called the reciting tone. Other pitches appear in melodic formulae for incipits, partial cadences, and full cadences. These chants are primarily syllabic. For example, the Collect for Easter consists of 127 syllables sung to 131 pitches, with 108 of these pitches being the reciting note A and the other 23 pitches flexing down to G. Liturgical recitatives are commonly found in the accentus chants of the liturgy, such as the intonations of the Collect, Epistle, and Gospel during the Mass, and in the direct psalmody of the Office. Psalmodic chants, which intone psalms, include both recitatives and free melodies. Psalmodic chants include direct psalmody, antiphonal chants, and responsorial chants. In direct psalmody, psalm verses are sung without refrains to simple, formulaic tones. Most psalmodic chants are antiphonal and responsorial, sung to free melodies of varying complexity. Antiphonal chants such as the Introit and Communion originally referred to chants in which two choirs sang in alternation, one choir singing verses of a psalm, the other singing a refrain called an antiphon. Over time, the verses were reduced in number, usually to just one psalm verse and the Doxology, or even omitted entirely. Antiphonal chants reflect their ancient origins as elaborate recitatives through the reciting tones in their melodies. Ordinary chants, such as the Kyrie and Gloria, are not considered antiphonal chants, although they are often performed in antiphonal style. Responsorial chants such as the Gradual, Alleluia, Offertory, and the Office Responsories originally consisted of a refrain called a respond sung by a choir, alternating with psalm verses sung by a soloist. Responsorial chants are often composed of an amalgamation of various stock musical phrases, pieced together in a practice called centonization. Tracts are melismatic settings of psalm verses that use frequent recurring cadences and are strongly centonized. Gregorian chant evolved to fulfill various functions in the Roman Catholic liturgy. Broadly speaking, liturgical recitatives are used for texts intoned by deacons or priests. Antiphonal chants accompany liturgical actions: the entrance of the officiant, the collection of offerings, and the distribution of sanctified bread and wine. Responsorial chants expand on readings and lessons. The non-psalmodic chants, including the Ordinary of the Mass, sequences, and hymns, were originally intended for congregational singing. The structure of their texts largely defines their musical style. In sequences, the same melodic phrase is repeated in each couplet. The strophic texts of hymns use the same syllabic melody for each stanza. Early plainchant, like much of Western music, is believed to have been distinguished by the use of the diatonic scale. Modal theory, which postdates the composition of the core chant repertory, arises from a synthesis of two very different traditions: the speculative tradition of numerical ratios and species inherited from ancient Greece and a second tradition rooted in the practical art of cantus.
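The syllabic/neumatic/melismatic distinction above is essentially a notes-per-syllable ratio. As a toy illustration (the cutoffs are my approximate reading of the text, not a scholarly standard):

```python
# Sketch: the rough notes-per-syllable boundaries described above,
# turned into a toy classifier. The cutoffs are illustrative guesses.

def chant_style(notes: int, syllables: int) -> str:
    ratio = notes / syllables
    if ratio <= 1.3:       # mostly one note per syllable
        return "syllabic"
    elif ratio < 5:        # small note-groups (ligatures) per syllable
        return "neumatic"
    else:                  # long vocalises on single sustained vowels
        return "melismatic"

# The Easter Collect: 127 syllables sung to 131 pitches
print(chant_style(131, 127))  # syllabic
print(chant_style(240, 60))   # neumatic
print(chant_style(60, 1))     # melismatic (e.g., an Alleluia melisma)
```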
The earliest writings that deal with both theory and practice include the Enchiriadis group of treatises, which circulated in the late ninth century and possibly have their roots in an earlier, oral tradition. In contrast to the ancient Greek system of tetrachords (a collection of four continuous notes) that descend by two tones and a semitone, the Enchiriadis writings base their tone-system on a tetrachord that corresponds to the four finals of chant, D, E, F, and G. The disjunct tetrachords in the Enchiriadis system have been the subject of much speculation, because they do not correspond to the diatonic framework that became the standard Medieval scale (for example, there is a high F#, a note not recognized by later Medieval writers). A diatonic scale with a chromatically alterable b/b-flat was first described by Hucbald, who adopted the tetrachord of the finals (D, E, F, G) and constructed the rest of the system following the model of the Greek Greater and Lesser Perfect Systems. These were the first steps in forging a theoretical tradition that corresponded to chant. Around 1025, Guido d'Arezzo revolutionized Western music with the development of the gamut, in which pitches in the singing range were organized into overlapping hexachords. Hexachords could be built on C (the natural hexachord, C-D-E^F-G-A), F (the soft hexachord, using a B-flat, F-G-A^Bb-C-D), or G (the hard hexachord, using a B-natural, G-A-B^C-D-E). The B-flat was an integral part of the system of hexachords rather than an accidental. The use of notes outside of this collection was described as musica ficta. Gregorian chant was categorized into eight modes, influenced by the eightfold division of Byzantine chants called the oktoechos. Each mode is distinguished by its final, dominant, and ambitus. The final is the ending note, which is usually an important note in the overall structure of the melody. The dominant is a secondary pitch that usually serves as a reciting tone in the melody. Ambitus refers to the range of pitches used in the melody. Melodies whose final is in the middle of the ambitus, or which have only a limited ambitus, are categorized as plagal, while melodies whose final is in the lower end of the ambitus and have a range of over five or six notes are categorized as authentic. Although corresponding plagal and authentic modes have the same final, they have different dominants. The pseudo-Greek names of the modes, rarely used in medieval times, derive from a misunderstanding of the Ancient Greek modes; the prefix "Hypo-" (under, Gr.) indicates a plagal mode, where the melody moves below the final. In contemporary Latin manuscripts the modes are simply called Protus authentus/plagalis, Deuterus, Tritus and Tetrardus: the 1st mode, authentic or plagal, the 2nd mode, etc. In the Roman chantbooks the modes are indicated by Roman numerals.
- Modes 1 and 2 are the authentic and plagal modes ending on D, sometimes called Dorian and Hypodorian.
- Modes 3 and 4 are the authentic and plagal modes ending on E, sometimes called Phrygian and Hypophrygian.
- Modes 5 and 6 are the authentic and plagal modes ending on F, sometimes called Lydian and Hypolydian.
- Modes 7 and 8 are the authentic and plagal modes ending on G, sometimes called Mixolydian and Hypomixolydian.
Although the modes with melodies ending on A, B, and C are sometimes referred to as Aeolian, Locrian, and Ionian, these are not considered distinct modes and are treated as transpositions of whichever mode uses the same set of hexachords.
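The three hexachords just listed all follow one interval pattern: tone-tone-semitone-tone-tone. Here is a toy sketch that generates them using modern equal-tempered pitch-class arithmetic, which is an anachronism for medieval theory but makes the construction concrete:

```python
# Sketch: generate the three Guidonian hexachords from the
# tone-tone-semitone-tone-tone pattern described above.

PITCHES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
STEPS = [2, 2, 1, 2, 2]  # semitone intervals between ut-re-mi-fa-sol-la

def hexachord(root: str) -> list:
    """Build the six hexachord notes starting from the given root."""
    idx = PITCHES.index(root)
    notes = [root]
    for step in STEPS:
        idx = (idx + step) % 12
        notes.append(PITCHES[idx])
    return notes

print(hexachord("C"))  # natural: C D E F G A
print(hexachord("G"))  # hard:    G A B C D E
print(hexachord("F"))  # soft:    F G A A# C D  (A# is the enharmonic of B-flat)
```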
The actual pitch of the Gregorian chant is not fixed, so the piece can be sung in whichever range is most comfortable. Certain classes of Gregorian chant have a separate musical formula for each mode, allowing one section of the chant to transition smoothly into the next section, such as the psalm tones between antiphons and psalm verses. Not every Gregorian chant fits neatly into Guido's hexachords or into the system of eight modes. For example, there are chants — especially from German sources — whose neumes suggest a warbling of pitches between the notes E and F, outside the hexachord system. Early Gregorian chant, like Ambrosian and Old Roman chant, whose melodies are most closely related to Gregorian, did not use the modal system. The great need for a system of organizing chants lies in the need to link antiphons with standard tones, as, for example, in the psalmody at the Office. Using Psalm Tone i with an antiphon in Mode 1 makes for a smooth transition between the end of the antiphon and the intonation of the tone, and the ending of the tone can then be chosen to provide a smooth transition back to the antiphon. As the modal system gained acceptance, Gregorian chants were edited to conform to the modes, especially during 12th-century Cistercian reforms. Finals were altered, melodic ranges reduced, melismas trimmed, B-flats eliminated, and repeated words removed. Despite these attempts to impose modal consistency, some chants — notably Communions — defy simple modal assignment. For example, in four medieval manuscripts, the Communion Circuibo was transcribed using a different mode in each. Several features besides modality contribute to the musical idiom of Gregorian chant, giving it a distinctive musical flavor. Melodic motion is primarily stepwise. Skips of a third are common, and larger skips far more common than in other plainchant repertories such as Ambrosian chant or Beneventan chant. Gregorian melodies are more likely to traverse a seventh than a full octave, so that melodies rarely travel from D up to the D an octave higher, but often travel from D to the C a seventh higher, using such patterns as D-F-G-A-C. Gregorian melodies often explore chains of pitches, such as F-A-C, around which the other notes of the chant gravitate. Within each mode, certain incipits and cadences are preferred, which the modal theory alone does not explain. Chants often display complex internal structures that combine and repeat musical subphrases. This occurs notably in the Offertories; in chants with shorter, repeating texts such as the Kyrie and Agnus Dei; and in longer chants with clear textual divisions such as the Great Responsories, the Gloria, and the Credo. Chants sometimes fall into melodically related groups. The musical phrases centonized to create Graduals and Tracts follow a musical "grammar" of sorts. Certain phrases are used only at the beginnings of chants, or only at the end, or only in certain combinations, creating musical families of chants such as the Iustus ut palma family of Graduals. Several Introits in mode 3, including Loquetur Dominus above, exhibit melodic similarities. Mode III (E authentic) chants have C as a dominant, so C is the expected reciting tone. These mode III Introits, however, use both G and C as reciting tones, and often begin with a decorated leap from G to C to establish this tonality. Similar examples exist throughout the repertory. The earliest notated sources of Gregorian chant (written ca. 950) used symbols called neumes (Gr. for "sign of the hand") to indicate tone-movements and relative duration within each syllable.
The neumes amount to a sort of musical stenography that seems to focus on gestures and tone-movements but not the specific pitches of individual notes, nor the relative starting pitches of each neume. Given the fact that Chant was learned in an oral tradition in which the texts and melodies were sung from memory, this was obviously not necessary. The neumatic manuscripts display great sophistication and precision in notation and a wealth of graphic signs to indicate the musical gesture and proper pronunciation of the text. Scholars postulate that this practice may have been derived from cheironomic hand-gestures, the ekphonetic notation of Byzantine chant, punctuation marks, or diacritical accents. Later adaptations and innovations included the use of a dry-scratched line or an inked line or two lines, marked C or F, showing the relative pitches between neumes. Consistent relative heightening first developed in the Aquitaine region, particularly at St. Martial de Limoges, in the first half of the eleventh century. Many German-speaking areas, however, continued to use unpitched neumes into the twelfth century. Additional symbols developed, such as the custos, placed at the end of a system to show the next pitch. Other symbols indicated changes in articulation, duration, or tempo, such as a letter "t" to indicate a tenuto. Another form of early notation used a system of letters corresponding to different pitches, much as Shaker music is notated. Since the work of Dom E. Cardine (see below under 'rhythm'), ornamental neumes have received more attention from both researchers and performers. B-flat is indicated by a "b-mollum" (Lat. soft), a rounded lowercase 'b' placed to the left of the entire neume in which the note occurs, as shown in the "Kyrie" to the right. When necessary, a "b-durum" (Lat. hard), written squarely, indicates B-natural and serves to cancel the b-mollum. This system of square notation is standard in modern chantbooks.
Mounted arc tube
In some ways, gases are a pain from a sample point of view. With the exception of chlorine and bromine they all look exactly the same: like nothing at all. My beautiful set of noble gas flasks is beautiful because of the flasks, not what's in them, which is indistinguishable from plain air or vacuum. (So much so that I got them for a bargain price because the seller thought they were empty.) But set up an electric current through almost any gas, and things are completely different. The current ionizes the gas, and when the electrons fall back into their orbits, they emit light of very specific frequencies. These spectral lines can easily be seen with even a very cheap pocket spectroscope, and they give the glowing tubes very unusual colors. So unusual in fact that they are basically impossible to photograph. The pictures here simply don't look at all like the real colors of these tubes, which cannot be represented by the limited red, green, and blue mixtures available in computer or printed photographs. David Franco helped arrange these tubes, which were made by a guy who specializes in noble gas tubes and Geissler tubes (click the source link). I have tubes installed in each of the five stable noble gas spots in the table, hooked up underneath to a high voltage transformer. They are really quite beautiful. On my Noble Rack page I have all the pictures collected, along with pictures of arcs I made in my other collection of noble gas flasks.
Source: Special Effects Neon
Contributor: Theodore Gray
Acquired: 22 November, 2002
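The "very specific frequencies" are set by the energy spacing of the atomic levels. As the simplest worked example (hydrogen, since noble-gas spectra are far more complex), the Rydberg formula gives the visible Balmer lines:

```python
# Sketch: emission-line wavelengths from the Rydberg formula.
# Hydrogen is shown because it is the simplest case; the noble gases
# in the tubes above have many more levels and far richer spectra.

RYDBERG = 1.0973731568e7  # Rydberg constant (infinite nuclear mass), 1/m

def emission_wavelength_nm(n_upper: int, n_lower: int) -> float:
    """Wavelength of the photon emitted when an electron falls
    from level n_upper to level n_lower in hydrogen."""
    inv_lambda = RYDBERG * (1.0 / n_lower**2 - 1.0 / n_upper**2)
    return 1e9 / inv_lambda

# Balmer series (transitions down to n=2) gives hydrogen's visible lines:
for n in (3, 4, 5):
    print(f"n={n} -> 2: {emission_wavelength_nm(n, 2):.1f} nm")
# n=3 -> 2: ~656 nm (red H-alpha); n=4 -> 2: ~486 nm (blue-green); ...
```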
Mind and Language 20 (2):241-57 (2005)
Abstract: Anosognosia is literally 'unawareness of or failure to acknowledge one's hemiplegia or other disability' (OED). Etymology would suggest the meaning 'lack of knowledge of disease', so that anosognosia would include any denial of impairment, such as denial of blindness (Anton's syndrome). But Babinski, who introduced the term in 1914, applied it only to patients with hemiplegia who fail to acknowledge their paralysis. Most commonly, this is failure to acknowledge paralysis of the left side of the body following damage to the right hemisphere of the brain. In this paper, we shall mainly be concerned with anosognosia for hemiplegia. But we shall also use the term 'anosognosia' in an inclusive way to encompass lack of knowledge or acknowledgement of any impairment. Indeed, in the construction 'anosognosia for X', X might even be anosognosia for some Y.
Keywords: amnesia, belief, delusion, epistemology, impairment
Similar books and articles
Elizabeth Leritz, Chris Loftis, Greg Crucian, William J. Friedman & Dawn Bowers (2004). Self-Awareness of Deficits in Parkinson Disease. Clinical Neuropsychologist 18 (3):352-361.
Oliver H. Turnbull, Karen Jones & Judith Reed-Screen (2002). Implicit Awareness of Deficit in Anosognosia? An Emotion-Based Account of Denial of Deficit. Comment. Neuro-Psychoanalysis 4 (1):69-86.
Vilayanur S. Ramachandran (1995). Anosognosia in Parietal Lobe Syndrome. Consciousness and Cognition 4 (1):22-51.
Drakon Nikolinakos (2004). Anosognosia and the Unity of Consciousness. Philosophical Studies 119 (3):315-342.
Lisa Bortolotti, Rochelle Cox & Amanda Barnier (2011). Can We Recreate Delusions in the Laboratory? Philosophical Psychology 25 (1):109-131.
Paul M. Jenkinson, Nicola M. J. Edelstyn, Justine L. Drakeford & Simon J. Ellis (2009). Reality Monitoring in Anosognosia for Hemiplegia. Consciousness and Cognition 18 (2):458-470.
Martin Davies, Max Coltheart, Robyn Langdon & N. Breen (2001). Monothematic Delusions: Towards a Two-Factor Account. Philosophy, Psychiatry and Psychology 8 (2-3):133-58.
E. Bisiach & G. Geminiani (1991). Anosognosia Related to Hemiplegia and Hemianopia. In George P. Prigatano & Daniel L. Schacter (eds.), Awareness of Deficits After Brain Injury. Oxford University Press.
Annalena Venneri & Michael F. Shanks (2004). Belief and Awareness: Reflections on a Case of Persistent Anosognosia. Neuropsychologia 42 (2):230-238.
(Phys.org) -- This image, taken by the NASA/ESA Hubble Space Telescope, shows a detailed view of the spiral arms on one side of the galaxy Messier 99. Messier 99 is a so-called grand design spiral, with long, large and clearly defined spiral arms, giving it a structure somewhat similar to the Milky Way. Lying around 50 million light-years away, Messier 99 is one of over a thousand galaxies that make up the Virgo Cluster, the closest cluster of galaxies to us. Messier 99 itself is relatively bright and large, meaning it was one of the first galaxies to be discovered, way back in the 18th century. This earned it a place in Charles Messier's famous catalog of astronomical objects. In recent years, a number of unexplained phenomena in Messier 99 have been studied by astronomers. Among these is the nature of one of the brighter stars visible in this image. Cataloged as PTF 10fqs, and visible as a yellow-orange star in the top-left corner of this image, it was first spotted by the Palomar Transient Facility, which scans the skies for sudden changes in brightness (or transient phenomena, to use astronomers' jargon). These can be caused by different kinds of events, including variable stars and supernova explosions. What is unusual about PTF 10fqs is that it has so far defied classification: it is brighter than a nova (a bright eruption on a star's surface), but fainter than a supernova (the explosion that marks the end of life for a large star). Scientists have offered a number of possible explanations, including the intriguing suggestion that it could have been caused by a giant planet plunging into its parent star. This Hubble image was made in June 2010, during the period when the outburst was fading, so PTF 10fqs's location could be pinpointed with great precision. These measurements will allow other telescopes to home in on the star in future, even when the afterglow of the outburst has faded to nothing. A version of this image of Messier 99 was entered into Hubble's Hidden Treasures Competition by contestant Matej Novak. Hidden Treasures is an initiative to invite astronomy enthusiasts to search the Hubble archive for stunning images that have never been seen by the general public. The competition is now closed and the winners will be announced soon.
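A rough sense of the "brighter than a nova, fainter than a supernova" comparison can be had from the distance modulus. In this sketch the transient's apparent magnitude is a made-up placeholder, not a measured value for PTF 10fqs:

```python
# Sketch: converting apparent to absolute magnitude at the Virgo
# Cluster distance. The apparent magnitude below is hypothetical.
import math

def absolute_magnitude(apparent_mag: float, distance_mpc: float) -> float:
    """Distance modulus: M = m - 5*log10(d / 10 pc)."""
    distance_pc = distance_mpc * 1e6
    return apparent_mag - 5.0 * math.log10(distance_pc / 10.0)

d_virgo = 15.0        # roughly 50 million light-years, in megaparsecs
m_transient = 20.0    # hypothetical apparent magnitude

M = absolute_magnitude(m_transient, d_virgo)
print(f"M = {M:.1f}")  # about -10.9: brighter than classical novae
                       # (around -8) yet far fainter than supernovae (around -19)
```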
(Phys.org) -- An essay on robots by a professor in Japan over 40 years ago has just received its official translation. Many in robotics and other science circles will say better late than never for an official translation of Masahiro Mori's paper, The Uncanny Valley, which was published in a Japanese journal called Energy 42 years earlier. The essay has generated interest about the extent and limitations of making robots more and more human-like in human-robot interaction. An English translation was done in 2005, but the translation that has been authorized and reviewed by Mori was published Tuesday in IEEE Spectrum. "I have noticed that, in climbing toward the goal of making robots appear human, our affinity for them increases until we come to a valley (Figure 1), which I call the uncanny valley." That observation from his original essay is what sparked conversations and interest among robotic designers over the years. Mori maintains that humans are drawn to human-like robots with positive feelings of affinity until the robot moves or reveals itself in such a way that triggers the person's realization that it is not human. Then it becomes uncanny or, in popular-culture terms, creepy. Affinity is lost. In his essay, Mori expressed this experience in a graph, and he also offered an example, the prosthetic hand. The human being gets an eerie sensation, he said, when realizing that the hand is not real. We could be startled during a handshake by its limp boneless grip together with its texture and coldness. When this happens, we lose our sense of affinity, and the hand becomes uncanny. In mathematical terms, this can be represented by a negative value. He adds that when a prosthetic hand that is near the bottom of the uncanny valley starts to move, the sensation of eeriness intensifies. The official translation on Tuesday is accompanied by an interview with Mori, who can look at the validity of his remarks 42 years later, when robotics has gone through so many developments. A counterpoint to the popularity of Mori's essay has been the contention that the essay was an essay, after all, of limited scientific value. Mori said, "I have read that there is scientific evidence that the uncanny valley does indeed exist; for example, by measuring brain waves scientists have found evidence of that. I do appreciate the fact that research is being conducted in this area, but from my point of view, I think that the brain waves act that way because we feel eerie. It still doesn't explain why we feel eerie to begin with." Mori said that pointing out the existence of the uncanny valley was intended as advice for people who design robots rather than a scientific statement itself. Mori said he still thinks that designers should steer clear of making robots too lifelike, falling into the valley. "I have no motivation to build a robot that resides on the other side of the valley. Why do you have to take the risk and try to get closer to the other side?" He said he did not even find it interesting to develop a robot that looks exactly like a human. Mori spoke approvingly of Asimo as invigorating: a robot that invites positive feelings while remaining visibly different from humans. The two translators of the essay are Karl F. MacDorman, associate professor of human computer interaction at the School of Informatics, Indiana University, and Norri Kageki, a journalist who writes about robots.
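Mori's Figure 1 was conceptual rather than quantitative. For readers who want to see the characteristic shape, here is a stylized sketch with an invented functional form (illustration only, not Mori's data; requires matplotlib):

```python
# Sketch: a stylized uncanny-valley curve in the spirit of Mori's
# Figure 1. The functional form is invented purely for illustration;
# Mori's original graph was conceptual, not an equation.
import numpy as np
import matplotlib.pyplot as plt

h = np.linspace(0.0, 1.0, 400)  # human likeness, from machine-like (0) to human (1)
affinity = h - 1.2 * np.exp(-((h - 0.85) / 0.06) ** 2)  # sharp dip near "almost human"

plt.plot(h, affinity)
plt.axhline(0.0, linewidth=0.5)        # negative affinity = eeriness
plt.xlabel("human likeness")
plt.ylabel("affinity")
plt.title("Stylized uncanny valley (illustrative only)")
plt.show()
```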
More information: spectrum.ieee.org/automaton/robotics/humanoids/the-uncanny-valley
Statistical modeling could help us understand cosmic acceleration (December 24, 2010)
(PhysOrg.com) -- While it is generally accepted by scientists that the universe is expanding at an accelerated rate, there are questions about why this should be so. For years, scientists have been trying to determine the cause of this behavior. One of the theories is that dark energy could be the cause of cosmic acceleration. In order to test theories of dark energy, a group at Los Alamos National Laboratory in New Mexico and the University of California, Santa Cruz came up with a technique designed to test different models of dark energy. We are trying to investigate what could be behind the accelerated expansion of the universe, Katrin Heitmann, one of the Los Alamos scientists, tells PhysOrg.com. Our technique is based on data, and can be used to evaluate different models. Heitmann and her collaborators created their method based on Gaussian process modeling; the implementation was led by Tracy Holsclaw from UC Santa Cruz. We're using statistical methods rather than trying to come up with different models. Our process takes data from different sources and then uses it to look for certain deviations from what we assume in a cosmological constant. The group's work can be seen in Physical Review Letters: Nonparametric Dark Energy Reconstruction from Supernova Data. Many scientists think that dark energy is driving the accelerated expansion of the universe, Heitmann says. If this is the case, it is possible to characterize it via its equation of state w(z). The redshift evolution of the equation of state parameter w(z) would show some indication of a dynamical origin of dark energy. Heitmann points out that in such a case, there could be an infinite number of models. We can't test all those models, she says, so we have to do an inverse problem. We have data and we can characterize the underlying cause of the accelerated expansion. It assumes that w is a smoothly varying function, and a dynamical dark energy theory would fit that. We can use data and analyze it to see if we can find indications that dark energy really is behind accelerated expansion. The Los Alamos and University of California, Santa Cruz team first tested their statistical technique on simulated data in order to see whether the method was reliable. After we saw that it was, Heitmann says, we tried it on currently available supernova data. So far, their analysis has not revealed that a dynamical dark energy is behind the accelerated expansion (the cosmological constant is a very special case of dark energy and is still in agreement with the data), but Heitmann doesn't think that means that the door is closed on dynamical dark energy theories as the cause of acceleration in the expanding universe. The data so far is limited, and better data is coming in every day, she says. Additionally, the group hopes to include other data in their statistical analyses. Our technique allows for the input of data from cosmic microwave background and baryon acoustic oscillations as well, and that's what we want to add in next. If this technique does identify a dynamical dark energy as the reason behind accelerated expansion of the universe, it could mean revisiting the basics of what we know about the workings of the universe. If we do find the time dependence that supports the idea of dark energy as this mechanism, then we can go back to the theory approach.
We would have an idea of which models could better explain the universe's expansion history and ultimately develop a self-consistent theory with no ad hoc assumptions. More information: Tracy Holsclaw, Ujjaini Alam, Bruno Sansó, Herbert Lee, Katrin Heitmann, Salman Habib, and David Higdon, Nonparametric Dark Energy Reconstruction from Supernova Data, Physical Review Letters (2010). Available online: link.aps.org/doi/10.1103/PhysRevLett.105.241302
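To make the "inverse problem" idea concrete, here is a miniature Gaussian-process regression in the spirit of the method described above. This is emphatically a sketch: the kernel, its hyperparameters, and the mock w(z) data are all invented for illustration, not the authors' actual analysis pipeline.

```python
# Sketch: GP regression of a smooth function from noisy points, the
# statistical idea behind nonparametric w(z) reconstruction. All
# numbers here (kernel settings, mock data) are illustrative only.
import numpy as np

def rbf_kernel(x1, x2, amp=0.5, scale=0.5):
    """Squared-exponential covariance: encodes the assumption that
    w(z) is a smoothly varying function."""
    return amp**2 * np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / scale) ** 2)

rng = np.random.default_rng(0)
z_obs = np.linspace(0.0, 1.5, 20)                 # mock supernova redshifts
w_true = -1.0 * np.ones_like(z_obs)               # cosmological constant: w = -1
w_obs = w_true + 0.05 * rng.standard_normal(20)   # add observational noise

z_grid = np.linspace(0.0, 1.5, 100)
K = rbf_kernel(z_obs, z_obs) + 0.05**2 * np.eye(20)  # data covariance + noise
K_star = rbf_kernel(z_grid, z_obs)

# GP posterior mean with prior mean w = -1, conditioned on the mock data:
w_mean = -1.0 + K_star @ np.linalg.solve(K, w_obs + 1.0)

print(w_mean[:5])  # reconstructed w(z) near z = 0; hovers around -1,
                   # i.e., no sign of a dynamical dark energy in this mock set
```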
In the Karlsruhe physics course one defines the term "substance-like" quantity. Let me cite the definition from a paper by Falk, Herrmann and Schmid: "There is a class of physical quantities whose characteristics are especially easy to visualize: those extensive physical quantities to which a density can be assigned. These include electric charge, mass, amount of substance (number of particles), and others. Because of the fundamental role these quantities play throughout science and because such quantities can be distributed in and flow through space, we give them a designation of their own: substance-like." Are there examples of extensive quantities which are not substance-like? I think volume is one example, since it seems to make no sense to assign a density to it; are there others? Now the authors write that a quantity can only be conserved if it is substance-like. Let me cite this from another publication: F. Herrmann writes: "It is important to make clear that the question of conservation or non-conservation only makes sense with substance-like quantities. Only in the context of substance-like quantities does it make sense to ask the question of whether they are conserved or not. The question makes no sense in the case of non-substance-like quantities such as field strength or temperature." So my second question is: Why does a conserved quantity have to be substance-like? It would be great if one could give me a detailed explanation (or a counterexample if one thinks the statement is wrong). Are there resources where the ideas cited above are introduced with some higher degree of detail and rigour?
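For what it's worth, one standard way to make Herrmann's claim precise (my own gloss, not a quotation from the Karlsruhe authors) is that a substance-like quantity is exactly one for which a local balance equation can be written, and conservation is then the statement that its production term vanishes:

```latex
% A substance-like quantity X has a density \rho_X and a current
% density \vec{j}_X, so a balance equation for a region V exists:
\frac{\mathrm{d}}{\mathrm{d}t}\int_V \rho_X \,\mathrm{d}V
  = -\oint_{\partial V} \vec{j}_X \cdot \mathrm{d}\vec{A}
    + \int_V \sigma_X \,\mathrm{d}V ,
\qquad\text{or locally:}\qquad
\frac{\partial \rho_X}{\partial t} + \nabla\cdot\vec{j}_X = \sigma_X .
% X is conserved exactly when the production rate \sigma_X vanishes
% identically (as for electric charge). For a non-substance-like
% quantity such as temperature, no \rho_X or \vec{j}_X exists, so the
% conservation statement cannot even be formulated.
```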
Major Section: DOCUMENTATION

ACL2 documentation strings make special use of the tilde character (~). In particular, we describe here a ``markup language'' for which the tilde character plays a special role. The markup language is valuable if you want to write documentation that is to be displayed outside your ACL2 session. If you are not writing such documentation, and if also you do not use the character `~', then there is no need to read on.

Three uses of the tilde character (~) in documentation strings are as follows; below we explain the uses that constitute the ACL2 markup language, while the other uses of the tilde character are of the following form.

~/ Indicates the end of a documentation section; see doc-string.

~~ Indicates the literal insertion of a tilde character (~).

~] This directive in a documentation string is effective only during the processing of part 2, the details (see doc-string), and controls how much is shown on each round of more processing when printing to the terminal. If the system is not doing more processing, then it acts as though the ~] is not present. Otherwise, the system puts out a newline and halts documentation printing on the present topic, which can be resumed if the user types :more at the terminal.

The remaining uses all have the form ~key[arg]. Before launching into an explanation of how this works in detail, let us consider some small examples.

Here is a word that is code:
  ~c[function-name].
Here is a phrase with an ``emphasized'' word, ``not'':
  Do ~em[not] do that.
Here is the same phrase, but where ``not'' receives stronger emphasis (presumably boldface in a printed version):
  Do ~st[not] do that.
Here is a passage that is set off as a display, in a fixed-width font:
  ~bv
  This passage has been set off as ``verbatim''.
  The present line starts just after a line break. Normally, printed
  text is formatted, but inside ~bv...~ev, line breaks are taken literally.
  ~ev

In general, the idea is to provide a ``markup language'' that can be reasonably interpreted not only at the terminal (via doc), but also via translators into other languages. In fact, translators have been written into Texinfo and HTML.

Let us turn to a more systematic consideration of how to mark text in documentation strings using expressions of the form ~key[arg], which we will call ``doc-string tilde directives.'' The idea is that key informs the documentation printer (which could be the terminal, a hardcopy printer, or some hypertext tool) about the ``style'' used to display arg. The intention is that each such printer should do the best it can. For example, we have seen above that ~em[arg] tells the printer to emphasize arg if possible, using an appropriate display to indicate emphasis (italics, or perhaps surrounding arg with some character _, or ...). For another example, the directive for bold, ~b[arg], says that printed text for arg should be in bold if possible, but if there is no bold font available (such as at the terminal), then the argument should be printed in some other reasonable manner (for example, as ordinary text). The key is case-insensitive; for example, you can use ~BV or ~Bv or ~bV in place of ~bv.

Every form below may have any string as the argument (inside [..]), as long as it does not contain a newline (more on that below). However, when an argument does not make much sense to us, we show it below as the empty string, e.g., ``~-[]''.

~- Print the equivalent of a dash
~b[arg] Print the argument in bold font, if available
~bid[arg] ``Begin implementation dependent'' -- ignores argument at terminal
~bf Begin formatted text (respecting spaces and line breaks), but in ordinary font (rather than, say, fixed-width font) if possible
~bq Begin quotation (indented text, if possible)
~bv Begin verbatim (print in fixed-width font, respecting spaces and line breaks)
~c[arg] Print arg as ``code'', such as in a fixed-width font
~ef End format; balances ~bf
~eid[arg] ``End implementation dependent'' -- ignores argument at terminal
~em[arg] Emphasize arg, perhaps using italics
~eq End quotation; balances ~bq
~ev End verbatim; balances ~bv
~i[arg] Print arg in italics font
~id[arg] ``Implementation dependent'' -- ignores argument at terminal
~il[arg] Print argument as is, but make it a link (for true hypertext environments)
~ilc[arg] Same as ~il[arg], except that arg should be printed as with ~c[arg]
~l[arg] Ordinary link; prints as ``See :DOC arg'' at the terminal (but also see ~pl below, which puts ``see'' in lower case)
~nl Print a newline
~par Paragraph mark, of no significance at the terminal (can be safely ignored; see also notes below)
~pl[arg] Parenthetical link (borrowing from Texinfo): same as ~l[arg], except that ``see'' is in lower case. This is typically used at other than the beginning of a sentence.
~sc[arg] Print arg in (small, if possible) capital letters
~st[arg] Strongly emphasize arg, perhaps using a bold font
~t[arg] Typewriter font; similar to ~c[arg], but leaves less doubt about the font that will be used
~terminal[arg] Terminal only; arg is to be ignored except when reading documentation at the terminal, using :DOC

Style notes and further details

It is not a good idea to put doc-string tilde directives inside ~bv ... ~ev.

Do not nest doc-string tilde directives; that is, do not write
  The ~c[~il[append]] function ...
but note that the ``equivalent'' expression
  The ~ilc[append] function ...
is fine. The following phrase is also acceptable:
  ~bf
  This is ~em[formatted] text.
  ~ef
because the nesting is only conceptual, not literal.

We recommend that the ``begin'' and ``end'' directives for displayed text should usually each be on lines by themselves. That way, printed text may be less encumbered with excessive blank lines. Here is an example:

  Here is some normal text. Now start a display:
  ~bv
  2 + 2 = 4
  ~ev
  And here is the end of that paragraph.

  Here is the start of the next paragraph.

The analogous consideration applies to ~bf ... ~ef as well as ~bq ... ~eq.

You may ``quote'' characters inside the arg part of ~key[arg], by preceding them with ~. This is, in fact, the only legal way to use a newline character or a right bracket (]) inside the argument to a doc-string tilde directive.

Write your documentation strings without hyphenation. Otherwise, you may find your text printed on paper (via TeX, for example) like this --
  Here is a hyphe- nated word.
even if what you had in mind was:
  Here is a hyphenated word.
When you want to use a dash (as opposed to a hyphen), consider using ~-, which is intended to be interpreted as a ``dash.'' For example:
  This sentence ~- which is broken with dashes ~- is boring.
would be written to the terminal (using doc) by replacing ~- with two hyphen characters, but would presumably be printed on paper with a dash.

Be careful to balance the ``begin'' and ``end'' pairs, such as ~bv and ~ev. Also, do not use two ``begin'' directives (e.g., two ~bv) without an intervening ``end'' directive. It is permissible (and perhaps this is not surprising) to use the doc-string part separator ~/ between such a begin-end pair.
Because of a bug in Texinfo (as of this writing), you may wish to avoid beginning a line with (any number of spaces followed by) the - character.

The ``paragraph'' directive, ~par, is rarely if ever used. There is a low-level capability, not presently documented, that interprets two successive newlines as though they were ~par; this is useful for the HTML driver. For further details, see the authors of ACL2.

Emacs code is available for manipulating documentation strings that contain doc-string tilde directives (for example, for doing a reasonable job filling such documentation strings). See the authors if you are interested.

We tend to use ~em[arg] for ``section headers,'' such as ``Style notes and further details'' above. We tend to use ~st[arg] for emphasis of words inside text. This division seems to work well for our Texinfo driver. Note that at the terminal one of these directives causes arg to be printed in upper case, while the other causes arg to be printed as though arg were not marked for emphasis.

Our Texinfo and HTML drivers both take advantage of capabilities for indicating which characters need to be ``escaped,'' and how. Unless you intend to write your own driver, you probably do not need to know more about this issue; otherwise, contact the ACL2 authors. We should probably mention, however, that Texinfo makes the following requirement: when using one of its special characters (such as }), you must immediately follow this use with a period or comma. Also, the Emacs ``info'' documentation that we generate by using our Texinfo driver has the property that in node names, the : character has been replaced (because of quirks in info); so, for example, ``proof-checker'' commands are documented under the modified names rather than under the original ones.

We have tried to keep this markup language fairly simple; in particular, there is no way to refer to a link by other than the actual name. So, for example, when we want to make an invisible link in ``code'' font, we write a form indicating that the name should be printed in that font and should also be an invisible link.
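To see several directives working together, here is a small hypothetical fragment of marked-up documentation text (my own example, not taken from the ACL2 sources; in particular, the exact argument conventions for begin/end directives such as ~bv should be checked against the doc-string documentation):

```
Use ~c[append] to concatenate two lists; ~pl[append] for details.
Do ~em[not] nest directives, and balance each begin/end pair:
~bv
ACL2 !>(append '(1 2) '(3 4))
(1 2 3 4)
~ev
A literal tilde is written as ~~, and ~nl forces a newline.
```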
“Understanding which species are most vulnerable to human impacts is a prerequisite for designing effective conservation strategies. Surveys of terrestrial species have suggested that large-bodied species and top predators are the most at risk, and it is commonly assumed that such patterns also apply in the ocean. However, there has been no global test of this hypothesis in the sea. We analyzed two fisheries datasets (stock assessments and landings) to determine the life-history traits of species that have suffered dramatic population collapses. Contrary to expectations, our data suggest that up to twice as many fisheries for small, low trophic-level species have collapsed compared with those for large predators. These patterns contrast with those on land, suggesting fundamental differences in the ways that industrial fisheries and land conversion affect natural communities. Even temporary collapses of small, low trophic-level fishes can have ecosystem-wide impacts by reducing food supply to larger fish, seabirds, and marine mammals.” Access the full article here. Access more articles here. Source: Sea Web Marine Science Review – 7 September 2012
Recently, the U.S. Preventive Services Task Force (USPSTF), a government-funded group of independent experts, conducted a comprehensive review of the available data on ways to detect maltreatment of children. In a sobering acknowledgment, the USPSTF believes that there is not much that can be done to detect cases of child maltreatment that aren’t glaringly obvious. There’s simply not enough research to make a case for advising physicians to take specific actions during well-child visits, for example, to help determine which children are at risk. In 2010, nearly 700,000 children were victims of abuse and neglect; 1,537 of them died. [From: Child Abuse: Why It’s So Hard to Determine Who’s at Risk, January 23, 2013] The report continued: The researchers at OHSU analyzed 11 studies that evaluated the effectiveness of child abuse and neglect prevention programs or interventions that took place in clinics — such as meetings with a social worker, for example. They gave parents questionnaires that assessed such risk factors as substance abuse, depression, stress and attitudes toward physical punishment — as well as noting whether parents were concerned that their child may have been physically or sexually abused. Doctors discussed the risk factors with parents and referred them to social workers if needed. After three years, researchers found that parents who took part in risk assessments and received social work referrals, if necessary, had decreased incidences of abuse, fewer reports to Child Protective Services (CPS) and better adherence to immunization schedules. And still, no official correlation was made as to which parenting preparation programs work better: those made available through adoption agencies, or those made available by non-profit programs like Nurse Partnership Programs for at-risk first-time mothers. Contrary to the report findings, I believe there is an easy way to determine who is at highest risk of experiencing hidden domestic violence and child abuse. Based on a 2006 study conducted in Australia, a staggering statistic was revealed: children under five living with a non-biological or step-parent are up to 77 times more likely to die from a violence-related injury than those living with their biological families. I think for the sake of abuse study and domestic violence prevention, and for the sake of a child's best interests, it really behooves those concerned about the rise of child abuse rates in the USA to keep a closer eye on today's foster/adoptive home, and on those seeking the adoption option for themselves. Interestingly, while I have found many abuse studies correlate child abuse/neglect with drug and alcohol abuse and family structure breakdown, few mention the effect poor parent preparation has on child safety and wellness in the home. And so the cycle of unrecognized abuse continues, especially where one or both parents caring for a child are rather unprepared and clueless when it comes to demonstrating good positive parenting skills -- the type of skills that help raise a child from being a dependent, needy creature to becoming an adult who is loving, confident, and very capable of independent living. Recently I myself have been posting many pieces related to the un-fitness of a foster/adoptive parent, and how this itself is creating an alarming end-result in the form of abused adoptees.
[See: Stigmas and reputations that need to be clarified and Discrimination in Adoptionland is NOT a bad thing] In private, I have been receiving more and more complaints coming from adoptive mothers claiming they don't know what to do with the adopted problem-child in their lives, or from those who don't know how to support out-of-control Amothers because those mothers seem so clueless when it comes to meeting the most basic needs of an adopted child. In turn, the women who contact me seek parenting advice and my personal opinion, because I have become so vocal via the PPL pages. Not surprisingly, I find many of these overwhelmed women do not like my honest responses, even if all claim they respect the blunt angry adoptee's POV. My response to them reflects all that I myself post here on PPL; I remind them that, as mothers, they own a major ongoing responsibility to the relationship that makes a promise to a child.... that promise being, "I will not leave you, like others did". My general response to most also includes the following observations and comments: - Adoption is a choice, and a major sacrifice. As such, much independent thought, investigation and study should go into what it means to become an adoptive parent BEFORE one foolishly falls in love with a photo. - Adoption requires a significant amount of teaching and instruction, not to mention follow-up monitoring and guidance, because today's adoptable children are far more troubled and traumatized than most want to think or believe. - Adapting to adoption, for the child, is not an easy process. It can take years (if not a lifetime) to "get over" what caused the adoption relationship in the first place. This difficulty in adapting can easily manifest itself in "unwanted behaviors". It's the AP's job to accept and assist in the process, not punish the adoptee for negative opinions. - Parenting, no matter how good or bad the child, IS NOT EASY. Only a fool thinks being a parent is going to be a breeze. As a mother myself, I can't count how many times I wanted to take a pick-ax to my head. Why should Amothers not feel the same? Good parenting, for all, requires an ability to teach with love, forgiveness, and acceptance. Anything less is not as good. - Adoption requires ongoing support in the form of talk/behavioral therapy, both for the AP AND the adoptee, as exampled in the article, Romanian foster care: equipping carers to help challenging children. After spending seven years at PPL, posting as I do, what type of Amothers contact me? I'd like to introduce PPL readers to the three general types of Amothers who, in spite of really good intentions, became clueless, unprepared APs, which roughly translates into this: an adoptive parent who is a real, dangerous hazard to the adopted child cursed with many complex "special needs". I will refer to these types of Amothers as: In order to get a better understanding of fail-fueled adoptive parenting, one must know a little background information about each. Each woman represents very different backgrounds and lifestyles, showing us just how diverse the adoptive mother population really is these days. Each represents the new "normal" we see in marital/sexual relationships found in the USA. One is in a traditional marriage (heterosexual); one is in an "untraditional" marriage (same-sex); and one represents the modern-day spinster -- the older single woman who decided creating a family for herself does not require marriage, first.
In terms of their own childhood experiences, one Am came from a very abusive/dysfunctional family. The parenting role-modeling was so bad, she was repulsed by her own genetic material and the idea of reproducing. It's fascinating to note how she thought putting an end to biologic transfer would put an end to pathological parenting, as if learned behaviors could not be passed on to a child, biologically related or not. Another Am came from an unremarkable family, and the third Am came from "the greatest parents in the world"; according to her, she had the best childhood any person could ask for, somehow making her most fit to parent a child who turned out to be nothing like any child found in her family. In spite of these major differences, these three very different women share some very significant commonalities. Each entered adoption with the belief that she had all that's needed to be a really great (adoptive) mom. Each adopted a child with the conviction that all a needy child ("orphan") needs is love, and love is enough to make the parent-child relationship thrive and reward itself. Each adopted at least one "orphan" with very complex "special needs". Each chose her adoption agency with the same belief and confidence: "this agency will provide all the information, tools, and support I need to help me make the perfect family, through adoption". Each used a private adoption agency, one that specialized in ICA. Sadly, and not all that surprising to me, each found herself in the all-too-familiar scenario found in adoption relationships that often end in disruption. Each adoptive mother freely admitted she was unable to bond with one or more adopted children, claiming "The child is too much; he/she scares me; the child is too difficult". For these women, and so many adopters just like them, the "forever family" complete with "unconditional love" (promised through an adoption agreement) has become contingent upon one thing: the adopted child's behavior has to be "good", and not at all scary or too demanding or difficult. What baffles me is, with so much information now available through the Internet and various adoption websites/support groups, how is today's PAP so unprepared and clueless? How is it possible for any PAP to lack a decent understanding of core adoption issues (like how stress affects the traumatized child) and what it takes to properly parent today's "orphan" sent from abroad? While I could revert to my old ways, and simply hate all adopters, I feel it's important to share what it is I have learned through PPL and the stories shared with me by some really good (patient!!) Amoms. The shortcomings found in the overwhelmed Amothers I chose to write about were made worse by and through the private adoption agencies they used, and the American Adoption Industry as a whole. The list of failures begins with the absence of the simplest of all parent-teaching lessons all PAPs need to know and recognize as seriously significant: To my knowledge, no agency addresses abuse statistics as they relate to the female-child relationship. Not one Amother who contacts me has any knowledge of just how easy it is for a woman to abuse a child with seemingly willful "bad behavior", and this blows my mind because it shows how little women know about the reasons and causes of violence against children.
It then comes as no surprise to me that not one Amother who contacts me is familiar with Lloyd deMause (an adoptive father) and his work, The History of Child Abuse and The Evolution of Childhood, two VERY compelling reads, since the first piece begins with the following: In several hundred studies published by myself and my associates in The Journal of Psychohistory, we have provided extensive evidence that the history of childhood has been a nightmare from which we have only recently begun to awaken. The further back in history one goes--and the further away from the West one gets--the more massive the neglect and cruelty one finds and the more likely children are to have been killed, rejected, beaten, terrorized and sexually abused by their caretakers. Indeed, my conclusion from a lifetime of psychohistorical study of childhood and society is that the history of humanity is founded upon the abuse of children. Just as family therapists today find that child abuse often functions to hold families together as a way of solving their emotional problems, so, too, the routine assault of children has been society's most effective way of maintaining its collective emotional homeostasis. Most historical families once practiced infanticide, erotic beating and incest. Most states sacrificed and mutilated their children to relieve the guilt of adults. Even today, we continue to arrange the daily killing, maiming, molestation and starvation of children through our social, military and economic activities. I would like to summarize here some of the evidence I have found as to why child abuse has been humanity's most powerful and most successful ritual, why it has been the cause of war and social violence, and why the eradication of child abuse and neglect is the most important social task we face today. How does this all fit with other failures found in and through the adoption process? In my mind, any adoption agency that does not help educate PAPs about the effects poor parenting has on a child, and any agency that does not help prepare an AP for the stress and strain child behavior can bring (and easily trigger in an unprepared parent), is not ensuring that a child's best interests and greatest needs (safety and guidance) will be met through adoption. In AmK's case, the very reputable adoption agency (with a very long history of great success with foreign adoptions) failed both the Amother and the Achildren put in her care, in many ways. First, this agency encouraged her to become a mega-adopter. (At no point did they tell this married woman that 3, 4, 5, 6, 7 children with extensive "special needs" were too many for two average adults to handle.) Second, this agency did not tell her that the third child she was going to receive had been sexually abused, repeatedly. She was approved over and over again to "save orphans", but she was given NO TOOLS, NO GUIDANCE, NO PREPARATION in terms of what is needed to help heal and re-mold such a wounded child. Instead, she was encouraged to adopt MORE children with "special needs". The end-result was tragic, yet not at all surprising: the most difficult child, the one with the most complex emotional needs, not only became sexual with the family pet, but also with the youngest child, the one with the most physical deformities. That third adopted child among seven was eventually sent to live in an RTC. If and when he will get out has yet to be determined.
All children in that home had to endure what should have been prevented by the very "reputable" adoption agency. In AmL's case, the private adoption agency she used promoted itself as an agency that was going to help save abandoned orphans and promote single-parent/GLBT adoptions. The two children put in her care have very different personalities and needs. The oldest child is male, and an obvious favorite to the women in charge. His temper tantrums are many; their excuses for his behaviors are shameful. The younger child, a girl, has been almost forgotten. Her neglect and feelings of displacement manifest themselves when she is at other people's homes. [She has become a real social terror.] Home life is rather traditional: one partner works full-time, assuming the more traditional "male" role, while the other stays home, favoring the son. No agency rep visits this family to see how many times they have moved, switched schools, and changed various parts of their lives, all to please the young, spoiled, unhappy prince they have at home. I strongly believe it will only be a matter of time before the neglected little girl acts out more, no doubt "shocking" both clueless women approved to adopt when they should otherwise have been told "No!". But try warning them about that... AmM, in my mind, represents the worst and most typical of unprepared, clueless APs out there. She asserts herself as the victim of a difficult adoption, and is in constant need of sympathy. She claims she has read all the best adoption books, has spoken to all the social workers, and has consulted every AP she knew and knows as to how to parent an angry adoptee. The adopted child was abandoned by his birthmother; before living in care, he lived on the streets. He was moved to America, thanks to this single woman's dream to have a child who would love her unconditionally. Since his arrival, he has been bullied at school; he has been made fun of by others because he has an accent and is not as quick as fellow students, making work at school very difficult. As a single parent, with no family support system, she has been left alone to fill all roles and meet the many needs a young boy in that situation really has. She is expected to be the mothering nurturer, the mentoring male figure and the round-the-clock caretaker, all while she goes to work to earn an income for herself and her son. She gets no breaks, and is unable to recognize how firm boundaries and set limits are an act of love to a child living in single-parent chaos. While I really empathize with her many difficult struggles, my empathy is limited since she, more than the other two, sees herself as the victimized martyr who never asked to be hated by the child who never asked to be "saved" by a single-female American adopter. It's hard to pity the woman who failed to see how difficult single-parenting a child (now a growing teenage boy) would be for her and the child who was handed a rough life. As a result, many of his own unresolved abandonment/adoption issues have morphed into something much bigger and more complicated. It's hard to support the Amother who does not want to follow advice that involves more work and therapy, but instead complains how all she wanted was love, not the "scary bully" she got through an adoption plan. To date, she's torn: does she use the ever-popular underground adoptive-parent networks that help re-home unwanted adoptees, or does she stay with her "forever" son, even if doing so would require a lot more work from her?
These are difficult choices for a woman who had a dream childhood but now finds her own dream family (made possible through private adoption) got too rough and out of control, and not at all as described by the pro-adoption brochures promoted by the community she molded herself into. Truth be told, it would not surprise me one bit if the AmL and AmM types are blogging all their woes on the Internet, earning sympathy and really bad parenting advice from other APs, which will only make matters worse, not better, for their older adopted children. And yet where are the private adoption agencies? They are doing well. Whether the adoption facilitators are still with the original agencies, or have moved to another popular child-trade group (working hard to maintain/increase adoption sales and profits), you can damn well bet more child-trade agreements will fall into one of the above-described arrangements, thanks to the government's inability to recognize where the high-risk groups of child abuse and parental neglect exist. Is this more than a little frustrating for the adult adoptee to know and witness? You bet.
The induction lamp is one of the newer technologies in lighting. This new-tech lamp offers high efficacy and a very long life. With a shape similar to that of an incandescent lamp, it is useful in a variety of applications:
- Hard-to-reach areas
- Lobbies and atriums
- Decorative street lighting
- Hazardous areas

Compared with the incandescent lamp, the induction bulb is about four times as efficient and lasts over 40 times longer. In fact, the lamp and generator system are rated at 100,000 hours, as compared to incandescents, which burn for only 750 to 2,500 hours. When compared to metal halide lamps of similar wattages, the induction lamp lasts up to 10 times longer, has higher color rendering, starts instantly, and does not require the warm-up period of metal halide lamps. The induction lamp operates without an electrode. At the center of the lamp is the induction coil, powered by an electronic unit at the base of the lamp, which generates a magnetic field; this coil is sometimes referred to as an energy-coupling antenna. The glass assembly surrounding the induction coil contains a mercury electron-ion plasma, which is energized by the magnetic field, producing UV light. The inner portion of the glass is lined with a phosphor coating, very similar to that in fluorescent lamps, which converts the UV into visible light. The induction lamp offers many features that make it an attractive light source. With such a long rated life, these lamps seldom need replacing, rendering them virtually maintenance-free. This is particularly useful in applications where lamp replacement is cumbersome and expensive, as in some outdoor applications and in hard-to-reach areas. The induction lamp is also durable, and its light output is not significantly influenced by ambient temperature. This makes the induction lamp ideal for outdoor applications, where durability is certainly a high priority.
- Ultra-long life -- 100,000 hours rated life
- White light, excellent color rendering (80+ CRI) with choice of color temperatures
- No color shift
- High reliability -- instant start when cold and re-start when hot
- Low EMI -- complies with FCC Non-Consumer Limits
July 24, 2012 Any positive integer can be represented as the sum of one or more non-consecutive Fibonacci numbers. For instance, 100 = 3 + 8 + 89. Note that 100 can also be written using Fibonacci sums as 89 + 8 + 2 + 1 or 55 + 34 + 8 + 3, but those use consecutive Fibonacci numbers (2 + 1 for the first representation, 55 + 34 for the second). Belgian mathematician Edouard Zeckendorf proved that such a representation is unique. Zeckendorf representations can easily be found by a greedy strategy: start with the largest Fibonacci number less than or equal to the target number, then choose the largest Fibonacci number less than or equal to the remainder after subtracting the first number, and so on, stopping when the remainder is a Fibonacci number itself. Your task is to write a function that finds the Zeckendorf representation of a positive integer. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
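Here is a minimal sketch of the greedy strategy in Python (my own illustration, not the site's suggested solution):

```python
def zeckendorf(n):
    """Return the Zeckendorf representation of positive integer n
    as a list of non-consecutive Fibonacci numbers, largest first."""
    if n < 1:
        raise ValueError("n must be a positive integer")
    fibs = [1, 2]                        # Fibonacci numbers 1, 2, 3, 5, ...
    while fibs[-1] <= n:                 # grow the list past n
        fibs.append(fibs[-1] + fibs[-2])
    rep, remainder = [], n
    for f in reversed(fibs):             # greedy: biggest fit first
        if f <= remainder:
            rep.append(f)
            remainder -= f
    return rep

assert zeckendorf(100) == [89, 8, 3]
```

Taking the largest Fibonacci number that fits guarantees non-consecutiveness: once F(k) is chosen, the remainder is less than F(k-1), so the next number taken is at most F(k-2).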
- Main article: Inductive and deductive reasoning

In traditional Aristotelian logic, deductive reasoning is inference in which the conclusion is of no greater generality than the premises, as opposed to inductive reasoning, where the conclusion is of greater generality than the premises. Other theories of logic define deductive reasoning as inference in which the conclusion is just as certain as the premises, as opposed to inductive reasoning, where the conclusion can have less certainty than the premises. In both approaches, the conclusion of a deductive inference is necessitated by the premises: the premises can't be true while the conclusion is false. (In Aristotelian logic, the premises in inductive reasoning can also be related in this way to the conclusion.)

Examples of valid deductive arguments:
- All men are mortal.
- Socrates is a man.
- Therefore Socrates is mortal.

- The picture is above the desk.
- The desk is above the floor.
- Therefore the picture is above the floor.

- All birds have wings.
- A cardinal is a bird.
- Therefore a cardinal has wings.

An invalid argument:
- Every criminal opposes the government.
- Everyone in the opposition party opposes the government.
- Therefore everyone in the opposition party is a criminal.

This is invalid because the premises fail to establish commonality between membership in the opposition party and being a criminal. This is the famous fallacy of the undistributed middle.

|Basic argument forms of the calculus|
|Name||Sequent||Description|
|Modus Ponens||[(p → q) ∧ p] ├ q||if p then q; p; therefore q|
|Modus Tollens||[(p → q) ∧ ¬q] ├ ¬p||if p then q; not q; therefore not p|
|Hypothetical Syllogism||[(p → q) ∧ (q → r)] ├ (p → r)||if p then q; if q then r; therefore, if p then r|
|Disjunctive Syllogism||[(p ∨ q) ∧ ¬p] ├ q||Either p or q; not p; therefore, q|
|Constructive Dilemma||[(p → q) ∧ (r → s) ∧ (p ∨ r)] ├ (q ∨ s)||If p then q; and if r then s; but either p or r; therefore either q or s|
|Destructive Dilemma||[(p → q) ∧ (r → s) ∧ (¬q ∨ ¬s)] ├ (¬p ∨ ¬r)||If p then q; and if r then s; but either not q or not s; therefore either not p or not r|
|Simplification||(p ∧ q) ├ p||p and q are true; therefore p is true|
|Conjunction||p, q ├ (p ∧ q)||p and q are true separately; therefore they are true conjointly|
|Addition||p ├ (p ∨ q)||p is true; therefore the disjunction (p or q) is true|
|Composition||[(p → q) ∧ (p → r)] ├ [p → (q ∧ r)]||If p then q; and if p then r; therefore if p is true then q and r are true|
|De Morgan's Theorem (1)||¬(p ∧ q) ├ (¬p ∨ ¬q)||The negation of (p and q) is equiv. to (not p or not q)|
|De Morgan's Theorem (2)||¬(p ∨ q) ├ (¬p ∧ ¬q)||The negation of (p or q) is equiv. to (not p and not q)|
|Commutation (1)||(p ∨ q) ├ (q ∨ p)||(p or q) is equiv. to (q or p)|
|Commutation (2)||(p ∧ q) ├ (q ∧ p)||(p and q) is equiv. to (q and p)|
|Association (1)||[p ∨ (q ∨ r)] ├ [(p ∨ q) ∨ r]||p or (q or r) is equiv. to (p or q) or r|
|Association (2)||[p ∧ (q ∧ r)] ├ [(p ∧ q) ∧ r]||p and (q and r) is equiv. to (p and q) and r|
|Distribution (1)||[p ∧ (q ∨ r)] ├ [(p ∧ q) ∨ (p ∧ r)]||p and (q or r) is equiv. to (p and q) or (p and r)|
|Distribution (2)||[p ∨ (q ∧ r)] ├ [(p ∨ q) ∧ (p ∨ r)]||p or (q and r) is equiv. to (p or q) and (p or r)|
|Double Negation||p ├ ¬¬p||p is equivalent to the negation of not p|
|Transposition||(p → q) ├ (¬q → ¬p)||If p then q is equiv. to if not q then not p|
|Material Implication||(p → q) ├ (¬p ∨ q)||If p then q is equiv. to either not p or q|
|Material Equivalence (1)||(p ↔ q) ├ [(p → q) ∧ (q → p)]||(p is equiv. to q) means, (if p is true then q is true) and (if q is true then p is true)|
|Material Equivalence (2)||(p ↔ q) ├ [(p ∧ q) ∨ (¬q ∧ ¬p)]||(p is equiv. to q) means, either (p and q are true) or (both p and q are false)|
|Exportation||[(p ∧ q) → r] ├ [p → (q → r)]||from (if p and q are true then r is true) we can prove (if q is true then r is true, if p is true)|
|Importation||[p → (q → r)] ├ [(p ∧ q) → r]||the converse of Exportation|
|Tautology||p ├ (p ∨ p)||p is true is equiv. to p is true or p is true|

In more formal terms, a deduction is a sequence of statements such that every statement can be derived from those before it. It is understandable, then, that this leaves open the question of how we prove the first sentence (since it cannot follow from anything). Axiomatic propositional logic solves this by requiring the following conditions for a proof to be met:

A proof of α from an ensemble Σ of well-formed formulas (wffs) is a finite sequence of wffs β1, ..., βn such that βn = α and, for each βi (1 ≤ i ≤ n), either
- βi ∈ Σ, or
- βi is an axiom, or
- βi is the output of Modus Ponens for two previous wffs, βi−g and βi−h.

Different versions of axiomatic propositional logic contain a few axioms, usually three or more, in addition to one or more inference rules. For instance, Gottlob Frege's axiomatization of propositional logic, which is also the first instance of such an attempt, has six propositional axioms and two rules. Bertrand Russell and Alfred North Whitehead also suggested a system with five axioms. A version of axiomatic propositional logic due to Jan Łukasiewicz (1878–1956) has a set A of axioms adopted as follows:
- [PL1] p → (q → p)
- [PL2] (p → (q → r)) → ((p → q) → (p → r))
- [PL3] (¬p → ¬q) → (q → p)

and it has the set R of rules of inference with one rule in it, Modus Ponendo Ponens, as follows:
- [MP] from α and α → β, infer β.

The inference rule(s) allows us to derive the statements following the axioms or given wffs of the ensemble Σ.

Natural deductive logic

In one version of natural deductive logic, presented by E.J. Lemmon and referred to here as system L, there are no axioms to begin with. There are only nine primitive rules that govern the syntax of a proof. The nine primitive rules of system L are:
- The Rule of Assumption (A)
- Modus Ponendo Ponens (MPP)
- The Rule of Double Negation (DN)
- The Rule of Conditional Proof (CP)
- The Rule of ∧-introduction (∧I)
- The Rule of ∧-elimination (∧E)
- The Rule of ∨-introduction (∨I)
- The Rule of ∨-elimination (∨E)
- Reductio Ad Absurdum (RAA)

In system L, a proof has a definition with the following conditions:
- it is a finite sequence of wffs (well-formed formulas),
- each line of it is justified by a rule of the system L,
- the last line of the proof is what is intended (Q.E.D., quod erat demonstrandum, a Latin expression meaning "which was the thing to be proved"), and this last line of the proof uses only the premise(s) given, or no premise if nothing is given.

If no premise is given, the sequent is called a theorem. Therefore, the definition of a theorem in system L is:
- a theorem is a sequent that can be proved in system L, using an empty set of assumptions,
or in other words:
- a theorem is a sequent that can be proved from an empty set of assumptions in system L.

An example of the proof of a sequent (Modus Tollendo Tollens in this case):

|p → q, ¬q ├ ¬p [Modus Tollendo Tollens (MTT)]|
|Assumption number||Line number||Formula (wff)||Lines in use and justification|
|1||(1)||(p → q)||A|
|2||(2)||¬q||A|
|3||(3)||p||A (for RAA)|
|1,3||(4)||q||1,3,MPP|
|1,2,3||(5)||q ∧ ¬q||2,4,∧I|
|1,2||(6)||¬p||3,5,RAA|

An example of the proof of a sequent (a theorem in this case):

|├ p ∨ ¬p|
|Assumption number||Line number||Formula (wff)||Lines in use and justification|
|1||(1)||¬(p ∨ ¬p)||A (for RAA)|
|2||(2)||¬p||A (for RAA)|
|2||(3)||(p ∨ ¬p)||2,∨I|
|1,2||(4)||(p ∨ ¬p) ∧ ¬(p ∨ ¬p)||1,3,∧I|
|1||(5)||¬¬p||2,4,RAA|
|1||(6)||p||5,DN|
|1||(7)||(p ∨ ¬p)||6,∨I|
|1||(8)||(p ∨ ¬p) ∧ ¬(p ∨ ¬p)||1,7,∧I|
|||(9)||¬¬(p ∨ ¬p)||1,8,RAA|
|||(10)||(p ∨ ¬p)||9,DN|

Each rule of system L has its own requirements for the type of input(s) or entry(es) that it can accept, and has its own way of treating and calculating the assumptions used by its inputs.

References:
- Jennings, R. E., Continuing Logic, the course book of 'Axiomatic Logic' at Simon Fraser University, Vancouver, Canada
- Zarefsky, David, Argumentation: The Study of Effective Reasoning, Parts I and II, The Teaching Company, 2002

See also:
- Abductive reasoning
- Correspondence theory of truth
- Defeasible reasoning
- Hypothetico-deductive method
- Inductive reasoning
- Propositional calculus
- Retroductive reasoning

|This page uses Creative Commons Licensed content from Wikipedia (view authors).|
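The validity of propositional argument forms like those tabulated above can also be checked mechanically by enumerating truth assignments. A minimal, self-contained sketch in Python (my own addition, with illustrative helper names):

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    """Return True iff the conclusion holds in every row of the
    truth table in which all premises hold."""
    for vals in product([False, True], repeat=n_vars):
        if all(p(vals) for p in premises) and not conclusion(vals):
            return False          # counterexample row found
    return True

implies = lambda a, b: (not a) or b

# Modus Ponens: [(p -> q) and p] |- q  -- valid
print(valid([lambda v: implies(v[0], v[1]), lambda v: v[0]],
            lambda v: v[1], 2))   # True

# Affirming the consequent (undistributed middle flavor) -- invalid
print(valid([lambda v: implies(v[0], v[1]), lambda v: v[1]],
            lambda v: v[0], 2))   # False
```

The second call mirrors the "opposition party" example: both premises can hold while the conclusion fails, which is exactly what the truth-table search detects.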
Globulin is one of the two types of serum proteins, the other being albumin. This generic term encompasses a heterogeneous series of families of proteins with larger molecules that are less soluble in pure water than albumin, and which migrate less than albumin during serum electrophoresis. The normal range in blood is 2 to 3.5 g/dl. The term is sometimes used synonymously with "globular protein". However, albumin is also a globular protein but not a globulin; all other serum globular proteins are globulins. Protein electrophoresis is used to categorize globulins into the following four categories:
- Alpha 1 globulins
- Alpha 2 globulins
- Beta globulins
- Gamma globulins (one group of gamma globulins is the immunoglobulins, which function as antibodies)

|This page uses Creative Commons Licensed content from Wikipedia (view authors).|
[Figure: Pacinian corpuscle, with its system of capsules and central cavity. a. Arterial twig, ending in capillaries, which form loops in some of the intercapsular spaces, and one penetrates to the central capsule. b. The fibrous tissue of the stalk. n. Nerve tube advancing to the central capsule, there losing its white matter, and stretching along the axis to the opposite end, where it ends in a tuberculated enlargement. (Gray's Anatomy, subject #233, page 1060.)]
[Figure: Pacinian capsule labeled at bottom.]

The Pacinian corpuscle is oval-shaped and approximately 1 mm in length. The entire corpuscle is wrapped by a layer of connective tissue. It has 20 to 60 concentric lamellae composed of fibrous connective tissue and fibroblasts, separated by gelatinous material. The lamellae are very thin, flat, modified Schwann cells. In the center of the corpuscle is the inner bulb, a fluid-filled cavity with a single afferent unmyelinated nerve ending. Pacinian corpuscles detect gross pressure changes and vibrations. Any deformation in the corpuscle causes action potentials to be generated by opening pressure-sensitive sodium ion channels in the axon membrane. This allows sodium ions to flow in, creating a receptor potential. These corpuscles are especially sensitive to vibrations, which they can sense even centimeters away. Pacinian corpuscles cause action potentials when the skin is rapidly indented but not when the pressure is steady, due to the layers of connective tissue that cover the nerve ending. It is thought that they respond to high-velocity changes in joint position.

See also:
- Virginia Commonwealth University
- Anatomy Atlases - Microscopic Anatomy, plate 06.124
- Dictionary at eMedicine: lamellated+corpuscles

|This page uses Creative Commons Licensed content from Wikipedia (view authors).|
Psychological Operations (PSYOP, PSYOPS) are techniques used by military and police forces to influence a target audience's emotions, motives, objective reasoning, and behavior. Target audiences can be governments, organizations, groups, and individuals; the techniques are used to induce confessions, or to reinforce attitudes and behaviors favorable to the originator's objectives. This concept has been used by military institutions throughout history, but it is only since the 20th century that it has been accorded the organizational and professional status it enjoys now.

Germany

In the German Bundeswehr, the Zentrum Operative Information and its subordinated Bataillon für Operative Information 950 are responsible for PSYOP efforts (called Operative Information in German). Both the center and the battalion are subordinate to the new Streitkräftebasis (Joint Services Support Command, SKB) and together consist of about 1,000 soldiers specialising in modern communication and media technologies. One project of the German PSYOP forces is the radio station Stimme der Freiheit (Voice of Freedom), heard by thousands of Afghans. Another is the publication of various newspapers and magazines in Kosovo and Afghanistan.

United Kingdom

In the British Armed Forces, PSYOPS are handled by the tri-service 15 Psychological Operations Group.

United States

- Main article: Psychological operations (United States)

The purpose of United States psychological operations (PSYOP) is to induce or reinforce attitudes and behaviors favorable to U.S. objectives. In the United States Department of Defense, dedicated Psychological Operations units exist only in the United States Army. However, the United States Navy also plans and executes limited PSYOP missions. Unlike some countries, United States PSYOP units and soldiers of all branches of the military are prohibited by law from conducting PSYOP missions on domestic audiences. While United States Army PSYOP units may offer non-PSYOP support to domestic military missions, they can only target foreign audiences. Within the U.S. Psychological Operations community, the correct acronym is PSYOP, without the "s" at the end, as noted in FM 33-1-1. NATO references will alternately list the capability as PSYOP or PSYOPS, depending on the source's nation of origin.

During the Waco siege, the FBI and BATF conducted psychological operations on the men, women and children inside the Mount Carmel complex. This included using loudspeakers to play sounds of animals being slaughtered, drilling noises and clips from talk shows about how much David Koresh was hated. In addition, very bright, flashing lights were used at night.

See also
- Psychological warfare
- Information warfare
- Psychological operations (United States)
- 15 Psychological Operations Group (British Armed Forces)
- Political Warfare Executive
- Psychological Warfare Division

It is possible that PSYOPS, using a combination of the Patriot Act and the RAVE Act, could engage in psychological warfare on those who produce, promote, or organize events centered around electronic music. These acts allow governmental agencies at all levels to coordinate and engage in psychological warfare against innocent civilians who use the electronic medium as a tool to create art. These acts could potentially lead to a constitutional conflict in regard to the quartering of troops as stated in the Bill of Rights.
- PsyWar.Org — Psychological Operations and Black Propaganda: the history of psychological warfare / PSYOP, with an extensive library of aerial propaganda leaflets
- IWS — The Information Warfare Site
- U.S. — PSYOP producing mid-eastern kids' comic book
- The Institute of Heraldry — Psychological Operations
- OSS — Development of Psychological Warfare (WWII)
- Clandestine Radio

|This page uses Creative Commons Licensed content from Wikipedia (view authors).|
- Mobile Air Quality Studies (MAQS) - an international project (2010) - Due to increasing awareness of the potential hazards of air pollutants, new laws, rules and guidelines have recently been implemented globally. In this respect, numerous studies have so far addressed traffic-related exposure to particulate matter using stationary technology. By contrast, only a few studies have used the advanced technology of mobile exposure analysis. The Mobile Air Quality Study (MAQS) addresses the issue of air pollutant exposure by combining advanced high-granularity spatial-temporal analysis with vehicle-mounted, person-mounted and roadside sensors. The MAQS platform will be used by international collaborators in order 1) to assess air pollutant exposure in relation to road structure, 2) to assess air pollutant exposure in relation to traffic density, 3) to assess air pollutant exposure in relation to weather conditions, 4) to compare exposure within vehicles between front and back seat (children) positions, and 5) to evaluate "traffic zone" exposure in relation to non-"traffic zone" exposure. Primarily, the MAQS platform will focus on particulate matter. With the establishment of advanced mobile analysis tools, it is planned to extend the analysis to other pollutants, including NO2, SO2, nanoparticles, and ozone.
Women and girls around the world are excluded from participation in science and technology (S&T) by poverty, lack of education and aspects of their legal, institutional, political and cultural environments. Science, Technology and Gender: An International Report is designed to support efforts being made worldwide to analyze, discuss and change this situation. Based on empirical research and data, this UNESCO report incorporates substantive inputs from institutions involved in science, technology, gender studies and policy. Marking the start of an ongoing initiative, it aims to spur serious discussion and action in national and international scientific and academic communities, especially regarding the pressing needs to increase women’s participation in S&T careers and enable sex-disaggregated data collection and rigorous research development, along with increasing public awareness of gender issues. With its goal of helping educators, policy-makers and the members of the scientific community to address the underlying causes of gender disparities in S&T, both in the public and private sectors, this report represents an important contribution to the political and institutional mainstreaming of the gender dimension in S&T. Also available in the Science and Technology for Development series
Technically speaking, every country has a capital. The most likely country that could be referred to as having no capital is Vatican City (in the centre of Rome, Italy), because the whole country (0.44 sq km) is its own city. Nauru also has no official capital, as it lacks any sizable city; however, most of its leadership governs from the Yaren District. Some other countries whose capital has the same name as the country are: San Marino (Europe) and Panama (Central America).
Property rights issues are once again on the minds of many Americans as we mark the first anniversary of the Supreme Court's now infamous 5-4 Kelo v. City of New London eminent domain decision. The decision affirmed the ability of governments to forcibly take private property for "public purposes," even if those purposes serve fairly narrow private interests. The decision sparked outrage among many Americans, who viewed the process of taking land from one private party and giving it to another, even with "just compensation," as fundamentally unfair and an abuse of government power. Opinion polls have shown opposition to the use of eminent domain for economic development ranging from 70 percent to over 90 percent. This spawned a healthy revolt against abusive land seizures by governments across the nation. Over the past year, at least 325 measures in 47 states have been proposed to protect against eminent domain abuses at the state level. In California alone, there have been 87 bills and several ballot initiatives proposed. One of those initiatives, the "Protect Our Homes" initiative, may make the November 2006 ballot, pending signature verification. Over one million signatures were submitted in support of the measure last month, far more than the nearly 600,000 required to qualify for the ballot. But what has been the result of all the fist-waving and teeth-gnashing following the Kelo decision? Sadly, momentum for the issue seems to have waned in most of the country and the initial indignation over the Supreme Court's decision has been met not with a bang, but a whimper. Several states—including Alabama, Delaware, Ohio, and Texas—have succeeded in passing eminent domain reforms, but most of these do not have any real teeth. According to Timothy Sandefur of the Pacific Legal Foundation, laws like those in Alabama and Texas leave open the door to eminent domain abuse by still allowing governments to take land they deem "blighted." While "blighted" property traditionally refers to property so dangerous to the public health that it must be removed, the term is so vaguely defined in the new legislation that it could mean anything the government wishes, including a perceived "need" for economic development. Thus, the vocabulary may have changed from "economic development" to "blight," but the recipe for takings abuse remains the same. At the federal level, the House of Representatives last November overwhelmingly passed H.R. 4128, the Private Property Rights Protection Act of 2005, which would deny Federal economic development funds to state and local governments that utilize eminent domain for this purpose. Unfortunately, no action has been taken on the bill in the Senate, where it has languished in the Judiciary Committee for over seven months. Why Property Matters The Founding Fathers knew well the importance of private property in "securing the blessings of liberty." The Declaration of Independence asserts our unalienable rights to "life, liberty, and the pursuit of happiness," a formulation derived from John Locke's Two Treatises of Government (1689), in which Locke describes our reasons for forming government in the first place: man "is willing to join in society with others . . . for the mutual preservation of their lives, liberties and estates, which I call by the general name, property." Earlier in the same essay, Locke explains the importance of property even more starkly: - Man being born . . .
with a title to perfect freedom, and an uncontrouled enjoyment of all the rights and privileges of the law of nature, equally with any other man . . . hath by nature a power, not only to preserve his property, that is, his life, liberty and estate, against the injuries and attempts of other men; but to judge of, and punish the breaches of that law in others. James Madison, the fourth President of the United States and "Father of the Constitution," similarly maintained in 1792: - A man has a property in his opinions and the free communication of them. He has a property of peculiar value in his religious opinions, and in the profession and practice dictated by them. He has a property very dear to him in the safety and liberty of his person. He has an equal property in the free use of his faculties and free choice of the objects on which to employ them. In a word, as a man is said to have right to his property, he may be equally said to have a property in his rights. Where an excess of power prevails, property of no sort is duly respected. No man is safe in his opinions, his person, his faculties, or his possessions. Notice that Locke and Madison include among "property" people's very lives, their property in their own existence and the right to preserve that existence. Other forms of property are no less important, for they are necessary to sustain our lives. If we are to live, we must also provide for food, clothing, shelter, and other needs and luxuries. We can obtain these things only through the fruits of our labor or through charity (leaving aside the possibility of violating others' rights through theft, either directly or by using the government as our agent to take from another and give to us). In other words, you can talk all you want about the freedom of speech, but what good is it if you are unable to own a printing press or the paper (or computer) on which to write your ideas? You can pay lip service to the freedom of association, the freedom to peaceably assemble, and the freedom to practice any religion you want (or none at all), but what good is it if you are not permitted the opportunity to own the land on which to exercise these rights? You can have the right to keep and bear arms, but what good is it if you are not allowed to own any place to keep them? Property rights are not just an academic concept or an economic expediency, they are inexorably intertwined with the human rights and freedoms we hold so dear. This is why the power of eminent domain is one of government's most evil, insidious powers. But, you may argue, when government invokes eminent domain to take someone's property, the "Takings Clause" of the Fifth Amendment says it must pay "just compensation" so that the property owner is no worse off than before the taking. The key question that must be asked is: Who determines whether the compensation is "just"? It certainly isn't just to the property owners who simply desire to be left alone and remain in their homes; otherwise, they would have simply accepted a buyout offer. So, the government has an appraisal done, oftentimes producing a lowball figure, and demands that the property owner take the offer and leave. Never mind that the government-acquired appraisal may be only a fraction of what the owner could get for the property from another private party (the government's claims of its offer's "fair market value" notwithstanding). 
The government can offer below market value (or reduce the value of the property unilaterally by limiting its use through wetlands regulations or other tactics, but that must be a subject for another piece) because it knows the property owner will probably be forced to take it. Sure, he could try to fight a lengthy and costly court battle, but the government has access to skilled lawyers and unlimited funds; the property owner's funds are quite limited, and, thus, so may be his ability to obtain capable legal talent. So, he is typically stuck with insufficient compensation for his property, and must additionally bear the time, energy, stress, and other costs associated with picking up his roots and relocating.

The Takings Clause Revisited

Perhaps the Founding Fathers erred in allowing government the power to take someone's property for any reason, regardless of "just compensation." If someone has obtained his property legally and poses no threat to others through his use of the property, why should government be able to forcibly evict him at all? In fact, there were some among the American revolutionaries who felt that government should be prohibited from taking private property for any reason. The Declaration of Rights of the Pennsylvania Constitution of 1776 affirms: "no part of a man's property can be justly taken from him or applied to public uses without his own consent or that of his legal representatives." This language is repeated in the Delaware Declaration of Rights (1776) and the Vermont Constitution of 1777.

Put another way, what difference does it make if the government takes one's property for "public use" or "private use"? After all, the public might get more "use" out of a new Wal-Mart than a fancy new government building, and the jobs and low-priced goods Wal-Mart offers would be available to anyone, as opposed to, say, a school, which serves only a certain segment of the population (those with school-aged children). Moreover, governments routinely take property in the form of taxes and redistribute it to other private parties. Even in cases where money is purportedly spent in the interest of the taxpayer, many would question whether such acts constitute "just compensation." Why should one's "money property" not enjoy the same protection as his "land property" against government takings and redistribution to private parties?

The Kelo decision was not earth-shattering in that it merely confirmed what has been going on across the country for years. The Supreme Court's faith in the "public purpose" doctrine of eminent domain was nonetheless incredibly disappointing to those who recognize the importance of property rights in a free society. The silver lining to the decision is that it made the plight of innocent homeowners abused by the government real to many who had thus far ignored such government transgressions because they did not affect them directly. The firestorm of support for eminent domain reform seems to have diminished somewhat over the past year, however. We must be ever vigilant and wary of watered-down "reform" measures if we are to regain the protections our private property so richly deserves.

Private property is not merely the things we purchase with our money. It is the things that sustain and enrich our lives, the places and things that allow us to express our other rights and enjoy our other freedoms. Property rights are human rights. The Founding Fathers understood this well.
This is why they spoke of property in the same vein as life and liberty. If we are unwilling to demand that our property be protected, rather than seized by a capricious and avaricious government, we will find, only when it is too late, that we have sacrificed our lives and our liberties as well.

During the course of researching this article, I came across a number of great quotes on property rights from several of the Founding Fathers and some of the great thinkers and adherents to the philosophy of freedom. There were far too many to include in this piece, but I thought readers of this article might enjoy them as much as I did, so I am including them here:

All men have equal rights to liberty, to their property, and to the protection of the laws. — Voltaire, Essay on Manners, 1756

The system of private property is the most important guaranty of freedom, not only for those who own property, but scarcely less for those who do not. — Friedrich August von Hayek, The Road to Serfdom, 1944

If we wish to preserve a free society, it is essential that we recognize that the desirability of a particular object is not sufficient justification for the use of coercion. — Friedrich August von Hayek, The Constitution of Liberty, 1960

Property is surely a right of mankind as real as liberty. — John Adams, Dissertation on the Canon and the Feudal Law, 1765

Property must be secured, or liberty cannot exist. — John Adams, A Balanced Government (1790) in Discourses on Davila (1805), reprinted in 6 Works of John Adams (1851 ed.)

Now what liberty can there be where property is taken away without consent? — Samuel Adams, The Rights of the Colonists, The Report of the Committee of Correspondence to the Boston Town Meeting, November 20, 1772

Private property and freedom are inseparable. — George Washington

You cannot have a free society without private property. — Milton Friedman

Man is born into the universe with a personality that is his own. He has a right that is founded upon the constitution of the universe to have property that is his own. Ultimately, property rights and personal rights are the same thing. The one cannot be preserved if the other be violated. — Calvin Coolidge, "Have faith in Massachusetts," Massachusetts Senate President Acceptance Speech, January 7, 1914

The right of liberty means man's right to individual action, individual initiative and individual property. Without the right to private property no independent action is possible. — Ayn Rand, "The Only Path to Tomorrow," 1944

The right to life is the source of all rights—and the right to property is their only implementation. Without property rights, no other rights are possible. Since man has to sustain his life by his own effort, the man who has no right to the product of his effort has no means to sustain his life. The man who produces while others dispose of his product, is a slave. — Ayn Rand, "Man's Rights" in The Virtue of Selfishness, 1964

The sacred rights of property are to be guarded at every point. I call them sacred, because, if they are unprotected, all other rights become worthless or visionary. What is personal liberty, if it does not draw after it the right to enjoy the fruits of our own industry? What is political liberty, if it imparts only perpetual poverty to us and all our posterity?
What is the privilege of a vote, if the majority of the hour may sweep away the earnings of our whole lives, to gratify the rapacity of the indolent, the cunning, or the profligate, who are borne into power upon the tide of a temporary popularity? — Joseph Story, Associate Justice of the United States Supreme Court, William W. Story, ed., "The Value and Importance of Legal Studies" in Miscellaneous Writings of Joseph Story (Boston: C. C. Little and J. Brown, 1852), 503, 519

The dichotomy between personal liberties and property rights is a false one. Property does not have rights. People have rights. The right to enjoy property without unlawful deprivation, no less than the right to speak or the right to travel, is in truth, a "personal" right, whether the "property" in question be a welfare check, a home, or a savings account. In fact, a fundamental interdependence exists between the personal right to liberty and the personal right in property. — Potter Stewart, Associate Justice of the United States Supreme Court, Lynch v. Household Finance Corp., 405 U.S. 538, 552 (1972)

Each of us has a natural right—from God—to defend his person, his liberty, and his property. These are the three basic requirements of life, and the preservation of any one of them is completely dependent upon the preservation of the other two. For what are our faculties but the extension of our individuality? And what is property but an extension of our faculties? — Frederic Bastiat, The Law, 1850

The three great rights are so bound together as to be essentially one right. To give a man his life but deny him his liberty, is to take from him all that makes his life worth living. To give him his liberty but take from him the property which is the fruit and badge of his liberty, is to still leave him a slave. — George Sutherland, Associate Justice of the United States Supreme Court, 1921, quoted in Cleon Skousen, The Five Thousand Year Leap (Washington, DC: National Center for Constitutional Studies, 1981), p. 173.

The great and chief end, therefore, of men uniting into commonwealths, and putting themselves under government, is the preservation of their property. — John Locke, Two Treatises of Government, 1690, Book II, Chapter IX, Sec. 124

The supreme power cannot take from any man any part of his property without his own consent. . . . Men therefore in society having property, they have such a right to the goods, which by the law of the community are theirs, that no body hath a right to take their substance or any part of it from them, without their own consent: without this they have no property at all; for I have truly no property in that, which another can by right take from me, when he pleases, against my consent. Hence it is a mistake to think, that the supreme or legislative power of any commonwealth, can do what it will, and dispose of the estates of the subject arbitrarily, or take any part of them at pleasure. — John Locke, Two Treatises of Government, Book II, Chapter XI, Sec. 138

There is, therefore, secondly, another way whereby governments are dissolved, and that is, when the legislative, or the prince, either of them, act contrary to their trust. First, the legislative acts against the trust reposed in them, when they endeavor to invade the property of the subject, and to make themselves, or any part of the community, masters, or arbitrary disposers of the lives, liberties, or fortunes of the people. — John Locke, Two Treatises of Government, 1690, Book II, Chapter XIX, Sec. 221
Whenever the legislators endeavor to take away and destroy the property of the people, or to reduce them to slavery under arbitrary power, they put themselves into a state of war with the people, who are thereupon absolved from any further obedience. — John Locke, Two Treatises of Government, 1690, Book II, Chapter XIX, Sec. 222

All men are created equally free and independent, and have certain inherent rights, of which they cannot, by any compact, deprive or divest their posterity; among which are the enjoyment of life and liberty, with the means of acquiring and possessing property, and pursuing the obtaining of happiness and safety. — George Mason, First Draft, Virginia Declaration of Rights, May 1776

That all men are by nature equally free and independent, and have certain inherent rights, of which, when they enter into a state of society, they cannot, by any compact, deprive or divest their posterity; namely, the enjoyment of life and liberty, with the means of acquiring and possessing property, and pursuing and obtaining happiness and safety. — George Mason, Virginia Declaration of Rights, Article 1, 1776

All men, having sufficient evidence of permanent common interest with, and attachment to, the community have the right of suffrage and cannot be taxed or deprived of their property for public uses without their own consent or that of their representatives so elected. — George Mason, Virginia Declaration of Rights, Article 6, 1776

No part of a man's property can be justly taken from him or applied to public uses without his own consent or that of his legal representatives. — This language is included in several early state constitutions, including the Pennsylvania Constitution of 1776, Declaration of Rights, Article XIII; Delaware Declaration of Rights, Section 10, 1776; and Vermont Constitution of 1777, Chapter 1, Article IX.

No power on earth has a right to take our property from us without our consent. — John Jay, First Chief Justice of the United States Supreme Court and co-author of The Federalist Papers, "Address to the People of Great Britain," October 1774

So great, moreover, is the regard of the law for private property, that it will not authorize the least violation of it; no, not even for the general good of the whole community. If a new road, for instance, were to be made through the grounds of a private person, it might perhaps be extensively beneficial to the public; but the law permits no man, or set of men, to do this without consent of the owner of the land. In vain may it be urged, that the good of the individual ought to yield to that of the community; for it would be dangerous to allow any private man, or even any public tribunal, to be the judge of this common good, and to decide whether it be expedient or no. Besides, the public good is in nothing more essentially interested, than in the protection of every individual's private rights, as modeled by the municipal law. — Sir William Blackstone, Commentaries on the Laws of England, 1765

It is evident that the right of acquiring and possessing property, and having it protected, is one of the natural, inherent, and inalienable rights of man. Men have a sense of property: Property is necessary to their subsistence, and correspondent to their natural wants and desires; its security was one of the objects that induced them to unite in society. No man would become a member of a community, in which he could not enjoy the fruits of his honest labor and industry.
The preservation of property then is a primary object of the social compact. . . . Where is the security, where is the inviolability of property, if the legislature, by a private act, affecting particular persons only, can take land from one citizen, who acquired it legally, and vest it in another? — William Paterson, Associate Justice of the United States Supreme Court and signer of the Constitution, Van Horne's Lessee v. Dorrance, 2 U.S. (2 Dall.) 304, 309, 311-12 (1795)

Government is instituted to protect property of every sort; as well that which lies in the various rights of individuals, as that which the term particularly expresses. This being the end of government, that alone is a just government which impartially secures to every man whatever is his own. — James Madison, "Property," National Gazette, March 27, 1792

Government is instituted no less for protection of the property, than of the persons, of individuals. — James Madison, Federalist No. 54

It is sufficiently obvious, that persons and property are the two great subjects on which Governments are to act; and that the rights of persons, and the rights of property, are the objects, for the protection of which Government was instituted. These rights cannot well be separated. — James Madison, Speech at the Virginia Constitutional Convention, December 2, 1829

By Liberty I understand the Power which every Man has over his own Actions, and his Right to enjoy the Fruits of his Labour, Art, and Industry, as far as by it he hurts not the Society, or any Members of it, by taking from any Member, or by hindering him from enjoying what he himself enjoys. The Fruits of a Man's honest Industry are the just Rewards of it, ascertained to him by natural and eternal Equity, as is his Title to use them in the Manner which he thinks fit: And thus, with the above Limitations, every Man is sole Lord and Arbitrer of his own private Actions and Property. — Thomas Gordon, Cato's Letters, No. 62, January 20, 1721

As property, honestly obtained, is best secured by an equality of rights, so ill-gotten property depends for protection on a monopoly of rights. He who has robbed another of his property, will next endeavor to disarm him of his rights, to secure that property; for when the robber becomes the legislator he believes himself secure. — Thomas Paine, Dissertations on First Principles of Government, 1795

I consider the war of America against Britain as the country's war, the public's war, or the war of the people in their own behalf, for the security of their natural rights, and the protection of their own property. — Thomas Paine, On Financing the War, 1782

The true foundation of republican government is the equal right of every citizen in his person and property, and in their management. Try by this, as a tally, every provision of our Constitution, and see if it hangs directly on the will of the people. — Thomas Jefferson, Letter to Samuel Kercheval, July 12, 1816, in Albert Ellery Bergh, ed., "The Writings of Thomas Jefferson" (Washington, D.C.: Thomas Jefferson Memorial Association, 1907), Vol. 15, p. 36.

Next to the right of liberty, the right of property is the most important individual right guaranteed by the Constitution and the one which, united with that of personal liberty, has contributed more to the growth of civilization than any other institution established by the human race. — William Howard Taft, Popular Government, 1913
Heavy drinking, especially bingeing, makes platelets more likely to clump together into blood clots, which can lead to heart attack or stroke. In a landmark study published in 2005, Harvard researchers found that binge drinking doubled the risk of death among people who initially survived a heart attack. Heavy drinking can also cause cardiomyopathy, a potentially deadly condition in which the heart muscle weakens and eventually fails, and is commonly associated with the heart-rhythm abnormalities atrial and ventricular fibrillation. Moderate to high alcohol intake is associated with an increased incidence of atrial fibrillation among people aged 55 or older with cardiovascular disease or diabetes. Atrial fibrillation, in which the heart’s upper chambers (atria) twitch chaotically rather than contract rhythmically, can cause blood clots that can trigger a stroke. Ventricular fibrillation causes chaotic twitching in the heart’s main pumping chambers (ventricles). It causes rapid loss of consciousness and, in the absence of immediate treatment, sudden death.
- GENEVA - U.S. and European intelligence agencies are reporting mounting evidence that Russia and China have massively violated the 1972 Biological and Toxic Weapons Convention and subsequent international and bilateral agreements to control biowarfare weapons. - The convention, signed by 169 nations, prohibits the development, production, acquisition, stockpiling, transfer or use of chemical and biological weapons. - All signatories with biowarfare arsenals are pledged to eliminate such weapons over 10 years. While Russia and China appear to have ceased adding to their huge stockpiles of chemical weapons, both are developing new strains of highly lethal biological toxins. - According to Ken Alibek, a former deputy director of the top secret Soviet-era biowarfare program, who defected to the West, Moscow never ended its offensive biological warfare research. Alibek claims Russia has stockpiled many hundreds of tonnes of anthrax and plague, as well as smaller quantities of smallpox, Ebola and Marburg viruses, and toxins designed to attack plants and animals. Russia is also developing a new strain of "invisible" biowarfare agents, known as bioregulators, that destroy the body's immune or neurological systems. - The highest-ranking defector from Russia's biowarfare program ever to come West also claims that in 1985 former Soviet leader Mikhail Gorbachev secretly authorized a five-year program to develop weaponized germs and viruses, some of which were mounted on multiple warheads of the large SS-18 ICBMs targeted at North America. Alibek also says China, which claims to have abandoned biowarfare production and eliminated stockpiles, is producing hemorrhagic viruses at Lop Nor in Central Asia and suffered two major accidents in the late 1980s that killed hundreds of people. - Many toxins being developed in Russia have been biologically engineered to resist antibiotics, notably a super-strain of anthrax that is apparently impervious to the anti-anthrax inoculations now being given to NATO troops. - Alibek and other Russian defectors also confirmed the Soviet Union used chemical and biological weapons in Afghanistan from 1980-89. While covering the war there, I saw numerous cases of grave injuries or death inflicted on the Afghan mujahedeen by mysterious Soviet weapons. After being sprayed by a fine chemical mist, or exposed to gas, people would turn black and die, bleed profusely from all body orifices, choke and vomit or become disoriented and dazed. Bodies of some victims would putrefy almost immediately. - The Soviets also employed glanders, a highly contagious horse disease, to kill the animal transport used by the Afghan resistance and ergot fungus to destroy wheat. None of the biowarfare agents used by the U.S.S.R. in Afghanistan, save glanders, have ever been identified by western scientists. The West, while scourging Iraq for using chemical weapons against Iran and its rebellious Kurds, chose to ignore employment by the U.S.S.R. of more sophisticated toxic agents in Afghanistan. - Western protests over Russia's latest germ warfare projects and demands for inspection of its four major biowarfare labs have been rebuffed by Russia. The Bill Clinton administration, influenced by the strongly pro-Russian Strobe Talbot, has repeatedly rejected demands by Congress to cut off billions in U.S. aid in order to pressure Moscow into ceasing its illegal biowarfare programs. Europe, which also bankrolls Boris Yeltsin's regime, has been similarly negligent in pressing Moscow on this vital issue. 
- Some of the 60,000 scientists and technicians formerly employed in the Soviet biological warfare establishment have reportedly been employed by Iraq, Israel, Iran, Syria and Serbia - all of which have extensive biowarfare arsenals. India may also have received substantial Russian aid to develop its growing biowarfare capabilities.
- Alibek testified before the U.S. Congress that he defected after learning that while the West had virtually eliminated its toxic arsenals, Russia was not only continuing Soviet biowarfare programs but accelerating them, with 2,000 scientists alone working on new, genetically engineered strains of anthrax at a top secret island base in the Aral Sea. He claims such toxic agents have little tactical military value and are of use only as mass terror weapons designed to compensate for Russia's and China's relative backwardness in conventional military systems.
- These terror agents are being produced in a large complex at Kirov, east of Moscow, Compound 19 at Ekaterinburg in the Urals, Sergiev Posad outside Moscow and at a new complex at Strizhy, close to Kirov. The laboratory at Ekaterinburg (formerly Sverdlovsk) was the site of a massive accidental release of anthrax in 1979 that killed or injured over 1,000 people.
- According to the 1990 U.S.-Russia Bilateral Destruction Agreement, the two powers were to reduce their respective chemical stockpiles to 5,000 tonnes each by 2002. In 1996, Russia backed off even this agreement, citing financial problems. The UN was supposed to take over supervision of the destruction of biowarfare agents and implementation of the 1972 treaty, but it has failed dismally to enforce the agreements or even to protest egregious violations by Russia, China and other signatory states.
- The West has destroyed or significantly reduced its stocks of chemical agents, and ceased biological warfare research. Russia and China continue to develop such weapons. The former balance of terror has become unbalanced, as "friendly" regimes in Moscow and Beijing not only violate international law but threaten all mankind with their relentless development of hi-tech germ warfare.
What is an SSL certificate?

The Secure Sockets Layer (SSL) protects data transferred over HTTP using encryption enabled by a server's SSL Certificate. An SSL Certificate is an electronic file that uniquely identifies individuals and Web sites and enables encrypted communications. An SSL Certificate corresponds to a key pair: a public key, which is embedded in the certificate and used to encrypt information, and a private key, held securely by the certificate's owner and used to decipher it. When a browser points to a secured domain, an SSL handshake authenticates the server and the client and establishes an encryption method and a unique session key. The two parties can then begin a secure session that guarantees message privacy and message integrity.

SSL Certificates serve as a kind of digital passport or credential. Typically, the "signer" of a certificate is a Certificate Authority (CA), such as VeriSign.

Encryption, the process of transforming information to make it unintelligible to all but the intended recipient, forms the basis of the data integrity and privacy necessary for e-commerce. Customers submit sensitive information and purchase goods or services via the Web only when they are confident that their personal information is secure. The solution for businesses that are serious about online transactions is to implement a trust infrastructure based on encryption technology. This handshake process guarantees protected communications between a Web server and a client; all exchanges of SSL Certificates occur within seconds and require no action by the consumer.
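To make the handshake concrete, here is a minimal client-side sketch using Python's standard-library ssl module. The hostname is illustrative, and this is a generic example of the process described above, not any particular vendor's procedure:

import socket
import ssl

HOSTNAME = "www.example.com"  # hypothetical server, for illustration only

# create_default_context() loads the system's trusted CA certificates,
# so the client can verify the CA's signature on the server certificate.
context = ssl.create_default_context()

with socket.create_connection((HOSTNAME, 443)) as sock:
    # wrap_socket() performs the SSL/TLS handshake: the server presents
    # its certificate, the client checks it against the trusted CAs, and
    # both sides negotiate a cipher and a unique session key.
    with context.wrap_socket(sock, server_hostname=HOSTNAME) as tls:
        print("Protocol:", tls.version())   # negotiated protocol, e.g. "TLSv1.3"
        cert = tls.getpeercert()             # the server's "digital passport"
        print("Subject:", cert["subject"])   # who the certificate identifies
        print("Issuer:", cert["issuer"])     # the Certificate Authority that signed it

If verification fails (for example, an expired or self-signed certificate), the handshake aborts with an ssl.SSLError before any application data is exchanged, which is the programmatic equivalent of a browser refusing to open a secure session.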
Gay-rights activists are celebrating in Puerto Rico after the Senate passed a sweeping bill that bans discrimination based on sexual orientation and gender identity. The Puerto Rico Senate voted on May 16, 15-11, to pass Bill 238, just days after San Juan Mayor Carmen Yulín Cruz issued two executive orders banning discrimination against the city’s LGBT municipal employees. Now the bill moves on to the House, where it faces hurdles from a group of lawmakers in the lower chamber who have come out against it. The bill, though, has one famous supporter. Puerto Rican Ricky Martin released a statement in support of the law. “The rights of homosexual people are human rights and human rights are for everyone,” Martin said in the letter released by his representative in San Juan. For the original report go to http://www.passportmagazine.com/blog/archives/28490-puerto-rico-senate-approves-anti-discrimination-bill-moves-on-to-house.html

The Indian Caribbean Museum, described as “a national treasure, a window to the past, and an opportunity to see history come alive”, has been cited by a National Geographic publication that showcases 500 of the world’s most powerful and spiritual places and guides travellers who wish to visit them, as Paras Ramoutar reports in this article for twocircles.com. “This is a fitting recognition in just seven years of our existence, especially as we celebrate the 168th Indian Arrival Day May 30,” Sansbhan Jokhoo, the curator of the museum that serves as a link between indentured Indian labourers and the present, told IANS. “The Indian Caribbean Museum has international prominence and recognition as the only one of its kind in the world. Not even India has one. And before the inauguration of the Kolkata memorial last year planners from India came to visit our facility,” Jokhoo said. The Kolkata memorial, in the city’s Garden Reach area, remembers the indentured Indian labourers who left India during the 19th and early 20th centuries to work on plantations in the West Indies. Between 1845 and 1917, approximately 148,000 Indians were brought to this country, principally from Bihar and Uttar Pradesh, and worked to rescue the decaying plantations following the abolition of slavery by the British government. It is to keep alive their memory that Satnarayan Maharaj, secretary general of the Sanatan Dharma Maha Sabha (SDMS), launched the museum, which features in “Sacred Places of a Lifetime – 500 of the World’s Most Peaceful and Powerful Destinations”. The collection includes items such as rare musical instruments, agricultural objects, cooking utensils, pieces of clothing, ancient photographs and historical books. Objects of historical and aesthetic value include a sapat (wooden slipper), jata (grinding stone), boli (gourd bowl) and hassawa (grass knife). There is also a huge copper basin that was used for boiling cane syrup in the sugar factories up to the 1930s, and a dekha (a wooden contraption used for grinding cocoa, coffee beans, corn and rice). The museum, which has become a research centre with the country’s National Archives, also houses an art gallery, a reference library and a computerised genealogical database. A botanical garden is also in the making. The institution is a member of the Caribbean Museum Association, which comprises 20 institutions spread across the region. “The Indian Caribbean Museum is a national treasure, a window to the past, and an opportunity to see history come alive.
To many visitors, it evokes memories of the past, a link to the present, and a vision for the future. The museum serves as a foundation for collective memory, cultural continuity and national development,” Jokhoo said. “It provides a common experience that families can share across generations and serve as a link between revered ancestors and living people. The museum provides information on the cultural heritage of Indians in the Caribbean to themselves and to people of all ethnic backgrounds,” he added. “The Caribbean Indian Museum holds fundamental importance and relevance to the continued kinship and affinity with India, and within the entire Indian diaspora, as it has myriad symbolic, cultural, religious and transcendental interpretations and meanings for all. It remains a monument for posterity. It will remain ageless,” Jokhoo said. Since its inception, in excess of 45,000 persons from all walks of life from the four corners of the globe have visited the museum, according to Ann Marie Ramhit, an assistant. She said that Dennison Moore, who wrote the Canadian government’s policy on multiculturalism, recently donated 107 books reflecting different aspects of India and the diaspora to the library. “This donation has augmented our educational stock for research, as well as for leisure reading,” Ramhit added. Winston Dookeran, now the Trinidad and Tobago foreign minister, had in 2006 opened the museum, located in the west-central part of Trinidad. For the original report go to http://twocircles.net/2013may23/indian_caribbean_museum_nat_geo_list_500_sacred_places.html Scientists say three to six major hurricanes will hit US, some in areas far beyond those typically associated with extreme storms, as Suzanne Goldenberg reports in this article for London’s Guardian. Americans were warned on Thursday to brace for an extremely active hurricane season – less than a year after the devastation of Sandy, which hit the east coast in October 2012 – with 13 to 20 named storms, including seven to 11 hurricanes. The National Oceanic and Atmospheric Administration, releasing its annual forecast, said 2013 would be prolific in raising storms out of the Atlantic and Caribbean. Of the predicted hurricanes, Noaa predicted that three to six could be major hurricanes, rated category three and packing winds of 111mph or higher. Thursday’s forecast was well above the average of 12 named storms, eight hurricanes and three major hurricanes. Administration officials also warned that the impacts of those storms – as with Sandy and Irene in 2011 – could be felt in areas far beyond those typically associated with hurricanes and tropical storms. Sandy killed scores as it made its way across the Caribbean to the north-east US. While it was only a category two storm when it made landfall near Atlantic City in New Jersey, Sandy caused more than $75bn in damage. Lower Manhattan was knocked off the electrical grid for days because of storm surges and coastal communities have yet to recover. “As we saw first-hand with Sandy, it’s important to remember that tropical storm and hurricane impacts are not limited to the coastline. Strong winds, torrential rain, flooding, and tornadoes often threaten inland areas far from where the storm first makes landfall,” said Kathryn Sullivan, the acting Noaa administrator. Noaa scientists said there were three main causes behind the forecast of an extremely active season. 
They included a continuation of an atmospheric climate pattern, which includes a strong west African monsoon, that has been contributing to high activity during Atlantic hurricane season since the 1990s. Warmer ocean temperatures in the Atlantic and Caribbean oceans, where many storms originate, are also making for stronger storms. Officials said temperatures were on average about 0.8 of one degree Fahrenheit above average. El Niño, which can inhibit storm systems, was not expected to develop during this year’s hurricane season. The season runs from 1 June to 30 November.

“There are no mitigating factors that we can see that will suppress the activity,” said Gerry Bell, Noaa’s lead Atlantic hurricane forecaster. “The computer models all point to an active, or very active, hurricane season.” Thursday’s forecast was released at a time when Republicans in Congress are sharply scrutinising Noaa’s role in forecasting. Earlier in the day, a house committee held a hearing to discuss privatising some of the forecasting functions that are overseen by the premier scientific agency. There has also been criticism of Noaa’s messaging in advance of Hurricane Sandy, and whether its decision to officially downgrade the storm when it made landfall in New Jersey induced a false sense of security among some coastal communities.

Noaa officials, in unveiling their 2013 forecast, noted improvements to computer models that would allow better far-range prediction of storms. New Doppler radar data, to be introduced in July, will allow forecasters to better analyse rapidly changing storm conditions, officials said. However, the officials said it was impossible at this juncture to predict which coastal communities along the Atlantic coast are most likely to be hit this year. It is also not yet clear when the storms will hit. As Sullivan noted, Sandy struck in the waning days of the hurricane season. “Hurricane Sandy was at the very end of the hurricane season and yet was one of the most devastating storms that we have ever seen,” she said.

But officials said repeatedly that residents the length of the coast – and beyond – needed to prepare in advance, in order to be able to ride out storms in their homes or, if needed, have an exit plan in place. Such preparations should include putting aside a 72-hour supply of food and water at home, or having an evacuation plan in case of storm damage or flooding. “This is a very dangerous hurricane season,” said Joe Nimmich, who directs disaster response and recovery for the Federal Emergency Management Agency. “If you are not prepared you may become one of the statistics we don’t care to have.” For the original report go to http://www.guardian.co.uk/world/2013/may/23/noaa-forecast-active-hurricane-season

The special issue, edited by Lorna Burns and Wendy Knepper, seeks to “sound new directions in Harris studies and attempt both to reinvigorate the current field and establish a new agenda for future scholarship.” Journal of Postcolonial Writing, Vol. 49, No. 2, 01 May 2013 is now available on Taylor & Francis Online.
Special Issue: “-Scapes” of Globality in the Work of Wilson Harris

This new issue contains the following articles:

- Revisionary “-scapes” of globality in the work of Wilson Harris: introduction, by Lorna Burns & Wendy Knepper (pages 127-132; DOI: 10.1080/17449855.2013.776361)
- The reality of trespass: Wilson Harris and an impossible poetics of the Americas, by Gemma Robinson (pages 133-147; DOI: 10.1080/17449855.2013.776372)
- The “impossible quest for wholeness”: sugar, cassava, and the ecological aesthetic in The Guyana Quartet, by Michael Niblett (pages 148-160; DOI: 10.1080/17449855.2013.776374)
- Cataclysmic life in Wilson Harris’s Jonestown, by Wendy Knepper (pages 161-173; DOI: 10.1080/17449855.2013.776376)
- Philosophy of the imagination: time, immanence and the events that wound us in Wilson Harris’s Jonestown, by Lorna Burns (pages 174-186; DOI: 10.1080/17449855.2013.776378)
- Legends of the Fall: on rereading Companions of the Day and Night, by Michael Mitchell (pages 187-197; DOI: 10.1080/17449855.2013.776383)
- Kaieteur: place of the pharmakos and deconstruction, by Tim Cribb (pages 198-208; DOI: 10.1080/17449855.2013.776386)
- Intrasubjectivity in the philosophy of Wilson Harris, by Paget Henry (pages 209-221; DOI: 10.1080/17449855.2013.779093)

As part of the International Colloquium “La diversidad cultural en el Caribe” [Cultural Diversity in the Caribbean] being held from May 20 to May 24, 2013, Casa de las Américas presents “Rostros del Carnaval” [Faces of the Carnival], a photographic exhibition by Mario Picayo and Mariano Hernández. The exhibition opens tonight, Thursday, May 23, at 7:00pm, at Galería Mariano. The gallery is located at #607 15th Street, between Avenues B and C in Vedado (Havana, Cuba). For more information, see http://www.lapapeleta.cult.cu/actividad/detalles/1429-rostros-del-carnaval/

The 13th International Conference on Caribbean Literature (ICCL)—Panama in the Caribbean: The Caribbean in Panama—will be hosted by the University of Panama, the country’s largest and most renowned institution of higher learning, on November 13-16, 2013. The deadline for submissions is July 15, 2013.

Description: For this historic event, ICCL will assemble 150-200 scholars from a host of colleges and universities in Central America, South America, the Caribbean, North America, Europe, Asia and Africa. The hosts have arranged a unique program to interact with the Panamanian people as you explore important historical and cultural sites in Panama City, while you engage in lectures, discussions, readings, and performances by prominent Panamanian scholars, writers, and artists. Of course, you will be afforded the unforgettable experience of touring one of the world’s technological, commercial, and geographical wonders: the Panama Canal. Although the organizers are particularly interested in Caribbean literature, presentations may focus on any aspect of Caribbean culture. Papers and panels may be presented in Spanish, French, and English. Please send one-page abstracts as indicated below:

(French or Spanish presentations)
Dr. Jorge Román-Lagunas
Department of Modern Languages
Purdue University Calumet
2200 169th Street
Hammond, IN 46323-2094
Phone: 219-989-2379
Fax: 219-746-9372
Email: email@example.com

(English presentations)
Dr. Melvin B. Rahming
Department of English
Morehouse College
830 Westview Dr., S.W.
Atlanta, GA 30314
Phone: 404-572-3607
Fax: 404-614-8545
Email: firstname.lastname@example.org

For further conference details, visit www.icclconference.org

Today (May 23, 2013), Campus Principal and Pro Vice Chancellor, Professor Clement Sankat, will host a public lecture and launch of Britain’s Black Debt: Reparation for Caribbean Slavery & Native Genocide, a book by Professor Sir Hilary Beckles, at the Daaga Auditorium, University of the West Indies-St. Augustine, at 5:30pm.

Description: Since the mid-nineteenth-century abolition of slavery, the call for reparations for the crime of African enslavement and native genocide has been growing. In the Caribbean, grassroots and official voices now constitute a regional reparations movement. It is a fractured, contentious and divisive call, but it generates considerable public interest. Britain’s Black Debt is the first scholarly work that looks comprehensively at the reparations discussion in the Caribbean. Author Hilary McD. Beckles is a leading economic historian of the region and a seasoned activist in the wider movement for social justice and advocacy of historical truth, and as such, he is uniquely positioned to explore the origins and development of reparations as a regional and international process. Beckles weaves detailed historical data on Caribbean slavery and the transatlantic slave trade together with legal principles and the politics of postcolonialism, and sets out a solid academic analysis of the evidence. He concludes that Britain has a case of reparations to answer, which the Caribbean should litigate. International law provides that chattel slavery as practised by Britain was a crime against humanity. Slavery was invested in by the royal family, the government, the established church, most elite families, and large public institutions in the private and public sector. Citing the legal principles of unjust and criminal enrichment, Beckles presents a compelling argument for Britain’s payment of its black debt, a debt that it continues to deny in the face of overwhelming evidence to the contrary. Britain’s Black Debt is at once an exciting narration of Britain’s dominance of the slave markets that enriched the economy and a seminal conceptual journey into the hidden politics and public posturing of leaders on both sides of the Atlantic. No work of this kind has ever been attempted. No author has had the diversity of historical research skills, national and international political involvement, and personal engagement as an activist to present such a complex yet accessible work of scholarship.

Professor Sir Hilary McD. Beckles holds a Chair in Social and Economic History, University of the West Indies-Cave Hill, Barbados, where he is also Principal and Pro-Vice Chancellor. He is Vice-President of the International Scientific Committee for the UNESCO Slave Route Project, and member of the International Advisory Board of the Cultures and Globalization Series. A leading voice on reparations issues, he led the Barbados National Delegation and coordinated Caribbean actions at the UN Conference on Race in Durban, 2001. His many publications include Natural Rebels: A Social History of Enslaved Black Women in Barbados; Centering Woman: Gender Discourses in Caribbean Slave Society; and A History of Barbados: From Amerindian Settlement to Nation-State.
For more information, see http://sta.uwi.edu/news/ecalendar/event.asp?id=1925 For purchasing information, see http://www.amazon.com/Britains-Black-Debt-Reparations-Caribbean/dp/976640349X In his column “Dowd on Drinks,” Bill Dowd (Times Union) writes about how the Bacardi Company is releasing a television commercial that capitalizes on the supposed historical origins of the “Cuba Libre” cocktail—rum and Coke. [Remember to watch the video of the ad in the link below!] Through all sorts of societal changes and over several generations, the Cuba Libre has endured as a very popular cocktail. The recipe is a simple one: Light rum, Coca-Cola and a squeeze of lime. Where it came from is, as is the case with so many cocktail origins, a matter of opinion. The most popular version matches that told in a soon-to-be-released Bacardi USA TV commercial — that it was created in Cuba in 1900 as Colonel Teddy Roosevelt and his Rough Riders helped fight for the island’s independence from Spain — and takeover by the U.S. They toasted the victory with the cheer “Free Cuba!” or “Cuba Libre!” in Spanish. The spot, reports Advertising Age, is the first in a series of ads showing historical events that shaped the 151 year-old brand, which has links to the creation of other rum cocktails such as the Daiquiri and Mojito. However, Coca-Cola won’t be getting a free ride on the Bacardi advertising dollar. The ad will refer to the drink as “run [sic] and cola.” The historic theme may well be in response to competitors’ rum ads featuring historic personalities. Diageo has recast its once silly Captain Morgan as real-life privateer Captain Henry Morgan of the 1600s. William Grant & Sons is pushing its Sailor Jerry rum by using Norman “Sailor Jerry” Collins, a renowned American tattoo artist and Navy man of the mid-1900s. Last year, both brands gained market share on Bacardi, although it remains the top-selling U.S. rum with 35.4% share in 2012, according to Euromonitor International which measures volume of liters sold. Captain Morgan is No. 2 with 23.2%, and Sailor Jerry No. 7 at 2.6%. Bacardi’s campaign is timed to coincide with Cuban Independence Day on Monday. Interesting, considering both Bacardi and Coca-Cola left the island nation after Fidel Castro came to power. Bacardi now is made in Puerto Rico; Coca-Cola in plants all over the world — except Cuba and North Korea where the product is not sold. For original post, see http://blog.timesunion.com/dowdondrinks/new-ad-revives-the-history-of-the-cuba-libre/14685/ Xylem’s YSI Integrated Systems and Services (ISS) has been awarded a contract for five marine monitoring buoys by The Caribbean Community Climate Change Centre (CCCCC). The buoys will collect high-quality data for researchers studying climate change in the Caribbean Sea, including the waters of Barbados, Belize, Dominican Republic, St. Lucia, and Trinidad and Tobago. The customized YSI EMM 2000 buoys will measure, record and transmit real-time water quality and meteorological data as key components of a Coral Reef Early Warning System (CREWS). The entire system will be powered by solar panels. “The Caribbean is a unique part of the world. Our waters are the ‘bread basket’ for the region, and we must be diligent in protecting and sustaining them,” says Dr. Kenrick Leslie, CCCCC executive director. “We are very excited to build our education and research infrastructure with the addition of this important technology project for addressing the impacts of climate change on the Caribbean ecosystem.” [. . .] 
Coral reefs play an extremely important role in the Caribbean economy for tourism as well as food production and food security. The region’s unique reefs have been impacted by rising sea temperatures and pollution. Long-term monitoring of environmental conditions in the Caribbean will help researchers track the health of the reefs, among the oldest and most diverse ecosystems on the planet; the network mirrors similar systems already installed at key reef sites in the Atlantic and Pacific Oceans. Data will allow development of climate models and ecological forecasting in coral reef ecosystems. [. . .] Caribbean researchers and scientists from national and regional universities, government coastal marine research departments and non-governmental organizations are expected to use and benefit from the data to be generated by the CREWS stations. The CREWS system will be expandable with additional sensors and parameters—such as CO2 and underwater photosynthetically active radiation (PAR)—to accommodate visiting researchers who later join the collaborative project. The CCCCC will work with the National Oceanic and Atmospheric Administration (NOAA) and YSI to install and operate this network, beginning in spring 2013. The CREWS project is funded by the European Union and the Global Climate Change Alliance in the amount of US $617,000 (€ 465,000) and is part of a wider climate change project – “The Global Climate Change Alliance Caribbean Support Project” – being implemented by the Caribbean Community Climate Change Centre.
Java vs. C: Is Java easier or harder than C?

Java Virtual Machine: The key to Java's portability and security is the Java Virtual Machine.

History of Java: Java was designed by Sun Microsystems in the early 1990s to solve the problem of connecting many household machines together. This project failed because no one wanted to use it.

Java is arguably the best overall programming language, but there are problems with it.

Java is an excellent programming language.

GUI - Swing vs. AWT: The original graphical user interface (GUI) for Java was called the Abstract Windowing Toolkit (AWT).
SOHO is part of the first Cornerstone project in ESA's science programme; the other part is the Cluster mission. Both are joint ESA/NASA projects in which ESA is the senior partner. SOHO and Cluster are also contributions to the International Solar-Terrestrial Physics Programme, to which ESA, NASA and the space agencies of Japan, Russia, Sweden and Denmark all contribute satellites monitoring the Sun and solar effects.

Of the spacecraft's 12 sets of instruments, nine come from multinational teams led by European scientists and three from US-led teams. More than 1500 scientists from around the world have been involved with the SOHO programme, analysing and interpreting SOHO data for their research projects.

SOHO was built for ESA by industrial companies in 14 European countries, led by Matra Marconi (now called ASTRIUM). The service module, with solar panels, thrusters, attitude control systems, communications and housekeeping functions, was prepared in Toulouse, France. The payload module carrying the scientific instruments was assembled in Portsmouth, United Kingdom, and mated with the service module in Toulouse. NASA launched SOHO and is responsible for tracking, telemetry reception and commanding.
The scary hidden stressor

In her introduction to a compelling new study, “The Arab Spring and Climate Change,” released Thursday, the Princeton scholar Anne-Marie Slaughter notes that crime shows often rely on the concept of a “stressor.” A stressor, she explains, is a “sudden change in circumstances or environment that interacts with a complicated psychological profile in a way that leads a previously quiescent person to become violent.” The stressor is never the only explanation for the crime, but it is inevitably an important factor in a complex set of variables that lead to a disaster.

“The Arab Spring and Climate Change” doesn’t claim that climate change caused the recent wave of Arab revolutions, but, taken together, the essays make a strong case that the interplay between climate change, food prices (particularly wheat) and politics is a hidden stressor that helped to fuel the revolutions and will continue to make consolidating them into stable democracies much more difficult.

Jointly produced by the Center for American Progress, the Stimson Center and the Center for Climate and Security, this collection of essays opens with the Oxford University geographer Troy Sternberg, who demonstrates how in 2010-11, in tandem with the Arab awakenings, “a once-in-a-century winter drought in China” — combined, at the same time, with record-breaking heat waves or floods in other key wheat-growing countries (Ukraine, Russia, Canada and Australia) — “contributed to global wheat shortages and skyrocketing bread prices” in wheat-importing states, most of which are in the Arab world.

Only a small fraction — 6 percent to 18 percent — of annual global wheat production is traded across borders, explained Sternberg, “so any decrease in world supply contributes to a sharp rise in wheat prices and has a serious economic impact in countries such as Egypt, the largest wheat importer in the world.” The numbers tell the story: “Bread provides one-third of the caloric intake in Egypt, a country where 38 percent of income is spent on food,” notes Sternberg. “The doubling of global wheat prices — from $157/metric ton in June 2010 to $326/metric ton in February 2011 — thus significantly impacted the country’s food supply and availability.” Global food prices peaked at an all-time high in March 2011, shortly after President Hosni Mubarak was toppled in Egypt.

Consider this: The world’s top nine wheat-importers are in the Middle East: “Seven had political protests resulting in civilian deaths in 2011,” said Sternberg. “Households in the countries that experience political unrest spend, on average, more than 35 percent of their income on food supplies,” compared with less than 10 percent in developed countries. Everything is linked: Chinese drought and Russian bushfires produced wheat shortages leading to higher bread prices fueling protests in Tahrir Square. Sternberg calls it the globalization of “hazard.”

Ditto in Syria and Libya. In their essay, the study’s co-editors, Francesco Femia and Caitlin Werrell, note that from 2006 to 2011, up to 60 percent of Syria’s land experienced the worst drought ever recorded there — at a time when Syria’s population was exploding and its corrupt and inefficient regime was proving incapable of managing the stress. In 2009, they noted, the U.N.
and other international agencies reported that more than 800,000 Syrians lost their entire livelihoods as a result of the great drought, which led to “a massive exodus of farmers, herders, and agriculturally dependent rural families from the Syrian countryside to the cities,” fueling unrest. The future does not look much brighter. “On a scale of wetness conditions,” Femia and Werrell note, “‘where a reading of -4 or below is considered extreme drought,’ a 2010 report by the National Center for Atmospheric Research shows that Syria and its neighbors face projected readings of -8 to -15 as a result of climatic changes in the next 25 years.” Similar trends, they note, are true for Libya, whose “primary source of water is a finite cache of fossilized groundwater, which already has been severely stressed while coastal aquifers have been progressively invaded by seawater.” Scientists like to say that, when it comes to climate change, we need to manage what is unavoidable and avoid what is unmanageable. That requires collective action globally to mitigate as much climate change as we can and the building of resilient states locally to adapt to what we can’t mitigate. The Arab world is doing the opposite. Arab states as a group are the biggest lobbyists against efforts to reduce oil and fuel subsidies. According to the International Monetary Fund, as much as one-fifth of some Arab state budgets go to subsidizing gasoline and cooking fuel — more than $200 billion a year in the Arab world as a whole — rather than into spending on health and education. Meanwhile, locally, Arab states are being made less resilient by the tribalism and sectarianism that are eating away at their democratic revolutions. As Sarah Johnstone and Jeffrey Mazo of the International Institute for Strategic Studies conclude in their essay, “fledgling democracies with weak institutions might find it even harder to deal with the root problems than the regimes they replace, and they may be more vulnerable to further unrest as a result.” Yikes. Thomas L. Friedman is a columnist for The New York Times.
The Upanishads, Part 1 (SBE01), by Max Müller, at sacred-texts.com

1. Next let a man meditate on the sevenfold Sâman which is uniform in itself [1] and leads beyond death. The word hiṅkâra has three syllables, the word prastâva has three syllables: that is equal (sama).

2. The word âdi (first, Om) has two syllables, the word pratihâra has four syllables. Taking one syllable from that over, that is equal (sama).

3. The word udgîtha has three syllables, the word upadrava has four syllables. With three and three syllables it should be equal. One syllable being left over, it becomes trisyllabic. Hence it is equal.

4. The word nidhana has three syllables, therefore it is equal. These make twenty-two syllables.

5. With twenty-one syllables a man reaches the sun (and death), for the sun is the twenty-first [2] from here; with the twenty-second he conquers what is beyond the sun: that is blessedness, that is freedom from grief.

6. He obtains here the victory over the sun (death), and there is a higher victory than the victory over the sun for him, who knowing this meditates on the sevenfold Sâman as uniform in itself, which leads beyond death, yea, which leads beyond death.

Footnotes:
[1] Âtmasammita is explained by the commentator either as having the same number of syllables in the names of the different Sâmans, or as equal to the Highest Self.
[2] There are twelve months, five seasons, three worlds, then follows the sun as the twenty-first. Comm.
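A gloss on the counting, added here for convenience (the arithmetic is implicit in the verses and the commentator's note, not stated as a sum in the translation):

$$3_{\text{hiṅkâra}} + 3_{\text{prastâva}} + 2_{\text{âdi}} + 4_{\text{pratihâra}} + 3_{\text{udgîtha}} + 4_{\text{upadrava}} + 3_{\text{nidhana}} = 22,$$

and, for the footnote's reckoning of the sun as "the twenty-first,"

$$12\ \text{months} + 5\ \text{seasons} + 3\ \text{worlds} + 1\ (\text{sun}) = 21.$$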
History of the Salida Library

In 1894, only 14 years after Salida was incorporated, a group of eleven townswomen formed the Tuesday Evening Club. One of the cultural objectives of this organization was to found a city library. During the first year, the few books purchased from club dues were kept on donated shelves at Central School. In 1896, the growing collection was moved to a small room on West Second Street, near the old opera house. In 1898, the library was relocated first to a small one-story brick building at the corner of "F" and Third Streets, and later to a large second-story room in City Hall. To supplement the early book purchasing budget, the Tuesday Evening Club sponsored numerous public benefits. The members also took turns serving as librarian and custodian, except when the collection was at Central School.

The campaign for procuring a site and raising funds to build a public library started in 1905. That year, Mrs. Ruth Spray wrote, "For years, some of us had looked longingly on the vacant place by Alpine Park, corner of 'E' and Fourth Streets, as the only site that was perfect for our library. Whenever we would speak of that location, we were met on all sides with 'But you cannot get hold of those lots. People have tried in vain to reach the owner of them.'" The Tuesday Evening Club determined to find the owners of the vacant land and purchase the lots for the public library. The location was ideal: across from beautiful Alpine Park, between the two principal schools, and near the downtown business area. Only a few months later, they located owners S.G. Stein in Muscatine, Iowa and A.M. Barnhart in Chicago, Illinois. In late 1905 correspondence began and, by November 10, 1906, they had obtained the lots. Mrs. Mary Ridgeway was the first president of the Tuesday Evening Club, and she and her husband, Carl (A.C.), donated the generous sum of $1,200 to pay for the lots.

While the land campaign progressed, the club began to correspond with millionaire Andrew Carnegie of New York City. He had decided as a young boy that if he ever became wealthy, he would use his wealth to help establish free public libraries. The club was able to convince Mr. Carnegie that the community of Salida would faithfully support a public library. On December 23, 1905, he said he would provide $9,000 toward the construction of the library building if the club had a site for the building. The entire Carnegie donation was received by November, 1908. The community was to provide $6,000.

Many Salida citizens were interested in the efforts of the Tuesday Evening Club to build a public library. One of the staunchest supporters was Colonel William Penn Harbottle, a Civil War veteran and highly respected citizen. Upon his death in early 1906, it was learned that he had willed his personal library and his home at 546 "G" Street to the Salida Library Association, an organization within the Tuesday Evening Club. His will stipulated that it be known as the Juliana Reference Library, for his mother, and that it be a non-circulating library wherever it might be housed. The Juliana Reference room remains an important part of the library today, and the Harbottle Estate continues to provide part of the funding for this part of the collection.

The eagerly awaited groundbreaking ceremony took place in October, 1907. The handsome Salida-granite cornerstone was laid in May, 1908, and the deed to the library transferred to the city. In February, 1909, the library was dedicated and opened for service.
Total construction costs were $15,000. The Tuesday Evening Club planned meeting rooms into the design of the library's lower level. The club leased this area until the 1970s, using it as a place to host various money-making events for the benefit of the library or subleasing it for the same purpose. In the 1970s, the library board determined that the library needed the space and the insurance company no longer allowed subleasing, so the Tuesday Evening Club lease was discontinued. The library gradually became more crowded over the following 20 years, and storage consumed an ever greater portion of the meeting rooms. This trend continued until the library addition was completed in 1998 and the meeting rooms returned to their original purpose. The rooms are still used for Tuesday Evening Club meetings, as well as for public meetings and library programs. In November 1974, the voters approved the formation of the Southern Chaffee County Regional Library District. This resulted in a broader tax base beginning in 1976 and thus provided more operational funds. The Salida Public Library name was changed to "Salida Regional Library" to represent the larger area now served. Voters approved additional funding for the library in two subsequent elections, 198X and 1995. In 198X, the mill levy was increased to 2.5 mills. In 1995, the levy was increased again to 3.5 mills, and a bond was approved for construction of the addition to the Carnegie building. Rapid growth in Chaffee County in the '90s, along with rapidly rising real estate prices, has increased the library's income, although statutory constraints currently prohibit the library from collecting its full approved mill levy. A temporary property tax credit is issued each year after property tax calculations are made. The library increased its hours from 40 to 70 per week and is open seven days a week, except for holidays. The book budget grew from $10,000 in 1995 to $60,000 in 2001. Budgets for other materials grew as well. The library offers fast and reliable Internet access for the public, and supplements the Internet with subscription databases such as periodical indexes, Contemporary Authors, Encyclopedia of Associations, and the like. Library usage has grown because of all these things, plus the use of the popular community room available to the public for meetings.
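For readers unfamiliar with the term, a mill is one dollar of tax per $1,000 of assessed property value, so the levy increases above translate directly into household dollars. A minimal sketch of that arithmetic; the assessed value is a made-up example, not Salida data:

```python
# Illustrative only: what a mill levy means in dollar terms.
# A mill is $1 of tax per $1,000 of assessed (not market) value.

def library_tax(assessed_value, mills):
    """Annual library tax for a property, given the levy in mills."""
    return assessed_value / 1000 * mills

assessed = 20_000  # hypothetical assessed value, in dollars
for mills in (2.5, 3.5):
    print(f"{mills} mills on ${assessed:,} assessed = "
          f"${library_tax(assessed, mills):.2f}/year")
```

Under these assumptions, the 1995 increase from 2.5 to 3.5 mills raises the example household's library tax from $50 to $70 per year.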
fwe2-CC-MAIN-2013-20-29594000
Solar Images to be made by unique X-ray telescope
April 2, 1998: A unique cluster of telescopes that make X-rays take a U-turn has been selected for a fourth flight to capture "multicolored" images that will help us understand why the sun's outer atmosphere is so hot. Right: The Sun as seen in the glow of highly ionized iron. Such images are really taken in black and white; scientists assign them false colors to help in studying different images. "One of the major objectives is to follow up on something we saw on the first flight 10 years ago," said Dr. Arthur B.C. Walker II of Stanford University, the principal investigator for the Chromospheric/Corona Spectroheliograph telescope. It will actually be a bundle of up to 19 telescopes, each taking pictures of the sun in a slightly different X-ray energy. The array is an upgrade of the Multi-Spectral Solar Telescope Array (MSSTA), which flew on October 23, 1987, May 13, 1991, and November 3, 1994. The 1987 flight - which also made the September 30, 1988 cover of Science magazine - returned pictures that showed where the sun's atmosphere was as hot as 1 million deg. K (about 1.8 million deg. F) and also showed spectral lines that indicated temperatures of about 700,000 deg. K (1.26 million deg. F). "We were mystified by this," Walker said. "We are now convinced that there is material at about 700,000 degrees K in the transition region and which contributes to coronal heating." NASA recently selected the Chromospheric/Corona Spectroheliograph under the solar physics research program. Richard Hoover of NASA's Marshall Space Flight Center and Troy W. Barbee of Lawrence Livermore National Laboratory are co-investigators with Walker. Their project is entitled Investigation of the Corona/Chromosphere Interface. This is the same region that will be studied by the Transition Region and Coronal Explorer (TRACE), scheduled for launch Thursday evening from California. The Chromospheric/Corona Spectroheliograph will complement TRACE by providing images of solar gases at temperatures as high as 5 million degrees K (9 million deg. F). While the sun is more than 99.9 percent hydrogen and helium, it carries significant quantities of carbon, iron, calcium, silicon, and other elements. Heavier elements have more protons (carbon has 6, iron 26) in their nuclei than do lighter elements (hydrogen has 1, helium 2). That means that as electrons are stripped from heavier atoms, the charge of the larger number of protons is devoted to the few remaining electrons, and it takes ever more energy to strip off another electron. As a result, light from energetic atoms acts like a tracer that reveals where the sun is hot and at what temperatures. This is important to dissecting activity from the sun's corona - its outer atmosphere - through the transition region and down to the chromosphere and photosphere - the visible "surface." The challenge is that the X-ray emissions are so energetic that they pass through materials rather than being reflected as visible light would be. The usual trick to making X-ray images is called grazing incidence reflection. Just as light will reflect off clear glass (or a rock will skip on a pond) if it strikes at a shallow angle, X-rays will reflect - and be focused - if they, too, strike at an even shallower angle. Several X-ray telescopes, such as the Advanced X-ray Astrophysics Facility, use this approach. The MSSTA works by a different effect.
Its multi-layer mirrors comprise an ultrasmooth mirror coated by up to 100 layers of heavy elements like tungsten, spaced by layers of lightweight elements like carbon. In effect, the layers work like a Bragg crystal, which will reflect X-rays. Everything is extremely smooth, on the order of 0.1 nm (a ten-billionth of a meter, or 1/250 millionth of an inch). The layers each reflect a little bit of the X-rays at the surface of each layer pair. The choice of materials and the thickness of the layers determine precisely which wavelengths the reflections from successive layer pairs reinforce through interference. In this way, the scientists can fine-tune a telescope to observe in a narrow band of wavelengths (a spectral band) or even a single wavelength. That makes it possible to measure the temperature of the solar atmosphere. To observe the sun in several wavelengths at once, several telescopes must be flown together. This unique approach makes it possible to use conventional optical layouts - like the Hubble Space Telescope's Ritchey-Chretien design - and get a much larger collecting area and brighter images than are possible with grazing-incidence optics of the same size. The design was invented by Barbee (and separately by scientists at IBM) and pioneered by Barbee, Walker, and Hoover for use in telescopes. The MSSTA (right) carries up to 19 telescopes of various sizes, each with a filter designed to admit only radiation of a specific wavelength or wavelength band, each corresponding to a specific temperature in the sun's atmosphere. Even though each image is taken in black-and-white, each represents a different wavelength and a different temperature in the solar atmosphere. To help in studying them, scientists often give them false colors to distinguish one from the other. This is similar to a color print that is really made from four black-and-white negatives, each used to print a different color. On its fourth flight, the array will include a telescope that can see Fe XVII, iron stripped of 16 of its 26 electrons, which takes temperatures up to 5 million deg. K. "It would be a better indicator of the distribution of high-temperature gases in the solar atmosphere," Walker said. This may also reveal small flares that may be one source of energy being pumped into the corona. For the C/CS flight, expected by early 2000, around the time of solar maximum, MSSTA will be upgraded and some new telescopes and detectors installed. As with its first two flights, the telescope will be boosted by a Terrier Black Brant IX launched from the White Sands Missile Range, N.M. The C/CS payload will be boosted to an altitude of 230 km (144 mi) and will then parachute back to Earth for recovery. During the coast above Earth's atmosphere, the telescope array will be pointed precisely at the sun for about 6 minutes. Each telescope will take 10 to 15 full-disk images. Ground-based observatories will take pictures at the same time in white light and H-alpha, and with telescopes equipped to map magnetic fields.
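The wavelength selection described above is, in simplified form, the Bragg condition: a multilayer with bilayer period d reinforces wavelengths satisfying m * lambda = 2 * d * cos(theta), with theta measured from the surface normal. A minimal sketch of that relation, using a hypothetical 8.5 nm period rather than the actual MSSTA coating figures:

```python
import math

def bragg_wavelength_nm(period_nm, theta_deg=0.0, order=1):
    """Wavelength reinforced by a multilayer mirror (simplified).

    Bragg condition: order * wavelength = 2 * period * cos(theta).
    Refraction corrections used in real mirror designs are ignored.
    """
    return 2.0 * period_nm * math.cos(math.radians(theta_deg)) / order

# A hypothetical 8.5 nm tungsten/carbon bilayer period would reflect
# ~17 nm extreme-ultraviolet light at normal incidence.
print(f"{bragg_wavelength_nm(8.5):.1f} nm")  # -> 17.0 nm
```

Changing the period d retunes the mirror, which is why each telescope in the bundle can be matched to a different spectral line and hence a different temperature.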
fwe2-CC-MAIN-2013-20-29605000
Popocatépetl from the ISS on January 23, 2001 It might be (and is likely) just normal behavior for Popocatépetl in Mexico, but the volcano produced six plumes over the last 24 hours, according to a report out of Mexico City (in Spanish). Officials from El Centro Nacional de Prevención de Desastres (The National Center for the Prevention of Disasters – Cenapred) say that the plumes appear to be mostly water vapor and other volcanic gases, but remind people living near the volcano to be vigilant. Popocatépetl is only 70 km from Mexico City, so any major eruption from the volcano could affect life and air travel to the major metropolis. The last major eruptive period at Popocatépetl ran from 1996-2003, producing VEI 3 eruptions, but the volcano has been producing smaller eruptions since January 2005. The volcano produces a mixed bag of activity, with ash fall, lava flows, pyroclastic flows, and lahars, and it might be one of the more hazardous volcanoes in the Americas.
fwe2-CC-MAIN-2013-20-29607000
Doctors have always made notes about patients. By the 1800s doctors published their diagnoses and treatment records. However, there were no agreed standards for records or requirements to keep any. Keeping medical records became an important part of medical practice during the late 1800s, as treating large numbers of patients in hospitals and private practice relied on written records. In the early 20th century professional medical organisations pressured practitioners and hospitals to keep patient records. Medical records were written on paper and kept in folders, but managing thousands of paper-based records became complex and expensive for hospitals during the 20th century. Tabulating machines sorted and managed patient records until the 1960s: patient information was recorded onto key-punched cards, which were sorted into groupings by the tabulating machine. During the 1970s hospitals began to store patient records electronically using computers. Computers stored and retrieved vast amounts of information at high speed and low cost, and they became invaluable. However, there are concerns about the privacy and safety of electronic patient records.
fwe2-CC-MAIN-2013-20-29610000
Short-listed for the 2012 Great Plains Distinguished Book Prize
South Dakota's role in the nineteenth-century political movement
Lee's book is a "worthy and important addition to the canon of South Dakota political history."—Prairie Progressive blog
A "thorough analysis"—Western Historical Quarterly
The Populist movement of the 1890s was one of the most successful third-party initiatives in United States history. Although it never elected a president, this movement seated governors, congressmen, and United States senators, and played a major political role in a number of states, including all the Great Plains states then in the nation. Populism has been thoroughly studied in many areas of the country, but South Dakota has, so far, been neglected. R. Alton Lee's Principle over Party begins to correct this oversight, shining light on the prominent South Dakotans who strode down the path to the progressive agrarian politics that dominated the state in the late 1880s and early 1900s. Lee examines the causes that led South Dakota farmers to rise up against the establishment and take their fate into their own hands. He discusses prominent figures Henry Loucks and Alonzo Wardall as well as political and social movements such as the Farmers' Alliance. Together these men and their organizations sowed the seeds of the Populist Party in South Dakota. Principle over Party showcases the successes and failures of one of the most lasting political movements in this nation's history.
"Principle over Party is an excellent, well-researched and accessible contribution to South Dakota and American political history shelves. Highly recommended."—The Midwest Book Review
"This book starts out a bit dry but soon becomes much more interesting. Sometimes I have thought South Dakota's political history, compared to the fascinating political history of North Dakota, is not very exciting. But these years, 1880 to 1900, were vividly tumultuous political years and this book tells the story carefully and well. The career of Richard Pettigrew lent much excitement to the politics of the time and this book rescues Henry Loucks from the obscurity into which he has fallen--no doubt because he never was elected to anything. Anyone interested in South Dakota's politics will want to read this excellently researched book."—Schermguls, LibraryThing.com
Paul Guggenheimer interviewed Al Lee on SDPB Radio's Dakota Midday.
R. Alton Lee is an acknowledged expert on American political history.
fwe2-CC-MAIN-2013-20-29615000
Curriculum: Early Childhood
Our early childhood curriculum is designed to nurture the spiritual, academic, physical, and social development of the young child. Children learn by exploring and developing social, emotional, and cognitive skills. Centers and theme units are presented to challenge and stimulate the children through their hands-on participation. The children receive individual, small-group, and whole-group instruction, which develops their skills with letters, numbers, and writing. Creativity in art, music, role-playing, and storytelling is encouraged and stimulated through lively play and fun activities. Parental communication is a vital component of the program. Each class has a certified teacher and a full-time aide. The religious education program consists of a multi-dimensional approach to developing the young child’s spirituality. Children learn about God and our faith through teacher-designed activities and by attending school liturgies. They learn to respect and appreciate others, and they develop an awareness of their place in the world around them. Pre-K4 uses the Pflaum Gospel Weeklies Faith Formation Program. Kindergarten uses the I Am Special religion series. Our readiness curriculum is designed to provide the emerging reader with age-appropriate reading and writing skills. Phonics is an integral component of the early reading curriculum. Children are read to each day. They may also choose books to read from the classroom library. Kindergarten uses the Rowland Reading Program: Meet the Superkids for reading development. Number concepts and relationships are taught through the use of a wide variety of manipulatives and teacher-directed activities. The program also develops problem solving and analytical thinking. Kindergarten uses the Harcourt math curriculum. Children are given the opportunity for active, hands-on learning. Center activities are varied and rotated according to the weekly theme. Centers provide a multi-sensory approach to age-appropriate learning. The daily curriculum includes the additional core subjects of science, social studies, art, and music. Each day the students receive an enrichment class of physical education, computer, music, library, or guidance. In addition, Kindergarten’s curriculum includes one day of French.
fwe2-CC-MAIN-2013-20-29620000
Global warming can be reduced, but at what cost? CHICAGO — In a United Nations report this month, scientists said the cost of aggressively tackling climate change was comparatively reasonable. By spending a little more than 0.1 percent of the world's income each year for 23 years, they say, greenhouse gases could be held nearly in check, avoiding the worst predicted environmental disasters. The same day, Bush administration officials argued that the same aggressive effort would throw the world's economy into recession. The reality, top climate economists say, is that cutting U.S. emissions sufficiently to hold greenhouse-gas concentrations at near-current levels soon could cost the United States twice as much per year as it is now spending on the war in Iraq. But, as the U.N. report essentially urges, spending $1 trillion a year worldwide over two decades to aggressively curb global warming could be a bargain in the long run. "It isn't going to be cheap, but there's an awful lot we can do, and it doesn't break the bank, especially if we do it cleverly," argued Robert Socolow, a physicist, co-director of the Carbon Mitigation Initiative at Princeton University and a leading theorist on ways to reduce greenhouse-gas emissions. "I don't see how we get a recession out of it." For the United States, the most aggressive scenario in the new U.N. Intergovernmental Panel on Climate Change mitigation report — holding greenhouse gases in the atmosphere to less than 500 parts per million, up from the current 380 parts per million — could cost $240 billion a year, or 2 percent of the nation's income, said Robert Mendelsohn, a climate-change economist at Yale University. The Iraq war, comparatively speaking, has cost a little less than $100 billion a year on average since it began in 2003. That 2 percent of national income figure is much higher than the cost of 0.12 percent of world income quoted in the U.N. report because the United States is the world's leading producer of greenhouse gases and therefore has more work to do cutting them, Mendelsohn said. Many economists also say U.N. figures suggesting a moderate cost for limiting climate change assume that nations around the world would act quickly and in concert to target the problem, something political leaders say is highly unlikely. Reducing greenhouse gases vigorously and quickly probably would push Americans' heating and electric bills up by 50 percent to 100 percent, said Jae Edmonds, a scientist and economist with the Joint Global Change Research Institute, based in Maryland. Gasoline prices would rise between 50 cents and $1 a gallon, he said. Whether that is a cheap or expensive price to pay for cutting emissions is a matter of perspective, he said. "Some might look at those numbers and say that's a pretty good buy to avoid the potential negative implications of climate change," he said. "Others might think those costs look high and say they'd rather go slower." Choosing a sufficiently aggressive plan to stave off the worst effects of climate change without dire economic consequences is a complicated balancing act, economists say, particularly because so many variables remain unknown. Too vigorous a worldwide campaign could backfire, hurting economic growth and alienating key greenhouse-gas producers. But doing too little too slowly might waste a crucial opportunity to avoid potentially catastrophic impacts of global warming and to dodge greater costs in the future. 
The right answer, many economists suggest, is to act quickly to launch tests of potentially useful technology and programs worldwide, then rapidly scale up those that work. In Mendelsohn's view, the most aggressive level of greenhouse-gas cuts promoted in the U.N. report is "too radical a recommendation to be supported by mainstream economics." Because efforts to control greenhouse gases will be effective only if all of the world's major producers take part, "by starting with a crash program you ensure a lot of countries are not going to join in," he said. However, "you don't want to get sucked into thinking the only choice is to do the crash program or nothing at all," he said. He suggests that a much more modest target — limiting atmospheric concentrations of greenhouse gases to perhaps 640 to 750 parts per million — would cost the United States a tenth as much as the most aggressive scenario outlined in the U.N. report. Worldwide, the cost would fall by about half, according to the report. Other scientists and economists say holding greenhouse-gas concentrations to about 550 parts per million, at somewhat higher cost, is a better option. Under Mendelsohn's scenario, average global temperatures would be expected to rise by 7 to 11 degrees Fahrenheit by the end of the century, according to the U.N. panel, compared with about 3 to 6 degrees under the most aggressive program. Because no one knows what temperature increase might trigger disastrous environmental problems — large sea-level rises, worsening flooding and droughts, a disruption of ocean circulation patterns — the lower range of temperature increases is generally thought to be safer. Development of new technology and creative use of existing technology potentially could cut the costs of reducing emissions dramatically. Because plants draw carbon dioxide from the atmosphere when they grow, using plant fuels rather than fossil fuels effectively cuts emissions of greenhouse gases, Edmonds said. If engineers are able to find efficient ways to use plants to create fuel and then capture carbon dioxide released from the smokestacks of plant-fueled power stations and pump it into storage underground, the world could potentially lower levels of greenhouse gases in the atmosphere while generating power. But racing too quickly toward renewable energy and other efforts to cut greenhouse-gas emissions could have problematic consequences as well, Mendelsohn warned. Using more nuclear power, he said, will lead to renewed concerns about what to do with nuclear waste. Planting billions of acres of new crops for biofuels could lead to accelerating deforestation in places such as Brazil and Indonesia. And efforts to boost hydroelectric generation could result in many of the world's last wild rivers being dammed. David O'Reilly, the chief executive of Chevron, points to a Senate bill calling on the Energy Department to develop a plan to cut gasoline consumption by 20 percent by 2017, 35 percent by 2025 and 45 percent by 2030, largely by substituting ethanol and other renewable fuels. Under the Senate proposal, the amount of alternative fuels used in U.S. motor vehicles would rise to 8.5 billion gallons by 2008 and 36 billion gallons by 2022. The problem, O'Reilly said, is that U.S. farmers cannot currently produce enough corn to make more than 15 billion gallons of fuel. Producing 36 billion gallons would require huge corn imports or a massive overhaul of the U.S. agricultural economy. 
And Chevron is not just protecting its fossil-fuels turf; the company already produces 70 percent of the ethanol made in the United States. "We're dealing with a massive economy and a massive energy infrastructure that was developed to supply this economy," O'Reilly said. "You can't turn that around in just a couple of years."
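The cost comparisons running through this article reduce to simple percentages of income. A quick back-of-envelope check, using round income figures implied by the quoted numbers rather than data from the U.N. report:

```python
# Rough check of the figures quoted above. The income bases are assumed
# round numbers, not values taken from the U.N. report.
WORLD_INCOME = 55e12  # assumed gross world product, dollars per year
US_INCOME = 12e12     # income base implied by "$240 billion = 2 percent"

world_cost = 0.0012 * WORLD_INCOME  # "a little more than 0.1 percent"
us_cost = 0.02 * US_INCOME          # "2 percent of the nation's income"

print(f"world: ${world_cost / 1e9:,.0f} billion per year")  # ~$66 billion
print(f"U.S.:  ${us_cost / 1e9:,.0f} billion per year")     # $240 billion
```

The gap between the two lines is the point Mendelsohn makes: the U.S. share of the bill is far larger than its share of world income because it produces a disproportionate share of the emissions.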
fwe2-CC-MAIN-2013-20-29621000
Many faculty teaching at the introductory level use environmental themes and hazards to get their students excited about the geosciences. Whether in the form of a stand-alone Environmental Geology or Natural Hazards course or as environmental content integrated into other introductory courses, these concepts are an important part of the geoscience education for many students who will never take another course in the sciences. This workshop will bring together educators from a wide variety of institutional settings and backgrounds with the common goal of sharing ideas about improving the pedagogy and environmental geology content of our introductory geoscience courses. As a part of this workshop, participants will: - Share what works in their classrooms with each other. We will identify innovative teaching methods, approaches, and activities for teaching Environmental Geology and share ideas on how to teach in various contexts: large classes, courses with no lab component, courses in urban areas, etc. - Examine where and how environmental geology topics are taught in the geoscience curriculum from introductory courses for non-majors to "core" geoscience courses for majors. We will discuss and develop ideas for maximizing the impact of environmental geology topics to ultimately improve undergraduate students' experience with and knowledge of geoscience. - Consider the ways that Environmental Geology courses and topical materials can contribute to public science literacy, particularly how to make personal and societal decisions about the range of issues facing humanity and to live responsibly and sustainably on this planet. - Develop a list of best practices for integrating emerging environmental issues, recent natural disasters, and issues related to natural resources into course work and identifying how scientific data and research outcomes can inform public discourse on topical issues. - Develop strategies to reach under-represented groups and expand the diversity of students who enroll in our courses. We will consider strategies for improving the overall design of an Environmental Geology course to maximize its appeal and effectiveness. - Identify topics of high interest and need for future development as teaching modules and courses through the related InTeGrate project, through funding from the NSF/DUE TUES program, or through other local or national curricular initiatives. Participants must arrive in Bozeman in time for the first workshop event at 5 pm on Saturday, June 2. (Arrive earlier if you plan to attend the optional field trips.) The workshop will be over on Wednesday evening, June 6, and participants should plan return travel on Thursday, June 7 (those who stay an extra day can attend optional local hikes). By applying to the workshop, participants agree to do the following if accepted: - Serve on a review committee from April to June 2012, applying standardized review criteria to teaching activities in the On the Cutting Edge activity collection related to environmental geology. We anticipate that everyone will be asked to review ~5 activities using an on-line review form. - Submit additional teaching activities as needed that complement the existing collection, prior to the workshop. Our goal is to have a comprehensive, reviewed collection of teaching activities ready to showcase at the summer workshop. - Prepare in advance for workshop discussions via readings, writings, discussion or other activities developed by workshop leaders. 
- Participate fully in the entire workshop and attend all workshop sessions. Many participants will be invited to make presentations or serve as discussion or working group leaders at the workshop. - Post-workshop: continue to network with workshop participants, share workshop resources with colleagues across the geosciences, and participate in follow-on activities such as making presentations at theme sessions at professional society meetings. Application and Selection Criteria Applicants for this workshop must hold a faculty position at a two- or four-year college or university and have responsibility for teaching environmental geology topics either in an Environmental Geology course or distributed through other courses. The workshop is limited to 70 participants, and the final list of participants will be established with the goal of assembling a group representing a wide range of experiences, educational environments, and specialties. For more information see our page on general information for Cutting Edge workshop participants. Costs and Logistics The workshop will be held at Montana State University, located in Bozeman, Montana. Our National Science Foundation grant provides funding for most of the operational costs of this workshop. To be supported by these funds, a participant must be either a US citizen, a permanent resident, or in the employ of a US institution. If you don't meet these requirements and are interested in participating in this workshop at your own expense, please contact the workshop conveners. Costs of the workshop not covered by the grant are outlined below. Workshop registration fee: $150. Travel and lodging: Participants or their home institutions must cover costs of lodging plus travel to and from the workshop. We will offer a low-cost option to stay in the dorms at MSU. Alternatively, participants may make their own lodging arrangements at a local motel, where we will hold a block of rooms. Rooming rates for this workshop have not been set yet, but in past workshops the MSU dorm option was ~$25/night single occupancy and the hotel option was ~$120 + tax per night. More information on the lodging options will be made available as soon as arrangements have been finalized. Optional field trips: There will be a separate fee for the optional pre- and post-workshop field trips. That fee has not yet been determined, but it will cover transportation and food. We will be able to offer small stipends to participants from institutions unable to cover the costs of travel and participation in Cutting Edge workshops. The deadline for applying for one of these stipends is March 12, 2012.
fwe2-CC-MAIN-2013-20-29629000
Question: Why does Shakespeare introduce here the game of chess? Answer: At the time this play was written chess was very popular in Naples, of which place Ferdinand was a prince. With this fact Shakespeare was doubtless familiar. It probably suggested to him the use of the game in this play. How to cite this article: Fleming, William H. How to Study Shakespeare. New York: Doubleday and Co., 1898. Shakespeare Online. 10 Aug. 2010. (date when you accessed the information) < http://www.shakespeare-online.com/plays/thetempest/questionst/chesstempest.html >.
fwe2-CC-MAIN-2013-20-29633000
Attending Public Hearings and Community Meetings
Showing up is 90% of the game. Public hearings provide an opportunity for public comments on a particular project or vote. This kind of community involvement can make a strong statement. - Time is limited at public hearings, so arrive early to sign up for a slot to speak. - When you speak, focus on your main points. You will often be able to submit written statements, which will allow you to address additional concerns. - Be polite and respect other community members’ ideas. A hearing is a forum for the exchange of ideas, not a neighborhood contest. Meeting with elected officials in person is an opportunity to make personal contact with decision-makers and convey your position in a persuasive and animated manner. A lobby visit allows you to tell your Senator or Representative what you think about a certain issue or bill and ask her/him to take positive action. Here are some suggestions for a successful lobby visit: Before the Meeting - Request a meeting in writing with specific times and dates. Follow up with a call to the scheduler or secretary to confirm the meeting. - Make sure to convey what issue or bill you would like to discuss. - Decide on talking points to express your most important ideas. - Set a goal for the meeting. Do you want the Representative to vote for or against a bill or introduce legislation? During the Meeting - Be prompt. - Keep it short and stick to your talking points. - Take the time to thank the elected official for past votes in support of your issues. - Provide personal and local examples of the impact of the legislation. - Be honest and don’t claim to know more than you do about an issue. You don’t have to be the expert, just a committed and active constituent. - Set a deadline or timeline for response. After the Meeting - Write a thank-you letter to the legislator. - Send any materials and information you offered. - Follow up on deadlines and if they are not met, set up others. Be persistent.
fwe2-CC-MAIN-2013-20-29634000
Did you know?? On average, a fertile cat can produce three litters a year, each with an average of four to six kittens. If you run the numbers, this means that a single cat and her first-year offspring can yield upwards of 150 kittens within a three-year period. A fertile dog can produce up to two litters a year of six to 10 puppies each. The Humane Society of the United States (HSUS) reports that every year in the U.S., between six and eight million dogs and cats are turned over to animal shelters; of that number, three to four million are euthanized -- as many as are adopted. These tragic numbers would be greatly reduced if more pets were spayed or neutered. And if that's not reason enough . . . Apart from the problem of pet overpopulation, keep in mind that "intact" (i.e. un-neutered) dogs and cats are not the most pleasant companions to have around the house. Here's why: - Intact female dogs will come into heat every six to 12 months with each heat lasting 10-24 days. During this time they have a bloody vaginal discharge which may leave stains around the house. This bleeding is different from menstruation in human females as it coincides with the time the female dog is most likely to become pregnant. Female dogs in heat may become anxious, and are more likely to fight with other female dogs, including those in the same household. - Intact female cats can keep coming into heat every two weeks unless they are mated. They will typically engage in such mate-seeking behaviors as yowling, rolling and urinating in unacceptable places. - At maturity -- typically at six to nine months of age -- male dogs and cats become capable of breeding. Males of both species will "mark" their territories by spraying strongly scented urine on furniture, curtains, and elsewhere around the house. - Given the chance, intact male cats and dogs will attempt to escape the house to roam in search of a mate. During this time, they become aggressive toward other males and -- in the case of dogs -- toward people, and are more likely than neutered animals to engage in fights. The medical benefits Apart from helping to ease the problem of pet overpopulation -- and making home life more pleasant both for your family and your pet -- spaying or neutering your dog or cat carries significant health benefits as well. Spaying female dogs eliminates the risk of uterine cancer and pyometra -- a serious, potentially fatal uterine infection -- and dramatically reduces the risk of mammary cancer in both dogs and cats, especially if done before the first heat. Intact female dogs may go into a period called pseudocyesis, or "false pregnancy", a condition which can occur after being in heat. Their bodies go through all of the usual hormonal changes associated with pregnancy, including milk production, even though they are not pregnant. This is avoided if females are spayed. For male pets, neutering eliminates the possibility of developing testicular cancer and reduces the risk of developing prostate illness. A further benefit to neutering male cats is that it will significantly reduce the risk of infection with Feline Immunodeficiency virus (FIV), a virus that causes a disease in cats similar to AIDS in humans. FIV is carried in the saliva and blood of infected cats. Intact male cats are much more likely than neutered males to roam and fight. A scratch or bite suffered in such a fight from an FIV-infected male carries a significant risk of FIV infection. The majority of FIV-infected cats are intact males.
And even if the wounds are not inflicted by an FIV-positive cat, they may nonetheless result in serious injury and infection. It all adds up While spaying/neutering are surgical procedures that carry a small element of risk, the scales are heavily tipped toward the benefits side. The incidence of complications from the procedures is quite low. On balance, it's a no-brainer: spaying/neutering is one of the best things you can do to improve a pet's quality of life. Discuss any questions or concerns you may have with your veterinarian while your pet is still young. You will be doing both your pet and yourself a great service.
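A rough sketch of "running the numbers" from the opening of this article; every parameter (litter size, survival, maturation time) is a simplifying assumption, and with no attrition at all this unconstrained model quickly overshoots the article's deliberately conservative 150-kitten figure:

```python
# Toy model of unchecked cat reproduction. Assumes every kitten survives,
# half are female, and females breed starting the year after they are born.
LITTERS_PER_YEAR = 3
KITTENS_PER_LITTER = 5   # midpoint of "four to six"
FEMALE_FRACTION = 0.5

females, total_kittens = 1.0, 0.0
for year in range(1, 4):
    born = females * LITTERS_PER_YEAR * KITTENS_PER_LITTER
    total_kittens += born
    females += born * FEMALE_FRACTION  # this year's kittens breed next year
    print(f"Year {year}: {born:.0f} kittens born, {total_kittens:.0f} total")
```

Real-world attrition keeps actual numbers far lower, but the exponential shape of the curve is the point: each unspayed generation multiplies the next.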
fwe2-CC-MAIN-2013-20-29640000
Just for fourth grade: twenty-four high-interest stories, paired with comprehension-building puzzles, facts, and activities! This valuable resource gives kids practice with: • main idea and details • making inferences • following directions • drawing conclusions PLUS—challenges that help develop vocabulary, understand cause and effect, distinguish between fact and opinion, and learn about story elements! 48 pages.
fwe2-CC-MAIN-2013-20-29644000
[see also: three]
The player has to decide which of the two strategies is better for him and act accordingly. [Not: "the 2 strategies"; the numbers 1 to 12, when used for counting objects (without units of measurement), should be written in words.]
The first two are simpler than the third. [Or: the third one; not: "The first two ones"]
One of these lies in the union of the other two.
the last two rows
the following two maps
a two-variable characterization
fwe2-CC-MAIN-2013-20-29646000
Thursday, March 31, 2011
1) Babies' brains are great statisticians. Between the ages of 6 and 9 months, they listen to everything they hear around them, and are able to calculate which sounds are important to pay attention to in the language they are hearing. For example, is the difference between "r" and "l" important? Yes in English, no in Japanese. 2) Babies who grow up in language-rich environments enter Kindergarten with four times the vocabulary of babies who grow up in language-poor environments. Children who have a higher vocabulary will have an easier time learning how to read. And those children who grew up in language-poor environments *never* catch up in their reading ability. 3) Babies learn through interacting with people. Mom, dad, aunt, uncle, grandparents, and every other person a baby spends time with, is a learning experience for baby. Television does not interact with babies, and even when babies look like they are fascinated with a television, researchers have found that babies do NOT learn anything from it. 4) Babies who live in a bilingual house learn language just as fast as babies who live in a monolingual house. They measured vocabulary in kids from both situations at a certain age, and kids had the same vocabulary numbers. The key was that the young children in the bilingual homes had the same number of words in their vocabularies spread over the two languages. What can parents take away from this research? 1) Talk to your babies. Even if they can't talk back, they are still learning. YOU are your child's best toy. 2) Pay attention to your baby, and whatever they do, you add to it. If they say "ba", say "baba". If they say "baba", say "bada". If they say "truck", you say "red truck". 3) Use the television sparingly, if at all. Be aware that even if you really need to park your kids in front of the TV in that crazy moment while you finish dinner while the kids are cranky and tired, that is a moment that they're not learning. (Can you tell I did that, too?) 4) Read to your babies, even if they can't talk yet. (You're not surprised I would put in a plug like this, are you?) Read to them in whatever language you find most comfortable. Notice when they interact with the book or the story, talk to them about it, then read to them again and again. You're not only having fun with your beloved child, you're helping to build their brains!
Saturday, March 19, 2011
Your child is writing an essay for school, and finds a site on the Internet that is very interesting and useful for his or her paper. But is it true? Is the information trustworthy? There's a site about Martin Luther King Jr written by a white supremacist group. There's another site claiming that the moon landings never took place, written by a guy in his garage. Should you believe the information you read on either of those sites? Here are some tips for figuring it out: 1) The first thing to look for is who wrote the site. Everybody has a point of view, and you should know what it is. Reliable sites will have the author's or organization's name in an obvious place. You should also be able to find a link to "About us" that tells about the organization and their goals. The two sites I reference above don't have this information. 2) Look up at the address bar. Does it have a ".com" or ".org"? Anybody at all can create a website with those top-level domain names. If you have any questions, you can always search Google using whois Samplesite to find out who the site is really registered to. Does it have a ".gov" or ".edu"?
Those sites are hosted by either the government or an accredited educational institution. In fact, when you use Google, you can limit your results to only those sites by typing in site:.edu samplesearch or site:.gov samplesearch. 3) Did you find a great article on Wikipedia? Wikipedia is what is known as a “stepping stone” site. That means that your child shouldn’t use the information in the article directly for their paper, but it is a great source to find information they can use via the links to outside resources listed at the bottom. 4) Finally, Sno-Isle has a great collection of databases with good, reliable information. Go to www.sno-isle.org, hover over the “Databases and Research” in the blue bar, and choose one of the broad topics. We have databases that cover everything from country information to biographies, from science to all sides of controversial topics. Using these tools and tips, you can be confident that the information you find is much more likely to be true. And if you have further questions, about any of this, don’t hesitate to ask any of us at the Information Desk at your library!
fwe2-CC-MAIN-2013-20-29647000
This is a tricky question to answer because weather, what you experience at your house right now, is not really the same thing as climate, the patterns of global air and sea movements that bring weather. So milder winters can be a possibility in certain locations, as they will be exposed to an overall warming of the entire atmosphere. But colder winters can be experienced. Since the mid-1970s, global temperatures have been warming at around 0.2 degrees Celsius per decade. However, weather imposes its own dramatic ups and downs over the long-term trend. We expect to see record cold temperatures even during global warming. Nevertheless, over the last decade, daily record high temperatures occurred twice as often as record lows. This tendency towards hotter days is expected to increase as global warming continues into the 21st Century. Vladimir Petoukhov, a climate scientist at the Potsdam Institute for Climate Impact Research, has recently completed a study on the effect of climate change on winter. According to Petoukhov, "These anomalies could triple the probability of cold winter extremes in Europe and northern Asia. Recent severe winters like last year's or the one of 2005-06 do not conflict with the global warming picture, but rather supplement it." Weather being a local response to climatic conditions means that you have to understand what has changed in the climatic patterns in your region. What are your local weather drivers? How have they changed since the 1970s? Thus, you could end up with some areas experiencing colder winters due to greater moisture levels in the air, more precipitation of snow, greater heat loss at night due to clear skies, etc. Or you could have an area that will experience milder temperatures in winter due to warmer air currents, warmer oceans, localised heat island impacts, etc. For further information you should investigate the weather and climate agencies' publications for your area.
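The record-highs-versus-record-lows statistic above has a simple statistical explanation, which a toy simulation can illustrate; this is not a climate model, and the trend and noise values below are arbitrary choices:

```python
import random

# Toy illustration: a small warming trend on top of year-to-year noise
# makes new record highs outnumber new record lows, without eliminating
# cold records entirely.
random.seed(1)
TREND = 0.02  # assumed warming per year, arbitrary units
NOISE = 1.0   # standard deviation of year-to-year variability

record_high = record_low = None
highs = lows = 0
for year in range(100):
    temp = TREND * year + random.gauss(0, NOISE)
    if record_high is None or temp > record_high:
        record_high, highs = temp, highs + 1
    if record_low is None or temp < record_low:
        record_low, lows = temp, lows + 1

print(f"new record highs: {highs}, new record lows: {lows}")
```

With no trend, new highs and new lows occur equally often on average; even a small trend tilts the ratio toward highs while still allowing occasional record cold, which is exactly the pattern described above.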
fwe2-CC-MAIN-2013-20-29650000
When we talk about network hardware, we commonly use the term Megabit. For example, a 10/100 Fast Ethernet Network Card is 10/100 Megabit. 1 Gigabit (Gb) equals 1024 Megabits (Mb). When Microsoft Windows displays the network transfer speed, it is displayed using Megabytes (MB), not Megabits (Mb). One Megabit (Mb) equals 0.125 Megabytes (MB); there are 8 Megabits (Mb) in 1 Megabyte (MB). Theoretical Transfer Speed Since there are 8 Megabits (Mb) in 1 Megabyte (MB), we can determine the theoretical maximum network transfer speed in Megabytes per second. 1 Gigabit equals 1024 Megabits, which equals 128 Megabytes. In theory, a 1 Gigabit network should provide us with a transfer speed of 128 Megabytes per second. Average Transfer Speed On a 100 Megabit network using CAT-5/CAT-6, the average transfer rate is 8.6 to 12.5 Megabytes per second. On a Gigabit network (1024 Megabit) using CAT-5/CAT-6, the average transfer rate is 21.5 to 45 Megabytes per second. On a Gigabit network using CAT-5/CAT-6, why isn't the average transfer rate closer to the theoretical maximum of 128 Megabytes per second? The Simple Answer The simple answer is that the transfer rate is typically limited by the maximum transfer rate of the hard drive in the origin and/or destination computer. Ideally, both the origin and destination are modern computers with a RAID controller, SSD drives, an optimal motherboard bus architecture, a multi-core processor, the recommended amount of RAM, and a properly sized swap file with plenty of free space. Using this configuration, you should be able to obtain an average transfer rate of 72 Megabytes per second or greater. The Complete Answer On a computer network (WAN, LAN, VPN, etc.), the data transfer rate for Client/Server can be impacted by: - RAID Controller - Hard Drive Type - Motherboard Bus Speed - Swap File Setting - Free Disk Space - Free RAM - Flow Control - Auto Negotiation - Shared Resources - Cabling Quality and Type (Fiber, CAT-5 and CAT-6) - Cabling Length - Network Card Driver Version - Network Card Firmware - Electrical Interference - Protocol Type and Overhead - Hops (Routers, Switches, Hubs, Firewalls) - Poorly Designed or Multiple Antivirus - Open Relay - ISP Circuit Committed Information Rate - ISP Type and Bandwidth
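A minimal sketch of the conversions described above, following the article's convention of 8 bits per byte and 1024 megabits per gigabit:

```python
# Conversions used in the article: 8 bits per byte, 1024 Mb per Gb.

def megabits_to_megabytes_per_s(mbit_per_s):
    """Convert a link rate in megabits/s to its ceiling in megabytes/s."""
    return mbit_per_s / 8.0

def gigabits_to_megabytes_per_s(gbit_per_s):
    """Convert a link rate in gigabits/s to its ceiling in megabytes/s."""
    return gbit_per_s * 1024 / 8.0

print(megabits_to_megabytes_per_s(100))  # 12.5 MB/s ceiling, Fast Ethernet
print(gigabits_to_megabytes_per_s(1))    # 128.0 MB/s ceiling, Gigabit
```

These ceilings match the article's figures; everything in the "Complete Answer" list only subtracts from them.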
fwe2-CC-MAIN-2013-20-29655000
The rise of the Black Muslims looks at the roots of the organization that produced Malcolm X. WHENEVER THE Nation of Islam or its leader Louis Farrakhan is covered in the mainstream media, they are dismissed as "reverse racists." By sensationalizing the Nation's anti-white stance and highlighting examples of Farrakhan's anti-Semitic statements, the media has tried to discredit the Nation's overall argument that the U.S. is racist to the core. Socialists have criticisms of Farrakhan and the Nation, not only of its anti-Semitism, but also of the organization's program of Black economic self-sufficiency, which would only benefit a minority of African Americans. But we understand that the appeal of the Muslims' ideas of Black superiority has nothing in common with white racism. Nationalism is a defensive reaction to the blatant segregation forced upon Blacks. The left failed to grasp this when the Black Muslims, as the Nation is commonly called, first gained national attention in the late 1950s. The Nation's theory of Black superiority and its hostility towards "white devils" led many socialists to accept the media's argument that Nation members were reverse racist. Thus, the left, still reeling from the anti-Communist McCarthy witch-hunts, isolated itself from Blacks influenced by the Nation. In fact, the Black Muslims represented one of the few political alternatives for Northern Blacks, at a time when most Black political organizations concentrated on legal assaults against Jim Crow in the South. At the same time, trade union leaders effectively sided with employers in keeping Blacks in the lowest-paid and least-skilled jobs. So while the civil rights struggle against Southern segregation laws captured media attention, the Black majority in the North encountered conditions almost as brutal. By 1960, the differential between Black and white unemployment had reached two to one, where it remains to this day. Throughout the 1950s--a time of general economic expansion--less than half the Black working class held full-time jobs year round. Although formal segregation laws did not exist in the North, Black workers nevertheless lived in segregated neighborhoods in declining central cities. In such conditions, the Black Muslims flourished. What had begun in Detroit as a religious sect in the early 1930s grew, under the leadership of Elijah Muhammad and organizer Malcolm X, into a movement of an estimated 100,000 members by 1961. Muhammad's apocalyptic vision of a Black-white confrontation, articulated by Malcolm, influenced hundreds of thousands more who were not necessarily prepared to join the organization. As one youth told Black sociologist C. Eric Lincoln in 1962: Man, I don't care what those [Nation] cats say out loud--that's just a hype they're putting down for the man (i.e. whites). Let me tell you--they've got some stuff for the man even the Mau Mau [the anti-colonial Kenyan rebels] didn't have! If he tries to crowd them like he's been used to doing to the rest of us all the time, they're going to lay it on him from here to Little Rock [Arkansas, the scene of racist violence against school desegregation]. - - - - - - - - - - - - - - - - UNABLE TO refute Malcolm X's searing criticism of racism in America, politicians and the media tried to dismiss the Black Muslims as cranks, focusing on Elijah Muhammad's claim to be the messenger of Allah, and the strict, almost militaristic discipline that came with membership in the organization.
But as the Nation grew, the government's attitude hardened, and the media's charges of "Black racism" grew more shrill. Even though the organization abstained from most civil rights struggles and did not confront the authorities, it was seen as a serious threat. The authorities' fears were justified. Even though the Nation declined after Malcolm X was forced from the organization and assassinated in the mid-1960s, its ideas of Black self-defense and separatism were adopted by millions in what became the Black Power movement. In fact, Farrakhan's program in later years was mild compared to the demands of the radical Black nationalists in the late 1960s. Unfortunately, many on the left of the 1960s repeated the mistake they had made earlier. Rather than starting from the position that Black and white unity must be built on Black workers' terms, some radical organizations blamed Black nationalists for splitting the movement--a charge no different than the complaint of liberals about the "racism" of Black nationalism. Radical nationalist groups like the Black Panthers were thus more isolated when they were hit by government repression. Whatever criticisms socialists have of Black nationalists, the first priority is to defend them from racist attack, even in the case of aggressively anti-white organizations such as the Black Muslims. Accepting the idea of "Black racism" plays into the hands of the real racists--a ruling class that benefits from the exploitation of workers and the oppression of Blacks. A version of this article first appeared in the March 1987 issue of Socialist Worker.
fwe2-CC-MAIN-2013-20-29659000
#Sandy: Climate Disasters in the Age of Social Media Hurricane Sandy was the largest Atlantic hurricane on record, measured by diameter. At its peak it was over 1100 miles wide with winds up to 110 mph. It hit nine countries and half of the states in the US, with a domestic cost greater than $65 billion. At least 250 people died, and millions were left without power, running water, and other basic services. During the days immediately leading up to and following the storm, 20 million tweets about Sandy were sent out, and more than 800,000 photos were tagged with Sandy on Instagram. News outlets everywhere were using social media to turn the millions of storm survivors into instant citizen journalists. And they weren’t the only ones: Con Edison, The City of New York, and other governmental and public utility organizations were using Twitter to reach out to their constituents and keep them informed of evacuation plans and storm updates. With Hurricane Sandy, social media further cemented itself as an indispensable source during times of mass crisis. After the storm passed, questions about its cause and even how future, worse storms might be avoided came up. Climate change, an issue that had been politically undesirable to address, suddenly got thrown back into sharp relief once the immediate impact could be felt so severely. And just as social media helped share knowledge about Sandy, it also helped disseminate and foster the discussion on climate change. Many questions remain though: How can social media be leveraged to increase its usefulness in times of crises? What impact is social media having on how we learn and engage with these natural disasters? How can social media affect the public policy discussion around climate change? Please join us on Monday, 12/10 at 1:00 PM EST/10:00 AM PST for our free webinar: "#Sandy: Climate Disasters in the Age of Social Media." We'll address these questions and more in a panel with Scott Dodd, Journalist and Editor of OnEarth.org, and Michael Leuthner, Digital & Social Media Director, The Climate Reality Project, moderated by Marc Gunther, journalist and consultant in business and sustainability and contributing editor at FORTUNE magazine. Social Media Today
fwe2-CC-MAIN-2013-20-29660000
Cassini Prepares to Swoop by Saturn's Geyser-Spewing Moon 7 Aug 2008 (Source: Jet Propulsion Laboratory) Fractures, or "tiger stripes," where icy jets erupt on Saturn's moon Enceladus, will be the target of a close flyby by the Cassini spacecraft on Monday, Aug. 11. Cassini will zoom past the tiny moon a mere 50 kilometers (30 miles) from the surface. Just after closest approach, all of the spacecraft's cameras -- covering infrared wavelengths, where temperatures are mapped, as well as visible light and ultraviolet -- will focus on the fissures running along the moon's south pole. That is where the jets of icy water vapor emanate and erupt hundreds of miles into space. Those jets have fascinated scientists since their discovery in 2005. "Our main goal is to get the most detailed images and remote sensing data ever of the geologically active features on Enceladus," said Paul Helfenstein, a Cassini imaging team associate at Cornell University in Ithaca, NY. "From this data we may learn more about how eruptions, tectonics, and seismic activity alter the moon's surface. We will get an unprecedented high-resolution view of the active area immediately following the closest approach." Seeing inside one of the fissures in high resolution may provide more information on the terrain and depth of the fissures, as well as the size and composition of the ice grains inside. Refined temperature data could help scientists determine if water, in vapor or liquid form, lies close to the surface and better refine their theories on what powers the jets. Imaging sequences will capture stereo views of the north polar terrain, and high-resolution images of the south polar region will begin shortly after closest approach to Enceladus. The image resolution will be as fine as 7 meters per pixel (23 feet) and will cover known active spots on three of the prominent "tiger stripe" fractures. In addition to mapping the moon's surface in visible light as well as infrared and ultraviolet light, Cassini will help determine the size of the ice grains and distinguish other elements mixed in with the ice, such as oxygen, hydrogen, or organics. "Knowing the sizes of the particles, their rates and what else is mixed in these jets can tell us a lot about what's happening inside the little moon," said Amanda Hendrix, Cassini ultraviolet imaging spectrograph team member at NASA's Jet Propulsion Laboratory, Pasadena, Calif. Other instruments will measure the temperatures along the fractures, which happen to be some of the hottest spots on the moon's surface. "We'd like to refine our numbers and see which fracture or stripe is hotter than the rest because these results can offer evidence, one way or the other, for the existence of liquid water as the engine that powers the plumes," said Bonnie Buratti of JPL, team member on Cassini's visual and infrared mapping spectrometer. Cassini discovered evidence for the geyser-like jets on Enceladus in 2005, finding that the continuous eruptions of ice water create a gigantic halo of ice and gas around Enceladus, which helps supply material to Saturn's E-ring. This marks Cassini's second flyby of Enceladus this year. During Cassini's last flyby of Enceladus in March, the spacecraft snatched up precious samples and tasted comet-like organics inside the little moon. Two more Enceladus flybys are coming up in October, and they may bring the spacecraft even closer to the moon. The Oct. 9 encounter is complementary to the March one, which was optimized for sampling the plume. The Oct.
31 flyby is similar to this August one, and is again optimized for the optical remote sensing instruments. For images, videos and a mission blog on the flyby, visit http://www.nasa.gov/cassini . More information on the Cassini mission is also available at http://saturn.jpl.nasa.gov . Editors: A pre-flyby videofile with animation, images and interview is available on NASA TV. The videofile airs at 12 p.m. Eastern on the Media Channel with replays at 4 p.m., 8 p.m. and 10 p.m. Eastern. In the continental United States, NASA Television's Public, Education and Media channels are carried by MPEG-2 digital C-band signal on AMC-6, at 72 degrees west longitude, Transponder 17C, 4040 MHz, vertical polarization. They're available in Alaska and Hawaii on an MPEG-2 digital C-band signal accessed via satellite AMC-7, transponder 18C, 137 degrees west longitude, 4060 MHz, vertical polarization. A Digital Video Broadcast (DVB)-compliant Integrated Receiver Decoder (IRD) with modulation of QPSK/DVB-S, data rate of 36.86 and FEC is needed for reception. Media contact: Carolina Martinez 818-354-9382 Jet Propulsion Laboratory, Pasadena, Calif.
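As a rough cross-check on the imaging figures quoted above, ground resolution scales linearly with range for a fixed camera. Here is a minimal sketch, assuming an instantaneous field of view of roughly 6 microradians per pixel for Cassini's narrow-angle camera — an approximate published value that does not come from this release:

```python
# Sketch: pixel scale vs. slant range for a fixed-IFOV camera.
# IFOV_RAD is an assumed approximate value for Cassini's narrow-angle
# camera; it is not stated in this press release.
IFOV_RAD = 6.0e-6  # instantaneous field of view, radians per pixel

def pixel_scale_m(range_km: float) -> float:
    """Approximate ground sampling distance (meters/pixel) at a given range."""
    return range_km * 1000.0 * IFOV_RAD

for r_km in (50, 200, 1200):
    print(f"{r_km:>5} km range -> {pixel_scale_m(r_km):.1f} m/pixel")
# Under these assumptions, the quoted 7 m/pixel corresponds to imaging
# from a range on the order of 1,200 km, shortly after the 50-km pass.
```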
fwe2-CC-MAIN-2013-20-29665000
Winner: Quantum Leap
Quantum-dot lasers from Japan's QD Laser will make high-speed "fiber to the home" networks simpler, cheaper, and more power-efficient.
[Image: QD Laser — Collecting the Dots: One of QD Laser’s achievements, as shown in these atomic-force microscope images, was to double the dot density in its quantum-dot lasers, from 30 billion (left) to 60 billion (right) dots per square centimeter.]
This is part of IEEE Spectrum's SPECIAL REPORT: WINNERS & LOSERS 2009, The Year's Best and Worst of Technology.
[Image: Japanese start-up QD Laser’s Yasuhiko Arakawa (left) and Mitsuru Sugawara oversaw the 15-year effort to commercialize a temperature-stable semiconductor laser.]
Suppose you had a dog whose personality fluctuated with the weather. On cool, crisp mornings, he’s a champ, fetching, rolling over, and shaking hands at your slightest command. But as the sun climbs higher and the day warms up, he becomes less and less responsive, and you have to ply him with doggy treats to get him to obey. And during heat waves? Forget about it—he barely plays dead unless you double or triple his kibble ration. While you could excuse such behavior in Fido, something remarkably similar goes on all the time with the semiconductor lasers used in CD and DVD players and in optical communications. These tiny devices are incredibly sensitive to heat. Even a small rise in temperature causes the electrons within to move around faster and migrate out of the laser’s active layer—the thin slice of semiconducting material where the electrons recombine with positively charged holes to make light. As a result, the laser’s light output fluctuates, and it needs stronger and stronger electrical currents to keep lasing. At 85 °C, the device might need two or three times as much current to produce the same amount of light as at 25 °C. To get around that shortcoming, developers of semiconductor lasers must either cool them or introduce extra circuitry that maintains the device’s output even as the temperature fluctuates. But those workarounds increase both the cost of making the lasers and the power they consume. Ever since this problem came to light, researchers have been hunting for a semiconductor laser that is inherently stable. One promising technology, first proposed 27 years ago, is the quantum-dot laser. Such a device tightly confines the electrons and holes within many nanoscale blobs, or dots, of semiconducting material. With enough dots—millions or billions, that is—lasing will occur and steady output maintained, regardless of external temperature. While researchers can now grow these devices using standard molecular-beam epitaxy equipment, mass-producing them has been very tricky. The Japanese start-up QD Laser, of Tokyo, a joint venture of Fujitsu and Mitsui Venture Capital Corp., has finally succeeded. Its quantum-dot lasers use inexpensive substrates made from gallium arsenide (GaAs) and boast an industry-leading density of 60 billion dots per square centimeter [see images, “Collecting the Dots”]. Compared with the conventional indium-phosphide lasers now used in optical networks, QD Laser’s devices will consume just half the power while transmitting up to 10 gigabits of data per second at a wavelength of 1.3 micrometers. Best of all, they will generate the same output at any temperature from –40 to 100 °C.
To mass-produce the GaAs laser chips, QD Laser has partnered with one of Japan’s leading consumer-electronics firms, which will use the same production lines on which it currently cranks out conventional red lasers for DVD and CD players, video-game consoles, and other products. (QD Laser says it will reveal the name of its partner later this year.) The initial shipments of laser chips are destined for an unnamed optical equipment vendor, which sometime this spring will begin offering the world’s first optical transceivers incorporating a quantum-dot laser. Fujitsu will almost certainly buy the transceivers for use in optical LANs and fiber-to-the-home networks. The quantum-dot laser has long been envisioned as a successor to the quantum-well laser, itself an improvement on earlier laser designs because it confined the injected electrons to an extremely thin layer—no more than tens of nanometers thick—of active material. That way, it required less current to induce lasing. But like the “bulk” semiconductor lasers it superseded, the quantum-well laser is sensitive to temperature. In the active layer of a bulk semiconductor laser, which you can picture as a fat, rectangular slab, the electrons and holes move in three dimensions, and that makes their interactions hard to control. In a quantum-well laser, they can move in only two dimensions, but electrostatic fields tend to build up, pulling the electrons away from the holes. In both cases, an increase in temperature makes the electrons more unruly. Researchers began looking at ways to confine the electrons even further. In 1980, Yasuhiko Arakawa, a 28-year-old associate professor at the University of Tokyo, had an epiphany. “I thought, if we fix the position of each electron by confining it in a small box, the energy distribution will not be affected by temperature,” Arakawa recalled in a recent interview at his office at the University of Tokyo. Each “box” would be a semiconducting nanosize crystal into which electrons and holes would be injected. The box would effectively prevent the electrons and holes from being thermally excited to higher energy states. He presented his quantum-box laser idea at the annual meeting of the Japanese Society of Applied Physics in March 1981. Then, collaborating with another professor, Hiroyuki Sakaki, he published a paper on the topic in the 1 June 1982 issue of Applied Physics Letters. The two researchers followed up with a series of experiments in which they confined electrons using 30-tesla magnets and demonstrated that the devices worked the same over a wide temperature range. “But I thought it would be impossible to fabricate such nanostructures until the 21st century,” Arakawa says. The quantum-box laser concept didn’t exactly set the world on fire. Some people found it interesting but not particularly useful, while others concluded that the boxes would be structurally unstable. His early work “attracted almost no one to the field,” says Arakawa, now an IEEE Fellow. Today, he adds, thousands of researchers worldwide are working to advance the field. Just three years after Arakawa and Sakaki’s paper, a research group at France’s Centre National d’Etudes des Télécommunications (CNET) noticed a strange phenomenon in the “superlattices” they were trying to build out of extremely thin alternating layers of indium arsenide and gallium arsenide.
Studying their handiwork under an electron microscope, they noticed that some of the indium arsenide had formed tiny regular blobs atop the underlying layer of gallium arsenide. Each blob, it turned out, was a quantum dot. The French team didn’t actually produce lasing from their weird structure, but it was a start. In 1994, a team at the Tokyo Institute of Technology and a collaboration of the Technical University of Berlin, Russia’s Ioffe Physico-Technical Institute, and the Max Planck Institute of Microstructure Physics independently demonstrated the first quantum-dot lasers. (At that point, the quantum-dot versus quantum-box terminology was still in flux, with the German-Russian team using the former term and the Japanese using the latter. Eventually, Arakawa says, the world settled on quantum dot. “Now even I call them quantum dots,” he says.) But it’s one thing to create an experimental device in the lab and another thing to mass-produce a laser that operates reliably, can be manufactured cheaply, and performs a useful function. QD Laser’s president and CEO, Mitsuru Sugawara, and his colleagues began chipping away at the problem of commercialization in 1994. Sugawara was then a research physicist at Fujitsu, aiming to develop a temperature-stable laser that emitted at 1.3 µm, the best wavelength for optical communications. “We weren’t interested in quantum dots per se,” Sugawara recalled in an interview last fall. Like the CNET group, he and his team had been working on superlattices when they noticed quantum dots forming spontaneously, Sugawara says, “like water beading up on a waxed car.” After realizing what they’d done, they set to work on building a laser. “We knew that to produce lasing, we had to increase the density of the dots, so we started to study how to grow them intentionally,” he says. Five years later, in 1999, they demonstrated their first quantum-dot laser with a wavelength of 1.3 µm. In a perfect world, the Fujitsu group would have continued to make steady progress, and a commercial quantum-dot laser would have hit the market years ago. In the real world, the IT bubble burst, and corporate priorities shifted. “My boss told me that if we didn’t stop our research [on quantum dots], he’d be fired,” Sugawara says. Eager to keep Japanese R&D on quantum-dot lasers alive, Arakawa stepped in. By then his pioneering work on nanostructure devices had made him quite influential in Japan’s scientific circles. In 2001 he persuaded the Japanese government to include quantum-dot research in a national project on photonic networking. Fujitsu participated, along with Hitachi, Mitsubishi, NEC, and a number of other Japanese companies. The Fujitsu group resumed its efforts to increase the dot density, mainly by stacking the quantum-dot layers. In 2004, they built a stack of 10 layers containing 30 billion dots per layer and capable of transmitting data at 10 Gb/s. “At that point, we could think about starting up a venture company,” Sugawara says. Though it had nurtured the early stages of research, Fujitsu wasn’t the best place to commercialize the results, he says. The company’s main business is building high-end servers and optical networking systems for government and business customers. It has no expertise in the commodity chip-making methods that Sugawara envisioned using for the quantum-dot lasers. In April 2006, Fujitsu and Mitsui Venture Capital formed QD Laser, providing the start-up with an initial US $2 million.
Fujitsu agreed to let QD Laser use its 40 or so patents on quantum-dot technology; Arakawa signed on as the company’s technical advisor. Although QD Laser’s official headquarters are in a central Tokyo high-rise, most of the company’s staff, including Sugawara, are based at Fujitsu’s facility in Atsugi, about 45 kilometers southwest of Tokyo, and research goes on there and at Arakawa’s labs at the University of Tokyo. There are currently 30 scientists and engineers involved, including five at the University of Tokyo. After its founding, the start-up continued to work on boosting the lasers’ dot density. “We thought we could keep adding more layers, but we realized that wasn’t enough,” Sugawara says. Using proprietary techniques, researchers at QD Laser and Tokyo University eventually succeeded in doubling the dot density, from 30 billion dots per square centimeter to 60 billion. Sugawara brings out two atomic-force microscope images of quantum dots. The first shows a sparsely dotted surface. “Everyone can make this density,” he says. Then, pointing to the second image, which is crowded with dots, he says, “but only we can make this.” QD Laser isn’t the first company to bring a quantum-dot laser to market. That distinction belongs to Innolume, a start-up based in Dortmund, Germany, and Santa Clara, Calif. Since 2007 it has sold quantum-dot “comb” lasers, which can emit tens to hundreds of colors over a range of wavelengths. The devices are potentially suitable for optical computing, laser television, and biomedical applications. But Innolume has yet to find a wide market for its products. QD Laser will do better because its corporate backers have the muscle to see that it does. Fujitsu has already agreed to replace the standard indium-phosphide lasers in its optical networking systems with QD Laser’s gallium-arsenide lasers. But even Fujitsu had to be convinced that the new devices would be as reliable as existing lasers. “The communications market is very conservative,” Sugawara notes. To make its products more palatable to optical equipment makers like Fujitsu, his company spent months tailoring the quantum-dot laser’s output power and performance so that they matched those of a conventional laser. The resulting laser can seamlessly replace an indium-phosphide laser in an optical transceiver, with no significant redesign required. With telecom giant Nippon Telegraph and Telephone Corp. adding 3 million fiber-to-the-home connections each year, Sugawara thinks his company could claim 5 to 10 percent of the Japanese market by 2011. QD Laser is also working on lasers for long-distance communications of up to 20 kilometers. At press time, the company was wrapping up reliability tests and planned to begin selling in the spring. Even as it tries to line up more optical equipment customers, QD Laser wants to branch out into the consumer-electronics market, which buys 100 times as many lasers, or about 2 billion devices a year. That’s why the partnership with the Japanese consumer electronics maker holds particular promise. Back in 2006, shortly after his company was founded, Sugawara visited four of the major Japanese consumer-electronics makers to gauge their interest in quantum-dot lasers. Three said no thanks. But the fourth, Sugawara recalls, told him, “We’ve been waiting for you.” The partnership is unusual in Japan, he adds, where there’s little overlap between the optical-communications sector and the consumer-electronics makers.
“We’re one of the first companies to bridge the gap,” he says. For two years, QD Laser engineers worked closely with the consumer electronics firm to refine the fabrication process for the laser chips. QD Laser grows the 3-inch gallium-arsenide wafers in-house and then ships them to its partner, which can print about 50,000 chips on each wafer. Each 0.3-square-millimeter chip consists of a substrate of n-doped gallium arsenide, followed by a layer of n-doped aluminum gallium arsenide, the quantum-dot layer, and then layers of p-doped AlGaAs and GaAs. The company packages each chip in a can about 2 cm long. “Even though we’re a small company, we can do mass production,” Sugawara says. QD Laser’s partner would like to start incorporating quantum-dot lasers into its CD and DVD players and other products. By varying the size and concentration of the quantum dots, you can generate different wavelengths of light. To produce red light at 650 nm, for example, you could start with a 1300-nm quantum-dot laser and then pass it through a frequency doubler, which halves the wavelength. To make green light, you similarly start with a 1064-nm laser and double the frequency to get a 532-nm wavelength. Quantum-dot lasers could also be used in laser TV sets, medical devices, and tiny portable projectors that fit in your cellphone. In the next couple of decades, Arakawa says, we’ll see quantum dots showing up in quantum computers and other IT devices [for more on quantum computing, see “Dot to Dot Design,” IEEE Spectrum, September 2007]. But why stop there? Quantum-dot researchers have been looking at ways to use quantum dots in biochemical sensors, solar cells, and other technologies. It’s a future Arakawa modestly refers to as “quantum dots for everything.” For more articles, go to Winners & Losers 2009 Special Report. Snapshot: A Laser That’s Right On the Dot Goal: To commercialize a reliable and inexpensive semiconductor laser that’s also immune to temperature changes. Why it’s a winner: These high-speed, low-power, temperature-stable lasers are equally applicable to optical networking and consumer electronics. Who: QD Laser, a joint venture of Fujitsu and Mitsui Venture Capital Corp., and University of Tokyo Where: Tokyo and Atsugi, Japan Staff: 30 scientists and engineers Budget: US $14 million When: Spring 2009
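The frequency-doubling arithmetic in the paragraphs above is simple enough to verify directly: doubling the optical frequency halves the wavelength. A minimal sketch (the function name is ours, purely illustrative, not anything from QD Laser):

```python
def doubled_wavelength_nm(pump_nm: float) -> float:
    """Second-harmonic generation doubles the frequency, so the wavelength halves."""
    return pump_nm / 2.0

print(doubled_wavelength_nm(1300))  # 650.0 nm -> red, from a 1300-nm QD laser
print(doubled_wavelength_nm(1064))  # 532.0 nm -> green, from a 1064-nm laser
```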
fwe2-CC-MAIN-2013-20-29668000
Anger can often lead to problems which may include violence, bullying or even just frustration. Learn to harness your anger with these 5 basic tips: - Try to figure out why you’re angry. Did somebody say something that really ticked you off? Did someone tease you? Did someone take their anger out on you? If you can answer these questions you may realize you don’t even have a reason to be angry. - Release your anger gradually. Get in touch with your own feelings so you know how you can release your own anger. Go out for a jog. Go for a swim. Work out at the gym. Do something creative. Shoot some hoops. Play the piano. You get the idea. Do something that will help you unwind. - Ask for help. Talk with a friend about your frustrations until you feel better. Spend time with your counselor unloading your frustrations. Sit down, look your web cam in the lens and make a video, talking about how you feel. Watch that video and experience your mood begin to change. - Think about someone you can help. The world is full of people who need a hand. Think of someone who is having a rough time right now. Are they experiencing cyber-bullying or workplace bullying? You can do something to help them. Think of what it is and get to work. This is one of the best ways to beat anger and frustration. - Get totally relaxed. Slow down your thought process and start thinking the most peaceful thoughts you can possibly imagine. Put on a relaxing CD. Close your eyes and take deep breaths. Imagine the most relaxing place you could possibly be. Maybe it’s the beach with the sound of gentle waves in the background. It could be a grassy meadow with the breeze blowing through your hair. Feel the tension leave your body. Simply allow your body to completely unwind. Bruce Langford, Bullying Prevention Advocate www.standupnow.ca
fwe2-CC-MAIN-2013-20-29672000
File organization refers to the way records are physically arranged on a storage device. Intel Fortran supports two kinds of file organization: sequential and relative. The default file organization is always ORGANIZATION= 'SEQUENTIAL' for an OPEN statement. The organization of a file is specified by means of the ORGANIZATION specifier in the OPEN statement. You can store sequential files on magnetic tape or disk devices, and can use other peripheral devices, such as terminals, pipes, and line printers as sequential files. You must store relative files on a disk device. A sequentially organized file consists of records arranged in the sequence in which they are written to the file (the first record written is the first record in the file, the second record written is the second record in the file, and so on). As a result, records can be added only at the end of the file. Sequential files are usually read sequentially, starting with the first record in the file. Sequential files with a fixed-length record type that are stored on disk can also be accessed by relative record number (direct access). Within a relative file are numbered positions, called cells. These cells are of fixed equal length and are consecutively numbered from 1 to n, where 1 is the first cell, and n is the last available cell in the file. Each cell either contains a single record or is empty. Records in a relative file are accessed according to cell number. A cell number is a record's relative record number (its location relative to the beginning of the file). By specifying relative record numbers, you can directly retrieve, add, or delete records regardless of their locations (direct access). (Detecting deleted records is only available if you specified the -vms option when the program was compiled.) When creating a relative file, use the RECL value to determine the size of the fixed-length cells. Within the cells, you can store records of varying length, as long as their size does not exceed the cell size.
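To illustrate the idea of direct access by cell number — this is a Python sketch of the concept, not Intel Fortran syntax, and the record length and file name are hypothetical — each fixed-length cell sits at a predictable byte offset, so any record can be read or written without touching the others:

```python
RECL = 32  # hypothetical fixed cell size in bytes, analogous to the RECL specifier

def write_cell(f, cell_number: int, record: bytes) -> None:
    """Store one record in cell `cell_number` (1-based, as in Fortran)."""
    if len(record) > RECL:
        raise ValueError("record exceeds cell size")
    f.seek((cell_number - 1) * RECL)       # cells live at fixed byte offsets
    f.write(record.ljust(RECL, b"\x00"))   # pad shorter records to the cell size

def read_cell(f, cell_number: int) -> bytes:
    """Retrieve the record stored in cell `cell_number`."""
    f.seek((cell_number - 1) * RECL)
    return f.read(RECL).rstrip(b"\x00")

with open("relative.dat", "w+b") as f:
    write_cell(f, 3, b"third record")  # cells 1 and 2 remain empty
    print(read_cell(f, 3))             # b'third record'
```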
fwe2-CC-MAIN-2013-20-29677000
Forest Service rangers will try to use fire to regenerate aspen stands affected by sudden aspen decline with a controlled burn in the Battlements Roadless Area in western Colorado. PHOTO BY BOB BERWYN 500-acre prescribed fire under way near Collbran By Bob Berwyn SUMMIT COUNTY — As a Colorado landmark tree, the aspen gets a lot of attention, especially this time of year, when the stands are vibrant with fiery fall color. This week, a different sort of blaze will roar through a 500-acre stand of aspens in the Battlements Roadless area north of Collbran, where land managers are using a prescribed burn to treat an area affected by sudden aspen decline, the term for a sudden die-back of the trees linked to stress from the 2002 drought. The pace of the die-back has slowed considerably in the past couple of years, but Forest Service researchers are still trying to figure out how they might be able to revitalize some of the areas that were hit. And even though sudden aspen decline has slowed in southwest Colorado, there is still a slower trend of aspen decline across the state, attributed in part to fire suppression, as well as to over-grazing of young stands by elk. Aspens are an important part of Colorado’s forest ecosystems. The groves provide good habitat for cavity-nesting birds, and the understory is much more diverse than in many evergreen forests, with shrubs and berries that provide an important food source for many animals.
fwe2-CC-MAIN-2013-20-29680000
KENTUCKY (3/22/13) – This is your daily update of what has happened on this day in history. Joseph Priestley invented carbonated water 280 years ago today. Thomas Jefferson became the first United States Secretary of State 223 years ago today. Congress banned US vessels from supplying slaves to other countries 219 years ago today. The first US Nursing School was chartered 152 years ago today. Illinois became the first state to require sexual equality in employment 141 years ago today. Niagara Falls ran out of water due to a drought 110 years ago today. Information provided by http://www.historyorb.com
fwe2-CC-MAIN-2013-20-29685000
Class blogs are an excellent starting point. But the most incredible outcomes are observed when students progress onto their own individual blogs. Why? Human nature! As individuals we’re all driven by personal ownership; class blogs have less sense of ownership than an individual blog. In this seventh activity you will: - Learn about the recommended approach to setting up individual Student Blogs - Gain tips for creating student blogs - Learn how to create student blogs using the Blog & User Creator – Edublogs Pro/Campus users only - Learn how to create student blogs using the Edublogs Signup page – free Edublogs users only - Complete the extension activity (if you have time). Step 1: Recommended Approach to Setting up Student Blogs As highlighted in Student Blogging Activity 5 (Beginner): Add Students To Your Class Blog So They Can Write Posts, the best approach to student blogging is to take it slowly. Benefits of this approach include: - Gives you time to increase your own skills while educating your students on appropriate online behaviour. - You’re less likely to have problems if you take this approach. If you decide to increase your students’ blogging roles it’s a good idea to introduce it slowly, in stages: the idea is that as they show increased responsibility you move them onto the next stage of blogging. And remember, you can stage both granting students rights to post on the class blog and giving them their own student blogs. For example, you might gradually allow three students at a time rights to post on the class blog. Then use these students to teach the next group of three students how to post on the class blog and so on. Once they’re working well on the class blog then you start creating and assigning them their own individual student blogs. Step 2: Tips for creating student blogs #1 Choosing Usernames and Blog URLs Educators normally use the same name for both the student’s username and blog URL. Keep them simple and easy for the student to remember. Most use a combination of their student’s first name followed by numbers that might represent the year, class number and/or school initials. They do this to protect the identity of the student by not including their last name and to ensure their username is unique (as Edublogs has close to 1,000,000 users). For example, a student might use the username misty16 or mistybp16, with the matching blog URL mistybp16.edublogs.org. If you want the students to use the blog for their entire school life then use a combination of letters combined with a number that represents the year they started school or are finishing school. #2 Adding yourself to your student blogs Always add yourself as an administrator to your student blog. This means if you need to edit/delete a post, page or comment you can quickly access their blog from your blog dashboard. The easiest way to do this is to set up your student blogs using the Blog & User Creator inside an Edublogs Pro blog — making sure you select add as Admin. Accessing a student blog’s dashboard is as simple as: - Click on Dashboard > My Blogs - Click on the Dashboard link under the Blog Title you want to access and this will take you to the dashboard of that student blog #3 Moderating Comments Educators either prefer to let their students moderate their own comments or they moderate all the comments for their students. There are pros and cons to each approach.
For those comfortable with students moderating comments we recommend you subscribe to the comment feeds from your student blogs — here is how to subscribe to their comments using Google Reader. If you want to moderate all comments, so comments are only posted once you have approved them, you need to create the blogs using the gmail+ method. How it works is you set up one Gmail account for your class and then add a + sign and a different number and/or letter(s) to the end of your email name for each student. Gmail ignores anything in the first half of an email address after a plus sign. So if you create each email with the format yourclass+student1@gmail.com, all emails will be sent to the inbox of yourclass@gmail.com. - You must use a real gmail account — educators either use their own gmail account or set up a gmail account for their class, e.g. yourclass@gmail.com #4 Assigning Student Role You need to think about how much responsibility your students are given. Do you want them to be able to write their own posts/pages, change themes, add widgets and approve comments or do you want (or need) to limit their level of responsibility? The five roles for users you can give students on their student blogs are: Administrator; Editor; Author; Contributor; and Subscriber. Deciding which role to assign them is a balance between: - How much responsibility you’re comfortable with assigning your students - School and District guidelines - Providing them with an environment that’s motivating If you want to approve all posts before they can be published then assign them the role of contributors. If you do assign them the role of contributor it means their posts will be submitted as pending and you’ll need to visit their blog dashboard to approve their posts. If you’ve added yourself as an admin user you can see all pending posts and comments on your student blogs by going to Dashboard > My Blogs. For more info refer to Managing Students on Blogs…What Role Do You Assign Students? For those comfortable with students having a higher level of responsibility I recommend you subscribe to the post feeds from your student blogs — here is how to subscribe to their posts using Google Reader. Here is a summary of their differences based on User Capability: Here is a summary of their differences based on access to features in the dashboard: Step 3: Create the student blogs How you create the blogs depends on the type of Edublogs blog you have: - If you are using an Edublogs Pro/Campus blog – you create the student blogs using the Blog & User Creator inside your dashboard. - If you are using a free Edublogs blog — you’ll need to create the student blogs using the Edublogs sign up page. You’ll need to add yourself as an admin user once the blogs are created. Remember spam filters, especially strict ones for institutional email addresses, often block activation and password reset emails from Edublogs.org. If unsure use free webmail accounts such as gmail, hotmail that don’t block these invitation emails. There are no limitations on the number of student blogs you can create! #1 Creating Student blogs using the Blog & User Creator The Blog & User Creator is designed specifically to save time and make it easy for educators to mass create student blogs. Creating the blogs is as simple as: 1. Go to Users > Blog & User Creator in your Dashboard. 2. Click on the Create Blogs tab. 3.
Select their role on their new blog, their role on your blog, your role on their blog and select ‘Upgrade to give access to new premium features and other features’. - We recommend the use of pre-set passwords as it means students will be able to log in if you got the email address wrong or their login email is blocked by filters on their email account. 4. Add the usernames - Use only lowercase letters and numbers, with no spaces, in the username - The username is what they use to sign into the blog dashboard and is displayed on posts and comments they write. You can’t change a username, however you can change what name is displayed. - If you are creating a new username and see ‘Sorry, that username already exists!’ it means you need to use a more unique username. Remember there are over 1,000,000 users on Edublogs.org. A simple solution for students is to use a combination of their first name, school initials and their room or year. 5. Add their email address - You can’t create several usernames with the same email address because the system resets passwords based on email address. But you can trick it using the gmail+ method. - Spam filters, especially strict ones for institutional email addresses, often block these activation emails. If unsure use free webmail accounts such as gmail, hotmail that don’t block these invitation emails. 6. Add their password - Leave this blank if you want to let the system automatically create the password 7. Add their blog URLs - You can’t change a blog URL once a blog is created so choose carefully 8. Add the blog title - This can be changed later in Settings > General 9. Click Submit at the bottom of the page #2 Creating Student blogs using the Edublogs Signup page If you are using a free Edublogs blog you’ll need to create the student blogs using the Edublogs sign up page. You’ll need to add yourself as an admin user once the blogs are created. Here is how you do it: 1. Go to Edublogs.org 2. Click on the ‘Free’ image 3. This takes you to the Edublogs sign up page where you need to enter your desired username, email address, tick that you agree to the TOS (Terms of Service) and then click Next. - You will be sent an activation email once your account is created. This email normally arrives within 30 minutes. - You have 48 hours to click on the link in the email to activate your blog otherwise you will need to set up your account again. - Spam filters, especially strict ones for institutional email addresses, often block these activation emails. If unsure use free webmail accounts such as gmail, hotmail that don’t block these activation emails. - Use only lowercase letters and numbers, with no spaces, in your username - Your username is what you use to sign into your blog dashboard and is displayed on posts and comments you write. You can’t change your username; you can change what name is displayed. 4. On the next page enter the blog domain (i.e. blog URL), blog title, select your preferred privacy and language, enter the Captcha word and click Signup. - Use only lowercase letters and numbers, with no spaces, in your blog URL - Blog URLs can’t be changed once created - Use a blog URL that reflects what your blog is about and is unique - Keep in mind people need to be able to remember and easily type your blog URL into their browser – where possible try to keep your blog URL short but meaningful - Don’t stress too much about your blog title as this can be changed any time. 5. Next you should see a page with the blog title and instructions to check your email inbox.
This email should arrive within 30 minutes. 6. Click on the link in the email to activate the blog account. 7. This activates your account and takes you to the activation page on Edublogs. 8. You should also receive another email with the username, password and login details which the students use to log into their blog dashboard. 9. Once the blog is created you’ll need to add yourself as an admin user to each student blog by going to Users > Add New in each student blog dashboard and following these instructions. Step 4: Complete the extension activity (if you have time) Write a comment on this post or your own post to share your tips for creating student blogs such as: - What worked well? - What caused you problems? - What are the three most important tips you would give other educators when using individual student blogs? - What would you like explained in more detail? And remember to leave a comment with a link to your post (if you do write a post) so we can drop past to check it out! We like to include these links to your posts in our weekly reviews! Here is where you find the other activities from this series: Thanks to everyone who is participating in the 30 Days to Get Started Blogging with your students! And if you missed out, it is never too late to work through the challenges at your own pace! You can always form your own team with other educators and work together! - Student Blogging Activity 1 (Beginner): Setting Up Your Class Blog - Student Blogging Activity 2 (Beginner): Setting Up Rules & Guidelines - Student Blogging Activity 3 (Beginner) – Teaching Quality Commenting - Student Blogging Activity 4 (Beginner) – Helping Parents Connect with your Class Blog - Student Blogging Activity 5 (Beginner): Add Students To Your Class Blog So They Can Write Posts - Student Blogging Activity 6 (Beginner): Add A Visitor Tracking Widget To Your Blog Sidebar - Student Blogging Activity 7 (Beginner): Set up your student blogs - Student Blogging Activity 8 (Beginners): Add your student blogs to your blogroll - Student Blogging Activity 9 (Beginners): Add Your Student Blogs To A Folder In Google Reader
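If you have a long class roster, the username and gmail+ conventions from Steps 2 and 3 are easy to generate in bulk before pasting them into the Blog & User Creator. A minimal sketch — the class account name, student names and suffix are all hypothetical examples, not real accounts:

```python
base_account = "myclass"                # hypothetical class account: myclass@gmail.com
students = ["misty", "jordan", "alex"]  # example first names from your roster
suffix = "bp16"                         # e.g. school initials plus year

for name in students:
    username = f"{name}{suffix}"                      # e.g. mistybp16
    email = f"{base_account}+{username}@gmail.com"    # all mail lands in myclass@gmail.com
    blog_url = f"{username}.edublogs.org"
    print(f"{username}\t{email}\t{blog_url}")
```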
fwe2-CC-MAIN-2013-20-29692000
Ubon Ratchathani Province covers a total area of 15,744.85 square kilometers, with Amnat Charoen Province to the north, the Banthat Mountain Range along the border of the Kingdom of Cambodia to the south, the Mekhong River and Lao People’s Democratic Republic to the east, and Yasothon and Si Sa Ket Provinces to the west. Ubon Ratchathani is divided into 19 Amphoes and 6 King Amphoes, namely: Amphoe Muang, Amphoe Warin Chamrap, Amphoe Det Udom, Amphoe Buntharik, Amphoe Na Chaluai, Amphoe Nam Yun, Amphoe Khong Chiam, Amphoe Phibun Mangsahan, Amphoe Si Muang Mai, Amphoe Trakan Phutphon, Amphoe Khemarat, Amphoe Muang Samsip, Amphoe Khuang Nai, Amphoe Kut Khaopun, Amphoe Pho Sai, Amphoe Tan Sum, Amphoe Samrong, Amphoe Sirindhorn, King Amphoe Don Mot Daeng, King Amphoe Thung Si Udom, King Amphoe Na Yia, King Amphoe No Tan, King Amphoe Lao Sua Kok, and King Amphoe Sawang Wirawong. The Kha and the Suai, two local tribes, had moved from Si Sattanakanahut to this area before the Rattanakosin Period. During the reign of King Rama I, the King thought of locating the people scattered around because of war into one place. Therefore, any leader who could gather the greatest number of people and establish a secure community would be promoted to the rank of Chao Muang or Chief. For this reason, in 1786, Thao Kham Phong, who had led a group of his people to settle in the Huai Chaeramae area on a plain on the bank of the Mun River, was promoted to the rank of Chief. Later, when he helped the Thai troops to attack Nakhon Champasak, he was promoted to the rank of Phra Pathum Worarat Suriyawong and became Chao Muang or Governor of Ban Chaeramae, which was upgraded to the status of a province called Ubon Ratchathani. Later, the city was moved to a new site at Dong U-Phung, which is the site of the present city, along with seven other towns. During the reign of King Rama V, before the reform of the provincial administration which divided the kingdom into Monthon (circle), Changwat (province), and Amphoe (district), Ubon Ratchathani was annexed to Lao Kao town. Later in 1899, the name of the area was changed to the Northeastern Monthon with Ubon Ratchathani as its administrative center, and the name was changed again in 1900 to Monthon I-San. Because of the Depression in 1915, the status of Monthon Ubon Ratchathani was reduced to only a province in Monthon Nakhon Ratchasima. In 1933, the division of the kingdom into Monthon was abolished and the city has been known as Ubon Ratchathani from that time on. Transportation to Ubon Ratchathani is very convenient by car, train, and air. By Car: Follow Highway 1 (Phahon Yothin Road) to Highway 2 (Friendship Highway). Then follow Highway 2 to Highway 24 (Chok Chai–Det Udom), turning onto this route and following it until the end. The total distance is 629 kilometres. Or take Highway 2 to Nakhon Ratchasima and turn onto Highway 226 to Buri Ram – Surin – Si Sa Ket – Ubon Ratchathani. By Bus: There are both air-conditioned and ordinary buses leaving from the Northeastern Bus Terminal (Talat Mo Chit) many times a day. For detailed information Tel. 272-5228 (Ordinary Bus) and 272-5299 Ubon Ratchathani Bus Terminal Tel. (045) Private agencies: Nakhon Chai Air Tel. 2725271 (at Ubon Ratchathani Tel. 269385-6), Mong Khon Tour Tel. 2725239 (at Ubon Ratchathani Tel. 255116), Chet Chai Tour Tel. 2725264 (at Ubon Ratchathani Tel. 254885, 255907), Sahamit Tour Tel. 2725252 (at Ubon Ratchathani Tel. 255043), Sayan Tour (Ubon Ratchathani) Tel. 254885, 242163, Siri Ratanapon (Ubon Ratchathani Tel. 245847, 441848).
By Train: There are ordinary, rapid, and express trains from Bangkok to Ubon Ratchathani every day. For more information please contact Tel. 223-7010. By Plane: Thai Airways International Ltd. has a daily flight for passengers and air parcels from Bangkok to Ubon Ratchathani. Detailed information can be requested from Thai Airways International Ltd., Lan Luang Rd., Bangkok, Tel. 280-0060, 628-2000 and the Ubon Ratchathani Office, Tel. (045). As for local transportation, there are buses running from Muang District to other districts and to other nearby provinces in the Northeast and the North, such as Chiang
fwe2-CC-MAIN-2013-20-29701000
Mention Korean food and the first thing that comes to mind is kimchi – the spicy red-tinged fermented vegetable dish. It is as distinctly Korean as the hamburger is American, and pasta Italian. But as with every other country’s cuisine, while a hint or more of spice is vital to Korean food recipes, there is so much more to Korean cooking than just kimchi. A lot of the food consumed by present-day Koreans came from the cuisine and customs of the royal court. Korean food is one of the healthiest cuisines because it is vegetable-based and meats are cooked without a lot of oil. A full meal is balanced, considering spiciness, texture, temperature, and careful presentation. A complete Korean meal is called Hanjoungshik and is composed of fish, steamed ribs, and other meat and vegetable dishes, rice, soup, and of course, kimchi. There is also twoenjang-guk, a fermented soybean paste soup with clams. There are many banchan or side dishes which are shared, and each side dish must complement the others. Usual side dishes are steamed vegetables, kimchi, beef, fish, and bean paste soup. You can see the richness and diversity of Korean recipes during gatherings and special occasions, because various dishes are cooked in many different ways. Some are stewed, pan-fried, fermented, simmered, steamed, or eaten raw. But the one dish that must always be present is kimchi. Kimchi is known for its spicy tangy crunch, and is considered a digestive aid and appetite stimulant. It can be seasoned with pepper, garlic, radish, green onions, ginger, and other ingredients. But red chilli pepper flakes are what give it its red coloring and make it hot and spicy. Served during every Korean meal, it is said to be a good source of Vitamin C and fiber. There are more than 100 varieties of kimchi, the most popular of which is the baechu or napa cabbage variety. If you want to try Korean dishes but are not into spicy food, don’t worry. Korean food recipes also include meat dishes and these are usually eaten with vegetables or beans. The most popular of these is the samgyeopsal – thinly-sliced pork belly meat that resembles bacon. You can also try their bulgogi or marinated barbecued beef, and the galbi – grilled beef ribs wrapped in lettuce and doused with chilli, soybean paste and garlic. Soup is made from vegetables and meat or seafood, and can bring relief and warm a person during cold weather. Maeuntang is composed of white fish, vegetables, soybean curd, and of course red pepper powder to make the soup spicy. Korean cuisine is based on practical and healthy living, but Koreans make sure their food is always still delicious. So if you are on the lookout for a healthy alternative to your usual diet, try some Korean food. You might just find yourself loving it.
fwe2-CC-MAIN-2013-20-29711000
According to Japan’s Asahi Shimbun, cleanup crews working near the ruined Fukushima Daiichi nuclear plant, “dumped soil and leaves contaminated with radioactive fallout into rivers.” CROOKED CLEANUP (1): Radioactive waste dumped into rivers during decontamination work in Fukushima http://t.co/vZ7BFMC9 A team of journalists who observed the decontamination work in the region last month added: “Water sprayed on contaminated buildings has been allowed to drain back into the environment. And supervisors have instructed workers to ignore rules on proper collection and disposal of the radioactive waste.” Workers were apparently aware that they were breaking rules, the paper reported: From Dec. 11 to 18, four Asahi reporters spent 130 hours observing work at various locations in Fukushima Prefecture. At 13 locations in Naraha, Iitate and Tamura, workers were seen simply dumping collected soil and leaves as well as water used for cleaning rather than securing them for proper disposal. Photographs were taken at 11 of those locations. The reporters also talked to about 20 workers who said they were following the instructions of employees of the contracted companies or their subcontractors in dumping the materials. A common response of the workers was that the decontamination work could never be completed if they adhered to the strict rules.
fwe2-CC-MAIN-2013-20-29714000
In an exclusive interview with ThinkProgress Green, Lubchenco, the administrator of the National Oceanic and Atmospheric Administration (NOAA), discussed the vicious circle of oil and gas greenhouse pollution melting the Arctic sea ice, making it possible for new oil and gas drilling in the region that will melt the ice even faster. Lubchenco had just appeared in a panel on threats to oceans at the Society of Environmental Journalists annual conference on Friday morning, discussing ocean acidification and the unexpectedly rapid decline of Arctic sea ice, both results of greenhouse pollution from burning fossil fuels. “Less sea ice means greater access to reserves for gas and oil that are there,” Lubchenco said in the TP Green interview, agreeing that “increased production of oil and gas means less sea ice.” When asked whether there are civilizational risks to a world without permanent Arctic sea ice, Lubchenco explained that “what happens in the Arctic doesn’t stay in the Arctic”: Well, what happens in the Arctic doesn’t stay in the Arctic. It has huge implications for the global system. And one of the reasons people are legitimately concerned about melting of sea ice are the uncertainties associated with the consequences of that for the rest of the planet. We’re entering a no-analogue world here. We’ve never experienced the kinds of changes that we’re seeing now in the Arctic and elsewhere. And we don’t fully understand what the consequences of that are going to be. The United States and other nations with access to the Arctic are taking steps to support the expansion of drilling in regions made accessible by global warming pollution. Although Norway is concerned about the costs of a Deepwater Horizon-like disaster, the government is still encouraging Arctic drilling. In August, Exxon Mobil signed a blockbuster deal with Russia’s Rosneft to explore the Russian reaches of the Arctic ocean for oil. This month, the Department of Interior announced it is moving forward with 500 oil drilling leases sold during the Bush administration for the Chukchi Sea. Last week, the Environmental Protection Agency granted Shell an air permit for exploratory drilling in the Beaufort Sea. The Arctic Ocean is estimated by the U.S. Geological Survey to have vast reserves of oil and gas. Burning of those fossil fuels would add tens of billions of tons of carbon dioxide to our already overheated atmosphere. Although NOAA is the nation’s top oceanographic agency, its scientists play only a minor, advisory role in the government’s approval of offshore drilling, which is run by the Interior Department. NOAA plays a larger role in cleaning up after oil spills.
fwe2-CC-MAIN-2013-20-29730000
NEW YORK — From the Rocky Mountains to New England, hospitals are swamped with people with flu symptoms. Some medical centers have limited visitors, and one Pennsylvania hospital set up a tent outside its ER to handle the feverish patients. Flu season in the U.S. has hit early and, in some places, hard. But whether this will be considered a bad season by the time it has run its course in the spring remains to be seen. "Those of us with gray hair have seen worse," said Dr. William Schaffner, a flu expert at Vanderbilt University in Nashville. The evidence so far is pointing to a moderate season, Schaffner and others believe. It just looks bad compared with last year — an unusually mild one. Flu usually doesn't blanket the country until late January or February, but it is already widespread in more than 40 states. Also, the main influenza virus this year tends to make people sicker. And there are other bugs out there causing flu-like illnesses. So what people are calling the flu may, in fact, be something else. "There may be more of an overlap than we normally see," said Dr. Joseph Bresee, who tracks the flu for the Centers for Disease Control and Prevention. The flu's early arrival in the U.S. coincided with spikes in a variety of other viruses, including a childhood malady that mimics flu and a new norovirus that causes what some people call stomach flu. Flu is a major contributor, though, to what's going on, experts say. The early onslaught has prompted hospitals to take steps to deal with the influx and protect other patients from getting sick, including restricting visits from children, requiring family members to wear masks, and banning anyone with flu symptoms from maternity wards. One hospital in Allentown this week set up a tent for a steady stream of patients with flu symptoms. "But so far, what we're seeing is a typical flu season," said Terry Burger, director of infection control and prevention for the hospital, Lehigh Valley Hospital-Cedar Crest. Health officials are analyzing this year's flu vaccine's effectiveness, but early indications are that about 60 percent of all vaccinated people have been protected from the flu. That's in line with how effective flu vaccines have been in other years. On average, about 24,000 Americans die each flu season, according to the CDC. Symptoms can include fever, cough, runny nose, head and body aches and fatigue. Some people also suffer vomiting and diarrhea, and some develop pneumonia or other severe complications. Most people with flu have a mild illness and can help themselves and protect others by staying home and resting. But people with severe symptoms should see a doctor.
fwe2-CC-MAIN-2013-20-29734000
Kitchen Food Safety Basics of Kitchen Food Safety: Clean, Separate, Cook, Chill Learn about the basics of food safety in the kitchen. The four simple concepts: clean, separate, cook and chill can help prevent foodborne illness. - Spring Clean Your Way into a Safer Kitchen (Partnership for Food Safety Education) - Clean, Separate, Cook and Chill (Canadian Partnership for Consumer Food Safety Education). - Five Keys to Safer Food Manual (World Health Organization PDF 3,936 KB) - Food Safety Education - the Four Messages: Clean, Separate, Cook, Chill (USDA) - Washing Food - Does it Promote Food Safety? (USDA) - Kitchen Companion - Your Safe Food Handbook (USDA) - Fight Bac! (Partnership for Food Safety Education) - UC Food Safety in the Kitchen Publications Holiday Food Safety - Holidayfoodsafety.org is dedicated to providing information, including videos, about keeping food safe during the holidays. You can find detailed information and useful tips on: - The Turkey - Cranberry Sauce Recipe - Selecting, Preparing, and Canning Fruit (National Center for Home Food Preservation) - Preparing the kitchen - Safely handling ingredients - Menu selection - FoodSafety.gov - Useful information about holiday food safety. Includes "Food Safety for Moms to Be", "Mail Order Food Safety" (USDA) and the video "Be Food Safe for Holiday Buffets" (USDA). - FDA: Holiday Food Safety - CDC: Holiday Food Safety Podcast Packing a Safe Lunch Packing a safe lunch for our little ones is just as important as packing a lunch that is nutritious. The web pages below discuss food safety tips for the lunch box. - The Lunch Box Series, C: Safe Lunches for Preschool Children (UC Pub PDF 2335k) Tips on keeping food safety in mind when you put together your preschooler's nutritious sack lunch. In English and Spanish (La Lonchera: Cómo empacar almuerzos seguros para niños preescolares PDF 1290k). - Back to School Food Safety Tips for Parents and Students (USDA) - Eating Outdoors - Handling Food Safely (FDA) - An Ounce of Prevention Keeps the Germs Away (CDC PDF 3,187 KB)
fwe2-CC-MAIN-2013-20-29749000
The Stormwater Ecological Enhancement Project (SEEP) began in 1995 as a take-home final exam for the course Ecosystems of Florida. The objective was to develop a management plan to enhance a stormwater retention basin located within the University of Florida Natural Area and Teaching Lab (NATL) for species diversity while optimizing the basin's use for research and education. Since that time, the Wetlands Club at UF has taken this project further and implemented a full-scale created wetland that achieves not only the original objectives but also improves wildlife habitat, water quality, and aesthetics. These efforts have been in close coordination with the NATL Advisory Committee. What is a Stormwater Retention Basin? Water that runs off the land during and after a rainstorm is called stormwater runoff. This runoff and any pollutants it carries flows into streams, rivers, lakes and depressions throughout the landscape. In an urbanized landscape natural physical, chemical and biological processes are disrupted and leaves, litter, animal waste, oil, greases, heavy metals, fertilizers and pesticides are transported downstream. A stormwater retention basin provides temporary storage for the runoff generated by development in the watershed, releasing it slowly and reducing the potential for flooding. The basin also provides some treatment of pollution carried by the stormwater runoff. While wetlands have historically been considered of little importance, our increasing understanding of these systems is changing this misconception. Wetlands are now recognized for providing many vital benefits. Some of these benefits include: - habitat for commercially valuable fish and shellfish, - improved water quality. Although we have lost more than 50 percent of the historic wetlands in the lower 48 states, protection of wetlands has increased considerably over the past 15 years due to recognition of these values. Wetlands and Stormwater Basins Wetlands can be found alongside rivers and lake shores, and as low areas in the landscape that often become flooded during storms. These wetlands are the natural stormwater basins of the landscape. As humans create stormwater basins to reduce the effects of development, it seems only logical to mimic these natural stormwater basins. This provides benefits beyond that of water storage as the basin becomes a multipurpose area serving our needs to reduce flooding while offsetting wetland functions that have been lost over the past 200 years. The water treatment component of the retention basin would also be substantially enhanced by the diversity of vegetation and complexity of the integrated wetland community. The integration of these "free" services provided by a natural system with the needs of our growing world has been termed Ecological Engineering. This new approach to urban and regional planning is not only a more environmentally sensitive approach, but one that uses processes that have been working naturally for millions of years. The Retention Pond at NATL The 3-acre retention pond is the low point of a 39.75 acre watershed. The majority of the basin was constructed in 1988 with additional storage created in 1990. Structures within this watershed contributing significant runoff to the basin include the Center for Performing Arts, Entomology and Nematology buildings, the Park & Ride commuter lot and roadways between and around these buildings. 
The total storage of the basin to offset the increased runoff generated by these impervious surfaces is 478,000 cubic feet. As originally designed the bottom of the basin is essentially flat, with uniform slopes on the north, south and east sides. To the west of the basin the slope is low and quickly grades into the preexisting depression of the area. Because the basin is almost uniform in elevation the established vegetation was dominated by Cattail. Ecologically Enhanced Design The primary goal of the project is to increase the diversity of flooding depths and frequency of flooding that will occur since this is the primary factor regulating species composition in a wetland. To do this two depressions, one 4 feet deep and the other 5 feet deep, were dug at the southeastern end of the pond providing a deep, open-water habitat. At the north end a low berm was constructed to temporarily impound 80% of the entering stormwater. This forebay provides the first phase of treatment and was planted with species known to take up heavy metals and remove nutrients. Water from the forebay is then slowly released, first flowing through an area planted to resemble a bottom-land hardwood swamp, then moving into a shallow freshwater marsh before entering the deep-water ponds. At the southeastern end of the pond another small berm was built to divert stormwater away from the deep-water ponds, increasing treatment time. At the end of this berm a knoll was built and planted with trees to provide nesting or roosting sites for birds. The basin was planted with species that resemble those found in wetlands of North Central Florida. A boardwalk also will be constructed. Expected SEEP Benefits The SEEP project already has provided a great learning experience for Wetlands Club members through project design and organization, regulatory agency interaction and team work. Other benefits of the project include: - Species diversity. The variety of plantings and topographic diversity on the site provides new genetic material as well as suitable establishment sites for long-term increases in vegetative species diversity within the basin. - Wildlife habitat. Vegetative diversity as well as diversity of aquatic habitat provides a multitude of new biotic niches not previously available in the basin. The value of this habitat becomes increasingly important as other areas on campus and in the Gainesville community are encroached upon. - Aesthetics. Retention basins are notoriously unattractive, often fenced in, littered with trash, and square. Although the retention basin at the NATL is pleasant compared to some, its appeal would be improved if it resembled a diverse wetland. - Water Quality. Construction of the forebay, planting of species known to have high treatment potential, and diversion of stormwater to maximize treatment all improve the water treatment potential of the basin. - Research. Since integration of wetlands and stormwater basins is still a relatively new concept, little is known about optimization and performance of these systems. Implementing SEEP provides a unique opportunity to test the principles of this integration, pushing the University of Florida to the forefront of this technology. The location of this site on campus as well as the location of the site within NATL allows for easy access and control over activities within the site. Faculty, staff and state agencies interested in this topic will be able to use this as a long-term study site. - Education.
Educational opportunities for both students and the public are enormous at this site. The University has one of only three wetland centers in the country, with some of the founding faculty in the principles of Ecological Engineering. Many courses throughout the campus use the area for various components of their curriculum. Public education opportunities abound with the construction of the new Florida Museum of Natural History within a stone's throw of the basin.
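As a rough back-of-the-envelope illustration (our own arithmetic, not a figure from the project documents; the acre-to-square-foot conversion is the standard 43,560), the quoted storage volume can be expressed as an equivalent depth of runoff captured from the watershed:

```python
# Sketch: how much runoff depth the basin's storage represents.
# Volume and watershed area are from the article above.
STORAGE_CUBIC_FT = 478_000   # total basin storage
WATERSHED_ACRES = 39.75      # contributing watershed area
SQFT_PER_ACRE = 43_560       # standard conversion

watershed_sqft = WATERSHED_ACRES * SQFT_PER_ACRE
depth_ft = STORAGE_CUBIC_FT / watershed_sqft
print(f"Equivalent runoff depth: {depth_ft:.2f} ft (~{depth_ft * 12:.1f} in)")
# -> about 0.28 ft, i.e. roughly 3.3 inches of runoff over the watershed
```

In other words, the basin can hold roughly three inches of runoff spread across its entire 39.75-acre watershed, consistent with its role as temporary flood storage.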
Currently, hundreds of satellites orbit the Earth, many with instruments that improve our lives and define our lifestyles. Satellite-based technology makes it possible to watch our favorite television shows, navigate unfamiliar roads and plan weather-dependent activities a week ahead. Satellites also collect data that aren't as easy to explain, such as the data from the NASA instrument to be launched this spring to measure sea surface salinity over the globe.

Annette deCharon's job is to explain exactly what that instrument, dubbed Aquarius, will do once it's launched in 2011 and why its mission — to collect data on salt concentrations at the ocean surface — is critical for NASA and for society as a whole. From her mission control-style desk in an office at the University of Maine Darling Marine Center in Walpole, Maine, deCharon's job as senior marine education scientist is to make ocean sciences more accessible for various audiences. Her work with NASA/Aquarius Education & Public Outreach targets the public, students from elementary school to college, and science communicators from classroom teachers to ocean researchers.

"Many people don't really interact with the ocean at all, so they don't think about how it affects them, but it's basically the key driver of climate," says deCharon, who teaches a UMaine Semester by the Sea class and directs one of the national Centers for Ocean Sciences Education Excellence (COSEE). "In all of our education and outreach programs, we really try to emphasize that point. One approach is through visualization of ocean data and concepts. This is important because the ocean is so remote that people can't readily identify with it. But if you give them visuals that help them see the big picture and how things interact, they are more likely to believe its relevance."

The educational materials deCharon and her team produce help demonstrate why monitoring sea surface salinity is key to understanding what's happening in the oceans. "Most people know the oceans are salty, but they don't know that patterns of salinity change geographically and over time," she says. "If there's a lot of rain or if ice melts in a region, the sea surface will be less salty. If you get higher evaporation, seawater will be saltier. Sea surface salinity changes can tell us how the water cycle is changing over the ocean. That's important because 86 percent of global evaporation and 74 percent of global precipitation happen over the oceans."

Over its three-year mission, Aquarius data will be used to produce monthly maps of global sea surface salinity. Within a few months, Aquarius will collect as many sea surface salinity measurements as the entire 125-year historical record from ships and buoys. The newest findings also will be used in climate prediction and El Niño forecasts.

To increase awareness and understanding of salinity, deCharon and her team have developed a website with a wealth of information, including trivia (bet you didn't know, for example, that the word "salary" may derive from the money paid to Roman soldiers to buy salt), online data tools and suggested activities for students from elementary school (a potato float helps children understand the concept of relative density) to high school (an experiment that splits saltwater into its constituent ions). The activities are aligned with National Science Education Standards and Ocean Literacy principles, and are evaluated to ensure their efficacy.

As ensconced as deCharon is now in the world of science education, she didn't start out that way.
A University of California – Davis graduate in geology, deCharon earned a master's degree in oceanography at Oregon State University. She worked at Brown University as a research assistant in the Department of Planetary Geology before being hired as a mission planner for the NASA Jet Propulsion Laboratory in Pasadena, Calif.

"I discovered that I am pretty good at taking lots of input and coming up with plans that satisfied the needs of various types of people, including NASA engineers and scientists," she says. "That's something you usually don't get during your scientific training."

After several years, deCharon went from mission planning to public outreach on NASA's TOPEX/Poseidon mission, the pioneering satellite that measured sea surface height during the 1997–98 El Niño event. DeCharon assisted in media campaigns, produced outreach materials and created some of NASA's early educational websites. It was an exciting experience merging science, technology and education, deCharon says, and a springboard for her role as director of COSEE-Ocean Systems.

The UMaine-based COSEE center brings together scientists and educators in peer-to-peer interactions, both in person and online. Educators contribute their understanding of how people learn, thus helping scientists better communicate their research. Scientists contribute their content knowledge and expertise in connecting complex ideas.

"By developing web-based tools that support scientist-educator collaborations, we are pushing the boundaries of ocean sciences education," says deCharon, who recently received another three years of COSEE funding. "We look forward to making significant impacts in Maine, New England and beyond."

[Image: Annette deCharon]
UNAIDS and UNICEF welcome news of a baby born with HIV who, now a toddler, appears "functionally cured" through treatment, and look forward to further studies to see if the findings can be replicated.

GENEVA, 4 March 2013—The Joint United Nations Programme on HIV/AIDS (UNAIDS) and UNICEF welcome a new case study, which found that a baby treated with antiretroviral drugs in the first 30 hours of life, and who continued on HIV treatment for 18 months, appears to be functionally cured. The findings were presented today at the Conference on Retroviruses and Opportunistic Infections (CROI) in Atlanta, Georgia in the United States of America.

According to researchers, the mother, who was living with HIV at the time of birth, had not received antiretroviral (ARV) medication or prenatal care. Researchers say that the child was born prematurely in July 2010 in the state of Mississippi. Due to the high risk of exposure to HIV, the researchers say the baby was started on a triple-therapy regimen of antiretroviral drugs 30 hours after birth, before proof of infection could be confirmed. The newborn's HIV-positive status was subsequently confirmed through highly sensitive polymerase chain reaction testing, which was conducted on several occasions. The case study stated that the baby was discharged from the hospital after one week and continued ARV treatment until 18 months of age, when, for reasons that are unclear, the treatment was discontinued. However, when the child was seen by medical professionals about half a year later, blood samples revealed undetectable HIV levels and no HIV-specific antibodies. If the findings are confirmed, this would be the first well-documented case of an HIV-positive child who appears to have no detectable levels of the virus despite stopping HIV treatment.

"This news gives us great hope that a cure for HIV in children is possible and could bring us one step closer to an AIDS-free generation," said UNAIDS Executive Director Michel Sidibé. "This also underscores the need for research and innovation, especially in the area of early diagnostics."

In 2011, UNAIDS and its partners launched a Global Plan for the elimination of new HIV infections among children by 2015 and keeping their mothers alive. Significant progress has been made, and continued support and research are needed.

"While we wait for these results to be confirmed with further research, it is potentially great news," said UNICEF Executive Director Anthony Lake. "This case also demonstrates what we already know—it is vital to test newborn babies at risk as soon as possible."

According to data from the World Health Organization and UNICEF, only 28% of HIV-exposed babies were tested for HIV within six weeks of birth in 2010. Obstacles to early diagnosis and treatment include the high cost of diagnostics, the difficulty of getting timely results, and limited access to services and medicines. There were 330,000 children newly infected with HIV in 2011. At the end of 2011, 28% of children under the age of 15 living with HIV were on HIV treatment, compared to 54% of eligible adults.

Now two and a half years old, the toddler continues to thrive without antiretroviral therapy and has no identifiable levels of HIV. However, UNAIDS cautions that more studies need to be conducted to understand the outcomes and whether the current findings can be replicated.
George Frideric Handel

From Uncyclopedia, the content-free encyclopedia

- "Handel" redirects here. Perhaps you were just looking for Door Handle.

George Frideric Handel (German: Georg Friedrich Händel; pronounced [ɡɔːdʒ fɹaɪdrɪʧ ˈhændəl]) (23 February 1685 – 14 April 1759) was an expert at handling things, and more specifically all things related to music. With his many operas, oratorios and concerti grossi, he revolutionized the scene of German-English Baroque. His music's influence was of such proportions that it may justly be described as being of cosmic significance. Born in Halle an der Saale, he traveled the world to spread his heavenly music. Some of his most notable works include Yes, We Can Handle It!, Walking on Water, with Handles!, and Music for the Royal Handles. Handel's music went on to inspire composers like Haydn, Mozart and Beethoven. And all this despite his obvious handicap.

Early life

Handel was born in Halle an der Saale (German for "hall on the Saale river") to Georg and Dorothea (née Taust) Händel in 1685, the same year that both Johann Sebastian Bach and Domenico Scarlatti were born. Handel displayed considerable musical talent at an early age; by the age of seven he was a skillful performer on the harpsichord and pipe organ. However, his father, a distinguished citizen of Halle and an eminent barber-surgeon, thought about handling things differently, preferring him to study law. However, when Handel managed to greatly impress Duke Johann Adolf I, the latter urged Handel's father to let Handel take musical lessons. Handel's first teacher was Friedrich Wilhelm Zachow, the organist at the local church. Handel learned about harmony and contemporary styles. He studied with Zachow from 1692 to 1703, when he moved to a largely unknown village where he would later invent the hamburger. Handel was such an excellent student that he soon surpassed his teacher's capabilities. After Zachow died, Handel became a benefactor to his widow and children in gratitude for his teacher's instruction.

From Halle to Italy

Händel was bored with the nazifuckers in Germany, so he decided to go to the country of music at the time, Italy. Italy was notorious for its fine opera, which Händel adored. Since the Brits had the true money and Händel was such a filthy money-whore, he later moved to England so he could make some money from his operas. As they said in Europe at the time: "In Italy you can make art, but in England you can make money, so go there you WHORE."

Journey to India

He totally went there.

The move to London

In 1711 Händel moved to London because the inbred Brits had no culture whatsoever. The English really wanted to listen to some opera but they were completely retarded, so they gave Händel a call and he came to England. Händel, who was only slightly less retarded than the Brits, scored a great success with his first opera Rinaldo, about a crusader without any testicles who sang like a woman. The leading role was written for the castrato Senesino. Senesino and Händel started a homosexual relationship during the performances of this great opera. Händel also slept around with several other singers, including castrati like Caffarelli, Farinelli and Tom Cruise. Händel also had sexual affairs with gorgeous sopranos and scary contraltos. After over 40 operas the retarded Brits got bored of opera, so Händel had to write new music. Händel was so stupid and had no creativity whatsoever that he just moved over from opera to oratorios.
The two art forms are basically the same, except that an oratorio isn't acted on stage but performed in a church like some long and dull mass. He had immense success with the oratorio Messiah, also known as Messiah Christ Superstar and "a long wait to the Hallelujah chorus". After a two-hour wait through small random choruses, arias and boring recitatives, the famous Hallelujah chorus finally comes. At the first performance the Queen of England came, and the King himself jizzed in his pants. They thought it had ended after that chorus, but they were wrong, so they killed themselves.

The move to America

Needing somewhere to go, Handel travelled to America. He did not like what he saw... but he dealt with the lack of original culture and composed bar songs, the most famous being "Throw the Jew Down The Well." Handel had many cannons, but there is one he was particularly proud of. Oh, but this section is actually about a place called "Cannons". Sorry!

Royal Academy of Music

Opera at Covent Garden

As mentioned earlier, Händel wrote operas, so yeah. He totally did that!

Later years

He later got blinded, but he learned to play the organ anyway and completely "wow-ed" the audience with his organ playing. He was known as the "Drunken Pig of organ playing", since he resembled one so much. He died eventually (thank God!). In a surprising segue into the mainstream music business, Handel joined up with Brackets and Hinges to form The Doors.

- Rinaldo "the castrated knight" (1711)
- Rinaldo II "the return of the knight without a ballsack" (1712)
- Giulio Cesare in Egitto "the Emperor of Rome without his testicles" (1724)
- Rodelinda "some guy crying about his lost balls: 'Dove sei amante testicles?'" (1725)
- Alcina "a bitchy cougar soprano-sorceress that has sex with a lot of castrated men" (1735)
- Serse "It begins with that aria 'Ombra mai fu'; the rest is pretty shitty" (1738)
- Esther "Like, the first oratorio in English ever! Totally, fuck Italian!" (It is funny because the overture is written in an Italian style!) (1718)
- Messiah "Oh JESUS! Everyone knows this. A long fucking wait until the Hallelujah chorus" (1741)
- Judas Maccabeus "About that Jew that killed Jesus, you know, Judas." (1746)
- Orchestral music
- Organ concertos op. 4 and 7
- Golden shower music on the Thames
- Water music in da shower
- Music for the Royal erection

Musical influences

Handel is a key figure in the world of club music and dance. He is often referred to as the "great grandmother" of dance music, dance in fact being an anagram of his name. Being born in the 1930's, Handel grew up listening to club on the popular radio stations and town criers. He was particularly fond of the instruments found in Dance music, such as the Organ and the Ferris Wheel. Legend tells that at just 40 years old, before he had even learned to read and write, he composed his first piece of Dance music, entitled The club can't even Handel me right now.

- ↑ I wonder where the guy got all his inspiration from.
- ↑ Do Germans have funny surnames or what?
- ↑ The fact this is the only occurrence of this name in Uncyclopedia shows how uncultured it really is.
- ↑ Guess where that links to. Just take a guess.
- ↑ The man Adolf Hitler was named after.
- ↑ He has a cool name. Too bad we don't have an article on him.
- ↑ And also because he was secretly in love with Zachow's widow, I bet.
Ever feel like the KKK gets a bum rap? I mean... there's two sides to every story, right? Maybe a case can be made that the original Klansmen were heroes. Freedom fighters. Manful defenders of their women. Soldiers of God.

Well, don't risk pulling a muscle on that little mind experiment, because the case was made. In the Encyclopedia Britannica, the finest compendium of general knowledge in the English language.

You know I love rummaging through old texts. You step back in time 100 years, you're bound to discover some interesting perspectives on things. Dig it: The contents of the classic 1911 edition of the Encyclopedia Britannica (now unprotected by copyright) are online and searchable at www.1911encyclopedia.org. For shits and grins, I looked up "Ku Klux Klan."

The current Encyclopedia Britannica describes the Klan as "either of two distinct secret terrorist organizations in the United States, one founded immediately after the Civil War..., the other beginning in 1915..." But the word "terrorist" wasn't used in the 1911 edition. To say the least. Here is how the 1911 entry begins:

"KU KLUX KLAN, the name of an American secret association of Southern whites united for self-protection and to oppose the Reconstruction measures of the United States Congress, 1865-1876."

Self-protection? Okaaay. Tell me more.

"The object was to protect the whites during the disorders that followed the Civil War, and to oppose the policy of the North towards the South, and the result of the whole movement was a more or less successful revolution against the Reconstruction and an overthrow of the governments based on negro suffrage."

Wow. Sounds kind of valorous when you put it like that. How did this revolutionary movement begin?

"[The Ku Klux Klan] began in 1865 in Pulaski, Tennessee, as a social club of young men. It had an absurd ritual and a strange uniform. The members accidentally discovered that the fear of it had a great influence over the lawless but superstitious blacks...

"The various causes assigned for the origin and development of this movement were: ... the corrupt and tyrannical rule of the alien [i.e., Northern whites], renegade and negro...; the disfranchisement of whites; the spread of ideas of social and political equality among the negroes; fear of negro insurrections; the arming of negro militia and the disarming of the whites; outrages upon white women by black men; ... the humiliation of Confederate soldiers after they had been paroled – in general, the insecurity felt by Southern whites during the decade after the collapse of the Confederacy."

Perfectly understandable. So what were the Klan's stated principles?

"[T]he following are characteristic: to protect and succour the weak and unfortunate, especially the widows and orphans of Confederate soldiers;" – Awww... widows and orphans. Nobody ever talks about that! – "to protect members of the white race in life, honour and property from the encroachments of the blacks; ... to defend constitutional liberty, to prevent usurpation, emancipate the whites, maintain peace and order, the laws of God, the principles of 1776" – People, let me hereby repeat: This is from the ENCYCLOPEDIA FREAKING BRITANNICA! – "and the political and social supremacy of the white race – in short, to oppose African influence in government and society, and to prevent any intermingling of the races."

I see. So how were these noble principles actualized? What were the Klan's tactics?
"To control the negro the Klan played upon his superstitious fears by having night patrols, parades and drills of silent horsemen covered with white sheets, carrying skulls with coals of fire for eyes, sacks of bones to rattle, and wearing hideous masks."

(Pictured at left is a genuine 1870 Ku Klux Klan mask from the North Carolina Museum of History.)

Shit! That sure would scare me good. Not to mention the bullwhips and guns. Oh, right... you didn't mention the bullwhips and guns. Anyhoo, please continue, Encyclopedia Britannica...

"In calling upon dangerous blacks at night they pretended to be the spirits of dead Confederates, 'just from Hell'.... Mysterious signs and warnings were sent to disorderly negro politicians. The whites who were responsible for the conduct of the blacks were warned or driven away by social and business ostracism or by violence. Nearly all southern whites (except 'scalawags'), whether members of the secret societies or not, in some way took part in the Ku Klux movement."

All right now, reality-check time. Was there anything negative about the Ku Kluxers? What about that violence you alluded to?

"In some communities they fell into the control of violent men and became simply bands of outlaws, dangerous even to the former members; and the anarchical aspects of the movement excited the North to vigorous condemnation."

So give me the bottom line, 1911 Encyclopedia Britannica. What did the original KKK accomplish?

"[T]he Ku Klux movement went on until it accomplished its object by giving protection to the whites, reducing the blacks to order, ... expelling the worst of the carpet-baggers and scalawags, and nullifying those laws of Congress which had resulted in placing the Southern whites under the control of a party composed principally of ex-slaves."

Dang. With such a romantic view of the KKK inscribed even in the Encyclopedia Britannica, is it any wonder that a new Klan arose in 1915 and lives on to today?

But let's end on an up note. As much as our modern imagination pictures Negroes quaking in terror from the night-riders... as much as that old encyclopedia speaks of fearful and superstitious blacks... there is this New York Times item, published March 19, 1868:

[Image: New York Times clipping, March 19, 1868]
October is the month of raising awareness for breast cancer around the world. Originating in the US, the National Breast Cancer Awareness Month (NBCAM) is a collaboration of national public service organizations, professional medical associations, and government agencies working together to promote breast cancer awareness, share information on the disease, and provide greater access to services. Since its inception, the campaign's recognition has increased dramatically. Every year more cities and municipalities join the awareness campaign around this devastating disease by lighting up monuments and landmark buildings in pink, gushing pink water from fountains, painting sidewalks and vehicles pink, and more – all as symbols of the cause.

The landmark illumination initiative has illuminated 534 unique landmarks; in 2009 alone, more than 200 global landmarks were illuminated. These monuments have become international symbols of hope, with the goal that when a woman sees a pink landmark, she will be inspired to get a mammogram or seek information about breast health.
By Steve Sternberg, USA TODAY

AIDS virus testing should be offered regularly to everyone ages 13 to 64 in every hospital, doctor's office and clinic to speed diagnosis and help curb the epidemic, federal health officials recommended Thursday.

The Centers for Disease Control and Prevention's recommendations are not legally binding, but they are designed to make HIV testing as routine as tests for high blood pressure, cholesterol and diabetes. About 1 million people in the USA are HIV-positive, but 250,000 of them have not been diagnosed, according to the CDC. "It will allow us to identify a lot of people who have HIV and don't know it," the CDC's Timothy Mastro says.

The guidelines no longer require health workers to provide special counseling before and after the test, and they lift the requirement that patients supply specific written consent, though patients must be given the opportunity to refuse testing.

Daniel Kuritzkes of the University of Colorado, chair of the HIV Medicine Association, says, "I think the guidelines will help destigmatize HIV testing by making it part of routine medical care and not a test with some special mystique about it."

More than a dozen AIDS advocacy groups released a statement objecting to the decision to drop counseling. "We fear that some health care settings will interpret today's announcement as a call for universal screening and test patients without informing them or arming them with the information they need to avoid putting others at risk," says David Munar of the National Association of People with AIDS.

Peter Staley, a founder of the protest group ACT UP, disagrees with his peers: "The bottom line is that we're really losing the fight here. We're losing lives. I'm an ACT UP grad, and our motto is 'by any means necessary.'

"I realize that abandoning written informed consent raises issues. People are worried about privacy and stigma. But the bottom line is that this would probably save lives, and that's why I'm very much in favor of it."

Even patients diagnosed late in the course of the disease can extend their life expectancy by 14 years with standard treatment, according to a recent study led by Rochelle Walensky of Harvard Medical School. Patients diagnosed soon after infection can extend their lives by as much as 25 years, she says.

Mastro says diagnosis is a powerful tool for prevention. "We think that the quarter of a million people who don't know their infection status account for 70% of sexually transmitted infections," he says. "We have very strong data showing that when patients know they're infected, they take strong measures to avoid infecting others."

Studies have shown that AIDS testing is as cost-effective as tests for high blood pressure and colon cancer. The new guidelines leave open two key concerns: who will pay for the tests and the cost of treating 250,000 new HIV patients. "The strain that this is going to place on Medicaid, the Ryan White Care Act and the state AIDS drug assistance programs is going to be enormous," says A. David Paltiel of Yale University, who has studied the test's cost-effectiveness.
Learning Technology - General description of immediate needs (1-6 months)

I. Classroom presentation equipment

Projection device
Usage: For multimedia presentations in class lectures. There are 4 categories of projection devices that can be used, depending on the size of the audience.

1. LCD panel ($3,000 - $7,000)
For classes larger than 20 students, this is the best solution for displaying images. For multimedia presentation, LCD panels with thousands-of-colors display (16-bit or more) and a 10" active matrix are highly recommended over the 8.4" passive-matrix panels with only 256-color (8-bit) display. Most commercial class materials already use (or are going to use) 16-bit or better color. If 16-bit graphics are displayed in 256 colors, undesired and unexpected color alteration will occur. This problem is particularly serious for presenting video; a 256-color passive-matrix LCD panel is not adequate for this kind of task, because it would become a concern and an inconvenience for professors using color graphics in their multimedia class lectures.
- suggested 10.4" system for larger classrooms; 8.4" may be sufficient for a small classroom
- if possible, supports 640x480+ resolution (800x600, 1024x768)
- 16+ bit color
- built-in audio - no external speakers are needed for audio
- accepts video - it can display video played directly from a video source
- remote control - the professor is not confined to staying near the LCD panel if any adjustments are needed

2. SVGA/Mac to TV converter
A scan converter costs $300 - $1,500. It converts the computer's video signal (SVGA/VGA/Mac) to the TV's NTSC signal. However, a display device, either a large-screen TV or a TV projector, is required. It may be the cheapest solution if the departments already have either a big-screen TV or a TV projector.

3. Presentation monitor ($5,000 - $8,000)
For a smaller audience, a presentation monitor (>29") has advantages over the LCD panel. The images from a monitor are sharper and brighter than LCD panel projection, and there is no need to turn off the lights during the presentation. It will be discussed in the 6-12 month plan.

4. Projection unit ($7,000 - $10,000)
A projection unit is another alternative. It can accept computer video output directly and has a built-in projection mechanism to produce large images on a screen.

II. Multimedia presentation development

1. Presentation software: PowerPoint 4.0
As in the project plan, the machines for the faculty will be bundled with Microsoft Office ($170 academic price). The Microsoft Office suite includes Microsoft PowerPoint 4.0. Since this presentation package will be available to everyone, and it is one of the most popular presentation packages on the market, it will very likely be the best choice for general presentation use.

2. Scanner
Usage: For scanning hardcopy graphics or text into computer-readable form so that they can be incorporated into multimedia presentations, tutorials, etc. A scanner has higher resolution than most digital cameras. However, it is only useful for flat objects - mostly printed materials.
Suggested system: The price difference between 300 and 600 dpi scanners is insignificant (less than $200) nowadays. It is recommended to acquire one of the newer models. The cost is around $1,200 for one from HP or Epson.
- 600 dots per inch (dpi) optical resolution, 24-bit color min.
- TWAIN interface for Windows
- supports both Windows and Mac (if possible)
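As a rough illustration of what these resolutions imply for file sizes (our own back-of-the-envelope numbers, not vendor specifications), an uncompressed 24-bit scan of a letter-size page grows quickly with resolution:

```python
# Illustrative sketch: uncompressed size of a letter-size (8.5 x 11 in)
# 24-bit color scan at the two resolutions discussed above.
def scan_size_mb(dpi, width_in=8.5, height_in=11.0, bytes_per_pixel=3):
    pixels = (dpi * width_in) * (dpi * height_in)
    return pixels * bytes_per_pixel / 1_000_000  # size in MB

for dpi in (300, 600):
    print(f"{dpi} dpi: ~{scan_size_mb(dpi):.0f} MB uncompressed")
# 300 dpi: ~25 MB; 600 dpi: ~101 MB -- doubling the resolution
# quadruples the data.
```

Real scans are usually cropped and compressed, so actual files are far smaller, but this quadratic scaling is one reason storage keeps coming up in the sections below.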
3. Optical Character Recognition (OCR) software
Usage: For converting scanned text images into word-processor-readable text. Often you will have printed material and other text, but no electronic text files, to incorporate into your project. With OCR software and a scanner, it can save you many hours of re-typing. Different packages cost from $100 to $500. OmniPage Professional is the most popular on the market.

4. Image processing software
Usage: Images obtained from scanning or a digital camera often require retouching or re-composition before they are in the desired form. Most of the scanners come in packages/bundles that include image processing software. The most popular software is Adobe Photoshop, which costs $600 ($250 academic). A bundle that includes Photoshop should be chosen if available.

5. Color printing
Usage: Occasionally there is a need for color hardcopy printout, either on paper or transparency, which can be shown to anyone who may not have a computer available all the time. A low-cost ink-jet printer is recommended for the initial purchase. Models from Epson or HP in the $500 range would be sufficient for regular use. The cost of supplies for this type of printer is low. The Epson Color Stylus is both Mac and PC compatible.

6. Low-cost digital camera
Usage: Provides a quick and fairly easy way to take digital images. Images can be played back directly from the camera to a TV monitor, or exported as graphics images to be used in multimedia presentations and incorporated into various applications. While several days' turnaround time is needed for taking regular photographs, developing them into prints and then scanning them with a scanner, a digital camera gives almost instant feedback about how the image looks. Even though the image quality is not as good as with traditional film, it is acceptable for most multimedia work.
Suggested system: Low-cost models from Kodak or Apple cost around $900.
- 640x480+ resolution, 24-bit color
- supports both Windows and Mac (if possible)

7. 2D illustration
Usage: General-purpose 2D illustration. Adobe Illustrator 5.5, Macromedia FreeHand 5.0 ($150 academic price), or CorelDraw 6.0 for Windows 95 ($500 list price).

8. Screen capture software
Usage: To capture on-screen activity into image files or QuickTime movies; useful for training/demonstration of software. This kind of software is usually around $100.

III. Live video

1. Low-cost camcorder
Usage: For shooting live video (in analog form); useful for laboratory training footage. Consumer-grade camcorders range from $700 - $1,500.
Suggested system:
- Hi8 or S-VHS (400+ line resolution); regular 8mm and VHS have only about 240-line resolution
- 12x zoom
- video output port, preferably an S-Video port, which gives a better signal than composite/RCA connectors
- color viewfinder/stereo/time code/VISCA (for computer control) support if available

2. Video monitor
Usage: Pre-screening of video segments before video capture. A resolution of more than 400 lines and an S-Video input port are required to take advantage of S-VHS or Hi-8 capability.

3. Computer hardware requirements for digitizing video
To make movies from video, special hardware is needed to convert the analog signal to digital data.
Video quality: Two factors determine apparent video quality: frame size and frame rate. [Figure: actual sizes of a 320x240 and a 160x120 movie] Quarter-size (160x120) video is usually too small to be useful; some students and faculty have commented that quarter-size movies are too small. Half-size (320x240) is usually recommended.
Full screen (640x480) is achievable now, even though it is expensive. Not many computers can play full-screen movies without skipping frames; to achieve full-screen movie playback, the computer needs a hardware-accelerated video display card installed. One disappointment for first-time digital video users is the small video window they are required to use when limited by equipment. Low-frame-rate video usually looks jerky. Equipment that can achieve no more than 10 frames per second (fps) should probably not be considered a long-term solution; low-frame-rate equipment may waste time, money, and development effort in the long run.

Mac solution: The easiest way to start is with a Macintosh 8100/100AV model and an additional low-cost video capture card, the Radius SpigotPro AV. This can be used to demonstrate what can be done. The monthly lease is about $200/month. The estimated purchase price is included in the specification document. For regular production use, a medium-quality system would require a VideoVision Studio card ($3,700) and an upgrade of the AV-tuned hard drive to a disk array. Pre-configured systems like this are available for lease for about $450/month. One reason for leasing instead of purchasing these AV systems is that the second-generation Macs change from NuBus to the PCI bus, so purchasing a NuBus Mac is not advisable. However, at this time, the only second-generation PowerMac models available (9500/120 or 9500/132) have no built-in AV capability, and neither the SpigotPro AV nor the PCI version of VideoVision is available for the 9500. The 8500AV should be available around Aug 7. The other future alternative is from Radius, which has announced plans to ship, very soon, pre-configured "PowerMac-clone" video workstations equipped with VideoVision Studio and the VideoVision Disk Array. In addition, the new Macs use 64-bit DIMM instead of 32-bit SIMM memory modules; therefore, any purchase of RAM now would not be a sound investment. Video work requires a large amount of RAM, starting at 40 MB; 72 MB is not uncommon. The lease can delay any purchasing decision until the 8500AV is available. It can also give time to test whether the SpigotPro AV or VideoVision Studio is required for the kind of work the professor would want to develop.

PC solution: Usually the original video capture is done on a Mac and the processing is moved to a PC. Although it is not very common to capture video on a PC, it is indeed possible to have the entire process done on a PC, which is probably the preferred choice for completely PC-based departments such as Physics. The same level of hardware for video work that is possible on the Mac is not available for the PC platform: PCs are usually limited to frame sizes no larger than 320x240 using capture cards such as the Intel SmartVideo Recorder Pro ($700). Although the PC platform has this limitation in capturing video, PCs can perform video editing at the same level as a Macintosh. And with a common format like Apple's QuickTime, video clips captured on a Macintosh can be edited or composed on either platform. Video editing software such as Adobe Premiere is available for both PC and Mac and is identical on both. The digitized movies can be played on either platform, regardless of which platform the movies were originally captured and edited on.

4. Video capture hard drive
To capture video into digital form, special hard drives that do not require thermal recalibration are needed. Regular hard drives pause occasionally to perform thermal calibration, which would result in skipped frames during video capture. Digital video takes up a huge amount of disk space. Raw video (after capture, before compression) at 160x120 frame size (1/16 of full screen) takes about 0.3 - 0.5 MB per second of video; at 320x240 frame size (1/4 of full screen), a video clip takes about 2 MB per second of raw video. A clip for simple laboratory training is usually 30 seconds to 1 minute, which translates into a 20+ MB file at 160x120 or a 60+ MB file at 320x240 (a rough sizing sketch follows below).
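The arithmetic behind those data rates is straightforward. The parameters below (16-bit color at 10-15 fps) are our assumptions for illustration - they are roughly what the quoted 0.3-0.5 MB/sec figure implies, not values stated in this plan:

```python
# Rough sizing sketch for raw (uncompressed) video capture.
# Assumed for illustration: 2 bytes/pixel (16-bit color) at 10-15 fps,
# consistent with the ~0.3-0.5 MB/sec quoted above for 160x120.
def raw_rate_mb_per_sec(width, height, fps, bytes_per_pixel=2):
    return width * height * bytes_per_pixel * fps / 1_000_000

for width, height, fps in [(160, 120, 10), (160, 120, 15), (320, 240, 15)]:
    rate = raw_rate_mb_per_sec(width, height, fps)
    print(f"{width}x{height} @ {fps} fps: {rate:.2f} MB/sec, "
          f"about {rate * 60:.0f} MB per minute of footage")
```

At these rates a 30-second to one-minute 320x240 clip lands in the 60-140 MB range before compression, matching the 60+ MB estimate above and explaining why the capture drive must be sized in gigabytes and kept clear of everything but work in progress.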
The clips are then composed in video editing software and compressed to a much smaller file size. The main capture drive is meant for storing files currently being processed only; processed files are moved to secondary storage for "long-term" storage. Drives of this type include the Micropolis 3243AV and the Seagate Barracuda series. A 4 GB AV drive costs about $1,700. This type of drive is suitable for most work. However, for high video quality, higher frame rates and the 320x240 frame size, a drive array is recommended, especially if the VideoVision Studio class of video capture card is used. Drive arrays will be discussed in the long-term requirements section. AV-tuned drives are also required for any CD-ROM writing applications, in which any interruption would ruin the CD-ROM being "burned".

5. Video editing software
Adobe Premiere 4.0 ($250 academic price; both Mac and Windows versions available) is one of the most common and powerful video editing packages for composing video clips. It lets you edit and assemble video clips captured from camera or tape, other digitized movie segments, animations, scanned images, and digitized audio or MIDI files. It also lets you add visual transition effects and superimposed captions to the clips. The generated QuickTime clip can be played on the PC, Mac, or UNIX platform, regardless of which platform the clip was assembled on. Premiere is bundled with some video capture cards.

6. Laser disc player
Usage: The laser video disc has become less commonly used nowadays. However, there is still excellent courseware available in videodisc format. This type of player is around $1,000.

7. Sound editing software
Usage: For editing narration recorded from a microphone; usually for tutorial development. Macromedia SoundEdit 16, ~$150 academic price.

8. Secondary storage
Usage: Since, as mentioned above, video clips take up a large amount of disk space (~0.3-0.5 MB/sec uncompressed), the captured clips need to be backed up off the capture hard drive onto other low-cost media. The working hard drive should always be left with plenty of space for capturing and processing video. Secondary storage is also for personal file backup, or for professors who want to keep their own multimedia work on disk. As mentioned in the video capturing section, digital video capture is very speed-critical; usually the computer capturing video needs to disable networking during capture. Storing files to a remotely mounted network disk is not convenient, although the network can always be re-enabled after capture. The video editing stage is less resource-demanding, although manipulating files over the network is still not advisable. Media that can hold more than 1 GB are recommended - for example, a 1.3 GB magneto-optical (MO) drive ($1,800 for the drive, $100 per 1 GB-class cartridge). There are also other low-cost alternatives, which are less convenient as they cannot hold as much information. Although these drives are less expensive than the MO drive, the cost per MB of their cartridges is higher than that of the MO cartridge (compare the per-MB costs sketched below):
- Bernoulli 230M, $500. Cartridges cost $100 each.
- 230M MO, $600. Cartridges cost $50 each.
- SyQuest 270M, $600. Cartridges cost $80 each.
- 100M ZIP drive, $200. Cartridges cost $20 each.
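To make the comparison concrete, here is a small sketch of cartridge cost per MB for the options above (cartridge prices only, drive prices excluded; treating the large MO cartridge as a nominal 1300 MB is our own assumption):

```python
# Cartridge cost per MB for the secondary-storage options listed above.
# Capacities in MB; the large MO cartridge taken as a nominal 1300 MB.
options = [
    ("1.3GB MO cartridge", 1300, 100),
    ("Bernoulli 230M",      230, 100),
    ("230M MO",             230,  50),
    ("SyQuest 270M",        270,  80),
    ("100M ZIP",            100,  20),
]
for name, capacity_mb, price in options:
    print(f"{name:>19}: ${price / capacity_mb:.2f} per MB")
# The big MO cartridge works out to roughly $0.08/MB, while the smaller
# cartridges run about $0.20-$0.43/MB -- the point made in the text.
```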
9. Backup
Usage: The multimedia workstations' hard disk sizes are in the GB range, so the only viable system backup solution is a digital tape drive. In contrast to the secondary storage, a tape drive backup can hold more than 2 GB of files and can be used for regular backup of the whole system. Tape drive backup is more time-consuming than the secondary storage solution suggested above. Tapes are useful for regular system backup, but not practical for saving files that are being worked on, because individual files are not randomly accessible as they are on a hard disk or secondary storage. 4 GB Digital Audio Tape (DAT) drives run about $1,000 - $2,000.

Utilities
Usage: To keep systems like the ones mentioned above in "top shape", several types of utility programs are required.
- Disk defragmentation - Norton Utilities, $100 academic
- Anti-virus - Symantec AntiVirus, $100
- Backup - MacTools Pro/PC Tools for Windows, $80 academic

Laser printer
Usage: A laser printer is needed for printing graphics layouts and draft printouts of source code (programming may be required in some multimedia development). Color printing may still be required.
Suggested system: As the price of 600 dpi laser printers has recently dropped to the level of 300 dpi ones, the price difference between a 300 dpi and a 600 dpi laser printer is about $300. For text printout there is no significant, noticeable difference; for graphics-intensive work like multimedia, the additional gray levels a 600 dpi printer supports are clearly distinguishable. A low-end 4-6 page-per-minute (ppm) PostScript 600 dpi HP LaserJet 5MP costs about $1,300; a 12-16 ppm model is around $2,400.
- 600 dpi
- supports both Windows and Mac

IV. Courseware development

Usage: Authoring software lets you put all the graphics, animation, audio, video, and text together, and add interactivity. For creating quality multimedia programs, Macromedia Director is the most popular and powerful tool of choice. Director can package your multimedia project as a standalone executable, so the multimedia projects you develop can be distributed. It is available for both Mac and PC, and both platforms use the same file format. It costs ~$300 per copy at academic price if purchased in 10-packs (~$3,000). Director is sufficient for most multimedia courseware authoring work. One exception is the Physics CUPLE project: all the tutorials from the CUPLE project were created with Asymetrix Multimedia ToolBook, so in order to make modifications to them, a copy of Multimedia ToolBook is required. It costs around $300 academic; the CBT (computer-based training) version costs $1,000.

The initial electronic dissemination of information would probably be done using the WWW. The hardware and software needed for creating Web documents are basically the same as those for multimedia presentation development. Since Microsoft Office is going to be bundled with the faculty's machines, Microsoft Word 6.0c is included in the suite. A free Word add-on, Internet Assistant, is available from Microsoft for creating HTML documents within Word.
V. Commercially available software

From Physics Academic Software, North Carolina State University:
- Comprehensive Unified Physics Learning Environment (CUPLE)
- CUPLE Student version, $500/10 licenses, 1 CD-ROM
- CUPLE Developer's version

From Falcon Software:
- SuperChemLab Mac CD-ROM Version 1.0, $300 (by Melanie Cooper, Clemson U.)
- Exploring Chemistry V CD-ROM (Mac and Windows), $500 (by Stanley G. Smith, U. of Illinois, and others)
- Chemistry Review Series: General Chemistry, $40 (by Stanley G. Smith)
- Chemistry Review Series: Organic Chemistry, $40 (by Stanley G. Smith)
- Introductory Chemistry Lecture Package CD-ROM for Windows Version 1.0, $300 (by Iris Stovall and Roxy Wilson, U. of Illinois)
- Teaching Chemistry with Demonstrations Level 1 Videodisc, $300 (by Roxy Wilson, U. of Illinois, and others)
- The Electronic Laboratory Simulator (ELS) Version 2.0 for Windows, $250

Ching-Wan Yip, Department of Chemistry, Wake Forest University, Winston-Salem, NC 27109-7486. Copyright © 1995 Ching-Wan Yip