Air is surprisingly heavy stuff: the air under an average coffee table weighs about 1 kg, so when we push large quantities of it through the small pipes of an engine at high speed it carries a significant amount of momentum. Engines breathe in huge quantities of air; each 100 bhp consumes about 35 pints of air per second and expels about three times that volume in exhaust gas. All this flow needs to be carefully controlled. The flow into the cylinders is not constant but stops and starts many times a second with each cycle, and inertia has to be allowed for so that enough time is available for the gas to speed up or slow down. This is all achieved with the camshaft, a long steel rod with lumps on it that pushes the valves up and down. It is spun in synchronisation with the crank and pistons so that the lumps, called lobes, press on the tops of the valve stems via the followers or other links and force the valves open at just the right moment. Each valve is opened and shut gradually to avoid excessive forces, which would increase wear. Because it takes time for the air to get moving, the cam opens the intake valve before the piston starts the intake stroke (I'll come back to this situation later). As the piston draws the air in, the air speeds up and gains momentum; then, when the piston reaches the bottom of the stroke and starts coming up, that momentum continues to force air in while the valve gradually shuts. So rather than the intake valve being open only for the 180 degrees of the intake stroke, it is more usually open for over 230 degrees in total. After it shuts and the compression stroke is followed by the expansion stroke, it becomes time to open the exhaust valve. But again, rather than waiting for the power stroke to finish and the exhaust stroke to start, the exhaust valve is actually opened a bit early: near the end of the power stroke very little useful power is being transmitted to the crank, so the exhaust valve is opened and the exhaust gas gradually picks up speed.
Initially the exhaust gas is forced out by combustion pressure, but as the piston travels up on the exhaust stroke it pushes the remaining exhaust gas out, further increasing momentum. Near the top of the stroke there is still enough gas momentum to drag the remaining exhaust gas out of the chamber even as the piston starts back down on the intake stroke; the exhaust valve is then gradually closed. This means there is a period at the end of the exhaust stroke and the beginning of the intake stroke when both the intake and exhaust valves are open. The exhaust momentum prevents the gas going the wrong way and even helps to drag fresh intake air into the cylinder. This situation is referred to as valve overlap. Matching cam timing to the valve size and port diameter is vital. Big-valve, big-port engines have lower gas speeds and less momentum, so during the overlap phase the exhaust gas can more easily reverse direction and flow into the intake at low engine speeds. At high engine speeds, though, a big valve head with a large overlap phase can make the engine breathe in much more air than it otherwise would, the gas momentum effectively cramming in more intake charge just like a supercharger does. That is why race engines can make so much more power than road engines, but struggle to idle smoothly and can be unpleasant to drive at light throttle. Cams are generally referred to by the total duration, in degrees, for which the valve is open. Most cams have the same duration for the intake and exhaust, but some race cams use different amounts, particularly on turbocharged or supercharged engines. A standard cam may have a duration of 240 to 260 degrees, a sporty road cam might have about 275 to 285 degrees, but a full-bore race cam could keep the valves open for as much as 310 degrees, although that probably wouldn't idle below 4,000 rpm!
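The duration and overlap figures above can be tied together with a little arithmetic. As a rough sketch (the helper below and the 110° lobe separation angle are illustrative assumptions, not figures from the text), overlap around TDC follows from the intake and exhaust durations and the lobe separation angle (LSA), which cam makers conventionally quote in camshaft degrees:

```python
# Hedged sketch: valve overlap from catalogue cam figures.
# Assumes symmetric lobes; durations are in crank degrees, LSA in cam degrees.

def valve_overlap(intake_duration, exhaust_duration, lobe_separation):
    """Overlap around TDC in crank degrees.

    The intake opens (ID/2 - LSA) degrees before TDC and the exhaust
    closes (ED/2 - LSA) degrees after TDC, so the overlap is their sum.
    """
    return (intake_duration + exhaust_duration) / 2 - 2 * lobe_separation

# A sporty road cam vs. a full race cam, both on an assumed 110-degree LSA:
print(valve_overlap(280, 280, 110))  # 60.0 degrees of overlap
print(valve_overlap(310, 310, 110))  # 90.0 degrees of overlap
```

The sketch shows why a longer-duration cam on the same lobe separation buys its top-end breathing with a much larger overlap phase, which is exactly what ruins the idle.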
Cam design is a complex subject, and the design has a massive effect on how well the engine breathes. Many tuning companies offer performance cams sold as 'Stage 1' and so on, but there is no technical definition of a 'Stage'; it is simply a marketing term, and one company's Stage 1 cam might be equivalent to another company's Stage 3. The cams are driven by either a chain or a toothed belt driven by the crank, so everything is synchronised. The crank has to go round twice for the pistons to complete all four strokes of a 4-stroke cycle, so the cams are driven at half the speed of the crank; this is done by making the cog on the crank half the size of the cogs on the cams. A very small number of engines use a series of gears instead of a belt or chain, one example being the old Rolls-Royce V8. This has the advantage of never needing replacement and maintaining accurate timing under all conditions; the downside is that it is heavier and more expensive. Adjustable 'vernier' pulleys are available to fine-tune the timing on race engines. These have two parts: the outer cog driven by the chain or belt, and the inner mounting hub that fits onto the camshaft. A ring of bolts joins the two halves together, each bolt passing through a slotted hole so that the two parts can be rotated slightly with respect to each other. A typical cam gear, known as a sprocket, may have teeth separated by 10°, so fitting the chain one tooth ahead or behind adjusts timing by 10° either way; fitting a vernier cam wheel allows finer adjustment within that 10°. Retarding the cam by a few degrees can increase low-speed torque at the expense of high-rpm power; advancing it a few degrees has the opposite effect. A good tuning company balances the intake dimensions, the exhaust manifold and system, and the port and valve sizes with the right cam type, and sets just the right cam timing to achieve the desired characteristics.
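The one-tooth versus vernier trade-off is simple arithmetic. A quick sketch (the 36-tooth sprocket is an assumed example that matches a 10° tooth spacing; the text leaves the tooth count unstated):

```python
# Hedged sketch: how coarse a one-tooth timing change is on the cam sprocket.
# A 36-tooth sprocket is assumed, giving a 10-degree spacing between teeth.

def cam_degrees_per_tooth(teeth):
    """Cam timing shift from moving the chain one tooth, in cam degrees."""
    return 360 / teeth

step_cam = cam_degrees_per_tooth(36)
step_crank = step_cam * 2  # the cam turns at half crank speed
print(step_cam, step_crank)  # one tooth = 10.0 cam degrees, i.e. 20.0 crank degrees
```

Because the cam turns at half crank speed, a one-tooth error on the sprocket is doubled when expressed in crank degrees, which is why the slotted vernier holes are needed for the last few degrees of adjustment.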
There is a lot more to cams than just a stick with lumps on.
In several installations the generator manufacturer has installed the brushes so they trail the commutator. In this case, if the generator drive shaft is rotated opposite the normal direction, brush damage can occur. Many operators elect to replace their own brushes in the field. This is usually not a problem as long as brush spring tensions are tested and adequate brush-seating techniques are used. One non-recommended brush-seating technique is, after installing two brush sets (180 degrees apart) in a starter generator, to complete two successive ground power starts; after the second start, the brushes are seated. Unfortunately, this technique results in excessive arcing between the brushes and the commutator during the first start. The reason the brushes are so well seated after the second start is that the arcing causes the commutator surface to resemble a rough-cut file. So much for extended brush life. Attention should always be paid to the manufacturer's recommendations prior to conducting any maintenance on a starter generator. Aircraft using separate starters can incorporate brushless generators or rectified alternators. There are several distinct differences in the internal operation of these devices as compared with a unit using brushes. One primary difference is that a brushless unit will include a permanent magnet generator (PMG). Any time the shaft of this device is rotating, the PMG has relative movement with a series of coils. This means that even if the cockpit switch is selected "off" there will still be excitation power available when the engine is in operation. Frequently, three-phase AC power is supplied from this PMG to a generator control unit. Here the excitation power is converted to DC and is then metered into the main exciter winding within the generator case. The magnetic field produced by this stationary exciter winding works in conjunction with the (usually three) main power coils installed on the rotor.
Output from each power coil is AC and is directed through a diode circuit to change the AC into rippled DC. Feeder cables connect this output into the aircraft electrical distribution system. The generator control unit also monitors this output and adjusts the regulation system according to voltage deficiencies or surges. Numerous avionic/airframe system problems arise from anomalies in the power systems. Most autoflight systems, as well as electronic flight instrument systems (EFIS), are voltage sensitive. In the event of a voltage drop below a specific threshold, the autopilot may disconnect or the EFIS could black out. In one situation, an aircraft had gone in for a complete interior refurbishment and new avionics. Afterwards, on extended-duration flights, the autopilot would periodically trip off but would always reset. Numerous technicians replaced various components in the autopilot circuit. Nothing solved the problem. Eventually, a technician flew with the aircraft on an overseas mission and happened to notice that when the flight attendant switched on the new oven in the galley, the autopilot tripped. The technician was able to duplicate this condition time after time. The current draw associated with turning on the oven was causing a momentary voltage drop on the distribution system, resulting in the autopilot trip. Relocating the oven power supply to a bus that was supplied by more than one generator solved the problem. In other cases, the ripple produced by brush bounce or regulation malfunctions can cause various computers to see voltages that might be out of tolerance for a computer sensor. The computer then signals the flight crew that it has sensed a failure. Ground power units (GPU) are also not exempt from spiking electrical power systems. In many aircraft the battery(ies) can be brought on line with the GPU and serve as a filter.
When using a questionable GPU on an electrically sensitive aircraft, connecting a lead-acid battery in parallel with the GPU will provide some protection. It is a good idea to periodically test each GPU for excessive voltage ripple. When troubleshooting electronic problems, even those associated with self-diagnosing systems, consider the power source before replacing too many components.
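As a rough illustration of such a ripple check (the sample voltages and the 2% acceptance threshold below are invented for the example; real limits come from the aircraft and GPU manuals):

```python
# Hedged sketch: peak-to-peak ripple as a percentage of the mean DC level.

def ripple_percent(samples):
    """Return (Vmax - Vmin) / Vmean * 100 for a list of voltage samples."""
    mean = sum(samples) / len(samples)
    return (max(samples) - min(samples)) / mean * 100

# Sampled output of a nominal 28 V DC ground power unit (invented values):
readings = [27.8, 28.2, 27.9, 28.1, 28.0, 27.7, 28.3]
r = ripple_percent(readings)
print(f"ripple = {r:.2f}%")
if r > 2.0:  # assumed acceptance threshold for the example only
    print("excessive ripple - investigate before connecting to the aircraft")
```

In practice the measurement would come from an oscilloscope or a true-RMS meter on the GPU output under load; the point of the sketch is only that a simple peak-to-peak figure against the mean DC level is enough to flag a unit for further inspection.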
Warning: In the List of Prokaryotic names with Standing in Nomenclature, an arrow (--->) only indicates the sequence of valid publication of names and does not mean that the last name in the sequence must be used (see: Introduction). Warning: see also the file "Classification of prokaryotes: Introduction".
Picrophilales Cavalier-Smith 2002, ord. nov. (Type order of the class ¤ Picrophilea Cavalier-Smith 2002).
Type genus: ¤ Picrophilus Schleper et al. 1996.
Etymology: N.L. masc. n. Picrophilus, type genus of the order; suff. -ales, ending denoting an order; N.L. fem. pl. n. Picrophilales, the Picrophilus order.
Reference: CAVALIER-SMITH (T.): The neomuran origin of archaebacteria, the negibacterial root of the universal tree and bacterial megaclassification. Int. J. Syst. Evol. Microbiol., 2002, 52, 7-76.
Copyright © J.P. Euzéby
Titanic centenary: Swedish dreams of a new life lost at sea

In the days after the Titanic sank with the loss of 1,517 lives, the Chicago Daily Tribune published an account of how Swedish immigrant and city tram conductor Nils Pålsson discovered his wife Alma and four children had perished in the waters of the Atlantic. "Paulson looked pale and ill when he leaned hungry-eyed over the desk and asked in broken English if his wife or children had been accounted for. "Chief clerk Ivar Holmstrom scanned his list of third-class passengers saved. "He failed to find there any of the names enumerated by Paulson. 'Perhaps they did not sail,' he suggested hopefully. "Then he looked over the list of those who sailed third class on the Titanic... The process of elimination was now complete. 'Your family was on the boat, but none of them are accounted for,' said Clerk Holmstrom. "The man on the other side of the counter was assisted to a seat. His face and hands were bathed in cold water before he became fully conscious. "He was finally assisted to the street by Gust Johnson, a friend who arrived with him. "Paulson's grief was the most acute of any who visited the offices of the White Star, but his loss was the greatest. "His whole family had been wiped out."

A million emigrants

During the 19th Century, failing crops and rising poverty made many Swedes sell up to start a new life across the Atlantic.

The 231 Nordic passengers:
- 123 Swedes - 89 died
- 63 Finns - 43 died
- 31 Norwegians - 21 died
- 14 Danes - 12 died

Between the early 1800s and 1930 more than one million Swedes left for America. Most sailed from Southampton or Liverpool to New York. On the Titanic, the Swedes were the largest group after British and American passengers, making Swedish the second most spoken language on board, according to Titanic expert and author Claes-Göran Wetterholm. "There were more than 200 Nordic passengers and they made up almost a third of all third-class passengers," he explained.
Of the estimated 1,300 passengers on board the Titanic, there were 123 Swedes, 112 in third class. There were 327 British and 306 American passengers on board. Nils Pålsson, a miner, left his home in Bjuv, Skåne, south Sweden, in June 1910 for Chicago, where he got a job as a tram conductor.

Unknown child's grave

By April 1912 he had enough money to pay for Alma Pålsson, 29, and their children Gösta Leonard, two, Stina Viola, three, Paul Folke, five, and Torborg Danira, eight, to join him. They travelled via Copenhagen to England and Southampton, where they boarded Titanic. As the ship began to sink late on 14 April, Alma dressed her children in their cabin. But they arrived on deck too late for the lifeboats and all of them died that night. Alma's body was recovered but none of her children was found. In the days after the disaster the body of a fair-haired little boy was found floating in the water near the site of the sinking. He was never identified and was buried at Fairview Cemetery in Halifax, Canada. His gravestone read: "Erected to the memory of an unknown child". Lars-Inge Glad, a descendant of Nils Pålsson, said: "For many years it was believed that the 'unknown child's grave' belonged to one of Alma's children but it turned out to be an English child from third class." The grave, which is near Alma's grave at Fairview Cemetery, was identified in 2007 as that of Sidney Leslie Goodwin, a 19-month-old boy from Wiltshire. Mr Glad said: "Nils never recovered from losing his family, but he remarried another Swedish woman called Christina. "They moved from Chicago to a place not far away, where they bought a house and Nils planted four trees in the garden in memory of his wife and children. Nils later changed his surname to Paulson to make it sound more American. "He died in 1964. The names of their children have been kept alive in our family.
My mother's second name was Viola and my grandmother was called Torborg."

The lost ring

The story of another Swedish victim will live on through her wedding ring. Gerda Lindell, 30, was also emigrating to America with her husband Edvard, 36, on the Titanic. The couple, from Helsingborg, Skåne, managed to stay together as the Titanic went down and reached collapsible lifeboat A together. August Wennerström, one of only 34 surviving Swedes, later described the events to many newspapers. He said he and Edvard managed to get into the lifeboat but Gerda had no strength left to climb in and clung on to the side. Eventually she could hold on no longer and drowned. Wennerström described how Edvard's hair "turned all grey in lesser time than 30 minutes" before he died, still holding his wife's ring in his hand. The survivors were later transferred to another lifeboat and taken to Carpathia, while the collapsible was left to drift away. Gerda's body was never found, nor was her husband's. But a month later a crew from another ship, Oceanic, found the drifting lifeboat about 300 miles from where the Titanic sank. As they began recovering three dead bodies from the raft, they saw something glistening at the bottom. They had found Gerda Lindell's ring. The ring was reunited with her father in Sweden after her brother saw a note about it in a local newspaper. For many years the ring, which was a combined wedding and engagement ring, remained in the family and Gerda's niece wore it. Mr Wetterholm had heard the story about the ring but until he managed to trace it in 1991 he thought it was a myth. The ring is now stored in a safety deposit box in Sweden, but is taken out for exhibitions around the world.

Last Swedish survivor

Another well-known Swede on board was Lillian Asplund. She is better known as the last American survivor, although she was actually from Sweden.
Lillian was born in the US in 1906 to immigrant parents, but the family returned to Småland in Sweden in 1907 to sort out the family farm after her grandfather's death. By 1912 they had decided to move back to the US and Mr Asplund booked them on the Titanic. Lillian survived along with her mother Selma and younger brother Felix, and was rescued by Carpathia. Her father and three older brothers died. Lillian Asplund never wanted to talk about the events of that fateful night. She died in 2006 at the age of 99.
History of Chiropractic

Daniel David Palmer founded chiropractic in 1895, after an experience in which he believed he had cured a man's deafness by manipulating his back. He opened the Palmer School of Chiropractic and began teaching spinal manipulation. This college still exists today, with a fully accredited program. One of Palmer's first students was his son, Bartlett Joshua (B.J.) Palmer. It was B.J. Palmer who truly popularized the technique. Later, Willard Carver, an Oklahoma City lawyer, opened a competing school. He believed that chiropractic physicians needed to offer other methods of treatment in addition to spinal manipulation. This opened a schism in the chiropractic world that still exists today. Followers of Palmer and his methods focus only on spinal adjustments, an approach called "straight" chiropractic. Those who, like Carver, use various approaches to healing are called "mixers." Mixers may use vitamins, herbs, and any other treatment methods they find useful (and are allowed to practice by law). Medical treatments in the 19th and early 20th centuries were not based on scientific evidence of effectiveness, and chiropractic treatment was no exception. It became a widespread technique long before there was any real evidence that it worked. Chiropractic schools put all of their profits and resources into developing programs for training people in chiropractic techniques, not into verifying the theory and practice of chiropractic. However, in the 1970s, proper scientific research into chiropractic began to draw interest. In 1977, the Foundation for Chiropractic Education and Research (FCER) established a program to train chiropractic researchers. Since then, efforts have been made to fund scientific trials testing the effectiveness of chiropractic techniques and to establish a scientific foundation for the practice.

What Is the Scientific Evidence for Chiropractic Spinal Manipulation?
Chiropractic spinal manipulation has been evaluated scientifically to determine its efficacy, as well as its cost compared to other forms of health care. However, the evidence is not compelling in either case. Although there is some evidence that chiropractic spinal manipulation may be helpful for various medical purposes, in general the evidence is not strong. There are several reasons for this, but one is fundamental: even with the best of intentions, it is difficult to properly ascertain the effectiveness of a hands-on therapy like chiropractic. Only one form of study can truly prove that a treatment is effective: the double-blind, placebo-controlled trial. (For more information on why such studies are so crucial, see Why Does This Database Rely on Double-blind Studies?) However, it isn't easy to fit chiropractic into a study design of this type. Consider the obstacles: What could researchers use for placebo chiropractic treatment? And how could they make sure that both participants and practitioners were kept in the dark regarding who was receiving real chiropractic manipulation and who was receiving fake manipulation? Because of these problems, all studies of chiropractic manipulation fall short of optimum design. Many have compared chiropractic treatment against no treatment. However, studies of this type cannot provide reliable evidence about the efficacy of a treatment: if a benefit is seen, there is no way to determine whether it was caused by chiropractic manipulation specifically or just by attention generally. (Attention alone will almost always produce some reported benefit.) More meaningful trials used some sort of unrelated fake treatment for the control group, such as phony laser acupuncture. However, it is less than ideal to use a placebo treatment that is so very different in form from the treatment under study. Better studies compare real chiropractic manipulation against sham forms of manipulation, such as light touch. Studies of this type are a definite step forward.
However, it is quite likely that the practitioners at least unconsciously conveyed more enthusiasm and optimism when performing the real therapy than the fake therapy; this, too, could affect the outcome. It has been suggested that the only way around this problem would be to compare the effectiveness of trained practitioners to actors trained only enough to provide a simulation of treatment; however, such studies have not been reported. Still other studies have simply involved treating people with chiropractic spinal manipulation and seeing whether they improve. These trials are particularly meaningless; it has long since been proven that both participants and examining physicians will at least think they observe improvement in people given a treatment, regardless of whether the treatment does anything on its own. Finally, other trials have compared chiropractic manipulation to competing therapies, such as conventional physical therapy. However, such therapies have not themselves been proven effective, and when you compare unproven therapies to each other, the results cannot possibly prove that any of the tested treatments are effective. Given these caveats, we discuss below what science knows about the effects of chiropractic. Besides effectiveness, another important consideration is cost of care. There are many aspects to the cost of treatment, including the number of visits to the chosen provider, the cost of evaluation procedures such as x-rays, insurance reimbursement versus patient out-of-pocket expense, and the cost of missed work time. However, it is difficult to develop accurate cost-comparison figures because there are many complicating factors in research on the subject. For example, one approach is to simply identify people with similar injuries who choose one treatment or another and add up the total cost. Unfortunately, the results of such a study can be misleading.
People with more or less severe back pain might tend to choose different forms of treatment; if those with more severe pain usually chose surgical treatment, this would tend to inflate the comparative costs of conventional care and make chiropractic seem less expensive. Another potentially complicating factor is that, to a great extent, insurance companies control utilization of treatment. If they are less inclined to authorize chiropractic visits, people who choose chiropractic care might find their care cut off more rapidly than others who choose, say, physical therapy. This too would lead to artificially low costs of chiropractic treatment compared to physical therapy, skewing the results of the study. These problems could be solved by conducting a study in which researchers randomly assign participants to certain treatments, with the length of treatment determined entirely by the treating physician. Unfortunately, studies of this type have not yet been conducted. Chiropractic spinal manipulation is one of the most popular treatments for acute and chronic back pain in the US, and it may in fact provide at least modest benefit. However, as yet, research evidence has failed to find chiropractic manipulation convincingly more effective than standard medical care. Chiropractic does seem to be more effective than placebo, if not by a great deal. For example, a single-blind controlled study of 84 people suffering from low back pain compared manipulation to treatment with a diathermy machine (a physical therapy machine that uses microwaves to create heat beneath the skin) that was not actually functioning. The researchers asked the participants to assess their own pain levels within 15 minutes of the first treatment, then 3 and 7 days after treatment. The only statistically significant difference between the two groups was within 15 minutes of the manipulation. (Chiropractic had better results at that point.) 
In another single-blind, placebo-controlled study, researchers assigned 209 participants to one of three groups: high-velocity, low-amplitude (HVLA) spinal manipulation; sham manipulation; or a back-education program. Although this has been reported as a positive study, most of the differences seen between the groups were not statistically significant. In addition, because almost half the participants dropped out of the study before the end, the results can't be regarded as meaningful. Unimpressive results were also seen in a well-designed study of 321 people with back pain comparing chiropractic manipulation, a special form of physical therapy (the McKenzie method), and the provision of an educational booklet in treating low back pain. All groups improved to about the same extent. Several studies evaluated the effectiveness of chiropractic manipulation combined with a different kind of treatment called mobilization, but they too found little to no benefit. On a positive note, one study of 100 people with back pain and sciatica symptoms (pain down the leg due to disc protrusion) found that chiropractic manipulation was significantly more effective at relieving symptoms than sham chiropractic manipulation. Several studies have found that chiropractic is at least as helpful as other commonly used therapies for low back pain, such as muscle relaxants, anti-inflammatory medication, soft-tissue therapies, conventional medical care, and physical therapy. For example, a large, well-designed study found chiropractic manipulation more effective than general medical care and exercise therapy. Note: Physical therapy, the main conventional therapy for back pain, also lacks consistent supporting evidence. For example, in one large study of people with back pain, a single session of advice proved as effective as a full course of physical therapy.
As with back pain, despite the widespread use of chiropractic spinal manipulation for neck pain, there is as yet no reliable evidence that it works any better than other therapies, particularly over the long term. Of the limited number of studies performed, most have failed to find manipulation (with or without mobilization or massage) convincingly more effective than placebo or no treatment. One large study (almost 200 participants) found that a special exercise program (MedX) was more effective than manipulation. However, a study reported in 2006 showed that a single high-velocity, low-amplitude (i.e., chiropractic-style) manipulation of the neck was more effective than a single mobilization procedure in improving range of motion and pain in people with neck pain. And a 2010 systematic review, including 17 randomized trials, found mixed results for the benefits of manual therapy (including manipulation and mobilization) combined with exercise. According to these researchers, high-quality studies showed manual therapy plus exercise to be more effective than exercise alone in the short term, but there was no difference over the long term.

Upper Extremity Pain

Patients often seek out chiropractic for painful conditions affecting their upper extremities (e.g., shoulder, elbow, forearm, wrist, hand). A recent search and analysis of all published studies examining the effectiveness of chiropractic for these conditions revealed mostly case studies, an unreliable source of evidence. The few controlled trials uncovered were of insufficient quality to draw any reliable conclusions about the effectiveness of chiropractic for painful conditions of the upper extremity.

Tension Headaches and Cervicogenic Headaches

Many people experience headaches caused by muscle tension, neck problems, or a combination of the two. Because these so-called tension headaches and cervicogenic headaches (caused by neck problems) overlap, we discuss them together here.
Chiropractic spinal manipulation has shown some promise for these conditions, but the evidence remains incomplete and somewhat contradictory. In a controlled trial of 150 people, investigators compared spinal manipulation to the drug amitriptyline for the treatment of chronic tension-type headaches. By the end of the 6-week treatment period, participants in both groups had improved similarly. However, 4 weeks after treatment was stopped, people who had received spinal manipulation showed greater reduction in headache intensity and frequency and over-the-counter medication usage than those who used the medication. The difference in the amount of improvement between the groups was statistically significant. In another positive trial, 53 people with cervicogenic headaches received chiropractic spinal manipulation or laser acupuncture plus massage. Chiropractic manipulation was more effective. However, a similar study of 75 people with recurrent tension headaches found no difference between the two groups. Other, smaller studies of spinal manipulation have been reported as well, with mixed results. In a controlled trial, 200 people with cervicogenic headaches were randomly assigned to receive one of four therapies: manipulation, a special exercise technique, exercise plus manipulation, or no therapy. Each participant received at least 8 to 12 treatments over a period of 6 weeks. All three treatment approaches produced better results than no treatment, and approximately the same effect as each other. However, these results prove little because, as noted earlier, any treatment whatsoever will generally produce better results than no treatment. A review of 5 randomized trials with 348 patients found that spinal manipulation was more effective than medication (amitriptyline), manipulation with placebo, sham manipulation with placebo, standard treatment, or no treatment. 
However, there was no significant difference in headache pain or intensity when comparing spinal manipulation to soft tissue therapy with placebo laser. There is some evidence that chiropractic manipulation may provide both long- and short-term benefits for migraine headaches. In a double-blind, placebo-controlled study, 123 participants suffering from migraine headaches were treated for 2 months with chiropractic manipulations or with fake electrical therapy (electrodes placed on the body without electrical current sent between them) as placebo. The study lasted a total of 6 months: 2 months pre-treatment, 2 months of treatment, and 2 months post-treatment. After 2 months of treatment, those receiving chiropractic manipulation showed improvement in headache severity and frequency compared to the control group. Furthermore, these benefits persisted to a 2-month follow-up evaluation. Chiropractic manipulation produced relatively prolonged benefits in another trial as well. In this study, 218 people with migraine headaches were divided into three groups: manipulation, medication (amitriptyline), or manipulation plus medication. During the 4 weeks of treatment, all three groups experienced comparable benefits. During the follow-up 4-week period, however, people who had received manipulation alone experienced more benefit than those in the other two groups. However, a study of 85 people with migraines compared spinal manipulation against two other treatments: manipulation performed by a non-chiropractor, and mobilization. The results showed no difference between the groups. Chiropractic has been evaluated for many other conditions as well, but the results as yet provide little evidence of benefit. Infantile colic is a common and frustrating problem. Although chiropractic manipulation has been promoted as a treatment for this condition, there is as yet little evidence that it offers specific benefits.
In a single-blind, placebo-controlled trial, a total of 86 infants either received three chiropractic treatments or were held for 10 minutes by a nurse. While a high percentage of infants improved, there was no significant difference between the two groups. Another trial compared spinal manipulation to the drug dimethicone. While chiropractic proved more effective than the medication, dimethicone itself has never been proven effective for infantile colic, and the study did not use a placebo group. For this reason, the results of this study indicate little about the effectiveness of chiropractic treatment for infantile colic. A small crossover trial of chiropractic for found equivocal results. A small trial compared real and sham Activator-style chiropractic treatment in people with phobias and found some evidence of benefit. In two controlled studies comparing spinal manipulation to sham manipulation for treatment of people with , the results showed equal improvement for participants in the two groups. These results suggest that the benefits were most likely caused by the attention given by the chiropractor, and not due to the spinal manipulation itself. However, one of these studies has been sharply criticized for using as a sham treatment a chiropractic method perfectly capable of producing a therapeutic effect. This could hide real benefits of the tested form of chiropractic. (If the “placebo” treatment used in a study is actually better than placebo, and the tested treatment does no better than this “placebo,” the results would appear to indicate that the tested treatment is no better than placebo, and, hence, ineffective.) Dysmenorrhea (Menstrual Pain) A single-blind, placebo-controlled study of 138 women complaining of menstrual pain compared spinal manipulation to sham manipulation for four menstrual cycles and found no differences between the two groups.
High Blood Pressure In a study of 148 people with mild high blood pressure, use of chiropractic spinal manipulation plus dietary changes failed to prove more effective for reducing blood pressure than dietary changes alone. A single-blind, placebo-controlled trial compared real and sham chiropractic (Activator technique) in 46 children with problems, but failed to find a statistically significant difference between the groups. Weak evidence hints that chiropractic could be somewhat helpful for adolescent idiopathic scoliosis (curvature of the spine that occurs for no clear reason in adolescents). Chiropractic manipulation appears to be generally safe—rarely causing serious side effects. However, a temporary increase of symptoms may occur relatively frequently. Other side effects include temporary headache, tiredness, and discomfort radiating from the site of the adjustment. More serious complications may occur on rare occasions. These are primarily associated with manipulation of the neck. Articles have been published that document a total of almost 200 cases of more serious complications associated with neck manipulation, including stroke, vertebral fracture, disc herniation, severely increased sensation of nerve pinching, and rupture of the windpipe. More than half of these reports involve some form of stroke, often due to a tear in a major blood vessel at the base of the neck (the vertebral artery). Although attempts have been made to determine in advance who will experience strokes following chiropractic, they have not been successful. Thus, stroke must be considered an unpredictable, though rare, side effect of chiropractic manipulation of the neck. To put this in perspective, however, the rate of complications from chiropractic is extremely low. According to one estimate, only one complication per million individual sessions occurs.
Among people receiving a course of treatment involving manipulation of the neck, the rate of stroke is perhaps one per 100,000 people; the rate of death is one per 400,000. By comparison, serious medical complications involving common drugs in the ibuprofen family (non-steroidal anti-inflammatory drugs, or NSAIDs) are far more common. Among people using them for arthritis, NSAIDs result in hospitalizations at a rate of about four in 1,000 people, and death at a rate of four in 10,000. To put it another way, the rate of complications with these common over-the-counter drugs is perhaps 100 to 400 times greater than with chiropractic. Certain health conditions preclude spinal manipulation, such as nerve impingement causing severe nerve damage, or significant disease of the spinal bones.
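The "100 to 400 times" comparison can be checked directly from the rates quoted above; a quick arithmetic sketch (all rates are the figures given in the text):

```python
# Back-of-the-envelope check of the risk comparison in the text.
# All rates are per person, as quoted above.

chiro_stroke = 1 / 100_000   # stroke per person receiving neck manipulation
chiro_death  = 1 / 400_000   # death per person receiving neck manipulation
nsaid_hosp   = 4 / 1_000     # NSAID-related hospitalization among arthritis users
nsaid_death  = 4 / 10_000    # NSAID-related death among arthritis users

hosp_ratio  = nsaid_hosp / chiro_stroke   # serious events: NSAIDs vs. chiropractic
death_ratio = nsaid_death / chiro_death   # deaths: NSAIDs vs. chiropractic

print(f"serious-event ratio: {hosp_ratio:.0f}x")  # 400x
print(f"death ratio: {death_ratio:.0f}x")         # 160x
```

The two ratios, roughly 160 and 400, bracket the "100 to 400 times greater" range stated in the text.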
If he casts the right fly, an angler can catch some really big fish. Scientists are the same way, needing the right type of microscope to visualize nature's smallest molecules and atoms. Now, researchers are redesigning their light microscopes to catch a glimpse of some of the most minuscule molecules, those that make proteins in bacteria and archaea. A promising solution is the use of fluorescence in situ hybridization (FISH) and stochastic optical reconstruction microscopy (STORM). Together, these techniques are improving our understanding of how bacteria and archaea transcribe DNA to RNA and then translate RNA to proteins. In addition, they are re-shaping how cell biology studies relate to environmental microbes. Luring and Lighting Biomolecules "Light microscopy has been a workhorse in cell biological research," says Harvard biophysicist Xiaowei Zhuang. She says scientists want to use light microscopy to study cells, especially live ones, because it is non-invasive. The problem, however, with zooming in on biomolecules and their movements in bacteria and archaea is the small size of the individual cells. At only about three micrometers long and a micrometer wide, bacterial and archaeal cells come into focus just around the diffraction limit of light, which is about 200 nanometers. With light microscopy, scientists can see a cell but not its nuclear and cellular machinery. Even though these cells are simpler than mammalian cells and other eukaryotic ones, scientists still know little about them. To get a better look, Zhuang and her collaborators developed STORM, demonstrating three-dimensional super-resolution imaging in 2008 (1). Zhuang's group has used it to image individually labeled proteins in live cells, including bacteria and archaea. And, like pairing the right fly with the right rod, other researchers are using STORM with their own techniques to "look at the distribution and dynamics of nuclear targets at a resolution that is far from the reach of conventional microscopy," says Bakshi.
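The roughly 200-nanometer figure mentioned above follows from the Abbe diffraction limit, d = λ / (2·NA). A minimal sketch, using illustrative values (green light and a high-NA oil-immersion objective, not numbers from the article):

```python
# Abbe diffraction limit: d = wavelength / (2 * numerical aperture)
# Illustrative values: ~550 nm (green light), NA ~1.4 (oil-immersion objective).

def abbe_limit(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable distance for a light microscope, in nanometers."""
    return wavelength_nm / (2 * numerical_aperture)

d = abbe_limit(550, 1.4)
print(f"{d:.0f} nm")  # ~196 nm, close to the ~200 nm limit cited above
```

A bacterium a micrometer wide is only a few resolution elements across at this limit, which is why sub-diffraction techniques such as STORM are needed to see its internal machinery.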
For example, Cristina Moraru of the Max Planck Institute for Marine Microbiology in Germany and colleagues wanted to know where ribosomes sit within the cell because those molecular machines interact with the nucleoid—the carrier of the genetic information in archaea and bacteria. Based on where ribosomes are located, there are different models of interactions, which can significantly shape regulation of transcription, translation, and other cellular processes. In a paper recently published in Systematic and Applied Microbiology (2), Moraru’s group reported on a combined STORM and FISH approach to locate ribosomes in an Escherichia coli cell. Moraru’s team used FISH to label specific sequences of ribosomal RNA with fluorescent probes, and then imaged the samples with STORM. "In the end, all these differences could reflect in the way the cell answer to environmental changes, and therefore, in the fitness and survival," says Moraru. In the near future, she adds, scientists could use STORM, FISH, and other super-resolution techniques to count the number of ribosomes in a bacterium. Ribosomal Catch and Release Counting the number of ribosomes is essential to understanding how bacteria grow. Moraru explains that "the regulation of ribosome numbers in microbial cells is complex and, probably, there will not always be a direct correlation between ribosome numbers and metabolic activity." But it is likely that a cell with a high ribosome content will be more active than one with a low ribosome content. If scientists can count ribosomes, they could get a sense of the level of metabolic activity in microbial cells. But scientists have not yet counted the exact numbers of ribosomes per cell; the FISH protocol and RNA probes need to be more efficient at hybridization. "Work in this direction is in progress, and we are confident that there is only a matter of time till ribosome quantification per cell will be achieved," says Moraru.
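Moraru's point about hybridization efficiency can be made concrete: if only a fraction of ribosomes actually bind a probe, the raw localization count underestimates the true number. A hypothetical correction sketch (the function name and efficiency value are illustrative, not from the paper):

```python
def estimate_ribosomes(observed_spots: int, hybridization_efficiency: float) -> int:
    """Correct an observed FISH spot count for incomplete probe hybridization."""
    if not 0 < hybridization_efficiency <= 1:
        raise ValueError("hybridization efficiency must be in (0, 1]")
    return round(observed_spots / hybridization_efficiency)

# If 12,000 spots are localized but only ~70% of ribosomes carry a probe:
print(estimate_ribosomes(12_000, 0.70))  # ~17,143 ribosomes estimated
```

This is why probe efficiency matters: the lower (and less certain) the efficiency, the larger and noisier the correction, so absolute per-cell counts require efficient, well-characterized hybridization.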
So far, prokaryotic cell biology studies have been limited because many methods are not compatible with uncultivated microorganisms. But because the FISH-STORM approach uses RNA probes that target different microbial taxa in environmental samples, scientists could study ribosome variation across bacterial species. "By looking at samples from different environmental conditions, from warm season versus cold season, or, from high salinity versus low salinity, the variation of ribosome number across environmental conditions could be assessed," says Moraru. In structured environments, such as biofilms, activated sludge, and tissue samples, FISH also preserves the spatial information and reveals potential interactions between different species and community members in a sample. "Targeting rRNA by super-resolution FISH is only the beginning. In the near future, we envision targeting the other nucleic acid components of microbial cells to reveal the sub-cellular localization and numbers of specific genes and mRNAs," says Moraru. A Different Kettle But the FISH-STORM approach isn't the only way to bait biomolecules in small cells. Bakshi, a graduate student in University of Wisconsin-Madison chemist James Weisshaar's lab, uses a technique called pointillism to do sub-diffraction limit imaging. With this technique, he constructs an image of a cell by localizing a large number of single molecules iteratively. This requires labels that can be switched on and off, but generates resolution up to 20–30 nanometers. In contrast to FISH, Bakshi’s approach can be used for live-cell imaging. To truly understand the complexity and heterogeneity of the behavior of any biomolecule, says Bakshi, requires that scientists can probe one molecule at a time. His team's technique gives them the position and movement of a single object in a cell at a high spatio-temporal resolution.
"When we are looking at a ribosome, it enables us to determine which molecules are involved in translation and where they are inside the cell," he says. In a 2012 paper published in Molecular Microbiology (3), he and Weisshaar reported that most of E. coli's translation is not coupled with transcription—a discovery that runs counter to the common view in the scientific literature. Bakshi says that since bacteria lack a nuclear membrane—which separates the nucleoid from the rest of the cytoplasm—co-transcriptional translation is possible in the cells. To what extent the translation process is coupled to transcription, however, was not clear. Electron microscope images of ribosomes in cell extract, published in the 1970s, suggested that all translating ribosomes are joined to the chromosome through transcriptional coupling. "When we found that our results suggest that most translation is actually happening without such coupling, we were very surprised," says Bakshi. The team eventually figured out that the lifetime of an mRNA in E. coli is much longer than the time taken for its transcription. The mRNA gets released from proteins associated with the nucleoid once transcription terminates and is then translated by ribosomes without being attached to DNA for the rest of its lifetime, he says. The techniques—whether it's FISH, STORM, or something else—ultimately let biologists cast deeper lines into individual cells of bacteria and archaea, learning more about their molecular and metabolic dynamics.
1. Huang, B., Wang, W., Bates, M., and Zhuang, X. (2008). "Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy." Science 319(5864): 810–813.
2. Moraru, C. and Amann, R. (2012). "Crystal ball: Fluorescence in situ hybridization in the age of super-resolution microscopy." Systematic and Applied Microbiology. In press.
3. Bakshi, S. et al. (2012). "Super-resolution imaging of ribosomes and RNA polymerase in live Escherichia coli cells." Molecular Microbiology 85(1): 21–38.
4. Wang, W. et al. (2011). "Chromosome organization by a nucleoid-associated protein in live bacteria." Science 333: 1445–1449.
Birds buffer against virus North American scientists studying West Nile virus have shown that more diverse bird populations can help to buffer people against infection. Since the virus first spread to North America it has reached epidemic proportions and claimed over 1,100 human lives. “This is an important example of the links between biodiversity and human health”, commented Dr Stuart Butchart, BirdLife's Global Research and Indicators Coordinator. Biodiversity is increasingly being recognised as socially and economically important because of the valuable services it provides. The authors of this latest research - John Swaddle and Stavros Calos - highlighted the “increasing evidence for economically valuable ecosystem services provided by biodiversity”. West Nile virus mainly affects birds but can be transferred to humans via mosquitoes. It first spread to North America in 1999, and since that time it has reached an epidemic scale with over 28,000 human cases - including 1,100 deaths - being reported. The cost of West Nile virus-related healthcare in the United States was estimated at $200 million in 2002 alone. The virus is also an important threat to bird populations. Over 300 species act as hosts, although American Robin Turdus migratorius has been named as largely responsible for transmission from birds to humans. “West Nile virus may compound existing pressures - like habitat loss - to increase the risk of extinction for species”, commented Dr Butchart. For example, Yellow-billed Magpie Pica nuttalli, which is found only in California, appears to have declined by almost 50% in the last two years as a consequence of the disease. Scientists studying the virus looked at US counties east of the Mississippi River and compared their avian diversity with the number of human cases.
They found that high bird diversity was linked with low incidence of the virus in humans. They reported that differences in local bird communities could explain about half of the variation in human cases of West Nile virus. The study’s results also suggest that bird communities lowered human case numbers even when the epidemic was underway. The way in which biodiversity and disease rates are linked has been dubbed the ‘dilution effect’. Although the exact mechanisms aren’t currently clear, scientists believe that increased diversity within an ecosystem reduces - or dilutes - the proportion of suitable hosts for a disease, and therefore reduces transmission rates. It has previously been studied through another infection, Lyme disease, but this new research suggests that it may be more widely applicable. If so, it could be a valuable tool for public health and safety plans. The paper is entitled ‘Increased avian diversity is associated with lower incidence of human West Nile Infection: observation of the dilution effect’ and published in PLoS ONE. Credits: Written by Harriet Vickers
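The dilution idea described above can be sketched with a toy model: if transmission risk scales with the fraction of the bird community made up of competent virus hosts, then adding non-competent species dilutes that fraction. A minimal illustration (species mixes and the assumption that only robins are competent hosts are hypothetical, not data from the study):

```python
def competent_fraction(host_counts: dict, competent: set) -> float:
    """Fraction of a bird community made up of competent virus hosts."""
    total = sum(host_counts.values())
    return sum(n for species, n in host_counts.items() if species in competent) / total

# Low-diversity community dominated by a competent host (e.g. American Robin):
low_diversity = {"robin": 80, "sparrow": 20}
# Higher-diversity community with the same number of robins, diluted by other species:
high_diversity = {"robin": 80, "sparrow": 20, "catbird": 50, "thrush": 50}

print(competent_fraction(low_diversity, {"robin"}))   # 0.8
print(competent_fraction(high_diversity, {"robin"}))  # 0.4
```

With the same absolute number of competent hosts, the more diverse community halves the chance that a mosquito bite lands on a host able to pass the virus on, which is the essence of the dilution effect.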
At least a thousand years before the Jewish concept of humans being made in the image of God (Genesis 1:27), African Sages said the sanctity of life is the central pillar inside each human being. This concept was introduced in the Sebait of Kheti for his son Meritkara in the First Intermediate Period, more specifically in the 9th Dynasty (c. 4042-3633 BCE). Kheti’s comments not only provide the earliest known concept of humans as the images of God, but they also pose them as the children or offspring of God (Karenga, Maat, p. 225, 318). Out of this evolved the concepts of the sanctity of human life and of humans as the bearers of Dignity and Divinity—both characterizing what it means to be Human, and both constituting the source of Good Character. Thus, one's Dignity is the absolute reality and significance of one's Selfhood, and one's Divinity is the subtle and hidden quality of God's Consciousness that requires cultivation throughout one's lifetime. Being of a spiritual nature, both are without degrees. This Ancient African belief in man being made in the image of God (Snn NTr; Imago Dei) became the spiritual grounding or meaning for human Dignity and Divinity; for the sacredness of life; and for moral responsibility. Hence it followed that the moral relationship between one human and another ought to be that of Acknowledgement of the Dignity and Divinity bestowed on every person and Appreciation of whatever flows out of and/or contributes to either or both. To appreciate one's Dignity demands the acquisition of African-type moral character. African-type moral character is fashioned around the spark of the divine presence within each human being. This means that whereas Dignity and Divinity are birth gifts, one's Dignity must be displayed around one's Divinity while one's Divinity must be cultivated into Enlightenment.
When one esteems who one is, based upon one's Dignity, and then attaches to one's Dignity the tasks one does in life and carries those tasks to completion, one exhibits Self-Respect. Selfhood Mastery means one maintains moral character every time one is severely tested.
Name: _____________________________    Period: ___________________________
This test consists of 5 short answer questions, 10 short essay questions, and 1 (of 3) essay topics. Short Answer Questions Directions: Answer the question with a short answer. 1. When Dunstan finds the Bollandists, where does he find Padre Blazon? 2. What does Eisengrim tell Boy that he called him during Eisengrim's days in Deptford? 3. When Dunstan visits Mary Dempster, after his return from Europe, what is she wearing each time he comes? 4. Since the former Headmaster died, what has Dunstan been doing? 5. Who is Magnus Eisengrim? Short Essay Questions Directions: Answer the questions with a short paragraph. 1. What causes Boy to take a walk by himself during the Christmas season of 1936? This section contains 798 words (approx. 3 pages at 300 words per page)
How one word developed a split personality TO ME, ONE of the best things about being an English speaker is affixes - those little additions before or after the root of a word that can transform its meaning. Our language allows us to be almost endlessly creative in how we combine and recombine words and word parts. Take the case of the two affixes robo- and -bot: two sides of the same word, but with very different effects. Compare the roboturtle (an engineering project described as “an agile and aggressively maneuvering biomimetic autonomous underwater vehicle”) with turtlebot (a homebrew robot kit that “can explore your house on its own!”) The word robot, in the sense of a machine capable of independent action, was ushered into the language in 1920 by the play “R.U.R.: Rossum’s Universal Robots.” According to Karel Capek, the play’s Czech author, his brother Josef suggested that he take the word from the Czech robota (meaning “forced labor” or “drudgery”) rather than create a word from a Latin root. (According to the Oxford English Dictionary, the word android, “an automaton resembling a human being,” is nearly 200 years older.) But in the 90-odd years since we’ve had the word robot, it’s undergone a personality split into two different affixes, robo- and -bot. This isn’t surprising - there’s a language-learning strategy called the “principle of contrast,” which basically means that when we learn a language, we come to expect that two words should never be exact synonyms. Even for words as close in meaning as baggage and luggage, there tend to be definite distinctions in use (for instance, we talk often about “emotional baggage,” but would mention “emotional luggage” only as a joke). So robo- and -bot may have started out as similar affixes, but as new generations of English speakers have adopted them - and expected them to be different - they have drifted apart. Robo- is the stronger of the two affixes, and it has taken on a slightly more menacing air. 
Compounds with robo- tend to focus on qualities of strength and unstoppableness - for good or ill. Although the Robocop of the 1987 movie of the same name was a hero, later references to robocops have focused more on the supposedly dehumanized nature of over-armored police. Robocalls, those auto-dialed nuisances, inexorably interrupt you during dinner, and the calls used for negative political campaigning are sometimes called robo-slime; robo-polls are calls used to conduct scientifically questionable surveys. Robo-trading, using algorithms to automatically buy and sell stocks and securities, was widely blamed for last year’s “Flash Crash,” when the Dow Jones industrial average dropped more than 600 points in five minutes. And the outcry over robo-signers, mortgage-company employees who signed thousands of foreclosure documents without looking at them, led major banks to temporarily halt foreclosures. Other more-scary-than-cute robo- compounds include the robo-toilet, which, with its motion-activated lid and seat, sounds like an accident waiting to happen; the robotaxi, which has space for two passengers (without bags) but no driver; and the robonaut, a semi-humanoid joint project of GM and NASA, intended to help on both spacewalks and assembly lines. (Why semi-humanoid? The robonaut has no legs.) Even the most innocuous nouns can take on a harder edge when preceded by robo-: In the 2010 movie “Megamind,” Megamind laments that he “had so many evil plans in the works - the illiteracy beam, typhoon cheese, robosheep . . . .” The suffix -bot, on the other hand, is a bit more cuddly: Think Roomba, not Robocop.
Early uses of -bot marked computer programs that automatically interacted with information online, and ranged from helpful to slightly annoying: searchbots to build indexes of webpages, spambots, floodbots (which pushed information where it wasn’t wanted), cancelbots (which removed unwanted information), and chatbots or chatterbots (programs designed to engage in more-or-less human-sounding conversation). These -bots were easy to anthropomorphize, and more or less harmless - unlike, say, an autonomous evil cyborg. More recently, Twitter has been colonized by a host of twitterbots, little programs that automatically find, create, or send data. Many are annoying and spammy, bombarding the poor souls who happen to tweet about iPads or other hot products with commercial messages. But some are pure entertainment, such as a comedybot, which automatically posts bits from comedians, or @EinsteinBOT, which autotweets quotes from Albert Einstein. Friendlier words seem to be more easily suffixed with -bot. There are guidebots and guardbots, helpbots and healthbots, medbots and newsbots, shopbots and teachbots. The chalkbot writes inspirational messages in chalk on the roadway; the dustbot is an “on-call robotic rubbish collection service” being tried in Italy. The suffix -bot is also popular in product names - there’s Jambot (musical software), Kegbot (a beer dispenser that tracks how much you’ve consumed), and Wattbot (which helps you figure out whether you can save money through renewable energy sources). Dorkbot is an organization for those interested in electronic art; Makerbot is a company that creates 3D printers - which can also make parts for more Makerbots. When -bot words do turn unfriendly, they emphasize the knee-jerk automaticity of what’s being done, and so are often political. There are Obamabots and Randbots; Romneybots, Republibots, and Dembots, Conserva-bots, Bushbots, Palinbots, Paulbots (Ron Paul enthusiasts), and Limbots (followers of Rush Limbaugh). 
The scariest -bot may be the fembot, encompassing the evil-but-hot fembots of Austin Powers, ads for Svedka Vodka, and the Robyn song that insists that “fembots have feelings too.” Why don’t we call them robo-femmes? Because they’re still more sexy than scary: thus the -bot suffix. So what happened to make robo- take the aggressive path and -bot the cute and friendly one? Other words that have split into prefix-suffix pairs have clearer rationales for their separate meanings: Alcoholic’s more common -holic suffix is used for any addiction, with alco- reserved for things related to alcohol, such as alcopops (alcoholic drinks that taste like soft drinks) and alcolocks (devices that disable a car’s ignition if the driver has had too much to drink). More commonly, though, it’s only the less-specific tail end of a word that goes on to a productive life as a suffix: the -thon of marathon, the -naut of astronaut, the -gate of Watergate. Perhaps it’s because robot’s two syllables are equally meaningful (or meaningless) that we can get two different affixes out of it: -bot, as a suffix, acts more like a cute diminutive, like -let or -ling, while robo-, as a prefix, behaves more like a menacing intensifier, like mega- and uber-. Either way, at this point, the divergence looks as if it’s here to stay - and is almost, dare we say, automatic.
Sir Ferdinando Gorges, (born c. 1566, probably at Wraxall, Somerset, Eng.—died 1647, Long Ashton, Gloucestershire), British proprietary founder of Maine, who promoted, though unsuccessfully, the colonization of New England along aristocratic lines. After a colourful military career in his early manhood, during which he was knighted (1591), Gorges’ life after 1605 was dominated by attempts to gain royal sanction for various settlement schemes in North America, although he himself never traveled there. He felt that colonizing should be a royal endeavour and that colonies should be kept under rigid control from above. In 1620 Gorges succeeded in obtaining a charter to develop the Council for New England—a proprietary grant covering the entire area in North America between the 40th and 48th parallels. He intended to distribute the land as manors and fiefs to fellow gentry who were members of the Council but was thwarted by the success of two vigorous, middle-class, self-governing English colonies founded by joint-stock companies at Plymouth and Massachusetts Bay. Since these New England settlements had received their charters directly from the crown, the Council was thus bypassed as an intermediary. Gorges was the recipient of several land grants during his lifetime, most importantly the charter for Maine in 1639. Although his agents set up a provincial government there, the English Civil Wars and Gorges’ advancing age prevented him from fulfilling his American dream.
Medici Family, French Médicis, Italian bourgeois family that ruled Florence and, later, Tuscany, during most of the period from 1434 to 1737, except for two brief intervals (from 1494 to 1512 and from 1527 to 1530). It provided the church with four popes (Leo X, Clement VII, Pius IV, and Leo XI) and married into the royal families of Europe (most notably in France, in the persons of queens Catherine de Médicis and Marie de Médicis). Three lines of Medici successively approached or acquired positions of power (see the Table). The line of Chiarissimo II failed to gain power in Florence in the 14th century. In the 15th century the line of Cosimo the Elder set up a hereditary principate in Florence but without legal right or title, hence subject to sudden overthrow; crowns burgeoned, however, on the last branches of their genealogical tree, for two of them were dukes outside Florence, their last heir in a direct line became queen of France (Catherine de Médicis), and their final offspring, Alessandro, a bastard, was duke of Florence. In the 16th century a third line renounced republican notions and imposed its tyranny, and its members made themselves a dynasty of grand dukes of Tuscany. The differences between these three collateral lines are due essentially to circumstances, for there was, in all the Medici, an extraordinary persistence of hereditary traits. In the first place, not being soldiers, they were constantly confronting their adversaries with bribes of gold rather than with battalions of armed men. In addition, the early Medici resolutely courted favour with the middle and poorer classes in the city, and this determination to be popolani (“plebeian”) endured a long time after them. Finally, all were consumed by a passion for arts and letters and for building.
They were more than beneficent and ostentatious patrons of the arts; they were also enlightened and were probably the most magnificent such patrons that the West has ever seen. Line of Chiarissimo II. The Medici were originally of Tuscan peasant origin, from the village of Cafaggiolo in the Mugello, the valley of the Sieve, north of Florence. Some of these villagers, in the 12th century perhaps, became aware of the new opportunities afforded by commerce and emigrated to Florence. There, by the following century, the Medici were counted among the wealthy notables, although in the second rank, after leading families of the city. After 1340 an economic depression throughout Europe forced these more powerful houses into bankruptcy. The Medici, however, were able to escape this fate and even took advantage of it to establish themselves among the city’s elite. But their policy of consolidating their position by controlling the government—the work of the descendants of Chiarissimo II (himself the grandson of the first known Medici)—resulted in 50 years of serious misfortunes for the family (1343–93). His grandson Salvestro took up his policy of alliance with the popolo minuto (“common people”) and was elected gonfalonier, head of the signoria, the council of government, in 1378. Salvestro more or less willingly stirred up an insurrection of the ciompi, the artisans of the lowest class, and, after their victory, was not above reaping substantial monetary and titular advantages. But in 1381, when the popular government fell, he had to go into exile. His memory, however, was still alive in 1393, when the popolo magro (“lean people”) once more thought it possible to take over the signoria. The mob hastened to seek out his first cousin, Vieri, who was, however, able to fade away without losing face. With Vieri this branch of the Medici was to disappear definitively from history. Line of Cosimo the Elder. 
A distant cousin of Salvestro was Averardo de’ Medici (or Bicci), whose progeny became the famous Medici of history. His son Giovanni di Bicci de’ Medici (1360–1429), considered the first of the great Medici, inherited the family business based on cloth and silk manufacturing and on banking operations and made the family powerfully prosperous. Giovanni’s two sons, Cosimo (1389–1464) and Lorenzo (1394–1440), both of whom acquired the appellation of “the Elder,” founded the famous lines of the Medici family. Cosimo de’ Medici, the older brother, established the family’s political base. He served on the Florentine board of war, called the Dieci (The Ten), and held other posts. His two sons were Piero (1416–69) and Giovanni (1424–63). The latter died before his father, who in death received the title “Father of His Country.” Piero di Cosimo de’ Medici maintained and strengthened the political fortunes of the family. He also fathered two sons, one of whom, Giuliano (1453–78) was assassinated. The second son, Lorenzo (1449–92), became in his own time Il Magnifico (The Magnificent). Lorenzo de’ Medici deservedly holds an honoured place in the history of Florence and Italy. Inheriting from his forebears a deep respect for arts and letters, he became a poet himself as well as a patron of artists and a skilled statesman. His three children, Piero (1472–1503), Giovanni (1475–1521)—later Leo X—and Giuliano (1479–1516), played contrasting roles in the city’s history. Assuming the mantle of family power from Lorenzo, Piero alienated the people of Florence by siding with the French. Because of this act, considered a betrayal, the Medici had to flee Florence (1494). Giovanni, at that time a cardinal, used his influence with Pope Julius II to bring the family back to positions of power. Giuliano, who received the French title of duc de Nemours, was in poor health and died relatively young. 
Piero, oldest of the children of Lorenzo the Magnificent, fathered one son, also named Lorenzo (1492–1519), who in turn had a daughter, Catherine (1519–89), who became queen of France as wife of Henry II; three of her four sons became kings of France. Giovanni, second son of Lorenzo the Magnificent, became Pope Leo X. In commemoration of the deaths of Giuliano and Lorenzo, the two who had died relatively young, the family commissioned Michelangelo to complete the famous Medici Tombs in Florence. The few years of this period are often considered to be the apogee of the Medici age. The period has even been called “the century of Leo X.” From 1513 to 1521, surrounded by five nephews and cousins whom he had named cardinals, Leo X reigned less over Christianity than over arts and letters in the style of his father, the Magnificent, too occupied with patronage to pay sufficient attention to an unimportant monk by the name of Martin Luther. By the 1520s, nonetheless, the descendants of Cosimo the Elder had become few in number. To ensure that a Medici of the Cosimo line would continue to rule Florence, Pope Clement VII, nephew of Lorenzo the Magnificent, installed Alessandro (1511–37), reputedly his own illegitimate son, as hereditary duke of Florence in 1532; in the same year Clement VII abolished the city’s old constitution. Alessandro proved to be cruel and brutally authoritarian. He ruled for five years. In 1537 he was assassinated by a companion who was also a relative.
Disillusionment with French policies, however, did not reconcile the Italian Jacobins with their former rulers; instead, it bolstered their nationalism. In Piedmont, for instance, a secret society, I Raggi (“The Beams of Light”), advocated a democratic, unionist, and anti-French program that would lead Italy toward unity and independence.
Supreme Court of Japan, Japanese Saikō Saibansho, the highest court in Japan, a court of last resort with powers of judicial review and the responsibility for judicial administration and legal training. The court was created in 1947 during the U.S. occupation and is modelled to some extent after the U.S. Supreme Court. Like the Federal Constitutional Court of West Germany, the Supreme Court of Japan was endowed with the prerogative of judicial review, largely as a result of U.S. influence. The Supreme Court of Japan is the successor to the Daishin-in, which was established in 1875 and reorganized in 1890 under the Meiji Constitution (1889) as a supreme court of final appeal in criminal and civil cases. Under the control of the Ministry of Justice, that court had little independence and could not deal with questions of constitutionality. The 1947 court, therefore, was intended to have the freedom to work independently of the government and to decide the constitutionality of statutes and administrative decisions. The Supreme Court of Japan is made up of 14 justices and a chief justice, who sit as the Grand Bench to hear constitutional cases and cases that a petty bench (made up of five of the justices) has been unable to decide. There are three petty benches: civil, criminal, and administrative. A petty bench may consider a constitutional issue only if the Grand Bench has set precedent in the specific area covered. Distribution of cases among the petty benches and assignments of individual Supreme Court judges are determined by the entire court sitting as the Judicial Assembly. The assembly is responsible for determining regulations for the national courts, the public prosecutors, and the legal profession and for disciplining violators of these regulations. As Japan has a unified national court system, all courts are under the control of the Supreme Court.
The court even prepares a list of nominees for positions in the inferior courts. The Judicial Assembly, through the Legal Training and Research Institute, also oversees graduate legal training for those who wish to pursue careers as judges, prosecutors, and lawyers. The justices are appointed by the Cabinet (the chief justice by the emperor upon designation by the Cabinet). At least two-thirds must have considerable experience as lawyers, prosecutors, law professors, or members of high courts. Justices serve for life but may be retired for advanced age or ill health; they may also be impeached by the Diet. The only restriction on the justices is that they are forbidden to take part in politics. Theoretically, the public has some control over the appointments to the court. In the first general election following the appointment of a justice, the electorate is allowed to voice its approval or disapproval; the electorate reviews the status of a justice after a tenure of 10 years. Cases come to the Supreme Court on appeal from one of the high courts, which are themselves appeals courts. The Supreme Court has no original jurisdiction, and it can deal only with a legal issue arising from a specific case. Even constitutional issues cannot be considered abstractly outside specific legal problems. The court can void any decision in which it finds there has been an incorrect interpretation or application of the law. The court may also overturn a ruling if it finds error in the facts of the case or if it considers the punishment unjust. It may remand a case to a lower court if it finds justification for the reopening of proceedings.
CHAPTER 20: LONG ACRE The formation of the parish of St. Paul, Covent Garden, in 1645 left a long strip of ground between the northern boundary of that parish and Castle Street, Long Acre, the northern boundary of the parish of St. Martin-in-the-Fields; through this strip runs the street called Long Acre. The street takes its name from a field known as Long Acre, which consisted originally of 7 acres and was purchased (ref. 158) by Henry VIII, together with Covent Garden adjoining it on the south, from the Abbey of Westminster. It was then held on lease by William Browne. In July, 1547, Long Acre and Covent Garden were granted (ref. 26) to Edward, Duke of Somerset, the Protector, who, four years later, in December, 1551, was sentenced to death in Westminster Hall. We are told that the people "supposing he had been clerely quitt, when they see the axe of the Tower put downe, made such a shryke and castinge up of caps, that it was heard into the Long Acre beyonde Charinge Crosse." (ref. 187) In May, 1552, John, Earl of Bedford, obtained a grant "of the land called le Covent Garden; and the 7 ac. land and pasture called Long Acre abutting on St. Martin's Lane on the west, on Foscue [Drury] Lane on the East, on the Strand on the south, and upon the land called Elmfield pertaining to the Mercers' Company on the north, to hold as of the Manor of East Greenwich." (ref. 26) Bedford's descendants retained possession of this property almost down to the present day. Elmfield, to the north of Long Acre, was not bought by Henry VIII, but remained in the possession of the Mercers' Company. In 1614 the Mercers granted a 30 years' lease of it to Thomas, Earl of Exeter, who in the following year sold his lease to Sir William Slingsby. The street called Long Acre was laid out at about this time by Slingsby and the Earl of Bedford, the line of the street following approximately the line of the common boundary of their properties.
Thenceforth the term Long Acre was frequently applied to the ground on both sides of the street, and in 1650 when the Mercers' ground was surveyed it was referred to as "Elme Close alias Long Acre," and a certain Captain Disher tried to prove that it was part of the property purchased by Henry VIII. (ref. 188) From 1616 onward there were frequent complaints about buildings in Long Acre erected "contrary to the King's Proclamation." In 1630 Francis, Earl of Bedford, and Sir Henry Cary (then tenant of Elmfield) replied to a letter ordering them "to cleanse and make passable the way called Long Acre" that their predecessors had granted long leases of their lands adjoining the street "in hope to procure fair and spacious buildings to be there erected," and that if the King would give them leave to build they would "pave and keep it as well as any other street in London." (ref. 36) Part of Elmfield was granted by Slingsby to the Churchwardens of St. Clement Danes for use as a laystall. In 1636 this laystall was condemned by the Justices of the Peace for Westminster as a "nuisance," but the Churchwardens successfully appealed against this decision by stating that the houses in the neighbourhood had been built since the formation of the laystall and "the building of houses there is a greater nuisance and inconvenience to the public than the placing of the laystall can be." (ref. 36) Nevertheless by various shifts and expedients building went on. In December, 1637, William Portington, Lieutenant of the Horse for Middlesex, appealed against an order of the Commissioners for Buildings for the demolition of his shed fronting Long Acre. Portington argued that his building was not "a shed" which he defined as "a leaning to something to bear up the roof" whereas "this roof bears itself and at its first erecting as a tenement it was built for one." (ref. 36) In the same year another petitioner, Thomas Cooke, stated that Long Acre was "almost wholly built." (ref.
36) The Parliamentary Survey (ref. 188) shows that the street was fairly well lined with small houses and shops in 1650. Mercer Street and Cross Lane were also built up, the latter being on the site of what is now Neal Street (formerly King Street). Feather Alley, Knockle Alley and Dirty Lane or Street were also mentioned as turnings out of the north side of Long Acre. Among the early residents may be mentioned Oliver Cromwell (1637–43), Nicholas Stone, sculptor (1615–45), John Parkinson, botanist (1626–45), and Sir John Temple (1645). John Taylor, the "water-poet," took the Crown Inn in Hanover Court after the fall of Oxford in 1645. Scipio Lesquire, who owned much property in the parish, and after whom Lesquire Street (later Chandos Street) was named, also lived in Long Acre (1627–59), as did Major-General Skippon (1645–49), the Earl of Peterborough (1665–74), John Dryden (1668–86), Lady Mary St. John, mother of Viscount Bolingbroke (1655–92), and Adrian Vandiest, Dutch landscape painter (1698–1704). Thomas Stothard, artist, was born at the Black Horse Inn in 1755. On the 1875 Ordnance Survey several "coach manufactories" are shown on the north side of the street, and leases of the Mercers' Company show that the connection of this trade with the locality dates back to the late

Nos. 16–20.—These premises, which appear to have been built circa 1690, have plain brick fronts of two storeys over shops and with attics (Plate 110). A plain projecting band denotes the second floor level while the windows have their frames flush with the wall face. The shops are of later date. In No. 19 the upper flights of the staircase are original and have spiral balusters, square newel posts and close moulded strings, but the lower flight and the side entrance have been altered. Some of the rooms still retain bolection moulded panelling in two heights with a deep wooden cornice. On the first floor is a mantelpiece with plain stone jambs and a keyed flat

List of Occupants to 1800.
(fn. a) No. 16—Edward (Edmond) Vialls (1690–1717), Amos Vialls (1718–42), Vialls Widow (1743), Jas. Cope (1744–47), James Rigby (1747–49), Jeremiah Wills (1749–52), Sunibank Giles (1753–79), John Randall (1780–85), Thos. Cox (1786–89), Barbor and Harvey (1790–97), Jas. Scoles (1797–). No. 17.—Isaac Deloone (1690–92), Samuel Watson (1693–1712), Wm. Casteele (1713–14), John Bird (1715–23), Edward Middlebrook (1724–25), Joseph Mason (1726), Edward Mason (1727), Thos. Cotterell (1728–50), George Hall (1751–52), John Bedford (1752–57), John Hurst (1758–61), Sarah Hurst (1762), John Reynolds (1762–67), Joseph Carter (1768), Henry Edgecomb (1769–71), Thomas Faucit (1772–73), Thomas Moyston (1774–76), Thomas Wood (1777–78), Evan Powell (1779–80), John Crookham or Cookham (1781–88), Tho. Wooden (1789–91), Tempest Holt (1791–93), Jno. Crockham (1794–96), Hannah Crockham (1797), John Mansfield (1798), Evan Jones (1799–). No. 18.—John Perismore (1690–1703), Owen Davis (1704–18), Lewis Gyatt (1719–21), James Hurst (1722–25), Samuel Hurst (1726–32), Samuell Steele (1733), Christopher White (1734–55), Henry Todd (1755–67), — Hill (1768), Thos. Dawson (1769–79), John Whitaker (1780–81), Geo. Salt (1782–). No. 19.—Thos. Burton (1690–1704), Jonathan Farren (1705–16), Wm. West (1717–20), Rich. Messenger (1721–22), John Chiselston (1723–30), Samuel Davison (1731–33), Bartholomew Kilpin (1734–41), Peter Planck (1742–70), Miss Planck (1771–73), Peter Planck & Co. (1774–96), Renigall Briand (1797–98), — Planck (1799–). No. 20.—Jas. English (1686–96), Edw. Luttrell (1698–99), Charles Pennycock (1700), Augustine Ingeno (1701), Alexander Bracket (1702–03), Richard Yates (1704–27), Yates Widow (1728–30), Thomas Turner (1731), Ric. Hubbard (1731–40), John Gibson (1742–45), Savile (Samuel) Samber (1747–53) (1754–1800 occupied with No. 19). Conduit Court between Nos. 17 and 18, appears to have taken its name from Leonard Conduit who is rated there in 1689–90. 
It is described by Strype as "indifferent broad with a free-stone pavement, and passage to Hart Street; a court indifferently well built and inhabited." No. 17, Long Acre, the Bird in Hand, has been so called for well over 200 years. Langley Court, a narrow thoroughfare leading out of Long Acre on the southern side between Nos. 34 and 35, has some interesting bay windows. It was known until 1846 as Leg Alley, probably from the house at the corner which in the 18th century had the sign of the Golden Leg. The south side of the court appears to have been erected circa 1759–61, probably by Thomas Prior of St. Giles-in-the-Fields, bricklayer. No. 53, Long Acre.—This house appears to date from the middle of the 18th century but the interior has been entirely altered. List of Residents to 1800: Timothy Raikes (1730–32), Ignatius Couran (1734–35), Mary Hancock (1736–40), John Shelton or Sheinton (1741–60), Edward Brain (1761), John Plunkett (1762), Jas. Rowles (1765–75), Henry Frost (1776), John Barber (1777–80), John Windeatt (1781–82), Richard Mortimer (1783–85), Richard Norris (1786), Jas. Carter (1787–92), Harriet Pearce (1793–).
Treating diabetes, cerebral palsy and heart disease with cord blood

Type 1 Diabetes

Type 1 Diabetes is also known as juvenile diabetes. It is an autoimmune disease caused by the body's own immune system attacking and destroying the insulin-producing cells in the pancreas. Insulin allows the body to process sugar to create energy, and without insulin, the body literally starves as it cannot process food. Type 1 Diabetes affects more than 140,000 people in Australia alone and, while it can be managed, at present it cannot be cured. As a result, it is a lifelong and often disabling disease that can severely impact the quality of life of those who are afflicted. Researchers are looking at a wide range of potential cell-based therapies, and the use of autologous umbilical cord blood as a source of immunomodulatory cells for the treatment of autoimmune diseases has become increasingly popular7-10. Umbilical cord blood contains a population of immature but highly functional regulatory T-cells (Tregs)11. These regulatory T-cells could, in theory, limit inflammatory cytokine responses and anergize effector T-cells, which are thought to play a key role in autoimmune processes12,13. In the laboratory, infusion of human blood stem cells into diabetic animals has demonstrated a reversal of the disease2,3. The potential of such cells to provide a source of safe and effective immunomodulation may be of the greatest importance in treating type 1 diabetes4-6. As such, umbilical cord blood Tregs have become a major focus in designing cell-based therapies for children with Type 1 Diabetes14.

Cord blood and cerebral palsy

Cerebral palsy (CP) is a permanent physical condition that affects movement and muscle coordination. It results from damage to part of the brain, usually before birth. In Australia it is estimated that a child is born with cerebral palsy every 15 hours and despite advances in medical science the incidence of CP has not declined.
Babies most at risk of cerebral palsy are those born prematurely or with low birthweight. While the reasons for this remain unclear, cerebral palsy may occur as a result of problems associated with preterm birth or may indicate an injury has occurred during the pregnancy that has caused the baby to be born early. For most people with cerebral palsy, the cause is unknown. Except in its mildest forms, it is usually diagnosed within the first 12-18 months of life. The early signs are a lack of muscle coordination when performing voluntary movement, walking with one foot or leg dragging, walking on the toes or muscle tone that is either too stiff or too floppy. Cerebral palsy cannot be cured; to date, treatment plans for children have focused on improving a child's physical and mental functioning through physical, occupational, speech and behavioural therapy. Additional treatment for CP has included special braces to compensate for muscular imbalances, mechanical and communication aids as well as surgery. However, there is now some exciting research regarding the use of cord blood cells for people with cerebral palsy taking place around the world. Researchers at Duke University, USA are looking at infants and children diagnosed with cerebral palsy and infusing them with their own cord blood. Parents, therapists and researchers are observing dramatic improvements in the motor and speech skills of the children with cerebral palsy, in some cases within a few days of being treated with their own cord blood extracted at birth. Additional trials are underway in the US and Europe, and a trial is expected to commence in Australia mid-2011.

Real life Stories

Videos 1 and 2: These two amazing videos show the dramatic changes that Maia Friedlander, a young New Zealand girl with cerebral palsy, underwent after receiving stem cell therapy using her own cord blood.
The cord blood was infused by Dr Joanna Kurtzberg at Duke University and within two days Maia's parents began to notice significant changes.

Video 3: At 8 months old Dallas Hextell was diagnosed with Cerebral Palsy. Unable to communicate or control his body, conventional therapy had had little impact. Dallas was accepted into the Duke University clinical trial and this video shows the remarkable improvement of Dallas after receiving his own cord blood.

Video 4: Chloe Levine underwent an infusion of her own cord blood stem cells to treat cerebral palsy. Her progress since the infusion has been remarkable and this video shows how cord blood changed her life.

Video 5: This video features two children, Emma Jabs and Alyssa Dupuis, who received infusions of their own stem cells to treat cerebral palsy.

Cord blood and heart disease

Cardiovascular disease (CVD) refers to all diseases of the heart and blood vessels. Affecting more than 3.7 million Australians, it is the leading cause of death in Australia and is one of Australia’s largest health problems. Heart failure occurs when cardiac tissue is deprived of oxygen. If the damage is significant then there is a loss of cardiac muscle cells, which in turn results in a variety of events including formation of scar tissue, thinning of the heart walls, increased blood flow and pressure, heart failure and eventually death. Although there is a vast array of medications, plus a range of surgical options available to treat heart disease, neither approach actually addresses the loss of function in the damaged tissues. Researchers are now exploring potential therapies using various stem cell sources (including cord blood) to repair or replace damaged tissue. Results to date have shown that in animal models, cord blood stem cells can move to injured cardiac tissue and improve both vascular function and blood flow at the site of injury, with an overall improvement in heart function.
In other studies researchers have shown that cord blood stem cells showed strong growth potential for engineered vascular grafts that could be used to help treat heart defects. Although more research needs to be done, scientists believe cord blood stem cells may have a future role in treating children born with congenital heart defects.

References
1. International Diabetes Federation. Facts & figures: diabetes prevalence. http://www.idf.org/home/index.cfm?node=264. Accessed September 15, 2007.
2. Beilhack GF et al: Purified allogeneic hematopoietic stem cell transplantation blocks diabetes pathogenesis in NOD mice. Diabetes 2003;52:59–68.
3. Hess D et al: Bone marrow-derived stem cells initiate pancreatic regeneration. Nat Biotechnol 2003;21:763–770.
4. Limbert C et al: Beta-cell replacement and regeneration: strategies of cell-based therapy for type 1 diabetes mellitus. Diabetes Res Clin Pract 2008;79:389–399.
5. Hussain MA et al: Stem-cell therapy for diabetes mellitus. Lancet 364:203–205.
6. Couri CE et al: Secondary prevention of type 1 diabetes mellitus: stopping immune destruction and promoting beta-cell regeneration. Braz J Med Biol Res 2006;39:1271–1280.
7. Ende N et al: Effect of human umbilical cord blood cells on glycemia and insulitis in type 1 diabetic mice. Biochem Biophys Res Commun 2004;325:665–669.
8. Haller M et al: Insulin requirements, HbA1c, and stimulated C-peptide following autologous umbilical cord blood transfusion in children with type 1 diabetes (abstract). Diabetes 2007;56(Suppl. 1):A82.
9. Viener H et al: Changes in regulatory T cells following autologous umbilical cord blood transfusion in children with type 1 diabetes (abstract). Diabetes 2007;56(Suppl. 1):A82.
10. Voltarelli JC et al: Autologous nonmyeloablative hematopoietic stem cell transplantation in newly diagnosed type 1 diabetes mellitus. JAMA 2007;297:1568–1576.
11. Godfrey WR et al: Cord blood CD4(+)CD25(+)-derived T regulatory cell lines express FoxP3 protein and manifest potent suppressor function. Blood 2005;105:750–758.
12. Fruchtman S: Stem cell transplantation. Mt Sinai J Med 2003;70:166–170.
13. Han P et al: Phenotypic analysis of functional T-lymphocyte subtypes and natural killer cells in human cord blood: relevance to umbilical cord blood transplantation. Br J Haematol 1995;89:733–740.
14. Haller MJ et al: Autologous umbilical cord blood infusion for type 1 diabetes. Exp Hematol 2008;36:710–715.
LITTLE FALLS, N.J., July 17 -- Increases in teenage births, AIDS infections, and other sexually transmitted diseases indicate that progress in adolescent sexual health may have slowed in recent years, researchers say. Teen birth rates for girls decreased for almost 15 years before starting an upward trend in 2005, and the annual rate of AIDS diagnoses among boys in that age group has nearly doubled in the last 10 years, according to a new CDC report. After decreasing for more than 20 years, gonorrhea infection rates leveled off, while syphilis rates have been increasing, CDC's Lorrie Gavin, PhD, and colleagues reported in the July 17 Morbidity and Mortality Weekly Report. "The sexual and reproductive health of America's young persons remains an important public health concern," the researchers said. "Earlier progress appears to be slowing and perhaps reversing." The report is a summary of data on young people from ages 10 to 24 from multiple sources, including the National Vital Statistics System, the National Health and Nutrition Examination Survey, and the National Survey of Family Growth. The researchers found that about one million young people in the U.S. had chlamydia, gonorrhea, or syphilis in 2006. That group accounted for nearly half of all incident sexually transmitted diseases even though it represented only 25% of the sexually active population, the researchers said. A large proportion of disease occurs in the youngest population, with about 18,000 youths between ages 10 and 14 diagnosed with sexually transmitted diseases in 2006. Chlamydia was the most commonly reported STD, followed by gonorrhea, and then syphilis. Rates of all three diseases were highest among non-Hispanic blacks for all age groups. Also, about 25% of girls from 15 to 19 and 45% of those from 20 to 24 had human papillomavirus infections between 2003 and 2004. While teen pregnancy rates decreased every year from 1991 to 2005, they started increasing from 2005 to 2007.
About 745,000 pregnancies occurred among girls under 20 in 2004, with 16,000 of those involving girls from 10 to 14. Like disparities in STD infection, pregnancy rates were much higher for Hispanic and non-Hispanic black girls ages 15 to 19 than in white girls (132.8 and 128 per 100,000 population, versus 45.2). With regard to AIDS, the annual rate of diagnoses among boys ages 10 to 24 has nearly doubled in the last 10 years, from 1.3 cases per 100,000 in 1997 to 2.5 in 2006. That year, about 22,000 youths in 33 states were living with HIV/AIDS. Again, non-Hispanic blacks were most likely to be affected. Black teenage girls ages 15 to 19 were more likely to live with AIDS than Hispanics, Alaska Natives, whites, or Pacific Islanders (49.6 per 100,000 compared with 12.2, 2.6, 2.5, and 1.3, respectively). Sexual assaults also increased over the study period, the researchers said, with 105,000 girls visiting an emergency department for a sexual assault injury between 2004 and 2006. About 27,500 of those visits were among girls ages 10 to 14. The researchers said that the southern states generally had the highest rates of negative sexual and reproductive health outcomes, including early pregnancies and STDs. They said their findings "underscore the importance of sustaining efforts to promote adolescent reproductive health." "Practitioners," they said, "can use [this information] when making decisions about how to allocate resources and identify those subpopulations that are in greatest need." The study was limited by self-reported data, undetected cases of disease, challenges in estimating pregnancy rates, and lack of ability to investigate causality.
Phytoplankton Under Ice Beneath the Arctic ice—over 12 feet deep in some areas—lies a dark, cold and lifeless sea. Or so we thought. “If someone had asked me before the expedition whether we would see under-ice blooms, I would have told them it was impossible,” says Arrigo. “This discovery was a complete surprise.” The researchers discovered an abundance of phytoplankton—microscopic life that forms the base of the marine food chain. Phytoplankton require sunlight for photosynthesis, just like plants. And sunlight has a tough time penetrating thick sea ice. But that thick sea ice is changing. Not only are warmer temperatures thinning the ice, but as the ice melts in summer, it forms pools of water that act like transient skylights and magnifying lenses. These pools focus sunlight through the ice and into the ocean, where currents steer nutrient-rich deep waters up toward the surface. Phytoplankton under the ice evolved to take advantage of this narrow window of light and nutrients. The phytoplankton displayed extreme activity, doubling in number more than once a day. Blooms in open waters grow at a much slower rate, doubling in two to three days. These growth rates are among the highest ever measured for polar waters. Researchers estimate that phytoplankton production under the ice in parts of the Arctic could be up to 10 times higher than in the nearby open ocean. The phytoplankton bloom discovered by Arrigo and his colleagues in the Chukchi Sea (just north of Alaska) extends tens of meters deep in spots and about 100 kilometers (62 miles) across. “At this point we don’t know whether these rich phytoplankton blooms have been happening in the Arctic for a long time and we just haven’t observed them before,” Arrigo says. 
“These blooms could become more widespread in the future, however, if the Arctic sea ice cover continues to thin.” The discovery of these previously unknown under-ice blooms could have serious implications for the broader Arctic ecosystem, including migratory species such as whales and birds. Phytoplankton are eaten by small ocean animals, which are eaten by larger fish and ocean animals. “It could make it harder and harder for migratory species to time their life cycles to be in the Arctic when the bloom is at its peak,” Arrigo says. “If their food supply is coming earlier, they might be missing the boat.” The research is published this week in Science.
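The growth-rate comparison above is simple exponential arithmetic: a population that doubles every t_double days grows by a factor of 2^(t / t_double) after t days. Here is a minimal Python sketch of that comparison; the doubling times are the figures quoted in the article (under-ice blooms doubling at least once a day, open-water blooms every two to three days), and the one-week horizon is an arbitrary choice for illustration.

```python
# Exponential growth from a doubling time: N(t) = N0 * 2 ** (t / t_double).
# Doubling times below are the article's quoted figures, not new measurements.

def growth_factor(days: float, doubling_time_days: float) -> float:
    """Fold increase of a population after `days` days, given its doubling time."""
    return 2 ** (days / doubling_time_days)

# Compare the two regimes over one week:
under_ice = growth_factor(7, 1.0)    # doubling once per day -> 2**7 = 128-fold
open_water = growth_factor(7, 2.5)   # doubling every ~2.5 days -> ~7-fold

print(f"under-ice bloom: {under_ice:.0f}-fold growth in a week")
print(f"open-water bloom: {open_water:.1f}-fold growth in a week")
```

The point of the sketch is that even a modest difference in doubling time compounds quickly: over a single week the quoted rates separate the two regimes by more than an order of magnitude.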
A legend in her own time both for her brilliant poetry and for her resistance to oppression, Anna Akhmatova—denounced by the Soviet regime for her “eroticism, mysticism, and political indifference”—is one of the greatest Russian poets of the twentieth century. Before the revolution, Akhmatova was a wildly popular young poet who lived a bohemian life. She was one of the leaders of a movement of poets whose ideal was “beautiful clarity”—in her deeply personal work, themes of love and mourning are conveyed with passionate intensity and economy, her voice by turns tender and fierce. A vocal critic of Stalinism, she saw her work banned for many years and was expelled from the Writers’ Union—condemned as “half nun, half harlot.” Despite this censorship, her reputation continued to flourish underground, and she is still among Russia’s most beloved poets. Here are poems from all her major works—including the magnificent “Requiem” commemorating the victims of Stalin’s terror—and some that have been newly translated for this edition.
© 2005-2012 American Society of Clinical Oncology (ASCO). All rights reserved worldwide. ON THIS PAGE: You will find information about how many people learn they have this type of cancer each year and some general survival information. Remember, survival rates depend on several factors. This year, an estimated 7,060 adults (2,630 men and 4,430 women) in the United States will be diagnosed with anal cancer. It is estimated that 880 deaths (330 men and 550 women) from this disease will occur this year. The five-year survival rate (percentage of people who survive at least five years after the cancer is detected, excluding those who die from other diseases) for early, localized anal cancer is between 53% and 71%, depending on the type of cancer (see Overview for details). The five-year survival rate for people with tumors that have spread to the area around the anus is 24% to 48%. If the cancer has spread to more distant body parts, the five-year survival rate is between 7% and 21%. Survival rate may be lower for people who have human immunodeficiency virus (HIV), the virus that causes acquired immune deficiency syndrome (AIDS). Cancer survival statistics should be interpreted with caution. These estimates are based on data from thousands of people with this type of cancer in the United States each year, but the actual risk for a particular individual may differ. It is not possible to tell a person how long he or she will live with anal cancer. Because the survival statistics are measured in five-year intervals, they may not represent advances made in the treatment or diagnosis of this cancer. Learn more about understanding statistics. Statistics adapted from the American Cancer Society's (ACS) publication, Cancer Facts & Figures 2013, and the ACS website.
This kennel was in a barn about 60 feet long and 30 feet wide with metal walls, a peaked metal roof, and concrete flooring. There were 12 dog pens in a row, each with inside and outside cages connected by plastic doggy doors framed in wood. The inside pens were about 3.5 feet long, two feet wide, and three feet high, had wooden beams along the bottoms of the pens, rusting, thin-gauge wire for walls, and no roofs. Each dog pen contained a single adult Pug or Miniature Pinscher, each of which had a bronze tag hanging from a collar around its neck. Each inside cage contained a dog house, about 1.5 feet long, wide, and high, made of untreated wood. The corners of the dog houses were chewed and broken in several places (3.1(c)(1)- Surfaces). There was straw and wood chips on the flooring of the cages and more than 24 hours’ accumulation of feces in each inside pen (3.11(a)-Cleaning of primary enclosures). Each cage had a metal or plastic food dish placed on the floor (3.9(b)-Feeding). The water dishes contained ice (3.2(a)-Heating, cooling and temperature) (3.10-Watering). A wooden board against the inside of the barn wall and about four feet above the flooring of one cage had a white, plastic five-gallon bucket on top of it with a wooden board about a foot wide and long on top of it (3.1(b)-Condition and site).
As ancient Rome lived in splendor off the tribute raised in the provinces, so in modern America the political capitals are prospering economically at the expense of the rest of the nation. The productive, private citizens in outlying regions of our nation and states are financially burdened to pay for a parasite public economy of lawmakers, lobbyists, contractors, and bureaucrats in the political centers. Several statistics support those claims:
- Average annual pay of workers in the District of Columbia exceeds the national average by 48 percent. Nationwide, income per person in counties with state capitals tends to be nearly 10 percent higher than in other regions.
- The income differential between Washington and the rest of the nation rose from 25.9 percent in 1980 to 32.1 percent in 1990. The poorest states have capitals with per capita income levels about 17 percent above state averages; the richest states show virtually no income differential, which suggests that government income redistribution may contribute to poverty rather than enhance wealth.
- Although unemployment in the Washington, D.C., metropolitan area has been increasing, it remains almost 30 percent below the national average. Unemployment rates in counties containing state capitals average about 20 percent lower than in other counties.
That evidence is consistent with the hypothesis that those who make up the “parasite economy” have been successful at improving their economic well-being at the expense of those working in the productive private economy.
Evaluation of Meningitis Surveillance Before Introduction of Serogroup A Meningococcal Conjugate Vaccine — Burkina Faso and Mali Each year, 450 million persons in a region of sub-Saharan Africa known as the "meningitis belt" are at risk for death and disability from epidemic meningitis caused by serogroup A Neisseria meningitidis (1). In 2009, the first serogroup A meningococcal conjugate vaccine (PsA-TT) developed solely for Africa (MenAfriVac, Serum Institute of India, Ltd.), was licensed for persons aged 1–29 years. During 2010–2011, the vaccine was introduced in the hyperendemic countries of Burkina Faso, Mali, and Niger through mass campaigns. Strong meningitis surveillance is critical for evaluating the impact of PsA-TT because it was licensed based on safety and immunogenicity data without field effectiveness trials. Case-based surveillance, which includes the collection of epidemiologic and laboratory data on individual cases year-round, is recommended for countries that aim to evaluate the vaccine's impact. A key component of case-based surveillance is expansion of laboratory confirmation to include every case of bacterial meningitis because multiple meningococcal serogroups and different pathogens such as Haemophilus influenzae type b and Streptococcus pneumoniae cause meningitis that is clinically indistinguishable from that caused by serogroup A Neisseria meningitidis. Before the introduction of PsA-TT, evaluations of the existing meningitis surveillance in Burkina Faso and Mali were conducted to assess the capacity for case-based surveillance. This report describes the results of those evaluations, which found that surveillance infrastructures were strong but opportunities existed for improving data management, handling of specimens shipped to reference laboratories, and laboratory capacity for confirming cases. 
These findings underscore the need to evaluate surveillance before vaccine introduction so that activities to strengthen surveillance are tailored to a country's needs and capacities. Before introduction of the meningococcal conjugate vaccine, meningitis surveillance in Burkina Faso and Mali included aggregate case counts only, enhanced by cerebrospinal fluid (CSF) collection from a subset of cases during the epidemic season to guide epidemic preparedness and choice of polysaccharide vaccine. In collaboration with the West Africa Inter-Country Support Team of the World Health Organization's Africa Regional Office, CDC evaluated 2007 meningitis surveillance data from Burkina Faso during 2007–2008 and from Mali in 2010. Surveillance was evaluated according to CDC guidelines (2). Each country's surveillance system was evaluated for compliance with standard operating procedures for enhanced meningitis surveillance and case-based surveillance in Africa developed by the World Health Organization (3–5). Meningitis surveillance data were analyzed, stakeholders were consulted, and surveillance databases, reports, and registers were examined. Data management was evaluated, along with data completeness, reporting completeness, and representativeness; specimen collection and transport; and laboratory confirmation. In Burkina Faso in 2007, all 55 districts reported a total of 25,695 meningitis cases to the national surveillance office. Cases were reported weekly in aggregate, and reporting was supplemented with line lists of case-level data during the epidemic season. Multiple databases rather than a single database were used, and unique identifiers were not used to link epidemiologic and laboratory data; instead, hand-matching (i.e., by name, age, and residence) was attempted. Completeness of case-level data was greater for demographic information (98%) than for vaccination status (81%). 
Reporting completeness of the surveillance system, defined as the 10,614 line-listed cases divided by the 25,695 total cases reported in aggregate, was 41%. Of the line-listed cases, 9,824 (93%) had CSF specimens collected. Population representativeness of surveillance data based on the proportion of districts submitting line lists and CSF specimens was 91% (50/55) and 85% (47/55), respectively; 4% (443/10,614) of line-listed cases and 4% (423/9,824) of specimens were from the Burkina Faso capital, Ouagadougou. The proportion of all reported cases with a specimen reaching a national reference laboratory was 11% (2,898/25,695) for cases reported in aggregate and 27% (2,898/10,614) for line-listed cases. CSF macroscopic examination, Gram stain, and white blood cell count were performed routinely at district laboratories; results of these tests were suggestive of bacterial meningitis* in 35% (3,428/9,824) of specimens. Five reference laboratories in Burkina Faso performed culture or latex agglutination, and one of these performed conventional polymerase chain reaction (PCR) for pathogen confirmation. The proportion of specimens reaching a national reference laboratory that were confirmed as bacterial meningitis† was 24% (685/2,898). In Mali in 2007, all 59 districts reported a total of 978 meningitis cases to the national surveillance office. Cases were reported weekly in aggregate, but reporting was not supplemented with line-listed cases during the epidemic season. Multiple databases rather than a single database were used, and unique identifiers were not used to link epidemiologic and laboratory data. Case-level data were recorded for the 514 specimens that reached the national reference laboratory, but these data were not systematically entered into any database. Completeness of these case-level data was greater for demographic information and confirmatory laboratory results than for vaccination status and outcome (95% and 100% versus 11% and 30%). 
In Mali, the total number of specimens collected was unknown and line lists were not available; therefore, measures of reporting completeness could not be evaluated. Population representativeness of surveillance data based on proportion of districts submitting CSF specimens was 61% (36/59); 63% (324/514) of specimens received at the reference laboratory were from the Mali capital, Bamako. The proportion of reported cases with a specimen reaching the national reference laboratory was 53% (514/978). The median interval between specimen collection and receipt at a reference laboratory was 2 days (range: <1 to 57 days). CSF macroscopic examination, Gram stain, and white blood cell counts were performed at district laboratories, but those results were not routinely collected nationally; CSF findings from retesting at the national reference laboratory were collected. Results of these tests suggested bacterial meningitis in 39% (198/514) of specimens. At the one reference laboratory that performed culture and latex agglutination, the proportion of specimens that were confirmed as bacterial meningitis was 21% (106/514). Mamoudou Djingarey, MD, Denis Kandolo, MD, Clement Lingani, MSc, Fabien Diomandé, MD, World Health Organization West Africa Inter-Country Support Team, Burkina Faso. Isaïe Medah, MD, Ludovic Kambou, MD, Felix Tarbangdo, Ministère de la Santé, Burkina Faso. Seydou Diarra, MD, Kandioura Touré, MD, Flabou Bougoudogo, PhD, Ministère de la Santé, Mali. Sema Mandal, MD, Ryan T. Novak, PhD, Amanda C. Cohn, MD, Thomas A. Clark, MD, Nancy E. Messonnier, MD, Div of Bacterial Diseases, National Center for Immunizations and Respiratory Diseases, CDC. Corresponding contributor: Sema Mandal, firstname.lastname@example.org, 404-639-3158. High-quality surveillance with laboratory confirmation is necessary to evaluate vaccine effectiveness, inform vaccination strategies to maintain population immunity, and monitor for changes in disease epidemiology.
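As a sanity check, the headline percentages reported above for both countries can be recomputed directly from the raw counts given in the text (a minimal sketch; variable names are illustrative, and percentages are rounded to the nearest whole percent as in the report):

```python
def pct(numerator, denominator):
    """Percentage rounded to the nearest whole percent."""
    return round(100 * numerator / denominator)

# Burkina Faso, 2007 (counts from the report)
bf_aggregate   = 25_695  # cases reported weekly in aggregate
bf_line_listed = 10_614  # cases with case-level (line-list) data
bf_reached_lab = 2_898   # specimens reaching a national reference laboratory
bf_confirmed   = 685     # specimens confirmed as bacterial meningitis

print(pct(bf_line_listed, bf_aggregate))    # reporting completeness: 41
print(pct(bf_reached_lab, bf_aggregate))    # specimen reached lab, vs. aggregate: 11
print(pct(bf_reached_lab, bf_line_listed))  # specimen reached lab, vs. line list: 27
print(pct(bf_confirmed, bf_reached_lab))    # laboratory confirmation: 24

# Mali, 2007 (counts from the report)
ml_reported  = 978  # cases reported in aggregate
ml_at_lab    = 514  # specimens received at the national reference laboratory
ml_confirmed = 106  # specimens confirmed as bacterial meningitis

print(pct(ml_at_lab, ml_reported))   # specimen reached lab: 53
print(pct(ml_confirmed, ml_at_lab))  # laboratory confirmation: 21
```

Each computed value matches the corresponding percentage quoted in the surveillance evaluation.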
In this evaluation of meningitis surveillance in Burkina Faso and Mali, good organizational structures, capable staff, and clear protocols for collecting both aggregate and case-level data and collecting CSF specimens were found. However, a major gap was that case-level data and specimens often were not sent to the national level for analysis. Harmonized data management tools and linking case identifiers were lacking. Moreover, the ability of the reference laboratories to confirm cases was limited by the low number of submitted specimens, along with delayed specimen transport, and inadequate capacity for testing. Based on the findings from the evaluation, recommendations were made to Burkina Faso and Mali to improve data management, epidemiology, and laboratory capacity. Since March 2008 in Burkina Faso and December 2010 in Mali, these surveillance domains have been strengthened through baseline assessments, technology transfer, training, and mentorship. This is the model for meningitis surveillance and capacity-building in the meningitis belt (Figure). Surveillance needs assessments were conducted and pilot projects for case-based surveillance were implemented in selected districts, which were subsequently scaled up to the appropriate level in each country. To improve case-level data reporting to the national level, district visits by supervision teams focused on introducing data management tools that included deploying a standardized surveillance database, introducing systemwide linking using unique case identifiers, and conducting training for surveillance officers. Additionally, national level surveillance epidemiologists and data managers were mentored in collating, analyzing, and interpreting data. To improve specimen transport, district visits focused on reconnecting the network and conducted training on appropriate transport conditions. 
To improve laboratory capacity for case confirmation, real-time PCR§ and external quality-control programs were established at reference laboratories. Preliminary data from Burkina Faso for 2011 show improvements in surveillance. Compared with 2007, in 2011 the proportion of line-listed cases doubled from 41% to 88%, and the proportion of all reported cases with a specimen reaching a reference laboratory increased from 11% to 85%. With implementation of real-time PCR in four national reference laboratories, causative pathogen confirmation increased from 24% to 41%. In Mali, most surveillance-strengthening activities are still in progress, but compared with 2007, early 2012 indicators are encouraging. Two of the first districts to introduce PsA-TT now send electronic line-list data to the national level, the proportion of districts submitting specimens has increased from 61% to 80%, and PCR has been introduced at the national reference laboratory (conventional PCR in 2009, real-time PCR in 2011). In Burkina Faso, high-quality surveillance data revealed the impact of PsA-TT 1 year after it was introduced, with significant decreases in the incidence of all bacterial meningitis, serogroup A–specific meningococcal disease, and bacterial meningitis mortality, with no outbreaks identified (6). In Mali, no meningitis outbreaks have occurred in 2012, and preliminary surveillance data have not identified serogroup A disease (7). Burkina Faso and Mali differed in how they built on existing infrastructure to establish case-based surveillance. Depending on local capacity, populations at risk, disease incidence, and geographic distribution, subnational rather than nationwide population-based case-based surveillance might be appropriate. For example, although Burkina Faso and Mali are neighbors with similar sized populations (15–16 million) and a history of meningitis epidemics, disease epidemiology over the past decade has differed substantially. 
The incidence of meningitis in Burkina Faso is one of the highest in Africa, with a mean annual incidence of 90 per 100,000 during 2005–2009. The last major epidemic was in 2007, with 25,695 cases. Mali has a much lower mean annual incidence, 7 per 100,000 during 2005–2009, and the last major epidemic was in 1997, with 11,228 cases. Unlike Burkina Faso, which lies entirely within the meningitis belt, Mali's northern, sparsely populated desert regions do not. Therefore, Mali concentrated its surveillance-strengthening efforts on the most populous districts in the meningitis belt to achieve a high proportion of laboratory-confirmed cases. The experience of case-based surveillance in Burkina Faso and Mali has shown that one size might not fit all, but key factors for achieving surveillance objectives are conducting baseline surveillance evaluations, placing a high priority on developing surveillance expertise (e.g., through staff training and development), and building on existing infrastructure. The public health goal of introducing a serogroup A meningococcal conjugate vaccine is to eliminate meningitis epidemics in sub-Saharan Africa.¶ Strong case-based surveillance with pathogen-specific laboratory confirmation is essential to enable accurate assessments of vaccine effectiveness, vaccine failures, duration of protection, and herd immunity. Assessment of all of these factors will help define a national vaccination strategy to maintain population immunity so that epidemics do not recur. Such surveillance also enables identification of susceptible populations that might emerge as a result of low vaccine coverage or loss of vaccine potency during vaccine storage and handling. Additionally, case-based surveillance is essential to detect other meningococcal serogroups and other meningitis pathogens with epidemic potential.
Finally, case-based meningitis surveillance can be of even greater value in the many countries that have introduced Haemophilus influenzae type b vaccines and in those that plan to introduce pneumococcal conjugate vaccines, providing necessary information on vaccine effectiveness and changes in the epidemiology of meningitis following implementation of the vaccination programs.
- Lapeyssonnie L. Cerebrospinal meningitis in Africa. Bull World Health Organ 1963;28(Suppl).
- CDC. Updated guidelines for evaluating public health surveillance systems: recommendations from the Guidelines Working Group. MMWR 2001;50(No. RR-13).
- World Health Organization. Control of epidemic meningococcal disease. WHO practical guidelines. 2nd ed. Geneva, Switzerland: World Health Organization; 1998.
- World Health Organization Regional Office for Africa. Standard operating procedures for enhanced meningitis surveillance in Africa. Geneva, Switzerland: World Health Organization; 2005.
- World Health Organization Regional Office for Africa. Guide générique pour la surveillance cas par cas des méningites bactériennes dans la région Africaine de l'OMS [Generic guide for case-based surveillance of bacterial meningitis in the WHO African region]. Geneva, Switzerland: World Health Organization; 2009.
- Novak RT, Kambou JL, Diomande FV, et al. Serogroup A meningococcal conjugate vaccination in Burkina Faso: analysis of national surveillance data. Lancet Infect Dis 2012;12:757–64.
- Mandal S, Diarra S, Touré KT, et al. Meningitis surveillance in Mali: monitoring the elimination of epidemic meningitis. Presented at the 2012 International Conference on Emerging Infectious Diseases, March 13, 2012, Atlanta, GA.
* Suggestive of bacterial meningitis: any suspected case with gram-negative cocci, gram-negative rods, or gram-positive cocci in cerebrospinal fluid (CSF) by direct microscopic examination; or a leukocyte count of >10 per µL; or turbid or purulent macroscopic appearance.
† Confirmed bacterial meningitis: isolation or detection in CSF by latex agglutination or polymerase chain reaction of Neisseria meningitidis, Streptococcus pneumoniae, Haemophilus influenzae, or other bacterial pathogens known to cause meningitis. § Advantages of real-time over conventional PCR include the following: 1) in real-time PCR, amplification products are measured quantitatively each amplification cycle by measuring the fluorescence of a dye, whereas in conventional PCR, amplification products are detected only after the last amplification cycle when the products are separated by gel electrophoresis and stained; 2) real-time PCR is more sensitive than conventional PCR; and 3) real-time PCR amplification is performed in a closed system, whereas amplification in conventional PCR is performed in an open system, allowing a greater chance of contamination. ¶ Additional information available at http://www.meningvax.org/mission.php. What is already known on this topic? A new serogroup A meningococcal conjugate vaccine (PsA-TT) was introduced in the African meningitis belt with the goal of eliminating epidemic meningitis as a regional public health concern. Strong case-based surveillance with laboratory confirmation is essential in early-implementing countries to evaluate vaccine impact because the vaccine was licensed based on safety and immunogenicity data without field effectiveness trials. What is added by this report? Surveillance evaluations conducted in Burkina Faso and Mali before introduction of the vaccine revealed limitations in data quality and management, specimen collection and transport, and laboratory confirmation. Building on existing infrastructure and expertise, surveillance-strengthening activities, such as technology transfer, training, and mentorship, demonstrated measurable improvements. 
Compared with 2007, causative pathogen confirmation during 2011–2012 increased from 24% to 41% in Burkina Faso, and the proportion of districts submitting specimens increased from 61% to 80% in Mali. What are the implications for public health practice? Countries implementing PsA-TT should evaluate their existing meningitis surveillance before vaccine introduction and create a surveillance system that is population-based at the national or subnational level and that generates case-level data appropriate to their needs and capacity. Abbreviations: QA = quality assurance; QC = quality control; PCR = real-time polymerase chain reaction; ID = identifier. Alternate Text: The figure above shows the model for meningitis surveillance and capacity-building used in the "meningitis belt" in Africa. Based on the evaluation findings, recommendations were made to Burkina Faso and Mali to improve epidemiologic and laboratory capacity. Since March 2008 in Burkina Faso and December 2010 in Mali, surveillance has been strengthened through baseline assessments, technology transfer, training, and mentorship.
Gonorrhea Laboratory Information Identification of N. gonorrhoeae and Related Species The genus Neisseria contains a number of species which are normal flora and pathogens of humans and animals. Of these species, the species of human origin--and particularly the pathogenic species, N. gonorrhoeae and N. meningitidis--have been studied extensively in an effort to control the infections they cause. Gonorrhea, caused by N. gonorrhoeae, is one of the most frequently reported infectious diseases in the United States and worldwide. Rapid tests have been developed to identify and distinguish N. gonorrhoeae from the commensal Neisseria and related species which are normal flora of the oro- and nasopharynx. Because many rapid tests for the identification of N. gonorrhoeae test for a limited number of characteristics which may be shared by one or more nonpathogenic Neisseria spp., a non-gonococcal, commensal Neisseria species may be incorrectly identified as N. gonorrhoeae. Such incorrect identifications may result in serious social and medicolegal consequences for patients and their families. Thus, the primary purpose of these pages is to provide information relating to the accurate identification of N. gonorrhoeae. Descriptions of species in these pages will, for the moment, be limited to those of human origin. Information relating to the identification of species of animal origin will include a table of differential characteristics which should be consulted when a gram negative diplococcus is not readily identifiable as a human Neisseria species, e.g., an isolate from a wound inflicted by an animal bite. In addition, reference information on the taxonomy, host range, pathogenicity, natural habitat and prevalence of the Neisseria species is included.
Mobility/Stability Statistics for 2011-2012
Definitions of Terms Used
Membership: The total (cumulative) number of students in membership at any time during the academic year.
Instructional Program Service Type (IPST): Services provided by schools and/or districts for students identified as belonging to one or more of the categories below.
Students with Disabilities: Students who have been formally identified as having physical or health conditions that may have a significant impact on the student’s ability to learn and therefore warrant placing the student on an Individual Educational Program (IEP).
Limited English Proficient: This designation encompasses all students identified as either non-English proficient or limited English proficient. Non-English proficient is defined as a student who speaks a language other than English and does not comprehend, speak, read, or write English. Limited English proficient is defined as a student who comprehends, speaks, reads, or writes some English, but whose predominant comprehension or speech is in a language other than English. Districts must provide language services to all limited English proficient students.
Free or Reduced-Price Lunch: The student qualifies for either the free or reduced-price lunch program. The Federal National School Lunch Act establishes eligibility for the reduced-price lunch program for families with income up to 185 percent of the federal poverty level (in 2009, this amount was $39,220 for a family of four). Families with income up to 130 percent of the federal poverty level qualify for the free lunch program (in 2009, this amount was $27,560 for a family of four).
Migrant: Students enrolled in a specially designed program for children who are, or whose parent or spouse is, a migratory agricultural worker, and who, in the preceding 36 months, have moved from one school district to another in order to obtain, or to accompany such parent or spouse in order to obtain, temporary or seasonal employment in agricultural work.
At Risk: Students who are identified by the school as failing, or most at risk of failing, to meet the State’s challenging student academic achievement standards on the basis of multiple, educationally related, objective criteria established by the school.
Homeless: According to the McKinney Act, a “homeless individual” lacks a fixed, regular, and adequate nighttime residence.
Gifted and Talented: Students who have been formally identified, using district-wide procedures aligned with CDE guidelines, as being endowed with a high degree of exceptionality or potential in mental ability, academics, creativity, or talents (visual, performing, or musical arts, or leadership).
Reporting on the State of the North American Environment The North American Agreement on Environmental Cooperation obliges the Secretariat of the Commission for Environmental Cooperation to “periodically address the state of the environment in the territories of the Parties.” To meet this obligation, the Secretariat has developed this report—The North American Mosaic: An Overview of Key Environmental Issues—with the support of environmental reporting experts from the governments of Canada, Mexico and the United States. This report describes current environmental conditions and trends across North America. The breadth and diversity of the subject are astounding: from tiny invasive zebra mussels to global greenhouse gases measured by the teragram; from the last remaining vaquita porpoises to vast expanses of boreal forests and marine ecosystems; from invisible molecules of toxic chemicals to the all-too-visible smog and haze that blanket our cities from time to time. As a mosaic of existing information, this report prompts us to consider the following questions:
- What are the central environmental challenges confronting North America?
- What are the greatest priorities for cooperative action among our three countries to address these environmental challenges?
- How can we measure our progress and create effective feedback mechanisms?
- How can we enhance the relevance of trinational cooperation through the Commission for Environmental Cooperation?
The Commission for Environmental Cooperation welcomes your feedback <email@example.com>. The 14 environmental issue papers may be downloaded individually as PDF files:
How is anencephaly diagnosed? The diagnosis of anencephaly may be made during pregnancy or at birth by physical examination. Your baby's head might appear flattened due to the abnormal brain development and missing bones of the skull. Diagnostic tests performed during pregnancy to evaluate your baby for anencephaly include:
- alpha-fetoprotein - a protein produced by the fetus that is excreted into the amniotic fluid
- amniocentesis - a test performed to determine chromosomal and genetic disorders and certain birth defects
- ultrasound - a diagnostic imaging technique that uses high-frequency sound waves and a computer to create images of blood vessels, tissues and organs
- blood tests
March 2, 2011 Contact: Ashley Moore The Children's Hospital of Philadelphia Office: (267) 426-6071; Mobile: (267) 294-9134 Performing delicate surgery in the womb, months before birth, can substantially improve outcomes for children with a common, disabling birth defect of the spine. Experts at The Children’s Hospital of Philadelphia (CHOP) co-led a new landmark study showing that fetal surgery for spina bifida greatly reduces the need to divert fluid from the brain, improves mobility and improves the chances that a child will be able to walk independently. Spina bifida is the most common birth defect of the central nervous system, affecting about 1,500 babies born each year in the United States. “This is the first time in history that we can offer real hope to parents who receive a prenatal diagnosis of spina bifida,” said N. Scott Adzick, MD, Surgeon-in-Chief at The Children’s Hospital of Philadelphia, director of Children’s Hospital’s Center for Fetal Diagnosis and Treatment, and lead author of a federally sponsored study reporting results of a clinical trial of fetal surgery for myelomeningocele, the most severe form of spina bifida. Adzick, who led a team at CHOP that pioneered fetal surgeries for this condition and set the stage for this clinical trial, added, “This is not a cure, but this trial demonstrates scientifically that we can now offer fetal surgery as a standard of care for spina bifida.” Myelomeningocele is devastating, occurring when part of the spinal column does not close around the spinal cord, failing to protect it during stages of fetal development. Long-term survivors of the condition frequently suffer lifelong disabilities, including paralysis, bladder and bowel problems, hydrocephalus (excessive fluid pressure in the brain), and cognitive impairments. 
Fetal surgery researchers have now reported long-awaited results from an unprecedented clinical trial that compared outcomes of prenatal, or fetal, surgery versus postnatal surgery, the conventional surgery for this disabling neurological condition. The study appears today in an Online First article in the New England Journal of Medicine. Two and a half years after fetal surgery, children with spina bifida were better able to walk, when compared to children who received surgery shortly after birth. Patients who received fetal surgery also scored better on tests of motor function. Within a year after fetal surgery, they were less likely to need a shunt, a surgically implanted tube that drains fluid from the brain. Three fetal surgery centers participated in the Management of Myelomeningocele Study (MOMS) trial—at The Children’s Hospital of Philadelphia, Vanderbilt University, and the University of California San Francisco. The biostatistics center at George Washington University (GWU) served as the coordinating center and oversaw data collection and analysis, while the Eunice Kennedy Shriver National Institute of Child Health and Human Development sponsored the trial. The MOMS study was a prospective, randomized clinical trial. One sign of its prominence is that all U.S. fetal surgery centers not participating in the trial agreed to perform no fetal surgery for spina bifida during the 7-year duration of the trial. The trial goal was to enroll 200 patients, but the NIH ended the trial in December 2010, after 183 surgeries had occurred, based on clear evidence of efficacy for the prenatal procedure. Throughout the trial, women whose fetuses had been diagnosed with spina bifida contacted the trial’s coordinating center at GWU if they chose to volunteer for the study. That center randomly assigned half of the eligible women to receive prenatal surgery, the other half to receive postnatal surgery. 
Postnatal surgery entailed delivery by planned cesarean section at 37 weeks gestation, after which the surgical team repaired the opening in the newborn’s spine, usually within 24 hours after birth. In prenatal surgery, done between 19 and 26 weeks’ gestation, the surgical team made incisions in the mother and her uterus, then repaired the spina bifida lesion while the fetus was in the womb. Mothers in this group stayed near the center for ongoing monitoring, then underwent delivery by planned cesarean section at 37 weeks, or earlier, because many of the babies in the prenatal surgery group arrived prematurely. The complex requirements of this fetal surgery require a highly sophisticated multidisciplinary team. The CHOP program includes specialists in fetal surgery, neurosurgery, obstetrics, maternal-fetal medicine, cardiology, anesthesiology and critical care, neonatology, and nursing. In both study groups, surgeons used the same technique to cover the myelomeningocele with multiple layers of the fetus’s own tissue. “This lesion leaves the spinal cord exposed, so it’s essential to protect this tissue from neurological injury,” said study co-author Leslie N. Sutton, MD, Chief of Neurosurgery at The Children’s Hospital of Philadelphia. Previous research had established that in myelomeningocele, amniotic fluid and other features of the intrauterine environment damage the exposed spinal cord. Starting two decades ago, pioneering animal studies by Adzick and collaborators such as Martin Meuli, MD (now Surgeon-in-Chief at Zurich Children’s Hospital in Switzerland) showed that the timing of the myelomeningocele repair was important, a finding borne out by clinical experience in fetal surgery done before the MOMS trial. “The damage to the spinal cord and nerves is progressive during pregnancy, so there’s a rationale for performing the repair by the 26th week of gestation, rather than after birth,” said Sutton. 
The abnormal spinal development underlying myelomeningocele triggers a cascade of disabling consequences, including weakness or paralysis below the level of the defect on the spinal column. In addition, leakage of cerebrospinal fluid through the open spina bifida defect results in herniation of the brainstem down into the spinal canal in the neck—a condition called hindbrain herniation. Hindbrain herniation obstructs the flow of cerebrospinal fluid within the brain, leading to hydrocephalus, a life-threatening buildup of fluid that can injure the developing brain. Surgeons must implant a shunt, a hollow tube that drains fluid from the brain into the child’s abdominal cavity. However, shunts may become infected or blocked, often requiring a series of replacements over a patient’s lifetime. The current study reports data on 158 patients who were followed at least one year after surgery. Clinicians who were independent of the surgical teams and blinded (not informed which of the two surgeries a given child received) evaluated the children from the study at one year of age and again at age 30 months. “The mothers, children and families who participated in this MOMS trial, and who are continuing to be available for follow-up studies, have made an important contribution to our knowledge and treatment of spina bifida,” said Lori J. Howell, RN, MS, Executive Director of the CFDT, and a study co-author. “Because of their involvement, we are better able to accurately counsel other families about what it will mean to have a child with spina bifida—and to offer a rigorously tested, innovative prenatal surgical treatment.” Although the trial results mark a milestone in spina bifida treatment, not every woman carrying a fetus with spina bifida may be a suitable candidate for fetal surgery. For example, severely obese women were not included in the current study because they have a higher risk of surgical complications. 
Adzick noted that further research will continue to refine surgical techniques and improve methods to reduce the risks to mothers and fetuses. In the meantime, concluded Adzick, “Both the experimental outcomes of animal studies and the results of the MOMS trial suggest that prenatal surgery for myelomeningocele stops the exposure of the developing spinal cord to amniotic fluid and thereby averts further neurological damage in utero. In addition, by stopping the leak of cerebrospinal fluid from the myelomeningocele defect, prenatal surgery reverses hindbrain herniation in utero. We believe this in turn mitigates the development of hydrocephalus and the need for shunting after birth.” Adzick added that this demonstrated success for fetal surgery may broaden its application to other birth defects, many of which are rarer but more uniformly lethal than spina bifida. Children’s Hospital’s comprehensive center already offers fetal surgery for selected life-threatening fetal conditions. The Children’s Hospital of Philadelphia began performing fetal surgery for spina bifida in 1998, three years after Adzick launched the Center for Fetal Diagnosis and Treatment. The Center’s reports of neurological improvements in spina bifida, based on 58 fetal surgeries through 2003, helped lay the groundwork for the MOMS trial. For Adzick, who has been working to advance fetal surgery since performing preclinical studies in the early 1980s, “It’s very gratifying to take this idea forward over 30 years, starting with a concept and now offering hope—to families, mothers and the children themselves.” This trial was sponsored by the Eunice Kennedy Shriver National Institute of Child Health and Human Development. Additional funding for spina bifida research at the CFDT at The Children’s Hospital of Philadelphia was provided by Katherine and Michael Mulligan, the March of Dimes Foundation and the Spina Bifida Association. 
“A Randomized Trial of Prenatal versus Postnatal Repair of Myelomeningocele,” New England Journal of Medicine, Online First, Feb. 9, 2011 (to appear March 24, 2011 in print edition).
March 26, 2012 Contact: John Ascenzi, Children’s Hospital of Philadelphia, Phone: (267) 426-6055, Ascenzi@email.chop.edu New biological research reveals how an invading virus hijacks a cell’s workings by imitating a signaling marker to defeat the body’s defenses. By manipulating cell signals, the virus destroys a defensive protein designed to inhibit it. This finding, from studies in human cell cultures, may represent a broader targeting strategy used by other viruses, and may lay the scientific groundwork for developing more effective treatments for infectious diseases. “Learning details of how cells respond to viruses helps us to understand key cellular machinery better,” said study leader Matthew D. Weitzman, Ph.D., of the Center for Cellular and Molecular Therapeutics at The Children’s Hospital of Philadelphia. “This study tells us how a virus overcomes intrinsic host defenses. In this case the virus mimics signals used during normal DNA repair mechanisms.” The study team, formerly based at the Salk Institute for Biological Studies in La Jolla, Calif., published their current findings online March 8 in Molecular Cell. Biologists have long known that viruses hijack cellular processes to replicate themselves, while host cells have evolved intrinsic defense systems to resist viral invasion. To replicate, viruses must deliver their own DNA into a cell’s nucleus, so a viral infection entails a conflict between two genomes—the DNA of the host cell versus the foreign DNA of the virus. Viruses mount their attack by interacting with specific cell proteins as a way of penetrating the cell’s defenses. “In this study, we asked how the herpes simplex virus finds the specific proteins that it interacts with,” said Weitzman. 
“By describing the mechanism of this particular interaction between a virus and a cell protein, we have pinpointed key regulators of a cell’s processes, and shed light on how a cell regulates its defenses.” This laboratory study focused on herpes simplex virus type-1 (HSV-1), a common human virus that results in recurrent infections alternating with inactive periods. Like other viruses, HSV-1 is known to manipulate cellular processes in order to infect cells, but the specific mechanisms by which it acts on the DNA repair pathway were previously unknown. Weitzman’s study team was studying a viral protein called ICP0 that overcomes host defenses by targeting cellular proteins for destruction. They found that ICP0 exploits phosphorylation, a chemical mark that is often used in cells to promote interactions between proteins, especially as part of the cellular signaling response to DNA damage. In HSV-1 infection, the phosphorylation signal on ICP0 attracts a cellular DNA damage response protein, RNF8, which binds to the false signaling marker and is then degraded. Because RNF8 normally inhibits viral replication, its destruction leaves the cell vulnerable to HSV-1 infection, as the virus takes over the cell’s machinery. The researchers also found that ICP0 exploits the same phosphorylation signal to bind to other cellular proteins in addition to RNF8, a hint that it may play a broader role in defeating antiviral defenses and manipulating cellular machinery. Weitzman will continue to investigate HSV-1 infection in neurons and in animal models. He also plans to extend his research into other viruses, which may act on different pathways than HSV-1 does. “Ultimately,” he added, “better knowledge of molecular mechanisms in infection may suggest strategies to interrupt the viral life cycle and treat infections.” The National Institutes of Health, the Salk Institute, the American Cancer Society and the Howard Hughes Medical Institute were among the funders of this research. 
“Viral E3 Ubiquitin Ligase-Mediated Degradation of a Cellular E3: Viral Mimicry of a Cellular Phosphorylation Mark Targets the RNF8 FHA Domain,” Molecular Cell, published online March 8, 2012, to appear in print, April 13, 2012. doi: 10.1016/j.molcel.2012.02.004
March 26, 2010 The simple ability to consume food is one most of us take for granted. But for some children, getting adequate nourishment is far from simple. Feeding and swallowing (dysphagia) problems are extremely complex and surprisingly common in children with autism. Without appropriate treatment, these disorders can have lasting effects on a child’s physical and emotional development. The conference will highlight approaches to treating children on the autism spectrum who have feeding disorders, featuring presentations by guest lecturers (including Susan Levy, MD; Maureen Black, PhD; and Thomas Linschied, PhD), interactive breakout sessions, and a family panel. More information will be available here in December.
Stefansson, Dr. Anderson and the Canadian Arctic Expedition, 1913–1918: A Story of Exploration, Science and Sovereignty. Mercury series, History paper no 56 Author: Stuart E. Jenness Release date: April 2011 Impressive in its scope and scholarship, this book presents the first comprehensive and authoritative account of the storied Canadian Arctic Expedition and the personal animosity of its co-leaders: the intrepid explorer/ethnologist Vilhjalmur Stefansson and the respected scientist Rudolph Anderson. The 440-page volume details the Expedition's successes and tragedies, including the discovery of islands never before mapped and the sinking of the flagship Karluk. After 90 years, all the elements of this important and compelling story have finally been woven into a single volume. It is long overdue. The book includes 84 illustrations and maps, several appendices, and a detailed bibliography. The author is uniquely qualified to tell this story. His father was Diamond Jenness, a scientist on the Expedition, and he knew or met eight Expedition members, including both Stefansson and Dr. Anderson.
Human rights reminder Sixty-four years ago, on Dec. 10, the United Nations promulgated and its members adopted the Universal Declaration of Human Rights. Ever since, International Human Rights Day has been “celebrated” on the same date. Written after World War II as the sun had just begun to rise over the darkened, brutalized terrain of so much of the world, the document enshrines the highest values and aspirations of civilized men and women. The document hoped to open a door onto a new world. It begins: “Whereas recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world… “Whereas disregard and contempt for human rights have resulted in barbarous acts which have outraged the conscience of mankind… “Whereas it is essential to promote the development of friendly relations between nations…” How should we respond, therefore, when not one leader of the Arab world, let alone the purportedly moderate Palestinian Authority President Mahmoud Abbas, condemned the call to genocide last weekend by Hamas chief Khaled Meshal? Meshal fulminated against Israel. “We are not giving up any inch of Palestine. It will remain Islamic and Arab for us and nobody else. Jihad and armed resistance is the only way. We cannot recognize Israel’s legitimacy. From the sea to the river, from north to south, we will not give up any part of Palestine — it is our country, our right and our homeland.” Was it only a fool’s grasp at a straw to hope that Abbas, the anointed of the western world, would have publicly protested: “No! That is no longer the way. We wish to live alongside, not instead of, Israel.” But it would appear that despite the new UN state status conferred upon Abbas’ PLO, the Palestinian leader believes the provisions of the declaration do not apply to him. 
Of course, in this belief, he is at one with all the members of the Arab League who applauded Meshal and the various other political leaders who regard the declaration as merely a decorative plaque to hang on a wall hiding a crack in the plaster rather than as an earnest, meaningful expression of human values. But the protection of human rights must not only concern us when it is beyond our borders. CJN reporter Andy Levy-Ajzenkopf reminds us in a compelling, two-part series about the predicament of the European Roma that we must ensure our own laws – here in Canada – conform to both the spirit and the letter of the Universal Declaration of Human Rights. We firmly believe that the recognition of the inherent dignity and equal rights of all members of the human family is indeed the foundation of freedom, justice and peace in the world. We must therefore practise what we believe if we are to point out the disgusting and cynical hypocrisy of men like Mahmoud Abbas.
Warsaw Ghetto uprising’s 70th anniversary Seventy years ago last month, the Nazis began their second deportation of the Jews from the Warsaw Ghetto. In response, on Jan. 18, 1943, the first organized and armed Jewish/Zionist resistance action in the ghetto was launched. The fighters of the ZZW and ZOB drove the Nazis from the ghetto. Months later, on April 19, 1943, on the eve of Passover, the Nazi SS and police units entered the ghetto and were attacked by organized Jewish partisans yet again. There were two separate armed resistance organizations in the ghetto – the ZZW and ZOB. The most famous Jewish leader of armed resistance is Mordechai Anielewicz, commander of the ZOB (Jewish Fighting Organization) during the uprising. The ZOB was an alliance of several Zionist and non-Zionist youth groups. Anielewicz received paramilitary training in Betar as a young teenager and left the group before the war. The ZOB had a socialist orientation, and Betar as an organization did not participate in it, in part because of politics. ZZW, the Jewish Military Organization, was commanded and manned by Betar members and their allies. Betar’s fighters in the Warsaw Ghetto Uprising have been largely written out of history. Moshe Arens, a former Israeli defence minister who was a Betar member, recently wrote a yet-to-be published book on Betar’s heroic battle against the SS in the ghetto. That book, Flags Over the Warsaw Ghetto (Gefen Publishing, November 2011), and articles by Arens about the ZZW that were published in Yad Vashem Studies, Ha’aretz and the Jerusalem Post have helped to create a far more accurate account of the ZZW’s participation in the uprising. The book and the articles also did much to recall the heroism of Pawel Frenkel, ZZW’s commander. The ZZW is now thought by historians to have been the better-equipped force in the ghetto, as it had procured machine guns. The ZOB, however, had more fighters. 
The groups finally decided to co-ordinate their efforts in the last moments before the April 19 battle began. For 28 days, Jewish warriors fought the enemy and showed bravery not seen since the days of Bar Kochba’s uprising against Rome. In the Vilna Ghetto, Betar leader Joseph Glazman was deputy commander of the United Partisan Organization, the only armed Jewish resistance group in that ghetto. Betar was founded in 1923 by Ze’ev Jabotinsky (1880-1940), a figure who is too often forgotten today. Professor Daniel J. Elazar (1934-1999), a scholar of the Jewish political tradition, remarked about Jabotinsky’s legacy in the May 15, 1981, edition of the journal Sh’ma: “Would there be serious public commemoration of the 100th birthday of Zev Jabotinsky had it not been for the fact that the Likud won the election in Israel in 1977? Not likely. For 30 years and more, Jabotinsky was one of those non-persons in Israel and the Jewish world… The ruling Labor party made him a non-person for the same reasons that it portrayed [then-prime minister] Menachem Begin and his supporters as uncivilized fascists – it is easier to beat the opposition by painting it as irrelevant, intolerable and non-existent, until it is too strong to be dismissed.” This year’s 70th anniversary of the Warsaw Ghetto Uprising offers an opportunity to remind today’s Jews about Jabotinsky’s vital contributions. Moshe Phillips is the president of the Philadelphia chapter of Americans For a Safe Israel.
Carbon Savings - Promoting tangible methods of conservation (Finalist.) YMCA - Toronto Office Updates based on feedback: Many have felt this proposal is based on measuring a person's carbon footprint; instead, we are helping people estimate the economic and environmental benefits they can expect to achieve by switching to a new light bulb, showerhead or even by adding a rain barrel. We will be getting the word out through our own marketing efforts but also see this as a resource to support the many organizations that are already out there. We provide traction to the environmental movement and use money and environmental benefits as incentives for people to consider and try new products. We are looking to work with sustainability networks, housing associations, builders, manufacturers and retailers (to help market their products), real estate agents, mortgage specialists, educational programs, speakers and the list goes on. - If you are one of these people or know someone who is, we would love to hear from you! Carbon Savings is committed to reducing society’s demand on natural resources by promoting environmentally preferred products (EPPs). To do this, Carbon Savings focuses on public awareness by helping people to understand how to conserve water and energy and then to estimate the financial and environmental benefits associated with each method. This is done through calculators which help people estimate their annual savings, payback period and CO2 reductions. Please view http://carbonsavings.org/product.php?item=faucet for an example. 
The organization was founded on the following premises:
- People would prefer to reduce their impact on the environment as long as it does not affect their standard of living
- The majority of people are motivated to action based on financial considerations
- People are hesitant to spend money when the resulting savings are unknown
In order to support the transition towards a low carbon economy, Carbon Savings has decided to concentrate its efforts on a two-pronged approach. The first is to help homeowners and businesses find tangible methods to conserve energy and water. There are roughly 30 household technologies that are good for the environment and have a fast payback period, yet many of them are unfamiliar to the public. As a result, Carbon Savings is building a comprehensive and easy-to-use website for consumers to view all of their options and learn about each one. The second approach is to help companies communicate the benefits of their EPPs to the public. This allows manufacturers and retailers to promote their products by demonstrating the economic and environmental benefits to the customer. In essence, the calculators become interactive marketing tools to help companies sell more products. At the end of the day, increased sales translate into greater adoption of conservation. BACKGROUND & DETAILS The concept of Carbon Savings was born in 2010 as part of a sustainability course in engineering at Queen’s University. Since that time, the idea has evolved and gained momentum. Today, the company has teamed up with a number of non-profit organizations to provide traction to their environmental efforts. One of Carbon Savings’ core beliefs is that in order to achieve mainstream conservation, consumers must first know what their options are and then be convinced these products are worth buying. 
As a result, the company has created a working prototype at www.CarbonSavings.org to allow users to browse through different product categories, such as showerheads and thermostats, in order to learn about them and their importance. Each technology is highlighted with its own interactive calculator to help users estimate the financial and environmental benefits. The calculators are pre-loaded with average household data to help people that may not be aware of reasonable input values. These numbers can be changed to generate customized estimates. In addition to the 30 household technologies, there are roughly 40 commercial technologies that can be adopted at work to generate even greater savings. As the site develops, the goal is to provide a user experience geared towards helping people and companies explore practical opportunities to achieve conservation. By highlighting all of these technologies, Carbon Savings is looking to offer the most extensive collection of environmental products and their calculators in one location. Beyond the educational website, Carbon Savings will leverage its impact by creating marketing tools for manufacturers and retailers. The calculators can be customized to be product specific and can be placed on the retailer’s site beside the product, which positions the calculators in front of people at the point of sale. It also enables companies to provide personalized estimates without sending their customers away from their website. The calculators have a lot of flexibility in terms of look, feel and the number of questions asked of the user. They must also be approved by Carbon Savings for accuracy and then carry the Carbon Savings logo, providing third-party verification. Companies do not currently have the option to license energy savings calculators and therefore cannot benefit from the various advantages they have to offer. 
The licensing option allows companies to focus on their core competency while outsourcing software improvements and utility updates to an industry expert. Also, companies are able to forgo the development cost, which reduces risk and makes high-end features more affordable. At the end of the day, these marketing tools and added features help companies sell more EPPs and lead to a low carbon economy. Once the site is established, Carbon Savings will create tools to help sustainability departments and green teams to evaluate the energy and water savings potential of their company. Again, the focus will be on both the financial and environmental benefits so they can communicate the benefits to the public once the retrofits are performed. This helps the departments identify the best opportunities while saving a lot of time since the software will evaluate which technologies offer the greatest savings and are best suited for their applications. This lets staff members concentrate on implementation while Carbon Savings performs the initial research and compares technologies. A big component of our success will be getting our calculators in front of people. This will be facilitated by a number of partnerships. As has been mentioned throughout the competition, we have a number of partnerships in the works and some have been confirmed at this time. We are working with the YMCA, a few TD branches (to start), Science 44 Housing Co-op (and looking at the Co-op network), the Youth Mentoring Youth program and we are hoping to work with a number of low-income housing organizations, Greenovations with Queen's University and there are a large number of organizations in the pipeline. Another aspect to consider is that corporate clients are similar to partnerships in the way that they help bring the calculators to the public. 
Companies looking to license the calculators bring the calculators to their own network and often present the information at meaningful times such as the point of sale or in relationship to a "call to action". The energy demands of the average Canadian household result in roughly 8-10 metric tons of CO2 emissions per year. However, it is possible to reduce these energy demands by more than 20% with technologies that offer a combined payback period of less than 5 years. As a result, a household would be able to save upwards of $500 per year and reduce emissions by over 2 metric tons on an annual basis.
Average Annual Savings
Low flow showerheads - 220kg CO2 | $90 per year | 0.7 year payback period (or less) | 30,000L of water
Faucet Aerators - 60kg CO2 | $40 per year | 0.5 year payback period (or less) | 15,000L of water
Rain Barrel - 2kg CO2 | $9 per year | 10 year payback period | 5,000L of water
Front-load washing - 170kg CO2 | $140 per year | 5-10 year payback period | 40,000L of water
Thermostat - 200kg CO2 | $40 per year | 2.5 year payback period
Phantom Power Bar - 30kg CO2 | $20 per year | 2 year payback period
The goal is to reach as many people as possible and through the help of the YMCA, Carbon Savings is able to get a good start. In Toronto alone, the YMCA network reaches approximately 500,000 people each year and the organization helps roughly 25% of all immigrants entering the country. Their network offers an incredible opportunity to reach a large number of people and tailor solutions to help recent immigrants. The partnership will begin with a flyer distributed through the YMCA’s network that will highlight a couple household products and how much can be saved. It will then lead people to both organizations’ websites to learn more. Partnering with the YMCA offers an unparalleled ability to reach a large number of people and achieve environmental reform. 
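The arithmetic behind calculators like these is straightforward: payback period is purchase cost divided by annual savings, and lifetime benefits scale linearly with years of use. A minimal sketch in Python (the $60 showerhead cost is an assumed illustrative value; the $90/year and 220 kg/year figures come from the list above):

```python
def payback_years(purchase_cost, annual_savings):
    """Simple payback period: years until cumulative savings cover the cost."""
    return purchase_cost / annual_savings

def lifetime_benefits(annual_savings, annual_co2_kg, years):
    """Total dollar savings and CO2 reduction over a product's lifetime."""
    return annual_savings * years, annual_co2_kg * years

# A $60 low-flow showerhead saving $90/year pays back in under a year,
# consistent with the ~0.7-year figure above.
print(round(payback_years(60, 90), 2))   # 0.67

# Over 10 years it saves $900 and 2,200 kg of CO2 (at 220 kg/year).
print(lifetime_benefits(90, 220, 10))    # (900, 2200)
```

A product-specific calculator would simply pre-load these inputs with average household data, exactly as the prototype described above does.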
Furthermore, targeted campaigns can help low-income families save money with products that have payback periods of less than 1 year. Separately, a couple of bank branches have offered to provide exposure by handing out information about www.CarbonSavings.org to new homeowners that sign up for a mortgage. Carbon Savings was founded by Howard Swartz, who graduated from Mechanical Engineering at Queen’s University. He has continued his studies at Queen’s and is currently completing a master’s in Civil Engineering with a specialty in Applied Sustainability. He has worked as the environmental manager for an eco-friendly coffee shop and spent a summer at an energy service company. In the latter role, he generated estimates of the financial savings associated with various energy-reducing technologies. Howard’s network includes sustainability managers, manufacturers, retailers, utility companies, environmental consultants and marketing firms. The company is able to derive revenues through a number of methods. The main source will come from licensing the customized calculators. Carbon Savings is in the process of developing a sophisticated platform to offer customized calculators that are capable of collecting valuable statistics about users, which can then be used to influence future marketing campaigns. Manufacturers and retailers will pay to have calculators developed for their products, which will be supported by Carbon Savings through software and data updates. This helps companies focus on their core competencies while benefiting from a third-party source that is trusted for estimating utility savings. Once the company has developed a strong online presence, the company will look to establish a consulting division. This would allow the company to provide more detailed analysis and engage organizations that would prefer to receive expert advice.
Contact Lens Prescription BY MARY JAMESON, BHS, COA, NCLC, CPOT What do the numbers on a contact lens prescription mean? How do they relate to a patient's eyes? How important are they? With all of the controversy surrounding the issue of releasing contact lens prescriptions, we should understand what every detail of the prescription means. Measuring Base Curve The back of a contact lens features a series of curves. These curves help the contact lens fit the contour of a cornea. The main back surface curve of a lens is the base curve. It represents the central radius of curvature of the back of the lens. The base curve corresponds to the keratometry reading, the measurement of a patient's corneal curvature. We typically record keratometry measurements in diopters of power and measure the base curve in millimeters of radius. You can use a conversion chart to transition from diopters to millimeters of radius. When you select a trial lens, you can choose a base curve that is on K (meaning it matches the corneal curvature reading), steeper than K (meaning the base curve is smaller than the corneal curvature reading) or flatter than K (meaning that the base curve is larger or longer than the corneal curvature reading). An "on K" fit is not always ideal. You can compare this to shopping for shoes or clothing -- if you are a certain size in one brand, you may not be the same size in another. The same is true for contact lenses. Patients need to try contact lenses on to make sure the fit is the best possible. The power of a lens represents the prescription that will neutralize the power of the patient's eye. This number is not always the same for a contact lens fitting as it is for a spectacle prescription. You must take the vertex distance (the distance from the surface of the cornea to the back of the spectacle lens) into consideration because the contact lens fits onto the cornea. 
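Both conversions mentioned above follow standard optics formulas: keratometric diopters and radius in millimeters are related through the keratometric index constant (337.5), and vertex-distance compensation follows F_cornea = F_spectacle / (1 - d x F_spectacle), with d in meters. A rough sketch, with function names and sample values of my own choosing rather than from the article:

```python
def diopters_to_radius_mm(diopters):
    """Convert a keratometry reading in diopters to a radius of
    curvature in millimeters, via the keratometric index constant."""
    return 337.5 / diopters

def radius_mm_to_diopters(radius_mm):
    """Convert a radius of curvature in millimeters back to diopters."""
    return 337.5 / radius_mm

def vertex_compensated_power(spectacle_power, vertex_mm=12.0):
    """Adjust a spectacle prescription (diopters) to the corneal plane.
    The adjustment conventionally matters only for stronger
    prescriptions (roughly beyond +/-4.00 D)."""
    d_meters = vertex_mm / 1000.0
    return spectacle_power / (1 - d_meters * spectacle_power)

# A 44.00 D keratometry reading corresponds to about a 7.67 mm radius.
print(round(diopters_to_radius_mm(44.00), 2))    # 7.67

# A -8.00 D spectacle lens worn 12 mm from the eye needs only about
# -7.30 D at the cornea -- less minus, as expected.
print(round(vertex_compensated_power(-8.00), 2)) # -7.3
```

This is why a conversion chart and an effectivity chart give the answers they do; in practice the practitioner reads the values from the chart rather than computing them.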
You can use an effectivity chart to make the appropriate power adjustments. Also, the over-refraction during the lens fit will help finalize the patient's prescription. When you fit gas permeable lenses, you must consider the relationship of the base curve to the cornea. When you fit a lens steeper or flatter than the keratometry reading, a lacrimal lens forms between the cornea and the back curvature of the lens. If you fit a lens steeper than the K reading, then you must add minus power to the prescription. Likewise, if you fit a lens flatter than the K reading, then you must add plus power to the prescription. Use the acronym SAMFAP (steeper add minus; flatter add plus) to help remember this calculation. Contact lens diameter influences the fit of the lens. Diameter is less variable for soft lenses. If you reduce the diameter of the lens but the base curve remains the same, then the lens will show more movement. If you increase the diameter of the lens but the base curve remains the same, the lens will show minimal amounts of movement. Contact lens prescriptions should be filled just like any other valid medical prescription. Make "No Substitutions" the norm for contact lens prescriptions. Any variance from the prescribed contact lenses can result in problems for your patients. Ms. Jameson is laboratory supervisor for the Department of Clinical Sciences at the Pennsylvania College of Optometry and is a past chair of the AOA Paraoptometric Section.
Contact Lens Spectrum, Issue: October 2003
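One common way to quantify the SAMFAP rule from the article above: the lacrimal (tear) lens power is approximately the base-curve power minus the K reading, both in diopters, and that amount is subtracted from the refraction to get the final gas permeable lens power. A sketch under those assumptions (the function names and example values are mine, not the author's):

```python
def lacrimal_lens_power(base_curve_d, flat_k_d):
    """Approximate power (diopters) of the tear lens that forms between
    a gas permeable lens and the cornea: positive when the lens is
    fitted steeper than K, negative when fitted flatter."""
    return base_curve_d - flat_k_d

def final_lens_power(refraction_d, base_curve_d, flat_k_d):
    """Compensate the refraction for the tear lens (SAMFAP:
    Steeper Add Minus, Flatter Add Plus)."""
    return refraction_d - lacrimal_lens_power(base_curve_d, flat_k_d)

# Fitting 0.50 D steeper than a 43.00 D cornea creates a +0.50 D tear
# lens, so 0.50 D of minus is added to a -2.00 D refraction.
print(final_lens_power(-2.00, 43.50, 43.00))  # -2.5

# Fitting 0.50 D flatter instead adds 0.50 D of plus.
print(final_lens_power(-2.00, 42.50, 43.00))  # -1.5
```

Note the two print lines reproduce exactly the "steeper add minus" and "flatter add plus" halves of the mnemonic.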
Having a Labrador Retriever as a companion in your home is wonderful, especially if you have had the opportunity to raise your dog from a puppy. However, dogs tend to be much more short-lived than humans, and many pet owners are concerned about how to give their dog the highest quality of life possible. There are many factors that influence how many years your Labrador Retriever will spend with you, some of which depend on your responsibility as a pet owner.

WHAT IS THE AVERAGE LIFESPAN OF A LABRADOR RETRIEVER?

The official average lifespan of a Labrador Retriever is said to be between 12 and 13 years. This is called the median age. The term "median age" refers to the point at which half of a specific breed live past this age, while half are no longer alive. Labrador Retrievers have a relatively high median age compared to some other larger breeds (St. Bernards have a median age of only 4.1 years!). However, it is possible for a Labrador Retriever to live longer than 13 years with good lifestyle habits and attentive medical care.

GOOD LIFESTYLE HABITS

Many veterinary studies have shown that a fit Labrador lives much longer than an inactive or obese Labrador. In one study of two groups of 20 dogs, one group was fed low-quality dog food, while the other was fed high-quality dog food and given regular exercise. By the age of 13, eleven of the dogs that had been fed the better food were still alive, compared with only one dog from the low-quality food group. Though some of these dogs died from disease, it is clear that lifestyle choices can affect how long your dog will live.

HOW TO HELP YOUR DOG LIVE LONGER

The most important thing you can do to help your dog live longer is to provide constant attention. Your dog is your companion, and will thrive on the love that you show them. As a companion, it's important that you feed your dog the highest quality pet food that you can afford.
Lower-quality pet foods may contain chemicals, indigestible fillers, and a high carbohydrate content. Higher-quality foods contain nutrients that are easily absorbed and much easier for your dog to digest. Exercise is also very important in keeping your dog healthy. Labrador Retrievers are very active, and require at least 20 minutes of moderate exercise on a daily basis.
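The "median age" statistic described above is easy to illustrate in code. The lifespans below are hypothetical values for illustration, not data from any study:

```python
# Illustrating the "median age" statistic: half the values lie above it,
# half below. The lifespans here are hypothetical, not real data.
import statistics

lifespans = [9.5, 11.0, 12.0, 12.5, 13.0, 13.5, 15.0]  # years, hypothetical

median_age = statistics.median(lifespans)
# Of these seven dogs, three lived past the median and three did not.
```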
(CNN) -- Boys in the United States are starting puberty earlier than ever, according to a new study published in the November issue of the journal Pediatrics. In the study, lead author Marcia Herman-Giddens from the University of North Carolina's School of Public Health and her colleagues show that boys are starting to sexually develop six months to two years earlier than medical textbooks say is standard.

This research has been a long time coming. Herman-Giddens first documented early puberty in girls in 1997, and several studies have since backed up those findings. One of the reasons it took so long to do a comprehensive study on early puberty in boys, Herman-Giddens said, is that the onset is more difficult to identify. For girls, breast development and the start of a menstrual cycle are obvious clues. For boys, the onset of puberty comes in the form of enlarged testes and the production of sperm. Researchers responded: " 'Yikes, we don't want to ask about that!' " Herman-Giddens said with a laugh.

But ask they did -- 212 practitioners across the country examined more than 4,100 boys aged 6 to 16. The practitioners recorded information on the boys' genital size and pubic hair appearance. Researchers assigned each boy's data to one of five stages -- Stage 1 being pre-puberty, Stage 2 being the onset of puberty and Stage 5 being adult maturity. They then compared the ages and puberty stages of all the boys. The rigorous study was designed to report on only physical changes, not hormonal.

The results were broken down by race: African-American boys start hitting Stage 2 first, at about 9 years old, while non-Hispanic white and Hispanic boys begin developing around 10 years old. "This should have an impact on the public health community," Herman-Giddens said. But the researcher is concerned about using the numbers as a new standard for pediatricians.
"That might be normal now," she said, "but that doesn't mean it's normal in the sense of what's healthy or what should be." One of the reasons she's worried is that our environment may be playing a role in accelerating puberty. "The changes are too fast," Herman-Giddes said. "Genetics take maybe hundreds, thousands of years. You have to look at something in the environment. That would include everything from (a lack of) exercise to junk food to TV to chemicals." Dr. Megan Kelsey, an assistant professor of pediatric endocrinology with Children's Hospital Colorado, said several studies have shown an association between childhood obesity and early puberty in girls. Fat tissue has the ability to convert other hormones into estrogen, which experts believe may lead to early breast development. Fat also creates the hormone leptin, which is necessary for the onset of puberty, Kelsey said. The little research that has been done on the relation between obesity and puberty in boys has shown conflicting results. A few studies found puberty is delayed in obese males -- possibly because of excess estrogen in the body. Obesity could still be to blame, but a closer examination is needed, Kelsey said. The problem is a lot of other factors that could be involved. "It's a very complicated subject," said Sonya Lunder, a senior analyst with the Environmental Working Group. "We're finding a lot of the chemicals that Americans have daily exposure to have an impact." But identifying the specific chemicals that are causing hormonal changes has so far been impossible. Skepticism abounds about food additives, pesticides and chemicals like BPA, Lunder said. "The overall concern is that by hastening puberty you're actually shortening childhood," she said. "The real impact of this is not only on future fertility," but also that puberty is a "physiological change in your brain." 
Parents should be aware that their boy or girl could hit puberty earlier than they did as children, Herman-Giddens said. They may need to give "the birds and the bees" talk earlier or be prepared to explain their child's body changes. Early development in girls has been linked to poor self-esteem, eating disorders, and depression, according to Health.com. The findings for boys are not as clear, but parents should be on the lookout for risky behaviors.

One thing that should be a relief for parents, though, is that boys still seem to be reaching sexual maturity at the same age as in the past. "Although they seem to be starting a little bit earlier, they seem to reach the end of puberty at the same time as they used to," Kelsey said. We're "not going to have grown men in eighth grade."
Problem code: MANYLEFT

All submissions for this problem are available. The classical game of OneLeft is played as follows. Some pegs are placed on an NxN grid. Initially, at least one cell is empty and at least one contains a peg (each cell contains at most one peg). A move consists of jumping one peg over an adjacent peg to an empty cell, and removing the peg that was jumped over. Formally, if there is a peg in cell (x1, y1), and cell (x2, y2) is empty, and (x1-x2, y1-y2) is one of (0, 2), (0, -2), (2, 0), or (-2, 0), and there is a peg in cell ((x1+x2)/2, (y1+y2)/2), then the peg in cell (x1, y1) may be moved to cell (x2, y2) and the peg in cell ((x1+x2)/2, (y1+y2)/2) removed. The coordinate (0, 0) indicates the top-left corner, (N-1, 0) indicates the top-right corner, (0, N-1) indicates the bottom-left corner, and (N-1, N-1) indicates the bottom-right corner. The game continues until no more moves are possible.

Normally the goal of OneLeft is to leave a single peg on the grid. However, in this problem the goal is to leave as many pegs as possible. Optimal solutions are not required, but solutions that leave more pegs will score more points.

Input begins with an integer N, the size of the grid. N lines follow with N characters each, representing the grid. A '.' character indicates an empty cell, and a '*' character indicates a peg.

For each test case, first output the number of moves in your solution. Then output each move in the form "x1 y1 x2 y2", which indicates a peg moving from (x1,y1) to (x2,y2). Any whitespace in your solution will be ignored.

Your score for each test case is the fraction of cells containing pegs after performing the moves in your solution. Your overall score is the average of your scores on the individual test cases. Invalid solutions will be judged as "wrong answer". In particular, if any legal moves exist after the moves in your solution have been performed, your solution will be considered invalid.
Sample Input 1
6
..*..*
*..*.*
***.**
.***..
****..
**.*.*

Sample Output 1
13
1 3 1 1
3 3 1 3
0 5 0 3
0 2 0 0
3 5 3 3
5 2 3 2
5 0 5 2
3 2 3 0
2 4 0 4
0 3 0 5
3 0 1 0
0 0 2 0
0 5 2 5

Sample Input 2
5
.*.*.
..*..
*...*
...*.
.*...

Sample Output 2

The first sample output scores 8/36 = 0.2222. The second sample output scores 7/25 = 0.28. Recall that the goal is to maximize your score.

Test Case Generation
For each official test file, N is chosen randomly and uniformly between 10 and 30, inclusive. A real number D is chosen randomly and uniformly between 0.5 and 0.95, then each cell is independently chosen to contain a peg with probability D.

Time Limit: 1 sec
Source Limit: 50000 Bytes
Languages: ADA, ASM, BASH, BF, C, C99 strict, CAML, CLOJ, CLPS, CPP 4.0.0-8, CPP 4.3.2, CS2, D, ERL, FORT, FS, GO, HASK, ICK, ICON, JAR, JAVA, JS, LISP clisp, LISP sbcl, LUA, NEM, NICE, NODEJS, PAS fpc, PAS gpc, PERL, PERL6, PHP, PIKE, PRLG, PYTH, PYTH 3.1.2, RUBY, SCALA, SCM guile, SCM qobi, ST, TCL, TEXT, WSPC
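Since an invalid solution is judged "wrong answer," it can be worth replaying a candidate solution offline before submitting. Below is an illustrative Python sketch (not a solver, and not part of the problem statement): it applies each jump, rejecting any illegal move, and can then check whether any legal move remains.

```python
# Sketch of an offline checker for OneLeft solutions. The grid is a list
# of N strings; coordinates follow the problem's convention: (x, y) with
# x = column, y = row, and (0, 0) at the top-left corner.

def apply_moves(grid, moves):
    """Apply jump moves; raise ValueError on an illegal move."""
    board = [list(row) for row in grid]
    n = len(board)
    for x1, y1, x2, y2 in moves:
        mx, my = (x1 + x2) // 2, (y1 + y2) // 2  # jumped-over cell
        legal = (
            0 <= x1 < n and 0 <= y1 < n
            and 0 <= x2 < n and 0 <= y2 < n
            and abs(x1 - x2) + abs(y1 - y2) == 2
            and (x1 == x2 or y1 == y2)        # horizontal or vertical jump
            and board[y1][x1] == '*'          # source has a peg
            and board[my][mx] == '*'          # middle has a peg
            and board[y2][x2] == '.'          # destination is empty
        )
        if not legal:
            raise ValueError(f"illegal move {(x1, y1, x2, y2)}")
        board[y1][x1] = '.'
        board[my][mx] = '.'
        board[y2][x2] = '*'
    return ["".join(row) for row in board]

def has_legal_move(grid):
    """True if any jump is still possible (the solution would be invalid)."""
    n = len(grid)
    for y in range(n):
        for x in range(n):
            if grid[y][x] != '*':
                continue
            for dx, dy in ((2, 0), (-2, 0), (0, 2), (0, -2)):
                x2, y2 = x + dx, y + dy
                if (0 <= x2 < n and 0 <= y2 < n
                        and grid[y2][x2] == '.'
                        and grid[(y + y2) // 2][(x + x2) // 2] == '*'):
                    return True
    return False
```

A solution passes this check if `apply_moves` raises no error and `has_legal_move` returns False on the resulting board; the score is then the fraction of cells still containing `'*'`.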
Posted 3 years ago

In 1941 Anton Refregier was paid $26,000 by the WPA to make murals in San Francisco's Rincon Post Office on Spear Street. I visited it today and took pictures of the wonderful murals, which trace the state's history from the Indians to the World Wars. These two murals are numbers four and five. Notice in them the MASSIVE hands of the people--something found in all of Refregier's murals. Here is the provided description in the post office, which is actually now a food court, not a post office.

4. Conquistadors Discover the Pacific: "Baja California was discovered in 1533 by Fortun Jimenez of the Cortes Expedition. By 1540, Ulloa, another member of that expedition, had explored the Sea of Cortes. Also in that year, Hernando de Alarcon sailed up the Colorado River and in 1541, Francisco de Bolanos explored both sides of the Baja peninsula. The first European to explore Alta California, the land above the Baja peninsula, was Juan Rodriguez Cabrillo who sailed to the Santa Barbara Islands in 1543."

5. Monks Building the Mission: "Mission San Francisco de Asis, named for the founder of the Franciscan order, St. Francis of Assisi, is popularly known as Mission Dolores. That name comes from a stream named Arroyo de los Dolores originally on the property. That stream and its adjoining lake were filled in years ago. Mission Dolores, the sixth of California's 21 missions, was founded in 1776 by a missionary named Palou. The chapel that stands today was completed in 1791 and restored in 1917 by architect Willis Polk."

Part of my trip to Rincon Center. For more on New Deal Post Office murals, check out this website: http://www.parmaconservation.com/newdealpostoffic.html
WASHINGTON -- December 7 -- The United States is facing a looming waste crisis, with a conservative estimate of 70 billion pounds of PVC plastic (polyvinyl chloride) slated for disposal in the next decade. Disposal rates are expected to sharply increase as an estimated 125 billion pounds of PVC installed in the last 40 years in construction and other long-lasting uses will need to be disposed of as it reaches the end of its useful life. This pervasive poison plastic is used in thousands of products including pipes, building materials (such as vinyl siding), consumer products (such as toys or tablecloths) and disposable packaging, and cannot be disposed of safely. From 1966 to 2002, an estimated 250 billion pounds of PVC was used in the U.S., with a doubling of use in the past 15 years alone.

A new report, PVC: Bad News Comes in Threes, documents the health and environmental hazards during manufacturing, product use and disposal, and provides detailed state and national estimates on PVC waste incinerated and landfilled. Several communities near incinerators are concerned about increased cancer rates linked to dioxin emissions; burning PVC plastic forms dioxins, a highly toxic group of chemicals linked to cancer. Many PVC products are made with toxic additives, including phthalates and organotins, that can be released during use and leach into groundwater when landfilled. Studies have shown plasticizers such as phthalates have migrated out of PVC containers used to store food, IV bags used to hold blood, toys and numerous other soft vinyl products, exposing people to toxic additives. And consumers who recycle PVC bottles are unaware that it can contaminate the entire recycling batch.

The Center for Health, Environment & Justice's (CHEJ) BE SAFE network kicked off a campaign to convince Johnson & Johnson and Microsoft to switch to available, safe non-PVC products and packaging, as Bristol-Myers, Samsung and Nike have already done.
Many firefighters, who are concerned about PVC producing toxic fumes in burning buildings, will benefit from Firestone's announcement in October to phase out 8,000 tons of PVC used annually in its roofing. The two corporate targets are large users of PVC packaging, such as Microsoft's blister packaging on software products and Johnson & Johnson's Kids Detangling Shampoo bottles.

"Some major medical device manufacturers are switching from using PVC to avoid direct patient exposure to phthalates, as well as the public and environmental health impacts of PVC throughout its life cycle," said Ted Schettler MD, MPH, of the Science and Environmental Health Network. "Companies realize that protecting the public health and the environment is the right thing to do and makes good business sense."

The campaign is asking consumers to avoid PVC products, which are often marked with a "3" or a "v" for vinyl, and send back any PVC items to the manufacturer or bring them to a household hazardous waste collection. "We know enough about the dangers of PVC to take precautionary action and phase it out," said Lois Gibbs, who founded CHEJ and is well known as the housewife turned activist around Love Canal's toxic contamination in her hometown of Niagara Falls, NY. "We need to tell corporations to protect our health and environment by switching to non-PVC materials. Consumers need to know that bad news comes in threes: avoid buying PVC products, which are marked with a 3 or v in the recycle symbol."

PVC is estimated to contribute from 38 to 67% of the total chlorine found in solid waste, from 90 to 98% of phthalates, from 1 to 28% of the lead, and 10% of the cadmium (Pg. 14, Report). Cadmium, lead, organotins and phthalates are commonly released from PVC waste in landfills (Pg. 37, Report). Burning PVC plastic, which contains 57% chlorine when pure, forms dioxins, a highly toxic group of chemicals linked to cancer.
PVC is the major contributor of chlorine to four combustion sources (municipal solid waste incinerators, backyard burn barrels, medical waste incinerators and secondary copper smelters) that account for a significant portion of dioxin air emissions; these four sources accounted for more than 80% of dioxin emissions to air based on a USEPA survey (Pg. 2, Report). Government tests found residents of Mossville, Louisiana, the location of four vinyl production facilities, had dioxin levels in their blood at three times the average rate and were breathing air contaminated with vinyl chloride, a potent carcinogen, at more than 120 times the ambient air standard (Pg. 19, Report).

Organizations released the report in 20 states, including CA, CT, DE, FL, GA, HI, IL, LA, MA, MD, ME, MI, NC, NY, OH, OR, PA, VA, WA and WV. It is co-authored by CHEJ and the Environmental Health Strategy Center.
Imagine a moment from the age of dinosaurs frozen in time: primitive birds, bees, insects, early mammals, the first known flowering plants and of course, dinosaurs, all exquisitely preserved in fine-grained fossils from China's Liaoning Province. Volcanic eruptions killed and buried victims quickly in this dinosaur Pompeii, capturing soft, fragile features not normally preserved in fossils — notably the feathers on animals that had never been known to have them before. Now, with state-of-the-art animation to bring this lost world to life, NOVA investigates the mysterious feathered dinosaurs that are challenging old ideas about the origin of bird flight. The central character in this drama is a strange little dinosaur with wings on its legs as well as its arms. The pigeon-sized microraptor is the smallest adult dinosaur ever found, perhaps the first known tree dweller. But could it really fly? Is it the key to understanding the origin of flight or merely an evolutionary dead end unrelated to the ancestry of birds? To help solve the riddle, NOVA assembles a team of top paleontologists, aeronautical engineers and paleo-artists to reconstruct the microraptor and build a sophisticated model for a wind tunnel experiment. The results have surprising implications for long-accepted ideas about how winged flight began.
Herbal Tea Ingredients

The prickly Hawthorne bush is native to Europe, Africa and western Asia and is often used as a hedge in Europe, though it grows to be about 13 feet high. In North American climates the Hawthorne tree grows to about 5 feet tall. The Hawthorne plant has grayish-colored bark and thorns that grow along the branches. The leaves are shiny and dark green with a bluish tint on the undersides. Hawthorne trees produce white flowers and bright red berries that hang in clusters.

Hawthorne berries are widely used for heart problems, and their medicinal value for blood- and heart-related illness is thought to be both effective and safer than other drugs with similar qualities. Hawthorne is used for irregular heartbeat, to lessen plaque build-up in arteries, and to increase blood flow and oxygen in the blood to the heart and brain. It is known as a high blood pressure regulator. In addition to the benefits to the heart and circulatory system, Hawthorne has been used to rid the body of excess water and salts and to support weight loss programs. Its medicinal properties extend to digestive disorders and insomnia, and even sore throats can be relieved by Hawthorne. The benefits from Hawthorne are seen after several weeks' use, as it is a slow-acting herb.

Hawthorn is available as a dried herb, tea and a tincture, which is more potent than the tea. It is sometimes found under the name Indian Hawthorne. It is thought that the thorns of the Hawthorne tree were used to make the crown of thorns worn by Jesus at his crucifixion.

Herbal Tea Recipe

Hawthorne tea is prepared by steeping 1 – 2 teaspoons of dried leaf and flower, or 2 – 3 teaspoons of dried crushed berry, in 8 ounces of boiling water for 15 – 20 minutes. Hawthorne extract can also be added to other herbal teas for additional effects.
Hawthorne Uses & Herbal Remedies

Hawthorne Tea Benefits

Hawthorne herb is often used to quiet muscle spasms and as a sedative of the nervous system without inducing sleep. A compress of Hawthorne berries has been used to help remove splinters and embedded foreign bodies in the skin. Hawthorne is thought to be an effective treatment in heart disease and will help to improve blood supply to the heart and smooth the heart contractions. The leaf bud of the Hawthorne can be cooked and eaten, the leaf can be chewed to nourish and relieve hunger, the berries can be used to make jellies and fruit sauces, and the flowers can be added to salads.

Hawthorne Side Effects & Cautions

Hawthorne is considered a safe herb; however, excessive use should be avoided.

Lowering Blood Pressure Naturally

We came across this resource for lowering blood pressure and it is a great report. If you are suffering from high blood pressure, you are well aware of the dangers. This report can help you to naturally reduce your blood pressure and gain control. Lots of our readers have reported back excellent results in just 2-3 weeks after implementing a few of the strategies. You can get the High Blood Pressure Remedy Report by clicking here.

Buy Herbal Tea Remedy E-Book

If you are interested in herbal teas, our Complete Herbal Tea Recipe E-Book is a fabulous resource. You can buy it for a limited time for just $9.99. You will get 80 tried and tested herbal teas and herbal blends, along with what each tea is best used for. This herbal remedy tea recipe book will become one of your favorite resources if you are interested in holistic healing, herbs and herbal tea. Use the graphic below to place your order.
For most of us, search engine optimization (SEO) can be more than a little confusing. The main idea is to increase traffic to your site by optimizing pages for search engines, but how does it work? In plain English, how do you get the pages within your site "read" by the search engines and stored in the database of existing pages? In SEO terms, how do you get your site crawled and indexed?

Search engines use two major areas of assessment to produce the search engine result pages (SERPs) most relevant to a specific search: document analysis and link analysis. Document analysis comprises several factors. The search engine crawls pages for keywords related to the search while taking both quantity and location into account. Keywords in more important places carry more weight. For example, a search engine assumes keywords in your domain name, title tag and H1 tags (headings) are more likely to convey subject matter than page content and captions. Search engines also surmise that pages with multiple instances of a keyword are more relevant to the query, the word or phrase entered in a search engine.

Actual page content is another factor that search engines are able to recognize and use to rank pages. Search engines use semantics and lexical analysis to "read" text and judge its quality. Other factors are also measured, such as how long a viewer spends on your page. How do you keep a viewer on your page longer? You can compel them to stay with unique, intelligent content.

Search engines strive to produce not only relevant SERPs but also quality SERPs, which is achieved through link analysis. Search engines assume that the more sites there are that link to your site (called backlinking), the more authoritative your site is. Popularity equals importance. Search engines also "read" what the backlinking site says about your site. The anchor text (the actual text being linked) and the text directly surrounding the link are both considered.
If the anchor text for the backlink is “this is a terrible site,” search engines take that text into account. Therefore, you wouldn’t want another site to link to yours with the anchor text “click here,” but something more relevant, such as your company name or descriptive keywords. Links from any old site won’t do either. The more trusted or authoritative the site is that backlinks to your site, the more weight that backlink will carry. This being said, backlinks from poorly coded sites with inferior content can actually hurt your ranking. Link farms, sites that exist solely to house links for the purpose of influencing rank, are an example of this type of harmful spam. Sites that participate in link farms are penalized by search engines and given a lower ranking. Most importantly, for your site to be crawled and properly indexed, you need a site worthy of traffic. This means having a site with good usability, professional design and high-quality content. A site with good usability is easily crawled because it has clear navigation and organizational hierarchy, making subject matter easily assessed. Professional design conveys authority and trust, making viewers more likely to visit and backlink to your site. Finally, high-quality content will bring links and invite viewers to spend more time on your site. Even professional search engine optimizers are unaware of the exact procedure search engines use to rank pages. If all the secrets were released, the process could be cheated, rendering search engines less effective at returning relevant information. The bottom line is that high rankings come from high-quality pages with professional design and great content. Follow that mantra and you have taken the biggest step toward obtaining a high ranking.
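The idea that keyword occurrences in more important places carry more weight can be illustrated with a toy scorer. This is a deliberately simplified sketch of the concept, not any real search engine's algorithm; the field names and weights below are invented for illustration.

```python
# Toy illustration of field-weighted keyword matching: occurrences of a
# query term in "important" fields (title, H1) count for more than
# occurrences in body text. The weights are invented for this example.

FIELD_WEIGHTS = {"title": 3.0, "h1": 2.0, "body": 1.0}  # assumed weights

def relevance_score(page, query):
    """Weighted count of query-term occurrences across page fields."""
    terms = query.lower().split()
    score = 0.0
    for field, weight in FIELD_WEIGHTS.items():
        words = page.get(field, "").lower().split()
        for term in terms:
            score += weight * words.count(term)
    return score

page = {
    "title": "search engine optimization basics",
    "h1": "how search engines rank pages",
    "body": "search engines crawl and index pages to rank them",
}
```

Here a query term matched in the title contributes three times as much as the same match in the body; real engines combine many more signals (link analysis among them), but the weighting intuition is the same.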
Last modified: 2012-07-07 by rick wyatt

image by Clay Moss, 28 March 2009

In 1865, a star was added, representing Nevada, bringing the total number of stars on the U.S. flag to 36. There were thirteen stripes representing the thirteen original colonies. The 1929 design was altered in 1987 so the name "NEVADA" is placed under the star, not spaced all around it. The state motto is "BATTLE BORN," reflecting the formation of the state during the Civil War. The background color on Nevada flags is officially "cobalt blue," best matched at RGB 0-51-153. Clay Moss, 28 March 2009

Nevada Revised Statutes NRS 235.020 State flag. The official flag of the State of Nevada is hereby created. The body of the flag must be of solid cobalt blue. On the field in the upper left quarter thereof must be two sprays of sagebrush with the stems crossed at the bottom to form a half wreath. Within the sprays must be a five-pointed silver star with one point up. The word "Nevada" must also be inscribed below the star and above the sprays, in a semicircular pattern with the letters spaced apart in equal increments, in the same style of letters as the words "Battle Born." Above the wreath, and touching the tips thereof, must be a scroll bearing the words "Battle Born." The scroll and the word "Nevada" must be golden-yellow. The lettering on the scroll must be black-colored sans serif gothic capital letters. Joe McMillan, 16 February 2000

"The County Flag Array came about as a result of the Nevada Centennial in 1964. It was the brainchild of the Music Director at the Sands Hotel in Las Vegas, Mr. Antonio Morelli. His suggestion was passed to the Chairman of the Nevada Centennial Commission, Mr. Thomas C. Wilson. He requested that the county commissioners of all 17 Nevada counties consider adopting flags. James J.
Ferrigan III, 5 January 2001

As a result all of Nevada's counties began to solicit designs from schools, local artists, and designers. All the counties adopted official flags in time for their display on Nevada Day 1964. On that day a U.S. Marine Corps Honor Guard from the U.S. Navy Ammunition Depot at Hawthorne marched the new flags into the West portico of the capitol in Carson City and presented them to Governor Grant Sawyer in the name of the people of Nevada. Since then they have been used at the capitol building as well as in numerous Nevada Day parades."

image by Clay Moss, 25 March 2009

There are two flags of the Governor of Nevada, one military and the other civil. The military version was updated when specifications were created for the state flag in 1991. James J. Ferrigan III, 17 April 2001

While official, the Governor's flag has apparently fallen into disuse, as the Governor's office, according to Jim, doesn't even know that the flag exists. Clay Moss, 25 March 2009

image by Clay Moss, 25 March 2009

The previous governor's flag had the letters NEVADA wrapped around the central star. Nevada's state seal has mountains, a mine, a railroad crossing a bridge, a plow, etc. The star and sagebrush badge was designed specifically for the flag. Joe McMillan, 5 February 2001

image by Joe McMillan, 21 April 2000

The state military crest, which is the crest used in the coats of arms of units of the National Guard, was granted by the precursor organizations of what is now the Army Institute of Heraldry. The official Institute of Heraldry blazon is "Within a garland of sagebrush a sledge and miner's drill crossed in saltire behind a pickaxe in pale proper." Joe McMillan, 21 April 2000

image by Zach Harden, 10 August 2001
Bone loss in the jaw is common in people who have lost teeth, had gum disease, suffered facial trauma, or have ill-fitting dentures. Even with just a single missing tooth, 40-60% of the supporting bone structure can be lost in the first year, which can make it very difficult to place a dental implant that will last. But today, thanks to advanced bone grafting techniques, we have the ability to grow bone where you need it. This gives us the opportunity to build a strong foundation on which successful implants can be placed to help restore a fully functional, beautiful smile. There are several bone graft options, which will be determined during your treatment planning with your doctor.

Common Bone Grafting Procedures

- Autogenous Bone Graft - Also called autografts, these types of grafts are made from the patient's own bone, removed from another part of the body. The most common donor area is the hip.
- Allogeneic Bone - Bone from a human tissue donor obtained from a bone bank (cadaver bone).
- Xenogenic Bone - Similar to allogeneic bone, but derived from another species, usually a cow.
- Alloplast - Synthetic or artificial bone. An example is INFUSE® Bone Graft, which contains a synthetic version of Bone Morphogenetic Proteins (BMPs), naturally produced in the body to regulate bone formation and healing. Many patients prefer these grafting options because they eliminate the harvesting procedure of the autogenous bone graft. However, with these procedures bone regeneration may take longer and the outcome may be less predictable.
- Ridge Preservation - Replacing bone in the empty space or socket created after a tooth is extracted.
- Sinus Lift - Replacing bone lost in the upper jaw/sinus floor to accommodate a dental implant.
- Guided Tissue/Bone Regeneration - Special membranes are placed under the gum to protect a bone graft and encourage bone regeneration.
- Platelet Rich Plasma - Using platelets from your own blood to promote faster and more efficient healing.
Most bone grafting procedures are fairly simple and are usually performed under sedation in our office. However, major bone grafts are sometimes performed to repair defects of the jaws resulting from traumatic injuries, tumor surgery, or congenital defects. Large defects are often repaired using an autogenous bone graft. These procedures are routinely performed in an operating room and may require a hospital stay.
Susan Yark with Bartow History Museum explained the intent of the two-day camp was to help children see how youth in pioneer days created their own toys and games. The camp also involved some modern aspects for participants.

"We learned that kids in the past played with sticks, so we painted a base coat on [sticks] and put decorations and yarn wigs on them and, of course, the googly eyes," Yark said.

Other activities included weaving, candle dipping and a tour of the museum. "[It's important] to recognize the difference of what happens now and what happened in the past," Yark said. "And the chores they had to do, like candlemaking — it's fun for us, but [pioneers] had to do it once a year for the whole year's lighting of their cabin."

Camp participants also played games, such as the Cherokee bean dice game and a Cherokee game called "firekeeper." "[Firekeeper] uses craft sticks that are painted to look like fire. We take a bandanna handkerchief and put it around the eyes as a blindfold for the firekeeper," Yark said. "The other children try to steal a stick one at a time, and the Cherokee children learned how to be quiet when they were hunting."

While the camp taught from the past, a modern craft — the Styrofoam and yarn octopus — was the crowd favorite for Tuesday, which included children ages 7 to 11. Monday's camp was geared toward children ages 4 to 6.

"We've been making arts and crafts, weaving, and we've made candles and stuff like that. It's been a lot of fun," Euharlee Elementary School third-grader Dylan Hankins said. Both Hankins and fellow camp participant Cody Stewart, a fourth-grader at Taylorsville Elementary School, said they enjoyed crafting the Styrofoam and yarn octopus as well as playing and exploring in the museum's history nook.

Program volunteers and staff said one of the less popular activities was weaving, leaving some children frustrated with the difficult nature of the activity and unable to complete their projects before the end of camp.
However, that wasn’t the case for Caroline Lanier, a third-grader at TES. “Weaving was hard ... but that was my favorite part,” Lanier said. “I finished everything.”
TRAUNIK - Alger County was the site of a rare bird sighting this week when an endangered whooping crane appeared off a dirt road near a corn field, about two miles north of Traunik.

Those who first spotted the bird thought it was an albino sandhill crane. The whooping crane was seen with the grayish or rusty brown-colored sandhill cranes; whooping cranes are distinctively white.

The crane, known to researchers as No. 2705, is a female hatched in captivity at the Necedah National Wildlife Refuge in Wisconsin in spring 2005. In autumn that year, the bird was fitted with radio transmitters and leg bands with a distinct color pattern, allowing for positive identification.

Not only was the sighting unusual for local birdwatchers, but it was also interesting for researchers at the International Crane Foundation in Baraboo, Wis.

"We've never had a bird there before. It's a little bit puzzling," said Sara Zimorski, co-chair of the whooping crane tracking team. "It's completely new to us too, so we're not sure what she's going to do."

In late March, the same whooping crane was found in northern Wisconsin, much farther north than she'd gone before. The crane migrates to Tennessee each winter.

In 2001, a well-publicized scientific effort was undertaken at the refuge to teach whooping cranes to migrate to Florida, following ultralight aircraft. The cranes hatched in captivity are raised by humans wearing costumes to look like cranes.

The female whooping crane seen in Alger County was part of a newer program called "direct autumn release" that was begun in 2005. "Those birds learn to migrate not with aircraft, but with older whooping cranes and sandhill cranes," Zimorski said.

Zimorski said this whooping crane may decide to spend the summer in the U.P., but would likely leave the way she got here - following sandhill cranes, back to Wisconsin before winter.

Biologists estimate that there were between 700 and 1,400 whooping cranes alive in 1865.
Their numbers dropped rapidly, however, and by 1890 the whooping crane had disappeared from the heart of its breeding range in the north central United States, due to unregulated hunting and loss of habitat. Prior to the introduced population at the Necedah refuge, the birds were hanging on to a tenuous existence only in Texas at the Aransas National Wildlife Refuge.

Because the whooping crane is an endangered species, if you do see No. 2705, researchers recommend not approaching closer than 600 feet. In the wild, cranes can live to be 20 to 30 years old.

For more information on cranes, visit the International Crane Foundation Web site at: www.saving-cranes.org.
What Is a Root Canal?

In dentistry, the root canal is the natural channel inside the root of a tooth that houses the soft pulp and nerve, beneath the hard outer layers of enamel and dentin. Because it contains the nerve, the root canal is sensitive and can produce severe pain if it is affected by decay or other problems. Dentists work on the root canal to relieve these issues.

Root Canal Procedure

The root canal procedure is also called endodontic therapy. It may be recommended when the patient begins to experience significant pain because of tooth decay or other issues. The patient will generally see an endodontist, who will explain what is making the root canal procedure necessary and identify both an immediate and a long-term strategy for protecting overall dental health.

A root canal is done under local anesthetic. By removing the damaged nerve tissue, the dentist eliminates the source of the pain. A tooth can still “work” with a dead nerve, and a root canal procedure is a way of making a painful tooth functional again.

Before going through with a root canal procedure, the patient should talk to the dental office about all of the risks and benefits, including cost and time involved, and give a medical history as well as a list of any known allergies to help avoid problems and complications.

- Getting to the Root of Your Pain
- Options after a Failed Root Canal
- Potential Complications of a Root Canal
- Why Is a Root Canal Needed?
- Can a Root Canal be Combined with Other Dental Treatments?
- Options for Restoring a Tooth after a Root Canal
- The Cost and Financing of a Root Canal
- Risks of Apicoectomy Root End Surgery
- How Apicoectomy Root End Surgery Is Performed
- The Cost and Financing of Apicoectomy Root End Surgery

Dentists in Beverly Hills, CA

Dr. Kevin B. Sands specializes in cosmetic dentistry, taking pride in offering the finest in patient care and services to each and every patient. He is determined to give you the smile you deserve!
In fact, some of the most beautiful smiles in Hollywood have come through our doors. Dr. Kevin B. Sands has trained with some of the world's most prominent cosmetic dental specialists. He is rapidly becoming known as Beverly Hills' leading cosmetic dentist for people ...
Let food be your medicine! In India, food has been used to cure minor ailments for years. Learn how you can cure aches and pains, lifestyle conditions, minor skin and hair problems and common ailments at home!

Body rash or skin rash is usually an inflammation of the skin, with a change in color and texture on the affected area. Body rash could be the result of irritation, disease or an allergic reaction. Allergies could be to food, plants, chemicals, animals, insects or other environmental factors. Body rash could affect the entire body or could be limited to a specific area. Sometimes body rashes can be contagious.

Body Rash Remedy

Ingredients: holy basil leaves, olive oil, garlic, salt and pepper.

Mix 10-12 basil leaves with 1 tbsp olive oil, 2 garlic cloves, salt and pepper. Apply a coat of this mixture to the rashes.

Take foods rich in Vitamin C. It has antioxidant properties that will help in fighting body rash.

Avoid foods causing allergic skin rash on the body or back. Do not scrub your skin when there is a rash breakout. Avoid using soap; switch over to gentle body cleansers. Do not expose the area to direct sunlight and hot water.

Apply olive oil to the affected area to get relief from the body rash. Another effective natural cure for body rash and back rash is to apply baking powder to the affected area. Pour a cup of uncooked oatmeal into your bathwater and soak in the tub for natural treatment of body rash; it relieves inflammation. To relieve a skin rash, make a poultice from dandelion, yellow dock root and chaparral.

All advice is intended to be for informational purposes only, and not a substitute for professional or medical advice and/or diagnosis/treatment. DesiDieter does not provide medical advice and is not a substitute for professional medical advice from a qualified healthcare provider.
About seven years ago, a brain scientist was working with primate brain signals three floors underground. The scientist was using special neurological amplifiers that amplified micro-volt signals. He called me to solve a strange, sporadic noise problem that had appeared in his amplifier outputs. His laboratory had been in operation for many years before this problem appeared.

When confronted with a mysterious problem, I always ask one particular question: "What has changed, what is different?" But in this case, the answer was nothing.

Connecting up a scope, I soon saw a signal on the screen that coincided with a noise from a speaker connected to a neural amplifier output. We heard a distinctive "click" sound. By slowing the horizontal time base to one second/division, we could see the entire several-millisecond-long noise pulse. But this was no ordinary noise pulse -- it was actually a perfect bipolar square wave. Envision a single period of a full sine wave on a scope screen, then convert that same wave pattern to a bipolar square wave. That's exactly what it was. Every few seconds it appeared, but each time, the starting and ending polarity was flipped. There was no question this was an intelligently generated signal -- but from where?

Soon, a pattern was discernible. Pulse spacing was a consistent 5.5 seconds.

Remember, this laboratory was about 60 feet underground. The building had corrugated steel plates as the base, with three reinforced concrete floors up above. Line of sight with the local radar dish required that you travel through wet dirt, steel, rock, and reinforced concrete for about two miles at a slight upward angle to reach the local airport dish. And microwaves will not travel through any of these materials very well.

Certain types of microwave sources contain a property few engineers know about -- scalar energy.
Scalar electromagnetic waves have the E and B fields in phase, unlike normal electromagnetic waves where E and B fields are typically 90 degrees out of phase. There is another interesting characteristic of scalar waves -- they are not stopped by shielding, even by a Faraday cage. When E and B fields are in phase, they do not interact with metal molecules like conventional RF does, which makes shielding useless. Usually, only distance can stop scalar waves.

Based on the waveform period, there could be only one source of this signal. I called the local international airport's TRACON (Terminal Radar Approach Control) facility. My one question to the engineer on duty was simply this: "What is the rotation period of your radar dish? I'm certain I'm picking up your signal at the university." He replied: "Let me look out the window and see." A short time later, he came back to the phone, saying: "About five and a half seconds."

Aha! There was my signal source. Conventional microwave theory says this was impossible, but there it was. Clearly these were not conventional microwaves at all.

The engineer then asked where I was picking up their signal and I told him. He mumbled, "Guess it would be good for tracking submarines, too." Apparently, I was correct. This scalar signal disappeared overnight and never returned.

As for the real purpose of this scalar pulse, which traveled through two miles of dirt, reinforced concrete, steel, and rock? It remains unknown to this day. Shutting down that short-lived signal, which was transmitted for just one day, did not cause the airport to close. Apparently, it had little to do with air traffic control.

Here is the strangest part of all: it was a signal that was DC-based and detected by a neural amplifier with a -3dB bandwidth of 50Hz. No diode detector, no RF amplifier, no demodulators, no IF stages, no dish, no waveguides, none of the usual RF components.
Yet, this very low frequency signal definitely originated from a radar dish after traveling through about two miles of dirt and other materials.

This entry was submitted by Ted Twietmeyer and edited by Rob Spiegel. Ted Twietmeyer’s background includes a patented optical backplane technology. He also has more than 30 years of experience in defense and aerospace systems engineering, project management, and the training of customer technical personnel. Since 2000, Ted has been designing advanced, custom-designed, high-performance systems at the board level.

Tell us your experience in solving a knotty engineering problem. Send stories to Rob Spiegel for Sherlock Ohms.
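The key deductive step in the story, measuring the spacing between observed pulses and comparing it with the dish's rotation period, can be sketched in a few lines. The timestamps below are illustrative values, not measured data from the laboratory.

```python
# A minimal sketch of the period-matching inference: given the times at
# which the mystery pulses were observed, estimate the repetition period
# and compare it with the radar dish's rotation period.

def estimate_period(timestamps):
    """Return the mean interval between successive pulse timestamps."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical observation times in seconds, spaced roughly 5.5 s apart
pulses = [0.0, 5.5, 11.1, 16.5, 22.0, 27.4]

period = estimate_period(pulses)
dish_rotation = 5.5  # seconds per revolution, per the TRACON engineer

# Within a small tolerance, the pulse spacing matches the dish rotation,
# pointing to the radar as the likely source.
print(round(period, 2), abs(period - dish_rotation) < 0.2)
```

A longer observation window tightens the estimate: with more pulses, jitter in individual intervals averages out, which is why the consistent 5.5-second spacing was such strong evidence.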
“Nothing struck deeper fear into the hearts of southerners, whether they held slaves or not, than the idea of a slave revolt.” -- Historian Kenneth C. Davis

As a young man, Turner was sold to Thomas Moore. Upon Moore's death, Turner moved to the home of Joseph Travis, the new husband of Moore's widow. In each setting, he was remembered for his praying, fasting, and visions.

A solar eclipse in February 1831 was interpreted by Turner to be the sign for him to take action. He and a few trusted friends commenced planning an insurrection. Originally slated for July 4 but postponed because Turner fell ill, the plan resurfaced on August 13, when an atmospheric disturbance made the sun appear bluish-green. Again construing this as a sign, Turner and his fellow slaves decided to act.

On August 21 -- 175 years ago this week -- they killed the entire Travis family as they slept. Thus began a house-to-house killing spree that swelled Turner's "army" to more than 40 slaves. By the morning of August 22, word of the rebellion had gotten out... prompting a calling up of the militia and a wave of fright through the region.

Turner and his men continued marching and killing but were badly outnumbered by the white militia. Many slaves were arrested or killed, but Turner eluded capture for two months. "The whites around Southampton... were thrown into an utter panic, many of them fleeing the state," says Davis. By the time Turner was finally caught on October 30, 55 whites had been stabbed, shot, and clubbed to death.

Turner's actions, while doomed to end with his death at the hands of the state, had a permanent impact on the South and its "peculiar institution." As Davis explains, "To whites and slaves alike, he had acquired some mystical qualities that made him larger than life, and even after his hanging, slave owners feared his influence." Turner and 54 others were executed, but the violent rebellion brought Virginia to the verge of abolishing slavery.
Predictably, the state chose instead to clamp down harder on slaves, but this served only to heighten awareness of how untenable the situation had become. In the immediate future lurked John Brown, the Underground Railroad, Frederick Douglass, The Liberator, Uncle Tom's Cabin, and, of course, the Emancipation Proclamation. The South had indeed been put on notice.

Mickey Z. is the author of several books, most recently 50 American Revolutions You're Not Supposed to Know (Disinformation Books). He can be found on the Web at: www.mickeyz.net.
Cindy Driscoll, State Fish and Wildlife Veterinarian, Oxford Laboratory. Posted on January 11, 2013.

Report Cold-Stunned Sea Turtles

Sea turtles from the mid-Atlantic region usually migrate south in late fall. However, when feeding in shallow bays they can get “caught” in a cold-stun event if water temperatures drop rapidly. Sea turtles are cold-blooded and must rely on the external heat of warmer southern waters at this time of year. Because cold-stunned turtles are often lethargic or motionless, they can be mistaken for dead, when in fact they need help quickly.

What can you do to help? Call the Natural Resources Police toll-free, 24/7 hotline number to report any sea turtles found alive or dead during winter months: 1-800-628-9944. Please provide your name, phone number, and the stranding location (along with GPS coordinates if possible). A biologist with the Maryland stranding network will return your call.
St Patrick’s Trian

Kids love this place, which in three separate and highly interactive exhibitions tells the history of Armagh, relates the story of St Patrick and recounts the tale of Gulliver’s Travels. The name Trian [say: Tree-an] is taken from the three old divisions of Armagh – into Irish Town, English Town and Scots Town – but the stories it tells have much wider interest and they are very well told.

The Armagh Story

You will have driven through Armagh by the time you see this, and will see how the city grew, from its earliest pre-Christian origins through the arrival of St Patrick, raids by the Vikings and the development of a fine Georgian city, right to the modern day. It is not a dull presentation – kids are engaged with drawings and via interactive screens and get to join in with activities along the way.

St Patrick’s Story

Much of what is known about St Patrick is contained in The Book of Armagh, a treasure of early Christian Ireland which is on display here in facsimile (you can see the original in Trinity College, Dublin). In this part of the Trian we learn not just about St Patrick, but about how these manuscripts were created. Visitors can trace some of the images from the book – after they have learned to make their own ink and quill pen! Interactive displays allow you to converse with and question some of those involved in creating the book.

Gulliver’s Story

Here the world expands and you become a tiny Lilliputian, donning costumes and sitting in huge chairs, as you enter the home of the giant Gulliver to hear his amazing story. It’s really fun and you even get to tie up the giant!

Visiting St Patrick’s Trian

The exhibitions are open year round, all day Monday-Saturday, afternoons only on Sunday. More detailed info at the website. There is an auditorium at the site where events of various kinds, including musical events, are held.
There is no information about these on the website, so you will need to contact them to find out if there is anything on when you are in the area. There is a nice, bright and airy cafe at the centre, serving good snacks and light meals, with a terrace to eat on if the weather allows.
Above the Bit: Where the horse evades the rider's aids by raising the head above the level of the rider's hands. This reduces the amount of control the rider has over the horse.
Action: The movement of the horse's legs.
Aids: Signals or cues by which the rider communicates his wishes to the horse. The "natural" aids include the voice, the legs, the hands, and weight. "Artificial" aids include the whip and spurs.
Airs Above the Ground: High school movements performed by highly trained horses, where either the front legs or all four legs are off the ground. Airs above the ground include the levade and the capriole.
Amble: The slower form of the lateral pacing gait. (See Pacer)
Back: To step a horse backward.
Barrel Racing: A timed event in Western riding where horse and rider complete a cloverleaf pattern around three barrels.
Bascule: Term used to describe the arc a horse makes as it jumps a fence.
Blistering: Application of a caustic agent, or blister, to the leg. Formerly and, occasionally, still used in the treatment of a number of conditions, such as spavin, ringbone, and bowed tendon. Thought to encourage internal healing in some cases.
Bosal: A braided noseband used in western equitation; a Western bitless bridle.
Breaking, or Breaking-In: The early education of the young horse, where it is taught the skills it will need for its future life as a riding or driving horse.
Broken-In/Broke to Ride: Horse that has been accustomed to the tack and the rider and has begun initial training. (Also called greenbroke.)
Buck: A leap in the air with the head lowered and the back arched.
Canter: Three-beat gait of the horse in which one hind leg strides first, followed by the opposite diagonal pair and finally the opposite foreleg (the leading leg). Called the lope in Western riding.
Capriole: One of the Airs Above the Ground in which the horse leaps with all four legs and strikes out with the hind legs in mid-leap.
Cavalletti: Adjustable low wooden jumps used in the schooling of horse and rider.
Chip/Chip-In: When a horse puts in a short, additional stride in front of a fence.
Chukker: A seven-and-one-half-minute period in a polo game; from a Hindi word meaning "a circle."
Class: A grouping of horses in a show involving horses with riders or shown at hand that perform according to the class specifications as described in the rulebook of that show.
Collected: Controlled gait; a correct, coordinated action.
Collection: Where the rider, by means of carefully balanced driving and restraining aids, causes the horse's frame to become compacted and the horse light and supple in the hand. The baseline is shortened, the croup is lowered, the shoulder is raised and the head is held on the vertical.
Cooling Out: Cooling down a heated horse by walking, brushing, giving very small drinks of water, and sponging him off after he has been worked.
Counter Canter: School movement in which the horse canters in a circle with the outside leg leading, instead of the more usual inside leg.
Courbette: One of the Airs Above the Ground. After performing the levade, the horse bounds or hops forward on bent hind legs.
Combined Training: Equestrian competition held over one or three days and including the disciplines of dressage, cross country, and show jumping. Also known as Eventing.
Cross-firing: Condition in which the hind foot strikes the opposite front leg or hoof.
Crow Hopping: When a horse hops or leaps repeatedly in the air, with all four feet off the ground at the same time, he is said to be crow hopping.
Crow Hops: Mild bucking motions.
Cues: Another name for aids. Signals by which the rider communicates his wishes to the horse.
Dishing: A faulty action, where the foot of the foreleg is thrown outward in a circular movement with each stride.
Disunited: Canter in which the horse's legs are out of sequence.
Diagonals: The horse's legs move in pairs at the trot, called diagonals. The left diagonal is when the left foreleg and right hindleg move together; the right diagonal is when the right foreleg and the left hindleg move together.
Dressage: (i) The art of training the horse so that he is totally obedient and responsive to the rider, as well as supple and agile in his performance. (ii) Competitive sport which, by a series of set tests, seeks to judge the horse's natural movement and level of training against an ideal.
Driving: A discipline in which a horse or horses pull a vehicle such as a carriage, cart, or wagon.
Engagement: The hindlegs are engaged when they are brought well under the body.
English Pleasure: A saddleseat class judged on manners, performance, attitude, and quality of the horse.
Equitation: The art of horse riding.
Eventing: Equestrian competition held over one or three days and including the disciplines of dressage, cross country, and show jumping. Also known as Combined Training.
Extension: The extension of the paces is the lengthening of the frame and stride. The opposite of collection.
Extravagant Action: High knee and hock action such as that seen in the Hackney and the Saddlebred.
Flat Race: A race without jumps.
Floating: The action associated with the trotting gait of the Arabian horse.
Flying Change: Change of canter lead performed by the horse to rebalance during turns and changes of direction.
Forefooting: Roping an animal by the forefeet.
Forging: A fault in a gait which occurs when a hind foot strikes the bottom of the front foot on the same side.
Four-In-Hand: A team of four harness horses.
Fox Trot: A short-step gait, as when passing from walk to trot.
Gait: The paces at which horses move, usually the walk, trot, canter, and gallop.
Gallop: Four-beat gait of the horse, in which each foot touches the ground separately, as opposed to the canter, which is a three-beat gait.
Going: Term used to describe the nature of the ground, i.e. deep, good, rough.
Green: A horse that is in the early learning stage of his particular discipline is said to be green.
Greenbroke: Horse that has been accustomed to the tack and the rider and has begun initial training. (Also called broken-in or broke to ride.)
Ground Line: Pole placed on the ground in front of a fence to help the horse and/or rider judge the take-off point.
Gymkhana: Mounted games, including bending poles, sack race, musical sacks, and a variety of other games and races.
Gymnastic: Combination of fences placed at relative distances to each other, used in the training of the jumping horse.
Habit: Traditional riding attire for sidesaddle riders.
Half Halt: An exercise, basically a "pay attention, please" used to communicate to the horse that the rider is about to ask for some change of direction or gait, or other exercise or movement.
Half Pass: Dressage movement performed on two tracks in which the horse moves sideways and forwards at the same time.
Halter-broke: Term used to describe a young horse that has been accustomed to the very basics of wearing a halter.
Halt: When the horse is at a standstill.
Hand Gallop: An extension of the canter.
Haute Ecole: The classical art of advanced riding. See also Airs Above the Ground.
High School: Advanced training and exercise of the horse.
Horsemanship: The art of equitation or riding.
Hunt Seat: An English discipline which includes riding on the flat and over fences to demonstrate suitability to the hunt field.
Impulsion: Strong, but controlled, forward movement in the horse (not to be confused with speed).
In Front of the Bit: A term used to describe a horse which pulls or hangs heavily on the rider's hand.
In Hand: When a horse is controlled from the ground rather than being ridden.
Indirect Rein: The opposite rein to the direction in which the horse is moving. When giving an indirect rein aid, the instruction comes by pressing the opposite rein against the horse's neck.
Inside Leg: The legs of both horse and rider which are on the inside of any circle or curved track being travelled.
Inside: In a ring, the side of the horse closer to the center of the ring.
Interference: Faulty gait in which a foot strikes the fetlock or cannon of the opposite foot; most often done by base-narrow, toe-wide, or splay-footed horses.
Jog: Western riding term for trot. Also used to describe a slow, somewhat shortened pace in English riding.
Leader: Either of the two leading horses in a team of four, or a single horse harnessed in front of one or more horses. The "near" leader is the left hand horse and the "off" leader is the right hand horse.
Leg Up: Method of mounting in which an assistant stands behind the rider and supports the lower part of his left leg, giving a boost as necessary as the rider springs up off the ground.
Lead Rope: A rope which attaches to the halter and is used to lead or tie a horse.
Levade: A classical air above the ground in which the forehand is lifted with bent forelegs on deeply bent hind legs - a controlled half-rear.
Line-Up: A command used in the show ring for riders to come to the center of the ring and form a line.
Lope: Slow Western canter.
Manege: An enclosure used for training and schooling horses. Also called a school.
Nearside: The left hand side of the horse.
Offside: The right hand side of the horse.
On the Bit: A horse is said to be "on the bit" when he carries his head in a near vertical position and he is calmly accepting the rider's contact on the reins.
Outfit: The equipment of rancher or horseman.
Outside: When riding in a ring, the side closest to the rail or fence of the ring.
Overface: To present a young horse at a fence which is beyond his level of training, or beyond his physical capability.
Overreaching: Faulty gait in which the hind foot steps on the heel of the front foot on the same side. Occurs most often when the horse is galloping or jumping.
Pacer: A horse which moves its legs in lateral pairs, rather than the conventional diagonal pairs.
Pace: A lateral two-beat gait mostly performed by gaited horses.
Paddling: Throwing the front feet outward as they are picked up; most common in toe-narrow or pigeon-toed horses.
Passage: Dressage movement in which the horse trots in an extremely collected and animated manner.
Passenger: One who rides a horse without control, letting the horse go as he wishes.
Performance Registry: A record book in which the performance of animals is recorded and preserved.
Piaffe: Dressage movement in which the horse trots in place, with forehand elevated and croup lowered.
Pirouette: Dressage movement in which the forelegs of the horse describe a small circle, while the hind legs remain in place, one of them acting as a pivot.
Plantation Pleasure: An English class judged on manners and way of going to include Tennessee Walking Horses, which will show at the flat walk, running walk, and canter.
Pleasure Driving: A class of horses pulling carts which is judged on manners and way of going.
Pointing: Perceptible extension of the stride with little flexion; likely to occur in the long-strided Thoroughbred and Standardbred breeds - animals bred and trained for great speed.
Posting Trot: The action of the rider rising from the saddle in rhythm with the horse's trot. (Also called Rising Trot.)
Pounding: Heavy contact with ground instead of desired light, springy movement.
Rack: The fifth gait of the American Saddlebred - a flashy four-beat gait.
Rein Back: When a horse moves backward with the hooves being set down almost simultaneously in diagonal pairs.
Reining: Type of Western riding in which advanced movements such as spins and slides are executed in various patterns.
Reverse: A command used in the show ring to indicate a change of direction.
Rising Trot: The action of the rider rising from the saddle in rhythm with the horse's trot. (Also called Posting Trot.)
Running Walk: A four-beat gait faster than a walk, often over 6 miles per hour.
Saddle Seat: A discipline of riding which is typically used for breeds that show with high knee and hock action and a very flashy, animated way of going.
School Movements: The gymnastic exercises performed in the school or manege.
School: Enclosed, marked out area used for the training and exercise of the horse. (See also Manege.)
Serpentine: School movement in which the horse, at any pace, moves down the center of the school in a series of equal-sized loops.
Shoulder-In: Two-track movement in which the horse is evenly bent along the length of its spine away from the direction in which it is moving.
Showmanship: A class at a horse show judged on the exhibitor's ability to fit (prepare) and show a horse at halter, being poised and confident while leading a well-groomed and conditioned horse through a precise pattern.
Side-wheeler: A pacer that rolls the body sidewise as he paces.
Single-foot: A term formerly used to designate the rack.
Speedy Cutting: The inside of the diagonal fore and hind pasterns make contact; sometimes seen in fast-trotting horses.
Spread: To stretch or pose.
Trailer: Transportation vehicle for one or more horses, which is towed behind another vehicle.
Transition: The act of changing from one pace to another. Walk to trot and trot to canter are known as "upward transitions." Canter to trot and trot to walk are known as "downward transitions."
Trappy: A short, quick, choppy stride; a tendency of horses with short, straight pasterns and straight shoulders.
Traverse or Side Up: Lateral movement without forward or backward movement.
Tree: The wooden or metal frame of a saddle.
Trot: Moderate-speed gait in which the horse moves from one diagonal pair of legs to the other, with a period of suspension in between.
Two Track: School movements in which the hindlegs follow a separate track from that made by the forelegs.
Vaulting: Equestrian sport involving gymnastic exercises done on the back of a moving horse.
Vertical: Upright fence with no spread. Can be rails, planks, gate, or wall.
Walk: A slow four-beat gait.
Warming-up: The process of going through the gaits while performing suppling exercises to limber up both horse and rider in the beginning of a workout.
Whoa: A verbal command used to signal a well-trained horse to stop. Usually combined with gently pulling back on the horse's reins.
Wrangling: Rounding up; saddling range horses.
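The beat counts running through the gait entries above (four-beat walk and gallop, two-beat trot and pace, three-beat canter) can be summarized as footfall sequences. A minimal sketch, assuming a left-lead canter and gallop; the abbreviations LH/RH/LF/RF (left/right hind, left/right fore) are mine, not standard glossary terms.

```python
# Footfall sequences for the gaits defined above, written as tuples of
# "beats"; each beat lists the feet that strike the ground together.
# LH/RH = left/right hind, LF/RF = left/right fore.
# Canter and gallop are shown on the left lead (an assumption).
GAITS = {
    "walk":   (["LH"], ["LF"], ["RH"], ["RF"]),   # four-beat gait
    "trot":   (["LH", "RF"], ["RH", "LF"]),       # two-beat, diagonal pairs
    "pace":   (["LH", "LF"], ["RH", "RF"]),       # two-beat, lateral pairs
    "canter": (["RH"], ["LH", "RF"], ["LF"]),     # three-beat, left lead
    "gallop": (["RH"], ["LH"], ["RF"], ["LF"]),   # four-beat, each foot separate
}

def beats(gait):
    """Number of beats in a gait, i.e. distinct ground-strike moments."""
    return len(GAITS[gait])

for gait in GAITS:
    print(gait, beats(gait))
```

Note how the trot and the pace differ only in which feet are paired (diagonal versus lateral), which is exactly the distinction the Pacer entry draws.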
Tracing Jane Austen's Popularity

Austen is now so popular that even non-novel readers recognize the name from seeing it in various, unexpected places like tea mugs and dating guides. Her immediate Regency siblings and her future Victorian collateral descendants would faint at seeing their sister and aunt depicted like this, for they presented her as a near saint. But Austen has also stepped off the pedestal into the trenches of World War I and classrooms ranging from high school to post-doctoral seminars.

Starting the Saint Jane myth

When Henry Austen wrote his biography of his sister for the posthumous publications of Northanger Abbey and Persuasion, he presented a woman ready for sainthood:

Faultless herself, as nearly as human nature can be, she always sought, in the faults of others, something to excuse, to forgive or forget. Where extenuation was impossible, she had a sure refuge in silence. She never uttered either a hasty, a silly, or a severe expression . . . She was thoroughly religious and devout; fearful of giving offence to God, and incapable of feeling it toward any fellow creature. . . .

Henry's notice, of course, is understandably influenced by his feelings of loss over his 41-year-old sister. Henry also had recently become a clergyman of the Anglican Evangelical persuasion, so this recent career move certainly affected his decision to write of his sister's religious devotion.

But imagine the shock when the edition of her letters came out in 1932. Here's another Austen one-liner from a letter that completely undercuts Henry's "incapable of feeling offence" line: "I do not want people to be very agreeable, as it saves me the trouble of liking them a great deal" (Letter, December 24, 1798).

Yet 1932 was still a long way from 1818, when Henry wrote the biographical notice. And so the Austens had time to perpetuate "Saint Jane."

Victorianizing Jane Austen

Austen's next biographer was a beloved nephew, James Edward Austen-Leigh.
By the time he published A Memoir of Jane Austen in December of 1869 (though dated as 1870 on the title page), he was a mutton-chopped Victorian. And so it's not surprising that he presented this type of Aunt Jane to the world with the help of his two sisters; all three of them, the children of Jane Austen's eldest brother James, knew their aunt well and still remembered her. The Memoir opens by saying that Austen's life was "singularly barren" of events. This portrayal doesn't look too promising! And because the Victorian mindset is one of silence and coverup, the Memoir proceeds accordingly. Not that Austen has anything to hide. But the Memoir presents Aunt Jane as a simple woman who had "genius" and lived a happy Christian life without complexity. The sarcasm, cynicism, and satire that you've seen in her letters and even seen in some of her fiction are all missing. Nevertheless, the Memoir satisfied the appetites of a new generation of Austen readers for information on the author's life. And it boosted Austen's popularity!
Taking Austen to the trenches
In 1894, the English critic George Saintsbury coined the word "Janeite" to mean an enthusiastic admirer of Austen's works. But Rudyard Kipling popularized the term in a short story called "The Janeites," first published in 1924. Written in heavy cockney slang, the story isn't the easiest text in the world to read. But it's worth the effort. Here's the story in summary: Soon after WWI, the story's narrator goes to a Masonic lodge on cleaning day. One of the cleaners is Humberstall, who'd been wounded in the head but who still returned to the Western Front as assistant mess waiter for his old Heavy Artillery platoon. A simple and uneducated man, he tries to explain how his boss, the senior mess waiter, was able to talk with the university-educated officers on equal ground because of their shared love of Jane Austen's novels.
Humberstall is coached on the novels and is led to think that the Austen readers, or Janeites, are all members of a Masonic-like secret society. They scratch the names of Austen characters on the guns. Then all but Humberstall are killed by a hail of gunfire. When he quotes Emma to a nurse, another secret Janeite, she saves his life by getting him on the hospital train back to England. Humberstall still reads Austen's novels as they remind him of his comrades back in the trenches. "There's no one to match Jane when you're in a tight place," he says, noting the comfort her novels provide. Yet her comfort isn't all healing, for as the other Masonic Lodge cleaner notes, Humberstall's mother has to come and take him home from the Lodge because he gets "fits." WWI soldiers agreed that while they were overseas in the war, reading Austen was an effective mental escape from gas masks and bayonets. The Army Medical Corps advised shell-shocked soldiers to read Austen for the books' soothing effects. Supposedly, Mr. and Mrs. Rudyard Kipling found comfort in Austen's novels, which they read to each other after their son was killed in 1914 in WWI.
Taking Austen to school
Austen's novels have been continuously available since 1833, when England's Bentley Standard Novel Series produced affordable editions of her works. In 1923, R. W. Chapman's edition of Austen's novels was published by Oxford University Press. This was one of the earliest scholarly editions of the works of any English novelist. While Austen had readership popularity before, she now had academic distinction. Scholars began to pay serious attention to her novels, producing literary analyses. Austen's use of irony was especially appealing to American academic critics writing just after WWII because analyzing her verbal irony made use of a popular new critical approach that treated the text as an object in itself and studied that text in terms of how the author used language.
A 1997–1998 study by the National Association of Scholars showed that in the 1964–1965 academic year, 25 liberal arts colleges surveyed in the United States had no courses that cited Jane Austen in their catalogs. When those same schools were surveyed in the 1997–1998 academic year, however, Austen had moved into third place, just behind those old standbys Shakespeare and Chaucer. Austen's appearance in college catalogs' course descriptions is likely the result of the Women's Movement and the expansion of the canon (literary texts that authorities consider the best representatives of their times). For along with Austen on the 1997–1998 lists were Virginia Woolf, Toni Morrison, Emily Dickinson, George Eliot, and Zora Neale Hurston. The earlier list included no female writers.
JOE MAHONEY/I-News Network
Single parenthood is a bigger indicator of poverty than race, according to an analysis of six decades of U.S. Census Bureau data by I-News Network. Combined as it often is with curtailed educational and employment opportunities, the rise of the single-parent family is a major factor in the widening disparities between black, Latino and white state residents in the decades after the civil-rights era. The I-News analysis covered family income, poverty rates, high school and college graduation, and homeownership as reported by the Census Bureau from 1960 to 2010. Health data and justice reports also were examined. While the rate of single parenthood has increased among all races, its surge has been particularly dramatic among blacks. In Colorado, more than 50 percent of black households with young children are headed by a single parent compared to 25 percent of white households. Among Latino households in the state with young children, 35 percent are headed by a single parent, according to the I-News analysis. Those figures dovetail with the growing trend of births to single women. Nationally, 29 percent of white babies are born to unwed mothers, according to the federal Centers for Disease Control and Prevention, while 53 percent of Hispanic babies and 73 percent of black babies are born to single mothers. Denver Mayor Michael Hancock said, “The family structure has disintegrated in a sense. That challenge is real.” While many single parents raise thriving, productive children, the growing trend of fatherless homes has enormous implications for future generations. Children raised in female-headed homes in Colorado are four times more likely to live in poverty than those from married-couple homes, according to the I-News analysis. Other studies show they are less likely to go to college or even graduate high school.
Regina Huerter, co-founder of the Gang Rescue And Support Project in Denver, which primarily serves Hispanic youths, theorized that the widening divide between the races stems from a “mutually reinforcing” convergence of births to unwed mothers, growing minority male incarceration rates and the demise of minority neighborhoods. All of these factors weakened the fabric of family life and changed the norms that defined communities just five decades ago, Huerter said. At some point, she said, it became socially acceptable for unmarried women to have babies. “When did that happen? What was the date? My mother would have killed me if I’d gotten pregnant,” said Huerter, who is 52. Huerter, Hancock and others linked the absence of fathers in the home, in part, to the rising number of black and Latino men in prisons, often for drug crimes. In 2010, about one in every 20 black men was incarcerated in Colorado state prisons compared to one out of every 50 Latino men and one of every 150 white men, according to an I-News analysis of government figures. The state’s black and Latino incarceration rates are higher than the national averages, where disparities also exist, according to an analysis of Bureau of Justice reports. Nationally, one of every 33 black men and one of every 83 Latino men was behind bars in 2010. Colorado’s rate for white men was equivalent to the national figure, one in 150. “The combination of the war on crack and mandatory sentencing saw a huge sweep of black males into prison and further degeneration of the black family,” said Theo Wilson, a district director for BarberShop Talk, a mentorship organization for men. The Rev. Leon Kelly, who has worked with thousands of Denver’s at-risk inner-city kids, believes intergenerational abandonment lies at the heart of the single-parenthood phenomenon. “When you have some of these heads of household that are women, sometimes they feel like, ‘This is the norm. This is what I was raised with,’” Kelly said.
“They’re so used to people coming in and out of their life. With their kids, their babies, it’s something that nobody can take away. Their kids are going to be there.” I-News is a nonprofit news service serving Colorado.
Per Square Meter
Warm-up: Relationships in Ecosystems (10 minutes)
1. Begin this lesson by presenting the PowerPoint, “Per Square Meter”.
2. After the presentation, ask students to think of animal relationships that correspond to each of the following types: Competition, Predation, Parasitism, and Mutualism. a. For example, two animals that compete for food are lions and cheetahs (they compete for zebras and antelopes).
3. Record the different types of relationships on the board.
Activity One: My Own Square Meter (30 minutes)
1. Have students go outside and pick a small area (about a square meter each) to explore. It is preferable that this area be grassy or ‘natural’. The school playground might be a good spot.
2. Each student should keep a list of both the living organisms and man-made products found in their area (e.g., grass, birds, insects, flowers, sidewalk, etc.). Students are allowed to collect a few specimens from this area to show to the class. If students do not have jars, they can draw their observations. *See Reproducible #1
Activity Two: Who lives in our playground? (10 minutes)
1. After listing, collecting, and drawing specimens, students should return to the classroom and present their findings. a. Have the students sit in a circle. Each student should read his or her list of findings out loud. If they collected specimens or drew observations, have them present them to the class.
2. Make a list of these findings on the board. Only write repeated findings once (to avoid writing grass as many times as there are students). Keep one list of living organisms and one list of man-made products.
3. For now, focus on the list of living organisms. As a class, help students name possible relationships between the organisms. See if they can find one of each type of relationship. For example, a bee on a flower is an example of mutualism because the nectar from the flower nourishes the bee and in return, the bee pollinates the flower.
Activity Three: Humans and the Environment: Human Effect on One Square Meter (15 minutes)
1. Now that students have focused on the animal relationships of their square meter, it is time to examine the effect of humans on the natural environment. Focus on the human-made product list. Ask students to consider the possible relationships between the human-made products and the environment. Prompt a brief class discussion on the effects of man-made products on the environment. Use the following questions as guidelines.
a. What is the effect of an empty drink bottle (or any other piece of trash) in a grassy field? Will it decompose? Will it be used by an animal as a habitat or food? Answer: Trash is an invasive man-made product. Most trash is non-biodegradable and is harmful to the environment and to eco-system relations. Therefore, it is a harmful addition to the square meter.
b. Who left the bottle there? Do you think they are still thinking about it? Did they leave it there on purpose? Why did they leave it there? Answer: Most people litter thoughtlessly. They are not thinking about their actions and how they may affect the environment or eco-systems. It is important that people recognize that litter has a major effect on the environment.
c. What about a bench? Does a park bench have the same effect on the environment as a piece of trash? Answer: A park bench can be considered a positive human-made product. A park bench has little negative effect on the environment and even helps humans further appreciate eco-systems. The park bench may even provide shelter or a perch for the eco-system's living organisms.
d. Is there a difference between positive human-made products and negative ones? What are some examples of each? Answer: Yes, there is a difference between positive and negative human-made products. Positive products have minimal effect on the functioning of eco-systems whereas negative products have major effects on eco-systems.
An example of a positive human-made product would be a solar-powered house. An example of a negative human-made product would be a car that produces a lot of pollution.
Wrap Up: Our Classroom Eco-Web (20-30 minutes)
1. Have students create classroom artwork by illustrating the relationships within their eco-systems.
2. Each student should draw at least two components of his or her square meter.
3. After everyone has finished their illustrations, create a web relating the illustrations. Draw arrows between illustrated components with written indications of the type of relationship exemplified.
4. Post the finished product in the classroom so that students can see the interconnectedness of the Earth's eco-systems.
Extension: Exploring Aquatic Eco-Systems (On-going Activity)
Students can explore another type of eco-system by creating a classroom aquarium or terrarium. The supplies for both of these mini eco-systems can be found at your local pet store. Students should help set up and maintain the aquarium or terrarium throughout the year. Periodically, students should observe how the mini-ecosystem is progressing, note changes, and assess the relationships between the organisms of the eco-system. This way, students are able to directly participate in the functioning of a natural system. Another related activity might be to take your students on a field trip to a different eco-system from that of your school. If you live near a river, lake, or ocean, take them there to explore different ecological relations. If you live in a city, examples of diverse eco-systems can be found at the local zoo or aquarium.
The American Lung Association’s 2008 State of the Air Report found that about one-third of people in the U.S. live in an area with unhealthful levels of ground-level ozone pollution. Ozone pollution, which forms when emissions from vehicles, yard care equipment, and other sources react with heat and sunlight, can cause health problems for kids and adults suffering from asthma, bronchitis, and emphysema. Viewer Tip: You can help protect air quality at home by taking simple steps to reduce backyard emissions. Make sure that lawn mowers and other gas-powered equipment are functioning properly, and use the correct fuel-oil mixture in two-stroke equipment for maximum efficiency. Try to mow during cooler parts of the day – early morning or evening – when ozone pollution is less likely to form. And, if you have a small job, consider using hand-powered tools such as push mowers and hand clippers to eliminate emissions completely! (Source: The American Lung Association. 2008. “State of the Air.” http://www.stateoftheair.org/; U.S. EPA Office of Mobile Sources. “Your Yard and Clean Air.” http://www.epa.gov/otaq/consumer/19-yard.pdf.)
Dinosaurs' active lifestyles suggest they were warm-blooded
H. Pontzer, V. Allen, J.R. Hutchinson/PLoS ONE
Whether dinosaurs were warm-blooded or cold-blooded has been a long-standing question in paleobiology. Now, new research on how two-legged dinosaurs walked and ran adds evidence to the argument for warm-bloodedness, and suggests that even the earliest dinosaurs may have been warm-blooded. Warm-blooded (or endothermic) dinosaurs — able to regulate their own body temperatures — would have been more active and could have inhabited colder climates than cold-blooded (or ectothermic) dinos, which would have functioned more like modern reptiles — animals that become animated only as temperatures warm. Endothermic dinosaurs would have also required more energy to maintain their higher metabolic rates. Evidence such as rapidly growing bones, bird-like feathers and athletic builds has led most paleontologists to believe that dinosaurs were endothermic, says paleobiologist Greg Erickson of Florida State University in Tallahassee, Fla., who was not involved in the new research. But many scientists are still averse to the idea of warm-blooded dinosaurs. For example, some researchers have suggested that larger, more massive dinosaurs may have radiated much less heat than smaller dinosaurs — and thus, they could have been cold-blooded while still able to maintain relatively high body temperatures. In the new study, published today in PLoS ONE, biomechanist Herman Pontzer of Washington University in St. Louis, Mo., and colleagues sought to figure out whether the lower metabolism of an ectotherm would have afforded dinosaurs the energy they needed to walk and run. To test this possibility, the team looked at two factors thought to be linked with energy requirements in modern animals: hip height and the volume of muscle used to hold up and move an animal’s body forward.
If the limb length and active muscle volumes of dinosaurs required more energy than an ectotherm’s metabolism would have been able to provide, Pontzer and colleagues reasoned, then the dinosaurs were likely endothermic. The team studied 13 different two-legged dinosaur species, ranging in size from Tyrannosaurus to the tiny, bird-like Archaeopteryx, as well as one early dinosaur relative, Marasuchus. Based on hip height, the results showed that the five largest dinosaurs (including Tyrannosaurus) would have needed endothermic metabolisms just to have the energy to walk, and all of the dinosaurs would have required endothermy to run at a moderate speed. Results based on estimated active muscle volume revealed a similar pattern: The five largest dino species would have needed to be endothermic to walk or run, while smaller, very active dinosaurs such as Velociraptor must have been endothermic to be able to run. In addition, even the most ancient dinosaur-like relative, Marasuchus, may have been endothermic based on the data from the hip study, Pontzer says, suggesting that endothermy evolved very early on in the dinosaur lineage. Therefore, the results also suggest that all dinosaurs were endothermic, the team wrote. “I think their study is pointing to what a lot of other studies are saying — that these animals were endothermic,” Erickson says. “It’s just, what grade of endothermy were we dealing with?” For example, modern marsupials, although endothermic, generally grow more slowly and have lower metabolic rates than other mammals, he says. The study may not put the final "nail in the coffin" for the idea that large dinosaurs could have been ectothermic, but it does provide positive evidence for an alternative metabolic strategy, says Patrick O’Connor, a paleontologist at the Ohio University College of Osteopathic Medicine in Athens who was also not involved in the new research.
"Studies like this add crucial new lines of evidence that help us refine existing hypotheses," O'Connor says. Estimating dinosaur metabolisms based on modern animals can only go so far, according to Erickson. For example, Pontzer and colleagues focused on two-legged dinosaurs because if they had used four-legged dinosaurs, they would have also needed to estimate how the dinosaurs’ weight was distributed across all four legs. But because all modern ectotherms, such as alligators, are four-legged, Pontzer and colleagues had to gauge the hypothetical ectothermic capacity for the two-legged dinosaurs against four-legged modern animals, Erickson notes. Moreover, even the largest modern ectotherms are much smaller than a 6-metric-ton Tyrannosaurus. “There are limitations from living organisms that make it so we may never be able to test all these ideas,” Erickson says. Still, Erickson says he thinks scientists are “honing in on the real answer” on the question of when endothermy evolved in dinosaurs and other ancient vertebrates. Other evidence, such as rates of bone growth, suggests pterosaurs, or flying reptiles, were also endothermic. “When you have all these different lines of evidence kind of pointing towards [endothermy],” he says, “I think it’s fairly compelling collectively.”
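The study's basic logic — estimate the power that walking demands, then ask whether an ectothermic metabolism could supply it — can be illustrated with a toy calculation. This is not the paper's actual model: the aerobic ceilings, the cost-of-transport coefficient, and the hip-height figure below are all invented for illustration.

```python
# Toy sketch of the comparison logic: does walking demand more sustained
# power than an ectothermic metabolism could supply?
# Every constant here is an illustrative assumption, not a value from
# the Pontzer et al. study.

ECTOTHERM_CEILING = 1.0    # assumed max sustained aerobic output, W per kg
ENDOTHERM_CEILING = 10.0   # assumed roughly an order of magnitude higher

def walking_power_per_kg(hip_height_m, speed_ms, cot_coeff=4.0):
    """Mass-specific locomotor power (W/kg), treating cost of transport
    (J per kg per metre) as inversely proportional to hip height."""
    return (cot_coeff / hip_height_m) * speed_ms

def metabolic_verdict(hip_height_m, speed_ms):
    demand = walking_power_per_kg(hip_height_m, speed_ms)
    if demand > ENDOTHERM_CEILING:
        return "beyond even an endotherm"
    if demand > ECTOTHERM_CEILING:
        return "requires endothermy"
    return "possible for an ectotherm"

# A Tyrannosaurus-scale biped (hip height ~3 m, assumed) walking at 1.5 m/s:
print(metabolic_verdict(3.0, 1.5))  # demand is 2.0 W/kg, above the
                                    # assumed ectotherm ceiling
```

With these made-up numbers the walk demands 2.0 W/kg, which clears the assumed ectothermic ceiling — the same kind of threshold comparison, applied with real anatomical estimates, that drove the study's conclusions.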
Kombu, also known as kelp, is a sea vegetable of the 'Laminaria' family, of which there are more than ten species. EDEN Kombu Laminaria japonica is a dark greenish brown sea vegetable with thick, wide leafy fronds that grow in the waters off the southeastern coast of Hokkaido, Japan's northernmost island. This type of kombu, known as 'Ma-konbu', is highly prized, not only for its abundance of essential minerals, vitamins, and trace elements but also for its natural glutamic salts that make it an excellent flavoring agent. Kombu contains the amino acid glutamine, a naturally sweet, superior flavor enhancer. EDEN Kombu Sea Vegetable grows wild in the clean, cold northern waters where the choicest grades of kombu grow bathed in steady Arctic currents. Eden selects only the tender central part of the plant that has the best flavor and texture. The fronds are hand harvested using long poles with knives attached to cut the kombu free from the ocean bottom. As the kombu floats to the surface it is gathered into boats and taken ashore. The fronds are washed, folded and naturally sun dried on the white sand beaches before cutting and packaging. Lesser grades of commercial kombu are cultivated artificially or simply gathered from the beach after washing ashore. Many are sprayed with chemically produced and toxic monosodium glutamate (MSG) to make the kombu more tender. EDEN Kombu grows wild and is gathered by hand from the sea while the plant is still living. EDEN Kombu Sea Vegetable is most frequently used to make the delicious Japanese noodle broth, dashi, seasoned with shoyu soy sauce. It can be used, however, to make a variety of soup stocks. Simply place a strip in a pot of water and bring to a boil. Remove the kombu after 4 to 5 minutes and discard or chop and use in other dishes. Vegetables, herbs, spices or fish can be added to the stock after removing the kombu.
Kombu can also be soaked, chopped and simmered with carrots, onions, squash, daikon or other sweet vegetables. A small piece of kombu added to dried beans helps to tenderize them as they cook.
When Your Child Steals - for Parents of Fifth Grade Children
Come on, admit it: You've taken something that doesn't belong to you, even if you have to dig deep into your own childhood to remember – a candy bar at the convenience store, a friend's toy, change from your mom's purse. Yet it shocks you nevertheless when you discover your own child stealing. Here's how to keep calm and handle it without envisioning mug shots in your child's future.
What You Need to Know
Communicating about money is important to address:
- the positive and negative emotions it brings out of people
- differences in values and attitudes toward spending and saving
- differences in financial goals
- potential money problems and how to overcome them
- identification of personal values
How You Can Help
When you discover your child stealing:
- Stay calm, and resist the inclination to treat your child like a common criminal – keep the phone on the hook, and your fingers away from the 9 and 1 digits for now.
- Try thinking in children's, rather than adults', terms in an attempt to determine why your child might have taken something that did not belong to him. Remembering your own experiences as a child might help you come up with possible answers.
- Once you have all the facts and are certain of your ability to remain calm enough for discussion, use this as an opportunity to address personal and family values regarding money. Take turns with your child answering the following questions:
- On what would you spend an extra $20? An extra $2000?
- What would you do if you witnessed one of your friends shoplifting?
- What would you do if you found a wallet on the sidewalk with money and I.D. inside? Or no I.D. inside?
- What would you do if your best friend's birthday is coming up, but you don't have enough money for a present?
- What would you do if a cashier charged you too much? Too little?
"Sustainable Development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs." Sustainability embodies "stewardship" and "design with nature," well established goals of the design professions, and "carrying capacity," a highly developed modeling technique used by scientists and planners. The most popular definition of sustainability can be traced to a 1987 UN conference. It defined sustainable developments as those that "meet present needs without compromising the ability of future generations to meet their needs" (WCED, 1987). Robert Gillman, editor of In Context magazine, extends this goal-oriented definition by stating "sustainability refers to a very old and simple concept (The Golden Rule)...do unto future generations as you would have them do unto you." These well-established definitions set an ideal premise, but do not clarify specific human and environmental parameters for modeling and measuring sustainable developments. The following definitions are more specific:
- "Sustainable means using methods, systems and materials that won't deplete resources or harm natural cycles" (Rosenbaum, 1993).
- Sustainability "identifies a concept and attitude in development that looks at a site's natural land, water, and energy resources as integral aspects of the development" (Vieira, 1993).
- "Sustainability integrates natural systems with human patterns and celebrates continuity, uniqueness and placemaking" (Early, 1993).
In review of the plurality of these definitions, the site or the environmental context is an important variable in most working definitions of sustainability.
This emphasis is expressed in the following composite definition: Sustainable developments are those which fulfill present and future needs (WCED, 1987) while [only] using and not harming renewable resources and unique human-environmental systems of a site: [air], water, land, energy, and human ecology and/or those of other [off-site] sustainable systems (Rosenbaum 1993 and Vieira 1993). These fundamental human-environmental exchanges of the community's "site" were found very useful in developing critical "input-output" modeling techniques, or indicators, which direct the community's regenerative process. This selected network of a community's human and environmental interrelationships was measured and placed in balance by the selected set of integrated design and planning strategies.
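The "input-output" balancing described above can be illustrated with a minimal sketch: compare a community's annual draw on each site resource with what the site can renewably supply. The resource categories and figures below are invented for illustration, not taken from the cited works.

```python
# Minimal sketch of an input-output sustainability indicator:
# demand/supply ratio per resource, where a ratio <= 1.0 means the
# community's draw stays within what its site regenerates.
# All figures are illustrative assumptions.

site_renewable_supply = {   # units per year the site can regenerate
    "water": 1_000_000,     # cubic metres
    "energy": 50_000,       # MWh from on-site renewables
    "land": 2_000,          # hectares of productive land
}

community_demand = {        # the community's annual draw
    "water": 1_200_000,
    "energy": 30_000,
    "land": 1_500,
}

def sustainability_indicators(supply, demand):
    """Return the demand-to-supply ratio for each resource."""
    return {resource: demand[resource] / supply[resource]
            for resource in supply}

for resource, ratio in sustainability_indicators(
        site_renewable_supply, community_demand).items():
    status = "within capacity" if ratio <= 1.0 else "overshoot"
    print(f"{resource}: {ratio:.2f} ({status})")
```

In this invented example water is in overshoot (ratio 1.20) while energy and land stay within capacity, flagging which exchange the design and planning strategies would need to bring back into balance.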
Chrysanthemum leucanthemum L. (Composite family)
Parts Usually Used
Description of Plant(s) and Culture
White weed is a perennial plant; the furrowed, simple or sparingly branched stem grows from 1-3 feet high and bears alternate, toothed, sessile and clasping leaves. Both stem and radical leaves are spatulate or obovate with rounded ends; the radical leaves are more strongly toothed. The stem, and the branch, if any, is topped by a solitary flower head with a yellow disk and white rays. Grows in fields and waste places over most of North America, Europe, and Asia as a common weed.
Diaphoretic, diuretic, irritant. White weed is very seldom used today. It can promote sweating and is used to treat urinary and dropsical problems. It has also been used to treat pulmonary diseases, palsy, sciatica, runny eyes, and gout. Externally, it is applied to promote the flow of blood to the surface and to treat warts, pustules, ulcers, wounds, and bruises. The dried plant and even the flowers of the common daisy, boiled up with some honey, have been recommended as an alleviant to attacks of asthma.
HIV (Human Immunodeficiency Virus)
HIV (human immunodeficiency virus) is a virus that attacks the immune system, making it difficult for the body to fight off infection and some diseases. Without treatment, HIV eventually causes AIDS (acquired immunodeficiency syndrome). Initial HIV symptoms are similar to those of the flu and include fatigue, fever, weight loss, and swollen lymph nodes in the neck, armpits, or groin. Although there currently is no cure for HIV infection, a combination of medicines called highly active antiretroviral therapy, or HAART, helps prolong life for most people. With treatment, a person with HIV infection may live for many years without developing AIDS.
eMedicineHealth Medical Reference from Healthwise
Depression: Using Positive Thinking
Depression is an illness that makes a person feel sad and hopeless much of the time. It's different from feeling a little sad or down. Depression can be treated with counseling or medicine, or both. Positive thinking also can help prevent or control depression. Positive thinking, or healthy thinking, is a way to help you stay well by changing how you think. It's based on research that shows that you can change how you think. And how you think affects how you feel. Cognitive-behavioral therapy, also called CBT, is a type of therapy that is often used to help people think in a healthy way. CBT can help you learn to replace negative thoughts with positive ones. These negative thoughts are sometimes called irrational or automatic thoughts. Working on your own or with a counselor, you can practice three steps: stop your thoughts, ask about your thoughts, and choose your thoughts. The goal is to have positive thoughts come naturally. It may take some time to change the way you think. So you will need to practice positive thinking every day. Changing the way you think can help you replace negative thoughts with helpful ones, which can help you cope with depression and may help keep it from coming back. Maybe you weren't able to close a sale or get a big project done at work. Or perhaps a relationship has ended. It's normal to feel down. But you've had trouble sleeping. You can't enjoy many of your usual activities. And you're blaming yourself. "I'm a failure at everything," you tell yourself. The more you think about yourself in a negative way, the harder it is to feel hopeful and positive. The negative thinking makes you feel bad. And that can make you feel more depressed, which leads to more bad thoughts about yourself.
It's a cycle that's hard to break. But with practice, you can retrain your brain. After all, you weren't born telling yourself negative things. You learned how to do it. So there's no reason you can't teach your brain to unlearn it and replace negative thinking with more helpful thoughts. Positive thinking also can help you manage stress. Too much stress can raise your blood pressure and make your heart work harder, which can increase your risk for a heart attack. Stress also can weaken your immune system, which can make you more open to infection and disease. Although you can use CBT on your own, it's important to talk to your doctor or a counselor if you feel that your mood is getting worse. You may need more help.
Stop your thoughts
The first step is to stop your negative thoughts or "self-talk." Self-talk is what you think and believe about yourself and your experiences. It's like a running commentary in your head. Your self-talk may be positive and helpful. Or it may be negative and not helpful.
Ask about your thoughts
The next step is to ask yourself whether your thoughts are helpful or unhelpful. Does the evidence support your negative thought? Some of your self-talk may be true. Or it may be partly true but exaggerated. There are several kinds of irrational thoughts to look for.
Choose your thoughts
The next step is to choose a more positive, helpful thought to replace the unhelpful one. Keeping a journal of your thoughts is one of the best ways to practice stopping, asking, and choosing your thoughts. It makes you aware of your self-talk. Write down any negative or unhelpful thoughts you had during the day. If you think you might not remember at the end of your day, keep a notepad with you so you can write down any irrational thoughts as they happen.
Then write down a helpful message to correct the unhelpful thought. If you do this every day, positive or helpful thoughts will soon come naturally to you. But there may be some truth in some of your negative thoughts. You may have some things you want to work on. If you didn't perform as well as you would like on something, write that down. You can work on a plan to correct or improve that area. If you want, you also could write down what kind of irrational thought you had.

For example, an unhelpful journal entry might read: "I've had a couple of bad relationships. I know I'll never have a good relationship." A more helpful entry might read: "I feel bad that I didn't get as big a raise as I wanted. But there may have been reasons that had nothing to do with me. I'll talk to my supervisor to see if there is anything I can do to get a bigger raise next time." A daily journal makes you aware of your self-talk and can help you come up with helpful thoughts to correct an irrational thought. Writing in the journal every day will help positive thinking come naturally to you. Now that you have read this information, you are ready to practice positive thinking to help cope with depression.

eMedicineHealth Medical Reference from Healthwise
Alzheimer's Disease Treatment Alzheimer's Disease Medications and Treatments Individuals with Alzheimer's disease should remain physically, mentally, and socially active as long as they are able. It is believed that mental activity can slow the progression of the disease. Puzzles, games, reading, and safe hobbies and crafts are good choices. These activities should ideally be interactive. They should be of an appropriate level of difficulty so that the person does not become overly frustrated. Behavior disorders such as agitation and aggression may improve with various interventions. Some interventions focus on helping the individual adjust or control his or her behavior. Others focus on helping caregivers and other family members change the person’s behavior. These approaches sometimes work better when combined with drug treatment for depression, mood stabilization, or psychosis. Alzheimer's disease symptoms can sometimes be relieved, at least temporarily, by medications such as cholinesterase inhibitors and NMDA inhibitors, which have been approved by the US Food and Drug Administration (FDA) for the treatment of moderate to severe Alzheimer's disease.
On top of concerns about high oil prices now comes the fear that we have reached “peak oil” and that global oil output will start to decline. Have we? If oil has peaked, do we face a future of growing energy shortages, rising prices and international conflict for supplies? No one should underestimate the energy challenge. With continued economic growth, the world’s energy needs could increase by half within 25 years. Unchecked, this will result in significantly higher carbon emissions. Many scientists agree that emissions from human activities are changing our climate and call for urgent action. The world’s energy needs must be met while cutting carbon dioxide emissions. But where are we going to find this energy? My view is that “easy” oil has probably passed its peak. But there are other reserves that are still a long way from their peak. In unconventional oil and gas – resources that are harder to tap – there are plenty of reserves. The oil industry has to explore new frontiers, develop new hydrocarbon energy sources and integrate “CO2 solutions”. The challenge is to develop technology that can fuel growth without environmental degradation. That means applying advances on the scale necessary to make real progress. It means integrating technologies because that is where the real benefits come in this complex business. It means applying those technologies in increasingly demanding projects, and accessing resources in challenging frontier environments such as the Arctic or in deep offshore waters. The biggest impact technology could have is to increase significantly the amount of conventional oil we recover from existing reservoirs. This is little more than one-third on average at present across the industry. Smart technology enabling engineers to monitor and control reservoir processes remotely, along with techniques using heat, gas or chemicals to make oil flow more easily, could significantly boost recovery rates. 
Integrating technology will also enable us to access previously inaccessible hydrocarbon resources. Much of the world’s huge reserves of natural gas are still untapped. Cooling gas into liquid allows it to be transported as liquefied natural gas for power generation in other markets. Demand for LNG is set to double in the next decade but this, again, depends on technological advance. Technology is also being used to turn gas to liquids. This will enable the industry to unlock reserves and convert gas into fuels such as diesel, which will be ideal for reducing pollution in major cities. Other new hydrocarbon energy frontiers include heavy oils, oil contained in sands and shales, contaminated and tight gas, and coal-bed methane. There is lots of coal, too, particularly in the US and China. At Shell we are testing an environmentally sensitive way of unlocking the large potential of oil shale in Colorado using electric heaters to heat the rock formation and release light oil and gas. Coal gasification offers a way of using coal more efficiently, cleanly and flexibly. The resulting “syngas” can fuel efficient combined cycle power plants. It can also be used, with the same technology as gas to liquids, to produce high-quality liquid fuels. The world will need these resources. But they are more carbon-intensive and increase the urgency of finding ways of tackling carbon emissions. So my vision is for “green fossil fuels” with much of their CO2 captured and sequestrated underground or in inert materials. In the medium term, this could be cheaper, more convenient and more flexible than alternative energies. A typical one-gigawatt coal-fired power plant produces the same carbon emissions as 1.5m cars. China alone is building about 17 of these plants a year. This is why sequestration should be a priority for power plants. One prerequisite for success is ensuring sufficient investment to access more difficult resources and undertake long-term technology development. 
The International Energy Agency estimates that meeting global energy needs will require investing more than $17,000bn by 2030. Given the urgent investment needs, exacting windfall taxes is counterproductive, particularly in an industry with a history of volatile prices. So, while the good news is that there is a wide variety of energy sources to deal with the energy challenge, our industry has its work cut out for it. It will have to mobilise its experience and talents but also rely on governments and consumers to recognise that we share common concerns and have to respond to changing circumstances. The writer is chief executive of Royal Dutch Shell
The abuse of cocaine has become a major public-health problem in the United States since the 1970s. During that period it emerged from relative obscurity, described by experts as a harmless recreational drug with minimal toxicity. By the mid-1980s, cocaine use had increased substantially and its ability to lead to drug taking at levels that caused severe medical and psychological problems was obvious. Cocaine (also known as "coke," "snow," "lady," "CRACK" and "ready rock") is an ALKALOID with both local anesthetic and PSYCHOMOTOR STIMULANT properties. It is generally taken in binge cycles, with periods of hours to days in which users take the drug repeatedly, alternating with periods of days to weeks when no cocaine is used. Many users are recalcitrant to treatment, and the introduction of substantial criminal penalties associated with its possession and sale has not yet been effective in reducing the prevalence of heavy use. In fact, although occasional use of cocaine diminished somewhat by the early 1990s, heavier use did not. Cocaine is extracted from the COCA PLANT (Erythroxylon coca), a shrub now found mainly in the Andean highlands and the northwestern parts of the Amazon in South America. The history of coca plant use by the cultures and civilizations who lived in these areas (including the Inca) goes back more than a thousand years, with evidence of use found archeologically in their burial sites. The Inca called the plant a "gift of the Sun god" and believed that the leaf had supernatural powers. They used the leaves much as the highland Indians of South America do today. A wad of leaves, along with some ash, is placed in the mouth and both chewed and sucked. The ash helps in the extraction of the cocaine from the coca leaf, and the cocaine is efficiently absorbed through the mucous membranes of the mouth. 
During the height of the Inca Empire (11th-15th centuries) coca leaves were reserved for the nobility and for religious ceremonies, since it was believed that coca was of divine origin. With the conquest of the Inca Empire by the Spanish in the 1500s, coca use was banned. The Conquistadors soon discovered, however, that their Indian slaves worked harder and required less food if they were allowed to chew coca. The Catholic church began to cultivate coca plants, and in many cases the Indians were paid in coca leaves. Although glowing reports of the stimulant effects of coca reached Europe, coca use did not achieve popularity. This was no doubt related to the fact that coca plants could not be grown in Europe and the active ingredient in the coca leaves did not survive the long ocean voyage from South America. After the isolation of cocaine from coca leaves by the German chemist Albert Niemann in 1860 and the subsequent purification of the drug, it became more popular. It was aided in this regard by commercial endeavors in which cocaine was combined with wine (e.g., Vin de Coca), products for which there appeared many enthusiastic and uncritical endorsements by notables of the time. Both interest in and use of cocaine spread to the United States, where extracts of coca leaves were added to many patent medicines. Physicians began prescribing it for a variety of ills including dyspepsia, gastrointestinal disorders, headache, neuralgia, toothache, and more, and use increased dramatically. By the beginning of the twentieth century, cocaine's harmful effects were noted and caused a reassessment of its utility. As part of a broader regulatory effort, the U.S. government began to control its manufacture and sale. In 1914, the HARRISON NARCOTIC ACT forbade use of cocaine in over-the-counter medications and required the registration of those involved in the importation, manufacture, and sale of either coca or opium products. 
This had the effect of substantially reducing cocaine use in the United States, which remained relatively low until the late 1960s, when it moved into the spotlight once again. Cocaine is a drug with both anesthetic and stimulant properties. Its local anesthetic and vasoconstriction effects remain its major medical use. The local anesthetic effect was established by Carl Koller in the mid-1880s, in experiments on the eye, but because it has been found to cause sloughing of the cornea, it is no longer used in eye surgery. Because it is the only local anesthetic capable of causing intense vasoconstriction, cocaine is beneficial in surgeries where shrinking of the mucous membranes and the associated increased visualization and decreased bleeding are necessary. Therefore, it remains useful for topical administration in the upper respiratory tract. When used in clinically appropriate doses, and with medical safeguards in place, cocaine appears to be a useful and safe local anesthetic. Cocaine can be taken by a number of routes of administration: oral, intranasal, intravenous, and smoked. Although the effects of cocaine are similar no matter what the route, route clearly contributes to the likelihood that the drug will be abused. The likelihood that cocaine will be taken for nonmedical purposes is assumed to be related to the rate of increase in cocaine brain level (as measured by blood levels), with those routes that provide the largest and most rapid changes in brain level being associated with greater self-administration. The oral route of administration, not a route used by cocaine abusers, is characterized by relatively slow absorption and peak levels that do not appear until approximately an hour after ingestion. Cocaine, however, is quickly absorbed from the nasal mucosa when it is inhaled into the nose as a powder (cocaine hydrochloride). 
Because of its local anesthetic properties, cocaine numbs or "freezes" the mucous membranes, a quality used by those purchasing the drug on the street to test for purity. When cocaine is used intranasally ("snorting"), cocaine blood levels, as well as subjective and physiological effects, peak at about 20 to 30 minutes, and reports of a "rush" are minimal. Intranasal users report that they are ready to take a second dose of the drug within 30 to 40 minutes after the first dose. Although this route was the most common way for people to use cocaine in the mid-1980s, it is not as efficient in getting the drug to the brain as either smoking or intravenous injection, and it has declined in popularity. When taken intravenously, venous blood levels peak virtually immediately and subjects report a substantial, dose-related rush. This route was, until the mid-1980s, traditionally the choice of the experienced user, since it provided a rapid increase in brain levels of cocaine with a parallel increase in subjective effects. Blood levels of cocaine dissipate in parallel with subjective effects, and subjects report that they are ready for another intravenous dose within about 30 to 40 minutes. Users of intravenous cocaine are also more likely to combine their cocaine with HEROIN (e.g., a "speedball") than are users by other routes. In the mid-1980s, smoked cocaine began to achieve popularity. FREEBASE, or "crack," is cocaine base, which is not destroyed at temperatures required to volatilize it. As with intravenous cocaine, blood levels peak almost immediately and, as with intravenous cocaine, a substantial rush ensues after smoking it. Users can prepare their own free-base from the powdered form they purchase on the street, or they can purchase it in the form of crack, or "ready-rock." 
The development of a smokable form of cocaine provided a more socially acceptable route of drug administration (both NICOTINE and MARIJUANA cigarettes provided the model for smoking cocaine), resulting in a drug that was both easy to use and highly toxic, since the route allowed for frequent repeated dosing with a readily available and relatively inexpensive drug. Cocaine is frequently taken in combination with other drugs such as alcohol, marijuana, and OPIATES. In fact, almost 75 percent of cocaine deaths reported in 1989 involved co-ingestion of other drugs. When taken in combination with alcohol, a metabolite, COCAETHYLENE, is formed, which appears to be only slightly less potent than cocaine in its behavioral effects. It is possible that some of the toxicity reported after relatively low doses of cocaine might well be due to the combination of cocaine and alcohol. Cocaine is broken down rapidly by enzymes (esterases) in the blood and liver. The major metabolites of this action (all relatively inactive) are BENZOYLECGONINE, ecgonine, and ecgonine methyl ester, all of which are excreted in the urine. Cocaethylene is an additional metabolite when cocaine and alcohol are ingested in combination. People with deficient plasma cholinesterase activity (fetuses, infants, pregnant women, patients with liver disease, and the elderly) are all likely to be sensitive to cocaine and therefore at higher risk for adverse effects than are others. Research has been focused on the neurochemical and neuroanatomical substrates that mediate cocaine's reinforcing effects. Although a number of NEUROTRANSMITTER systems are involved, there is growing evidence that cocaine's effects on dopaminergic neurons in the mesolimbic and/or mesocortical neuronal systems of the brain are most closely associated with its reinforcing and other behavioral effects. 
The initial site of action in the brain for its reinforcing effects has been hypothesized to be the dopamine transporter of mesolimbocortical neurons. Cocaine action at the DOPAMINE transporter has the effect of inhibiting dopamine re-uptake, resulting in higher levels of dopamine at the synapse. These dopaminergic pathways may mediate the reinforcing effects of other stimulants and opiates as well. A substantial body of evidence suggests that dopamine plays a major role in mediating cocaine's reinforcing effects, although it is clear that cocaine affects not only the dopamine but also the SEROTONIN and noradrenaline systems. In addition to blocking the re-uptake of several neurotransmitters, cocaine use results in central nervous system stimulation and local anesthesia. This latter effect may be responsible for the neural and myocardial depression seen after taking large doses. Cocaine use has been implicated in a broad range of medical complications covering virtually every one of the body's organ systems. At low doses, cocaine causes increases in heart rate, blood pressure, respiration, and body temperature. There have been suggestions that cocaine's cardiovascular effects can interact with ongoing behavior, resulting in increased toxicity. Cocaine intoxication has been associated with cardiovascular toxicity, related to both its local anesthetic effects and its inhibition of neuronal uptake of catecholamines, including heart attacks, stroke, vasospasm, and cardiac arrhythmias. Cocaine is generally taken in binges, repeatedly, for several hours or days, followed by a period in which none is taken. When taken repeatedly, chronic cocaine intoxication can cause a psychosis, characterized by paranoia, anxiety, a stereotyped repetitive behavior pattern, and vivid visual, auditory, and tactile hallucinations. 
Less severe behavioral reactions to repeated cocaine use include irritability, hypervigilance, paranoid thinking, hyperactivity, and eating and sleep disturbances. In addition, when a cocaine binge ceases, there appears to be a crash response, characterized by depression, fatigue, and eating and sleep disturbances. Initially, the crash is accompanied by little cocaine craving, but as time increases since the last dose of cocaine, compulsive drug seeking can occur in which users think of little else but the next dose. Nonhuman Research Subjects. One of cocaine's characteristics, as a PSYCHOMOTOR STIMULANT, is its ability to elicit increases in the motor behavior of animals. Single low doses produce increases in exploration, locomotion, and grooming. With increasing doses, locomotor activity decreases and stereotyped behavior patterns emerge (continuous repetitious chains of behavior). When administered repeatedly, cocaine produces increased levels of locomotor activity, increases in stereotyped behavior, and increases in susceptibility to drug-induced seizures (i.e., "kindling"). This sensitization occurs in a number of different species and has been suggested as a model for psychosis or schizophrenia in humans. Although sensitization to cocaine's unconditioned behavioral effects generally occurs, such effects are related to dose, environmental context, and schedule of cocaine administration. For example, sensitization occurs more readily when dosing is intermittent rather than continuous and when dosing occurs in the same environment as testing. Learned behaviors, typically generated in the laboratory using operant schedules of reinforcement in which animals make responses that have consequences (e.g., press a lever to get food), generally show a rate-dependent effect of cocaine. As with AMPHETAMINE, cocaine engenders increases in low rates of responding and decreases in high rates of responding. 
Environmental variables and behavioral context can modify this effect. For example, responding maintained by food delivery was decreased by doses of cocaine that either had no effect or increased comparable rates of responding maintained by shock avoidance. Cocaine's effects can also be modified by drug history. Although repeated administration can result in the development of sensitization to cocaine's effects on unlearned behaviors, repeated administration generally results in tolerance to cocaine's effects on schedule-controlled responding. This decrease in effect of the same dose after repeated dosing is influenced by behavioral as well as pharmacological factors. Human Research Subjects. A major behavioral effect of cocaine in humans is its mood-altering effect, generally believed related to its potential for abuse. Traditionally, subjective effects have provided the basis for classifying a substance as having abuse potential, and the cocaine-engendered profile of subjective effects is prototypic of stimulant drugs of abuse. Thus, cocaine produces dose-related reports of "high," "liking," and "euphoria"; increases in stimulant-related factors, such as increases on Vigor and Friendliness scale scores; ratings of "stimulated"; and decreases in various sedation scores. Subjective effects correlate well with single intravenous or smoked doses of cocaine, peaking soon after administration and dissipating in parallel with decreasing plasma concentrations. When cocaine is administered repeatedly, tolerance develops rapidly to many of its subjective effects and the same dose no longer exerts much of an effect. This means that the user must take increasingly larger amounts of cocaine to achieve the same effect. 
Tolerance to the cardiovascular effects of cocaine is less complete; the result here is a potential for drug-induced toxicity, since more and more drug is taken when the subjective effects are not present but the disruptions in cardiovascular function are still present. Although users of stimulant drugs claim that their performance of many activities is improved by cocaine use, the data do not support their assertions. In general, cocaine has little effect on performance except under conditions in which performance has deteriorated from fatigue. Under those conditions, cocaine can bring it back to nonfatigue levels. This effect, however, is relatively short-lived, since cocaine has a half-life of less than one hour. Despite substantial efforts directed toward treatment of cocaine abuse, in the mid-1990s we are still unable to treat successfully many of the cocaine abusers who seek treatment. For many years the only approach to treating these people was psychological or behavioral. As of 1994, the most promising of these include behavioral therapy, relapse prevention, rehabilitation (e.g., vocational, educational, and social-skills training) and supportive psychotherapy. A major problem with these treatment approaches is related to their lack of selectivity. Rather than tailoring programs to an individual's background, drug-use history, psychiatric state, and socioeconomic level, individuals receive the treatment being delivered by the particular program they happen to attend. Treatment programs that focus on specific target populations will be far more successful than those which cover all who apply. For example, patients with relatively mild symptoms might do quite well in a behavioral intervention with some relapse-prevention instructions but those with more severe problems might require the addition of pharmacotherapy. Pharmacological approaches to treating cocaine abusers have focused on potential neurophysiological changes related to chronic cocaine use. 
Thus, because dopamine appears to mediate cocaine's reinforcing effects, dopamine agonists such as AMANTADINE and bromocriptine have been tried. METHYLPHENIDATE, a stimulant, has been suggested as a possible substitution medication, and ANTIDEPRESSANTS such as desipramine have been studied because of their actions on the dopaminergic system. In addition, because cocaine blocks re-uptake of SEROTONIN at nerve terminals, serotonin-uptake blockers, such as fluoxetine, have also been tested. Although most of the potential medications have been shown to be successful in some patients under open-label conditions, none have been clearly successful in double-blind placebo-controlled clinical trials. Clearly, no medication yet exists for the treatment of cocaine abuse. It may well be that different medications may be effective for the various target populations and that variations in dosages and durations of treatment might be required, depending on a variety of patient characteristics. In fact, several medications have been shown to be effective only for small and carefully delineated populations (e.g., lithium for cocaine abusers diagnosed with concurrent bipolar manic-depressive or cyclothymic disorders). An artificial enzyme has been developed that inactivates cocaine as soon as it enters the bloodstream by binding the cocaine and breaking it into two inactive metabolites, and this has the potential for destroying much of the cocaine before it reaches the brain. As of 1994, this technique is unavailable for human use. In addition, and most importantly, cocaine abuse (and drug abuse in general) is a behavioral problem, and it is unlikely that any medication will be effective unless it is combined with an appropriate behavioral intervention. (SEE ALSO: ; Colombia As Drug Source; Epidemics of Drug Abuse; Epidemiology of Drug Abuse; National Household Survey on Drug Abuse; ) BOCK, G., & WHELAN, J. (1992). Cocaine: Scientific and social dimensions. 
Ciba Foundation Symposium 166. Chichester: Wiley. JOHANSON, C. E., & FISCHMAN, M. W. (1989). Pharmacology of cocaine related to its abuse. Pharmacological Reviews, 41, 3-52. KLEBER, H. D. (1989). Treatment of drug dependence: What works. International Review of Psychiatry, 1, 81-100. LANDRY, D. W., ET AL. (1993). Antibody-catalyzed degradation of cocaine. Science, 259, 1899-1901. MARIAN W. FISCHMAN
Beyond Buzzwords With SOA

SOA has had time to mature, but discussions around it are still laden with buzzwords and unrealistic expectations. Learn what SOA is and isn't with the first of a new series. A Service-Oriented Architecture (SOA) facilitates flexibility and cost savings for businesses, but its flexibility is also at fault for diluting the concept of SOA. Many technologies can be used in support of designing systems that may be described as SOA, which leads to a global misunderstanding of the true spirit of SOA. We often get bogged down talking about the technology, so this article is going to take a step back and explain what it’s really about.

What SOA Is Not

SOA is not any single system, technology, or application. It is an architecture, or put another way: a way to design a group of systems; a methodology. The service-oriented part speaks to the notion that systems interaction should be done via network-enabled services. Most often, since they provide the most flexibility, Web-based services and technologies are employed for this task.

What SOA Is

SOA can be broken into two parts: IT systems and the software that runs on them. In the systems design sense, SOA spells out the best way to design complex systems to ensure they are multi-functional. It is at this point where many people like to compare a big ERP system to SOA. With a large ERP system, you will have a set of servers that only interact with a database, and all the functionality is contained within that monolithic system. Features and extensibility are dependent on what the vendor allows you to do. With an SOA design, you can plug in features at will, since they will interact with various “services” in a standard way. In reality, ERP is a task, and even the largest monolithic systems usually have some extensibility built in. We most often see SOA proponents with the same ERP system everyone else uses, but they have likely extended it much further. 
In the software engineering sense, SOA is about code re-use. Not in the object-oriented programming sense of re-use; SOA actually takes it much further. Traditionally we might abstract common functions in software into libraries, and then share those libraries so that people don’t have to re-implement the same code. Think of a simple banking system’s online credit application. One function might require that we look up the applicant’s SSN to see if they have other accounts. We might call this function listUserAccounts. When called, it will return a list of all past and current accounts associated with the applicant. The function contains all the logic, and knows where to look for this information. It simply returns the proper information for use by the program that called it. Instead of requiring that software run on a single system that has access to listUserAccounts, an SOA-enabled environment will provide a network-accessible service whereby many systems can access this function. In short, software functions are turned into network-aware services instead of internal functions accessible by only a few programs. The same listUserAccounts function could be used by an online banking application when a person wants to see what their account balances are. Without careful design, large systems often paint themselves into a corner. SOA helps designers think in such a way that avoids creating single-purpose silos, and instead creates re-usable functionality.

Nothing New to See Here

Please realize that SOA does not represent a silver-bullet, install-it-and-realize-benefits technology. It is, in effect, simply a tool that has opened eyes worldwide and allowed systems and software designers to speak a consistent language. Critics of SOA often claim infrastructures naturally evolve toward this type of architecture. This is absolutely correct, but SOA’s value is that it provides a unified dictionary and set of best practices for companies to follow. 
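To make the earlier listUserAccounts discussion concrete, here is a minimal sketch of turning that internal function into a network-accessible service. The function name comes from the article; everything else (the in-memory data, the URL path, the JSON shape) is an illustrative assumption, not a real banking API.

```python
# Sketch only: the listUserAccounts function from the article, exposed as a
# tiny HTTP service so any system on the network can call it. The data store,
# URL path, and JSON shape are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Stand-in for the bank's account database (a real system would query one).
ACCOUNTS = {
    "123-45-6789": [
        {"account": "CHK-001", "status": "open"},
        {"account": "SAV-014", "status": "closed"},
    ],
}

def list_user_accounts(ssn):
    """The original internal function: all lookup logic lives here."""
    return ACCOUNTS.get(ssn, [])

class AccountService(BaseHTTPRequestHandler):
    """SOA wrapper: the same function, reachable over the network."""
    def do_GET(self):
        url = urlparse(self.path)
        if url.path == "/listUserAccounts":
            ssn = parse_qs(url.query).get("ssn", [""])[0]
            body = json.dumps(list_user_accounts(ssn)).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# To run the service:
#   HTTPServer(("localhost", 8080), AccountService).serve_forever()
```

Both the credit application and the online-banking front end could now issue GET /listUserAccounts?ssn=... instead of linking the function into their own code.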
Best practices are often a scary concept, making businesses implement things they don’t really need, but the nature of SOA helps alleviate the downsides of traditional “best practices religion.” SOA, thankfully, requires businesses to design systems that map directly to their needs. Implementations require that architects understand business processes as well as the capabilities of their computer systems before SOA can be implemented. Each implementation is different, and that goes a long way toward avoiding the “me too” mentality too often found at IT organizations around the world. That is not to say that many IT shops aren’t grinding away implementing SOA services in a haphazard fashion, just that in spite of it, the projects are sure to improve systems to some degree. The other big problem with SOA is that people believe the cost of implementing new software approaches zero as the infrastructure grows: it’s just a matter of rearranging pieces and calling services that already exist to implement something new. As more and more functionality becomes service-oriented, it is true that development time decreases. But adding new functionality is always required, and regardless, someone still has to arrange the existing pieces into meaningful programs. It’s better, faster, and less error-prone, but certainly never free. The evolution of systems and services into an SOA design means that designers and even programmers begin to work at a higher level. Instead of implementing listUserAccounts (again), programmers begin to think about more creative ways to use that information. This supports a positive evolution of the products engineers create, as their time has been freed up to work on the truly difficult problems. When companies fail at implementing SOA, it’s often because they don’t embrace the true meaning and benefits of SOA. 
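The "rearranging pieces" idea above can be sketched from the client side: once a listUserAccounts service exists somewhere on the network, a new application reuses it instead of re-implementing the lookup. The service URL and JSON shape here are assumptions invented for the sketch.

```python
# Sketch only: a new application composing an existing service (reuse) with a
# little app-specific logic. The endpoint and response format are hypothetical.
import json
from urllib.request import urlopen

SERVICE_URL = "http://localhost:8080/listUserAccounts"  # hypothetical endpoint

def fetch_accounts(ssn):
    """Reuse: call the existing service; no lookup logic lives in this app."""
    with urlopen(f"{SERVICE_URL}?ssn={ssn}") as resp:
        return json.load(resp)

def open_accounts(accounts):
    """New, app-specific logic layered on top of the reused service."""
    return [a for a in accounts if a.get("status") == "open"]

def account_balance_page(ssn):
    # A "new" feature built mostly by arranging existing pieces.
    return open_accounts(fetch_accounts(ssn))
```

Note that someone still had to write open_accounts and account_balance_page: arranging existing services into a meaningful program is cheaper, but not free.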
Changing a few things in an existing system does not make an SOA; SOA requires that all systems, functions, and processes—both in current and especially future systems—present meaningful re-usable services to allow other systems to extend them. At the beginning of this article I said that one of SOA’s drawbacks is that people frequently get bogged down talking about technologies. Now that we’ve covered the fundamental methodology, we can delve into some of those technologies. Next week, the second part of this SOA tutorial will explain the popular technologies to help you make a well-informed decision about which should be leveraged in your SOA endeavors.
Developing a Water Management Plan

A comprehensive water management plan helps kick-start a successful water management program by helping a facility set water conservation goals and identify water conservation opportunities. The plan should include clear information about how a facility uses its water, from the time it is piped into the facility through disposal. Knowledge of current water consumption and its costs is essential for making the most appropriate water management decisions. This page includes helpful hints in developing a water management plan. You can also view completed water management plans for many of EPA’s facilities.

A focal point of water management plans is the Best Management Practices (BMPs) section. BMPs are designed to consider all of the various uses of water and maximize conservation. BMPs can be categorized as either maximizing water efficiency or minimizing water use. Following are 14 BMPs as recommended by the Federal Energy Management Program:
- BMP #1 - Water Management Planning
- BMP #2 - Information and Education Programs
- BMP #3 - Distribution System Audits, Leak Detection and Repair
- BMP #4 - Water-Efficient Landscaping
- BMP #5 - Water-Efficient Irrigation
- BMP #6 - Toilets and Urinals
- BMP #7 - Faucets and Showerheads
- BMP #8 - Boiler/Steam Systems
- BMP #9 - Single-Pass Cooling Systems
- BMP #10 - Cooling Tower Systems
- BMP #11 - Commercial Kitchen Equipment
- BMP #12 - Laboratory/Medical Equipment
- BMP #13 - Other Water Use
- BMP #14 - Alternate Water Sources

Elements of a Proper Water Management Plan

A water management plan can be divided into three components: water accounting, BMPs achieved, and water management opportunities. To develop a proper facility water management plan, it is important to include the following elements, at a minimum:
- Operation and Maintenance (O&M). Appropriate O&M recommendations from the BMPs are included in facility operating plans or procedure manuals.
- Utility Information.
Appropriate utility information includes the following:
- Contact information for all water and wastewater utilities.
- Current rate schedules and alternative schedules appropriate for usage or facility type. This helps ensure that you are paying the best rate.
- Copies of water/sewer bills for the past two years. This will help you identify inaccuracies and determine whether you are using the appropriate rate structure.
- Information on financial or technical assistance available from the utilities to help with facility water planning and implementing water efficiency programs. Some energy utilities offer assistance on water efficiency.
- Contact information for the agency or office that pays the water/sewer bills.
- Production information, if the facility produces its own water and/or treats its own wastewater.
- Facility information. At a minimum, perform a walk-through audit of the facility to identify all major water-using processes, determine the location and accuracy of water measurement devices and main shut-off valves, and verify operating schedules and occupancy of buildings. To meet reporting requirements in Executive Order 13423, facilities should include a description of actions necessary to improve the accuracy of their water usage data. Activities can include a metering (or other measurement) plan for the facility.*
- Emergency response information. Develop water emergency and/or drought contingency plans describing how your facility will meet minimum water needs in an emergency or reduce water consumption in a drought or other water shortage. This should be done in conjunction with your local water supplier.
- Comprehensive Planning. Inform staff, contractors, and the public of the priority your agency or facility places on water and energy efficiency. Ensure appropriate considerations are taken into account early in the design and planning of any new or retrofit project.
* In order to properly manage water conservation projects, it is important that all water be accounted for through precise measurement, such as water meters. It is necessary to have measurements not only to plan how to address water conservation, but also to monitor and track progress made in these programs as well as to adjust and make changes.
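The footnote’s point about precise measurement can be sketched in a few lines of Python. The meter readings and the anomaly threshold below are hypothetical, not from the EPA guidance; the sketch only shows how cumulative readings become monthly consumption figures that can be tracked and flagged:

```python
# Hypothetical cumulative water-meter readings for one building, in gallons.
readings = {"Jan": 120_000, "Feb": 128_400, "Mar": 151_900, "Apr": 159_800}

def monthly_use(readings):
    """Consumption per month: difference of consecutive cumulative readings."""
    months = list(readings)
    return {m2: readings[m2] - readings[m1]
            for m1, m2 in zip(months, months[1:])}

def flag_anomalies(use, threshold=1.5):
    """Return months whose use exceeds threshold x the average (possible leak)."""
    avg = sum(use.values()) / len(use)
    return [m for m, v in use.items() if v > threshold * avg]
```

Here `monthly_use` yields 8,400 gallons for February, 23,500 for March, and 7,900 for April, and `flag_anomalies` singles out March as worth investigating.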
Here are the L-I-G-H-T-S to the Word of God: Literal Interpretation, Illumination by the Holy Spirit, Grammatical Principles, Historical Context, Teaching Ministry, Scriptural Harmony.

Principles of biblical interpretation ought to be determined before developing one’s theology, but in practice the reverse is often true. Cultists in particular consistently read their deviant theologies into the biblical text instead of allowing the text to speak for itself. Faith teachers are also guilty of this practice, as I document in my book “Christianity in Crisis”. In view of this growing problem, it would be productive to consider some of the primary principles of hermeneutics. Before you run off because of the formidable sound of this term, however, let me quickly point out that hermeneutics is simply a “fifty-cent” word that describes the science of biblical interpretation. The purpose of hermeneutics is to provide the student of Scripture with basic guidelines and rules for “rightly dividing the word of truth” (2 Tim. 2:15). To help ensure that you will remember these principles, I’ve developed the acronym L-I-G-H-T-S. Just remember that the science of biblical interpretation “LIGHTS” your path as you walk through the Word.

The L in LIGHTS will remind you of the literal principle of biblical interpretation. In simple terms, this means that we are to interpret the Word of God just as we interpret other forms of communication — in its most obvious and literal sense. Most often, the biblical authors employed literal statements to communicate their ideas (such as when the apostle Paul said of Jesus, “By Him all things were created, both in the heavens and on earth” — Col. 1:16). And where the biblical writers express their ideas in literal statements, the interpreter must take those statements in a literal sense. In this way, the interpreter will grasp the intended meaning of the writer. Of course, this is not to deny that Scripture employs figures of speech.
Indeed, the biblical writers often used figurative language to communicate truth in a graphic way. And, in most cases, the meaning of such language is clear from the context. When Jesus says He is “the door” (John 10:7), for example, it is obvious He is not saying He is composed of wood and hinges. Rather, He is the “way” to salvation.

Illumination by the Holy Spirit

The I in LIGHTS will remind you of the illumination of Scripture that can only come from the Spirit of God. First Corinthians 2:12 says: “We have not received the spirit of the world but the Spirit who is from God, that we may understand what God has freely given us.” Because the author of Scripture — the Holy Spirit (2 Pet. 1:21) — resides within the child of God (1 Cor. 3:16), he or she is in a position to receive God’s illumination (1 Cor. 2:9-11). And, indeed, the Spirit of truth not only provides insights that permeate the mind, but also provides illumination that can penetrate the heart.

The G in LIGHTS will remind you that Scripture is to be interpreted in accordance with typical rules of grammar — including syntax and style. For this reason, it is important for the student of Scripture to have a basic understanding of grammatical principles. It is also helpful to have a basic grasp of the Greek and Hebrew languages. If you do not know Greek or Hebrew, however, don’t panic. Today there are a host of eminently usable tools to aid you in gaining insights from the original languages of Scripture. Besides commentaries, there are “interlinear” translations that provide the Hebrew and Greek text of the Bible in parallel with the English text. As well, Strong’s concordance has a number-coding system by which you can look up the Greek or Hebrew word (along with a full definition) behind each word in the English Bible. Moreover, there are dictionaries of Old and New Testament words that are keyed to Strong’s concordance.
Tools such as these make it easy for the layperson to obtain insights on the original Hebrew or Greek of the Bible without being fluent in these languages.

The H in LIGHTS will remind you that the Christian faith is historical and evidential (Luke 1:1-4). The biblical text is best understood when one is familiar with the customs, culture, and historical context of biblical times. Thankfully, there are a host of excellent Bible handbooks and commentaries to aid us in the process of understanding the people and places of the Bible.

The T in LIGHTS will remind you that even though the illumination of Scripture ultimately comes through the ministry of the Holy Spirit, God has also provided the church with uniquely gifted human teachers (Eph. 4:11). Therefore, as we seek to rightly interpret God’s Word (2 Tim. 2:15), we would do well to consult those whom God has uniquely gifted as teachers in the church (cf. Tit. 2:1-15). Of course, following the example of the Bereans (Acts 17:11), we should always make sure that what human teachers say is in line with Scripture (cf. 1 Thess. 5:21).

The S in LIGHTS will remind you of the principle of Scriptural harmony. Individual passages of Scripture must always be in harmony with Scripture as a whole. The biblical interpreter must keep in mind that all of Scripture — though communicated through various human instruments — has a single Author (God). And, of course, God does not contradict Himself.

Studying the Bible is the noblest of all pursuits, and rightly understanding it, the highest of all goals. The six principles listed above can help you attain this goal. And as the science of biblical interpretation continually LIGHTS your path through Scripture, you will find yourself growing in your understanding of Him who is the Light of the world — Jesus Christ (John 8:12).
A User-interface for Proofs and Certified Software
by Janet Bertot, Yves Bertot, Yann Coscoy, Healfdene Goguen and Francis Montagnac

By making it possible to express the properties of procedures and functions, proof assistants can be used to help develop certified software. However, these proof assistants are often complicated to use and need real user interfaces to make software development feasible. Since 1990, the CROAP team at INRIA Sophia-Antipolis has been studying the development of user interfaces for theorem provers to reduce this level of complication. We have implemented a powerful prototype, CtCoq, that has been used successfully in the development of certified algorithms for program manipulation and polynomial mathematics. The latest version of this proof environment was released in February 1997.

The semantics of programs can be mathematically described using relations between inputs and outputs, or using functions from the domain of inputs to the domain of outputs. When these relations and functions are formally described, it is possible to use a computer to check some of their properties mechanically. This leads to the prospect of checking that programs fulfil a formal specification, and ultimately to zero-defect software. Since the correctness of a given program may rely on an arbitrarily complex corpus of mathematics, the system used for the verification needs to have very powerful proving capabilities. To date, only the systems known as theorem provers or proof checkers provide enough mathematical capabilities for this task. The Coq proof assistant is one such proof checker (see previous article). It uses type theory to express the properties of functions and encode powerful mathematical tools such as recursion and algebraic structures. Intuitively, the types used in a programming language like Pascal or C make it possible to verify simple consistency properties between the components of a software system.
When using a language with more expressive types, the properties that can be expressed using types can actually cover the complete specification of a software system. The CtCoq user interface is an independent front-end for the Coq proof assistant. It uses technologies from the domain of programming environments to help the proof developer in several ways.

The first element taken from programming environment technology is the use of syntax-directed tools. These tools use a precise description of the proof assistant's syntax to help in the rapid construction of syntactically correct logical sentences, specifications, and proof commands. For instance, syntax-directed menus make it possible to perform transformations on expressions or commands that respect the syntactic correctness of these expressions, thus reducing the time spent correcting low-level errors. Syntax-aware tools also make it easier to recognize usual mathematical notations and render them using multiple-font display, in a wysiwyg fashion. These tools make semantic manipulation of data possible, with interpretation of the user's pointing or dragging gestures using the mouse. For instance, pointing at an expression can be interpreted as guiding the proof process towards this expression. In the same realm, dragging an expression can be used to rearrange data when its algebraic properties make this possible.

Other tools taken from programming environments use the analysis of dependence graphs between functions, mathematical objects, and proof commands. This analysis leads to tools that help find and correct errors in specifications more quickly, thus making the development of completely proved software faster. Powerful analyses also make it possible to extract natural-language presentations from proof data structures, making the results of proof developments understandable by mathematicians and engineers outside the community of Coq and CtCoq users.
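As an illustration of types-as-specifications, here is a small sketch in Lean, a type-theory-based assistant in the same family as Coq (this example is mine, not from the CtCoq project, and assumes a recent Lean 4 with the `omega` tactic). The type of the theorem *is* the specification, and the term below it is a machine-checked proof that the property holds for every input:

```lean
-- An ordinary program.
def double (n : Nat) : Nat := n + n

-- Its specification, written as a type: "for every n, double n is even."
-- The proof term is checked once and for all; no testing is needed.
theorem double_even (n : Nat) : ∃ k, double n = 2 * k :=
  ⟨n, by unfold double; omega⟩
```

In a language like Pascal or C, the type of `double` only says "Nat in, Nat out"; in type theory, the evenness guarantee itself becomes part of what the checker verifies.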
The CtCoq proof environment has been used successfully in the development of algorithms for symbolic computation, trajectory planning, and program partial evaluation. Future research around this user interface aims, on one side, at better integration with symbolic computation and computer algebra systems and, on the other, at better use of dependency graphs to make the maintenance and re-engineering of large proofs feasible. Publication references for this research can be found at: http://www.inria.fr/croap/publications.html The CtCoq system can be retrieved by following the instructions found at: http://www.inria.fr/ctcoq/ctcoq-eng.html

Yves Bertot - INRIA
Tel: +33 4 9365 7739
Contact: Jules Asher
NIH/National Institute of Mental Health

Caption: Our brains are all made of the same stuff. Despite individual and ethnic genetic diversity, our prefrontal cortex shows a consistent molecular architecture. For example, overall differences in the genetic code (“genetic distance”) between African-Americans (AA) and Caucasians (cauc) showed no effect on their overall difference in expressed transcripts (“transcriptional distance”). The vertical span of color-coded areas is about the same, indicating that our brains all share the same tissue at a molecular level, despite distinct DNA differences on the horizontal axis. Each dot represents a comparison between two individuals. The AA::AA comparisons (blue) generally show more genetic diversity than cauc::cauc comparisons (yellow), because Caucasians are descended from a relatively small subset of ancestors who migrated from Africa, while African-Americans are descended from a more diverse gene pool among the much larger population that remained in Africa. AA::cauc comparisons (green) differed most across their genomes as a whole, but this had no effect on their transcriptomes as a whole.

Credit: Joel Kleinman, M.D., Ph.D., NIMH Clinical Brain Disorders Branch
Usage Restrictions: None
Related news release: Our brains are made of the same stuff, despite DNA differences
There are several border bridges that connect Canada to the US in Niagara, and we have listed them below. The Niagara River runs between two of the Great Lakes, connecting Lake Erie and Lake Ontario, and in turn acts as a natural border separating Southern Ontario in Canada from New York State in the United States. In the last hundred years or so there have been many bridges of all shapes and sizes that have crossed the Niagara River and connected the two countries together, allowing people and goods to cross without having to fight the wild rapids of the Niagara. Currently there are a total of six bridges crossing over the Niagara River connecting the US and Canada. Each of them only allows specific types of traffic to cross, so choosing the right bridge on your trip to or from the Falls is crucial to saving time and money. Starting at the Lake Erie end of the Niagara River and working our way towards Lake Ontario, the border bridges that cross the Niagara are as follows:

The Peace Bridge
This structure connects the city of Fort Erie, Canada to Buffalo, United States. It is open to car, truck and pedestrian traffic. Fort Erie is about a half-hour drive from Niagara Falls, Canada.

The Rainbow Bridge
This border crossing bridge connects the City of Niagara Falls, Canada to the City of Niagara Falls, USA directly and takes visitors right into the heart of all the attractions of Niagara Falls. This bridge is open to car, truck and pedestrian traffic.

The Whirlpool Bridge
This bridge also connects Niagara Falls, Canada to Niagara Falls, US but is open only to vehicles carrying NEXUS pre-clearance passes. Generally these are people who work or need to cross the border bridges frequently and have passed several security clearance applications to get approved. The Whirlpool Bridge is also open to train traffic. There is no pedestrian traffic allowed.
The Lewiston-Queenston Bridge
Connecting the towns of Lewiston, New York and Queenston, Ontario, this bridge is open to both car and truck traffic. Only a few minutes down river from the city of Niagara Falls, this bridge can be a good alternative if the Rainbow Bridge is experiencing heavy traffic. The Lewiston-Queenston Bridge is not open to pedestrian traffic.

The International Railway Bridge
Constructed only for use by trains, this bridge connects rail traffic from Fort Erie, Canada to Buffalo, United States. It is not accessible to cars, trucks or pedestrian traffic.

The Michigan Central Railway Bridge
The only bridge connecting the two countries over the Niagara River that is currently not in use, the Michigan Central Railway Bridge was a railway-only bridge that served train traffic from Niagara Falls, Canada to Niagara Falls, USA. This bridge is closed and has security barriers preventing any attempts to cross.
Perhaps Steinbeck’s most successful work, which won him the Pulitzer Prize in 1940 and figured predominantly in his winning the Nobel Prize in 1962, “The Grapes of Wrath” tells the story of the Joad family, who, driven out of the Oklahoma Dust Bowl by drought and hardship during the height of the Great Depression, travel to California in search of jobs and a better future. Although written 80 years ago, the play, says director Michael Michetti, is “still profoundly relevant.” Steinbeck wrote it after observing the life of migrant farm workers. In a letter to his friend Elizabeth Otis, Steinbeck wrote about thousands of families starving to death, “not just hungry, but actually starving… the states and counties will give them nothing because they are outsiders. But the crops in any part of this state could not be harvested without these outsiders.”

It is no secret that our country faces similar problems today. The migrant families are not from Oklahoma this time, but they are still migrating in search of employment and better conditions. Like the Joads, who lost their home and property, in the last few years our country has seen people lose their houses and struggle with unemployment. And as in the story, our society must realize the relevance of government involvement and the need for human kindness.

At A Noise Within, Frank Galati’s adaptation features Steinbeck’s words almost exclusively, as well as live music that includes period hymns, Dust Bowl songs, and original works by Michael Smith written for the play’s original 1988 production. Galati won two Tony Awards for this adaptation and its direction on Broadway. “The Grapes of Wrath” at A Noise Within is a thought-provoking production. It brings about the realization that, although time moves on, our society remains plagued by similar issues. And it underlines the importance of looking beyond ourselves to help others. It is delightfully performed, creatively staged, and thoroughly enjoyable.
"The Grapes of Wrath"
A Noise Within
3352 East Foothill Blvd, Pasadena, CA 91107

Saturday, March 2, at 8 p.m.
Sunday, March 3, at 2 p.m.
Sunday, March 24, at 2 p.m.
Sunday, March 24, at 7 p.m.
Thursday, April 11, at 8 p.m.
Friday, April 12, at 8 p.m.
Saturday, April 20, at 8 p.m.
Sunday, April 21, at 2 p.m.
Friday, May 3, at 8 p.m.
Saturday, May 11, at 2 p.m.
Saturday, May 11, at 8 p.m.

Price: $40 - $52
Special price for groups of 10 or more
Mark H. Shour, P&S, Entomology

Several child abuse cases were documented recently when children were exposed to chemicals used in licensed child care centers in Iowa. Injuries ranged from chemical skin burns, to eye injuries, to respiratory distress. Although these injuries were not intentional, they were preventable. Currently, more than 250,000 children are enrolled in child care facilities (centers and in-home sites) in the state of Iowa. These children have the potential of being exposed to cleaning agents, pesticides, and other chemicals while in child care. A March 2007 statewide survey by ISU Extension found more than 700 cleaning and disinfectant products as well as over 130 insecticide, herbicide, and rodenticide products in child care centers.

To increase awareness of chemical safety in Iowa’s child care centers through provider training and useful educational aids.

A focus group of 10 licensed child care center directors in August 2007 determined that self-paced learning modules (Internet and DVD) would be the best way to train providers, since staff/child contact time fills the work week (M-F, 6am – 6pm). Eight modules were assembled: Overview, Pesticides, Pesticide Labels, Cleaning Chemicals, Chemical Storage, Common Pests, Integrated Pest Management Overview, and the “Is It Safe?” DVD by the Toxicology Education Foundation. The overall time was 2.5 hours. The Internet training site was: http://www.ipm.iastate.edu/ipm/childcare/home . An optional worksheet was developed to enhance the information gained through the audiovisual training. Approval for continuing education units was obtained from ISU Continuing Education for program participants. The child care center directors’ focus group also determined that some eye-catching visual aid should be created to reinforce the training.
A set of three full-color posters (18”W x 24”L; laminated; SP 0316) was developed: “Reading Chemical Product Labels”, “Chemical Use in Child Care Facilities”, and “Choosing Pest Management Strategies”. A pilot training was conducted with 7 child care centers and 60 providers. Based on pre-training/post-training surveys, participants improved their knowledge of chemical safety issues for 17 of the 30 questions, and had proper understanding for 9 of the 30 questions. Participants strongly believed the training familiarized them with pesticide and other chemical safety and IPM. An additional 40 people have taken the Internet training following the pilot program. There were 9,000 poster sets printed for this project. One set was mailed to each of the 1,506 licensed child care centers in Iowa. Additional posters were made available to the public.

Page last updated: August 25, 2008
Page maintained by Linda Schultz, firstname.lastname@example.org
Philippines (from FamilySearch Wiki)

Beginning in the late 1500s, the Spaniards took various censuses known as vecindarios (local censuses), padrón de almas (head census), or estado de almas (people status). The latter two were religious censuses conducted by parish clergy.

- Filipinas Heritage Library
- LibraryLink Website
- National Statistics Office
- Philippines Genealogy Search
- Philippine History
- Philippines Libraries (Libweb)
- Websites about Philippine and Filipino Genealogy
Romeo and Juliet Literary Analysis Paper
Copyright 2009 YES Prep Public Schools

For the Romeo and Juliet essay, you have a choice of six different topics; you need to choose one. A good essay will have an introduction paragraph with a strong, clear thesis, several well-organized body paragraphs with evidence that supports your point of view, and a concluding paragraph with the thesis restated. Your essay should be two to three pages, typed and double-spaced in 12-point font. This essay assignment will require you to plan and organize your essay carefully. I will provide time for you to work in class and will also help you organize your ideas and gather evidence. Choose a topic about which you have a strong personal opinion, but your thesis must be supported by evidence in the play.

Here is the timeline of due dates for your paper. Add these dates to your calendar:
- Monday, April 27th: 12 Dialectical Journal entries for your topic; declare project extension(s)
- Monday, May 4th: Project Extension: Check-in 1
- Monday, May 11th: Project Extension: Check-in 2
- Monday, May 18th: Final Draft and Project Extensions
- Monday, May 18th - Thursday, May 21st: Project Presentations

I. Romeo and Juliet tells the story of two young lovers who fall deeply in love and do everything they can to be together. Compare and contrast the actions of this couple with those of typical teenagers we see today in America. What factors play a role in this love story that create similarities and differences to today's love stories?
a. Possible topics to consider in your analysis (feel free to add your own):
i. What are the common dating practices of each time period?
ii. How do the different cultures affect dating norms?
iii. How does age play a role in decision-making?
iv. How much parental involvement is expected in a dating relationship? What is acceptable behavior if parents and children do not see eye to eye?
v. Besides family grudges, what are other reasons two young lovers would be forbidden from being together?

II. William Shakespeare is very intentional about how he shapes each character in the play so that each plays an integral role in the tragedy that ensues. Analyze the personality traits of 2-4 characters in Romeo and Juliet.
Possible Sandro Botticelli Fresco Painting Found in Ruined Hungarian Castle

06/10/2007 - A restoration project in Esztergom, Hungary has resulted in a surprising possible attribution of a painting to the Early Italian Renaissance painter Sandro Botticelli. While restorations have been ongoing since 2000 on the four-piece mural, it is only in recent weeks that art historians have recognized the hand of Sandro Botticelli in one of the fresco paintings. The mural paintings were commissioned to decorate the castle chapel by Janos Vitez, Archbishop of Esztergom. To paint the frescoes, Vitez employed the school of Fra Filippo Lippi, to whom Sandro Botticelli was apprenticed. The images depict the four medieval virtues, a common theme of the period. Apparent in the fresco painting to even a casual observer is the flowing red hair characteristic of Sandro Botticelli's Simonetta Vespucci, who died in 1476, but whom Botticelli continued to paint for the remainder of his career. Janos Vitez was a Hungarian Humanist who was born around 1400 and reigned as archbishop in Hungary from 1465 to 1472.

Brenda Harness, Art Historian
The Tasmania Fire Service's Community Education Unit delivers a number of programs to the community to enhance community safety. These programs are free of charge. The School Fire Education Program delivers fire safety information to all Tasmanian primary school students, promoting awareness of fire safety and fire hazards, and steps to take if fire breaks out in the home. The Juvenile Fire Lighter Intervention Program is for children who engage in unsafe fire lighting behaviour at home or in the bush. Program content is tailored to the age and maturity of the children attending. Project Wake Up! provides opportunities for people with disabilities and aged people to get advice from firefighters about fire safety matters in their homes, and when necessary, to have free smoke alarms installed.
Feral and Stray Cat Management

Trap, Neuter, Return (TNR) is a proven humane method for feral (wild) and stray cat population control. These cats are often referred to as "community" cats because they are a product of irresponsible pet owners in the community. An abandoned domestic cat that is not already spayed or neutered will produce kittens that will become feral. Feral cat colonies can survive anywhere there is a food source. Feeding them is the first step to controlling the population. The next step is trapping them so that they can be sterilized and vaccinated. The rate of disease in feral cats is the same as for domestic cats. Rabies is not a threat for feral cats in SW Florida. Feral cats act like any other wild animal: totally nocturnal, usually silent, and normally clean and sleek. Stray cats are visible during the day, are vocal, and are dirty and hungry. Once the cats are spayed and neutered, the public nuisance issues of overpopulation, territory marking, fighting, predation and aggressive behavior cease. Controlled feral cat colonies must be managed by a caregiver who provides daily food and water. A colony of controlled feral cats can be a benefit to the community by providing free pest control for rats, mice and other vermin that carry diseases that could affect the wellbeing of the human community. Collier County has a TNR ordinance in place and is in full support of community members who are trapping and fixing feral and stray cats in their community. TNR is the centerpiece of our mission. We offer resources to residents of Collier County who are willing to trap and transport cats to local veterinarians for surgery and vaccines. Please visit us at the PetSmart store on Pine Ridge Road in Naples on Saturdays from 10-3 to discuss your needs.
HOW TO HELP FERAL AND STRAY CATS

Feed the cat(s) every day at the same time and place
- Feeding location and time: Locate the bowls of food and water in a protected area that is out of sight from public view. You do not want anyone doing harm to the cats. Feed just after dark or very early in the morning so no one sees you.
- If ants get into the food, buy "food grade diatomaceous earth" from the feed store (or online at www.dirtworks.com) and sprinkle it on the ground around the area. It will kill the ants but will not be harmful to the cats.
- You can make an affordable covered feeding station/shelter for the cats by buying a large 4 foot Rubbermaid container from Home Depot. Take the lid off (use the lid for something else), turn the container upside down, cut out both of the ends and use two bungee cords to strap it to a wooden shipping pallet (can be acquired from grocery stores and other businesses, usually for free).

Trap, neuter and return (TNR) them to the same place - FERAL CATS CANNOT BE RELOCATED
- Sterilizing the cat affordably: Call the Collier Spay Neuter Clinic at 239-514-7647.
- Getting a trap: Borrow a trap from Domestic Animal Services on Davis Road in Naples (deposit required) or the Collier Spay/Neuter Clinic on Immokolee Rd, 239-514-7647. Or buy a trap from Lowes or Tractor Supply (Havahart #1079) for approx. $50.00.
- Trapping the cat: Call the Clinic to make sure they can take the cat on the morning you plan to have your cat in the trap. Locate the trap where you feed. Cover the back, top and sides of the trap with a towel to make it more appealing. If the trap is near sprinklers, cover the trap with a black garbage bag first and overlay the bag with a towel. Put the trap out just after dusk and check on it during the night if you can. Otherwise, check early in the morning before anyone knows it is there. If you have the cat in the trap, cover the trap completely with towels to calm the cat down.
- Cover the back seat of your car with garbage bags and transport the cat to the Clinic ASAP. Be prepared to pick the cat up the next morning.
- After you pick up the sterilized cat from the vet, put newspaper down on the floor of your garage and put the towel-covered trap on the newspaper overnight.
- Bring the covered trap back to your feeding location early the next morning and release the cat. The cat may disappear for several days but will be back to resume normal activities.

Continue to feed and care for the cats.
What are tachyons? Tachyons are hypothetical particles that can only travel faster than the speed of light. As you probably know, objects with a real number for mass can never travel at the speed of light because of Einstein's theory of relativity. As a consequence of this theory, as an object's velocity increases its mass increases, as can be seen from the formula mass = rest_mass / sqrt(1 - v^2/c^2). At the speed of light the mass becomes infinite, so it would take an infinite amount of energy for a massive particle to reach the speed of light. These slower-than-light objects are sometimes called tardyons. Photons can travel at the speed of light because they have no mass; their energy is E = Planck's constant * nu (the frequency of the photon). In order for something to travel faster than the speed of light it would have to have an imaginary number for its mass. An imaginary number is a multiple of the square root of -1. For a particle traveling faster than the speed of light, the denominator of mass = rest_mass / sqrt(1 - v^2/c^2) becomes imaginary; an imaginary rest mass would counteract this, so we (in the rest frame) would see something that had a real mass but that always traveled faster than the speed of light. There have been a few experiments to find tachyons using a detector called a Cherenkov detector. This detector is able to measure the speed of a particle traveling through a medium. Photons travel at a slower speed inside a medium, and if a particle travels through a medium at a speed that is greater than that of light in the medium, Cherenkov radiation occurs. This is analogous to the sonic boom produced when an airplane travels faster than the speed of sound in air, or to the shock wave at the bow of a ship. If tachyons existed you would be able to see Cherenkov radiation in a vacuum. A few Cherenkov experiments were conducted in a vacuum and no radiation was found, so it is generally accepted that tachyons do not exist. I hope this helped you.
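The mass formula above is easy to check numerically. Here is a minimal sketch in Python; the function name and the sample speeds are illustrative choices, not part of the original answer:

```python
import math

def relativistic_mass(rest_mass, v, c=299_792_458.0):
    """Relativistic mass m = rest_mass / sqrt(1 - v^2/c^2), defined for v < c."""
    return rest_mass / math.sqrt(1.0 - (v / c) ** 2)

# The mass grows without bound as v approaches c, which is why a
# particle with real (positive) rest mass can never reach light speed.
c = 299_792_458.0
for fraction in (0.5, 0.9, 0.99, 0.999):
    m = relativistic_mass(1.0, fraction * c)
    print(f"v = {fraction}c  ->  m = {m:.2f} * rest mass")
```

For v > c the expression under the square root goes negative and `math.sqrt` raises a `ValueError`, mirroring the point in the text that only an imaginary rest mass could keep the observed mass real in that regime.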
Christina L. Hebert
Graduate Student at Fermilab
(last modified 12/11/1999)
Tips & Advice: Different types of bird feeder
Bird feeders come in all different shapes, materials and sizes, all with the same goal of attracting the widest possible variety of birds to your garden to feed. What you put in the feeder is the key factor in attracting the birds into your garden. So whether it's a seed, peanut or suet feeder, it's important to try a variety of foods in the feeders to find out which birds like what food. Whatever feeder you use, the important factor is ensuring that the food can flow easily from it. Keeping the feeder clean will help ensure any trapped food is removed, allowing it to flow freely, and at the same time reduce the risk of any bacteria. Seed feeders are one of the most popular types of feeder, probably because of the wide variety of foods you can get to fill them, and the more feeders you site with a variety of different seeds, the wider the range of species you are likely to attract. Seed feeders come in a variety of designs, from tubular feeding stations to hopper-type feeders with a trough at the base for the birds to eat from. They all come in many different designs to appeal not only to the birds but also to the bird watchers. Seed feeders are designed to dispense various types of seed, like wild bird seed blended mixes and sunflower seeds - black, striped or hearts. Some seed requires a specially designed feeder, such as Niger/thistle seed. Niger/thistle seed is very light and small, and without a specific feeder the seed can blow away or fall through the holes of standard seed feeders. Tubular-style seed feeders come with a varying number of feeding ports where birds can hang or perch to feed. The more ports you have, the more birds can feed at any one time, and quite often the more confident the birds feel feeding, as there is safety in numbers. Ensure feeders with more than two ports are tall enough to enable birds of all sizes to perch or hang, utilising all ports at once.
Hopper-style feeders - the lantern feeder is an example of a hopper feeder. The seed is dispensed through positioned gaps in the base of the casing; with these feeders the seed falls into a small trough round the base and the birds perch or hang onto the perches to feed on the seed. Feeders can be hung from a bird table, post, wall, fence or suitable branch of a tree using a bracket, hook, rope or other secure anchor point. It's important that the fixing is secure at all times, as when the feeders are full of food and the birds are feeding, the weight and movement can result in them becoming insecure. Peanut feeders are another popular type of feeder. They too come in various shapes, sizes and materials and are essential for dispensing peanuts to the birds in your garden. It's important that peanuts are fed from a specific feeder so that birds can't get at the peanuts whole, as this can cause choking, especially amongst fledglings in the spring. Peanut feeders are usually a cylindrical tube covered in a steel mesh which birds hang from while they peck away at the nuts inside. Having to peck at the nuts through the mesh not only ensures the birds can't eat the nuts whole but also ensures they hang around in your garden longer to feed, bringing hours of entertainment to those who love watching the birds. Peanuts attract a wide variety of birds - blue tits, great tits, woodpeckers, greenfinches and nuthatches are just a few of the regular visitors that will enjoy feeding on peanuts in your garden. Another visitor that loves to feed at the peanut feeder is the squirrel. So if you have problems with the squirrels that visit your garden, use a squirrel-proof feeder to deter their unwanted attention. Tubular and hopper feeders made of plastic are an easy target for squirrels, as they gnaw through the plastic fitments to get to the food.
Buying feeders made with metal or ceramic fitments helps keep squirrels at bay, as these are harder to penetrate while hanging upside down from a fence, tree or bird table. While they are more expensive to buy initially, they are well worth it as they are more likely to survive the attention of the squirrels - although determined squirrels may still find a way. Squirrels are very partial to seeds and nuts, and while it can be very entertaining watching them become acrobats as they strip and destroy the feeders, to reduce this destruction you can buy specially designed feeders that restrict squirrels. Squirrel-proof feeders are very attractive feeders that have the tubular feeder encased in a metal, plastic-coated cage, which stops squirrels being able to get at the feeder and the food inside. These feeders are designed to allow the smaller birds into the cage through the wire gaps to feed in peace, out of the reach of the larger birds and the squirrels that compete for their food. Squirrel-proof feeders are available for dispensing seed or nuts, and there are even ones that can dispense both within the same feeder. Even if squirrels are not a problem in your garden, using one of these feeders gives you a feeder that only the smaller birds can feed from, encouraging a wider variety of smaller birds into your garden to feed. Suet feeders are plastic-coated wire holders designed to hold fat balls or suet blocks efficiently while birds feed. The suet block holders are designed for holding suet blocks but are also ideal for placing toast, bread and scraps in. Fat ball feeders are designed to hold fat balls. The plastic netting that fat balls come wrapped in should ideally be removed and the fat ball placed in a suitable feeder, to reduce the risk of the birds becoming trapped in the fine netting or choking on it as they peck at the suet.
Suet feeders, due to the nature of the food that goes in them, require regular cleaning to avoid any remains of the food going rancid. They should be cleaned every time before being replenished. Wash in hot soapy water first, rinse well and then clean with a cleaner disinfectant like Bug Gard. Suet is high in fat, which is a good source of nourishment, and attracts many different birds, in particular nuthatches, woodpeckers, tits and starlings. Starlings can devour your suet treat in no time at all, and to ensure other birds get the chance to feed you might want to place the suet in an area of the garden that larger birds like starlings find more difficult to feed from, or place the suet feeder inside a metal cage to restrict access by the larger birds and squirrels. Feeding trays are an alternative way of feeding birds while keeping the food off the ground. Feeding trays are raised off the ground, usually made of a wood or metal frame with a fine wire mesh base. One advantage of a feeding tray is that food that's not suitable for feeding from a feeder - for example ground food, suet pellets, dried mealworms etc. - can be fed easily from a tray, allowing you to pick up any uneaten food at night to reduce the unwanted attention of vermin that may visit during the night, attracted by the leftovers. The other advantage is that when it rains, the rain soaks through the mesh, allowing the food to dry out quickly and also reducing the mess on your lawn or patio.
By: Katie Burns
Date: 8/1/11
Registered dietitians are aware of the science behind the 2010 Dietary Guidelines for Americans and are knowledgeable about the impact a poor diet can have on one’s health, but an important component of food and health that is often overlooked or oversimplified is food safety. Foodborne illness is a significant public health issue: the Centers for Disease Control and Prevention estimates that there are 48 million illnesses, 128,000 hospitalizations and 3,000 deaths per year in the United States related to foodborne illness. Food safety doesn’t strengthen your bones; it won’t help manage your weight, nor will it help you run faster and jump higher. However, it does play an important role in disease prevention and overall health and wellness. Unfortunately, the IFIC Foundation 2011 Food & Health Survey has shown a steady decline in safe food handling practices by consumers:
• CLEAN: Only about 80% of Americans report washing their hands with soap and water, and only 71% report washing cutting boards when preparing food or getting ready to eat.
• SEPARATE: Less than two-thirds of Americans separate raw foods from ready-to-eat foods, and only about half use different cutting boards for each product.
• COOK: While 68% of Americans indicate they cook their food to the required temperature, less than 30% report using a food thermometer to check the food’s doneness.
• CHILL: Nearly 70% of Americans report storing leftovers within two hours of serving.
The majority of Americans are not seeking out registered dietitians to learn about food safety (only 2% stated they had received food safety information from a registered dietitian, according to the 2011 Food & Health Survey), but the basic food safety practices of “clean, separate, cook and chill” can easily be attached to nutrition messages. After all, “food isn’t nutritious unless it’s safe!”
• Enjoy your food.
o CLEAN: When you’re getting ready to eat or make a meal, wash your hands with soap and water.
You’ll enjoy your food so much more knowing you’ve taken a step to keep it safe, plus it will allow you to slow down and think about the food you’re about to eat - just one more way to be mindful about what you eat!
• Make half your plate fruits and vegetables.
o SEPARATE: When preparing your meal, keep raw meat, poultry, seafood and eggs (and their juices) away from ready-to-eat foods like fruits and vegetables.
• Choose lean or low-fat meat and poultry.
o COOK: When preparing your meat or poultry dish, use a food thermometer to determine when it has reached a safe internal temperature (high enough to kill the harmful bacteria that cause foodborne illness). This will prevent you from over-cooking it and will keep you safe! Visit the Partnership for Food Safety Education for a list of safe internal temperatures.
• Eat less and avoid oversized portions.
o CHILL: After serving the first helping, put leftovers into containers and refrigerate. This helps avoid the growth of the bacteria that cause foodborne illness and also keeps you from going back for seconds!
These are just a few practical ways to incorporate food safety into the basic nutrition messages of the 2010 Dietary Guidelines. What are some other ways to make food safety fun and interesting?
How to Read a Balance Sheet: Current and Quick Ratios You've read through our abbreviated definitions of various items on the balance sheet -- congratulations. Now we'll have some fun with numbers and play around with these bits of information. We do this to get the nitty-gritty details about how well the company manages its assets and whether or not its price represents a bargain, based on the assets it has at its disposal. The first tool you use is called the current ratio. A measure of just how much liquidity a company has, this number is simply the current assets divided by the current liabilities. For instance, if Joe's Bar and Grill has $10 million in current assets and $5 million in current liabilities, here's the formula: $10 million current assets / $5 million current liabilities = 2.0 current ratio As a general rule, a current ratio of 1.5 or greater can meet near-term operating needs sufficiently. A higher current ratio can suggest that a company is hoarding assets instead of using them to grow the business -- not the worst thing in the world, but it's something that could affect long-term returns. You should always check a company's current ratio (and any other ratio) against the same information for its main competitors. Certain industries have their own norms in terms of the current ratios that do make sense and those that do not. For instance, in the auto industry, a high current ratio makes a lot of sense if a company does not want to go bankrupt during the next recession. When we discussed inventories, we mentioned that sometimes inventories are not necessarily worth the amount they are on the books for. This is particularly true in retail, where you routinely see close-out sales with 60% to 80% markdowns. It is even worse when a company going out of business is forced to liquidate its inventory, sometimes for pennies on the dollar. 
And if a company has much of its liquid assets tied up in inventory, it will be very dependent on the sale of that inventory to finance operations. If the company is not growing sales very quickly, this can turn into an albatross that forces the company to issue stock or take on debt. Because of all of this, it pays to check the quick ratio. The quick ratio is simply current assets minus inventories, divided by current liabilities. By taking inventories out of the equation, you can find out whether a company has sufficient liquid assets to meet short-term operating needs. If you look at the balance sheet of Joe's Bar and Grill, you'll see that the company has $2.5 million of current assets in hamburger buns that are sitting in inventory. You now can figure out the company's quick ratio: Quick ratio = (current assets - inventories) / current liabilities ($10 million current assets - $2.5 million inventories) / $5 million current liabilities = 1.5 quick ratio Looks like Joe's makes the grade again. Most people look for a quick ratio greater than 1.0 to be sure there is enough cash on hand to pay bills and keep going. Like the current ratio, the quick ratio can also vary by industry. It always pays to compare this ratio to that of peers in the same industry to understand what it means in context. In addition, some investors will use something called the cash ratio: the amount of cash a company has divided by its current liabilities. This is not a tool we lean on, however, so we don't have a general guideline for it. It is just another method to compare companies in the same industry to determine how well they are funded. For more lessons on reading a balance sheet, follow the links at the bottom of our introductory article.
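Both ratios are simple arithmetic, so they are easy to script. A minimal sketch in Python using the Joe's Bar and Grill figures from the article (the function names are my own, not from any accounting library):

```python
def current_ratio(current_assets, current_liabilities):
    """Current assets divided by current liabilities."""
    return current_assets / current_liabilities

def quick_ratio(current_assets, inventories, current_liabilities):
    """Current assets minus inventories, divided by current liabilities."""
    return (current_assets - inventories) / current_liabilities

# Joe's Bar and Grill (figures in millions of dollars):
assets, inventories, liabilities = 10.0, 2.5, 5.0
print(current_ratio(assets, liabilities))             # 2.0
print(quick_ratio(assets, inventories, liabilities))  # 1.5
```

Joe's clears both of the rules of thumb mentioned above: a current ratio of at least 1.5 and a quick ratio above 1.0.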
No one knew American history better than Gore Vidal. It was in his blood. He learnt it from a grandfather who served in the Senate, and from his personal association with the great American political families of his time. From his home atop a Ravello cliff face he spun wonderful stories out of American history, buttressed by a flawless memory and a talent for mimicry. His historical novels chart the emergence of America as a continental power with centralised government, and what he saw as a descent into imperialism. He embodied an anti-imperial tradition that goes back to Mark Twain – representing an isolationist viewpoint that once ran deep in America. Gore Vidal believed no foreign war justified a single American life, and this view was his fundamental political commitment. And he loved a political feud – his own being a vendetta against Bobby Kennedy, with whom he clashed while campaigning for the US Congress in 1962. He told me once that addressing an anti-Nixon rally in Boston he was asked, "Why is Nixon so hated in Massachusetts?" His roared response: "Because having seen so many crooks in its history, the people of Massachusetts recognise a crook when they see one!" The public applause, so strong it was almost a blow to the chest, confirmed in him a love of oratory and the chance to occupy a political stage. He would have loved to have been a politician and stood twice – once for Congress and later for the Senate from California. He would have traded all his literary accomplishments for a chance to serve as a long-term Senator, and to have one day run for President. Gore Vidal's passing at age 86 is a loss to his country, to literature and to history. Farewell to a polymath, a storyteller and a wonderful writer. His essays may have been the best in the language. There won't be another mind like his.
- Minister's office: (02) 6277 7500
- DFAT Media Liaison: (02) 6261 1555
Miraflores is the name of one of the three locks that form part of the Panama Canal, and the name of the small lake that separates these locks from the Pedro Miguel Locks upstream. In the Miraflores locks, vessels are lifted (or lowered) 54 feet (16.5 m) in two stages, allowing them to transit to or from the Pacific Ocean port of Balboa (near Panama City). Ships cross below the Puente de las Américas (Bridge of the Americas), which connects North and South America. As of 2005, the following schedule was in effect for ship transit through the locks. From 06:00 to 15:15, ships travel from the Pacific towards the Atlantic. From 15:45 to 23:00, ships travel from the Atlantic towards the Pacific. At any other time, travel is permitted in both directions. A modern visitor centre allows tourists to have a full view of the Miraflores locks operation. Binoculars are recommended to view the Pedro Miguel Locks in the distance. As of 2010, admittance for adults to the visitor centre costs US$5 (observation terrace) or $8 (supporting exhibits and video show added), with lower rates for children and senior citizens. Panama residents are admitted free of charge. Viewing a transit operation at the centre can take more than 30 minutes. A souvenir shop on the base level sells related merchandise. The centre closes at 17:00. The Panama Canal (Spanish: Canal de Panamá) is a 48-mile (77.1 km) ship canal in Panama that connects the Atlantic Ocean (via the Caribbean Sea) to the Pacific Ocean. The canal cuts across the Isthmus of Panama and is a key conduit for international maritime trade. There are locks at each end to lift ships up to Gatun Lake (85 feet (26 m) above sea level). Gatun Lake was created to reduce the amount of excavation work required for the canal. The current locks are 110 feet (33.5 m) wide. A third, wider lane of locks is being built. France began work on the canal in 1881, but had to stop because of engineering problems and high mortality due to disease.
The United States (US) later took over the project and took a decade to complete the canal in 1914, enabling ships to avoid the lengthy Cape Horn route around the southernmost tip of South America (via the Drake Passage) or to navigate the Strait of Magellan. One of the largest and most difficult engineering projects ever undertaken, the Panama Canal shortcut made it possible for ships to travel between the Atlantic and Pacific Oceans in half the time previously required. The shorter, faster, safer route to the US West Coast and to nations in and along the Pacific Ocean allowed those places to become more integrated with the world economy. During this time, ownership of the territory that is now the Panama Canal was first Colombian, then French, and then American; the United States completed the construction. The canal was taken over in 1999 by the Panamanian government, as long planned. Annual traffic has risen from about 1,000 ships when the canal opened in 1914, to 14,702 vessels in 2008, the latter measuring a total of 309.6 million Panama Canal/Universal Measurement System (PC/UMS) tons. By 2008, more than 815,000 vessels had passed through the canal, many of them much larger than the original planners could have envisioned; the largest ships that can transit the canal today are called Panamax. The American Society of Civil Engineers has named the Panama Canal one of the seven wonders of the modern world.
Plant Problems Demystified
Washington State University's Hortsense Web site presents specific information on about 600 common plant problems in western Washington. It's a tremendously helpful site and one I use often. Topics include problems with turfgrass, ornamentals, small fruits, tree fruits, vegetables, and weeds. The site offers non-chemical management information and a list of registered chemicals for the problem.

Caring for Perennials
Inside Caring for Perennials: What To Do and When To Do It, by Janet Macunovich (Storey, 1997; $18), you'll find a season-by-season guide to creating and maintaining a topflight garden with a minimum of effort. There's general advice for doing preventative maintenance in early spring; mulching, dividing, and moving plants in mid-spring; staking them in late spring; deadheading in midsummer; cutting back and removing them in mid-autumn; and composting in autumn. Chapters on pruning, edging, weeding, watering, and fertilizing round out the book, which focuses on the care and feeding of 130 of the most popular perennials.
Transplanting Eggplant, Peppers, and Okra
by National Gardening Association Editors

Head Off Cutworms
One of the simplest treatments to prevent cutworm damage can be done when you transplant. Simply take a strip of newspaper two or three inches wide and wrap it around the stem of the plant. When you place the plant in its hole, make sure an inch of the newspaper strip is below the soil surface, with the rest staying above ground. This prevents the dreaded cutworm from chewing through the stem of your tender young plant. Run through a checklist in your mind before you start to transplant. This will prevent you from reaching for something that isn't there while the roots of the plant you've just taken from its pot start to dry out. Is it cloudy or late in the day? Is the soil fully prepared? Are the plants properly hardened off? Do you have a shovel or trowel, fertilizer or compost, newspaper cutworm collars and watering can at hand? If not, do whatever is necessary before starting to transplant. Even though you've hardened off your transplants, they still have tender roots. Getting them into the ground quickly will help prevent any damage to the roots and help minimize the shock.
Eshelby Islands Site Plan

1.1 Rationale
Site plans are an important management tool used jointly by the Great Barrier Reef Marine Park Authority (GBRMPA) and the Department of Environment and Resource Management (DERM). They identify the significant values and management arrangements at a particular site, concentrating on the specific use issues and cumulative impacts. The waters surrounding Eshelby Island (20-012) and Little Eshelby Island (20-013) have been assigned to a protected setting (Setting 5) in the Whitsundays Plan of Management 2008 (WPOM). Due to their protected setting, the WPOM requires that this site plan be developed to ensure protection of the natural, cultural and heritage values of the Eshelby Islands. Eshelby Island is a high continental island, located approximately 30 kilometres north of Airlie Beach.
Figure 1: Map of Eshelby Islands [PDF 1.262MB]

2. Natural, cultural and heritage values
The values described below are not exhaustive, but are indicative of the significance of the area covered by this site plan. Birds are an integral part of the Marine Park and the Great Barrier Reef World Heritage Area, and the Whitsundays are recognised internationally as an important stopover for migratory birds. Colfelt (1985) notes that Eshelby Island probably has the most prolific bird life of any island in the Whitsundays. Eshelby Island is an important rookery for the bridled tern Sterna anaethetus and the common noddy Anous stolidus. Up to 10 000 bridled terns have been recorded on the island at one time. Both of these species are listed marine and migratory species under the Environment Protection and Biodiversity Conservation Act 1999. Eshelby Island also hosts numerous other bird species, some of which are vulnerable, or at risk of becoming vulnerable, in the Whitsundays. In 1935 an unmanned navigational light was erected on Eshelby Island. Initially powered by batteries, it was converted to solar power in 1985.
2.3 Traditional Owners
The islands and surrounding areas are culturally significant to the Ngaro Aboriginal Traditional Owner Group. The islands, reefs and surrounding waters are part of the cultural landscape. Spiritual connections are often associated with the natural and cultural resources. The Central Queensland Land Council Aboriginal Corporation is the representative body for Traditional Owners whose estates are located in the Whitsunday region.

3. Current Use
Eshelby Island is a Commonwealth Island, managed by the GBRMPA. Part of Eshelby Island is leased to the Australian Maritime Safety Authority (AMSA), which maintains the lighthouse on the island. No recreational or commercial visitation to the islands or the surrounding waters is allowed, as the islands are within a Preservation (Pink) Zone, which prohibits access. NB: Access is allowed under exceptional circumstances. Refer to 4.1.2(a).

4. Management strategies
4.1 Current management
The waters surrounding the Eshelby Islands are within a Preservation (Pink) Zone under both State and Commonwealth Zoning Plans. Marine Park Zoning Map 10 shows the zoning at the Eshelby Islands. The objective of the Preservation Zone is to preserve the natural integrity and values of the area, generally undisturbed by human activities. Eshelby Island (as opposed to the waters surrounding the island) is zoned as a Commonwealth Island. The Zoning Plan is one of a range of management tools for the Eshelby Islands. Other management tools include the Whitsundays Plan of Management 2008, the Great Barrier Reef Marine Park Act 1975 and the Great Barrier Reef Marine Park Regulations 1983.
4.1.2 Restricted access
4.1.2(a) Vessel access
Access into the Preservation Zone surrounding the islands is prohibited without the written permission of the GBRMPA. Under exceptional circumstances, such as responding to an emergency, permission is not required to enter the Preservation Zone.
These circumstances are described in Part 5 of the Zoning Plan.
4.1.2(b) Aircraft access
The Eshelby Islands are identified as significant bird sites in the Whitsundays Plan of Management 2008. Aircraft are not allowed to approach within 1000 metres of the islands below 1500 feet (above ground or water).
4.1.3 Research and monitoring
Written permission from the GBRMPA is required to conduct research activities in the Preservation Zone surrounding the Eshelby Islands, or on the islands themselves. In accordance with section 2.8.4 of the Zoning Plan, permission will only be granted if the research:
a) is relevant to, and a priority for, the management of the Marine Park
b) cannot reasonably be conducted elsewhere.
4.2 Proposed management
Access to the Eshelby Islands is adequately managed under the Zoning Plan. At this time no further management strategies are required.
5. Community engagement
This site plan was developed in consultation with DERM, the Whitsunday Local Marine Advisory Committee, the Tourism and Recreation Reef Advisory Committee, Traditional Owners and local users of the Marine Park. For further information or to provide comments on the site plan, please call (07) 4750 0700 or email email@example.com
Blackwood, R. (1997). The Whitsunday Islands: an Historic Dictionary. Central Queensland University Press.
Colfelt, D. (1985). 100 Magic Miles of the Great Barrier Reef – the Whitsunday Islands. Windward Publications.
How the Flute Works

The Native American flute has two chambers. Air enters the first chamber and is forced upward by the wall between the two chambers and out through an opening in the top of the flute. The air is then redirected by a block (also known as a bird or fetish) affixed to the top into a hole in the second chamber. As the air enters the second chamber, the sound is created. Because the block plays such an important role, make sure it is positioned properly: the bottom leading edge of the block should be lined up with the back of the hole and centered. Later you may wish to experiment by varying the position back or forward 1/32 to 1/16 of an inch.

The sound is influenced by:
1. The tuning of the flute
2. The nature of the wood from which the flute is crafted
3. The finger placements over the holes
4. The force of the air from the player.

Let's Begin with the Basics

Check Fetish - Always check the fetish on your flute to make sure it is placed with the bottom leading edge of the block lined up with the back of the hole and centered.

Holding your Flute & Finger Placement - The middle three fingers of each hand cover the holes of a six-hole flute; on a five-hole flute, the forefinger and middle finger of your top hand and the middle three fingers of your bottom hand cover the holes. The "textbook" method is left hand on top, but many fine players place the right hand on top. Experiment both ways to find which is most comfortable for you.

Pads vs Tips of Fingers - The holes are covered by the pads of the fingers, not the tips. Covering the holes with the pads of your fingers best assures total coverage, which is vital.

Producing Sound & the "Sigh" - Sigh into the flute rather than blow. You should get a low, mellow sound. Technically the top lip is pulled back against the upper teeth, but this is far less important than breathing gently (sighing) into the flute rather than "blowing".
If you hear a squeal, you are failing to completely cover all six holes. If you get a high sound, you are overblowing. However, there are a few fingering combinations that work best with "overblow", particularly on notes with the top hole open. You will discover this as you practice the drills.

Variations in Breathing

"Tuh" - If you elect to play the same note more than once in a sequence, "tuh" into the flute for each note you are playing while covering all holes on your flute.

"Woo" - Volume variations can be easily achieved by breathing the sound "woo" into the flute while increasing and decreasing the volume of air.

"Ha" - Vibrato can be created by making the sound "ha, ha, ha, ha, ha", almost like gargling into the flute.

The more you play, the more you will learn to experiment with your breath.

Combination - Play some notes long, some short; use more or less force (as a general rule, use more force on higher notes). Vary the force within a note. Remember, there is no right or wrong way, just fun and joy, so find what feels good to you.

Tapping is rapidly lifting and lowering your finger over a hole during a continuous breath. Sliding is moving a finger across a hole while you play.

LESSON FOUR: Playing Scales

5 Hole Scale - Basic Minor Scale (5 Hole Flute)
Practice this scale in both directions. Drills and songs for the basic scale follow. Remember to breathe as needed. You may wish to practice in front of a mirror (if you turn blue, you are not breathing enough).

6 Hole Scale - Minor Pentatonic Scale (6 Hole Flute)
Practice this scale in both directions. Drills and songs for the basic pentatonic scale follow. Remember to breathe as needed. You may wish to practice in front of a mirror (if you turn blue, you are not breathing enough).

NOTE: If your flute begins to sound a little funny after a long playing session, tap the air chamber hole vigorously against the palm of your hand or shake the flute, air chamber down, to remove the excess moisture.
Most flute lovers own multiple flutes so they can rotate for freshness and to play different keys. Other scales, drills and songs are included in our BOOKS.

WHY IS GRAND CANYON FLUTES THE NUMBER ONE PLACE TO BUY FLUTES & DRUMS?
- We carry the world's largest selection of finely crafted Native American Style Flutes & Drums
- We ship immediately by priority mail... no backorders... no delays
- We carry flutes from the finest flute & drum makers in the business
- Our instruction/song books are easy to follow
- Every purchase carries a satisfaction or money back guarantee.
The Łódź Ghetto (German: Ghetto Litzmannstadt) was the second-largest ghetto (after the Warsaw Ghetto) established for Jews and Roma in German-occupied Poland. Situated in the city of Łódź and originally intended as a temporary gathering point for Jews, the ghetto was transformed into a major industrial centre, providing much-needed supplies for Nazi Germany and especially for the German Army. Because of its remarkable productivity, the ghetto managed to survive until August 1944, when the remaining population was transported to the Auschwitz and Chełmno extermination camps. It was the last ghetto in Poland to be liquidated.
- The Encyclopedia of the Ghettos, Yad Vashem
- Model of the Łódź Ghetto built between 1940 and 1944, buried, and unearthed after the war by survivors
If you are the mother of a child with attention deficit hyperactivity disorder (ADHD), you may be at increased risk for depression, according to a study conducted by a Louisiana-based family physician. Dr. Louis McCormick conducted a year-long study of mothers of children with ADHD who were patients in his Franklin, La., medical practice. Of the 39 mothers who took the study’s Self-Test for Depression, 21 had scores that suggested depression. Eleven women scored in the “minimal to mild” range; five in the “moderate to marked” range; and five in the “severe to extreme” range. The mothers were then interviewed, diagnosed and, where appropriate, treated. So, what does this study really mean? McCormick theorizes that the stress of parenting an ADHD child can create situational depression, that is, depression caused by a specific stressful life event: divorce, death of a loved one, losing one’s job, an unwilling move to another location, ill parents and, of course, in this case, a child with an extremely demanding and frustrating condition which, in some cases, can also create financial distress. ADHD children are often loud, physically overactive, impulsive, and seemingly unwilling to follow directions. They can be reckless, even accident-prone. They may alienate their friends, frustrate their teachers, and annoy neighbors, thereby causing their mothers great distress on many levels. While certainly there is treatment for ADHD, it requires much time and fine-tuning to develop the appropriate skills for dealing with a child’s behavior, and for the overall treatment (which sometimes includes medication) to be effective. This requires great patience and persistence from the parents. This can take a major psychological toll on mothers who are most often the primary caretakers. In some cases, mothers may have a biological predisposition to depression. 
The enormous stress of parenting an ADHD child, McCormick believes, triggers that predisposition to depression, and is an underlying, additional layer of the situational depression. It is extremely important to be aware of the possible increased risk of depression if you are the mother of a child with ADHD so that you will be as vigilant about your own health as you are about your child’s. If you suspect that you may suffer from depression and have seen your family physician to eliminate any physiological causes, you can request a self-test or other appropriate test from a professional counselor or psychologist. You can take the test in the privacy of your home, and then discuss the results with your chosen counselor. Together, you can decide whether treatment would be helpful for you. It is critical for anyone who is a caregiver to be certain they attend to their own physical and psychological needs. It is especially important for mothers of ill children because the stressful demands of the child’s needs are constant, twenty-four hours a day. So for yourself, you need to be a healthy woman as well as a healthy mother. Based in Rockport, life coach and psychotherapist Susan Britt, M.Ed., teaches individuals, couples and families to resolve relationship conflicts, and clarify and achieve life and career goals. Questions and comments may be addressed to her at firstname.lastname@example.org or 978 546-9431.
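As a quick arithmetic check, the score bands reported in the study can be tallied in a few lines of Python. The band counts come from the article above; the percentage is our own derivation, not a figure the study reports.

```python
# Score bands from McCormick's study of 39 mothers who took the
# Self-Test for Depression (counts as reported in the article).
score_bands = {
    "minimal to mild": 11,
    "moderate to marked": 5,
    "severe to extreme": 5,
}

total_tested = 39
suggestive_of_depression = sum(score_bands.values())  # 21 mothers in total

share = suggestive_of_depression / total_tested
print(f"{suggestive_of_depression} of {total_tested} mothers "
      f"({share:.0%}) had scores suggesting depression")
```

In other words, a little over half of the mothers tested fell into one of the three bands, which is what makes the study's finding striking.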
What is a news feed?
A news feed (also known as an RSS feed) is a listing of a website's content. It is updated whenever new content is published to the site. News readers "subscribe" to news feeds, which means they download lists of stories at an interval that you specify (every 30 minutes, for example), and present them to you in your news reader. A news feed might contain a list of story headlines, a list of excerpts from the stories, or a list containing each full story from the website (BlueDevil's news feeds contain story excerpts). All news feeds have a link back to the website, so if you see a headline, excerpt or story you like, you can click on the link for that piece of content and be taken to the website to read it.

What is a news reader?
A news reader (also known as a news aggregator) is simply a piece of software that you can use to read your subscribed news feeds. It is to news feeds what Outlook, Hotmail, and Entourage are to email.

What is RSS?
RSS (Really Simple Syndication) is an XML-based format for sharing and distributing Web content, such as news headlines. Using an RSS reader, you can view data feeds from various news sources, including headlines, summaries, and links to full stories.
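To make the mechanics concrete, here is a minimal sketch, using only the Python standard library, of what a news reader does after it downloads a feed: parse the RSS XML and pull out each item's headline, link and excerpt. The sample feed, its URLs and its stories are invented for illustration.

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 document of the kind a news reader downloads when
# it polls a subscribed feed (the site and items are made up).
RSS_SAMPLE = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Site</title>
    <link>http://example.com/</link>
    <item>
      <title>First story</title>
      <link>http://example.com/first</link>
      <description>An excerpt of the first story.</description>
    </item>
    <item>
      <title>Second story</title>
      <link>http://example.com/second</link>
      <description>An excerpt of the second story.</description>
    </item>
  </channel>
</rss>"""

def read_feed(xml_text):
    """Return (headline, link, excerpt) tuples for each item in an RSS feed."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"),
             item.findtext("link"),
             item.findtext("description"))
            for item in root.iter("item")]

for headline, link, excerpt in read_feed(RSS_SAMPLE):
    print(f"{headline} -> {link}")
```

A real reader would fetch the XML from the feed's URL on a timer and keep track of which items you have already seen, but the parsing step is essentially this.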
Public funding for transportation goes back further than the eye can see, from building the Erie Canal to construction of the New York City subway and Interstate highways. In each case, the decision to expend public monies was based on pressing public needs for expansion of canal, transit and highway systems and the belief that public investment would pay dividends in economic growth. New York and New Jersey officials are now faced with deciding whether privately operated ferries should be next in line for public funding. New York Waterway, which currently carries 32,000 passengers a day, primarily from New Jersey to Manhattan, is experiencing acute financial difficulties. The company’s president, Arthur Imperatore, announced this month at a City Council hearing that, “New York Waterway is dying.” The company has closed some routes and is planning to shut down additional routes later this month. As with the region’s bus, subway, rail and highway systems, proponents of public subsidies argue that ferries provide a vital transportation service and spur economic development, in this case, along the New Jersey waterfront. Without the ferries, as many as 5,000 cars now parked near the New Jersey shore might travel into Manhattan each day, further burdening already congested tunnels and bridges into Manhattan. And ferries provide important redundancy in the transportation system, a fact graphically highlighted on September 11, 2001. Ironically, NY Waterway, established in 1986, has long been hailed as a model of private entrepreneurship. Imperatore’s father set up the company to provide easy access to Manhattan for his New Jersey residential developments. The company offered not only ferry service but also bus service in Midtown to access West Side ferry terminals. At first glance, ferry service might seem to be an ideal candidate for private operation. 
Ferries serve a niche market of customers who lack ready access to New Jersey’s bus and rail networks or who prefer the ambiance of waterborne transportation to the crush of PATH trains. Ferry riders are generally well-paid professionals. And by increasing access to Manhattan jobs, the ferries make private residential developments on the Jersey side of the Hudson River more attractive and valuable. NY Waterway’s current troubles were spawned by decisions intended to build on its own success. Ferry ridership doubled after the 9/11 attacks, which closed the PATH system’s World Trade Center station. Both the company and public agencies leapt at opportunities to increase ferry services by adding routes and revamping ferry terminals. But after NY Waterway took on $33 million in debt to buy new boats, PATH reopened ahead of schedule, Manhattan job losses produced a drop in the number of commuters, ridership fell off after the Hudson River froze for two weeks last winter, fuel prices doubled and the company spent $4 million in legal fees over a dispute with the federal government over billing practices for post-9/11 service. Some officials are now calling on government agencies to keep the service running. Hudson County (NJ) officials formulated a plan to buy the company and operate it in concert with Hoboken and Weehawken. But state officials last week nixed that plan. Some Hudson County and New York City Council officials are calling on the City of New York and the Port Authority of New York and New Jersey to step in. New York City Councilmember David Yassky urged the city to waive docking fees that total $1.5 million a year. Council members also urged the Port Authority to authorize subsidies amounting to $16.6 million a year. 
Tom Fox of New York Water Taxi, a competitor to NY Waterway which is putting together a plan with three other ferry and sightseeing cruise companies to take over NY Waterway routes, said subsidies are needed to compete with the heavily subsidized PATH lines. The City Transportation Department responded that the Council would need to offset the revenue loss if landing fees were waived. The Port Authority was noncommittal, calling for a “multiagency review of ferry service throughout the region.” The Port Authority also pointed out that it has invested, or plans to invest, more than $100 million in ferry terminals, roadway upgrades and other infrastructure costs. Thus, the issue is not whether privately operated ferries should receive public subsidies, but what form of subsidies, and how much, and who will pay. The most fundamental question is whether public agencies should subsidize specific ferry routes to keep them running. On the one hand, equitable treatment of commuters suggests that the public should subsidize ferry travelers. After all, the PATH system and NYC bus system only recover 41 percent of their operating expenses, and even the NYC subway, which leads all transit systems in the U.S. in farebox cost recovery, only pays 67 percent of its expenses from fares, according to federal data summarized in a Brookings Institution study. On the other hand, would political considerations overwhelm business decisions about which routes to keep, which to close and which to expand, leading to an inefficient and heavily subsidized ferry network that drains the public till? The reticence of the Port Authority and NYC Transportation Department to agree to operating subsidies is well founded. The Port Authority has increased its funding for PATH while keeping PATH fares relatively low. Under pressure from Staten Island voters and elected officials, the Giuliani Administration abolished the Staten Island ferry fare. The average trip on the S.I.
ferry costs $2.95, all of which is paid by the city, none by riders. The City Council recently adopted a bill to increase late night service and some rush hour service at an additional cost of $5 million a year. The test for public officials is now to find a way to support continued ferry service where it provides substantial public benefits while maintaining the market-driven sensitivities to both changing demand and opportunities for service that are the hallmark of private companies. This is a difficult task, made more so in the current crisis-driven environment, as NY Waterway totters on the brink of closure. Bruce Schaller is head of Schaller Consulting, which provides research and analysis about transportation, and is also a Visiting Scholar at the Center for Transportation Policy and Management at New York University.
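To see what those farebox-recovery figures imply per ride, the arithmetic can be sketched in a few lines of Python. The 41 percent recovery ratio is from the federal data cited above, but the $1.50 fare in the example is hypothetical, chosen only to illustrate the calculation.

```python
def subsidy_per_trip(fare, recovery_ratio):
    """Given the fare a rider pays and the fraction of operating cost
    recovered from fares, return (operating cost, public subsidy) per trip."""
    cost = fare / recovery_ratio
    return cost, cost - fare

# Hypothetical $1.50 fare at the 41% farebox recovery quoted for PATH:
cost, subsidy = subsidy_per_trip(1.50, 0.41)
print(f"operating cost ${cost:.2f} per trip, public subsidy ${subsidy:.2f}")

# The Staten Island ferry is the limiting case: riders pay nothing,
# so the entire $2.95 average trip cost quoted above is public subsidy.
```

The same function applied to any fare and recovery ratio shows why "should ferries be subsidized?" is really a question of degree: every mode in the table is subsidized, and the debate is over how much.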
Screen production and the environment

A major part of the screen production industry relies on the beauty and imagery of dramatic unspoilt natural landscapes. If these landscapes are not protected and cared for, the screen production industry will lose the stunning backdrops for its productions. It is vital that these images relate to reality. Many of the negative environmental impacts associated with the screen production industry relate to location filming and how production companies treat sites. In many countries, including New Zealand, environmental regulators and/or local authorities have implemented regulations for productions using sites protected for their environmental, historical or cultural importance. However, negative environmental impacts are not restricted to filming on location. Studio operations also have their own environmental impacts, e.g. greenhouse gas emissions, even though there may be fewer environmental compliance issues than when filming on location. Often it will be up to the production company to introduce voluntary measures for improving the management of the environmental impacts for studio productions. The screen production industry has a particular dependency on technical equipment, media and information technology. These have associated environmental impacts but also have the potential to provide and promote solutions to environmental problems. The availability of film and television to audiences has grown tremendously over the past 50 years. Although entertainment is the main purpose for most films and programmes, screen production has the potential to profoundly influence the attitudes, beliefs and behaviours of audiences in their day-to-day lives. Convincing and effective portrayal of environmental and social issues through film and television plays an important role in raising public awareness, especially in educational programmes and campaigns of governmental and non-governmental organisations.
Just as the screen industry and broadcast media have been used to promote social change, with respect to the dangers of smoking and drink-driving, better public awareness of environmental issues will lead to positive changes in people’s behaviour. It may mean that someone who has never recycled may begin to do so, or somebody else may decide to take the bus instead of the car or purchase 'environmentally friendly' products.
No one knows how much warming is "safe". What we do know is that climate change is already harming people and ecosystems. Its reality can be seen in melting glaciers, disintegrating polar ice, thawing permafrost, changing monsoon patterns, rising sea levels, changing ecosystems and fatal heat waves. Scientists are not the only ones talking about these changes. From the apple growers in Himachal to the farmers in Vidharbha and the people living on disappearing islands in the Sunderbans, many are already struggling with the impacts of climate change. But this is just the beginning. We need to act to avoid catastrophic climate change. While not all regional effects are known yet, here are some likely future effects if we allow current trends to continue.

Relatively likely and early effects of small to moderate warming: natural systems, including glaciers, coral reefs, mangroves, Arctic ecosystems, alpine ecosystems, boreal forests, tropical forests, prairie wetlands and native grasslands, will be severely threatened.

Longer-term catastrophic effects if warming continues: the Greenland and Antarctic ice sheets are melting. Unless checked, warming from emissions may trigger the irreversible meltdown of the Greenland ice sheet in the coming decades, which would add up to seven metres to sea level over some centuries. New evidence showing the rate of ice discharge from parts of the Antarctic means that it is also facing a risk of meltdown. Never before has humanity been forced to grapple with such an immense environmental crisis. If we do not take urgent and immediate action to stop global warming, the damage could become irreversible.
Stopping genetic junk

Never in the past have crops cultivated by us had to undergo such scrutiny. But the scrutiny is required especially in the case of genetically engineered (GE) or genetically modified (GM) crops. GE crops are organisms created artificially in labs through a process known as recombinant DNA technology. The unpredictability and irreversibility of GE have raised a lot of questions about this technology. Moreover, studies have found that GE crops harm the environment and have the potential to put human health at risk. All this has resulted in a controversy across the world about the need to introduce this dangerous technology. Greenpeace in India and in several other countries entered the agriculture scenario with the campaign against the environmental release of GE or GM organisms. GE crops represent everything that is wrong with our agriculture. They perpetuate the destruction of our biodiversity and the increasing control of corporations over our food and farming. The anti-GE campaign has contributed to ensuring a serious debate on the need for GE crops in the country. It has also ensured that India does not approve commercialisation of any GM food crop. The campaign has brought together farmers, consumers, traders, scientists and other civil society organisations to put up a brave front against the entry of GM crops into our country. This resulted in the indefinite moratorium on Bt brinjal, the first GM food crop that was up for commercialisation. While Bt brinjal has been stalled for now, 56 other crops are being genetically modified and are waiting for approval. Rice is the leader amongst these. If not stopped, the entire country would become one big feeding experiment for GM seed companies. The campaign is trying to plug the gaps in the existing regulatory system in the country to stop the release of any GM crops.
We are also asking the government to come up with a bio-safety regime that will prioritise citizens' health, environmental safety and the nation's socio-economic fabric. As the citizen is also a consumer and has a right to safe, GM-free food, we have been mobilising consumers and engaging with food brands in the country to ensure that the food industry in the country remains GM free. For the first time in India there is a consumer campaign against GM food, and food brands have started to notice this consumer opinion. To summarise, our basic demands are:
1. A complete ban on the release of any genetically modified organisms into the environment, either for commercial cultivation or for experiments.
2. A re-focus of scientific research on ecological alternatives, to identify agro-ecological practices that ensure future food security under a changing climate.
DARPA looks to nanotechnology to target illnesses

The Defense Advanced Research Projects Agency hopes to develop intracellular platforms to fight diseases in warfighters. The research agency issued a solicitation on June 8 for help developing In Vivo Nanosensors for Therapeutics (IVN:Tx) that would fight diseases on a cellular level rather than relying on disease-specific medicines that require expensive and expansive storage and shipment. It said the new platform is needed because research like that done by the Military Infectious Disease Research Program has shown more warfighters are hospitalized each year for infectious diseases than are wounded in combat. The negative effects of warfighter illness and downtime multiply when extended across the military, it said. Numerous medicines have to be transported to military treatment facilities around the world, soldiers must be trained to fill new roles, and in some cases operational plans must be modified or even postponed, it said. The rapidly deployed and adaptable IVN:Tx platform to treat military-relevant disease may reduce logistical burdens and increase operational readiness, it said. The platform looks to revolutionary treatment methods to get sick warfighters back on their feet, fast. The agency’s solicitation calls for development of nanoplatforms that treat a variety of diseases, including nanoparticle therapeutic platforms that could be rapidly modified to treat a broad range of diseases, but based on safe and effective technologies. The civilian medical community has been using small-molecule therapeutics to treat diseases for years, it said, because traditional drugs are often effective against only one disease, can have significant side effects and are very expensive to develop. “Doctors have been waiting for a flexible platform that could help them treat a variety of problematic diseases,” said Timothy Broderick, physician and DARPA program manager.
“DARPA seeks to do just that by advancing revolutionary technologies such as nanoparticles coated with small interfering RNA (siRNA). RNA plays an active role in all biological processes, and by targeting RNA in specific cells, we may be able to stop the processes that cause diseases of all types—from contagious, difficult-to-treat bacteria such as MRSA to traumatic brain injury.” The agency said safety is a key factor to the many potential technical approaches for IVN:Tx. Nanoplatforms, it said, must be biocompatible, nontoxic and designed with eventual regulatory approval in mind. The IVN:Tx approach of treating illness inside specific cells may also minimize dosing required for clinical efficacy, limit side effects and adverse immune system response, it said. Similar to today’s medicines, the therapeutic nanoparticles will move throughout the body in a natural, passive manner, it added. The agency noted that IVN is a technology demonstration and human trials wouldn’t be funded. However, it encouraged proposers to submit plans for testing that would result in a clinical protocol prepared for approval from the Food and Drug Administration (FDA). The FDA will be engaged with the IVN:Tx team throughout the program lifecycle by reviewing proposals, participating in Proposers’ Day meetings and participating in government review boards, it said.
Walking and cycling have long been considered the most environmentally sound methods of getting around. They still are, but some environmentalists have argued that food production has become so fossil-fuel intensive that driving could be considered greener than walking (though that analysis has been debunked as flawed). What of other, more obviously polluting, modes of transport? The data below gives an idea of how your carbon footprint might grow depending on how you make a journey. If you were to take an average domestic flight rather than a high-speed electric train, you'd be personally responsible for 29 times as much carbon dioxide. The data also highlights how the UK government's plans to electrify parts of the rail network could cut emissions. Diesel trains are responsible for more greenhouse gases than electric trains, even taking into account Britain's carbon-heavy electricity production. On the roads, next-generation hybrid and electric vehicles can help those of us behind the wheel to be that little bit greener. However, no journey is completely carbon free.
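The kind of comparison the data makes can be sketched in a few lines of Python. The per-passenger-kilometre factors below are illustrative assumptions only, chosen so that the flight-to-train ratio reproduces the roughly 29-fold figure quoted above; they are not the dataset behind the article.

```python
# Illustrative CO2 factors in grams per passenger-kilometre. These are
# assumed values for the sketch, picked so the flight/high-speed-rail
# ratio matches the ~29x figure in the text.
FACTORS_G_PER_PKM = {
    "domestic flight": 171.0,
    "high-speed electric train": 5.9,
    "diesel train": 60.0,
    "average car (driver only)": 180.0,
}

def journey_kg_co2(mode, distance_km):
    """CO2 in kilograms for one passenger travelling distance_km by mode."""
    return FACTORS_G_PER_PKM[mode] * distance_km / 1000.0

# Compare a 500 km journey by plane and by high-speed rail.
ratio = (journey_kg_co2("domestic flight", 500)
         / journey_kg_co2("high-speed electric train", 500))
print(f"The flight emits about {ratio:.0f}x the CO2 of high-speed rail")
```

Because both modes are scaled by the same distance, the ratio depends only on the per-kilometre factors, which is why such comparisons are usually quoted per passenger-kilometre.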