Did you know that parasites are a problem for millions of Americans? According to experts at the Centers for Disease Control and Prevention, the problem is more common than many people realize. Moreover, many people who pick up a parasite go undiagnosed and suffer from illness and symptoms that could be treated or managed. In fact, it’s a common misconception that parasites are only a problem in underdeveloped or poor countries. Many people (mistakenly) believe that you can’t pick up a parasite if you have access to proper sanitation, running water, and food prepared according to high safety standards. But there are still ways to pick up a parasite that might surprise you. What can cause a parasitic infection? These common risk factors might point to a parasitic infection: - Do you own a cat? What about a cat at a friend’s home, or have you come into contact with a stray? Contact with cat waste, either accidentally or by cleaning a litter box, can expose you to Toxoplasma gondii, the parasite that causes toxoplasmosis. It is estimated that more than 60 million people are chronically infected with Toxoplasma gondii. Often, people don’t hear about the possibility of contracting the parasite until they (or a friend) get pregnant, when an obstetrician orders tests or counsels them to avoid exposure to cat waste. - Do you own a dog? You’re not better off than cat owners. The intestines of dogs (and some cats) can harbor parasitic worms called Toxocara. In fact, many Americans have been exposed and carry antibodies in their blood to fight off the infection. But for those who don’t, contracting Toxocara can lead to serious illness and even blindness. - Did you know that some parasites can be contracted through sexual contact? A protozoan called Trichomonas, which causes trichomoniasis, is transmitted sexually.
It’s a common infection that often is symptomless, but it can cause itching and burning, and it puts the person at risk of developing additional infections because the body is weakened by trying to keep the parasite at bay. - Eating undercooked meat — or even fruits and vegetables that haven’t been handled properly — can lead to exposure to tapeworms that live in the muscles and brain. Most people know that, but they aren’t aware of how common exposure can be. - Have you been hiking or camping lately? What about drinking untreated, unfiltered well or spring water? Contaminated water, taken in while camping or from other untreated sources, can lead to parasitic infections and illness. - Been bitten by a bug lately? It’s well known that mosquitoes and ticks can carry illness. But an insect called the triatomine bug can cause an illness called Chagas disease. According to some experts, “[m]ore than 300,000 Americans are infected with Trypanosoma cruzi, the parasite that causes Chagas disease, and more than 300 infected babies are born every year.” So, how do you know if you have a parasitic infection? The symptoms can seem general and mimic the symptoms of other illnesses. Think fatigue, fever, nausea, heart and respiratory symptoms, headaches, rashes, and itching. Fairly vague, right? That’s why we recommend that if you have unexplained symptoms that aren’t going away, you ask your holistic wellness provider and doctor to run tests for parasitic infection. How do we test for parasites? The CDC recommends a few different tests. (You can call us to learn more, or visit the CDC page here.) The list of tests includes blood draws, fecal sample testing, and x-rays. More than one kind of test may be performed by your health care provider. Among other things, the tests will look for antibodies in your blood serum and for evidence of parasite byproducts in fecal samples, and x-rays will be used to determine whether your organs have developed lesions caused by some parasitic infections.
In some cases, a colonoscopy may be done to more thoroughly screen for a parasite. Your health care provider will determine which tests to order, and will often send the samples to a qualified lab for analysis. It’s important to learn more about the issue, because so many Americans go without proper diagnosis and treatment due to a lack of knowledge and the common misconception that parasites are not an issue here in the United States. If you have unexplained symptoms and think you would benefit from a screening, either because you have one of the risk factors identified above or because you have a long-term issue that has gone unresolved, please be sure to learn more and ask for a test. You can take control of your health by learning more and finding natural ways to promote your health, healing, and overall wellness. Don’t hesitate to visit our website to learn more, and reach out with any questions.
PICC Insertion Procedure Information What is a peripherally inserted central catheter? A peripherally inserted central catheter, often called a “PICC line,” is a long, very thin, flexible tube that is usually placed into one of the large veins in the arm, often just above or just below the elbow. This tube is threaded into a large vein above the right side of the heart. Why are PICC lines used? A PICC line is used to administer intravenous (IV) medicines or fluids. Because the tube is so small and flexible, the line can last several weeks to months, which means fewer needle pokes and less pain. The PICC line can be flushed and capped off when not in use. When it is time to administer medicine, the medicine is connected to the PICC line and disconnected again when the medicine is finished. Who inserts the PICC? At St. Joseph Hospital, PICCs are inserted using ultrasound and fluoroscopic guidance. The imaging guidance ensures safe and accurate placement of the PICC. The following individuals are qualified to insert a PICC: - A board-certified interventional radiologist with training in vascular interventional procedures - Qualified and specially trained radiology nurses - Physician assistants How is the procedure performed? The procedure is performed in a radiology special procedure room or at the bedside after written informed consent is obtained. The procedure takes about an hour. - The patient lies on their back on a fluoroscopic procedure table or in a hospital bed, with the arm chosen for insertion resting on an arm board support, perpendicular to the body. - A tourniquet is tightened around the arm just below the shoulder. - Using ultrasound, the vein for venipuncture is selected. - The rest of the procedure is performed under sterile conditions. Lidocaine 1% is injected at the skin surface for local anesthesia; it may sting and burn for a few seconds, but after that the patient may feel mainly a pressure sensation when the area is being worked on.
- Under ultrasound guidance, venipuncture is performed with a thin needle. - A thin safety guide wire with a coiled, floppy safety tip is inserted through the needle and into the vein. - The tourniquet is then loosened and the puncture site is enlarged slightly with a scalpel. - The needle is removed and a sheath catheter is advanced over the wire into the vein. - The PICC, with an indwelling guide wire, is inserted through the sheath catheter into the vena cava. - The internal catheter length is measured and recorded, the indwelling guide wire is removed, and a connector assembly and injection cap are attached. - The catheter is tested and then flushed with sterile normal saline at this point and again at the end of the procedure. - Some catheters require additional flushes with sterile heparinized saline, a solution with a dilute blood thinner. - The catheter tip position is reconfirmed and the catheter is secured with an adhesive securing device. - An anti-microbial “patch” is placed at the catheter entry site, and a clear adhesive dressing is placed over the securing device and patch. What are the contraindications for the procedure? An alternative vein/arm may be preferred for a PICC insertion if there is a history of any of the following in the region of that upper extremity: - Vascular surgery - Radiation therapy - Venous thrombosis - Permanent dialysis access - Axillary lymph node dissection - Local dermatitis - Burn injury - Infection in or near the region of the planned insertion site What are the risks/possible complications? - The risk of introducing infection is low, approximately 2%. - Bleeding is usually minimal and very easy to control. - Injury of local structures is uncommon with the use of ultrasound and fluoroscopic guidance. - Clotting of blood in the vein around the catheter or at the wall of the vein occurs approximately 20-40% of the time, but usually in such small amounts that it is not clinically evident or clinically significant.
- More extensive venous thrombosis is much less common. - Pulmonary embolus as a complication of this procedure is not common. - Allergic reactions to the local anesthetic, latex, sterile preparation solutions, flushing solutions or iodinated contrast agents are uncommon; patients are questioned about allergies prior to the procedure. - Pain is expected during the injection of the local anesthetic. - Discomfort or pain may occur related to arm position on the table during the procedure. - Adherence of the catheter within the venous system at the time of removal is rare when the catheter is indwelling for periods of a few months or less. - Breakage of materials such as guide wires or catheters during the procedure is rare. What are the alternatives to the procedure? - Long-term intravenous therapy can be performed with other central venous catheters (e.g., tunneled catheters or buried port catheters). Compared to the PICC, insertion of these catheters is more invasive and removal is more complicated. - IV therapy can be performed with peripheral IV catheters, but these have to be replaced at least every three days, and veins become increasingly difficult to catheterize over time. - Oral antibiotic therapy is an alternative in some cases, but oral antibiotics may not be effective against certain types of infection or against infections in certain locations. - Inadequate treatment of an infection could result in further spread or increasing severity of the infection. What can I expect after the procedure? - Patients are given a post-procedure instruction sheet to follow in the event of a complication related to the PICC. - Mild soreness is expected at the entry site during the day of and for one or two days after the procedure. - There may be bleeding at the entry site, especially on the first and second day. - If the dressing becomes soaked with blood, patients should have the nurse change the dressing. Patients must keep the dressing and the external tubing dry.
- If patients shower, they should cover the dressing and external tubing with a waterproof material such as plastic wrap secured with tape. - The entry site should not be submerged under water. - If the dressing gets wet, the nurse should change it as soon as possible. - Strenuous exercise should be done with caution to protect the PICC and only if permitted by the patient’s doctor. - Flushing instructions should be followed carefully. - Patients should keep the external catheter free of kinks and twists. - Patients should report any obstruction of flow, leakage of fluid, drainage at the entry site, soft tissue swelling or pain to the nurse, primary physician or physician who ordered the placement of the PICC line.
- The park is home to more than 2,000 sandstone arches - A chemical reaction involving iron oxide turns the sandstone red - Utah's high desert has scalding summer and freezing winter temperatures - The best time to view wildlife in the park is October through December During the summer, southeastern Utah's high desert is like a furnace where the wind whips hot air over a seemingly endless expanse of arid terrain. Particles of sand erode and are re-deposited in new areas, collapsing old formations and slowly building sedimentary rock that will hold up new ones. The flora and fauna that live amidst those gusts are some of Mother Nature's toughest and most resourceful creations, enduring months without a drop of rain and withstanding desert heat and sub-freezing temperatures. Arches National Park is home to this wildlife as well as the world's highest concentration of natural sandstone arches. From the largest, Landscape Arch, to the tallest, Double Arch South, the park's 119 square miles make it one of the most distinctive places on earth. Park stats: The visitor count at Arches has increased steadily since 2004. The park drew more than one million guests last year and has averaged that many since 2008. The location: Arches National Park is located in southeast Utah, about five miles north of Moab. There are airports in Moab; Salt Lake City, about four hours away; and Grand Junction, Colorado, about two hours away. If you go: Park admission is $10 per vehicle. Individual admission is $5. Admission is valid for seven days. Park passports are $25 and provide entry to Arches, Canyonlands National Park, Hovenweep National Monument and Natural Bridges National Monument. They are valid for one year. The visitor center is open every day except Christmas, but be sure to check the website as hours change according to the season. Meet our ranger: Kait Thomas, an interpretive ranger at Arches National Park, grew up in Monrovia, Indiana.
When she was 11, her dad started taking her on annual vacations to national parks in the western United States. The experience had a profound effect on her, she says. "I knew I wanted to move west," says Thomas, 25. "I wanted to be a part of the beauty that you find in national parks." But Thomas became a pre-law student when she moved to Salt Lake City for college. On the way back to school after a 2008 summer internship working for a political campaign, she had her "Aha!" moment, realizing she had completed the dream to move west, but not the one to work for the National Park Service. "I just waltzed into the Arches National Park visitor center and asked if they needed help," she says. At first, the answer was "No, thanks." But Thomas says a combination of her stubbornness and a supervisor's willingness to listen during a 45-minute impromptu meeting led to her volunteering for six weeks in 2008. In 2009, the supervisor invited her to become a seasonal ranger. In 2010, Thomas became a full-time ranger. "It really is the first part of the country that I fell in love with," she says. "There is something about the desert and how hostile, dramatic and colorful it is. You have this contrast of something that is incredibly harsh yet unbelievably delicate." For a day trip don't miss: The Windows trail hike. Thomas says the area has the highest concentration of arches in the park, including five that range from 60 to 100 feet high. "It's a nice summary of the park," she says. "It's the most bang for the buck, if you will." Favorite less-traveled spot: Hiking to Tower Arch from Klondike Bluff. Thomas says reaching Klondike Bluff requires driving on a dirt road that will not support RVs or buses. It's about 3.5 miles from Klondike Bluff to Tower Arch, she says. "It's a great place to escape all the hustle and bustle you find everywhere else in the park," she says. Favorite spot to view wildlife: Courthouse Wash.
Thomas says hot summer temperatures make seeing wildlife difficult. But you can see mule deer, coyotes and bobcats at Courthouse Wash, as well as bighorn sheep near the visitor center, from October through December. Most magical moment in the park: Having lunch in the shade of Wall Arch the day before it collapsed in August 2008. Thomas says she was patrolling Devil's Garden Trail when the temperature hit 105, and she stopped at Wall Arch to rest and eat. The next day a group of tourists came into the visitor center and wanted to know why the trail was blocked. Thomas and other rangers went to investigate and discovered the arch had collapsed. "I realized geology is always happening," she says. "One grain of sand could have fallen and the whole thing pops and collapses. We don't have any answers as to when (the arches will fall) but that is why it is so special to be here now." Funniest moment in the park: Discovering that five members of a Norteño band, dressed in full concert costume, had lugged their instruments three miles to Delicate Arch and begun belting out tunes under its shade. (Norteño music generally comes from northern Mexico and Texas. It features an accordion that produces musical rhythms similar to polkas.) "We informed them they needed a permit (to play inside the park)," she says. Oddest moment at the park: An excited family asking her to identify 10 species of lizards they had captured, put in a black box and planned to take home. "I identified all their lizards and promptly made them put them back," she says. It's illegal to remove wildlife from national parks. A ranger's request: Stay on the trails and off the arches. The land off the trails is home to biological soil crust, which protects against erosion and takes decades to rejuvenate after being stepped on. The arches are all made of red sandstone, a mixture of quartz, feldspar and iron oxide. While they may look sturdy, they could collapse if you climb on them.
Carry more water than you think you need. Thomas says heat-related illnesses are the No. 1 medical issue at the park. She recommends you drink a minimum of one gallon of water per day and carry salty snacks to maintain electrolyte levels. Also, be sure to shake out your shoes before putting them on. That's because scorpions gravitate toward dark, cool spaces. If you see a rattlesnake on the trail, do not chase it. Thomas says the majority of rattlesnake bites that happen in the park are on people's hands. Another park she'd like to visit: Denali National Park in Alaska. "I've never been to Alaska and I want to see really raw, big mountains," Thomas says. "I'm always into the biggest and the best. I want to see something that is more primitive than anything else we have (here)." What national park would you like to visit? Please share your thoughts in the comments section below.
Many of us use artmaking practices as a vehicle for teaching social activism. But, how we teach kids in the art room can be just as important as what we teach kids in the art room. This is especially true when it comes to modeling environmentally conscious behaviors. Most of us use a scrap box for basic recycling, but what else can be done to make our programs more “green”? Here are some easy ways to reduce, reuse, and recycle in your art room. The easiest way to “go green” in your art classroom is to reduce what you use in the first place. With dozens of classes and hundreds of kids, even small efforts to reduce will add up to big differences over time. Here are 6 ideas: When I reflect on my classroom waste, paper towels are at the top of the list. No matter what, we seem to go through them at an astounding rate. To reduce my paper towel usage, I try using ShamWows instead. These super absorbent cloths are a student favorite because they quickly and effectively eliminate liquid messes. A bonus is that some students have seen the infomercials and find them to be an exciting novelty. If you can’t find ShamWows, any shammy towel will do. Even a 100% cotton cloth will do the trick! Reduce your paper with class sets of copies. Creating hundreds of handouts can be very wasteful from an ecological standpoint. Instead, try making a single class set of handouts for students to use over and over. Ask your students to “save a tree” by not marking on the paper. Later, it can be reused with the next group. Consider sliding these handouts into plastic sheet protectors to prolong their use. If you are well organized, you can even use these papers year after year! Dim the lights to draw.
We’ve all experienced afternoon classroom management challenges when the group energy isn’t ideal. Often, dimming the lights has a calming effect. Why not use this as a “green” strategy too? Turn off a few lights strategically to create some drawing ambiance. If you are lucky enough to have a classroom with windows, use the natural light to reduce the amount of electricity you are using. Change the materials you purchase. Keeping up with supply orders can be tricky, but consider spending a few extra minutes researching new eco-friendly materials. For example, Pilot is now selling refillable dry erase markers, and Crayola has a company plan for reducing their environmental impact. Consider these factors as you make your order. Go digital to reduce paper. Paper is an essential artmaking tool, but it’s no longer crucial for other forms of communication. If you aren’t already doing it, consider transitioning your newsletter and notes home into a digital format. Power off and unplug for the weekend. Surprisingly, many electronic devices use a small amount of electricity even when turned off. Talk to your district’s technology team about saving energy in your room by unplugging computers, pencil sharpeners, and other devices over the weekend. Some systems perform necessary updates over the weekend, so be sure to check first! Art supplies can be expensive, so combat rising program costs by exploring unique opportunities for reuse within your school. Here are 3 ideas: Use copy room cast-offs for sketching. Scrap paper is always in high demand as our students plan projects with thumbnails and sketches. Instead of using a fresh piece for each new drawing, consider using the “cast-offs” from your school’s copy room. Drawing on the back side of copies is an ideal way to put the whole school’s supply of used paper to work. Just be careful you don’t include any sensitive information that may have come through the copier, like 504 plans, IEPs, or grade reports.
Do something with all those old crayons! Crayons do not biodegrade well in landfills, so give your crayons a second life by reusing them. After removing the wrappers, wax crayons can be melted and formed into new shapes to serve a new role in the art room. Realistically, many art teachers won’t have time for all of the tasks involved in this process, so why not ask your students to help? Allow students to take ownership of the project by peeling labels and sorting for you! Use recycled materials as art supplies. Demonstrate how reuse can be beautiful by making use of recycled materials as art supplies. Check out these great articles about projects that use recyclables: When an art supply has finally outlived its usefulness, investigate ways to recycle it safely. Here are 3 common art materials and how to recycle them. The most obvious way to encourage recycling in the art room is to make sure you have a recycling bin! Decorate your bin so it stands out from its deceptive look-alike, the trash can. If you aren’t familiar with Crayola’s marker recycling program, it is one of the easiest ways to recycle used school supplies. Start by visiting their website to sign up. The process takes less than three minutes. Throughout the year, collect all your “dead” markers. Surprisingly, they don’t even have to be Crayola. (Dry erase markers and highlighters are also accepted.) When you are finished collecting, log back onto the site and estimate the number of markers you will be sending. Then, print the provided label and mail your recycled donations back. The shipping and handling are paid for by Crayola. It could not be easier! Crayons: Crayon recycling is a little tougher than marker recycling. While many organizations help recycle crayons, I have yet to find one that pays for shipping…and crayons are heavy! So, if you decide to take on the challenge, you will have to get creative to find a donor who will spring for shipping.
Or, make it a project-based learning endeavor and challenge your students to solve the problem themselves! With these tips, hopefully, you can reduce your art program’s ecological footprint and help foster a lifelong interest in environmental responsibility in your art students. What tips do you have for reducing, reusing, and recycling in the art room? What are your favorite projects or techniques that use recycled materials?
How To Use A P10 Micropipette. The P10 is an air displacement micropipette: a spring-loaded piston moves a column of air, and the air column in turn draws liquid into a disposable tip. To use a micropipette, the user must learn how to properly change the volume setting, add a tip, obtain a sample, dispense a sample, and dispose of the tip. The brand of micropipettes we will be using is made by Rainin and called a Pipetman. Choose The Right Pipette And Tip. Tips are color-coded by size: white tips are used for the P2, P10, and P20, and blue tips are used for the P1000. Use the smallest pipette whose range covers your target volume; this is important because accuracy decreases when the set volume is close to the pipette’s minimum capacity. For example, to transfer 45 µl, a 50 µl pipette is ideal, whereas a 300 µl pipette will give you worse results. Set The Volume. Correctly read the volume indicator on the micropipette; the scales on micropipettes are in microliters (1000 µl = 1 ml). Never dial past the pipette’s stated range, because exceeding these limits will put the pipette out of calibration. Attach A Tip. Use a steady hand to hold your pipette, open the tip box, insert the micropipette shaft into the tip, and press down firmly. Draw And Dispense The Sample. Press the plunger to the first stop and lower the tip about 1 cm into the solution — deep enough to avoid drawing in air from near the surface, but without letting the body of the micropipette touch the solution. Slowly release the plunger to draw the liquid into the tip, then press to the first stop to dispense and to the second stop to expel the last of the sample. Check The Calibration. Micropipettes are calibrated to the EN ISO 8655 standard for ensured accuracy and precision, and a calibration report is supplied. To check calibration yourself, dispense distilled water into a beaker using the micropipette and use a balance to measure the distilled water’s weight. Then use the formula v = w * z to calculate the volume dispensed by the pipette, where w is the weight of the water and z is a conversion factor based on the density of water at the measured temperature.
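The gravimetric check built around the formula v = w * z is easy to script. Below is a minimal sketch in Python; the function names and the example weight are my own inventions for illustration, and the Z factor of 1.0029 µL/mg assumes distilled water at about 20 °C and standard pressure (consult the ISO 8655 Z-factor tables for your lab's actual conditions).

```python
# Gravimetric pipette check: convert the weighed water to a dispensed
# volume using v = w * z, where w is the weight of the water in mg and
# z is a conversion factor correcting for water density.

def dispensed_volume_ul(weight_mg: float, z: float = 1.0029) -> float:
    """Volume in microliters from the weight of dispensed water in mg."""
    return weight_mg * z

def percent_error(measured_ul: float, nominal_ul: float) -> float:
    """Systematic (accuracy) error of the pipette, as a percentage."""
    return (measured_ul - nominal_ul) / nominal_ul * 100

# Example: a P10 set to 10 µl dispenses water weighing 9.95 mg.
volume = dispensed_volume_ul(9.95)    # about 9.98 µl
error = percent_error(volume, 10.0)   # about -0.2 %
print(f"{volume:.2f} µl dispensed, {error:+.2f} % error")
```

In practice you would repeat the measurement several times and check both the mean error (accuracy) and the spread (precision) against the tolerances in the pipette's calibration report.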
In most cases, non-melanoma skin cancers are caused by overexposure to ultraviolet radiation, the invisible rays from the sun that can burn the skin. To reduce the risk of skin cancer, dermatologists encourage the public to be sun smart, including limiting sun exposure and using broad-spectrum sunscreens. Despite these efforts, the incidence of non-melanoma skin cancer continues to rise. Now, several agents—including medicines, foods and vitamins—are being investigated for their chemopreventive properties, or ability to prevent skin cancer. At the American Academy of Dermatology’s Summer Academy Meeting 2010 in Chicago, dermatologist Craig A. Elmets, MD, FAAD, professor and chair, department of dermatology and director of the Skin Diseases Research Center, University of Alabama at Birmingham, discussed promising new research on the use of medicine and diet to prevent UV-induced skin cancer in the future. “Based on the research conducted thus far, it appears that several different agents have the potential to be effective in providing enhanced sun protection and preventing non-melanoma skin cancers,” said Elmets. “While the ways these agents work are different, we have seen encouraging results with both oral and topical agents, including non-steroidal anti-inflammatory drugs (NSAIDs), eflornithine and certain natural antioxidants.” Medications investigated as future chemopreventive agents NSAIDs are a class of drugs that block cyclooxygenase enzymes (COX-1 and COX-2), which produce prostaglandins that promote inflammation, pain and fever. When these enzyme messengers are blocked and prostaglandins throughout the body are reduced, ongoing inflammation, pain and fever are reduced as well. One such NSAID approved by the Food & Drug Administration (FDA) and used primarily to treat inflammation associated with arthritis is celecoxib. Elmets noted the chemopreventive agent's use in patients with a syndrome known as basal cell nevus syndrome.
Caused by a genetic defect, basal cell nevus syndrome triggers patients to develop basal cell carcinomas at a very young age. “In patients with basal cell carcinomas, investigators have found that the COX-2 enzyme is elevated in non-melanoma skin cancers. Because celecoxib inhibits this enzyme, clinical studies have demonstrated that taking celecoxib seems to decrease the number of new basal cell carcinomas in patients with basal cell nevus syndrome,” he said. “This is very encouraging, particularly if this can eventually be applied to basal cell skin cancer in the general population.” According to Elmets, eflornithine is another drug that has been shown to have beneficial effects in preventing basal cell carcinoma. FDA-approved as a topical treatment for excessive hair growth and as an injectable formulation to treat sleeping sickness, eflornithine inhibits the enzyme known as ornithine decarboxylase that is found to be elevated in skin cancers. “Although celecoxib and eflornithine work by different mechanisms, initial studies show that they both reduce basal cell carcinomas by at least 30%,” said Elmets. “Based on these initial findings, these two drugs are considered very promising as chemopreventive agents and require additional clinical study.” Natural antioxidants in preventing skin cancer In addition, numerous natural antioxidants are being evaluated for their chemopreventive properties. Antioxidants are substances that destroy free radicals, harmful compounds in the body that damage DNA and can even cause cell death. Free radicals are believed to contribute to aging as well as the development of a number of health problems, including skin cancer. Animal studies and emerging clinical studies suggest that the abundance of antioxidant polyphenols in green tea and grape seed extract may play an important role in helping to prevent the onset and growth of skin tumors.
Similarly, the pomegranate fruit also is thought to be effective in promoting skin health since it has very high levels of antioxidants called flavonoids that have been shown to counteract various cancer-causing free radicals. “It remains unclear precisely how these natural antioxidants work, but they all are considered powerful when used externally,” said Elmets. “These substances also have an anti-inflammatory effect, which is known to be chemopreventive. However, it is important to remember that the FDA has not approved the use of these natural antioxidants as chemopreventive agents, and controlled studies need to be conducted in humans to determine whether they may help prevent skin cancer. At present, the evidence to support these benefits is largely based on animal studies.” “As dermatologists, we will always recommend sunscreens and sun-smart behaviors, like seeking shade, wearing hats and limiting sun exposure. These lifestyle strategies are vital to preventing skin cancer and should not be replaced,” added Elmets. “However, I could envision in the future that we also may recommend a cocktail of chemopreventive agents to provide patients enhanced protection against UV-induced skin cancers. Our hope is that further human studies will help us better understand how to effectively incorporate these new agents into practice and thereby turn the tide on the escalating rate of skin cancer in this country.”
No Two Digital Cameras Are the Same: Fingerprinting Via Sensor Noise The previous article looked at how pieces of blank paper can be uniquely identified. This article continues the fingerprinting theme in another domain, digital cameras, and ends by speculating on the possibility of applying the technique on an Internet-wide scale. For various kinds of devices like digital cameras and RFID chips, even supposedly identical units that come out of a manufacturing plant behave slightly differently in characteristic ways, and can therefore be distinguished based on their output or behavior. How could this be? The unifying principle is this: tiny, uncontrollable variations in manufacturing leave each unit with its own measurable quirks. Digital camera identification belongs to a class of techniques that exploit ‘pattern noise’ in the ‘sensor arrays’ that capture images. The same techniques can be used to fingerprint a scanner by analyzing pixel-level patterns in the images scanned by it, but that’ll be the focus of a later article. A long-exposure dark frame [source]. Click image to see full size. Three ‘hot pixels’ and some other sensor noise can be seen. A photo taken in the absence of any light doesn’t look completely black; a variety of factors introduce noise. There is random noise that varies in every image, but there is also ‘pattern noise’ due to inherent structural defects or irregularities in the physical sensor array. The key property of the latter kind of noise is that it manifests the same way in every image taken by the camera. Thus, the total noise vector produced by a camera is neither identical between images nor completely independent. Nevertheless, separating the pattern noise from random noise and the image itself — after all, a good camera will seek to minimize the strength or ‘power’ of the noise in relation to the image — is a very difficult task, and is the primary technical challenge that camera fingerprinting techniques must address. Security vs. privacy. A quick note about the applications of camera fingerprinting.
We saw in the previous article that there are security-enhancing and privacy-infringing applications of document fingerprinting. In fact, this is almost always the case with fingerprinting techniques. Camera fingerprinting can be used on the one hand for detecting forgeries (e.g., photoshopped images) and for aiding criminal investigations by determining who (or rather, which camera) might have taken a picture. On the other hand, it could potentially also be used for unmasking individuals who wish to disseminate photos anonymously online. Sadly, most papers on fingerprinting study only the former type of application, which is why we’ll have to speculate a bit on the privacy impact, even though the underlying math of fingerprinting is the same. Another point to note is that because of the focus on forensics, most of the work in this area so far has studied distinguishing between different camera models. But there are some preliminary results on distinguishing ‘identical’ cameras, and it appears that the same techniques will work. In more detail. Let’s look at what I think is the best-known paper on sensor pattern noise fingerprinting, by Binghamton University researchers Jan Lukáš, Jessica Fridrich, and Miroslav Goljan. Here’s how it works: the first step is to build a reference pattern for a camera from multiple known images taken with it, so that an unsourced image can later be compared against these reference patterns. The authors suggest using at least 50 images, but for good measure they use 320 in their experiments. In the forensics context, the investigator probably has physical possession of the camera and can therefore generate an unlimited number of images. We’ll discuss what this requirement means in the privacy-breach context later. Building the reference pattern takes two steps. First, for each image, a denoising filter is applied, and the denoised image is subtracted from the original to leave only the noise.
Next, the noise is averaged across all the reference images — this way the random noise cancels out and leaves the pattern noise. Comparing a new image to a reference pattern, to test whether it came from that camera, is easy: extract the noise from the test image and compare this noise pixel by pixel with the reference noise. The noise from the test image includes random noise, so the match won’t be close to perfect, but the correlation between the two noise patterns will be roughly equal to the contribution of pattern noise to the total noise in the test image. On the other hand, if the test image didn’t come from the same camera, the correlation will be close to zero. The authors experimented with nine cameras, of which two were of the same brand and model (Olympus Camedia C765). In addition, two other cameras had the same type of sensor. There was not a single error in their 2,700 tests, including those involving the two ‘identical’ cameras — in each case, the algorithm correctly identified which of the nine cameras a given image came from. By extrapolating the correlation curves, they conservatively estimate that for a False Accept Rate of 10⁻³, their method achieves a False Reject Rate of anywhere from 10⁻² to 10⁻¹⁰ or even less, depending on the camera model and camera settings. The takeaway seems to be that distinguishing between cameras of different models can be performed with essentially perfect accuracy. Distinguishing between cameras of the same model also seems to have very high accuracy, but it is hard to generalize because of the small sample size. Improvements. Impressive as the above numbers are, there are at least two major ways in which this result can be, and has been, improved. First, the Binghamton paper focuses on a specific signal, sensor noise. But there are several stages in the image acquisition and processing pipeline in the camera, each of which could leave idiosyncratic effects on the image.
This paper out of Turkey incorporates many such effects by considering all patterns of certain types that occur in the lower-order (least significant) bits of the image, which seems like a rather powerful technique. The effects other than sensor noise seem to help more with identifying the camera model than the specific device, but to the extent that the former is a component of the latter, it is useful. They achieve 97.5% accuracy among 16 test cameras — and that with cellphone cameras producing pictures at a resolution of just 640×480. Second is the effect of the scene itself on the noise. Denoising transformations are not perfect — sharp boundaries look like noise. The Binghamton researchers picked their denoising filter (a wavelet transform) to minimize this problem, but a recent paper by Chang-Tsun Li claims to do better, and shows even stronger numerical results: with 6 cameras (all different models), accurate (over 99%) identification for image fragments cropped to just 256×512 pixels. What does this mean for privacy? I said earlier that there is a duality between security and privacy, but let’s examine the relationship in more detail. In privacy-infringing applications like mass surveillance, the algorithm need not always produce an answer, and it can occasionally be wrong when it does: the penalty for errors is much lower. On the other hand, the matching algorithm in surveillance-like applications needs to handle a far larger number of candidate cameras. The key point is this: my intuition is that state-of-the-art techniques, configured slightly differently, should allow probabilistic deanonymization from among tens of thousands of different cameras. A Flickr or Picasa profile with a few dozen images should suffice to fingerprint a camera. Combined with metadata such as location, this puts us within striking distance of Internet-scale source-camera identification from anonymous images. I really hope there will be some serious research on this question.
Finally, a word on defenses. If you find yourself in a position where you wish to anonymously publicize a sensitive photograph you took, but your camera is publicly tied to your identity because you’ve previously shared pictures on social networks (and who hasn’t), how do you protect yourself? Compressing the image is one possibility, because that destroys the ‘lower-order’ bits that fingerprinting crucially depends on. However, the compression would have to be far more aggressive than most camera defaults (a JPEG quality factor of ~60% according to one of the studies, whereas defaults are ~95%). A different strategy is rotating the image slightly in order to ‘desynchronize’ it, throwing off the fingerprint matching. An attack that defeats this will have to be much more sophisticated and will have a far higher error rate. The deanonymization threat here is analogous to writing-style fingerprinting: there are simple defenses, albeit not foolproof ones, but sadly most users are unaware of the problem, let alone the solutions.
Footnotes. That was a bit simplified; mathematically, there is an additive component (dark signal nonuniformity) and a multiplicative component (photoresponse nonuniformity). The former is easy to correct for, and higher-end cameras do, but the latter isn’t. Much has been said about the tension between security and privacy at a social/legal/political level, but I’m making a relatively uncontroversial technical statement here. Fridrich is incidentally one of the pioneers of speedcubing, i.e., speed-solving the Rubik’s cube. The Binghamton paper uses 320 images per camera for building a fingerprint (and recommends at least 50); the Turkey paper uses 100, and Li’s paper 50. I suspect that if more than one image taken with the unknown camera is available, then the number of reference images can be brought down by a corresponding factor.
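The extract, average, correlate pipeline that this article describes can be illustrated with a toy simulation. To be clear, this is not the Binghamton authors' code: the "scene" is a flat gray frame so that subtracting the image mean can stand in for a real denoising filter, and all of the numbers (pixel count, noise levels, 50 reference images) are invented for demonstration.

```python
import random
import statistics

random.seed(1)
PIXELS = 4096

def make_camera():
    # Each camera gets a fixed per-pixel pattern noise, unique to the unit.
    return [random.gauss(0, 1.0) for _ in range(PIXELS)]

def shoot(pattern):
    # A flat gray scene (value 128) + the camera's pattern noise
    # + fresh random noise that differs in every shot.
    return [128.0 + p + random.gauss(0, 4.0) for p in pattern]

def residual(img):
    # Toy stand-in for a denoising filter: with a flat scene, the
    # "denoised" image is just the mean pixel value.
    m = statistics.fmean(img)
    return [v - m for v in img]

def correlation(a, b):
    # Pearson correlation between two noise vectors.
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

cam_a, cam_b = make_camera(), make_camera()

# Step 1: build camera A's reference pattern by averaging the noise
# residuals of 50 known images (random noise cancels, pattern survives).
refs = [residual(shoot(cam_a)) for _ in range(50)]
fingerprint = [statistics.fmean(col) for col in zip(*refs)]

# Step 2: correlate an unsourced image's noise with the reference.
same = correlation(residual(shoot(cam_a)), fingerprint)
other = correlation(residual(shoot(cam_b)), fingerprint)
print(f"same camera: {same:.2f}  different camera: {other:.2f}")
```

With these settings, the same-camera correlation comes out around 0.2 while the different-camera correlation hovers near zero, mirroring the qualitative behavior described above: the correlation roughly tracks the pattern noise's share of the total noise in the test image.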
NASA's hobbled Kepler space telescope may be able to detect alien planets again, thanks to some creative troubleshooting. Kepler's original planet hunt ended this past May when the second of its four orientation-maintaining reaction wheels failed, robbing the spacecraft of its ultraprecise pointing ability. But mission team members may have found a way to restore much of this lost capacity, suggesting that a proposed new mission called K2 could be doable for Kepler. Engineers with the Kepler mission and Ball Aerospace, which built the telescope, have oriented the spacecraft such that it's nearly parallel to its path around the sun. In this position, the pressure exerted by sunlight is spread evenly across Kepler's surfaces, minimizing drift. [Gallery: A World of Kepler Planets] This strategy is returning some promising results, mission officials say. During a 30-minute pointing test in late October, for example, Kepler captured an image of a distant star field that was within 5 percent of the image quality achieved during Kepler's original mission. "This 'second light' image provides a successful first step in a process that may yet result in new observations and continued discoveries from the Kepler space telescope," Charlie Sobeck, Kepler deputy project manager at NASA's Ames Research Center in Moffett Field, Calif., said in a statement. The Kepler team is currently conducting tests to see if the spacecraft can maintain such pointing stability over periods of days and weeks — a necessity for discovering exoplanets. Kepler launched in March 2009 on a mission to determine how frequently Earth-like planets occur throughout the Milky Way galaxy. The spacecraft finds exoplanets via the "transit method," noting the telltale brightness dips caused when an alien world crosses the face of, or transits, its host star from the instrument's perspective. Kepler has been remarkably successful, spotting more than 3,500 planet candidates to date.
Just 167 of them have been confirmed so far by follow-up observations, but mission scientists think 90 percent or so will end up being the real deal. Researchers are still sifting through the mountains of data Kepler returned during its four years of science operations. Kepler team members have expressed confidence that they'll find Earth analogs in these databases, allowing the mission's primary goal to be achieved. The Kepler team has officially presented the K2 mission concept to NASA Headquarters, which is expected to decide by the end of the year if the idea progresses to a vetting stage called "senior review." The ultimate fate of K2, and the Kepler spacecraft, will likely be known by the middle of next year, Kepler officials have said.
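A back-of-the-envelope sketch of the transit method helps convey why such pointing stability matters. The depth formula below is the standard first-order estimate (the planet's projected area divided by the star's), not a figure taken from the mission team:

```python
def transit_depth(planet_radius_km, star_radius_km):
    # Fractional brightness dip when the planet crosses the stellar disk:
    # the ratio of the planet's projected area to the star's.
    return (planet_radius_km / star_radius_km) ** 2

EARTH_RADIUS_KM = 6371.0
SUN_RADIUS_KM = 696_000.0

depth = transit_depth(EARTH_RADIUS_KM, SUN_RADIUS_KM)
print(f"Earth-Sun analog transit depth: {depth * 1e6:.0f} parts per million")
```

An Earth analog dims a Sun-like star by only about 84 parts per million, which is why detecting Earth-like planets demands ultraprecise pointing held steady over days and weeks.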
By Catherine Varmazis March 1, 2008 | In a major collaboration, researchers from the Cancer Institute of New Jersey (CINJ), several U.S. universities, and IBM are creating grid-enabled tools that perform high-throughput analysis of tissue microarrays to dramatically improve the accuracy and speed of cancer diagnoses. “Years ago, most patients would go through the same treatment: chemo, for example. If it didn’t work, they’d move on to drug 1. If that didn’t work, they’d try drug 2, and so on,” says David Foran, director of the Center for Biomedical Imaging & Informatics at CINJ and lead investigator for the project. “Now we can bypass all these trials and go directly to what therapy is most appropriate based on [a patient’s] expression signature.” The project, which received a $2.5 million grant from the National Institutes of Health last October, makes use of tissue microarrays, pattern recognition algorithms, and grid-based supercomputing. Foran says each tiny tissue plug on a microarray contains different types of tissue, and that software can distinguish between these heterogeneous bits and detect the presence of a specific cancer biomarker. “If it’s present we have computer vision techniques and software that we’ve developed which will tell us if it’s located in a specific tissue or in a certain sub-cellular compartment, like the nucleus or cytoplasm. All of these things have bearing on the clinical outcome of the specific patient we’re looking at.” To conduct the proof of concept required for funding the project, CINJ researchers took a set of “retrospective studies” of over 100,000 patient tissues for which the diagnoses were already known, and analyzed them using their specialized software. Programmers from IBM grid-enabled the software and ran the analysis over the World Community Grid (WCG) — a virtual supercomputer established by IBM.
Computation of this magnitude would have taken a single desktop computer 2900 years to complete, but it took the WCG less than six months, says IBM’s Robin Willner, VP global community initiatives. When the analysis was complete, “We were able to compare the signatures we had generated and that we hoped would correlate with different stages and types of disease,” says Foran. “We compared them with the patient outcomes and profiles in terms of diagnosis and histologic types and found there was a very strong correlation.” Foran now plans to expand the number of disorders being investigated, grow the reference library of expression patterns, and build a clinical decision support system so oncologists at cancer centers around the world can download the CINJ client and analyze their own tissue specimens. The computation will be done on caGRID, an open source software infrastructure that has been developed as the main grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG) program. In addition, IBM is donating a high-performance supercomputer to the CINJ’s new Center for High-Throughput Data Analysis for use in examining the digitally archived cancer specimens and genomic data. Joel Saltz, professor and chair of the Department of Biomedical Informatics at Ohio State University (OSU), where most of caGRID has been developed, says, “One of our roles in this project is to develop a caGRID-compliant infrastructure that supports the data and algorithms [that Foran’s group developed] so the tissue microarray and virtual slide data can be integrated with other kinds of experiments and translational research data types.” For data from different data sets to be compatible, there has to be a mechanism for standardizing the naming of biological terms and another for standardizing how complex data structures from different types of experiments are represented in XML schema. 
Saltz’s group is developing standard data models and well-defined biomedical ontologies that will be harmonized with the caBIG processes, to avoid isolated “information islands.” “The caGRID infrastructure is designed to connect databases as well as computational procedures, so it’s like having a worldwide programming environment of databases and procedures,” explains Saltz. “But for this environment to work, you need to know... what the query language is, and that’s where all this language and ontology stuff is, because otherwise if I tell you, ‘We’ve got this wonderful tissue microarray environment, feel free to use it.’ You’d say, ‘Well, thanks, but how am I going to find out how to? And what do you have in there?’” The complexity and scope of this work made multidisciplinary collaboration involving many organizations essential. “A lot of big science today requires a lot of different levels of expertise,” says Foran. “In fact, when we received our critiques from the NIH, they stated explicitly that this group of individuals [involved in the project] is unique in what they bring to the table.” Although still in the early stages, the tools are already being used by oncologists at CINJ. The plan for the coming year is to have a prototype system up and running that will be deployed at Arizona State University, Rutgers University, the University of Pennsylvania School of Medicine, Ohio State University, and the CINJ. “That will serve as our testbed for iterative prototyping, and then within the next three years, we’d be constantly updating the software as it becomes refined and optimized and we’re hoping we’ll have a product to put out to the research and clinical communities by year 4,” says Foran. This article appeared in Bio-IT World Magazine.
Researchers at the Icahn School of Medicine at Mount Sinai have received a multi-year National Institutes of Health (NIH) grant to determine factors that may influence why African Americans are less likely than others to receive colorectal cancer (CRC) screenings, despite having the highest CRC incidence and mortality of any ethnic/racial group in America. "The short-term goal of this study is to understand why there is a lower screening prevalence among African Americans, and the long-term goal is to develop and disseminate effective intervention strategies to increase the CRC screening in this population, so that we can eliminate the race-related disparity in morbidity and mortality," said Lina Jandorf, MA, Research Professor in the Department of Oncological Sciences at Mount Sinai and a principal investigator in the study. Colorectal cancer is the second leading source of cancer deaths and the third leading source of new cancer cases in the United States. The mortality rate for CRC is a remarkable 49 percent higher for African Americans than for whites, according to the American Cancer Society. Improving CRC screening rates is important for early detection, treatment and improved survival rates. Researchers at Mount Sinai, Roswell Park Cancer Institute in Buffalo, NY and the University at Buffalo seek to enroll 900 study participants as part of the four-year, $2.6-million grant. They will compare the effectiveness of two approaches to educating African Americans over the age of 50 about the need for screening. One is a "narrative" approach and involves storytelling, i.e., encouraging participants to talk about their fears and thoughts associated with a colonoscopy screening. The other approach is "didactic," giving patients "just the facts" about the disease and the colonoscopy procedure. Based on their findings, the research team hopes to develop new tools for educating African Americans about screening.
"While researchers have compared the success of narrative vs. didactic community education approaches for other cancers, this is the first such major comparative study for colorectal cancer in African Americans," said Professor Jandorf. "Mount Sinai's involvement in this study reflects our ongoing commitment to improving the health of our local community, much of which is African American." Under Professor Jandorf's direction, Mount Sinai has extensive experience in conducting investigations and developing successful strategies to improve colon cancer screening rates for Hispanics, East Harlem residents and low-income minorities.
Contact: Press Office, The Mount Sinai Hospital / Mount Sinai School of Medicine
This is a Java simulation of an air-core inductor. You should see an applet (below) with slider controls to select the coil's dimensions and wire size. (Trouble? See below.) Note this is only the coil and does not include a projectile. How to Calculate Inductance This program calculates inductance using Wheeler's formula for a multilayer air-core coil: L = 0.8·A²·N² / (6A + 9B + 10C), where N = number of turns, A = average coil radius, B = coil length, and C = coil thickness. All dimensions are in inches and the result is in microhenries. The simulator handles all necessary conversions to metric and millihenries. The simulator will incidentally give you other handy information, such as the number of turns and the length of wire needed. Your goal in this simulator is to design a coil with a certain amount of inductance. You should have already chosen a target inductance value somehow, perhaps by experimenting with the RLC Simulator program. Here you can try various wire sizes and physical dimensions to see what happens. The idea is to design a coil which can be physically built, and which has a reasonably low amount of resistance. Q. What is a circular mil? Q. Why are the "mm" wire sizes (under the 'Next Wire' button) slightly different than in the results panel? Buy Me Some Coffee Do you like this? Say 'thanks' by buying me a hot delicious cup of my favorite Starbucks coffee! It's only $5 and it puts me in the code-writing mood to make enhancements. After all, everyone knows that programming is defined as "the art of converting coffee into software." What do you use this inductor simulator for? Write me - I'm curious to know! This program requires Java Runtime Environment (JRE) 1.4 or above. Please visit www.java.com/en/download/index.jsp to download and install the JRE. Or, if you write Java code then get the software developer's kit (SDK) by following links on Sun's web site. If my applet still doesn't work, try to open your Java Console window in your web browser.
Copy and paste its contents into a message and e-mail it to me for trouble-shooting. About the Program The program was written in Java using AWT classes. My Java source code is open and freely available. Please report bugs, request features, make suggestions and send me compliments about this simulator. Last Update 2013-07-28 ©1998-2016 Barry Hansen
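The multilayer form of Wheeler's approximation that the applet uses is easy to reproduce in a few lines. This is a sketch in Python rather than the applet's Java source, and the example coil dimensions are made up:

```python
def wheeler_inductance_uh(n_turns, a_radius_in, b_length_in, c_thickness_in):
    """Wheeler's approximation for a multilayer air-core coil.

    N = number of turns, A = average coil radius, B = coil length,
    C = coil (winding) thickness, all in inches; result in microhenries.
    """
    n, a, b, c = n_turns, a_radius_in, b_length_in, c_thickness_in
    return 0.8 * a * a * n * n / (6 * a + 9 * b + 10 * c)

# Hypothetical example coil: 100 turns, 1 in mean radius,
# 2 in winding length, 0.5 in winding depth.
coil_uh = wheeler_inductance_uh(100, 1.0, 2.0, 0.5)
print(f"{coil_uh:.1f} uH")  # about 275.9 uH
```

Like the applet, this gives inductance directly in microhenries from the inch dimensions; converting to millihenries or metric units is a separate scaling step.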
For the last two and a half years, the government has been playing a game of chicken with the budget, and a lot of people are left nervously watching from the sidelines. The problem boils down to passing a spending bill to keep the government running. If a new spending bill isn’t passed by the time the old one runs out, the government effectively shuts down. This sounds impossible, but it has happened before—the last time at the end of 1995 and early 1996, when the government closed its doors twice. The current budget only extends through September 30, 2013. If a budget is not passed by October 1, 2013, then the government is in danger of shutting down. So why hasn’t the government passed a spending bill? This standoff has been going on for the last two and a half years (since early 2011, more or less). The problem is that the government is close to reaching its debt ceiling, which means it can’t legally borrow more money. The government needs to borrow money to continue current operations, so it needs to raise the debt ceiling. Which brings politics into play. Politicians on both sides are hesitant to pass a spending bill with major items they don’t agree with, especially if it means borrowing more money or raising the debt ceiling. So each side is taking a firm stance and dragging its feet on passing the spending bill, bringing us closer and closer to a government shutdown. Let’s take a look at who could be affected.
Debt Ceiling Crisis – What Happens if the Government Shuts Down?
Government services affected by a possible shutdown
Essential vs. non-essential government personnel. As we all know, the government can’t completely shut down – too much of our day-to-day lives relies on government support. For example, air travel requires air traffic controllers and TSA screeners to keep flights going. We also need to think about our national defense and other safety issues.
Keeping this in mind, the government is planning on breaking people into two distinct groups – essential and non-essential personnel. The difference is that essential personnel will continue working during a government shutdown, even though they will not receive their paychecks until the government resumes operations. Non-essential personnel will receive an unpaid furlough until the spending bill is passed. Unfortunately, no one has officially designated who is included in each group of workers, and some estimates place the non-essential personnel at close to 800,000 people. There is a possibility that government workers placed on an unpaid furlough could receive back pay after the fact, but Congress would have to make a special approval for this to happen. Military pay and benefits. Military members won’t get paid. This is a departure from the last major government shutdown, which occurred in 1995, during which military members continued to work and get paid. Military members get paid on the 1st and the 15th of each month, and would still receive a paycheck on the 15th; but if the government shuts down on the 8th, as is a possibility, then military members would only receive pay for money earned through the 8th of the month. Servicemembers would still be required to work because their services are required for national defense, and they would receive back pay for any pay and benefits they did not receive during the government shutdown. Federal taxes. This is an interesting situation, but basically it boils down to this: the IRS would stop processing paper returns and stop sending refunds by paper check. The IRS would continue to process returns filed electronically, and would likely continue sending electronic refunds. Other IRS activities potentially on the chopping block: audits, the IRS help desk, and the taxpayer hotline. Here is more about possible impacts of a government shutdown on taxes. What about Social Security?
The last time the US government shut down, it continued paying Social Security benefits. This time around, however, President Obama has gone on the record stating he isn’t sure whether the government would be able to continue sending out Social Security benefits if the government shuts down. This is a change from previous messages sent out by the government, since Social Security benefits are paid out of the Social Security Trust Fund, which does not need annual budgetary approval. Additional government services and how they may be affected:
- The US Postal Service would remain open because it is a self-funded government operation.
- The federal court system has contingency funds which will last up to 2 weeks, after which operations may be affected.
- Essential financial services and banking offices will remain open.
- Law enforcement will continue operations.
- The National Institutes of Health will stop clinical trials.
- National parks, museums, and monuments would close.
- Mortgages will not be processed by the FHA.
- Passports will not be processed.
- The Small Business Administration will not process loans for small companies.
Bottom line: The government can’t legally function without a properly approved budget. At issue is whether or not Congress can agree on a budget by September 30th. If they do, then life goes on pretty much the same for everyone. If they don’t agree on a spending plan, then the government is at risk of shutting down until a spending bill can be passed. Let’s all hope this is just political posturing and a last-minute budget deal is reached before government employees are furloughed and some people are left without paychecks.
Air Travel and Global Warming
January 27, 2014
Op Ed
A person’s air travel has an outsized environmental cost relative to his or her other activities. Here is a rule of thumb: the environmental impact per passenger on a commercial airplane flight is about the same as that of one person driving the same distance solo in a moderately efficient car. So a passenger jet equipped to carry 200 people 2000 miles has roughly the same environmental impact as 200 moderately efficient cars driving 2000 miles. To put it another way, making one round trip to Europe from San Francisco in a year has about the same environmental impact as a year of typical car commuting to and from your workplace. Roughly speaking, one to two round-trip, cross-country flights have about the same environmental impact as a typical omnivore’s meat consumption in a year or as a typical Bay Area resident’s portion of the area’s residential electric and gas usage in a year. The rule of thumb is supported by documents produced by, among others, the Federal Aviation Administration, the International Civil Aviation Organization, the Natural Resources Defense Council and the Union of Concerned Scientists. This rule also accounts for how airlines respond to demand. Globally, aviation contributes approximately three to five percent of warming effects, depending on details of assessment. But this is a small number only because relatively few people fly at present. What can we do? Increasing the efficiency of air travel is important, of course. But experts agree that for the next few decades, there will be no major changes in the basic mechanisms of passenger airplanes. That is not to say invention in this area is not welcome; it is just not nearly enough. The most important thing to do is to reduce consumption, as measured by the product of passengers and miles.
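The commuting comparison in the rule of thumb above is easy to check with rough numbers. The mileage figures here are my own illustrative assumptions, not the author's:

```python
# Rule of thumb from the piece: one passenger's share of a flight has roughly
# the impact of driving the same distance solo in a moderately efficient car,
# so flight miles and solo car miles can be compared directly.

flight_miles = 2 * 5500  # assumed ~5,500 miles each way, San Francisco to Europe
commute_miles_per_year = 2 * 20 * 250  # assumed 20-mile each-way commute, 250 workdays

print(flight_miles, commute_miles_per_year)
```

The two figures (11,000 and 10,000 miles) land within about 10 percent of each other, consistent with the claim that one round trip to Europe rivals a year of commuting.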
Perhaps the most promising way to reduce our passenger miles is to change the way we collaborate and do business as academics and professionals. Already, some of us regularly have video conferences with colleagues elsewhere in the world. In the coming years, we can develop virtual conference rooms in which real-time discussion and collaboration will be as rich as they are in person. Similarly, by improving technology and information delivery, we can make tomorrow’s virtual conferences feel not just as natural as, but also more useful than, today’s in-person conferences. Of course, in-person collaboration is occasionally irreplaceable. But if we invest our resources and effort in this project, we can make the majority of in-person professional trips not only unnecessary but also undesirable relative to virtual options. Here in academia, we apply for travel grants for purposes of collaboration. We could broaden travel grants into collaboration grants. These would allow funds to be used to improve collaborators’ communications infrastructure: cameras, writing tablets, collaboration software and eventually virtual reality equipment. Additionally, we could arrange in-person conference locations to minimize the sum of participants’ travel miles and hold multiple small conferences in similar subject areas simultaneously in one location. Invention and entrepreneurship will be essential components of this project. Substantially increased interest in high-quality remote collaboration would open and broaden market opportunities in communications, networking, computer hardware, human-computer interaction, IT security, local infrastructure management, big data and more. Additionally, many of us travel by air to see family and friends and to see the world. We can make reasonable changes to our personal travel without sacrificing its key benefits.
For example, with better planning, we can make family visits more efficient in terms of everyone’s travel, and we can take longer but fewer overseas trips. Still, in this piece I’m really focused on air travel for professional activities, which is where our technology can have the greatest impact. Any time you get on a plane for job-related travel and think to yourself, “Ugh, I’d rather not go on this trip,” your feeling is a clear opportunity for positive change. We live in an exciting time to be inventors and thinkers. What we create, we create forever, for good or for ill. It’s our choice. Andrew M. Bradley, B.S., M.S., 2002, Ph.D., 2010 I want to thank several of my friends for contributing ideas to this piece. Any error is mine. Thoughts on this topic? Contact me at firstname.lastname@example.org.
SMITH, ALFRED CORBETT, physician, medical superintendent, and leprologist; b. 7 June 1841 in Bathurst, N.B., son of James Smith and Susanna M. Dunn; m. 2 May 1866 Helen Young, sister of Robert Young, and they had two daughters and one son; d. 12 March 1909 in Tracadie, N.B. Little is known about Alfred Smith’s formative years. By 1858, however, he had fixed on a medical career and begun to study with Dr James Nicholson, the first resident physician at the leper hospital in Tracadie. In 1862 Smith entered Massachusetts Medical College (Harvard Medical School), where he received his degree on 9 March 1864. In 1865 Smith succeeded Nicholson in Tracadie. Four years later, however, after the arrival at the lazaretto of a group of Religious Hospitallers of St Joseph [see Amanda Viger], he was dismissed, the casualty of a money-saving measure. By 1870 he had established a private practice in Bathurst, and later that decade he relocated in Newcastle, where he also served as coroner, justice of the peace, and health officer. From 1877 to 1878 he did postgraduate work at the University of the City of New York. He received an MD in 1884 from Victoria University in Cobourg, Ont. Smith had made no secret of his “long felt desire” to make leprosy “the special study” of his life. In fact, he amassed what he claimed to be the most complete library in Canada on the disease, and he corresponded with the leading dermatologists and leprologists of his time, including Dr Jonathan Hutchinson in England. Dr Joseph-Charles Taché* was one of the few Canadian physicians with whom he could share his enthusiasm for the study of leprosy. In 1880 Smith was appointed “Inspecting Physician” and “Medical Advisor” at the Tracadie lazaretto, but a more permanent position eluded him. He vigorously lobbied the federal government in 1889 to be given a “general superintendence” of leprosy in Canada and received a timely endorsement from the eminent physician Dr William Osler*.
Urging the appointment of a full-time superintendent at Tracadie, Osler argued that there was “no one more suitable for the position” than Smith. In November 1889 Smith became “Inspector of Leprosy for the Dominion,” but the promotion was somewhat anticlimactic. His modest salary scarcely matched the grandeur of his title, and his private practice crumbled as clients were deterred by his chosen vocation. “I have never felt so poor,” he lamented in 1891. In 1899 he was placed on a firmer footing when the government resolved to administer the lazaretto more scientifically and elevated Smith to the rank of “Medical Superintendent.” At the leper hospital Smith’s responsibilities were as varied as they were arduous. He visited the wards daily, drew up prescriptions, frequently performed surgical and dental procedures, and regularly engaged in laboratory research. He was entrusted with such matters as diet, hygiene, and discipline. To him fell the preparation of annual government reports, maintenance of registers of admissions, compilation of genealogical data on leper families, and fumigation of railway cars used to transport leprous persons. Smith’s duties often took him outside the lazaretto on “tours of inspection.” Armed with a camera and notebook, he periodically crisscrossed Gloucester County, visiting households, lobster canneries, and fish-packing plants. On these trips he would engage in what he referred to facetiously as “leper hunting,” and would employ the requisite mixture of compassion and coercion to secure admission of “leper suspects” to the lazaretto. “When I declare an individual leprous,” he noted, “his nearest friends avoid him; he is refused employment, and he soon finds a resting place in the home provided for such unfortunates.” From the late 1880s Smith travelled farther afield – to Cape Breton, Victoria, and Winnipeg – to examine suspected cases of leprosy. 
The widely read Smith had distinctive views about the aetiology and treatment of leprosy. Although he was convinced of its contagious character, his trademarks were diagnostic caution and therapeutic moderation. He experimented with the popular anti-leprotics of the day, such as ichthyol and chaulmoogra oil, only after studied consideration. Unlike some of his contemporaries, he was optimistic about a cure. He not only conceded the possibility of spontaneous recovery, but also upheld the efficacy of chaulmoogra oil in combination with a nutritious diet and sound hygiene. In sharp contrast, however, he regarded compulsory segregation as essential in the treatment and containment of the disease. He also sought more stringent legislation for the apprehension, detention, and medical supervision of leprous persons, an objective that was eventually realized in the federal Leprosy Act of 1906. Under Smith’s superintendence the lazaretto at Tracadie was both modernized and humanized. He was unwavering in his vision that it should be more than a detention centre or a religious hospice. It should function both as a “hospital” and as a “home.” Its reputation at the turn of the century as a model institution was due in no small part to its caring and capable medical superintendent. For this reason alone, Smith deserves to be remembered more widely than he is. It is more difficult to gauge the significance of Smith’s medical research. Unfortunately, he published nothing about his microscopic investigations, but his private papers mention the use of microphotography and detail complex staining procedures. The Canadian government offered him little incentive, however. As a result of its short-sighted stinginess Smith could not even attend the 1897 Leprosy Congress in Berlin as an invited delegate. It was not until 1901 that he received a fully equipped laboratory. By then glaucoma seriously interfered with his research. 
Smith’s leisure time was filled with such purposeful Victorian hobbies as photography, taxidermy, natural history, and archaeology. He was a keen supporter of the Natural History Society of New Brunswick. In 1906 William Francis Ganong* observed that current knowledge of early Micmac camp and burial sites around Tracadie was attributable entirely to Smith, “who has studied them in the scholar’s spirit.” All sources on Smith point to the fact that he was a singular personality, who deliberately courted eccentricity. He was reclusive in his habits, opinionated in his views, and macabre in his sense of humour. To the townspeople of Tracadie, the Presbyterian turned Unitarian who was always “reading and thinking” was an enigma. Undoubtedly Smith’s specialty, which was very much on the fringes of medical science, appealed to his solitary temperament. It also reinforced his reclusion, for his association with the lepers tainted him in the public mind and obliged him to share with them the burden of social exile. In his later years Smith’s enthusiasm for his vocation was drained by an unsupportive government and the stubborn incurability of his patients. From 1907 ill health also took its toll. His death on 12 March 1909 was duly noted in the local newspapers and even merited an obituary in the New York Times. In his annual report that year Dr Frederick Montizambert*, director general of public health for Canada, registered the government’s loss of “a faithful and a zealous officer” and the lepers’ loss of “a kind and attentive friend.” No such acknowledgement was forthcoming from the Canada Lancet or the Canadian Practitioner and Medical Review. This omission was most telling, for during Smith’s era leprosy specialists more frequently captured media attention than professional recognition. Alfred Corbett Smith’s papers, including his correspondence, notebooks, scrapbooks, and letter-books, are preserved in the Soc. Hist. 
Nicolas-Denys, Centre de Documentation (Shippagan, N.-B.), cartons 105-1–8. Vital information concerning the Tracadie lazaretto, including Smith’s annual reports, may be found in N.B., House of Assembly, Journal, 1865–81, and the reports of the federal Dept. of Agriculture for 1880–1909 in Can., Parl., Sessional papers, 1881–1910. Also notable among Smith’s published medical reports is his reply to a lengthy questionnaire on leprosy solicited by the Hawaiian government, Questions regarding leprosy: enquiry made by the Hawaiian government; answers to the interrogatories submitted by his excellency the minister of foreign affairs of the kingdom of Hawaii . . . ([Ottawa, 1885]), prepared jointly with Joseph-Charles Taché. Some of his archaeological correspondence appears in the article “On pre-historic remains, and on an interment of the early French period, at Tabusintac River, N.B.,” N.B., Natural Hist. Soc., Bull. (Saint John), no.5 (1886): 14–19. Boston Medical Library–Harvard Medical Library, Harvard Univ. (Boston), Harvard Medical Arch., AA 17.5, vol.1 (Graduates with their theses, 1856–64); Biog. file on Harvard Medical School graduates, comp. c. 1905; “Massachusetts Medical College (Harvard Medical School), matriculations, 1860–1870 (winter).” College of Physicians Library (Philadelphia), Hist. Coll., Ashmead papers, Smith to Ashmead, 18 Aug. 1897. NA, RG 17, A I, 588, 613, 619, 623–24, 674, 678, 685, 744, 749, 1689; RG 29, 5, file 937015 1/2, pts.1–5; 299–300; 2355. PANB, MC 216/53; RS13/1/12: 9; RS153, A1/16, July 1880. Private arch., Young family (Tracadie, N.B.), A. C. Smith, medical certificates (mfm. at PANB, MC 291, B4–B7); Young family bible. UCC-C, Victoria Univ. Arch., 87.144V, no.2, 1884. L’Évangéline, 25 mars 1909. New York Times, 21 March 1909. Union Advocate (Newcastle, N.B.), 20 Sept. 1897, 17 March 1909. American Medical Assoc., Journal (Chicago), 52 (January–June 1909): 1131. 
Dictionnaire biographique du nord-est du Nouveau-Brunswick (5 cahiers parus, [Bertrand, N.-B.; Shippagan], 1983– ), l: 62–63. W. F. Ganong, “The history of Tracadie,” Acadiensis (Saint John), 6 (1906): 185–200. F.-M. Lajat, Le lazaret de Tracadie et la communauté des Religieuses hospitalières de Saint-Joseph (Montréal, 1938). New York Univ., Medical Dept., Annual announcement of lectures and catalogue, 1877–78. L. C. C. Stanley-Blackwell, “Leprosy in New Brunswick, 1844–1910: a reconsideration” (phd thesis, Queen’s Univ., Kingston, Ont., 1989).
[Image: Anti-evolution books on sale during the Scopes "Monkey Trial" in 1925. Credit: Getty Images] Nearly a quarter of a million science teachers are hard at work in public schools in the United States, helping to ensure that today’s students are equipped with the theoretical knowledge and the practical know-how they will need to flourish in tomorrow’s world. Ideally, they are doing so with the support of the lawmakers in their state’s legislatures. But in 2019 a handful of legislators scattered across the country introduced more than a dozen bills that threaten the integrity of science education. It was a mixed batch, to be sure. In Indiana, Montana and South Carolina, the bills sought to require the misrepresentation of supposedly controversial topics in the science classroom, while in North Dakota, Oklahoma and South Dakota, their counterparts were content simply to allow it. Meanwhile, bills in Connecticut, Florida and Iowa aimed beyond the classroom, targeting supposedly controversial topics in the state science standards and (in the case of Florida) instructional materials. Despite their variance, the bills shared a common goal: undermining the teaching of evolution or climate change. Sometimes it is clear: the one in Indiana would have allowed local school districts to require the teaching of a supposed alternative to evolution, while the Montana bill would have required the state’s public schools to present climate change denial. Sometimes it is cloaked in vague high-sounding language about objectivity and balance, requiring a careful analysis of the motives of the sponsors and supporters. Either way, though, such bills would frustrate the purpose of public science education. Students deserve to learn about scientific topics in accordance with the understanding of the scientific community.
With the level of acceptance of evolution among biomedical scientists at 99 percent, and the level of acceptance of climate change among climate scientists not far behind at 97 percent, it is a disservice to students to misrepresent these theoretically and practically important topics as scientifically controversial. From “Science Education Is Under Legislative Attack,” Glen Branch, Scientific American
A Credible Carbon Price Tomorrow Can Influence Decisions Today Over-the-horizon carbon prices can do a lot of good, but only if the promises are believable, the Potsdam Institute for Climate Impact Research concludes in a new report. A popular fear among some economists, the Potsdam researchers observe, is that “the anticipation of strong CO2 reduction policies might drive up [greenhouse gas] emissions” over the shorter term, in what they call a “green paradox.” This would happen as fossil fuel owners accelerated their extraction to maximize profit before regulations kick in. An alternative theory has it that investors, seeing the impending end of high-carbon energy, will divest from producer companies rather than leaving their assets stranded, effectively forcing them to exit the market and close down production. Which effect is stronger? The short answer: divestiture can be a stronger economic force than profiteering, but the effect depends on the credibility of political commitments to future carbon pricing. The impact is much more pronounced for coal than for other fossils. And it can be felt as much as a decade ahead. “We find that 10 years before carbon pricing policies are actually introduced, investors start pulling their money out of the coal power sector,” says lead author Nico Bauer. “They shy away from investing in fossil-fueled power plants as they realize that the lifetime during which these plants will make money will be curtailed by future climate policy. We find this divestment reduces emissions by between five to 20%, depending on the strength of the climate policy in the time before the climate policy gets implemented.” Oil production is more likely to demonstrate the green paradox, the investigators found. But any resulting increase in carbon emissions will be smaller than the reduction from the closure of divested coal plants.
The investigators found little reason to worry about fugitive emissions and polluter flight, as “emissions-intensive production facilities move from places of high regulation to those with low standards”. While it isn’t imaginary, they say, “this effect is limited.” But the positive outcomes depend heavily on good governance, the Potsdam group concludes. Anticipated regulations lower emissions even before coming into force only under several conditions: “that policy-makers can commit to introducing strong climate policies several years into the future, that the carbon pricing is uniform across regions, that investors believe the policy-makers will do what they say they will do, and that investors are shrewd in adapting their investment strategies accordingly.” Such shrewd allocation of private investment, the Potsdam group concludes, could “reduce emissions, helping us on the first step towards achieving deep emissions reductions—as long as the policy signals are strong, clear, and credible.” Canada is introducing a national floor price on carbon of $10 per tonne this year, rising to $50 per tonne by 2022.
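The article gives only the endpoints of Canada's floor price ($10 per tonne this year, $50 by 2022). As a rough illustration of how a pre-announced schedule lets investors price future years today, a linear escalation between those endpoints can be sketched; the start year, the $10-per-year step, and the function name are assumptions for illustration, not details from the report:

```python
# Hypothetical sketch of a linear carbon floor-price schedule.
# Only the endpoints ($10/tonne now, $50/tonne by 2022) come from the
# article; the 2018 start year and the $10/tonne yearly step are assumed.
def floor_price(year, start_year=2018, start=10, step=10, cap=50):
    """Assumed floor price in CAD per tonne of CO2e for a given year."""
    if year < start_year:
        return 0
    return min(start + step * (year - start_year), cap)

schedule = {year: floor_price(year) for year in range(2018, 2023)}
print(schedule)  # {2018: 10, 2019: 20, 2020: 30, 2021: 40, 2022: 50}
```

Under a credible schedule like this, an investor can compute the carbon price a plant will face in every year of its remaining lifetime, which is precisely the anticipation effect the Potsdam study describes.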
The Human Heart: A Guide to Understanding How It Functions From the very first time the heart starts pounding until the time of death, it may beat more than 3.5 billion times. The heart is the center of the circulatory system. The average heart beats 100,000 times each day, pushing around 2,000 gallons of blood throughout your body. Over a life span of 70-90 years, the heart will beat two to three billion times and circulate 50-65 million gallons of blood. The heart's role is to pump oxygenated blood to every cell in the body with its continuous beat. The heart has long been an object of mystery; modern technology has resolved most of that mystery, but there remains an enchantment and an eagerness to learn more. In this article, we will look at the heart's structure and how blood travels through the blood vessels, learn what you can do to monitor your heart's health, and see how to keep it healthy during your lifetime. The Heart's Anatomy The heart weighs between 7 and 15 ounces and is a little bigger than the size of your fist. It is located between the lungs in the center of the chest. The membrane that surrounds the heart is called the pericardium. The heart consists of four chambers. The left and right atria are known as the upper chambers. The lower chambers are referred to as the left and right ventricles. The septum is the wall of muscle that divides the left and right atria and the left and right ventricles. The strongest chamber of the heart is the left ventricle. - Your Heart and Blood Vessels – Illustrations and facts of the anatomy of the heart. - Heart Anatomy: Interior View – A tutorial page focusing on the interior view of the heart. - Anatomy of the Human Heart with Flash Illustration – A diagram of the heart using Flash illustration. - The Anatomy of the Heart – The anatomy of the heart and a description of how it functions.
- Home: Where the Heart Is – An outline and tour of the heart from the Franklin Institute. The Role of the Heart The heart is often described as the most valuable organ in the body. Its function is to pump blood throughout the body, circulating all of the materials our body needs to operate properly. The right side of the heart receives de-oxygenated blood from the body. The blood passes through the tricuspid valve into the right ventricle. From there, it is pumped through the pulmonary valve into the pulmonary artery, which carries the de-oxygenated blood to the lungs to pick up oxygen. - The Human Heart: An Online Exploration - A comprehensive overview of the role of the heart in our bodies. - Function of the Heart – Information on how the heart functions. - Anatomy of the Cardiovascular System - Information on the role the heart plays in the cardiovascular system. - What Does the Heart Do All Day? - A description of how your heart functions every day. - An Overview of The Human Heart - Information on what the human heart is and how it functions. - Heart Contractions and Blood Flow – An animation showing how the heart pumps. Keeping Your Heart Healthy It is important that we do everything we can to keep our hearts healthy. Heart disease is the leading cause of death in America. An estimated 64 million Americans have some form of cardiovascular disease. Making simple changes in your life can prevent cardiovascular problems and help you live longer. Watching your diet, controlling your blood pressure, exercising, and quitting smoking are just a few of the things you can do to keep your heart healthy. Avoid fast food and fried foods, and stay away from an overabundance of sugar. Fresh fruits, fresh juice, and green vegetables are good for your body. - Does Elevating Your Heart Rate During Exercise Have the Same Effect On You As Fight or Flight Response?
- The explanation of how the sympathetic and parasympathetic nervous systems work during exercise and the fight or flight response. - Keeping Your Heart Healthy - Information on the heart and ways to keep it healthy. - How to Keep Your Heart Healthy – Different information and tips on keeping a healthy heart. - Keep Your Heart Healthy – Tips on how to keep your heart healthy. - Promote Physical Activity and Healthy Eating – Information and statistics on healthy eating. - Good Foods - A guide of good foods for your heart's health. - Young at Heart Tips for Older Adults – Information on weight control and nutrition for older adults. - Keeping Your Heart Healthy – The steps to take to keep your heart healthy. - The Exercise Habit – A guide to the best aerobic exercises, weight-bearing exercises, and target heart rate. - How Healthy is Your Heart? - An evaluation with questions on how healthy your heart is. Monitor Your Heart Health Keeping track of your blood pressure is an important part of monitoring your heart's health, and it is especially important if you have a heart-related disease. You can purchase a blood pressure monitor at a local store, or go to the fire department and have your blood pressure taken there. Doctors suggest taking your blood pressure if you feel anxious or are having an irregular heartbeat. You should also monitor your heart rate when you are exercising. A doctor has several ways to examine your heart, such as palpating the chest wall. By tapping lightly on the chest (percussion), a doctor can tell whether the heart is enlarged and what shape it is. The doctor also listens through a stethoscope for murmurs or other unusual sounds. Finally, the heart can be monitored through exploratory procedures. - Monitoring Your Blood Pressure – Information on how to take your blood pressure. - Blood Pressure - Details on how to read your blood pressure.
- Holter Monitor – Information on a Holter monitor test. - Heart Murmurs and Other Sounds – Learn different ways the doctor will listen to your heart. - Examining the Heart – Information on how to listen to the heart with a stethoscope. - Heart Catheterization Diagnosis and Interventions – Information on the procedure of a heart catheterization. Heart Disease Studies show that every 34 seconds a person in the United States dies from heart disease, and every 20 seconds someone has a heart attack. Heart disease can refer to an assortment of diseases affecting the heart. A heart attack happens when heart muscle is damaged or destroyed because it does not receive enough oxygenated blood. Examples of heart disease include cardiomyopathy, cardiovascular disease, hypertension, and ischemic heart disease. - Pediatric Heart Information for Patients, Families, Medical Professionals – Information on different heart conditions and diseases in children. - Heart Disease – The facts about heart disease. - Gum Disease Links to Heart Attacks and Strokes – Learn the theories of how gum disease leads to heart problems. - Arrhythmia: Heart Rhythm Disorder – Information on the condition of having an arrhythmia of the heart. - President's Page: What is a Cardiologist? - Information on how a cardiologist treats defects and diseases of the heart.
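The lifetime figures quoted at the start of this guide (two to three billion beats and 50-65 million gallons over a 70-90 year life span) follow from the daily figures by simple arithmetic. A minimal sketch; the per-day numbers come from the text, while the function name and the 365-day year are our own illustrative choices:

```python
# Check the guide's lifetime estimates against its per-day figures.
BEATS_PER_DAY = 100_000     # from the text
GALLONS_PER_DAY = 2_000     # from the text

def lifetime_totals(years, days_per_year=365):
    """Total beats and gallons pumped over a life span of `years` years."""
    days = years * days_per_year
    return BEATS_PER_DAY * days, GALLONS_PER_DAY * days

for years in (70, 90):
    beats, gallons = lifetime_totals(years)
    print(f"{years} years: {beats / 1e9:.2f} billion beats, "
          f"{gallons / 1e6:.1f} million gallons")
```

The 70-year and 90-year totals land within the quoted ranges, so the guide's headline numbers are mutually consistent.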
Risk in Agriculture Risk is an important aspect of the farming business. The uncertainties inherent in weather, yields, prices, Government policies, global markets, and other factors that impact farming can cause wide swings in farm income. Risk management involves choosing among alternatives that reduce financial effects that can result from such uncertainties. Five general types of risk are described here: production risk, price or market risk, financial risk, institutional risk, and human or personal risk. - Production risk derives from the uncertain natural growth processes of crops and livestock. Weather, disease, pests, and other factors affect both the quantity and quality of commodities produced. - Price or market risk refers to uncertainty about the prices producers will receive for commodities or the prices they must pay for inputs. The nature of price risk varies significantly from commodity to commodity. - Financial risk results when the farm business borrows money and creates an obligation to repay debt. Rising interest rates, the prospect of loans being called by lenders, and restricted credit availability are also aspects of financial risk. - Institutional risk results from uncertainties surrounding Government actions. Tax laws, regulations for chemical use, rules for animal waste disposal, and the level of price or income support payments are examples of government decisions that can have a major impact on the farm business. - Human or personal risk refers to factors such as problems with human health or personal relationships that can affect the farm business. Accidents, illness, death, and divorce are examples of personal crises that can threaten a farm business.
According to a recently released study, 34 percent of the Las Vegas Valley’s freeways and major thoroughfares are in mediocre or poor condition, which imposes significantly higher costs on Las Vegas motorists. Dangerous road conditions are a leading cause of traffic accidents, and Las Vegas has an abundance of poorly maintained roads. Causes of Dangerous Road Conditions Dangerous road conditions are caused by two factors: natural disasters and poor maintenance. While Las Vegas has its fair share of extreme weather (such as droughts and heat waves), it does not ordinarily suffer from natural disasters severe enough to substantially damage roads (aside from the occasional flood). However, in some areas, Las Vegas has deferred road maintenance by over a decade. The study was conducted by TRIP, a private, Washington, DC-based national transportation research group. TRIP found that nine percent of Las Vegas bridges are structurally deficient, meaning that they are still safe to drive on but will require replacement or rehabilitation in the next few years. Additionally, TRIP found that many of the dangerous road conditions are concentrated in the older, eastern portions of Las Vegas, such as stretches of Nellis Boulevard, Eastern Avenue, and Hollywood Boulevard. Dangerous Road Conditions Contribute to Car Accidents The following road conditions commonly contribute to or cause car crashes: - Lack of rumble strips on freeways; - Obstructions of drivers’ vision (frequently, utility poles); - Faded paint markings; - Damaged or illegible signage; - Potholes and cracks that cause a driver to lose control; - Lack of guardrails; and - Poor traffic control in construction or other hazardous zones. Liability for injuries sustained due to a dangerous road condition puts people on a collision course with the government. Road maintenance is under the purview of several government agencies, including local, state, and federal agencies.
In Las Vegas, these include the Department of Transportation and the Regional Transportation Commission of Southern Nevada. Recovering from government agencies is complex precisely because it is difficult to identify which agency is responsible. Often, victims will have to hire investigators to examine the crash, interview witnesses, and examine government records to identify who failed and in what way.
noun, plural: tissue cultures (1) The technique of culturing animal or plant tissue in a controlled medium away from the source organism (2) The biological culture of tissue grown and maintained through this process Biological cultures are a common laboratory means of studying living organisms. There are different types of biological cultures, such as cell culture, tissue culture, and organ culture. Cell culture is a biological culture of cells of multicellular eukaryotes. Tissue culture is the cultivation of tissues from multicellular organisms. Organ culture is the cultivation of a part or all of an animal organ in a sterile controlled medium. Tissue culture is a type of biological culture wherein tissues from an animal or plant source are grown and maintained in a controlled medium. It may also pertain to the culture itself. Lately, tissue culture and cell culture have been used interchangeably. Tissue culture nowadays refers to the more popular technique that uses cells dispersed from tissues, or distant descendants of such cells. Montrose Thomas Burrows, an American pathologist, coined the term tissue culture.1 1 Carrel, Alexis, and Montrose T. Burrows. “Cultivation of Tissues in Vitro and its Technique.” Journal of Experimental Medicine 13 (1911): 387–96.
The mini ice age starts here By David Rose Last updated at 11:17 AM on 10th January 2010 The bitter winter afflicting much of the Northern Hemisphere is only the start of a global trend towards cooler weather that is likely to last for 20 or 30 years, say some of the world’s most eminent climate scientists. Their predictions – based on an analysis of natural cycles in water temperatures in the Pacific and Atlantic oceans – challenge some of the global warming orthodoxy’s most deeply cherished beliefs, such as the claim that the North Pole will be free of ice in summer by 2013. According to the US National Snow and Ice Data Centre in Colorado, Arctic summer sea ice has increased by 409,000 square miles, or 26 per cent, since 2007 – and even the most committed global warming activists do not dispute this. The scientists’ predictions also undermine the standard climate computer models, which assert that the warming of the Earth since 1900 has been driven solely by man-made greenhouse gas emissions and will continue as long as carbon dioxide levels rise. They say that their research shows that much of the warming was caused by oceanic cycles when they were in a ‘warm mode’ as opposed to the present ‘cold mode’. This challenge to the widespread view that the planet is on the brink of an irreversible catastrophe is all the greater because the scientists could never be described as global warming ‘deniers’ or sceptics. However, both main British political parties continue to insist that the world is facing imminent disaster without drastic cuts in CO2. Last week, as Britain froze, Climate Change Secretary Ed Miliband maintained in a parliamentary answer that the science of global warming was ‘settled’. 
Among the most prominent of the scientists is Professor Mojib Latif, a leading member of the UN’s Intergovernmental Panel on Climate Change (IPCC), which has been pushing the issue of man-made global warming on to the international political agenda since it was formed 22 years ago. Prof Latif, who leads a research team at the renowned Leibniz Institute at Germany’s Kiel University, has developed new methods for measuring ocean temperatures 3,000ft beneath the surface, where the cooling and warming cycles start. He and his colleagues predicted the new cooling trend in a paper published in 2008 and warned of it again at an IPCC conference in Geneva last September. Last night he told The Mail on Sunday: ‘A significant share of the warming we saw from 1980 to 2000 and at earlier periods in the 20th Century was due to these cycles – perhaps as much as 50 per cent. 'They have now gone into reverse, so winters like this one will become much more likely. Summers will also probably be cooler, and all this may well last two decades or longer. ‘The extreme retreats that we have seen in glaciers and sea ice will come to a halt. For the time being, global warming has paused, and there may well be some cooling.’ As Europe, Asia and North America froze last week, conventional wisdom insisted that this was merely a ‘blip’ of no long-term significance. Though record lows were experienced as far south as Cuba, where the daily maximum on beaches normally used for winter bathing was just 4.5C, the BBC assured viewers that the big chill was merely short-term ‘weather’ that had nothing to do with ‘climate’, which was still warming. The work of Prof Latif and the other scientists refutes that view.
This book traces the lives of one of the great German Jewish banking families from its beginnings through the end of the 20th century. The M. M. Warburg Bank was started in 1798 in Hamburg and flourished modestly through the 19th century under the guidance of one family member or another. At the end of the century control fell to Moritz Warburg and his 22-year-old son Max. It was Max who built the banking company into a major economic force in Germany and the larger world up until the Nazi takeover in the 1930s. The family branched out into the United States when Max's brothers, Paul and Felix, married into wealthy New York banking families. Chernow describes Paul's importance in American politics as perhaps the major force in the inception of the Federal Reserve System, and Felix's significance in charitable work. Following the family through the twentieth century, Chernow details Jimmy Warburg's involvement in the Democratic Party as an advisor to Adlai Stevenson, Eric Warburg's career in intelligence work for the United States Army, and Sir Siegmund Warburg's financial successes in London after World War II, as well as the lives and careers of many other members of the family. This review of the book was prepared by Jack Goodstein.
From Sun to Earth

Outer Space
The enormous amount of energy continuously emitted by the sun is dispersed into outer space in all directions. Only a small fraction of this energy is intercepted by the earth and other solar planets. The solar energy reaching the periphery of the earth's atmosphere is considered to be constant for all practical purposes, and is known as the solar constant. Because of the difficulty in achieving accurate measurements, the exact value of the solar constant is not known with certainty but is believed to be between 1,353 and 1,395 W/m2 (approximately 1.4 kW/m2, or 2.0 cal/cm2/min). The solar constant value is estimated on the basis of the solar radiation received on a unit area exposed perpendicularly to the rays of the sun at an average distance between the sun and the earth. In passing through outer space, which is characterized by vacuum, the different types of solar energy remain intact and are not modified until the radiation reaches the top of the earth's atmosphere. In outer space, therefore, one would expect to encounter the types of radiation listed in Table 1, which are: gamma ray, X-ray, ultraviolet, and infrared radiations.

Atmospheric Effects
Not all of the solar radiation received at the periphery of the atmosphere reaches the surface of the earth. This is because the earth's atmosphere plays an important role in selectively controlling the passage towards the earth's surface of the various components of solar radiation. A considerable portion of solar radiation is reflected back into outer space upon striking the uppermost layers of the atmosphere, and also from the tops of clouds. In the course of penetration through the atmosphere, some of the incoming radiation is either absorbed or scattered in all directions by atmospheric gases, vapours, and dust particles. In fact, there are two processes known to be involved in atmospheric scattering of solar radiation.
These are termed selective scattering and non-selective scattering, and they are distinguished by the different sizes of the particles in the atmosphere. Selective scattering is so named because radiation with shorter wavelengths is scattered much more extensively than radiation with longer wavelengths. It is caused by atmospheric gases or particles that are smaller in dimension than the wavelength of a particular radiation, such as gas molecules, smoke, fumes, and haze. Under clear atmospheric conditions, therefore, selective scattering is much less severe than when the atmosphere is extensively polluted from anthropogenic sources. Selective atmospheric scattering is, broadly speaking, inversely proportional to the wavelength of radiation and therefore decreases in the following order: far UV > near UV > violet > blue > green > yellow > orange > red > infrared. Accordingly, the most severely scattered radiation is that which falls in the ultraviolet, violet, and blue bands of the spectrum. The scattering effect on radiation in these three bands is roughly ten times as great as on the red rays of sunlight. It is interesting to note that the selective scattering of violet and blue light by the atmosphere is what causes the blue colour of the sky. When the sun is directly overhead around noon, little selective scattering occurs and the sun appears white, because sunlight at this time passes through the minimum thickness of atmosphere. At sunrise and sunset, however, sunlight passes obliquely through a much thicker layer of atmosphere. This results in maximum atmospheric scattering of violet and blue light, with only a little effect on the red rays of sunlight. Hence, the sun appears red at sunrise and sunset.
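The wavelength dependence described above is Rayleigh scattering, whose strength varies as the inverse fourth power of the wavelength (1/λ⁴), a sharper statement than simple inverse proportionality. A few lines of Python reproduce the "roughly ten times" figure; the band-centre wavelengths below are representative values chosen for illustration, not figures from the article.

```python
# Relative Rayleigh scattering strength varies as 1 / wavelength^4.
# Band-centre wavelengths (nm) are illustrative assumptions, not article data.
BANDS = {"violet": 400, "blue": 450, "green": 550, "red": 700}

def relative_scattering(wavelength_nm: float, reference_nm: float = 700.0) -> float:
    """Scattering strength relative to red light at the reference wavelength."""
    return (reference_nm / wavelength_nm) ** 4

for name, wavelength in BANDS.items():
    print(f"{name:6s} ({wavelength} nm): {relative_scattering(wavelength):4.1f}x red")
```

Violet comes out at about 9.4 times the scattering of red, consistent with the "roughly ten times" quoted in the text.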
Non-selective scattering, occurring in the lower atmosphere, is caused by dust, fog, and clouds with particle sizes more than ten times the wavelength of the components of solar radiation. Since the amount of scattering is equal for all wavelengths, clouds and fog appear white although their water particles are colourless. Atmospheric gases also absorb solar energy at certain wavelength intervals called absorption bands, in contrast to the wavelength regions characterized by high transmittance of solar radiation, called atmospheric transmission bands or atmospheric windows. The degree of absorption of solar radiation passing through the outer atmosphere depends upon the component rays of sunlight and their wavelengths. Gamma rays, X-rays, and ultraviolet radiation of less than 200 nm in wavelength are absorbed by oxygen and nitrogen. Most of the radiation in the range of wavelengths from 200 to 300 nm is absorbed by the ozone (O3) layer in the upper atmosphere. These absorption phenomena are essential for living things, because prolonged exposure to radiation of wavelengths shorter than 300 nm destroys living tissue. Solar radiation in the red and infrared regions of the spectrum, at wavelengths greater than 700 nm, is absorbed to some extent by carbon dioxide, ozone, and water present in the atmosphere in the form of vapour and condensed droplets (Table 1). In fact, the water droplets present in clouds not only absorb rays of long wavelengths, but also scatter some of the solar radiation of short wavelengths.

Ground Level

As a result of the atmospheric phenomena involving reflection, scattering, and absorption of radiation, the solar energy that ultimately reaches the earth's surface is much reduced in intensity. The amount of reduction varies with the radiation wavelength, and depends on the length of the atmospheric path through which the solar radiation passes.
The intensity of the direct beams of sunlight thus depends on the altitude of the sun, and also varies with such factors as latitude, season, cloud coverage, and atmospheric pollutants. The total solar radiation received at ground level includes both direct radiation and indirect (or diffuse) radiation. Diffuse radiation is the component of total radiation caused by atmospheric scattering and by reflection of the incident radiation from the ground. Reflection from the ground is primarily visible light, with a maximum radiation peak at a wavelength of 555 nm (green light). The relatively small amount of energy radiated from the earth itself, at an average surface temperature of 17°C, consists of infrared radiation with a peak at approximately 10,000 nm (10 µm). This invisible radiation is dominant at night. During daylight hours, the amount of diffuse radiation may be as much as 10% of the total solar radiation at noon, even when the sky is clear, and this value may rise to about 20% in the early morning and late afternoon. In cloudy weather, therefore, the total radiation received at ground level is greatly reduced, the amount of reduction depending on cloud coverage and cloud thickness. Under extreme cloud conditions a significant proportion of the incident radiation is in the form of scattered or diffuse light. In addition, less solar radiation is expected during the early and late hours of the day. These facts are of practical value for the proper utilization of solar radiation for such purposes as the destruction of microorganisms.
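Two figures in this section can be sanity-checked with a few lines of arithmetic: the conversion of the solar constant from W/m² to cal/cm²/min, and the wavelength at which a 17°C surface radiates most strongly (Wien's displacement law). The constants below are standard physical values, not taken from the article.

```python
# Sanity checks for two figures quoted in the text.
# Standard constants (assumptions, not article values):
CAL_PER_JOULE = 1 / 4.184   # thermochemical calorie
WIEN_B = 2.898e-3           # Wien's displacement constant, m*K

def solar_constant_cal(w_per_m2: float) -> float:
    """Convert W/m^2 to cal/cm^2/min: divide by 4.184 J/cal and by 1e4 cm^2/m^2, multiply by 60 s/min."""
    return w_per_m2 * CAL_PER_JOULE / 1e4 * 60

def peak_wavelength_nm(temp_celsius: float) -> float:
    """Peak black-body emission wavelength via Wien's law, in nanometres."""
    return WIEN_B / (temp_celsius + 273.15) * 1e9

print(f"1,395 W/m^2  -> {solar_constant_cal(1395):.2f} cal/cm^2/min")
print(f"17 C surface -> peak near {peak_wavelength_nm(17):,.0f} nm")
```

The conversion confirms that 1,395 W/m² is about 2.0 cal/cm²/min, and Wien's law places the earth's thermal emission peak near 10 µm, in the far infrared.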
We recently published a widely debated news story about one company's claim that its new texturing process will make games 70% smaller. That company is called Allegorithmic, and the technology its team is developing is designed to keep texture quality standards as high as they are now whilst making texture files 90% smaller. The key to all this? A little thing called procedural textures. This follow-up to that article, an interview with Sebastian DeGuy, hopes to unravel in a little more detail exactly what procedural textures are and what they will mean to you, the gamer. We discuss how small the textures can actually get, whether procedural textures can compare with textures in games like Crysis, and much more. Tuck in. Note: All pictures are made using procedural textures; click on them to view the quality of the image for yourself. bit-tech: The first thing most people said upon reading the original story was 'I saw this with .kkrieger a few years ago.' What is the difference between that game and your procedural texturing technology? Sebastian DeGuy: Well, .kkrieger is a spinoff from the demo scene. In the demo scene, there are contests for doing more with less: people compete with one another to produce the most impressive demo starting from a very small program. To do this they used a lot of procedural methods for generating the content (models, animation, textures, etc.). The procedural textures look pretty swish. On the other hand, several researchers worked on procedural techniques for generating content a long time ago. When people see what we're doing they sometimes say, "We've seen that with .kkrieger," and they are right in some ways. Procedural techniques have been considered by industry experts for a long time, even before .kkrieger. Despite some similarities, technique-wise, we are quite different in several ways. First, the inner technology (the maths) that we use is based on modern maths.
We use 'wavelets' instead of the classic 'Fourier transform', which was the mathematical technique used in the past by all procedural texturing techniques (including .kkrieger). Our technique works on a new mathematical model that I developed whilst studying for my PhD. So in basic terms, you guys use a modern mathematical technique, whereas older procedural texturing techniques were based on old maths? Exactly - our technique gives developers a much richer ability to express themselves. A major reason procedural techniques for textures haven't been used that much so far has been the limitations of the Fourier method. As you can see, gritty textures are equally impressive. And the complexity of the maths? The inner characteristics of the Fourier transform, yes. You would have to be a strong mathematician to master it sufficiently to produce good-looking textures. We, on the other hand, can produce these with far less effort. The complexity of the Fourier transform is the main reason you don't see that many people using procedural textures: using that technique is hard. The gallery shows what we can do with our ProFX technology. We are doing real-time renders of our procedural textures, and the output looks like photographs to me. Doing that in traditional programmes would be quasi-impossible. It's quite easy with our tools.
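The interview does not detail the wavelet model, and the sketch below is not Allegorithmic's method; it is only a minimal illustration of the core idea of procedural texturing: an image generated on demand from a handful of parameters (here a lattice size and a random seed) instead of being stored pixel by pixel. It uses classic value noise.

```python
import random

def value_noise(width, height, grid=8, seed=42):
    """Generate a grayscale texture procedurally: random values on a coarse
    lattice, smoothly interpolated across each cell. The whole image is
    reproducible from just (width, height, grid, seed) - no pixel storage."""
    rng = random.Random(seed)
    lattice = [[rng.random() for _ in range(grid + 1)] for _ in range(grid + 1)]

    def smooth(t):
        # smoothstep easing for visually continuous interpolation
        return t * t * (3.0 - 2.0 * t)

    pixels = []
    for y in range(height):
        for x in range(width):
            gx, gy = x * grid / width, y * grid / height
            x0, y0 = int(gx), int(gy)
            tx, ty = smooth(gx - x0), smooth(gy - y0)
            top = lattice[y0][x0] * (1 - tx) + lattice[y0][x0 + 1] * tx
            bottom = lattice[y0 + 1][x0] * (1 - tx) + lattice[y0 + 1][x0 + 1] * tx
            pixels.append(top * (1 - ty) + bottom * ty)
    return pixels

texture = value_noise(64, 64)
print(len(texture))  # 4096 greyscale values in [0, 1]
```

A 64×64 texture here is fully determined by four numbers; real systems layer many octaves of noise and combine them with colour ramps and filters to approach photographic quality.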
This remarkable manuscript is very different to what a casual glance might suggest: despite the Islamic-style carpet page and the Arabic script, this is a Christian document - an account of the Gospels - made in Palestine in the 14th century. The Four Gospels in Arabic, Palestine, 1337. Gospel of Luke BL Add. MS 11856, ff. 94v–95 Copyright © The British Library Board What is a gospel? A gospel recounts the life of Jesus of Nazareth and his teachings, which form the foundations of the Christian faith. He lived in Israel during the Roman occupation of the country. His mission to reform what he saw as corruption in the Jewish faith caused conflict with the religious hierarchy and led to his execution by the Roman authorities. After his death and subsequent reports of his rising from the dead, followers of Christ - meaning 'the anointed one' - developed his teachings into a new faith, independent of Judaism but keeping much of its scriptures. Several gospels had been written by disciples of Jesus during the centuries following his death, but only four were authorised by the Council of Nicaea in 325 for inclusion in the Christian Bible. These four were attributed to St Matthew, St Mark, St Luke and St John, known as the four Evangelists. What is a carpet page? A carpet page is one decorated with rich, ornate, ingeniously interwoven abstract patterns reminiscent of an exotic carpet. Such pages are strongly associated with Islamic manuscripts, though they also occur in some early 'insular' manuscripts such as the Lindisfarne Gospels from seventh- or eighth-century England. Because the Qur'an cannot be illustrated with any representational images, Muslim scribes and artists developed techniques in abstract decoration and patterning to astonishing levels of subtlety and ingenuity. An example is shown in a page from this Mamluk Qur'an. What is on this page?
The text on the right-hand page - written in Arabic, which runs right-to-left and downwards - gives a portrait of one of the four Evangelists, Luke. He was Syrian, born in Antioch. Ancient manuscripts assert that he died aged 84, having never married or had children. He is the patron saint of physicians and surgeons, and is often depicted in western art as a physician. Luke is often associated with St Paul, whose biblical writings refer to Luke at various times. According to tradition, he wrote not only his Gospel but also the Acts of the Apostles, the third and fifth books of the New Testament. He is also said to have painted the first icons: pictures of Mary, Peter and Paul. The Black Madonna of Czestochowa in Poland is claimed to have been painted by Luke. When were the Gospels translated into Arabic? The earliest manuscript copies of the Four Gospels in Arabic date from the late eighth or ninth centuries, and were translated from a variety of languages, including Syriac, Greek and Coptic. Perhaps the oldest dated Arabic copy of the Four Gospels is in the library at Mount Sinai, dated 859. The Harley Trilingual Psalter, from mid-12th-century Sicily, contains an Arabic translation of the Psalms made a century earlier.
If you’re like most people, you’ve probably heard about thyroid eye disease (TED) and thought it was a pretty serious condition. After all, it can cause vision problems, pain, and even blindness. But is TED really as bad as it seems? And if so, can it be cured? In this blog post, we will explore the realities of TED and answer these questions. We will also provide tips on how to prevent TED and how to manage it if it does occur. So read on to learn more about this often-misunderstood eye condition. What is thyroid eye disease? Thyroid eye disease is a condition in which an immune reaction, usually associated with an overactive thyroid gland, disturbs the normal functioning of the eyes. Symptoms may include dry eyes, blurred vision, and sensitivity to light. Thyroid eye disease is relatively rare, affecting about 1 in 5,000 people overall. There is no known cure, but treatment can often improve symptoms. The condition can damage the tissues around the eyes and lead to vision problems, including blurry vision and eye fatigue. Treatment typically includes bringing the amount of thyroid hormone in the body back to normal and using medication to protect vision; some people may also require surgery to relieve swelling in the tissues behind the eyes. Symptoms of thyroid eye disease The most common symptoms of thyroid eye disease are fatigue, dry eyes, and vision changes.
Other symptoms may include a lump in the throat, hoarseness, trouble swallowing, or signs of an overactive thyroid. If left untreated, thyroid eye disease can lead to decreased vision and even blindness. There is no cure for thyroid eye disease, but treatments can help relieve some of the symptoms. Treatment may include medication to normalise thyroid hormone levels or surgery on the overactive gland. Diagnosis of thyroid eye disease There is no single test that can definitively diagnose thyroid eye disease (TED), as the condition can be caused by a variety of factors. Your doctor may perform a physical exam, order blood tests, and carry out other diagnostic measures to rule out other causes for your symptoms. If TED is suspected, your doctor may refer you to an ophthalmologist for further evaluation. If your doctor diagnoses TED, he or she will likely recommend treatment. Treatment options generally include medications to lower thyroid hormone levels and/or surgery to relieve swollen tissue behind the eye. In some cases, however, TED does not respond fully to treatment and may require ongoing care from your doctor. Treatment of thyroid eye disease There is no one-size-fits-all answer to treating thyroid eye disease, as the treatment plan will vary depending on the individual’s symptoms and medical history. However, many people find relief with a combination of conservative treatments and surgery. Conservative measures include using medications to control inflammation and lowering thyroid hormone levels if they are elevated. Surgery may be necessary if the condition is severe or if it is causing significant vision impairment. Some people find relief from symptoms by following a thyroid-friendly diet and taking natural supplements, but there is no guarantee that these methods will work. There is no known cure for thyroid eye disease, but treatments can help improve the symptoms. Treatment options include glasses, surgery, and medication.
Surgery may be necessary if the condition is causing vision problems or if it is not responding to other treatments. Some patients also take medication to bring their thyroid hormone levels under control. Prevention of thyroid eye disease There is no one definitive answer when it comes to whether or not thyroid eye disease (TED) can be cured. However, there are a number of things that can be done to help reduce the risk of developing the condition and improve your chances of a successful resolution. One key strategy is to keep your thyroid health in check: if you have an overactive thyroid, your eyes are more likely to become affected. Additionally, following a healthy diet and getting regular exercise will help keep your body functioning properly and may also lower your risk of developing TED. If you do experience symptoms of TED, make sure to see a doctor as soon as possible. Treatment options vary depending on the severity of the condition, but often involve anti-inflammatory medication or surgery to relieve pressure on the affected tissue. If you are able to delay or avoid developing TED in the first place, your chances of a successful resolution are much higher.
For those of you who are exercising - and of course for those who aren’t - here is why you should keep doing it, or better still, start doing it. In addition to strengthening the cardiovascular system and promoting weight loss, cardiovascular exercise is great for your brain. By improving circulation, you bring more blood to the brain, supplying the oxygen and glucose that your brain needs to function normally. With each movement, the muscles stimulate the release of various hormones that act on the brain. These hormones affect the growth of brain cells, regulate mood, and improve concentration. Cardio exercise releases the following hormones: - Serotonin, which improves mood. - Dopamine, which enhances concentration and memory. - Norepinephrine, which has a positive effect on attention, perception, motivation and sexual arousal. If you practise every day, the release of these hormones and the improved blood circulation will stimulate brain growth. In one study, researchers found that exercising for 1 hour, three times a week for 6 months, stimulates the growth of the hippocampus, a part of the brain responsible for memory and learning. Productivity and cardio-exercising People who exercise every day have 23% higher productivity. Exercise acts like a cup of coffee: the heart starts to work faster, blood circulation improves, the body is filled with energy and your thoughts become clearer. Only 30 minutes of easy cycling will improve concentration, which will stay at a high level for the next 60 minutes. Memory and cardio-exercising If you cannot remember someone’s name, set aside games like sudoku and start exercising. In one study it was found that women performed as much as 20% better on a memory test after running. The intensity of exercise is also important.
Scientists have found that after intense physical activity, memory for new information improves by around 20%. Stress and cardio-exercising Scientists have found that regular exercise significantly reduces stress levels. Only 20 minutes of cycling will improve your mood, and best of all, it will help keep that good mood going for the next 12 hours. During exercise, new cells grow in the brain, and they help the brain cope with stress. In other words, your brain will be better able to deal with any stressful situation if you exercise regularly. Communication and cardio-exercising Researchers have found that people who exercise regularly cope much better with everyday communication, in both business and private life. After a few months of regular exercise, you will feel more confident, which will lead to more successful communication involving compromise, contracting and negotiation. In addition to communicating better, people who exercise regularly are also able to make better decisions. Mood and cardio-exercising Forget about antidepressants and start exercising. Scientists have found that exercise reduces the risk of depression and the frequency of sudden mood swings. Physical activity releases hormones that stimulate a good mood (endorphins, serotonin). Confidence plays a big role in determining mood: exercise will improve your appearance, you will feel more confident, and ultimately you will be in a better mood.
THE aliens are out there and Earth had better watch out, at least according to Stephen Hawking. He has suggested that extraterrestrials are almost certain to exist - but that instead of seeking them out, humanity should be doing all it can to avoid any contact. The suggestions come in a new documentary series in which Hawking, one of the world's leading scientists, will set out his latest thinking on some of the universe's greatest mysteries. Alien life, he will suggest, is almost certain to exist in many other parts of the universe: not just on planets, but perhaps in the centre of stars or even floating in interplanetary space. ... "The real challenge is to work out what aliens might actually be like." The answer, he suggests, is that most of it will be the equivalent of microbes or simple animals - the sort of life that has dominated Earth for most of its history. One scene in his documentary for the Discovery Channel shows herds of two-legged herbivores browsing on an alien cliff-face, where they are picked off by flying, yellow lizard-like predators. Another shows glowing fluorescent aquatic animals forming vast shoals in the oceans thought to underlie the thick ice coating Europa, one of the moons of Jupiter. Such scenes are speculative, but Hawking uses them to lead on to a serious point: that a few life forms could be intelligent and pose a threat. Hawking believes that contact with such a species could be devastating for humanity. ...
Fasted training is training without eating beforehand, for example before breakfast. This type of training has become a topic of conversation in many fitness and health circles, but its benefits are hotly debated. It is known scientifically that as exercise intensity increases, i.e. from walking to running, so does the body's usage of carbohydrate. But does this mean that we have to consume carbohydrate before training in order to train effectively? Fat is one fuel source that everyone has plenty of stored in their body. But using fat to fuel high-intensity activity is often inadequate, as the metabolism of fat is a slow process. As a result, the body can't keep up with the energy demands by using fat alone if you exercise at a high intensity for a long period, because the body needs lots of energy quickly. Protein is another source of energy, but again its metabolism is a relatively slow process, as proteins have to be broken down before they are in a form which the muscles can use. Fasted training usually means training first thing in the morning without having had any breakfast. This means you will become glycogen depleted very quickly, as your blood sugar will be low and much of your liver glycogen will have been used during your overnight fast. As you start to exercise, your muscle glycogen is also used up very quickly. Fat becomes the dominant fuel source, and training in this way does improve the metabolic pathways for burning fat. There are also other benefits of such training: 1. Improved insulin sensitivity. - Insulin is released when we eat to help us absorb the nutrients from our food. The hormone initiates the removal of sugars from the bloodstream and directs them to the liver, muscles, and fat cells to be used as energy. - However, eating too much and too often increases the body's resistance to insulin's effects, and while poor insulin sensitivity increases the risk of heart disease and cancer, it also makes it harder to lose fat.
- Eating less frequently (i.e. fasting more regularly) can help, because it results in the body releasing insulin less often, so we become more sensitive to it. This then helps individuals to lose fat and also get glucose into their muscles. 2. Increase of growth hormone (GH). - GH is a hormone that helps the body make new muscle tissue, burn fat and improve bone quality, physical function, and longevity. - Along with regular weight training and sleep, fasting can help to increase the body’s GH. The effect ends when the fast does, which is a compelling reason to train fasted in order to keep muscle-friendly hormones at their highest levels. However, there are two major drawbacks to fasted training where fat is the dominant fuel: 1. It is not possible to train at as high an intensity as would be possible on carbohydrate, as the body cannot produce energy from fat fast enough. 2. The immune system relies on carbohydrate to function. Prolonged endurance exercise causes a drop in immune function regardless, and low carbohydrate availability only makes this worse. This can leave the body more prone to coughs and colds - not ideal if you are wanting to train hard. So, based on the drawbacks of using fat for fuel, and the benefits that can come of such training, for example improving the metabolic pathway for fat metabolism, it has been suggested that the best way to use fasted training is at selected times, such as the early season, when sessions are of a lower intensity and performance is not so important. Ideally, fasted sessions should be no longer than 60 minutes so that the immune system is not put under too much stress. Finally, to conclude, you should not train in a fasted state for all of your sessions; based on current research, during your racing season you should train how you race. The “train low, race high” approach should therefore not be followed all the time, as it does not improve performance.
It must be remembered that carbohydrate is the dominant fuel for performance, and therefore carbohydrate should be available to the body in training in order to develop these metabolic pathways. Additionally, everyone is different: fasting will not work for everyone, and the only way to see if it helps you to reach your goals is to try it out. For more information: Burke, L.M. (2010). Fueling strategies to optimize performance: training high or training low? Scandinavian Journal of Medicine & Science in Sports, 20(Suppl. 2), 48–58.
The NSW Environment Protection Authority (EPA) is the primary environmental regulator for New South Wales. We partner with business, government and the community to reduce pollution and waste, protect human health, and prevent degradation of the environment. We encourage businesses to make sure their activities do not harm the environment or human health. Our work is informed by scientific evidence and consultation with stakeholders. The EPA Strategic Plan sets out our work priorities and key result areas (see ‘Our Performance’). The plan is updated each year to reflect changes in focus and emerging issues. The EPA was established in 1991 under the Protection of the Environment Administration Act 1991 (POEA Act). The EPA built a strong reputation over the next decade as an effective and innovative environmental regulator. In 2003, the EPA was incorporated, along with other environment-related agencies, into a new Department of Environment and Conservation, reflecting a shift in government priorities from pollution prevention to conservation. In 2011, a major pollution incident at Kooragang Island in Newcastle prompted the NSW Government to reprioritise pollution prevention and regulation. In February 2012, the EPA was re-established as an independent authority with a clearly defined mandate and enhanced powers. Since 2012 the EPA has been working towards developing the full set of capacities required to meet its mandate and increasing its independence from services provided by the Office of Environment and Heritage (OEH).
Test for Low-Level Brain Activity May Aid in Next Schiavo Case Doctors may be able to tell whether a patient is in a vegetative or minimally conscious state by tracking signals on a path through the brain, a study said. The findings could lead to a new diagnostic tool to help doctors with life-support decisions in cases such as that of Terri Schiavo, the Florida woman who was in a vegetative state for 15 years before a court ordered her feeding tube removed. The study, by researchers from the University of Liege in Belgium, is published in the journal Science. It can be difficult to differentiate between people in a vegetative state, in which patients lack cognitive function though display forms of wakefulness, and a less serious impairment that leaves patients minimally conscious yet often able to communicate in some form. This medical dilemma led to a legal battle in the case of Schiavo, who fell into a coma in 1990 and died in 2005. “This is one step further in understanding the brain function in these patients, and it gets closer to a diagnostic tool,” said Melanie Boly, the study’s first author and a post-doctoral researcher at the Coma Science Group at the University of Liege in Belgium, in a telephone interview. “This may be convenient to use clinically because you can bring the technique to the patient’s bedside.” More study with more patients will be required to determine whether the method can be used as a diagnostic measure, said Nicholas Schiff, a professor of neurology and neuroscience at Weill Cornell Medical College in New York. Diagnostic measures have a higher standard of proof than this study demonstrates, he said. Insight Not Diagnostic “It’s really a kind of an insight,” said Schiff in a telephone interview.
“In patients who are clearly vegetative or clearly minimally conscious, do you see a biological distinction?” The researchers used electroencephalogram (EEG) recordings of brain activity from 22 healthy volunteers, 8 patients in vegetative states and 13 in minimally conscious states to model brain activity. The EEG recordings were taken while subjects listened to tones, and were then fed into a mathematical model to determine what they meant. Most people’s brains process sounds by sending signals “up” to the front of the brain, in the parietal and frontal cortex, to locate the type of sound, identify it, and make decisions. The signal is then sent back down to the temporal cortex, in a feedback loop. The model showed that vegetative-state patients didn’t complete the loop; the healthy subjects and minimally conscious patients did. “What it’s saying, in simpler terms, is that these patients had a more profoundly impaired frontal part of their brain,” Schiff said. “It’s an elegant, nice study, and it does comport nicely with a lot of other work.” To contact the editor responsible for this story: Reg Gale at email@example.com.
River Stour Course

The River Stour starts in the Clent Hills, passes through Halesowen, Cradley Heath, Lye, Stourbridge and Amblecote, close to Kinver, and then flows towards Kidderminster, finally meeting the River Severn at Stourport-on-Severn. Whilst it is accepted that the River Stour rises in the Clent Hills at roughly 250 metres, there is no officially recognised single source, as it is fed by a number of small header streams which flow from springs either on the surface or underground, where water oozes to the surface causing a small boggy area. At least two of these streams start in Uffmoor Wood, and another has its origin near Saint Kenelm’s Road and flows along the edge of Uffmoor Wood and then through Breach Dingle. The responsibility for monitoring the water quality lies with the Environment Agency. Testing is only done at a few sites and infrequently; the river is not tested in Stourbridge. The testing showed that invertebrate life is poor, reflecting the pollution and low oxygen levels: only pollution-tolerant invertebrates can live in the River Stour. The pollution is ‘diffuse’, coming from small and numerous sources - wrongly connected appliances which run directly into the river, screen wash from car windscreens, and oil and chemicals wrongly disposed of by businesses and individuals. Litter and branches are not removed unless there is a flood risk.

History and today

The river and its tributaries have undergone many man-made changes over the centuries. First it was dammed in places to form mill pools and, at Halesowen Abbey, fish pools. Then it was diverted to operate mills and, more recently, to facilitate the construction of buildings and roads. It has been channelled through open culverts to prevent flooding, and enclosed culverts to enable buildings and roads to be constructed above the river. In Stourbridge the river cannot be seen alongside the ring road, where it is culverted. It reappears near the Bonded Warehouse and almost follows the Stourbridge Arm Canal.
The present route only approximates its natural course in some places. Our new video shows volunteers clearing rubbish out of the River Stour. Anyone wanting to get involved: we’re always looking for new people, and there’s something for everyone to do. River Stour Clear Water Video Film produced, music written and played by Ian Winstanley River Clean-up Dates - River Stour Clean-up – Sunday 14th May - River Stour clean-up event Date: Sunday 14th May Time: 10.15 for 10.30 Place: Still to be decided Other dates: June 11th, July 16th, August 20th Please email Rosanne for further information firstname.lastname@example.org Update We have managed to get the river bank cleared of tipping at Bagley Street by contacting the owners of the adjoining business. There was quick … Continue reading River Stour Clean-up – Sunday 14th May - Nature Returns to the River Stour - Thanks to our dedicated and committed volunteers we’ve almost completely cleared the River Stour behind the Lion Health Centre. The wonderful news is that as we’ve cleared away discarded shopping trolleys and tyres, the wildlife has returned. On Sunday 5th June we saw fish swimming in the now fast-flowing river, and as we were … Continue reading Nature Returns to the River Stour - River Stour Clean-up – Sunday 5th June - Our next event is Sunday June 5th at Murray’s, Bradley Road, Stourbridge. Please check email before coming on Sunday in case of changes due to weather, etc. For any newcomers: Please email email@example.com to be added to the mailing list, thanks Date: Sunday 5th June Time: 10.15 for 10.30 start Place: Murray’s by Lion … Continue reading River Stour Clean-up – Sunday 5th June River Stour Maps River Stour Booklet We’ve recently reprinted the River Stour booklet (written by Gerald Darby), which gives the history and course of the Stour with photos. The cost is £3.95, of which £1.50 goes to Mary Stevens Hospice. Available from the Fair Trade gift shop Fair & Square, Market St, Stourbridge
Family • Cucurbitaceae Scientific names Common names Cucumis sativus Linn. Kalabaga (Bis.) Huang gua (Chin.) Kasimun (Bon.) Maras (Sul.) Madas (Sul.) Pepino (Span., Tag.) Pipino (Tag., Ilk.) Cucumber (Engl.) Other vernacular names CHINESE: Huang kwa, wong gaw, qing gua, tseng kwa BURMESE : Thakhwa. DANISH : Agurk. DUTCH : Komkommer. FINNISH : Kurkku. FRENCH : Concombre, Concombre commun, Concombre vert long, Concombre blanc long GERMAN : Gurke. ITALIAN : Cetriolo HINDI : Kheera, Kakri, Kakdi, Tihu. INDONESIA : Ketimun JAPANESE : Kyu uri, Moro kyu. KHMER : Trâsâk. KOREAN : Oh ee (oi). LAOTIAN : Tèèng. MALAYSIA : Timun NEPALESE : Asare kankro, Airelu kankro, Kakro, Khira. PORTUGUESE : Pepino. SINHALESE : Pipinya (Pipingha), Pipingkai. SPANISH : Pepino, Cohombro. SUNDANESE : Bonteng. THAI : Taeng kwaa, Taeng om (ChiangMai), Taeng raan (Northern Thailand). Pipino is an annual, rather coarse, fleshy, prostrate or climbing vine. Leaves are ovate, 8 to 14 centimeters long, 5-angled or 5-lobed, the lobes or angles being pointed, and hispid on both surfaces. Flowers are axillary, solitary, or fascicled, stalkless or short-stalked, and bell-shaped. Male and female flowers are similar in color and size, yellow, and about 2 centimeters long. Fruit is usually cylindric, 10 to 20 centimeters long, smooth, yellow when mature, and slightly tuberculated. A variety is smaller and greenish. Seeds are numerous, oblong, compressed, and smooth. - Cultivated in the Philippines. - Planted in all warm countries. - Phytochemical screening yielded alkaloids, glycosides, steroids, saponin, flavonoid, and tannin. - Fruit contains dextrose (0.11 to 0.98%); saccharose (0.05 to 0.13%); fixed oil (0.11-0.98%). - Seed contains fixed oil (Gurken oil) 25% consisting of oleic acid (58%), linoleic acid (3.7%), palmitic acid (6.8%), stearic acid (3.7%); phytine; and lecithine. - Aerial parts contain a 14α-methyl D-phytosterol. - Pulp yields shikimate dehydrogenase. 
- Leaves contain urea and an alkaloid, hypoxanthine. - Study yielded two new megastigmanes from the leaves of C sativus - cucumegastigmanes I and II with other known compounds. - Seeds are anthelmintic; also, cooling, diuretic, and strengthening. - Active ingredient of the essential oil is considered aphrodisiac in nature. - Shikimate dehydrogenase from the pulp is considered a facial skin softener; also cooling and a natural sunscreen. Edibility / Nutritional - Raw fruit, peeled, sliced thin, and served with vinegar, sugar, salt, pepper and calamansi, makes a good vegetable side dish. - Common salad ingredient; also boiled in stew dishes. - Seed kernel is edible. - A variety is used for making pickles. - In Malaya, young leaves are eaten raw or steamed. - Good source of calcium and iron, vitamins B and C. - Juice of leaves used as an emetic in acute indigestion in children. - Ripe, raw cucumbers said to be good for sprue. - Bruised root applied to swelling from the wound of hedgehog quill. - Raw cucumbers used for dysentery. - Cucumber salve used for scalds and burns. - Seeds used as taeniacide (1 - 2 oz of seed thoroughly ground, with sugar, taken fasting, followed in 1-2 hours with a purge). Also used as an emetic with water. - In Indo-China, immature fruit given to children for dysentery. - In India, used as diuretic and for throat infections. Pulp considered healing and soothing, used to keep facial skin soft; is toning and soothing on damaged skin and provides a natural sunscreen. - In Bangladesh, fruit used with cumin seeds for throat infections. - Cosmetic: Fruit is excellent for rubbing over the skin for softness and whiteness. - Cooling, healing, and soothing to the skin irritated by the sun or raw from effects of eruptions. - Used in the manufacture of cucumber soap. - Cucumber scent is one of a few scents linked to female sexual arousal. 
• Phytochemicals / C-Glycosides: Study yielded the following C-glycosides from the leaves: isovitexin 2″-O-glucoside, isovitexin, isoorientin, 4′-X-O-diglucosides of isovitexin and swertiajaponin. Flowers yielded kaempferol 3-O-rhamnoside and 3-O-glycosides of kaempferol, quercetin, and isorhamnetin. • Hypoglycemic / Anti-Diabetes: In Mexico, cucumber is one of the edible plants with hypoglycemic activity. (2) • Antihyperglycemic: Antihyperglycemic effect of 12 edible plants was studied in healthy rabbits. Cucumis sativus significantly decreased the area under the glucose tolerance curve and the hyperglycemic peak. Study suggests the integration of a diet that includes edible plants with hypoglycemic activity. (6) • Anthelmintic: Ethanolic extract of C sativus exhibited a potent activity against tapeworms comparable to the effect of piperazine citrate. (3) • Skin Whitening / Melanin Inhibition: Six plant parts of C sativus were studied for their inhibitory effect on melanogenesis. Leaves and stems showed inhibition of melanin production. Of 8 compounds isolated, lutein was a potential skin-whitening component. (4) • Hepatoprotective / Antioxidant: Studies have isolated isovitexin and isoorientin, two C-glycosylflavones. Isoorientin has exhibited hepatoprotective effect and isovitexin, an antioxidant effect. (7) • Cytotoxicity / Antifungal: Various extracts of leaves and stems were evaluated for cytotoxicity and antifungal activities. Chloroform extract showed lethality against brine shrimp nauplii. Ethanol and chloroform extracts showed moderate antifungal activity against all tested organisms. Aspergillus niger was most sensitive to the ethanol extract. (11) • Antacid / Carminative: Study evaluated the carminative and antacid properties of C. sativus fruit pulp aqueous extract. Results showed the extract significantly neutralized acid, showed resistance against pH changes, and also showed good carminative potential. (12) • Antidiabetic: Study of C. 
sativus seed extracts in STZ-induced diabetic rats showed no initial phase effects but showed blood glucose lowering and weight loss after 9 days of continued daily therapy. (13) • Hepatoprotective: Study showed an aqueous extract of Cucumis sativus possessed hepatoprotective and antioxidant activity against CHP (cumene hydroperoxide) induced cytotoxicity and ROS (reactive oxygen species) formation. (14) • Delayed Cataractogenesis: Study in Sprague-Dawley rats investigated the anti-cataract properties of Cucumis sativus and Cucurbita pepo prior to induction of cataracts using galactose. Both C. sativus and C. pepo significantly delayed cataract formation. Results suggest regular low doses may be effective in delaying cataractogenesis. (15) • Cosmetic Ingredients: Study evaluated the safety of six ingredients from various extracts of Cucumis sativus (fruit, juice, seed) used in cosmetics as skin conditioning agents. The extracts were found safe in present practices of use and concentration. (16) • Phytochemicals / Antimicrobial: Proximate analysis showed cucumber to be high in all nutritional content, with considerable amounts of proteins, carbohydrates, calcium, iron, phosphorus, vitamin C and crude fibers. Antimicrobial activity of aqueous extract of cucumber with and without peel against Salmonella typhi showed an MIC of 100%. (17) • Antacid and Carminative Properties / Fruit Pulp: Study of C. sativus fruit pulp aqueous extract showed significant carminative properties and antacid effect comparable to that of standard NaHCO3. (18) • Amelioration of Ulcerative Colitis: Study evaluated the effect of an aqueous extract of fruit of Cucumis sativus in acetic acid induced colitis in wistar rats. Results showed potent therapeutic value in the amelioration of experimental colitis in the animal model by inhibition of the inflammatory mediator. (19) • Antiulcer Effect / Fruit Pulp: Study evaluated the gastroprotective potential of C. 
sativus fruit pulp aqueous extract in gastric ulcerated rats. Results showed gastroprotective properties with significant increase in pH, decrease in gastric juice volume, free and total acidity, and lipid peroxide levels. Polyphenols and flavonoids may be responsible for the gastroprotective effect. (20) • Anti-Inflammatory / Seeds: Study evaluated the anti-inflammatory activity of C. sativus seed in Carrageenan paw edema model and xylene induced ear edema model using albino wistar rats. Results showed significant anti-inflammatory activity, with inhibition of carrageenan induced paw edema comparable to that produced by indomethacin. (21) • Antimicrobial / Cytotoxic / Leaves: Study investigated various extracts of leaves for antimicrobial and cytotoxic activity. The ethyl acetate, chloroform, and n-hexane extracts exhibited almost the same antimicrobial activity against most of the bacterial test strains, with moderate to good antifungal activity. Cytotoxic potentiality showed significant activity against A. salina. (22) • Antifungal / Cytotoxicity / Reducing Power: Study of ethanol extracts of peels yielded the presence of alkaloids, glycosides, saponins, flavonoids, steroids and tannins. The extracts showed significant reducing power, antifungal activity, and cytotoxicity in the brine shrimp lethality assay. (23) • Antidiarrheal / Leaves: Study investigated the antidiarrheal activity of crude methanol extracts of leaves. Results showed significant dose-dependent inhibitory activity against castor oil induced diarrhea, with a significant reduction in gastrointestinal motility in charcoal meal test in mice. Effect was probably through an antisecretory mechanism. (24) Small or large scale commercial production. Last Updated June 2014 Photos © Godofredo Stuart / StuartXchange OTHER IMAGE SOURCE: / File:114 Cucumis sativus L.jpg / ATLAS DES PLANTES DE FRANCE / 1891 / A. 
Mascief / Public Domain / Wikipedia Additional Sources and Suggested Readings Demonstration of Activity of β-Galactosidase Secreted by Cucumis sativus L. Cells / J Stano et al / Acta Biotechnologica / Volume 21 Issue 1, Pages 83-87 / DOI 10.1002/1521-3846(200102)21:1<83::AID-ABIO83>3.0.CO;2-7 Studies on Hypoglycemic Activity of Mexican Medicinal Plants / Proc. West. Pharmacol. Soc. 45: 118-124 (2002) The Anthelmintic Activity of Some Iraqi Plants of the Cucurbitaceae / Pharmaceutical Biology / 1987, Vol. 25, No. 3, Pages 153-157 Inhibitory Effect of Cucumis sativus on Melanin Production in Melanoma B16 Cells by Downregulation of Tyrosinase Expression / Planta Med 2008; 74: 1785-1788 / DOI: 10.1055/s-0028-1088338 Preparative separation of isovitexin and isoorientin from Patrinia villosa Juss by high-speed counter-current chromatography / Journal of Chromatography A, 1074 (2005) 111–115 Anti-hyperglycemic effect of some edible plants / R Roman-Ramos et al / Journal of Ethnopharmacology Volume 48, Issue 1, 11 August 1995, Pages 25-32 / doi:10.1016/0378-8741(95)01279-M Flavonoids from some species of the genus Cucumis / Mirosława Krauze-Baranowska and Wojciech Cisowski / Biochemical Systematics and Ecology, Volume 29, Issue 3, March 2001, Pages 321-324 / doi:10.1016/S0305-1978(00)00053-3 Two New Megastigmanes from the Leaves of Cucumis sativus / Hisahiro Kai, Masaki Baba, and Toru Okuyama / CHEMICAL & PHARMACEUTICAL BULLETIN, Vol. 55 (2007), No. 1, 133 Cucumis sativus L / Catalogue of Life, China Sorting Cucumis names / Maintained by: Michel H. Porcher, / MULTILINGUAL MULTISCRIPT PLANT NAME DATABASE Cytotoxicity and Antifungal Activities of Ethanolic and Chloroform Extracts of Cucumis sativus Linn (Cucurbitaceae) Leaves and Stems / Joysree Das, Anusua Chowdhury, Subrata Kumar Biswas, Utpal Kumar Karmakar, Syeda Ridita Sharif, Sheikh Zahir Raihan and Md Abdul Muhit / Research Journal of Phytochemistry, 6: 25-30. 
/ DOI: 10.3923/rjphyto.2012.25.30 Evaluation of antacid and carminative properties of Cucumis sativus under simulated conditions / Swapnil Sharma, Jaya Dwivedi and Sarvesh Paliwal / Der Pharmacia Lettre, 2012, 4 (1):234-239 Effect of Hydroalcoholic and Buthanolic Extract of Cucumis sativus Seeds on Blood Glucose Level of Normal and Streptozotocin-Induced Diabetic Rats / Mohsen Minaiyan, Behzad Zolfaghari, Amin Kamal / Iranian Journal of Basic Medical Sciences Vol. 14, No. 5, Sep-Oct 2011, 436-442 Hepatoprotective activity of Cucumis sativus against cumene hydroperoxide induced-oxidative stress / H. Heidari, M. Kamalinejad, M.R. Eskandari / Research in Pharmaceutical Sciences, 2012;7(5) The effect of Cucumis sativus L. and Cucurbita pepo L. (Cucurbitaceae) aqueous preparations on galactose-induced cataract in Sprague-Dawley rats / Clement Afari, George Asumeng Koffuer, Precious Duah / International Research Journal of Pharmacy and Pharmacology, Vol 2(78) pp 174-180, July 2012 Cucumis Sativus (Cucumber) -Derived Ingredients as Used in Cosmetics: Tentative Safety Assessment / March 16, 2012 / © Cosmetic Ingredient Review / email@example.com Biochemical, Anti-Microbial and Organoleptic Studies of Cucumber (Cucumis Sativus) / Jyoti D. Vora, Lakshmi Rane, Swetha Ashok Kumar / International Journal of Science and Research (IJSR), Vol 3, Issue 3, March 2014 Evaluation of antacid and carminative properties of Cucumis sativus under simulated conditions / *Swapnil Sharma, Jaya Dwivedi and Sarvesh Paliwal / Scholars Research Library Der Pharmacia Lettre, 2012, 4 (1):234-239 Effect of aqueous extract of Cucumis sativus Linn. 
fruit in ulcerative colitis in laboratory animals / Mithun Vishwanath K Patil*, Amit D Kandhare, Sucheta D Bhise / Asian Pacific Journal of Tropical Biomedicine (2012) S962-S969 Cytoprotection mediated antiulcer effect of aqueous fruit pulp extract of Cucumis sativus / Swapnil Sharma, Jaya Dwivedi, Meenakshi Agrawal and Sarvesh Paliwal / Asian Pacific Journal of Tropical Disease (2012) S61-S67 ANTI-INFLAMMATORY ACTIVITY OF CUCUMIS SATIVUS SEED IN CARRAGEENAN AND XYLENE INDUCED EDEMA MODEL USING ALBINO WISTAR RATS / Vetriselvan S*, Subasini U, Velmurugan C, Muthuramu T, Shankar Jothi, Revathy / International Journal of Biopharmaceutics. 2013; 4(1): 34-37. ANTIMICROBIAL AND CYTOTOXIC ACTIVITY OF ETHYL ACETATE, CHLOROFORM AND N-HEXANE EXTRACTS OF CUCUMIS SATIVUS LEAVES. / Fatema Nasrin et al, / The Experiment, 2014, Vol. 21(3), 1480-1486 Phytochemical Screening and In-vitro Evaluation of Reducing Power, Cytotoxicity and Anti-Fungal Activities of Ethanol Extracts of Cucumis sativus / Jony Mallik*, Roksana Akhter / International Journal of Pharmaceutical & Biological Archives 2012; 3(3):555-560 Antidiarrhoeal activity of Cucumis sativus leaves. / Fatema Nasrin*, Laizuman Nahar / IJPDA, Vol: 2 Issue: 2 Page: 106-110
Math Games I for Special Needs and Struggling Learners Math Games I is the perfect introductory course for special needs students or struggling learners! This course will strengthen each student's understanding of numbers, problem-solving ability, and math fluency. Students will develop problem-solving strategies and critical thinking skills despite learning disabilities! Students who will benefit the most from this course may exhibit the following: - need to master basic number sense skills - do not understand problem solving steps - do not remember problem solving steps - struggle with math facts Level 1 will cover Basic Operations such as number sense, quantities, Addition and Subtraction Math Facts, Place Value through 100 Billion, and Adding and Subtracting large numbers. Why are the class sizes limited to 5 students for our Special Needs/Struggling Learners classes? - Classes are kept small to facilitate a true cooperative learning environment with our students’ needs kept foremost in mind. - Classes are customized according to the needs of the students. - Activities progress according to the progress and needs of the students in each class. - Small class sizes allow executive functioning skills to be developed, including problem-solving and meta-cognition. - Small classes provide an anxiety-free learning environment, allowing students to focus on skill-building success! Educational Therapy or Tutoring for Special Needs and Struggling Learners? Tutoring guides special needs students through a text or curriculum. Our classes, based on Educational Therapy, go beyond that. We are concerned with students grasping the concepts in this course as well as being equipped to handle the next academic challenge. We use questions to guide students through the learning process, helping them understand and correct mistakes. Students will grasp math concepts as well as understand how to apply these concepts to other areas of their lives. 
Math Games classes are taught by Special Needs instructor and advocate Amy Vickrey. Amy has over a decade of teaching experience, is a SPED Homeschool Consultant, and the Director of True North Homeschool Academy’s Special Needs/Struggling Learners Program. Check out our Catalog for other courses, clubs, Academic Advising and more!
Over the weekend, numerous Lowell area residents called 911 to report seeing a black bear roaming their neighborhood. The bear was not aggressive, but residents are cautioned to give it space. The Washington Department of Fish and Wildlife has been contacted, and they say bears tend to avoid humans but may become aggressive while searching for food. Bears are opportunistic and eat trash, bird seed and pet food. Bears expend a great amount of energy digging under, breaking down or crawling over barriers to get food. The best way to protect your family is to stay away from bears, don’t feed them, feed your pets indoors, and manage your garbage by keeping trash in cans with the lid tightly closed and secure. Washington Department of Fish and Wildlife gives the following advice if you come in close contact with a bear: - Stop, remain calm, and assess the situation. If the bear seems unaware of you, move away quietly when it’s not looking in your direction. Continue to observe the animal as you retreat, watching for changes in its behavior. - If a bear walks toward you, identify yourself as a human by standing up, waving your hands above your head, and talking to the bear in a low voice. - Don’t throw anything at the bear. The bear could interpret that as a threat or a challenge. - If you cannot safely move away from the bear or the bear continues toward you, scare it away by clapping your hands, stomping your feet, yelling, and staring the animal in the eyes. If you are in a group, stand shoulder-to-shoulder and raise and wave your arms to appear intimidating. The more it persists, the more aggressive your response should be. If you have bear spray, use it. - Do not run from the bear. Bears can run up to 35 mph and running may trigger an attack. Climbing a tree is also not recommended. Tell neighbors about this community concern and call 911 if the bear becomes aggressive.
Cooperative commitment brings continued conservation What Has ANR Done? Several years ago, the Ormond Beach wetlands area became an approved Master Gardener project site. Decades of industrial waste were removed by the ton with the help of community groups and the City of Oxnard. Much research and work went into creating seed banks of native plants to restore coastal wetland vegetation. In 2009 and 2010, UCCE Sea Grant advisor Monique Myers led the RESTOR project, a grant-funded wetlands/ecological restoration program linking teachers and youth with science education and community service opportunities at Ormond Beach. More than 1,000 middle-school-aged youth participated. In 2011, 4-H All Stars designated the Ormond Beach wetlands as the location for their service project. They created a walking trail complete with 10 exercise points. Exercises feature yoga poses and other meditative exercises in tune with the natural setting. The All Stars will also be identifying native plants and birds and include this information along the walking path. The youth are working with community groups, government agencies, and local businesses to make their dream a reality. The trail opened in April 2012. Strengthening communities one step at a time A walking trail with exercise stations throughout a nature preserve will provide many positive benefits for families and the community. Increased physical activity can help to reduce obesity, and time spent together can strengthen families. Continual renewal and restoration of a local environmental treasure brings awareness, appreciation and pride to neighboring communities. The youth leading this ambitious project will gain leadership and life skills to last a lifetime. Supporting Unit: Ventura County. Rose Hayden-Smith
The world's largest discharge of municipal sewage sludge to surface waters of the deep sea has caused measurable changes in the concentration of sludge indicators in sea-floor sediments, in a spatial pattern which agrees with the predictions of a recent sludge deposition model. Silver, linear alkylbenzenes, coprostanol, and spores of the bacterium Clostridium perfringens, in bottom sediments and in near-bottom suspended sediment, provide evidence for rapid settling of a portion of discharged solids, accumulation on the sea floor, and biological mixing beneath the water-sediment interface. Biological effects include an increase in 1989 of two species of benthic polychaete worm not abundant at the dump site before sludge dumping began in 1986. These changes in benthic ecology are attributed to the increased deposition of utilizable food in the form of sludge-derived organic matter.
While reading this paper describing biological control of movement as a servomechanism, I came across a good explanation of the formation and value of muscle memory (even though the paper did not state it in exactly those terms). The gist of the paper is that accurate biological movement would require both desired position AND velocity signals from the central nervous system (CNS), forming a proportional-plus-derivative control loop from the musculoskeletal system and the reflex feedbacks. There are various ways in which the CNS can generate the positional and velocity signals needed for a certain movement. There’s evidence that the cortex contains representations of the velocity profile of a movement (and would thus require an integrating circuit for position). The converse is also a possibility. However, my favorite theory is one that suggests the basis for muscle memory: A strategy based on learning could also be imagined. The CNS might precompute only the positions for the trajectory of a novel movement. The movement then could be executed via one of the classical equilibrium-point models using a relatively high level of stiffness to ensure the fidelity of the movement. The CNS could then “remember” the signals generated by the velocity signals during these learning trials. For subsequent trials at low stiffness, the CNS would utilize this memory as the required velocity reference signal. A difficulty with any memory-based control scheme is that of initializing the memory for novel movements. The learning scheme based on our control model explicitly addresses this problem. By performing novel movements initially at high stiffness, the sensory organs themselves produce the exact pattern of activation required as a velocity reference signal for subsequent low-gain movements. This is a pretty old paper (from 1993), so I’m not sure how much this theory has been advanced since. 
Nevertheless, the same principle can be applied to a number of physical and abstract systems.
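To make the proportional-plus-derivative idea concrete, here's a minimal simulation I sketched up. This is not the paper's model: the unit "mass", the gains kp and kd, the Euler integration, and the zero velocity reference are all illustrative assumptions chosen for the demo.

```python
# Minimal PD servo sketch (illustrative assumptions, not the paper's model):
# a unit point mass driven toward a target position with a
# proportional-plus-derivative control law, x'' = kp*(x_ref - x) + kd*(v_ref - v).

def simulate_pd(target, kp=40.0, kd=12.0, dt=0.001, steps=5000):
    """Integrate the closed loop with simple Euler steps; return final position."""
    x, v = 0.0, 0.0  # start at rest at the origin
    for _ in range(steps):
        # The controller uses both a desired position (target) and a
        # desired velocity (here 0, i.e. "come to rest at the target").
        a = kp * (target - x) + kd * (0.0 - v)
        v += a * dt
        x += v * dt
    return x

final = simulate_pd(1.0)
print(round(final, 3))
```

With these gains the loop is close to critically damped, so the simulated "limb" settles at the target without large oscillation; loosely, larger gains play the role of the high-stiffness learning trials the quoted passage describes.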
As you might guess, the whaleboat was a famous boat used to catch whales; it was created around 1620 for that purpose. Its structure can be described as follows: it is a very narrow boat from one end to the other, and these boats were propelled by oars. Nowadays this structure has not changed much; new whaleboats have been equipped with a centerboard to make navigation easier. Its unique design gives it easy maneuverability and makes it very efficient in shallow water. This boat is able to move forwards or backwards without any problem. These boats have participated in several wars and conflicts; the most remembered are the American War of Independence and the French and Indian War, among others, around 1770. Whaleboats were used to transport equipment and crew. Many people think of the whaleboat as a fishing boat; nowadays these boats are used for simple tasks near the coast, such as rescuing people, fishing and relaxing. Many people enjoy spending time in these boats, which are also used for work near beaches and in the surf zone. As we mentioned, the structure of this boat is very simple; modern ones are very small, with a double-ended design that permits easy transport, and many of them have a detachable sailing rig that is used when the weather conditions are right. In museums there are several of the first models; their structure was very different in contrast to the new models, which are more sophisticated.
Dropbox is a tool you can use to save your files through the internet! It allows you to share your files with your colleagues as well. Here are some ways you can use this as a teacher: - Send the new files you compiled digitally instead of walking over with a flash drive to share them. - Have parents send you photos of family in different places (like a Flat Stanley project, maybe!) - You could even send parents information/files through it as long as they have an account! - Back up all your teacher files (and freebies!) so that if your computer dies, they will still be accessible. The sky is the limit! There are so many ways to use Dropbox, so I'm going to go ahead and introduce it to you through this YouTube video I recorded last night. I think I'm coming down with a head cold, so please don't judge me too harshly! :) I wanted to get this video out because I feel that it is so useful for you as a teacher! Here it is! I hope that you take the time to fill out this quick form to help me out! You don't have to put in your email address. You can just click this link here --> Get Your Dropbox Account! How would you use Dropbox? Make sure you drop by my blog to see all my old technology posts as well. Oh, and click the follow button here on Technology Tailgate! You won't regret it! This is the new home of all things TECHNOLOGY in the CLASSROOM! :)
Playground operators, owners and local communities can take the lead in preventing playground injuries. The Canadian Standards Association (CSA) publishes voluntary playground standards (CAN/CSA Z614 “Children’s Playspaces and Equipment”) for outdoor public play spaces. This is considered the “gold standard” for public playgrounds. To make your playgrounds safer: - Ensure that equipment and play spaces comply with the CSA standard by having them inspected by someone who is certified and experienced in playground inspections using the CSA standard. - Inspect and maintain playground equipment on a regular basis. Look for new hazards, such as worn surfacing or broken equipment. - Report any injuries that occur on public play equipment to Health Canada. The Hazardous Products Act requires that all injuries related to consumer products be reported. Reduce the height of play equipment Surfacing and fall height are the two main factors that determine how seriously a child is injured in a fall. To protect children from falling: - Select new equipment that reduces overall fall height. - Avoid equipment where a child could fall from an open elevated platform. - Look for equipment with high protective barriers, and play structures that discourage climbing (e.g. onto the roof or up the outer structure) and/or have fully enclosed spaces on the highest elevated platforms. Install and maintain adequate protective surfacing Appropriate surfacing can decrease the risk of a serious injury at the playground. To make your playground safer: - Install and maintain surfacing according to the CSA standard (CAN/CSA Z614 “Children’s Playspaces and Equipment”). The standard can be purchased at www.csa.ca and includes detailed information on types of surfacing and how to test for impact absorption using a tri-axial accelerometer (triax). - Have your playground surface tested. 
The City of Winnipeg operates a triax loan program for playground maintenance workers who have completed triax use training. For more information on this program, contact Jason Bell at email@example.com. Promote training for inspectors, operators and supervisors The Canadian Parks and Recreation Association’s Canadian Playground Safety Institute offers several online and in-classroom courses on playground safety. Courses are based on the CAN/CSA Children’s Playspaces and Equipment Standards. Courses include: - Theory (Certification – part 1 of 2) - Practical (Certification – part 2 of 2) - Managing Safe Playspaces (non-certification course) - Accessibility (non-certification course) - Playground Inspector re-certification Use a checklist to inspect your playground regularly There are many checklists available to help you inspect your playground. The following local documents have information on safe playgrounds as well as checklists: - The Manitoba School Boards Association publishes Risk Management at a Glance: Forms which has a Monthly Playground Maintenance Inspection Report, a Play Space Inspection Report and a Weekly Playground Inspection Checklist. - The Manitoba Childcare Program provides information on how to maintain safe indoor and outdoor play spaces in Developing Enhanced Safety Plans and Codes of Conduct: A Guide to Safety Charter Requirements for Child Care Centres. This document has sample daily, monthly and yearly checklists (see Section K). Consider natural alternatives Natural play spaces are an increasingly popular choice for playground designers and communities. Talk to a certified inspector or landscape architect to ensure your ideas comply with the CSA standard and meet relevant provincial guidelines, such as those established for schools and child care facilities. - Read other resources to learn more about natural play ideas for communities, families and child care centres. 
- Innovative Playgrounds provides case studies and design matrices for creating innovative playgrounds. - Children and Nature has toolkits for families and research summaries of the many health benefits related to play and learning in nature. - Green Hour recommends that parents give their kids a “Green Hour” every day, in a garden, backyard or neighbourhood park. - Green Hearts Institute for Nature in Childhood features a Parents Guide to Nature Play with ideas for parents and child care centres. - Encourage the use of local parks, paths and trails by creating activity kits that families can borrow from a community centre, library, school or child care facility. - Outdoor play kits can include index cards with simple and fun activities and games for families to do together. They can provide basic outdoor equipment such as soft balls of varying sizes, skipping ropes, small plastic pylons and Frisbees for children to play with. - Nature kits can include nature checklists, scavenger hunts, or I Spy Nature ideas, and provide a bag or basket with a plastic bucket and shovels, nets, bug containers and a plastic magnifying glass. - Host a Play Day in your local park. Educate families and community partners about playground safety Many serious injuries at playgrounds can be prevented by adult supervision and smart playground choices. You can be a community playground advocate by: - Educating parents by distributing the Kids Don’t Bounce family action guide - Educating day cares, community centres, schools and other groups who are responsible for playgrounds by distributing the Kids Don’t Bounce community action guide. - Using the following key messages in your media communications and newsletters: - Supervise young children - Select age-appropriate equipment - Check for soft surfacing - Teach your children playground rules - Report safety concerns - Consider natural alternatives
To get a better visualization and comparison, we may need to extract specific data based on certain criteria. In this article, we will show you how to extract data based on a drop-down list selection in Excel. How to Extract Data Based on a Drop Down List Selection in Excel: Easy Steps In the image below, a sample data set is provided to accomplish the tutorial and show how to extract data from the drop-down list. We will use Data Validation to make a drop-down list. Later on, we will use the FILTER function to filter the extracted data. Step 1: Create a Table to Extract Data Based on a Drop Down List Selection in Excel - Select the data range. - Click on the Insert tab and choose Table. - On the Table Design tab, give the table a name (Sales). Step 2: Extract the Unique Data Based on a Drop Down List Selection in Excel - To make a list of the unique values in the Branch column, apply the UNIQUE function. - Therefore, you will get the unique values for the Branch. Step 3: Insert a Data Validation List to Find Data Based on a Drop Down List Selection in Excel - To create a Data Validation list, click on Data. - Then, click on Data Validation. - Select List from the Allow box. - In the Source box, select the unique list. - Finally, press Enter. - As a result, you will see that the Data Validation drop-down list is created. Step 4: Apply the FILTER Function to Extract Data Based on a Drop Down List Selection in Excel - In the FILTER function, add the table ‘Sales’ as the array argument. - In the include argument, add the Branch criterion. Use the following formula. =FILTER(Sales,Sales[Branch]=H4) - H4 is the cell of the drop-down selection box. - In the ‘if empty’ argument, type “Nothing Found”. =FILTER(Sales,Sales[Branch]=H4,"Nothing Found") - Now, select any option (Texas) to extract all the related values. - Therefore, you will find all the values regarding ‘Texas’. Note: The FILTER function is only available in Microsoft 365. 
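To see what the single-criterion FILTER formula is doing, here is a rough Python sketch of the same logic. This is not part of the article's workbook: the sample rows and the "Texas"/"Nevada" selections are made-up illustration data standing in for the Sales table and cell H4.

```python
# Made-up stand-in for the Excel table 'Sales'.
sales = [
    {"Branch": "Texas", "Products": "TV", "Amount": 1200},
    {"Branch": "Ohio", "Products": "TV", "Amount": 900},
    {"Branch": "Texas", "Products": "PC", "Amount": 1500},
]

def excel_filter(table, column, value, if_empty="Nothing Found"):
    """Mimics =FILTER(Sales, Sales[Branch]=H4, "Nothing Found")."""
    # Keep only the rows where the chosen column equals the drop-down value.
    rows = [row for row in table if row[column] == value]
    # Like FILTER's third argument, return the 'if empty' text when no row matches.
    return rows if rows else if_empty

print(excel_filter(sales, "Branch", "Texas"))   # the two Texas rows
print(excel_filter(sales, "Branch", "Nevada"))  # -> "Nothing Found"
```

Changing the `value` argument plays the role of picking a different option from the drop-down list.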
Step 5: Insert Another Criterion to Extract Data Based on a Drop Down List Selection - To insert another criterion, make a unique list from another column (Products). Type the formula in a cell. - Therefore, another unique list will be created for the ‘Products‘ column. - Make another Data Validation drop-down list by selecting the cell values. - Then, press Enter. Step 6: For Multiple Criteria Extract Data Based on a Drop Down Selection List in Excel - After creating another drop-down list, it will look like the image below. - Type the following formula to apply both criteria. =FILTER(Sales,(Sales[Branch] = H4)*(Sales[Products]=H6),"Nothing Found") - Select any two options from the two drop-down lists. - As a result, you will get the rows that satisfy both criteria. Download Practice Workbook Download this practice workbook to exercise while you are reading this article. Finally, I hope you now know how to extract data based on a drop-down list selection in Excel. Practice these techniques on your own data. Look over the practice book and put what you’ve learned to use. Because of your generous support, we are motivated to continue delivering initiatives like these. Please do not hesitate to contact us if you have any questions. Please let us know what you think in the comments area below. Stay with us and keep learning. - Conditional Drop Down List in Excel - How to Use IF Statement to Create Drop-Down List in Excel - How to Create Dynamic Dependent Drop Down List in Excel - Excel Dependent Drop Down List - How to Make Dependent Drop Down List with Spaces in Excel - Excel Formula Based on Drop-Down List - How to Populate List Based on Cell Value in Excel - How to Change Drop Down List Based on Cell Value in Excel
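The two-criteria formula in Step 6 multiplies two boolean arrays, which in Excel acts as a logical AND. A rough Python sketch of that trick, again using made-up sample rows in place of the Sales table and H4/H6 selections:

```python
# Made-up stand-in for the Excel table 'Sales'.
sales = [
    {"Branch": "Texas", "Products": "TV"},
    {"Branch": "Texas", "Products": "PC"},
    {"Branch": "Ohio", "Products": "TV"},
]

def excel_filter_two(table, branch, product, if_empty="Nothing Found"):
    """Mimics =FILTER(Sales,(Sales[Branch]=H4)*(Sales[Products]=H6),"Nothing Found")."""
    # Multiplying booleans gives 1 only when both are True,
    # which is exactly how the Excel formula ANDs its two conditions.
    rows = [r for r in table
            if (r["Branch"] == branch) * (r["Products"] == product)]
    return rows if rows else if_empty

print(excel_filter_two(sales, "Texas", "TV"))  # the one row matching both
print(excel_filter_two(sales, "Ohio", "PC"))   # -> "Nothing Found"
```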
Capital Budgeting

1.0 INTRODUCTION Capital budgeting plays an important role in a firm’s financial management. The selection of a project is of great importance because it requires a very large capital expenditure which will have a significant impact on the financial performance of the firm. Therefore a mistake in the capital budgeting process will cost a firm over a long period of time. Capital budgeting can be defined as a designed process which involves management of available resources to select long-term investments that will generate a high return on the investment of those resources, Brealey, R. A. et al (2006). Companies are in business with the main aim of making profit; therefore, it is vital for companies to know how to evaluate their expenditure. It is very important for a company to know the present value of the future investment and the time period it will take to mature before investing in a project. Examples of investment decisions are the purchase of new equipment or the acquisition of an industrial building.

2.0 ANALYSIS AND DECISION MAKING OF COVERED INTEREST ARBITRAGE This can be described as an investment strategy in which an investor buys a financial instrument denominated in a foreign currency and also sells a forward contract in his base currency in order to hedge his foreign exchange risk, Bodie, Z. and Kane, A. (2007). Based on covered interest arbitrage, I agree that there will be no difference if HW Technologies raises the capital needed for the joint venture in the USA or Malaysia, because the interest rate risk and the fluctuation of currency are hedged. In other words, whether the money is raised in the USA or Malaysia, covered interest arbitrage will provide a hedge.

3.0 PREPARATION OF CASH FLOW, TABULATION SHOWING INFLATION OF MALAYSIA AND USA

CASH FLOW BEFORE TAX
| YEAR | CASH FLOW | TOTAL PRICE | CASH FLOW |
| 1 | 50000 | 25 | 1250000 |
| 2 | 51500 | 25 | ... |
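The indifference argument in section 2.0 rests on covered interest rate parity. A minimal sketch of that relationship is below; the spot rate and both interest rates are assumed illustrative numbers, not figures from the HW Technologies case.

```python
def forward_rate(spot_myr_per_usd, r_usd, r_myr):
    """No-arbitrage forward rate: F = S * (1 + r_myr) / (1 + r_usd)."""
    return spot_myr_per_usd * (1 + r_myr) / (1 + r_usd)

spot = 4.70                # assumed spot rate, MYR per USD
r_usd, r_myr = 0.05, 0.03  # assumed one-year interest rates

f = forward_rate(spot, r_usd, r_myr)
# Hedged round trip: 1 USD -> MYR at the spot rate, invested at r_myr,
# converted back to USD at the forward rate f agreed today.
hedged_usd = spot * (1 + r_myr) / f
print(hedged_usd)  # equals 1 + r_usd: the hedge removes the currency risk
```

Because the forward contract locks in the conversion rate, the hedged foreign return collapses to the domestic rate, which is why the funding market makes no difference under parity.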
The study by Professor Ausubel states that wind turbine energy produces 1.2 watts per square metre of land. A typical 1 megawatt (1MW) turbine can occupy as little as 10,000m2. Even if it took up 40,000m2, Dr Ausubel's figure would give us an output of 48kW. In fact, even at 10% efficiency our 1MW turbine would give us an average output of 100kW. At a rate of 1MW per 10,000m2, we would need 1,000km2, less than 0.5% of the UK land mass, to generate 100 gigawatts (the approximate UK electric capacity). Even if we multiply this by 10 for loss due to efficiency and excess area requirements, it would only be 5% of our land mass. Professor Ausubel is correct that the energy density of renewables will never compare with fossil fuels or nuclear. There are of course many other obvious considerations that do justify renewables - fortunately. However, the article misses the point that solar panels (and, to an extent, some other renewables) do not require new land to be used up for electricity generating infrastructure. All over the world, solar panels are being integrated into the rooftops of buildings for which the land is already being used. Try doing that with nuclear. Does a wind farm on the offshore shoals of the Thames estuary or Morecambe Bay or the Wash "devour" anything? Do solar power stations in the empty wastes of the Sahara "devour" anything? Do photovoltaic panels on our roofs "devour" anything? It seems hardly accidental that this report was in The International Journal of Nuclear Governance. Radioactive waste is probably not the main problem associated with nuclear power, though I understand that hardly any waste has yet found safe permanent storage. There is simply not enough uranium, insurance costs make nuclear non-commercial, power stations take too long to build and are always associated with nuclear weapons, and there will never be any answer to determined terrorism. 
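The letter's arithmetic can be re-run directly. The power density and turbine areas come from the letter itself; the UK land area used for the percentage is an assumed round figure for illustration.

```python
power_density = 1.2   # W per square metre, Ausubel's figure
area_large = 40_000   # m^2, generous footprint for a 1 MW turbine

# Output implied by Ausubel's density over the generous footprint:
print(power_density * area_large / 1e3)   # -> 48.0 kW

# A 1 MW turbine running at a 10% capacity factor:
print(0.10 * 1_000_000 / 1e3)             # -> 100.0 kW

# Land needed for 100 GW of capacity at 1 MW per 10,000 m^2:
turbines = 100e9 / 1e6                    # 100,000 turbines
land_km2 = turbines * 10_000 / 1e6        # m^2 converted to km^2
uk_land_km2 = 242_000                     # assumed approximate UK land area
print(land_km2)                           # -> 1000.0 km^2
print(round(100 * land_km2 / uk_land_km2, 2))  # -> roughly 0.41 % of UK land
```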
New Milton, Hampshire If all buildings were made energy efficient, much of the need for new and increased generation would be removed. We can all do simple things - like change light bulbs, buy efficient boilers, replace draughty windows and insulate our roofs and walls - to make a real and immediate difference. A reduced VAT rate of 5% on all energy-efficient home improvement products would help. Readers can support this idea at petitions.pm.gov.uk/reduced-VAT. Managing director, Masterframe Windows Repetition of the mantra that nuclear power is carbon-free (Nuclear waste is hardly a worry when the climate change threat is so urgent, July 26) does not make it true. Extracting, processing and transporting uranium; building a nuclear power station; and long-term storage, reprocessing, decommissioning and clean-up - all add up to give electricity from nuclear power a considerable carbon weight. Research suggests it produces around a third as much carbon dioxide as electricity from gas. At current consumption rates, resources of high-quality uranium ore might last for about 45 years. Lesser quality ores will require far more energy to process, potentially releasing more CO2 per unit of electricity than gas. Nottingham Energy Partnership
Chapter 1, June 1861 1. How old is Charley, the main character in the story? 2. In what state does Charley live? 3. What is Charley's last name? 4. In what war does Charley want to fight? (a) Korean War (b) Civil War (c) Vietnam War (d) World War I 5. What is the date when the novel begins? (a) June, 1861 (b) June, 1951 (c) June, 1965 (d) June, 1916 6. What is the name of the town where Charley lives? 7. How much would Charley earn as a soldier? (a) $1500 a month (b) $11 a month (c) $300 a week (d) $10 a week 8. What is the name of Charley's brother? 9. What prevents Charley from enlisting in the army? (a) He is too young (b) He is flat-footed (c) He cannot get his mother's permission (d) He is too small 10. Which of the following is not an appeal for Charley? (a) Getting a musket (b) Getting a uniform (c) Marching in the heat (d) Fighting like a man This section contains 3,347 words (approx. 12 pages at 300 words per page)
Developing our capacity to be resilient, tapping into our inner strength and cultivating courage are essential qualities for maintaining overall balance, managing stress, and being able to deal with difficult emotions and situations. Learning to be resilient can also help to make you stronger; with each challenge you encounter, you can develop new skills and new ways of dealing with life. Resilience and confidence often go together; resilience is the ability to bounce back from the challenges and pressures that life throws at us and maintain a positive outlook. People who are resilient generally have confidence in themselves and their ability to overcome setbacks. An important part of being confident is believing in yourself! How you see yourself is crucial in building confidence and resilience...your self-esteem, self-respect and self-worth. The traits of inner strength become our own toolbox of resources that we can draw on when needed, such as confidence, determination, belief, and motivation. Inner strength can positively assist in maintaining physical, emotional and mental health and wellbeing.
The Inca made up the largest civilization in South America, which dominated what is now Colombia, Ecuador, Peru, Chile and into the mountains of Bolivia and Argentina from about 1300 until the Spanish conquest in 1532. After thousands of years in the Andes, the indigenous peoples adapted by developing one-third greater lung capacity and 10 per cent more blood, making it possible to live and work in higher altitudes. They developed a small stature and stocky build. Many of the porters—who carry trekkers’ backpacks, tents, tables, benches, cooking gear, propane, water and food—are farmers in the Andes. On our tour, they were paid the equivalent of $66 to $70 U.S. for the four-day trek, not including tips of up to $38. They carry a maximum of 30 kilos each, halved from a decade ago. They seem to carry their burdens with little difficulty, wearing rubber sandals and often running as they chew energy-giving coca leaves. Almost 70 and retired, I started enthusiastically. As I tackled slopes of 35 to 45 degrees in the high altitude, I realized I wasn’t in shape to tackle the more challenging second day. On Day 2, I was offered a horse for the 12.5-kilometre return trip, but one horse owner wanted a 25-per-cent surcharge because of my 88-kilo weight; a second told me I would have to dismount on downhill sections because it was “dangerous.” No way. I walked for six hours back to the start of the Inca Trail, accompanied by a porter and a guide. The moral of the story: Anyone over 50 should be in top physical shape before tackling the four-day hike. The one-day trek makes more sense. Also, the high altitude can limit your mobility. Even when taking the train to get to the base, people face challenging walks to get to the site. It is well worth the effort. The scenery is spectacular, the ruins of this once-sacred city a reminder that even powerful and advanced civilizations like the Inca can fall into ruin. 
There is a limit of 500 people on the trail a day, about 300 of whom are porters and guides. At the site, the limit is 2,500 a day. Travellers should do their research and book well in advance. The four-day, three-night option with private tent booked through South America Exotic Travel costs $625 for an adult, $567 for a student.
An excerpt from Vijay Prashad’s The Karma of Brown Folk [link to book]: The lives of migrants to the United States came under special scrutiny from those who fashioned themselves as guardians of its cultural inheritance. Benjamin Franklin, for instance, was struck by the entry of Germans into his “Anglo-Saxon” domain, so much so that he worried that they would “soon so outnumber us that [despite] the advantages we have, we will, in my opinion, not be able to preserve our language, and even our Government will become precarious.” Anything less than total assimilation to the core of “Anglo-Saxon” culture was tantamount to treason. Since “assimilate” means to “make similar,” there is an expectation among some U.S. residents that those who are different may be transformed into those who are similar, or, indeed, identical. There are some who cannot become even similar (let alone identical), so the attempt to assimilate is futile for them. This is indeed the tenor of Thomas Jefferson’s remarks about blacks in Notes on the State of Virginia (1787) and, notably, in a letter Jefferson wrote to James Monroe in 1801: “It is impossible not to look forward to distant times, when our rapid multiplication will expand itself and cover the whole northern, if not the southern continent, with a people speaking the same language, governed in similar forms, and by similar laws; nor can we contemplate with satisfaction either blot or mixture on that surface.” Without “blot and mixture,” the United States was to be a homogenous realm for the free enterprise of the “Anglo-Saxon.” Of course, the United States was never homogenous, given that the early Republic already contained within it Amerindians, blacks, and Catholics—all “blots” on the surface of the white, Protestant Republic…. The problem with U.S. multiculturalism as it stands is that it pretends to be the solution to chauvinism rather than the means for a struggle against white supremacy. 
Whereas assimilation demands that each inhabitant of the United States be transformed into the norm, U.S. multiculturalism asks that each immigrant group preserve its own heritage (as long as it speaks English). The heritage, or “culture,” is not treated as a living set of social relations but as a timeless trait. “As an Asian or African,” an Iranian intellectual complained, “I am supposed to preserve my manners, culture, music, religion, and so forth untouched, like an unearthed relic, so that the gentlemen can find and excavate them, so they can display them in a museum and say, ‘Yes, another example of primitive life.’ ” Desi schoolchildren encounter this “encyclopedic” notion of culture, as an inert set of artifacts that can be saved and preserved, when their teachers ask them to wear “Indian clothes” to school as part of show-and-tell. Consumerism seems to be the main drive for this kind of multiculturalism, with all that is seen as “fun” adopted while all that is deemed to be “fundamentalist” is abjured. The hijab and falafel are welcome, but the “Arab-type” is to be feared. “There is difference and there is power,” June Jordan noted, “and who holds the power shall decide the meaning of difference.”
How to Deal with a Housefly Infestation! Houseflies are one type of fly in the insect order Diptera, all of which have one set of wings. They are also known as “nuisance” flies. House flies are about ¼-inch long and gray. Houseflies are found almost everywhere people live. They lay their eggs on animal feces, garbage, and rotting organic material. A housefly infestation may not be a major concern, but it can also spread diseases. There are many safe and easy ways to prevent infestations or get rid of them when they happen. Are houseflies harmful? In many cases, housefly infestations are often just a nuisance. However, houseflies can also carry viruses and bacteria that can spread when they land on food and surfaces. Diseases house flies might carry include: - food poisoning - E. coli - typhoid fever - eye infections - a tropical infection called yaws How to get rid of houseflies naturally It’s possible, and often even preferable, to get rid of houseflies naturally, without pesticides. Potential methods include: - Herbs and flowers Herbs and flowers can be planted both in your garden and outside to keep flies away from your house. Herbs and flowers that can repel flies include: - bay leaves As a bonus, some can be used for cooking as well. - Vinegar and dish soap A mixture of vinegar and dish soap can help you trap flies. To use this method, mix about an inch of apple cider vinegar and a few drops of dish soap in a tall glass. Cover the glass with plastic wrap. Secure the plastic wrap with a rubber band and poke small holes in the top. Flies will be attracted to the vinegar in the glass and will fly through the holes. However, the dish soap causes the flies to sink instead of being able to land on the vinegar. - Cayenne pepper and water Cayenne pepper can help repel houseflies. Mix it with water and spray around the house to deter flies from coming in. - Venus flytrap Venus flytraps are carnivorous plants that eat insects. If you plant them outside, they’ll naturally eat flies. 
Inside, you might need to feed the plant flies. When a fly gets in the plant’s trap, it closes around the fly. It then secretes digestive fluid to dissolve the insect’s soft insides. It breaks down the insect over 5 to 12 days, then spits out the exoskeleton. - Natural trap bait You can also use foods or drinks to which flies are attracted in order to entice them into traps. These include: - sugar (honey or molasses) - Insecticide and other repellents In addition to natural ways to get rid of houseflies, you can use insecticides and traps to kill or remove the flies. What attracts houseflies to your home? Houseflies are mainly attracted by material in which they can lay their eggs. This includes: - rotting material, including food waste and other garbage - animal feces Bright lights at night can also attract flies. Preventing a housefly infestation The best way to deal with a housefly infestation is to prevent it in the first place. Make sure they don’t have areas to lay eggs and remove things that can attract the flies. - Make sure your windows, doors, and house vents are sealed properly and free from holes or other damage. - Use a garbage can with a tight-fitting lid, and take the bag out as soon as it’s full. - Store food properly in airtight containers. - Don’t leave dirty dishes or glasses out on the counter. - Don’t leave grass clippings or leaves out to decay near your house. - Turn off outdoor lights at night when possible. Flies are attracted to light. - Clean up animal feces, such as in a cat’s litter box, right away. Housefly infestations aren’t just a nuisance. They can also be hazardous to your health. By keeping your house clean — especially free from food waste — you can help prevent a housefly infestation. If an infestation does occur, there are many ways to get rid of them and keep a peaceful, happy home and environment! Yes, you care about your home and environment, and JOPAG HEP-IMRI cares for them as well!
"Nazi" is still a dirty four-letter word in the USA, synonymous with hatred and annihilation of others aroused by their skin color, their religion or their nationality. Adolph Hitler killed millions in gas chambers simply because they were not members of a purported “master race.” As much as we in the United States believe our ideology to be vastly different from Hitler's, we still have much fundamentally in common linguistically with Adolph Hitler's Third Reich. Hitler believed that human beings could be divided into separate "races" that were fundamentally genetically different from each other, not only in superficial terms such as skin color, hair and facial characteristics, but also in terms of other profound and fundamental biological differences. Americans deeply share the Nazi belief in the existence of "race", with over 42 million hits for the word "racial" at Google. One need only read the New York Times or Washington Post to confirm that Americans (as quoted in American newspapers) still believe that people of different skin colors are part of separate human "races". The New York Times printed the following paragraph in one recent article: In dozens of interviews in seven states over the last several days, black men and women like Mr. Sallis said they were feeling more optimistic about race relations than even a year ago. ( . . . ) “I feel a lot more comfortable starting up a conversation with people of other races on the streets now than I did before,” said Mitch Hansch, 29, a white waiter in New York City. ( . . . ) “Since Obama was elected, racial tensions seem a little lower. I think it’s fantastic.” 
Susan Saulny, NYT In spite of the 2007 announcement by the United States Government’s Human Genome Project that there is no biological basis to believe in the existence of separate races, Americans -- Black, white and other -- still believe in separate "races" today with the same conviction that Adolph Hitler did during the Second World War. The American Journal of Color Arousal (AMJCA) was founded to report upon and catalyze the progress of Americans toward the day when proven genetic science will conquer the biological ignorance of Hitlerian belief in the existence of separate "races" within the human species. The AMJCA reports: "DNA studies do not indicate that separate classifiable subspecies (races) exist within modern humans. While different genes for physical traits such as skin and hair color can be identified between individuals, no consistent patterns of genes across the human genome exist to distinguish one race from another. There also is no genetic basis for divisions of human ethnicity. People who have lived in the same geographic region for many generations may have some alleles in common, but no allele will be found in all members of one population and in no members of any other." U.S. Department of Energy Office of Science, Office of Biological and Environmental Research, Human Genome Program In other words, the Human Genome Project has proven, as a matter of scientific fact, that what we call "race" does not exist as a matter of biology, and so all references to "race" are references to a fallacy. AMJCA Although we in America believe ourselves to be the antithesis of Hitlerian ideology, our fundamental beliefs about the nature and meaning of skin color are actually quite similar to Hitler’s beliefs, even though our Government’s expensive and exhaustive genetic science program, mapping the human genome in its entirety, has since proved that Hitler was wrong and so is the New York Times. 
When one compares the New York Times fundamental belief in race to that of the Stormfront white supremacist organization, one has to conclude that their Hitlerian belief in the existence of biological race is one and the same. Another article in the American Journal of Color Arousal points out: The New York Times and the white supremacist group Storm Front share a [Hitlerian] belief that there are significant and fundamental "racial" differences between color and linguistic groups based upon and residing in their genetic heritage. If you consult the Storm Front white supremacist group's website, you will discover that, in spite of the findings to the contrary of the Human Genome Project, Storm Front still believes in the theory of "race" -- that you can discern meaningful and fundamental genetic differences between people with white skin, brown and Black skin by making reference to their skin color, facial characteristics and the languages they speak. One writer at Storm Front, reviewing the white supremacist David Duke's "My Awakening", says, for example: It’s the question to which Carleton Putnam’s Race and Reason led David Duke. “I asked myself,” writes David Duke, “What if the things [Carleton Putnam] writes are true? What if the distinctions, quality and composition of races are the primary factors in the vitality of civilizations (My Awakening, 37) …”It alarmed me to think of the implications of race having a cardinal role in the creation and maintenance of culture and civilization. 
If true, then replacement of the White race through immigration and race-mixing could conceivably destroy Western Civilization itself.” (43) In a paper David Duke wrote early in his career, he summarized Putnam’s thesis: “It is his belief that a civilization is the product of the particular racial group that created it and that demographic replacement of the founding race, through race-mixing, immigration, and differential birthrates, will diminish and ultimately destroy the vitality of the culture and civilization.” (43) ( . . . ) Genetics may enable some people to become great athletes or entertainers or to be shrewd at bargaining and splitting hairs. But it is the very special type of intelligence of our own race that has founded Western Civilization. And it is that special type of intelligence that will someday lead us to the stars. http://www.davidduke.com/race-information-library/racial-differences/intelligence-heredity-vs-environment/the-question-of-civilization_477.html Evidently, David Duke's "awakening" is based on the premise that "races" exist in the first place, and that is a premise that is, every day of the week, supported by the New York Times, even though the premise has been conclusively disproved by 20th and 21st century empirical science. And while many white residents said there were no RACIAL tensions locally except those being sparked by news coverage and claims from out-of-town civil rights groups, Latinos offered a different view. ( . . . ) “I was skeptical of the claim that there were RACIAL tensions in town,” he said, “but then the details started coming out and people started speaking up. I was shocked by what they were saying.” (Emphasis added.) In the context of the article, it is clear that the writer, by the use of the word "racial", is subscribing to the very beliefs about skin color that are fundamental to David Duke's and Storm Front's white supremacist ideation. 
That is, both the New York Times and Storm Front believe that you can look at someone's skin color, national origin and language and thereby distinguish what "race" they belong to. For so long as the New York Times promotes this theory of "race", it will be significantly harder to isolate those at Storm Front who share the same theory. When we analyze the Hitlerian “racialist” language used in our foremost newspapers, in television news, and in discussions between Americans with brown skin and with white skin, the conclusion is inevitable that many or most of us still believe in the Nazi concept of separate "races" just as much today as Adolph Hitler did seventy years ago, during the Second World War and before the scientific discoveries made by the United States Government-sponsored Human Genome Project. Whether we look to the “liberal” New York Times, the radical white supremacist terrorist organization Stormfront, the nightly news or our conversations among Americans of the same skin color or different skin colors, the inevitable conclusion is that most of us believe in separate “races” just as much as Adolph Hitler did. This is no mere semantic discussion, because the belief in "race" has had profound policy repercussions in the past and present. For example, during World War II it is fair to assume that many servicemembers died because they did not receive blood transfusions, based on the biologically erroneous idea that blood from members of the postulated black "race" could not or should not be mixed with the blood of members of the white "race". Dr. Charles Richard Drew (June 3, 1904 - April 1, 1950) was an American medical doctor and surgeon who started the idea of a blood bank and a system for the long-term preservation of blood plasma (he found that plasma kept longer than whole blood). His ideas revolutionized the medical profession and have saved many, many lives. Dr. 
Drew set up and operated the blood plasma bank at the Presbyterian Hospital in New York City, NY. Drew's project was the model for the Red Cross' system of blood banks, of which he became the first director. Drew resigned his position as director after the US War Department issued a directive stating that blood taken from white donors should not be mixed with blood taken from black donors. Dr. Drew strongly objected, and stated "the blood of individual human beings may differ by blood groupings, but there is absolutely no scientific basis to indicate any difference in human blood from race to race." Dr. Drew also formed Britain's blood bank system. Enchanted Learning (Emphasis added.) With so many servicemembers dying for lack of blood transfusions, the cost of the belief in "race" was obviously very high and deadly to those white and brown-skinned servicemembers who were denied blood transfusions because of Nazi-like conviction in the existence of nonexistent "racial differences". It is equally difficult to estimate the cost to Americans in the present of the continued Nazi belief in the existence of separate "races" within the human species. What is certain, based on the persistent and ubiquitous use of the terms "race", "racial" (See Google), "racist" and "races", is that linguistically, politically, ideologically, culturally and economically, the belief in the concept of "race" continues to hold almost as much currency in 21st Century America as it did in Nazi Germany, before the science of genetics proved that "race" does not exist.
NASA CGI image of an exoplanet which could potentially support life

Hot on the heels of Earth 2 - another planet discovered this summer which has the potential to harbour life - Wolf 1061c is the closest planet outside our solar system which could hold alien life. Dubbed Earth 3, it is more than four times the mass of Earth. The large planet is still small enough to be rocky with a solid surface, but a year there lasts just 18 days. It also orbits its red dwarf star within the "Goldilocks zone", meaning its temperature would be just right to hold liquid water, so life could potentially develop within its oceans if it has any. In July NASA held a historic press conference revealing it had found a "second Earth" using the Kepler telescope.

NASA graphic showing notable exoplanets found by Kepler

Kepler 452b was thought to be rocky and in the Goldilocks zone, but was 1,400 light years away - 100 times further than Wolf 1061c is. Wolf 1061c is just 14 light years away in the constellation of Ophiuchus, orbiting the star called Wolf 1061. It is one of three planets orbiting the star found by Australian astronomers. Lead study author Dr Duncan Wright of the University of New South Wales (UNSW) said: "It is a particularly exciting find because all three planets (b, c and d) are of low enough mass to be potentially rocky and have a solid surface.

"The middle planet, Wolf 1061c, sits within the 'Goldilocks' zone where it might be possible for liquid water - and maybe even life - to exist."
The move came during the Sixties – at the height of the Cold War – as the US and Soviet Union battled it out to be the ultimate superpower on the ground and in space. Both nations had already tested several nuclear weapons and other countries – including Britain and France – were eager to get a slice of the action too. Papers from 1967 show that Mr Wilson approached French President Charles de Gaulle for a meeting to discuss the possibility of working together on a separate nuclear agreement should Britain be accepted into the EEC, the EU's precursor. The brief reads: "Our first two POLARIS submarines have already been launched. "We plan that all four should be operational by December 1969. "We are taking steps to ensure that our POLARIS missiles will remain as effective weapons if the Russians complete their deployment of an ABM system. "We have, however, decided not to buy the POSEIDON missile or to embark on the development of a new generation of nuclear weapons in co-operation with the US."

Harold Wilson wanted to join the EEC

The UGM-73 POSEIDON missile was the second US Navy nuclear-armed submarine-launched ballistic missile (SLBM) system, powered by a two-stage solid-fuel rocket. However, Mr Wilson was seemingly uninterested in this weapon and wanted to use the UK's position as a nuclear power to gain entry to the EEC. His brief continued: "In a few years' time, therefore, our military nuclear relationship with the US, as it has existed since Nassau, will probably be coming to an end. "We may then face a choice between renewing such nuclear cooperation with the US or developing our nuclear policy in a primarily European context.
“How this decision moves [forward] is bound to be determined largely by the possibilities presented to Britain of a fuller participation in Europe’s economic and political developments.” Mr Wilson, then, appeared to be hinting that he could share the UK’s nuclear secrets with the bloc – but his efforts were in vain. General de Gaulle – who served as President of France from 1959 to 1969 – said “non” to Britain’s EEC membership application later in 1967, humiliating Mr Wilson in the process. It came four years after his first veto, when he rejected Harold Macmillan with repeated references to Britain’s insular and maritime status. Britain, he argued, was not European enough and had “in all her doings very marked and very original habits and traditions”. He added: “In short, the nature, the structure, the very situation that are England’s differ profoundly from those of the continentals.” The EEC was formed at the Treaty of Rome on March 24, 1957 – with Belgium, West Germany, France, Italy, Luxembourg and the Netherlands making up the original six signatories. But the UK’s Commonwealth ties, domestic agricultural policy, and close links to the US were considered a problem for de Gaulle. It was not until the Gallic icon passed away in 1970 that the UK was free to sign up to the bloc. Former Prime Minister Edward Heath successfully took the UK into the EEC on January 1, 1973. The UK would go on to scrap Mr Wilson's plans and develop Chevaline – a system to update and improve the POLARIS capabilities. The Trident nuclear programme was launched in 1979 as an operational system of four Vanguard-class submarines armed with US Trident D5 missiles.
Today, the Trident nuclear programme is still active, and its purpose as stated by the Ministry of Defence is to "deter the most extreme threats to our national security and way of life, which cannot be done by other means". It is operated by the Royal Navy and based at Clyde Naval Base on the west coast of Scotland, 25 miles from Glasgow. At least one submarine is always on patrol to provide a continuous at-sea capability. Each one carries up to eight missiles and 40 warheads, although their capacity is much larger.
Income Tax (Earnings And Pensions) Bill - continued | House of Commons

2720. Deductions from pay by employers have been part of the income tax system for a very long time. For example, in 1803 tax on emoluments from public offices and employments of profit was actually assessed on the employer. The employer was, however, entitled to deduct it from the salary. But the real history of PAYE as such starts in the Second World War.

2721. At the start of the war, manual workers and many other employees paid tax direct to the collector at half-yearly intervals. Manual workers were permitted to spread the payments over 13 weeks by buying "income tax stamps". But employers were not involved in this system.

2722. The war saw a big increase in both the number of employees paying tax and in the rates of tax. Many found this hard. This led to the introduction in F(No. 2)A 1940 of arrangements for employers to deduct Schedule E tax from pay. These arrangements were widened in 1942 to any weekly wage-earners. But they were nothing like PAYE. The tax was still assessed every six months by the Inland Revenue. The Inland Revenue then told employers how much to deduct.

2723. This left a lot of problems. Payment lagged on average some ten months behind earnings. So tax due on high earnings (for example when doing a lot of overtime) could end up being deducted when earnings were low. Changes of job (from higher to lower earnings) were another obvious source of difficulties. All this led to a search for a system of deductions from "current earnings". And one which did not deduct too much - leaving perhaps millions of people to have less to live on while they waited for a repayment after the end of the year.

2724. These problems were discussed in a White Paper in 1942 (Cmd. 6348). That favoured sticking with pretty much the system of deductions in arrears despite its problems.
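The cumulative "current earnings" approach that eventually emerged from this search works by deducting, on each pay day, the difference between the tax due on total pay to date and the tax already deducted. A minimal sketch follows. It is illustrative only, not the statutory tables: the single flat rate, the annual allowance figure, and the function name are all invented for the sketch.

```python
def cumulative_paye(weekly_pay, annual_allowance=12570.0, rate=0.20):
    """Illustrative cumulative PAYE: each week, compute the tax due on
    cumulative pay to date (less the allowance accrued to date) and
    deduct only the difference from tax already deducted."""
    deductions = []
    total_deducted = 0.0
    cumulative_pay = 0.0
    for week, pay in enumerate(weekly_pay, start=1):
        cumulative_pay += pay
        free_pay = annual_allowance * week / 52       # allowance accrued so far
        taxable = max(cumulative_pay - free_pay, 0.0)
        tax_due_to_date = taxable * rate              # flat rate, for illustration
        this_week = round(tax_due_to_date - total_deducted, 2)
        total_deducted += this_week
        deductions.append(this_week)
    return deductions

# Heavy overtime in week 2, low pay in week 3: the week 3 deduction
# shrinks automatically because the calculation is cumulative.
print(cumulative_paye([500, 900, 300]))
```

Because each week's figure is simply the cumulative tax due minus what has already been taken, overtime is taxed when it is earned and a later low-earning week attracts a small deduction (or even a refund through the payroll). This is exactly the defect of the in-arrears system that the cumulative design removed.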
But the reactions to that White Paper (and the fact that both the USA and Canada had come up with systems of deduction based on current earnings) led to a change of views. Another White Paper in 1943 (Cmd. 6469) proposed what is recognisably the current PAYE system. Crucially it involved deductions based on the cumulative pay and tax deducted in the year. (This is still the feature which distinguishes PAYE in the United Kingdom from most other countries' PAYE system. All major developed countries other than France have some system of deduction of tax by employers from wages and salaries. That in the United Kingdom is at the far end of the spectrum in requiring cumulative calculations and in the sophistication of its codes and procedures for trying to keep codes up to date.) 2725. The system was proposed only for weekly-wage earners and pensions paid by their former employers. But the legislation introduced in 1943 was extended, in response to representations, during the passage of the Bill to others earning less than £600 a year. It was enacted as the Income Tax (Employments) Act 1943 (6&7 Geo. 6. (1942-43) c.45). The core provisions of the 1943 Act are recognisably the source of the current PAYE vires in section 203 of ICTA: Income Tax (Employments) Act 1943 (6&7 Geo. 6. 
(1942-43) c.45)

Basis of charge and method of collection of income tax on certain emoluments

1.-(1) Income tax for the year 1944-45 or any subsequent year of assessment shall be assessed and charged in respect of the emoluments specified in subsection (2) of this section on the amount of those emoluments for the year; and, on the making of any payment of, or on account of, any such emoluments made during the year 1944-45 or any subsequent year of assessment, income tax shall, subject to and in accordance with the regulations made by the Commissioners of Inland Revenue under section two of this Act, be deducted and repaid by the person making the payment, notwithstanding that when the payment is made no assessment has been made in respect of the emoluments...

Regulations of Commissioners of Inland Revenue

2.-(1) The Commissioners of Inland Revenue shall make regulations with respect to the assessment, charge, collection and recovery of income tax in respect of emoluments to which this Act applies, being tax for the year 1944-45 or any subsequent year, and those regulations may, in particular, include provision and any such regulations shall have effect notwithstanding anything in the Income Tax Acts:

(2) The said tax tables shall be constructed with a view to securing that, so far as possible

In this subsection the references to the total tax payable for the year shall be construed as references to the total tax, other than surtax, estimated to be payable for the year in respect of the emoluments, subject to a provisional deduction for allowances and reliefs, and subject also, if necessary, to an adjustment for amounts overpaid or remaining unpaid on account of income tax in respect of emoluments to which this Act applies for any previous year (including any year previous to the year 1944-45).
For the purpose of estimating the total tax payable as aforesaid, it may be assumed in relation to any payment of, or on account of, emoluments, that the emoluments paid in the part of the year of assessment which ends with the making of the payment will bear to the emoluments for the whole of that year the same proportion as that part of the year bears to the whole year...

2726. The scope of PAYE was then further extended, before the system had come into operation, by the Income Tax (Offices and Employments) Act 1944 (7&8 Geo. 6. (1943-44) c.12) to all emoluments assessable under Schedule E (other than those payable for the armed forces).

The Income Tax (Offices and Employments) Act 1944 (7&8 Geo. 6. (1943-44) c.12)

Extension of principal Act (subject to exceptions) to all emoluments taxable under Schedule E

1-(1) Subject to the provisions of this Act, the Income Tax (Employments) Act, 1943 (hereafter in this Act referred to as "the principal Act") shall extend to all emoluments assessable to income tax under Schedule E, other than pay, pension or other emoluments payable in respect of any service in or with the armed forces of the Crown, and accordingly that Act shall have effect as if for subsections (2) to (4) of section one thereof there were substituted the following subsection -

2727. Both the 1943 and 1944 Acts were short and contained mainly transitional provisions. The details of PAYE were then, as now, left for regulations made under section 2 of the 1943 Act.

The scope of PAYE

2728. The scope of PAYE was extended by subsequent Acts to all income assessable under Schedule E; and the scope of Schedule E was itself also extended. Highlights were:

2729. The point of this selective list is two-fold:

What is and is not a payment for PAYE

2730. The distinction between what is and is not a payment of Schedule E income matters because:

2731.
The boundary between what is and is not a payment was used to avoid PAYE (often as part of attempts also to avoid National Insurance Contributions). This led to legislation to treat income as paid - mainly in:

Machinery of PAYE

2732. There have also been a few changes to the machinery of PAYE - but remarkably few given the nearly 60 years since the 1943 Act:

Chapter 1: Introduction

2733. Clause 682 introduces the Part.

2734. Clause 683 defines PAYE income.

Clause 682: Scope of this Part

2735. This clause is purely introductory. It gives readers an indication of what they will find in the Part.

Clause 683: PAYE Income

2736. This clause defines "PAYE income". It is new.

2737. The term "PAYE income" takes the place of "income assessable under Schedule E" which is used in section 203 of ICTA. The meaning of "PAYE income" might be thought to be different from "income assessable under Schedule E". But on close examination it has the same meaning. See Note 55 in Annex 2.

2738. Subsection (1) defines PAYE income as the sum of the three amounts defined in this clause.

2739. Subsection (2) defines PAYE employment income using the terms introduced in Chapter 3 of Part 2 of the Bill (see the commentary on that Chapter).

2740. Subsections (3) and (4) define PAYE pension income for the year. This is taxable pension income (as defined in Part 9 of the Bill) which:

2741. Subsection (5) similarly defines PAYE social security income as the total of any social security income (as defined in Part 10 of the Bill) which:

2742. Finally, the label "PAYE income" may not be ideal in all respects. This is because:

2743. But this is no different from the present position with income assessable under Schedule E. In the course of consultation users welcomed the label "PAYE income" as plain language; and felt the subsequent clauses (and the PAYE regulations) would make clear the distinction between PAYE income and payments of PAYE income.

Chapter 2: PAYE: General

2744.
Clause 684 requires the Board to make regulations to collect income tax on PAYE income. These regulations include in particular requirements for those making payments of PAYE income to deduct tax by reference to tax tables. 2745. Clause 685 requires the tax tables to try to collect the right amount of tax on the PAYE income by the end of each tax year and to try to do so evenly. 2746. Clause 686 defines when a payment of PAYE income is made for PAYE purposes (in the same way as clause 18 defines when earnings are received for the purposes of Part 2). Clause 684: PAYE regulations 2747. This clause provides powers for the Board of Inland Revenue to make PAYE regulations. It derives mainly from part of section 203 of ICTA. 2748. Item 5 derives from section 203(10). It allows PAYE regulations to provide for the way in which any matters dealt with in the regulations are to be proved - for example in proceedings to recover tax. Section 203(10) also includes provision for proving the contents or transmission of anything that, by virtue of the regulations, takes an electronic form or is transmitted to any person by electronic means. This Part of the provision was enacted to deal with electronic filing, a predecessor of filing by internet. It is due to be repealed by section 139 of and Part VII of Schedule 20 to FA 1999 from a date laid down by Order, and is therefore omitted. Paragraph 89 of Schedule 7 to the Bill preserves the omitted words in the meantime. 2749. Items 10 and 11 derive from sections 203L(4) and 206A(6) of ICTA but are applied more widely in the Bill. See Change 147 in Annex 1. 2750. Subsection (8) defines the term "PAYE regulations" for the purposes of the Bill and of other legislation. This allows other legislation to refer more briefly and naturally to "PAYE regulations" rather than to "regulations made under section 684 of the Income Tax (Earnings and Pensions) Act 2003". 2751. 
This clause omits as unnecessary the provision in section 203(1) that deductions are to be made from payments "notwithstanding that when payment is made no assessment has been made in respect of the income". See Note 57 in Annex 2.

2752. Section 203(3A) is also omitted from this clause as unnecessary. Section 203(3A) is a transitional rule from the introduction of independent taxation in FA 1988. It cannot affect any tax year to which the Bill applies.

Clause 685: Tax tables

2753. This clause requires the Board to produce tax tables for PAYE which aim to collect the right tax for the tax year. It derives from section 203(6), (7) and (8) of ICTA.

2754. Subsection (1) requires the Board to produce tax tables which, where possible, result in:

2755. The main practical effect of subsection (2) is to collect underpayments through PAYE rather than by the taxpayer making a lump sum payment.

2756. Subsection (3) provides that, in trying to collect tax evenly, it can be assumed that the rate of past payments in the tax year is a guide to the rate of future payments.

Clause 686: Meaning of "payment"

2757. This clause deals with the meaning of "payment" in this Part. It derives from section 203A and part of section 202B of ICTA.

2758. Section 203A was introduced in 1989 as part of the package of changes dealing with the switch to a receipts basis of assessment. Prior to 1989 income under Cases I and II of Schedule E was assessed on an arising basis, whereas PAYE deductions were made when the emoluments were paid. The 1989 reforms made the emoluments assessable at the same time that they were paid for PAYE purposes. They provided:

2759. These definitions are essentially the same. This clause therefore matches clause 18, which derives from section 202B.

2760. In consultation leading up to this Bill some users said that the heading of the clause was inappropriate because it deals only with the timing of a payment. The clause reproduces the heading from section 203A.
It is on close examination appropriate. Section 203A and this clause are not only giving the time of a payment. They also make some things which would not be payments into payments for PAYE purposes. A simple example is where an employee is entitled to collect a bonus of £1,000 on Monday. That is a payment for PAYE purposes on Monday even if the employee does not get around to collecting the money until later. 2761. Subsection (1) provides that for the purposes of the PAYE regulations, any payment of (or on account of) PAYE income is a payment for PAYE purposes at the earliest time given by any of the dates derived from the rules given. 2762. Rule 3 derives from section 203A(2), and Rule 3(a) from section 203A(4). 2763. Subsection (2) provides that a person is treated as a director for the purposes of rule 3 in subsection (1) if he or she is a director at any time during the tax year. 2764. Subsections (3) and (4) derive from section 203A(5), and from section 202B(5) and (6). In section 203A(5) reference is made to the definition of director in section 202B(5) and (6). It is more helpful to readers to repeat the definition here. Chapter 3 PAYE: Special types of payer or payee 2765. This Chapter deals with PAYE obligations where there are special types of payer or payee. The majority of these provisions were introduced to counter avoidance of PAYE - see paragraph 2731. 2766. Clause 687 treats certain payments of PAYE income actually made by an intermediary of an employer as made by the employer. 2767. Clause 688 treats agency workers who are treated as having earnings by clause 44 as employees of the agency for the purposes of most of the provisions for special types of payer, payee and types of income in this Chapter and Chapter 4. It also treats the client rather than the agency as the employer for the purposes of those provisions in relation to some payments. 2768. 
Clause 689 deals with employees who work in the United Kingdom but whose employer is not subject to PAYE regulations, typically where the employer has no UK presence. It treats the person for whom an employee works in the United Kingdom as if that person had made certain payments which are actually made by the employer or an intermediary of the employer. 2769. Clause 690 applies only to employees who are not resident (or not ordinarily resident) in the United Kingdom and who work partly in the United Kingdom and partly not. It treats payments of income of the employee as payments of PAYE income. It also provides for the Inland Revenue to direct that only a proportion of such payments be treated as PAYE income. 2770. Clause 691 deals with workers provided by contractors. It provides for the Board to direct that PAYE must be operated by the person employees actually work for rather than their employer if that person pays for their work and the employer is not likely to operate PAYE properly. 2771. Clause 692 provides for regulations to require PAYE to be operated on tips which are collected and shared among a group of employees by the person who runs that arrangement. It also provides for the employer to operate PAYE in some circumstances if the person who shares out the tips fails to do so properly. Clause 687: Payments by intermediary 2772. This clause deals with payments made by intermediaries. It derives from section 203B of ICTA. 2773. Section 203B was introduced as part of a package of PAYE provisions in FA 1994. It prevents avoidance of PAYE by using an intermediary not subject to PAYE regulations to make payments. An example is an intermediary outside the UK tax net. 2774. Subsection (1) states the basic proposition that a payment of income by an intermediary is treated as a payment by the employer. This clause (like others in this and later Chapters) refers explicitly to "employer" and "employee". These terms take their meanings from clause 712. 
The commentary on them uses the words in the same sense.

2775. Subsection (2) disapplies subsection (1) if the intermediary complies with the PAYE regulations. The wording in the Bill makes it clearer that the intermediary must both deduct and account for tax in accordance with the PAYE regulations. See Note 58 in Annex 2.

2776. Similar clarifications have been made in clauses 689(1)(d) and 691(1)(c).

2777. Section 203B(5) applies section 839 of ICTA to give the meaning of connected persons. Clause 718 of the Bill does that for the Bill as a whole so no provision to that effect is needed in subsection (4).

© Parliamentary copyright 2002 | Prepared: 5 December 2002
PORTLAND, Maine — The runaway oil train that caused explosions and fires that killed 47 people in a Quebec town underscored the dangers of transporting hazardous materials via rail. But there are other materials as dangerous as oil that are shipped over rail lines in Maine. On any given day, freight trains rumbling through Maine's cities and across the countryside carry hazardous materials that have the potential to start fires, ignite explosions, harm the environment, make people ill and, in extreme cases, even kill. Besides transporting oil, trains last year also carried about 20 other kinds of materials through Maine that are classified as hazardous, including chlorine, ammonia and sulfuric acid, according to records supplied by the Maine Department of Environmental Protection through a Freedom of Access request by The Associated Press. Those materials weighed a total of more than 300,000 tons and were bound for paper mills, chemical companies, a distillery, energy companies and other destinations both in and out of Maine. Some materials labeled as hazardous are relatively harmless, such as paraffin wax and potassium chloride, a salt substitute, said University of Maine chemistry Professor Ray Fort. And while crude oil is dangerous because it can be explosive, many of the other materials are dangerous in different ways. A spill of chlorine or ammonia, for instance, could be devastating because they damage lungs and mucous membranes if inhaled. Chlorine gas was used as a weapon during World War I. "They would be environmental disasters of a different sort," Fort said. "Chlorine is really nasty stuff." Trains carrying crude oil across Maine have come under scrutiny after an unattended Montreal, Maine & Atlantic Railway train barreled into Lac-Megantic, Quebec, derailed and set off explosions and fires that killed dozens of people and leveled much of downtown. The railroad is based in Hermon, Maine, and Lac-Megantic is about five miles from the Maine border. 
Oil can be a dangerous material, but so are many other gases, liquids and solids that Montreal, Maine & Atlantic and Pan Am Railways reported to the DEP last year. Capt. Mike Nixon of the Portland Fire Department said trains in Portland aren't too susceptible to derailments because the tracks are relatively straight and trains travel at relatively slow speeds through most of the city. In his nearly 20 years as a firefighter, he has responded to only one hazardous spill from a train, when muriatic acid leaked from a rail car in 1991. Nixon's bigger concern is hazardous materials transported by truck. "Think about it," he said. "How much of this material comes by truck?" Robert Gardner, technological hazards coordinator for the Maine Emergency Management Agency, agreed that trucks have a greater likelihood of an accident than trains do. However, he added, there's a vast difference in the volumes they carry. "Trucks carry smaller amounts but we have more trucks on the road than we have rail cars. Rail cars when they derail, because of the amounts involved, have a greater risk," he said. "A large propane tractor-trailer will hold maybe 9,000 gallons of propane. A rail car will hold 30,000 gallons." In his 17 years with MEMA, Gardner recalls three times that residents had to be evacuated because of a hazardous material spill from a train derailment. As common carriers, railroads are required to ship any commodity that a company requests, said Cynthia Scarano, executive vice president of Pan Am Railways. Tank cars have become much safer over the years, she said, and railroads, regulatory agencies and first responders are better trained to respond to spills. Railroads also are required to take extra safeguards when handling hazardous materials, she said. Montreal, Maine & Atlantic will no longer transport oil, Chairman Ed Burkhardt told the Montreal Gazette on Monday. Railroad President Robert Grindrod didn't return a phone call for comment.
Hazardous materials can be found just about anywhere, Gardner said. A 20-pound propane tank used for a gas grill is the equivalent of 100 pounds of dynamite, he said. He urged people who live along rail lines used to transport dangerous cargo to have a disaster-response plan in place. "Everyone should have an emergency plan for themselves, 'What do we do if?'" he said. "And it doesn't matter if it's a power outage, a hurricane, a river flooding or if you're on a route for hazardous chemicals."
Rest of the story: Media mixup on what ship really rescued Terra Nova crew in 1943

Posted September 14, 2012

The recent discovery off the coast of Greenland of the remains of the SS Terra Nova, the ship that carried Briton Robert F. Scott and his team to Antarctica in 1910, generated headlines around the world. It also perpetuated a little factual inaccuracy in a footnote to the Terra Nova story, including by this publication. A private research vessel reported finding the Terra Nova last month during a sea trial of its sonar equipment. The ship has gained fame for its role in the British Antarctic Expedition 1910-13, during which Scott and his crew conducted pioneering research while also attempting to reach the geographic South Pole. [Link to previous story — Shipwreck: Remains of Scott's vessel Terra Nova found off Greenland coast.] About 30 years after its famous Antarctic voyage, the wooden-hulled barque had been chartered in 1942 to carry supplies to base stations in Greenland during World War II. On Sept. 12, 1943, the vessel sent an SOS reporting damage, according to the declassified September 1943 Secret War Diary of the Greenland Patrol, a little-known corner of World War II history that involved U.S. Coast Guard (USCG) and Navy ships guarding the coastal waters of Greenland. The Terra Nova reported “water over its boilers and pumps not working.” The USCG icebreaker Southwind, as had been widely reported, was the first ship to respond and rescued the crew. It made for an interesting tale, especially for polar enthusiasts who know that the Southwind was later involved in Operation Deep Freeze in Antarctica some 20 years later. End of story. Except for the fact that the Southwind wasn’t the ship that saved the Terra Nova crew. Reader Michael Zemyan contacted The Antarctic Sun after an alert friend had noted that the Southwind wasn’t commissioned for service until July 1944, about 10 months after the sinking of the Terra Nova.
In the Book of Valor, a 1945 publication of the USCG Public Affairs Division that lists those Coast Guardsmen who were awarded decorations during World War II, four crewmembers are cited for their roles in the rescue of the Terra Nova. The men all served aboard the USCG cutter Atak. Each man was cited for his “heroic conduct” for volunteering to operate a lifeboat for four hours without relief to transfer all 20 survivors of the Terra Nova to the Atak, a former trawler taken into service in 1942 by the USCG for service on the Greenland Patrol. She was formerly the Winchester and was stationed out of Boston during her time with the Coast Guard, according to a fact sheet from the USCG History Program. Additional online research by amateur polar historian Bill Spindler found the Greenland War Diary for September 1943, which clearly describes the role of the Atak in the rescue. It reached the Terra Nova on Sept. 13, according to the logs, rescued all personnel aboard and then proceeded to Narsarssuak, Greenland. The cutters Amarok, Laurel, and Manitou, also part of the Greenland Patrol, responded to the SOS as well, but the Amarok and Manitou turned back after their services were not needed. The Laurel sank the burning Terra Nova with gunfire, sending it to the bottom of the sea at an estimated 70 fathoms, or about 400 feet. So, how did the mistake in identity happen? It’s somewhat unclear, but the popular online encyclopedia Wikipedia had cited the Southwind as the rescue ship on Sept. 13, 1943, in the Terra Nova ship entry. It has since been updated. Zemyan has a theory. The Southwind had a long and varied career, during which she went by several names. The following history comes from the official USCG Southwind history webpage.
After a brief period of service along the coast of Greenland, where she assisted her sister ship Eastwind in capturing German weather teams, the icebreaker was transferred to the Soviet Union under the terms of the Lend-Lease program on March 25, 1945. She was renamed Admiral Makarov after a famous Russian mariner and naval architect who is recognized as the father of the modern icebreaker. The ship operated in the Russian merchant marine for four-and-a-half years before the Soviet Union returned her to the United States on Dec. 28, 1949, when she changed names yet again, becoming the USS Atka. During her years as the Atka, she made a total of 19 trips into Arctic waters and nine extensive voyages to Antarctica. In October 1966, she was transferred back to the USCG. Her new commanding officer, on behalf of the crew, requested that the Atka revert to Southwind. The request was granted early in 1967. She was eventually decommissioned in 1976 and sold for scrap. “Perhaps at some point someone was searching for information on the Atak, and looked up Atka instead, and then found the history of the Southwind,” Zemyan said. And, in the words of Paul Harvey, that’s the rest of the story.
Written by Hilary Reno, MD, PhD, associate professor in the Department of Medicine at the School of Medicine In November, the Centers for Disease Control and Prevention (CDC) released two reports detailing the increasing rates of sexually transmitted infections in the US. As a provider of care for patients with sexually transmitted infections (STIs) and HIV, I find the increasing rates of all STIs a great concern. National STI data indicate that rates of chlamydia, gonorrhea, and primary and secondary syphilis all increased from 2013 to 2014 (Sexually Transmitted Disease Surveillance 2014). Increasing rates in men accounted for some of these increases, and at-risk groups, including youth and men who have sex with men (MSM), continued to have high rates of STIs. For many years, the per capita rates of STIs in the St. Louis region have been some of the highest in the United States. In the St. Louis region, patients with STIs seek care from a variety of providers and locations. Thus, we felt that improving the care of patients with STIs in our region would require a group effort from such systems as public health departments, universities, primary care providers, and emergency departments to address this multi-faceted health issue. With the support of the Institute for Public Health, earlier this year I formed the STI Regional Response Coalition (STIRR). STIRR first met this past spring with nearly fifty attendees from a variety of agencies, including state, city, and county health departments, academic medical centers, emergency departments, urgent care centers, federally qualified health centers, and private providers. Focused work by primary care/women’s health providers, emergency department providers, and HIV providers will now start to examine STI testing and treatment practices. With a cooperative approach from health departments and providers, we can target clinical practice for improvement and focus our resources.
We will also be able to draw on the resources offered by the St. Louis STD and HIV Prevention Center to help raise awareness of CDC treatment guidelines and STI standards of care for providers. Centers for Disease Control and Prevention. Sexually Transmitted Disease Surveillance 2014. Atlanta: U.S. Department of Health and Human Services; 2015. Increase in Incidence of Congenital Syphilis — United States, 2012–2014. MMWR 64(40); 1150-1.
Received: August 04, 2022 Accepted: April 04, 2023 Published online: 2023-06-06 Jaroszewska A., Horticulture and forestry Leaves can be a valuable source of biologically active compounds that effectively protect the body against oxidative stress and resulting disease. The aim of the experiment was to determine the effect of genotype and water conditions on the phytochemical substances and antioxidant activity of leaves of sweet cherry cv. ‘Vanda’ and plum cv. ‘Amers’. The study was conducted at the Experimental Station in Lipnik, Poland, in 2017 and 2018. The soil on which the experiment was conducted belongs to the typical rusty soil group and is classified as a Haplic Cambisol. In the Ap level (arable-humus horizon), it has the granulometric composition of clay with a slightly acidic pH. The first experimental factor was genotype (sweet cherry and plum). The second factor was under-crown watering; half of the trees of each variety were subjected to irrigation (W), and half were used as the control group (O), without irrigation, with the soil water potential below −0.01 MPa. Hadar micro-sprinklers were used for watering, with a sprinkler range of r = 1 m and an efficiency of 2.5 l·h⁻¹; one sprinkler per tree. It was found that the leaves of both species are abundant in compounds (macro- and microelements, crude protein, antioxidants, vitamins) with pro-health activities, including antioxidant, antibacterial, and anti-inflammatory effects. They are especially characterized by high antioxidant activity. By-products of plant production, such as cherry and plum leaves, can be a valuable and relatively cheap raw material for the production of food, pharmaceuticals or nutraceuticals, and, in times of insufficient feed supply, a valuable supplement to the feed of grazing animals. Jaroszewska, A., Telesiński, A. and Podsiadło, C. (2023) 'Pro-health potential of Prunus avium L. and Prunus domestica L.
leaves cultivated in different water conditions' Journal of Elementology, 28(2), 295-305, available: http://dx.doi.org/10.5601/jelem.2022.27.3.2321. antioxidants, chemical composition, leaves, sweet cherry, plum, vitamins
The Boston Massacre was a clash between American colonists and British soldiers. It took place in Boston, Massachusetts, in 1770. The colonists wanted to end the British Parliament’s control over their taxes. The soldiers were there to enforce those taxes and other laws in the town. As news of the massacre spread, anger intensified in the colonies against Great Britain’s rule. These tensions eventually contributed to the American Revolution (1775–83). Did You Know? The colonists coined the phrase “No taxation without representation!” They thought that it wasn’t fair for the British government to tax them when they had no say in Parliament’s policies. Tensions had been rising between the British and the colonists in the years before the Boston Massacre. In 1767 the British Parliament had passed the Townshend Acts. These acts placed taxes on goods such as tea, paper, glass, and paint. Many colonists felt that the Townshend Acts were unfair because the colonies were not represented in Parliament. The colonists organized boycotts of those goods and harassed British officials. In response, Parliament sent more British troops to keep order, increasing the tension further. Brawls often broke out between the two sides. On the night of March 5, 1770, a crowd of colonists confronted eight British soldiers in the streets of Boston. The colonists taunted the soldiers and threw snow and other objects at them. In the confusion, one of the soldiers was shoved and, in fear, fired his musket. Other soldiers, thinking they had heard the command to fire, also fired into the crowd. As a result of the shootings, five colonists were killed and several others were wounded. The British governor of the area soon arrived. He ordered the soldiers back to their barracks and promised the crowd that justice would be done. Did You Know? Among those killed during the Boston Massacre was Crispus Attucks. He was a Black sailor who likely was formerly enslaved.
He became famous as one of the first people to die for the cause of independence. The soldiers involved in the clash were arrested and put on trial. John Adams, one of Boston’s leading attorneys at the time, helped to defend them. He reminded the jury that the night of the massacre was chaotic. No one knew for sure whether a command to fire had been given. In addition, Adams argued that none of the soldiers had intended to harm the colonists. His arguments persuaded the jury, and six of the soldiers were acquitted. The other two were released after being branded on the thumbs. The Boston Massacre struck some colonists as a battle for liberty against an oppressive British government. The encounter led to acts of disobedience in the colonies against the harsh laws and heavy taxes imposed by Parliament.
Most of the time, counseling and psychotherapy refer to the same thing. In all probability, there is some counseling and some psychotherapy intermittently taking place in the course of any single therapeutic hour. Technically, the word counselor means advisor, or one who advises or teaches. In counseling it is understood that two individuals are putting their heads together in an intellectual manner in order to solve a problem. Therefore, we could say that counseling is that which provides advice, or is conducted in such a manner that one is actually taught or given information in a didactic way. The term counseling is commonly used in conjunction with other professions, such as legal counselor, financial counselor, or spiritual counselor. There are even references in the Bible to Christ as Counselor. Psychotherapy, on the other hand, is a term that, generally speaking, refers to the “treatment of mental and emotional disorders through the use of psychological techniques designed to encourage communication of conflicts and insights into problems, with the goal being personality growth and behavior modification” (American Heritage Dictionary). However, the use of the terms “mental and emotional disorders” above may be viewed as archaic. Over the last few years a growing number of therapists and theorists have viewed the traditional pathology- or medical-model-based definition of mental and emotional disorders as being too narrow and inaccurate. Instead, this newer model relies on the “narratives” or “stories” that individuals relate, which describe their difficulties in terms of the context and belief systems in which they occur. This paradigm shifts the role of expert from the therapist to the client; that is, the client knows more about their own difficulty than does the therapist or counselor. It also means that the strengths of the client are viewed by the professional as being crucially important to the process of reaching a point of balance.
As a consequence, the client is considered the ultimate teller of their own story, and the chief guide through their course of resolving their pain with the assistance of a therapist or counselor. Irrespective of the approach to which one subscribes, the differences between psychotherapy and counseling do exist. The interchangeable use of the terms, however, typically sparks neither controversy nor confusion.
This video on plastic is called Alphabet Soup- A Look at Pollution in the Ocean #1. And here is #2. From Wildlife Extra: Lush bow to pressure and ditch plastic glitter from products Lush remove microplastic glitter from products January 2013. Lush Cosmetics have decided to remove all glitter and microplastics from their product range, just days after claiming it was a “non-story”. Lush have done some great work for the environment, and have often been leaders in creating animal- and environment-friendly products, but on this issue they fell down. However, after pressure from an online petition (started nearly 2 months ago), the media, and other concerned parties, they have made a sharp volte-face and decided to ditch the litter glitter from their products. 5 tons of glitter a “tiny amount” It was estimated that they were dumping some 5 tons of plastic glitter per year down the drains of the UK (and more in other countries), so their claim that it was a “tiny amount” was a little disingenuous. Lush have also spun this move to make it appear that they are doing it out of the goodness of their hearts, rather than in response to complaints. Hilary Jones, Ethics Director at Lush Cosmetics said: “Some of the shine and sparkle in our bath products used to be from micro plastic glitters. For some years now, because of our concerns about the accumulation of plastics in the environment, we have been gradually replacing these plastic glitters with new alternatives as innovations have become available. From British weekly The Observer: Are microbeads and microplastics in beauty products a threat to the oceans? The ubiquitous use of tiny fragments of plastic in cosmetics seems to be a serious problem for the marine environment. Am I right, and what can be done about it?
- The Observer, Sunday 9 December 2012 “Washing your face can be an act of pollution if you use a cleaner that contains zillions of plastic microbeads – aka ‘mermaid tears’ – for exfoliation”: Lucy Siegle on the microplastics which in some seas are more plentiful than plankton. It is true that microscopic particles of polyethylene now bob around the high seas. It’s also true that the origins of these microplastics are likely to be consumer products. Washing your face can be an act of pollution if you use a cleaner that contains zillions of plastic microbeads for exfoliation. Too small to be sifted out at sewage treatment plants, they end up in the ocean, where the plastic becomes a persistent pollutant. Because sea temperatures are low, the plastic does not biodegrade; it is also ingested by wildlife. How could they avoid it? In some seas plastic fragments are more plentiful than plankton. So let’s dry our guilt-induced “mermaid tears” – as these polluting plastic particles are poetically known – and face this issue. Largely this involves staring down the behemoth cosmetics industry, which has developed something of a dependency on fragments of plastic – apparently even some companies that send out beautiful sustainable messages about other parts of their supply chain. So why use such an ugly ingredient? Well, plastic is cheap and far more cost-effective than traditional biodegradable exfoliators such as coconut husk. There is also a tendency for the beauty industry to stick its head in the sand. “Show us proof that microplastics cause ecosystem collapse and we’ll think about ending their use” seems to be the pervasive message. But experts have accumulated evidence that tells us the time to act is now. Last month, for example, scientists from Wageningen University in the Netherlands showed that plastic nanoparticles have an adverse effect on sea organisms such as mussels.
At the same time the beauty industry is ever more dependent on the oceans for its own survival. Recent beauty products developed courtesy of the oceans include sea fennel in sun creams, seaweed in anti-cellulite treatments and even ingredients derived from salmon hatcheries. The industry needs a reminder that an ecosystem driven to the edge will not be productive. As consumers we are well placed to provide this. Follow the lead set by the Plastic Soup Foundation (plasticsoupfoundation.org), which quizzes all mainstream cosmetic companies on their use of microplastics. There’s even an app, Beat the Micro Bead, to assist with shopping (scan the barcode – if the app turns red, the company is using microbeads and is unrepentant about it; orange means the company has pledged to phase out microplastics). Actively asking questions, boycotting and pushing for alternatives are things we can all do. It’s time to scrub microplastics out of your skincare routine. The perfect Christmas gift for the eco-design nerd in your life? Energy Trumps from the Agency of Design. This graphically attractive and terribly useful deck of cards presents information about the eco credentials of 45 widely used materials such as cardboard and PLA (polylactic acid) in a simple way that lets you see their environmental impact at a glance. Energy Trumps allows you to make super-fast decisions about which material is better.
Go to agencyofdesign.co.uk/energytrumps - Unilever to phase out ‘microplastics’ (bigpondnews.com) - ‘Plastic micro beads’ to be removed from soap (cnn.com) - Phasing Out Polluting Microplastics (blogs.discovermagazine.com) - One-third of fish caught in Channel have plastic contamination, study shows (guardian.co.uk) - Unilever to phase out ‘microplastics’ by 2015 (ctvnews.ca) - Unilever to phase out ‘microplastics,’ a common ingredient in cosmetic cleansers (macleans.ca) - Tide of plastic devastates marine food chain (scotsman.com) - Common Plastics in Environment Absorb Contaminants (polymersolutions.com)
Curve Shape in Digital Photography (Why a Linear Curve May Not Be Straight, And Other Curiosities) * The eye's response to light is logarithmic. * In film photography, we always use logarithmic scales. * The computer screen's response to pixel values is close to antilogarithmic. * In Photoshop, the tone curves represent pixel values. * The digital sensor's response to light is linear. * Digital cameras normally convert sensor values to appropriate screen pixel values. * For dark frame subtraction and other kinds of analysis, we need linear conversion. Curve shape is the difference between these three pictures: It would not be correct to say that they differ in darkness or lightness, because all three have the same maximum white and minimum black. They differ only in the position of the midtones relative to the highlights and shadows. This is what it would look like if they differed in brightness and not curve shape: Here one is too dark (with no whites) and one is too light (with no blacks). See the difference? In what follows, my goal is to explain curve shape as it applies to digital photography. I haven't been able to find all this information gathered in one place, so I've worked a few things out for myself by experiment and calculation. I would be glad to hear from anyone with corrections. (1) The eye's response to light is logarithmic. The human eye responds to light in terms of equal ratios rather than equal increments - that is, in terms of multiplication rather than addition. Another way to say this is that the eye's response is logarithmic. What I mean by this is that if I ask you to adjust several lights to arrange their brightness in equal steps, you'll probably give them actual brightnesses of something like 10, 20, 40, 80, 160, or maybe 10, 30, 90, 270, 810, not 100, 200, 300, 400 or 100, 150, 200, 250. That is, each step will multiply the previous step by a constant.
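The "equal ratios" idea is easy to verify numerically. Here is a minimal sketch (the values are just the two hypothetical series from the text):

```python
# Two series of light levels that would look evenly stepped to the eye.
series_a = [10, 20, 40, 80, 160]   # each step multiplies by 2
series_b = [10, 30, 90, 270, 810]  # each step multiplies by 3

def step_ratios(levels):
    """Return the ratio of each brightness level to the previous one."""
    return [b / a for a, b in zip(levels, levels[1:])]

print(step_ratios(series_a))  # [2.0, 2.0, 2.0, 2.0]
print(step_ratios(series_b))  # [3.0, 3.0, 3.0, 3.0]
```

Evenly spaced to the eye means a constant ratio between steps, not a constant difference.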
We call this logarithmic because the logarithms of the values are arranged in equal steps. Take our hypothetical series 10, 20, 40, 80, 160, and take the logarithms to base 10. They come out to 1.0, 1.3, 1.6, 1.9, and 2.2. That is, when you double the actual value, you're adding 0.3 (more precisely 0.301) to its logarithm. That's handy, because logarithms let you substitute addition (or shifting along a scale) for multiplication (expanding or contracting the whole scale). That's exactly what the eye does with light levels: we see equal steps where the physical reality is multiplication by equal constants. (2) In film photography, we always use logarithmic scales. Here is the kind of characteristic curve you'll find on a film data sheet: Notice that the axes are logarithmic. Equal distances represent equal ratios. Each scale is labeled several ways, all equivalent. An ideal film would have a straight "curve" - it would produce density proportional to exposure. Or would that really be ideal? Actually, some compression at the toe helps get rid of lens flare, and compression at the shoulder helps deal with overexposed pictures. As we'll see, digital cameras imitate this. The gamma of a film negative is the slope of the straight-line portion of the curve. Typically, negatives have a gamma of about 0.7 and are printed on photographic paper with a gamma of about 1.4 (to turn the negative back into a positive). Note that 0.7 × 1.4 ≈ 1.0, so the picture looks like the original. (3) The computer screen's response to pixel values is close to antilogarithmic. Digital pictures don't have negatives. Instead, they are stored as enormous grids of pixels, each of which has a value from 0 to 255 (actually, three such values, for red, green, and blue, but for now we'll think in black and white). These are 8-bit pixels; you may also encounter 12-bit pixels (0 to 4095) and 16-bit pixels (0 to 65,535). How do pixel values map onto the brightness of the screen?
In fact, the mapping is close to antilogarithmic, so you can treat pixel values as a logarithmic scale for brightness. That is, pixel values 0 to 255 look equally spaced to the human eye, and they would be evenly spaced along the vertical scale of the graph: The truth of the matter is a little more complicated. The mapping of pixel values to brightness is actually a power law, not an antilogarithmic (exponential) function: Brightness = Max brightness × (Pixel value / Max pixel value)^Gamma Gamma ranges from about 1.8 to 2.2, depending on how the screen is calibrated. In conventional television technology it was 2.5, but on computers we've brought it down to 1.8 to match the original Apple Laserwriter, which set a lot of standards for the digital darkroom. This can be extremely close to the desired antilogarithmic function, especially if your screen's black value isn't pitch-black (a common situation on CRTs). Here's a graph that shows you just how close they can get. The red curve represents a screen with a gamma of 1.8 and a minimum black of 15% of full brightness. The green curve is antilogarithmic. In real life, most computer screens are a bit dark in the shadows, but otherwise close. Screen gamma is distantly related to film gamma - but only distantly. The two are not directly comparable. (4) In Photoshop, the tone curves represent pixel values. When you adjust the tone curve in Photoshop (Ctrl-M), you're editing a graph both of whose axes are linear scales of pixel values (corresponding roughly to logarithmic scales of brightness). Thus, the curve does what you expect, and a straight line represents no change. Photoshop Elements lacks the Curves adjustment, but it has Levels (Ctrl-L), and the middle slider (shown here in red) makes the middle of the curve bend up or down. The other sliders set the ends of the curve on the horizontal and vertical axes.
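The power law above can be written as a one-line function. This is an illustrative sketch only; the gamma of 1.8 and the normalization come from the discussion above, not from any particular display profile:

```python
def pixel_to_brightness(pixel, gamma=1.8, max_pixel=255, max_brightness=1.0):
    """Power-law (gamma) response of a screen:
    brightness = max_brightness * (pixel / max_pixel) ** gamma."""
    return max_brightness * (pixel / max_pixel) ** gamma

# A middle pixel value produces much less than half the maximum
# physical brightness, even though it looks like a midtone to the eye:
print(round(pixel_to_brightness(128), 3))  # about 0.289
```

At a gamma of 1.8, pixel 128 maps to roughly 29% of full brightness; this is the behavior that the camera-side tone curve described in section (6) has to compensate for.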
The black hump shape is a histogram that shows you how much of the picture is at each pixel value. This one indicates a picture most of whose surface area is darker than halfway along the scale. (5) The digital sensor's response to light is linear. The actual output voltage from each cell of an image sensor, whether CCD or CMOS, is proportional to the number of photons that struck it during the exposure. This means digital image sensors are ruthlessly linear, which is what makes them so appealing for scientific use. You can add the light from two sources and be sure that what you get is the exact sum. You can then subtract one of them and be left with just the other. With film, this isn't possible because the response isn't linear; 2 + 2 does not equal 4. (6) Digital cameras normally convert sensor values to appropriate screen pixel values. As you might expect, almost all digital cameras perform gamma correction (tone curve correction) to make the pictures look right on an approximately antilogarithmic screen display. Mainly, what they have to do is lighten the midtones a lot. Here's a typical conversion function, plotted with both linear and logarithmic scales for the sensor output: For concreteness I'm assuming a sensor with a 12-bit ADC (analog-to-digital converter) such as the Canon Digital Rebel, with sensor outputs ranging from 0 to 4095. The actual output of the sensor is an analog voltage. (7) For dark frame subtraction and other kinds of analysis, we need linear conversion. For some purposes we need to know the actual numbers that came out of the image sensor, or numbers linearly proportional to them, without gamma correction. This enables us to do such things as measure the brightness of objects in the picture (important in astronomy) or subtract light from a known, unwelcome source, leaving behind only the light we actually wanted.
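As a hedged sketch of the kind of correction step (6) describes, the function below maps a linear 12-bit sensor value to an 8-bit pixel by applying the inverse power 1/gamma. The constants are illustrative, not any camera maker's actual tone curve:

```python
def sensor_to_pixel(sensor, gamma=2.2, max_sensor=4095, max_pixel=255):
    """Gamma-correct a linear 12-bit sensor count into an 8-bit pixel.
    Raising the normalized value to 1/gamma lightens the midtones so
    the image looks right on a screen whose response is a power law."""
    return round(max_pixel * (sensor / max_sensor) ** (1.0 / gamma))

# Half of the sensor's full-scale value lands far above the midpoint
# pixel value, i.e. the midtones are lightened a lot:
print(sensor_to_pixel(2048))  # about 186, not 128
```

This is exactly the strong upward bend in the midtones that the conversion curves plotted above show.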
Related to the latter is dark frame subtraction, where we make two exposures, one with the lens cap on, and subtract them, so that the noise from hot pixels can be removed from the picture. Linear image conversion (from the raw data file produced by the camera) can be done by Canon File Viewer Utility, Canon Digital Photo Pro, ImagesPlus, and a number of other software packages. The linear conversion function is a straight line if you use a linear scale for the sensor output; otherwise it's quite bent. Here it is, superimposed on the graphs you just saw: That is why, when you ask for a linear conversion in Canon Digital Photo Pro, you see a curve that sags severely in the middle! Copyright 2004 Michael A. Covington.
NEW YORK – Usually, physics research starts with a known problem. There are surprises, of course, but they don’t often come from Internet videos, as happened with the case of the mysterious chain fountain. It started with Steve Mould, a host of science television shows in Britain. Mould, who has a master’s degree in physics from Oxford, seems to be the discoverer of the chain fountain, which he demonstrated in a startling video posted online. In it, he pulls one end of a long chain of metal beads out of a glass container. Once he starts it off, the bead chain continues spilling out of the container on its own, like water or gasoline being siphoned from a tank. That, in itself, would be interesting enough. But the real surprise is that the chain doesn’t just run over the edge. It rises up in a curve, like a water fountain, as it falls. “I came across it by accident,” he said. “I was looking for a physical model of a polymer,” a long molecule. He thought a chain of beads would work, and in the process of investigating that possibility he saw a demonstration of plastic beads self-siphoning from a container, tumbling over the edge. “I thought, ‘I want to reproduce this, but I think metal might look better.’” He tried it, and to his surprise the chain rose like a charmed snake. The video, posted about a year ago, went viral, and John Biggins, a Cambridge physicist, saw it. He had been talking with another physicist at Cambridge, Mark Warner, about a project Warner was working on, an online course to improve physics education in high school. Biggins brought the chain fountain video to Warner’s attention and they agreed it was an ideal problem to present to students because it involved Newtonian physics, not some extreme variant of string theory or quantum mechanics. Then they realized that they didn’t actually understand it. 
The fountain, which Biggins said he had never seen before the video, was "surprisingly complicated." The chain was moving faster than gravity would account for, and they realized that something had to be pushing the chain up from the container in which it was held. A key to understanding the phenomenon, Biggins said, is that mathematically, a chain can be thought of as a series of connected rods. When you pick up one end of a rod, he said, two things happen. One end goes up, and the other end goes down, or tries to. But if the downward force is stopped by the pile of chain beneath it, there is a kind of kickback, and the rod, or link, is pushed upward. That is what makes the chain rise. Biggins said they explored this possibility partly because of earlier findings by a Cornell researcher, Andy Ruina, on a different, but related, problem in falling-chain physics. Finding a new physics problem in an Internet video was, Biggins said, something of a treat. For a scientist, it's "reassuring" that new problems like this can pop up, he said. He and Warner published their paper on Jan. 15 in the Proceedings of the Royal Society A. And the chain fountain is now, as Warner hoped, part of a physics course for high school students. As for Mould, he is touring at the moment as part of a science comedy show called "Festival of the Spoken Nerd," in which he talks about the chain fountain. One of the most enjoyable performances, he said, was in Cambridge last month, because Biggins and Warner, who had talked to him about their research, attended the show. "They were in the audience," he said, "with the whole physics department."
Sheets of Cheap Carbon Nanotubes Now a Reality March 3, 2008 12:33 PM No shadows and mirrors -- a Nanocomp employee poses next to a 3x6 ft sample mesh composed of randomly overlapping millimeter-long carbon nanotubes. (Source: Nanocomp Technologies) The birthplace of the nanotube sheets is Nanocomp's high-tech tube-processing "Big Box", which can churn out a full-sized sheet per day. (Source: Nanocomp Technologies) Company makes 3x6 ft carbon nanotube sheets, with 100 sq. ft. sheets by the end of the summer; possible uses include consumer electronics, aircraft, and spacecraft. Carbon nanotubes are like a materials scientist's dream come true -- high strength-to-weight ratio, and flame resistance orders of magnitude higher than many commonly used materials. However, in the past, while these tiny tube molecules composed of carbon atoms were raved about by researchers, plans for practical applications remained largely in the realm of fantasy. The chief problem was the "nano" part of the nanotubes -- these tiny tubes would need to be scaled to visible-sized pieces of material in order to be utilized in many practical devices. Such scaling had enjoyed little previous success and was seen as a major roadblock to putting the ubiquitous nanotubes to work. A breakthrough from Nanocomp Technologies, a New Hampshire startup, promises to provide sheets composed of carbon nanotubes on an unprecedented scale. Using nanotubes measuring in the tens of nanometers, Nanocomp produces sheets of carbon nanotubes measuring 3 by 6 feet. Better yet, by the end of the summer the company is promising slabs of 100 square feet or more. Nanocomp says that the days of waiting for nanotube materials to be manufactured on a usable scale are over. The company is taking production seriously, and is not looking to remain in the realm of pure research. Says CEO and co-founder of Nanocomp, Peter Antoinette, "From the get-go, we wanted to build something that would be manufacturable.
We’re out to make value-added components out of that material." The hardest part about making nanotube materials is growing long enough tubes. Nanocomp makes a powder of tubes that are approximately 1 mm long, a relatively great length. In a highly secretive process, Nanocomp takes a carbon source such as ethanol or methanol, heats it, and flows it past a nanoparticle catalyst, which experts speculate may be an oxide of nickel, cobalt, or iron. The carbon molecules react with the catalyst, powered by the heat, forming a nanotube. The size of the catalyst correlates with the size of the nanotube formed. The big hurdle was maintaining a large enough catalyst, keeping it stable enough for millimeter-long tubes to grow. Antoinette says that in order to achieve this, Nanocomp uses an advanced computer controlling 30 different parameters in the process, including temperature, temperature gradient, gas flow rates, and the chemical composition of the mix. Using this precision control, researchers can both control the desired tube length and select between single-walled and multi-walled tubes. Mr. Antoinette states, "We can dial it in." The resulting nanotubes are arranged, randomly overlapping, into the large final sheets. To give an idea of the sheet's strength, it has a tensile strength of 200 to 500 megapascals -- aluminum has a strength of approximately 500 megapascals. Better yet, if Nanocomp moves from a random alignment to an organized alignment, the nanotube material could have a tensile strength as high as 1,200 megapascals. Antoinette comments that Nanocomp is heavily marketing the material to consumer electronics manufacturers and is receiving substantial interest. In cell phone handsets, the material could help provide protection against stray signals, thanks to its superior shielding against electromagnetic interference (EMI). This could yield clearer sound and reception. Also under consideration is the use of the material in laptops.
The material would provide such portable electronics with both heat dissipation from their chipsets and processors and EMI shielding from unwanted signals, via its randomly aligned nanotubes. Better yet, smaller strips of the material could act as powerful antennas, grabbing wireless signals for superior network reception and transmission. Nanocomp wants to eventually use the material in composites similar to the carbon composite used to build the new Boeing 787 jets. Current composites can't conduct electricity, making them vulnerable to lightning strikes. Nanotube composites could safely channel strikes to harmless locations, protecting the aircraft's electronics. As an added benefit, current could be run through the nanotubes to heat up the aircraft body and provide additional de-icing capabilities. Antoinette points out that most of the aerospace industry still uses pure copper wire for its conductors -- virtually the same copper wire used since the 1850s. His company's nanotubes could replace this material with better-conducting nanotubes, which weigh a mere 20 percent as much as the copper wiring per volume. Antoinette adds, "Copper wire is still the conductor of all our satellites, all our aircraft." He points out that a current 747 jet has two tons of copper wire aboard -- a weight cost that could be cut in half by the use of nanotubes. He says, "you're talking literally millions of dollars of savings in fuel costs over the life of an airplane." Boeing, Lockheed Martin, and Northrop Grumman are, needless to say, very excited about the potential of the new large-scale material. They have already qualified Nanocomp as a vendor and are currently receiving and testing samples of the material from Nanocomp. In hopes of meeting these and other industry leaders' demands, after the 100 sq. ft. samples are complete, Nanocomp plans on focusing its efforts on having a pilot plant running by 2010, with full-scale production by 2012.
David Lashmore, Nanocomp co-founder and Chief Technology Officer, pioneered the material assembly process. The other co-founder was Robert Dean, a former engineering professor at Dartmouth who started Synergy Innovations, a high-tech incubator in Lebanon, NH. The pair joined with Antoinette to start up the exciting new firm. They've received a $2.5M USD contract from the U.S. Army and a Small Business Innovation Research grant from the Air Force. They are currently raising additional private financing. To protect itself legally, Nanocomp has signed a non-exclusive license with IBM for IBM's single-wall nanotube process. With 16 of its own patents, Nanocomp looks to safeguard its own prospects as well.
Copyright 2016 DailyTech LLC.
- What virtue do today's young people most need to learn? - This reproducible resource provides a variety of Bible related activities to help students understand what it means to respect God, parents, other people and themselves. - Includes suggested Bible stories that emphasise the importance of showing respect in everyday life as well as discussion questions and What if? situations. - Students will enjoy the puzzles, crafts, games and projects. - Perfect for home or school. Please note: Product may be an old edition or slightly damaged. Only while stock lasts.
The Oyster Creek Nuclear Generating Station sits near the shore of New Jersey, in Lacey Township, a small town in Ocean County. The single boiling water reactor, commissioned in 1969, was the first large-scale commercial nuclear power plant in the United States. It has a capacity of 625 MW, producing over 5,000 GWh in 2007, about 9 percent of the state's energy. The benefits of the plant are numerous. It reduces reliance on unstable oil sources, it provides clean energy, and it’s far cheaper than wind or solar, rivaling even fossil fuel generation in cost per kilowatt-hour. The plant also is a boon for the local economy, creating over 900 jobs and donating over $100,000 yearly to the charity United Way. This spring the plant won a 20-year extension of its operating license. That's when the environmentalists reared their heads. A plethora of alarmist groups, including the New Jersey Environmental Federation, the New Jersey Sierra Club, the Public Interest Research Group, the Nuclear Information Resource Service and Grandmothers, Mothers and More for Energy Safety (GRAMMES) appealed the decision, taking it to the federal court system. The coalition's attorney, Richard Webster, of the Eastern Environmental Law Center, claims that the suit is over lack of information about how the plant will continue to operate safely. This claim is flat-out false. The plant submitted a bit of light reading -- a 462-page licensing application and a 59-page environmental impact report. Both reports extensively detailed the safety precautions and environmental safeguards the plant would take. The environmentalists' complaints center around two topics. The first is Barnegat Bay. The plant dumps controlled amounts of non-radioactive cooling water into the bay. The water has little if any impact, raising the temperature at most a couple degrees in a small localized region. Solar warming and currents can create similar heat pockets in ocean water without human intervention. 
The second complaint concerns the 650 tons of radioactive waste that sits in a holding pond outside the plant. Again, while the lobbies are eager to alarm the public, this pond, carefully constructed with concrete, poses no threat to the populace. In the first place, this is low-grade radioactive waste, and secondly, it has been carefully maintained. And it is important to remember that these are the same lobbies that blocked applications for new plants that could remove and reprocess this waste. If the people want something to protest, protest the Environmental Federation, the Sierra Club, and these alarmists. They are hurting the environment, their community, and our nation. Worst of all, by forcing power companies to lose productivity and spend funds on legal defense, they're raising the cost of power for New Jersey citizens. Let's hope this one sees its way swiftly through the justice system and that people -- and our government representatives -- start standing up to this kind of behavior.
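As a sanity check on the generation figures quoted at the start of this piece (625 MW capacity, roughly 5,000 GWh in 2007), a short Python sketch shows the plant ran at a very high capacity factor. The 8,760 hours-per-year constant is the only number not taken from the article:

```python
# Implied capacity factor for Oyster Creek from the quoted figures.
capacity_mw = 625          # nameplate capacity
generation_gwh = 5000      # approximate 2007 output
hours_per_year = 8760

# Maximum possible annual output if the reactor ran flat-out all year.
max_output_gwh = capacity_mw * hours_per_year / 1000
capacity_factor = generation_gwh / max_output_gwh
print(f"Implied capacity factor: {capacity_factor:.0%}")  # about 91%
```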
Tilapia Nutrition Information

Tilapia: A Good Source of Vitamins and Minerals

While many are saying that tilapia is much like bacon nutrition-wise, one has to note that the white fish deemed the Fish of the 21st Century is an excellent source of minerals like niacin, phosphorus, potassium, and selenium. Tilapia is also a good source of Vitamin B12, among other nutrients that are important for the human body.

Nutrition Facts of Tilapia

High Protein Content, Almost Like Chicken

Studies show that fish in general contains six times more protein than milk. Tilapia, also known as the chicken of the sea or the aquatic chicken because it's the easiest seafood source to farm, contains 21 to 24 grams of protein in a 3-ounce fillet. Chicken, on the other hand, contains 25 grams of protein per two drumsticks. This means eating tilapia is a lighter alternative to eating chicken.

Tilapia Is Extremely Versatile

Because of its white, flaky meat when cooked, and because it's generally considered bland compared to other seafood, tilapia is very versatile, making it a favorite among chefs and home cooks alike. You can broil it, grill it, pan-fry it until it's crisp, or cook it in coconut milk -- the list of recipes is endless. Also, tilapia will quickly soak up the flavor of a delicious marinade.

100 Grams of Tilapia Has 10 mg of Calcium

Calcium is good for the body. While you can easily get calcium by taking supplements, getting it the natural way will also benefit your muscles and your bones. It is interesting to note that tilapia has as much calcium as tuna per 100 grams.

Tilapia Has 0.7 mg of Iron per 100 Grams

Although this particular fish does not stand out when it comes to its iron content, it is still a decent source of the mineral. In fact, 3.5 ounces of tilapia will cover 4 percent of the iron needed by women aged 20 to 50. Tilapia contains what is called heme iron -- an iron form that can be easily absorbed by the body.
The fish is also an effective iron enhancer, which helps the body get more iron from vegetables and other iron sources. Iron deficiency is a common condition in the United States, according to the Centers for Disease Control. The nutrient is needed by the body and is a component of hemoglobin. Hemoglobin is contained in red blood cells, which the human body needs to carry oxygen.
English Summaries (02-03/2018)

Co-operation and psychological information enhance the functioning of students with ADHD symptoms at school

Behavioral problems in school are common and especially prevalent among students who have attention deficit hyperactivity disorder (ADHD). Increased multi-professional co-operation, along with preventative evidence-based interventions, is needed to address the needs of students with problem behavior in school. Previous literature has shown that effective support for students with ADHD symptoms can be integrated into mainstream classroom activities. The purpose of this study was to investigate the efficacy of the Check in – check out (CICO) intervention on problem behavior in Finnish schools piloting ProSchool school-wide positive behavior support. The key features of CICO are brief morning and afternoon meetings with an adult, the use of a daily point card, regular positive feedback during the day, and parental involvement. Using a single-case experimental design with two students, we examined the effects of the CICO intervention on problem behavior, and the fidelity and acceptability of the intervention. Direct observation data obtained from external observers showed a decrease in problem behaviors and an increase in appropriate behaviors during the CICO intervention for both students. CICO was implemented with high fidelity, and its acceptability among school personnel, students and parents was excellent. The results indicate that effective behavior support for students with disruptive behaviors can be easily applied in general education classrooms.

Keywords: behavior, intervention, intensified support, observation study, single case design, ADHD

Psychosocial problems among immigrant youth in the Helsinki metropolitan area

Due to increasing immigration, a growing number of Finnish youths or their parents are immigrants.
It is therefore more important than ever to study the psychosocial well-being of immigrant-background youths, but so far, studies on this topic are scarce. Our aim was to study whether the psychosocial problems reported by immigrant-background youths differed from those reported by non-immigrant-background youths. We also studied the development and permanence of the problems. Psychosocial problems were measured using the SDQ questionnaire. We utilized the Metropolitan Longitudinal Finland (MetLoFIN) data, gathered from seventh-graders (N = 9,497), ninth-graders (N = 7,738) and second-year high school and vocational school students (N = 8,461) in 14 municipalities in the Helsinki metropolitan area in 2011–2016. Immigrant-background youths, especially boys, reported a slightly higher number of psychosocial problems than non-immigrant-background boys. Internalizing problems developed in the same way in all youths, while the trajectories of externalizing problems in immigrant-background youths differed from those in non-immigrant youths. The problems proved to be comparatively permanent regardless of the youth's background. To our knowledge, this is the first large-scale longitudinal study of immigrant-background youths' psychosocial problems in Finland.

Keywords: immigrant, psychosocial problems, internalizing problems, externalizing problems, SDQ, longitudinal

The associations of participation in extracurricular activities, school type and gender with adolescents' subjective well-being in the transition from primary school to lower secondary school

The aim of this study was to investigate the role that participating in extracurricular activities and changing from one school to another play in the changes of adolescents' subjective well-being, and whether this role differs depending on the adolescent's gender.
The data consisted of 848 adolescents who answered questions on their extracurricular activities in the fall of grade 6 and questions on subjective well-being in the fall of grade 6, the fall of grade 7, and the spring of grade 7. The results showed that adolescents’ self-esteem decreased and depressive symptoms increased after adolescents had transitioned to lower secondary school. Lack of extracurricular activities also acted as a risk factor regarding adolescents’ subjective well-being. Depressive symptoms increased and life satisfaction decreased during the school transition especially among those adolescents who did not participate in any extracurricular activities. Changing to another school was related only to the development of self-esteem: self-esteem increased from the fall of grade 6 to the fall of grade 7 among students who changed schools during the transition from primary school to lower secondary school, whereas the self-esteem of students who stayed in the same school did not change. Adolescent gender also played a role in the associations between participation in extracurricular activities, changing from one school to another and subjective well-being. In particular, the self-esteem of boys who changed to another school was higher than the self-esteem of girls who changed schools. In addition, participation in structured activities was especially important for boys’ life satisfaction, whereas overall lack of extracurricular activities (either structured or unstructured) was a risk factor for girls’ life satisfaction. 
Keywords: school transition, extracurricular activities, structured activities, unstructured activities, changing to another school, gender, subjective well-being, early adolescence

Achievement goal orientations, education-related personal goals and academic achievement among sixth-graders

We examined what kinds of achievement goal orientation profiles can be identified among sixth-graders (N = 745) and how students with different profiles differ with respect to their education-related personal goals, goal appraisals (i.e., commitment, effort, progress and stress) and academic achievement. By utilizing a person-oriented approach and K-means cluster analysis, groups of students with different motivational profiles were identified. The open-ended answers concerning personal goals were categorized by means of qualitative content analysis. Group differences in education-related goals, goal appraisals and academic achievement were examined by means of cross-tabulations and analyses of variance. Four achievement goal orientation profiles were identified: performance-avoidance-oriented (30%), mastery-oriented (25%), success-oriented (22%) and avoidance-oriented (23%). Mastery- and success-oriented students reported the highest academic achievement, as well as commitment and effort related to their educational goals. Performance-avoidance- and success-oriented students appraised their goals as more stressful than mastery-oriented students did. The results indicate that there are differently motivated sixth-graders who also differ with respect to their goal appraisals. Mastery- and success-oriented students have positive goal appraisals, but a strong performance focus seems also to entail goal-related stress.

Keywords: motivation, achievement goal orientations, personal goals, education-related goals, academic achievement

Does effort explain group differences in performance? Log data analysis of the relation between self-reported effort, working time as a measure of invested effort, and performance in mathematical tasks

Studies conducted in educational settings often contain cognitive measurements which have no consequences for the participating pupils. Some students do not put their best effort into the testing situation, which leads to underestimation of their competence level. In this study, we tested a hypothesis about the role of effort in explaining the gender gap often observed in educational assessment studies with 15-year-old ninth-graders. We expected that effort as measured by log data analysis of computer-based testing would also explain the increasing performance gaps regarding students' support needs after controlling for initial performance differences in seventh grade. We tested the hypotheses by fitting structural equation models to the MetrOP data consisting of circa 7,000 students. The results indicated that in the tasks measuring mathematical thinking, girls performed slightly better at the baseline measurement and the gap increased during lower secondary education. The increase was fully explained by differences in self-reported effort and log data analysis of time investment. Students with support needs had a lower baseline level and the gap increased more; this was only partially explained by effort. Time investment was a much stronger predictor of performance than self-reported effort. It was concluded that log data enable more accurate measurement of effort and task behavior, and these can partly explain observed performance differences at both the individual and group level.
Keywords: log data analysis, self-reported effort, time investment, mathematical performance, changes, time-, gender- and special educational gaps

Executive function behaviors in preschool children with difficulties in language or social skills

Executive functions (EF) are essential for learning and predict school success even better than academic skills. During the preschool year, EFs can be evaluated by questionnaires in the everyday daycare environment. The aim of this study was to examine how EF features are associated with difficulties in language and/or social skills identified during the preschool year. The study is part of the standardization study of the Attention and Executive Function Rating Inventory for Preschoolers (Attex-P). The participants of this study (n = 323) were divided into four subgroups according to the Taito questionnaire: a language skills difficulties group (n = 21), a social skills difficulties group (n = 39), a language and social skills difficulties group (n = 8) and a control group (n = 255). EF behaviors were assessed with 44 items included in the Attex-P. Compared to the control group, children with language and/or social skills difficulties had more EF problems in everyday situations in preschool. Children with language difficulties had problems with inattention and taking initiative, while children with social skills difficulties had more dysfunctions in inhibition. Children with both language and social skills difficulties had more wide-ranging difficulties in EF behaviors than the other groups. Thus, according to EF ratings, children with different developmental difficulties showed diverging features of EFs that need to be acknowledged during the preschool period. Using questionnaires enables gathering information about a child's everyday capability. This in turn serves as a good basis for planning the support needed in preschool and at school.
Keywords: executive function, language skills, social skills, questionnaires, preschool

Associations between socio-digital participation, sleep quality and school well-being among 6th-graders

The aim of this study was to examine the relationships between socio-digital participation (SDP), sleep quality and school well-being among 6th-graders. More specifically, it examined how socio-digital participation, i.e. technology-mediated social practices and digital gaming, was associated with 6th-graders' sleep quality, school burnout and school engagement. Further, we examined how these differ across genders. In addition, the mediating effect of sleep quality between SDP and school burnout and school engagement was examined. This study was part of the Mind the Gap research project, and the data were collected from 6th-graders in Helsinki in spring 2013 (N = 749). Results suggested that active friendship-driven SDP and playing action games were associated with poorer sleep quality among girls. Among girls, active media consumption was associated with lower school engagement, and active knowledge creation with higher school engagement. Boys who consumed media actively reported poorer sleep quality, and boys who actively played action games reported school burnout. Poorer sleep quality was associated with school burnout and lower school engagement in both genders. Among girls, sleep quality partly mediated the association between friendship-driven SDP and inadequacy as a student, and the association between friendship-driven SDP and exhaustion. Among boys, sleep quality did not mediate the association between SDP and school well-being.

Keywords: socio-digital participation, social media use, digital gaming, sleep quality, school burnout, school engagement

Making the invisible observable: Cluster analysis of learning an abstract phenomenon by augmented reality

The role of digitalization in teaching is rapidly growing.
In addition, adapting augmented reality (AR) is becoming more and more common at school, and therefore its study within formal and informal learning contexts is important. The participants of the AR study were 146 sixth-graders. They used the AR at the science center. By using cluster analysis with self-organizing maps (SOM), the aim was to identify subgroups of the students and supplement earlier results. The students using AR in science learning were clustered based on reasoning, motivation and science knowledge results. Earlier it had been noticed that after the AR experience, science test results generally improved, with the biggest gain among the students with the lowest achievement. The cluster analysis supplemented this by identifying a majority group of boys in which the students were especially interested in science learning both at school and at the science center using AR. In spite of low school achievement, their high motivation led to good science learning results subsequent to the exhibition. The earlier results, according to which the girls closed the science knowledge gap with boys after using AR, became more relative, as two girl-dominated subgroups were identified. In one group the students were motivated, but wrong answers increased; in the other, the students were highly uncertain, and after the AR experience there was no change. Possible reasons for the results were considered on the basis of motivation and concept formation theories. The clustering results complemented earlier findings of AR gains in learning, as an unexpected response to intervention was discovered by the non-linear analysis.

Keywords: augmented reality, SDT motivation theory, informal learning, SOM cluster analysis, science education
The lack of a consistent signal inside the office or home means that users experience dropped calls and slower data speeds. The answer to this problem seemed to be femtocells. A femtocell is a small device, similar in appearance to an internet router, that picks up the wireless call and sends it over the user's broadband network rather than the mobile phone network. Femtocells were also seen as a way for mobile network operators to improve speeds and reduce the overall load on their networks. The reality is that femtocells have been deployed in much lower numbers than was predicted by analysts and mobile providers. ABI Research today issued a notice that it had reduced the expected number of femtocells that will ship this year by 55%. The research firm blames the reduction in its estimates on the fact that mobile operators have been slower to adopt the technology than expected. ABI now estimates that 350,000 femtocells will have been shipped by the end of the year. ABI's Aditya Kaul said, "Even femtocell vendors are a bit surprised that the operators haven’t pushed femtocells as much or as soon as expected. We expect that deployments in 2010 will pick up but will be slower than expected – our data suggests about a 40% reduction on previous estimates." As for why carriers are slow to adopt the technology, the reason is unclear. ABI reports that some feel femtocells have yet to prove their worth. Others believe that the poor economy is contributing to the slow adoption. The economy is a plausible reason: femtocells require users to pay more each month for a better signal, and a strong signal is something that many users expect their carrier to provide for the money they already pay. Kaul said, "We still believe in this market’s potential. We anticipate that by 2014, shipments will only be about 10% lower than our previous estimates. The drivers are real, but it will take longer than anticipated.
Next year will be critical: if conditions don’t improve by the end of 2010, some smaller vendors may find themselves in trouble." A report in March noted that femtocell deployment was slow due to the economy; at the time, the number of femtocells shipped was expected to climb to over a million units. The launch of femtocells from Verizon was expected to happen in 2008 but never really materialized. Verizon then said that it would launch femtocells in early 2009, along with handsets that would support them. Verizon is currently offering femtocells to its customers for an extra $10 to $20 per month added to the phone bill.
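For context, a 55 percent cut that leaves 350,000 units pins down the size of ABI's original forecast. A quick sketch of that arithmetic:

```python
# Working backwards from ABI's revised 2009 figure: a 55% reduction
# leaving 350,000 units implies the original forecast's size.
revised_units = 350_000
reduction = 0.55

original_forecast = revised_units / (1 - reduction)
print(f"Implied original forecast: ~{original_forecast:,.0f} units")
```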
Recipe by Aleigh Michelle

My compost rule is this: no citrus. It takes forever for those rinds to break down, and besides, some of those compost microbes really don't enjoy the citrus. Why can't citrus be composted? Citrus fruits (and, randomly, onions) are not compost-friendly because of their acidity. Why does acidity matter? The acidity and natural chemicals in citrus peels are harmful to your compost bin's friendly microorganisms and worm digesters, and can effectively slow down the rate of decomposition in the compost. How to use your citrus peels? Enter: citrus salt! With our recipe editor's help, we've devised a fancy gourmet way to use those citrus peels and reduce waste. You can create this recipe before you've eaten your oranges, juiced your lemons, or whatever else you're doing with that juicy citrus. Or you can save those peels to make this recipe later. Check out the recipe below to create your own citrus salt for cocktails, seasoning and more:
- 1/4 cup Maldon sea salt
- Grated skin of 1 navel orange
- Grated skin of 1 lemon
Directions:
- Preheat oven to 250 degrees. Combine the grated citrus peel and Maldon salt in a bowl.
- At this point you can use the fresh citrus salt to line a glass for a nice mocktail or cocktail. Alternatively, place on a baking sheet and bake for 15 minutes until the grated peel is dehydrated.
- Store in an airtight container and use as a topping for salads, avocado toast, soups, and more!
Watch our (easy peasy, lemon squeezy, anyone?) 30-second recipe tutorial. Add additional herbs to come up with your own flavor profiles. Most importantly, have fun! Fight food waste! Citrus Salt Pairing Suggestion: Try it with an easy Avocado Mash + Sea Salt or Jalapeño Lime Pulp Chips. Yum. About Pulp Pantry: Pulp Pantry turns overlooked resources like upcycled vegetable juice pulp into wholesome everyday snacks that make it convenient and delicious to eat more servings of vegetables and fiber.
Pair them with dips, top them on your salad, or dig in as an afternoon snack. A delicious, satisfying, crunchy, nutritious snack for every day (hello, extra veggies and fiber). With 5g of fiber & prebiotics in each serving, Pulp Chips are the best go-to snack to satisfy cravings and hunger.
American bittern (Botaurus lentiginosus)

Botaurus is a genus of bitterns, a group of wading birds in the heron family Ardeidae. The genus name Botaurus was given by the English naturalist James Francis Stephens, and is derived from Medieval Latin butaurus, "bittern", itself constructed from the Middle English name for the Eurasian bittern, Botor. Pliny gave a fanciful derivation from Bos (ox) and taurus (bull), because the bittern's call resembles the bellowing of a bull. The genus has a single representative species in each of North, Central and South America, Eurasia, and Australasia. The two northern species are partially migratory, with many birds moving south to warmer areas in winter. The four Botaurus bitterns are all large, chunky, heavily streaked brown birds which breed in large reed beds. Almost uniquely for predatory birds, the female rears the young alone. They are secretive and well-camouflaged, and despite their size they can be difficult to observe except for occasional flight views. Like other bitterns, they eat fish, frogs, and similar aquatic life.
- American bittern, Botaurus lentiginosus
- Eurasian bittern, Botaurus stellaris
- Pinnated bittern or South American bittern, Botaurus pinnatus
- Australasian bittern, Botaurus poiciloptilus
- Botaurus hibbardi (fossil)
Pesticide Guide: The Dirty Dozen and The Clean Fifteen

Are you interested in lowering pesticide exposure in your diet, but you aren't sure where to start? The Environmental Working Group, a non-profit organization that advocates for policies that protect global and individual health, has produced a guide that can help you get started. Based on the analysis of 87,000 tests for pesticides on the 47 most popular fruits and vegetables, the EWG has found that people can lower their pesticide exposure by almost 80 percent by avoiding the twelve most contaminated fruits and vegetables and eating the least contaminated instead. The Shopper's Guide to Pesticides contains the latest and most up-to-date information about which fruits and vegetables have the highest residues on them and therefore should only be consumed certified organic. It also gives you insight into which fruits and vegetables are on the lower end of the spectrum, making them safer to buy conventionally grown. There is a common misconception that rinsing and/or peeling fruits and vegetables will remove pesticide residues; in nearly all the studies used to make this list, the produce was tested after it had been rinsed or peeled. Every year, new research is published confirming the toxicity of pesticides to human health and the environment. Certified organic fruits and vegetables are by definition grown without the use of synthetic pesticides, but some find the expense of buying all their produce organic prohibitive. Eating organic doesn't need to be an all-or-nothing approach. The "Dirty Dozen" carry the highest pesticide load, making them the most important to buy or grow organic. The "Clean 15" carry the lowest pesticide load, making them the safest conventionally grown crops to consume.
The following list is a summary of the Environmental Working Group's pesticide wallet guide that you can carry with you to the grocery store until you become more familiar with which produce is safer to buy organic:

Dirty Dozen – Buy These Organic
- Bell Pepper
- Grapes (Imported)

Clean 15 – Lowest in Pesticides
- Sweet Corn
- Sweet Peas
- Sweet Potato
When your gums bleed easily after brushing or flossing, it's usually a sign of an underlying oral health problem, such as gum disease. According to the American Dental Association, an estimated 100 million Americans don't go to the dentist every year, even though regular dental exams and good oral hygiene can prevent most dental diseases. The following are 10 signs that let someone know if they have good oral hygiene.

Pain inside the mouth is a sign of a problem. Pain can point to a damaged or decayed tooth. It may also indicate that you have gum disease: both gingivitis and periodontitis can cause gum tenderness and pain. A pain-free mouth is a good sign that you are practicing good oral hygiene. Everyone should be aware of their dental health. Toothaches are considered by many to be among the worst pain one can suffer; sleepless nights, dental visits, and often round-the-clock discomfort are par for the course. Fortunately, all of this can be avoided simply with a good oral hygiene routine.

Firm, pink gums are a good indication that your mouth is healthy. Swollen, bright red gums are an indication that a person might have gum disease, such as gingivitis, while pale gums can be a sign of anemia. Gums that don't bleed when brushing or flossing are also an indicator of a healthy mouth. A healthy tongue is one that is pink and covered with taste buds. In addition to a lower risk of oral complications, excellent oral health means a healthier body overall, as the health of the mouth and the body are closely linked. And with Australian government data showing that dental hygiene has improved over the past 30 years, there's no excuse not to have clean, beautiful teeth. 
When asked to comment on the number of people with poor oral hygiene, despite the Australian government releasing figures showing 30-year highs in standards, celebrant Jermaine Clarke said: “In my role, I work with couples who are thrilled to jump into the next chapter of their lives. But when they feel self-conscious about smiling to take pictures, or can't enjoy their food due to tooth sensitivity, it's clear that a visit to a professional dentist could yield great rewards.”

According to the American Dental Association, you should have examinations and cleanings at regular intervals specified by your dentist. Many people need a professional cleaning every six months to keep their teeth and gums healthy. However, if you're prone to tooth decay or gum disease, your dentist may need to see you more often. For example, it's common for people with gum disease to see their dentist every three to four months, because oral bacteria repopulate more quickly in some people. Ask your dentist what type of cleaning schedule is right for you.

Chronic bad breath that doesn't go away with brushing is a sign of an oral health problem that needs treatment. If your gums bleed after brushing or flossing, it may be a sign of other oral health problems; during your usual flossing and brushing routine, your gums should never become irritated or bleed. A good oral hygiene routine isn't limited to brushing, flossing, and using mouthwash; it also includes regular dental visits to make sure everything in your mouth is working properly. There are many options to choose from, and each one claims to be the best for your oral health. But how do you know if you are practicing good oral hygiene? Signs of good oral hygiene are easy to spot once you know what you're looking for. 
It's unhealthy to have bad breath, which is usually due to an overgrowth of oral bacteria, poor oral hygiene, or an underlying problem in the mouth or within the body itself. Stained, chipped, or cracked teeth may indicate that you are neglecting some of your oral hygiene tasks. Normally, the body's natural defenses and good oral health care, such as daily brushing and flossing, keep bacteria under control. Gum swelling usually occurs when poor oral hygiene allows bacteria to build up along the gum line. It is essential to maintain a regular oral hygiene routine at home, brushing twice a day and flossing once a day. This will help you understand any oral hygiene improvements you need to make so that you can have the healthiest mouth possible. If you or a loved one has any of the conditions listed above, ask your dentist how to promote and support overall health through proper oral hygiene.
A college campus is often associated with political correctness and liberalism. We are told that prejudging people based on their ethnicity, gender, socio-economic class or other categories is immoral, narrow-minded and ignorant. When you need help with a math problem, is it your first instinct to go to the sorority girl or the football player for help? Probably not, even though many sorority girls and football players are quite capable in math. You’ll go to the kid who always carries a calculator and raises his hand in class. Does that make you immoral, narrow-minded and ignorant? Absolutely not. With the limited amount of time we have and the immense number of people there are, stereotyping is an efficient way to categorize crowds of people. Ideally, it would be nice to get to know everyone as individuals, but that’s just not feasible. We resort to stereotypes because they’re the next best thing. You sort of know every person based on their race or occupation or the kind of clothes they wear through the preconceived notions that are thrown at us by the media and what we acknowledge as the generalized truth. It might sound awful at first glance, but there’s some merit to such generalizations. Isn’t it better to sort of know a person than to not know them at all? Stereotypes are a bit of an equalizer. No matter who you are, everyone is always being generalized, in both negative and positive ways. It’s something everyone in the entire world has in common. As with most things in life, it does impact some more harmfully than others, which is a shame but inevitable given our hierarchical nature. Additionally, stereotyping allows us to face issues rather than ignore them. If a certain demographic group maintains a dire stereotype, there evidently is some truth to it that needs to be addressed. Stereotypes can be a call to action. Furthermore, they are often funny and entertaining, and jokes tend to diminish the weight of a topic. 
The more of a joke a stereotype becomes, the more clichéd it gets, and we all get bored with clichés eventually. That boredom makes us look for originality and give credit to people who defy the stereotypes placed on them. Alternatively, there are also plenty of positive stereotypes that encourage people to fit into them and that, in turn, work constructively for that person. You can hardly call that immoral. Conversely, discriminating against people based on their demographic group is all of those adjectives and then some. If an employer doesn’t hire someone because their name is DeAndre or Mohammed, that is clearly racist. However, if DeAndre didn’t get the job because he arrived at the job interview dressed as a stereotypical black man, DeAndre’s lack of sense is at fault. When people fit many of the stereotypes their demographic group is linked with, their lack of individuality deserves the stereotyping they are bound to receive from others. For instance, if you dress like a stereotypical hippie, act like a hippie and talk like one, people will call you a hippie. Criticize it all you want, but you made it that much easier for people to stereotype you. Stereotyping isn’t as evil as it’s portrayed to be. As long as it doesn’t close your mind to other dimensions and possibilities, it’s a useful organizational tool. And of course, there are always those people who can’t look past stereotypes, but don’t fret: those people aren’t ever worth your breath complaining about. Kernogitski is a member of the class of 2013.
A heat-reflecting, futuristic supermaterial that looks like a roll of plastic wrap could one day cool both houses and power plants without using any energy, according to a new study. Unlike solar panels, the material keeps working even after the sun sets, with no additional electricity. And the plastic wrap is made of cheap, simple-to-produce materials that could easily be mass-produced on rolls. “We feel that this low-cost manufacturing process will be transformative for real-world applications,” Xiaobo Yin, a mechanical engineer and materials scientist at the University of Colorado Boulder, said in a statement. When radiation, such as sunlight, hits an object, different wavelengths of light can be reflected, transmitted or absorbed, depending on the properties of the material. For instance, black-colored materials, such as asphalt, tend to absorb most incoming visible light, while pale or shiny objects tend to reflect that light. Yin said he and his colleagues wondered whether they could manipulate the movement of light through a material so that the substance would efficiently cool objects passively, without using electricity. To do so, they looked to a giant: Earth, which on clear nights cools itself by radiating infrared light out into the cosmos. The catch is that Earth heats up tremendously during the day as incoming rays of sunlight bombard the planet. However, the team suspected there was a way to harness radiative infrared cooling while simultaneously deflecting incoming rays from the sun, Yin said. The team devised a three-component metamaterial whose base layer is a sheet, slightly thicker than aluminum foil, made of the see-through polymer polymethylpentene. The researchers then randomly interspersed minuscule glass beads throughout the material and coated the bottom with a thin layer of reflective silver. The glass beads were just the right size to induce a quantum effect known as phonon-polariton resonance. 
This effect occurs when a photon, or light particle, in the infrared spectrum interacts with vibrations in the atoms of the glass. The researchers found that when sunlight hit the top of the material, the glass beads and shiny silver bottom of the material scattered the visible light back out into the air. Meanwhile, infrared radiation passed from the bottom out through the top of the material, allowing whatever was beneath the material to cool off, the investigators said. In total, about 96 percent of the sunlight that hit the material bounced back off, the researchers reported on Feb. 9 in the journal Science. When the researchers tested the material in the field, they found that it created a cooling effect equivalent to about 110 watts per square meter over a 72-hour period and up to 90 watts per square meter when facing direct sunlight at high noon, the scientists said in a statement. That’s about the same amount of power as is produced by a typical solar panel in those time periods. (The material passively cools, but does not actively provide power like a solar panel does). “Just 10 to 20 square meters [107 to 215 square feet] of this material on the rooftop could nicely cool down a single-family house in summer,” study co-author Gang Tan, a civil and architectural engineering professor at the University of Wyoming, said in a statement. The new material could also be used to cool off thermoelectric power plants, which currently use water and energy to keep machinery cool, the researchers said. In addition, the new material could increase the lifetimes and improve the operating efficiencies of solar panels, which often get too hot to work efficiently, the scientists said. “Just by applying this material to the surface of a solar panel, we can cool the panel and recover an additional 1 to 2 percent of solar efficiency,” Yin said. “That makes a big difference at scale.” This article was posted on livescience.com
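The rooftop claim quoted above is easy to sanity-check with simple arithmetic. A minimal sketch, using the article's reported 110 watts per square meter and the suggested 10 to 20 square meters of coverage (the window-air-conditioner comparison point is our own rough assumption, not from the study):

```python
# Figures from the article: ~110 W/m^2 average radiative cooling, applied
# to the suggested 10-20 m^2 of rooftop coverage. The comparison point
# (a small window air conditioner delivers roughly 2,000-3,500 W of
# cooling) is our own rough assumption.
COOLING_W_PER_M2 = 110

for area_m2 in (10, 20):
    watts = COOLING_W_PER_M2 * area_m2
    print(f"{area_m2} m^2 -> {watts} W of passive cooling")
# At the 20 m^2 upper end this is about 2,200 W: comparable to a small
# window air conditioner, but with zero electricity input.
```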
Depth of Knowledge List 5: Terms in this set (15)
- to put in the proper state, to fix, prepare, or bring into existence by shaping or changing material, combining parts, etc.
- to figure out the extent, dimensions, quantity, capacity of, as by a comparison with a predetermined standard
- to produce a type, or version of a product, usually on a smaller scale
- to arrange methodically or suitably, as by sequence, chronologically, etc.
- to systematically coordinate parts into like groups or logical order
- to take or have a part or share as part of a larger group
- to put into, or earn, a specified standing with relation to others through examination, competition, position, situation, or relation
- to get ready, to put in proper condition
- to determine, indicate, or express the quantity of, to make explicit the number of items
- to ask or inquire
- to set down in writing or some permanent form, as for the purpose of preserving evidence
- to refuse to accept, to rebuff, to cast out of, eject
- to make less complex or complicated, to make plainer or easier
- to present or perform, to guide, escort or usher, to explain or make clear; make known
- to compose and produce in words or characters, to express or communicate in a written form
This project investigated the impact of different settings (structured vs. unstructured) on the engagement and learning attitude of young children in a makerspace, using a longitudinal approach (multiple visits to a makerspace by the same children). More specifically, we offered children two alternative settings:
- a structured, school-like setting, in which the facilitator directs the children through a game or a story with an explicitly declared goal to be achieved. These take the form of specific activities that have to be followed in a rather strict logical sequence.
- an unstructured, makerspace-like setting, in which children can freely perform the same activities on their own initiative, with guidance provided only on request.
We wanted to see how children of this age choose one or the other of these settings and how and why they migrate from one to the other. Professionals from the Hatch Atelier makerspace worked alongside academics from the Institute of Sociology in providing three series of five workshops for children (6–8 years old). The project involved the use of different tools and activities found in a makerspace, including the creation of 3D printed objects and playing with and constructing modular robots (MOSS robotics) and games (Scratch, Kerbal Space Program, Universe Sandbox) as a support for learning. The project offered insight into how young children actually engage in such a space and explored the challenges that professionals of these spaces face when working with this age group.

Aims of project
Through the involvement of children 6–8 years old in makerspace-type workshops focusing on the idea of space exploration (Space Academy), the project aimed to:
- Introduce children to STEM subjects (space, robots).
- Develop children’s digital skills (by using video games, and by using digital devices to document their projects or for online searching and communicating). 
Develop children’s communication skills (children are encouraged to report on and present their work not to the facilitators, but to colleagues who play the role of ‘reporters’).

Research questions
- How do children with various socioeconomic backgrounds engage with digital, traditional and mixed technologies in a makerspace setting?
- How do girls and boys approach digital creation (in-game or with robots) vs. traditional creation (arts and crafts)?
- How does long-term engagement in a makerspace setting influence children’s interests and literacy in digital technologies?

The Romanian team planned to provide three series of workshops (each around nine meetings) in three schools, with groups of children from different socioeconomic backgrounds. Each group was planned to consist of 10 children 6–8 years old. The venue of the workshops was planned to be in schools (all of them in Bucharest), in a mobile pop-up makerspace setting.

The participant schools and the schedule of the workshops
- Pilot study at Ferdinand School: a series of six meetings, first three weeks of October 2017.
- Romanian-Finnish School (private school), October–November 2017, 12 workshops.
- School No. 136, in partnership with The Alternative Education Club, February 2018 (nine workshops).
- Ferdinand School, March 2018, seven workshops.
The workshops took place three times a week, for 2–3 hours each session.

Differences among the three groups’ workshops
In the first group, the workshops lasted the longest (around three and a half hours each), whereas in the other schools they lasted around two hours. The meal arrangements differed in each setting (the private school offered a snack to the children in the middle of the workshop). The main series at the Ferdinand School took place in the ICT room, with all children having access to a computer if they wished (this differed from the other two series, where only three laptops were available for the children). 
The workshops of the second group were the only ones where educators or school staff were present in the classroom during the activities. At School No. 136, a carer from the Alternative Education Club was present most of the time in the classroom. She did not get involved in the activities, but helped maintain discipline.

The technology used and the activities
- KerbalEdu (Kerbal Space Program) is a space flight simulator that allows users to build their own rocket and launch it. The rocket is built from a variety of elements that are realistically designed and proportioned. The game realistically simulates the laws of physics. We chose it to introduce children to physics, technology, and engineering.
- Universe Sandbox is a video game that allows children to simulate the creation of a universe by adding various types of planets, stars and black holes and observing their interactions. This was used only with the first group. The last group also played Minecraft (as an unplanned activity, using their own Minecraft accounts) and used Reddit (their favourite).
- Cubelets modular robots. These use three types of cubes (sensing, acting and programmable blocks) and brick adapters to link the robots with Lego blocks. The Cubelets were considered a good introductory robotics challenge (basic input/output). During the workshops we didn’t use the programmable affordances.
- Arts and crafts (plasticine, drawing, beads).
Digital cameras were provided to the children so that they could document their activities and communicate throughout.
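The basic input/output behaviour of the Cubelets mentioned above can be pictured with a toy simulation. This is purely illustrative: the class names and the linear mapping are our own simplification, not Modular Robotics' actual block behaviour. A sense block publishes a value between 0 and 1, and an act block turns that value into a motor speed.

```python
# Toy model of a Cubelets-style sense -> act chain. Purely illustrative:
# the classes and the linear mapping are our own simplification, not the
# real Cubelets firmware.

class DistanceSenseBlock:
    """Publishes a value in [0, 1]: a closer obstacle gives a higher value."""
    def __init__(self, max_range_cm=100):
        self.max_range_cm = max_range_cm

    def read(self, obstacle_cm):
        clamped = min(max(obstacle_cm, 0), self.max_range_cm)
        return 1.0 - clamped / self.max_range_cm

class DriveActBlock:
    """Maps the incoming value directly to a motor speed in [0, 100]."""
    def actuate(self, value):
        return round(100 * value)

sense, drive = DistanceSenseBlock(), DriveActBlock()
for distance in (10, 50, 90):          # obstacle distances in cm
    speed = drive.actuate(sense.read(distance))
    print(f"obstacle at {distance} cm -> motor speed {speed}")
```

Chaining blocks like this, with no program written, is what makes the kit accessible to 6–8 year olds: the "program" is the physical arrangement of cubes.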
BRIDGETOWN, Barbados, CMC – A new species of marine mite found in Caribbean coral ecosystems has been named after the singer Jennifer Lopez. Researchers found the new species in the Mona Passage off the coast of Puerto Rico. The Pontarachnid mite represents a common but still unstudied group of marine animals. Vladimir Pesic, the lead author of the article in the journal ZooKeys, explained that the reason behind the unusual name was that the Puerto Rican singer’s songs and videos kept the team in a continuously good mood while writing the manuscript and watching the 2014 World Cup. The new mite species was collected from a depth of nearly 70m, the greatest depth from which Pontarachnid mites have been found to date. Mesophotic coral ecosystems (MCEs), like Bajo de Sico where the new species was found, are light-dependent habitats dominated by macroalgae, sponges and scleractinian corals, and are found on the insular and continental slopes of Caribbean islands between 30 and 100m.
'Deep learning' software automatically detects diseases

University of Saskatchewan PhD student Yi Wang has developed software that delivers higher image quality and faster diagnosis. It improves current computer-aided diagnosis (CADx) technology, which assists doctors in detecting diseases from medical imaging scans such as ultrasound, computed tomography (CT) and retinal fundus imaging, which captures photos of the back of the eye. Wang's software makes diagnosis faster: it takes less than 30 seconds, around 10 times faster than current systems. "Our software will help medical staff reduce the time they take to interpret medical images, so that they can provide better patient care," said Seok-Bum Ko, an electrical and computer engineering professor and Wang's supervisor. "Radiologists and doctors can use their saved time more efficiently for other important tasks." Wang has tested his software on detecting abnormal retinal blood vessels in the eye, a symptom common to diabetes and heart disease, and it was 97 per cent accurate at identifying abnormal vessels that needed further diagnosis. The results are published in the journal Computerized Medical Imaging and Graphics. Detecting blood vessels from retinal fundus imaging is often difficult. The images may end up blurred, so the vessels may be hard to identify. Also, doctors usually have to mark blood vessel patterns manually on the image to determine whether vessels are broken, a time-consuming process. Wang's software uses a state-of-the-art approach called "deep learning" that helps improve image classification and quality. "Deep learning relies on software algorithms that make the software automatically learn and analyze image patterns," said Wang, a student from China. "The idea is that the more images the software 'reads,' the better and more accurate it becomes at distinguishing healthy vessels from broken ones, so we may say it progressively 'learns.' This idea is at the core of all studies on artificial intelligence." 
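The "progressive learning" idea Wang describes can be illustrated with a toy example. This is not the study's fully convolutional network; it is a single logistic unit, trained by gradient descent on invented one-dimensional "pixel intensities," where dark pixels stand in for vessels and bright pixels for background:

```python
# Toy illustration of learning from labelled examples; NOT the paper's
# fully convolutional network. A single logistic unit learns to separate
# dark "vessel" pixels (label 1) from bright background pixels (label 0).
import math
import random

random.seed(0)
# Invented training data: vessel pixels are darker than background pixels.
data = [(random.uniform(0.0, 0.4), 1) for _ in range(200)] + \
       [(random.uniform(0.6, 1.0), 0) for _ in range(200)]
random.shuffle(data)

w, b = 0.0, 0.0  # the model's two learnable parameters

def predict(x):
    """Sigmoid output: probability that pixel x belongs to a vessel."""
    return 1 / (1 + math.exp(-(w * x + b)))

for _ in range(200):                 # repeated passes over the data
    for x, y in data:
        grad = predict(x) - y        # gradient of the log-loss
        w -= 0.5 * grad * x          # gradient-descent updates
        b -= 0.5 * grad

accuracy = sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)
print(round(accuracy, 2))  # approaches 1.0 on this separable toy data
```

The real system applies the same learn-from-labelled-examples loop, but with millions of parameters arranged in convolutional layers so it can operate on whole images rather than single intensities.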
To prove that his software works, Wang tested it on more than 130 images taken from a public database where diagnoses were already available, so that he could compare systems. His software proved to be two per cent more accurate than commercial counterparts. "Our software is a good tool to complement radiologists' and doctors' expertise, not to substitute it," said Ko. "There is a concern that this type of new 'intelligent' technologies will replace humans, like in science fiction. That is not the case, because we will always need people to make machines work." Wang and Ko, who have been awarded funding from the federal agency NSERC, are already teaching the software to detect lung and breast cancer from CT and ultrasound images respectively, with very positive results. "We are very excited about our detection system, and we are sure it will also make a change in medical teaching," said Ko. Future applications may include using the software to teach medical students how to recognize diseases from CT images. Ko tested it with researchers at Chonbuk National University, South Korea, in a class setting and said early results are very promising.

More information: Zhexin Jiang et al., "Retinal blood vessel segmentation using fully convolutional network with transfer learning," Computerized Medical Imaging and Graphics (2018). DOI: 10.1016/j.compmedimag.2018.04.005
A few months ago, a packed plane pulled into the gate at Reagan National Airport in Washington, D.C. As people were preparing to unfasten their seatbelts and collect their belongings, the pilot announced that we had had the honor of bringing home a fallen soldier along with his family. He asked everyone to show their appreciation for the soldier's service, which prompted thunderous applause. Then, out of respect for the soldier and his family, the plane went silent as they were ushered off the plane. It was poignant and somber. People were clearly moved. Some shed tears; others closed their eyes. Nobody complained about needing to get off the plane to catch another flight. Through the years, more than 1 million men and women have lost their lives protecting the freedoms and interests of the United States of America. Monday is Memorial Day: a day set aside to remember the men and women who died while serving our country. Formerly known as Decoration Day, it originated after the American Civil War to commemorate the Union and Confederate soldiers who died in the war. By the 20th century, Memorial Day had been extended to honor all Americans who have died in military service. Because of the generations of yesteryear who answered the call to serve, we enjoy living in the freest nation in the world. As a result, we can be a light to other nations and peoples, especially those nations where people's freedoms are suppressed. Freedom has never been free - it is costly. It has required sacrifice - being a prisoner of war, leaving behind loved ones to serve, even the supreme sacrifice of giving one's life. Remembering the great sacrifices that have been made in order for us to enjoy what we have today is what Memorial Day is about. In the midst of celebrating the beginning of summer, take time to help your family fully understand the freedom we experience on a daily basis. 
Encourage your children to think about sacrifices that have been made for freedom on their family's behalf and to appreciate what our current military does in more than 80 countries around the world. Here are a few ways to participate in honoring those who paid the ultimate sacrifice:
• Visit the National Cemetery and place flags or flowers on the graves of our fallen heroes.
• Visit the Veteran's Memorial Park in Collegedale.
• Fly the U.S. flag at half-staff until noon.
• Participate in a "National Moment of Remembrance" at 3 p.m. to pause and reflect upon the true meaning of the day.
• Pledge to assist the widows, widowers and orphans of our fallen dead and to aid the disabled veterans.
• Make care packages to send overseas.
• Write a thank-you note to someone you know who has served in the armed forces.
• Remember the men and women who serve and their families in your thoughts and prayers.
According to a recent poll, 80 percent of Americans don't know the meaning behind Memorial Day. We can change that by making sure we teach our children about this day and model what it means to appreciate those who have sacrificed on our behalf. Julie Baumgardner is president and CEO of First Things First. Contact her at firstname.lastname@example.org.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. 2005 September 19 Explanation: What are asteroids made of? To help find out, Japan's JAXA space agency launched the Hayabusa mission to rendezvous with asteroid Itokawa. Last week, the small robotic Hayabusa spacecraft arrived at asteroid Itokawa and stationed itself only 20 kilometers away. Although a long term goal is to find out how much ice, rock and trace elements reside on the asteroid's surface, a shorter term goal is to determine the mass of the asteroid by measuring the attraction of the drifting Hayabusa spacecraft. During the next few months, Hayabusa will also image and map asteroid Itokawa as it orbits the Sun. The above time-lapse image sequence was taken by Hayabusa upon final approach, showing the general oblong shape of the asteroid. In November, a small coffee-can sized robot dubbed MINERVA is scheduled for release and is expected to hop around the asteroid taking pictures. Also in November, Hayabusa will fire pellets into asteroid Itokawa and collect some of the debris in a return capsule. In December, Hayabusa will fire its rockets toward Earth and drop the return capsule to Earth in 2007 June. Authors & editors: NASA Web Site Statements, Warnings, and Disclaimers NASA Official: Jay Norris. Specific rights apply. A service of: EUD at NASA / GSFC & Michigan Tech. U.
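The mass-from-drift measurement described above boils down to Newtonian gravity: the tiny acceleration of the free-drifting spacecraft toward the asteroid reveals the asteroid's mass. A sketch of that relationship, run in reverse (the mass value below is the figure the mission later measured, roughly 3.5 × 10^10 kg, used here only as an assumed input; the 20 km standoff distance comes from the text):

```python
# Hedged sketch of "weighing" an asteroid from spacecraft drift.
# Assumed input: Itokawa's mass (~3.5e10 kg, the value later measured by
# the mission). The 20 km station-keeping distance is from the article.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_ITOKAWA = 3.5e10     # assumed asteroid mass, kg
R = 20_000.0           # spacecraft distance from the asteroid, m

accel = G * M_ITOKAWA / R**2   # Newtonian gravitational acceleration
print(f"{accel:.2e} m/s^2")    # a few nanometres per second squared
```

In practice the measurement runs the other way: mission navigators track how fast the drifting spacecraft accelerates toward the asteroid and solve the same equation for the mass M.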
The tariffs imposed on goods traded between the United States and China are reshaping the global economy, but not in the way the chief antagonist in that battle, US President Donald Trump, has predicted. While trade with China has fallen slightly, the statistics also show that imports to the United States from other developing economies are fast increasing. In other words, the White House’s nationalist trade policy is changing where the United States sources its imports, not growing production at home. Trump has defined trade deficits as the key metric in this battle, though economists would say that the balance of trade isn’t a useful metric on its own. Thanks to the new taxes imposed by the Trump administration on Americans who purchase Chinese goods, that metric has fallen quite a bit: in March 2019, the United States imported $20 billion more than it exported to China, the lowest deficit since March 2014. Still, in the first quarter of 2019, the United States purchased $80 billion more in goods and services from China than it exported there. The overall trade deficit hasn’t gone away, with US government data from 2018 showing a record-high US trade deficit of $891 billion. The reason is simple: US businesses looking to import cheap goods from abroad are simply turning to different markets. One obvious choice is Mexico, where the United States had a record-high trade deficit in March 2019, along with other advanced economies; imports from Germany and Japan hit record-high levels in March as well. As the Council on Foreign Relations’ Brad Setser pointed out, one of the biggest winners is Vietnam, which has seen its trade with the United States increase dramatically. 
While some at the US Treasury are examining the situation for signs that Vietnam is artificially devaluing its currency to be more competitive in global markets, Setser concluded that “the recent jump in its surplus (and the surplus of many other East Asian economies) is almost certainly the consequence of Trump’s tariffs on China.” Which makes sense, considering the country’s recent development trajectory as a kind of China in waiting, making everything from furniture to consumer electronics. Some Chinese companies have simply moved their factories into Vietnam in response to the new tariffs, while its participation in the Trans-Pacific Partnership, a global free trade deal, has integrated it more deeply into existing supply chains. It’s a similar story with Malaysia, also a TPP member. All this suggests that the White House’s zero-sum approach to global trade policy is changing the world—but not necessarily to the benefit of the United States.
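The sourcing shift the article describes is easy to express numerically. A toy sketch (all dollar figures below are invented for illustration, not the official trade statistics): rerouting imports from China to Vietnam shrinks the bilateral deficit with China while leaving the overall deficit exactly where it was.

```python
# Hypothetical figures (in $ billions) illustrating the article's point:
# tariffs reroute bilateral deficits without shrinking the overall one.
imports = {"China": 100, "Vietnam": 20, "Mexico": 80}
exports = {"China": 30, "Vietnam": 5, "Mexico": 60}

def overall_deficit(imp, exp):
    """Total goods deficit: everything bought abroad minus everything sold."""
    return sum(imp.values()) - sum(exp.values())

before = overall_deficit(imports, exports)

# Tariffs push $25B of sourcing from China to Vietnam.
imports["China"] -= 25
imports["Vietnam"] += 25

after = overall_deficit(imports, exports)
print(before, after)  # the overall deficit is unchanged
```

The bilateral China deficit falls (from 70 to 45 in this toy), which is the number the White House watches, but the economy-wide balance is untouched, which is the article's thesis.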
Accretionary and collisional orogenesis in the Tarim and North China cratons during Paleozoic time can be correlated with events associated with the assembly and subsequent incipient dispersal of Gondwana. Zircon U-Pb and Hf isotopic data from the northern margins of the two cratons and their neighbors have revealed comparable εHf(t)-time patterns. Zircons with magmatic ages of 500-400 Ma display a large spread and decreasing εHf(t) values with time, whereas 400-310 Ma zircons have dominantly positive εHf(t) values and an overall increasing trend. The marked shift of the zircon Hf array at ca. 400 Ma was most likely related to a major tectonic switch from advancing to retreating accretionary orogenesis, corresponding to the development of regional extension. The commencement of subduction at 500 Ma and establishment of an advancing accretionary orogen along the preexisting passive margin was synchronous with early Paleozoic continental collisional events along the southern margins of the two cratons. The temporal agreement of these events, and their accordance with collision and/or accretion events during Gondwana assembly, suggest that the Tarim and North China cratons likely collided with the northern Australian margin of East Gondwana at ca. 500 Ma. They subsequently dispersed from Gondwana in the Early Devonian, coinciding with the switch in accretionary tectonics along the northern margins of the two cratons, possibly induced by slab rollback of the subducting paleo-Asian Ocean plate.
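For readers unfamiliar with the notation, εHf(t) expresses a zircon's initial 176Hf/177Hf ratio as a parts-per-10,000 deviation from the chondritic uniform reservoir (CHUR) at crystallization time t. A sketch of the standard calculation (the decay constant and CHUR parameters are the commonly used literature values; the sample ratios in the example are invented for illustration):

```python
import math

# Standard reference constants (assumed, widely used literature values):
LAMBDA_LU176 = 1.867e-11      # 176Lu decay constant, per year
CHUR_HF = 0.282785            # present-day 176Hf/177Hf of CHUR
CHUR_LU_HF = 0.0336           # present-day 176Lu/177Hf of CHUR

def epsilon_hf(hf_ratio, lu_hf_ratio, t_years):
    """epsilon-Hf(t): deviation of the sample's initial 176Hf/177Hf from
    CHUR at time t, in parts per 10,000. Inputs are measured present-day
    ratios, corrected back to time t for in-situ 176Lu decay."""
    growth = math.exp(LAMBDA_LU176 * t_years) - 1
    sample_t = hf_ratio - lu_hf_ratio * growth   # sample ratio at time t
    chur_t = CHUR_HF - CHUR_LU_HF * growth       # CHUR ratio at time t
    return (sample_t / chur_t - 1) * 1e4

# Invented example: a 450 Ma zircon with slightly sub-chondritic Hf
print(round(epsilon_hf(0.282600, 0.015, 450e6), 1))
```

Positive εHf(t) values indicate juvenile, mantle-like sources, while negative values record reworking of older crust; that sign convention is what the shift in the zircon Hf array at ca. 400 Ma is tracking.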
Even mine vehicles such as the cage – an elevator-type device used to transport miners to and from the surface and considered a safety net for miners in case of emergency – could themselves be death traps. In early November 1910, 10 miners had boarded their cage at the No. 4 colliery near Lansford for their usual descent on the 7 a.m. shift. At about 200 feet, they noticed the cage was traveling unusually fast. It was in fact in freefall, dropping 500 feet in seconds before crashing. According to the Tamaqua Courier, the cage struck the platform at the bottom of the shaft, which covered about 45 feet of water, with "a terrible crash." "Had the cage gone through this (platform), which seems a miracle it did not, they all would have been drowned in a trap," the reporter said. Physicians from Lansford administered emergency aid as the men emerged from the mine shaft, before they were rushed to Panther Creek Valley Hospital on a special train. Seven of the men were badly injured, but amazingly no one was killed. The injured included Paul Sczponski, 27, laborer from Lansford, broken ankle; John Eames, 32, Summit Hill miner, dislocated thigh and leg fractures; John Gayton, 54, Summit Hill tracklayer, broken leg; John Coucklu, 24, Lansford laborer, foot fracture; Joseph Yaris, 21, Summit Hill laborer, ankle fractures; and Joseph Yonnes, 30, Summit Hill laborer, fractured thigh. The writer noted that, strangely, almost all the workers' injuries were to the right foot or leg, and that none suffered internal injuries. As to the cause, officials at LC&N in Lansford felt that the engineer made what was called an overwind. By the time the engineer realized that the marker that usually indicated the stopping point was misplaced, it was either too late to reverse the rapidly moving cage or "he got excited and could not work the lever properly." Another accident at the same colliery a few days later was even more tragic, since it claimed two lives.
The cause this time, however, was a miner who had had too much to drink. When a miner named Martin Starick of Lansford showed up at the No. 5 shaft drunk, the foreman sent him home. Instead of heading home, Starick opened the guard gate and walked to the mouth of the No. 5 shaft. He fell into the shaft, a distance of 480 feet, before striking the miner's cage at the bottom. The force of the fall broke the bonnet of the cage and struck another miner, Alex Samonski, who was waiting to be hoisted up after finishing his shift. Starick died instantly in the fall, and Samonski was rushed to the Panther Creek Valley Hospital, where he died a few hours later. Another man who was in the cage at the time suffered a scalp wound but survived. Six weeks after the first cage accident at the No. 4 colliery in Lansford, and just a week before Christmas, there was yet another shaft tragedy involving a cage, this time at the No. 10 colliery in Coaldale. When the engineer started to lower the cage, it refused to move because it had frozen to the side of the shaft. About 30 feet of cable had unwound. Before he could take up the slack, the ice in the shaft broke away, sending the cage plunging down the shaft. Fortunately the cable did not break, but the awful jar when the cage reached the end of the cable tossed the miners to the floor of the cage with such force that many suffered broken bones. Injured were George Welsh, 29, a laborer from Coaldale, broken back; William Tucket, 30, laborer, back injury; Bernard O'Donnell, 25, Coaldale miner, back injury and broken ribs; Benjamin Welsh, 25, Coaldale miner, broken leg; John Fisher, 27, Seek driver boss, spine injury; and David Yemm, 35, Coaldale mine foreman, fractured legs and spine injury. Immediately after being hoisted to the surface, the injured men were rushed to Panther Valley Hospital, which had been neatly decorated for the Christmas season by the Ladies Auxiliary.
The injured miners who limped or were helped into the hospital that day no doubt felt some warmth and comfort with the holiday surroundings. A large Christmas tree "loaded down with good things" was erected in each of the three wards. Holly also decorated the wards and corridors. "The good people of the Panther Creek Valley have not forgotten the poor unfortunates suffering in the hospital," the Tamaqua Courier noted. "Christmas will permeate every nook and corner of the cozy institution on Christmas Day." It certainly brightened the spirits of the six unfortunate men who were helped into the facility to have their broken bones set and other injuries treated. Next week: The push for mine safety
What’s new in pituitary tumor research and treatment? Research into pituitary tumors is taking place in many university hospitals, medical centers, and other institutions around the world. Doctors now have a better understanding of the genetic basis of pituitary tumors. This is already leading to improvements in genetic testing for people who are suspected of having multiple endocrine neoplasia, type I (MEN1) or other syndromes. This work is also shedding light on the characteristics of non-functioning adenomas, which may lead to new medical therapies for these tumors. Imaging tests such as MRI scans continue to improve, leading to better accuracy in finding and determining the extent of new and recurrent tumors. Surgical techniques are improving, allowing doctors to remove tumors with fewer complications than ever before. Radiation therapy techniques are improving as well, letting doctors focus radiation more precisely on tumors and limiting the damage to nearby normal tissues. Progress is also being made in the medicines used to treat both pituitary tumors and the side effects of some other forms of treatment. For example, growth hormone is now produced by DNA technology and has been approved for treating adults who don’t make enough growth hormone after treatment for a pituitary tumor. Doctors are looking to see if combining some of the drugs used to treat pituitary tumors (at lower doses) might work better than using a single drug for some types of tumors. Researchers are also studying some newer drugs. An example is lapatinib (Tykerb), a drug that targets a protein called HER2, which is found in large amounts on some fast-growing cells (including some pituitary tumor cells). This drug is already used to treat breast cancer, and it is now being studied for use against pituitary tumors. Other drugs are now being studied in clinical trials as well. Last Medical Review: 05/08/2014 Last Revised: 05/08/2014
Definitions for gallinago
This page provides all possible meanings and translations of the word gallinago.
Gallinago, genus Gallinago, Capella, genus Capella (noun)
Gallinago is a genus of birds in the wader family Scolopacidae, containing 16 species. This genus contains the majority of the world's snipe species, the other two extant genera being Coenocorypha, with two species, and Lymnocryptes, the Jack Snipe. Morphologically, they are all similar, with a very long slender bill and cryptic plumage. Most have distinctive displays, usually given at dawn or dusk. They search for invertebrates in the mud with a "sewing-machine" action of their long bills.
The numerical value of gallinago in Chaldean Numerology is: 9
The numerical value of gallinago in Pythagorean Numerology is: 6
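The two numerology figures quoted by the page follow a simple digit-reduction scheme: sum a per-letter value for the word, then repeatedly add the digits until a single digit remains. A minimal sketch in Python, assuming the standard Chaldean and Pythagorean letter tables (the page does not state which tables it uses, so the mappings here are an assumption):

```python
# Standard Chaldean letter values (1-8; no letter maps to 9).
CHALDEAN = {}
for value, letters in enumerate(
    ["aijqy", "bkr", "cgls", "dmt", "ehnx", "uvw", "oz", "fp"], start=1
):
    for ch in letters:
        CHALDEAN[ch] = value

# Standard Pythagorean values: a=1 .. i=9, then the cycle repeats (j=1, ...).
PYTHAGOREAN = {ch: (i % 9) + 1 for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz")}

def reduce_digits(n: int) -> int:
    """Repeatedly sum decimal digits until a single digit remains."""
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

def numerology(word: str, table: dict) -> int:
    """Digit-reduced sum of the letter values of `word` under `table`."""
    return reduce_digits(sum(table[c] for c in word.lower() if c in table))

print(numerology("gallinago", CHALDEAN))     # 9
print(numerology("gallinago", PYTHAGOREAN))  # 6
```

Both results match the values quoted above for "gallinago" (the raw Chaldean sum is 27, reducing to 9; the Pythagorean sum is 42, reducing to 6).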
9 July 2013 University scientists have made a significant advance in their ability to target skin cancer. Researchers at the School of Medicine are part of a team that has developed an artificially enhanced molecule called a T-cell receptor, derived from a white blood cell, that targets and kills melanoma cells. Working in collaboration with an Oxford-based biotechnology company (Immunocore), scientists at the University were able to solve the molecular structure of the enhanced T-cell receptor bound to a fragment from a melanoma cell. The T-cell receptor was engineered to bind to cancerous cells with a 30,000-fold improved affinity using technology developed by their partners at Immunocore. Molecular visualisation using X-rays (the technique used to solve the structure of DNA) enabled them to understand how this molecule targets melanoma cells with high specificity and affinity. Dr David Cole of the Institute of Infection and Immunity at the School of Medicine said: "We wanted to visualise these novel molecules at the molecular level to better understand how they work. We hope that these experiments will provide the information we need to safely improve these T-cell receptors to target cancer and other types of human diseases." Dr Ian Lewis, Associate Director of Research at the Welsh cancer charity Tenovus, said: "We are extremely proud to be associated with this research that seeks to improve the treatment of malignant melanoma, one of the most aggressive and increasingly common cancers in the UK."
Cultivation of Hemp (Cannabis) for industrial and medical purposes: a synopsis Dr. Robert Gorter, et al. Robert Gorter, MD, PhD, is emeritus professor of the University of California San Francisco Medical School (UCSF) Introduction to Industrial Hemp and Medicinal Hemp Section I: Fiber Hemp Fiber Hemp contains less than 0.3% THC. In coffee shops owned by the local city government of Amsterdam, the Netherlands, seeds can be obtained legally that will grow Hemp containing up to 14% THC; the seeds themselves do not contain any THC. Hemp or industrial hemp (from Old English hænep), typically found in the northern hemisphere, is a variety of the Cannabis sativa plant species that is grown specifically for the industrial uses of its derived products. It is one of the fastest-growing plants and was one of the first plants to be spun into usable fiber some 10,000 years ago. It can be refined into a variety of commercial items including paper, textiles, clothing, biodegradable plastics, paint, insulation, biofuel, food, animal feed and products for human nutrition. Fiber Hemp fields in Côtes-d’Armor, Brittany, France, and Hemp seeds Although recreational marijuana and industrial hemp are both members of the species Cannabis sativa and contain the psychoactive component delta-9-tetrahydrocannabinol (THC), they are distinct strains with unique biochemical compositions and uses. Hemp has lower concentrations of THC and higher concentrations of Cannabidiol (CBD), which decreases or eliminates its psychoactive effects. The legality of industrial hemp varies widely between countries. Some governments regulate the concentration of THC and permit hemp that is bred with an especially low THC content: usually <0.2%. Hemp stem showing fibers and one of many end products: a sack for storage Hemp is used to make a variety of commercial and industrial products including rope, clothes, food, paper, textiles, plastics, insulation and biofuel.
The bast fibers can be used to make textiles that are 100% hemp, but they are commonly blended with other organic fibers such as flax, cotton or silk to make woven fabrics for apparel and furnishings. The inner two fibers of the plant are woodier and typically have industrial applications, such as mulch, animal bedding and litter. When oxidized (commonly referred to as “drying”), hemp oil from the seeds becomes solid and can be used in the manufacture of oil-based paints, in creams as a moisturizing agent, for cooking, and in plastics. Hemp seeds have been used in bird feed mix as well. A survey in 2003 showed that more than 95% of hemp seed sold in the European Union was used in animal and bird feed. Hemp seeds can be eaten raw, roasted, ground into a meal, sprouted, or made into dried sprout powder. The leaves of the hemp plant can be consumed raw in salads. Hemp can also be made into a liquid and used for baking or for beverages such as hemp milk, hemp juice and tea, and it is added to beer and chocolate to give them a typical bitter taste. Hempseed oil is cold-pressed from the seed and is high in unsaturated fatty acids. In 2015, the U.S. imported approximately $200 million worth of hemp products, mostly driven by growth in demand for hemp seed and hemp oil for use as ingredients in foods such as granola. Currently, there is a fast-growing market for hemp pulp, for example for high-quality paper and cigarette paper, in which hemp fiber is mixed with fiber from other sources. Biodiesel is made from the oils in hemp seeds and stalks, and alcohol fuel (ethanol or, less commonly, methanol) from the fermentation of the whole plant. Biodiesel produced from hemp is sometimes known as “hempoline.” The world-leading producer of hemp is France, which produces more than 70% of the world output. China ranks second with approximately a quarter of world production. There is smaller production elsewhere in Europe, and in Chile and North Korea.
Over thirty countries produce industrial hemp, including Australia, Austria, Canada, Chile, China, Denmark, Egypt, Finland, Germany, Great Britain, Hungary, India, Italy, Japan, Korea, Netherlands, New Zealand, Poland, Portugal, Romania, Russia, Slovenia, Spain, Sweden, Switzerland, Thailand, Turkey and Ukraine. As part of a campaign to stimulate sustainable agriculture, the European Union (EU) supports farmers who want to switch to the cultivation of fiber Hemp with €6,000 per hectare. Hemp fibers better than graphene Graphene is an allotrope of carbon in the form of a two-dimensional, atomic-scale, honeycomb lattice in which one atom forms each vertex. It is the basic structural element of other allotropes, including graphite, charcoal, carbon nanotubes and fullerenes. It can also be considered an indefinitely large aromatic molecule, the ultimate case of the family of flat polycyclic aromatic hydrocarbons. Graphene has many extraordinary properties. It is about 100 times stronger than the strongest steel. It conducts heat and electricity efficiently and is nearly transparent. Graphene also shows a large and nonlinear diamagnetism, even greater than that of graphite, and can be levitated by Nd-Fe-B magnets. Researchers have identified the bipolar transistor effect, ballistic transport of charges and large quantum oscillations in the material. Scientists have theorized about graphene for decades. It has likely been unknowingly produced in small quantities for centuries, through the use of pencils and other similar applications of graphite. It was originally observed in electron microscopes in 1962, but only studied while supported on metal surfaces. The material was later rediscovered, isolated and characterized in 2004 by Andre Geim and Konstantin Novoselov at the University of Manchester. Research was informed by existing theoretical descriptions of its composition, structure and properties.
High-quality graphene proved to be surprisingly easy to isolate, making more research possible. This work resulted in the two winning the Nobel Prize in Physics in 2010 “for groundbreaking experiments regarding the two-dimensional material graphene.” The global market for graphene is reported to have reached $9 million by 2012 and $300 million in 2015, with most sales in the semiconductor, electronics, battery energy, telecommunications, superconductor and composites industries. Graphene is an atomic-scale honeycomb lattice made of a single layer of carbon atoms. Conventional batteries store large reservoirs of energy and drip-feed it slowly, whereas supercapacitors can rapidly discharge their entire load. They are ideal in machines that rely on sharp bursts of power. In electric cars, for example, supercapacitors are used for regenerative braking. Releasing this torrent requires electrodes with high surface area – one of graphene’s many phenomenal properties. Section 2: Medicinal Hemp (Cannabis sativa) The bud of a Cannabis sativa flower coated with trichomes bearing Cannabidiol (CBD), Tetrahydrocannabinol (THC) and other Cannabinoids Medicinal Hemp has been defined as Hemp containing >0.2% THC. Usually, Medicinal Hemp is cultivated for its THC content, which is the only one of 68 isolated and well-defined Cannabinoids that has psychotropic effects. Cannabis sativa has been used for its bio-medical actions for some 3,000 years and played an important role in Chinese and Western medicine. Until 1985, Cannabis preparations in various forms were manufactured and available through any local pharmacy in the Netherlands, Germany, Switzerland and Northern Europe at large. Interest in the medicinal properties of Cannabis sativa has surged over the last three decades, and in the EU Cannabis derivatives have been licensed again for medical use for the past 10 to 15 years. In most European countries, THC can be prescribed and usually the health insurance picks up the bill.
Dronabinol is the INN for a pure isomer of THC, (–)-trans-Δ9-tetrahydrocannabinol, which is the main isomer found in cannabis. It is used to treat anorexia in people with HIV/AIDS as well as refractory nausea and vomiting in people undergoing chemotherapy. It is safe and effective for these uses. THC is also an active ingredient in nabiximols, a specific extract of Cannabis that was approved as a botanical drug in the United Kingdom in 2010 as a mouth spray for people with multiple sclerosis (MS) to alleviate neuropathic pain, spasticity, overactive bladder, Crohn’s disease, vertigo (Ménière’s syndrome) and other symptoms and diseases. Cannabidiol (CBD) is one of at least 113 active cannabinoids identified in cannabis. It is a major phytocannabinoid, accounting for up to 40% of the plant’s extract. CBD is considered to have a wide scope of potential medical applications – due to clinical reports showing the lack of any side effects, particularly a lack of psychoactivity (as is typically associated with ∆9-THC, or Dronabinol), and non-interference with several psychomotor learning and psychological functions. Cannabidiol (CBD) use in Epilepsy Dravet syndrome is a rare form of epilepsy that is difficult to treat. It is a catastrophic form of intractable epilepsy that begins in infancy. Initial seizures are most often prolonged events, and in the second year of life other seizure types begin to emerge. A number of high-profile and anecdotal reports have sparked interest in treatment of Dravet syndrome with cannabidiol. Some cannabis/hemp extract preparations containing CBD are marketed as dietary supplements and claim efficacy against Dravet syndrome. One such preparation is marketed under the trade name Charlotte’s Web Hemp Extract.
Blended/suspended in oil, the supplement contains 0.3% THC. (See “What is Dravet Syndrome?”, http://www.dravetfoundation.org/dravet-syndrome/what-is-dravet-syndrome) GW Pharmaceuticals is seeking FDA approval to market a liquid formulation of pure plant-derived CBD, under the trade name Epidiolex (containing 99% cannabidiol and less than 0.10% Δ9-THC), as a treatment for Dravet syndrome. Epidiolex was granted fast-track status and is in late-stage trials following positive early results. A 2014 review stated that cannabidiol has been claimed, anecdotally, to be of benefit in helping people with epilepsy, but that there is no established mechanism of action, and the lack of high-quality evidence in this area precluded conclusions being drawn. New clinical data that emerged in 2016, however, showed more evidence of the benefits of CBD not only in Dravet syndrome but also in other, more common forms of epilepsy. A 2016 review in The New England Journal of Medicine states that since 2013, data have been collected on patients with severe epilepsy (Dravet syndrome and the Lennox–Gastaut syndrome). Among 137 patients treated with Epidiolex (which qualifies chemically as hemp; see Legal status below), the median reduction in the number of seizures was 54%. There is tentative evidence that CBD has an anti-psychotic effect, but research in this area is limited. CBD safety in humans has been studied in multiple studies, suggesting that it is well tolerated at doses of up to 1,500 mg/day (p.o.) or 30 mg (i.v.). Devinsky, Orrin (2015). “Efficacy and Safety of Epidiolex (Cannabidiol) in Children and Young Adults with Treatment-Resistant Epilepsy”. Annual Meeting Abstracts. American Epilepsy Society. Retrieved 13 December 2015. Chen, Angus (8 December 2015). “Marijuana’s Main Ingredient, Cannabidiol, May Be An Effective Way To Treat Epilepsy”. Medical Daily. Retrieved 14 December 2015.
Devinsky O, Cilio MR, Cross H, Fernandez-Ruiz J, French J, Hill C, Katz R, Di Marzo V, Jutras-Aswad D, Notcutt WG, Martinez-Orgado J, Robson PJ, Rohrback BG, Thiele E, Whalley B, Friedman D (2014). “Cannabidiol: pharmacology and potential therapeutic role in epilepsy and other neuropsychiatric disorders”. Epilepsia (Review). 55 (6): 791–802. doi:10.1111/epi.12631. PMC 4707667. PMID 24854329. Leweke FM, Mueller JK, Lange B, Rohleder C (2016). “Therapeutic Potential of Cannabinoids in Psychosis”. Biol. Psychiatry. 79 (7): 604–12. doi:10.1016/j.biopsych.2015.11.018. PMID 26852073. In Australia, cannabidiol is available as a Prescription Medicine (Schedule 4) for therapeutic use when containing 2 per cent (2.0%) or less of other cannabinoids commonly found in cannabis (such as ∆9-THC). Cannabidiol is a Schedule II drug in Canada. Cannabidiol, in an oral-mucosal spray formulation combined with delta-9-tetrahydrocannabinol, is a prescription product available for relief of severe spasticity due to multiple sclerosis (where other anti-spasmodics have not been effective). Cannabidiol is listed in the EU Cosmetics Ingredient Database and is therefore not a controlled substance. Table 1. Pharmacologic Characteristics of Some Main Cannabinoids: rows include eyeball pressure (lowered, ↓, by each of the cannabinoids listed) and survival in various cancers. Ad 1) Industrial Hemp: licensing and subsidies What is Industrial Hemp? Industrial hemp means a plant of the genus Cannabis and any part of the plant, whether growing or not, containing a delta-9-tetrahydrocannabinol concentration of no more than three-tenths of one percent (0.3%) on a dry weight basis. The Industrial Hemp Program in the EU registers growers of industrial hemp and samples the crop to verify that the THC concentration does not exceed 0.3% on a dry weight basis. Interest in commercial hemp production continues to grow in the United States. To date, a number of states have authorized the marketing and production of industrial hemp and legalized commercial hemp production.
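The regulatory definitions above reduce to simple concentration checks: industrial hemp is capped at 0.3% THC on a dry-weight basis, while the EU permit threshold mentioned earlier is usually <0.2%. A minimal sketch of such a check, with function and constant names that are illustrative only, not drawn from any statute:

```python
# Thresholds as quoted in the text (percent THC of dry weight).
EU_LOW_THC_LIMIT = 0.2     # typical EU permit threshold for fiber hemp
INDUSTRIAL_HEMP_LIMIT = 0.3  # "three-tenths of one percent" dry-weight rule

def classify_sample(thc_pct_dry: float) -> str:
    """Classify a Cannabis sample by its THC concentration (% dry weight)."""
    if thc_pct_dry <= EU_LOW_THC_LIMIT:
        return "EU low-THC fiber hemp"
    if thc_pct_dry <= INDUSTRIAL_HEMP_LIMIT:
        return "industrial hemp (0.3% rule)"
    return "exceeds industrial-hemp limits"

print(classify_sample(0.15))  # EU low-THC fiber hemp
print(classify_sample(0.28))  # industrial hemp (0.3% rule)
print(classify_sample(14.0))  # exceeds industrial-hemp limits
```

Note that a sample can pass the 0.3% rule while failing the stricter EU threshold, which is why crop-sampling programs state which limit, and which dry-weight basis, they test against.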
Ultimately it is the Drug Enforcement Administration (as mandated by federal statute) that has the authority to permit hemp production (for research or commercial purposes), despite state attempts to usurp this power. As these and other states contemplate full-scale commercial production, the experience of several decades in Canada and the European Union (EU) can be insightful. Both Canada and the EU have diverse agricultural sectors that are well supported by government programs, similar to that of the United States. Canada legalized hemp production in 1996 and licenses commercial production of those varieties of Cannabis sativa with less than 0.3% tetrahydrocannabinol (THC) content. The government of Canada does not provide any direct monetary support for hemp production or processing, but the EU has done so extensively since the early 1970s. Hemp production in many European countries, notably France, Italy and the Netherlands, has never been prohibited. In addition, the European Union provides incentives — including direct monetary aid to farmers and processors — to help revive its domestic hemp industry. Why has the EU been so proactive in supporting its hemp market? The EU has a tradition of providing extensive subsidies to many, if not all, of its agricultural industries, and hemp is no different. It could also be said that the EU has a much stronger environmental voice, as evidenced by the existence of a bona fide Green Party, the highest rate of organic and bio-dynamic food consumption per capita among the industrialized nations, and arguably some of the strictest environmental standards. The founding legislation providing aid to the hemp industry also provided support to the flax industry. Subsequent amendments to this legislation affect both fiber crops. Within the European Union, farmers can obtain a significant subsidy for the cultivation of hemp and flax.
The basis for the subsidy was established in the early 1970s to provide farmers a decent income from flax or hemp and to let them compete with world market prices. Despite the subsidy, hemp cultivation only survived the 1970s and 1980s in France and Spain, with a planted area of about 6,000 hectares. A renewed interest in hemp as a natural fiber source enabled the cultivation of hemp in the UK in 1993, followed by the Netherlands in 1994 and Germany in 1996. The total area increased to about 12,000 hectares in 1996, and to more than 100,000 hectares by 2014. The EU regulations 1308/70, 619/71 and 1164/89 form the basis of the current subsidy practice. These regulations are applicable to flax and hemp (Cannabis sativa) with GN codes 5301 and 5302. The practical workings of these regulations differ in each member country. On average, the subsidy to grow Cannabis sativa is about €4,500 per hectare (it differs slightly from state to state of the EU). Hemp fiber produced in the EU has gone to a variety of uses. However, most of the market information is rather anecdotal and suggests a niche market of limited scale. Examples of hemp markets in various Member States follow: Hemcore, one of the largest hemp companies in England, currently contracts about 12,000 acres of hemp for hurd production (the woody inner core of the stalk) for use as horse bedding. Hemcore has also developed a new spinning technology. Another UK company, Friendship Estates, sells hemp bedding for the pet industry. The Bioregional Development Group in Surrey has developed flax and hemp fibers for textiles and paper production. The hurds and seeds could also be used for composite board, linoleum, and animal feed.
The new board of EIHA at the International Congress on iHemp: “Growing demand for natural fibers from the automotive sector, for hemp seeds and CBD from the health food sector – first ISCC-PLUS sustainability certification for hemp – new Board of Directors for the European Industrial Hemp Association (EIHA) – next EIHA conference in June 2016.” “Hemp cultivation areas in Europe have expanded from 8,000 ha in 2011 to almost 25,000 ha in 2016, a threefold increase in five years. The reason is the growing demand for different raw materials obtained from this outstanding multi-purpose crop. Hemp fibers are used in the automotive industry as well as in insulation and specialty paper production. They are particularly well established as material for natural fiber composites (NFC) used for the reinforcement of automotive interior parts. Due to their huge potential in lightweight construction, the demand for NFC has been growing continuously. Whereas exotic natural fibers from Asia suffer from limited availability because of local competition with food crops and temporary export bans, European hemp fibers can meet increasing demand – if that demand is communicated before the sowing time in March. Hemp seeds are becoming an important factor in both the functional food and supplement food industries. In 2015, hemp food products entered the mainstream market and were produced by well-known companies. Farmers in more and more European countries are discovering hemp as an alternative crop for the production of hulled hemp seeds, protein powder and oil with high nutritional value. Hemp seed is a nutritional powerhouse and is highly digestible. Its oil has an excellent fatty acid spectrum: an “almost perfect” balance of the omega-3 and -6 essential fatty acids, plus two “higher” omega-3 and -6 fatty acids, stearidonic acid (SDA) and gamma-linolenic acid (GLA), and the share of unsaturated fatty acids is an incredible 90%.
The protein includes all 21 amino acids, and hemp seed also provides minerals and vitamin E and is high in dietary fiber. In addition, the hemp crop produces Cannabidiol (CBD), a food supplement with therapeutic potential, which does not have any psychoactive effects. Many companies globally are investing in hemp as a natural drug; this new natural medicine could reach at least a significance similar to that of valerian. The German company and EIHA member “Hanf Farm GmbH” is the first hemp producer worldwide to receive the “International Sustainability and Carbon Certification (ISCC PLUS)” for its hemp raw materials “Hemp flower” and “Hemp nuts”. More certifications for different hemp products will follow this year, including those for the fiber. The European Industrial Hemp Association (www.eiha.org) is the organization of the European Hemp Industry. It represents cultivators, processors, producers and traders as well as scientists.” The European Industrial Hemp Association (EIHA) is a consortium of the hemp-processing industry. It represents the common interest of industrial hemp farmers and producers, both nationally and on a European level. EIHA is the only European consortium in the industrial hemp sector. This sector includes, amongst other things, the use of hemp fibers, shavings, seeds and cannabinoids. Originally founded as an association for the European hemp industry, EIHA now counts a quarter of its 130 members in countries outside the EU. Industrial hemp is making its comeback in the EU, the USA and Canada. Hemp’s many uses, from food to paper to cloth and clothing to modern technologies such as hempcrete, are just astounding. The most groundbreaking of these, though, is hemp “graphene.” To explain: regular graphene is composed of a two-dimensional, hexagonal honeycomb lattice layer of tightly packed carbon atoms, and is one of the strongest, lightest and most conductive compounds ever discovered. It is considered one of the best materials for supercapacitor electrodes.
The term was also used in early descriptions of carbon nanotubes, and can be considered a type of nanotechnology. Many of graphene’s uses are in the area of energy storage; some uses that are under development include electronics, biological engineering, filtration and strong, lightweight composite materials. However, a scientist by the name of Dr. David Mitlin of Clarkson University in New York says he’s found a way to manufacture hemp waste into a material that appears to be better than graphene. Dr. Mitlin and his team were able to recycle leftover hemp-based fiber, cook it down, and then dissolve it until carbon nanosheets resembling the structure of graphene were left behind. They proceeded to build these nanosheets into powerful energy-storing supercapacitors with high energy density, thus creating a hemp-based “graphene.” Essentially, Mitlin’s team discovered a process for converting fibrous hemp waste into a unique graphene-like nanomaterial that many say outperforms graphene. David Mitlin, Professor and GE Chair in Oil and Gas Systems Creating this graphene-like hemp material costs only a fraction of regular graphene production. Graphene costs as much as $2,000 per gram to manufacture, while the hemp-based nanomaterial can be manufactured for less than $500 per ton. To give proper perspective, there are 907,185 grams in one ton: at $2,000 per gram, a single ton of graphene would cost roughly $1.8 billion, against under $500 for a ton of the hemp-based material. Hemp professionals and activists in Oregon and elsewhere in the USA and the EU are thrilled about this new technology and its potential for energy. Ben Christensen, owner of Oregon Hemp Works in Portland, said, “As a renewable energy major and hemp business owner, I find this very exciting. One of the bigger challenges with renewable energy is storage. I often find hemp being left out of the renewable energy conversation, but I feel you can’t really talk about renewable energy or sustainability unless hemp is being talked about as well.
It also seems that when hemp is introduced as a replacement, it is just as good as what it’s replacing and even better in a lot of cases.” BMW has finally come out with an all-electric car, which made its world debut on Monday. And in true BMW fashion, they’ve outdone just about every other electric car in what matters most: Weight. The BMW i3 is a mere 2,700 pounds – 800 pounds less than the Nissan Leaf and the Chevy Volt. Hemp and kenaf materials contribute to the BMW i3′s natural looking interior A BMW 5 series door panel completely made out of hemp The i8 will be the next electric car sold by BMW. The hybrid supercar accelerates from 0 to 100 km/h in 4.8 seconds and has an electronically limited top speed of 250 km/h Henry Ford (Chicago) axing his car made of hemp plastic (1941) Amy Peradotta is the Chairwoman of the Portland Chapter of Women Grow. Women Grow is a national professional networking organization with 45 chapters across the US, Canada and the EU. Women Grow serves to connect, educate, and empower women in the cannabis industry. Currently, the Portland chapter is the largest and fastest growing chapter in the country. Peradotta is an advocate for the entire cannabis genus, recognizing that all cultivars/strains/varieties of the Cannabis L. Sativa plant – from industrial hemp to medical and adult use – serve to benefit mankind. Amy Peradotta, hemp activist and chairwoman of the Portland Women Grow chapter, agreed. She expressed, “Using hemp cellulose to replace graphene in super-capacitor batteries will change how we store energy and how we mass produce electronic products from computers and phones to electric cars. Imagine a future where your electric car battery is made with hemp supercapacitor electrodes, the body of the car is made with nontoxic, lightweight hemp cellulose composite materials and the interior door panels and upholstery are made from hemp fiber. 
Then, we can also use hemp supercapacitors to store renewable energy for our indoor cannabis grow houses made of hempcrete. Pair that with solar panels and you have a sustainably designed, energy-efficient cannabis production facility.”

iHemp can replace cotton in many areas of human consumption. Most people don’t understand the truly diverse value of hemp. Cultures have relied on this hardy plant for centuries to produce textiles such as clothing, fabric and paper. Today, hemp is also used for food, fuel, medicine, building materials and plastics. Now that the energy storage industry is starting to take notice, perhaps more government authorities will take a closer look at this plant.

Joy Beckerman Maher, Industrial Hemp Professional & Public Relations Specialist

Joy Beckerman, principal at Seattle-based Hemp Ace International and a 20+ year veteran of the industrial hemp movement, said, “As activists and entrepreneurs, we simply did not see this coming 25 years ago. No one was sufficiently intellectual back then to predict the unique and exponential power within microfibrils from hemp bast fiber, or hemp’s ability to completely revolutionize the most critical areas of research and development. Graphite whiskers and carbon nanotubes are highest in stiffness and strength, but they are severely cost-prohibitive. Hemp cellulose nanocrystals are a considerably lower-cost nanoparticle, which makes them enormously attractive and competitive when one looks at the larger picture, including price, availability, toxicity and sustainability.”

Robert Gorter summarizes the potential market for iHemp in the EU as follows: “There is a growing market for hemp food and pharmaceuticals, with potential markets worth billions of euros in Europe.”

“Hemp is a multi-purpose crop, delivering fibres, shivs, seeds and pharmaceuticals.
Hemp seeds, small nuts with high nutritional value, can be consumed raw, roasted or pressed into hemp oil, with an excellent and unique fatty acid profile and high-value proteins. Both seeds and oil are used for human food and animal feed. The non-psychotropic cannabinoid CBD is an interesting pharmaceutical and food supplement also derived from industrial hemp. In 2015, industrial hemp cultivation in the European Union grew to 25,000 ha, a strong increase from 8,000 ha in 2011. The growth is mainly driven by increasing demand from the hemp food and nutritional supplement industries and the pharmaceutical sector.”

Hemp Food – highly nutritious “super (bio) food”

The “Market Study on Hemp Food” is based on a survey conducted from August to October 2015 with 171 producers and traders of hemp food from 21 countries, mainly spread across Europe and North America. The potential for growth and hemp’s health benefits were equally mentioned as the biggest strengths of the European and North American hemp food markets. The main market segments for hemp foods are the so-called “super food”, “nutritious food” and “bio food” markets. All these markets are analyzed and discussed – including comparisons with other nuts in the European Union and the USA. A double-digit growth rate is expected, with demand rising especially for food goods. A hemp seed market penetration of 5% of the European nut market would signify an added market value of € 1 billion/year. Two major problems are delaying this growth: government legislation and lack of consumer awareness.

It seems like hemp can be used for almost anything these days, including ice cream. Made from hemp milk – a blend of hemp seeds and water – the concept of hemp gelato was born out of one man’s desire for a “delicious, nutritious, animal-and-earth-friendly frozen dessert.” While the frozen treat may be an obvious choice for the lactose intolerant – including none other than Colorado Rep.
Jared Polis, who reportedly sampled Zendulgence gelato last week at a Vote Hemp brunch and gave it “two thumbs up” – its incredible nutritional profile should be enough to tempt any ice cream lover.

Asher serving samples of his hemp gelato at a local Whole Foods Market

Just half a cup of Zendulgence gelato packs 600 milligrams of omega-3 fats, 4 to 5 grams of protein, 4 grams of fiber, and a variety of vitamins and minerals. The gelato is gluten-free, dairy-free and only 190 calories per serving; Asher once said in an interview with Natural Products Expo that he believes hemp gelato is so nutritious that “anyone can eat as much as they like.”

Cannabidiol – high potential in pharma and food

The “Market Study on Cannabidiol (CBD)” is based on the same survey as the hemp food study; the majority of the participants were SMEs from Germany, the USA and Canada. Currently, CBD is a market niche dominated by SMEs. Given that CBD is still a new product on the market, it is not surprising that 87.5% of respondents identify the potential for growth as CBD’s biggest strength in Europe and North America. Increased public and governmental awareness is crucial to expand the market and frame proper legislation, or the market risks being damaged at an early stage. Therefore, the way CBD is legally framed may affect its market size. According to the report, CBD has an upper market potential in Europe of € 2 billion if used as medicine for chronic diseases. Putting it on a level with over-the-counter medicine such as valerian, CBD has a minimum market penetration potential of € 24 million. The gap between upper and lower market potentials can be reduced by more consumer information, investments in research to support initial medical claims, and adoption of the necessary legal provisions, as seems to be the current trend.
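These market figures lend themselves to a quick sanity check. The sketch below uses only the report’s own numbers; the derived totals are simple arithmetic implications, not figures from the study itself.

```python
# Quick arithmetic on the market figures quoted above. The inputs are
# the report's own numbers; the derived totals are simple implications.

# Hemp food: 5% of the European nut market ≈ €1 billion/year added value,
# which implies a total nut market of about €20 billion/year.
added_value = 1_000_000_000      # EUR/year at 5% penetration
implied_nut_market = added_value / 0.05

# CBD: upper potential of €2 billion vs. a minimum of €24 million.
cbd_upper = 2_000_000_000
cbd_lower = 24_000_000
gap_ratio = cbd_upper / cbd_lower

print(f"Implied EU nut market: €{implied_nut_market / 1e9:.0f} billion/year")
print(f"CBD upper potential is ~{gap_ratio:.0f}x the minimum estimate")
```

The roughly 80-fold spread between the CBD estimates is exactly the “gap between upper and lower market potentials” the report says better consumer information and research could narrow.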
In the study by the NOVA Institute in Cologne, Germany, the market potential per disease was evaluated, covering epilepsy, anxiety disorders, ADHD, schizophrenia, inflammation-associated pain, dystonia and more.

The International NOVA Institute in Cologne, Germany

Robert Gorter: “To think that the Cannabis plant can supplement modern technology and medicine so dramatically, and for a fraction of the usual costs, is really incredible. This only reaffirms why one must continue to defend Cannabis sativa and iHemp everywhere and push the various authorities to reschedule this plant. It is time that hemp be extensively researched, grown and mass-produced for its infinite uses and unexplored technological applications.”
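To make the cost gap cited earlier concrete, here is the per-gram arithmetic using the article’s own figures: $2,000 per gram for graphene, $500 per ton for the hemp nanomaterial, and 907,185 grams per ton.

```python
# Per-gram cost comparison using the figures quoted in the article.
GRAMS_PER_TON = 907_185            # US short ton, as stated above

graphene_per_gram = 2_000.0        # USD per gram
hemp_per_ton = 500.0               # USD per ton
hemp_per_gram = hemp_per_ton / GRAMS_PER_TON

cost_ratio = graphene_per_gram / hemp_per_gram
print(f"Hemp nanomaterial: ${hemp_per_gram:.6f} per gram")
print(f"Graphene costs roughly {cost_ratio:,.0f} times more per gram")
```

At these quoted prices the hemp material works out to a small fraction of a cent per gram, several million times cheaper than graphene by weight.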
Robots that help severely disabled people are often assumed to need to be tremendously smart, perhaps even capable of replacing human caretakers. Engineers at Georgia Tech decided to take a completely different approach: empowering disabled people to control the robots that help them. The technology lets disabled individuals see what the robot is seeing, since a video feed is passed from the robot’s cameras to a bedside computer. It also allows for careful, deliberate control of the robot’s movements. The robot used was the PR2 mobile manipulator from Willow Garage, a Silicon Valley robotics company. It is a humanoid robot with two arms that can hold onto towels, spoons, and other objects. The arms can even be used to scratch an itch, a very popular function. Two studies were carried out to evaluate whether severely disabled individuals, using interfaces they are already accustomed to, including eye and head trackers, could operate the robots. One study was more “virtual” than the other, with participants controlling a PR2 robot located somewhere else. They did quite well, showing that a complex robot really can be operated by people who cannot move much of their own body. The other study had an individual use a robot to perform everyday tasks over a period of one week. Henry Evans, who otherwise cannot move his arms or legs, was able to shave, brush his teeth, and even took advantage of the two robot arms to simultaneously use a towel to clean up. Because the PR2 robot is already available and robot prices are dropping quickly, it looks like robots like these will soon be helping people achieve meaningful independence while allowing caretakers to attend to other things. Even healthy people may soon be jealous of these devices.
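The control idea described above, small, deliberate motions driven by an accessible input device such as an eye or head tracker, can be sketched roughly as follows. The `Robot` class here is a hypothetical stand-in for illustration only, not the actual PR2 or ROS interface.

```python
# A rough sketch of accessible, deliberate robot control: tracker input
# nudges the arm in small increments rather than fast free motion, so
# the operator stays in careful control. The Robot class is a
# hypothetical stand-in for a mobile manipulator, not a real API.

class Robot:
    def __init__(self):
        self.arm_xy = [0.0, 0.0]   # simplified 2-D arm position (metres)

    def nudge_arm(self, dx, dy, step=0.01):
        """Move the arm by one small, deliberate increment."""
        self.arm_xy[0] += dx * step
        self.arm_xy[1] += dy * step
        return self.arm_xy

robot = Robot()
# Simulated eye/head-tracker input: unit offsets from screen centre.
for dx, dy in [(1, 0), (1, 0), (0, -1)]:
    robot.nudge_arm(dx, dy)

print(robot.arm_xy)
```

The design choice worth noting is the small fixed step size: slow, incremental motion is what makes the robot safe and predictable for an operator who may only be able to issue coarse gaze or head gestures.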
It’s what keeps you hooked on running. Or you think it does, anyway. But what exactly is runner’s high? And what could possibly cause it? And hell — is it even real?

The Happy Chemical

Runner’s high is often attributed to a rise in your endorphins, the ‘happy’ chemicals naturally produced by the body that can induce feelings of pleasure and pain relief. In truth, though, it’s actually quite a bit more complicated. The theory that increased endorphin levels cause that happy feeling after a workout was born from research conducted in the 1980s showing that, following prolonged periods of exercise, levels of endorphins in the blood spiked. Some researchers assumed that this spike must be responsible for the sense of euphoria following a workout, and so it is still commonly thought. But more recent studies conducted using mice suggest that those endorphins may not have anything at all to do with the so-called runner’s high. The issue with the endorphin explanation is that endorphins are actually quite large molecules – in fact, too large to move from your blood into your brain. The barrier between the blood and the brain is critical in keeping your brain safe, because it prevents certain molecules and pathogens from passing from your blood into your brain. Because the endorphins aren’t able to get through, it is unlikely that they are the chemical – or the only chemical – responsible for the good feelings involved with exercise. As an alternative, scientists have hypothesized that this effect can actually be attributed to a few other chemicals that are also found or formed naturally in the body and that produce similar happy and pain-relieving feelings in other circumstances: chemicals called endocannabinoids. When you exercise, your levels of anandamide also rise.
This was found in a study on mice done in 2015, as well as in one on people conducted in 2004. Anandamide is a sort of endocannabinoid — in layman’s terms, a naturally produced chemical related to the ones behind that psychoactive, good feeling that comes from smoking marijuana. Unlike those large endorphins, this chemical can easily pass from your blood into your brain. In the 2015 study, researchers from the Central Institute of Mental Health compared the effects of both endocannabinoids and endorphins on mice running on exercise wheels. They discovered that along with appearing calmer and less sensitive to pain after running, the test mice also had increased levels of both endocannabinoids and endorphins. The mice spent more of their time in the better-lit areas of their cages, which is something that calmer, less anxious mice tend to do. They also happened to be a bit more tolerant of pain following their time on the exercise wheels. To measure the effects of each individual chemical, the researchers gave the test mice drugs that would block the effects of each of them. When the endorphins were blocked, nothing at all happened: the mice remained more tolerant of pain and relaxed. However, when the effects of the endocannabinoids were blocked, the symptoms of a runner’s high in the mice were non-existent. These findings suggest that the elevated levels of endorphins in the mice didn’t have much – if anything at all! – to do with their ‘runner’s high’. That said, this research did have a major caveat: mice are not humans. The study also suggested that you would more than likely need to run a fairly long distance before experiencing a runner’s high. The test mice averaged about 3 miles each day – quite a long stretch for mice.
There have been other studies suggesting that neither endocannabinoids nor endorphins are the reason for a runner’s high. One study, conducted in 2015, found that mice with low levels of the hormone leptin had a tendency to run farther than mice with normal leptin levels. Leptin, also called the satiety hormone, works to inhibit feelings of hunger so that our energy levels can be regulated. The idea is that the hungrier you feel, the more motivated you will be to continue running, and that additional motivation might make it a bit easier to obtain a runner’s high. In essence, leptin sends a clear message to the brain that when food isn’t plentiful, it is good to run in order to chase it down. Then again, although these results have been shown in mice, that doesn’t necessarily mean the same effects occur in humans. There may also be a combination of different factors at work; because of this, definitive evidence of the exact cause of a runner’s high may continue to elude researchers.

Runner’s High and How to Achieve It

Whatever you believe about this phenomenon, it is undeniable that runners seem to enjoy running, and they at least think it makes them feel good, if not high, when they do. Experienced runners can achieve a sense of elation after only a few miles. However, many newbies and even some more experienced runners struggle to achieve that elusive euphoric moment, because it isn’t an easy feeling to obtain. The actual cause still remains unknown, even though, as we’ve seen, researchers do know it has something to do with how your brain and body change when you exercise. It makes evolutionary sense, though, if you think about it.
Really, just think about it: whatever chemical is responsible for runner’s high – or whatever you want to call it at this point – is released by our bodies to signal that grueling physical activity is a good thing, a desirable thing. Maybe that’s even why we’re here, in some small part. Maybe our ancestors actually enjoyed running from predators to safety.

A Moderately Intense, Long Workout is Critical for Triggering It

Again, what research has shown us about runner’s high is that we’re more likely to experience it during a long, continuous session of exercise, especially one that has a rhythm. This is according to Paul J. Arciero, a professor in health and sciences at Skidmore College. He also says that the activity must be non-stop and that the sweet spot for a runner’s high is at about 2 hours, meaning the longer your workout, the better your chances of experiencing it. The intensity of your workout is also critical: a moderate intensity seems to be best. It appears to create the ideal environment in your brain, where blood flow is maximized and your endocannabinoid receptors appear to be at their most stimulated. If the workout is overly intense, the self-protecting mechanism in your brain can switch on and reduce both blood flow and stimulation; if it is too low, it will not be enough to stimulate your endocannabinoid receptors. If you really want to set yourself up to experience this high, you should focus on something known as steady-state cardio, where your heart rate is elevated but sustainable. To accomplish this, you need to be exerting yourself at about a level 7 to 8.5 on a scale of 1 to 10.

More Experienced People are More Likely to Feel It

This isn’t great news for those new to running, because it means you won’t have that runner’s high to help you get through those beginning stages where you’re running only a mile or three a day.
The silver lining is that it is fantastic motivation to keep going: if you try hard enough, and run long enough, you may actually achieve it. A runner’s high is difficult to achieve when you’re just beginning a new program, according to Dr. Timothy Miller, an orthopedic surgeon and sports medicine specialist at the Wexner Medical Center at Ohio State University. That might be why so many beginner runners have a hard time keeping at it. But the promise is always there: after a couple of months of running and continuing to build your endurance, when you aren’t slogging through the run and counting each second until it’s finished, that’s when you might first experience it. Professor Arciero explains this could be due to a combination of factors. One is that those who are new to running aren’t likely to be running non-stop for a couple of hours. Another is that when you’re just getting started, your body is using most of its energy to keep you moving efficiently, regardless of any lapses in technique and form. It isn’t clear whether this leads to the release of fewer feel-good chemicals, or whether these chemicals are being released but go unnoticed because your brain is occupied with keeping you efficient. Either way, something is happening that keeps you from getting that runner’s high – that much is certain.

Feeling It Can Give You the Additional Boost You Need to Keep Running

People who run marathons might not feel satisfied by shorter workouts or runs; your mind and body will want to do more. Part of the reason is that the brain is searching for the low-level high it has gotten used to feeling. That said, just because you have gotten that high once doesn’t mean you will get it every single time you run.
It will likely happen only about once every few runs, because quite a few factors need to line up perfectly – such as your general level of stress, the weather, and the intensity of your workout. Unfortunately, even if you feel it on a regular basis, there isn’t any known way to ensure it will happen.

You Might Not Even Need to Run for It

You can get the so-called runner’s high from any sort of workout routine – even biking or swimming. The key, again, is controlling the intensity, the continuity, the rhythm, and the length of your workout. In other words, it can occur during any sort of training, provided you do it for a long enough period of time and with proper form. Even though it’s called a runner’s high, it really is just a fitness high. If you aren’t into running, you can find another sort of cardio workout that you enjoy and that makes you both confident and happy. You will feel great each time you work out, regardless of whether you get the high or not.

So How Real is It?

Is that feeling of euphoria for real? More critically, will it be enough to get you across that finish line? When you run a marathon, regardless of where it is, you will need every trick in the book to get through both the grueling training and the race itself. Runners often experience a sort of euphoria, which can present as a sense of invincibility, accompanied by reduced pain or discomfort and a sense of lost time while they run. This is according to Jesse Pittsley, Ph.D., president of the American Society of Exercise Physiologists. What is it that makes runners push themselves for a whopping 26.2 miles? Is it necessary to run in order to get that euphoric feeling? Is it possible to find the same emotions with other types of exercise? We know now that endorphins aren’t the end-all, be-all of runner’s high.
As we saw earlier, they may not even have anything to do with it at all. The fact is that it’s still unknown; it may even be a case of a placebo effect! Researchers have also examined other neurotransmitters that could play a role. Neurotransmitters like serotonin, dopamine, and norepinephrine have all been shown to help reduce depression. Additionally, these neurotransmitters are released, and even produced in higher concentrations, while people exercise, which led researchers to believe that some of these substances might be responsible for runner’s high. Yet another theory is that body temperature has something to do with it too: the change in body temperature might affect your mood indirectly.

Less Low, More High

Runner’s high might be a short-term thing, but it’s well known that exercising regularly offers other long-term benefits for both the body and the mind. Typically, we tend to see habitual exercisers and runners as having better moods and suffering less from anxiety and depression. People who are regularly physically active practice what’s known as active relaxation: by moving your body, focusing on the sensation of your moving body, and getting into rhythmic motion and activity, you trigger the relaxation response, and that significantly contributes to feelings of well-being. Running marathons can take a toll on the body, but it also offers significant benefits. Marathon runners put in many hours of training over the weeks and months before the race, and the health benefits gained from sustained aerobic exercise are well documented. They include things like better self-esteem, lowered blood cholesterol, reduced body fat, and improved circulation, among others.
Going Beyond the High

When runners come down from their high, many of them might wonder why they bother. What is the point of running for X miles at a time? The point is the sense of accomplishment after all those months of hard work and training, and that is what drives people to compete again and again. The events themselves aren’t always about competition; the marathon can be viewed as the reward for all of the months of training before it. You can’t build a house in a single day. You have to plan. You have to get up early each day and work extremely hard. This is what a marathon embodies for some runners. And obviously, there’s more to it than just a runner’s high. There is also the finish line. Runners say there isn’t any better feeling than raising your hands high as you cross the finish line of a marathon while hundreds of onlookers cheer you on. The emotional high that comes with finishing a marathon can actually last for days.

How to Get that High

Science has revealed how to produce even more of those feel-good chemicals while you are running. Sometimes it happens and sometimes it doesn’t, but we always want that runner’s high – and more. When we are lucky enough to achieve it, our runs can be exhilarating, easy, and even euphoric. We aren’t always that lucky, though. More recently, researchers who studied how our brains respond to running found that the ability to get a runner’s high while logging miles might actually be hard-wired into our bodies. Many years ago, the survival of our ancestors probably depended on being able to chase down their food. Their motivation was their desire to live, and this meant they needed to run. Those feel-good chemicals were released when they had to run down their food, and this more than likely helped them achieve the distances and speeds that were necessary.
That runner’s high might have served them – as it does us – as a natural sort of painkiller, masking things like blistered feet and tired legs. Even though most of us no longer need to chase down our dinner, learning how these happy reactions in our brains get switched on may help us achieve a runner’s high more frequently.

Some More on Endorphins

Endorphins can be looked at as nature’s own home-brewed opiates. They are chemicals that act a lot like morphine, their medically engineered counterpart. Runners have been crediting endorphins for runner’s high for decades. That said, it wasn’t until about a decade ago that German researchers used brain scans of runners to determine where the endorphins originate. Those researchers discovered that during 2-hour runs, endorphins were released in the limbic and prefrontal regions – the same areas that light up in response to emotions such as love. The greater the surge of endorphins in these areas of the brain, the more euphoric the feelings reported. To get this feeling, you will need to push yourself, just not too hard. Essentially, endorphins are a painkiller the body produces when it is in physical discomfort. That doesn’t mean your runs have to be excruciating; instead, you should find the sweet spot where you are comfortably challenged. In the German study, for example, the subjects were experienced runners, but not so experienced that a 2-hour run at a pace of 6 miles an hour was exactly easy – nor was it excessively hard. Runners tend to experience an increase in endorphins when their bodies are pushing moderately rather than expending maximum effort. Runs that are short and casual will probably not produce enough of the discomfort necessary to trigger a rush of endorphins.
Trying for an overly aggressive distance or pace will more than likely overwhelm your body too much to produce them. And as powerful as they can be, endorphins will not override a lack of training or an injury. Running with other people might also help: a study done at Oxford University found that rowers who trained and worked out together had a significantly greater release of endorphins than those who trained alone. If you have to work out on your own, consider wearing headphones – research has found that listening to your preferred music can also produce a spike in endorphins.

Some More on Endocannabinoids

These are essentially a naturally made analogue of THC, the chemical responsible for the mellow high produced by marijuana. Anandamide is the most-studied endocannabinoid in the body, and it is thought to create that feeling of calmness. While endorphins are only produced by specialized neurons, nearly any cell in your body can make endocannabinoids, which means they are more likely to make a larger impact on the brain. The production of endocannabinoids is believed to respond more strongly to stress than to pain, whereas for endorphins, pain is the strongest activator. Differentiating between discomfort and physical stress while running is almost impossible, which means the same thing that triggers the release of endorphins – a challenging workout – might also trigger the release of endocannabinoids. As far as your heart rate goes, running at about 70-85% of your maximum is just right for producing the main stress hormone, cortisol, as well as endocannabinoids. Research suggests that in small doses, mental stress can also increase the production of endocannabinoids, meaning those pre-race jitters might actually be beneficial.
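The 70-85% heart-rate window mentioned above is easy to compute. A minimal sketch, using the common “220 minus age” rule of thumb for maximum heart rate (an assumption for illustration; the article itself gives only the 70-85% range):

```python
# Heart-rate zone sketch for the 70-85% window discussed above.
# Max HR via the common "220 minus age" rule of thumb - an assumption
# for illustration, not a figure from the article.

def target_hr_zone(age, low=0.70, high=0.85):
    """Return the (low, high) target heart-rate range in bpm."""
    max_hr = 220 - age
    return round(max_hr * low), round(max_hr * high)

low_bpm, high_bpm = target_hr_zone(30)
print(f"Target zone for a 30-year-old: {low_bpm}-{high_bpm} bpm")
```

A chest strap or wrist monitor makes it straightforward to check whether a run is actually staying inside this band rather than drifting into the overly intense range that the article says can suppress the effect.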
That said, if stress is chronic, it can dull the effect. In short, whether or not you are a runner, if you exercise with the right intensity and for the right amount of time, you too will be able to achieve a runner’s high – and it is one that benefits your entire body.

Sources:
- Science Blogs, The Neurological Basis of the Runner’s High
- FitDay, The Benefits of Running: Experiencing a Natural High
- YouTube, How to Achieve Runner’s High
- Chemical & Engineering News, Exploring the Molecular Basis of “Runner’s High”
- Shape, The Truth About Runner’s High
Notarial and Authentication (Apostille)

DISCLAIMER: THE INFORMATION IN THIS CIRCULAR RELATING TO THE LEGAL REQUIREMENTS OF SPECIFIC FOREIGN COUNTRIES IS PROVIDED FOR GENERAL INFORMATION ONLY. QUESTIONS INVOLVING INTERPRETATION OF SPECIFIC FOREIGN LAWS SHOULD BE ADDRESSED TO FOREIGN COUNSEL.

An Apostille is a certificate issued by a designated authority in a country where the Hague Convention Abolishing the Requirement for Legalization of Foreign Public Documents (the Apostille Convention) is in force. See a model Apostille. Apostilles authenticate the seals and signatures of officials on public documents such as birth certificates, notarials, court orders, or any other document issued by a public authority, so that they can be recognized in foreign countries that are parties to the Convention. In the United States, there are multiple designated Competent Authorities that issue Apostilles; which authority may issue an Apostille for a particular document depends on the origin of the document in question. Federal executive branch documents, such as FBI background checks, are authenticated by the federal Competent Authority, the U.S. Department of State Authentications Office. State documents such as notarizations or vital records are authenticated by designated state Competent Authorities, usually the state Secretary of State. The Hague Conference on Private International Law, the international organization that created the Apostille Convention, maintains an Apostille Section on its website with helpful information, such as the user brochure The ABCs of Apostilles and links to Competent Authorities for every country, including the United States, where the Convention is in force. If you have a document that needs to be authenticated for use in a country where the Apostille Convention is not in force, the U.S. Department of State Authentications Office has useful information on its website about the process.
The 5 Types of Pacific Salmon in British Columbia Waters

With so many species of fish in BC waters, there is something to catch at practically any time of the year. One of the most popular fish that draws anglers to our region is salmon. You’ll find this post on the different types of salmon helpful if you’re planning a trip to the Pacific Northwest, or if you are a local who just wants to brush up on the 5 main types of Pacific salmon in British Columbia.

What Makes a Salmon a Salmon?

It’s no secret that British Columbia is best known for its salmon fishing, both freshwater and saltwater. So, what makes a salmon a salmon? The name “salmon” covers several species of ray-finned fish in the Salmonidae family. (Trout, char, grayling, and whitefish are also in the Salmonidae family and will be covered in a future post.) Pacific salmon are anadromous, which means they are born in freshwater streams, migrate to the ocean for most of their lives, and then return to the same freshwater stream in which they were born to reproduce (spawn). Pacific salmon are also semelparous, which means they die after reproduction and become a food source for other life forms in BC’s coastal ecosystems. There are 5 Pacific salmon species indigenous to the coastal waters of British Columbia: Chinook, Chum, Sockeye, Coho, and Pink. There are also two additional species of Pacific salmon – masu and amago – that are indigenous to Asia and cannot be found in BC. It should also be noted that Pacific salmon are distantly related to Atlantic salmon but have a different number of chromosomes.

Chinook Salmon (also called “King” or “Spring” salmon) are the largest and rarest of the Pacific salmon, weighing upwards of 50 kg and measuring 40 or more inches long. Chinook that weigh over 30 lbs are called “Tyee”.
Tyee salmon are highly sought after and popular amongst anglers because they are big, strong, and taste great – especially when grilled or prepared as smoked salmon. You can identify chinook by their dark mouths, black gums, and V-shaped, silver tails that are often covered in spots. Anglers are allowed to catch up to 30 chinook per year and must log each catch. Saltwater chinook fishing is best done from your boat or yacht between May and September using baitfish like herring or anchovies. Lure casting, trolling, and float fishing are all common methods used to catch chinook, whether you are on a boat or fishing from lakes and rivers. Use big spoons, jigs, hootchies, or spin ’n’ glows to get started.

Chum Salmon (or “Dog” Salmon, nicknamed for their canine-like teeth) are the second largest of the Pacific salmon and are easy to spot thanks to the dark horizontal stripe running down each of their sides. They also have large pupils, white jaws, and a somewhat forked, spotted tail. Chum can be 20 inches long or more and weigh 10 to 30 lbs. They are strong and highly abundant, but not as tasty as the other Pacific salmon; they are best poached or steamed to enhance texture and taste. Chum can be caught in saltwater before October, when they start to migrate back to freshwater between October and December. Note that they are easier to hook than they are to reel in, and for this reason a heavier rod, reel, and line are recommended. Try out various techniques like drift fishing with a float, spinning with spoons or spinners, or trolling in the ocean using hootchies.

Sockeye Salmon (or “Red” salmon) are medium-sized, silver-blue salmon with small black speckles on their bodies. When they migrate back to their home streams, their bodies become reddish in colour with bright green heads. They have pink gums, large eyes, and slightly forked tails without spots.
Sockeye measure about 24-32 inches long and weigh around 6-18 lbs. They are delicious fish, popular grilled or eaten raw as sushi or in a salmon poke bowl. Around the Vancouver Island region, sockeye salmon fishing season is usually July to early September. You will have a lot of success trolling for sockeye in the Georgia Strait and the mouth of the Fraser River using colourful hootchies or spoons. Coho Salmon, also commonly known as “silvers” or “bluebacks” because they stay a nice chrome colour for almost their entire lives, are the most populous of the Pacific salmon. They are modestly sized, at 20-24 inches long and topping out at around 25-30 lbs. They have white mouths and gums and a squared tail. Coho are a favourite amongst anglers because they are tasty and a tad tricky to catch, with their aggressive behaviour and acrobatic skills. Coho salmon fishing in both the ocean and rivers is common. They like to hang out in kelp beds in search of smaller fish. A number of techniques can be used to target coho salmon, with trolling, spincasting, mooching, flyfishing, and barfishing all offering their own perks. Silver or copper spoons and spinners are recommended. Pink salmon are the smallest of the five Pacific salmon, weighing in at just 4-7 lbs each. Their flesh is a nice pink colour, making them aptly named. Mature male pinks have a large, humped back and large oval black spots on their backs and V-shaped tail fins. Pink salmon are the only salmon without silver in their tails. Despite their smaller size, pinks are a popular sportfish for beginners because they readily bite at all kinds of lures and flies and are light enough for young children to have no problem reeling in. A lightweight fishing rod and line is all that is needed, as well as any type of colourful artificial lure. Pink salmon fishing season is from July to September.
For an illustrative guide to these 5 Pacific salmon species, check out the Pacific Salmon Foundation’s salmon poster. All proceeds go to charity. For more information on what other types of fish can be found in BC’s lakes, rivers, and coastlines, check out the provincial government’s list of the most common sport fish in BC. No matter what type of salmon you set out to catch, make sure you’re aware of the freshwater and saltwater fishing regulations put forth by the federal Department of Fisheries and Oceans (DFO). Finally, find out how to prepare any of the 5 species of Pacific salmon with these great salmon recipe suggestions. - Baitcast vs Spin Reels: What’s the Difference? - 33 Different Styles of Fishing - Guide to Cleaning Fish On Your Boat If you need a new boat or yacht for salmon fishing in BC, Van Isle Marina has a wide range of yacht services and yachts for sale moored at our docks. We’ll also share our favourite spots for catching salmon by boat. Check out our selection online or come and see us in person. We are located at 2320 Harbour Road in Sidney, British Columbia near Swartz Bay Ferry Terminal.
Cats greet in several ways, and their body language is a large part of what tells other cats whether they are welcome to get closer or not. Stretching out the head and sniffing is an encouraging welcome. They sniff faces to get a sense of the identity of the stranger from the smell. Cats who know each other or are greeting each other warmly will often give an affectionate head-butt (seen often in Warriors) or a nose-bump. These actions often turn into rubbing and tail twining. A friendly or polite cat will not make eye contact for long, as it is seen as threatening and hostile, so cats use their excellent peripheral vision to see each other. An interested cat will have its ears pricked forward slightly, and its whiskers will also be perked forward in order to sniff the new cat. A contented, friendly cat will hold its tail up straight, which is also used in Warriors as a signal to follow. A hooked tail indicates a friendly but unsure cat. A hostile cat is often easy to recognize. Cats become hostile when they feel threatened — in Warriors the cats may feel threatened when their territory is encroached upon, when fighting, or even when insulted. Generally a confident or aggressive cat will raise its head; cats often settle arguments with long stares, as eye contact is seen as impolite and hostile. A tail held straight but down means a cat is feeling aggressive, and bristling fur means it’s angry or frightened. The more threatened a cat feels, the more it will arch its back and bristle its fur. Since warriors do not often sacrifice their dignity in battle or retreat, the submissive language is often used instead between warriors and a leader, or apprentices and mentors, as a sign of respect. The dipping or turning of the head to avoid all eye contact, a curled tail (especially curled under the body), and crouching down or flinching all indicate respect and submission and are all non-threatening.
Cat happiness is easy enough to interpret — the most famous sign of course is purring, a low, rumbling sound that is comparable to a human smile; it can act not only as a sign of contentedness, but also as a non-threatening message: I am safe, you do not need to fear me. Generally, the louder the purr, the more stimulated the cat is by happiness. Sometimes a cat can also purr out of fear or anxiety, but that is not usually the case. Facial expressions change as a cat grows happy, although they are more easily interpreted by other cats than by humans: there are subtle changes in the light or shape of the eye (which can be described as glowing, as it often is in Warriors), and some cats’ facial muscles even take on a smile-like shape. When a cat is content or relaxed, its muscles relax and its breathing slows down. If a cat is content but not seeking affection, a flicking tail or a rippling of the pelt can indicate it wants to maintain distance. The amount of touch a cat offers to other cats and its owners often depends on the cat’s temperament. Some cats express happiness by seeking contact: rubbing and bumping heads, licking and grooming, kneading, or general touch, like paws. Some breeds of cats actually drool, and many cats exert their energy by adding extra power to jumping. Another symbol of affection, perhaps comparable to a kitty “kiss”, is a long stare followed by a slow blink. Cats, especially housecats, tend to be sensitive in general. Anxiety is common, and can wax and wane easily. Some types of anxiety, stemming from separation and other causes, can last for a long time and shift a cat’s temperament toward a more nervous or timid persona. Signs of short-term anxiety are a tail flicking back and forth, rubbing, restlessness (flexing of the muscles or claws), and an anxious, chutter-like meow. It can disappear quickly when the threat is disposed of or wears off.
Prolonged anxiety tends to be somewhat more serious as a cat emotion, and can stem from anything from the presence of a strange cat or animal to separation from an owner, family member, or, in the Warriors case, a Clanmate. As well as showing some of the symptoms of short-term anxiety listed above, a cat may become reclusive, eat less or nothing at all for long periods of time, and sometimes groom obsessively. Anxiety can build when a cat is stressed by normally non-threatening objects, and an aggressive or nervous cat may lash out, especially tomcats. Another type of anxiety is not always negative and is closer to anticipation. The symptoms of this are again quite obvious: pacing, urgent meowing, and staring. Negative anticipation is exhibited by trying to dodge whatever is coming by feigning sleep, restlessness, or reclusiveness. Cats are very sensitive to the emotions of other cats (they can sense fear, anger, restlessness, sadness, etc.) and will often avoid threatening or high-intensity emotions like happiness or anger, or symbols of them like loud noise, which make cats uncomfortable. They prefer to be around low-level emotions. Another recognizable and instinctive emotion of the cat is high anger: ears flat back on the skull, dilated pupils, fur rippling as the muscles tense, fur standing on end, an arched back, and a lashing tail. More symptoms are rapid breathing and a loud, warning yowl or growling sound. Listed above are actually defensive gestures, not offensive — the point of these signs is to ward off a potential attacker, not to invite a fight. The defensive gestures can quickly erupt into offensive ones: scratching, biting, leaping, and fighting. These bouts of rage can come when a cat feels threatened or cornered, and often dissipate fast when energy is used up or the cat is distracted. Sometimes it takes time for a cat to relax after anger. Like happiness, sad body language is easier for other cats to interpret than for humans.
Eyes change subtly in shape; moping, low energy levels, and a drooping tail are all signs of sadness. Sometimes the cat will neglect grooming or eating, often resulting in a dull pelt and scratching. Cats are very sensitive to tension and anxiety, so medical problems can be created by prolonged sadness or stress.
Telling stories—and listening to them—goes back to the dawn of mankind. Before there was writing, of course, stories, which were often embellished to become more than that—to become narratives—were the only way to pass knowledge from generation to generation. What’s behind the narrative process, and how does the creative use of it matter to you today? In a recent interview in The Financial Times, author and historian Yuval Noah Harari discusses the premise of his book Sapiens: A Brief History of Humankind. In it he advances the theory that humans have surpassed other species primarily due to our ability to create compelling fictional accounts or narratives. This, he argues, makes it possible for people to band together—to help or fight each other—whether it’s believing in the possibility of bringing down animals much bigger and stronger in an organized hunt, accepting the idea of currency, or embracing a cause worth fighting or dying for. “Any band of Neanderthals, Harari suggests, can raise a few dozen people for a hunt but humans can tell the stories needed to ensure co-operation in groups of 150 or more – numbers large enough to organize mass hunting using prepared traps, raise modern armies, or subdue the natural world.” The narrative seems critical to our ability to understand and relate to each other as well. For example, a study published last year in Science showed that reading literary fiction helps people understand others’ mental states and beliefs, a crucial skill in building relationships, and certainly part of a healthy “EQ”. The power of the narrative is also getting the attention of the Defense Advanced Research Projects Agency (DARPA). The agency’s biotechnology office is studying how the narrative process shapes the way we process events—most notably traumatic ones—and how that impacts post-traumatic stress disorder (PTSD) and can contribute to radicalization as well.
“Narratives may consolidate memory, shape emotions, cue heuristics and biases in judgment, and influence group distinctions.” In other words, narratives can be used for good or bad. Indeed, Nassim Taleb discusses at length what he calls the “narrative fallacy” in his book The Black Swan. We are vulnerable to the seduction of the narrative, he argues, and that can result in cognitive biases, not least a susceptibility to the allure of correlation (as opposed to causality) from data mining and big data (a particularly relevant issue today). In other words, our minds are desperate to find meaning, and we need to be on guard against that. The common thread I’m trying to weave through these varied accounts of the narrative process is that without creativity, these narratives (fictional or otherwise) wouldn’t be possible (and therefore couldn’t be used for either good or nefarious ends). And without imagination on the part of those who listen, they would have no power; creativity and imagination are two sides of the same coin. The creativity of the person writing the narrative is very important of course, but he relies on the imagination of those receiving the narrative, who then, given their capability for empathy and ‘theory of mind’, can understand and co-opt that narrative as their own. It remains a uniquely human capability—at least at this degree of complexity, we assume. (Perhaps animals have ways to tell “stories” in a limited way, as has been suggested by observing ants and bees doing a dance to communicate distant food sources, etc.) The famous fax machine analogy also comes to mind: one fax machine is worthless—two have some value, and many, a lot of value. But just as the fax on the receiving end has to decode the signal to recreate an image—a facsimile—so does the human imagination decode the narrative.
But in the case of humans, as opposed to the lowly fax machine, this reconstruction can vary dramatically from individual to individual and be modified as well, adding a special dimension that is unique to them and their experiences. And just as the usefulness and value of fax machines grows exponentially with their number (Metcalfe’s law), so does the power of a narrative grow with the size of the network of those tapping into it, and hence the power of social media (and, unfortunately, its increasing use by radicalized terror groups). Creative Heuristic #2: Use the power of creative narratives for good. Use them to inspire your coworkers, employees, and customers—do so in a way that taps into their imagination so that the narrative becomes, in part, their own as well. -Mark HT Ridinger
You can see the smoke from space. It looks like wisps of cotton stuck on a branch, floating toward the blue of the sea. On the ground, amid the actual flames and swirling winds, there’s nothing but orange, in all shades: tangerine flames, brick-hued sky and a bloody, hazy horizon. Many animals are dying in this hellfire, with nowhere to go and little relief as the disaster in Australia rages forth. Wildfires can spread at nearly 7 miles per hour in forests and an astonishing 14 miles per hour in grasslands, ravaging animal habitats and consuming generations whole. The creatures in Australia are famously tough, having evolved in a land of extremes. It’s just the human touch, really, that’s erased entire species over the last 200 years in this massive island continent. With the specter of accelerating climate change and catastrophes like the ongoing fires, scientists fear upwards of 30 percent of existing species could be “committed to extinction” by 2050. The tragedy is obvious, even if you haven’t seen images of dead marsupials lining the freeway; one estimate suggests half a billion animals have been affected in Australia so far. What’s less clear, however, is how human-driven climate change will impact whether survivors can continue bloodlines. Experts say that we need more info, stat, on how a warming ecosystem can hurt a species’ ability to reproduce. But what we know so far paints an alarming picture — for humans, not just other animals. “Currently, the information we have suggests this will be a serious issue for many organisms. But which ones are most at risk? Are fertility losses going to be enough to wipe out populations, or can just a few fertile individuals keep populations going? At the moment, we just don’t know,” Tom Price, evolutionary biologist at the University of Liverpool, wrote in a scientific journal op-ed last year. 
The horrors of climate change have been depicted through flames, melting ice, monsoon floods and whipping winds, but the question of fertility is the one that looms largest for us. Humans are clever enough to evacuate and take shelter in more effective ways than the average mammal, but even we can’t outrun the effects of heat on our literal balls. Aquatic species and cold-blooded animals are most vulnerable to the impacts of climate change, but mammals are highly sensitive, too. Especially in the testicles, which usually hang outside of male mammalian bodies because body heat kills sperm. Turns out, this is critical: Heat stress makes males infertile before females, according to research from the University of Lincoln. This isn’t just a problem in environments with future rising temperatures, but rather a part of current fertility patterns in the U.S. Even a balmy summer heat wave can make men shoot blanks, as a UCLA researcher discovered by reviewing annual birth rates. Environmental economist Alan Barreca observed that this dip in fertility happens across the country, in different climates, even despite evidence that people have more sex when it gets warm. This is a real problem given that we’re headed toward serious underpopulation issues, which could slow economic growth in ways that decimate the “middle class” and those in poverty. Americans had fewer babies in 2017 than any year since 1978. And looking at the culture, it doesn’t appear that’ll dramatically change anytime soon. We’re already choosing to not have kids because we don’t want to bring them into a world that’s spiraling toward ecological and economic disrepair. Couple that with the damage climate change can and will do to home values, prices of goods, development, job growth and beyond, and it’s easy to see why we’re facing some deep, systemic issues without much recourse visible. 
Some nihilists will suggest that “overpopulation” is the climate problem, actually, and that us going infertile and sinking into a nice, long Great Depression 2.0 is partly just Mother Earth’s correction. Except researchers all say that’s not really true. We really need to be having kids. (And even if a couple can get pregnant, they’ll have to deal with the danger of heat stress hurting fetuses in the womb.) One of the cruelest twists of all is that this infertility problem won’t affect all societies equally. New research last year suggested that global temperature increases will disproportionately hurt poorer countries, which usually rely on agriculture to support the working class. Scarcity of goods would lead to higher prices, and ultimately, a change in labor demands (i.e., a need for more manual workers). “Our model showed that climate change decreases the return on acquiring skills, leading parents to invest fewer resources in the education of each child, and to increase fertility,” wrote researcher Soheil Shayegh of Bocconi University in Milan. People in developed nations like the U.S. wouldn’t have the same economic pressure, lowering our birth rates, and allowing a smaller pool of kids to have more resources like education. “This is particularly poignant, because those richer countries have disproportionately benefited from the natural resource use that’s driven climate change,” co-author Gregory Casey, of Williams College, observed. The slow horror of mysteriously not being able to have kids, even when you desperately want one, is a stark contrast from the images of climate change consequences we normally see. There will be no train a la Snowpiercer, and no tidal waves like the ones seen in The Day After Tomorrow. But scientists are racing to find more data on infertility because of the profound way birth shapes the fabric of both society and the individual. 
Perhaps just a handful of fertile individuals can technically keep other mammalian species going, as some biologists suggest. Certainly, that process will help rebuild some ecosystems in Australia, even as the lands burn into a charred, cracked crust. But having a child is so much more than survival for humans, and to lose that brings unique pain. There will be no massive storm to point to as the source of destruction. Just a string of too-warm days, and the sinking feeling that it can’t be fixed.
A trip on the Camino del Diablo The Camino del Diablo – the Devil’s Highway – is a trail for those who love history. Created from virgin desert as many as a thousand years ago by the Papago Indians on their way to trade (or maybe make war) with tribes to the west, it was a foot trail until the Spanish arrived in Mexico. There are different thoughts as to just where the Camino actually began. Some believe its starting point to be near the town of Sonoita, Mexico. Others are inclined to think it began near the town of Caborca, further east. The trail heads northwest, out of Mexico, then drops toward the southwest and runs for several miles parallel to and about a mile from the Mexican border. When it reaches the Tule Mountains it turns northwestward, then heads almost due west, past the Cabeza Prieta Mountains to the Tinajas Altas Mountains, perhaps the most important mountain chain of many on the route. The Tinajas hold natural water tanks, a lifesaver for more than one thirsty traveler. From that point the Camino heads north-northwest along the foot of the Gila Mountains to Yuma. The first known Spaniard to travel the Camino was Melchior Diaz, a captain and part of Coronado’s exploration force. Diaz, it’s said, didn’t fare well on the trail, impaling himself on his own lance while attempting to spear a jackrabbit from his horse. Later Europeans included a priest, Father Eusebio Kino, who first traveled it in search of lost souls in the late 1690s. An old ore-wagon road running southwest from Ajo joins the Camino at a point around 40 miles from that town. The ore wagons would continue on the Camino to Yuma with their loads of copper ore. The Camino was most heavily used from 1849, when the gold rush started, through the 1860s and beyond, when placer gold was found north of Yuma. It was a safer route than the one running parallel to the north, the Gila Trail.
Apaches were much less likely to raid parties of whites crossing the infernally hot sands through which the Camino ran. After the middle 1850s, boundary survey crews used it extensively to determine an accurate line between Mexico and the U.S. When the railroads were laid in the late 1800s, traffic on the Camino dropped dramatically. That is, until recently, when this country started seeing more and more of our southern neighbors, people who walked or drove across the trail going north, not west. I began my walk at Papago Well, at the approximate confluence of the Ajo Ore Wagon Road and the Camino. Flora in that area consists of scattered Saguaros, Ocotillo, Cholla and the ever-present Creosote bush. The trail to this point and for some distance west is easy to travel, though bumpy. That was to change after 10 miles. My friend and SAG (Supplies And Gear) partner, Rich Gerow, out of Martinez, California, was driving his pickup, pulling my small quarter-ton trailer. That was to be my bed. We agreed that he’d stop every couple of miles, to make sure I was still moving my feet west. My main interest was simply to walk where others had trod for so many hundreds of years. On the way, I wanted to attempt to spot gravesites. It was hard to believe so many people had rested their weary heads for the last time in the middle of that lonely but beautiful piece of Arizona. It’s safe to say the majority of those who left this world while on the Camino did so during the gold rush, starting in 1849. Unable to transport the bodies of friends and loved ones for burial in more civilized locales, travelers simply buried the dead where they fell. The numerous gravesites just off the road attest to that. Just a short distance west of my starting point, I came across two faint gravesites, side by side; rectangles of small rocks partly covered by blowing sand. About 4 miles down the road I came to O’Neill’s Pass, wherein lies his grave. Mr.
O’Neill was a prospector in the late 1800s and early into the 20th century. It’s said he enjoyed his spirits. Unfortunately for him, his spirit left him one day, when he tripped over a rock near his camp, fell into a water puddle and drowned. In the driest desert in Arizona. His grave is marked just off the road. Passers-by leave mementoes, maybe for luck: coins, bullets, bottles, etc. A couple miles further, in the middle of a silent, beautiful desert vista, stands an ugly but necessary monument to one country’s attempt to protect its borders from other nations’ products and people. Camp Grip, located on the north side of the Camino, is a very large building, maybe the size of a hangar, outside of which are parked any number of BP (Border Patrol) vehicles and supporting equipment. A fence surrounds it. It is an incongruous spectacle. I passed it quickly, and a couple hours later found myself at the eastern edge of the lava beds. Many eons ago, a volcano spewed tons of small lava rocks over the landscape. That volcano, located a couple miles into Mexico, was given the name ‘Pinacate’, a derivation of “Pinacati”, meaning Black Beetle in the Aztec language. Judging from the number of those big bugs that walk with their butts high in the air, the lava rocks don’t outnumber them by much. Rich had picked a campsite 50 feet into the desert, adjacent to the end of a ‘road-drag’ section at the east edge of that lava bed. BPAs (Border Patrol Agents) regularly smooth the road as best they can, to detect footprints of illegals crossing. The smoothing is done by use of a number of old vehicle tires connected by chain and attached to the back of the BP truck. That evening, a BPA stopped by and talked to us briefly. He said our site would be checked periodically during the night. And it was. Several times we saw headlights slowly moving by our camp. The next day we broke camp, and I headed across that lava bed.
It’s a pretty rough ride in a truck, but vehicle passengers don’t get stone bruises. That’s what I found myself with: a perfectly round stone bruise on my left foot. Within the lava beds, a short distance south of the trail, exists a gravesite with the name ‘Nameer’, a Middle Eastern name, laid out with lava rocks. More rocks surround the name. No one seems to be able to identify the person or persons lying there, but he/she must have been a person of some importance. Gravesites off the Camino aren’t ordinarily built so elaborately. A few minutes west of the grave, while I was walking off the trail, a 4-door Wrangler came in from the west and stopped. Two young men were gazing at me, and the passenger said, “Are you missing something – like a car?” I had to laugh. “Nope. I’m just out for a walk.” “Okay. Have a good time,” he said, and they were on their way. Shortly after noon I limped off the lava bed and into the Pinta sands. As a geological feature, the sands surround the lava bed on three sides. The grains seem small enough to have the consistency of flour, which was easy on the stone bruise but, because of the way I had to walk to favor it, made for lots of fatigue. I wasn’t more than a mile into the sands when I hitched a ride with Rich. He was able to navigate the truck with some difficulty, and we eventually made it through to a firmer surface. This part of the Camino is sunk about 3 feet below the surrounding desert surface, due likely to the consistency of the sand. A mile or so further, just east of the Tule Mountains, we came upon more graves. These sites were laid in a heavily vegetated area, one which showed evidence of recent ‘gully washers’, with wide, shallow beds wandering among the Saguaro, Mesquite and Creosote bushes. A lot of paired-up desert flora exists here. This particular area had probably the densest flora growth we’d see, with the exception of Tule Well, which we drove to a short time later.
Tule Well is out of the Pinta Sands region and is well treed, with many Mesquites and other plant growth. And it has a little history. Some time after the well was dug, late in the 19th century, a traveler named Raphael Pumpelly was asked how he found the water at the well. He said, in early-1900s vernacular, that it was pure nasty, to which the questioner replied, “That’s because we dumped a body down there a couple years ago.” The well area is a popular camping spot for Camino travelers, having a couple picnic benches and grills, but no water. An adobe hut was built here by a government agency back in the late ‘30s, and on a rise nearby, a tattered Old Glory waved. We had our meal that evening, then I spread my sleeping bag on a picnic bench under a Mesquite and immediately went to sleep. The following morning I awoke with swollen lips and eyes (maybe an allergic reaction to the Mesquite) to hear the BP driving up to hook up their road-drag device. We chatted, and shortly after, they were on their way. About 9 AM I was back on the road, and an hour later passed within a half mile of the Tule Tanks, another likely stopping-off water hole for Fr. Kino and all who came after. My destination on this day was due north of still another gravesite, where eight people in a Mexican family died – Mom and Dad and 6 kids – when a wagon wheel broke and their sole container of water smashed onto the desert floor. They’re buried in a circle in a spot 10 miles west of Tule Well and a half mile south of the Camino. Pronghorn tracks were numerous, crossing the trail, and the sand consistency was much like that in the Pinta. The Cabeza Prietas were on my right, the Tule Mountains on my left and the white desert directly in front, peopled with an occasional Saguaro, a few Ocotillo, and of course the ever-present Creosote. Another middle-of-the-afternoon stop, and this was to be our last night on the Devil’s Highway, so we toasted to our progress.
Our last day – up at the crack of dawn we broke camp, and Rich was on his way, promising to wait every mile and a half so he wouldn’t have to backtrack too far should the stone bruise become too much of an obstacle. The last 10 miles was a walk between two rows of Mesquites bordering the road. A straight white line, directly to the Tinajas Altas Mountains, now standing in clear view: vertical lines, fissures, separating high, perfectly smooth surfaces reaching up into the sky. Maybe an hour into my hike, I spotted a BP truck coming east. I stepped to the side, the truck stopped, and the young agent asked me in a professional tone if I were a citizen. I explained what I was doing, and he said, “Oh, you’re with the guy in the dark pickup.” We discussed the Camino, then he went on his way. About noon I came to a couple large signs designating the boundary of the Goldwater bombing range and the National Wildlife Refuge. There’d been little evidence of the range to this point, though some frequent thumping was heard earlier as bombs hit the desert miles to the east. I’d also come across a number of 20-millimeter and .50-caliber cartridge cases, and an old bomb fin, during my walk. A short while later I found myself at the foot of the Tinajas Altas, and the end of this leg of the trip. It’s been written that in the early 1900s, one desert explorer counted 250 graves at the foot of the mountains’ tanks. It’s a scene of sadness and of wonder, that a people were so driven as to attempt to cross the hottest desert in this country – in the summer, unprepared. It’s a place worth visiting just once. A place to think about what drives people, sometimes to almost certain death, all for a chance to possess that yellow metal. Friends and others were uncomfortable thinking of this geriatric walking in an area known for its heavy, illegal foot traffic, but we saw not one northbounder during our adventure.
The Border Patrol is doing an outstanding job, and their efforts are appreciated. Many thanks, too, to CPNWR rep. Margot Bissell, who got us on the road to the Camino from the Refuge office in Ajo. She answered questions and made good suggestions. Finally, a trip on the Camino should not be taken without at least one reliable partner. Rich filled that need, keeping his truck on the road and his partner on his feet.
Red meat, processed foods and dairy products: the health risks posed by these staples of a Western-style diet are underscored by a new study in the American Journal of Medicine. By examining medical data for thousands of British adults from 1985-2009, researchers found that, by eating the fried, sugar-loaded, processed diet typical of too many in Western countries, people reduced their likelihood not only of living into old age but of enjoying all of their years. Researchers from France led by Tasnime Akbaraly studied 3,775 men and 1,575 women with a mean age of 51 — at the midpoint of life. All had been part of the British Whitehall II study that looked specifically at how diet can affect metabolic syndrome, which is known to be a predictor of heart disease. Specifically, they assessed the health of participants according to the Alternative Healthy Eating Index (AHEI), which was developed by members of the Harvard School of Public Health and others as an alternative to the USDA’s dietary guidelines. The AHEI was created to, indeed, show how “specific dietary patterns and eating behaviors” are “associated with lower chronic disease risk based on previous epidemiological and clinical investigations.” Under the AHEI guidelines, people are to make certain food choices, such as “white meat over red meat, whole grains over refined grains, oils high in unsaturated fat over ones with saturated fat and multivitamin use” — that is, to forego what has become the typical Western diet for something healthier. Akbaraly and the other researchers found that, among the thousands of British adults they studied, those who did not closely stick to the AHEI’s guidelines raised their risk of cardiovascular and noncardiovascular death while lowering their chances for “ideal aging” in a state of good health, free of chronic diseases.
My grandmother, who died at the age of 103 in 2008, offers an example of what such a state of “high functionality” in old age looked like. For all but the very end of her life, she walked almost every day to Oakland’s Chinatown to buy groceries and play mahjong. She cooked and sewed and was a central focus of generations of my father’s family. She lived just a few blocks from a McDonalds, but I don’t recall seeing her eat anything from there. While she didn’t eat whole grains, sticking to the white rice that is a staple of Cantonese food, she ate (and had us eat) plenty of green, leafy vegetables. American fast food — McDonalds, Kentucky Fried Chicken, Starbucks — is something you can find “exported” to seemingly anywhere in the world, from India to Russia. Sadly, obesity and related health issues, such as heart disease and diabetes, are also increasing in countries like Japan and India, where people have abandoned a traditional, far healthier diet and become more sedentary in their lifestyle. Japanese men and women could once be said to “live longer and healthier than everyone else on Earth” thanks to a diet involving a “healthier balance of filling, delicious lower-calorie foods, presented with beautiful portion control in pretty little dishes and plates” — what is pretty much the exact opposite of the salt, sugar and fat-laden paper-wrapped food too many Westerners eat. Other research has shown how addictive junk food can be. Perhaps the answer to living to a healthy old age lies in developing a preference for a plate of sushi (minus, of course, the potentially mercury-contaminated fish).
Guest essay by Eric Worrall Scientific American reports that the world economy is growing without increases in CO2 emissions, which the author attributes to the rise of green energy. However, there are several issues with this claim. World Economy Grows without Growth in Global Warming Pollution Energy-sector emissions of CO2 remain flat for second year in a row Global energy-related carbon dioxide emissions held steady for the second year in a row while the economy grew, according to the International Energy Agency. In a simple, two-column spreadsheet released yesterday, IEA showed that the world’s energy sector produced 32.14 metric gigatons of carbon dioxide in 2015, up slightly from 32.13 metric gigatons in 2014. Meanwhile, the global economy grew more than 3 percent. Analysts credited the rise of renewables—clean energy made up more than 90 percent of new energy production in 2015—for keeping greenhouse gas emissions flat. “The new figures confirm last year’s surprising but welcome news: we now have seen two straight years of greenhouse gas emissions decoupling from economic growth,” said IEA Executive Director Fatih Birol in a press release. “Coming just a few months after the landmark COP21 agreement in Paris, this is yet another boost to the global fight against climate change.” But some were skeptical of the carbon numbers and questioned IEA’s conclusion that economic growth and energy emissions aren’t linked anymore. CONSERVATIVES, OTHERS QUESTION IEA DATA “I think that’s just silly,” said Benjamin Zycher, the John G. Searle chair and an energy scholar at the American Enterprise Institute. “The estimates of global greenhouse gas emissions really vary depending on which data set you are looking at.” Global energy-related greenhouse gas emissions are likely higher, Zycher said. Some nations have had flat emissions but for unique factors that are hard to replicate elsewhere, he said.
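Taken at face value, the figures quoted above imply a fall in the carbon intensity of world output. A quick back-of-the-envelope check (the "more than 3 percent" GDP growth is rounded down to 3 percent here):

```python
# Carbon intensity of GDP implied by the quoted figures: emissions per
# unit of output. GDP growth of "more than 3 percent" is taken as 3%.
e_2014, e_2015 = 32.13, 32.14        # energy-sector CO2, gigatons (IEA)
gdp_growth = 0.03
emissions_growth = e_2015 / e_2014 - 1                 # about +0.03%
intensity_change = (1 + emissions_growth) / (1 + gdp_growth) - 1
print(f"{intensity_change:+.2%}")    # -2.88% -- intensity fell ~3% in a year
```

So "flat emissions plus growth" amounts to a roughly 3 percent one-year drop in emissions per unit of GDP, which is the quantity actually in dispute between the IEA and its critics.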
Frankly I’m a little skeptical of the model estimates of anthropogenic CO2 emissions. For example, we have seen recent enormous revisions to Chinese CO2 estimates, which raises the question of what other mistakes are waiting to be discovered. Whatever is happening to anthropogenic CO2, there doesn’t seem to be a noticeable change to the Mauna Loa CO2 trend, though who knows – perhaps it is too early to tell.
GE Jenbacher Landfill Gas Engines Generate Renewable Electricity For Grid Renewable energy production initiatives in North Carolina, U.S.A. recently received a boost with the formal opening of a new landfill gas-to-energy plant in Durham, the state’s fourth largest city. Built by the landfill gas project developer Methane Power Inc., the energy plant is powered by three of GE’s containerized JGC 320 Jenbacher landfill gas engines. GE’s Jenbacher landfill gas engines are generating 3.17 MW of renewable electricity for the regional grid by utilizing the landfill’s methane gas, which is created by the decomposition of municipal solid waste. The facility is generating enough energy to support about 1800 North Carolina homes. North Carolina is one of 27 states with a renewable portfolio standard (RPS), which requires utilities to produce a certain percentage of electricity from renewable sources, including biogas. Nixon Energy Solutions, GE’s Jenbacher gas engine distributor for North Carolina, delivered and installed the Jenbacher units for Methane Power’s Durham plant. GE and Nixon Energy Solutions also will provide follow-up services, including parts and systems maintenance, for the entire operating life of the power plant. Commissioned in October 2009, the Durham landfill gas project is the first of eight new U.S. landfill gas plants that Methane Power plans to develop. Methane Power already has ordered three additional Jenbacher landfill gas engines that will be installed at two other sites in North Carolina. Electricity generated by the Durham landfill energy plant is being sold to Duke Energy Carolinas under a power purchase agreement. For more information: www.ge.com
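The "about 1800 homes" figure can be roughly checked against the 3.17 MW rating. The capacity factor and per-household consumption below are illustrative assumptions, not figures from the announcement:

```python
# Rough plausibility check on "about 1800 homes" from 3.17 MW of capacity.
# The capacity factor and per-home consumption are illustrative assumptions;
# the announcement states only the 3.17 MW and homes-served figures.
capacity_mw = 3.17
hours_per_year = 8760
capacity_factor = 0.90               # landfill gas engines run near base load
annual_mwh = capacity_mw * hours_per_year * capacity_factor   # ~25,000 MWh
per_home_mwh = 14.0                  # assumed annual use of one household
homes = annual_mwh / per_home_mwh
print(round(homes))                  # 1785 -- consistent with "about 1800"
```

With plausible inputs the arithmetic lands close to the stated figure, which suggests the plant is being credited at near-continuous output, typical for landfill gas generation.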
Cato Policy Analysis No. 300 April 9, 1998 TWO CHEERS FOR THE 1872 MINING LAW by Richard Gordon and Peter VanDoren Richard Gordon is professor emeritus of mineral economics, the Pennsylvania State University, and Peter VanDoren is assistant director of environmental studies at the Cato Institute. Metal mining on federal lands is governed by an 1872 law. Critics argue that the law "gives away" valuable assets at prices well below market value, often for uses other than mining, and does not allow the government to conserve mineral resources through public ownership. Estimates of the "giveaway" are vastly overstated because of the failure to use conventional financial methodology; any "giveaway" occurred long ago and is not ongoing. The "fraudulent" use of land for nonmining purposes is simply the result of unwise restrictions on land uses that are more profitable than mining. The need to conserve exhaustible resources is a red herring. No exhaustible resource industry has vanished because of the exhaustion of supply, but many renewable resources have vanished for that reason. The U.S. government owns land because many Americans believe that land markets and extractive activities, like mining, do not operate well unless they are publicly owned and subject to scrutiny very different from that received by supermarkets. We would never accept public ownership as a solution to whatever market failures existed in food markets. We also should not accept public ownership in land markets. Future mining claims should be allocated at auction without royalties, but existing claims should remain unaltered. A second-best alternative would be to allow anyone to bid against mining companies under the current mining law regime. If both of those options remain closed because of political considerations, then the 1872 Mining Law should be left alone. 
Many laws affect public land administration as a general matter, but separate laws govern the commercial exploitation of energy and mineral resources. The laws that govern commercial use of energy resources, such as crude oil, natural gas, and coal, reflect the belief that the federal government should retain ownership of the public estate and that commercial access to that land ought to be on a rental, rather than an ownership, basis. Metal mining is the exception. It is still governed by laws that were enacted in 1866 and 1872.(1) Those laws allow individuals to lease or own land that contains valuable minerals. Other laws have closed land to mining or given the secretary of the interior discretion to propose exclusions. However, for the minerals still governed by the 1872 law, the rules have changed very little over the past 125 years. The Mining Law of 1872 allows U.S. citizens to claim land for mining purposes in units of 20 acres as long as $100 per year is spent on the land. The law also permits U.S. citizens to convert their claim to ownership of the land for $2.50 an acre. From that simple regime numerous complications arise. First, the government must decide whether the claimant is the first discoverer of a valuable mining deposit, but the determination of first discovery and the value of the mineral deposits is difficult.(2) Of course, the main complaint about the current regime is about the fees paid to exploit minerals found on public land; those fees are attacked by critics as hopelessly outdated, given subsequent inflation. Such challenges implicitly assume both that minimum charges are desirable and that government should monitor development. Although most Americans continue to believe that government should not as a general matter dictate how (or at what pace) specific tracts of land are used or developed, policymakers consider mining lands an exception to that rule. 
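The nominal sums involved are easy to tally from the figures above. A sketch of the holding and patenting costs for a standard 20-acre claim (the ten-year holding period is an arbitrary illustration, not anything the law specifies):

```python
# Nominal costs of a standard claim under the 1872 Mining Law, using the
# figures in the text. The ten-year holding period is an arbitrary
# illustration.
acres = 20
annual_work_requirement = 100        # dollars that must be spent per year
patent_price_per_acre = 2.50         # price to convert a claim to ownership
patent_cost = acres * patent_price_per_acre
years_held = 10
total_outlay = years_held * annual_work_requirement + patent_cost
print(patent_cost)    # 50.0 -- full ownership of 20 acres for $50
print(total_outlay)   # 1050.0 over a decade of holding plus patenting
```

These nominal amounts, fixed in 1872 dollars, are what critics point to when they call the fees outdated; whether low fees are actually a problem is the question the rest of the paper takes up.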
The bill of particulars marshaled by critics of the law is well-known: - The mining industry is acquiring valuable real estate at prices well below market value, and thus the law is an example of "corporate welfare."(3) - The low annual work requirement ($100) allows large tracts of land to be held in inventory rather than actually used for mining, which promotes speculation and inefficient land use.(4) - Mining claims are often used as a subterfuge to secure land for other uses such as real estate developments, a practice that both subverts the public interest in minerals production and enriches private parties at the public's expense.(5) Critics of the 1872 Mining Law focus on the fact that it has not been changed for 125 years rather than on the narrowing of its applicability. Starting with the 1920 Mineral Leasing Act,(6) Congress initiated what was to become the standard policy of extending access on a rental rather than an ownership basis. The 1920 act established leasing as the means of access to fossil fuels (oil, gas, and coal) and fertilizer minerals. Later Congresses also modified the mining law to prevent the transfer of land obtained for mining purposes. For example, a 1955 law(7) excluded sand, gravel, cinders, and other common materials from coverage under the mining laws because claims to federal land around some western cities, particularly Las Vegas, were obtained under the 1872 Mining Law and quickly converted to commercial and residential real estate.(8) In the 105th Congress (1997-98) the struggle over mining law reform continues.(9) Several bills had been introduced but not enacted as of February 1998. Sen. Dale Bumpers (D-Ark.), a long-time proponent of changes in land law that require payments to the federal government, and Reps. Nick Rahall (D-W.Va.) and George Miller (D-Calif.) 
have introduced bills that would impose a 5 percent royalty on gross income minus processing costs (also called net smelter income) from mineral production on federal land, add a progressive net profits tax on private mines that were privatized under the patent provisions of the 1872 law, and terminate the right of citizens to purchase federal land used for mining.(10) The federal government would retain ownership in perpetuity. Finally, the bills would impose federal reclamation standards on hardrock mines for the first time. Sens. Larry Craig (R-Idaho) and Harry Reid (D-Nev.) have introduced the Mining Reform Act of 1997 (S. 1102), which is supported by the National Mining Association. The bill would alter the sale of patents of mining land to require payment of "fair market value" rather than $2.50 an acre (sec. 204). The fair market value, however, would apply only to the land exclusive of any minerals. In addition, the federal government would charge a 5 percent royalty on the net proceeds from mining on all unpatented mining claims and all mining claims patented after enactment (sec. 401). The federal government also would allow states to enforce the relevant environmental regulations if the states requested to do so (sec. 307). Unfortunately, the debate over reform of the 1872 Mining Law pits largely defective attacks against generally incomplete defenses. Critics of the current law use an egregiously inaccurate methodology to conclude that the "economic giveaway" is quite large. What critics call abuses are simply efficient economic responses to bad laws. The environmental impacts of mining, moreover, are dramatically overstated. Opponents of reform are unfortunately content to accept public ownership of the mineral estate, a regime that inevitably politicizes economic decisionmaking and introduces all of the complications inherent in socialized enterprises. Defenders of the current regime also argue that valuable mineral deposits are unique and rare. 
Thus, they believe that a law prohibiting alternative uses of mining land is the best policy. That argument, in turn, has two important implicit premises, one of which is valid and vital, and the other of which is wrong. The valid premise is that the government must adopt simple rules because it cannot handle complexities.(11) The invalid proposition is that government should, nevertheless, control private decisions about how land is used. The first two sections of this study examine the most common criticisms of the 1872 Mining Law: that it is a subsidy to mining interests and that "waste, fraud, and abuse" are rampant. The third section considers the prescriptions offered by the critics to remedy those problems. The final section makes the case for invigorating the best parts of the law by making more muscular its land-disposal orientation. In sum, we find the law worthy of "two cheers"; the criticisms leveled against it are largely ill-considered. It would be worthy of a third cheer were it a more robust engine of unbiased privatization. The Absolute and Final Word on the Mining Fee Since the early years of the Republic, a critical aspect of the public lands debate has been a largely pernicious preoccupation with payments to the Treasury. The federal government, for example, vigorously promoted the imposition of fees for grants of farmland but eventually abandoned the effort in the face of massive opposition.(12) Even when fees are levied, complaints that actual payments are unsatisfactory are never far from the political surface. Those complaints are particularly strong in connection with the 1872 Mining Law because the extraction of valuable minerals on federal land takes place with minimal payments to the Treasury. What the debate is really about is the distribution of wealth. Critics of the 1872 Mining Law contend that the profits generated by mining federal lands are huge and that they belong to the taxpayers, not the private mining industry. 
The evidence is largely anecdotes about how little is paid to the federal government for land that yields tremendous mineral revenue. Typical was an April 9, 1997, NBC "Fleecing of America" segment on the Nightly News with Tom Brokaw that used as an example a parcel of land in California that contained $266 million in gold but was sold for only $1,725. Even the most casual analysis, however, finds that the quest to transfer natural resource rents from mining companies to taxpayers is not worth the populist attention given the issue by the media. The mining profits generated from that land--to the extent that they exist--are absolutely trivial. Critics of the present claim fee err in three important ways. First, they ignore the speculative nature of mining claims when they retrospectively examine land sales and asset values. Second, their calculation of profits from mining federal land is wildly inflated. And finally, they ignore the existence of secondary markets for federal land claims as well as the dissipation of "subsidy" that occurs through nonmarket competition for rents. When those factors are accounted for, one is hard-pressed to identify any "subsidy" of consequence. Retrospective Examination of Asset Values The first error made by critics of the mining fee is their practice of obsessing over how little the federal government receives for the mining land relative to its later market value. At first glance, $2.50 per acre does seem underpriced. But before we can determine whether that fee is too low, we need to understand how individuals determine an asset's value in a market economy. If the advantages that flow from ownership of an asset are certain, people will pay the present value of the flow of future benefits using a risk-free interest rate, such as the return on U.S. Treasury notes. 
If an asset's benefits are uncertain (or the time at which the benefits will end is uncertain), then the discount rate used in the present value calculation is much higher than the risk-free rate. In most situations, the future benefits from assets are uncertain as to both size and timing, and, thus, the discount of those benefits creates prices that are low relative to the price of an asset the returns on which are certain. Some assets initially clouded by uncertainty turn out to perform very well. If one examines only the subset of "good performers" from the universe of initially uncertain assets, one will always conclude that the purchaser of the asset was advantaged. A 1989 General Accounting Office study of lands patented under the 1872 Mining Law used that style of analysis when it noted that our review of 20 patents issued since 1970 showed that the federal government received less than $4,500 for lands valued in 1988 at between $13.8 million and $47.9 million. . . . Patent holders sold 17,000 acres of oil shale land to major oil companies for $37 million. Just weeks earlier they had patented the land and paid the government $42,500.(13) The fallacy of such thinking, of course, is that it ignores the subset of "bad performers" that may form a large percentage of the original universe of initially uncertain assets.(14) An examination of all assets, including those that do not perform well subsequent to the start of the analysis, leads to the conclusion that in markets with many participants, risk-adjusted excess profits on assets are zero.(15) Some of the assets surrounded by uncertainty will make large profits, but others will have been bad bets and will prove nearly worthless. If bids were gathered for all assets, the total bids would equal the present value of the excess profits. However, that need not be true for any one asset. 
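The pricing logic described above, identical expected cash flows valued at a risk-free rate versus a rate reflecting uncertainty, can be sketched with illustrative numbers (none of which come from the paper):

```python
# Present value of a level annual cash flow, discounted at two rates.
# All numbers are illustrative, not from the paper.
def pv(cash_flow, rate, years):
    """Present value of an ordinary annuity: sum of discounted cash flows."""
    return sum(cash_flow / (1 + rate) ** t for t in range((1), years + 1))

safe = pv(100, 0.05, 20)    # certain benefits, risk-free-style rate
risky = pv(100, 0.15, 20)   # uncertain benefits, risk-adjusted rate
print(round(safe, 2), round(risky, 2))   # 1246.22 625.93
```

The same expected $100-a-year stream is worth roughly half as much when its benefits are uncertain, which is why a low purchase price for an uncertain claim is not by itself evidence of underpricing.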
The bid for a property that proves highly profitable may have been far lower than the present value of the land, but that is offset by payments in excess of the present value of the land for less successful ventures. On average, the returns are normal, but they are not necessarily normal for any particular asset. The relevant policy question in the case of the 1872 Mining Law, however, is whether the price of zero (free access), or $2.50 per acre if the land is purchased, for a ticket to the "mineral claim" lottery deviated substantially from the expected value of the winnings. The fact that some of the mineral claims subsequently became very valuable does not necessarily imply that the market price for the "lottery tickets" that gave rights to such claims would have been much greater than zero. Gross Errors in the "Giveaway" Calculations The second error critics of the mining fee make is their complete misunderstanding of how valuable mining land is to the nation as a whole and the mining industry in particular. Critics routinely point to the staggering sums that have supposedly been "given away" to corporations under the aegis of the 1872 law. Some perspective, however, is necessary. First of all, profits derived from land are not a large part of national income. Most estimates are around 6 percent of national income, or $372 billion in 1994 (approximately $1,400 per capita).(16) Only a tiny fraction of that amount could possibly come from mining activity on public lands. If 1 percent of land rents was derived from such land, the amount would be $14 per capita. Nevertheless, for several years the Mineral Policy Center has campaigned against an alleged $231 billion giveaway of public lands claimed for metal mining. Their estimate is widely referenced by politicians, in leading newspapers, and on television.(17) Even by the low standards of populist crusades, however, the MPC's work is severely flawed. 
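The per-capita land-rent arithmetic in the passage above can be reproduced directly from the paper's own numbers:

```python
# The paper's 1994 land-rent arithmetic, reproduced from its own figures.
land_rents = 372e9                 # ~6% of national income, per the text
population = land_rents / 1400     # implied by "approximately $1,400 per capita"
public_land_share = 0.01           # "if 1 percent of land rents" came from such land
per_capita = public_land_share * land_rents / population
print(round(per_capita))           # 14 -- the $14 per capita in the text
```

The implied population of roughly 266 million matches the U.S. population in 1994, so the $14-per-capita figure is internally consistent.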
That is evident from interpreting and checking the data from the position paper that presents the numbers.(18) The exaggeration involves both the use of an inappropriate measure of the worth of mineral land and the use of language that seems deliberately designed to mislead. The $231 billion figure is actually an estimate of the cumulative market value (in 1994 prices) of all metals produced from federal land since the operative law was passed in 1872. While the MPC is vague about the methodology used to calculate the estimate, enough information is provided to convince us that our interpretation is correct.(19) The MPC goes even further astray by relying on "'gross' value, meaning [the value of the mineral reserve] excluding extraction, processing, and marketing costs."(20) That statement, at best, involves a strange definition of "excluding." The usual concept of "gross" is revenues before deducting (which most people would consider to mean including) costs. The report clearly uses projected receipts without deducting projected costs and invalidly uses those values as a measure of the giveaway. To make matters worse, the report makes the mistake of calling those gross values the worth of the minerals in the ground or "taxpayer loss."(21) Mining, however, is no more a free lunch than are other activities. The correct measure of the worth of mining land is real or projected revenues less all relevant costs. Those costs, moreover, include the return on investment needed to repay outlays to hold minerals and the plant and equipment needed to produce them. There is simply no way to salvage the MPC's calculations. That the melding of mineral deposits, labor, plant, and equipment produced $231 billion in output may be interesting but not necessarily in the sense the center claims. Those activities are a trivial part of a giant, 125-year economy. More critically, the number tells us nothing of public policy relevance.
By definition, the profits from mining operations are the difference between revenues and the costs of labor, plant, equipment, and other inputs. A priori, we have no way of knowing how large those windfalls may be. The optimistic possibilities are deposits so attractive in terms of mining cost, ease of ore processing, and proximity to market that very large profits are made. At the other extreme, the prospects may have disadvantages that prevent any windfalls from occurring. Interestingly enough, none of the pending mining claims discussed by the MPC is an example of the large, high-grade deposits that are the large profit generators in metal mining.(22) They seem more like operations that will generate low or nonexistent rents.(23) The MPC, in fact, tacitly recognizes that by demanding that taxpayers receive only 8 percent of mining revenues.(24) But if mining land were truly the huge profit generator implied by the MPC, settling for 8 percent of earnings would be the type of policy routinely denounced as a giveaway at fire-sale prices. The only plausible explanation for accepting such a low royalty is that the MPC knows that it is using an inflated measure of worth. All that suggests that the center is manipulating the data to make trivial amounts seem more interesting. In principle, the true value of the "giveaway" could be anywhere between zero and $231 billion. The high figure is wildly implausible because mining has never been limited to claims so fabulously profitable that extraction costs are negligible. Zero, in fact, is a much more reasonable figure. Averaged out over the bonanzas and busts, the return on mining claims may be very low. Mining industry folklore has it that the industry is perennially unprofitable. That, too, is obvious hyperbole. Too many firms persisted for long periods for them to have failed to make money.(25) However, the exaggeration is probably much smaller than is the claim of $231 billion in gain. 
If we accept (for the purpose of argument) the 8 percent royalty proposal as an estimate of the rents that properly belong to the taxpayer, the $231 billion "giveaway" in reality amounts to only $18.5 billion. Even that adjustment, however, fails to address the problem of those revenues' being returns on investment spread out over extended periods. A correction is needed for the interest charges arising from leaving resources in the ground for extended periods of time. Unfortunately, the ideal "correction" for the value of unextracted mineral resources at any given moment over the last 125 years cannot possibly be calculated.(26) What we can consider is how asset values are affected by time. We conducted a sensitivity analysis using a wide range of plausible corrections (see Appendix). In our scenarios, the true figure could be as much as 86.6 percent of the $18.5 billion ($16 billion) or as little as .014 percent ($3 million). If we restrict our estimates to "standard" scenarios used in the analysis of investment projects, our estimates range from $3.9 billion to $9.8 billion.(27) Whatever those values, they were earned over a large but unknown number of claims. Mining law specialist John Leshy cites a 1986 government study that reported that 2 million claims "had been recorded," but more recent information suggests that the number of still-valid claims is only 330,000.(28) If, for the sake of argument, 2 million claims were made over the entire history of the 1872 Mining Law, the average "subsidy" received by a claimant under the aegis of that law was worth only $1.50 to $8,000 if our broad range ($3 million to $16 billion) is used and $1,950 to $4,900 if our narrower range ($3.9 billion to $9.8 billion) is used. In short, even the simple adjustments we made suggest that the payoff per claim probably has been trivial.(29) The MPC also applies its gross value methodology to the 30 pending claims that it wished to block at the time of the report. 
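The adjustment chain described above can be reproduced from the stated figures (the text rounds $18.48 billion to $18.5 billion and $2.59 million to roughly $3 million):

```python
# Reproducing the paper's "giveaway" adjustment chain from its own figures.
gross_value = 231e9                 # MPC's cumulative gross-value estimate
royalty = 0.08 * gross_value        # 8% royalty proposal -> ~$18.5 billion
low = 0.00014 * royalty             # 0.014% scaling factor -> ~$2.6 million
high = 0.866 * royalty              # 86.6% scaling factor -> ~$16.0 billion
claims = 2_000_000                  # claims recorded over the law's history
# "Narrower" scenario bounds stated in the text:
narrow_low, narrow_high = 3.9e9, 9.8e9
print(round(royalty / 1e9, 2))                     # 18.48
print(round(low / 1e6, 2), round(high / 1e9, 2))   # 2.59 16.0
print(narrow_low / claims, narrow_high / claims)   # 1950.0 4900.0
```

Dividing even the upper bound across two million claims yields per-claimant figures in the thousands of dollars, which is the basis for the paper's conclusion that the payoff per claim has probably been trivial.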
The value of those claims is estimated at $34 billion.(30) Given the center's apparent belief that an 8 percent royalty is appropriate, the value of those claims falls to $3 billion. Applying our adjustment methodology (scaling factor of .014 percent to 86.6 percent) then reduces the figure to $400,000 to $2.6 billion, or $13,000 to $87 million per claim, for the broad range and $600 million (21 percent × $3 billion) to $1.5 billion (53 percent × $3 billion), or $20 million to $50 million per claim, for the narrower range. Thus, even the potentially successful claims might have trivial values, and even the most generous estimates are too low to justify an elaborate new program. In sum, intelligent consideration of economics and simple math indicate that critics of the 1872 Mining Law are making political mountains out of "subsidy" molehills. If the 1872 law has created any "giveaways," they range from $2.5 million to $16 billion (with the true number probably closer to the lower figure), not $231 billion. Each recipient of that "giveaway" pocketed at most $8,000 that was rightfully the taxpayers'. Although subsidies are objectionable, that amount pales in comparison with the exaggerated figures that have been widely cited in news reports and in the halls of Congress. Why a Giveaway Really Isn't The third error critics of the mining fee make is their failure to recognize how subsidies are dissipated through routine market processes. Even if the 1872 Mining Law "gives away" vast wealth to private interests, two fundamental principles imply that those subsidies do not benefit present owners of mining businesses. Both principles reflect a fundamental insight of economics: "good deals" do not persist in markets. Once information about a "good deal" becomes known, prices change to eliminate excess profits. 
For example, even if the initial mining-claim process transfers wealth from taxpayers to those who make mining claims, a secondary market for mining claims has existed since 1872. Those who obtained their claims in the secondary market, rather than through the initial "free" claim process, paid market prices for the claims to the original owners and, thus, did not receive a giveaway. If a giveaway occurred, the only possible recipients were the initial claimants under the 1872 law. All others have paid for their claims in the secondary market. Even in situations in which markets do not exist for the "good deal," like the initial "free" federal mining leases, and no prices exist that can change to eliminate the "good deal," competition will occur through alternative means (such as fees to lawyers who are good at filing claims or dinners for bureaucrats who file the claims) to achieve the same dissipation of excess profits.(31) The problem with those implicit forms of competition (referred to by economists as "rent seeking") is that they waste resources.(32) The ability to secure valuable assets for no cost leads to investments simply to secure the giveaway. An array of economic studies suggests that vigorous competition for those services (a combination of efforts to comply with the rules for securing the rights and to influence--by legal or illegal means--the grant process) will lead to expenditures equal to the rents.(33) Thus, there is no giveaway, but the process is inefficient because resources are diverted from productive uses to unproductive ones. Although there is not enough information available to determine how much of the 1872 Mining Law's "good deal" was eaten up in rent-seeking costs (if indeed there was any "good deal" available to begin with), we can be reasonably sure that, over the span of 125 years, the market has had more than enough time to react to any subsidy and dissipate it through nonmarket competition. Conclusion: What Subsidies? 
When put under an economic microscope, the giveaways alleged to occur under the 1872 Mining Law prove nonexistent. First, critics err by concentrating on those claims that have returned stunning profits without due consideration of the expected value of the claim. The purchase of assets in markets is best viewed as a lottery.(34) Focusing journalistic and political attention on assets that performed well but were bought on the cheap is like focusing critical attention on the winners of a lottery who collect $10 million but paid only $1 for the ticket. The ticket price paid by the winner tells us nothing about whether the lottery operator should raise or lower ticket prices in general. Second, the alleged size of the "giveaway" is dramatically inflated by the law's critics. The widely referenced $231 billion estimate of that giveaway is wildly unrealistic. First, the $231 billion figure is an estimate of the cumulative market value of all metals produced on federal lands since 1872. If mining costs are not deducted, it tells us nothing that might help "price" the subsidy. Moreover, since advocates of reform typically demand royalties of less than 10 percent of mining revenues, it is clear that even they do not seriously consider the $231 billion figure representative of the 1872 Mining Law's subsidy. Back-of-the-envelope calculations suggest that the true subsidy over 125 years ranges from $3 million to $16 billion, or $1.50 to $8,000 per claimant under the act. Finally, critics forget that "good deals" are invariably dissipated through market competition as prices change to eliminate excess profits. Moreover, to the extent that any giveaways occurred under the act, the only beneficiaries were the initial claimants under the 1872 law (most of whom are long gone now). All others acquired their claims through secondary markets, where the claims were most certainly sold at market prices. Beyond the Fee: Speculation, Fraud, and Abuse? 
Although the sale of federal mining land for $2.50 per acre is the main criticism of the 1872 Mining Law, other matters have stuck in the craw of would-be reformers.(35) Critics decry private speculation that often occurs when claims are made. They worry that, to the detriment of consumers, resources are being "hoarded" and not exploited quickly enough because only $100 a year must be spent on developing a site for a claim to remain valid. A related criticism is that land is being claimed under the 1872 Mining Law and diverted to other uses, primarily real estate development. While both observations are accurate, there is nothing necessarily wrong with current practices and little economic reason to control how mineral lands are used.

Moreover, some critics have maintained that federal ownership of mineral reserves is necessary to ameliorate the negative economic and social ramifications of resource depletion. Shortages are coming, they maintain, and governments would be less likely to recklessly draw down dwindling reserves and would distribute those resources more fairly than would private markets. The 1872 Mining Law, in their view, makes more difficult government's responsibility to manage scarce mineral resources. Not only is that argument incompatible with the criticism that resources are being hoarded; the charge that governments are better able to deal with resource shortages than are market actors is intellectually threadbare.

Speculating about Speculation

Many restrictions are imposed on the timing of mining activities on federal land. Diligence requirements limit how long a lease can be held without any development and how long it can be held after production is shut down. Moreover, regular expenditures are required on land development. Critics, however, often complain that those restrictions are not rigorous enough to constrain speculation and counterproductive hoarding.
Others think that restrictions are a good idea but that present ones are more than adequate. Does the 1872 Mining Law give the federal government too little control over the timing of development? A straightforward implication of efficient markets is that you can never transfer too soon, but you can transfer too late. If mineral rights are transferred before the optimal time to extract, the recipient will simply wait until the right moment. The only possible danger is that legal barriers will delay a transfer until after the optimum starting date.(36)

Research demonstrates that complicating the analysis does not alter the conclusion: no market failure unambiguously implies that delaying the creation of transferable property rights to a resource is desirable. If monopolies exist in competing for land rights, they will persist over time. If there are problems controlling environmental effects, those problems also arise whenever access is granted. If one posits, as we most certainly do not, that governments are more farsighted than markets, it is still impossible to delineate a workable strategy of delayed release that would be an improvement. Clearly, if one believes, as we do, that governments are less farsighted than market actors, one favors more rapid grants of rights. This reasoning, moreover, applies to postgrant as well as pregrant policy. For the same reasons that grants should be unrestricted, it makes no sense for the federal government to impose any requirements on when and how leased properties are used.

Other incendiary critiques of the Mining Law of 1872 are centrally concerned with fraud. The most common example is the patenting of land for a mining purpose followed by a quick sale (usually accompanied by large capital gains) and transformation into a ski resort or real estate development.(37) Those who complain about lawbreaking, however, should realize that the purpose of resource law is to encourage the efficient use of resources.
Assertions about land frauds implicitly assume that the statute satisfactorily promotes the efficient use of land resources and, therefore, should be enforced. Fraudulent uses of patented land are simply the result of unwise restrictions on uses of land that are more profitable than mining.(38) Why should the government "decide" that land should be used for mining rather than for hotels or ski resorts? Seeking to prevent subterfuge without determining its cause is never good policy.(39) Every example presented of the "misuse" of the mining laws (most are real estate examples) involves diversion of the land to uses that would be considered desirable if undertaken in other contexts. Restrictions on the disposal of public land should be dismantled. Until they are, laws allowing some disposal are preferable to further restrictions on access.

Do Shortages Justify Government Ownership of Resources?

A frequent objection to the transfer of mineral lands to the private sector is that mineral reserves are scarce, dwindling, and imminently depletable. Private owners, critics sometimes argue, will inadequately provide for future generations that might demand those resources. But even in situations in which markets do not preserve future options against all contingencies, the presumption that governments could and would improve on private decisions is doubtful. The global financial community is more imaginative and flexible than any government.

The idea that natural resources are an exception to the above rule comes from the lingering heat generated by the fires of the Progressive Era. Economist Marion Clawson's celebrated survey of public land policy noted that national forests were established because of a "concern for timber supply."(40) Gifford Pinchot, founder and first chief of the Forest Service, forthrightly declared, "Conservation is the most democratic movement this country has known for a generation.
It holds that the people have not only the right, but the duty to control the use of natural resources."(41) The Forest Service likewise maintained in 1933 that "the depletion of America's forest resources may be largely attributed to the national conception of the rights of the private citizen and the policies set up to protect those rights even at the expense of public welfare. Laissez-faire private effort has seriously deteriorated or destroyed the basic resources of timber, forage, and land universally."(42)

So what do we make of the concern that private markets overexploit natural resources (and impose corresponding unnecessary environmental damage)? First, we must be clear about the charge. Is it that markets inefficiently exploit resources, or that markets may be efficient but are somehow socially derelict? Most of the political critics of land privatization confuse the two arguments and use them interchangeably. They are, however, two separate matters.

As far as the former argument is concerned, efficient use of land in general--and mining land in particular--may mean development under some circumstances and hoarding under others. For example, many economists have demonstrated that reducing the rate of interest to stimulate investment does not necessarily retard extraction of exhaustible resources.(43) The lower rate of interest makes both holding back (because the lost interest income is lower) and producing (because interest charges on capital are lower) less expensive. When prices greatly exceed costs, the hoarding effect dominates and exploitation slows. When prices are close to costs, however, the cost-lowering effect dominates and exploitation is accelerated.

The finite nature of minerals in the world adds nothing to the argument. Finitude may be irrelevant. Mineral industries seem to behave no differently from unconstrained industries, which usually die because of displacement by a superior product.
And some nonmineral industries exhibit patterns supposedly unique to exhaustible resources.(44) The usual concern of those who are skeptical about the market's ability to properly handle the extraction of mineral resources over time is exhaustion. To date, however, no exhaustible resource industry has vanished because of exhaustion of supply.(45) Yet many renewable resources have vanished from use because of their limitations. Exhaustible fossil fuels, for example, were adopted as substitutes for supposedly renewable alternatives such as firewood and whale oil. The "limited" supplies of fossil fuel were far larger and more adaptable than those of renewables. Established producers of nonrenewable minerals have yielded to newcomers long before extinction occurred. In energy, Middle Eastern oil has displaced oil production in the United States and high-cost coal supplies in Western Europe and Japan. Iron ore production in the United States and Europe was similarly replaced by production from Brazil and Australia. Australia did not begin to flourish as an iron ore producer until it removed ore export controls established to shelter domestic steel producers from depletion. The resulting incentives to development increased ore supplies despite their theoretically finite nature.(46)

Even if limits are germane, the overwhelming consensus of academic resource economists is that the market will spread the output efficiently over time.(47) Happily, however, this entire debate is perhaps moot because of the indisputable fact that mineral resources are becoming more abundant, not more scarce, with time and are probably not depletable at all.(48) In sum, the argument that government must directly manage mineral reserves to either mitigate future shortages or more fairly allocate those reserves in times of scarcity is spurious.
Government ownership of mineral reserves--either in the context of the 1872 Mining Law or in the context of the various reforms to that law currently under consideration--is unwarranted.

Prescriptions for Reform: A Second Opinion

Our discussion up until now has concentrated on examining the alleged shortcomings of the 1872 Mining Law. We have found those criticisms to be largely uninformed and ill considered. Since the diagnosis made by mining law critics is incorrect, it is not particularly surprising that their prescriptions are similarly wrong-headed. In this section we examine the reforms that should not be enacted in a misguided attempt to extract, on behalf of taxpayers, natural resource rents from the developers of mines.

The reforms introduced in the 105th Congress involve significant changes in how mining companies would gain access to minerals on public lands and how much they would pay for that access.(49) Yet any discussion of charges associated with the transfer and use of publicly owned assets must recognize that landowners have different ways of charging. Three basic legal systems are available:

- charges associated with grant of ownership,
- charges associated with ceding a lease, and
- conventional taxation.

In principle, all possible methods of charging could be employed under any of the legal systems. Charges associated with the grant of ownership or lease are the only efficient method of transferring wealth from buyers to sellers. All three legal systems could be limited to such charges at the time of transfer (and all could impose undesirable obligations for post-transfer charges). However, ownership grants are less likely to impose future levies. Curiously, none of the proposed reforms of the Mining Law of 1872 advocates the use of one-time charges at the time of transfer of lease or ownership. We, however, advocate that reform in the next section.
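The interchangeability of a one-time charge at transfer and a stream of periodic charges can be sketched with a toy present-value calculation. The annual charge and discount rate below are our own illustrative assumptions, not figures drawn from the statute or any reform bill:

```python
# Sketch: a one-time payment at the time of transfer can raise the same
# revenue, in present-value terms, as a stream of periodic charges.
# The $50/acre annual charge and 5 percent discount rate are invented
# for illustration only.

def present_value(annual_charge, rate, years):
    """Discounted value of a fixed annual charge paid for `years` years."""
    return sum(annual_charge / (1 + rate) ** t for t in range(1, years + 1))

annual_charge = 50.0   # hypothetical per-acre annual lease charge
rate = 0.05            # hypothetical discount rate

# A long-lived stream of lease charges converges to the perpetuity value a/r.
pv_200_years = present_value(annual_charge, rate, 200)
perpetuity = annual_charge / rate  # closed form for an infinite stream

print(f"PV of 200 annual charges: {pv_200_years:,.2f}")
print(f"Perpetuity value (a/r):   {perpetuity:,.2f}")
# The two figures differ by only a few cents: a competitive one-time bid
# of about $1,000/acre transfers the same wealth as the periodic charges,
# without leaving room for post-transfer levies that distort later
# production decisions.
```

The closed-form perpetuity value a/r is simply the limit of the discounted sum; the point of the sketch is that the seller is indifferent between the two payment forms, while the buyer's later decisions are distorted only under the periodic-charge regime.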
Do Not Worry about Past Giveaways

Our most important advice to those who would reform the mining law is not to enact any reforms that affect current claim holders or those who have already privatized their claims under the 1872 law.(50) The reform measures introduced in the 105th Congress by Senator Bumpers and Representatives Rahall and Miller would impose a 5 percent net smelter royalty on existing as well as new mining claims and a progressive profits tax (ranging from 2 to 5 percent) on private mines originally on federal land but patented under the provisions of the 1872 Mining Law.

A maxim in public finance is that an old tax or law is a good tax or law. Once markets recognize the existence of the burden created by a new tax or law, the market prices of land, labor, and capital change to reflect the change. Once that occurs, wealth effects do not occur again as long as the tax or law remains stable. New taxes or laws may and usually do create ongoing efficiency effects, but changes in wealth occur only once.

That central insight of public finance is important because it provides lessons about any policy reform. Just as the initial enactment of policies or taxes causes changes in the distribution of wealth, so do reforms of existing taxes or policies. Those wealth effects are usually the basis for organized support of and opposition to the policy changes. As a result, the efficiency gains from policy reform, for which no one is organized, get lost in the political controversy.(51)

Because the 1872 Mining Law is so old, it is extremely unlikely that any subsidies continue. In the ongoing secondary market in which people trade claims made under the law, all the advantages and disadvantages of those property rights are embedded in the prices that people pay for them, in the same way land prices contain all the advantages and disadvantages created by arbitrary property taxes.(52) The mischief created by the 1872 Mining Law involves efficiency, not equity.
The existence of a below-market price for mining claims (if in fact the current price is below the market price) sets up nonmarket processes by which the benefits are dissipated, much as are those associated with finding an apartment in New York City. The resources used in such nonmarket activities are pure waste from society's view. Unlike the distortions created by the property tax on new investment, however, the "free access" claim system under the 1872 Mining Law has no additional efficiency effects on decisions about the timing or level of extraction from a claim. Moreover, the era of massive claiming is long past. The main wastes have already occurred.

Any changes to the 1872 law should affect only future and not current mining claims. Because the law is so old, all actors in mining markets have operated for some time with expectations based on the property rights regime created by the 1872 law. To rearrange those expectations now for the 300,000 current mining claim owners would cause arbitrary wealth transfers that would activate political opposition and doom any possibility of reform and the efficiency gains that might go with it.

Public Ownership with Leasing

One possible reform would alter the policy governing metal mining on public lands to be like the policy that governs offshore oil and gas drilling: public ownership with a leasing system.(53) In theory, public ownership with an auction leasing system is economically similar to transfer of ownership to the private sector at auction.(54) If the market value of the land remains constant, a series of periodic leases will have the same (risk-adjusted) present value as a one-time sale bid. In reality, however, public ownership is a menace to the purported goal of ensuring that lessees contribute to the Treasury. First, Congress often undertakes public works to assist those using the public lands.(55) Second, government usually cannot resist the imposition of post-transfer charges.
Such charges reduce the value of the output from the land and, thus, reduce contributions to the Treasury.(56) Third, governments tend to deny leaseholders the flexibility inherent in private property. Land leased under one law can be used only for the use specified in that law rather than the use that would be most profitable. Currently, the federal government offers grazing, mineral extraction, and similar single-use rights on the land it owns.(57)

The coal-leasing fiasco of the early 1980s graphically illustrates the difficulties with such a leasing arrangement. Coal leasing underwent a long moratorium beginning in 1971 because of misplaced Interior Department concerns that the need for the coal was unclear. That occurred just as western coal output started substantial growth. Because of various regulatory hurdles, resumption was delayed until the start of the Reagan administration in 1981. A 1982 lease sale was challenged because of concerns over alleged information leaks that were thought to have corrupted the auction. Investigations by the General Accounting Office and the staff of a congressional committee failed to verify the leaks.(58) Instead, the methodology for determining minimum acceptable bids was accused of having a severe downward bias.(59)

The first step Congress took was to demand an investigation of the administration of existing laws. The commission charged with the study had no choice but to suggest that the Department of the Interior develop procedures that would better assure Congress that the program was run efficiently. DOI was forced to spend two years constructing an overly elaborate bid evaluation process. By then, DOI was not eager to resume leasing, and no one pressured it to do so.(60) Among the many things that got lost in the congressional inquiry was evidence that the government itself imposed the only barrier to competition in coal reserve bidding.
Bidding in large-scale government auctions is precisely the kind of business activity that is highly likely to attract vigorous competition. The visibility of such auctions means that many of those who aspire to profit from neglected opportunities will bid should insiders fail to pay the maximum possible. Such speculators once had participated in coal leasing but allegedly had become discouraged. The most critical disincentive to bid was the "due diligence provision," which limited the time that a coal lease could be held inactive and, thus, made holding the lease less attractive. In the absence of such disincentives, speculators would return (if they ever really left) if established mining companies truly got mining rights at bargain prices.(61) In the end, Congress micromanaged the program to such an extent that it was effectively shut down.(62)

The coal-leasing experience illustrates the formidable practical barriers to implementing a policy that satisfies all citizens that fair market value was paid. Opponents of leasing auctions are often successful in requiring the search for nonexistent data. Federal valuation guidelines are manipulated to require unattainable certainty. The political complications involved in public leasing arrangements are reflected in federal guidelines for valuing property acquired or sold. There are three possible accounting methods:

- comparable worth (obtaining market price data on similar properties),
- present value (generating estimates of the profitability of using the property), and
- reproduction cost (inapplicable, of course, to a natural resource).

The guidelines correctly contend that comparable worth is the preferable method since it relies on market data that epitomize informed judgment of values (i.e., the classic case for reliance on market prices is tacitly adopted). Present value is considered inferior because it relies on governmental second-guessing of market valuation.
Neither method, however, can work well for public land unless sales are frequent. Not enough private land is traded to establish comparable worth. Lost in the coal-leasing fiasco, for example, was the fact that the Bureau of Land Management had established comparable worth by establishing rules for adjusting the only sale value report it could obtain. Critics of the BLM generated extensive (and inconsistent) criticisms of the adjustment rules but ignored the more critical point that a single market transaction is no basis for estimation.(63) As long as members of Congress insist on independent government estimates of market value, such indefensible practices will continue. Thus, not only is the case against accepting market values invalid, but the evidence shows that the government cannot produce satisfactory counterestimates. The sensible conclusion is that independent government estimates of value are an exercise in futility that should be abandoned.(64)

Moreover, if the policy of free exploration access under the mining law is ended, government-funded exploration is a possible but unattractive alternative. The experience of coal leasing again should give us pause. Coal leasing was once governed by a policy similar to that of the present mining law. Leases were granted noncompetitively to those who first discovered coal. The law that ended noncompetitive leases authorized an exploration program to replace the incentive to be the first claimant. The program, however, was never funded.

Severe problems also arise in devising appropriate incentives for private exploration. That is illustrated by changes made in federal onshore oil and gas leasing. The right to secure uncontested leases depended on the absence of evidence that oil or gas reserves were "known" to exist beneath the tract of land in question. Unfortunately, the BLM proved incapable of making that determination.
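The point noted above, that a single market transaction is no basis for estimation, can be illustrated with a toy calculation. The per-acre sale prices below are invented for illustration:

```python
# Sketch of why one comparable sale is no basis for valuation: a single
# observation yields a point estimate with no measure of dispersion at
# all, while several comparables at least bound the uncertainty.
# All sale prices below are invented for illustration.
import statistics

one_sale = [410.0]                      # $/acre, a lone observed sale
many_sales = [410.0, 250.0, 520.0, 180.0, 330.0, 475.0, 290.0, 360.0]

print("point estimate from one sale:", statistics.mean(one_sale))
try:
    statistics.stdev(one_sale)          # sample std dev needs n >= 2
except statistics.StatisticsError as err:
    print("no dispersion estimate possible:", err)

est = statistics.mean(many_sales)
spread = statistics.stdev(many_sales)
print(f"estimate from eight sales: {est:.0f} +/- {spread:.0f} per acre")
```

With a single sale, any "adjustment rules" applied to it are unfalsifiable, which is precisely the objection lodged against the BLM's practice.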
The Case against Royalties

If land rents exist, the most efficient way to identify and transfer them is to auction the land and transfer ownership in return for a one-time payment. Private land transactions are conducted in that manner every day. For reasons that are inexplicable to us, legislators and bureaucrats believe that the federal government will be short-changed if land auctions are used to transfer mining lands to the private sector. Instead, they prefer to require payments to the government set as a fixed percentage of sales.(65)

Royalties are economically counterproductive because they vary with the production and sales decisions of the firm. Funds that consumers were willing to give producers are diverted to whoever imposes the tax. That revenue transfer discourages production and consumption and violates the central economic principle that every expansion of output that costs less than its value to consumers should occur.(66)

Royalties are an indirect attempt by the federal government to use a populist distrust of accepting bids for privatization to capture profits. Ironically, the regular tax system probably is at least as effective in capturing profits as are use charges by federal land agencies.(67) A special tax system could be and often is devised specifically to collect profits. The belief that special monitoring agencies are better collection agencies than are regular tax collection organizations is as dubious as often-made proposals that land managers act to complement the actions of specialized environmental agencies in controlling environmental impacts of federal land use.

A further disadvantage of royalties of all types is increased administrative cost. First, any attempt by public officials to evaluate the value of land (for bonus bid evaluation) becomes more difficult. One must calculate a residual (rents minus royalties) of a residual (incomes minus cost).
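The output distortion at the heart of the royalty critique can be illustrated with a toy model. Only the 5 percent royalty rate echoes the reform bills discussed earlier; the price and the linear marginal-cost schedule are invented assumptions:

```python
# Sketch of the output distortion from a revenue royalty, using an
# invented linear marginal-cost schedule MC(q) = 2q and a price of $100.
# Only the 5 percent royalty rate echoes the reform bills; everything
# else is an illustrative assumption.

price = 100.0
royalty = 0.05

def marginal_cost(q):
    return 2.0 * q

# Without a royalty the firm produces until MC(q) = price.
q_free = price / 2.0                       # 50 units

# With a revenue royalty it keeps price*(1 - royalty) per unit,
# so it stops earlier, where MC(q) = price*(1 - royalty).
q_royalty = price * (1 - royalty) / 2.0    # 47.5 units

def dwl(a, b, n=100_000):
    """Deadweight loss: value in excess of cost on the forgone units,
    i.e., the integral of (price - MC(q)) from a to b (midpoint rule)."""
    step = (b - a) / n
    return sum((price - marginal_cost(a + (i + 0.5) * step)) * step
               for i in range(n))

print(f"output without royalty: {q_free}")
print(f"output with 5% royalty: {q_royalty}")
print(f"deadweight loss: ${dwl(q_royalty, q_free):.2f}")
```

The units between 47.5 and 50 cost less to produce than consumers value them, yet the royalty prevents their production; that lost surplus is the "violation" of the output-expansion principle stated above, and it grows roughly with the square of the royalty rate.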
Moreover, requiring more payment means more compliance efforts by government and land users.(68) The economic theories that support competitive bidding imply that monitoring is unnecessary because competition ensures maximum possible payments. However, policymakers suspect that the conditions needed to produce competition do not prevail. The imposition of output-related charges is then justified by claiming that the defects of tying the payments to the activity are outweighed by the income gains. Such blind faith ignores all the drawbacks we have noted. Ownership (even with charges) probably produces losses to the federal government and thus its taxpayers.(69)

Sharing the Wealth: What to Do with Mining Revenues

The populist criticism of "giveaways" created by the Mining Law of 1872 ignores an issue critical to Congress: how recaptured mining income should be distributed among the people. The rhetoric seems to imply that every citizen will share in the revenue generated by royalties and fair-market sales. The rhetoric, however, is misleading, because mining fees are presently distributed primarily to residents of sparsely populated western states. It is not even clear whether those public beneficiaries of present mining payments are a larger or more needy group than the mining company stockholders who are surrendering the wealth.

That phenomenon stems from the fact that Congress allocates half of gross mining receipts to the state in which the activity occurs. Because the federal government assumes responsibility for all the mining program's administrative costs, host states often receive more than the federal government nets before the transfer. Thus, whenever administrative costs exceed the half of gross receipts kept by the federal government, the federal government loses money.(70) That regime could hardly be called desirable.

That practice, unfortunately, is continued in the proposed Title V of the Mining Law Reform Act of 1997 (S.
1102), supported by the National Mining Association. The measure would establish a 5 percent royalty on existing mining claims, new claims, and mining land privatized after the enactment of the reform. The proceeds from the royalty would be deposited in a fund under the control of the state in which the minerals were extracted. The fund would be used for reclamation of abandoned mines.

Another questionable practice is the "earmarking" of the gross federal share of public land revenues for public works in the West. While undesirable, that may not actually result in additional expenditures. The targeting may only be a legal fiction to increase the acceptability of making expenditures that would have been made anyway. If that is not the case, however, such incentives to western projects are as undesirable as every other device to promote spending. Given the evidence of inefficiency and narrow benefits of those projects, evidence that public land revenue stimulates such projects would strengthen the case against wealth transfers. Environmentalists who attack the mining law conveniently forget that rent collection may promote environmentally undesirable actions.

The Path Less Traveled: Robust Privatization

Governments in the United States do not own supermarkets, gas stations, or car manufacturers, and most citizens would object if governments did own such assets. Governments do own land, however, and not only do most people not object, many favor it. They do so because they believe that the federal government owns particularly precious land that cannot be trusted to private ownership.(71) That belief implies that land markets and the extractive activities that take place on land, like mining, do not operate well unless the land is publicly owned and subject to scrutiny very different from that received by supermarkets.
Land markets may not be perfect, but neither are most other markets, and we would never accept public ownership as a solution to whatever market failures existed in the manufacture of automobiles. We also should not accept public ownership in land markets.

The Mining Law of 1872 reflects the disposal orientation of the late 19th century, the belief that the government should not own land. We agree with such an orientation and find the 1872 Mining Law one of the better federal resource statutes on the books. It is not, however, ideal. Its first flaw is that it presumes that, if minerals are found on otherwise nonrestricted federal land, mining is preferable to alternative development options. That single-use concept reflected in the 1872 law--under which federal land can be privatized for mining but not for ranching--is unwise. It undermines the ability of those who value vacant land to compete against other possible users in the market. While alternative uses of land privatized under the mining law are not unheard of (indeed, they are the source of much concern, as we noted earlier), those who wish to use "mining" land for other purposes are confronted with unnecessarily burdensome transactions costs that impede their efforts.

The second flaw in the 1872 law is the fixed fee charged those who wish to lay claim to mining land. As noted earlier, the $2.50 per acre charge is probably only marginally below the market price (at least, below the market price if the only bidders are mining interests), but still, market prices are preferable to political prices. Yet that flaw is relatively minor. First, it is not altogether obvious that maximizing federal revenues should be the paramount concern of those sympathetic to limited government. Second, the efficiency gains stemming from privatization more than offset any theoretical revenue shortfall caused by suboptimal sale prices.
The ideal means of privatizing public assets is probably the process that generates the fewest transactions costs. Our response to current policies is to call for adoption of competitive bidding for federal land rights with payments only at the time of transfer. Any party with an interest in ownership would be welcome to purchase land at auction and then use it in any way the new owner desired.(72) Any failure of that process to recover the full value of the land is better corrected by the general U.S. tax system than by a complicated lease and royalty scheme (which, as we noted above, clearly promotes market inefficiency, political gamesmanship, and political unmanageability). Ideally, future mining claims should be allocated by auction, but that is secondary to ensuring that existing claims remain unaltered and new claims are free from royalties and unrealistic purchase prices. The new auction system would eliminate the need for potential claimants to engage in wasteful activities that give them an "edge" in the game to get "free" mining claims, but no existing claims would be altered, to avoid creating wealth rearrangements that would doom the reform.

In the case of the transfer of public land to private ownership, the auction prices that undeveloped public land would command in a competitive bidding process for the right of private ownership would be an efficient tax like a head tax or pure land tax.(73) The maximum anyone would bid in such an auction is the (present discounted) value of the expected rents. Vigorous competition among bidders would force payments to be the maximum.(74)

Of course, some environmentalists will object to our proposed reforms because of their misguided preference for public ownership of land or animus against one-time transfer payments for property. But environmentalists should be reminded that under our proposed regime they would gain the right to bid against mining interests for land.
There is every reason to believe that, if potential mining properties are environmentally desirable, preservationist organizations could muster the few dollars per acre necessary to win the bidding.(75) While our proposed reform would open up all nonrestricted public lands for such bidding (and, thus, accelerate the privatization of public land), preservationist groups would have a greater opportunity to secure rights to that land.

Some mining interests also might look unfavorably on our proposal. They might be concerned that, if others were allowed to bid on property harboring mineral reserves, they would be hard-pressed to make a profit on federal land. And maybe they should worry. Yet our concern, as policy analysts, is that resources be devoted to their highest valued uses. It is not properly our concern how the domestic mining industry might fare under competitive pressures.

A related objection might be that, under our open auction proposal, preservationist groups would have an unfair advantage over mining businesses. That is because wilderness areas, national parks, and other "restricted" lands would not be open for bidding; only lands that are currently available for commercial uses would be. Accordingly, preservationist organizations would have more resources at their disposal to outbid rival uses than they would if mining groups could bid against preservationist groups for environmentally sensitive land. However, that argument is a variant of the specious arguments used to criticize the unfair advantages mining companies presently possess. Actually, another virtue of a market economy is its ability to finance attractive investments. The mining industry surely can secure the resources needed to buy the properties whose best use is mining.

Ideally, most public land would be privatized via some sort of auction process (because all land in principle should be put to its most valued use). Yet such an alternative is scarcely on the political horizon.
The remote possibility that mining interests might be disadvantaged under our modified auction is a poor reason to abandon the fundamental economic principle that resources should be allocated to those who value them most highly, no matter how distorted the economy might be by the public ownership of resources. Finally, mining companies (as well as others) often cling to the argument that land ought to be available for "multiple uses" and that our auction proposal would deliver land to owners who might not choose to allow multiple uses of their resources. Any such charge reflects another misunderstanding of market economics. Profit motives will ensure that all profitable uses will be allowed. Advocates of free-market environmentalism often note how environmental groups allow oil and gas drilling on privately held "protected" lands. Yet the number of uses of land is irrelevant as a public policy criterion. If all the uses are individually consumed goods (private goods), no governmental intervention is justified. Collective consumption issues are not fundamentally altered if given lands have multiple possible uses at least some of which are collective. Sorting out the appropriate multiple collective uses would be virtually impossible absent near omniscience. Unfortunately, opening up all nonrestricted federal lands to an open, competitive auction might prove too radical an alternative for many legislators and lobbyists regardless of the proposal's merits. The bias against privatization of western lands will likely prove difficult to change in the short term. Accordingly, a second-best alternative might be to allow nonmining interests to bid against mining companies that wish to take title to federal land under the current regime. Privatized land, however, would not have any of the current restrictions regarding subsequent use. The present $2.50 per acre charge would be the initial offer price. 
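The economic logic behind the auction proposals above, namely that vigorous competition drives the winning bid toward the present discounted value of a parcel's expected rents, can be sketched with a toy calculation. Every figure below (the rent stream, the discount rate, the parcel) is invented for illustration and is not drawn from the paper's data:

```python
# Hypothetical illustration of the claim that the maximum rational bid for a
# parcel of land equals the present discounted value of its expected rents.
# All numbers are invented for the example.

def max_bid(expected_rents, rate):
    """Present value of a stream of expected annual rents, discounted at `rate`.

    Rent in year t is discounted by (1 + rate)**t, with payments assumed to
    arrive at the end of each year.
    """
    return sum(rent / (1 + rate) ** t
               for t, rent in enumerate(expected_rents, start=1))

# A parcel expected to yield $100 a year for 10 years, discounted at 8 percent:
bid_ceiling = max_bid([100] * 10, 0.08)
print(round(bid_ceiling, 2))  # -> 671.01
```

Any bid above this ceiling loses money in expectation; competition among bidders pushes the winning bid up to it, which is why the auction price functions like a one-time, nondistortionary levy on the land's rent.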
The theoretic case for market-oriented reforms of the 1872 Mining Act, however, must be conditioned by concern that if a change is made, it might well make the system worse. For instance, it is unclear whether a competitive bidding system would be free of the unrealism that marred federal coal leasing. Thus, we cannot be certain that a shift to competitive bidding BLM style would be a net improvement. We might offset the gains from lesser rent seeking by slowing down land sales. For that reason, it is probably best to leave the 1872 Mining Law alone and press for public land privatization outside the context of this debate. If a consensus is ever reached that the federal government should divest itself of its vast western land holdings, there will be more than enough time to then repeal the 1872 Mining Law as an inferior and obsolete tool of land disposal. Any reform aimed specifically at the law, no matter how well intentioned or theoretically sound, would probably be corrupted in its execution and prove to be a cure worse than the disease. Our exploration of the issues surrounding the 1872 Mining Act yields two conclusions. First, the media and many mineral analysts poorly understand the distribution of wealth under the current system. Second, in their moral quest to prevent giveaways and generate revenue for the federal government, reformers have proposed policies that will make the extraction of minerals less efficient and may even increase the burdens on taxpayers. The distribution of mining profits is poorly understood by those who criticize the 1872 Mining Law because they do not recognize the flaw in the ex post examination of successful assets. Looking backward at the price history of current successful assets ignores all the assets that failed. All those failures are what make investments risky. Asset prices are most analogous to lotteries. 
We would not claim that the winner of a lottery paid too little for the winnings because we recognize that most people who buy lottery tickets lose. The same is true for mining claims. The distribution of mining wealth is also poorly understood by the media because many policy observers do not understand the economics of "giveaways." We do not believe that much wealth has been given away by the Mining Law of 1872, but regardless of the amount, the existence of any amount greater than zero set into motion nonmarket activities that dissipated the benefits of the underpriced giveaway. Those so-called rent-seeking activities completely offset the effect of the giveaway. Giveaways are bad for the economy not because they give anything away (they do not) but because they encourage agents to waste resources to secure the underpriced commodity. Moreover, whatever little is left will be impossible to recapture because it was capitalized into the payments made for mining rights that have been resold. Whatever the magnitude of the giveaway, it has already occurred. Reforms of the 1872 Mining Law should affect only new claims and have as their goal the prevention of the necessity for rent-seeking activity. The imposition of retroactive charges on existing claims will create political resistance to reforms that eliminate rent-seeking activity and, thus, enhance efficiency. Overall, the 1872 Mining Law serves America relatively well. It could be improved by broadening its reach to all federal land and allowing any interested party to bid for public resources, but such a reform--if attempted in a more limited manner aimed only at potential mining lands--runs the risk of being corrupted through the political process and overburdened by special-interest pleadings. Accordingly, the mining law's relatively few flaws should be remedied in the context of overall public land privatization.
If that path remains closed because of political considerations, then the 1872 Mining Law probably should be left alone.

Appendix: Sensitivity Analysis

Obviously, given the lack of data on mining claims and their use, any attempt to measure the value of those claims must be highly speculative. We can, however, use the simplest standard methods of financial analysis to show how sensitive the results are to various assumptions. The key point is that the government is ceding the right to secure incomes that start some time after the grant is made and last over the life of the mine. What the government "loses" at the time of sale is the present value of those profits. So, at minimum, we must adjust the income flows to what they were worth at the time of the claim. However, because mines were claimed at different times, the present values of income from different mines cannot simply be added to make them comparable. They should be discounted back to 1872. Unfortunately, we cannot do that because we do not know when various mines were claimed. Instead, we provide estimates for various times of initiation and cessation of mine operations relative to 1872. A frequent simplification in financial analysis is to consider payments as consisting of a constant annual income over a fixed time period. We therefore tabulated the value of $1 per year (of mine income) over time periods ranging from 5 to 125 years (thus encompassing a range from a very short mine life to one that lasted through the 125-year history of the 1872 Mining Act). Even the 125-year life assumption in some ways is too conservative. It measures the value in 1872 of mining claims if they were all put in operation immediately or with only the lags considered here. The cost of the law is actually the present value at enactment in 1872 of all the claims ever made and developed.
Given that mining did not start immediately or even with a short lag, the values we calculated that assumed a 125-year continuous life are significantly higher than the actual 1872 values of claims. Because the critics of the 1872 Mining Law use undiscounted incomes in their claims about the size of the giveaway, we calculate the ratio of the present value of the income to the undiscounted gross value of the income over the same time periods. Those ratios, or scaling factors, were calculated for numerous interest-rate (5 to 30 percent) and mine-life (5 to 125 years) scenarios and then further modified to take into account the time needed to develop a mine. That was done for 5-, 10-, and 20-year waits, and the adjustment is substantial. For most mine-life and interest-rate scenarios, a 20-year wait severely reduces the ratios. As the life of a mine increases, undiscounted receipts increase and, at a given interest rate, present values increase as well. Higher interest rates lower the present value of a given stream of receipts. The combination of cases we considered is so broad that the lowest interest-rate and life combination considered ($1 per year for 5 years at 5 percent interest) produces a higher present value ($4.33) than the highest interest-rate and life combination ($1 per year for 125 years at 30 percent interest, or $3.33). In contrast, the scaling factor declines with both interest rate and mine life. At a given mine life, higher interest rates reduce the net present value for any given level of gross receipts. If the life of a mine rises at a given interest rate, the present value does not grow as rapidly as the undiscounted incomes. Thus, in the 5-year, 5 percent case, receipts have a present value of 86.6 percent of their undiscounted value (this is the most optimistic scaling factor used in the text); it falls to 3 percent in the 125-year, 30 percent case.
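The tabulations described above rest on standard annuity arithmetic and can be reproduced in a few lines. The sketch below is our own reconstruction of the computation the appendix describes, not the authors' original worksheet, and the function names are ours:

```python
# Reconstruction of the appendix arithmetic: the present value of $1 per year
# of mine income over a fixed life, the "scaling factor" (present value
# divided by undiscounted receipts), and the extra discount for a development
# lag between the claim and the start of income.

def annuity_pv(rate, years):
    """Present value of $1 received at the end of each year for `years` years."""
    return (1 - (1 + rate) ** -years) / rate

def scaling_factor(rate, years, lag=0):
    """Ratio of present value to undiscounted receipts, with an optional lag
    (in years) between securing the claim and the first year of income."""
    return annuity_pv(rate, years) / years * (1 + rate) ** -lag

# The extreme cases quoted in the text:
print(round(annuity_pv(0.05, 5), 2))    # -> 4.33 ($1/year for 5 years at 5 percent)
print(round(annuity_pv(0.30, 125), 2))  # -> 3.33 ($1/year for 125 years at 30 percent)
print(round(scaling_factor(0.05, 5) * 100, 1))    # -> 86.6 (percent)
print(round(scaling_factor(0.30, 125) * 100, 1))  # -> 2.7 (percent; the "3 percent" case)

# A 20-year development lag cuts the most optimistic factor to about a third:
print(round(scaling_factor(0.05, 5, lag=20) * 100, 1))  # -> 32.6 (percent)
```

The lag adjustment is simply the product of the no-lag scaling factor and a single discount factor for the waiting period, which is why the reduction is independent of mine life.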
Finally, increasing the length of the lag between securing a right to mine and the actual start of income from the mine reduces present values and scaling factors. The reductions are independent of the length of mine life. For a 5-year lag, the present values and scaling factors are 78 percent of their values without the lag at 5 percent interest and 27 percent of their values without the lag at 30 percent interest. With a 10-year lag, the ratio changes range from 61 to 7 percent; with 20 years, from 37 to .5 percent. (The combined scaling factor is then the product of the factor for immediate start of operations and the factor that accounts for delay.) Thus, the 5-year, 5 percent scaling factor drops to 67.5 percent with a 5-year wait, to 53 percent with a 10-year wait, and to 33 percent with a 20-year wait. For the 125-year, 30 percent case, the 5-, 10-, and 20-year figures are .7 percent, .2 percent, and .014 percent, respectively. The .014 percent scaling factor produces the $3 million low end of the range of possible values. Clearly, discounting income flows to their properly calculated present values severely attenuates the value to the government of a land grant.

Notes

1. Mining Law of 1866, 14 Stat. 251 (1866), and General Mining Law of 1872, 17 Stat. 91 (1872) (codified at 30 U.S.C. §§ 21-54). 2. A vehement critic of the 1872 Mining Law, John Leshy, begins his attack with complaints about how much administrative adjudication is required, "perhaps more than address[es] any other substantive federal statute. . . ." John Leshy, The Mining Law (Washington: Resources for the Future, 1987), p. 20. What he fails to see is that the problem stems from the limitation of land grants to particular purposes and claims based on priority. Leshy, a lawyer by training, is currently solicitor for the Department of the Interior.
His book does not provide a satisfactory economic analysis of the 1872 Mining Law but does have the virtue of examining only that law rather than all public land issues. It is an invaluable source of facts and (bad) arguments. Two economists, Marion Clawson, once head of the Bureau of Land Management and long on the staff of Resources for the Future, and Robert H. Nelson, long a Department of the Interior economist and now at the University of Maryland, present sounder but more wide-ranging discussions. See Marion Clawson, The Public Lands Revisited (Washington: Resources for the Future, 1983); and Robert H. Nelson, Public Lands and Private Rights: The Failure of Scientific Management (Lanham, Md.: Rowman & Littlefield, 1995). All three books were helpful in writing this paper. 3. Albert Shapiro and Chris Soares, "Cut and Invest to Grow," Progressive Policy Institute Policy Report no. 26, Washington, July 1997, p. 25; and Friends of the Earth, "Green Scissors '97," Washington, 1997. 4. Leshy, p. 295. 5. General Accounting Office, Federal Land Management: The Mining Law of 1872 Needs Revision (Washington: Government Printing Office, March 1989), GAO/RCED-89-72, pp. 22-44. 6. Mineral Leasing Act of 1920, 41 Stat. 437 (1920). 7. Amendments to the Surface Resources Act of 1947, 69 Stat. 367 (1955) (codified at 30 U.S.C. §§ 601-15). 8. Leshy, p. 69. 9. The Clinton administration also proposed to achieve environmental reforms through the federal rulemaking process and royalty reforms through the 1998 budget process. See Joby Warrick, "Taking Another Approach to 'Antiquated' Mine Law," Washington Post, February 28, 1997, p. A19. The royalty proposals were not even considered by Congress in the budget process, but the environmental rule reform had proceeded through one public input phase by September 1997. The 1976 Federal Land Policy and Management Act directed the secretary of the interior to "prevent unnecessary or undue degradation of the lands."
The regulations implementing that statutory directive were codified at 43 C.F.R. 3809 (1981). On January 6, 1997, Secretary of the Interior Bruce Babbitt proposed that the "3809" regulations be modified to require the use of "best available technology" to prevent environmental degradation. In addition, he requested that claims of less than five acres, currently exempt from a requirement to file a plan of operations with the department in advance of any mining activity, be governed by the same regulations as mining claims larger than five acres. The rulemaking approval process is expected to take 1 1/2 to 2 years. See Bureau of Land Management, Press release, February 25, 1997, at http://www.blm.gov/nhp/new/press/pr970225.html. 10. The Hardrock Mining Royalty Act of 1997 (S. 327 and H.R. 778) and the Abandoned Hardrock Mines Reclamation Act of 1997 (S. 326 and H.R. 780). The former would terminate the right to patent (privatize) mining claims for which a patent application was not made prior to September 30, 1994 (sec. 4). 11. See generally Richard Epstein, Simple Rules for a Complex World (Cambridge, Mass.: Harvard University Press, 1995). 12. Nelson, pp. 7-10. 13. General Accounting Office, pp. 24, 25. The GAO example at a minimum illustrates the point made below that prospects are often resold and thus government profit recapture is often impossible. Moreover, the mineral involved, oil shale, is one the promise of which consistently fails to be realized. Thus, the profit realized by the claimant was due, not to economic success of the claim, but to a passing interest in oil shale. 14. It is important to emphasize that even examining the full universe of patents would not afford us an honest analysis. Most claims under the 1872 Mining Act were never patented. Presumably, the claims that were patented were those that "panned out," so to speak.
Examining only patented claims for an analysis of the appropriateness of the fee again misleads by relying on an unrepresentative subset of all claims. 15. For a fine exposition of this view in the context of financial markets, see Burton G. Malkiel, A Random Walk Down Wall Street (New York: W. W. Norton, 1996). 16. For the estimate that land rents are 6 percent of national income, see William A. Fischel, The Economics of Zoning Laws (Baltimore: Johns Hopkins University Press, 1985), p. 13. 17. An op-ed in the New York Times discussed the transfer of 1,949 acres in Elko, Nevada, to the American Barrick Resources Company for $9,765 and claimed that the gold that would be mined there was worth $10 billion. David James Duncan, "How Much Gold Is a River Worth?" New York Times, April 12, 1997, p. 23. NBC Nightly News has discussed the 1872 Mining Law twice in its "Fleecing of America" segments. The most recent (April 9, 1997) repeated the claim about the giveaway but quoted a figure of $270 billion. 18. Thomas J. Hilliard with James S. Lyon and Beverly A. Reece, Golden Patents, Empty Pockets: A 19th Century Law Gives Miners Billions, the Public Pennies (Washington: Mineral Policy Center, 1994). Cited hereafter as Mineral Policy Center. 19. One clear example is the assertion repeated several times in the report that a gold deposit in Nevada is worth $10 billion. The report states that the basis of the calculation is multiplication of reserves by the selling price of refined gold. Ibid., p. 30. A modest effort confirms that this is the methodology used throughout. A critical table purports to present estimates of the giveaway but simply presents (by state) figures for the value of the output of several metals. The text indicates the assumed output levels. Thus, the assumed unit values can be computed by simple arithmetic. Comparisons with quotations in the Wall Street Journal and the New York Times suggest that once again the gross value of output is being presented.
The only adjustment made was to take 49 percent of the total as an estimate of the portion of western mining that was on public land. Clearly, the implicit unit values are close to prevailing market prices. 20. Mineral Policy Center, p. 10. 21. Ibid., pp. 12, 30. 22. Ibid., p. 12, lists 30 prospective mines. The estimate of "taxpayer loss" (again, actually the gross value of production) is only $34 billion. Our methodology discussed below suggests that the mines will generate only $3 billion in royalties and the present worth of those royalties is at most $2 billion and might be well below $100,000. The mines listed by the MPC include 15 gold mines, 4 gold/silver mines, 1 platinum/palladium mine, 2 copper mines, 2 silver/copper mines, 1 bentonite mine, 1 beryllium mine, and 1 molybdenum mine. The gold mines account for 48 percent of the "taxpayer loss," the platinum mine almost 10 percent, and the gold/silver mines about 1.5 percent so that almost 60 percent of the value clearly is in precious metals, whose mining typically is done with a narrow profit margin. That excludes the silver share of the 16 percent of value from the copper/silver mines. The biggest value for other mines is almost $3 billion, or 9 percent, for the molybdenum mine. In no case does it appear that fabulous net profits will arise. 23. A further problem arises from the MPC's consideration of the cumulative value of minerals taken from public lands. Whatever the taxpayers' losses, those from past claims cannot be recovered. At most, history indicates that the problem has existed for a long time. 24. Mineral Policy Center, p. 3. Again the report handles this point stealthily. It never directly advocates a royalty at any rate but still creates the impression of support for an 8 percent royalty. The page cited only reports the yield of an 8 percent royalty on a mine the MPC considers a good example; on p. 27, the desirability of a royalty is noted. On p.
33, a bill advocating an 8 percent royalty is summarized; comments elsewhere in the report suggest that this law meets the center's goals. How it was determined that the 8 percent royalty proposal is optimum is another mystery. Presumably, it arose from crude recognition that higher rates, such as the 12.5 percent on federal coal leases, would kill metal mining. What matters most here is the implication that the MPC knows that the profits generated on land developed under the aegis of the 1872 Mining Law are no more than 8 percent of gross production. 25. According to the January 31, 1997, issue of Value Line, for the years 1992 through 1996 profits as a percentage of revenue for the gold, silver, aluminum, and copper industries were 3 percent, 2.6 percent, 4.3 percent, 7.9 percent, and 5.9 percent, respectively. Of course, people who claim unprofitability would argue that those gains were offset by losses too small to appear in the data. That, too, seems questionable. 26. For such a calculation, we need to know when each claim is made and when its payoff occurs and bring the values back to 1872. However, the data necessary for detailed computation do not exist. 27. Standard assumptions in published studies of investment values assume that projects must earn "around" 10 percent (defined as somewhere between 8 and 12 percent), last for 10 to 20 years, and have delays of 3 to 5 years. Given such assumptions, scaling factors would range from 21 to 53 percent. 28. Leshy, pp. 81-82, 313. Robert Cronin, a natural resources management analyst with the General Accounting Office, says that the number of mining claims has dropped dramatically since the mid-1980s because of the $100 annual filing fee. In 1988, 1.2 million claims were active. By 1995 the number had dropped to 330,000. Personal conversation, March 6, 1997. 29. The Mineral Policy Center commits three additional crimes against reasonable analysis. 
First, the bloated value assigned the pending leases is described (p. 11) as more than the sales of all but nine of the Fortune 500 companies. Once again the center relies on exaggeration by aggregation. On one side are the multiple-year incomes of a group of companies. On the other are the single-year sales of Fortune 500 companies. Again, comparative sales are not the right measure of ability to earn profits, and multiyear, multicompany comparisons to one-year figures on one company have no meaning whatever measure is used. Second, jingoism underlies the analysis when the authors note that "nine [mines] are foreign-owned" (ibid.). Third, the only recognition of the undesirable effects of royalty payments (p. 28) is a quotation from the president of a mining corporation that argues about the effects of the royalty on competition with foreign companies. The center, however, uses the quote only as proof that mining companies are scared. 30. Mineral Policy Center, pp. 11-12. 31. "The prospect of securing mineral rights and even fee title at bargain prices has proved to be a considerable lure. It has justified hiring imaginative lawyers to obtain under the Mining Law what can no longer be obtained under the homestead or other disposal laws, nor obtained so easily or cheaply under other federal laws." Leshy, p. 91. 32. For an excellent discussion of the phenomenon, see Robert Tollison, "Rent Seeking," in Perspectives on Public Choice: A Handbook, ed. Dennis Mueller (New York: Cambridge University Press, 1997), pp. 506-25. 33. The argument arises, among other places, in the writings on regulation by those associated with the University of Chicago approach to economics, the public-choice approach of James Buchanan and his associates, and in work by various international trade economists. One of the latter, Anne Krueger, produced a widely cited article that gave the term "rent seeking" wide attention. Anne O.
Krueger, "The Political Economy of the Rent-Seeking Society," American Economic Review 64 (June 1974): 291-303. 34. See Nelson, p. 268, for the use of this analogy in the context of mining. 35. The arguments in this section apply to all aspects of public land management, not just the extraction of hardrock minerals. 36. Richard Gordon presented these arguments at a conference on coal leasing in 1979. They were subsequently published in Richard Gordon, Federal Coal Leasing Policy: Competition in the Energy Industries (Washington: American Enterprise Institute, 1981), pp. 11-12. For further discussion of the argument that no market failure can be offset by limiting sales or leasing, see Richard L. Gordon, Regulation and Economic Analysis: A Critique over Two Centuries (Boston: Kluwer, 1994), pp. 160-65. 37. All three of the most important relevant studies make this point. Economist Marion Clawson, in his widely referenced book specified "fraud and abuse" (pp. 124-28) as the first reason why proposals were made to discourage land disposal. Similarly, Nelson (p. 11) notes that distaste for lawbreaking affected efforts in the first decades of the 19th century to correct problems produced by early federal land policies. Leshy's chapter on "Success, Abuse, and Difficulty: The Up and Down Sides of Free Access in Operation" (pp. 49-87) devotes 6 pages to successes and 22 to abuses, mainly involving the use of claims to get land for nonmining purposes. For a less scholarly example of concern over the matter, see General Accounting Office. 38. The failure of land law to promote efficient disposal is a recurrent theme. Laws designed to facilitate small-scale farming were unsuited for the ranching and forestry uses that were optimal in the West. Nelson (pp. 8-18) adds that many decades were required to secure the laws that encouraged farming. The frauds then are efforts to bypass the impediments. 39.
Further restrictions on mining have been imposed in the process of dedicating lands to parks, wildlife refuges, and wilderness and by the so-called Superfund program, which is directed at the cleanup of waste sites so broadly defined that many abandoned mining (and manufacturing) sites are included. The flaws of that program have generated another enormous literature that is ignored here. 40. Clawson, p. 72. See also Nelson, pp. 43-146. 41. Gifford Pinchot, The Fight for Conservation (Seattle: University of Washington Press, 1967), p. 81, quoted in Karl Hess Jr., Visions upon the Land: Man and Nature on the Western Range (Washington: Island, 1992), p. 79. 42. U.S. Department of Agriculture, Forest Service, A National Plan for American Forestry, Senate document no. 12, 73d Cong., 1st sess. (Washington: Government Printing Office, 1933), p. 1589, quoted in Hess, pp. 79-80. 43. Richard Gordon, "Conservation and the Theory of Exhaustible Resources," Canadian Journal of Economics and Political Science 32 (August 1966): 319-26. 44. M. A. Adelman, for example, suggests that the production patterns of mainframe computers and long-playing phonograph records followed output patterns (rapid growth followed by slowdown and decline) that were supposedly unique to "exhaustible" resource industries. He says, "For the period 1950-1990, the graph is a fairly good picture of 33 1/3 rpm phonograph record production and for 1950-2000, of mainframe computers." M. A. Adelman, The Genie Out of the Bottle: World Oil since 1970 (Cambridge, Mass.: MIT Press, 1995), p. 13. 45. In fact, the stock of "exhaustible" resources has been increasing, not decreasing, over time. See The State of Humanity, ed. Julian Simon (Cambridge: Blackwell, 1995), pp. 279-93, 303-22, 328-45. 46. See Donald W. Barnett, Minerals and Energy in Australia (Stanmore: Cassell Australia, 1979), pp. 191-210, for a review of iron ore developments; the removal of export controls in 1960 is noted on pp. 182-83.
Barnett shows an output rise from 4 million metric tons in 1960 to 51 million in 1970 and 96 million in 1977. Output for fiscal year 1995-96 was reported at 149 million metric tons. Australian Bureau of Resource Economics, Australian Commodities Forecasts and Issues, March 1997, p. 113. 47. See Harold Barnett and Chandler Morse, Scarcity and Growth: The Economics of Natural Resource Availability (Baltimore: Johns Hopkins University Press, 1963); Scarcity and Growth Reconsidered, ed. V. Kerry Smith (Baltimore: Johns Hopkins University Press, 1979); Richard Gordon, "A Reinterpretation of the Pure Theory of Exhaustion," Journal of Political Economy 75 (June 1967): 274-86; and M. A. Adelman, "Economics of Exploration for Petroleum and Other Minerals," Geoexploration 8 (1970): 131-50. 48. See John Myers, Stephen Moore, and Julian Simon, "Trends in Availability of Non-Fuel Minerals," in The True State of Humanity, ed. Julian Simon (Cambridge: Blackwell, 1995), pp. 303-12; and David Osterfeld, Prosperity versus Planning: How Government Stifles Economic Growth (New York: Oxford University Press, 1992), pp. 84-103. 49. The Hardrock Mining Royalty Act of 1997 (S. 327 and H.R. 778), sponsored by Senator Bumpers and Representatives Miller and Rahall, would terminate the right to patent (privatize) mining claims made after September 30, 1994 (sec. 4). The industry-sponsored alternative, the Mining Law Reform Act of 1997 (S. 1102), sponsored by Sens. Larry Craig (R-Idaho) and Harry Reid (D-Nev.), would alter the sale of patents of mining land to require a "fair market" value rather than $2.50 an acre (sec. 204). The "fair market" value, however, would apply only to the land exclusive of any minerals. 50. Robert Nelson and Vernon Smith, "On Divestiture and the Creation of Property Rights in Public Lands," Cato Journal 2 (Winter 1982): 663-85, agree with our assessment. Nelson (pp. 333-64) observes that all regulation, including public land policy, creates tacit rights. 
Leaseholders have undertaken substantial investments in their activities. Reforms, as most advocates of privatizing public land have independently concluded, should recognize those rights. No known system of disposal by competitive bidding can prevent confiscation or total destruction of those investments. The most feasible solution is free transfer of the public land to established users. Only when multiple fresh claimants arise would competitive bidding apply. 51. The Reagan administration's effort to privatize some federal lands was greatly harmed by the introduction of revenue-raising considerations into the argument. Instead of producing support from those wishing to reduce deficits, that simply fostered opposition from current leaseholders. Nevertheless, the Clinton administration has repeated that error by its calls for higher grazing fees and now for emphasis on revenue collection in future grants of rights to extract metal ores from public lands. 52. John Yinger et al., Property Taxes and House Values (San Diego: Academic Press, 1988). 53. The oil lease program is not ideal in one important respect, however: it requires royalties in addition to one-time charges at lease inception. 54. The main difference is the identity of who bears the risk of changes in the market value of the asset subsequent to the term of the lease. If the land is leased, the public sector bears the risk of unexpected changes. If the land is sold, the new owner bears the risk. 55. The timber-road access system operated by the Forest Service is a good example. See Jacob M. Schlesinger, "Kasich Prepares Attack on 'Corporate Welfare,'" Wall Street Journal, January 17, 1997, p. A14. 56. William Spanger Pierce, Economics of the Energy Industries, 2d ed. (Westport, Conn.: Praeger, 1996), pp. 86-90. 57. Under current rules, bidders in federal timber auctions must cut down the trees. See Mark Mauro, "Let Ecologists Buy Federal Timber," New York Times, March 29, 1997, p. A23. 58.
Late in the hearings by the independent commission appointed to review the situation, the DOI inspector general suddenly remembered that his office had two reports on the issue. One simply disclosed that the Interior Department officials running the leasing program had allowed a lobbyist to treat the officials and their wives to dinner at an expensive Washington, D.C., restaurant. The other report conveyed the tale of a "consulting geologist" who found data on DOI estimates of what the tracts were worth lying in the open in a Minerals Management Service office in Casper, Wyoming, memorized the figures, and wrote them down after leaving the office. Those and related incidents are described in Commission on Fair Market Value Policy for Federal Coal Leasing, Report of the Commission: Fair Market Value Policy for Federal Coal Leasing (Washington: Government Printing Office, 1984), pp. 381-86. 59. This argument not only ignores our warning that the government cannot assess accurately the value of mineral resources but also relies on both looking at only the "errors" that understate values and stretching the list of questionable objections. The whole controversy boiled down to ill-founded concerns that the collapse in the willingness to pay reflected temporary unfounded pessimism. That seemed a dubious proposition in 1984, and in 1998 it seems absurd. 60. Thus, while the Commission on Fair Market Value Policy for Federal Coal Leasing initially seemed to spur action, only DOI reports were produced, which today lie dusty on bookshelves. 61. One problem in establishing that this competition is possible is that established mining companies are aware of the threat and may preempt it by bids that equal the value of the rights. Obviously, in the charged arena of leasing, an unseen potential is not good enough. 62. The program remained shut down as of the completion of this paper in early 1998. 63.
Most members of the coal-leasing commission were clueless about the data deficiency problem. The effort to create awareness backfired. Two noted consultants were invited as witnesses; the executive director and one of the present authors (Gordon) briefed those consultants about the misimpressions about data, but the testimony still created the impression that data could readily be generated. 64. The coal-leasing commission's emphasis on the need to improve valuation arose mainly because its mandate effectively forced acceptance of presale estimates. The problem was exacerbated because only one member of the commission was a natural resource economist or in any other way familiar with public land resource issues. 65. A further complication is that much attention is paid to whether the payment is formally treated as a fee for using the land or a tax. Under a "fee" system, the charge is termed a royalty. Under a "tax" system, the charge is called a severance tax. The distinctions, however, are irrelevant to our argument. The important consideration is that a transfer occurs, not what it is called. Royalties are a broader concept since any land owner can demand a royalty but only governments can tax. For expositional purposes, the terms "royalties" and "severance taxes" are used interchangeably. 66. All outputs with marginal costs (the cost of expanding output) less than price should be undertaken. The main qualification is that if substantial externalities exist, taxes or subsidies are needed to eliminate or "internalize" the externalities. Coase's analysis of externalities shows that private deals can substitute for public ones and that the choice between a tax and a subsidy leaves the decisions unchanged but produces different cost burdens. He further suggests that the best way to share burdens differs from case to case. Ronald Coase, "The Problem of Social Cost," Journal of Law and Economics 3 (1960): 1-44. 67. 
That suggests the additional point that payments to land agencies are an incomplete measure of the overall payoff to government because taxation can be and is used as an alternative to charges. 68. Another advantage of reliance only on payments at the time of transfer is that no need arises to incur the expenses of monitoring the level of production on the relevant land. 69. To make matters worse, many severance systems require an initial payment in addition to the royalty. The royalties, of course, reduce the initial payment by the present value of the royalties as well as the deadweight losses that arise from sales lost because of the royalties. 70. Nelson (p. 77) reports that the Forest Service estimates that 22 percent of the timber volume harvested in 1978 did not generate enough revenue to cover public costs. 71. As background for this study, we examined the extent of federal land and its use. Although space does not permit presentation of those data, their essence is that a very large part of federal land is used for ordinary commercial activities, mainly ranching and timber harvesting. Those activities can be conducted efficiently on private land, and all the evidence suggests that public ownership inspires less efficient use than occurs under private ownership. 72. For a detailed examination of how such a program might be carried out, see Nelson and Smith. 73. In contrast, taxes on capital and labor (unlike taxes on land value) are always distortionary because those who are taxed alter their behavior to avoid some of the tax. 74. By vigorous competition we mean a state of affairs in which anyone who earns excess profits attracts newcomers to that activity who compete to reduce the excess. In an auction, vigorous competition would exist if new bidders raised the price paid for land rights. In public land policy debates, people advocate that sales of public land be conditional on payment of "fair market value."
That legal term seems simply to mean what the asset would sell for in a competitive market. The Commission on Fair Market Value Policy for Federal Coal Leasing, for example, was told that fair market value had more complex meanings. The only complexity identified was that market values can be hard to determine. The lawyer on the commission was so frustrated by this that he had an associate search the literature and find a Supreme Court decision that said that the adjective "fair" added nothing to the concept. 75. Total contributions (including corporate and foundation giving and bequests) to environmental charities in 1994 were $3.5 billion. U.S. Department of Commerce, Bureau of the Census, Statistical Abstract of the United States 1996 (Washington: Government Printing Office, 1997), tables 610 and 611, pp. 387-88. Published by the Cato Institute, Policy Analysis is a regular series evaluating government policies and offering proposals for reform.
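The arithmetic behind note 69 can be illustrated with a small numerical sketch: a per-unit royalty lowers the up-front bid a rational lessee will offer by the present value of the expected royalty payments, and on top of that any output the royalty prices out of the market is forgone entirely (the deadweight loss). All figures below (output, royalty rate, discount rate, lease value) are hypothetical, chosen only to show the mechanics, not taken from the study.

```python
def present_value(payments, rate):
    """Discount a stream of annual payments back to year 0."""
    return sum(p / (1 + rate) ** t for t, p in enumerate(payments, start=1))

# Hypothetical lease: 10,000 tons/year for 20 years at a $2/ton royalty.
years = 20
royalty_per_ton = 2.0
output_per_year = 10_000
discount_rate = 0.08

royalty_stream = [royalty_per_ton * output_per_year] * years
pv_royalties = present_value(royalty_stream, discount_rate)

# If the rights would be worth $500,000 free of royalties, a rational
# bidder offers at most the value net of the royalty burden.
value_without_royalty = 500_000.0
max_bid_with_royalty = value_without_royalty - pv_royalties

print(f"PV of royalties:  ${pv_royalties:,.0f}")
print(f"Max up-front bid: ${max_bid_with_royalty:,.0f}")
# The up-front bid falls by the full present value of the royalties.
# Output whose marginal cost plus royalty exceeds price is never
# produced at all -- a deadweight loss captured by neither party.
```

The point is that the royalty does not add revenue so much as shift it: the government trades a smaller certain payment now for a risky stream later, while also discouraging marginal production.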
It feels like there’s nothing to do but eat. Most of us learned quickly that whatever snacks we bought to hold ourselves over through quarantine can only last a few days. Often, the eating is just thoughtless: We’re near our kitchens all the time now, and there’s nothing stopping us from picking through everything edible we own. Maybe we have kids in the house who demand junk food; maybe we buy boxes of Cheez-Its for ourselves against our better judgment. Food is a comfort, one we’re subconsciously drawn to during challenging times. Mindless snacking isn’t a great habit, but it’s not our fault — it’s just the way our brains are wired. Yet a little bit of knowledge and a few hacks can trick that wiring to work in our favor. According to nutrition researcher and processed food addiction expert Joan Ifland, the way our brains developed millions of years ago is responsible for our compulsion to snack. “Visual signaling drives a lot of eating,” she says. “When the primitive brain, which runs a lot of food behavior, sees something edible it says, ‘Eat it. If you don’t eat it, some other animal is going to come along and eat it.’ The competitive urge for food kicks in. This is in a very different part of the brain from the rational brain. The frontal lobe is in the front, the primitive brain is in the back. They don’t necessarily communicate.” A big part of managing that compulsion to snack, then, is to simply keep snacks out of our sight. “Being aware of visual triggers as to what’s available in the house has a huge impact,” Ifland says. “So, if you’re going to have processed foods in the house, I often tell people to lock them in the trunk of their car. If there’s something in the house that you don’t want to eat, put it in a locked container. Everyone has a lockable trunk in their car, so that’s easy.” By making it more of a chore to get to the processed foods we have, we’re less likely to reach for them. 
Still, this might not necessarily satisfy our mental urge to eat. Fortunately, Ifland says we can trick our brain to halt this, as well. To do so, we have to think about smell. “The nose is the only place in the entire human body where you have neurons exposed to the air,” she explains. “So smell has the greatest impact on the brain of anything. Here’s the trick: almost everyone has a crockpot or slow cooker, and it’s really easy to fill them up. You can just dump whatever ingredients you have in — a pound of ground beef, a bag of carrots, some garlic powder, whatever. Put the lid on, set it for eight hours and walk away.” In doing so, we fill our homes with the scent of food without much effort. “You will naturally gravitate toward wanting to eat that [as opposed to the other snacks in the house] because your primitive brain knows it’s available,” she continues. “You’re just naturally drawn to it — in other words, the fight isn’t there.” Basically, your subconscious brain knows it will be eating at some point, so the competitive instinct for food is quieted. Without this instinct, you’re less likely to think about food or gravitate toward snacking. “Everyone has this fight in their brain,” says Ifland. “You impulsively eat, and then you feel this remorse. [People] start blaming themselves and hating themselves, and it’s just because processed foods have this craving allure. Processed foods are deliberately designed to be highly crave-able, setting off urges in the brain. It’s uncomfortable. People don’t know that they can make that go away. If you’ve got processed foods all over the house, but if you have the smell of healthy food through the house, then you’ve activated the natural healthy feeding pathways in the brain.” Of course, Ifland says, the best way to prevent snacking is to simply keep fewer snacks at home. More than that, though, she suggests eating more during scheduled meals. 
“Getting hungry well before mealtime is a sign that you haven’t eaten enough in the previous meal. Eat full portions at mealtime, and you won’t be as tempted to snack,” she says. Both reducing the snacks in the house and eating fuller meals can be particularly challenging for people with children, but according to Ifland, it’s a worthwhile fight. “Processed foods are very much like cigarettes in the diseases they create,” she says. “We know today that food-related diseases have overtaken smoking as the leading cause of preventable death. One of the things parents may not know is that these processed foods given to children can cause irritability, anxiety, depression, fatigue, attention issues and more because of the cravings. When the brain is focused on cravings, there’s literally not enough blood flow entering the frontal lobe, which dictates attention, decision-making, emotional processing, etc.” It’s better, then, to kick the processed food habit now, while kids are home from school. “My tip here is, in the morning, make a big bowl of cut-up fruit,” she says. “Leave it out for the kids to access. There’s a visual cue there, and the parents are able to suggest the kids eat the fruit before turning to processed foods.” Even if you don’t have children, setting yourself up with a healthier snack option in the morning that lasts through the day might be a useful tactic for compulsive snackers. Food is one of our few indulgences these days, and it’s okay to lean into that in moderation. But as many people are thinking about their overall well-being and immune health right now, it’s important to consider how our coping methods will impact us long term. It’s not necessarily that you have to eat less, either. Think about it this way: A pork roast lovingly simmered with onions, potatoes, garlic and herbs over several hours will probably taste way better than a bag of Doritos, anyway.