In a world that’s increasingly conscious of its environmental impact, eco-tourism has emerged as a popular and responsible way to explore the beauty of our planet while preserving its natural wonders. Eco-tourist destinations offer travelers a unique opportunity to connect with nature, support local communities, and contribute to conservation efforts. In this article, we will embark on a journey to discover some of the most captivating eco-tourist destinations around the globe, each offering a remarkable blend of natural beauty, adventure, and sustainability.

Table of Contents
- What is Eco-Tourism?
- The Importance of Eco-Tourism
- Costa Rica: A Biodiversity Haven
- Iceland’s Glacial Wonders
- The Pristine Beauty of New Zealand
- Galápagos Islands: Darwin’s Playground
- Sustainable Safari in Botswana
- Exploring the Amazon Rainforest
- Bhutan: A Carbon-Neutral Kingdom
- Adventure in Alaska
- The Enigmatic Greenland
- Sustainable Trekking in Nepal
- Baja California: A Marine Wonderland
- Madagascar: A Unique Biodiversity Hotspot
- Patagonia’s Untamed Wilderness

What is Eco-Tourism?

Eco-tourism, short for ecological tourism, is a responsible form of travel that promotes environmental conservation and sustains the well-being of local communities. It involves exploring natural areas, often pristine and untouched, while minimizing the negative impact on the environment. Eco-tourists seek experiences that connect them with nature, wildlife, and indigenous cultures, fostering a deeper understanding of our planet’s intricacies.

The Importance of Eco-Tourism

Eco-tourism plays a pivotal role in preserving the world’s biodiversity, protecting fragile ecosystems, and supporting local economies. It encourages travelers to appreciate and respect the environment while contributing to conservation efforts through entrance fees and sustainable practices. Moreover, it promotes cultural exchange between visitors and local communities, creating a win-win situation for both.
Costa Rica: A Biodiversity Haven

Nestled in Central America, Costa Rica is a gem for eco-tourists. With lush rainforests, diverse wildlife, and pristine beaches, it’s a biodiversity hotspot. Travelers can explore national parks like Corcovado and Manuel Antonio, hike volcanoes, and witness sea turtle nesting. Costa Rica is a shining example of how eco-tourism can protect and celebrate nature.

Iceland’s Glacial Wonders

Iceland boasts surreal landscapes, including glaciers, geysers, and waterfalls. The country’s commitment to sustainability is evident in its use of renewable energy and responsible tourism practices. Visitors can soak in natural hot springs, go ice caving, and marvel at the Northern Lights in this land of fire and ice.

The Pristine Beauty of New Zealand

New Zealand offers travelers an array of eco-friendly adventures, from exploring the otherworldly landscapes of Fiordland National Park to encountering indigenous Maori culture. The country’s dedication to conservation makes it a prime destination for eco-conscious travelers.

Galápagos Islands: Darwin’s Playground

The Galápagos Islands, famous for inspiring Charles Darwin’s theory of evolution, remain an eco-tourist’s dream. These remote islands are home to unique species found nowhere else on Earth. With strict regulations to protect the environment, visitors can snorkel with sea lions and observe giant tortoises while contributing to preservation efforts.

Sustainable Safari in Botswana

Botswana offers a luxurious yet eco-conscious safari experience. Explore the Okavango Delta, home to diverse wildlife, on a low-impact safari. By choosing responsible operators, you support conservation and ensure that generations to come can witness Africa’s natural wonders.

Exploring the Amazon Rainforest

The Amazon Rainforest, often dubbed “the lungs of the Earth,” is a biodiverse wonder. Eco-tourists can venture into this dense wilderness, spot rare wildlife, and learn about indigenous cultures.
Supporting sustainable practices in the Amazon helps combat deforestation and protect countless species.

Bhutan: A Carbon-Neutral Kingdom

Bhutan is a pioneer in sustainability, measuring its success through Gross National Happiness rather than GDP. With lush landscapes, monasteries, and a commitment to carbon neutrality, Bhutan offers eco-tourists a unique blend of culture and conservation.

Adventure in Alaska

Alaska’s untouched wilderness beckons adventure seekers. From glacier hiking to wildlife cruises, this rugged state provides ample opportunities to connect with nature while supporting local economies committed to sustainable tourism.

The Enigmatic Greenland

Greenland, with its icebergs and Arctic landscapes, is a fascinating eco-tourist destination. Travelers can kayak among glaciers, witness the midnight sun, and gain insights into the Inuit way of life. Greenland’s untouched beauty is a testament to responsible tourism.

Sustainable Trekking in Nepal

Nepal offers trekkers a chance to explore the majestic Himalayas while contributing to local communities. Eco-tourists can embark on breathtaking treks, visit ancient monasteries, and immerse themselves in Nepal’s rich culture.

Baja California: A Marine Wonderland

Baja California’s pristine waters are a haven for marine life enthusiasts. Dive with sharks, swim alongside dolphins, and explore vibrant coral reefs while supporting marine conservation efforts in this Mexican paradise.

Madagascar: A Unique Biodiversity Hotspot

Madagascar, an island nation off the coast of Africa, is a biodiversity hotspot. It’s home to lemurs, chameleons, and countless unique species. By choosing eco-friendly lodges and tours, travelers can explore this ecological wonder responsibly.

Patagonia’s Untamed Wilderness

Patagonia, shared by Chile and Argentina, offers untamed wilderness and dramatic landscapes.
Eco-tourists can hike in Torres del Paine National Park, witness glaciers, and experience the region’s rugged beauty while supporting sustainable tourism initiatives.

Eco-tourism is more than just a travel trend; it’s a commitment to preserving our planet’s natural wonders for future generations. By choosing eco-tourist destinations and practicing responsible tourism, travelers can make a positive impact on the environment and local communities while embarking on unforgettable adventures.

FAQs About Eco-Tourist Destinations

- What is the main goal of eco-tourism?
  - The main goal of eco-tourism is to promote environmental conservation, support local communities, and provide travelers with unique and sustainable experiences.
- How can I ensure I’m being an eco-conscious traveler?
  - To be an eco-conscious traveler, you can research your destination, choose eco-friendly accommodations, and follow responsible tourism guidelines, such as minimizing waste and respecting wildlife.
- Are eco-tourist destinations more expensive than traditional ones?
  - Eco-tourism destinations vary in price, but they often offer unique and enriching experiences that justify the cost.
- What role do local communities play in eco-tourism?
  - Local communities in eco-tourist destinations benefit from job opportunities and income generated by tourism, which can lead to improved living standards.
- How can I contribute to conservation efforts during my eco-tourism trip?
  - You can contribute to conservation efforts by choosing eco-friendly tours and activities, respecting natural habitats and wildlife, and supporting local initiatives aimed at preserving the environment.
- What are some eco-tourist destinations that are accessible to families with children?
  - Many eco-tourist destinations are family-friendly, offering educational experiences for children. Some examples include Costa Rica, New Zealand, and the Galápagos Islands.
- Is it necessary to have prior outdoor experience to enjoy eco-tourism?
  - No, prior outdoor experience is not always necessary. Eco-tourist destinations often offer activities and tours suitable for a wide range of experience levels, from beginners to seasoned adventurers.
- How can I verify if a tour operator or accommodation is truly eco-friendly?
  - Look for certifications or affiliations with eco-friendly organizations, read reviews from previous travelers, and ask the operator or accommodation provider about their sustainability practices and policies.
- What are some eco-tourism destinations that focus on marine conservation?
  - Eco-tourism destinations with a strong focus on marine conservation include Baja California, the Great Barrier Reef in Australia, and the Seychelles in the Indian Ocean.
- How can I reduce my carbon footprint while traveling to eco-tourist destinations?
  - You can reduce your carbon footprint by choosing eco-friendly transportation options, minimizing single-use plastics, conserving energy and water, and supporting local initiatives that offset carbon emissions.
To effectively teach students how to critically consume media, it is paramount for teachers to be media literate (Ian & Temur, 2012; Keller-Raber, 1995; Schmidt, 2012). Using Freirean critical literacy as a theoretical framework, this case study investigated a 60-hour teacher training program in media literacy that promotes Catholic Social Teaching, and how undergoing this training influenced teachers’ perceptions of media literacy, Catholic Social Teaching, and the link between the two. As the researcher, I performed participant observation as a trainee in the program. Five teachers, alumni of the program, participated in this study: one middle school teacher, three high school teachers, and one college professor, all of whom taught at Christian private schools. I recorded how participants applied Media Mindfulness—a faith-based media literacy strategy—in their practice as a response to the Church’s call for Catholic teachers to engage in media education (Benedict XVI, 2008; John Paul II, 1987, 1990, 1992, 2005). Findings show how the Media Mindfulness method helped teachers integrate media literacy into their practice, promoting student empowerment and character education. A follow-up action research study at a Catholic high school where teachers are trained in Media Mindfulness is recommended to find out: a) how the training influenced teachers’ confidence in integrating media education into their practice; b) to what extent students’ assimilation of Catholic Social Teaching concepts resulted from the teacher training program; and c) how training teachers in the Media Mindfulness model influenced the school’s culture in addressing social justice issues.
More Information
- Committee: Bickett, Jill; Stephenson, Rebecca
- School: Loyola Marymount University
- School Location: United States -- California
- Source: DAI-A 76/10(E), Dissertation Abstracts International
- Subjects: Religious education, Teacher education, Mass communications
- Keywords: Catholic education, Catholic social teaching, Mass media, Media education, Media literacy, Teacher training

Copyright in each Dissertation and Thesis is retained by the author. All Rights Reserved.
Archived: Better evaluation could unlock benefits of public health law

24 May 2016

Australia has an excellent track record in effective public health law interventions, such as plain packaging of cigarettes, gun laws, food labelling and mandatory seatbelt laws, but there is great untapped potential to use the law to combat the chronic disease epidemic, a seminar on public health law has heard. PhD candidate Jan Muhunthan (pictured above) called for greater emphasis on the measurement and evaluation of public health law so it could play a more prominent role in decision making. “At national, state and territory and local government levels, we need to redesign public policies that are failing our nation’s health,” she told the seminar, co-hosted by The Australian Prevention Partnership Centre and The George Institute for Global Health. “Debates about health all too often focus on healthcare, not the social determinants of health,” Ms Muhunthan said. “We need significant investment targeted to broad areas that affect our health, such as poverty, inequality, housing, transport and education, and the law can play a part in all of these.”

Barriers to research

She said barriers to the generation of public health law research in Australia included a lack of funding, a lack of education opportunities for researchers in this field, and a disconnect between public health policy and law experts and key decision makers in health and other sectors. Ms Muhunthan is undertaking a Prevention Centre-funded PhD on the role of public health law in preventing chronic disease. Her research has already identified many unintended impacts associated with liquor control laws.
For example, in NSW, Shoalhaven City Council rejected an application for a huge liquor outlet in East Nowra, based on community and police evidence that the ready availability of cheap alcohol would have negative impacts on health, such as exacerbating already high levels of domestic violence, poor school attendance and alcohol-related crime. However, the court overturned the decision and the proposal went ahead. When Ms Muhunthan investigated the drivers associated with similar court rulings, she found that in 75% of cases the alcohol industry was successful in overturning local government decisions around the availability of liquor outlets, pubs and clubs, sometimes despite evidence from the community about possible adverse health outcomes. Ms Muhunthan’s project is evaluating the impact of public health law by investigating case law on the regulation of alcohol availability in Australia, through systematic reviews of the effectiveness of Indigenous community-led alcohol restrictions, and through qualitative methods to gauge the effectiveness of public health law interventions. She aims to develop new methodological approaches to evaluating public health law, policy and regulation, and to translate the evidence generated by the project into clear and practical guidelines that will see public health law prioritised in disease prevention.

- Helen Signy, Senior Communications Officer
Acne is a chronic inflammatory condition that causes both psychological and physical scarring. While it most often affects adolescents, it is not uncommon in adults and can also be seen in children. Acne is associated with significant physical and psychological morbidity, such as permanent scarring, poor self-image, depression, and anxiety. In fact, data indicate that the negative effects of acne on a patient’s quality of life are similar to those of asthma, epilepsy, or arthritis. [Zaenglein 2018] The negative impacts of acne can be greater in adults than in adolescents because acne is not normally seen as a condition that affects adults. The good news is that clinicians in primary care can make a major difference for people with acne by taking a proactive approach and offering guideline-recommended treatments, many of which are available over-the-counter (OTC). [Jafferany 2010]

Enhancing Communication About Acne in Primary Care

Despite the profound impact of acne, patients report that their healthcare professionals (HCPs) often have “unempathetic” responses when they voice their concerns. [Gollnick 2008] One reason is that many clinicians consider acne a minor condition that will go away after adolescence, and so they do not feel an urgency to offer treatment. However, primary care clinicians are on the front line and have the ability to positively impact the care of their patients with acne by providing proactive management and counseling. Primary care clinicians have the ideal opportunity to ask their adolescent and adult patients about acne during the annual wellness visit. Based on feedback from patients with acne, clinicians should be aware that the patient’s perception of their acne severity and its personal impact may not necessarily correlate with the clinical assessment. Therefore, clinicians should be empathetic and ask specific questions to determine the patient’s perception of their acne.
The short, four-question Acne-Q4 questionnaire can be used to evaluate the impact of acne on patients. Questions focus on dissatisfaction with appearance, feeling upset, concerns about meeting new people, and fears about scarring. The questionnaire can also be used to evaluate the impact of treatment. [Saitta 2012]

Acne vulgaris is characterized by noninflammatory open or closed comedones and by inflammatory papules, pustules, and nodules. It typically affects the face, upper chest, and back. Acne severity is classified based on the number and type of lesions and is used to inform guideline-based treatment decisions (Table 1). [Van Onselen 2017]

Table 1: Classification of Acne Severity [Zaenglein 2016]

It is important to be aware that acne scars can form even in patients with mild acne; however, early and effective treatment can help reduce the risk of scar formation. [Zaenglein 2018; UK Guidelines; Kownacki 2016]

Management of Acne

The 2016 American Academy of Dermatology (AAD) acne management guidelines provide primary care providers with simple directions (Table 2). [Zaenglein 2016] Mild to moderate acne can usually be effectively managed with topical OTC treatments, while patients with severe acne should be referred to a dermatologist.

Table 2: Treatment algorithm for the management of acne vulgaris [Zaenglein 2016]
Note: underlined text indicates that the drug may be prescribed as a fixed-combination product or as separate components.

The most appropriate treatment is based on the grade and severity of the acne [Zaenglein 2018] and should be directed toward the known pathogenic factors, including:
- increased sebum production
- hyperkeratinization of the follicular infundibulum
- presence of Cutibacterium acnes (formerly Propionibacterium acnes)

Topical retinoids (e.g., adapalene, tretinoin, and tazarotene) are the foundation of maintenance treatment for acne of any severity.
They are anti-inflammatory and comedolytic, and they treat precursor microcomedone lesions. They also treat secondary lesions, including scarring and pigmentation, through actions in the dermis. [Leyden 2017] Standard tretinoin formulations cannot be applied at the same time as benzoyl peroxide and are unstable when exposed to light; microsphere and polyolprepolymer formulations do not have these restrictions. [Zaenglein 2018] Adapalene 0.1% gel has efficacy similar to tretinoin 0.025% gel with a better safety profile, and this concentration is available OTC. [Khemani 2016] A long-term safety study of a higher concentration of adapalene (0.3%) gel applied once daily for 52 weeks showed that signs and symptoms of local cutaneous irritation (e.g., erythema, dryness, scaling, and stinging/burning) are usually mild or moderate. Mean tolerability scores in the study were below 1 (mild) at all time points for the parameters assessed. [Weiss 2008]

Benzoyl peroxide (BP) is the topical antimicrobial of choice for acne, and it is often combined with retinoids due to synergistic effects. It releases free oxygen radicals that reduce the concentration of C. acnes without causing antimicrobial resistance. [Zaenglein 2018] The bactericidal properties of BP and the complementary comedolytic and anti-inflammatory effects of topical retinoids make these agents the preferred choice. [Zaenglein 2018] The synergistic efficacy of BP with adapalene is evidenced by pooled data from 3 double-blind controlled studies, in which patients were randomized to receive adapalene-BP, adapalene, BP, or vehicle once daily for 12 weeks. The combination was significantly more effective than the individual components in decreasing lesion counts as early as week 1 and throughout the 12 weeks. [Tan 2011] Many formulations of BP are available OTC, including washes and leave-on creams and gels.
Conveniently, BP 2.5% gel is also available in prescription formulations in a fixed-dose combination with adapalene 0.1% or 0.3% gel. [Zaenglein 2018] The topical antibiotics clindamycin and erythromycin also decrease the concentration of C. acnes, but their use can lead to bacterial resistance. They should be used in combination with tretinoin or BP in patients with moderate to severe acne [Zaenglein 2018] and only for 3 to 4 months to limit the development of antimicrobial resistance. Oral contraceptives containing an estrogen and a progestin are as effective as oral antibiotics for controlling inflammatory acne in adolescent and adult women. [Zaenglein 2018] Spironolactone has potent antiandrogen activity, decreasing testosterone production and inhibiting the binding of testosterone and dihydrotestosterone to androgen receptors in the skin. It is recommended in the 2016 AAD guidelines as a treatment option for acne in select females. [Zaenglein 2016] Oral isotretinoin is a last resort for patients with severe acne. It is a teratogenic agent and must be prescribed only by providers, usually dermatologists, who are enrolled in the FDA-mandated risk management program iPLEDGE. (Further details regarding this program can be found at www.ipledgeprogram.com.) [Zaenglein 2018] Despite evidence-based recommendations for topical retinoids as the foundation of treatment for all types of acne, they are underused in clinical practice by both primary care and dermatology clinicians, especially in the preadolescent population. [Leyden 2017] Data from a recent survey of patients with acne of all severities indicate that only about one-third actually purchased OTC formulations of BP, but most did get their recommended prescription products.
[Huyler 2017; Zaenglein 2018] Primary care clinicians can take an active role in ensuring that their patients with mild to moderate acne have access to the most effective and well-tolerated treatments for acne, including the foundational treatment of a retinoid in combination with BP. [Zaenglein 2018]

Reducing Skin Irritation and Optimizing Adherence to Acne Treatment

Nonadherence to acne treatment is common and is often due to inappropriate selection and/or application of medications, as well as unrealistic expectations about the time course of treatment response. Clinicians should use a shared decision-making approach when selecting a personalized treatment plan, considering individual factors such as skin sensitivity, lifestyle, patient preferences, and experience with previous treatments to optimize adherence. Since skin irritation from topical treatments with retinoids and BP is a common cause of nonadherence, it is important to select agents with a good tolerability profile. For example, adapalene is significantly less irritating than tretinoin or tazarotene. [Burchett 2017] And using lower concentrations of BP is often as effective as higher concentrations while decreasing irritation. BP does have a bleaching effect on colored garments and sheets, however, so patients should be instructed to take necessary precautions or be advised to use a BP-based wash. Following a good basic skin care routine is also important to achieve optimal results. Clinicians should instruct patients to follow a skin routine that limits washing to twice daily using gentle, non-comedogenic cleansers. A fragrance-free moisturizer applied over topical medication can also minimize dryness and irritation. All the retinoids are mildly photosensitizing, so patients should always use sunscreen to avoid sunburn. [Zaenglein 2018] It is critically important for clinicians to set clear expectations about the time course of treatment response.
Patients often expect immediate results and become discouraged, causing them to stop treatment. They should understand that it may take 8 to 12 weeks before they notice an improvement in their acne, and there may even be a transient worsening shortly after treatment is started. Noncomedogenic makeup may be used to cover blemishes until the medications take effect. [Zaenglein 2018] Scheduling close follow-ups and continued communication with the patient through the HIPAA-compliant patient portal are possible strategies that clinicians can use to motivate their patients to adhere to their treatment. For example, clinicians can encourage patients to document their treatment progress by taking pictures at regular intervals. Primary care clinicians who proactively identify and evaluate acne severity and provide evidence-based treatments for their adolescent and adult patients address an important unmet need. By developing an individualized treatment plan and providing appropriate education, they can maximize patient adherence and optimize the outcomes of their patients with acne.

References

Burchett S. Is treatment of acne as simple as encouraging primary care physicians to prescribe more retinoids? J Am Acad Dermatol. 2017;76:37.

Gollnick HP, Finlay AY, Shear N; Global Alliance to Improve Outcomes in Acne. Can we define acne as a chronic disease? If so, how and when? Am J Clin Dermatol. 2008;9(5):279-84.

Hauk L. Acne vulgaris: treatment guidelines from the AAD. Am Fam Physician. 2017 Jun 1;95(11):740-741.

Jafferany M, Vander Stoep A, Dumitrescu A, Hornung RL. The knowledge, awareness, and practice patterns of dermatologists toward psychocutaneous disorders: results of a survey study. Int J Dermatol. 2010 Jul;49(7):784-9.

Khemani UN, Khopkar US, Nayak CS. A comparison study of the clinical efficacy and safety of topical adapalene gel (0.1%) and tretinoin cream (0.025%) in the treatment of acne vulgaris. IJBCP. 2016. DOI: http://dx.doi.org/10.18203/2319-2003.ijbcp20161465.
Saitta P, Grekin SK. A four-question approach to determining the impact of acne treatment on quality of life. J Clin Aesthet Dermatol. 2012 Mar;5(3):51-57.

Stein Gold L, Weiss J, Rueda MJ, Liu H, Tanghetti E. Moderate and severe inflammatory acne vulgaris effectively treated with single-agent therapy by a new fixed-dose combination adapalene 0.3%/benzoyl peroxide 2.5% gel: a randomized, double-blind, parallel-group, controlled study. Am J Clin Dermatol. 2016;17:293-303.

Tan J, Gollnick HP, Loesche C, Ma YM, Gold LS. Synergistic efficacy of adapalene 0.1%-benzoyl peroxide 2.5% in the treatment of 3855 acne vulgaris patients. J Dermatolog Treat. 2011 Aug;22(4):197-205.

Van Onselen J. Managing acne in primary care. Br J Fam Med. 2017. Available at: https://www.bjfm.co.uk/managing-acne-in-primary-care. Accessed Sept. 10, 2017.

Weiss JS, Thiboutot DM, Hwa J, Liu Y, Graeber M. Long-term safety and efficacy study of adapalene 0.3% gel. J Drugs Dermatol. 2008 Jun;7(6 Suppl):s24-8.

Zaenglein AL, Pathy AL, Schlosser BJ, et al. Guidelines of care for the management of acne vulgaris. J Am Acad Dermatol. 2016 May;74(5):945-73.e33.

Zaenglein AL. Acne vulgaris. N Engl J Med. 2018;379:1343-52.
The recent slowdown in health care spending may be the result of the economic recession, or it may reflect public and private efforts to control spending. Whatever the cause, it remains the case that current spending is far higher than it should be. Continued efforts at cost control—including continued implementation of the Affordable Care Act’s reforms—will be necessary to bring spending more in line with the rest of the economy and to avoid rationing care.

Health care spending increased by only 0.8 percent in 2012, slightly less than the rise in gross domestic product per capita. This is a considerable slowdown: since 1969, annual health spending had increased an average of 2.3 percentage points more than GDP growth. Analysts are divided over the reasons. Some see the moderation in spending as part of slow recovery from the recession of 2007–09, and they fully expect costs to surge as the economy recovers. But others believe recent efforts to control health spending—including some features of the Affordable Care Act—may be working. In a New England Journal of Medicine article, The Commonwealth Fund’s David Blumenthal and Kristof Stremikis and Harvard University’s David Cutler weighed in on the debate.

History of Health Care Spending

Growth in national health care spending escalated rapidly during the 1960s following enactment of Medicare and Medicaid and remained high throughout the ’70s and ’80s. Presidents Nixon, Carter, and Reagan tried different strategies to curb costs, but none had any real impact. In the ’90s, employers and insurers turned to managed care. Although this approach met with some initial success, rising costs returned, in part because of the managed care backlash and the mergers taking place in the health care industry.
Causes and Consequences of Health Care Cost Increases

During the past 50 years, the major factor in cost growth has been the development and diffusion of new medical technologies—from cardiac procedures to prescription drugs to advances in imaging. Estimates suggest about half the annual increase in U.S. health care spending has resulted from the introduction of new technologies. Rising prices for health care services have been an important factor as well, given that the U.S. does not set or negotiate prices with providers or pharmaceutical companies, as many other countries do. The aging of the population and the obesity epidemic have had modest effects, while waste throughout our health system—in the form of redundant tests, unnecessary procedures, and inefficiencies in insurance administration—has had a larger impact.

Recently, however, new, expensive blockbuster drugs are being developed at a decidedly slower pace. This trend, along with the spread of tiered formularies in prescription drug plans, has helped lower annual growth in pharmaceutical spending from 10.1 percent in 1993–2003 to 2.3 percent in 2003–2012. The diffusion and development of medical technologies appear to be slowing, too. On the demand side, many people face very high cost-sharing, which discourages use of health services. On the supply side, providers are facing restrictions on utilization and incentives to prescribe less care. Both public and private purchasers have introduced reforms, such as penalties for preventable readmissions, which encourage more efficient care. And payment reforms, such as accountable care and bundled payments, have shown early evidence of savings.

Strategies to Curb Health Care Costs

If cost increases return to their historical pattern, the United States is projected to spend $5 trillion on health services in 2022, placing a tremendous burden on the government and forcing higher taxes or cutbacks in spending on other high-priority areas, like education.
In the private sector, these increases would cut into wage gains for all employees. Rationing services, by reducing benefits and increasing cost-sharing, is one option for containing costs. But the authors point to another: “Almost without exception, recent studies of health care costs have recommended discarding the current fee-for-service payment system in favor of having providers share risk for the cost and quality of services,” they write. Such arrangements could include capitation or partial capitation, global budgeting, or accountable care organizations. All of these approaches financially reward providers for avoiding unnecessary care and delivering higher-value services. Readily available health information, care coordination, accessible primary care services, and engaged consumers are also part of the solution.

The Bottom Line

Regardless of whether health care spending continues to slow or returns to the pre-recession rate of escalation, we have an opportunity to create constructive, systemic reform that avoids the pain of health care rationing.
from The American Heritage® Dictionary of the English Language, 4th Edition
- n. A person evacuated from a dangerous area.
from Wiktionary, Creative Commons Attribution/Share-Alike License
- n. A person who has been evacuated, especially a civilian evacuated from a dangerous place in time of war
from WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.
- n. a person who has been evacuated from a dangerous place

Since the evacuee is in a shelter, mail service has been suspended in many of the hardest hit areas and some of the homes are likely still under water, it seems clear that those claim forms won't be mailed back any time soon.

Following up on a footnote to yesterday's post about violence in evacuee shelters -- and speculation that withdrawal from illicit and prescription meds may be a factor -- Boing Boing reader Laura says Imagine the coolness of a Shelter OS that could, with little or no work by the staff, find other Shelter OS’s running elsewhere, trade information with them and start to link up scattered families or automatically bring along medical information and history as an evacuee is moved from shelter to shelter.

The consequences of their relationship resonate through the lives of a vividly imagined cast of characters: the drunken BBC comedian who befriends Esther, Esther's stubborn father, and the resentful young British "evacuee" who lives on the farm -- even the German-Jewish interrogator investigating the most notorious German prisoner in Wales, Rudolf Hess.

Barry Lemoine, a refugee (or are we saying "evacuee" now?) from Hurricane Katrina has been living in Troy, writing for the local paper. The word "evacuee" may not be fully accurate if, as I suspect, many of the displaced persons end up settling elsewhere permanently.

First, though, a different kind of evacuee problem.

Shuji Kajiyama/Associated Press An evacuee slept at a temporary shelter in Rikuzentakata, Iwate prefecture, on March 21.
“We are glad it's over,” said evacuee Sara Ali, a 30-year-old with dual Libyan-American citizenship who lives in Libya.

“It was pretty uncomfortable just because of the delay,” said Lucile Usielmerazcerna, another evacuee from Santa Cruz, California.
Students to travel to Germany to learn about renewable energy and ‘green’ buildings

PLATTEVILLE, Wis. — Dr. Samir El-Omari, general engineering professor at the University of Wisconsin-Platteville, will be taking a group of students to Germany this summer to explore green technology in Europe. El-Omari did his undergraduate studies in Germany and maintains many contacts with professors there. According to El-Omari, Germany is one of the most advanced countries in green building design and renewable energy. Students will visit solar and wind farms, attend lectures, and work on projects with German students and their professors. They will also visit a green building job site to see the building process.

So, what makes a green building? A green building is a high-performance building that uses as little conventional energy as possible. Conventional energy comes from fossil fuels such as coal and natural gas. Green buildings rely instead on technologies such as solar and wind power and reduce energy and water consumption. There are many ways builders are trying to go greener. They design windows to let in more natural light in place of artificial light. Builders also try to use outside air to heat and cool the building instead of traditional boiler and air conditioning units. Automatic lighting turns the lights off when a room is empty and dims them depending on the natural light coming in. Some buildings even have indoor waterfalls to help control humidity levels.

According to El-Omari, this trip will be essential for future engineers wishing to get into the renewable energy field. “This will help students gain the understanding they need to prove to future investors that green is the more efficient and cheaper way,” said El-Omari. Green building construction typically costs 2-8 percent more than traditional construction.
However, investors will save in the long run from lower energy costs. El-Omari hopes that this trip will help students understand that green energy is important for future generations and that they have to work hard to produce green buildings. The trip will last two weeks, and students will visit several German cities, including Munich, Stuttgart, Koln, Frankfurt and Darmstadt. El-Omari hopes to make this an annual visit and to expand it to include other majors and other countries, particularly developing countries. “Developing countries have a different perspective about green technology,” said El-Omari. “That means they will have different expertise to offer to students.” He hopes that students will get a good international experience and appreciate the chance to work with students from Germany. Anyone interested in going may contact El-Omari to discuss the trip.

Contact: Samir El-Omari, general engineering, (608) 342-6170, email@example.com

Written by: Megan Schmidt, UW-Platteville University Information and Communications, (608) 342-1194, firstname.lastname@example.org
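The cost trade-off described in the article (a 2-8 percent construction premium recovered through lower energy bills) reduces to a simple payback calculation. A sketch; the construction cost and annual savings figures are hypothetical, and only the 2-8 percent range comes from the article:

```python
# Simple-payback sketch for a green-building cost premium. The base cost
# and annual energy savings are made-up illustration values; the 2-8
# percent premium range is the figure cited in the article.

def payback_years(base_cost, premium_pct, annual_savings):
    """Years of energy savings needed to recover the green premium."""
    premium = base_cost * premium_pct / 100
    return premium / annual_savings

base_cost = 10_000_000   # hypothetical conventional construction cost ($)
annual_savings = 60_000  # hypothetical yearly energy-bill reduction ($)

for pct in (2, 8):
    years = payback_years(base_cost, pct, annual_savings)
    print(f"{pct}% premium: paid back in about {years:.1f} years")
```

A fuller analysis would discount future savings and include maintenance differences, but even this toy version shows why the premium percentage matters so much to investors.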
Keep Your Heart Healthy with Resistance Training

Heart disease remains the leading cause of death among U.S. men and women, claiming a life every 33 seconds. Smoking and eating a high-fat, high-salt diet are risk factors for the disease, and exercise (or lack of exercise) may also play a role. Consider a recent study published in the British Journal of Sports Medicine. Twenty-four healthy premenopausal women were evaluated to examine the effects of a supervised 14-week resistance training program on cholesterol levels and overall body composition. Subjects were randomly assigned to a non-exercising control group or to an exercise group that participated in 45-50-minute resistance training sessions, three days a week on non-consecutive days. At the end of the 14-week training period, total cholesterol and LDL-C (the "bad" cholesterol) levels were significantly lower in the training group compared to the control group, and HDL-C (the "good" cholesterol) levels had increased slightly.

Resistance training can involve free weights and/or weight machines, and many men (and more and more women) use resistance training as a supplement or alternative to aerobic exercise. Your chiropractor can help you choose a resistance training program best suited to your physical condition, time constraints and fitness goals.

Prabhakaran B, Dowling EA, Branch JD, et al. Effect of 14 weeks of resistance training on lipid profile and body fat percentage in premenopausal women. British Journal of Sports Medicine 1999; Vol. 33, pp. 190-195.
Security Awareness : Computer Virus

In 1983, Fred Cohen coined the term “computer virus”, postulating that a virus was “a program that can ‘infect’ other programs by modifying them to include a possibly evolved copy of itself.” Cohen expanded his definition a year later in his 1984 paper, “A Computer Virus”, noting that “a virus can spread throughout a computer system or network using the authorizations of every user using it to infect their programs. Every program that gets infected may also act as a virus and thus the infection grows.”

Computer viruses, as we know them now, originated in 1986 with the creation of Brain, the first virus for personal computers. It was written by two brothers, Basit and Amjad Farooq Alvi, who ran a small software house in Lahore, Pakistan, and it started the race between viruses and anti-virus programs which still goes on today.

Using the above explanation, it can be said that viruses infect program files. However, viruses can also infect certain types of data files, specifically those types of data files that support executable content, for example, files created in Microsoft Office programs that rely on macros. Compounding the definition difficulty, viruses also exist that demonstrate a similar ability to infect data files that don’t typically support executable content – for example, Adobe PDF files, widely used for document sharing, and .JPG image files. However, in both cases, the respective virus has a dependency on an outside executable and thus neither virus can be considered more than a simple ‘proof of concept’. In other cases, the data files themselves may not be infectable, but can allow for the introduction of viral code.
Specifically, vulnerabilities in certain products can allow data files to be manipulated in such a way that they cause the host program to become unstable, after which malicious code can be introduced to the system. These examples are given to show that viruses are no longer limited to infecting program files, as was the case when Cohen first defined the term. Thus, to simplify and modernize, it can safely be stated that a virus infects other files, whether program or data.

Computer viruses are called viruses because they share some of the traits of biological viruses. A computer virus passes from computer to computer like a biological virus passes from person to person. There are similarities at a deeper level, as well. A biological virus is not a living thing. A virus is a fragment of DNA inside a protective jacket. Unlike a cell, a virus has no way to do anything or to reproduce by itself — it is not alive. Instead, a biological virus must inject its DNA into a cell. The viral DNA then uses the cell’s existing machinery to reproduce itself. In some cases, the cell fills with new viral particles until it bursts, releasing the virus. In other cases, the new virus particles bud off the cell one at a time, and the cell remains alive. A computer virus shares some of these traits. A computer virus must piggyback on top of some other program or document in order to get executed. Once it is running, it is then able to infect other programs or documents. Obviously, the analogy between computer and biological viruses stretches things a bit, but there are enough similarities that the name sticks.

A computer virus is a program that replicates. To do so, it needs to attach itself to other program files (for example, .exe, .com, .dll) and execute whenever the host program executes. Beyond simple replication, a virus almost always seeks to fulfill another purpose: to cause damage.
Called the damage routine, or payload, the destructive portion of a virus can range from overwriting critical information kept on the hard disk’s partition table to scrambling the numbers in spreadsheets to just taunting the user with sounds, pictures, or obnoxious effects. It’s worth bearing in mind, however, that even without a “damage routine”, if a virus is allowed to run unabated it will continue to propagate – consuming system memory and disk space, slowing network traffic and generally degrading performance. Besides, virus code is often buggy and can also be the source of mysterious system problems that take weeks to understand. So, whether a virus is harmful or not, its presence on the system can lead to instability and should not be tolerated. Some viruses, in conjunction with “logic bombs,” do not make their presence known for months. Instead of causing damage right away, these viruses do nothing but replicate – until the preordained trigger day or event when they unleash their damage routines on the host system or across a network.

Impact of Viruses on Computer Systems

Viruses can be programmed to do many kinds of harm, including the following:
1. Copy themselves to other programs or areas of a disk.
2. Replicate as rapidly and frequently as possible, filling up the infected system’s disk and memory and rendering the system useless.
3. Display information on the screen.
4. Modify, corrupt or destroy selected files.
5. Erase the contents of entire disks.
6. Lie dormant for a specified time or until a given condition is met, and then become active.
7. Open a back door to the infected system that allows someone else to access, and even take control of, the system through a network or internet connection.
8. Crash the system by causing some programs (typically Windows) to behave oddly.

How do viruses spread from one system to another?
The most likely virus entry points are email, Internet and network connections, floppy disk drives, and modems or other serial or parallel port connections. In today’s increasingly interconnected workplace (Internet, intranet, shared drives, removable drives, and email), virus outbreaks now can spread faster and wider than ever before. The following are some common ways for a virus to enter a user’s computer system:
•Malicious scripts in web pages or HTML email
•FTP traffic from the Internet (file downloads)
•Shared network files & network traffic in general
•Shrink-wrapped, production programs (rare)
•Electronic bulletin boards (BBS)
•Diskette swapping (using other people’s diskettes for carrying data and programs back and forth)

High risk files

The most dangerous file types are .EXE, .COM, .XLS, .DOC and .MDB, because they don’t need any special conversion to infect a computer — all they’ve got to do is run, and consequently the virus spreads. It has been estimated that 99% of all viruses are written for these file formats. A list of possible virus carriers includes:
EXE – (Executable file)
SYS – (Executable file)
COM – (Executable file)
DOC – (Microsoft Word)
XLS – (Microsoft Excel)
MDB – (Microsoft Access)
ZIP – (Compressed file, common in the USA)
ARJ – (Compressed file, common in the USA)
DRV – (Device driver)
BIN – (Common boot sector image file)
SCR – (Microsoft screen saver)

Common Symptoms Of Virus Infection
•Computer does not boot.
•Computer hard drive space is reduced.
•Applications will not load.
•An application takes longer to load than normal.
•Hard drive activity increases, especially when nothing is being done on the computer.
•An anti-virus software message appears.
•The number of hard drive bad sectors steadily increases.
•Unusual graphics or messages appear on the screen.
•Files are missing (deleted).
•A message appears that the hard drive cannot be detected or recognized.
•Strange sounds come from the computer.
Some viruses take control of the keyboard and occasionally substitute a neighboring key for the one actually pressed. Another virus “swallows” key presses so that nothing appears on the screen. Also interesting are system time effects. Clocks going backwards are especially frightening for workers who cannot wait to go home. More seriously though, this type of virus can cause chaos for programs which depend on the system time or date. Some viruses can cost the user dearly by dialing out on his modem. We do not know of one which dials premium telephone numbers, but no doubt we shall see one soon. One particularly malicious virus dials 911 (the emergency number in the USA) and takes up the valuable time of the emergency services.

Categories of viruses

Depending on the source of information, different types of viruses may be categorized in the following ways.

PDA VIRUSES

The increasing power of PDAs has spawned a new breed of viruses. Maliciously creative programmers have leveraged the PDA’s ability to communicate with other devices and run programs to cause digital mayhem. The blissfully safe world where users of these devices could synchronize and download with impunity came to an end in August 2000 with the discovery of the virus Palm Liberty. Since then, many more viruses have been discovered. Though not yet as harmful as their PC-based cousins, these viruses still pose a threat to unsuspecting users. Their effects vary from the harmless flashing of an unwanted message or an increase in power consumption, to the deletion of all installed programs. But the threat is growing, and the destructiveness of these viruses is expected to parallel the development of the devices they attack.

MULTIPARTITE VIRUSES

A virus that combines two or more different infection methods is called a multipartite virus. This type of virus can infect both the files and the boot sector of a disk.
Multipartite viruses share some of the characteristics of boot sector viruses and file viruses: they can infect .com files, .exe files, and the boot sector of the computer’s hard drive. On a computer booted up with an infected diskette, the typical multipartite virus will first make itself resident in memory, then infect the boot sector of the hard drive. From there, the virus may infect a PC’s entire environment. Not many forms of this virus class actually exist, yet they account for a disproportionately large percentage of all infections. Tequila and Anticad are examples of multipartite viruses.

TIME AND LOGIC BOMBS

The two most prevalent types of bombs are time bombs and logic bombs. A time bomb hides on the victim’s disk and waits until a specific date before running. A logic bomb may be activated by a date, a change to a file, or a particular action taken by a user or a program. Bombs are treated as viruses because they can cause damage or disruption to a system.

BOOT SECTOR VIRUSES

Until the mid-1990s, boot sector viruses were the most prevalent virus type, spreading primarily in the 16-bit DOS world via floppy disk. Boot sector viruses infect the boot sector on a floppy disk and spread to a user’s hard disk, and can also infect the master boot record (MBR) on a user’s hard drive. Once the MBR or boot sector on the hard drive is infected, the virus attempts to infect the boot sector of every floppy disk that is inserted into the computer and accessed. Examples of boot sector viruses are Michelangelo, Satria and Keydrop.

Boot sector viruses work like this: suppose the user receives a diskette with an infected boot sector, copies data from it, but forgets to remove it from drive A:. The next time the computer starts, the boot process executes the infected boot sector program from the diskette. The virus loads first and infects the hard disk. Note that this can be prevented by changing the boot sequence in CMOS (let drive C: boot before A:).
By hiding on the first sector of a disk, the virus is loaded into memory before the system files are loaded. This allows it to gain complete control of DOS interrupts; in the process it replaces the original contents of the MBR or DOS boot sector with its own contents and moves the original boot sector data to another area on the disk. Because the virus has infected a system area of the hard disk, it will be loaded into memory each time the computer is started. It will first take control of the lowest-level disk system services before executing the original boot sector code, which it has stored in another part of the hard disk. The computer seems to behave exactly as it should. Nobody will notice the extra few fractions of a second added to the boot sequence. During normal operation the virus will happily stay in memory. Because it has control of the disk services, it can easily monitor requests for disk access – including diskettes. As soon as it gets a request for access to a diskette, it will determine that there is a diskette in the floppy drive. It will then examine its boot sector to see if it has already been infected. If it finds the diskette clean, it will replace the boot sector with its own code. From this moment the diskette will be a “carrier” and become a medium for infections on other PCs. The virus will also monitor special disk requests for access to the boot sector. The boot sector contains its own code, and a request to read it could be from an anti-virus program checking for virus presence. The virus will not allow the boot sector to be read and will redirect all requests to the place on the hard disk where it has backed up the original contents. In this way nothing unusual is detected. Such methods are called stealth techniques and their main goal is to mask the presence of the virus. Not all boot viruses use stealth, but those which do are common. Boot viruses also infect the non-file (system) areas of hard and floppy disks.
These areas offer an efficient way for a virus to spread from one computer to another. Boot viruses have achieved a higher degree of success than program viruses in infecting their targets and spreading. Boot viruses can infect DOS, Windows 3.x, Windows 95/98, Windows NT, and even Novell NetWare systems. This is because they exploit inherent features of the computer (rather than the operating system) to spread and activate. Cleaning up a boot sector virus can be performed by booting the machine from an uninfected floppy system disk rather than from the hard drive, or by finding the original boot sector and replacing it in the correct location on the disk.

FILE SYSTEM (CLUSTER) VIRUSES

This type of virus makes changes to a disk’s file system. If any program is run from the infected disk, the program causes the virus to run as well. This technique creates the illusion that the virus has infected every program on the disk.

E-MAIL VIRUSES

These types of viruses can be transmitted via e-mail messages sent across private networks or the internet. Some e-mail viruses are transmitted as an infected attachment – a document file or program that is attached to the message. This type of virus is run when the victim opens the file that is attached to the message. Other types of e-mail viruses reside within the body of the message itself; to store a virus, the message must be encoded in HTML format. Once launched, many e-mail viruses attempt to spread by sending messages to everyone in the victim’s address book, each of which contains a copy of the virus. A notorious example is the Melissa virus, which surfaced in March 1999. Melissa spread in Microsoft Word documents sent via e-mail, and it worked like this: someone created the virus as a Word document uploaded to an Internet newsgroup. Anyone who downloaded the document and opened it would trigger the virus.
The virus would then send the document (and therefore itself) in an e-mail message to the first 50 people in the person’s address book. The e-mail message contained a friendly note that included the person’s name, so the recipient would open the document thinking it was harmless. The virus would then create 50 new messages from the recipient’s machine. As a result, the Melissa virus was the fastest-spreading virus ever seen, and it forced a number of large companies to shut down their e-mail systems at the time.

The ILOVEYOU virus, which appeared on May 4, 2000, was even simpler. It contained a piece of code as an attachment. People who double-clicked on the attachment allowed the code to execute. The code sent copies of itself to everyone in the victim’s address book and then started corrupting files on the victim’s machine. This is as simple as a virus can get. It is really more of a Trojan horse distributed by e-mail than it is a virus.

The Melissa virus took advantage of the programming language built into Microsoft Word called VBA, or Visual Basic for Applications. It is a complete programming language and it can be programmed to do things like modify files and send e-mail messages. It also has a useful but dangerous auto-execute feature: a programmer can insert a program into a document that runs instantly whenever the document is opened. This is how the Melissa virus was programmed. Anyone who opened a document infected with Melissa would immediately activate the virus. It would send the 50 e-mails, and then infect a central file called NORMAL.DOT so that any file saved later would also contain the virus! It created a huge mess.

FILE INFECTING VIRUSES

File infectors operate in memory and usually infect executable files with the following extensions: *.COM, *.EXE, *.DRV, *.DLL, *.BIN, *.OVL, *.SYS.
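Extension lists like the ones above lend themselves to a simple, purely defensive screening heuristic: flag file types that can carry executable content. A minimal sketch; the set and function names are my own, and real scanners inspect file content rather than trusting extensions:

```python
# Defensive file-screening heuristic built from the extension lists in
# the article: flag file types that can carry executable content.
# The set and function names here are illustrative, not a real product.
import os.path

HIGH_RISK_EXTENSIONS = {
    ".exe", ".com", ".sys", ".drv", ".dll", ".bin", ".ovl",  # executables
    ".doc", ".xls", ".mdb",                                  # macro-capable documents
    ".scr",                                                  # screen savers
    ".zip", ".arj",                                          # archives
}

def is_high_risk(filename: str) -> bool:
    """Return True if the file's extension appears on the high-risk list."""
    return os.path.splitext(filename.lower())[1] in HIGH_RISK_EXTENSIONS

print(is_high_risk("REPORT.DOC"))  # Word files can carry macros
print(is_high_risk("notes.txt"))   # plain text is not executable
```

Note that this is only a first filter: a renamed executable would slip past it, which is exactly why modern scanners examine file contents as well.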
File infectors activate every time the infected file is executed, copying themselves into other executable files, and they can remain in memory long after the virus has activated. Thousands of different file infecting viruses exist, but similar to boot sector viruses, the vast majority operate in a DOS 16-bit environment. Some, however, have successfully infected the Microsoft Windows, IBM OS/2, and Apple Computer Macintosh environments. File viruses can be separated further into sub-categories by the way they manipulate their targets.

TSR FILE VIRUSES

A less common type of virus is the terminate-and-stay-resident (TSR) file virus. As the name suggests, these infect files, usually .com and .exe files, though there are also some device driver viruses and some viruses that infect overlay files; and although over 99% of executable programs have the extension .com or .exe, some do not. For a TSR virus to spread, someone has to run an infected program. The virus then goes memory resident, typically examining each program run thereafter and infecting it. Examples of TSR file viruses are Dark Avenger and Green Caterpillar.

OVERWRITING VIRUSES

These viruses infect by overwriting part of their target with their own code but, by doing so, they damage the file. The file will never serve any purpose other than spreading the virus further. Because of this, they are usually detected quickly and do not spread easily.

APPENDING VIRUSES

These viruses attach themselves to executables without substantially changing the contents of the host program. They attach by adding their code to the beginning, end, or even middle of the file and divert program flow so that the virus is executed first. When the virus has finished its job, control is passed on to the host. Execution of the host is a little delayed, but this is usually not noticeable.

MACRO VIRUSES

Many older applications had simple macro systems that allowed the user to record a sequence of operations within the application and associate them with a specific keystroke.
Later, the user could perform the same sequence of operations by merely hitting the specified key. Newer applications provide much more complex macro systems. Users can write entire macro programs that run within the word processor or spreadsheet environment and are attached directly onto word processing and spreadsheet files. Unfortunately, this ability also makes it possible to create macro viruses. Macro viruses currently account for about 80 percent of all viruses, according to the International Computer Security Association (ICSA), and are the fastest growing viruses in computer history. Unlike other virus types, macro viruses aren’t specific to an operating system and spread with ease via email attachments, floppy disks, Web downloads, file transfers, and cooperative applications. Macro viruses are, however, application-specific. A macro virus is designed to infect a specific type of document file, such as Microsoft Word or Excel files. They infect the macro utilities that accompany such applications, which means a Word macro virus cannot infect an Excel document and vice versa. A macro virus is embedded in a document file, can travel between data files in the application, and can eventually infect hundreds of files if undeterred, doing various levels of damage to data in the process, from corrupting documents to deleting data. Macro viruses are written in “every man’s programming language” – Visual Basic – and are relatively easy to create. They can infect at different points during a file’s use, for example, when it is opened, saved, closed, or deleted.

A typical chronology for macro virus infection begins when an infected document or spreadsheet is loaded. The application also loads any accompanying macros that are attached to the file. If one or more of the macros meet certain criteria, the application will also immediately execute these macros.
Macro viruses rely upon this auto-execution capability to gain control of the application’s macro system. Once the macro virus has been loaded and executed, it waits for the user to edit a new document, and then kicks into action again. It attaches its virus macro programs onto the new document, and then allows the application to save the document normally. In this fashion, the virus spreads to another file, and does so in a completely discreet fashion; users have no idea of the infection. If this new file is later opened on another computer, the virus will once again load, be launched by the application, and find other unsuspecting files to infect. Finally, as far as a macro virus is concerned, the application serves as the operating system. A single macro virus can spread to any of the platforms on which the application is installed and running. For example, a single macro virus that uses Microsoft Word could conceivably spread to Windows 3.x, Windows 95/98, Windows NT, and the Macintosh.

Macro viruses for Word

In the summer of 1995, Microsoft Word 6 was the first product affected by a macro virus. The first one (WM/Concept.A) was really only a proof of concept – one of the installed macros (called Payload) contained only this remark: “That’s enough to prove my point”. Most macro viruses for Word use a feature called ‘automacros’. The basic principle is that some macros with special names are automatically executed when Word starts, opens a file, or closes a file. The macro virus then inserts macros into NORMAL.DOT – a standard template which is loaded every time Word starts. In Word there are some ways to disable automacros, but this isn’t the ultimate solution, because some macro viruses use other methods to take control of the Word environment. Another method of self-protection may be to set NORMAL.DOT to read-only. But this can also be bypassed and, in addition, it prevents the user from customizing the template.
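Most Word macro viruses hook the automacro names described above (AutoOpen, AutoClose, and so on), so a crude, read-only detection heuristic can simply look for those names in a document's raw bytes. A sketch under that assumption; real anti-virus tools parse the macro storage properly rather than searching raw bytes, and all function names here are my own:

```python
# Crude heuristic: report which Word automacro names appear in a file's
# raw bytes. Real scanners parse the OLE macro storage instead; this
# only illustrates the automacro idea, and every name here is mine.

AUTOMACRO_NAMES = [b"AutoExec", b"AutoOpen", b"AutoClose", b"AutoNew", b"AutoExit"]

def find_automacro_names(data: bytes) -> list:
    """Return the automacro names present in the raw document bytes."""
    return [name.decode() for name in AUTOMACRO_NAMES if name in data]

# Toy demonstration on an in-memory blob rather than a real .doc file:
blob = b"Sub AutoOpen() ... payload ... End Sub ... Sub AutoClose() ..."
print(find_automacro_names(blob))  # ['AutoOpen', 'AutoClose']
```

A hit does not prove infection, since legitimate templates also use automacros; it only marks a document as worth a closer look.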
Macro viruses for Excel

Excel offers the same opportunities to virus authors as Word. It has automacros and a directory called XLSTART from which templates are automatically loaded. But Excel does not have just normal VBA macros like Word: it also has so-called ‘formulas’ – macros stored in spreadsheet cells. The first macro virus using this technology was XF/Paix.

Macro viruses for other MS Office products

Writing a macro virus for other Office products is not difficult. There have already been some viruses for Access, and macro viruses for PowerPoint are expected in the near future. But those macro viruses are not as dangerous as the macro viruses for Word or Excel – not because of some limitation of these other Office products, but because data files from these products are not shared as frequently. There is one danger which can be seen in today’s PowerPoint even without native macro viruses written for this product: a presentation can include any number of embedded objects from Excel or Word, and these objects can be infected with macro viruses – if users edit the presentation and open the infected object with its parent application, the virus can spread further. But the current situation may change dramatically over the next few years. Microsoft has licensed VBA technology to many firms, so one can expect to see more macro viruses for other products, too.

POLYMORPHIC VIRUSES

This type of virus can change itself each time it is copied, making it difficult to isolate. Most simple viruses attach identical copies of themselves to the files they infect. An anti-virus program can detect the virus’s code (or signature) because it is always the same, and quickly ferret out the virus. To avoid such easy detection, polymorphic viruses operate somewhat differently: when a polymorphic virus infects a program, it scrambles its virus code in the program body.
This scrambling means that no two infections look the same, making detection more difficult. These viruses create a new decryption routine each time they infect, so every infected file will have a different sequence of virus code.

STEALTH VIRUSES

Stealth viruses actively seek to conceal themselves from attempts to detect or remove them. They can also conceal changes they make to other files, hiding the damage from the user and the operating system. Stealth viruses, or interrupt interceptors, as they are sometimes called, take control of key DOS-level instructions by intercepting the interrupt table, which is located at the beginning of memory. This gives the virus the ability to do two important things: 1) gain control of the system by re-directing the interrupt calls, and 2) hide itself to prevent detection. They use techniques such as intercepting disk reads to provide an uninfected copy of the original item in place of the infected copy (read-stealthing viruses), altering disk directory or folder data for infected program files (size-stealthing), or both. For example, the Whale virus is a size-stealthing virus. It infects .EXE program files and alters the folder entries of infected files when other programs attempt to read them. The Whale virus adds 9216 bytes to an infected file. Because changes in file size are an indication that a virus might be present, the virus then subtracts the same number of bytes (9216) from the file size given in the directory/folder entry to trick the user into believing that the file’s size has not changed. An antivirus program which is not equipped with anti-stealth technology will be deceived.

COMPANION VIRUSES

A companion virus is the exception to the rule that a virus must attach itself to a file. The companion virus instead creates a new file and relies on a behavior of DOS to execute it instead of the program file that is normally executed. These viruses target .EXE programs.
They create another file of the same name but with a COM extension containing the virus code. These viruses take advantage of a property of MS-DOS which allows files in the same directory to share the same first name (e.g. ABC.EXE and ABC.COM) but executes COM files in preference to EXE files. For example, a companion virus might create a file named CHKDSK.COM and place it in the same directory as CHKDSK.EXE; whenever DOS must choose between two files of the same name where one has an .EXE extension and the other a .COM extension, it executes the .COM file. This is not a very effective way of spreading, but it has one big advantage: it does not amend files in any way, and so can escape integrity tests or resident protection. Another method used by companion viruses is based on the search path: the virus simply places an infected file of the same name in a directory that is listed in the PATH before the directory containing the original program.

Program viruses
Like normal programs, program viruses must be written for a specific operating system. The vast majority of viruses are written for DOS, but some have been written for Windows 3.x, Windows 95/98, and even UNIX. All versions of Windows are compatible with DOS and can host DOS viruses with varying degrees of success. Program viruses infect program files, which commonly have extensions such as .COM, .EXE, .SYS, .DLL, .OVL, or .SCR. Program files are attractive targets for virus writers because they are widely used and have relatively simple formats to which viruses can attach.

Malicious Programs and Scripts
These are viruses that infect agent programs, such as those that download software from the Internet (for example, Java and ActiveX).

Worms
A worm is a computer program that has the ability to copy itself from machine to machine. Worms normally move around and infect other machines through computer networks. An entire LAN or corporate e-mail system can become totally clogged with copies of a worm, rendering it useless.
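The replication dynamic described above, where each infected machine probes the network for further victims, can be illustrated with a toy simulation. This is a sketch only; the host count, scan rate, and vulnerable fraction are invented parameters, not measurements of any real worm:

```python
import random

def simulate_worm(hosts=10000, vulnerable_fraction=0.2,
                  scans_per_tick=5, ticks=30, seed=1):
    """Toy simulation of a randomly scanning worm (illustrative only).

    Each infected host probes scans_per_tick random addresses per tick;
    a probe succeeds only if the target is vulnerable and not yet
    infected. Returns the infection count after each tick."""
    rng = random.Random(seed)
    vulnerable = set(rng.sample(range(hosts), int(hosts * vulnerable_fraction)))
    infected = {next(iter(vulnerable))}          # patient zero
    history = []
    for _ in range(ticks):
        newly_infected = set()
        for _ in range(len(infected) * scans_per_tick):
            target = rng.randrange(hosts)
            if target in vulnerable and target not in infected:
                newly_infected.add(target)
        infected |= newly_infected
        history.append(len(infected))
    return history

counts = simulate_worm()
print(counts)   # early growth is roughly exponential, then saturates
```

Because every new victim becomes a scanner itself, the infection count grows roughly exponentially until the pool of vulnerable hosts is exhausted, which is exactly the behavior that clogs networks.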
Worms are commonly spread over the Internet via e-mail attachments and through Internet relay chat channels. For example, the Code Red worm replicated itself over 250,000 times in approximately nine hours on July 19, 2001. A worm usually exploits some sort of security hole in a piece of software or the operating system. For example, the Slammer worm (which caused mayhem in January 2003) exploited a hole in Microsoft's SQL Server. Worms use up computer time and network bandwidth while they are replicating, and they often have some sort of evil intent.

Code Red made huge headlines in 2001. Experts predicted that this worm could clog the Internet so effectively that things would completely grind to a halt. In the event, the worm slowed down Internet traffic when it began to replicate itself, but not nearly as badly as predicted. Each copy of the worm scanned the Internet for Windows NT or Windows 2000 servers that did not have the Microsoft security patch installed. Each time it found an unsecured server, the worm copied itself to that server, and the new copy then scanned for other servers to infect. Depending on the number of unsecured servers, a worm could conceivably create hundreds of thousands of copies. The Code Red worm was designed to do three things:
• Replicate itself for the first 20 days of each month
• Replace Web pages on infected servers with a page that declares "Hacked by Chinese"
• Launch a concerted attack on the White House Web server in an attempt to overwhelm it
The most common version of Code Red is a variation, typically referred to as a mutated strain, of the original Ida Code Red that replicated itself on July 19, 2001.

Trojans
Trojans, another form of malware, are generally agreed to do something other than what the user expected, with that "something" defined as malicious.
Most often, Trojans are associated with remote access programs that perform illicit operations such as password-stealing, or that allow compromised machines to be used for targeted denial of service attacks. One of the more basic forms of a denial of service (DoS) attack involves flooding a target system with so much data, traffic, or commands that it can no longer perform its core functions. When multiple machines are gathered together to launch such an attack, it is known as a distributed denial of service (DDoS) attack. Because Trojan horses do not make duplicates of themselves on the victim's disk (or copy themselves to other disks), they are not technically viruses; but because they can do harm, many experts consider them a type of virus. Trojan horses are often used by hackers to create a back door into an infected system. Trojans such as Back Orifice are very dangerous: if anyone runs such a program while the computer is connected to the Internet, a hacker can take control of that computer, transferring files to or from it, capturing screen contents, running any program or killing any running process, and so on. Once a Trojan is installed, it has the same privileges as the user of the computer and can exploit the system to do things the user did not intend, such as:
• Transmit to the intruder any files that the user can read
• Change any files that the user can modify
• Install other programs with the user's privileges
• Execute privilege-elevation attacks: the Trojan can attempt to exploit a weakness to raise its level of access beyond that of the user running it; if successful, the Trojan can operate with increased privileges
• Install other Trojans

The Following Tips Will Help The User To Minimize Virus Risk:
If users are truly worried about traditional (as opposed to e-mail) viruses, they should be running a more secure operating system such as UNIX.
One rarely hears about viruses on these operating systems because their security features keep viruses (and unwanted human visitors) away from the hard disk. Users running an unsecured operating system should buy virus protection software as a safeguard. Some popular anti-virus programs include:
• McAfee VirusScan
• Norton AntiVirus
• AVG Anti-Virus System
Automatic protection of anti-virus software should be turned on at all times, and users should perform a manual scan (or schedule a scan to occur automatically) of their hard disks weekly. These scans supplement automatic protection and confirm that the computer is virus-free. All floppy disks should be scanned before first use. Floppy disk booting should be disabled; most computers now allow this, and it eliminates the risk of a boot sector virus coming in from a floppy disk accidentally left in the drive. Users should enable the automatic update option of their anti-virus software in order to keep virus definition files current. A rescue disk should be created and maintained to facilitate recovery from certain boot viruses, and periodic backups of the hard disk should be made. Users should buy legal copies of all software they use and make write-protected backups. E-mail messages and attachments from unknown people should not be opened. Attachments that come in as Word files (.DOC), spreadsheets (.XLS), images (.GIF and .JPG), etc., are data files and can do no damage (noting the macro virus problem in Word and Excel documents mentioned above). A file with an extension like .EXE, .COM, or .VBS is an executable, and an executable can do any sort of damage it wants. It should further be verified that the apparent author of an e-mail really sent its attachments, since newer viruses can send e-mail messages that appear to come from people the user knows.
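The extension checks recommended above can be automated. The following sketch flags executable attachment names, including "double extension" tricks; the list of risky extensions is illustrative, not exhaustive:

```python
# Hypothetical helper: flag attachment names that the tips above warn about.
RISKY_EXTENSIONS = {".exe", ".com", ".vbs", ".scr", ".pif", ".bat"}

def is_risky_attachment(filename: str) -> bool:
    """Return True if the attachment name ends in an executable
    extension, catching 'double extension' names like report.doc.exe."""
    name = filename.lower().strip()
    return any(name.endswith(ext) for ext in RISKY_EXTENSIONS)

print(is_risky_attachment("holiday.jpg"))       # False: plain data file
print(is_risky_attachment("invoice.doc.exe"))   # True: double-extension trick
```

A mail gateway or client plug-in could apply such a check before the user ever sees the attachment; the extension test is only a first filter, not a substitute for scanning.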
Users should make sure that Macro Virus Protection is enabled in all Microsoft applications, and they should never run macros in a document unless they know specifically what those macros do. Appropriate passwords should be assigned to shared network drives.

Things that are not viruses!
Joke programs are not viruses and do not inflict any damage. Their purpose is to frighten their victims into thinking that a virus has infected and damaged their system. For example, a joke program may display a message warning the user not to touch any keys or else the computer's hard disk will be formatted. A dropper is a program that is not a virus, nor is it infected with a virus, but when run it installs a virus into memory, onto the disk, or into a file. Droppers have been written sometimes as a convenient carrier for a virus and sometimes as an act of sabotage. There must be very few people on e-mail who haven't received a chain letter with a subject line warning of a virus doing the rounds. These are often hoaxes, meant to scare people and have fun at their expense. The warnings encourage the recipient to pass them on, and thus create an unnecessary furor and clog mailboxes, because the message assumes an air of credibility.

Methodology of virus detection applied by antivirus software:
Three main methods exist for detecting viruses: integrity checking (also known as checksumming), behavior monitoring, and pattern matching (scanning). Antivirus programs that use integrity checking start by building an initial record of the status (size, time, date, etc.) of every application file on the hard drive. Using this data, checksumming programs then monitor the files to see if changes have been made. If the status changes, the integrity checker warns the user of a possible virus. However, this method has several disadvantages, the biggest being that false alarms are altogether too common.
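The record-and-compare cycle just described can be sketched in a few lines. This is a minimal illustration that uses a SHA-256 digest as the checksum; a real product would record more status fields (time, date, attributes) and store the baseline securely:

```python
import hashlib
import os

def baseline(paths):
    """Record the status of each file: its size and a SHA-256 digest."""
    record = {}
    for path in paths:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        record[path] = (os.path.getsize(path), digest)
    return record

def check(record):
    """Re-scan the recorded files; return those whose status has changed."""
    changed = []
    for path, (size, digest) in record.items():
        with open(path, "rb") as f:
            now = hashlib.sha256(f.read()).hexdigest()
        if os.path.getsize(path) != size or now != digest:
            changed.append(path)   # possible infection, or a legitimate update
    return changed
```

The false-alarm weakness discussed in the text is visible here: any legitimate program that rewrites one of the recorded files triggers exactly the same warning as a virus would.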
The records used by checksumming programs are often rendered obsolete by legitimate programs which, in their normal course of operations, make changes to files that look to the integrity checker like viral activity. Another weakness of integrity checking is that it can only alert the user after a virus has infected the system. Behavior monitoring programs are usually terminate-and-stay-resident (TSR) programs that constantly monitor requests passed to the interrupt table. These programs are on the lookout for activities a virus might engage in: requests to write to a boot sector, opening an executable program for writing, or placing itself resident in memory. The behavior these programs monitor is derived from a user-configurable set of rules. Finally, using a process called pattern matching, anti-virus software draws upon an extensive database of virus patterns to identify known virus signatures, or telltale snippets of virus code. Key areas of each scanned file are compared against the list of thousands of virus signatures that the anti-virus software has on record. Whenever a match occurs, the anti-virus software takes the action the user has configured: Clean, Delete, Quarantine, Pass (Deny Access for Real-time Scan), or Rename.

Self Defense Mechanisms Evolved By Viruses
Virus authors naturally want their creations to survive, so many viruses are outfitted with self-defense mechanisms against anti-virus systems.

Passive Defense: Viruses use a variety of methods to hide themselves from antivirus programs. Passive defense uses programming methods which make analysis of the virus more difficult, e.g. the polymorphic viruses developed to counter scanners looking for constant strings of virus code. Today, antivirus systems are capable of analyzing polymorphic code and searching for virus identifiers in the decrypted body.
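The idea behind this kind of per-infection encryption can be illustrated with a toy XOR encoder. This is a deliberately harmless sketch: the "payload" is just a placeholder string, and real polymorphic engines also mutate the decryptor code itself rather than only the key:

```python
def xor_encode(payload: bytes, key: int) -> bytes:
    """Toy 'polymorphic' step: the body is XOR-encoded with a
    per-infection key, so the stored bytes differ between infections
    even though the decoded payload is identical."""
    return bytes(b ^ key for b in payload)

payload = b"TOY-PAYLOAD"                 # stands in for the virus body
a = xor_encode(payload, 0x41)            # infection 1, key 0x41
b = xor_encode(payload, 0x7F)            # infection 2, key 0x7F
print(a != b)                            # True: no shared byte signature
print(xor_encode(a, 0x41) == payload)    # True: decoding restores the body
```

Because the stored bytes differ under every key, no fixed signature matches all infections; a scanner has to run or emulate the decryptor and search the decrypted body, exactly as described above.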
The virus authors reacted by making the encryption too complex for anti-virus software to unravel, so that the infected program is mistaken for a clean one.

Active Self-defense: Viruses actively defend themselves by protecting their own code or by attempting to damage anti-virus software. A simple method is to locate anti-virus software databases and amend or delete them. More sophisticated resident viruses use stealth techniques: when they detect a request to use an infected file, they can temporarily "clean" it or report its original (uninfected) parameters. They can also monitor which programs are being executed and react if one is anti-virus software. The list of such reactions is endless. Usually, execution of the anti-virus program is simply refused, but it could also be erased (often accompanied by a bogus error message), or the virus may suspend its activities while the anti-virus program runs. There are occasionally extremely 'clever' viruses which modify the code of a specific anti-virus program to partially disable it, and very rare viruses which treat an attempt to run an anti-virus program as a provocation and immediately reply with some revenge action, for example formatting the hard disk.

A trap is the most malicious form of self-defense and works as follows. The user's computer is infected, but everything appears to work correctly. Once the user discovers the virus and removes it, things get complicated: programs no longer run properly, or the hard disk may become inaccessible even when booting from a clean system diskette. The best-known trap virus is One_Half. It continuously encrypts the data on the hard disk (two tracks on every boot). If it is removed from the partition sector before the data files are decoded, some files become inaccessible. At this stage the situation is serious, but recovery of the data is still possible. However, if the user runs a disk utility (Scandisk etc.) to repair the damage, the data will almost certainly be lost forever.
These utilities are designed to repair relatively minor damage to the file system and do not recognize the encrypted data.
Facts about Willem-Alexander

King Willem-Alexander Biography
King Willem-Alexander became ruler of The Netherlands on 30 April 2013 after the official abdication of his mother, Queen Beatrix. He is the first child of Beatrix and her husband, Prince Claus. ("On his birth, he received the titles of Prince of the Netherlands and Prince of Orange-Nassau, Jonkheer van Amsberg," according to the Royal Family's official site.) Willem-Alexander attended Atlantic College in Llantwit Major, Wales, gaining an International Baccalaureate in 1985. After two years of military service in the Royal Netherlands Navy (August 1985-January 1987), he went to Leiden University and earned a history degree in 1993; according to DutchNews.nl, his love of partying and beer while there earned him the nickname "Prince Pils." He later earned a military pilot's license and devoted himself to the study of Dutch government and history in preparation for his future role as the country's king. Known as the Prince of Orange, he also pursued interests in the environment and water quality, serving as chairman of the UN's water and sanitation advisory board (UNSGAB). He married Maxima Zorreguieta, a former investment banker from Argentina, on 2 February 2002. The marriage was a bit controversial: her father was a former member of Argentina's military junta. (The controversy echoed that surrounding his mother's marriage to his father, a former member of the Hitler Youth.) King Willem-Alexander and his wife have three children: Catharina-Amalia (born 7 December 2003), Alexia (b. 26 June 2005) and Ariane (b. 10 April 2007). Queen Beatrix announced on 28 January 2013 her plans to abdicate on April 30th of that year; her son succeeded her on that day, taking the title of King Willem-Alexander. Willem-Alexander is the first Dutch king since William III, who ruled from 1849 to 1890. He was followed by Queen Wilhelmina (1890-1948), Queen Juliana (1948-80), and Queen Beatrix (1980-2013).
After her abdication, Queen Beatrix became known as Princess Beatrix… Willem-Alexander is an athlete and enthusiastic sportsman; he skated the legendary 125-mile (200-kilometer) Eleven Cities ice-skating marathon — the Elfstedentocht — under the name “W.A. van Buren” in 1986… Upon the death of his father in 2002, he became head of the German House of Amsberg.
Malaria is an infection of the blood that causes chills and high fever. Malaria is usually spread by mosquitoes and is common in tropical regions.

Symptoms of Malaria
1. Chills and headaches
2. Chills and fever
3. Sweating and fluctuating body temperature

1. If you suspect malaria or have repeated fever, get a blood test and seek treatment immediately.
2. If the malaria is severe and the patient begins to have fits or other signs of meningitis, it may be cerebral malaria. Inject malaria medicine at once.

Traditional Treatment (for ages 12 and above)
1. Boil water.
2. Add pawpaw leaves to the hot water in a bucket.
3. The patient sits near the bucket.
4. Cover the patient and the bucket of hot water with a thick blanket.
5. While covered, the patient stirs the hot water with a stick for 10-15 minutes.
The recent farm laws passed in India without adequate deliberation have drawn mixed views. There are people who support them, but the majority of farmers are against them. Shekhar Gupta, a journalist who is not part of the 'Godi' media, considers these very good reforms. Raghuram Rajan refused to comment upon their economic impact and only expressed that they should have been passed after greater deliberation. Reforms in the agriculture sector are definitely required. In this blog I have tried to explain the macro-economic picture of agriculture and the recent laws for the benefit of my students and followers.

Big Problem of Agriculture
India has about 44% of its population directly engaged in agriculture, yet this population contributes only around 15% of GDP. Small and marginal farmers with less than two hectares of land account for 86.2% of all farmers in India, but own just 47.3% of the crop area. In comparison, the US employs less than 2% of its population in the sector and the UK 1.5%. India thus has a high level of disguised unemployment in agriculture. This shows very clearly that there is poverty among farmers and a need to redeploy the population in other sectors. Increasing the size of farm holdings and reducing the number of people engaged in farming would increase crop production by promoting the use of modern methods. If much of the population presently engaged in agriculture were redeployed in the manufacturing or services sectors, their earnings, and thus India's GDP, would increase. A factory, warehouse or cold store established on one acre of land is likely to employ more people and generate greater wealth than wheat and rice sown on the same piece of land. In the next 10-12 years approximately 10-15% of the population should remain engaged in farming, with the balance redeployed in other sectors for India's economic development.
Likely Impact of Farm Laws
The farm laws brought in are likely to have the following impact over the next 5-8 years:
- The small and marginal farmers (about 86%) will be forced to sell their land to big corporates, because they will have to sell their crops at low prices while the input costs of seeds, diesel, fertilizers, pesticides and electricity go on increasing. These farmers have no bargaining power with buyers because they have no capacity to store the harvested crop.
- These farmers, after selling their land, will be available as cheap labour for the manufacturing and services sectors of Indian corporates. India will also export more labour to the world.

Is the Economic Situation Ideal for the Present Laws?
- Had the Indian economy been growing at around 10%, had capacity utilization in the manufacturing sector been above 90%, and had "Make in India" been successful, these laws would have made a very positive impact on GDP, because the unemployment situation would have been good and the capacity of the secondary and tertiary sectors to absorb labour would have been adequate.
- In the present circumstances the Indian economy does not have the capacity to absorb the large population which will become unemployed as a result of these laws.
- This year the government is likely to run a fiscal deficit of about Rs 12-14 lakh crore. The government is incapable of reducing revenue expenditure, so capital expenditure will fall. This trend is likely to continue for the next 3-4 years; thus the ability of the government to generate employment through infrastructure creation will remain restricted.
- Corporates are unlikely to increase investment or generate employment in the absence of domestic demand. India cannot suddenly become a manufacturing hub for the world without the requisite infrastructure, environment and trained manpower.
- In my view these laws will make India a larger version of the state of Bihar, which exports hard-working people to the rest of India. India may be forced to export cheap labour to the rest of the world, because it is unlikely to generate employment for the large number of farmers who will be pushed out of farming.

Are the Farm Laws Good or Bad?
I have presented an analysis; now please decide whether these laws will prove to be good or bad.
- Agriculture in the United States – Wikipedia
- Agriculture in the United Kingdom – Wikipedia
- Small and marginal farmers own just 47.3% of crop area, shows farm census (livemint.com)
Talk to your tots
New studies suggest that the number of different words an infant hears may be the best predictor of later intelligence, success in school, and social skills. These words have to come from a human being; words on the radio or TV apparently have no effect. According to research conducted by Patricia Kuhl, a neuroscientist at the University of Washington, an infant's neural connections in the brain are organized by language. As early as six months, babies have learned the sounds of their native language, and the foundation for rational thinking is established as early as one year. Simply talking to one's babies can have an enormous impact on their later development. This research shows that infants need more than pacifiers and caregivers; they need active, verbal parents. It shows that in an image-dominated age, language is irreplaceable. Furthermore, it gives evidence to support the biblical principle of the centrality of the Word.

Although Shakespeare is making a comeback in Hollywood, he is getting bumped in the nation's top colleges. According to a study by the National Alumni Forum, of 70 colleges surveyed, including the U.S. News & World Report top 50, only 23 still require English majors to take a course in Shakespeare. Instead of studying the greatest writer in the English language, these schools are substituting classes in works that are emphatically not great. Georgetown dropped its Shakespeare requirement but offers four courses in detective and prison literature, including one on "The Gangster Film," which can count as a requirement. Duke offers "Melodrama and Soap Opera." The University of Virginia's senior English seminar features such topics as "Marketing Miss America," "Monticello and Graceland," and "White Trash." The phenomenon reflects the postmodernist academic establishment, in which high culture is replaced by pop culture.
Some of what were once America's finest schools, seduced by relativism, are rejecting the very concept of excellence.

Disneyfying the Bible
After mining American history, fairy tales, Greek mythology, and literary classics for material, it was only a matter of time before Disney took on the Bible. This time the entertainment empire put on a Broadway musical, King David. With music by Alan Menken (Beauty and the Beast, Little Shop of Horrors) and lyrics by Tim Rice (Evita, Jesus Christ Superstar), the saga of David from shepherd boy to King of Israel was presented as a pop oratorio. The show closed, to lukewarm reviews, after a limited run of only nine performances, but Disney officials described it as "a work in progress." This probably means that Disney's David will be back, perhaps as a cartoon. Be ready for Bathsheba dolls, Canaanite action figures, and Goliath Happy Meals.

Soaps on the rope
Soap operas, long the staple of daytime TV, are fading in popularity. Since 1985, the serial dramas have lost 31 percent of their prime viewers, women between ages 18 and 49. Bill Bell, creator of The Young and the Restless, blames the decline on the O.J. Simpson trial. "O.J. was like a narcotic," he complains. "It was its own serial." Other observers point out how the public's taste for watching mixed-up people's entangled relationships is now being satisfied by real-life talk shows. "Suddenly," observes Rick Schindler, an editor at TV Guide, "you could watch Ricki Lake and see characters far more outrageous than Erica on All My Children." In other words, soap operas are declining because real life is becoming a soap opera.

The latest manifestation of our culture's death wish is a bizarre and deadly game called "Catching a Breeze." Teenagers go to the railroad tracks and stand between two passing trains. The thrill comes not only from the danger but also from the feel of the wind as both locomotives rush by.
Since trains are two feet wider than the tracks on either side, railroad safety experts point out, it is extremely difficult to judge exactly where to stand. Also, the force generated by the trains can pull a person under the wheels. In Wisconsin, two young men who caught more than a breeze were killed in two weeks, and other fatalities are being reported across the country.
So much has been said in educational circles about people's different ways of perceiving reality. We're inundated with "right brain, left brain" jargon. We hear that some people think with their feelings, senses, and emotions; others with facts and figures. Clearly, we don't all think—or learn—alike. Educators have found at least four separate learning styles, each with its own optimum teaching methods. Individual educational theorists label their quadrants differently, but I prefer the schema Bernice McCarthy outlined in the 4MAT System. She distinguishes four kinds of students: innovative learners, analytic learners, common-sense learners, and dynamic learners. The following descriptions lean heavily on her work. Innovative learners seek meaning. They learn as they listen and share ideas. For them, being personally involved in the learning process is important. McCarthy writes, "They are divergent thinkers who believe in their own experience, excel in viewing concrete situations from many perspectives, and model themselves on those they respect." As you might have guessed, I'm an innovative learner. We innovative ones like to participate in small-group discussions. We're idea people; our favorite questions are "Why?" and "Why not?" Often we're found in careers in the humanities, in personnel work, counseling, or organizational development. But I hate art projects. Don't invite me to carve a bar of Ivory soap into a dove to represent the Holy Spirit. I'm not interested. It won't look good. I'd think the whole idea is silly—unless we could sit around and talk about the process. Then I could get excited. So put me into a small group at some point in the learning experience. Let me discuss with other learners the application of biblical truths. I want to talk about it and hear others' opinions. I go crazy when a teacher just talks on and on. I want my turn to work with the idea or to get to know the teacher as a real person. 
Discussions, skits, small groups, drama, and interaction with others are the learning strategies from which I learn best. Since I prosper in this kind of learning atmosphere, it's hard for me to believe that not every student longs for that moment in class when he's invited to move his folding chair into a circle. I have to remember there are three other learning styles. An analytic learner says, "Just give me the facts." Analytic learners like to know the mind of the experts. For them, learning comes through thinking through ideas to form reality. They tend to have less interest in people than in ideas and concepts. They like to critique information and collect good data. These are the people who love the traditional classroom. Straight lecture suits them well, as long as the lecturer is qualified. They are willing to do the memory work and lap up all the facts. As a teacher, it's easy to like these students because they are happy to sit still and listen. Learners like these excel at creating concepts and models. They cluster in careers like math, research, the basic sciences, and planning departments. A man in my group named Bob is an analytical learner. When I suggest making big, colorful collages that depict the pressures society puts on 20th-century Christians, he wants to hear what scientists say about current trends, what facts I've dug out of the most recent journals, and what predictions specialists make for the future. Forget the collage for Bob, unless it is adapted to appeal to his learning style.
By Sandra Avant
December 3, 2013
Proteins called interferons are among the latest weapons U.S. Department of Agriculture (USDA) scientists are using to combat foot-and-mouth disease (FMD). These proteins kill or stop viruses from growing and reproducing. Scientists with the Agricultural Research Service (ARS) Foreign Animal Disease Research Unit, located at the Plum Island Animal Disease Center at Orient Point, N.Y., have demonstrated that interferons can be used to protect animals immediately against FMD infection. This rapid protection gives vaccines time to induce the animal's immune response needed to fight the disease. Interferons consist of three families—type I (alpha-beta), type II (gamma), and type III (lambda). Retired ARS chemist Marvin Grubman discovered that type I is very effective in controlling FMD virus infection. Pigs inoculated with a viral vector containing the gene coding for swine type I interferon and challenged with FMD virus were protected for five days. To cover the seven-day window it takes for vaccines to start protecting against FMD, Grubman combined type I and II in an antiviral vaccine-delivery system, which quickly blocks the virus in pigs. In combination with a vaccine, this patented technology provided thorough protection from day one until the vaccine immune response kicked in seven days later. These methods work well in pigs, but not in cattle. However, ARS microbiologist Teresa de los Santos, computational biologist James Zhu and Grubman have identified a type III interferon that rapidly protects cattle against FMD virus as early as one day after vaccination. In laboratory tests, disease was significantly delayed in animals exposed to FMD virus after previously being treated with bovine type III interferon, as compared to a control group that did not receive treatment.
In other experiments, the type III interferon treatment was found to be even more protective in cows that were naturally exposed to FMD, according to de los Santos. ARS is USDA's principal intramural scientific research agency, and this research supports the USDA priority of promoting international food security. Read more about this research in the October 2013 issue of Agricultural Research magazine.
It began before Jefferson was president. He was then the American ambassador to France, and he was disturbed by what was happening in the Mediterranean. For centuries, the "Barbary Coast Pirates" had been raiding ships passing through the Mediterranean. These were not "pirates" in the sense that we usually think of them. They weren't rogue agents acting independently. They worked for the governments of the North African countries. They would steal the ship and return to their countries, confiscating the booty aboard the ship, most of which added to the wealth of the Islamic state. They also took the ship's crew members captive and ransomed the ones they could. Most of that money also went to the Islamic government. The crew who could not be ransomed were sold into slavery. In addition, these "pirates" randomly raided towns on the coasts of Europe and captured people, bringing them back to North Africa to ransom or sell into slavery. They even came so far as the shores of America to capture settlers, to bring them back to Africa to ransom or sell. They especially prized young women and children who could be used as concubines or made into eunuchs. Over a million Europeans and Americans were sold into slavery during the 200-year reign of the Muslim "Barbary Coast" rulers. Many European countries wanted this to stop, of course, to which the leaders of the North African countries replied, "All you have to do is pay us a certain amount of money annually, and we will not attack ships from your country." Many European countries paid the tribute. It was cheaper than going to war. Of course, that was a short-term, self-defeating solution, since paying the tribute made the North African Muslim countries more powerful and more capable of terror, plunder, and mayhem. The United States was paying this tribute also. This bothered Jefferson. 
It just so happened that while he was an ambassador in France, Jefferson met with John Adams (then the American ambassador to Britain) and together the two men met with the ambassador from Tripoli (one of the North African Muslim pirate countries). Jefferson and Adams sat down to talk with this man. They asked him why Tripoli attacked ships. Why attack the United States? They had no previous interactions. Why the hostility? Why did they choose America as an enemy? The Tripoli Muslim ambassador was very straightforward. He said, basically, "That's what we do. We are commanded to do so by Allah." Jefferson later wrote that the Tripoli ambassador told him, "It was written in their Koran that all nations which had not acknowledged the Prophet were sinners, whom it was the right and duty of the faithful to plunder and enslave; and that every mussulman (Muslim) who was slain in this warfare was sure to go to Paradise." Completely taken aback by this revelation, Jefferson decided to look into the matter further, and did the one thing everyone should do: He read the Koran. He learned what Islam was about. And when he became president, he expanded and then mobilized the United States Navy to protect American ships from Muslim piracy and then sent Marines to the shores of Tripoli, who soundly defeated the Muslim warriors. This brought an end to the "Barbary Coast Pirates." This was the first foreign war fought by the U.S. and military aggressiveness of Islamic countries remained contained and weakened for over a century. It's amazing what a little accurate information can do. Read more: Thomas Jefferson's Koran. And for goodness sake, Take the Pledge to read the Koran yourself and convince everyone you know to do the same.
What Are Prescription Opioids? Opioids, sometimes called narcotics, are medications prescribed by doctors to treat severe pain after an injury or surgery, or to relieve pain during pregnancy. They can even be used to treat a cough or diarrhea. These painkillers are derived from the poppy plant and include natural opiates such as codeine and morphine, as well as semi-synthetic opiates, which are drugs synthesized from naturally occurring opiates (for example, heroin from morphine and oxycodone from thebaine). What Are Some Common Opioids? The most commonly used prescription opioids are as follows Why Are They Dangerous to Pregnant Women? Some studies suggest that prescription opioids during pregnancy, as a general group, might be associated with birth defects and complications like - Poor fetal growth - Preterm (premature) birth - Stillbirth or miscarriage - Neonatal abstinence syndrome (NAS) - Inflammation of the fetal membranes - Postpartum heavy bleeding These complications often happen when people misuse their prescriptions: becoming addicted, taking opioids with alcohol or other drugs, accidentally taking someone else's opioids, or taking a higher dose than prescribed. Even if you take them exactly as your provider tells you to, they may still cause NAS in your baby; at the same time, quitting suddenly can also cause severe problems. Women who use prescription opioids during pregnancy should be aware of the possible risks. To avoid these risks when pregnant, or even when thinking about getting pregnant, tell your healthcare provider about your situation so that they can switch you to a medicine that is safer for your baby. Opioid use disorder during pregnancy can be treated with opioid replacement therapy, including medication-assisted treatment.
If you become opioid-dependent during pregnancy, your baby may experience signs and symptoms like diarrhea, irritability, a high-pitched cry, tremors, jitteriness, and poor sleep, which often begin shortly after birth and may last days to weeks. These symptoms can usually be treated, though treatment may require weeks of hospitalization.
There are innate sex differences in psychology which arose primarily from evolution, and specifically from the evolution of the pair bond. These innate sex differences have been enhanced, or suppressed, to varying degrees in different cultures, but never eliminated. The dominant feature is a ceding of moral authority to women, thus facilitating men’s essentially altruistic role in the pair bond. Men are motivated to avoid female disapproval. The ability of humans to form very large cooperative societies depends upon an innate tendency to conform to agreed (and partly arbitrary) rules of social behaviour: a social morality. This is policed by severe social disapprobation (outrage) and sanctions against offenders. The social emotions, especially guilt and shame, are instrumental in promoting conformance. Most people will self-police for fear of guilt and shaming. Whilst social morality will have common elements between cultures, much of it is contingent and malleable: moral relativism. (State-controlled criminal law and punishment is not the most significant factor in promoting stable and cooperative societies.) Humans’ rational cognition, especially the perception of mortality, promotes an existential unease manifest as questioning the meaning of life. In the past many (most?) people have found an adequate answer in adopting the socially approved stance in relation to the social morality described above, underpinned and “anchored” by a common religion which promoted the same values and addressed meaning through a metaphysical credo. The decline of religion has uprooted the anchor and left people adrift as regards meaning. The results are manifold, including anomie and nihilism and a tendency to seek meaning in social causes, which are then adopted with intense, and intolerant, religious fervour. (Such intemperate religiosity was condemned in times past as “enthusiasm”.)
People are strongly motivated to avoid the socially-enforced discomfort of guilt and shame, even though the socially prescribed morality may be arbitrary. Unease may arise because our rational cognition is able to perceive the arbitrariness of morally relativistic rules. This cognitive dissonance is relieved by the erection of narratives (ideologies) which rationalise the social morality. Such narratives survive despite being commonly replete with contradictions because of the pressing need to provide ostensible motivation for beliefs which are actually motivated by the urge to avoid the discomfort of dissent (instantiated by shaming). Reversing this, the power of comforting narratives to validate arbitrary social rules can be exploited by those who control the narrative to impose opinions on the public which become perceived as moral. This is the basic process of moral manipulation made possible by the disappearance of an externally imposed moral “anchor”. In the absence of an externally imposed moral “anchor”, the above conditions define a dynamical system in which social morality will undergo continual change in a direction which advantages those who control the narrative. This will continue until such a time as the deception is perceived by the majority. What then happens is unknown. Behind our present political polarisation lies moral manipulation by those seeking power. The above hypotheses describe the human proclivities which enable this mechanism. Feminism is naturally aligned with those seeking power through this mechanism because evolution has imbued women “ready-made” with the requisite moral cachet (as hypothesised above). The much-discussed relationship between feminism and Marxism arises because they both deploy the same mechanism based on moral force. In classical Marxism it is the moral force of the economically oppressed; in feminism it is the moral force of the oppression of women.
Success of the political strategy depends upon success of the oppression narrative, not on its factual accuracy. It would be an exaggeration to claim that a suitable narrative could impose any arbitrary social moral code. There is, at any given time and place, only a certain degree of latitude in what the public will accept. Push the narrative too far, too fast, and the public will reject it. The narrative must be presented as aligned in some way with existing perceptions of fairness. This careful choice of the ground on which the narrative is promulgated by its proponents, namely the ostensible moral high ground, presents the dissident with a great difficulty: opposition will always appear to the masses as morally reprehensible. A basic strategy of narrative projection is Moral Vampirism. Any source of moral cachet can be annexed, co-opted and redirected for the purposes of the Grand Narrative. These subjects are moral redoubts which, like forts on the border of enemy territory, are the jumping-off points for attacks on the prevailing order. These sources of moral cachet act as a smoke screen behind which less well advertised objectives are pursued. Thus, the promotion of women’s rights was always the moral face of a political desire to erode and bring down our existing culture. To the public it is promoted as equal pay and preventing violence to women; to its zealots it is promoted as “smashing the patriarchy”; but its purpose was always to eliminate the nuclear family and bring down the western capitalist system. It is remarkable that these overtly revolutionary aspects have been written about openly, in the name of feminism, in vast numbers of academic books and journal papers for half a century – and yet the public remain incredulous that such things could lie behind what is presented to them as the morally unassailable “preventing violence to women”. Such is the power of moral blind-siding. 
One of the most significant victories for those pursuing this covert strategy was to have enshrined in legislation the ruling that “equality does not mean treating everyone the same”, Ref.1. Here we have the clearest example of how morality-as-smoke-screen operates. The public will readily accept that the pursuance of equality is a moral good (whether this is true is not apposite). But, cunningly, “equality” has been redefined on the basis of needs. We have been told that treating people equally may mean treating them differently because their needs are different. Thus the covert objective of implementing biased policies is achieved in the guise of “equality”. Prejudice is repackaged as the New Equality, and those pursuing this objective pass themselves off as the champions of loveliness whilst those who object are castigated by the narrative as reprehensible. The legislative, political and judicial recognition that different groups have distinct needs, and that policy must be driven by these needs, enshrines Identity Politics within our culture. Identity Politics is inherently divisive, the approved distinct needs sanctioning preferential treatment – for some. The conflict between preferenced groups and out-groups which is promoted by this system reinforces the tribal perspectives within these groups, and amplifies an Identity-based mindset. The system therefore benefits from positive feedback once established. All this assists those of a collectivist stance, as group membership gains in significance. Were they wiser, they might realise that unopposed positive feedback is always catastrophic. This mechanism creates a monster which its creators will not be able to control; schism is inevitable, Ref.2. The widespread adoption of the strategy of advancing unexamined policies behind a façade of moral rectitude has left us vulnerable to policies whose motivation is destructive. 
Motives which are psychologically dark may be amplified into widespread societal malaise by being promulgated via this covert mechanism. In particular we now have the female shadow running rampant through society, released by feminism. For this reason it is appropriate to examine the negative psychological characteristics of feminism in particular (see my next post). Douglas Murray (Ref.3) has observed that, as regards gay rights, women’s rights and racial equality, we seem to have snatched defeat from the jaws of victory – or, as he puts it, “just as the train seemed to be arriving at the station, suddenly it has accelerated away again”. He is puzzled. I can explain. Gays, women, racial minorities,….they are all sources of moral cachet. The Morality Vampires descended upon them to usurp their causes for use as smoke screens for their Greater Purpose. What Murray, and any reasonable person, would perceive as “near victory” for these causes is, from the point of view of the Morality Vampires, the near exhaustion of their reserves of moral sustenance. Consequently, the narrative has been activated to revivify the oppression of these groups, so that they may still function as effective moral bludgeons. The speed of the train, in Murray’s metaphor, is proportional to the power these issues grant to those willing to exploit it. It has become obvious that the in-groups of gays, women and racial minorities are being used as a front for entirely different political purposes than the interests of these groups themselves. Murray discusses many case studies which expose the matter. For example, merely having sex with other men, and not with women, is no longer sufficient to qualify as gay, apparently. In Murray’s words, you will be “excommunicated from the Church of Gay” for not having the Correct political opinions. In the reverse manner, Rachel Dolezal was rather put out when it was suggested that she was not an African-American black, as she had claimed. 
She felt that criticism of her stance based merely on the pathetic grounds that she was in fact white, with white parents, was invalid. She knew, she felt, she believed, that she was black – and so she was. And Whoopi Goldberg agreed. So now you understand trans. Of course a person with a penis can be a woman; it’s just a matter of holding the Correct political views. In this way, political Correctness becomes all there is; objective reality dissolves. Two and two is five because I say so. The preferential treatment for some groups which results from this system naturally encourages support from within these groups (excluding those individuals who are sufficiently enlightened to see through the game). But the implied disadvantaging of other groups is part of the benefit to Identity Politicians. The ignoble motives that lie behind Moral Vampirism are again clear. Orwell observed that the motivation for so many socialists in his day was not love of the poor, but hatred of the rich. In our time the promotion of the rights of gays, women, racial minorities and trans is not so much compassion for these people as disdain, even hatred, for white heterosexual men and a desire to bring them down. This is where “equality does not mean treating everyone the same” takes you. Many of our political class do not have the wit to understand this. But then there are those who do. And they are worse. Control of the narrative includes propagandising, control of access to information, control of the media, limiting of free speech, and monopolising sources of supposed authority (schools, academia, the judiciary, charities, quangos, the civil service, Parliament and Government departments). At the start only partial control of a few of these functions will apply. As the narrative takes root, all these areas will be colonised. They are now. Profound changes in social morality require a sustained campaign over decades (nudge, nudge, nudge).
But over a single lifetime, moral sense can be completely reversed (e.g., the perception of what constitutes racism; Martin Luther King is now a racist). A signature feature of moral manipulation is a complete reversal of moral position occurring quickly (e.g., politicians’ rapid reversal on same-sex marriage). Conservatism is an instinctive protection against moral corruption. But at the level of those in positions of power, conservatism has fallen. Those who control the narrative have become the ruling elite. Those who used to be the ruling elite have adopted the brave new narrative as the only means of hanging onto power. Once the public has been duped, the game is up in a democracy. Bad money drives out good. One of the greatest benefits of pursuing a political strategy via moral manipulation is that, once established, its adherents will advocate the policies with ferocious energy and passion. To some onlookers, such firm conviction may be confused with validity. But actually it reflects the seat of moral conviction in the emotional psyche. Those whose opinions lean more upon rationality and evidence will tend to express their views with less passion. Unfortunately, the result is that the views which deserve greater respect are afforded less, as people tend to respond to emotionality more than to rationality. Many social causes which have been adopted by their adherents with passion now inhabit the psychological space once occupied by religion. This comes about (I guess) because of the strong urge to find meaning and fulfilment in life, an urge which is satisfied by espousing these (morally promoted) causes with intense zeal. Feminists, eco-warriors, SJWs, supporters of Black Lives Matter, etc., are notable for their absolute intolerance of alternative opinions. These credos are akin to a religion sweeping through the world converting people by the sword. Speaking of tolerance, Murray gives examples of very different reactions to people’s “mistakes”. 
A white man using, in innocence, the phrase “coloured people” rather than the approved “people of colour” will have a struggle to weather the resulting storm of criticism. In contrast, a black woman spending years tweeting KillAllMen and endlessly stating White People Are Trash and like sentiments, will face no censure. I apologise for the lack of originality, but it has to be said this is straight from Marcuse: repressive tolerance. Tolerate from the left, tolerate nothing from the right. And we have already seen that “black” means left and “white man” means right. It’s such a simple tactic even morons can do it. Despicable morons can do it especially well. So don’t bother labouring the fact that there is no semantic difference between “coloured people” and “people of colour”. No one gives a shit. They just hate you, and they will find a way to bring you down. One of the reasons this cultural disease has spread so quickly and become so popular is that it appeals to people who are privileged. The evidence is there. Which universities are the most badly infected? That will be the most prestigious universities, attended, by definition, by the privileged. And the reason why these views are so dominant in centres of power and influence is not just entryism. It is also because the privileged can expiate their guilt by espousing the Correct views, the very purpose of which – thanks to longstanding Moral Vampirism – is precisely to reward adherents with absolution. What’s not to like? And it costs them nothing. They only need to conform. It is wryly amusing to note the virtual isomorphism between this process and Christian faith-confession-absolution. No doubt they both appeal to the same neural pathways. The situation is further inflamed by the fact that many people appear now to be genuinely incapable of valid moral judgment. 
So we have male feminists apologising for their masculinity, and we have white professors who open a speech with “I’d like to be less white, which means less oppressive, oblivious, defensive, ignorant and arrogant”. One can imagine the black audience swelling with pride at the implication that they are free from any such character flaws. Why do male feminists and white professors grovel so? It is not grovelling really. It is closer to self-aggrandisement by a circuitous route. The Pharisee may be on his knees, praying in a position of obeisance, but he is doing so on a street corner. By declaring their allegiance to “the oppressed” they distinguish themselves from “those other men” and “those other whites”. It is those others they are truly blaming, whilst they themselves are the “one good man”, or the “one good white”. The pernicious aspect of this sneaky-fucker cunning is that it hugely reinforces the apparent validity of women’s (or blacks’) claimed oppression; after all, the oppressor has just admitted it! And does this faux-grovelling by feminist men and intersectionalist professors help heal the rift between the sexes and races? No, it works to deepen the rift by continually reinforcing the perception that it has a sound basis. Yet both sides in this wicked symbiosis are unconcerned that they are aggravating a running sore and promoting division because both sides profit from it (in terms of social standing, and perhaps financially). The monster the pseudo-left have created is running out of control. It devours its own. The old second wave feminists are now the bigots, according to the trans lobby. And the black feminists and the BLM crew are placing all white people on the naughty table and being a lesbian won’t save you. This meltdown is inevitable. Identity Politics crushes opposition, and then it crushes its adherents. 
Even the dominant in-groups inevitably schism as the criteria for Correct status ratchet ever upwards and the minefield of potential errors becomes ever more tricky to negotiate (Ref.2). How can this be stopped? Those who believe there is an absolute morality will naturally wish this to provide the missing moral anchor, the antidote to moral manipulation. However, the benefit of an externally imposed moral order is not restricted to absolute rectitude. Even a rather poor moral code, assuming it is not too tyrannical, will be better than a condition in which society ratchets ever downwards into turpitude and eventual societal collapse.
- Ref.1: The Equality Act 2010
- Ref.2: The mathematical basis of the catastrophic nature of identity politics
- Ref.3: Douglas Murray, The Madness of Crowds: Gender, Race and Identity (Bloomsbury Continuum, 2019)
By the first article the house representatives shall consist of members, chosen every second year by the people of the several states, who are qualified to vote for members of their several state assemblies; it can therefore readily be believed, that the different state legislatures, provided such can exist after the adoption of this government, will continue those easy and convenient modes for the election of representatives for the national legislature, that are in use, for the election of members of assembly for their own states;2 but the congress have, by the constitution, a power to make other regulations, or alter those in practice, prescribed by your own state legislatures; hence, instead of having the places of elections in the precincts, and brought home almost to your own doors, Congress may establish a place, or places, at either the extremes, center, or outer parts of the states; at a time and season too, when it may be very inconvenient to attend; and by these means destroy the rights of election; but in opposition to this reasoning, it is asserted, that it is a necessary power because the states might omit making rules for the purpose, and thereby defeat the existence of that branch of the government; this is what logicians call argumentum absurdum, for the different states, if they will have any security at all in this government, will find it in the house of representatives, and they, therefore, would not be very ready to eradicate a principle in which it dwells, or involve their country in an instantaneous revolution. Besides, if this was the apprehension of the framers, and the ground of that provision, why did not they extend this controuling power to the other duties of the several state legislatures. 
To exemplify this the states are to appoint senators, and electors for choosing of a president; but the time is to be under the direction of congress.3 Now, suppose they were to omit the appointment of senators and electors, though congress was to appoint the time, which might well be apprehended the omission of regulations for the election of members of the house of representatives, provided they had that power; or suppose they were not to meet at all: of course, the government cannot proceed in its exercise. And from this motive, or apprehension, congress ought to have taken these duties entirely in their own hands, and, by a decisive declaration, annihilated them, which they in fact have done by leaving them without the means of support, or at least resting on their bounty. To this, the advocates for this system oppose the common, empty declamation, that there is no danger that congress will abuse this power; but such language, as relative to so important a subject, is mere vapour and sound without sense. Is it not in their power, however, to make such regulations as may be inconvenient to you? It must be admitted, because the words are unlimited in their sense. It is a good rule, in the construction of a contract, to suppose, that what may be done will be; therefore, in considering this subject, you are to suppose, that in the exercise of this government, a regulation of congress will be made, for holding an election for the whole state at Poughkeepsie, at New-York, or, perhaps, at Fort-Stanwix: who will then be the actual electors for the house of representatives? Very few more than those who may live in the vicinity of these places. Could any others afford the expence and time of attending? And would not the government by this means have it in their power to put whom they pleased in the house of representatives? 
You ought certainly to have as much or more distrust with respect to the exercise of these powers by congress, than congress ought to have with respect to the exercise of those duties which ought to be entrusted to the several states, because over them congress can have a legislative controuling power.4
Permanent pacemakers are employed to ensure that the heart rate is not allowed to drop below definable limits. For most people, they are implanted because of symptomatic slow heart rhythms that have resulted in either dizzy spells or blackouts. In some cases, pacemakers are recommended because of a perceived high risk of having blackouts. The normal heart beat is generated by a cluster of specialised cells high in the right atrium. This area is called the sinus node. Impulses emitted by this area spread in a wave through the upper chambers, causing them to contract. The electrical impulses then travel to the lower chambers through another specialised region known as the atrioventricular node. The signals are then split into two branches that rapidly transmit the signals through the left and right ventricles. The development of a slow heart rhythm can be due to failure of any one or several parts of the conduction system. The nature of the conduction problem often influences the type of pacemaker that is chosen for patients. How Are Pacemakers Implanted? Pacemakers are implanted under local anaesthetic. The procedure usually takes less than one hour and it is usual to stay in hospital overnight after the procedure. The local anaesthetic is injected into the skin under the collarbone on the left side of the chest. A small incision is made in this region and the pacemaker leads are inserted into a vein through the incision. The leads are very floppy and flexible and slide down the vein until they are sitting in contact with the appropriate heart chamber(s). In most patients, a lead is passed to both the upper and lower chambers. The ends of the leads are then plugged into the pacemaker box. This box is roughly the size of a gentleman's wristwatch. The wound is then closed with stitches that will dissolve over a few weeks. How Do Pacemakers Work? The basic technology is very simple.
The leads in the heart detect the electrical heartbeat in the chamber(s) in which they are situated. Provided signals are regularly detected and relayed to the pacemaker box, the pacemaker will not do anything. However, if the time interval between beats is longer than the limit programmed into the pacemaker, the generator will send a signal down the leads to the heart, stimulating it to beat. Similarly, if the pacemaker senses that the signals are not getting through from the upper to the lower chambers in time, an impulse is sent to the lower chambers telling them to contract in harmony with the upper chambers to maintain the maximum efficiency of the heart. Pacemakers have additional functions, including the ability to sense physical activity and breathing rate. This enables the pacemaker to make the heart speed up or slow down according to the activity levels of the patient. How Long Do Pacemakers Last? This depends on how much work the pacemaker is required to do, which in turn depends on many factors. If a patient's heart requires pacing all the time, the battery may last 5 years or less. If the pacemaker is rarely needed, it may last twice as long. Many patients will require a generator change within their lifetime, but this is usually even more straightforward than the original operation. How Long Will I Need To Be Off Work? Most patients can return to work after 7 days. If you have a very physical job, or one that is associated with a high risk of accidents or responsibility for others, your employer may stipulate up to a month off work. The DVLA state that you are not allowed to drive for 7 days after a pacemaker implantation. For drivers with 'Group 2' entitlement, such as HGV drivers, longer periods of disqualification apply. How Often Will I Need to Come For Checkups? After the initial operation, follow-up visits are usually arranged 2-4 weeks after the operation, then 3 months later, and then it is usually only once a year until the battery begins to show signs of running low.
Once this happens, you would usually be put on a list for routine replacement of the generator.
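The demand-pacing behaviour described above (inhibit when the heart's own beat is sensed in time, pace when the gap between beats grows too long) can be sketched in a few lines of code. This is purely an illustrative simplification, not a real device algorithm: the function name, the 60 beats-per-minute lower rate limit, and the millisecond-timestamp representation are all assumptions chosen for the example.

```python
# Illustrative sketch of "demand" pacing logic (NOT a real device algorithm).
# Assumed parameters: a programmed lower rate limit of 60 bpm, i.e. a maximum
# allowed gap of 1000 ms between heartbeats.

LOWER_RATE_LIMIT_BPM = 60
MAX_INTERVAL_MS = 60_000 / LOWER_RATE_LIMIT_BPM  # longest allowed gap between beats

def demand_pacing(sensed_beat_times_ms):
    """Given timestamps (ms) of intrinsic beats sensed by the lead, return the
    timestamps at which the generator would emit a pacing pulse."""
    paced = []
    last_event = sensed_beat_times_ms[0]
    for t in sensed_beat_times_ms[1:]:
        # If the heart's own next beat arrives too late, the pacemaker fills
        # the gap, pacing at the programmed interval until a beat is sensed.
        while t - last_event > MAX_INTERVAL_MS:
            last_event += MAX_INTERVAL_MS
            paced.append(last_event)
        last_event = t  # intrinsic beat sensed in time: pacing is inhibited
    return paced
```

With beats sensed at 0, 800, 1600 and 4000 ms, this sketch bridges the long pause after 1600 ms by emitting pacing pulses at 2600 ms and 3600 ms; while beats arrive within the 1000 ms limit, it does nothing, mirroring the "only when needed" behaviour described above.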
Most Teens Aren't Active Enough, And It's Not Always Their Fault Sure, you think, my kid's on a football team. That takes care of his exercise needs, right? Probably not. "There are these bursts of activity," says Jim Sallis, a professor of family and preventive medicine at the University of California, San Diego. "But if you think about it, one hour of playing football out on the field means that the vast majority of that time is spent standing around waiting for the next play." And that's a problem, federal health officials say, because children need at least one hour of moderate to vigorous physical activity every day. "We know that physical activity in childhood strengthens your bones, increases your muscle mass," says Tala Fakhouri, an epidemiologist with the Centers for Disease Control and Prevention. "It also has effects on psychological well-being in kids and teens. It increases their capacity for learning, their self-esteem and it may also help them deal with stress." But just one in four young teenagers between ages 12 and 15 actually get that one hour of exercise every day, Fakhouri says. She analyzed federal health data gathered from 800 teenagers in 2012. While kids may be active in childhood, it's typical to see a decline as they move into their teen years. "We know, for example, that sedentary behaviors like watching TV are the single biggest contributor to physical inactivity in adolescence," Fakhouri says. But it's not that teenagers no longer enjoy sports. In the study, teenage boys said their favorite physical activities outside of gym class were basketball, running, football, bicycling and walking. Girls favored running, walking, basketball, dancing and bicycling. Most studies of physical activity find boys more active than girls, and this one was no different. It found that 27 percent of boys and 22.5 percent of girls got the recommended one hour of exercise daily. That includes gym class, organized activities and play. 
It's not necessarily teenagers' fault that they're not more active, researchers say. Parents worry about safety when their kids go outside. They worry about bullying from other kids and crime in urban neighborhoods. Sallis adds that a surprising number of parents are concerned about traffic. "They don't want their kids to go out because traffic is so bad. There's no safe place to cross the street," he says. But organized classes or teams aren't the only option. Families can make small changes in their schedule to build in more exercise, Fakhouri says. "You can take a long walk after dinner. You can take your dog on a long walk. Play basketball, dance together." And with many schools reducing or cutting out PE, Sallis says parents may have to put pressure on the schools, too. "Look at what's happening in PE," Sallis says. "If they're not going out at all or very much, complain about that. If you see PE class and it's not very active, inform the principal that that's not acceptable." Bottom line: Physically active kids become physically active adults. And that's another critical reason, Sallis says, to help your kids get out and get moving. Copyright 2020 NPR. To see more, visit https://www.npr.org.
State of Utah ORIGIN OF STATE NAME: Named for the Ute Indians. NICKNAME: The Beehive State. CAPITAL: Salt Lake City. ENTERED UNION: 4 January 1896 (45th). SONG: "Utah, We Love Thee" and "Utah, This is the Place." MOTTO: Industry. COAT OF ARMS: In the center, a shield, flanked by American flags, shows a beehive with the state motto and six arrows above, sego lilies on either side, and the numerals "1847" (the year the Mormons settled in Utah) below. Perched atop the shield is an American eagle. FLAG: Inside a thin gold circle, the coat of arms and the year of statehood are centered on a blue field, fringed with gold. OFFICIAL SEAL: The coat of arms with the words "The Great Seal of the State of Utah 1896" surrounding. ANIMAL: Rocky Mountain elk. BIRD: California sea gull. FISH: Bonneville cutthroat trout. INSECT: Honeybee. FLOWER: Sego lily. TREE: Blue spruce. GEM: Topaz. EMBLEM: Beehive. LEGAL HOLIDAYS: New Year's Day, 1 January; Birthday of Martin Luther King, Jr., 3rd Monday in January; Lincoln's Birthday, 12 February; Washington's Birthday, 3rd Monday in February; Memorial Day, last Monday in May; Independence Day, 4 July; Pioneer Day, 24 July; Labor Day, 1st Monday in September; Columbus Day, 2nd Monday in October; Veterans Day, 11 November; Thanksgiving Day, 4th Thursday in November; Christmas Day, 25 December. TIME: 5 AM MST = noon GMT.
In 1680, the feudal ruler of the Land of Rhode (Land van Rode), Lopez Maria Rodriguez de Evora y Vega, moved the seat of his fief from Schelderode to his new castle in Beerlegem. In 1682, the Land of Rhode was promoted from barony to marquisate, and the fief of Beerlegem to barony. Around 1730, the marquis of Rhode started the construction of the Bieze Castle (Kasteel ten Bieze) in Beerlegem. This castle was completely walled and had nine bays. At the front and rear, two lion sculptures held the coat of arms of the marquis. An east wing and a west wing were built in a sober classicist style, in 1773 and 1778 respectively. In 1872, the castle was enlarged and renovated in a neoclassical style. The original facade decoration disappeared and the roof was raised. Though the majestic wrought-iron gate at the entrance of the castle domain is a beautiful example of rococo art, it dates back only to 1860-61 (1). Beerlegem is now a part of the Town of Zwalm (Belgium). The castle of Beerlegem is still in private hands. (1) The date of 1860-61 is mentioned in: DHANENS, Elisabeth, Kastelen in Belgie (Castles in Belgium), in Spiegel Historiael, 7, 6, 1972, pp. 340-49. Source: Koenraad DE WOLF, Zuid-Oost-Vlaanderen. Barok, Rococo & Classicisme (1625-1800), 1999, pp. 72-74 and 78. (English translation: Architectural Guide of South-East-Flanders. Baroque, Rococo & Classicism (1625-1800).)
People are more likely to start eating more fruits and vegetables this time of the year, either to get that swimsuit body or just for a healthy change. "A pasta salad with whole wheat rotini with some kale, zucchini, squash, red bell peppers, just very very colorful, with a light low-fat vinaigrette," said Registered Dietitian Jamie Sharp. That is what Sharp thinks is the ideal meal, but whether you choose to eat like her or not, it is important to stay healthy, especially during the summer months. "I say stick with the basics. Of course, during the summer when it is hotter, if you are doing more fruit and things that are juicy, it might be a little bit better; it might help prevent dehydration," said Sharp. In addition to what you should eat, there are a few foods you should avoid. "I think heavier, high-fat foods take longer to digest. It's hotter in the summer; it could lead to more nausea, not feeling well. Lighter foods, again: fruits and vegetables, lean cuts of meat, low-fat dairy products," said Sharp. And locals are used to having fruits and vegetables available only during the summer, but they should also grow accustomed to having those foods during other seasons. "One of the hardest things for so many of us, especially here in the South, and I only say that because I live in the South, is that we really concentrate on fruits and vegetables during the summer because that's when they have always been available. But what we have learned is that we have a lot of growers that grow year round," said Susan Owens, co-manager of the farmers market at Dothan Nurseries. Local farmers have you covered if you are afraid of those harmful pesticides. "We try to use only natural techniques. We try not to spray at all, but if we do, then we use garlic-based water, something natural that mostly aggravates the bugs more than kills the grass," said local grower Kenneth Cox.
Regardless of where you get your produce, it is important to remember to get your five to nine servings a day. Fruits and vegetables are low in calories, high in fiber, and full of vitamins and minerals.
Monday, 16 December 2019 NAA: The Destruction of the Ozone Layer From: Chris Scheurweghs <firstname.lastname@example.org> THE DESTRUCTION OF THE OZONE LAYER Draft Special Report Mr. Paolo RIANI (Italy) Special Rapporteur* International Secretariat October 1995 * Until this document has been approved by the Scientific and Technical Committee, it represents only the views of the Rapporteur. 1. The existence of the climatic changes that have been taking place during the last few decades is no longer the subject of debate. The task today is to refine techniques that will enable increasingly urgent remedies to be found. 2. The composition of the Earth's atmosphere has changed over the past two centuries as a result of human activities. The advent of the Industrial Revolution led to ever-increasing consumption of fuel and a rising level of carbon dioxide in the atmosphere. Agro-industrial activities and increased energy consumption brought about a rise in methane and nitrous oxide levels. 3. Furthermore, since the 1950s, there has been a rise in the quantity of chlorine compounds in the atmosphere, as a result of the large-scale use of well-known compounds such as chlorofluorocarbons (CFCs). The effects of these variations are diverse: carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) help to absorb infrared radiation and, as a result, to warm the Earth; others, such as the CFCs or other chlorinated compounds, tend to destroy stratospheric ozone. 4. Recently, scientific research has shown that the depletion of ozone in the stratosphere caused by CFCs seems to contribute significantly to disruption of the upper atmosphere. There is less ozone to intercept solar radiation and re-radiate longer wavelengths onto the Earth's surface. If this finding is confirmed by further research, it would mean that the real effect of CFCs on global warming would be negligible or non-existent.
This would not in any way obviate the need to eliminate CFCs, given the damage they do to the ozone layer. 5. The most striking change to the ozone layer is what is known as the ozone "hole" over the Antarctic. If the ozone layer's thickness in 1957 is compared with its thickness today, we find that the layer's density is constant practically everywhere except in the Antarctic polar region during the three months of the southern spring (September, October and November). Secular trends in the Antarctic show a constant decrease in density during the southern spring months, measured at about 300-320 Dobson units, in recent years. This means that the quantity of ozone normally present at these extreme latitudes during the spring has been halved, and is falling towards one-third of what it was. 6. This effect, though typically regional, ultimately causes small reductions in the ozone layer. Generally, a 5% depletion of the whole ozone layer at our latitudes causes an average rise of 10% in ultraviolet (UV) radiation intensity at ground level. 7. Also, as widely predicted by scientific models in recent years, the destruction of the ozone layer, which until recently only occurred during the southern spring months, now continues throughout the year and shows a very disturbing trend. Very recent observations by the British Antarctic Survey have revealed that, at the present rate of depletion, the ozone will have completely disappeared from the polar region by 2005. 8. The problem is magnified by the results observed in Europe and the United States. It is now beyond dispute that, with a 2 to 6% fall at medium latitudes in the northern hemisphere during the period from 1969 to 1986, depletion of the ozone layer is more than twice what was predicted and will in all probability quadruple by the end of the century. 9. 
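The rule of thumb in paragraph 6, that a 5% loss of column ozone produces roughly a 10% rise in ground-level UV intensity, amounts to an amplification factor of about 2. A minimal sketch of that relationship follows; the function name, the linear approximation, and the factor of 2 are illustrative assumptions derived from the figures quoted above, not part of the report itself:

```python
# Sketch of the report's rule of thumb: the percent rise in ground-level
# UV is roughly twice the percent depletion of the ozone column.
# The factor of 2 and the linear model are illustrative assumptions
# drawn from the 5% -> 10% figures cited in the text.

def uv_increase_pct(ozone_depletion_pct, amplification=2.0):
    """Approximate percent increase in surface UV for a given percent ozone loss."""
    return amplification * ozone_depletion_pct

print(uv_increase_pct(5.0))  # 10.0, matching the 5% -> 10% figure in the text
```

Under the same assumption, the 6% overall depletion mentioned later in the report would correspond to roughly a 12% rise in surface UV, consistent with the 10-15% range it cites.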
It should also be remembered that, according to representative sampling, long-term trends on a wide geographical scale indicate a global fall not attributable to known natural processes. 10. Measures taken by health authorities in an effort to reduce exposure of the population to solar radiation, in both summer and winter, are now considered normal. The risks of diseases linked to the effects of UV radiation not filtered by the stratospheric ozone are rising steadily and disturbingly. 11. Scientific analyses of the hole in the ozone layer have alerted the international community and given rise to a first initiative: the Vienna Convention and the Protocol that followed, together with its amendments (Montreal, London, Nairobi and Copenhagen), have imposed significant reductions on the production and consumption of CFCs. 12. The present situation nevertheless calls for a number of negative comments on international commitments regarding ozone. The essentially pragmatic nature of the resolutions adopted in recent years during revision of the Montreal Protocol has clearly shown the inadequacy of the instruments of international cooperation to face up to the environmental priorities on a global scale. 13. Data in recent years have shown that the only really effective decision aimed at reducing destruction of the ozone layer is total elimination of CFCs, and that the gradual pace of international plans is being constantly outstripped by the overall situation. 14. It takes a very long time, about seven years, for CFCs to reach the stratosphere. The present situation results from pollution emitted before the 1980s. Owing to their chemical stability, CFCs remain in the atmosphere for about 100 years. This means that an immediate halt to emissions would not have significant effects for another century.
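The claim in paragraph 14, that an immediate halt to emissions would not show significant effects for another century, follows directly from the roughly 100-year atmospheric lifetime of CFCs. A small sketch, assuming simple first-order (exponential) decay with a 100-year e-folding time (the decay model is an illustrative assumption, though the lifetime figure is the report's own):

```python
import math

# Illustrative first-order decay model for the airborne CFC burden,
# using the roughly 100-year atmospheric lifetime cited in the report.
# The exponential form is an assumed simplification.

def fraction_remaining(years, lifetime=100.0):
    """Fraction of today's CFC burden still airborne after `years` years."""
    return math.exp(-years / lifetime)

print(round(fraction_remaining(50), 2))   # about 0.61 after half a century
print(round(fraction_remaining(100), 2))  # about 0.37 after one full lifetime
```

Even fifty years after a total halt, roughly three-fifths of the current burden would still be in the atmosphere, which is why the report treats gradual phase-out schedules as inadequate.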
Similarly, even very small emissions of CFCs will lead to a 6% overall depletion of the ozone layer and a 10-15% increase in UV radiation at ground level, with the consequences we know, over the next 50 years. 15. This is the great problem of global environment policy on which international organizations such as NATO must reflect. Special procedures must be defined for environmental intervention, so that no difficulties of implementation render scientifically-based decisions ineffective. If new forms of international cooperation do not emerge, it will only be possible to apply partial and ineffective remedies to problems such as climatic change. 16. The countries party to the Montreal Protocol have decided totally to eliminate chemical emissions harming the ozone layer by the year 2000, but that is not enough. While the producer industries are now replacing CFCs by other compounds and are thus seeing additional market niches opening up, there nevertheless remain difficult problems to solve. 17. CFC substitutes, consisting of molecules very similar to those of chlorofluorocarbons, cost 6 to 7 times as much. The Third World countries are not in a position today to take on the necessary technological change. Also, on account of their legitimate tendency to increase consumption and their low technological level, those countries are causing a general increase in atmospheric pollution. 18. The economic support programmes launched in recent years, and the delay in their implementation, clearly show the enormous gap between the scale of the problem and the weakness of the resources employed. While the NATO countries are still significantly committed, the current situation nonetheless shows that international cooperation policies are inadequate overall. 19. 
Environment policy experts assert that, in order to be effective, the steps taken to combat atmospheric pollution must form part of exhaustive forecasting systems, taking account of demographic trends, the economy and technological status of countries, in order to set significant objectives. And while this consideration concerns the more general problem of the greenhouse effect, it is still more significant for the destruction of the ozone layer, for this man-made damage is also direct and immediate, whereas the solution to the problem seems a long way off. 20. Article 3 of the Framework Convention on Climate Change, signed in Rio in June 1992, states that "the Parties should protect the climate system for the benefit of present and future generations of humankind, on the basis of equity and in accordance with their common but differentiated responsibilities and respective capabilities". Although it is only a basic principle necessitating implementing protocols, this declaration contains the key to solving environmental problems because it modulates the solutions according to structural differences and political realities. The same path must be followed in regard to the ozone problem.
"What qualities of character did Richard III have that enabled him to ascend the throne?" Name and show these characteristics in action in the play Richard III. Also: "Richard III is a consummate villain". Show that this summation of Richard's character is true. To achieve goals in one's life, one must be determined and must have certain characteristics that correspond to one's goals. In the play Richard III, Richard III's goal is to ascend the throne. There are two ways that one can claim the throne: by birthright, or by might. Since Richard III cannot claim the throne by right, he must take it by might. To accomplish this goal, Richard, Duke of Gloucester, must be determined to achieve his goal at all costs, and he must have the characteristics to match his determination. In the first scene of the play, Richard announces in a soliloquy his plan to become king. Richard plainly states that he is "Deformed, Unfinished, and sent before his time" and that, since he cannot prove to be a lover, he is determined to prove a villain. As a villain Richard must be heartless; he cannot let his emotions interfere with his actions. He must also be intelligent and organized; a villain must know exactly what he has to do, when he has to do it and how he is going to do it. A villain must also be manipulative and persuasive, so that if he is accused of a crime, or if he finds himself between a rock and a hard place, he is able to talk his way out or convince people that he did not commit the crimes in question. A villain must also have scapegoats to use if he is discovered or if he is in a dangerous situation. Richard devised a brutal stratagem to ascend the English throne. Brilliantly, he executed his plan. Heartlessly, he executed family, friends, and subjects. Richard did indeed display these characteristics and therefore fulfilled his goal to ascend the throne, as you will see in the paragraphs below.
With his elder brother, King Edward IV, dying, Richard believes himself to be the most qualified to rule. He sets his plan to ascend to the throne into action. The first step was to lock up his other brother, George, Duke of Clarence, in the tower. He demonstrates his manipulation skills and plants the seeds of distrust in Clarence's head. He tells Clarence that it is not the king that is locking him up in the tower: "'Tis the lady Grey his wife that tempers him to that extremity," he says. He then puts on a show, sobbing to Clarence in front of Brakenbury and telling Clarence, "I will deliver you, or else lie for you," a statement that turned out to be false. Instead of trying to deliver Clarence from the tower, Richard hires two murderers to kill Clarence; this plan was executed perfectly. His next step in his plan to claim the throne was to claim a bride. He had one woman in mind: a widow named Anne Neville. Anne Neville was formerly married to Edward, Prince of Wales, Henry VI's son, both of whom Richard murdered. He stated that "The readiest way to make Anne amends is to become her husband and her father". He once more demonstrates his wonderful manipulative and persuasive abilities to woo Anne to marry him. In this he succeeded: he married Anne even though he had killed her husband and her father. When King Edward dies, Richard, Duke of Gloucester, decides that he needs a scapegoat, so that if he should fail to execute the next steps in his plan, he will have someone to break his fall. He employs the Duke of Buckingham, a powerful political ally. The next step in Richard's plan is to eliminate the family of the late king's wife, Queen Elizabeth, who naturally would prefer to see her sons, Prince Edward and Richard, Duke of York, ascend the throne. To discredit the two little princes, Richard circulates rumors that the sons of Queen Elizabeth are bastards and therefore cannot claim the throne.
Richard III decides that the only way to make sure that the little princes cannot claim the throne is to eliminate them permanently. Richard decides that the most secure way to kill the princes is to become their most trusted friend.
THE DETAILS: Researchers from New York City’s Department of Health and Mental Hygiene surveyed nearly 11,000 customers in 13 fast-food chains—McDonald's, Burger King, Wendy's, Subway, Au Bon Pain, KFC, Popeye's, Domino's, Pizza Hut, Papa John's, Taco Bell, Starbucks, and Dunkin Donuts—across 275 New York City locations in the spring of 2007, and another 12,000 customers earlier this year. They gave each customer a $2 Metrocard in exchange for his or her register receipt and completion of a brief survey. What they found was that customers who read the calorie information on menu items posted in the chain purchased an average of 106 fewer calories than those who said they didn't notice the postings—754 calories' worth of food as opposed to 860 calories' worth. The difference was most pronounced at Burger King, and burger joints in general, where customers who read the posted info purchased 152 fewer calories, on average, than those who didn't. Fifteen percent of the customers surveyed said they saw the info on calorie counts and used it when making their orders. WHAT IT MEANS: Posting calorie counts changes the context in which people decide what to buy, notes Lynn Silver, MD, assistant commissioner in the Bureau of Chronic Disease Prevention and Control at New York City's Department of Health. "It makes [their] default decisions healthy." In non-MD-speak: It works. The research contradicts an earlier New York University study of the city's menu labeling requirements, which showed that the postings may have increased awareness of calorie content, but not a decrease in calories purchased in lower-income and minority neighborhoods. The Department of Health researchers parry by saying their study was more representative of the city as a whole, and people in general. And in it, they see reasons to hope that a widespread improvement in eating habits is possible. "Dietary change is likely to come gradually," says Dr. 
Silver, "but it will start with consumers interested in making informed, healthy eating decisions." Here's how to take advantage of this phenomenon and make informed, healthy eating decisions: • Pay attention to posted calorie counts. Make it standard practice to read them and take a moment to absorb them before you order your food in a restaurant. It's highly likely you'll order less as a result. If the info isn't posted, ask for a brochure, or check online before you go out. You can also keep the listings from your favorite eateries in your bag or car for reviewing as needed. • Read food labels in the grocery store. Take the results from this study and apply them to your time in the aisles. Odds are, you'll make healthier, more informed buying decisions and eat healthier as a result. • Bone up. Confused by food labels? To find out what they really mean, check out the U.S. Food and Drug Administration's labeling primer. Click on "Consumer Information" to find out how food labels can help you choose food wisely. And check out Rodale's Eat This, Not That! series of books for comparisons of the calorie counts and other nutrient information for thousands of supermarket, fast food, and restaurant options.
Education: Women’s and Gender Studies Women faculty members in the higher education system in Israel share with their sisters in other Western developed countries characteristics regarding proportions, promotions, and positions. They constitute a small minority of the total tenure-track faculty, with somewhat larger minorities in the humanities and social sciences, and very small minorities in the physical sciences and engineering. This bibliography concentrates on books, chapters in anthologies, and periodical articles on the collective history of American Jewish women and archival resources on individuals and women’s organizations. In narratives or abridged cycles more or less faithful to the biblical text, art has portrayed biblical women as role models and reference, occasionally adding exegetical elements both Christian and Jewish. Although the text of the Bible became fixed at different dates and in various versions, these images are not fixed, but reflect the ebb and flow in society’s attitudes towards women and their role. Dora Askowith, author, historian, and college educator, believed that a knowledge of Jewish women’s history would serve as a catalyst for organization, activism, and moral leadership. She taught women at Hunter College for a total of forty-five years, and wrote that she was anxious to teach college students Jewish history because they were “poorly versed in the history of their own faith.” The Ba’alot Teshuvahs’ decision to explore Orthodox Jewish ways of life represents one possible solution to current widespread questions about women’s proper roles. The structural changes in American society in the past thirty years, in particular the changing demographics of women’s educational, occupational, marital, and childbearing patterns, have occasioned a debate in our culture about women’s nature and social roles similar to the late nineteenth-century “woman question” that followed the Industrial Revolution. 
Evelyn Torton Beck is Professor Emerita of women’s studies as well as an affiliate faculty member in the Jewish studies and comparative literature programs at the University of Maryland, College Park (UMCP). She is a scholar, a teacher, a feminist, and an outspoken Jew and lesbian on campus. With her energy and drive, the state flagship campus has become a more welcoming place for Jewish, female, and homosexual students, faculty, and staff. Already the best-known woman sociologist of her generation, she quickly became an important voice of American feminism. May Brodbeck was among the foremost American-born philosophers of science. Phyllis Chesler, a self-described “radical feminist” and “liberation psychologist,” is a prolific writer, seasoned activist and organizer, and committed Jew and Zionist. Also a psychotherapist and Emerita Professor of Psychology and Women’s Studies, Chesler is the author of twelve books. A biographical entry on the Jewish-Algerian-French writer Hélène Cixous commands close attention to her work because, in her case, “life writing,” as she calls it, is a key topic for her imaginative and critical enterprise in the fields of poetic fiction, literary theory, feminist analysis, and the theater. Natalie Zemon Davis is a leading European historian, a pioneer in feminist studies, and one of the first women to assume a senior position in academic life. In 1987, when she served as president of the American Historical Association, the largest professional organization of historians in the United States, she became only the second woman ever to hold that post. Davis’s work has enriched historical understanding by challenging the boundaries of scholarly inquiry and broadening the scope of the historical profession. The existence of two autobiographies and two biographies attests to the importance of Florence Denmark’s contributions to American psychology.
However, none of these published materials mention the fact that she is Jewish, probably because she has never felt that her Jewish heritage is particularly salient to her. Nevertheless, like the work of other Jewish women of her generation, Denmark’s contributions to psychology have been socially activist in nature. She is a founder of the field of the psychology of women, and has contributed much to its legitimization in terms of both scholarship and organizational leadership. Barbara Berman Dobkin is the pre-eminent Jewish feminist philanthropist of the end of the twentieth and beginning of the twenty-first century. Her vision, dedication, generosity and financial commitment have contributed significantly to changing the landscape of Jewish women’s organizations and funding in both North America and Israel. In her central pursuit of the full equality and integration of women and women’s issues into every aspect of Jewish life, Dobkin co-founded Ma’yan: The Jewish Women’s Project and has served as the chair of The Jewish Women’s Archive and the ten million dollar Hadassah Foundation. She has also been a pioneering donor-activist on Jewish gay and lesbian issues, in progressive Israeli organizations, and in the U.S. women’s funding movement, and has garnered a national reputation as a speaker on issues of women’s philanthropy and leadership. The first woman, according to the biblical creation story in Genesis 2–3, Eve is perhaps the best-known female figure in the Hebrew Bible. Her prominence comes not only from her role in the Garden of Eden story itself, but also from her frequent appearance in Western art, theology, and literature. Indeed, the image of Eve, who never appears in the Hebrew Bible after the opening chapters of Genesis, may be more strongly colored by postbiblical culture than by the biblical narrative itself. For many, Eve represents sin, seduction and the secondary nature of woman. 
Because such aspects of her character are not actually part of the Hebrew narrative of Genesis, but have become associated with her through Jewish and Christian interpretive traditions, a discussion of Eve means first pointing out some of those views that are not intrinsic to the ancient Hebrew tale. Carol Gilligan has broken new ground in psychology, challenging mainstream psychologists with her theory that accepted benchmarks of moral and personal development were drawn with a male bias and do not apply to women. Gilligan proposed that women have different moral criteria and follow a different path in maturation. A psychologist who taught at Harvard and Cambridge, Gilligan brought a feminist perspective to challenge Freud and new life to the statement “The personal is political.” Doris Bauman Gold was motivated by her long participation in Jewish organizational life to found Biblio Press, dedicated to educating Jewish women about their own history and accomplishments. Through Biblio Press, Gold has published more than twenty-seven general audience books that address and illuminate the culture, history, experiences, and spiritual yearnings of Jewish women. Ruth Gruber was born on September 30, 1911, in Brooklyn, the fourth of five children of David and Gussie (Rockower) Gruber, Russian Jewish immigrants who owned a wholesale and retail liquor store and later went into real estate. She graduated from New York University at age eighteen and in 1930 won a fellowship to the University of Wisconsin, where she received her M.A. in German and English literature. In 1931, Gruber received a fellowship from the Institute of International Education for study in Cologne, Germany. Her parents pleaded with her not to go: Hitler was coming to power. Nevertheless, she went to Cologne and took courses in German philosophy, modern English literature, and art history. She also attended Nazi rallies, her American passport in her purse, a tiny American flag on her lapel.
She listened, appalled, as Hitler ranted hysterically against Americans and even more hysterically against Jews. When seven women concluded on February 14, 1912, “that the time is ripe for a large organization of women Zionists” and issued an invitation to interested friends “to attend a meeting for the purpose of discussing the feasibility of forming an organization” to promote Jewish institutions in Palestine and foster Jewish ideals, they scarcely anticipated that their resolve would lead to the creation of American Jews’ largest mass-membership organization. Yet Hadassah, the Women’s Zionist Organization of America, became not only the most popular American Jewish organization within a short span of years, maintaining that preeminence to this day, but also the most successful American women’s volunteer organization, enrolling more women and raising more funds than any other national women’s volunteer organization. Florence Howe was born on March 17, 1929, in New York City to Samuel and Frances (Stilly) Rosenfeld. Her father was a taxi driver and her mother a bookkeeper. She received her B.A. from Hunter College in 1950 and her M.A. from Smith College in 1951. She then did graduate study at the University of Wisconsin from 1951 to 1954. Izraeli was Professor of Sociology and former Chairperson of the Department of Sociology and Anthropology at Bar Ilan University, Israel. At the time of her death, she was Chair of the Interdisciplinary Program in Gender Studies and head of the Rachel and J. L. Gewurz Center for Research on Gender at Bar Ilan, which she endowed in the name of her parents. The Bar Ilan Program, which she initiated, is one of only two M.A./Ph.D. Gender and Women’s Studies programs in Israel.
Founded in 1995 on the premise that the history of Jewish women—celebrated and unheralded alike—must be considered systematically and creatively in order to produce a balanced and complete historical record, the Jewish Women's Archive took as its mission “to uncover, chronicle and transmit the rich legacy of Jewish women and their contributions to our families and communities, to our people and our world.” The Eleanor Leff Jewish Women’s Resource Center (JWRC) of the National Council of Jewish Women, New York Section, maintains an extensive collection of materials by and about Jewish women and creates Jewish programming with a feminist focus. The JWRC was founded in 1976 to document and advance the modern Jewish women’s movement.
1370 – England/France. King Charles of France has announced that he is confiscating Aquitaine, and Prince Edward has sent for knights and men at arms from England to assist him in opposing the French king. Lord Gilbert Talbot is required to provide five knights, twelve squires, and twenty archers and men at arms, and wishes his surgeon, Hugh de Singleton, to travel with the party, while Hugh’s wife Kate will oversee the castle. Among the party will be Sir Simon Trillowe, Hugh’s old nemesis and Kate’s former suitor, who had once set fire to Hugh’s house. After a brawl on the streets of Oxford, Sir Simon had nearly lost an ear; Hugh had sewn it back on, but it had healed crooked, and Simon blamed Hugh for the disfigurement. Finding himself in the same party, Hugh resolves not to turn his back on the knight; but it is Sir Simon who should not have turned his back.
Interactive Whiteboard Activities The Halloween Tooth: A Max's Math Adventure Practice patterns with Max's Halloween candy challenge. - Grades: PreK–K, 1–2 Next, they must complete this activity page by making a pattern of candy objects. Finally, students can complete four Extra Challenges of varying difficulty levels: two challenges about making patterns, one about counting, and one about probability.
Re: Are the Laws of Physics Wrong? While it may have been hotly debated in the past whether craters on the moon are volcanic, it hasn't been in quite some time. Since you can clearly see the ejecta fans around them, it is fairly clear that they are impact craters, not to mention the lack of evidence for any substantial volcanism at any time in the moon's history. In addition, whether a meteorite "explodes" on impact or simply dives into the ground has far more to do with its composition than with its kinetic energy at impact. I think you might benefit from a kinematics course at your local community college. If you are not part of the solution, you are part of the precipitate.
Conventional practices of monitoring water quality rely on testing output as a means of quality assurance. Such practices are not timely enough to prevent consumption of contaminated water and do not give sufficient information to identify the source of contamination (when, why, and where it occurred). Water safety plans stand in contrast to conventional approaches. They introduce proactive risk management that contributes to timely detection of contamination, preventing illness and rectifying problems through monitoring of critical points at the water source, treatment, distribution to the consumer, and end storage. The World Health Organization (WHO) has promoted water quality assurance through water safety plans since the early 2000s and formally recommended them in the Third Edition of the Guidelines for Drinking-water Quality, published in 2004. This is because water safety plans work. These plans identify credible risks in the water supply system from source to consumer, prioritize those risks, and put in place controls to mitigate them. They also include processes to monitor and validate the effectiveness of management control systems and the quality of the water produced. Water safety plans have been adopted by a number of developed and developing countries. Plans can vary in size and complexity as appropriate for the setting (monitoring hazards associated with a particular setting or system, for example), though they commonly identify one catchment area, its distribution system, and associated consumption levels. A current example of implementation of a water safety plan is Sri Lanka. In 2009, the WASH Coalition implemented a comprehensive quality surveillance programme that included a water safety plan. Another example comes from WSSCC’s partner, the South Pacific Applied Geoscience Commission (SOPAC) which, in 2005, developed a framework for action on drinking water quality and health in countries in the Pacific Islands.
Authors: Landolt, P.J.; Adams, Todd; Zack, Richard; Crabo, Lars
Submitted to: Annals of the Entomological Society of America Publication Type: Peer Reviewed Journal Publication Acceptance Date: March 7, 2011 Publication Date: May 10, 2011 Citation: Landolt, P.J., Adams, T., Zack, R.S., Crabo, L. 2011. A diversity of moths (Lepidoptera) trapped with two feeding attractants. Annals of the Entomological Society of America. 104(3): 498-506. Interpretive Summary: New methods and approaches are needed to control insect pests of vegetable crops. Monitoring with chemical attractants is used as a means of determining the presence of an insect pest and the necessity of control measures. Researchers at the USDA-ARS laboratory in Wapato, Washington, in collaboration with Washington State University scientists, are studying chemical odorants from sweet baits and flowers to discover and develop chemical attractants for use in monitoring the presence and seasonality of moth pests of crops. They determined that two types of feeding attractants, based on sweet baits and on flower odors, lure different types of moths. They also documented the attraction of a large number of species of moths to these lures, and determined novel responses by several cutworms, armyworms and loopers that are widespread crop pests. This information provides guidance to researchers and growers in selecting the right feeding attractant for monitoring a particular pest, and suggests new avenues of research to develop these feeding attractants for additional crop pest species. Technical Abstract: Feeding attractants for moths are useful as survey tools to assess moth species diversity, and for monitoring of the relative abundance of certain pest species. We assessed the relative breadth of attractiveness of two such lures to moths, at sites with varied habitats during 2006.
Eighty-six of the 114 species of Lepidoptera captured were in traps baited with acetic acid plus 3-methyl-1-butanol (AAMB), a moth lure that is based on the odor chemistry of fermented molasses baits. Fifty-two of the 114 species were trapped with a floral odorant lure comprised of phenylacetaldehyde, β-myrcene, methyl salicylate, and methyl-2-methoxy benzoate. Preference for one lure type was statistically supported for 10 species of moths; seven to the AAMB lure and three to the floral lure. To gain better information on lure preference, 10 pairs of traps baited with the same lures were maintained in a single habitat type (riparian) during 2008. Sixty-eight of 89 species captured were in traps baited with AAMB and 43 were in traps baited with the floral lure. Preference for a lure type was statistically supported for 39 of the 89 species of moths trapped; 32 to the AAMB lure and seven to the floral lure. Both of these lures hold advantages for trapping different types of moths, and both lures might be used in a complementary way to sample moth biodiversity.
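The lure-preference comparisons above reduce to a paired count question: for each species, did significantly more individuals turn up in one trap type than the other? A minimal sketch of such an exact two-sided binomial (sign) test follows; the trap counts are hypothetical, not the paper's actual data.

```python
from math import comb

def binomial_two_sided_p(k, n, p=0.5):
    """Two-sided exact binomial test: probability, under H0 (no lure
    preference, p = 0.5), of an outcome at least as extreme as k of n."""
    null_prob = lambda i: comb(n, i) * p**i * (1 - p)**(n - i)
    observed = null_prob(k)
    # Sum probabilities of all outcomes no more likely than the observed one.
    return sum(null_prob(i) for i in range(n + 1) if null_prob(i) <= observed + 1e-12)

# Hypothetical counts for one species: 18 moths in AAMB-baited traps,
# 4 in floral-baited traps at the same sites.
aamb, floral = 18, 4
p_value = binomial_two_sided_p(aamb, aamb + floral)
print(f"p = {p_value:.4f}")  # a small p supports a real lure preference
```

The same test applied species by species, with a multiple-comparison correction, is one straightforward way to support statements like "preference was statistically supported for 10 species."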
Barrett’s syndrome is a disorder in which the tissues lining the esophagus get inflamed, due to the irritation caused by acid reflux. It also involves replacement of esophageal cells by those lining the intestines. This condition can lead to a deadly form of esophageal cancer. Barrett’s syndrome is a gastrointestinal disorder in which the lining of the esophagus (the tube carrying food from throat to stomach) gets damaged and replaced by cells that are usually found in the intestine. Acid reflux from the stomach damages the flat-shaped squamous cells of the esophagus, which are then replaced by column-like cells of the intestine. This change of cell type is termed metaplasia. It is a deleterious condition, because if neglected it can gradually lead to cancer. Although the exact cause of this syndrome is not known, there are some causative factors that can lead to this condition. Factors such as age, obesity, gastroesophageal reflux disorder, and gender are known to play a causative role. Around 60% of people with Barrett’s syndrome have gastroesophageal reflux disease (GERD), and people with chronic GERD are at higher risk of developing this syndrome. Unhealthy eating habits, chronic heartburn, and similar factors can also lead to this condition. Barrett’s syndrome does not have any particular symptoms of its own. People with this disorder experience the same symptoms as GERD. The symptoms are as follows:
- Chest pain
- Acid reflux (sour taste in mouth)
- Blood spots in vomit
- Blood in stool
- Pain while swallowing
- Sore throat
As far as internal symptoms are concerned, the cells lining the esophagus get replaced by the ones lining the intestine. The esophagus with changed cells is called Barrett’s esophagus. This can increase the risk of developing a deadly form of esophageal cancer called adenocarcinoma (a malignant tumor originating in glandular epithelium).
The doctor may conduct an upper endoscopy or biopsy to diagnose the condition after performing a physical exam. During the endoscopy, he can examine the esophagus for any inflammation and irritation with the help of a small flexible tube attached to a camera. The tube is inserted through the patient’s mouth and into the patient’s esophagus, whereby the doctor can view the esophagus and stomach regions. The test can be uncomfortable for the patient, but it is necessary for an accurate diagnosis. Furthermore, he may conduct a biopsy if he finds something suspicious. With the help of a biopsy, he can inspect a sample of esophageal lining tissue and test for cancerous cells. There is no specific cure for Barrett’s esophagus; the disorder has no symptoms or treatment line specific to it. However, since this condition can be fatal (it can lead to a rare and deadly type of esophageal cancer), it needs proper medical intervention. Treatment generally involves treatment of GERD. The patient may be given antacids, cholinergic agents, promotility agents, histamine H2 receptor blockers, or proton pump inhibitors. The doctor will also watch for early signs of cancer and treat them accordingly. If the condition is severe, surgery is advised. However, since these surgeries involve several complications, they are recommended only for people with esophageal cancer, or those at high risk of developing it. The surgery may involve removal of the affected esophageal segment; this surgery is called esophagectomy. In another surgery, called fundoplication, a part of the upper stomach is folded around the esophagus, so as to decrease the damage done to it by acid reflux. Barrett’s syndrome is a disorder affecting the esophageal lining and is a condition without a specific cure.
Hence, it is only wise to try to prevent the onset of this condition by controlling one’s weight and refraining from heartburn-causing foods and drinks such as caffeinated beverages, alcohol, and spicy or acidic foods. Moreover, it is recommended not to eat or drink anything 3-4 hours before going to bed. An ounce of prevention is better than a pound of cure! Disclaimer: This article is for informative purposes only and does not in any way attempt to replace expert medical advice.
Revista de Salud Pública Print version ISSN 0124-0064 MARCOS-DACCARETT, Nydia J. et al. Obesity as risk factor for metabolic disorder in Mexican adolescents, 2005. Rev. salud pública [online]. 2007, vol.9, n.2, pp. 180-193. ISSN 0124-0064. http://dx.doi.org/10.1590/S0124-00642007000200003. Objective Determining the prevalence and estimating the risk of obesity for dyslipidemia and hyperinsulinemia in adolescents. The existence of a linear association between anthropometric measures, lipids and insulin was also evaluated. Material and Methods A comparative study was carried out amongst obese (body mass index=BMI >95th percentile for age and gender; n=120) and non-obese adolescents (BMI <85th percentile for age and sex; n=120) aged 10-19. A structured questionnaire was used for collecting anthropometric and demographic data. Glucose, insulin and lipid profiles were obtained for each adolescent. Results Prevalence of at least one dyslipidemia was 56.6% among obese adolescents and 20.8% amongst non-obese ones (p<.001). The former registered 50% prevalence of hyperinsulinemia, the latter 4% (p<.001). Obesity increased hyperinsulinemia risk with an odds ratio of 23 (95% CI 8.3-68.9) and increased the risk of at least one dyslipidemia (OR=5.0; 95% CI 2.7-9.2). Insulin level significantly correlated with BMI (r=0.57), triglycerides (r=0.57), VLDL (r=0.57), HDL (r=-0.37), waist-hip circumference index (r=0.29), cholesterol (r=0.22), and LDL (r=0.13). Conclusions Obesity can be considered to be a risk factor for developing metabolic disorders in adolescents. In fact, there was a linear relationship between anthropometric measurements, lipids and insulin. Prevention should focus on improving predisposing environments for obesity amongst families having children and teenagers. Emphasising life-styles and healthy behaviour is essential, as well as training and treatment options for complete care of individuals in this age-group. Keywords: Obesity; adolescence; risk; dyslipidemia; insulin.
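The reported odds ratios can be approximately reproduced from the prevalences in the abstract. The sketch below computes an odds ratio with a Woolf (log-normal) confidence interval; the 2x2 counts are back-calculated from the reported percentages (50% of 120 obese, about 4% of 120 non-obese), so the interval differs slightly from the published 8.3-68.9.

```python
from math import log, exp, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Woolf confidence interval for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Approximate counts implied by the abstract: 60/60 hyperinsulinemic
# among obese adolescents versus roughly 5/115 among non-obese.
or_, lo, hi = odds_ratio_ci(60, 60, 5, 115)
print(f"OR = {or_:.0f} (95% CI {lo:.1f}-{hi:.1f})")
```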
Tanya Asim Cooper Abstracted from: Tanya Asim Cooper, Racial Bias in American Foster Care: The National Debate, 97 Marquette Law Review 215 (Winter, 2013) (311 Footnotes) (Full Article) Professionals in the foster care system routinely contend that Native American and African American children are the most at-risk for child abuse and neglect, a presumption currently reflected in the system. Based on this belief, the system removes these children from their families at rates higher than children of any other race. Whether this disproportionate representation in foster care of African American and Native American minorities is justified or biased is the question in the ongoing national debate. The nation's poorest children, not surprisingly, make up most of the foster care population. African Americans and Native Americans are disproportionately poor, and that correlation increases the probability of foster care for these races. Once in foster care, however, children face heightened risk for abuse and neglect within the system itself and generally suffer poorer outcomes and prospects, as studies and current events repeatedly demonstrate. What this means, therefore, is that African American and Native American children, especially those who are poor, are disproportionately more likely to enter foster care, where they are at high risk of secondary harm by the system itself. Foster care is a big, billion-dollar business. Craig and Herbert estimated in 1997 that publicly-funded foster care cost American taxpayers annually $12 billion; one year in foster care per child cost $17,500; group-home foster care per child in 1994 cost $36,500; and institutional placements in some states per-year, per-child cost $42,000. Costs since 1997 only rose, as ABC News reported in 2006: "Despite more than a decade of intended reform, the nation's foster care system is still overcrowded and rife with problems.
But taxpayers are spending $22 billion a year--or $40,000 a child--on foster care programs." According to the Government Accountability Office, each year, hundreds of thousands of the nation's most vulnerable children are removed from their homes and placed in foster care. While states are primarily responsible for providing safe and stable out-of-home care for these children, Title IV-E of the Social Security Act provides federal financial support. The Administration for Children and Families (ACF) in the Department of Health and Human Services (HHS) is responsible for administering and overseeing federal funding for Foster Care. Financial incentives in federal laws and policies perpetuate state practices to place children in government-subsidized foster care rather than leaving the children in their own homes and providing their families with aid, which is much cheaper. These costs to America's children and taxpayers warrant close scrutiny. With the release of the Fourth National Incidence Study of Child Abuse and Neglect (NIS-4), and with national organizations, leading scholars, and local jurisdictions around the country focusing on unintended bias in foster care systems, the civil rights debate on foster care continues. Many believe the system protects the most vulnerable from maltreatment while others believe it exploits the most disenfranchised. Because foster care is a direct function of the political and social will of the people at a particular point in American history, this billion-dollar enterprise will always be relevant. This Article explores the central debate on bias, race, and poverty in America's foster care system, and aims to highlight those places in the system where unintended bias manifests and consequently affects decisions regarding which children are removed from their families and placed in foster care.
Foster care laws vacillate in intent and effect, but as discussed in this Article, the laws themselves are vague and their practice is particularly vulnerable to biased decision-making that frequently increases the risk of error and secondary harm to these already disenfranchised families. Using the lens of systems theory to conceptualize the foster care system reveals key decision points vulnerable to bias, where the high risk of secondary harm to children in foster care can far outweigh any benefits of removal from the children's own homes. The systems-thinking framework also points to those solutions most likely to strengthen the critical junctures in the system that are vulnerable to bias in American foster care--a system that most agree is flawed. Tanya Asim Cooper is a clinical law school professor and certified child welfare law specialist.
Artificial Intelligence (AI) and the Internet of Things (IoT) are two of the most transformative technologies of the 21st century. The integration of AI and IoT has opened a whole new world of possibilities, with smart devices and systems that can learn and adapt to their environment, making them more efficient and effective. Fundamentally, AI is the ability of machines to learn from data and make decisions based on that data. IoT, in contrast, refers to the network of physical devices, vehicles, home appliances, and other items embedded with electronics, software, sensors, and connectivity. This network empowers these objects to connect and seamlessly exchange data. The combination of AI and IoT allows for the creation of intelligent systems that can process and analyse large amounts of data in real time, providing valuable insights that can be used to optimise operations and improve performance. The integration of AI and IoT has numerous use cases across various industries, including healthcare, manufacturing, transportation, and agriculture. However, there are also challenges in implementing AI in IoT, such as data privacy, security, and interoperability. Despite these challenges, the benefits of AI in IoT are significant, and the future trends in this area are promising. Artificial Intelligence (AI) is the ability of machines to perform tasks that require human intelligence, such as perception, speech recognition, decision-making, and language translation. It aims to create intelligent machines that can learn, reason, and make decisions like humans. AI can be categorised into three types: Narrow AI, for specific tasks like chess, facial recognition, or translation; General AI, for human-like intellectual tasks (still the subject of research); and Super AI, a theoretical AI surpassing human intelligence, tackling tasks beyond human capability.
AI is based on several fundamental concepts, including machine learning, natural language processing, and automated decision-making. In the context of the Internet of Things (IoT), AI can be used to improve the efficiency and effectiveness of IoT systems. For example, AI can be used to analyse data from sensors and devices to identify patterns and anomalies, predict equipment failures, and optimise energy consumption. By combining the power of AI with IoT, it is possible to create intelligent systems that can adapt and respond to changing conditions in real time. The Internet of Things (IoT) represents a network comprising physical devices, vehicles, household appliances, and various objects integrated with sensors, software, and connectivity. This enables seamless data exchange and communication with other devices and systems through the internet. IoT devices can range from simple sensors that collect data to complex devices that act on the data they collect. IoT devices connect to the internet through wireless or wired networks, and they can communicate with other devices through various protocols such as Wi-Fi, Bluetooth, Zigbee, and LoRaWAN. The data collected by IoT devices can be analysed and used to improve efficiency, safety, and productivity in various industries such as healthcare, agriculture, transportation, smart retail, and manufacturing. One of the key features of IoT is its ability to collect and process large amounts of data in real time. IoT devices can collect data from multiple sources such as sensors, cameras, and GPS, and send it to cloud-based platforms for analysis and storage. This data can then be used to generate insights and predictions that can help businesses make informed decisions. Another important aspect of IoT is security. IoT devices can be vulnerable to cyber-attacks, and it is important to ensure that they are secure and protected from unauthorized access.
This can be achieved through various measures such as encryption, authentication, and access control. Overall, IoT is a rapidly growing field that has the potential to transform various industries and improve the quality of life for people around the world. As more devices become connected to the internet, the possibilities for innovation and progress are endless. The integration of AI and IoT has the potential to revolutionise the way we interact with technology. By combining the power of machine learning algorithms with the vast amount of data generated by IoT devices, businesses and individuals can gain valuable insights and make more informed decisions. One of the key benefits of integrating AI and IoT is the ability to create intelligent systems that can learn and adapt to changing conditions. For example, smart homes can use AI algorithms to learn the preferences of their occupants and adjust the temperature, lighting, and other settings accordingly. Similarly, smart cities and factories can use AI to optimise production schedules and improve efficiency. Another advantage of integrating AI and IoT is the ability to automate repetitive tasks and reduce the workload on human operators. For instance, in the healthcare industry, AI-powered sensors can monitor patients’ vital signs and alert medical staff to any changes that require attention. This can help improve patient outcomes and reduce the risk of medical errors. However, there are also challenges to integrating AI and IoT, including the need for robust security measures to protect sensitive data and prevent cyberattacks. Additionally, there is a risk that AI algorithms may make biased or unfair decisions if they are not properly designed and tested. Overall, the integration of AI and IoT has the potential to transform many industries and improve the quality of life for people around the world.
As technology continues to evolve, it will be important to ensure that these systems are designed and implemented in a responsible and ethical manner. Artificial Intelligence (AI) has become an integral part of the Internet of Things (IoT) ecosystem. AI enables IoT devices to learn from data and make intelligent decisions, which can help improve efficiency, reduce costs, and enhance user experiences. Here are some of the use cases of AI in IoT: AI-powered smart home devices can learn user behaviour and preferences to automate tasks such as turning on lights, adjusting temperature, and playing music. For example, Amazon’s Alexa and Google Home use natural language processing (NLP) to understand user commands and respond accordingly. Smart thermostats like Nest can learn user schedules and adjust the temperature accordingly, reducing energy consumption. AI in IoT can help improve healthcare by monitoring patient health, predicting diseases, and providing personalised treatment. Wearable medical devices such as Fitbit and Apple Watch can track vital signs and alert users of any abnormalities. Smart pills can transmit data about medication ingestion and dosage to remote healthcare providers. AI algorithms can analyse this data and provide insights into patient health, enabling doctors to make better treatment decisions. AI in IoT can improve transportation by enabling autonomous vehicles, optimising traffic flow, and improving driver safety. Self-driving cars use sensors and AI algorithms to navigate roads and avoid collisions. Smart traffic lights can adjust their timing based on traffic patterns, reducing congestion. AI-powered driver assistance systems can alert drivers of potential hazards and prevent accidents. AI in IoT can improve manufacturing by optimising production processes, reducing downtime, and improving quality control. Smart sensors can monitor equipment performance and alert maintenance teams of any issues.
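The sensor monitoring described above can be sketched as a simple statistical rule: flag any reading that deviates sharply from the recent history of the same sensor. The temperature stream, window size, and threshold below are hypothetical; real deployments would tune these per sensor.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=10, threshold=3.0):
    """Flag readings that deviate more than `threshold` standard
    deviations from the mean of the preceding `window` readings."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# Hypothetical temperature stream with one faulty spike at index 15.
stream = [21.0, 21.2, 20.9, 21.1, 21.0, 21.3, 20.8, 21.1, 21.0, 21.2,
          21.1, 20.9, 21.0, 21.2, 21.1, 35.0, 21.0, 21.1]
print(detect_anomalies(stream))  # → [(15, 35.0)]
```

A rule this simple runs comfortably on a gateway or even on the device itself, which is one reason threshold-style checks are a common first layer before heavier machine-learning models.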
AI algorithms can analyse production data to identify inefficiencies and suggest improvements. Predictive maintenance can help prevent equipment failures and reduce downtime. In conclusion, AI in IoT has the potential to revolutionise various industries by enabling intelligent decision-making and automation. As the technology continues to evolve, we can expect to see even more innovative use cases in the future. Artificial intelligence (AI) can provide several benefits when integrated with the Internet of Things (IoT). Here are some of the advantages of using AI in IoT: AI algorithms can analyse large amounts of data collected by IoT devices and provide valuable insights. By analysing this data, businesses can optimise their operations, reduce costs, and improve their overall efficiency. For example, AI can help predict equipment failures, allowing maintenance teams to proactively address issues before they cause downtime. AI can also help improve safety in IoT applications. For instance, AI-powered cameras can detect potential hazards in industrial settings and alert workers to take appropriate action. Similarly, AI algorithms can analyse the data gathered from sensors in vehicles to detect unsafe driving behaviours and provide real-time feedback to drivers. AI can help businesses personalise their products and services to meet the needs of individual customers. By analysing data from IoT devices, businesses can gain insights into customer preferences and behaviour, enabling them to offer tailored experiences. For example, a fitness tracker could use AI to suggest personalised workout routines based on the user’s fitness level and goals. AI can help predict when equipment needs maintenance, reducing downtime from equipment failure and increasing efficiency. By analysing data from IoT sensors, AI algorithms can detect patterns that indicate potential equipment failures.
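One common way such failure patterns feed a maintenance schedule is trend extrapolation: fit a line to a degradation signal and estimate when it will cross an alarm limit. The vibration readings and limit below are hypothetical, and a production system would use a richer model than a straight least-squares line.

```python
def estimate_hours_to_failure(vibration, limit):
    """Fit a least-squares line to hourly vibration readings and
    extrapolate when the trend crosses the failure limit.
    Returns hours remaining after the last reading, or None."""
    n = len(vibration)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(vibration) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, vibration)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    if slope <= 0:
        return None  # no upward wear trend detected
    return (limit - intercept) / slope - (n - 1)

# Hypothetical bearing vibration (mm/s) drifting upward; alarm limit 7.0.
readings = [2.0, 2.1, 2.3, 2.2, 2.5, 2.6, 2.8, 2.9, 3.1, 3.2]
print(round(estimate_hours_to_failure(readings, limit=7.0), 1))
```

An estimate like this lets a maintenance team book a repair window well before the limit is reached, rather than reacting to a breakdown.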
This allows maintenance teams to schedule repairs before equipment breaks down, reducing costs and improving uptime. In conclusion, integrating AI with IoT can provide several benefits, including improved operational efficiency, enhanced safety, personalisation, and predictive maintenance. By using edge analytics and leveraging the power of AI, businesses can optimise their operations and provide better experiences for their customers. One of the biggest challenges in implementing AI in IoT is security. As more devices become connected to the internet, the risk of cyber-attacks increases. Hackers can exploit vulnerabilities in IoT devices to gain access to sensitive information or take control of the devices. AI systems and intelligent devices are also prone to attacks, and if compromised, they can cause serious damage. To mitigate these risks, companies must ensure that their IoT devices and AI systems are secure and that they have robust security protocols in place. Another challenge in implementing AI in IoT is data storage and management. IoT devices generate vast amounts of data, and it can be challenging to manage and store this data. AI systems require large amounts of data to learn and improve, and if the data is not managed properly, it can lead to inaccurate results. Companies must ensure that they have the infrastructure in place to manage and store large amounts of data securely. Finally, there is a lack of skilled professionals in the field of AI and IoT. The demand for these professionals is high, but the supply is low. This makes it difficult for companies to find the right talent to implement and manage their AI and IoT systems. Companies must invest in training programs to upskill their current employees and attract new talent to the field. In summary, implementing AI in IoT presents several challenges, including security issues, data management, and a lack of skilled professionals.
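One common mitigation for the data-volume challenge is to aggregate readings at the edge and upload only periodic summaries. A minimal sketch, with a hypothetical batch size and synthetic readings:

```python
def summarise_on_edge(samples, batch_size=60):
    """Reduce raw sensor samples to one summary record per batch,
    as an edge node might do before uploading to the cloud."""
    summaries = []
    for start in range(0, len(samples), batch_size):
        batch = samples[start:start + batch_size]
        summaries.append({
            "count": len(batch),
            "min": min(batch),
            "max": max(batch),
            "mean": round(sum(batch) / len(batch), 2),
        })
    return summaries

# 120 hypothetical one-second readings collapse to 2 upload records,
# a 60x reduction in transmitted data points.
raw = [20 + (i % 7) * 0.1 for i in range(120)]
print(summarise_on_edge(raw))
```

The trade-off is losing per-sample detail in the cloud, so edge nodes often pair summarisation with local anomaly checks that forward raw data only when something looks wrong.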
Companies must address these challenges to ensure that their AI and IoT systems are secure, accurate, and effective. Artificial Intelligence (AI) and the Internet of Things (IoT) are two rapidly growing technologies that are changing the way we live and work. As these technologies continue to evolve, there are several future trends that are expected to emerge. Edge computing is a technology that allows data to be processed closer to the source, rather than sending it to a centralised data centre. This technology is becoming increasingly important in the world of IoT, as it allows for faster processing of incoming data and reduces the amount of data that must be transferred across the network. In the future, it is expected that edge computing will become even more prevalent in the world of AI and IoT. Cloud computing has been a key enabler of the growth of AI and IoT, allowing for the storage, exchange, and processing of large amounts of data. In the future, it is expected that there will be even greater integration between AI, IoT, and cloud computing. This will allow for more powerful AI algorithms to be developed, as well as the ability to process and store even larger amounts of data. Predictive analytics is a technology that uses data, statistical algorithms, and machine learning techniques to identify the likelihood of future outcomes based on historical data. In the world of AI and IoT, predictive analytics is becoming increasingly important, as it allows for more accurate predictions about future events. In the future, it is expected that predictive analytics will become even more prevalent as the amount of data generated by IoT devices continues to grow. As the number of IoT devices continues to grow, so does the potential for security breaches. In the future, there will be a greater emphasis on security in the world of AI and IoT, as companies and individuals seek to protect their data and devices from cyber-attacks.
This will require the development of new security technologies and protocols, as well as greater awareness of the potential risks associated with IoT devices. In conclusion, the future of AI and IoT is bright, with many exciting developments expected to emerge in the years to come. From increased use of edge computing to greater emphasis on security, these technologies are set to transform the way we live and work. In conclusion, the integration of Artificial Intelligence (AI) and the Internet of Things (IoT) has revolutionised the way devices and machines communicate and operate. The use of AI in IoT has enabled machines to learn, adapt, and make decisions based on real-time data, leading to improved efficiency and productivity. One of the major benefits of AI in IoT is predictive maintenance. By analysing data from sensors and other IoT devices, AI algorithms can predict when maintenance is required, reducing downtime and increasing productivity. Additionally, AI can help in detecting anomalies in data, which can be used to identify potential problems before they occur. Another significant advantage of AI in IoT is the ability to automate tasks. With AI-powered automation, businesses can reduce manual labour, increase accuracy and efficiency, and reduce costs. Furthermore, AI and intelligent automation can enhance operational efficiency through smart decisions, such as optimising energy consumption, reducing waste, and improving safety. However, there are also challenges associated with the integration of AI and IoT, such as security and privacy concerns. As the number of connected devices increases, so does the risk of cyber-attacks. Therefore, it is crucial to implement robust security measures to protect data and devices from cyber threats. In conclusion, the integration of AI and IoT has the potential to transform industries and improve the way we live and work.
While there are challenges to be addressed, the benefits of AI in IoT are significant and cannot be ignored.
Initial Sharp Pain
An initial sharp pain could indicate a heel spur or another foot condition. Most patients with heel spurs describe the pain as feeling like a knife or pin, and it often occurs when they first stand up after sitting for a while. However, fifty percent of patients with heel spurs do not experience any pain from them. Since foot pain can be due to a variety of medical issues, patients who notice a sharp pain in any area of their foot should visit a podiatrist or an orthopedist if the pain does not improve after a few days. The doctor will ask the patient questions about when the pain began and if any activities make it worse. They will also want to know about any remedies the patient has tried and if anything has reduced the pain. After completing the health history, the physician will assess the patient's heel and foot. They will gently press on the area to determine the location of the pain, and the patient might be asked to walk across the room or to perform certain movements with the toes or foot. In addition to pain medication, the specialist may recommend the use of orthotic shoe inserts or a course of physical therapy to minimize pain.
Traffic calming was born in the streets of the Netherlands in the late 1960s as a reaction to the rapid increase in traffic volumes and the accompanying deterioration in the livability of post-World War II Dutch cities. At the time, the idea of placing obstacles in the roadway to restrain traffic speeds and flow was considered to be extremely radical—even in the Netherlands. However, by the mid-1970s traffic calming had been adopted as official Dutch government policy for design and had spread to surrounding northern European countries and even to America in places like Berkeley and Seattle. It may have taken 40 years, but the engineering establishment in the U.S. has slowly come to accept traffic calming as a part of the tool box for street design—at least for use on local streets. While we in the U.S. have been cautiously grappling with how and where to use traffic calming, the Dutch and their northern European neighbors have continued to experiment and innovate in finding constructive ways to accommodate cars in their cities. Designers in these countries have been motivated both by the desire to enhance living conditions in their cities and to increase safety for all road users. It is worth noting that in 1970, the fatality rate per capita in the Netherlands and the U.S. was almost identical. Now the Dutch rate is 2.5 times less than that in America. What is interesting is that Dutch engineers do not seem to consider this dramatic progress to be good enough: Now the goal in countries like the Netherlands and Sweden is to actually eliminate traffic fatality as a factor in everyday life. Everybody has a piece Out of this innovative environment has emerged a new concept for street design that is variously referred to as legible streets, self-explaining streets or shared streets. 
In the United Kingdom, some publications have even taken to using the term “naked streets” in reference to the fact that a feature of these streets is that they have been “stripped” of the signs and markings that are necessary for the operation of conventionally designed streets. The thinking is that shared streets do not need signs and markings, because users are guided on their proper use by the physical design of the streets themselves. The salient feature of shared streets is not just that they are “naked,” but that they are designed to be fully part of the public realm and not just a conduit for traffic. In other words, the whole right-of-way of the shared street is designed to be an integral extension of the surrounding land-use context. Therefore, all users have equal access. A vehicle is considered to be just another user that must negotiate space on an equal footing with shoppers, bikers, skaters and pedestrians. The idea is to make the street legible so that the users can understand that it is a shared environment and then behave accordingly. Ben Hamilton-Baillie, a UK designer who has studied shared streets all over northern Europe, reported on an intersection in Christiansfeld, Denmark, that was converted to a shared street in order to address safety concerns. The original intersection was a conventional one with traffic signals and the requisite signs. The idea for the redesign was to create a sense of place at the intersection through the use of appropriate surface treatment, lighting columns and squared-off corners at the crossroads. According to Hamilton-Baillie, these changes result in an intersection that now feels like a town center. Data from the Danish Traffic Directorate shows that in the three years since the conversion there have been no serious injury accidents at this intersection, compared with an average of three injury accidents per year before conversion. More surprisingly, traffic backups during peak hours have actually decreased.
The data suggest that the new intersection has improved capacity and results in fewer delays than the original traffic-signal controlled intersection. The best-known designer of shared streets is the Dutch engineer Hans Monderman, who works in the northern Dutch province of Friesland. In 1998, Monderman converted an intersection known as “de Brink” from a conventional signal-controlled intersection to a simple brick-paved intersection that has been stripped of signals, signs, markings and barriers. Hamilton-Baillie reported: “While observing the workings of ‘de Brink’ in the center of the Friesland market town of Oosterwolde, I was intrigued to hear a traffic engineer and safety official remark, with satisfaction, how many ‘traffic violations’ were taking place each moment in the raised paved stage-like square that constitutes the new intersection. Trucks, bikes, cars and pedestrians intermingle with apparent chaos and disorder using eye contact and careful observation to negotiate space. The guiding control of the state is absent: it relies entirely on informal conventions and legibility.” “De Brink” is a fairly quiet intersection with only 4,500 cars per day. In sharp contrast, the “Laweiplein,” which is a major intersection in the Friesland city of Drachten, carries almost 20,000 cars per day. In 2003, Laweiplein was converted from an unattractive, signal-controlled intersection to a shared-street intersection. The Laweiplein is now a textured intersection where the sidewalk merges with the roadway. At the center is a roundabout, which is delineated by a contrasting surface treatment, and at each of the four corners of the intersection there are fountains that are lit at night. According to city engineer Koop Kerkstra, accident rates have fallen about 20% since the conversion, and travel time for crossing the city has improved dramatically. This apparent success of the Laweiplein conversion suggests that shared streets are not just for low-volume local streets.
Numerous shared-street projects are now in place across the Netherlands, Scandinavia and the United Kingdom. The consensus report is that these conversions have significantly increased safety and also have improved traffic flow efficiency. The primary explanation for these somewhat counterintuitive outcomes is that the shared-street environment reduces vehicle speeds generally to less than 20 mph. From the point of view of traffic safety, research in the U.S. and Europe has long shown that 20 mph is an important threshold. Below 20 mph the chance of being severely injured in a traffic accident is relatively low. But 20 mph also is the threshold speed at which people are able to interact and maintain eye contact and pedestrians and bicyclists feel comfortable in a mixed-use environment. These factors taken together explain why a number of European cities have begun to institute a uniform speed zone of 19 mph in their built-up area. The greater efficiency of traffic flow on shared streets also is attributed to the low-speed environment. The theory is that intersections, which are usually the bottlenecks in built-up areas, function much more efficiently at lower speeds since traffic signals are not needed. This theory seems to be borne out in many instances where cities have instituted area-wide 19-mph speed zones and have found that the traffic moved much more smoothly. Not quite calm The concept of shared streets has many of the same goals as does traffic calming, but the approach to achieving those goals is quite different. Although traffic calming is typically based on adding devices to the roadway to slow or restrict traffic, it still relies on conventional traffic operational principles. In other words, the assumptions in traffic calming are (1) that the pavement is for traffic and the sidewalk is for pedestrians and (2) that signs and markings are needed to regulate behavior. 
The concept of shared streets represents a break from these essentially conventional assumptions. In the American context, the idea of shared streets (especially for busier urban streets) is probably as radical a concept as traffic calming seemed 40 years ago when it first appeared in the Netherlands. The shared-street concept challenges prevailing orthodoxy about how streets are designed and about traffic safety. The idea of regulating traffic and separating users in time or space is deeply ingrained in our design philosophy. The apparent success with shared-street design in northern Europe raises the question of whether the conventional approach to street design needs to be reconsidered both from the perspective of place making and safety. The advocates for shared streets are exhorting us to “do it in the streets.” In other words, they are saying we need to recapture the vehicle right-of-way and make it an integral part of the public realm. Given the existing regulatory framework for design in the U.S., I am not so sure we are in a position to fully embrace this philosophy on any but the most local of streets. But the success that countries like the Netherlands have had in reducing traffic fatalities and in enhancing the vitality of their cities lends credibility to the idea that we need to be open to learning more about their approach to street design. Shared streets rely on social rather than regulatory controls to govern how all users behave. There is a growing body of data to show that in situations where there is a mix of different types of users, this design approach can be the most effective for safety and efficient traffic movement. The last decade has seen an explosion in the design of mixed-use environments in the U.S. For one, the new urbanist and the smart-growth movements have brought about an appreciation of place making and the need to move away from auto-dominated, segregated, single-use development in favor of more walkable, mixed-use communities.
In addition, more and more places are incorporating light rail, bicycles and pedestrians into their transportation mix. With these changes, the prevailing approach to street design, which focused on servicing auto-oriented development, is no longer relevant in many places. As engineers seek better ways of accommodating the new paradigm in American place making, the concept of shared streets, as being practiced in Europe, might contain important pointers on designing streets for the new mixed-use, walkable American city.
New approach to urban street design succeeds by keeping the speed limit under 20 mph
How many inches are in a centimeter? Do you want to convert 14.7 centimeters to the equivalent in inches? First, you should determine how many inches one centimeter represents: one cm is equal to 0.3937 inches.

Meaning of centimeter
A centimeter is a common unit of length in the metric system, equal to 0.01 meter. This unit is used in the CGS system, in maps, in home repair and in many other areas of life. A single meter is approximately equivalent to 39.37 inches, so a single centimeter is approximately 0.3937 inches.

What is an inch?
The inch is a unit of length in the UK and US customary systems of measurement. An inch is equal to 1/12 of a foot or 1/36 of a yard.

How do I convert 1 cm to inches?
To convert 1 centimeter to inches, simply multiply by the conversion rate of 0.3937. So 1 cm in inches = 1 × 0.3937 = 0.3937 inches. This will allow you to answer the following questions easily and quickly.
- What is one centimeter in inches?
- What is the conversion factor from cm to inches?
- How many inches equal 1 cm?

How to convert 14.7 cm to inches?
Here is the exact formula:
Value in inches = value in cm × 0.3937
So, 14.7 cm in inches = 14.7 × 0.3937 = 5.78739 inches.
This formula will allow you to answer the following questions:
- What is 14.7 cm in inches?
- How many inches is 14.7 cm equal to?

| cm | inches |
|---------|-----------------|
| 14.3 cm | 5.62991 inches |
| 14.35 cm | 5.649595 inches |
| 14.4 cm | 5.66928 inches |
| 14.45 cm | 5.688965 inches |
| 14.5 cm | 5.70865 inches |
| 14.55 cm | 5.728335 inches |
| 14.6 cm | 5.74802 inches |
| 14.65 cm | 5.767705 inches |
| 14.7 cm | 5.78739 inches |
| 14.75 cm | 5.807075 inches |
| 14.8 cm | 5.82676 inches |
| 14.85 cm | 5.846445 inches |
| 14.9 cm | 5.86613 inches |
| 14.95 cm | 5.885815 inches |
| 15 cm | 5.9055 inches |
| 15.05 cm | 5.925185 inches |
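The conversion rule above is a single multiplication, sketched here in Python. The function name is illustrative; note that 0.3937 is the rounded factor this page uses, while the exact value is 1/2.54 ≈ 0.393701 inches per centimeter.

```python
def cm_to_inches(cm: float) -> float:
    """Convert centimeters to inches using the page's rounded factor.

    The exact factor is 1 in = 2.54 cm, i.e. 1 cm = 0.393700787... in;
    0.3937 is the 4-decimal rounding used in the table above.
    """
    return cm * 0.3937

print(round(cm_to_inches(14.7), 5))   # 5.78739, matching the worked example
print(round(cm_to_inches(15.0), 4))   # 5.9055, matching the table
```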
KDHS Lecture: THE ROLE OF TURF IN FAMILY LIFE IN WEST CLARE Clara Hoyne's lecture entitled "Drawing home the turf: the role of turf in family life in West Clare" will examine the economic, social and cultural impact that turf had in the area and its contribution to life in west Clare. Bog and turf were in abundance in County Clare, especially in the west which had vast areas of blanket bog. Little has been recorded about the history of turf cutting, but Clara's research provides rich material regarding turf in agriculture and in daily family life. Interviews were conducted by the author in the course of the research as well as exploring the National Folklore Collection and the local oral history repository, Cuimhneamh an Chláir. The findings confirm that turf played a vital part in all aspects of life in west Clare and sustained many families, in a range of trades and in an area with poor agricultural land. This study demonstrates the central role that turf played in the local economy. The talk will examine some aspects of turf harvesting, including its transport by boat, train and road; turf in family and home life and the general traditions and culture of turf cutting. Clara Hoyne undertook an MA in History of Family in 2013, pursuing her interest in genealogy and social history. She has recently retired as secretary of the Clare Roots Society having served a number of years in that role. During her time as secretary she was involved in many projects; editing many of the publications, organising community training for recording the graveyards of County Clare and organising three successful family history conferences attracting attendees from the UK, the US and Australia. More recently she contributed several articles to a local history publication in Clarecastle. Having since changed her research focus, she is currently a PhD researcher in Psychology in Mary Immaculate College, Limerick. KDHS lectures are free to members, EUR5 for non-members. 
New members are welcome. The annual membership fee (July-June) is EUR20. You can contact the Society here...
THE MOSCOW METRO
In 1923 the Moscow underground railway was designed. In 1935 the Moscow Metro opened with 11 kilometres of line and 13 stations. The Moscow Metro currently has 182 stations. It is the second most heavily used rapid transit system in the world, second only to the twin subway systems of Tokyo. The metro is 301.2 kilometres (187.2 miles) in length; it has 12 lines and 182 stations. The minimum interval between trains is 90 seconds during the morning and evening rush hours.
THE COLD WAR
The beginning of the Cold War led to the building of a deeper section of the Arbatsko-Pokrovskaya line. The stations on this line are very deep and were planned as shelters in the event of a nuclear war.
ATHENS, Ga. -- Scientists have found increasing evidence that the earth's oceans may be in serious trouble. From coral reef destruction to massive fish kills, the problems facing the world's seas are serious and growing. There's one area of marine ecology that may be important in solving many of these problems and yet remains virtually unknown: bacteria. What bacteria are in the oceans and how do they live? A new study by marine scientists at the University of Georgia is uncovering intriguing and unexpected clues about marine bacteria. Most interesting may be the dominance of bacteria from the so-called "marine alpha group" in the near-shore waters and estuaries of the Georgia coast. As much as 30 percent of the bacteria in the area belongs to a single group named marine alpha bacteria, but the reasons for it - and its significance - are still unclear. "Right now, an important goal in marine microbiology is understanding the connection between the structure of bacterial communities and their ecological function," said Dr. Mary Ann Moran. "We are studying a group of bacteria that are closely related but which may be very diverse functionally." The research was published late last year in the journal Applied and Environmental Microbiology, and it has been funded by grants from the Georgia Sea Grant College Program and the National Science Foundation. Moran has collaborated with postdoctoral associate Dr. José González on the research. Scientists agree that coastal bacteria play a substantial role in biogeochemical processes and that these bacteria could one day be important in business and industry as well as in maintaining the health of near-shore ecosystems. A number of problems, however, have kept the composition of bacterial communities largely unknown. Researchers have been reluctant to dive into the vast array of marine bacteria because they are devilishly hard to culture in laboratories.
And even the ones that have been grown often are unrelated to the bacteria that are ecologically important in marine ecosystems, making scientific connections and inferences difficult. That has all changed in the past few years with the advent of new techniques to identify bacteria at the level of their basic building blocks. Using specific gene sequences as targets for identification, scientists can recognize and identify bacteria without the need to culture them first. "The target for our work is called the 16S ribosomal RNA gene," said Moran, "and it's a good gene to focus on because all living things need ribosomes to make proteins." It's all about what happens inside the cell. Ribosomes are small cellular components where protein synthesis takes place, and they are composed of specialized ribosomal RNA (which is abbreviated rRNA) molecules. The genes coding for rRNA are an essential component of the genetic material of all prokaryotes (cellular organisms such as bacteria that lack a limiting membrane), but they vary enough to give each species a unique name tag. Moran and González found preliminary evidence that a cluster of marine bacteria may be particularly important in coastal seawater of the southeastern U.S. and then used the key rRNA gene to quantify just how abundant they may be. In work that is drawing considerable interest from marine scientists worldwide, they designed a "probe," a molecule that is labeled in some way - usually with radioactivity or fluorescence - to seek out the 16S rRNA gene of marine alpha bacteria in seawater. The probe looks for similar sequences of base pairs (the chemical building blocks of the double-stranded DNA) and marks them by hybridizing to them. Using this process, the marine scientists were able to discover, to their surprise, that up to 30 percent of the bacterial genes present at a number of locations off coastal Georgia over a three-year period were from members of the marine alpha group of bacteria.
The presence of the marine alpha bacteria dropped off in the fresh-water areas of estuaries, such as where the Altamaha River empties, indicating they are exclusively marine and specialized for the salty water found along the Georgia coast. Moran really needed more proof that the probe was correctly targeting marine alpha bacteria, however. So she and González used a technique called polymerase chain reaction (PCR) to greatly amplify the genetic sequences of the 16S rRNA genes for study, while at the same time successfully culturing the bacteria in low-nutrient seawater agar medium. "In addition to providing further evidence for the abundance of marine alpha bacteria in coastal seawater of the southeastern United States, successful culturing of bacteria from this group furnishes organisms for studies of the physiology and ecology of this important cluster," said Moran. Now that it's clear that marine alpha bacteria are abundant in near-shore marine systems, the question remains: What role do they play? As it turns out, these bacteria could have an important future place in industry, since at least one described for the first time and named by González and Moran (Sagittula stellata) can break down lignin, the polymer that acts as a natural binder and support for cellulose fibers of woody plants. The potential use of such bacteria in the pulp and paper industry is obvious. And lignin and related compounds also have many natural sources in coastal waters. Also, there is evidence that some of the bacteria may be involved in sulfur cycling - an important natural process that is also used in business and industry. Much more remains to be known before these marine bacteria can be used by humankind, however. Indeed, scientists are still trying to completely understand the role of these bacteria in the seas. These are only two aspects of the expanding research, and Moran has just received a three-year, $273,000 grant from the National Science Foundation to discover more.
The above post is reprinted from materials provided by the University of Georgia. Note: Materials may be edited for content and length.
If Easter Sunday falls “on the first Sunday AFTER the first full moon after the vernal equinox”, why doesn't it fall at different times in different time zones? This year, for example, tonight's (Easter Saturday, March 31 2018) full moon occurs after midnight in places between the International Date Line and the UTC+11 time zone. So according to the formula, Kiwi kids should have to wait another week before breaking open those chocolate eggs. Well, it turns out that the formula is not set by the astronomical path of the moon, but by a bunch of men (I've no doubt women weren't invited) who formulated the Ecclesiastical Lunar Calendar so long ago that it was before the split of the Gregorian and Julian calendars (in 325 AD/CE, in fact). Which means today we actually have two Easters, one for each of the divergent calendars, even though both follow the same formula. Anyway, in the said Ecclesiastical Lunar Calendar, the vernal equinox is always March 21, irrespective of the position of the earth in regard to its transit around the sun. And Easter is always the Sunday following the Paschal Full Moon. And for the calculation of Easter, the Paschal Full Moon is defined as being the 14th day after the Ecclesiastical Lunar new moon - so we are back to the Ecclesiastical Lunar Calendar and its ancient origins. Now it's probably a good thing that there is a universal standard or two; it means we only have two variations - the Gregorian and the Julian - of Easter throughout the world, and children in New Zealand, Fiji etc. don't have to hang out for another week to get their Easter eggs - oh, that's unless they are following the Julian calendar (as Orthodox Christians do), in which case they will have to wait until April 8 2018!
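Because the ecclesiastical rules described above are pure table arithmetic, Gregorian Easter can be computed without any astronomy at all, which is exactly why time zones don't matter. The widely published Anonymous Gregorian ("Meeus/Jones/Butcher") algorithm does this directly; the function below is a transcription of it.

```python
def gregorian_easter(year: int):
    """Gregorian Easter via the Anonymous (Meeus/Jones/Butcher) algorithm.

    The Paschal Full Moon is looked up arithmetically from the
    ecclesiastical tables, never observed, so the result is the same
    date worldwide. Returns (month, day).
    """
    a = year % 19                        # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # "epact": age of the ecclesiastical moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(gregorian_easter(2018))  # (4, 1): April 1, as the article says
```

The Julian computus uses a similar but simpler table, which is how the two calendars diverge to dates like April 8 in 2018.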
The world of technology is constantly evolving, and with it security measures must evolve as well. One such security system is Dynamic Device Key Generation (aka DDKG). DDKG is the process of creating an encryption key on a hardware device that changes every time the device is used. It's designed to be extremely secure and virtually impossible for hackers or attackers to exploit. In this blog post, we will explore what DDKG is, how it works, and why it's important for businesses and consumers alike. We will also look at how companies are implementing this technology in their own business operations and discuss best practices for using DDKG.

Dynamic Device Key Generation is a security measure used to protect IoT devices and data. Through DDKG, IoT devices can securely identify themselves to the server. It works by generating a unique key for each device that is used to encrypt and decrypt data. Certificates have become a better solution than passwords, as they can be revoked and managed, as well as hold information about their chain of trust. This means that if one device is lost or stolen, the data on it cannot be accessed by anyone who does not have the key. This makes it much more difficult for hackers to get access to sensitive information.

DDKG is a security protocol that uses a device's unique hardware characteristics to generate a safe and secure key. This key is then used to encrypt data on the device, making it difficult for attackers to access the device or encrypted data. DDKG works by first generating a random number on the device. This number is then combined with information about the device, such as its serial number and manufacturing date. This combination of numbers is fed into an algorithm which generates a unique key. This key is then used to encrypt data on the device. DDKG is a strong security protocol because it is very difficult for attackers to replicate a device's hardware characteristics.
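The mechanism just described, a random number mixed with device characteristics and fed through an algorithm, can be sketched with Python's standard `hmac` module. This is a minimal illustration under stated assumptions, not an actual DDKG implementation: the function name, the field format, and the choice of HMAC-SHA256 as the mixing step are all mine, and real vendors use their own (often hardware-backed) constructions.

```python
import hashlib
import hmac
import os

def derive_device_key(serial: str, mfg_date: str, nonce: bytes) -> bytes:
    """Sketch of the scheme described above: mix a fresh random number
    with device characteristics (serial number, manufacturing date)
    through a keyed hash to produce a per-use 256-bit key.

    Hypothetical helper for illustration only.
    """
    device_info = f"{serial}|{mfg_date}".encode()
    return hmac.new(nonce, device_info, hashlib.sha256).digest()

nonce = os.urandom(32)      # the random number generated on the device each use
key = derive_device_key("SN-0042", "2023-04-01", nonce)
assert len(key) == 32       # 256-bit, ready to use as a symmetric key
```

Because the nonce changes on every use, the derived key changes too, which is the "dynamic" property the post describes.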
Even if an attacker was able to gain access to a device, they would not be able to generate the same key and decrypt the data.

Dynamic Device Key Generation (DDKG) is a security feature that generates a unique key for each individual device. This means that even if one device is compromised, the keys for other devices remain secure. DDKG also makes it possible to revoke access for a specific device without affecting the security of other devices. DDKG is also a better solution for the IoT, as setting individual passwords for the sheer number of devices in the IoT becomes almost impossible. With DDKG, the device does most of the authentication work. DDKG is also a good solution for access management, as the unique keys only allow access from the correct IoT devices. As the access is self-generated from the device, it establishes security across the network: the devices secure their own integrity, preventing cloning and spoofing attacks.

Dynamic Device Key Generation (DDKG) is a relatively new concept in the world of cryptography and security. It addresses the problems associated with the use of static keys, which can be compromised if they are not properly managed. DDKG offers a more robust and secure solution by generating keys dynamically, making it much more difficult for an attacker to compromise the system. However, DDKG comes with its own set of challenges. One major challenge is that it requires all devices in a system to be able to generate and exchange keys securely. This can be a difficult task, particularly in systems with many devices. Another challenge is that DDKG can introduce latency into a system, as each device must generate its own key before it can communicate with other devices. This can be an issue in real-time systems where low latency is critical.

To take advantage of DDKG, device manufacturers need to first generate a unique, unchangeable key for each device during the manufacturing process.
This key can be generated using a variety of methods, but it must be impossible to change or duplicate. Once this key is generated, it needs to be securely stored on the device. When a user wants to activate a new device, they will need to generate a new key pair using the manufacturer's key as part of the process. This new key pair will be used to encrypt and decrypt data on the device. The private key should never leave the device, and the public key can be shared with anyone who needs to send data to the device.

In order to ensure that only authorized devices can access data, servers or services that store or transmit data should keep a list of approved public keys. When new data is being sent, the sender should encrypt it using one of the approved public keys. This way, even if an unauthorised device manages to get hold of the data, it won't be able to read it without also having access to the corresponding private key.

Dynamic Device Key Generation, or DDKG, is a secure method for generating device keys that can be used to authenticate devices. Not only does DDKG provide robust security protocols, but it also helps organizations easily manage the key distribution process for their various devices. With its ability to generate unique device keys on demand and quickly provision them over an API connection, DDKG provides organizations with an efficient and cost-effective way of protecting their data from unauthorised access.
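The list of approved public keys described above can be sketched as a small registry keyed by fingerprints. This is an illustrative sketch, not a real product API: the class and method names are invented here, and a production system would pair such a registry with a proper asymmetric-crypto library for the actual encryption.

```python
import hashlib

class KeyRegistry:
    """Allow-list of approved device public keys, stored as SHA-256 fingerprints.

    Mirrors the scheme described above: a server only accepts traffic
    from keys it has approved, and a single device can be revoked
    without touching any other device's credentials.
    """
    def __init__(self):
        self._approved = set()

    @staticmethod
    def fingerprint(pubkey: bytes) -> str:
        return hashlib.sha256(pubkey).hexdigest()

    def approve(self, pubkey: bytes) -> None:
        self._approved.add(self.fingerprint(pubkey))

    def revoke(self, pubkey: bytes) -> None:
        self._approved.discard(self.fingerprint(pubkey))

    def is_authorized(self, pubkey: bytes) -> bool:
        return self.fingerprint(pubkey) in self._approved

registry = KeyRegistry()
registry.approve(b"device-A-public-key")
registry.revoke(b"device-A-public-key")  # lost/stolen device: revoke just this key
```

Storing fingerprints rather than raw keys keeps the allow-list compact and avoids leaking key material if the registry itself is exposed.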
Vietnam's history is a long, exciting and fascinating one, with the oldest archaeological findings showing that people have been living there as far back as about a half million years ago - making them among the very first East Asians who practiced agriculture in that area. The first truly influential part of history in Vietnam occurred during the Bronze Age, when the Dong Son culture was in Vietnam, dramatically advancing their level of civilization.

Between 200 BC and 938 AD, the Chinese ruled over this region, having conquered the Red River Delta in the 2nd century BC - a critical expanse in Vietnamese history. This led to the inclusion of Chinese population and culture within the Vietnamese borders. The native culture, though, hung on strong, with a strong sense of national identity among the people, and though many Chinese influences still exist in Vietnam, the traditional Vietnamese culture has hung on throughout the years and is still greatly predominant in that country. The Chinese influences that continued were Confucianism and Taoism, which formed the official ideology, as well as the Chinese ideographs (writing) which were used to express the Vietnamese language. During the Chinese rule, the south of what we know as Vietnam was called the Funan kingdom - which was actually more influenced by Indian culture than by the Chinese rulers. Similarly, Champa, in the extremely far south, was considered a Hindu kingdom and grew dramatically between the 2nd and 8th centuries.

By the 10th century, China had moved out of the Tang dynasty with a huge collapse, and Vietnamese revolutionaries took the opportunity (under the leadership of Ngo Quyen) to stage continual revolts, overthrowing the Chinese soldiers and ending their rule by the year 938. When Ngo Quyen died, Vietnam was plunged into a century of anarchy, disorder and chaos, until the very first Vietnamese dynasty was formed. This dynasty was called the Ly Dynasty, which existed from 1010 to 1225, and was founded by Ly Thai To.
Within the two hundred years of the Ly Dynasty, its rulers created the Temple of Literature in Hanoi - Vietnam's first university - established an enormous system of organization, promoted agriculture and created the first system of flood control along the banks of the Red River. Under the Ly Dynasty, Confucianism greatly faded as an ideology while Buddhism grew.

After the Ly Dynasty came the Tran Dynasty, which remained until the year 1400. By this time, Vietnam was prosperous, heading off many new Chinese attempts to regain the Red River Delta and other important regions. The Later Le Dynasty followed, lasting until 1524, when Vietnam was once again conquered by the Chinese, but only for two decades, when revolutionary and self-declared emperor Le Loi defeated them once more. As the Later Le Dynasty declined, during the 17th and 18th centuries, Vietnam was divided into two zones, ruled over by the Trinh Lords in the north and the Nguyen Lords in the south. These two families were so strong because the Nguyen were supplied weaponry by the Portuguese, while the Trinh had armaments from the Dutch.

From 1771 to 1802, Vietnam broke into rebellion in Tay Son, led by three brothers of the Nguyen: Nguyen Nhac, Nguyen Hue and Nguyen Lu. The result was the crowning of Nguyen Lu as king of the South, Nguyen Nhac as king of central Vietnam and Nguyen Hue as emperor Quang Trung of the north. Again the Chinese attacked, in 1789, but once again they were fought off, making it one of the largest victories in Vietnamese history. Nguyen Anh soon stepped in, taking most of the country and declaring himself emperor Gia Long. The Nguyen Dynasty then began in 1802 and continued until 1945. This involved a great deal of social, ideological and organizational change, especially when the French moved in from 1859 until 1954, at first making Vietnam a protectorate and then making it a colony.
This created a strong anti-colonial feeling among the Vietnamese. Though they appreciated the improvements in communication, commerce and transportation brought by the French, they had a deep-seated historical desire for national independence. So in 1941, the most successful revolutionary in Vietnamese history stepped up, founding the Viet Nam Doc Lap Dong Minh Hoi, better known as the Vietminh. The Vietminh grew greatly in strength, gaining power over both the north and the south, and declared Vietnam the Democratic Republic of Vietnam in 1945. Though many negotiations took place between the French and the Vietnamese, war finally broke out in 1946, ending eight years later with the Geneva Accords, which left the Vietminh in the north and the French and their Vietnamese supporters in the south. A political protocol was then signed to reunify the country two years after the treaty was signed. In 1955 the south experienced an uprising led by Ngo Dinh Diem, which continued and built until the north announced the formation of the National Liberation Front (NLF), later known as the Vietcong. In 1963, Diem was overthrown and killed in a military coup, leading to the Vietnam War, which started in 1964. By 1965, the south was losing badly, and the US military committed combat troops to the war. This presence grew and grew with the increasing success of the Vietcong. By December 1967, there were almost half a million American men in the Vietnam War, with a death toll of 16,021. Frustration built in US fighting units; discipline and morale began to decline, use of drugs and alcohol increased, and fighting capability continued to erode. The Paris agreements were signed by the USA, South Vietnam, North Vietnam and the Vietcong in 1973, followed by the total withdrawal of US combat forces. The guerrilla war still continued, and the south fell. The US is no longer involved in the conflict, and as a result Vietnam has enjoyed its first true peace. Tourism is now at the top of Vietnam's priority list.
Sometimes interesting intellectual journeys can start with literally one small dot on a map. This happened to us when Achim was looking at a book that featured a map of nuclear power plants planned for the Soviet Union. "Do you know anything about this dot on the territory of Estonia?" he asked. I did not. The dot was somewhat misplaced both geographically and in time, it seems, but it nevertheless opened a question: what about that power plant planned for Estonia? There never was "a real" nuclear power plant in Estonia, although the ESSR had some nuclear infrastructure: for example, the 90 MW and 70 MW reactors in Paldiski, meant for training nuclear submarine crews, or the uranium processing facilities in Sillamäe. No bigger nuclear power plant was ever built. A closer look reveals a story that speaks to the core of the NuclearWaters project. Some time between 1966 and 1968, the Council of Ministers of the Union of Soviet Socialist Republics started to enquire about the possibility of building a nuclear power plant at Lake Võrtsjärv and summoned a series of meetings in Estonia. Three Estonian experts were apparently involved in the meetings, all from the Estonian Academy of Sciences: Ilmar Öpik, Harald Haberman and Anto Raukas. The document trail of these negotiations is hard to pin down, but luckily Academician Raukas is still in good health and could meet me and Achim in early May to talk about the parts that he remembered. Võrtsjärv may look big on a map, but it is extremely shallow. Initial plans envisioned an RBMK of 4,000 MW! What would this do to a lake with a volume of less than 1 cubic kilometre? The three Estonian academicians summoned help from limnology specialists, and together they reached the conclusion that even a 1,000 MW reactor would heat the lake by 10 degrees, causing a major ecological collapse. According to Raukas, raising the level of the lake was not considered, in order to protect the fertile agricultural lands of the Rannu collective farm.
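The academicians' figure is easy to sanity-check with a back-of-the-envelope calculation. All numbers below (lake volume, waste-heat output) are my own illustrative assumptions, not values from the archival record:

```python
# Rough check: how long would a reactor's waste heat take to warm the lake by 10 degrees?
LAKE_VOLUME_M3 = 0.75e9   # assumed: Võrtsjärv holds a bit under 1 cubic km of water
WASTE_HEAT_W = 2.0e9      # assumed: ~2 GW rejected by a 1,000 MW(e) plant at ~33% efficiency
RHO_WATER = 1000.0        # density of water, kg/m^3
C_WATER = 4186.0          # specific heat of water, J/(kg*K)

energy_j = LAKE_VOLUME_M3 * RHO_WATER * C_WATER * 10.0  # joules to raise the lake 10 C
days = energy_j / WASTE_HEAT_W / 86400
print(round(days))  # roughly half a year of operation
```

Ignoring evaporation, outflow and radiative cooling, the lake would reach a 10-degree rise in about six months of continuous operation, which makes the limnologists' ecological alarm easy to believe.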
Yet loads of questions remain that guide us into new avenues and archives. How important was rivalry over water resources between the energy sector and agriculture in the early Soviet Union? Would food security really weigh more than energy supply in the central planning documents? How would the experts calculate the impact of a reactor type that had never been built before? When the impressive 10-degree calculation was done, no RBMKs had yet been built. Why not consider a river, or was that the realm destined for hydroelectricity only? Why did they consider lakes and not the sea? Sosnovyi Bor was eventually built on the Baltic coast, so why not go for the Latvian coast if the purpose of the NPP was to provide energy to Riga? While memories are elusive and many documents will never be accessible, the journey continues…
At first, many saw COVID-19 as an exotic illness, specific to an outdoor meat market in Wuhan, China, where it originated. Then, it was a mild, infectious disease that affected only the elderly and sick: nothing more than the flu. But then, thousands started dying. Public opinion on the novel coronavirus (officially known as COVID-19) has evolved drastically over the last month or two. No one expected it to reach beyond mainland China, make its way to the West, and endanger the health of the public, putting millions of elderly and immunocompromised people at risk. It's clear that this is no mild flu or head cold. But it does not warrant panic. There's no reason to stock up on toilet paper and hand sanitizer. This will not be an apocalypse. Around 80% of COVID-19 cases are mild or asymptomatic, and most people recover without complications. If all goes well, things will be back to normal in a few months, provided global leaders take the actions recommended by the WHO. The general sentiment is: take this seriously, but don't panic. Whether you're concerned or not, you should take the health precautions recommended by the WHO very seriously. If you're young and healthy, you might not think you need to take any precautions at all. After all, your immune system is strong and in top shape. But there are others who may not be as healthy as you. Currently, at-risk people include the elderly and those with compromised immune systems. Take the measures below to protect yourself and your loved ones from getting COVID-19.

Steps to take to prevent getting COVID-19
- Wash your hands often with soap and water, covering all surfaces of your hands (backs, too) for about 20 seconds. Use hand sanitizer if soap isn't available.
- Cover your mouth and nose with a tissue when you cough or sneeze. Discard the tissue afterwards and wash your hands.
- Avoid touching your face with unwashed hands.
- If you feel sick, even if you think it's not COVID-19, stay at home.
- Clean surfaces and objects that have been touched by other people.
- Stay away from others. You don't have to stay completely inside; you can sit outside or take a walk if the weather is nice. Just avoid other people or stay at least 3 feet away from them.
- Quarantine yourself. This is not necessary for the majority of people, but if you fall into an at-risk category, you may want to completely quarantine yourself.
- Avoid public transportation if possible. That includes taxis and ride-sharing services.
- Avoid public settings or areas with a lot of people in general.
DEAR DR. PAUL: My four-year-old son seems to be sick all the time. We are constantly at the doctor's office getting antibiotics for his infections. I am worried, and I wonder how I can tell if my son's proneness to upper respiratory infections is a symptom of an underlying problem.

PEDIATRICIAN DR. PAUL Answers: Your question brings up two points. First, many children, especially when first entering a daycare or school setting, get sick frequently. The other point is the use, and sometimes overuse, of antibiotics in these children. We know that the average child who first enters daycare will get about 12-14 infections (either colds or gastroenteritis) per year. This is because a child who attends daycare or school for the first time is exposed to many new germs in these settings. As the years go by, the child develops immunity or protection against these infections and is sick less often. A four-year-old child, for example, may get sick more often than once a month, and if each infection lasts about a week, the child can then seem to be sick all the time. This is a common situation. The obvious concern is whether there is an underlying problem making a child prone to a large number of infections. One important clue lies in whether a child is growing well despite repeated infections. Fortunately, in most cases, these children grow normally, according to the growth curves. This is a very important and reassuring finding. Another clue we look for is how severe the infections are: Do they require hospitalization, such as for severe pneumonia or an infection of the blood? In most cases, the infections are not serious and usually subside on their own. This pattern or trend helps reassure us that the child has no underlying problem. If there is a suspicion of something more serious, then further tests are necessary. However, this is not the case in the majority of children with frequent infections.
The other point raised by this question is the need for antibiotics in a sick child. Clearly, most infections we see in children, such as colds and gastroenteritis (diarrhea and vomiting), are caused by viruses. Viral infections do not require antibiotics and usually subside on their own. The need for antibiotics depends on the presence of an infection that we suspect is caused by bacteria, such as an ear infection, a "strep-throat" infection, or pneumonia confirmed by a chest x-ray. However, if there is no suspicion of a bacterial cause of an infection, I do not feel that antibiotics are necessary. Overuse or misuse of antibiotics has resulted in certain bacteria becoming more resistant to commonly used antibiotics. Pediatrician Dr. Paul Roumeliotis is certified by the American Board of Pediatrics and the Royal College of Physicians and Surgeons of Canada. The information provided above is designed to be an educational aid only. It is not intended to replace the advice and care of your child's physician, nor is it intended to be used for medical diagnosis or treatment. If you suspect that your child has a medical condition, always consult a physician.
2. As the things that are made of any material are contained in the potentiality of the material, so the things done by any agent must be in the active power of the agent. But the potentiality of the material would not be perfectly reduced to actuality, if out of the material were made only one of those things to which the material is in potentiality.* Therefore if any agent whose power extends to various effects were to produce only one of those effects, his power would not be so completely reduced to actuality as by making many. But by the reduction of active power to actuality the effect attains to the likeness of the agent. Therefore the likeness of God would not be perfect in the universe, if there was only one grade of all beings.* 3. A creature approaches more perfectly to the likeness of God by being not only good itself, but able to act for the good of others. But no creature could do anything for the good of another creature, unless there were plurality and inequality among creatures, because the agent must be other than the patient and in a position of advantage (honorabilius) over it.* 5. The goodness of the species transcends the goodness of the individual.* Therefore the multiplication of species is a greater addition to the good of the universe than the multiplication of individuals of one species. 7. To a work contrived by sovereign goodness there ought not to be lacking the height of perfection proper to it. But the good of order in variety is better than the isolated good of any one of the things that enter into the order: therefore the good of order ought not to be wanting to the work of God; which good could not be, if there were no diversity and inequality of creatures. 
There is then diversity and inequality between creatures, not by chance, not from diversity of elements, not by the intervention of any (inferior) cause, or consideration of merit, but by the special intention of God, wishing to give the creature such perfection as it was capable of having. Hence it is said, God saw all things that he had made, and they were very good (Gen. i, 31); and this after He had said of them singly, that they were good; because while things are good singly in their several natures, all taken together they are very good, because of the order of the universe, which is the final and noblest perfection of creation.
Otoplasty surgery is performed to improve the appearance of the ears. This is often done to address excessively protruding ears, abnormally large ears, or other ear deformities such as those sometimes seen after skin cancer resection. Otoplasty is appropriate for a patient of any age who is bothered by the appearance of his or her ears. Often surgery is done as early as 5 years old, once the ears have reached full size and before the child enters school. Surgery is performed through an incision hidden in the crease behind the ear. Dr. Obourn typically utilizes "cartilage-sparing" techniques that involve placement of a series of sutures to help reshape the ear. That being said, this is a highly customizable procedure, and Dr. Obourn will develop your surgical plan based on your specific concerns and goals. Otoplasty can be performed under general anesthesia, or using local anesthesia with some sedation in the office. General anesthesia is usually necessary for younger children. Risks include:
- Hematoma (collection of blood under the skin requiring drainage)
- Temporary or permanent paresthesias (i.e. numbness, tingling, pins/needles, etc.)
Click here for Postoperative Instructions: Postop Instructions Otoplasty
Mutations in BRCA genes have been linked to an increased risk of developing breast and/or ovarian cancer. Despite what their names might suggest, BRCA genes do not cause breast cancer; in fact, these genes normally play a big role in preventing it. Both BRCA1 (BReast CAncer gene 1) and BRCA2 (BReast CAncer gene 2) are tumor suppressor genes: they hold the code for proteins that repair DNA damage in cells and help regulate cell division and cell death. Everyone has two copies of BRCA1 (one on each chromosome 17) and two copies of BRCA2 (one on each chromosome 13). When these genes are rendered inactive by mutation, uncontrolled cell growth can result, leading to cancer.

BRCA1 and BRCA2 mutations can be passed from a mother or father to a son or daughter, and they have been found in people all over the world. They are inherited in an autosomal dominant pattern, which means one copy of the altered gene in each cell is sufficient to increase a person's chance of developing cancer; people with a first-degree relative (a parent, sibling, or child) who carries a BRCA1 or BRCA2 mutation have a 50% chance of having inherited the mutation. The mutations are highly penetrant, carrying a lifetime cancer risk of 30-70% (Ford et al., 1998), with variation related to genetic background (Nathanson et al., 2001). Most inherited cases of breast cancer are associated with one of these two genes, and women who inherit a mutation in either have a much higher-than-average lifetime risk of developing breast and ovarian cancer. BRCA1 mutations have also been linked to increased risk of pancreatic, cervical, uterine, and colon cancers. Men with an abnormal BRCA1 gene have a slightly higher risk of prostate cancer, while men with an abnormal BRCA2 gene are 7 times more likely than men without the abnormal gene to develop prostate cancer; other cancer risks, such as cancers of the skin or digestive tract, may also be slightly higher in men with abnormal BRCA1 or BRCA2 genes.

Genetic testing for BRCA1 and BRCA2 mutations is current practice for women with a family history of breast or ovarian cancer. The BRCA gene test is a blood test that uses DNA analysis, performed by PCR amplification of individual exons and their flanking regions, to identify harmful changes (mutations) in either gene. There has been controversy regarding the risk of colorectal cancer conferred by germline mutations in these genes; one study followed 7015 women with a BRCA mutation for new cases of colorectal cancer. In one reported treatment series, median progression-free survival and overall survival were longest for BRCA2 carriers, at 21.6 and 75.2 months, while for mutations in genes other than BRCA they were 16 and 56 months, similar to the figures seen for BRCA1 carriers.
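The 50% figure for first-degree relatives follows directly from autosomal dominant inheritance: a carrier parent passes one of their two gene copies at random. A toy simulation (the allele labels and function are mine, purely illustrative):

```python
import random

random.seed(0)

# A carrier parent has one mutated ('M') and one normal ('n') copy of the gene;
# the other parent is assumed to carry two normal copies.
def child_is_carrier(carrier=("M", "n"), other=("n", "n")):
    # each parent contributes one randomly chosen copy
    return random.choice(carrier) == "M" or random.choice(other) == "M"

trials = 100_000
rate = sum(child_is_carrier() for _ in range(trials)) / trials
print(round(rate, 2))  # ~0.5: each child has about a 50% chance of inheriting the mutation
```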
The Weird Thing Scientists Believe Might Cause Alzheimer's

Researchers at Duke University performed an experiment they believe might point to what allows Alzheimer's to gain a foothold and begin to ruin people's lives. The study was conducted with lab mice, and it showed that when Alzheimer's begins to develop, the brain's natural protective agents actually go M.I.A., allowing the degenerative neurological disease to become more destructive. In particular, the researchers noticed that immune cells called microglia work in "reverse." Normally tasked with helping to protect the brain, they instead begin to dampen the immune system, which allows the disease to progress. CBS News notes: "The study, published today in the Journal of Neuroscience, opens up the possibility that two of the main suspects in Alzheimer's development — amyloid plaques and tangles of tau proteins in the brain — are not acting alone." How surprising is this? Apparently it's news to many medical professionals. Dr. Jon Lapook, who serves as CBS News chief medical correspondent, said, "You're finding a molecular pathway that we hadn't thought of at all that might be important for the development of Alzheimer's…[Instead of just focusing on amyloid plaques and tau proteins] now they're looking at something new. They're looking at these supportive cells called microglia that are inside the brain, they help to support the surrounding neurons." The scientists' research demonstrated [in mice] that when Alzheimer's begins to set in, the microglia increase their secretion of an enzyme called arginase. In terms of how this affects brain health, arginase is responsible for lowering the levels of a very important amino acid known as arginine, which is essential for nourishing sensitive neurons. Further, arginine supplementation won't [and can't] reverse the effects of arginase, and the Duke researchers are trying to use this data to come up with effective treatments.
The one limitation of this research is that it only shows what happens once Alzheimer's has developed. Preventing the formation of Alzheimer's in the first place would be ideal, because then the microglia would not work against the brain and leave it susceptible to further degeneration. Vitamin K-2 may help on that front: as K-2 works to move calcium out of the blood and into the bones, it can keep calcium out of the brain, and it's believed that calcium in the brain can also be a contributing factor in the onset of Alzheimer's. K-2, along with a healthy diet and an active mental state, can help protect your brain from eventual degeneration.
Word Clouds are a visual representation of the frequency of words within a given body of text. Often they are used to visualize the frequency of words within large text documents, qualitative research data, public speeches, website tags, End User License Agreements (EULAs) and unstructured data sources. Wordle is a Java tool for generating “word clouds” from text that you provide, created by Jonathan Feinberg. The clouds give greater prominence to words that appear more frequently in the source text, and you can tweak your clouds with different fonts, layouts, and color schemes. The images you create with Wordle are yours to use however you like. Available as the original Java web version, and desktop versions for Windows and Mac. TagCrowd is a web application for visualizing word frequencies in any text by creating word clouds, and was created by Daniel Steinbock while a PhD student at Stanford University. You can enter text in three ways: paste text, upload a text file or enter the URL of a web page to visualize. WordArt.com is an online word cloud art creator that enables you to create amazing and unique word cloud art with ease. You can customize every bit of word cloud art including: words, shapes, fonts, colors, layouts and more! ToCloud is an online free word cloud generator that uses word frequency as the weight. Based on the text from a webpage or pasted text, the generated word cloud of a page gives a quick understanding of how the page is optimized for certain words. Wordclouds.com is a free online word cloud generator and tag cloud creator. Wordclouds.com works on your PC, Tablet or smartphone. Paste text, upload a document or open an URL to automatically generate a word- or tag cloud. Customize your cloud with shapes, themes, colors and fonts. You can also edit the word list, cloud size and gap size. Wordclouds.com can also generate clickable word clouds with links (image map). When you are satisfied with the result, save the image and share it online. 
Vizzlo is an online data visualization tool, and creating word clouds is one of its capabilities. Vizzlo does offer word cloud creation to free users, but the output includes the Vizzlo watermark; you have to be on one of the paid accounts to remove it. Word Cloud Maker is an advanced online FREE word cloud generator that enables you to upload a background photo or select a design from the gallery upon which your word cloud art will be superimposed. You can download the word clouds to your local computer in multiple formats such as vector svg, png, jpg, jpeg, pdf and more, and use them in your content for free. Infogram is an online chart maker used to design infographics, presentations, reports and more. It's free to create an account, and word clouds are one of their charting options. You have to upgrade to a paid plan to remove the Infogram logo and get access to download options for your designs. WordSift was created to help teachers manage the demands of vocabulary and academic language in their text materials. Its options are very similar to those of Jason Davies' word cloud generator, but it is easier to use.
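Under the hood, all of these tools start from the same step: counting word frequencies, which then drive the font size of each word in the cloud. A minimal sketch in Python (the sample text, function name and thresholds are arbitrary choices for illustration):

```python
import re
from collections import Counter

def word_frequencies(text, top_n=3, min_len=3):
    """Return the top_n most frequent words -- the weights a cloud uses for sizing."""
    words = re.findall(r"[a-z']+", text.lower())
    # drop very short words, a crude stand-in for real stop-word filtering
    counts = Counter(w for w in words if len(w) >= min_len)
    return counts.most_common(top_n)

sample = ("word clouds weight each word by frequency so frequent "
          "words appear larger in the word cloud")
print(word_frequencies(sample))  # 'word' appears most often, so it would render largest
```

Real generators add stop-word lists, stemming and layout algorithms on top, but the frequency count above is the core signal every one of these services visualizes.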
- NEW: Vaccine is about 62% effective, CDC director says
- Getting the flu vaccine isn't a guarantee that you'll be healthy
- The most common flu strain this season is a match with the vaccine
- But even if it doesn't stop illness, it can stop some symptoms, Dr. Bill Schaffner says

So you got a flu vaccine this season, and you've been reading about the flu epidemic. You might be wondering: Will the vaccine keep me healthy? The answer: The vaccine isn't a guarantee that you'll be flu-free. In fact, it's about 62% effective, said the director of the Centers for Disease Control and Prevention, Dr. Thomas Frieden, on Friday. It can prevent the flu, and it can be beneficial even if you do get sick, said Dr. Bill Schaffner, chairman of the preventive medicine department at Vanderbilt University's School of Medicine. "It's the best vaccine we have, but there are cases of influenza that occur despite immunization," Schaffner said on "CNN Newsroom" Thursday morning. The effectiveness of the vaccine depends, in part, on how well it matches the strains of viruses that actually end up prevailing during the flu season. So far, according to the CDC, this year's North American vaccine matches well with the most predominant type of flu spreading in the United States, but is less well matched to the No. 2 type of virus. This year's North American vaccine is made from three viruses: two types of influenza A virus (H3N2 and H1N1) and an influenza B virus. As of January 5, the CDC said Friday, the predominant virus in the United States was an influenza A (H3N2) virus that matched well with the H3N2 virus represented in the vaccine. Of the H3N2 viruses tested at that point, 99.4% matched the type that the vaccine protects against. The match for influenza B viruses, the second-most common this season, wasn't as good. Of the influenza B viruses tested between October 1 and January 5, 66.7% were the type represented in the vaccine.
The match for H1N1, much rarer this season, was 100%, the CDC said. Of the viruses collected and tested in the United States in early January, 20.2% were influenza B and the rest were influenza A -- the vast majority of which appeared to be of the H3 variety, according to the CDC. The vaccine generally works better "in young, healthy people than it does in older persons," Schaffner said. The CDC concurs: Even when vaccinated, some older people and people with certain chronic illnesses might not develop the same high levels of immunity as healthy, young adults, the CDC says on its website. But even if it doesn't prevent illness in a particular person, the vaccine still can mitigate some of the symptoms. "(Flu vaccines) are often of benefit because they can prevent some of the complications. It makes a more serious infection somewhat milder," Schaffner said. Keep in mind that the vaccine doesn't work right away. "It takes about two weeks after vaccination for antibodies to develop in the body and provide protection against the flu," the CDC's website says. It's not too late to get a flu shot, Schaffner said. "Influenza is going to be with us into February and even beyond. If you haven't been vaccinated, please, take advantage of the benefits of influenza vaccine," he said. "Run, do not walk -- get the vaccine. Protect yourself and everyone around you." If you did get the vaccine but still came down with the flu, you might wonder if the vaccine caused the illness. It did not, Schaffner said. "You can get a bit of a sore arm if you get the injection. If you get the nasal spray variety, you can have a sore throat for a day and a runny nose. But you can't get flu from the flu vaccine," he said. Dr. Sanjay Gupta, CNN chief medical correspondent, endorsed getting a flu vaccine and frequently washing your hands. "With soap and water, wash your hands for two 'Happy Birthday' songs," he advised in an interview on CNN's "Anderson Cooper 360˚."
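The match figures above can be combined into a rough overall estimate by weighting each strain's match rate by its share of circulating viruses. This is my own illustrative arithmetic, and it treats all influenza A as H3N2, a simplification since the CDC said only the "vast majority" were H3; note also that strain match is not the same thing as the 62% effectiveness figure, which measures prevented illness:

```python
# Shares and match rates quoted in the article (early-January figures)
share_b = 0.202          # 20.2% of tested viruses were influenza B
share_a = 1 - share_b    # the rest were influenza A, assumed all H3N2 here
match_a, match_b = 0.994, 0.667

overall_match = share_a * match_a + share_b * match_b
print(round(overall_match, 2))  # ~0.93: most circulating viruses matched a vaccine strain
```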
Primitive structures deep within the brain may have a far greater role in our high-level everyday thinking processes than previously believed, report researchers at the MIT Picower Center for Learning and Memory in the Feb. 24 issue of Nature. The results of this study led by Earl K. Miller, associate director of the Picower Center at MIT, have implications about how we learn. The new knowledge also may lead to better understanding and treatment for autism and schizophrenia, which could result from an imbalance between primitive and more advanced brain systems. Our brains have evolved a fast, reliable way to learn rules such as "stop at red" and "go at green." Dogma has it that the "big boss" lobes of the cerebral cortex, responsible for daily and long-term decision-making, learn the rules first and then transfer the knowledge to the more primitive, large forebrain region known as the basal ganglia, buried under the cortex. Although both regions are known to be involved in learning rules that become automatic enough for us to follow without much thought, no one had ever determined each one's specific role. In this study, Miller, who is the Picower Professor of Neuroscience, and postdoctoral associate Anitha Pasupathy found that in monkeys, the striatum (the input structure of the basal ganglia) showed more rapid change in the learning process than the more highly evolved prefrontal cortex. Their results suggest that the basal ganglia first identify the rule, and then "train" the prefrontal cortex, which absorbs the lesson more slowly. "These findings suggest new ways of thinking about learning," Miller said. "They suggest that new learning isn't simply the smarter bits of our brain such as the cortex 'figuring things out.' Instead, we should think of learning as interaction between our primitive brain structures and our more advanced cortex. 
In other words, primitive brain structures might be the engine driving even our most advanced high-level, intelligent learning abilities," he said. The cortex--the "thinking" part of the brain--is highly developed in humans. This is especially true for the prefrontal cortex. Common wisdom suggests that when we learn new things, the prefrontal cortex figures things out first. Then, as our behaviors become familiar and habitual, the more primitive, subcortical basal ganglia take over so that the now-familiar routines can be run off automatically and occupy less of our thoughts. "What we found was evidence for something very different," Pasupathy said. "We found that as monkeys learn new, simple rules--associations analogous to 'stop at red, go at green'--the striatum of the basal ganglia shows evidence of learning much sooner and faster than the prefrontal cortex. But an interesting wrinkle is that the monkeys' behavior improved at a slow rate, similar to that of the slower changes in prefrontal cortex." This suggests that while the basal ganglia "learn" first, their output forces the prefrontal cortex to change, albeit at a slower rate. The researchers speculate that perhaps the faster learning in the basal ganglia allows us (and our primitive ancestors who lacked a prefrontal cortex) to quickly pick up important information needed for survival. The prefrontal cortex then monitors what the basal ganglia have learned. Its slower, more deliberate learning mechanisms allow it to gather a more judicious "big picture" of what is going on by taking into account more history, and thereby exert executive control over behavior, Miller said. This work was supported by the National Institute of Neurological Disorders and Stroke and the Tourette's Syndrome Association. A version of this article appeared in the March 2, 2005 issue of MIT Tech Talk (Volume 49, Number 19).
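The fast-teacher/slow-student dynamic the researchers describe can be caricatured with two coupled learning rates. This toy model is entirely my own illustration, not the paper's analysis; the rates and the linear update rule are arbitrary:

```python
# Toy model: a fast module (striatum) acquires the rule quickly, while a slow
# module (prefrontal cortex) gradually tracks the fast module's output.
fast, slow = 0.0, 0.0
FAST_RATE, SLOW_RATE, TARGET = 0.5, 0.05, 1.0

history = []
for trial in range(100):
    fast += FAST_RATE * (TARGET - fast)  # rapid rule learning toward the target
    slow += SLOW_RATE * (fast - slow)    # slower learning, driven by the fast module
    history.append((fast, slow))

# After 10 trials the fast module is near 1.0 while the slow module still lags,
# mirroring the observation that striatal change precedes prefrontal change.
print(round(history[9][0], 3), round(history[9][1], 3))
```

In this caricature, behavior that depends on the slow module improves only gradually even though the fast module has already "solved" the task, echoing the wrinkle Pasupathy describes.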
Solar energy facts: Solar energy refers to energy from the sun, the most important source of energy for life on Earth. It is a renewable source of energy, unlike fossil fuels. Solar energy is also the most abundant energy resource on Earth: roughly 173,000 terawatts of solar energy strikes the Earth continuously, and every hour the sun beams onto Earth more than enough energy to satisfy global energy needs for an entire year. Humans have harnessed sunlight for a long time, for uses such as heating, cooking food, and removing salt from seawater. Solar power is a rapidly growing field, and the cost of solar panels has fallen dramatically in recent years.
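The "one hour of sunlight equals one year of demand" claim is easy to sanity-check with rough numbers. The 173,000 TW figure comes from the text; the ~580 EJ/year figure for global primary energy use is an assumed, commonly cited ballpark, not from the text:

```python
# Back-of-the-envelope check: energy delivered by the sun in one hour
# versus a rough estimate of annual global primary energy demand.
solar_power_w = 173_000e12         # 173,000 TW, in watts (from the text)
one_hour_j = solar_power_w * 3600  # joules delivered in one hour
annual_demand_j = 580e18           # ~580 EJ/year, an assumed ballpark

ratio = one_hour_j / annual_demand_j
print(f"one hour of sunlight: {one_hour_j:.2e} J")
print(f"annual global demand: {annual_demand_j:.2e} J")
print(f"ratio: {ratio:.2f}")
```

The ratio comes out slightly above 1, so the claim is at least the right order of magnitude under these assumptions.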
The coyote is a canid native to North America and Central America. It is smaller than its close relative, the gray wolf, being roughly the North American equivalent of the Old World golden jackal. The coyote appears often in the tales and traditions of Native Americans, usually as a very savvy and clever beast, and modern coyotes have displayed that same adaptability. The coyote has grayish-brown to yellowish-brown fur on top and whitish fur on its underparts, with large triangular ears on the top of its head and a long, narrow muzzle. A good sense of smell, vision and hearing, coupled with evasiveness, enables the coyote to survive both in the wild and, increasingly, in cities and suburbs; the first recorded sighting in Washington, D.C.'s Rock Creek Park was in May 2004, confirmed by park staff. If you see a coyote in the city or suburbs, don't be alarmed: attacks on humans are very rare. The coyote is a medium-sized member of the dog family that includes wolves, and coyotes are native to North America, where they currently occur across much of the continent.
Ontogenetic diet shifts, where individuals change their resource use during development, are the rule rather than the exception in the animal world. Hanna ten Brink studied the effects of competition among juveniles on the likelihood of an adaptive radiation arising, using a size-structured consumer-resource model and the adaptive dynamics approach. ten Brink, H., & Seehausen, O. (2022). Competition among small individuals hinders adaptive radiation despite ecological opportunity. Proceedings of the Royal Society B: Biological Sciences, 289(1971), 20212655. doi:10.1098/rspb.2021.2655
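For readers unfamiliar with the modeling style, a size-structured consumer-resource model tracks distinct life stages competing for shared food. The sketch below is a deliberately minimal, hypothetical discrete-time caricature (it is not the model from the paper, and every parameter value is invented) in which juveniles forage on a logistically growing resource and crowded juveniles mature more slowly:

```python
# Deliberately minimal, hypothetical caricature of a size-structured
# consumer-resource model (NOT the model from the paper; all parameter
# values are invented). Juveniles compete for a logistically growing
# resource, and scarce food slows their maturation into adults.

def step(juveniles, adults, resource,
         r=0.5, K=100.0, attack=0.02, maturation=0.3,
         fecundity=0.4, mortality=0.1):
    intake = attack * resource * juveniles          # juvenile foraging
    resource_next = resource + r * resource * (1 - resource / K) - intake
    # Maturation saturates with intake: crowded juveniles mature slowly.
    matured = maturation * (intake / (1.0 + intake)) * juveniles
    juveniles_next = juveniles + fecundity * adults - matured - mortality * juveniles
    adults_next = adults + matured - mortality * adults
    return (max(juveniles_next, 0.0), max(adults_next, 0.0),
            max(resource_next, 0.0))

j, a, res = 10.0, 5.0, 80.0
for _ in range(200):
    j, a, res = step(j, a, res)
print(f"juveniles={j:.1f}, adults={a:.1f}, resource={res:.1f}")
```

The qualitative point such models capture is the one the abstract names: when many small individuals compete for the same resource, the juvenile stage can become a bottleneck that shapes whether diversification pays off.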
Thirty-thousand-year-old bison bones discovered in permafrost at a Canadian goldmine are helping scientists unravel the mystery about how animals adapt to rapid environmental change. The bones play a key role in a world-first study, led by University of Adelaide researchers, which analyses special genetic modifications that turn genes on and off, without altering the DNA sequence itself. These ‘epigenetic’ changes can occur rapidly between generations – without requiring the time for standard evolutionary processes. Such epigenetic modifications could explain how animal species are able to respond to rapid climate change. In a collaboration between the University of Adelaide’s Australian Centre for Ancient DNA (ACAD) and Sydney’s Victor Chang Cardiac Research Institute, researchers have shown that it is possible to accurately measure epigenetic modifications in extinct animals and populations. The team of researchers measured epigenetic modifications in 30,000-year-old permafrost bones from the Yukon region in Canada, and compared them to those in modern-day cattle, and a 30-year-old mummified cow from New Zealand. Project leader Professor Alan Cooper, Director of ACAD, says: “Epigenetics is challenging some of our standard views of evolutionary adaptation, and the way we think about how animals use and inherit their DNA. In theory, such systems would be invaluable for a wide range of rapid evolutionary adaptation but it has not been possible to measure how or whether they are used in nature, or over evolutionary timescales.” Epigenetics specialist and co-investigator Dr Catherine Suter, from the Victor Chang Institute, has been studying the role of epigenetics in adaptation in laboratory animals. She jumped at the chance to test epigenetic methods in ancient DNA, which had never previously been attempted. “This is the first step towards testing the idea that epigenetics has driven evolution in natural populations,” Dr Suter says. 
Professor Cooper says: “The climate record shows that very rapid change has been a persistent feature of the recent past, and organisms would need to adapt to these changes in their environment equally quickly. Standard mutation and selection processes are likely to be too slow in many of these situations.” “Standard genetic tests do not detect epigenetic changes, because the actual DNA sequence is the same,” says lead author, ACAD senior researcher Bastien Llamas, an Australian Research Council (ARC) Fellow. “However, we were able to use special methods to show that epigenetic sites in this extinct species were comparable to modern cattle. “There is growing interest in the potential evolutionary role of epigenetic changes, but to truly demonstrate this will require studies of past populations as they experience major environmental changes,” he says.
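As a concrete and entirely hypothetical illustration of the kind of comparison described, per-site methylation levels can be estimated as the fraction of sequencing reads supporting methylation at each CpG site, then compared across samples. The site names and read counts below are invented, and real ancient-DNA workflows involve far more correction steps:

```python
# Toy comparison of per-site methylation levels between an "ancient" and
# a "modern" sample. All counts and site names are fabricated examples.

def methylation_fraction(methylated_reads, total_reads):
    """Fraction of reads supporting methylation at one CpG site."""
    if total_reads == 0:
        return None  # no coverage at this site, so no estimate
    return methylated_reads / total_reads

# (methylated reads, total reads) per hypothetical CpG site
ancient = {"site_1": (18, 20), "site_2": (3, 25), "site_3": (10, 19)}
modern  = {"site_1": (45, 50), "site_2": (6, 60), "site_3": (30, 55)}

for site in sorted(ancient):
    a = methylation_fraction(*ancient[site])
    m = methylation_fraction(*modern[site])
    print(f"{site}: ancient={a:.2f} modern={m:.2f} diff={abs(a - m):.2f}")
```

Small per-site differences in such a comparison would be consistent with the study's finding that epigenetic sites in the extinct animals were comparable to those in modern cattle.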
The threat Asian carp pose to the Great Lakes may be politically controversial, but the controversy pales in comparison to the costs and danger of continuing to wring hands over established facts. It's time, a Michigan State University fisheries expert says, to let science drive policy and put knowledge into action. "You know it's big when academics and the management community say we don't need five more years of study," said Bill Taylor, University Distinguished professor in global fisheries sustainability at Michigan State University and a member of MSU's Center for Systems Integration and Sustainability. "The costs of hydrological separation are high, but it's a one-time expense and remediation in the Great Lakes from these invasive species will eventually make separation look cheap." Taylor is one of four Great Lakes and Mississippi River researchers publishing a paper which breaks down four recent assertions that downplay the threat of the invasive Asian carp and questions the need to investigate ways to physically separate the Great Lakes and Mississippi River basins to prevent the further spread of harmful non-native species. Dividing the Waters: The Case for Hydrologic Separation of the North American Great Lakes and Mississippi River Basins is published June 30 in the Journal of Great Lakes Research. In addition to Taylor, it is authored by Jerry Rasmussen of Natural Resource Management Associates in Le Claire, Iowa; Henry Regier, University of Toronto; and Richard Sparks, a senior scientist at the National Great Rivers Research and Education Center, Godfrey, Ill. The authors conclude that the threats posed by the Asian carp and other invasive species remain high and warrant action to prevent further ecological and economic harm to the Great Lakes ecosystem. The paper examines four recent claims by policy makers that downplay the need for action. Implications that more study is needed are exasperating the science community, Taylor said.
Science has done its job by reaching a thoughtful and clear conclusion. Now, he said, is a time for action -- or at the least a clear decision not to take action. "I am tired of studying what we already know is going to happen," Taylor said. "We've watched this coming on for 10 years. We know what's going to happen." Taylor said science clearly points to the likelihood that invasive Asian carp, which are voracious feeders, will prove to be highly effective food competitors to native fish species. It's not the vast open areas of the Great Lakes he sees as threatened, but rather the lakes' nearshore areas, wetlands and tributaries -- rivers that now serve as rich habitat for diverse and highly productive fish communities, including our economically important game fish. "The Asian carp are going to whack the tributaries," Taylor said. "They're going to eat all the food -- they eat anything they get in their mouth and that means they'll eat the food base that our resident fish would normally eat. They will change the food web and dominate our streams and nearshore regions in the Great Lakes basin." Congress, according to the paper, can play an important role in addressing the invasive species problem by passing legislation mandating the U.S. Army Corps of Engineers to complete a study that offers a permanent solution based on the best scientific information and engineering technology available. The Center for Systems Integration and Sustainability works in the innovative new field of coupled human and natural systems to find sustainable solutions that both benefit the environment and enable people to thrive.
As central Italy continues to dig out from a devastating earthquake that killed more than 150 people, the country is abuzz with questions about whether an Italian scientist predicted the deadly event and had his warning silenced by local officials. But little about this affair is clear, perhaps not surprising given the chaos of the postquake situation. And several veteran seismologists and earth scientists who spoke with ScienceInsider said they were not familiar with the now-famous researcher or his work, and they expressed skepticism that his much-publicized measurements of radon gas offered a reliable predictive technique for earthquakes. The basics have been summed up in many media reports, although the English-language accounts seem to depend largely on translations of Italian media reports. The Los Angeles Times offers details of Giampaolo Gioacchino Giuliani’s apparent warnings several weeks ago of an upcoming earthquake in the region. But who is Giuliani? He works at the National Laboratories at Gran Sasso, though he has been variously identified as a seismologist, a physicist, and a technician; he is affiliated with Italy’s National Institute of Nuclear Physics rather than with the Gran Sasso laboratory itself. “He is a technician in a collaboration with Gran Sasso, which is based in Turin—and his work on earthquakes is a hobby, nothing to do with the research project here,” the institute's director told Nature, adding that the research center has been a “bit embarrassed” by the media reports. Italian news media have reported that Giuliani has been developing his radon detector/earthquake predictor for several years in association with other researchers and with CAEN, a company that supplies equipment for physics research. According to translations of other Italian media and discussions with Italian researchers who know Giuliani, he has been studying the correlation between earthquakes and radon, a gas emitted by Earth’s crust.
He has reportedly claimed to have developed radon-monitoring devices that can give the precise location and magnitude of a future quake, hours or more before it occurs. One such machine is reportedly located at Gran Sasso, whereas the others are reportedly in nearby Abruzzo, where the earthquake hit on Monday. There have certainly been decades of research into radon and earthquake predictability, particularly in Japan, but interest in the topic has dwindled, as it has for many alleged clues that might predict quakes. Paul G. Silver of the Carnegie Institution of Washington in Washington, D.C., a seismologist who co-authored a review on earthquake prediction in 1976, offered this reaction to the Italian quake and Giuliani: “It is highly unlikely that radon research has gotten to the stage where radon could be used as the basis for making an actual prediction. We still do not yet know how to predict earthquakes, so I think his warnings were treated properly. Some news outlets reported increased seismic activity over the past few weeks in the area. If true, this would still not be sufficient to issue a warning, given our present state of knowledge. To my knowledge, neither radon nor any other kind of observation [other than foreshocks] has been shown to be a reliable precursor of earthquakes. Of course, it would be useful to see his observations." And Ian Main, a University of Edinburgh seismologist who chaired an influential online debate a decade ago on whether earthquakes can be predicted, says that radon has been considered a “marginal [quake] precursor. It hasn’t stood up in a statistical way.” Paolo Diodati of the University of Perugia, who also works at Gran Sasso, thinks Giuliani is wrong to support the idea that earthquakes can be predicted. However, Diodati thinks Giuliani’s findings might deserve the attention of the scientific community. And he was critical of how the local officials handled the situation.
"If a researcher is put under investigation after he informed the public about what his data are telling him, scientists will feel intimidated and never take the risk of launching any alarm again." In contrast, Francesco Mulargia, a seismologist at the University of Bologna who has written extensively on earthquake predictions, was more dismissive, saying in an e-mail: “Radon as a precursor has been quite extensively studied in the last three decades and did not stand the validation with the Scientific Method. This led to the widely accepted conclusion that it cannot be proposed as a reliable earthquake precursor. The guy who made the prediction is unknown to the seismological community. Neither his method of analysis nor his data have ever been published in a peer reviewed journal or presented at a scientific conference. I am afraid that under these terms they can hardly be taken into any serious consideration.” Will the Italian earthquake revive the stalled field of earthquake predictability? Few seem to think so. “It’s difficult to draw conclusions from any one case,” notes Main.
ESTCP Project 201412: Cadmium-Free Alternatives for Brush Plating Repair Operations By Christopher Venturella Environmental Effect Engineer, University of Dayton Research Institute Cadmium (Cd) has been widely used in the aerospace industry and the Department of Defense (DoD) due to its excellent corrosion resistance, adhesion, and lubricity characteristics. Cadmium brush electroplating is used to repair worn and corroded parts, in many cases while the components are still installed on the aircraft. However, Cd is a toxic metal and human carcinogen that is heavily regulated in the United States and the European Union. Traditional brush plating is an “open” process that leads to potential worker exposure and contamination of surrounding areas. Due to Cd toxicity and the waste generated during cadmium operations, the use of cadmium at DoD repair depots and in field operations carries many environmental, health and safety problems, as well as logistical problems and disposal costs. DoD, Air Force, and Occupational Safety and Health Administration (OSHA) policies require the Air Force to eliminate Cd uses and exposures. During the last few years, OSHA issued multiple citations against Air Force depots. Executive Order 13423, issued January 24, 2007, “Strengthening Federal Environmental, Energy, and Transportation Management”, requires government agencies to reduce the use and disposal of toxic and hazardous materials, including Cd. ESTCP Project Focus: This joint U.S. Air Force-ESTCP project focuses on the elimination of toxic/carcinogenic cadmium material for brush plating repair operations, and the reduction of solid waste associated with traditional brush plating repair processes. The Dalistick Station is a Commercial-off-the-Shelf (COTS) mobile electroplating system that enables selective electrochemical treatments without generating any leakage of electrolyte during the plating process.
Residual brush plating solution is captured and recycled for re-use in a closed-loop system. The Station is designed to perform plating and surface finishing operations on steels or light alloys on site, at DoD repair depots, or in the field. These treatments can be performed on curved, horizontal, and/or vertical surfaces and edges without any leakage of electrolyte. The generation of solid waste is also reduced through this process. The Zinidal Aero Zn-Ni Solution is also a COTS system and is a candidate to replace Cd brush plating on high strength steels. It is intended to be used for repair applications on weapon systems parts and components (landing gear, terminal assemblies, landing gear doors, bushings, etc.). The coating provides sacrificial corrosion protection to steels, and the process does not require hydrogen embrittlement relief baking when plating on high strength steels. The use of the Dalistick Station and Zinidal solution is expected to offer the following immediate and long-term cost, regulatory, and environmental, health, and safety benefits: - Elimination of Cd brush plating in repair operations - Reduction of occupational and environmental hazards associated with Cd brush plating - Avoidance of compliance issues in military repair operations - Cost savings due to recycle and reuse of plating solution in a closed-loop process - Reduction of solid waste generated from traditional brush plating - Reduction of worker exposure to hazardous materials and to residual brush plating solutions - Reduction of exposure monitoring, need for personal protective equipment, permitting, and record keeping The Dalistick Station and Zinidal Aero Zn-Ni Solution are currently undergoing qualification testing. Following positive results from the qualification testing, the system will be placed in the field at an Air Force Air Logistics Center to further evaluate the systems’ performance under field repair conditions.
Reviewed and approved for public release: 88ABW-2016-3433
Feb 22, 2018 In this short episode, we review the basics of the refrigerant circuit. The standard HVAC refrigeration circuit has four main components: compressor, condenser, metering device, and evaporator. The compressor squeezes refrigerant vapor into a smaller volume by applying lots of pressure. It simultaneously moves and compresses gaseous refrigerant. The more a compressor has to compress a gas, the less gas it moves. The more gas a compressor moves, the less gas it compresses. Then, the refrigerant leaves the compressor via the discharge line. The discharge line is very hot because the temperature increases with pressure. The hot vapor feeds into the top of the condenser. The condenser brings the gaseous refrigerant back down to a liquid. Condensers come in all shapes for various applications, but all condensers' main goal is heat exchange. Condensers desuperheat, fully condense (change vapor to liquid), and subcool. Subcooled liquid refrigerant leaves the bottom of the condenser via the liquid line. The liquid line leads warm, subcooled liquid refrigerant to the metering device. The metering device's goal is to drop the refrigerant's pressure. That pressure drop facilitates boiling in the evaporator coil. The evaporator absorbs heat from the space. Fans blow warm air over the coils, allowing that heat to come into contact with the refrigerant. The refrigerant boils when it absorbs enough heat. The last few rows of the evaporator are where superheating occurs. Superheat is the temperature above the saturation point; superheat indicates that the refrigerant is all vapor, no longer a liquid-vapor mix. Then, the vapor refrigerant travels back to the compressor via the suction line; the refrigerant circuit restarts. The suction line is rather cool; we use some of that cool refrigerant gas to cool down the compressor.
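The superheat and subcooling values described above are the two numbers technicians typically compute in the field: superheat is the measured suction-line temperature minus the evaporator saturation temperature, and subcooling is the condenser saturation temperature minus the measured liquid-line temperature. A minimal sketch, with invented example readings in °F:

```python
# Field-style superheat/subcooling calculations. Saturation temperatures
# are normally read off a pressure-temperature chart for the refrigerant;
# the numeric readings below are invented example values.

def superheat(suction_line_temp, evap_saturation_temp):
    """Degrees above saturation at the evaporator outlet (all vapor)."""
    return suction_line_temp - evap_saturation_temp

def subcooling(cond_saturation_temp, liquid_line_temp):
    """Degrees below saturation at the condenser outlet (all liquid)."""
    return cond_saturation_temp - liquid_line_temp

print(superheat(50, 40))     # 10 degrees of superheat
print(subcooling(110, 100))  # 10 degrees of subcooling
```

Roughly speaking, measurable superheat confirms the refrigerant leaving the evaporator is all vapor (protecting the compressor from liquid), and measurable subcooling confirms the refrigerant leaving the condenser is all liquid.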
When we interact with others, we experience those interactions with a mix of feelings new and old. By “old” I mean that emotions from long-standing relationships (like family) live on in our brain/mind as emotion-traces. At times, these emotion-traces strongly affect our current interactions with others, but we may not always be aware of their influence. One way to become aware of them is to notice when we’re having powerful feelings. Let’s take a look at a couple of examples where individuals are having a stronger reaction to another person than seems warranted. We’ll start with Jerry. Jerry emails his friend, Don, asking for his help. When Don does not write back, Jerry gives up, feeling hurt and angry. Jerry tells me that the situation reminds him of something that happened when he was younger. He remembers taking over a lawn-mowing job from a friend but not being able to get the mower started. When he asked his friend and his dad for help, neither would. Jerry felt incompetent, hiding out alone in his room and embarrassed he couldn’t start the mower. Then, as now, he felt hurt and angry when he couldn’t get help, and his embarrassment made him give up. As we talk, Jerry sees the present situation differently — he realizes that what is happening now with Don is different from what happened when he was a child, even though his feelings are very similar. This is key — when the feelings you have now are similar to ones in the past, you are much more likely to assume the situation is the same when it may be very different. After we talk, Jerry calls Don, discovers Don’s email was out, and Jerry gets the help he needs.
It’s not just unpleasant feelings from the past that remain important. Let’s consider Amy’s situation. Amy, a woman in her 20s, is learning cello and loves playing. This surprises her because she thought she “wasn’t musical.” Amy’s idea that she has no musical aptitude comes from her five years of piano lessons with a very stern teacher, a woman Amy always felt was exasperated with her and critical of her playing. Her cello teacher, by contrast, is a very lively woman who readily praises her. Amy realizes that the cello teacher reminds her of her beloved nanny — an animated woman who repeatedly told Amy that if she worked hard at something she would be good at it and enjoy it. Past (“old”) relationships that have been emotionally important to us, whether they were friendly or hostile, influence the way we experience present (“new”) ones. Prior relationship experiences that were generally positive and welcoming ease us into having warm and positive present-day interactions. Past relationships that were hurtful or harmful will — if we aren’t aware — limit our potential for creating positive current relationships. But don’t lose hope — if you know that your present feelings about others are influenced by the past, you can better separate the past from the present, freeing yourself to enjoy the relationships you have now. Tony Hacker, Ph.D., is a Seattle area psychologist who sees individuals and couples in psychotherapy and psychoanalysis. Email: firstname.lastname@example.org
The Food Security Project aims to improve the food and nutrition security of rural Nepalese families. The target communities of the project are rural smallholder farming families, particularly women and children. The project focuses on nutrition-sensitive agriculture, and the key components of the framework are soil conservation/management, bio-intensive gardening, fruit tree cultivation, and nutrition education. The project framework has also outlined four anchor activities:
2. Establishment of fruit tree nurseries
3. Improved cook stoves
4. Mushroom cultivation
You will be a catalyst on a wide range of activities including, but not limited to:
- Train and coach smallholder farming families so they can establish bio-intensive gardening practices (compost making, green manure, vegetable nursery, use of bio-pesticides) to meet the daily dietary recommendations.
- Train smallholder farming families on seed production and guide them to establish small-scale community seed banks.
- Train and follow up with smallholder farming families and community members to further the adoption of improved nutrition-related behaviors.
While there is strong potential for Volunteers to contribute to improving the food security situation of rural community people, working in rural communities can present certain challenges. For example, Nepali government supervisors assigned to work with Volunteers are located outside the communities where Volunteers work, and this can prevent supervisors from regularly meeting with Volunteers. To remain effective, Volunteers must demonstrate a high degree of motivation, commitment, and initiative to properly engage with relevant community stakeholders to develop and implement work plans.
Benefits: Health coverage, Housing, Living allowance, Non-competitive eligibility (federal jobs), Stipend, Student loan forbearance, Training
Prohibits paid work outside of the sponsoring agency at any time
Subject to criminal background check
The issuance of passports can be traced back to around 450 B.C. when the King of Persia appointed the Babylonian, Nehemiah as governor of Palestine. “Nehemiah requested and was granted a letter of safe conduct to insure his safety.”¹ This was the first formal request for what we now know as a passport. The American passport has evolved over the years. The first passports were issued to individuals and to ships or vessels, by the Secretary of State, or under his authority by a diplomatic or consular officer serving overseas.² While the Department of State was the official issuing agency, governors, mayors and even notaries public often authorized passports.³ An act passed in 1856 outlawed this practice and stated the authority of issuing passports rested solely with the Secretary of State. Passports were also issued for travel within the United States. In particular, this was needed for safe passage through Indian territories. Early passports were a single sheet of engraved white paper confirming the citizenship of the individual and granting the bearer, be it an individual or even a ship or vessel, safe passage to enter foreign territory. Beginning in 1796 passports described the general appearance of the bearer including age, height, eye color and hair color. At that time there was no standard passport, so the general appearance of the passport varied depending on who issued it. In 1804 Caleb Strong, Governor of Massachusetts, issued a passport to Joseph Warren Revere, son of Paul Revere, who was traveling to Europe.
It stated Revere “going to Europe pass safely and freely, without giving, or permitting to be given to him, any hindrances, but on the contrary affording to him all aid and protection, as we would do in like case for all those that might be recommended to us.”⁴ In 1811 the first passport including a physical description of the bearer was issued by the Department of State.⁵ A few years later the physical description was moved to the left side of the passport and separated from the text. The early passports were only good for the duration of the journey. New passports had to be issued each time an individual or vessel traveled to foreign territory. In 1873 the duration of a passport was changed to two years.⁶ During World War I this was changed to six months, but reverted back to two years at the conclusion of the war. Until 1856 no fees were charged for acquiring a passport. At that time a fee of $1 was instituted for all passports issued abroad. Passports issued domestically were still free. The fee was increased to $3 in 1862, and this time it applied to domestic applications as well as those from abroad.⁷ Prior to the mid-1840s there was no formal passport application. A personal letter addressed to the Secretary of State including a physical description, a declaration of citizenship and an outline of the applicant's travel plans was sufficient.⁸ In 1845 Secretary of State James Buchanan issued the first circular outlining the guidelines for applicants. They required proof of identity and citizenship. Applicants were required to have an affidavit witnessed by a notary public and signed by another citizen to verify their identity and claim of citizenship.⁹ The outbreak of World War I brought several changes to the passport system. First and foremost, U.S. citizens were warned to carry their passports when going abroad. Another modification was the requirement that all passports include a photograph of the bearer.
A December 30, 1914 New York Times article stressed the new “rigid passport rules.” Beginning January 1, 1915, any person wishing to enter Germany had to have a passport with a properly attached photograph or they would be denied admittance. In 1917 a new passport was designed with the bearer’s description in the lower left hand corner, directly opposite the photograph. The first passport in the form of a booklet was introduced on January 3, 1918.¹⁰ Passports have been modernized many times over the years. Today you can fill out an application online and submit a digital photograph. The fee for the passport book is $110 and for an identity card $30. It won’t be long before the passport book is a thing of the past. Passport chips and ePassports will become the standard. Already national identity cards containing biometric information are being used in many countries including the United States. Passports, like everything else, have to stay in step with modern times, and technology is a huge part of the present and future of passports. ¹U.S. Dept. of State. Bureau of Security and Consular Affairs. The United States Passport: Past, Present, Future. Washington, D.C.: U.S.G.P.O., 1976. p. 1 ²U.S. Dept. of State. The American Passport: Its History and a Digest of Laws, Rulings, and Regulations Governing Its Issuance by the Department of State. Washington, D.C.: U.S.G.P.O., 1898, p. 4 ³U.S. Dept. of State. The United States Passport, p. 31 ⁴Id. p. 27 ⁵Robertson, Craig. The Passport in America: The History of a Document. New York: Oxford University Press, 2010, p. 253 ⁶U.S. Dept. of State. The American Passport, p. 75 ⁷Robertson, Craig. The Passport in America, p. 95 ⁸Id. p. 92 ⁹Id. p. 97 ¹⁰U.S. Dept. of State. The United States Passport, p. 63 Passport of Smyrna & Constantinople J. Hosford Smith issued at the Consulate of the U.S. Beirut, June 9, 1853.
These daylilies are an evergreen diploid with mounded leaves. Free-flowering and vigorous. Bears nocturnal, circular, magenta-lilac flowers, 3 1/2 inches across, with purple eyes, green throats, and black anthers, in early and midseason. Height 14 inches. In general, daylily colors range from the expected yellow, to pink, red, lavender, and even brown. Time of bloom may also vary from early summer to fall. Heights are just as varied. Clumping form gives rise to clustered, lily-like flowers atop bare stems. Does best in full sun or partial shade in warmer climates. Though daylilies are drought resistant, they flower better when given some water during bloom. They are relatively trouble free and grow in almost any soil, as long as it is well drained. Divide in early spring or la
The word trust means a lot of things, usually positive, but in a business and commerce context a trust is a large business entity, combination of interests, or agreement among businesses motivated by the goal of suppressing competition. While beating the competition with a better product or better fiscal management is the name of the game, actively seeking an unfair advantage is illegal under federal and state antitrust laws. For example, Intel is a very dominant player in the microprocessor industry, but it is not considered a trust because there have been no findings of illegal trust activity or collusion with other companies. When the Federal Trade Commission (FTC) determines that a business entity is an illegal trust, the agency typically breaks it up into smaller companies. State courts generally handle antitrust cases that are more localized and don't involve interstate commerce. Idaho Antitrust Law at a Glance Under Idaho statute, monopolies and conspiracies among two or more entities are considered illegal activities. The state also has the authority to block acquisitions that would "substantially lessen competition." Additional details of Idaho antitrust law are listed in the following chart.

| Antitrust Code Section | Idaho Competition Act: 48-101, et seq. |
| Is a Private Lawsuit Possible? | Yes; the attorney general also enforces |
| Time Limit to Bring Claim | 4 yrs., or within one year after a cause of action brought by the state concludes |
| Can a Successful Plaintiff Recover Attorneys' Fees? | Yes |

Note: State laws are always subject to change through the enactment of newly signed legislation or other means. You should contact an Idaho antitrust and trade regulation attorney or conduct your own legal research to verify the state law(s) you are researching.
Artificial Intelligence (AI) in Cybersecurity: Findings from a Worldwide Survey on the Fears, Hopes and Plans of Cybersecurity Professionals

Three quarters of cybersecurity professionals believe the world is very close to encountering malicious artificial intelligence (AI) that can bypass most known cybersecurity measures. More than a quarter believe this will happen within the next year, if it hasn’t happened already, and 50% see it happening within 5 years. Phishing, social engineering tactics and malware attacks are those considered most likely to become more dangerous with the use of AI. These are just a few of the findings of a recent worldwide survey on AI in cybersecurity, conducted by research firm Cybersecurity Insiders on behalf of Enea. The results of the survey have been compiled into a report that provides an in-depth, holistic view of how today’s cybersecurity professionals see AI and its impact on the industry. Findings have been grouped into fears, hopes, and plans to reveal the risks and benefits of AI, while useful insights and recommendations will help in navigating the constantly evolving threat landscape. Download the report now to understand the impact of AI on cybersecurity and how you can implement a practical and realistic strategy to strengthen the protection of your networks. “This report confirms growing concerns around the malicious use of AI, but it also highlights some remarkable innovations in the use of AI to streamline and automate defenses. Significant gains have already been made, such as a reduction in the average time it takes to detect and contain threats. However, AI is not a one-size-fits-all solution – it’s essential that businesses take a clear and methodical approach to implementing AI strategies in order to achieve maximum readiness and resilience.” – Laura Wilber, Sr. Industry Analyst, Enea

Key findings: 76% of cybersecurity professionals believe the world is very close to encountering malicious AI that can bypass most known cybersecurity measures. More than a quarter see this happening within the next year, and 50% within the next 5 years. Phishing, social engineering and malware attacks are seen as the top threats that will be strengthened by AI, but identity fraud, data privacy breaches, and distributed denial-of-service (DDoS) attacks were also cited as likely to become more dangerous. AI is anticipated to bolster threat detection and vulnerability assessments, with intrusion detection and prevention identified as the domain most likely to benefit from AI. Deep learning for detecting malware in encrypted traffic holds the most promise, with 48% of cybersecurity professionals anticipating a positive impact from AI. Cost savings emerged as the top KPI for measuring the success of AI-enhanced defenses, while 72% of respondents believe AI automation will play a key role in alleviating cybersecurity talent shortages. While a majority (61%) of organizations have yet to deploy AI in any meaningful way as part of their cybersecurity strategy, 41% consider AI a high or top priority for their organization. And a hopeful 68% of respondents expect a budget increase for AI initiatives over the next two years.
So I spent a lot of my time preparing for this post by looking at Braille and how that system could apply to teaching math in elementary schools. I read the article with lesson plan ideas for elementary students and learned a lot about the history of Braille and who invented it. I found it really interesting that there was a system before Louis Braille invented his own. He took something that wasn’t practical for his lifestyle and changed it to suit him better. The assignment of different patterns to letters and numbers was intriguing, and I don’t know how hard or easy it would be to learn to read it if I were blind. I loved the options for the activities to do with students, especially the younger ones using colors, and then moving forward with the older students, teaching them more about the background of Braille and diving deeper into the idea behind our projects and assignments. I furthered this look at Braille by doing the worksheet that we received called “Pixel Patterns.” I basically began by making my own mock Braille system, which looks like this: You can see that I could use my system to write my name as well. I didn’t plan it out in any specific way, so I figured there was probably a more patterned way of arranging the dots, hence Braille. The Braille alphabet has a more systematic way of arranging the dots going from one letter to the next, and it even has a symbol to signal a capital letter, something I hadn’t even thought of! Here’s what my name looks like in Braille. For further confusion and curiosity I went ahead and shaded in my own 5-pixel alphabet. It would be hard to memorize and get used to, but here it is! It’s hard for me to pinpoint a pattern to the Braille alphabet, but looking at it I can tell it sequences well. I would be curious to hear how my students would describe the pattern, and I would also like to give them the opportunity to make their own alphabets and explain why they chose to do it that way.
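There actually is a neat pattern hiding in the Braille alphabet: letters a–j use only the top four dots, k–t repeat those shapes with dot 3 added, and u–z (except w) add dots 3 and 6. Here is a small sketch in Python showing that decade pattern (the dot numbering and letter shapes follow standard Braille; the function and variable names are my own):

```python
# Braille cells have six dots: 1, 2, 3 down the left column, 4, 5, 6 down the right.
FIRST_DECADE = {            # letters a-j use only the top four dots (1, 2, 4, 5)
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5},
    "i": {2, 4}, "j": {2, 4, 5},
}

def braille_dots(letter):
    """Return the set of raised dots for a lowercase letter a-z."""
    base = "abcdefghij"
    if letter in FIRST_DECADE:                 # a-j: the base shapes
        return FIRST_DECADE[letter]
    if letter in "klmnopqrst":                 # k-t: the a-j shape plus dot 3
        return FIRST_DECADE[base[ord(letter) - ord("k")]] | {3}
    if letter == "w":                          # w was added later and breaks the rule
        return {2, 4, 5, 6}
    # u, v, x, y, z: the a-e shapes plus dots 3 and 6
    return FIRST_DECADE[base["uvxyz".index(letter)]] | {3, 6}

# 'k' is 'a' plus dot 3, and 'u' is 'a' plus dots 3 and 6:
print(sorted(braille_dots("a")))  # [1]
print(sorted(braille_dots("k")))  # [1, 3]
print(sorted(braille_dots("u")))  # [1, 3, 6]
```

This "repeat the decade, add a dot" structure is exactly the kind of pattern students could be asked to discover for themselves.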
I also watched the Domino Chain Reaction video, pretty neat! The creativity quotes page was a great read as well; here’s one that stuck out: “The trick to creativity, if there is a single useful thing to say about it, is to identify your own peculiar talent and then to settle down to work with it for a good long time.” We as teachers need to remember that our students have unique talents and can be creative in different ways. I hope to structure my classroom to allow that creative space for all of my students one day. I could do this by allowing for more creativity on assignments, like this Braille worksheet: no two students would create the exact same alphabet. The same should go for most projects. Creativity shouldn’t be shut out just because a child is in math class. Instead, it should be encouraged! How do you feel my blog post went this week? I sure learned a lot while creating it, and felt my own creative juices flowing. I really loved learning more about Braille. I now have a better understanding after only a few hours of looking into its history.
Migrant workers are particularly vulnerable to forced labour, especially those who are undocumented. Forced labour refers to situations in which persons are coerced to work through the use of violence or intimidation, or by more subtle means such as accumulated debt, retention of identity papers or threats of denunciation to immigration authorities. Human trafficking can also be regarded as forced labour. Forced labour is different from sub-standard or exploitative working conditions. Various indicators can be used to ascertain when a situation amounts to forced labour, such as restrictions on workers’ freedom of movement, withholding of wages or identity documents, physical or sexual violence, threats and intimidation, or fraudulent debt from which workers cannot escape. Forced labour can result from internal or cross-border movement which renders some workers particularly vulnerable to deceptive recruitment and coercive labour practices. The ILO estimates that almost 21 million people are victims of forced labour at any point in time. Twenty-nine percent of the victims ended up in forced labour after having moved across international borders, the majority of those being forced sex workers. Two recent publications of the ILO are good sources of information about forced labour: Strengthening Action to End Forced Labour (2013) and the Summary of the ILO 2012 Global Estimate of Forced Labour (2012). The following video explains the significance of the newly adopted Forced Labour Protocol. Recent posts in this section Documents from the Committee on Migrant Workers' General Discussion marking the 25th Anniversary of CMW On 8 September 2015, the UN Committee on the Protection of the Rights of All Migrant Workers and Members of Their Families (the Committee) organized a half-day general discussion to mark the 25th anniversary of the adoption of the International Convention on the Protection of the Rights of All Migrant Workers and Members of Their Families.
The general discussion was a public meeting in which representatives of States, United Nations bodies and specialised agencies, civil society and other relevant organisations, as well as individual experts, were welcome to participate. The panel discussion focused on the multiple human rights abuses faced by migrant workers, and challenges for States on how best to address these issues in the context of migrant workers in the Gulf, undocumented children in the Americas, and irregular migration flows in the Mediterranean. Each of these discussions underlined the importance of an agreed international human rights framework, including stepping up ratifications of the Convention. This discussion also included perspectives, including best practices, from States parties and non-States parties on how to address migration-related issues and the role of the Convention in this regard. When: September 30, 2015, 6:00pm – 8:00pm Where: 2nd Floor, 777 UN Plaza, at First Avenue and E. 44th St., New York. Every day, all around the world, migrant children and their families are detained simply because they lack the proper documents. These children and families often undertake perilous journeys and are met with xenophobia, violence and, increasingly, with detention despite having committed no crime and without being a threat to others. There is now overwhelmingly clear guidance from the United Nations system that the immigration detention of children is a violation of the rights to liberty and family life. Non-custodial, community-based alternatives to detention (ATD) are increasingly being implemented in a variety of country contexts.
These ATD fulfill the best interests of the child and allow children to remain with their family members and/or guardians, respecting the fundamental right to liberty, while their immigration status is being resolved. Published by the ILO, 11 February 2014: In search of a job to support his family, a man accepts an offer from a recruiter and signs a contract for what looks like a good job with decent wages. Once at the destination, the reality is very different.
A sole proprietorship is the most common and simplest form of business ownership. It is preferred by most entrepreneurs because it offers advantages that partnerships and corporations can’t provide. A sole proprietorship is not only cheap and easy to form, but it also gives you complete control and decision-making power. You don’t have to pay corporate taxes or share your income with anybody if you choose this form. A sole proprietorship is also covered by fewer government rules and regulations. Sole proprietorship is a type of business that is owned and run by one individual. Under a sole proprietorship, there is no legal difference between the owner and the business. The total assets of the company belong to the sole owner and all liabilities are the responsibility of the owner. Although many sole proprietors operate using business names, single proprietorship businesses are registered with local government authorities under the name of the owner. Advantages vs. Disadvantages Unlike partnerships and corporations in which several people may be involved in the decisions, sole proprietors can make final decisions by themselves. Sole proprietors receive 100 percent of all net profits, while partnerships and corporations divide profits among several owners. One of the main disadvantages of sole proprietors is unlimited liability under which the owner's personal assets can be used to pay creditors in case of bankruptcy. Sole proprietors face limited government regulation. Corporations can offer stock ownership to the general public, but single proprietors can’t raise capital publicly. Service Business Examples A service business sells services directly to consumers, but many business owners who provide services find it easier to operate as sole proprietors. Services are intangible products that appear only when required by a customer. 
Freelance business consultants, barbers who own their shops, auto mechanics who are self-employed, a gardener who mows lawns for a fee, and real estate agents who work for themselves are five common examples of sole proprietors. The common criterion that distinguishes these business owners as sole proprietors is the fact that they solely own their companies. Merchandising Business Examples An entrepreneur who goes into business by himself, without pooling capital with other people or entities, to sell merchandise is considered a sole proprietor of a merchandising business. A hardware store owner, a grocery store owner, a clothing store owner, an entrepreneur who manufactures furniture in his backyard and sells it, and a mother who bakes cookies and distributes them to other stores are five examples of sole proprietors who sell merchandise. Some of these businesses may be registered as partnerships or corporations, but those that have only one owner are classified as sole proprietorships.
Historical Atlas of Ancient Greece / Edition 1, by Angus Konstam. Pub. Date: 06/15/2003. Publisher: Facts on File, Incorporated. From the days of Homer's Mycenaean culture until the final collapse of Ancient Greek civilization, the Greek world has provided an enduring fascination for countless generations. Its language, culture, political systems, philosophy, art and architecture still influence our everyday lives, while the stories of Greek warriors, mighty gods, and glittering cities have captivated our imagination. This book traces the historical, cultural and political development of the Greeks, who created the first democratic society in the world, and whose empire under Alexander the Great spanned most of the known world. While its citizen soldiers safeguarded Greek civilization when it was threatened by the Persians, Greek writers, poets, architects, politicians and philosophers created a cultural legacy that still endures. In this lavishly illustrated book, the history and culture of this remarkable people are traced, allowing readers a clear and concise insight into the Greek world.
I recently heard an excellent analogy that may be useful for describing the importance of my research to the general public! I heard this from another participant at a workshop I just attended, so I do not take any credit for this. Unfortunately, I didn’t catch who he was, but I want to share it. This is an analogy for my previous post “What the heck do you do in grad school, Part 1.” In the same workshop, I learned that many people in the general public do not know what DNA is, so hopefully this will help make sense of that post. Imagine that your body is a big symphony. In the symphony, there are many instruments playing different parts all at once: melodies, harmonies, and rhythms. Each part is unique, but they combine together to make a whole, complex song. Depending on what the song should sound like, all of the instruments could be playing at once, or just the strings could be playing while the other musicians rest. Furthermore, depending on the arrangement of the musical parts, the same orchestra can play a wide range of very different music. In this analogy, the specific song being played is a cell in your body. Depending on the song, it could be a muscle cell, a skin cell, a blood cell, a brain cell, or any other number of cells. In a symphony, many songs come together to make a complete whole, like all of your cells come together to make a complete you. Each musician’s part in the song is analogous to a protein in the cell. Each part/protein has its role to play in the song/cell. Proteins can detect signals from outside the cell, they can be a way to send signals to other cells, they can pull on each other to contract a muscle, and they can break down nutrients for energy. The musicians are the genes in the DNA. In this analogy, the members of your orchestra play every song in the symphony. They are always there and always the same for every song. In the same way, each cell in your body has the exact same DNA as every other cell in your body. 
Just as the specific combinations of instruments playing or resting make a different song, the specific combinations of genes being “on” or “off” make a different cell. Furthermore, just as a musician can play loudly or softly, a gene can be “turned” up or down. Just like musicians need instruments to play their part, DNA needs RNA to make protein. DNA is “read” by a protein called RNA Polymerase that uses the code in the DNA to make a related molecule called RNA. The code in the RNA is then used to make the protein.

Many ways to conduct the orchestra

A conductor leads the orchestra so that the musicians know when to rest, when to play, and when to play really, really loudly! In our cell’s little orchestra, there are many ways to tell the tubas when to play and when not to. Sometimes, something might happen to one of the tubas, like a valve could get stuck, and it isn’t able to play when it should. Just like a professional musician takes care of his or her instrument so that it can make music when it needs to, the cell has ways to make sure that the RNA is in good shape and able to make protein. The mechanism that I study has to be functioning properly to make good RNA. So you could say I study one of the ways that the instrument is taken care of so that it can play its proper part to make the song in the symphony.

FOR MORE INFORMATION: Check out the Genetic Science Learning Center website from the University of Utah. This excellent site is highlighted in my “Resources” page. Their page on “Molecules of Inheritance” has interactive learning tools to help you better understand DNA and genes, RNA, the Central Dogma, and proteins. You can also check out my Biology & Health Pinterest board. I add cool pictures, infographics, and links about a broad range of topics, including DNA, genetics, and cell biology.

Image Credit: Indianapolis Symphony Orchestra
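For readers who like code as much as music, the "same score, different songs" idea can be sketched in a few lines of Python. This is purely a toy illustration of the analogy: the gene names and expression levels below are made up for the example, not real biological data.

```python
# Every cell carries the same "score" (the genome) ...
GENOME = ["myosin", "hemoglobin", "keratin", "synapsin", "rna_polymerase"]

# ... but each cell type plays a different combination of genes,
# at different "volumes" (0 = resting, higher = playing louder).
EXPRESSION = {
    "muscle cell": {"myosin": 9, "rna_polymerase": 5},
    "blood cell":  {"hemoglobin": 10, "rna_polymerase": 4},
    "skin cell":   {"keratin": 8, "rna_polymerase": 3},
}

def playing(cell_type):
    """Genes 'playing' (expressed) in a given cell type, loudest first."""
    levels = EXPRESSION[cell_type]
    return sorted((g for g in GENOME if levels.get(g, 0) > 0),
                  key=lambda g: -levels[g])

# Same genome, different songs:
for cell in EXPRESSION:
    print(cell, "->", playing(cell))
```

The genome list never changes; only which entries are "on" differs between cell types, which is the whole point of the orchestra analogy.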
What is bitrate?

Bitrate is usually expressed in kilobits per second (kbps) or megabits per second (Mbps), and, in a nutshell, the higher the bitrate of a video, the higher its quality. Bitrate directly impacts the quality of a video: higher bitrates produce sharper videos, while lower ones produce blurrier videos, because the bitrate controls how much data is in each second of video.

How is bitrate linked to the size of the video?

Bitrate is usually represented in kbps, which essentially means kilobits of data per second. So, the size of a 1-hour 1500 kbps video will be 1500*60*60 kilobits = 1500*60*60/8000 MB = 675 MB per hour of video data. Similarly, a 1-hour 1000 kbps video will be 450 MB in size, and a 600 kbps one will be 270 MB.

So what is the connection between pixels (p) and bitrate (kbps)?

There is no precise technical relation between pixels and bitrate. For the same streaming provider, the higher the pixel count, the higher the bitrate, and vice versa. That said, different service providers can offer different pixel counts even at the same bitrate. Pixels define the resolution of the video, while bitrate is a general data measurement for a video file expressed per second of video. There can be high-resolution videos with a low bitrate and low-resolution videos at a very high bitrate. This imbalance is due to the sophisticated math used to decide exactly what to show in a video with the least file size. This math can drive the bitrate for a video to an arbitrarily small value while compromising on quality.

What bitrate/pixels does YouTube use for HD streaming?

YouTube has a range for 1080p HD when it comes to bitrate.
It depends on what size the customer uploads and what type of content it is: media, or e-learning (and within e-learning, whether it is a screen capture, animation, class recording, and so on). It is at most stored at 1500 kbps for 1080p; certain high-motion movies are stored at 2500 kbps. In certain cases, such as low-motion lectures, 1080p can be as low as 600 kbps.

Why does YouTube use pixels as a quality parameter?

There are two major reasons: bitrate directly corresponds to size, and hence bandwidth consumption and costs (a 1000 kbps video will consume double the bandwidth compared with a 500 kbps video), and pixels don't have such a direct correlation. VdoCipher can provide higher pixel quality even at low bitrates, so in many cases VdoCipher can provide 1080p or 720p HD even in the 500-900 kbps range. Thus, there is no need for a lower pixel parameter.

How many video quality options do I need to ensure smooth playback around the world?

From VdoCipher's experience: for movies/serials with lots of motion, three or at most four qualities. We typically do 2500 kbps, 1500 kbps, 900 kbps, 500 kbps, or something like 2000, 1200, 600 kbps. For educational content, usually two: 1500 kbps and 600 kbps, or sometimes 1500, 800, 400 kbps. The bitrate and quality optimizations are made keeping in mind the slow connections of Asian and African users. Over time, they have worked well for all geographical distributions, ensuring a great viewing experience.
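The size arithmetic above is simple enough to capture in a small helper. A minimal sketch in Python (the function name is my own):

```python
def video_size_mb(bitrate_kbps, duration_seconds):
    """Approximate file size in MB for a given average bitrate.

    kilobits -> megabytes: divide by 8 (bits per byte),
    then by 1000 (kilobytes per megabyte).
    """
    return bitrate_kbps * duration_seconds / 8 / 1000

one_hour = 60 * 60
print(video_size_mb(1500, one_hour))  # 675.0 (MB), matching the example above
print(video_size_mb(1000, one_hour))  # 450.0
print(video_size_mb(600, one_hour))   # 270.0
```

Note this gives the size of the video stream only; audio tracks and container overhead add a little on top.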
Explain how the United States compares to other countries with regard to social mobility rates. Are there differences between the United States and other countries? Why? Which groups in society have the opportunity for upward social mobility, and why? Do you think the United States is truly a meritocracy? Why or why not? Give relevant examples to support your decision. Apply your decision to attend college to the concept of meritocracy. Be sure to support your answer with references to the textbook, appropriate outside resources, and your own personal experiences.
What was the cause of the Tenerife airport disaster? The Tenerife airport disaster happened on March 27, 1977, when two Boeing 747s collided on the ground at Los Rodeos Airport (now Tenerife North Airport). This crash killed 583 people on board the two flights. Did the pilots survive the Tenerife airport disaster? When the all clear came to resume their journeys, a combination of bad weather and miscommunication meant that Pan Am Flight 1736 was still on the runway as KLM Flight 4805 attempted take-off. Captain Robert Bragg was the co-pilot aboard the Pan Am plane, and was one of the few who survived the collision. How many died in the Tenerife airport disaster? 583. Did the pilots of Pan Am 1736 survive? The collision occurred when KLM Flight 4805 initiated its takeoff run while Pan Am Flight 1736 was still on the runway. The impact and resulting fire killed everyone on board KLM 4805 and most of the occupants of Pan Am 1736, with only 61 survivors in the front section of the aircraft. Did the Pan Am captain survive Tenerife? All 248 passengers and crew aboard the KLM died, while the Pan Am reported 335 deaths, taking the total fatalities of the incident to 583. Remarkably, 61 people aboard the Pan Am managed to survive the crash, including 5 crew members: the captain, first officer, flight engineer and flight attendants. Why couldn't Pan Am fly domestic? In the late 1970s, during the Jimmy Carter presidency, deregulation fervor drove aviation policy. The domestic industry was deregulated and the United States embraced the concept of “open skies” internationally. Until deregulation in 1978, Pan Am was excluded from flying domestic routes. What happened in the Tenerife airport disaster?
The Tenerife airport disaster happened on March 27, 1977, at 5:06 p.m., when two Boeing 747s operating flights KLM 4805 and Pan Am 1736 collided on the runway of Los Rodeos Airport on the Spanish island of Tenerife, causing the deadliest accident in aviation history, with the loss of 583 crew members and passengers from both flights. How many passengers survived the Tenerife plane disaster? Somewhat remarkably, 61 passengers, including the flight deck crew, managed to survive from the Pan Am jumbo. Five hundred and eighty-three passengers, including the crew of the KLM jumbo, were not so lucky, making what happened on Tenerife the worst airline disaster in history. Did anyone survive the KLM plane crash in Tenerife? ten Voorde, Gerard (March 21, 2017). "Enige overlevende KLM-toestel vliegramp Tenerife blikt na veertig jaar terug" [Sole survivor of the KLM aircraft in the Tenerife air disaster looks back after forty years]. Reformatorisch Dagblad (in Dutch). Why did Pan Am divert to Tenerife? The Pan Am crew indicated that they would prefer to circle in a holding pattern until landing clearance was given (they had enough fuel to safely stay in the air for two more hours), but they were ordered to divert to Tenerife.
The pace at which the EU is transitioning towards a net-zero future is too slow to achieve its 2030 and 2050 climate targets, according to a report from new net-zero watchdog the European Climate Neutrality Observatory. The report, titled ‘State of EU Progress to Climate Neutrality’, warns that although most sectors studied are moving in the right direction to eventually reach net zero, the overall pace of change needs to accelerate “significantly” if the bloc’s targets are to be maintained. The authors assessed 13 economic “building blocks”, ranging from agriculture, transport and industry to electricity, carbon removals, finance and lifestyles. Within these building blocks, 104 indicators were identified to analyse and measure past progress, using available data mainly up to 2021 – the year in which major policies to implement the European Green Deal were proposed.

Electricity “almost on track” for a net-zero EU

Greenhouse gas (GHG) emissions from electricity within the EU have steadily decreased at a rate sufficient to reach a 2040 benchmark set out in the EU’s long-term climate plans, the report found. However, the development and uptake of renewable electricity generation and its integration into the power system are currently developing too slowly, knocking the sector off track to reach net zero by 2050. The report identifies several areas in need of further progress, beginning with overcoming barriers to renewable investment as well as the necessary scale-up of grid flexibility to accommodate variable clean energy capacity. The authors also highlight pitfalls in the EU’s own legislation, stating that while the EU Emissions Trading System (ETS) provides incentives to accelerate the phase-out of coal, REPowerEU‘s measures to move away from Russian oil and gas in response to the Ukraine war mean that an increase in coal power is inevitable. As a result, the bloc is set to “undermine its own emissions reduction efforts”.
A recent report by climate think tank Ember shows that there was no coal rebound in 2022, however. Within the transport sector, the analysis rates the transition to electric and zero-emission vehicles as “far too slow”, despite the increasing uptake of electric vehicles by consumers. The EU’s plan to ban the sale of all new combustion engine vehicles by 2035 is expected to help address this issue.

Emissions from buildings and industry still too high

Progress in the building sector also remains far too slow, according to the report. Annual reductions in GHG emissions from buildings must increase by 7.5 times until 2030 if the bloc is to meet targets set out in the EU’s Renovation Wave plan. The lack of progress is due in part to a far-too-slow switch from fossil fuels to renewable or electric heating. Progress within industry was rated overall as far too slow, although detailed analysis of the transition towards a climate-neutral industry was not possible due to a lack of available data. Indeed, one of the report’s key conclusions was that available, reliable data and benchmarks from the EU are severely lacking. As a result, “significant” gaps in knowledge on the current state of decarbonisation in some sectors remain, making future predictions and targets difficult. Nevertheless, the report suggests that the reduction in GHG emissions from industry needs to happen 2.7 times faster for the EU to meet its net-zero targets.
Net-zero EU: Carbon dioxide removals are moving in the wrong direction

Carbon dioxide (CO₂) removals, both natural and engineered, are considered an essential counterweight to residual emissions released from industries that are more difficult to decarbonise, such as aviation. According to the report, the state of development for natural carbon removals is not only too slow, but actively heading in the wrong direction. Deforestation, land use changes and limited carbon stores in soil have driven an average annual decrease in CO₂ absorption from the atmosphere of almost 14 million tonnes of CO₂ equivalent (mtCO₂). This must change to an increase of more than 6mtCO₂ per year if the EU’s 2030 targets are to be met. Engineered, or technical, removals are still in their relative infancy and as such their projected impact on emissions reductions remains limited. The report notes they are moving in the right direction, albeit far too slowly. When asked at a press briefing about the significance of natural carbon sinks vs. engineered carbon removals, such as direct air capture technologies, Thomas Pellerin-Carlin, EU director at the Institute for Climate Economics and co-author of the report, said: “When it comes to the natural [removals], it’s going in the wrong direction for both objectives and enablers. For the technical removals, it’s going in the right direction, but at far too slow a pace. “So essentially, it’s not whether you should focus more on natural or technical [carbon removals]. Actually, it is both of them that are struggling. The challenges are different for both of them… but they don’t need to be opposed, you can work on both,” he added.

Fossil fuel subsidies reign supreme

The authors found that of all the building blocks, analysis of the finance sector was “the most worrying” and rated it as moving in the wrong direction. Fossil fuel investment and subsidies continue to plague the EU’s commitments to emissions reductions.
Between 2015 and 2020, public authorities within the EU decreased environmental taxation and increased funding for fossil fuels to approximately €1.5bn ($1.64bn) per year. This dependency on the fossil industry was made worse in 2021 and 2022 as most member states increased fossil fuel subsidies and decreased taxation on fossil fuel companies, the report states. According to the report, the current state of finance could “put the whole transition to climate neutrality at risk”, because investment in renewable and clean technology and infrastructure is essential for global emissions reductions. To offset the ongoing financing of fossil fuels and to maintain its 2030 targets, the EU must increase its annual climate investment by €360bn, the report estimates.
OVERVIEW AND HISTORY

For fifty years and running, Anavar (also known as Oxandrolone) has been a crucial player in the world of anabolic steroids. It gained its popularity between the 1960s and the 1980s, when research into anabolic steroids was fast gaining momentum. Anavar was discovered at a time when science was racing to find the perfect anabolic steroid, and it was the closest science ever came to realizing that dream. It has a remarkable side-effect-to-benefit ratio and a safety profile said to allow use by both children and women.

Its history began back in 1962, when it was first described as Oxandrolone. It did not take long until it was released onto the market under the name Anavar by G. D. Searle & Co. Laboratories (now known as Pfizer Inc.). Over the years, Oxandrolone was manufactured as an anabolic steroid under a number of different brands, including Lonavar, Lipidex, Protivar, Antitriol and Anatrophil. But of these brands only one stood out: Anavar, which recorded the widest range of patient tolerance.

FUNCTIONS AND CHARACTERISTICS

Anavar is an altered derivative of dihydrotestosterone, with several alterations that affect its effectiveness. Raphael Pappo (of Searle Laboratories) originally created Anavar by modifying dihydrotestosterone, replacing the carbon-2 in the A-ring with an oxygen atom. This alteration increases the anabolic activity of the hormone and protects it from metabolic breakdown.

Below are some of the main findings on the effects of Anavar:

A study conducted in 2004 involving men between the ages of 60 and 87 revealed a decrease in overall body fat, especially for those who received a dose of 20 mg of Anavar for a period of less than 12 weeks. One to two weeks after the subjects ceased using the steroid, they retained up to 83% of these fat losses.
Another study of men between the ages of 40 and 60 yielded much the same results. A study in 2000 demonstrated that Anavar could help accelerate the pace of healing; researchers administered Anavar to rats with minor wounds and found that it hastened the healing process. Oxandrolone also helps increase red blood cell count, which makes it suitable for endurance athletes. A study conducted in 2012 involving children with burns revealed increased muscle strength for up to 5 years after the burns, and a second study in 2007 confirmed this after oxandrolone and exercise were administered to burn patients.

Anavar may not be an appropriate steroid for athletes who want to bulk up on mass, but when combined with a proper diet program, Oxandrolone can build some lean muscle. Unlike most oral steroids, Anavar has very mild negative effects, which is notable given that it is C-17 alpha-alkylated. It does not cause significant problems like peliosis, jaundice, hyperplasia, hepatitis or neoplasms. Research indicates that when taken in lower dosages, Anavar is well tolerated, with a very low chance of causing masculinization in women. Another key thing to note is that Anavar does not aromatize. Nevertheless, it is still an anabolic steroid, and it is sure to have some side effects, including:

All dihydrotestosterone-derived steroids are known to cause oily skin and acne in users. Whether this side effect appears largely depends on the user's genetic predisposition to acne, which means that not everybody will experience it.

FEMALE-ONLY SIDE EFFECTS

Virilization in women can result in increased hair growth, enlargement of the clitoris and deepening of the voice. If you notice any of these effects, it is advisable to stop using this anabolic steroid before they become irreversible.
Additional effects include depression, increased odds of liver damage, an increase in levels of bad cholesterol and, last but not least, suppression of testosterone production.

HOW TO TAKE ANAVAR

Dosage should always be determined by the user's experience and sex. Women who have only begun using the medication often start at 5-10 mg each day. From there they can gradually raise the dosage depending on their body's reaction to the new anabolic steroid. Most men start at 20 mg each day and keep increasing the dose as the cycle continues.

The web is teeming with before-and-after pictures. They appear to show that Anavar is quite effective at improving the overall physique of both sexes. These pictures may show the actual results of Anavar, but the steroid works differently for each individual; our bodies are different and respond differently to anabolic steroids. So if the results displayed in those images do not match yours, do not feel cheated. Anavar is also said to be among the safest and best anabolic steroids available on the market today; with this anabolic steroid, you are promised good results with very few or no unwanted effects. When you look at all the people who take anabolic steroids, they have different goals: many want to be bodybuilders, others want to stay fit, while still others are determined to attain the greatest accomplishment in the realm of bodybuilding (a Sandow trophy).

ANAVAR CYCLE FOR WOMEN

For starters, women should take 5-10 mg per day with 5 mg of Winstrol Plus. This cycle should last for 2 weeks. Once you reach the advanced level, increase your dose to 25 mg per day and double your daily intake of Winstrol Plus to 10 mg. Women should start with a low dose of 2.25 mg in the first week and then increase gradually throughout the 8-week period.
Women who are considering adding more bulk can up the dose to 20 mg. For men, the Anavar cycle is best suited to losing body weight. Generally, the suggested dosage is 30 mg for the first week. This is for beginners, but if you have used anabolic steroids before, you can up the dosage to 50 mg. The maximum for newbies should be 70 mg, while for regulars it should be 100 mg. The outcomes will depend on your workout routines, diet and dosage. If you keep up regular workout sessions and a healthy eating plan, you can reach your goal of losing weight, gaining lean muscle or achieving a toned body.
Researchers at the Massachusetts Institute of Technology (MIT) have developed a method to create a highly efficient photovoltaic (PV) energy conversion system that does not require sunlight - it is powered by heat. The researchers have engineered the surface of a material to emit finely tuned wavelengths of light, choosing wavelengths that closely match those PV systems routinely use to generate electricity. The concept is not new, but the method converts energy at a higher efficiency than older techniques.

The method relies on billions of nanoscale pits etched into the surface of a material. The material absorbs heat from a variety of sources, such as the sun, a hydrocarbon fuel, or a decaying radioisotope, and the pitted surface re-emits that energy at the preferred wavelengths.

The MIT researchers fashioned a power generator about the size of a button. The generator, powered by butane gas, operates three times longer than a lithium-ion battery of similar weight, and it recharges by simply swapping in a miniature fuel cartridge. The researchers also state that another version, powered by a radioisotope, continually generates heat from radioactive decay; this process could produce energy for 30 years before the device needs to be serviced or refueled.

Utilizes Multiple Fuel Sources

The U.S. Energy Information Administration states that over 90 percent of all the energy we use involves converting heat into mechanical energy, which is then transformed into electricity. For example, using a fuel to boil water creates steam, which expands and builds up pressure; the energy stored in the steam drives a generator, turns a turbine, or moves an object. Because mechanical systems function at a low level of efficiency, current technology does not allow scaling them down to the size required for gadgets and devices like smartphones, sensors or medical monitoring equipment.
According to Ivan Celanovic, a research engineer in MIT's Institute for Soldier Nanotechnologies (ISN), the advantage of the new technology lies in its ability to convert multiple sources of fuel into electricity while eliminating the moving parts associated with mechanical energy generation. Celanovic states the advance could "bring huge benefits, especially if we could do it efficiently, relatively inexpensively and on a small scale."

For at least 50 years, the scientific community has understood that PV could produce energy from any heat source - a process called thermophotovoltaics (TPV). If a burning hydrocarbon, such as butane, heats up a thermal emitter - a material that radiates light and heat onto a solar cell - the cell produces electricity. The radiation from a thermal emitter, however, carries more infrared wavelengths than are present in the solar spectrum. Advances in "low band-gap" photovoltaic materials - cadmium telluride, gallium arsenide and others - allow better absorption of infrared radiation than silicon, but the significant loss of heat still results in lower efficiencies.

Formulating a Solution

Researchers must now develop a thermal emitter that discharges only the preferred wavelengths the solar cell can absorb and convert into electricity, while suppressing the unwanted wavelengths. This entails creating a photonic crystal out of a material with nanoscale features on the surface, such as a repeating pattern of ridges or cavities, which allow light to propagate through the material in a significantly different manner. ISN researcher Marin Soljačić, who is also a professor of physics, states, "By choosing how we design the nanostructure, we can create materials that have novel optical properties." In addition, according to Soljačić, "This gives us the ability to control and manipulate the behavior of light."
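The mismatch between a thermal emitter's spectrum and the solar spectrum can be illustrated with Wien's displacement law, which gives the peak emission wavelength of an ideal blackbody at a given temperature. A minimal sketch (the emitter temperature is an illustrative value, not a figure from the MIT work):

```python
# Wien's displacement law: lambda_peak = b / T,
# where b ≈ 2898 micrometer-kelvins is Wien's displacement constant.
WIEN_B_UM_K = 2898.0

def peak_wavelength_um(temperature_k: float) -> float:
    """Peak blackbody emission wavelength (micrometers) at a temperature (kelvins)."""
    return WIEN_B_UM_K / temperature_k

# The Sun (~5,800 K) peaks in the visible range, while a fuel-heated
# emitter (~1,700 K, illustrative) peaks well into the infrared --
# hence the interest in low band-gap photovoltaic materials.
print(f"Sun:     {peak_wavelength_um(5800):.2f} um")  # visible
print(f"Emitter: {peak_wavelength_um(1700):.2f} um")  # infrared
```

This is why an unmodified emitter wastes so much energy on wavelengths an ordinary silicon cell cannot absorb.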
Another approach, employed by researchers Peter Bermel, Peter Fisher and Michael Ghebrebrhan, entails engineering billions of nanoscale pits on the surface of a slab of tungsten. When heated, the tungsten produces bright light with an altered emission spectrum, because each pit functions as a resonator that radiates only at specific wavelengths. This technique, which ISN director and physics professor John Joannopoulos helped develop, has been instrumental in the enhancement of lasers and other devices.

Celanovic and Soljačić emphasize that constructing functional systems with this technology requires bringing together multiple areas of technology and disciplines. With the prototype already generating three times the power of a lithium-ion battery of comparable size and weight, the researchers hope to enhance the technology to provide power for a variety of uses, such as charging a hybrid car's battery with heat from the engine, or powering a laptop from the heat the device itself emits.
The function of a protein is often determined by how it folds into a 3D structure, so knowledge of a protein's structure is essential for a deeper understanding of its role in various cellular processes. However, for most known proteins, no experimentally determined structure exists. For instance, the universal protein database UniProt archives 229 million unique protein sequences, while the Protein Data Bank, the single worldwide archive for experimentally resolved protein structures, holds around 206,000 structures. Traditional structure-determination methods such as X-ray crystallography and cryo-electron microscopy, which fire X-rays or electron beams at proteins to create a picture of their shape, are very time-consuming and technologically challenging. They thus contribute to the massive (more than 1,000-fold) gap between known protein sequences and experimental protein structures.

This gap could be closed by predicting proteins' 3D configurations straight from their linear amino acid sequences, a solution that AlphaFold may offer. AlphaFold is a program powered by artificial intelligence (AI), developed by DeepMind, part of Alphabet Inc., Google's parent company. AlphaFold transforms a protein's sequence into its structure with high accuracy. EMBL's European Bioinformatics Institute (EMBL-EBI), partnering with DeepMind, made the predicted structures of over 200 million cataloged proteins available to science through the AlphaFold Protein Structure Database (AlphaFold DB). This freely available resource offers programmatic access to its data and interactive visualization of predicted structures.

The EBM Resource Pyramid guide from HSLS enables health sciences readers to tour the hierarchy and levels of evidence-based medicine (EBM) and practice in a self-paced manner. Evidence-based resources beyond medicine, such as nursing and physiotherapy, are included.
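The programmatic access the AlphaFold DB offers (mentioned above) can be sketched with a small helper. The endpoint path below follows the public AlphaFold DB prediction API as I understand it; treat it as an assumption and check the current API documentation before relying on it:

```python
# Minimal sketch of programmatic access to the AlphaFold Protein
# Structure Database. The endpoint path is an assumption based on
# the public AlphaFold DB API; verify against the current docs.
import json
import urllib.request

API_BASE = "https://alphafold.ebi.ac.uk/api/prediction/"

def prediction_url(uniprot_accession: str) -> str:
    """Build the AlphaFold DB prediction-metadata URL for a UniProt accession."""
    return API_BASE + uniprot_accession

def fetch_prediction(uniprot_accession: str) -> list:
    """Fetch prediction metadata for an accession (a JSON list; entries
    typically include links to the predicted structure files)."""
    with urllib.request.urlopen(prediction_url(uniprot_accession)) as resp:
        return json.load(resp)

# P69905 (human hemoglobin subunit alpha) is used here purely as an
# illustrative accession.
print(prediction_url("P69905"))
```

Calling `fetch_prediction("P69905")` would then return the metadata records, from which the structure file itself can be downloaded.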
The graphical display of the pyramid breaks the categories into seven levels, each matched with links to HSLS resources and described in detail. Incorporated into the pyramid structure are the distinctions between filtered and unfiltered resources, and via this guide you can quickly link to groups of both.

Not all evidence is created equal, and the pyramid's structure facilitates recognizing and grading the evidence. The evidence at the top (systematic reviews) ranks higher than that at lower levels (e.g., cohort studies). For instance, a guideline formulated by a Delphi survey of experts' opinions rates weaker than one based on systematic reviews of guideline recommendations. Hints for searching for evidence in the various database descriptions help guide strategies for finding the strongest levels of evidence.

Join us for a new workshop on Risk of Bias: Thursday, November 10, from noon to 1 p.m., online. Register for "Risk of Bias: What is it? How do I assess for it? What do I use to assess?"

Assessing for Risk of Bias (RoB) is one of the expected steps when conducting a systematic review, but it can also be used to self-assess study conduct as practiced or as written in your proposal or protocol. Risk of bias assessments are important in research to determine flaws in the design, conduct, or analysis of randomized trials and other types of studies that could lead to an overestimation or underestimation of the true effect of an intervention or exposure.

Guillaume Mauquest de la Motte was a surgeon-accoucheur. Accoucheurs were male surgeons specializing in childbirth, which became fashionable in 17th-century France as an alternative to the tradition of women as birth attendants. In the early 18th century, accoucheurs were at the center of a polemic by physician Philippe Hecquet, who wrote on the indecency of male birthing attendants.
Guillaume Mauquest de la Motte responded with a defense of accoucheurs, arguing that their skills and expertise are necessary to save both mother and child. This midwifery debate was more about whether physicians or surgeons were the best medical providers than about justifying or challenging the role of midwives. Change was coming: only seven years later, in 1725, the first school for surgeons opened in Paris, and from then on surgeons' training began to resemble the training of physicians.

Mauquest de la Motte is also the author of one of the best treatises on childbirth (Traité complet des accouchements, 1721), which was very popular and went through multiple editions. Mauquest de la Motte (1655-1737) studied at the Hôtel-Dieu. After obtaining his degree, he returned to his native region of France and established a practice in Valognes in 1701. He became well known and sought after because he gained a reputation among women for delivering babies safely. He attended three or four deliveries daily and practiced surgery and obstetrics for more than fifty years. The books he published helped solidify his reputation, because his writing was grounded in his extensive experience.

The HSLS Staff News section includes recent HSLS presentations, publications, staff changes, staff promotions, degrees earned, and more. Two new instructional designers have joined HSLS to support the Network of the National Library of Medicine (NNLM) All of Us Program Center (NAPC):

Neda Hashmi, NAPC Instructional Designer, comes to HSLS from AppFolio, where she designed and developed employee training and Help Center content for a cloud-based property management firm, and Rumie, where she designed learning experiences for open online courses reaching broad audiences.
Hashmi also has more than ten years of experience in freelance content and technical writing, teaching, and other work involving creative storytelling, visual arts, and digital media that complements her skills in training development and design. For NAPC, Hashmi will support the design and development of learning experiences and programs for National Program Training, which focuses on public-serving NNLM audiences such as public libraries and community-based organizations, and the communities they serve.

Patrick Norman, NAPC Instructional Designer, comes to HSLS from Erlanger Health System in Chattanooga, TN, where he designed and delivered employee training at the multi-hospital academic medical center. He holds a Master of Education, and his professional experience also includes leadership development support at Teach For America, curriculum design and digital content production for a NASA education grant, and service in the U.S. Marine Corps Reserve. For NAPC, Norman will support the design and development of trainings and learning experiences for healthcare provider organizations, community engagement partners, and NIH program staff who work within the All of Us Research Program.
Physical attributes determine how a character interacts with the physics of the world. Can she pick up that object? How does she move? How long can she hold her breath?

Strength Example Chart

| Rank | Deadlift Max | Max Lift Example | Character Example |
|------|--------------|------------------|-------------------|
| 0 | 16 kg | Tavern bench | Merchant or Noble |
| 2 | 64 kg | Human adult | Farmer |
| 4 | 256 kg | Barrel of beer | Veteran warrior |
| 5 | 512 kg | Light horse | Strongest living human |
| 6 | 1,024 kg | Draft horse | Legendary hero |

Strength is the attribute that represents a character's raw physical power. It is primarily concerned with the amount of force a character can generate with their muscles: how much they can lift and carry, as well as how hard they hit. Characters with high ranks in Strength are often hard workers with jobs that require hours of physical labor, or athletes who work out with weights to attain their powerful physiques. They tend toward large stature, although large size does not necessarily correspond to high Strength; many athletes have high Strength while maintaining a small, wiry physique.

Agility is the attribute that represents a character's physical finesse. It is a measure both of how flexible a character is and of how much control he has over his body. Hand-eye coordination, muscle memory, and both gross and fine motor control are all covered by Agility; if it involves the body in motion, it is a function of Agility. Characters with high Agility are often extremely graceful, seeming to glide across the ground when they walk or to resemble a cat when poised to strike. They tend toward nimble, quick movements and great flexibility, with deliberate movements that can seem otherworldly to others.

Stamina is the attribute that represents a character's physical resistance. It is a measure of how much exertion a character can sustain as well as how much punishment they can endure.
It also represents their immune system in games that deal with diseases, such as a hard sci-fi, pulp-era, or historical setting. Characters who excel in Stamina can stay up late with no ill effects, run long distances without tiring, and are almost never sick. In many ways, Stamina is a gauge of a character's overall health, like a snapshot of their physique in a number. Sometimes characters who are otherwise not very physically fit, like those with low Strength or Agility, will still have several ranks in Stamina for this reason.
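The Strength example chart follows a simple doubling rule: each rank doubles the deadlift maximum, starting from 16 kg at rank 0, i.e. 16 kg × 2^rank. A minimal sketch of that rule (the function name is my own, not from the game text):

```python
# The Strength chart doubles the deadlift maximum with each rank,
# starting at 16 kg at rank 0: max = 16 * 2**rank.
def deadlift_max_kg(rank: int) -> int:
    """Deadlift maximum in kilograms for a given Strength rank."""
    return 16 * 2 ** rank

# Reproduce the chart rows (ranks 1 and 3 are simply not shown there):
for rank in (0, 2, 4, 5, 6):
    print(rank, deadlift_max_kg(rank))  # 16, 64, 256, 512, 1024 kg
```

The exponential progression is why a rank-6 legendary hero lifts sixty-four times what a rank-0 merchant can.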
No, even though they’re a lot higher and moving a lot faster than an airplane, the situation is still the same. To them, Earth’s surface appears to be “moving backward” at their speed of 18,000 miles per hour (29,000 kilometers per hour), just as it appears to you when you’re in a car or an airplane. The only difference is that because of their higher speed, they can see a whole continent “moving by” in less time than it probably takes you to drive to work. You may have seen motion pictures of space-walking astronauts with the continents “moving” westward in the background. But why westward? Aha! That’s an interesting story. Have you ever wondered why the Kennedy Space Center was built on the east coast of Florida, rather than on the west coast of California? After all, Mickey Mouse is equally accessible on both coasts. First of all, we want to shoot our rockets out over an ocean, rather than over any populated areas, so that booster rockets can be safely jettisoned. But second and more important, we have to launch our shuttles and satellites into their orbits around the globe by shooting them eastward, in the same direction Earth’s surface is moving. That way, we get a free, 1,000-mile-per-hour (1,600-kilometer-per-hour) shove from Mother Earth. And that means the eastward Atlantic Ocean rather than the westward Pacific. After the shuttle is in orbit, it continues to fly eastward, and looking down, the astronauts see Earth’s surface apparently moving westward, just as if they were in an airplane flying from Los Angeles to New York. But with a lot more legroom.
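That "free shove from Mother Earth" can be checked with a quick calculation: the eastward surface speed from Earth's rotation is the circumference at a given latitude divided by the length of a day. A minimal sketch (the launch-site latitude is approximate, and a 24-hour solar day is used for simplicity):

```python
import math

EARTH_RADIUS_KM = 6378.0  # equatorial radius
DAY_HOURS = 24.0          # solar day; close enough for this estimate

def surface_speed_kmh(latitude_deg: float) -> float:
    """Eastward surface speed (km/h) due to Earth's rotation at a given latitude."""
    circumference = 2 * math.pi * EARTH_RADIUS_KM * math.cos(math.radians(latitude_deg))
    return circumference / DAY_HOURS

# At the equator this comes to roughly 1,670 km/h (~1,040 mph);
# at Cape Canaveral's latitude (~28.5° N, approximate) it is roughly
# 1,470 km/h (~910 mph) -- the boost the text rounds to about
# 1,000 mph (1,600 km/h).
print(round(surface_speed_kmh(0.0)), "km/h at the equator")
print(round(surface_speed_kmh(28.5)), "km/h at ~28.5° N")
```

The cosine factor is the whole story: the farther a launch site is from the equator, the smaller the free eastward boost, which is one reason launch sites favor low latitudes.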
In this article, we will focus on:

- understanding what regression as a defence mechanism means
- how it can benefit in the short run
- how it can create problems in the long run, and
- finally, how therapists can help clients move beyond this defense to identify and address deeper issues.

Regression: Case Study

Eleven-year-old Raj had to relocate with his parents when his father received a job transfer. Distressed, yet in no position to oppose the move, he dejectedly left behind his school and friends. Waking up with great reluctance on the first day at his new school, he finds he has wet his bed. After quite a few repeated occurrences of the same incident, his parents grew highly worried about their child's well-being, for they were completely unaware of the turmoil he was undergoing, and in a way, so was he. Any therapist would recognize that the little boy was regressing.

What is Regression?

Regression is unconsciously reverting (in terms of thoughts, feelings, and behaviours) to an earlier stage of development. It is a defense mechanism used to deal with the current stressors of our lives. People usually regress when they find it difficult to address their issues in an age-appropriate manner, or when they feel they are in no position to intervene at all, as in the case of Raj. Raj, a middle-school-age child, started wetting his bed like an infant who has not received any toilet training because he could not bear the anxiety caused by relocation and separation from his peers. Another kind of regressive tendency often seen among children is thumb-sucking. Addressing the underlying unmet need in the child usually corrects the regressive behaviour. However, children are not the only ones susceptible to such behaviours.

Regression in Adults: Case Study

Raj grows up to be a typical 20-year-old who wishes to study abroad. He makes it to his desired college and begins a new life in a new apartment with new people to live with.
However, he finds it difficult to sleep at night, and soon shoots off an email to his parents requesting that they send his childhood Teddy. Highly amused by this request, his parents choose to comply, and Raj starts sleeping well cuddling his Teddy.

Regression often brings short-term benefits, e.g., sleep, as in the case of our young man here. He copes with the apparent insecurity of his new lifestyle by extracting some sense of protection from his childhood source of safety: his Teddy. A problem arises, however, when we become dependent on such defense mechanisms and refuse to learn from our new experiences. If, instead of adapting to his new friends and environment by socializing and learning his way around, Raj decides to meet his security needs by reverting to an early psychological stage (sleeping with his Teddy) that provides immediate soothing, regression becomes a problem. Usually, the unconscious aim behind such behaviour is to receive the nurturance and reassurance once provided by an adult authority figure who took care of all the hardships you faced in childhood.

Now Raj, at 35, takes up a job and lives with his wife. His boss, an old yet sharp man with greying hair, usually criticizes his work and never shows a sense of appreciation. Unable to take this daily dose of censure and the pen-chewing sessions that follow it any longer, Raj goes back home, curls into a ball and starts crying, rocking back and forth like a baby cradled in his mother's arms. Does his crying like that seem out of the ordinary? Indeed it does! That is because our fully grown adult version of Raj is yet again regressing.

Regression takes place among adults facing unresolved frustration. Resorting to behaviours such as pen-chewing is indicative of 'fixation', which occurs at a particular point in childhood where our desires have not been gratified or, in some cases, have been over-gratified.
These disturbances in the normal course of development become, in Freudian psychology, your safe haven, to which you return every time you find yourself in a position where you feel "I can't take it anymore." Each time we feel we're in a pickle, we feel helpless. Helplessness is a predominant feeling in the early years of life because of our great dependency on our caretakers for almost everything. Hence, under duress, we find ourselves seeking comfort in the childlike behaviours that provided us relief back then.

Regression: Purpose and Problem

Remember: regression happens at the level of the unconscious to protect the client by helping them express feelings (in a childlike manner) that they were otherwise not able to express in a mature, adult manner. Though the client is able to express these feelings, regressive childlike ways don't really help them resolve the conflict.

Therapist's Niche: Working with Regression

- It is important to help clients build skills and mental states that allow them to express these feelings in a more appropriate and effective manner, one that brings the client not only temporary relief but also resolution of the conflict that led to these feelings.
- To understand the client's behaviour and the reasons behind it, the therapist can use Erik Erikson's stages-of-development model, which attempts to explain why a person regresses to a particular age group when experiencing certain emotions and situations.
- Help the client move beyond the defense mechanism of regression through creating awareness, developing skills, releasing suppressed emotions, and accessing empowering mental states.
- You can use a combination of conscious intervention and planning through effective questioning (check out the Meta Model), T.O.T.E, Corrective Therapy, and SVIT™ for creating awareness.
- SOFT SEA™ can be used to help clients identify and develop required skills.
- Hypnotherapy techniques like Regression Therapy or Inner Child Therapy can be used for releasing suppressed emotions.
- Anchoring, hypnotic suggestions, and mental rehearsals / future pacing can be used to help the client access empowering mental states.
- Through this combination of techniques, the client is encouraged to acknowledge the problem, identify its cause, come up with a solution, and apply this solution to achieve the desired outcome. In essence, it is a person-centred approach that also helps develop resilience and adaptive strengths in the client.

Precautions to Take when Working with Regression

While using regression and psychoanalysis, a therapist needs to be careful about the choice of words. Asking closed-ended questions could lead to the creation of false memories, which can create further complications. It is therefore important to ask open-ended questions during these processes, so that clients can understand their own subjective experiences and are not influenced by choices provided by the therapist.

If you are a psychologist or a psychology student and would like to learn powerful therapeutic techniques like Regression and Inner Child work, among others, join us for our practitioner course, Cognitive Hypnotic Psychotherapist™, to master techniques that will help you become a more effective and successful therapist.
We love sharing the latest from our friends at YouBeauty!

Skin cancer rates among Caucasians have been climbing steadily over the last decade, but so too is the number of cases in ethnic skin. At the core of the problem is the erroneous belief that ethnic skin is immune to the sun's carcinogenic rays. "People need to realize that skin of color burns too," says YouBeauty dermatology expert Jeanine B. Downie, M.D. "Radiation absorbed from the sun can have you seeing cancer down the road."

While it's hard to find data comparing skin cancer rates between Caucasian and non-Caucasian skin types, we do know that melanoma mortality rates among darker-skinned people are disproportionately higher. That's because these cancers tend to be diagnosed at a later stage, when they are often advanced and potentially fatal, explains Dr. Mona Gohara, M.D., assistant clinical professor at Yale University School of Medicine's department of dermatology. "Bob Marley died at age 36 from a melanoma that started under a toenail and spread to his brain," says Maryland dermatologist Dr. Noelle S. Sherber. "Lesions in these areas tend to grow horizontally and spread out on the skin's surface, so early detection is key."

Non-melanoma skin cancers are also on the rise. The milder basal cell carcinoma is predominantly found among Caucasians, Hispanics and Asians, while the more aggressive squamous cell carcinoma most commonly strikes the African American and Asian Indian communities and can quickly metastasize to other organs. Research has found that basal cell carcinomas occur primarily on the parts of the body that receive the most UV exposure. Squamous cell carcinomas show up in exposed areas too, but on African American skin they also appear on the legs and in the anogenital region. Pay close attention to any lesion that bleeds, oozes, crusts, won't heal or lasts longer than a month; in those cases, see a dermatologist immediately.
Experts point to a cultural lack of sun awareness and education among many ethnic groups as a factor. While Caucasians tend to be versed in the ABCDE warning signs of skin cancer (asymmetry, border irregularity, color, diameter and evolution) and are more likely to schedule regular skin cancer screenings with a dermatologist, it's a popular belief among minorities that skin of color provides a natural shield and such vigilant care isn't necessary. "Higher levels of melanin in skin affords you a certain level of protection, but people need to understand that it doesn't circumvent the risk," says Downie. "Until that message gets across, I think it's going to be depressing when it comes to statistics and who dies from skin cancer."

Results from a recent L'Oréal study on minorities and skin cancer reveal that the message is still far from taking hold. In that survey, 65 percent of minority respondents said they didn't consider themselves at risk for skin cancer, while 62 percent of African American adults said they've never even worn sunscreen. Only a meager 31 percent of minorities had performed a skin cancer self-check, and just 17 percent had had a full-body examination by a dermatologist.

"The bottom line is that everyone needs to wear a full-spectrum sunscreen of SPF 30 daily and get a body check at least once a year," says Downie, who adds that if you have a family history of skin cancer, it should be twice a year, and even more often than that for those previously diagnosed with skin cancer. "I have an under 35-year-old patient who just had a melanoma removed six weeks ago and already we think we've just found another one," says Downie. "That's unusual, but if you've had an unsafe sun history these spots can pop up fast."

Experts say environmental change is one reason skin cancer in minorities is soaring. "Ozone depletion makes today's sun more damaging than ever," says Dr. Wendy Roberts, medical director of Desert Dermatology Skin Institute in Rancho Mirage, CA.
And a culture steeped in an obsessive love of all things bronze has some entering tanning beds in search of even more color. "The tanning bed emits 12 to 15 times the ultraviolet radiation of the sun," warns Downie. "I have patients who say that tanning gives them a 'base tan' of protection. Newsflash: a base tan is literally an SPF of three."

But while the risk of skin cancer may be the most important reason to protect against the sun, it's often vanity that gets people to take action, says Downie. "Most people think, 'it'll never be me who gets cancer.' But when you tell them, 'it'll be you who ages faster than your sister' – then suddenly you get people's attention."

The popular quip "black doesn't crack" may have some truth to it when it comes to wrinkling. But that higher melanin content also makes skin of color more susceptible to discoloration. "African Americans, Asians and Latinos age with patchy areas of pigmentation due to the sun's rays, whereas signs of aging in Caucasians tend to emerge as fine lines and wrinkles," explains Downie. Pigmentation is tricky for dermatologists to treat: lasers may induce an inflammatory darkening response, while deep peels incite similar complications. "If you wear sunscreen every day, you will age slower – and that includes ethnic skin. It's just a fact," says Downie.

And then there's the chalky texture issue, with many sunscreen formulas leaving pasty streaks of purple, gray or silver on dark skin. "I hear that excuse from patients all the time," says Downie, who is herself of African American descent. "There are many micronized varieties now that apply clearly," she adds, citing the weightless absorption of Neutrogena Ultra Sheer Dry-Touch SPF 85 and SkinMedica Daily Physical Defense SPF 30 as top picks among her patients of both Caucasian and ethnic backgrounds.

Whether it's fear of melanoma or the almighty wrinkle that puts the scare of sunscreen into you, just remember: skin cancer doesn't discriminate.
We can all afford to be more aware and protected.
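The SPF figures quoted above lend themselves to a quick back-of-the-envelope check. As a rough illustration (not from the article), the common rule of thumb is that a sunscreen of factor SPF transmits about 1/SPF of the UVB that reaches it, so the fraction blocked is 1 − 1/SPF. The sketch below is our own hypothetical helper under that assumption:

```python
# Illustrative sketch, not from the article: the usual rule of thumb is that
# a sunscreen of factor SPF lets through roughly 1/SPF of UVB, so the
# fraction blocked is 1 - 1/SPF.

def uvb_blocked(spf: float) -> float:
    """Approximate fraction of UVB radiation blocked at a given SPF."""
    if spf < 1:
        raise ValueError("SPF must be at least 1")
    return 1.0 - 1.0 / spf

for spf in (3, 30, 85):  # the SPF values mentioned above
    print(f"SPF {spf}: ~{uvb_blocked(spf):.1%} of UVB blocked")
```

On this rule of thumb, the "base tan" of SPF 3 blocks only about two-thirds of UVB, while SPF 30 already blocks roughly 97 percent – which is why the jump from nothing to SPF 30 matters far more than the jump from 30 to 85.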
- 23.5SE.1PE: Predicting Whether a Complex Has Optical Isomers. Does either have op...
- 23.5SE.2PE: Problem 2PE. Predicting Whether a Complex Has Optical Isomers. Does eithe...

Solutions for Chapter 23.5SE: Chemistry: The Central Science, 13th Edition

- A reaction in which bonds are broken in the presence of an acid. For example, in the presence of a strong acid, an ether is converted into two alkyl halides.
- A unit of pressure equal to 760 torr; 1 atm = 101.325 kPa. (Section 10.2)
- atom: The smallest representative particle of an element. (Sections 1.1 and 2.1)
- The slow oxidation of organic compounds that occurs in the presence of atmospheric oxygen; air oxidation of materials such as unsaturated fatty acids.
- In a bicyclic system, the carbon atoms where the rings are fused together.
- A step in a chain reaction characterized by the formation of reactive intermediates (radicals, anions, or cations) from nonradical or noncharged molecules.
- The generally larger formation constants for polydentate ligands as compared with the corresponding monodentate ligands. (Section 23.3)
- conjugate acid–base pair: An acid and a base, such as H2O and OH-, that differ only in the presence or absence of a proton. (Section 16.2)
- crossed aldol reaction: An aldol reaction that occurs between different partners.
- The process in which molecules, ions, or atoms come together to form a crystalline solid. (Section 13.2)
- deoxyribonucleic acids (DNA): A type of nucleic acid. (25.4)
- From the Greek meaning electron loving: any species that can accept a pair of electrons to form a new covalent bond; alternatively, a Lewis acid.
- Atoms or groups on an atom that give a chiral center when one of the groups is replaced by another group; a pair of enantiomers results. The hydrogens of the CH2 group of ethanol, for example, are enantiotopic: replacing one of them by deuterium gives (R)-1-deuteroethanol; replacing the other gives (S)-1-deuteroethanol.
- Enantiotopic groups have identical chemical shifts in achiral environments but different chemical shifts in chiral environments.
- A biological membrane that consists of a phospholipid bilayer with proteins, carbohydrates, and other lipids on the surface and embedded in the bilayer.
- The product of the mass, m, and velocity, v, of an object. (Section 6.4)
- rare earth element: See lanthanide element. (Sections 6.8 and 6.9)
- A method for preparing substituted amines by treating an aldehyde or ketone with an amine in the presence of a reducing agent.
- The three-dimensional conformations of localized regions of a protein, including helices and β-pleated sheets.
- Groups that strongly deactivate an aromatic ring toward electrophilic aromatic substitution, thereby significantly decreasing the rate of the reaction.
- Experimental conditions that permit the establishment of equilibrium between two or more products of a reaction. The composition of the product mixture is determined by the relative stabilities of the products.
It goes without saying that you want your horse to live a long and prosperous life, and a balanced diet is the biggest contributing factor. If you're new to the world of horse ownership, or just want to ensure that you're feeding your equine the best possible diet, you've come to the right place. In this article, we'll look at what should be included in a horse's diet and the importance of each nutritional element. We'll also cover what could do your horse harm if consumed. If your horse becomes ill, vet bills can quickly rack up, which is why it's so important that you have adequate horse insurance. Horse insurance policies from Equesure can cover horses valued up to £750,000 and provide up to £12,500 towards vet fees throughout the year. Here is a quick rundown of what constitutes a balanced diet for a horse and a few things to avoid:

Fresh clean water

Just like any animal, horses must have continuous access to fresh clean water. According to the RSPCA, an average 500kg (approximately 15hh) horse drinks around 30-50 litres a day. This amount can increase in hot weather, and when the horse has worked hard, as it will need to replenish the water reserves used up in sweating. A mare with a foal needs more water because the milk she produces to feed the foal requires water. It's important that you regularly refresh a horse's source of water. If a horse ends up drinking water that is contaminated with dirt, algae or manure/urine, it can become sick. Sometimes a horse will refuse to drink if it can smell that the water is polluted or stagnant, which can lead to dehydration.

Field grass and tender plants

Horses are herbivores, meaning they eat only plants. Horses should be provided with as much opportunity to graze on pasture as possible; twenty-four-hour access is ideal. That's because horses have evolved to eat and process large volumes of relatively low-quality (low-calorie) grasses and other plants.
As long as the pasture is good, it will contain most of the nutrition a horse requires to be healthy. Grass also contains silica, which contributes to good dental health. Grass can provide a balanced diet for your horse all year round, but be aware that during the winter months the energy content of the grass falls – this is why it is sometimes necessary to supplement a horse's diet with other things. If there is not enough pasture for your horse or pony to feed from, hay is the next best option. Not just any old hay will do, however – you need to find a good equine hay that is well matched to your horse's lifestyle. Just as with grass, hay varies enormously in calorie content. So, if your horse is prone to putting on weight, you should try to source low-calorie hay. Make sure you know what you're buying, so that any shortfalls in vitamins and minerals can be compensated for with supplements.

Grains & hard feed

You might see some websites suggest that you feed your horse 'straights'. These are cereals and grains, such as oats and barley, often rolled, crushed, bruised or heat-treated to increase their digestibility. These are quite old-fashioned methods of feeding; there are modern alternatives available, and it's best to speak to a nutritional feed specialist to create the best feeding plan for your horse. We recommend taking advantage of TopSpec's nutritional advice helpline to ensure you are providing the right balance. Hard feed provides energy and nutrients for horses who are regularly put through their paces and taking part in competition. If hard feed is fed to horses in light work or to ponies, it may cause the animal to become overweight or difficult to manage. Hard feed should make up only a small percentage of the horse's diet, with roughage (hay and grass) making up the majority.
Horses fed diets low in forage and high in straights are at risk of digestive problems, because hard feed doesn't require the same chewing time as forage and so doesn't suit a horse's long digestive system.

Salt and minerals

When a horse sweats it obviously loses water from its body, but also electrolytes, which need to be replenished through the diet with salt and other minerals. Providing a salt lick either in the stable or field should be enough to meet a horse's sodium requirements. However, if your horse gets through a fair bit of work in a day, you'll need to up its intake of salt. Horses consuming inadequate amounts of sodium are more likely to suffer from heat stress and electrolyte imbalances than those consuming the correct daily intake of sodium and chloride. Hygain advises that a horse with a moderate work rate will require 17.8 grams of sodium and 53.3 grams of chloride per day – and this will increase again if the weather is hot.

We all like to treat our horses from time to time – it makes us feel good seeing them enjoy their food. But you just need to be careful not to overdo the treats. A horse will eat most things – even meat – because they don't know what is good for them. However, avoid giving your horse meat or too many sugary treats, as these are not things horses would naturally consume. Feeding a horse meat could lead to serious problems: horses cannot vomit, so they can't clear their system of anything bad for them. Instead, they can develop a condition called colic, which can be fatal, or very expensive if your horse needs surgery. So it's best to avoid letting your equine have the occasional bite of your hamburger, just to be on the safe side. Equesure have specialist colic cover available for occasions like this of up to £7,500 per incident.

Things that you shouldn't feed your horse

If you feed your horse the wrong foods it can cause them a lot of discomfort and pain, and even damage their digestive and urinary systems.
Here are some of the things you should avoid feeding your horse, according to Horsemart:
- Dairy products
- Stone fruits
- Cabbage, broccoli & cauliflower
- Lawn clippings & compost

The best advice we can give is that if you want to try something new with your horse's diet, consult a vet or equine nutritionist first.

Invest in horse insurance

Due to the nature of the way horses are built, they can be prone to spells of ill health. So, it's essential that you take out comprehensive horse insurance which will cover you in a number of different circumstances. Equesure are equine insurance experts with over 60 years' combined equestrian knowledge, and will endeavour to find an insurance policy tailored to your needs, for you, your horse and your transport. Working with a varied and trusted panel of insurers, Equesure offer clients a bespoke horse insurance policy with various options of cover to ensure the policy is right for you. Get a quick quote today.
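To make the figures above concrete, here is a small illustrative Python sketch combining the RSPCA's water figure (30-50 litres a day for a 500 kg horse) with Hygain's electrolyte figures for moderate work. The linear scaling by body weight and the function names are our own simplifying assumptions for illustration, not veterinary guidance.

```python
# Hypothetical helper, not veterinary advice: scales the RSPCA's 30-50 L/day
# figure for a 500 kg horse linearly by body weight (our own assumption),
# and records Hygain's quoted electrolyte needs for moderate work.

def daily_water_litres(weight_kg: float) -> tuple[float, float]:
    """Rough daily water range (low, high) in litres for a horse of given weight."""
    factor = weight_kg / 500.0
    return (30.0 * factor, 50.0 * factor)

# Grams per day for a horse in moderate work, as quoted from Hygain above
MODERATE_WORK_ELECTROLYTES_G = {"sodium": 17.8, "chloride": 53.3}

low, high = daily_water_litres(500)
print(f"A 500 kg horse needs roughly {low:.0f}-{high:.0f} litres of water a day")
for mineral, grams in MODERATE_WORK_ELECTROLYTES_G.items():
    print(f"Moderate work: {grams} g of {mineral} per day")
```

Remember that both figures rise in hot weather or heavy work, so treat any such estimate as a floor rather than a target.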
Gulf War Syndrome

The Centers for Disease Control 1999 Gulf War Syndrome Research Summary Report (click here to view and scroll to pdf page 10, or click here for more details) states that animals fed the same drugs given to 1992 Gulf War personnel developed health problems that were statistically significantly worse than the sum of the incidence of symptoms when each pill was taken individually. The military personnel were given many pesticides, anti-viral, anti-chemical-warfare, anti-biological-warfare, and anti-parasite agents. These drugs were originally approved in studies that showed them to be safe when taken individually. In summary, each poison can disrupt a layer of protection to the point where several layers are defeated, exposing the sensitive internal biochemistry to harmful agents.

According to the CDC report, the toxicity of the war pesticides (e.g. bug repellents such as DEET and permethrin) increased 100-fold when combined with pyridostigmine bromide (the anti-nerve-gas drug). Pesticides are designed to inactivate the nerve cells of bugs by binding to their internal molecules. A human can normally tolerate low levels of pesticides, yet levels 100 times more toxic are far more dangerous. Additionally, burning oil well smoke is laden with heavy metals such as cadmium, arsenic, lead, and mercury, which can bind to internal enzymes and chemicals and disable their function.

This CDC 1999 Gulf War Report is consistent with the information at our web site, which explains how many "difficult to see" problems are the result of one offending molecule disrupting an internal defense system and thereby enabling another offending molecule to damage internal biochemistry, without leaving much of a trace as to who had been where, and when. It is ironic that the US military was concerned about the threat of chemical/biological warfare from the enemy, yet not from itself. Some feel there is nothing to fear but fear itself.
One could argue there is nothing to fear but yourself. Yet in all fairness, the US Army had the right intentions at the time. If one is suffering from GWS, we recommend that you proceed with our CFS Strategy and FMS Strategy. Additionally, the approach at the Dallas Environmental Health Center (e.g. Toxification Panel 200) is recommended (for details, click here and here). Also, it is recommended that one look for heavy metals via the DMSA challenge test and the porphyrin panel. For details, click here. It is possible that some folks with GWS have permanent nerve damage from the pesticides, and treating it is not easy; however, they may also have many other problems that are reversible and therefore leave room for improvement. For more information on recommended research, please click here and here. ©Copyright 1999 gsw. All rights reserved.
People want to be healthy. In fact, most of them are in dire need of better health, but unfortunately, they do not eat what they need to eat. In this global economy, with its highly advanced logistics, we can get pretty much any fruit from anywhere in the world. Working people consider themselves to be very busy these days (which is not the real factor); they sit in their pods for hours together and love to eat sandwiches and soft drinks all the time. What they don't realize is that these food items have little nutritional value and contain added preservatives that are dangerous for their health. However, the number of health-conscious people is increasing very rapidly, and they are turning to food items and fruits like the baobab fruit, which has high nutritional value.

The Baobab Fruit

"The new super fruit." Adansonia digitata, also known as the baobab tree, is a symbol of strength in Africa and one of the most ancient trees on our earth. The tree is found all over Africa and in some parts of Australia as well. The mighty baobab tree produces an edible fruit named the baobab fruit, which has a sour taste. The fruit is rich in calcium and contains vitamin C at six times the level of an orange. Baobab fruit is also a good source of iron, magnesium and potassium. It takes 65 years for a baobab tree to bear fruit. Nevertheless, the fruit is impressively rich in a variety of nutrients and minerals and, more importantly, it is 100% natural. Many dieticians suggest baobab for pregnant women, as it is rich in calcium and helps the baby's growth in a big way.

The Health Benefits of the Super Fruit

The baobab tree is also known as the 'upside down tree', as its branches resemble the shape of roots. But regardless of its shape, the exotic fruit it produces has several distinctive health benefits. Below are a few of the health benefits of the baobab fruit.

Antioxidants – These are molecules that combat free radicals.
These molecules play a significant part in protecting the body from the radicals. The baobab fruit has very rich antioxidant properties.

Vitamin C – improves the immune system, helping protect a person from all kinds of diseases. It also enhances the absorption of calcium and iron. The baobab fruit, as mentioned earlier, has six times the vitamin C of an orange.

Calcium – the baobab fruit contains twice the level of calcium present in milk. It can be used as a substitute for milk for babies.

The baobab fruit also plays an important part in traditional African medicine. Its pulp is commonly used for the treatment of fever. Its roots are used to produce a drink that is believed to help in treatment for HIV-positive people. The baobab also has properties for treating enteritis, rheumatism, constipation and hangovers. Lastly, the baobab tightens, tones and moisturizes the skin and encourages skin cell generation. The baobab fruit will definitely keep your indulgence healthy.

The baobab fruit is just one of the many options for a healthier lifestyle. If you want to know more about the all-natural super fruit juices formulated to give your body its daily dose of antioxidants, you can head on to http://www.vitav.com/ and see how you can live your life at its best! Vitav is a 100% natural antioxidant liquid drink made from exotic fruits like baobab, pear, mangosteen, acai berry and many others. Vitav helps to protect your body from free radicals and improve your metabolism as a whole. Visit http://www.vitav.com to learn how you can become a vitalizer and live richer forever!
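The multipliers quoted above (six times the vitamin C of an orange, twice the calcium of milk) can be turned into rough per-100 g estimates. In the sketch below, the multipliers come from the text, but the per-100 g baseline values for orange and milk are approximate assumptions of ours, not figures from the article.

```python
# Illustrative arithmetic only. The 6x and 2x multipliers come from the text
# above; the per-100 g baselines are approximate assumptions of ours.

ORANGE_VITAMIN_C_MG_PER_100G = 53.0   # assumed baseline for an orange
MILK_CALCIUM_MG_PER_100G = 120.0      # assumed baseline for whole milk

baobab_vitamin_c = 6 * ORANGE_VITAMIN_C_MG_PER_100G
baobab_calcium = 2 * MILK_CALCIUM_MG_PER_100G

print(f"Baobab vitamin C: ~{baobab_vitamin_c:.0f} mg per 100 g")
print(f"Baobab calcium: ~{baobab_calcium:.0f} mg per 100 g")
```

Actual nutrient content varies with the fruit, the pulp preparation and the serving size, so treat these as order-of-magnitude figures only.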
Did you know that you can't copyright a recipe? While recipes have nothing to do with crocheting, it's true that a recipe cannot be copyrighted. The reason is that there are only so many food products that can be put into a recipe to give the exact same flavor as the original. But when it comes to patterns, there are a multitude of scenarios that can be done with stitches that will leave a pattern looking exactly the same; only when you look at the product in depth will you discover the differences. Today, you'll learn some of the ways of duplicating the look of a pattern while making the pattern your own to copyright.

Starting with a round object, such as a doily, we need to determine the beginning stitches, which form our ring. Most often, the stitches are so tight that it's next to impossible to even consider getting an accurate count of stitches. No need to fret – we don't need this information. We can begin our reproduced pattern with a mock ring. This is done by wrapping our yarn or thread around our fingers twice. Remove this from your hand and place the hook under the thread that runs to the skein. Next, twist the hook a complete 360 degrees so that there is a twist under the hook. Now, we simply crochet however many single crochets, double crochets or whatever it appears the pattern calls for.

If you're working with a flat surface, you simply count how many stitches are across. Chain this number of stitches, and add one stitch if the first row is done in a single crochet or a half double crochet, and one or two stitches if it's done in a double crochet. I say one or two because there are people, like myself, who normally only allow two chains to equal the double crochet. I've also learned that, for myself, I like to count three stitches as one double crochet on the first row, but I like to use only two stitches thereafter. If the first row is done in a trc, you may want to add either two or three stitches.
Again, this depends on how many chains you want to equal your first stitch in the first row. It's difficult to reproduce a pattern without using the exact same number of stitches as the original crocheter, especially if you want to duplicate the same size. If recreating patterns is not to your liking, you might find someone near you who can do the job for you. Some crochet artists will do the job for no fee – after all, they'll be gaining a new pattern – or a small fee might be requested.
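The chain-counting rules described above can be sketched as a tiny calculator. The stitch abbreviations and the specific turning-chain choices below (two for a double crochet, three for a treble, where the author allows a range) are our own assumptions for illustration.

```python
# Sketch of the foundation-chain rule described above. Where the text allows
# a range (one or two extra chains for dc, two or three for trc), we pick
# the larger value; adjust to your own tension and preference.

TURNING_CHAINS = {
    "sc": 1,   # single crochet: add one chain
    "hdc": 1,  # half double crochet: add one chain
    "dc": 2,   # double crochet: one or two extra chains; two chosen here
    "trc": 3,  # treble crochet: two or three extra chains; three chosen here
}

def foundation_chain(stitches_across: int, first_row_stitch: str) -> int:
    """Chains needed to start a flat piece: the stitch count plus turning chains."""
    return stitches_across + TURNING_CHAINS[first_row_stitch]

print(foundation_chain(20, "dc"))  # 20 double crochets across -> 22 chains
```

As the author notes, how many chains "count as" the first stitch is a personal choice, so the table above is a starting point rather than a rule.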
THE SENATE OF MADAGASCAR

Law n° 98-001, dated 8 April 1998, on revision of the Constitution, provides for the creation of a Senate. The ordinance dated December 28th, 2000 on the organic law in respect of the Senate, and the decree dated January 8th, 2001, provided for the setting up of this Senate. The essential responsibility of the Senate is to represent the autonomous provinces. It has also been designed as an advisory body for the Government on economic, social and territorial issues.

I - COMPOSITION

90 Senators, of whom:
- 60 are elected, ten from each of the six autonomous provinces;
- 30 are appointed by the Head of State on the basis of their special expertise in legal, economic, social or cultural matters.

II - ELECTORAL AND APPOINTMENT SYSTEM

A - ELECTED SENATORS

Each autonomous province elects senators.

Electoral college: composed of the provincial councillors, elected by direct universal suffrage, and the mayors of the province they will represent.

Voting system: proportional voting for a list, with the highest average, with no list grouping, no votes for candidates on different lists, and no preferential voting. The term of office of senators is 6 years. First election: 18th March 2001.

Eligibility conditions:
- minimum age for eligibility: 40 years;
- to be registered on the electoral lists;
- to reside in the territory of the Republic when declaring candidacy;
- to be in order with tax legislation and regulation and to have paid all payable taxes of any kind for the past three years.

The function of a senator is not compatible with the exercise of any other publicly elected function or any civil service employment, excepting teaching and holding a ministerial post.

B - APPOINTED SENATORS

Their appointment shall take place within 21 days after the results of the senatorial election have been officially announced. Last appointment: July 2002.

III - ORGANISATION OF SESSIONS

The Senate shall meet as of right during the sessions of the National Assembly.
It may also meet in special session. When the National Assembly is not in session, the Senate shall be allowed to debate only questions on which it has been asked to give advice by the Government, except draft bills.

A - ORDINARY SESSIONS

Two ordinary sessions per year. The duration of each session shall be neither less than sixty days nor more than ninety days. The first session shall begin on the first Tuesday of May and the second on the last Tuesday of September.

B - EXTRAORDINARY SESSIONS

They shall take place on the basis of a determined agenda, through an ordinance of the President of the Republic taken during the Council of Ministers, either on the initiative of the President of the Republic or upon request of an absolute majority of the deputies. The duration of the session shall not exceed twelve days. Nevertheless, a closing ordinance shall come into force as soon as the National Assembly has dealt with all the points on the agenda for which it was convened. The President of the Republic alone may take the initiative to convene another extraordinary session within one month after the closing of the previous session.

C - SPECIAL SESSIONS

The Government shall convene special sessions. The agenda shall be limited and fixed by a convening ordinance adopted during the Council of Ministers.

IV - RELATIONS WITH THE OTHER CHAMBER AND THE EXECUTIVE

A - LEGISLATIVE POWER

1) Legislative initiative

Yes: the Prime Minister, together with senators and deputies, has the right to propose legislation. However, draft laws shall not be admissible if their adoption would result either in a reduction of public resources or in an increase of a public charge, except for finance bills.

2) Right of amendment

Yes, together with the deputies. However, amendments shall not be admissible if their adoption would result either in a reduction of public resources or in an increase of a public charge, except for finance bills.
3) Legislative procedure

Draft laws shall be tabled before the Bureau of either assembly, except for draft finance bills, which shall be tabled before the Bureau of the National Assembly.

a) Ordinary procedure

Private bills and amendments shall be submitted to the Government. To express its observations, the Government has thirty days for private members' bills and fifteen days for amendments. When the deadline expires, the assembly shall examine the private bills or the amendments with a view to adopting them. Government or private bills shall be sent for examination to the committee concerned before being discussed in public. Once they have been examined by the assembly in which they were lodged, Government or private bills shall be sent to the other assembly. The debate shall take place in turn in each assembly until identically worded versions have been adopted.

When, in the event of disagreement between the two assemblies, a Government or private bill has not been adopted after two readings by each assembly or, if the Government has declared the matter urgent, after one reading, the Prime Minister may call a meeting of a joint committee; this committee has the task of producing a text on the provisions that remain under discussion. The text elaborated by the joint committee may be submitted by the Government to the two assemblies for their approval. No amendment shall be admissible except with the Government's agreement. If the committee fails to adopt a common text, or if the text is not adopted under the conditions laid down in the previous article, the National Assembly shall definitively give a ruling with an absolute majority of its members. If, during the legislative procedure, it turns out that a private bill or an amendment is not a matter for legislation, the Government may declare it inadmissible.
In the event of disagreement between the Government and the National Assembly or the Senate, the High Constitutional Court shall give a ruling within eight days, upon request of the Prime Minister or the president of either assembly. Before the end of the promulgation deadline – three weeks – the President of the Republic may ask the Parliament for another examination of the bill or part of it. The Parliament may not refuse to do so. Ordinary bills may be submitted to the High Constitutional Court before promulgation, notably by the President of the Senate or by a quarter of the senators.

b) Special measures relating to finance bills

The Parliament shall have sixty days to examine the draft finance bill. If the National Assembly does not reach a decision within thirty days of the bill being introduced, it shall be deemed to have been adopted and the draft shall be sent to the Senate. The Senate has fifteen days after the bill has been sent for the first reading, and each assembly has five days for each of the following readings. If one assembly does not reach a decision on a bill within the fixed deadline, it shall be deemed to have been accepted. If the Parliament has not adopted the draft finance bill before the end of the second session, the provisions of the draft law may be brought into force by ordinance, including one or several amendments adopted by both assemblies. Any amendment entailing an increase in public expenditure or a decrease in public resources shall propose at the same time an equivalent increase in revenue or savings.

c) Special measures relating to organic laws

A Government or private organic law bill may not be examined and voted on by the first assembly to which it has been referred until fifteen days after it has been lodged. The bill may pass with an absolute majority of the members of each assembly.
In the event of disagreement between the two assemblies after two readings, the National Assembly shall give a definitive ruling with a 2/3 majority of its members. If the National Assembly has not adopted the draft organic law before the end of the session, its provisions may be brought into force by ordinance, including, if the case arises, one or several amendments adopted by both assemblies. Organic laws relating to the Senate and to the interprovincial Conference must be voted on by both assemblies in identical terms. Organic laws may be promulgated only after being declared in conformity with the Constitution by the High Constitutional Court.

d) Vote of confidence on the passing of a bill

The Government may stake its responsibility on a vote of confidence by demanding that both assemblies express themselves in a single vote on all, or part, of the provisions of the bills being discussed:
- during extraordinary sessions, on condition that these texts have been lodged within forty-eight hours of the opening of the session;
- during the last eight days of each ordinary session.
But the Government shall resign only if it is defeated by an absolute majority of the National Assembly members.

e) Enabling legislation

By voting with an absolute majority of the members of each assembly, the Parliament may authorise the President of the Republic, for a limited period and on a specific matter, to issue ordinances covering legislative measures, in the Council of Ministers. Ordinances shall come into force as soon as they are published, but they shall be declared null and void if the draft ratification law is not presented to the National Assembly by the date fixed by the Enabling Law.

B - SUPERVISORY POWERS

1) The means of information that the Senate may use are: the oral question, the written question, the questioning and the commission of inquiry.

a) During the ordinary session, at least one sitting a month is given over, as a priority, to questions.
b) Commissions of inquiry shall be in charge of collecting information on given facts and submitting their conclusions to the Bureau. They shall be created by the Senate through the vote of a resolution presented by at least five senators. A commission of inquiry may not be created for facts already under judicial investigation, for as long as that investigation is being carried out, nor for facts on which judgement has already been delivered. If a commission has already been created, its mission shall end with the start of a preliminary investigation relating to the facts which caused its creation. Commissions of inquiry may not be set up again with the same objective for a period of twelve months after the end of their mission. 2) The ratification or the approval of alliance and commerce treaties, of peace treaties, of treaties or agreements relating to international organisations, of those involving public finances, modifying law provisions, modifying the territory, or those relating to the status of persons, may take place only by virtue of a law. C - ADVISORY POWER The Government may seek advice from the Senate on economic and social issues as well as territorial organisation issues. D - RELATIONS WITH THE PRESIDENT OF THE REPUBLIC The President of the Republic may communicate with the Parliament by means of messages which do not give rise to any discussion. IV - SPECIAL PROVISIONS A - AUTONOMOUS PROVINCES 1) Senators shall be members as of right of the provincial councils, and shall be empowered to vote therein (unlike deputies). 2) Those found guilty may be dismissed from their duties by the President of the Republic after a joint committee of senators and deputies has been consulted.
3) The President of the Senate (as well as the President of the National Assembly) or his deputy shall attend as of right the interprovincial Conference, convened in order to discuss issues of common interest between the central power and one or several autonomous provinces, or between two or more autonomous provinces. B - INTERIM OF THE PRESIDENT OF THE REPUBLIC 1) In the event of a vacancy in the office of the President of the Republic because of resignation, death, impeachment or deposition, the duties of the Head of State shall be temporarily carried out by the President of the Senate. 2) Temporary incapacity Temporary incapacity may be declared by the High Constitutional Court, after the Parliament has ruled through a separate vote of each assembly with a 2/3 majority of its members, on grounds of duly established physical or mental incapacity of the President to carry out his duties. It may not go beyond a period of six months, after which the High Constitutional Court, under the same conditions, may decide to transform the temporary incapacity into a permanent incapacity. C - JURISDICTIONAL POWERS 1) The High Court of Justice shall comprise two incumbent senators and two deputy senators elected by the Senate. 2) The President of the Republic may be indicted before the High Court of Justice only by the two assemblies ruling through a separate vote, by public ballot and with a 2/3 majority of the members of each assembly, for actions accomplished in the exercise of his duties or caused by that exercise, and only in the event of high treason or serious and repeated violation of the Constitution. D - THE HIGH CONSTITUTIONAL COURT 1) Two of the members of the High Constitutional Court shall be appointed by the Senate. 2) The President of the Senate, or ¼ of the senators, may submit to the High Constitutional Court any text of a legislative or regulatory nature, as well as any matter within its competence, to assess its conformity with the Constitution.
3) The President of the Senate may consult the High Constitutional Court on the constitutionality of any draft or on the interpretation of a constitutional provision. E - CONSTITUTIONAL AMENDMENT Both parliamentary assemblies, together with the President of the Republic, shall have the right to move constitutional amendments. They shall rule on them through a separate vote requiring the absolute majority of the members of each assembly. The amendment - whether public or private - shall be adopted with at least a ¾ majority of the members of the National Assembly and the Senate.
Do you remember the days when you were in grade school, learning to speak proper English by improving your grammar skills? Then you know just how important it is to communicate effectively using the right words in your speech. The same is true when learning a second language, especially Spanish. When learning to speak Spanish, using the right words is probably the most important skill to acquire. Learning to speak Spanish with proper grammar is an additional step towards actually mastering the language. Many people get discouraged at this stage of the tutoring, when grammar is introduced, and quit before the program ends. To avoid this happening to you, if you get stuck in a learning curve while learning to speak Spanish, it's important to remember the reason you started taking Spanish in the first place. One encouraging thing to be aware of when learning Spanish is that it's actually very similar to English. Many English words originated from the Spanish language, which is why many words in the two languages sound the same or are similar in meaning. Another thing to notice is that once you become proficient in your new language, learning new grammar rules will become easier. At first, you may see your grammar improving only slowly in the early stages of your training, but try to be patient, as your speech will get better with practice. Starting with the basics will also help your progression. When learning new languages, the basic mechanics of grammar are often forgotten, such as adjectives, nouns, pronouns, and verbs. It's best to practice repeating complete sentences instead of just learning words, as repetition is a key aspect of learning any language. If you think about it, when you communicate with another person you talk in sentences instead of single words. So it only makes sense to learn a different language in a way that will enable you to get your full point across.
There are many resources you can review that will help you answer any questions regarding Spanish grammar errors. Make the Spanish dictionary and translation guide your friends. You also want to make a conscious effort to listen to audio tapes daily to help you improve your Spanish grammar at a faster rate. When learning Spanish, be sure to incorporate proper grammar from the start. Learning the language the right way will ensure your success in becoming bilingual. The effort you put into learning Spanish will be well worth it in the years to come.
The main purpose of this work is to study the effect of welding consumables on the microstructure and mechanical properties of a welded SA 530 GR 70 steel. Steel is an alloy of iron and carbon; it is usually cast into a malleable form, can be changed in shape by forging or rolling, and can be joined using joining processes such as welding. Welding is the process of coalescing materials such as metals or thermoplastics in order to join them seamlessly. The welding processes undertaken were Flux Cored Arc Welding (FCAW) with a mild steel filler wire, Shielded Metal Arc Welding (SMAW) with E6010, E6013 and E7018 electrodes, and Submerged Arc Welding (SAW) with a mild steel filler metal. The tests carried out were non-destructive tests (magnetic particle and ultrasonic testing), mechanical tests (tensile and hardness testing), and microstructural examination. SA 530 GR 70 steel shows an increase in hardness, greatest in the heat-affected zone, followed by the fusion zone and then the base metal, owing to the type of cooling which takes place and the types of grains formed during the welding process. The microstructure also changes: the base metal develops dendritic grains in the weldment area and columnar grains in the heat-affected zone. The results showed that different welding processes and consumables give different strengths, which must be carefully considered by the welder. Steel is arguably the world's most "advanced" material. It is a very versatile material with a wide range of attractive properties which can be produced at a very competitive cost. It has a diverse range of applications, and is second only to concrete in its annual production tonnage. Steel is not a new invention, which leads to a common misperception amongst those outside its field that "everything is known about steel". Steel is generally defined as a ferrous alloy containing less than 2.0 wt% C.
The complexity of steel arises with the introduction of further alloying elements into the iron-carbon alloy system. The optimization of alloying content in the iron-carbon system, combined with different mechanical and heat treatments, leads to immense opportunities for parameter variation, and these are continuously being developed. Steel is an alloy of iron and carbon; it is initially cast into a malleable form and can be changed in shape by forging, rolling or other mechanical processes. The difference between steel and cast iron is that steel does not contain graphite or free carbon. Carbon exists in small quantities in ferrite and mostly in cementite. There are different types of steel, but this work deals mainly with mild steel. Figure 1.1: The iron-iron carbide phase diagram Mild steel is a type of steel containing a small percentage of carbon, strong and tough but not readily tempered. It is also known as plain carbon steel and low carbon steel. It is the most common type of steel because its price is relatively low while its material properties are acceptable for many applications. Mild steel contains approximately 0.05-0.25% carbon, making it malleable and ductile. It has a relatively low tensile strength but it is cheap and easy to form, and its surface hardness can be increased through carburizing. It is often used when large quantities of steel are needed, for example structural steel. The density of mild steel is approximately 7.85 g/cm³ and its Young's modulus is 200 GPa. Low carbon steels contain less carbon than other steels and are easier to cold form, making them easier to handle. Historically, this workability gave an advantage to nations that used the material in making weapons (Sacks and Bonhart, 2005). Low carbon steel is a type of steel that contains fine grains and has been produced since the 19th century. Steel is classified as low carbon when its carbon content is lower than 0.2 percent (American Society for Testing and Materials, 2001).
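As a quick sanity check on the figures quoted above (density of about 7.85 g/cm³ and Young's modulus of about 200 GPa), the mass and elastic extension of a mild steel bar can be computed in a few lines of Python. The bar dimensions and load below are illustrative assumptions, not values from this work.

```python
# Back-of-envelope calculations using the mild steel properties quoted
# in the text. Bar geometry and load are assumed for illustration.

DENSITY = 7850.0        # kg/m^3 (7.85 g/cm^3)
YOUNGS_MODULUS = 200e9  # Pa (200 GPa)

def bar_mass(length_m, area_m2):
    """Mass of a prismatic bar: density * volume."""
    return DENSITY * length_m * area_m2

def elastic_extension(force_n, length_m, area_m2):
    """Hooke's law for axial loading: delta = F*L / (A*E)."""
    return force_n * length_m / (area_m2 * YOUNGS_MODULUS)

# A 1 m bar with a 1 cm^2 cross-section under a 10 kN tensile load:
area = 1e-4  # m^2
print(bar_mass(1.0, area))                 # 0.785 kg
print(elastic_extension(10e3, 1.0, area))  # 5e-4 m, i.e. 0.5 mm
```

The small extension illustrates why the 200 GPa modulus makes steel attractive for structural use: even at a working stress of 100 MPa the strain is only 0.05%.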
Low carbon steel is widely used in the fabrication industry due to its excellent strength-to-weight ratio, and one of its applications is in the automobile industry (Khodabakhshi et al., 2011). This material is suitable for the automotive industry because it can absorb high impact forces without cracking. This happens because its low carbon content makes it a ductile material, compared to high carbon steel, which is stronger but more brittle and prone to cracking. Steel is an important engineering material. It has found applications in many areas such as vehicle parts, truck bed floors, automobile doors, domestic appliances, etc. It is capable of economically providing a very wide range of mechanical and other properties. Traditionally, mechanical components have been joined with fasteners, rivet joints, etc. In order to reduce manufacturing time, reduce weight and improve mechanical properties, welding processes are usually adopted. Welding is defined today as a process of coalescing materials such as metals or thermoplastics in order to seamlessly join them. In this process, melting of the base metal takes place due to the high heat generated during welding. A filler metal is used to join the metal by forming a pool of molten metal. When it cools, it forms the joint, which is often stronger than the base metal. Welding can be done in different types of environments, such as open air, under water, etc. During welding, precautions have to be taken to avoid harm to people, as high heat is produced, the intensity of the arc is high, and fumes are generated which may be hazardous to human beings. Welding is extensively used as a fabrication process for joining materials in a wide range of compositions, parts, shapes and sizes. It is an important joining process because of its high joint efficiency, simple set-up, flexibility and low fabrication costs. Welding is an efficient, dependable and economic process.
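The carbon-content classes discussed above can be sketched as a small classification function. The cut-offs for mild/low carbon steel (below roughly 0.25% C) and for the steel/cast iron boundary (about 2.0 wt% C) come from the text; the medium/high carbon boundary at 0.6% is a common convention assumed here, not a figure from this work.

```python
# Rough classification of plain carbon alloys by carbon content.
# The 0.25% and 2.0% limits are quoted in the text; the 0.6% medium/high
# boundary is an assumed conventional value.

def classify_steel(wt_pct_carbon):
    if wt_pct_carbon >= 2.0:
        return "cast iron"  # above ~2.0 wt% C it is no longer steel
    if wt_pct_carbon < 0.25:
        return "low carbon (mild) steel"
    if wt_pct_carbon < 0.6:
        return "medium carbon steel"
    return "high carbon steel"

print(classify_steel(0.1))   # low carbon (mild) steel
print(classify_steel(1.0))   # high carbon steel
```

A welder choosing consumables would use such a classification as a first filter, since weldability without special precautions is a property of the low carbon class.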
Welded joints find applications in critical components in which failure would be catastrophic. Hence, the use of inspection methods and adherence to acceptance standards is increasing. These acceptance standards represent the minimum weld quality, based upon tests carried out on welded specimens containing some discontinuities. Welding involves a wide range of variables such as time, temperature, electrode, power input and welding speed that influence the eventual properties of the weld metal. Concerning the welding of low carbon steels, it has been shown that the grain-coarsened zone and heat affected zone are very critical, since embrittlement is concentrated in these areas. It is also known that the final microstructure and mechanical properties of welded steel depend on parameters such as the percentage of carbon and the presence of other elements such as sulfur or phosphorus. Low carbon steels, which have less than 0.25% carbon, display good weldability because they can generally be welded without special precautions using most of the available welding processes. There are different types of welding processes, and they include: (a) Shielded metal arc welding (b) Flux cored arc welding (c) Submerged arc welding (d) Tungsten inert gas welding (e) Metal inert gas welding These are a few of the commonly used welding processes. These processes use electrodes, which can be divided into two types: consumable and non-consumable. 1.2 WELDING ELECTRODE An electrode is a coated metal wire, made of materials with a composition similar to that of the metal being welded. There are covered electrodes and also bare electrodes. Tungsten electrodes are not covered electrodes; they contain 2% cerium or thorium and have better electron emissivity, current-carrying capacity and resistance to contamination than pure tungsten electrodes.
Tungsten inert gas (TIG) electrodes are non-consumable, as they do not melt and become part of the weld, and they require the use of a separate welding rod. The metal inert gas (MIG) welding electrode is a continuously fed consumable wire. As a result, arc starting is easier and the arc is more stable. Electron emissivity refers to the ability of the electrode tip to emit electrons. A lower electron emissivity implies that a higher electrode tip temperature is required to emit electrons, and hence a greater risk of melting the tip. Covered electrodes are electrodes used in the shielded metal arc welding (SMAW) process; they are coated with flux. This type of electrode is consumable, meaning it becomes part of the weld, and its functions are as follows: Functions of the Electrode Covering The covering of the electrode contains various chemicals and even metal powder in order to perform one or more of the functions described below. (a) Protection: It provides a gaseous shield to protect the molten metal from air. For a cellulose-type electrode, the covering contains cellulose, (C6H10O5)x. A large volume of a gas mixture of H2, CO, H2O and CO2 is produced when the cellulose in the electrode covering is heated and decomposes. For a limestone (CaCO3) type electrode, on the other hand, CO2 gas and CaO slag form when the limestone decomposes. The limestone-type electrode is a low-hydrogen electrode because it produces a gaseous shield low in hydrogen. It is often used for welding metals that are susceptible to hydrogen cracking, such as high strength steels. (b) De-oxidation: It provides deoxidizers and fluxing agents to deoxidize and cleanse the weld metal. The solid slag formed also protects the already solidified but still hot weld metal from oxidation. (c) Arc Stabilization: It provides arc stabilizers to help maintain a stable arc. The arc is an ionized gas (a plasma) that conducts the electric current.
Arc stabilizers are compounds that decompose readily into ions in the arc, such as potassium oxalate and lithium carbonate. They increase the electrical conductivity of the arc and help the arc conduct the electric current more smoothly. (d) Metal Addition: It provides alloying elements and/or metal powder to the weld pool. The former helps control the composition of the weld metal while the latter helps increase the deposition rate. 1.3 AIM OF STUDY The aim of this work is to study the effect of welding consumables on the microstructural and mechanical properties of a welded SA 530 GR 70 steel. - To study the microstructure of the weldment using a metallurgical microscope (optical microscopy). - To use standard mechanical tests to study the effects of different welding conditions on the weld. - To study the effect of the different welding types on the microstructure and mechanical properties of the weld. - To study the different types of microstructure obtained using different welding variables. - To use non-destructive tests to study the effect of each consumable on the welded steel. The study is significant because it deepens knowledge of welding consumables and their effects on weldments and welding processes, especially for SA 530 GR 70 steel.
PRESCRIBED LEARNING OUTCOMES It is expected that students will: • evaluate the changing nature of law and its relation to social conditions of the times SUGGESTED INSTRUCTIONAL STRATEGIES • Present to the class an example from history such as: In 1750, a teenaged female servant in Halifax stole some silverware from her employer and received the death penalty. Ask students to compare this event with what would happen today to a teenager who stole some silverware. Ensure that they account for differences in the severity of punishment (referring to the Young Offenders Act, the UN Declaration of the Rights of the Child, and the Human Rights Act) and consider social conditions of the time. SUGGESTED ASSESSMENT STRATEGIES • When students compare historical and modern penalties for stealing silverware, look for evidence that they: - define the "crime" in terms appropriate for each time - accurately describe relevant due process in terms of the Young Offenders Act - identify social conditions and values that might account for the severity of punishment in the 1750s (e.g., crime-ridden streets) - compare key features of society then and now in terms of crime, punishment, poverty, prevailing views of good and evil, and public perceptions of adolescence Interactive site: Torture and the Truth: Angélique and the Burning of Montreal When Montreal caught fire in April 1734, suspicion fell on Marie Angélique, a Black slave accused of setting the fire to cover an escape with her White lover, a salt smuggler exiled from France. But if that was her motive, why did she stay to help her mistress save her possessions instead of fleeing? True, she confessed, but only after torture. Her punishment was to be hanged and then burned. But did she really start the fire? What does her story tell us about slavery, torture and fire in early Canada?
Sentencing Principles (Law Courts Education Society) The Proceedings of the Old Bailey, London 1674 to 1834 (For Schools) A fully searchable online edition of the largest body of texts detailing the lives of non-elite people ever published, containing accounts of over 100,000 criminal trials held at London's central criminal court.
With the kickoff of the first sustainocratic initiative in the city of Eindhoven (the Netherlands), the first step has been made to create a "purpose driven economy". What is the difference from what we have today? And why is it important for the rest of the world to follow the experiment in Eindhoven and, better still, start one of their own? Our current economies are not purpose driven but consequence driven. The human being is positioned as a compulsory consumer. The entire institutionalized society is focused on creating a mountain of wealth around this consumer that gives a sense of abundance at all times. The only way to access this abundance is through financial means. Some of these means are individually obtained through the production, logistics and sales infrastructure necessary to maintain this mountain of abundance. Others are paid out of the hierarchies funded through taxation of this consumer organization, or through speculation on the material resources contained in this "having" type of culture. And finally, there is debt. The consequences of such a consumer economy show a growing number of negative influences that need attention through investments: think of infrastructures, healthcare organizations, police, etc., attending to the attitude of greed and its effects on the human being, physically and mentally. This also shows exponential growth, which is equally reflected in the world economy through the costs of societies. At the same time we see our environment and human behavior deteriorate fast. The model of economies of growth, based purely on unlimited consumption and the consequences thereof, is obsolete, because we use our natural resources wrongly, destroy our environment, sicken ourselves and eventually eliminate our evolutionary chances. Fragmented complex society We know this now, including the scientific proof, but have difficulties in changing the course of society.
We created a very complex mesh of fragmented financial entities with dependencies and interests among each other, on which powers and influences are based. Each institution has a perceived right to exist and defends its own interests. There is not one single institution that takes full responsibility for sustainable human progress. The institutional mesh is based on fragmented self-interest and competition. Key is the understanding that no institutional specialization can take holistic responsibility for human safety, health or sustainable progress. It is the human being itself that needs to take this responsibility. What went wrong in the consequence driven consumer economy was that the human being delegated its wellness to fragmented institutionalized structures that grew into tremendously inflated organs, like an abscess or cancer would on a sick body. Instead of serving humankind they try to serve themselves. This fragmented type of human organization is institutionally sick, with the risk that the cancers develop further and destroy our evolutionary chances. Purpose driven economies The big difference from the old consumer economy is that it is not based on consumption and growth but on true value creation (purpose). It is not based on massive productivity and distribution but on local content. It is a circular type of economy where "purpose" is defined according to local human needs, achieved through local effort and using local resources in a circular way. To achieve a purpose driven economy an intense transformation is needed. But it can be done using the same institutional instruments of the old society. Each participant needs to cure its cancer-like development and abuse and become functional again within the scope of local-for-local requirements. It requires a different mentality and true transformative leadership in each institution involved.
Abundance is not presented through logistic channels from around the world; it is created by local cooperative efforts. In such local cooperation we see the four traditional human values come together: attitude, creativity, environment and wisdom. In the old economy those values were split into separate institutions that act not locally but globally, not in an integrated way but based on self-interest, greed and fragmented excellence. Now we bring this global expertise back to the local context. Using what we have learned The great advantage of today is that the old consequence driven consumer economy has left us with a huge amount of accumulated experience and material knowledge, thanks to the concentrated, specialized, fragmented functions of expertise that developed over time. This would never have occurred if this phase of humankind had not taken place. For a long time it was very constructive. Now it has become destructive. We hence do not criticize our past but use the best of its elements in our new progress. We can of course be critical of those old-time forces that try to prevent us from creating purpose driven progress. It is just a matter of time for that opposition to disappear. Eventually the purpose driven economies will develop where the old one has become obsolete and entered into crisis, providing room for renewal, not just in a physical, organizational sense but especially emotionally, spiritually and rationally, as people become aware. Complex transformative process It is a complex process that is typically developed locally and bottom up, with executive support to make it happen. The reason that it happened first in Eindhoven and not yet in another region is simply that this small Dutch town unites the essential ingredients to make it happen.
What are these ingredients? - Awareness at executive level - Open democracy of true equality - Level of education and experience - The right people at the right time These qualities produce the necessary flexibility to address the future with adaptive determination in a complex modern world. People take responsibility individually, convince their surroundings to support change and find ways to make it happen. The purpose is found in the essentials of human existence: food, health, security, wellness (housing, energy, etc.) and knowledge. When it becomes clear that the global consequence driven consumer economy is obsolete, speed is required to create a new sense of reality and responsibility, including a change in behavior. When the time of old abundance is over, new abundance needs to be created, preferably on time. Wellness is not a cost or a right but the result of responsibility and hard work (purpose) together. When circumstances change, stability is found in change too. In a sustainocracy the purpose driven economy is initiated together. We do this by making human wellness a purpose driven issue of the local population, with the support of the accumulated institutional excellence and enhancing potential. Purpose driven economic development based on sustainocratic complexities is needed to save humankind from the destructive expectations caused by the present-day consequence driven consumer economy. If not, we will face disaster. Yet if we assume responsibility, individually and institutionally, we also face a huge transformative challenge that will upset everything we have known so far. The choice between destruction and working together on a healthier perspective is easy for me. I have become self-aware and have dedicated more than a decade to arriving at these views and initiatives. It is a start, giving comfort that humankind has a choice indeed. A choice that depends simply on oneself and not on someone else.
But I realize that it is a difficult one, not only for those who have to make it, but also for me in reaching out to the world and making the choice known to all. If one does not know, then no choice will be made. My personal challenge is hence threefold: make it happen for myself, provide proof to my surroundings, and reach out to all of you with sufficient clarity that you have enough confidence in the course I have taken to let go of old securities and create new ones for yourselves and your direct surroundings.
His niche is a small but important one. Palermo-born Stanislao Cannizzaro is a special footnote to nineteenth-century physical science. He was, according to some, one of the scientists responsible for bringing chemistry and physics out of the realm of alchemy and into the modern world. Born in 1826 to a prosperous family (his father was a police magistrate), Stanislao attended medical school before choosing chemistry. At Pisa in 1845 and 1846, he assisted Raffaele Piria, who first prepared salicylic acid. Back in Palermo, he participated in the revolts of 1848, which spread to numerous European cities. In Sicily, this was a statement against the lack of tangible constitutional reforms promised by King Ferdinando II of the Two Sicilies. (In fact, the successive regime, the House of Savoy reigning over the Kingdom of Italy, with its shadowy democracy, offered little more, but the young Cannizzaro's activism reflected the spirit of the times.) Following the failure of these protests, Cannizzaro was forced to flee, and ended up in France for a couple of years. In 1851, he was made professor of chemistry and physics at Alessandria (in Piedmont), part of the Savoys' pre-unification kingdom. There he discovered that benzaldehyde treated with concentrated alkali produced equal amounts of benzyl alcohol and a salt of benzoic acid. This reaction was soon named the "Cannizzaro reaction." As professor of chemistry at Genoa in 1855, he showed that the atomic weights of elements in volatile compounds could be calculated by applying Avogadro's Principle. He elaborated upon this in his Sunto, published in 1859, but, while building upon the foundation of research undertaken by various scientists over the course of fifty years, Cannizzaro's original work transcended Avogadro's in some important respects.
Cannizzaro established values for atomic and molecular weights, incidentally distinguishing the two, designating the weight of hydrogen as the universal standard by which other elements should be measured. He demonstrated that vapor densities could be applied in determining atomic and molecular weights. This was remarkable work, and the periodic table of elements (formulated later) owes much to Cannizzaro's efforts. Moreover, the recognition of universal constants destroyed the prevailing concept that organic and inorganic chemistry functioned under differing principles. Today, it seems incredible that atoms (and elements) were not clearly distinguished from molecules (and compounds), but while others had postulated this theory long before Cannizzaro, he was one of the first to propose a new order based on a practical rule; this was soon embraced internationally. In those days, the distinctions between the sciences of chemistry and physics were not always clearly expressed; as a distinct field of academic study (though often coinciding with chemistry), physics really evolved in the last decades of the nineteenth century. By today's standards, Cannizzaro might be considered a "chemical physicist." In 1860, Cannizzaro actively supported the unification movement, and Garibaldi's revolt, in Sicily. He served briefly in the new Sicilian government, and helped to expand the chemistry department of the University of Palermo, where he remained until 1871. During the 1860s, he saw another series of bloody revolts --this time against the new government-- suppressed by northern Italian troops, but this failed to influence his strong loyalties to the unitary Italian state. Ever a political idealist, Cannizzaro accepted an appointment as senator in 1871 (when Savoyard forces occupied Papal Rome) and moved to Rome. He continued to work as an educator and politician. Brilliant as he was, Cannizzaro's political ideas were less inspired than his scientific ones. 
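Cannizzaro's procedure can be stated compactly: by Avogadro's principle, a vapor's density relative to hydrogen gives its molecular weight (hydrogen gas, H2, weighs 2 units per molecule), and the atomic weight of an element is then the smallest mass of that element found in a molar quantity of any of its volatile compounds. A minimal Python sketch follows; the compound data are illustrative round numbers, not historical measurements.

```python
# Sketch of Cannizzaro's method for atomic weights.
# Inputs per compound: vapor density relative to H2, and the mass
# fraction of the element of interest (from chemical analysis).

def molecular_weight(vapor_density_rel_h2):
    """Avogadro's principle: equal gas volumes hold equal numbers of
    molecules, so relative density gives relative molecular weight
    (H2 = 2 units)."""
    return 2.0 * vapor_density_rel_h2

def atomic_weight(compounds):
    """The smallest per-molecule mass of the element across its volatile
    compounds: Cannizzaro's estimate of the atomic weight."""
    return min(molecular_weight(d) * frac for d, frac in compounds)

# Carbon in three volatile compounds (illustrative values):
# methane CH4 (M=16, 12/16 C), ethane C2H6 (M=30, 24/30 C),
# carbon dioxide CO2 (M=44, 12/44 C)
carbon_compounds = [(8.0, 12 / 16), (15.0, 24 / 30), (22.0, 12 / 44)]
print(atomic_weight(carbon_compounds))  # ~12: one C atom per CH4 or CO2 molecule
```

The minimum works because every compound contains a whole number of atoms of the element, so the per-molecule masses are all integer multiples of the true atomic weight; the smallest observed value (here, 12 for carbon) is the atomic weight itself.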
Senators were not elected but appointed by the king on the advice of the ruling party, making the Italian Senate, composed mostly of aristocrats and prominent industrialists (Cannizzaro was an exception), little more than an organ that served to bolster the power of the established order. That a "new" Italy should be pieced together as a politically reactionary monarchy, considered by many to be a de facto police state, was a disappointment for Italians and foreigners alike. Senator Cannizzaro lived to witness Italian troops disgraced in defeat at Adwa (in Ethiopia) and the consequent fall of the government of his fellow Sicilian revolutionary, the cunning Crispi. Stanislao Cannizzaro died in Rome in 1910. About the Author: Palermo native Vincenzo Salerno has written biographies of several famous Sicilians, including Frederick II and Giuseppe di Lampedusa.
We know that transportation is the largest source of greenhouse gas emissions in the U.S., and that our car-dependent transportation system is the reason Americans drive so much more, and consequently produce far more greenhouse gases per capita, than residents of other wealthy countries. Scientists have shown that building more and wider roads stimulates more driving, longer trips, and more decentralized land use patterns, reinforcing car dependence. With this entire vicious cycle well documented, it's hard to imagine anyone arguing that a widened urban freeway would be good for the environment, but for state DOTs and their paid apologists, it's a frequent claim. They've created trumped-up projections claiming that traffic and pollution will be greater if we don't build freeways. These are false claims, and today we take a close look at how this plays out in one egregious, if typical, project.

For years at City Observatory, we've been following the Oregon Department of Transportation's (ODOT's) proposed I-5 Rose Quarter freeway widening project. The project would widen a mile-and-a-half-long stretch of Interstate 5 in downtown Portland, at a cost that has recently ballooned to $1.2 billion. A key part of the agency's argument is that this freeway-widening project—exactly unlike every other one that has ever been undertaken—will have essentially no impact on air pollution or greenhouse gases. They make the fanciful claim in their Environmental Assessment that not widening the freeway (the "no-build" option) will somehow produce more pollution than the eight- or 10-lane freeway their plans show they're really intending to build. In this article, we'll sketch out ODOT's claims and present a 10-point rebuttal to them.
A Long List of False Environmental Claims from Oregon DOT

Recently, a Portlander interested in the project contacted us, asking us to comment on ODOT's Environmental Assessment, which makes these claims:

- "Traffic operations would improve on I-5 in both the AM and PM time periods . . ."
- "Conditions for pedestrians and bicyclists would improve from increased travel route options, improved ramp terminal intersections, physical separation from motorized users, and reduced complexity of intersections."
- "Overall, the Regional Travel Demand Model results did not indicate trip increases on I-5 much beyond the Project limits (i.e., no induced demand). The 5 to 14 percent trip increase on I-5 within the Project Area is expected for an auxiliary lane project intended to improve flow between entrance ramps and exit ramps and is indicative of primarily local through-traffic."
- "While consideration of greenhouse gas emissions and the effects of climate change has not been a NEPA requirement for EAs and EISs since the Council on Environmental Quality (CEQ) withdrew its previous guidance on April 5, 2017, ODOT included an analysis of climate change in the Project EA due to the high level of agency and stakeholder interest in these issues."
- "As reported in Section 3.5 of the EA, the 2045 operational greenhouse gas emission total for the Build Alternative is projected to decrease by approximately 22 percent compared to the 2017 emission total due to federal, state, and local efforts to develop more stringent fuel economy standards and vehicle inspection and maintenance programs and the transition to cleaner low carbon fuels for motor vehicles. These trends are expected to continue over the life of the Build Alternative. The Build Alternative would contribute to this reduction due to higher speeds, less stop-and-go traffic, and less idling on I-5. Therefore, no mitigation is proposed."
Ten Reasons Not to Believe Oregon DOT's False Claims

There is so much that is false and misleading in these claims about traffic, air pollution, and greenhouse gases that it's difficult to know where to begin. We've written about all of these phony claims at City Observatory. Here are 10 reasons why everyone should ignore ODOT's environmental analysis of this project.

1. Traffic projections assume that a five-mile-long, 12-lane-wide freeway was built just north of this project in 2015. Hidden in the Rose Quarter's traffic forecasting is the assumption that a massive, multibillion-dollar Columbia River Crossing was built as part of the "no-build" scenario, and finished five years ago. In reality, the project is still in limbo in 2021. This assumption inflates traffic and congestion in the Rose Quarter under the "no-build," and makes the "build" look better than it is.

2. ODOT concealed plans that show it is widening the I-5 roadway enough to accommodate eight or 10 lanes of traffic. Two years after ODOT published the Environmental Assessment, we uncovered the true plans for a 160-foot roadway. Yet its traffic modeling assumes that the freeway is expanded only from four to six lanes. Modeling an eight- or 10-lane road would show much more traffic and pollution.
(Can also be used as a group exercise)

By: Paul Kivel
Adapted from Men's Work: How To Stop Violence That Tears Our Lives Apart (1998)

At the Oakland Men's Project we developed an exercise to help men see the cost of our actions on the women around us. The exercise is a simple set of statements. After each statement was read, we asked the men in the audience to stand up silently if it applied to them, then silently sit down again. Most men could stand up for most of the statements. It was a very powerful and emotional experience to look out at the men who were standing and know that they shared with us a past of painful and abusive training.

To use this exercise in a group setting:

- Tell the group that you are going to read a series of statements and that each male to whom a statement applies should stand up after that statement is read.
- Tell the group that every man and young man is being asked to participate. Those who are physically unable to stand may raise their hand or otherwise indicate that they are part of the group standing.
- Each participant should decide for themselves whether a statement applies to them. If a man or young man is unwilling to stand for a particular statement that applies to them, they may pass on that statement, but should notice any feelings they have about not standing.
- Explain that the exercise will be done in silence, to allow participants to notice the feelings that come up during the exercise and to make it safer for all participants.
- After a statement is read and people have stood for a few moments, ask participants to sit down, then read the next statement.

Stand up silently if you have ever…

- interrupted a woman by talking louder than she did.
- not valued a woman's opinion about something because she was a woman.
- looked at a woman's breasts while talking with her.
- interrupted what you were doing or saying to look at the body of a woman going past you.
- put down a woman you were with because she wasn't as pretty as other women.
- made a comment in public about a woman's body.
- discussed a woman's body with another man.
- been told by a woman that you are sexist.
- been told by a woman that she wanted more affection and less sex from you.
- lied to a woman with whom you were intimate about a sexual relationship with another woman.
- left care for birth control up to the woman with whom you had a sexual relationship.
- downplayed a woman's fear of male violence.
- called a woman a bitch, a slut, or a whore.
- whistled at, yelled at, or grabbed a woman in public, either by yourself or as part of a group of other men.
- used your voice or body to scare or intimidate a woman.
- threatened to hurt a woman, break something of hers, or hurt yourself if she didn't do what you wanted her to do.
- hit, slapped, shoved, or pushed a woman.
- had sex with a woman when you knew she didn't want to.

After the exercise, ask people to pair up to talk about the feelings and thoughts that came up for them while participating. Then reassemble the group and facilitate a discussion of the feelings, thoughts, reflections, and insights that people want to share.

This is not a stand-alone exercise. It should only be conducted in the context of a workshop or talk on sexism, power, violence, and safety that allows the group to process the feelings, thoughts, and issues which arise from participating in the exercise.

For further information on these issues and slightly different versions of this exercise, see Men's Work: How to Stop the Violence That Tears Our Lives Apart by Paul Kivel (Hazelden, 1992/98), Young Men's Work: Stopping Violence and Building Community by Allan Creighton and Paul Kivel (Hazelden, 1998), and Helping Teens Stop Violence: A Practical Guide for Parents, Counselors and Educators by Allan Creighton with Paul Kivel (Hunter House Publishers, 1992).
All articles may be quoted, adapted, or reprinted only for noncommercial purposes and with an attribution to Paul Kivel, www.paulkivel.com. Creative Commons Attribution – Noncommercial 3.0 United States License. To view a copy of this license, visit the Creative Commons website.
Spotted... on Mount Olympus or Asgard. xoxo Gossip Shmoop

Though the ancient Greeks thought of all satyrs as male, later artists started depicting satyresses (female satyrs), too. These ladies were highly welcome in the satyr community, to say the least.

Satyr plays were bawdy comedies that always starred a cast full of drunken satyrs. These funny plays were performed after each trilogy of tragedies during the City Dionysia, the annual Athenian theatre festival.

Ever wanted to know what it's like to be a satyr? In the world of Dungeons and Dragons role-playing games, you can try it out.

Although you usually see satyrs depicted as part goat these days, the ancient Greeks thought of them as part horse. The half-goat creatures were called the Panes, because they looked just like the god Pan.