Obesity in the ageing patient
Dr David H Haslam, GP and a physician specialising in obesity medicine at the Centre for Obesity research at Luton & Dunstable Hospital
Over the past decade, the prevalence of obesity in Western and Westernising countries has more than doubled.1 BMI data have been evaluated as part of an analysis of the Global Burden of Disease. Prevalence rates for overweight and obesity differ markedly between regions, with the Middle East, Central and Eastern Europe and North America having the highest rates. Obesity is now usually associated with poverty, even in developing countries. Data suggest that abdominal obesity in adults, with its associated enhanced morbidity, occurs particularly in those who had lower birth weights and early childhood stunting.1
A study that evaluated the contributions of socioeconomic, lifestyle and body weight factors to predicted risk of coronary heart disease found that overweight and obesity now dominate the standard risk factors of coronary heart disease in men and should be the focus of national policies for prevention.2
In the Look AHEAD study, overweight volunteers with type 2 diabetes were studied and the long-term effects of an intensive lifestyle intervention programme were reviewed. This programme was designed to achieve and maintain weight loss through decreased caloric intake and increased physical activity. The magnitude of weight loss at one year was strongly (p<0.0001) associated with improvements in glycaemia, blood pressure, triglycerides and HDL cholesterol, but not with LDL cholesterol.3
At year four, participants who maintained the loss, compared with those who did not, attended more treatment sessions and reported more favorable physical activity and food intake. These results provide critical evidence that a comprehensive lifestyle intervention can induce clinically significant weight loss in overweight/obese participants with type 2 diabetes and maintain this loss in more than 45% of patients at four years.4
Management of obesity is mostly by lifestyle measures such as diet and exercise programmes. However, studies have shown that orlistat produces greater weight loss than diet alone.5 During four years of treatment, orlistat plus lifestyle intervention significantly decreased the progression to type 2 diabetes compared with placebo plus lifestyle intervention. Cumulative incidence of diabetes was 6.2% with orlistat and 9% with placebo, corresponding to a 37.3% decrease in the relative risk of developing diabetes with orlistat.
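The relative-risk figure can be checked against the cumulative incidences quoted above. A crude calculation from the two percentages gives roughly 31%; the published 37.3% figure was, as far as I can tell, derived from a time-to-event (hazard-ratio) analysis that accounts for differing follow-up, which is why the numbers differ. This is illustrative arithmetic only.

```python
# Checking the quoted orlistat figures (illustrative arithmetic only).
orlistat_incidence = 0.062   # cumulative incidence of diabetes with orlistat
placebo_incidence = 0.090    # cumulative incidence of diabetes with placebo

rrr_crude = 1 - orlistat_incidence / placebo_incidence
print(f"crude relative risk reduction: {rrr_crude:.1%}")  # 31.1%

# The published 37.3% reduction came from a time-to-event (hazard-ratio)
# analysis rather than this crude incidence ratio, hence the difference.
```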
Sarcopenia, the age-associated loss of skeletal muscle mass, is a major concern in ageing populations and has been associated with metabolic impairment, CVD risk factors, physical disability and mortality. Sarcopenia often coexists with obesity. To fully understand the effect of obesity on mortality in the elderly it is important to take muscle mass into account. The evidence suggests that sarcopenia with obesity may be associated with higher levels of metabolic disorders and an increased risk of mortality than obesity or sarcopenia alone. Efforts to promote healthy ageing should focus on preventing obesity and maintaining or increasing muscle mass.6
Other conditions associated with obesity include coronary artery disease, sleep apnoea, congestive cardiac failure, rheumatoid arthritis, osteoarthritis and hypertension.
1. James PT, et al. Obes Res 2001; 9 Suppl 4: 228S–233S
2. Nanchahal K, et al. Int J Obes 2005; 29(3): 317–23
5. Sjöström L, et al. Lancet 1998; 352: 167–72
6. Wannamethee SG, et al. Proc Nutr Soc 2015; 27: 1–8
Supportive and Palliative Care in Primary Care
Dr Ken O’Neill, General Practitioner, Midlock Medical Centre, Glasgow
The publication Living and Dying Well: a national action plan for palliative and end of life care in Scotland was the outcome of an extensive process of collaboration across Scotland. It was published in 2008 and had two key principles:
- A person-centred approach to care and care planning
- Importance of communication, collaboration and continuity of care.
The report also aimed to ensure early identification of palliative care needs, holistic assessment with the patient and carer, as well as co-ordination and delivery of care.1 It stated that diagnosing dying is seldom easy, but increasing clinical expertise is available in this area and there is increasing awareness that recognition and agreement by the healthcare team that a patient is entering the dying phase allows the planning and implementation of appropriate care.
The report recommended the use of recognised tools to facilitate the assessment and review of those with palliative and end of life care needs. This included the use of an integrated care pathway such as the Liverpool Care Pathway (LCP) for the Dying Patient in the last days of life.2
Developed from a model of care successfully used in hospices, the LCP is a generic approach to dying. It was intended to ensure that uniformly good care was given to everyone thought to be dying within hours or within two-three days whether in hospital, a nursing home or in their own home. Because of substantial criticism of the LCP in the media and elsewhere, Norman Lamb MP, Minister of State for Care Support, asked Baroness Julia Neuberger to chair a panel to review the use and experience of the LCP in England, to be kept independent of Government and the NHS. The review made 44 recommendations, including the phasing out of the LCP to be replaced by individual care plans for the dying.2
There is still a lot of work to be done to improve end of life care in the UK. The recent report Dying Without Dignity from the Parliamentary and Health Service Ombudsman said that there is potential to improve the experience of care in the last year and months of life for approximately 355,000 people.3
It also stated that there is a need for the NHS to get better at recognising that people are dying, making sure that symptoms are properly controlled, communicating with people, their families and each other, providing out of hours services and making sure that service delivery and organisation help people to have a good death.3
New national guidelines on palliative care were published last year for the NHS in Scotland, which included a ‘key’ focus for healthcare professionals to consider the concerns and expectations of the patient, their family and informal carers. Open communication between the patient, family and health professionals in the days and weeks prior to someone dying, sharing feelings and fears, and building on trust and mutual confidence are of ‘paramount importance’, the document states.4
One way to achieve this could be through anticipatory care planning (ACP) as this can improve the quality of care and reduce the risk of medication harm. ACP encourages people to adopt a ‘thinking ahead’ approach and to have greater control and choice by planning for what their preferred support and care intervention would be in the event of a future flare-up or deterioration in their condition or a carer crisis.
Dementia: where are we up to?
Professor John O’Brien, Professor of Old Age Psychiatry, University of Cambridge
Dementia is defined as global cognitive decline leading to functional impairment. There are many causes, most commonly Alzheimer’s disease, vascular dementia, Lewy body dementia and fronto-temporal dementia.
It represents a huge economic burden to the NHS (>£20 billion per year), and each dementia case costs five times more than each cancer case in terms of NHS and social care costs. There are currently estimated to be over 820,000 cases in the UK alone, with numbers set to double in the next 30 years.
First described by Alois Alzheimer in 1907, Alzheimer’s disease is the most common cause of dementia. It is a degenerative disease with gradual onset. Usually memory is affected first followed by loss of other cognitive functions and variable behavioural and psychiatric features (depression, loss of interest, delusions and agitation). At post-mortem it is associated with profound brain shrinkage and focal deposits of amyloid and tau protein (plaques and tangles).
The NINCDS/ADRDA criteria for Alzheimer's disease required dementia (two or more cognitive deficits including memory impairment), impairment in social/occupational functioning, progressive deterioration, no disturbance of consciousness, onset between 40 and 90 years of age, and the absence of another brain disorder that could account for the symptoms.
These criteria were developed over 30 years ago and require the presence of dementia and significant impairment in social or occupational functioning, so do not allow very early diagnosis.
It is important to diagnose early in order to exclude remediable causes, provide certainty, allow understanding, provide information about the illness and prognosis, and give appropriate disease-specific management. Early diagnosis also allows planning for the future, early access to services and benefits, and attention to medico-legal issues, as well as wider benefits such as community resources, service planning and access to disease-modifying drugs.
Criteria proposed for diagnosis of very early Alzheimer’s disease suggest requirements for progressive change in memory function reported by patients or an informant over more than six months, objective evidence of significantly impaired episodic memory and at least one of the following biomarkers to be present:
- Medial temporal lobe atrophy on MRI
- Bilateral temporal/parietal hypometabolism on PET/SPECT
- Amyloid positive PET imaging
- Abnormal CSF biomarkers (reduced A beta 42, raised tau/p-tau).
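The proposed criteria amount to a simple conjunction that can be written as a checklist. The sketch below is a teaching illustration of that logic only, with invented function and biomarker names, and is in no way a diagnostic tool.

```python
# Illustrative sketch of the proposed early-AD research criteria above.
# Function and biomarker names are invented for this example.

SUPPORTIVE_BIOMARKERS = {
    "mtl_atrophy_on_mri",
    "temporoparietal_hypometabolism_pet_spect",
    "amyloid_positive_pet",
    "abnormal_csf_abeta42_tau",
}

def meets_early_ad_criteria(memory_decline_over_6_months: bool,
                            objective_episodic_memory_impairment: bool,
                            biomarkers: set) -> bool:
    """Progressive memory change, objective impairment, and >= 1 biomarker."""
    return (memory_decline_over_6_months
            and objective_episodic_memory_impairment
            and len(biomarkers & SUPPORTIVE_BIOMARKERS) >= 1)

print(meets_early_ad_criteria(True, True, {"amyloid_positive_pet"}))  # True
print(meets_early_ad_criteria(True, True, set()))                     # False
```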
Trials have been conducted to see whether cholinesterase inhibitors (ChEIs), currently approved for symptomatic treatment of mild to moderate Alzheimer's disease, can prevent progression from mild cognitive impairment to Alzheimer's disease. Unfortunately these studies have uniformly been negative.1
Another study found benefit of continuing ChEIs in patients with moderate or severe Alzheimer’s disease. Continued treatment with donepezil, as opposed to switching to placebo, was associated with cognitive benefits that exceeded the minimum clinically important difference and with significant functional benefits over the course of 12 months.2
Prevention, though, is clearly better than cure, but it is likely that it needs to start in mid life. Studies looking at prevention of dementia have found that the following may possibly be helpful, though more evidence is needed: physical exercise, mental stimulation, red wine, curry, a Mediterranean diet and modifying vascular risk.
1. Raschetti R et al. PLoS Med 2007; 4: e338
2. Howard R, et al. N Engl J Med 2012; 366: 893–903
Managing diabetes in the elderly patient
Professor Sarah Wild, Professor of Epidemiology and Honorary Consultant in Public Health, NHS Lothian
Older people with diabetes represent a heterogeneous group of people with recent onset and a longer-term history of diabetes. Current guidelines do not distinguish between sub-groups, partly because of limited availability of evidence from randomised controlled trials.
In the Scottish Diabetes Survey 2014, there were 27 new cases of type 1 diabetes in people over 69 years of age, representing 3.1% of the 883 new cases in total. This was in contrast to 4,217 new cases of type 2 diabetes in people aged over 69 years (26% of the 16,379 new cases in total). There was also a greater prevalence of type 2 diabetes in men than in women.1
A systematic review found that type 2 diabetes diagnosed at 60–69 years of age was associated with 40% higher mortality compared with an age-matched population without diabetes. For diabetes diagnosed after 69 years of age, however, the mortality increase (13% among men and 19% among women) was no longer statistically significant.2
In addition to vascular disease, diabetes is associated with substantially increased risks of premature death from several cancers, infectious diseases, external causes, intentional self-harm and degenerative disorders; these associations are independent of several major risk factors.3
Treatment of hyperglycaemia and other cardiovascular risk factors similar to that offered to younger populations may be warranted for some older people with diabetes. The aim of treatment is to manage symptoms and reduce the risk of complications while avoiding hypoglycaemia and hypotension. Drug interactions due to polypharmacy are even more important in the frail elderly. As there is limited trial data in older age groups, treatment and targets should be individualised, using similar treatments to those in younger populations.4
The appropriate HbA1c target in fit older patients who have a life expectancy of over 10 years should be similar to that developed for younger adults (<7.0%).4 A target HbA1c of 7.0–7.9% (median of 7.5%) may be safer than a lower target for patients with long-standing type 2 diabetes who are at high risk of cardiovascular disease. The goal should be somewhat higher (≤8.0%) in frail older adults with medical and functional comorbidities and in those whose life expectancy is less than 10 years.
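The target bands quoted above reduce to a small decision rule. The function below is an illustrative teaching sketch of that rule only; the function name and inputs are invented, and it is not clinical guidance.

```python
def hba1c_target(life_expectancy_years: float,
                 frail_with_comorbidities: bool,
                 long_standing_t2dm_high_cv_risk: bool) -> str:
    """Return the HbA1c target band quoted in the text for each group.

    Illustrative sketch only -- not clinical guidance.
    """
    # Frailty or limited life expectancy: accept a somewhat higher target.
    if frail_with_comorbidities or life_expectancy_years < 10:
        return "<=8.0%"
    # Long-standing type 2 diabetes at high cardiovascular risk.
    if long_standing_t2dm_high_cv_risk:
        return "7.0-7.9% (median 7.5%)"
    # Fit older patients with life expectancy over 10 years.
    return "<7.0%"

print(hba1c_target(15, False, False))  # <7.0%
print(hba1c_target(6, False, False))   # <=8.0%
```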
For cardiovascular risk management, smoking cessation should be promoted along with physical activity. Treatment for dyslipidaemia and hypertension should be initiated as there is evidence of benefit for people over 80 years of age. Aspirin should also be prescribed for secondary prevention. Goals for risk factor management (hypertension, hyperlipidemia) should be adjusted based upon older patients’ life expectancy, comorbidities, cognitive status and personal preferences.4
A study looked at diabetes self-management programmes for older adults and found that they produce small reductions in HbA1c, lipids and blood pressure. These findings may be of greater clinical relevance when the programmes are offered in conjunction with other therapies.5 More research is necessary to determine the mechanisms through which diabetes self-management programmes improve clinical outcomes, whether through education, patient engagement, goal-setting or treatment/medication adherence.
To conclude, the prevalence of diabetes is increasing. This is because of increasing survival. Diabetes is still associated with increased risk of cardiovascular disease and cancer as well as physical and mental comorbidity. Glycaemic control is required to avoid effects on cognition. Data on treatment and targets is limited so treatment should be individualised.
2. Barnett KN, et al. Age Ageing 2006; 35(5): 463–68
5. Sherifali D, et al. Diabet Med 2015 doi: 10.1111/dme.12780
What are Dental X-Rays?
Dental x-rays are used to help in diagnosing and treating many intra-oral conditions. You may need x-rays when coming in for an emergency or before oral surgery. Dental x-rays are taken at most of your appointments, as they help us to check for changes beneath the surface that cannot be seen with a normal examination. X-rays use a small amount of radiation that is deemed safe for most patients.
Why are Dental X-Rays necessary?
Dental x-rays are necessary for a variety of reasons and they are always beneficial in helping improve your oral health. They are often taken during emergency appointments and before having oral surgery of any kind. You will have a full set of x-rays taken every few years as part of your preventative care. Because these x-rays use a minimal amount of radiation, you can rest assured that they are safe.
What can Dental X-Rays help detect?
Your dental x-rays can detect a number of intra-oral problems. We closely examine your x-rays to check the health of your teeth, gums and underlying bone. Some of the different things that x-rays can detect include:
- Bone loss
- Gum disease
- Impacted wisdom teeth
- Deep cracks and fractures
What happens when your Dental X-Rays are taken?
You will come into our office and be greeted by one of our friendly staff members. We will drape a lead apron over your body and begin by placing a small x-ray sensor inside your mouth. The x-ray machine is aimed directly at this sensor and an image is taken. Within seconds, the image is transferred to our computer, where we will examine it and check for complications. You will benefit greatly from having dental x-rays taken at most of your appointments. X-rays are safe, and they are more beneficial to your oral health than any risk that may be involved in taking them.
If you need dental x-rays taken and would like to learn more about the procedure, call us today and we will work to get you in for an appointment.
Last week, I spoke about how President Barack Obama justified his prisoner swap of five senior Taliban leaders for U.S. Army Sgt. Bowe Bergdahl by saying former military leaders and presidents, including George Washington, have engaged in prisoner of war exchange, too.
Obama’s exact words were: “This is what happens at the end of wars. That was true for George Washington; that was true for Abraham Lincoln; that was true for FDR; that’s been true of every combat situation — that at some point, you make sure that you try to get your folks back. And that’s the right thing to do.”
From that statement alone, I revealed how Obama made grievous errors in judgment by concluding that 1) the war is over, and 2) he was engaging in a prisoner exchange like George Washington — to take just a single example from his list of stellar leaders.
What Obama didn’t tell you regarding Washington and prisoner exchange during the Revolutionary War is that both countries — England and the U.S. — exchanged prisoners of war because both had “few facilities to accommodate large numbers of prisoners,” according to the Mount Vernon Ladies’ Association, whose mission it is “to preserve, restore, and manage the estate of George Washington to the highest standards and to educate visitors and people throughout the world about the life and legacies of George Washington.”
As far as buying Americans back from captivity at the price of enemy combatants, Obama needs to follow the example of Gen. Washington, who “made sure that no states holding military prisoners should trade a British soldier for an American citizen. Washington believed that this would have legitimized the British capture of more citizens, most of whom were largely defenseless.”
Though no one is minimizing the understandable elation of Bergdahl’s family over his release, George Washington would not have traded for him because he didn’t believe in trading prisoners of war until after the war was in fact over, treaties were signed, and hostilities ceased, lest he risk the capture of further American people for ransom.
Here are my two additional grievances with Obama’s prisoner of war exchange:
3) As the commander in chief, George Washington wouldn’t have completely undermined the very heart and soul of the military as Obama did with his prisoner exchange, especially in light of how it is a cardinal sin in military culture to abandon one’s post and platoon during war.
A little over a week ago, The Washington Post reported, “Ralph Peters, a retired lieutenant colonel and intelligence officer, wrote in National Review that a ‘fundamental culture clash’ exists between the president’s team and those in the armed forces, as reflected by (national security adviser Susan) Rice’s remarks on Bergdahl’s honor.”
“Both President Obama and Ms. Rice seem to think that the crime of desertion in wartime is kind of like skipping class,” Peters wrote. “They have no idea of how great a sin desertion in the face of the enemy is to those in our military. The only worse sin is to side actively with the enemy and kill your brothers in arms. This is not sleeping in on Monday morning and ducking Gender Studies 101.”
The views expressed in this opinion article are solely those of their author and are not necessarily either shared or endorsed by WesternJournalism.com.
This post originally appeared on Western Journalism – Informing And Equipping Americans Who Love Freedom
Mar. 24, 2008 Black carbon, a form of particulate air pollution most often produced from biomass burning, cooking with solid fuels and diesel exhaust, has a warming effect in the atmosphere three to four times greater than prevailing estimates, according to scientists in an upcoming review article in the journal Nature Geoscience.
Scripps Institution of Oceanography at UC San Diego atmospheric scientist V. Ramanathan and University of Iowa chemical engineer Greg Carmichael, said that soot and other forms of black carbon could have as much as 60 percent of the current global warming effect of carbon dioxide, more than that of any greenhouse gas besides CO2. The researchers also noted, however, that mitigation would have immediate societal benefits in addition to the long term effect of reducing greenhouse gas emissions.
"Observationally based studies such as ours are converging on the same large magnitude of black carbon heating as modeling studies from Stanford, Caltech and NASA," said Ramanathan. "We now have to examine if black carbon is also having a large role in the retreat of arctic sea ice and Himalayan glaciers as suggested by recent studies."
In the paper, Ramanathan and Carmichael integrated observed data from satellites, aircraft and surface instruments about the warming effect of black carbon and found that its forcing, or warming effect in the atmosphere, is about 0.9 watts per meter squared. That compares to estimates of between 0.2 watts per meter squared and 0.4 watts per meter squared that were agreed upon as a consensus estimate in a report released last year by the Intergovernmental Panel on Climate Change (IPCC), a U.N.-sponsored agency that periodically synthesizes the body of climate change research.
Ramanathan and Carmichael said the conservative estimates are based on widely used computer model simulations that do not take into account the amplification of black carbon's warming effect when mixed with other aerosols such as sulfates. The models also do not adequately represent the full range of altitudes at which the warming effect occurs. The most recent observations, in contrast, have found significant black carbon warming effects at altitudes in the range of 2 kilometers (6,500 feet), levels at which black carbon particles absorb not only sunlight but also solar energy reflected by clouds at lower altitudes.
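The forcing figures quoted above can be compared directly. The CO2 forcing value used below (about 1.66 W/m², the IPCC AR4 estimate) is an assumption introduced for this comparison and does not appear in the article.

```python
# Comparing the black carbon forcing estimates quoted in the text.
observed_bc = 0.9             # W/m^2, observationally based estimate
consensus_range = (0.2, 0.4)  # W/m^2, IPCC consensus range

midpoint = sum(consensus_range) / 2
print(round(observed_bc / midpoint, 2))     # 3.0 -- "three to four times" consensus

co2_forcing = 1.66            # W/m^2 (IPCC AR4 value; an assumption here)
print(round(observed_bc / co2_forcing, 2))  # 0.54 -- consistent with "as much
                                            # as 60% of CO2's warming effect"
```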
Between 25 and 35 percent of black carbon in the global atmosphere comes from China and India, emitted from the burning of wood and cow dung in household cooking and through the use of coal to heat homes. Countries in Europe and elsewhere that rely heavily on diesel fuel for transportation also contribute large amounts.
"Per capita emissions of black carbon from the United States and some European countries are still comparable to those from south Asia and east Asia," Ramanathan said.
In south Asia, pollution often forms a prevalent brownish haze that has been termed the "atmospheric brown cloud." Ramanathan's previous research has indicated that the warming effects of this smog appear to be accelerating the melt of Himalayan glaciers that provide billions of people throughout Asia with drinking water. In addition, the inhalation of smoke during indoor cooking has been linked to the deaths of an estimated 400,000 women and children in south and east Asia.
Elimination of black carbon, a contributor to global warming and a public health hazard, offers a nearly instant return on investment, the researchers said. Black carbon particles only remain airborne for weeks at most compared to carbon dioxide, which remains in the atmosphere for more than a century. In addition, technology that could substantially reduce black carbon emissions already exists in the form of commercially available products.
Ramanathan said that an observation program for which he is currently seeking corporate sponsorship could dramatically illustrate the benefits. Known as Project Surya, the proposed venture would provide some 20,000 rural Indian households with smoke-free cookers equipped to transmit data. At the same time, a team of researchers led by Ramanathan would observe air pollution levels in the region to measure the effect of the cookers.
Carmichael said he hopes that the paper's presentation of the immediacy of the benefits will make it easier to generate political and regulatory momentum toward reduction of black carbon emissions.
"It offers a chance to get better traction for implementing strategies for reducing black carbon," he said.
The article, "Global and regional climate changes due to black carbon," will be posted in the online version of Nature Geoscience on Sunday, March 23.
The National Science Foundation, the National Oceanic and Atmospheric Administration and the National Aeronautics and Space Administration funded the review.
The history of economic growth is tied to the amount of new net capital, free of debt, available for investment. Countries use surplus from savings to increase the production capacity of their economies. The installation of net capital as a productive investment is therefore the key to achieving long-term gains. Economic growth is generated only by profitably investing available capital and any proceeds from increased debt.
The significance of the net capital is its availability for new investments after depreciated assets have been replaced. A country's economic growth can come only after highly profitable investments in innovation, but that must be done without the burden of debt that is greater than the country's earning capacity.
Historically, the expansion of the US industrial economy after 1840 was based on exceptional profits obtained from colonial powers (predominantly England), which could then be re-invested in projects with the capacity to generate extraordinary gains for investors. Unfortunately, new financing since 1990 has become increasingly funded through debt, supplied mostly by China and Japan through their purchases of US Treasury bonds.
Net savings are the engine of economic growth, but only once a net surplus is well reinvested. The last hundred years of ample capital availability can therefore be viewed as attractive, high-yield investments made by colonising countries. For instance, the growth of the impoverished USA, after gaining independence, into its current prosperity can be directly related to British investment in US goods production. The USA incurred debt, but that debt was profitable and could therefore be repaid.
After the end of WWII, the growth of the USA was further stimulated by an enormous accumulation of capital that came under the control of US investors.
To raise capital, one must be able to deploy savings and then repay debt. The relationship between savings and economic growth continues, although the methods have changed. Since 1945, global savings were extracted through US financial markets, but this is now shifting. The global model used to be based on the deployment of savings on a country-by-country basis; the new post-2000 model is based on sharing an international value chain through international trade, even though this approach is now challenged by a new US administration that appears to be favoring a mercantilist model of the 1930s.
The mercantilist model used national savings to build up national economies. The new global trading model is based on the deployment of international markets and relies on inter-related economies without regard for national borders. The paths of migration from a US dollar-based mercantilist economy to an international global currency are numerous but highly dependent on a completely different political environment, in which international leadership may be shifting to China.
The primary obstacle to sustaining continued US leadership is the decline in the US savings rate. Without savings, and with rising debt levels, the US position as the leading financial source of investment will be compromised.
At present the diminished USA savings are instead channelled into rising personal wealth. Only limited funds are used to support an expansion of economic growth. The current concentration on corporate mergers and acquisitions cannot be seen as increasing USA wealth-creating capacity.
Corporate profits as well as debt have, since 2000, been increasingly invested in higher stock market valuations, with stock market multiples now reaching historical highs in excess of 200% of GDP.
The net result of increased stock market valuations has been a shift towards greater income inequality in how wealth is distributed among the 200.8 million adults in the USA, who constitute 5.4% of the global adult population. Total USA wealth of $40.4 trillion accounts for 36.2% of global wealth. Although the USA has experienced declining savings, it has been able to accumulate total wealth on a scale that has never been matched by any other country.
However, the USA also stands out for its concentration of wealth in the hands of high-income adults (the 0.7% of adults with incomes over $1 million per year), who hold over 46% of global wealth, mostly in financial assets. These statistics show that, despite a decline in overall national savings, the possession of USA wealth remains marked by a disparity that the current political environment cannot resolve.
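The quoted shares imply global totals that can be back-calculated as a consistency check. This is simple arithmetic on the figures given in the text, not additional data.

```python
# Back-calculating implied global totals from the quoted US figures.
us_wealth = 40.4e12      # USD, total USA wealth quoted above
us_wealth_share = 0.362  # USA share of global wealth

implied_global_wealth = us_wealth / us_wealth_share
print(f"implied global wealth: ${implied_global_wealth / 1e12:.1f} trillion")  # $111.6 trillion

us_adults = 200.8e6      # USA adult population quoted above
us_adult_share = 0.054   # USA share of global adults

implied_global_adults = us_adults / us_adult_share
print(f"implied global adults: {implied_global_adults / 1e9:.2f} billion")     # 3.72 billion
```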
The exceptional wealth of the USA can be explained as an accumulation of over a century of corporate profits and debt which were then channeled into capital investments that contributed to the growth of the economy.
The USA is now standing on the threshold of a transition into a post-industrial society, which requires large injections of new funds into long-term innovation. Unless that can be accomplished through the deployment of new net savings, the transition into the next stage of the USA's historical progress will not occur.
Blockchain Forks: Explained
Blockchain is a decentralised, distributed system responsible for keeping a record of all transactions and maintaining transparency among its users. However, just like every other piece of technology, blockchain requires updates or modifications from time to time. These modifications are made through a process called forking.
In this article, we will look at what blockchain forks are, how forking works, and the difference between soft forks and hard forks.
What are Forks in Blockchain?
Blockchain forks are updates to a running blockchain that modify how it works and improve it for its users, as was done in the Cardano hard fork.
To further explain forking, let us take the example of computer code. Every codebase requires regular updates for better functioning, bug removal and so on. Similarly, a blockchain requires error fixes, security modifications and the addition of new features. These updates are achieved via forks.
However, unlike centralised systems, where everyone becomes part of the update, in a decentralised system people are free to choose whether to use the forked chain or continue with the original one.
Types of Forks
Crypto Forks are broadly categorized into two types:
1. Soft Fork
2. Hard Fork
Soft Crypto Fork
A soft fork is an update that does not change the protocol fundamentally and is typically introduced for minor bug fixes. A soft fork is backward compatible, which means that even after new changes are introduced, the system still works with older versions of the blockchain.
Making a backward-compatible update has its advantages, but it comes with significant limitations as well. To remain backward compatible, the updates cannot be very large and cannot alter the fundamentals of the protocol.
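The backward-compatibility distinction can be sketched in a few lines of Python. This is a toy model, not real protocol code: block validity is reduced to a single made-up maximum-size rule, and all the numbers are illustrative, chosen only to make the asymmetry between the two fork types visible.

```python
# Toy validity rule: a block is valid if it does not exceed a size limit.
# All numbers here are illustrative, not real protocol parameters.

OLD_MAX_SIZE = 2_000_000  # limit enforced by non-upgraded ("old") nodes


def old_node_accepts(block_size: int) -> bool:
    """A node still running the original rules."""
    return block_size <= OLD_MAX_SIZE


# A soft fork TIGHTENS the rules (say, a 1 MB cap), so blocks produced
# under the new rules are still valid to old nodes: backward compatible.
soft_fork_block = 1_000_000

# A hard fork LOOSENS or changes the rules (say, an 8 MB cap), so blocks
# produced under the new rules are rejected by old nodes: the chain splits.
hard_fork_block = 8_000_000

print(old_node_accepts(soft_fork_block))  # True
print(old_node_accepts(hard_fork_block))  # False
```

The asymmetry is the whole story: rule-tightening updates can roll out gradually while old nodes keep validating, whereas rule-loosening updates force every node to pick a side.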
Hard Crypto Fork
Hard forks are not backward compatible, which means blockchain users must update as soon as a hard fork occurs. Hard forks are meant to alter the protocol and introduce new changes to the technology.
Let’s take an example for a better understanding.
In Bitcoin, for example, any change meant to improve efficiency or fix major bugs would be introduced through a hard fork. The forked chain splits off at a given block and from then on acts as a chain separate from the original Bitcoin protocol.
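A hypothetical chain split can also be shown as data: two ledgers that share every block up to the fork point and then diverge. The block labels and the 2-coin balance below are illustrative, not real chain data.

```python
from copy import deepcopy

# Shared history up to the fork point (labels are illustrative).
shared_history = ["genesis", "block_1", "block_2"]

# After the fork, each rule set extends the same history differently.
original_chain = shared_history + ["block_3A"]
forked_chain = shared_history + ["block_3B"]

# Both chains share the same past but are now separate ledgers.
assert original_chain[:3] == forked_chain[:3]
assert original_chain[-1] != forked_chain[-1]

# A balance recorded before the split is copied onto both chains, which
# is why someone holding 2 coins ends up with 2 coins on each side.
balances_before_fork = {"alice": 2}
ledger_a = deepcopy(balances_before_fork)  # original chain's ledger
ledger_b = deepcopy(balances_before_fork)  # forked chain's ledger

print(ledger_a["alice"], ledger_b["alice"])  # 2 2
```

This duplicated-balance behaviour is exactly what the first FAQ below describes for Bitcoin holders after a hard fork.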
Listed below are some FAQs related to blockchain forks:
If we accept a hard fork and update, it will change the protocol. Now, what will happen to our coins?
Ans- Assume that you held 2 Bitcoins before the hard fork was introduced. When you update the Bitcoin protocol, you will still have the 2 Bitcoins, plus you will get the same amount of the other coin on the separated chain.
If everyone ends up with double the number of coins they own, what will happen to the price?
Ans- We all know that the crypto market depends on supply and demand; thus, when two separate chains form after a hard fork, each coin's price depends on whether the market believes both coins will survive.
When the blocks of a cryptocurrency split, what happens to their original name?
Ans- Both chains originate from the same user base and share the same past, but they are no longer the same; they act as separate chains and thus have different names. For example, the older version of the Ethereum blockchain is known as Ethereum Classic.
To sum it all up, forks exist to deliver updates and bug fixes to a blockchain, either as soft forks or as hard forks. Forking is an important innovation and, despite mixed reactions to it, its future looks promising, with some bumps ahead.
(Also read: Hyperledger Consensus Mechanism)
Image Credits: Coinomi; bitcoin.tax; ByBit. | <urn:uuid:fdf89cd1-28a4-45ab-bb47-cd17a0bcbb11> | CC-MAIN-2023-40 | https://zelta.io/blog/blockchain-forks-explained/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510238.65/warc/CC-MAIN-20230927003313-20230927033313-00666.warc.gz | en | 0.913774 | 773 | 3.09375 | 3 |
What is fluoride? Many say it’s a deadly chemical while dentists and oral health professionals advocate for its use. The truth is there’s a tremendous amount of misinformation on fluoride, its dangers and its benefits. Below is a quick run-through of the basics you need to know about using fluoride to improve your oral health.
Basics Of Fluoride
For starters, fluoride is an entirely natural compound. The fluoride ion comes from the element fluorine, which is a naturally occurring element that can be found in the earth’s crust. As a result, our freshwater reservoirs, such as lakes and rivers, contain small amounts of fluoride naturally — anywhere from 0.1 to 4 parts per million. This is simply a result of water flowing on and around rocks that contain the element fluorine in them.
The main goal of infusing toothpaste and other dental care products with fluoride is to protect against tooth decay and cavities. According to the American Dental Association, fluoride is the only chemical proven to reduce tooth decay and help to prevent cavities. As such, it’s almost universally recommended by dentists and health professionals as part of a healthy oral care routine.
How Fluoride Works
Fluoride will help strengthen your teeth in one of two ways: ingestion and topical application. Ingestion is rather simple and can significantly improve tooth health during the developmental stages. As fluoride is ingested through a medium, such as drinking water, it is deposited throughout the entirety of the tooth, not just on the surface. This strengthens your teeth and makes them more resistant to acid erosion. Additionally, fluoride becomes embedded in saliva, which routinely bathes your teeth and applies fluoride topically.
Topical fluoride helps in a slightly different way. Tooth decay is a result of the activity of the bacteria in your mouth. The various germs and bacteria in your mouth will feed upon the sugar and starch that are present in foods that you eat. As they consume this sugar and starch, they’ll excrete acid. This acid will alter the pH levels in your mouth. If the pH levels in your mouth drop below a certain threshold (specifically, 5.5 for enamel), the enamel on your teeth will start to wear away.
This is where topical fluoride (such as the kind you’d find in toothpaste) comes in. If fluoride is present when the enamel of your teeth is rebuilding, which happens continuously, it’ll slightly modify the remineralization process. Instead of your teeth rebuilding themselves with hydroxyapatite, they’ll instead rebuild with fluorapatite. The difference between these two materials (aside from their complicated names) is that fluorapatite is more resistant to the acid the bacteria in your mouth produce.
The crux of this is that your teeth are constantly being worn down and rebuilding themselves. Not using a fluoride-based toothpaste for even a few days will cause your teeth to rebuild themselves with the less resistant hydroxyapatite. A constant and consistent presence of fluoride is necessary in order to keep your teeth continually strong.
Water Fluoridation
Depending on who you talk to, the topic of water fluoridation can result in a pretty heated debate. At a high level, the idea of the government adding chemicals to our drinking water sounds a bit scary. As it turns out, though, there’s a lot of science to back up the benefits of this practice.
As mentioned earlier, fluoride occurs in our drinking water naturally; however, this natural amount of fluoride is not enough to help combat the effects of acid erosion on our teeth. As such, water fluoridation became an official policy in the United States in 1951 after numerous studies found it beneficial to preventing tooth decay.
The major concern around water fluoridation is the potential health effects it can have after repeated ingestion. As such, the CDC published an entire study on the effects of water fluoridation. The results were conclusive: water fluoridation poses no danger whatsoever. Even at 3 to 6 times the standard concentration, there were no observable adverse health effects. Furthermore, reports on the removal of fluoride from drinking water show a pretty severe increase in tooth decay.
Simply put: drinking tap water that includes fluoride is good for your oral health.
Dangers of Fluoride
Most of the major danger concerns with fluoride are pretty baseless. Of the six adverse health effects listed on Wikipedia’s page on fluoride toxicity, fluorosis is “the only generally accepted adverse health effect” of fluoride.
What is fluorosis? It’s simply a cosmetic issue that presents itself when increased amounts of fluoride are ingested. Dental fluorosis is generally found in children’s teeth during tooth development. It can cause varying degrees of intrinsic tooth discoloration but is otherwise harmless. General treatments of fluorosis involve teeth bleaching, micro-abrasions, veneers and crowns.
A fluoride-based toothpaste, along with mouthwash and flossing, is an important part of a healthy oral care routine. Most of the concerns around fluoride are based on simple fear and not fact. Water fluoridation can significantly increase resistance to tooth decay and help to prevent cavities. The only concern around fluoride is its use with very young toddlers who may accidentally swallow toothpaste. For these cases, make sure to use fluoride-free toothpaste to help prevent cosmetic issues. Otherwise, fluoride is an extremely helpful way to keep your teeth healthy. | <urn:uuid:1a3fc9b9-262c-4db7-af55-a4243380392d> | CC-MAIN-2023-06 | https://www.creditviewdental.com/site/dental-blog-mississauga/2016/09/09/fluoride-facts | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495001.99/warc/CC-MAIN-20230127164242-20230127194242-00279.warc.gz | en | 0.949057 | 1,130 | 3.40625 | 3 |
Older Geologic Maps of Margarita Peak Area
See Flora of Margarita Peak: Physical Setting and Geology for an introduction to this page, and the later 2005 geologic mapping.
The following map gives the geology from the printed 1966 large-scale Santa Ana Sheet Geologic Map of California, and from the 2001 Preliminary Geologic Map 2001 of the Margarita Peak Quadrangle (5 MB).
For brevity on the map, the label "granite" refers to both granite and granodiorite.
Blue lines and blue labels correspond to the 1966 mapping (except for the blue line that follows 8S01 going roughly east-west along the top of the map). Red lines and red labels correspond to the 2001 mapping, which stops at about the middle of the above map.
The black diagonal lines indicate areas of disagreement between the two geologic maps.
The most surprising and significant difference is the western side of the parcel, where the 1966 mapping gives the rocks as Santiago Peak Volcanics. The 2001 mapping gives the rocks as fine-grained granodiorite in the northern two-thirds of the parcel, and granodiorite in the southern third. The 2005 mapping gives this area as quartz-bearing diorite, which seems to fit the rocks we observed in the field there.
The source of this discrepancy is probably the large-scale mapping in the 1966 version. There is an extensive area of Santiago Peak Volcanics to the west of this parcel; the eastern boundary was probably just not drawn accurately on the 1966 map.
The rest of the western half of the parcel is mapped as granodiorite in the 2001 mapping, but the 1966 mapping has a tongue of Bedford extending into the property. The 2005 mapping enlarges the area of Bedford around that tongue, and says that the granitic areas include significant Bedford rock. Most likely, the granodiorite and Bedford are quite intermixed here, making any given area hard to call. | <urn:uuid:8b47c679-d037-44b1-ad9d-ab4445cc9744> | CC-MAIN-2017-47 | http://tchester.org/sd/plants/floras/pix/margarita_peak_older_geologic_maps.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806832.87/warc/CC-MAIN-20171123123458-20171123143458-00107.warc.gz | en | 0.916887 | 411 | 2.5625 | 3 |
My 13 Commandments is Matthias Fritsch's personal tool for trying out the path to a sustainable society in everyday life. Since even small steps can have a strong effect on everyday habits through constant routine, there is a very great potential for positive changes here.
If a person changes a daily habit, the sum of that constant change becomes significant after one year. If many people change certain habits at the same time and create a critical mass, the effect becomes fundamental and can lead to social change.
WHY 13 COMMANDMENTS?
A sustainable society leaves its environment and culture in the same or better condition. 'We are dwarves on the shoulders of giants' and will eventually be part of the giants that future generations build on. I try to live and act according to the following principles. They can help ensure that the future is not worse than the present. At the moment the scientific consensus is that we urgently need to change our habits. How fast we can do this as a society depends directly on every one of us.
1. be grateful to live & respect the life around you
We're all mortal. Rejoice every morning that you are alive. Respect the other people, animals and plants around you. Albert Schweitzer once said: "I am life that wants to live, in the midst of life that wants to live."
2. do not stop learning and share your knowledge with others
Keep trying to learn new things. Be independent and understand the big connections and systems. Question arguments of others and research their content of reason and truth yourself.
3. think global, act local
Demand determines supply. Question what you consume. Every consumer action and every cent spent causes an expenditure of resources, energy, transport & labour somewhere in the world. The profit and the associated taxes benefit a certain person, company, organization, municipality and form of government - you decide which one at the moment of purchase and consumption.
4. reduce, divide & use things several times
Live with less and enjoy the resulting free space and leisure time: less consumption, less work, less technology; save water, energy and raw materials. How? Share what you have with other people, give things away, take more sustainable holidays, recycle materials, buy used goods, collect usable items on the street, and offer your help & tools to others.
5. keep the environment clean
Artificial man-made materials often damage the environment. Leave your environment cleaner than you found it, and when you collect your rubbish, take the opportunity to remove some rubbish left behind by other people.
6. avoid plastic & harmful materials
Reduce your waste production to a minimum. Plastic, in particular, is currently a major threat to ecosystems and ends up in large quantities in the ocean, where it decomposes into microparticles, causes a lot of damage to animals, combines with other pollutants and ultimately returns to our food cycle. Use plastic more than once and packaging materials that are made of only one material and therefore easier to recycle or even better biodegradable plastic alternatives.
7. use your body for transportation
Look at everyday transport as sports exercises. Activities: Walking, running, carrying, cycling, stairs instead of elevators.
The fewer transport machines and routes you use, the less this infrastructure needs to be rebuilt and maintained.
8. produce and save energy where you live
Decentralize the energy grid, reduce transmission losses, produce energy with simple technologies that you can master. 50% of our energy consumption at home is used for heating. Put on one more garment instead of turning up the heating.
Possibilities: small solar and wind energy plants, Bio-Meiler and various off-grid alternatives.
Efficiency and sufficiency must go hand in hand to prevent rebound effects. Surf the Internet at low resolution and low bit rates for media, as less power and infrastructure are required for transmission & caching.
9. eat plants from the neighbourhood
Use plants from the region and your immediate vicinity. Collect and grow your own food and support local producers. Proceed in a habitat-conserving manner. Opportunities: city gardens, vegetables on the windowsill & on the wall; don't throw away any food, but cook hot smoothies from leaves, stems and second-rate but unspoiled food that you wouldn't otherwise eat.
10. cook and process your food yourself
Keep control of your food. We are what we eat. Learn to cook, ferment and enjoy industrially processed food only on special occasions.
11. reduce meat and dairy products
According to current projections, up to 51% of all man-made CO2 emissions can be attributed directly or indirectly to livestock farming. As a result, eating little or no meat would already meet half of our most pressing climate change targets. Eat meat only on special occasions.
12. produce humus & return carbon to the soil
Worldwide, mankind is destroying the fertile soil that is the basis for our food production. Help to rebuild a fertile humus layer and return carbon to the soil!
Tools: humus generators in every home, Terra Preta for permanent humus, Kon-Tiki for charcoal production
13. collect your urine & feces and bring nutrients back to the fields
This helps to close nutrient cycles. At the moment we are unfortunately flushing valuable nutrients through our water-toilet system into the water cycle, where they have no place and where treatment is very expensive and energy-intensive. It would be better to send them back where they came from: to the fields and forests. In cities, an infrastructure still needs to be developed to collect, compost and process urine and excrement on a large scale. Privately, anyone can start now: compost toilets and watering with gold water (a 10% urine & 90% water mix; approx. 200 m² of ground area per person per year are needed for spreading)
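As a rough, hedged calculation of what the gold-water recipe means in practice: the 10%/90% mix ratio and the ~200 m² per person per year come from the text above, while the ~1.5 L daily urine output is an assumed average, not a figure from this document.

```python
# Back-of-the-envelope gold-water arithmetic (assumptions labelled).
URINE_SHARE = 0.10        # 10% urine, 90% water (from the text)
DAILY_URINE_L = 1.5       # ASSUMED average adult output per day
AREA_PER_PERSON_M2 = 200  # spreading area per person per year (from the text)

# Water needed so that urine makes up 10% of the final mix.
water_needed_l = DAILY_URINE_L * (1 - URINE_SHARE) / URINE_SHARE
mix_total_l = DAILY_URINE_L + water_needed_l

print(f"{water_needed_l:.1f} L water per day -> {mix_total_l:.1f} L of mix")
```

So one person's daily output becomes roughly 15 litres of dilute fertiliser, which makes clear why the practice needs real garden area rather than a few balcony pots.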
Your child might brush and floss regularly, but you may be surprised at their next dental appointment to find out that they have several cavities. This can be confusing for parents who are diligent in making sure their children are keeping up with their oral health. However, what many parents don’t understand is that it is fairly common for children to get cavities no matter how well they clean their teeth. These are four of the reasons why children develop cavities.
1. Streptococcus Mutans
Dental caries (cavities) is the scientific name for the disease of tooth decay, and it is one of the most common illnesses children get. Cavities are caused by the bacteria known as Streptococcus mutans. These bacteria spread between family members (mostly from mothers to children), sometimes via shared utensils and toothbrushes. It is considered an illness or disease because babies are born without these bacteria in their mouths. They only become affected once other saliva finds its way into a child’s mouth.
2. Blame It on Their Genes
The Strep mutans bacteria tend to run rampant through families. This means that for parents who had more cavities as kids, their children usually have the same issues.
The myth of “soft teeth” running in families is really an issue of some people having more of the bacteria than others. Because these bacteria are more prevalent in some families than others, it explains why some children are more prone to develop cavities, no matter how good they are about brushing. This is also why some children can eat candy all the time and not brush their teeth regularly yet never have a cavity (…but this is obviously not recommended!)
3. The Bacteria Again, and Plaque
Once the bacteria have been passed on, they will start causing plaque to form. Plaque forms around the teeth and gum lines. It generally has a yellowish/white colour and can build up without proper care.
The bacteria in the plaque feed on sugar. In doing so, they produce an acid that depletes calcium and erodes the structure of the teeth. This can be a major issue for children, who might be eating sugary foods throughout the day. This produces more plaque, leading to more of the acid forming. Over time, an area on the tooth without calcium gets so big that the surface collapses, creating a cavity.
4. Food and Drink Today (Compared to the Past)
The food and drinks that children consume can play a part in determining how many cavities they may develop. Nowadays, children are at greater risk of developing cavities because of changes to their diets such as:
- More Sugar. Children are consuming much more sugar than they did in decades past. Since sugar feeds the bacteria that cause cavities, an increase in sugar will increase the risk of cavities forming. It isn’t just candy that provides the sugar either. Healthy snacks like dried fruits have high sugar contents. This is why it is important to watch what a child eats.
- Acidic Drinks. Fruit juices, sports drinks, and sodas contain citric and carbonic acid, which can erode the enamel in the teeth. The enamel contains the calcium in the tooth. As this part erodes, it becomes more susceptible to losing calcium caused by the acid the plaque creates. For this reason, care should be taken to get the child to make a habit of rinsing their mouth out after drinking these types of beverages.
- Bottled Water. Most water supplies have fluoride added to help strengthen teeth. This was a great public health service, but one rendered moot since parents switched to giving their children bottled water. Most bottled waters are not fluoridated, so kids’ teeth aren’t as strong because of the lower levels of fluoride.
Practicing good dental care should start in childhood. Still, parents should remember that no matter how well their kids clean their teeth, they can still get cavities. Because the amounts of bacteria vary from child to child, the best precaution is to maintain proper care so the risk for cavities will be reduced. | <urn:uuid:e8ac5a81-24f9-4fbd-a647-b91a9a4916c8> | CC-MAIN-2019-35 | https://dentrixdentalcare.com/4-reasons-why-children-get-cavities/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315258.34/warc/CC-MAIN-20190820070415-20190820092415-00337.warc.gz | en | 0.967359 | 853 | 3.078125 | 3 |
An enterprising 16-year-old in Kansas recently 3D-printed a prosthetic hand for his 9-year-old family friend, giving the young tyke the use of fingers for the first time in his life. And he did it all at the local county library.
Mason Wilde, the teenager with the 3D printing skills, first became interested in making the prosthetic after young Matthew, who has only partial digits on his right hand, saw a picture of the "Robohand" online. The prosthetic, originally co-designed by puppet artist Ivan Owen and South African woodworker Richard Van As, had recently been posted to Thingiverse, a collection of downloadable 3D-printed designs.
It was just what Matthew needed, but his mother had no idea how they might actually be able to make one. The family had long since given up hope of purchasing a commercial prosthetic hand, which can cost upwards of $18,000, even with insurance. Knowing that Wilde was computer-savvy, she contacted him to see if he might help.
Sure enough, the 11th-grader crunched some numbers and tweaked the design. And after an eight-hour 3D-printing session at the Johnson County Library, little Matthew had a new hand, and it only cost about $60 in materials. "I was happy, happy, happy," Matthew told The Kansas City Star. "Whopping Gangnam style!" he added, doing a little dance.
This is hardly the first time a 3D printer has provided someone with an affordable prosthetic. Last year, a loving father 3D-printed a similar prosthetic hand for his son. Now that the designs are online for anybody to download and use, we can only hope this will happen more. After all, there is much promise in our cyborg-friendly future. [KC Star via Twitter] | <urn:uuid:f525ee0d-2967-41db-99e3-98454b9fb59e> | CC-MAIN-2014-15 | http://gizmodo.com/now-this-is-what-technology-and-science-should-be-about-1516631974/@Citizen-Kang | s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00442-ip-10-147-4-33.ec2.internal.warc.gz | en | 0.980101 | 386 | 2.53125 | 3 |
The city of Atlanta was paralyzed earlier this year when a rare snowstorm turned its roads into something akin to the luge course in Sochi. Many fingers were pointed assigning blame for the resulting traffic catastrophe, including at least one aimed at imprecise weather predictions.
“The governor of Georgia said that they thought the heavier snowfall was going to be south of the city,” said Ana Barros, professor of civil and environmental engineering at Duke University. “But there’s a lot of uncertainty in those predictions because we don’t really understand the fine details of complex storm systems. We don’t know how to model these processes at high spatial resolutions.”
This summer, Barros and her colleagues will conduct the first field mission with a new satellite system intended to fill in those knowledge gaps. On Feb. 27, NASA and JAXA—Japan’s national space agency—will launch the core satellite for their new Global Precipitation Measurement (GPM) mission.
GPM is an international satellite mission designed to provide more detailed measurements of rain and snow over a wider range of the globe than previously possible. Not only will the satellite have more precise instrumentation than its predecessors; its orbit will allow researchers to study rainfall at higher latitudes at higher spatial and temporal resolutions. The data it collects will help unify measurements made by partner satellites and add to science’s understanding of how weather works.
Before meteorologists can start plugging the new data in to their weather models, however, researchers have to make sure they can accurately interpret the GPM measurements. The upcoming field mission, based in the mountains of western North Carolina and led by Duke engineers, will help achieve this by comparing satellite readings with those taken simultaneously from multiple aircraft and ground sensors. Besides calibrating the new satellite, the campaign will help improve how precipitation processes are represented in forecast calculations. It will also provide data and inform models used to address critical water management issues in mountainous regions.
“The campaign that we are running will obtain very high-resolution data of precipitation and the microphysics of storm systems in mountainous regions,” said Barros. “The end goal is to improve weather predictions and climate models.”
Between May 1 and June 15, measurements will be taken on the ground as well as by two aircraft flying at different altitudes and by the newly launched GPM satellite. The data collected will help calibrate the new satellite’s sensors for the rest of its long-term mission to study complex weather phenomena.
The campaign will also be the first of its kind in mountainous regions that, according to Barros, are home to complex rain patterns that are one of the biggest challenges in remote sensing.
After the initial field campaign ends on June 15, the two participating aircraft will move on to other missions, but Barros and her team will continue taking data from the new satellite and the extensive ground sensor network they have built until the effective end of the hurricane season.
“When we first started in 2007, there were only two rain gauges reliable for these kinds of studies above 3,200 feet in the whole eastern United States,” said Barros. “Now, for this experiment, we have more than 100. It’s taken a long time to get here, but we’ve had a lot of help along the way.
“We’ve been working with a handful of non-governmental organizations along with water authorities, planning commissions and literally dozens of independent entities, including the Haywood Community College, the Haywood Electric Membership Cooperation, Wilson College, UNC-Asheville, Maggie Valley Water District, Pisgah Astronomical Research Institute, ABTech, and even local landowners and landmarks, like Joey’s Pancake House, who have been very supportive of our field work.” | <urn:uuid:da323c2a-9bda-49e6-929e-14edccb971a1> | CC-MAIN-2017-43 | https://bigkingken.wordpress.com/2014/03/05/tuning-nasas-newest-weather-satellite-to-improve-forecasts/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824293.62/warc/CC-MAIN-20171020173404-20171020193404-00779.warc.gz | en | 0.955956 | 794 | 3.1875 | 3 |
Friendships for Life
Friendships are an important part of your children’s life. As a parent, knowing who your children’s friends are will help you encourage and discourage relationships in their best interest. Conversations about friends and relationships give you an opportunity to discover who their friends are and how their friends may be influencing them.
Table Topics Questions
If you could invite one person to share this meal with us, who would it be and why?
TIP: Listen carefully: If they would invite a friend, they feel friendship is important.
If they would invite a family member, they feel family is important.
If they would invite a famous person, consider the character of that person and ask your child why they would like to invite them.
If you could spend one day with your best friend, who would it be and what would you do?
TIP: Listen to the adventure your child describes. Best friends can change over time and it is always good to know with whom your children want to spend time.
After they share their story, share a story from your childhood about an adventure with a friend and tell them how that friendship made you feel.
Your children will develop friendship as they have seen you develop them. By sharing fond childhood memories of friendship, you encourage the same for your children.
How do you know someone is your friend?
TIP: Listen to how they define friendship. Healthy friendships benefit both people. So, listen for clues as to how your child identifies people as friends.
For example: If your child describes a friend as someone who shares their toys, then that suggests they have a healthy relationship.
If your child describes a friend as someone who likes to spend time doing the things they both like to do, then, again, this is evidence of healthy friends.
Children who learn to cultivate and maintain healthy friendships and relationships will become adults who cultivate and maintain healthy friendships and relationships. You, as the parent, know best and can encourage those friendships and relationships that are in the best interest of your child. | <urn:uuid:ae669693-e03e-4d82-8068-cb9b4383afed> | CC-MAIN-2019-09 | https://freshtakefamily.com/friendships-for-life/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247511573.67/warc/CC-MAIN-20190221233437-20190222015437-00264.warc.gz | en | 0.968453 | 422 | 3.296875 | 3 |
This past September, as Hurricane Florence bore down on the Atlantic coastline, researchers and forecasters were more prepared than ever before to deal with the daunting effects of the upcoming storm. While these destructive forces of nature will never be truly neutralized, cutting-edge observation systems have made predicting their effects, and making people safer, easier than ever. That’s not to say it’s a simple task.
While today’s satellites can predict typical weather conditions fairly accurately, patterns of hurricanes making their way over the ocean are a bit more complicated. Predicting a hurricane’s path is tricky, which is why scientists work hard to gather as much data as possible from each major storm to better predict the next one.
Over the past few decades, however, weather forecasters have been able to rely on the combination of satellite technology, advanced radar systems, and well-designed hurricane aircraft to bring about a clearer picture of hurricane and tropical storm behavior than ever thought possible. Today’s technologies allow researchers and forecasters to track a hurricane’s path and predict its size and force with a remarkably high degree of accuracy.
Up Close and Personal Data Collection
Accurately predicting the path and potential damage of a storm requires some truly up close and personal data collection that’d be far too dangerous for a human to conduct in person. To get the most valuable information available, researchers have a secret weapon about the size of a paper towel roll: the dropsonde. Dropsondes are designed by Vaisala, a company based out of Louisville, Colorado. These appropriately named devices are dropped out of high-altitude planes, directly into the hurricane to gather and send data about the storm to pilots and research labs.
Originally developed by the National Center for Atmospheric Research in Boulder, these tools, formally known as the Airborne Vertical Atmospheric Profiling System (AVAPS™), debuted in 1997 for operational weather forecasting and atmospheric research efforts.
Dubbed ‘workhorses in hurricane forecasting’ by the National Science Foundation, dropsondes can withstand extreme hurricane conditions to provide accurate, useful data. Each dropsonde is relatively lightweight and loaded with sensors. They’re small and efficiently designed, capable of capturing data twice per second in the harshest conditions imaginable.
Released from airplanes straight into the storm, dropsondes fall to the ground quickly, making every second of data collection extremely precious. Developers attach a small parachute to each unit — slowing down the drop rate so the devices can accurately measure temperature, humidity, wind speeds, and other important data points. Back at the research center, scientists can extrapolate all the data to formulate detailed projections, adding to a body of knowledge that will one day predict hurricanes the way we can today forecast a sunny afternoon.
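The value of the parachute can be put in rough numbers. The short sketch below estimates how many readings a single drop yields at the article's 2 Hz sampling rate; the release altitude and average descent rate are illustrative assumptions, not Vaisala specifications.

```python
# Back-of-envelope estimate of how many readings a dropsonde collects
# during one descent. The 2 Hz rate comes from the article; the release
# altitude and parachute-slowed fall rate are illustrative assumptions.

SAMPLE_RATE_HZ = 2.0         # "capturing data twice per second"
RELEASE_ALTITUDE_M = 3000.0  # assumed release height
DESCENT_RATE_MS = 12.0       # assumed average descent rate with parachute

def samples_per_drop(altitude_m: float, fall_rate_ms: float,
                     rate_hz: float = SAMPLE_RATE_HZ) -> int:
    """Number of readings collected between release and splashdown."""
    descent_time_s = altitude_m / fall_rate_ms
    return int(descent_time_s * rate_hz)

print(samples_per_drop(RELEASE_ALTITUDE_M, DESCENT_RATE_MS))  # 500 readings over ~250 s
```

Even under these generous assumptions, one drop lasts only a few minutes, which is why slowing the descent matters so much for data quality.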
Tracking the Hurricane in Real Time
During Hurricane Florence, research scientists at NOAA’s National Severe Storms Laboratory were even able to launch high-tech weather balloons into the middle of the hurricane to capture data. Sensors inside the balloon helped scientists monitor Hurricane Florence as it made its way to the shore and transitioned from a hurricane to a tropical depression. This type of technology helps data scientists analyze various conditions before, during, and after the hurricane, track the hurricane’s path, and make accurate estimates and assumptions when building models.
The National Hurricane Center has a formal process in place for forecasting all types of tropical cyclone activity in the Atlantic and Pacific around North America and is responsible for communicating its forecasts every six hours. Forecasters use everything from satellites, aircraft, ships, buoys, radar devices, and land-based tools to track hurricanes and predict their paths as accurately as possible. Once a hurricane looks likely to make landfall and is identified as a real threat, it’s closely monitored by U.S. Air Force and NOAA hurricane aircraft.
While the storms themselves can’t be stopped, high-tech data collection and analysis can greatly reduce the risk presented by each new storm, and influence building and city planning practices to further protect residents from these incredibly powerful weather systems. This high-tech development, perfected over time, will one day make hurricanes like Florence a much less daunting event. That’s an evolution worth applauding.
For the majority of African-Americans, the church has traditionally played a significant role in the life of the community. This continues to hold true today: According to the Pew Research Center, more than half of all African-Americans attend church services weekly, compared with 39 percent of the total U.S. population. African-American religions and religious beliefs spring from this community's history of oppression as well as its African roots.
Africans captured and brought to America were able to hold on to some of the religious practices common to their native land. The musical rhythms, drumming, dancing and call-and-response method of preaching come from Africa, as do the beliefs in spirit possession, healing and magic rituals, which are still practiced in some African-American churches. In addition to African religions, the Christianity that was practiced in the South had a strong influence on the development of African-American religion. Slave owners often took slaves to services, but they had to sit in a separate section. Blacks met with discrimination at churches in the North as well, and they began to form their own churches.
Christianity is the religion practiced by the great majority of African-Americans, according to the 2007 U.S. Religious Landscape Survey conducted by the Pew Research Center's Forum on Religion and Public Life. Seventy-eight percent are Protestant, and a majority belongs to the traditional black churches, including the National Baptist Convention and the American Methodist Episcopal Church. Forty percent are Baptists. Other Protestant faiths represented include Pentecostals, evangelicals and non-denominational. Only a small number, 4 percent, of African-Americans belong to mainline Protestant churches, and only 5 percent are Catholic. Fewer than 5 percent of African-Americans claim any faith other than Christianity, and 12 percent are not affiliated with any religious group.
Among African-Americans, 88 percent firmly believe in God, 55 percent believe the Bible is the literal word of God, 83 percent believe in angels and demons, 58 percent believe in life after death and 84 percent believe in miracles. Even the most religious African-Americans are just as likely to describe themselves as politically moderate as politically conservative. Forty-nine percent of African-Americans believe abortion should be legal in most cases and 44 percent believe it should not be; 60 percent believe churches should express their political views, but 60 percent also believe churches should not tell their members how to vote.
Degree of Religiosity
As previously stated, African-Americans are more religious than the general population. In addition to higher church attendance, they are more likely to say religion is very important in their lives: 79 percent of African-Americans express this view, compared with 56 percent of all Americans. African-Americans are more likely to pray, with 76 percent claiming to pray daily, compared with 58 percent of all Americans. Even among African-Americans with no religious affiliation, 70 percent believe in God and almost half pray daily.
José Vasconcelos published The Cosmic Race in 1925. In his view, Mexico could construct a new, better race from the diverse origins of its people.
Oswald de Andrade published his Anthropophagic Manifesto in 1928. In his view, Brazil could construct new, better works by adapting ideas from the exterior. Among Andrade's inspirations was André Breton (perhaps the Surrealist Manifesto of 1924).
Had Andrade also read Vasconcelos?
According to the Centers for Disease Control and Prevention, cigarette smoking is the primary cause of preventable death. In the United States, 16 million people have a disease caused by smoking cigarettes. The habit is directly responsible for illnesses such as stroke, respiratory diseases, and cardiovascular diseases, as well as several kinds of cancer.
It makes sense for people to attempt to quit smoking. Some may abruptly stop the habit without medication. Others may need the aid of a nicotine patch or guided hypnosis with a clinical hypnotherapist. Although you can stop smoking for good with any of these techniques, you may relapse.
What makes cigarettes so seductive?
Cerebral Smoke Signals
The addictive power of cigarettes comes from the unique chemical properties of nicotine. Unlike other addictive substances, smoking a cigarette doesn’t induce euphoric happiness or trigger audiovisual stimulation. Simply put, you don’t get high from smoking a cigarette. The nicotine a cigarette contains only triggers the release of dopamine in the brain.
What makes nicotine unique is that each puff of a cigarette tells the brain to release dopamine, unlike other drugs that stimulate the brain just once. This hardwires the brain to repeat the behavior just to receive more dopamine, almost like a Pavlovian reaction.
The repeated release of dopamine also makes other activities more pleasurable, reinforcing those behaviors. Because of this, nicotine acts as a gateway substance, making the consumption of other chemicals more pleasurable to the brain. As such, it can be difficult but not impossible to “unlearn” the behavior. You can do it through discipline and by curbing the impulses that nicotine has trained into the brain.
Resisting the urge to smoke a cigarette is the key to quitting for good. You can control your nicotine craving in a number of ways. They range from distracting yourself to replacing the craving with something else.
- Avoid your tobacco triggers. You will experience the strongest cravings when you’re in situations where you used to smoke. These situations can be events, such as drinking sessions or parties, or emotions, like when you feel stressed or happy. Pinpoint these triggers and change the routine that you associate with smoking.
- Think about why you’re quitting. Remember the reasons you’re quitting. Perhaps it’s because you want to be healthy, or because your significant other has asked you to stop. Maybe you’re going to be a parent soon. Visualize these reasons or say them out loud to help you resist the cravings.
- Sweat it off. Distract yourself from the urge to smoke by engaging in physical activity. If you’re outdoors, try walking or jogging around the block a couple of times. If you’re in an office, walk up and down the stairs or keep yourself busy with clerical work.
- It doesn’t end with one. At one point, you may try to rationalize that you’ll only have “just one more.” You may believe that doing so will satisfy the craving then and make it easier for you to stop. Don’t believe it for a second. Smoking one more stick can unravel all your hard work and cause you to take up the habit again.
Ask others for help when you feel like your best efforts might not be enough to keep you from smoking again. Enlist the support of friends and family, visit online support groups, or work with a therapist. Above all, believe that you can do it. Once you’ve committed to giving up smoking, never give up on yourself.
Pancreatic islet β-cell insufficiency underlies pathogenesis of diabetes mellitus; thus, functional β-cell replacement from renewable sources is the focus of intensive worldwide effort. However, in vitro production of progeny that secrete insulin in response to physiological cues from primary human cells has proven elusive. Here we describe fractionation, expansion and conversion of primary adult human pancreatic ductal cells into progeny resembling native β-cells. FACS-sorted adult human ductal cells clonally expanded as spheres in culture, while retaining ductal characteristics. Expression of the cardinal islet developmental regulators Neurog3, MafA, Pdx1 and Pax6 converted exocrine duct cells into endocrine progeny with hallmark β-cell properties, including the ability to synthesize, process and store insulin, and secrete it in response to glucose or other depolarizing stimuli. These studies provide evidence that genetic reprogramming of expandable human pancreatic cells with defined factors may serve as a general strategy for islet replacement in diabetes.
https://doi.org/10.7554/eLife.00940.001
Diabetes mellitus is a disease that can lead to dangerously high blood sugar levels, causing numerous complications such as heart disease, glaucoma, skin disorders, kidney disease, and nerve damage. In healthy individuals, beta cells in the pancreas produce a hormone called insulin, which stimulates cells in the liver, muscles and fat to take up glucose from the blood. However, this process is disrupted in people with diabetes, who either have too few pancreatic beta cells (type 1 diabetes) or do not respond appropriately to insulin (type 2 diabetes).
All patients with type 1 diabetes, and some with type 2, must inject themselves regularly with insulin, but this does not always fully control the disease. Some type 1 patients have been successfully treated with beta cells transplanted from deceased donors, but there are not enough donor organs available for this to become routine. Thus, intensive efforts worldwide are focused on generating insulin-producing cells in the lab from human stem cells. However, the cells produced in this way can give rise to tumors.
Now, Lee et al. have shown that duct cells, which make up about 30% of the human pancreas, can be converted into cells capable of producing and secreting insulin. Ductal cells obtained from donor pancreases were first separated from the remaining tissue and grown in cell culture. Viruses were then used to introduce genes that reprogrammed the ductal cells so that they acquired the ability to make, process and store insulin, and to release it in response to glucose—hallmark features of functional beta cells.
As well as providing a potential source of cells for use in transplant or cell conversion therapies for diabetes, the ability to grow and maintain human pancreatic ductal cells in culture may make it easier to study other diseases that affect the pancreas, including pancreatitis, cystic fibrosis, and adenocarcinoma.
https://doi.org/10.7554/eLife.00940.002
The pancreas is a vital organ with exocrine and endocrine cell functions, and a root of lethal human diseases including diabetes mellitus, pancreatitis, and pancreatic ductal adenocarcinoma. Exocrine acinar cells produce digestive zymogens that are delivered to the intestines by a branching network of exocrine ductal cells that secrete bicarbonate and other products. Pancreatic endocrine functions derive from clusters of epithelial cells (islets of Langerhans) called α-, β-, δ-, and PP-cells that respectively synthesize, store, and secrete the hormones Glucagon, Insulin, Somatostatin, and Pancreatic polypeptide (Benitez et al., 2012). Insulin production by islet β-cells is highly regulated: key features of mature β-cells include preproinsulin (INS) transcription, proinsulin processing by endo- and exo-peptidases and storage of the proinsulin cleavage products insulin and C-peptide in dense core vesicles. Likewise, cardinal β-cell functions regulate insulin release in response to glucose and other secretagogues, including glucose sensing and metabolism through the enzyme glucokinase, and use of ATP-dependent potassium channels (KATP) and voltage-gated calcium channels to induce insulin exocytosis (reviewed in Suckale and Solimena, 2010). Deficiency or malfunctioning of β-cells produces impaired glucose regulation and diabetes mellitus, a disease with autoimmune (type 1, T1DM) and pandemic forms (type 2; Ashcroft and Rorsman, 2012). Thus, replacement or regeneration of functional human β-cells is an intensely-sought goal.
Human islet transplantation can be used to replace β-cell function in T1DM (reviewed in Vardanyan et al., 2010), but a shortage of donors currently precludes broad use of human pancreatic islets for β-cell replacement. Because of their expandability and multipotency, human embryonic stem cells (hESCs) and induced pluripotent stem cells (iPSCs) have been explored as sources of replacement insulin-producing cells (reviewed in Hebrok, 2012). However, directing the differentiation of these developmentally ‘primitive’ cells through multiple sequential fates into β-cell-like progeny that synthesize, process, store, and secrete insulin while lacking tumorigenic potential has challenged investigators worldwide (Fujikawa et al., 2005; McKnight et al., 2010; Cheng et al., 2012). Moreover, different hESC and iPSC cell lines exhibit significant variability during development into insulin-producing cells (Nostro and Keller, 2012). Recent work demonstrated that differentiated cell types in adult organs, including the mouse pancreas, can be experimentally ‘reprogrammed’ into progeny resembling islet cells, suggesting a new strategy for β-cell replacement (Vierbuchen and Wernig, 2011). For example, adult mouse pancreatic acinar cells can be converted into insulin-producing cells in vitro and in vivo (Minami et al., 2005; Zhou et al., 2008). However, little progress has been made in reprogramming primary human epithelial cells into different cell types, including conversion of pancreatic non-β-cells toward a human β-cell fate (Vierbuchen and Wernig, 2011). Thus, systems permitting expansion and genetic modulation of human pancreatic cells could powerfully influence studies of β-cell biology and replacement.
Pancreatic ducts constitute 30–40% of human pancreas and have been proposed as a potential source of replacement β-cells (Bouwens and Pipeleers, 1998; Bonner-Weir et al., 2004). During pancreas development, fetal endocrine cells derive from primitive ductal epithelium (reviewed by Pan and Wright, 2011; Pictet and Rutter, 1972). In addition, some studies have suggested that in adult mice, β-cells may be produced from pancreatic ductal epithelium (Inada et al., 2008; Xu et al., 2008; Rovira et al., 2010). However, recent lineage tracing evidences have not supported this view (Solar et al., 2009; Furuyama et al., 2011; Kopp et al., 2011). In humans, prior studies have suggested that adult human primary ductal cells in heterogeneous cell mixtures may harbor the potential to generate endocrine-like progeny (Bonner-Weir et al., 2000; Heremans et al., 2002; Swales et al., 2012), but interpretation in these studies was limited by the probability of islet cell contamination. Therefore, the potential for conversion of pancreatic ductal cells toward an endocrine fate remains unclear. Moreover, prior studies have revealed only limited proliferative capacity of primary human pancreatic ductal cells in culture (Rescan et al., 2005). Thus, despite their relative abundance, multiple practical issues have prevented development of human pancreatic ductal cells as a source of replacement β-cells.
Here we report that normal human adult pancreatic duct cells can be sorted, clonally expanded, and genetically converted into endocrine cells. Human insulin-producing cells (IPCs) produced from sorted duct cells exhibited hallmark features of functional neonatal β-cells including high-level preproinsulin (INS) expression, proinsulin processing and dense-core granule formation. Moreover, secretion of insulin and insulin C-peptide from IPCs is stimulated by glucose and KATP channel stimulants in a calcium-dependent manner. Together these studies reveal a new system for investigating human pancreatic duct cell biology, genetics, and β-cell regeneration.
To identify human pancreatic epithelial cells that can be grown and maintained in culture, we systematically screened cell isolation methods and culture conditions with dispersed adult human pancreatic cells obtained from cadaveric donors without known pancreatic cancer, diabetes mellitus, or other pancreatic diseases (Table 1). With primary cells plated at low density, we observed formation of multicellular epithelial spheres, when cultured in Matrigel with a serum-free culture medium without feeder cells (‘Materials and methods’, Figure 1—figure supplement 1A). The multicellular sphere formation suggested primary cell expansion, so based on this assay we fractionated cells by fluorescence-activated cell sorting (FACS) to isolate and characterize sphere-forming pancreatic cells. A survey of cell surface markers used for fetal mouse pancreatic cell isolation (Sugiyama et al., 2007) revealed that antibodies recognizing CD133 enriched sphere-forming cells by four fold, whereas sphere-forming cells were depleted in the CD133neg fraction (Figure 1A,B). Immunohistochemical analysis of the human adult pancreas revealed CD133 expression at the apical portion of duct epithelial cells that co-expressed keratin 19 (KRT19), whereas CD133 was undetectable in islet endocrine cells or acinar cells (Figure 1C, Figure 1—figure supplement 1B), consistent with prior reports (Lardon et al., 2008). We have achieved sphere formation from over 35 consecutive adult donors (Table 1); thus, the sphere formation of primary adult human pancreatic CD133+ cells was highly reproducible.
To assess the properties of FACS-purified adult pancreatic CD133+ cells, we performed quantitative reverse transcription PCR (qRT-PCR). This revealed that CD133+ cells expressed high levels of mRNA encoding ductal markers (KRT19 and CAR2), while mRNAs expressed in acinar (CPA1 and CEL) and endocrine (CHGA, INS, and GCG) cells were exclusively enriched in the CD133neg fraction (Figure 1D, Figure 1—figure supplement 1C). Immunostaining confirmed that >98% of sorted CD133+ cells produced KRT19, whereas CD133+ cells produced no detectable islet hormone (Figure 1E,F, Figure 1—figure supplement 1D). Thus, FACS efficiently eliminated islet endocrine and acinar cells, and enriched for a population of primary adult pancreatic duct cells that expanded as epithelial spheres in feeder- and serum-free culture.
After commencing in vitro cultures, the epithelial spheres from CD133+ ductal cells attained diameters ranging from 40 to 520 µm in 2 weeks (Figure 1A, Figure 1—figure supplement 1A and Figure 2—figure supplement 1A). Spheres 350–500 µm in diameter were composed of 1470 ± 310 cells (n = 5); thus, based on evidence of clonal expansion (see below), we calculated that spheres resulted from a minimum of 10 cell divisions in 2 weeks. Sphere epithelium maintained KRT19 protein expression and a polarized monolayer as indicated by apical localization of CD133 (Figure 2A, Figure 2—figure supplement 1A,D). Neither acinar (CPA1) nor islet endocrine (CHGA and insulin C-peptide) markers were detectable (Figure 3C and data not shown), suggesting epithelial cells in cultured spheres maintained ductal characteristics.
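The minimum-division estimate follows from simple doubling arithmetic, sketched below. This assumes clonal growth from a single founding cell (supported by the time-lapse imaging) with roughly synchronous doublings, a simplification; the paper does not state its exact calculation.

```python
import math

# Minimum number of population doublings implied by a sphere's cell
# count, assuming clonal growth from one founding cell. After n
# doublings a single cell yields 2**n cells, so a count of C cells
# implies at least floor(log2(C)) complete doublings.

def min_doublings(cell_count: int) -> int:
    """Smallest number of full doublings consistent with the observed count."""
    return math.floor(math.log2(cell_count))

print(min_doublings(1470))  # 10 -> matches the "minimum of 10 cell divisions"
```

Since 2**10 = 1024 ≤ 1470 < 2048 = 2**11, a 1470-cell sphere has passed through at least 10 full doublings in its 2 weeks of growth.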
To assess whether sphere growth was achieved by cell proliferation or by other mechanisms like cell migration and aggregation, we analyzed spheres by immunostaining and time-lapse imaging. Immunohistochemistry revealed the proliferation marker Ki-67 in more than 25% of cells comprising 2-week-old spheres (Figure 2A, Figure 2—figure supplement 1B; labeling index 26.5 ± 5.1%), data further supported by detection of a second proliferation marker, phospho-histone H3 (Figure 2A). Time-lapse imaging revealed that spheres arose from single cells (Figure 2B), providing strong evidence that sphere formation resulted from CD133+ ductal cell proliferation, rather than through cell migration and aggregation. Enzymatic dispersion of 2-week-old G1 spheres and subsequent culture revealed that the spheres can be passaged up to seven generations (G7, 3 months) and that the total number of cells increased with each generation (Figure 2C,D, Figure 2—figure supplement 1C). After G7, ductal cell expansion was not achieved, and the spheres were not formed (Figure 2—figure supplement 1C and data not shown), supporting the view that ductal epithelial cells are not immortalized, and consistent with the origin of pancreatic cells from donors without neoplasia.
The endocrine potential of human or mouse pancreatic ductal cells remains controversial. To investigate the potential of purified human pancreatic ductal cells to achieve an endocrine fate, we used an adenovirus-mediated transgenic system. Neurog3 is a transcription factor necessary and sufficient for pancreatic endocrine cell differentiation in vivo (Gradwohl et al., 2000; Gu et al., 2002) and, combined with other factors, can induce pancreatic acinar-to-islet cell conversion in mice (Zhou et al., 2008). To test if Neurog3 expression could respecify human duct cells toward an endocrine fate, we infected cultured spheres as well as primary CD133+ cells with recombinant adenovirus co-expressing red fluorescent protein and Neurog3 (Ad-RFP-Neurog3), and assessed changes in gene expression by qRT-PCR (Figure 3A–C and 4C). Neurog3 induced the expression of NEUROD1, INSM1, and RFX6 (Figure 3C), genes whose mouse homologs are known direct targets of Neurog3 in pancreas development (Mellitzer et al., 2006; Smith et al., 2010). Ad-RFP-Neurog3 infection induced expression of the pan-endocrine markers chromogranin A (CHGA) and synaptophysin in both primary CD133+ duct cells and cultured spheres (Figures 3C and 4C, and data not shown). Ad-RFP-Neurog3 infection also induced expression of mRNA encoding PAX4 and NKX2.2, transcriptional regulators of pancreatic endocrine cell fate (Sosa-Pineda et al., 1997; Sussel et al., 1998), and mRNA encoding crucial β-cell factors such as the prohormone processing enzymes PCSK1 (PC1/3) and PCSK2 (PC2), KATP channel components KCNJ11 (KIR6.2) and ABCC8 (SUR1), and glucokinase (GCK) (Figure 4D). Moreover, Ad-RFP-Neurog3 significantly induced mRNA encoding the pancreatic hormones ghrelin and somatostatin, but not mRNAs encoding insulin, glucagon, PPY or the intestinal hormones cholecystokinin and gastrin (Figures 3C and 4D, Figure 4—figure supplement 1A, and data not shown). 
These findings support the conclusion that human adult pancreatic ductal cells harbor pancreatic endocrine potential upon induction of Neurog3.
Immunostaining confirmed these qRT-PCR findings and demonstrated that only RFP+ cells produced by Ad-RFP-Neurog3 infection were immunostained with antibodies recognizing NEUROD1, NKX2.2, CHGA, SST or GHRL (Figure 3B,D, Figure 3—figure supplement 1). We also confirmed that no insulin-, glucagon- or PPY-positive cells were observed by immunostaining (data not shown). While only a subset of cells infected with Ad-RFP-Neurog3 (RFP+) expressed CHGA, we noted all GHRL+ or SST+ cells co-expressed CHGA (Figure 3D). Quantification of CHGA+ and hormone+ cells revealed that 30% of infected cells (RFP+) expressed CHGA. At least 45% of CHGA+ cells produced SST or GHRL, and less than 2% of CHGA+ cells expressed both hormones (Figure 3D,E). Thus, Neurog3 expression efficiently converted primary human ductal cells and cultured ductal epithelial spheres into hormone-expressing cells with cardinal features of endocrine pancreas.
In mice, Neurog3 gene dosage can determine commitment between exocrine and endocrine lineages in pancreas development (Wang et al., 2010). Therefore, we next assessed the possibility that the 70% of RFP+ cells infected by Ad-RFP-Neurog3 failing to express CHGA may have achieved inadequate levels of Neurog3 expression. We fractionated cells produced by Ad-RFP-Neurog3 infection by RFP intensity and measured mRNA expression of Neurog3, CHGA, SST and GHRL by qRT-PCR (Figure 3F,G). We found that cell fractions with the highest levels of RFP expression (‘P4 and P5’, Figure 3F) had the highest levels of mouse Neurog3 mRNA, and only these cell fractions produced mRNA encoding CHGA, SST or GHRL (Figure 3G). These data suggest that relatively high threshold levels of Neurog3 may be necessary and sufficient for directing endocrine differentiation of human pancreatic cells.
The transcription factors MafA, Neurog3, and Pdx1 (a combination hereafter summarized as ‘MNP’) were sufficient to convert adult mouse acinar cells into insulin-producing cells (IPCs: Zhou et al., 2008). We constructed three adenoviruses expressing MafA, Neurog3, or Pdx1 (see ‘Materials and methods’; Figure 4A), and infected cultured spheres with this MNP combination. Within 5 days after infection, we reproducibly detected INS mRNA induction but at extremely low levels relative to adult human islet controls (0.0035 ± 0.0012% of islet levels; Figure 4B). Thus, we sought additional factors and discovered that mRNA encoding PAX6, an important regulator of mouse pancreatic endocrine cell development (Sander et al., 1997), was induced by MNP to only 0.03% of levels in control islets (Figure 4—figure supplement 1A). When combined with MafA, Neurog3, and Pdx1 (encoded in four viruses, ‘4V’), Pax6 induced INS expression in primary CD133+ ductal cells or spheres by over 30-fold relative to MNP (Figure 4A,C,D). We observed ductal conversion to IPCs with four consecutive, independent donors (INS, Figure 4D). We also detected substantially increased expression of other islet endocrine markers, including SST, GCK, PCSK1, KCNJ11, and ABCC8 (Figure 4D). Immunohistochemical analyses demonstrated that the number of Insulin+ cells was increased by 18 to 20-fold in spheres transduced by the four factor combination (4V) compared to the MNP combination (Figure 4F,G). ELISA studies quantified and confirmed this increase of proinsulin levels, showing that the spheres derived from 4V exposure contained proinsulin levels that averaged 0.7% of those in human islets (Figure 5E). Systematic removal of individual factors from this four virus combination revealed that omission of Neurog3 prevented expression of INS, CHGA or SST (Figure 4E). 
Omission of virus expressing MafA or Pax6 from this combination significantly reduced INS expression (Figure 4E–G), whereas omission of virus expressing Pdx1 did not significantly decrease INS expression. Thus, Neurog3-mediated endocrine cell conversion is required for the production of IPCs as well as other hormone-producing cells from ductal spheres.
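Relative-expression figures like these ("0.0035% of islet levels", "over 30-fold relative to MNP") are conventionally derived from qRT-PCR Ct values with the 2^−ΔΔCt (Livak) method. The paper does not state its exact normalization scheme, so the sketch below, including the Ct values in the example, is illustrative only.

```python
# Livak 2**-ddCt fold change: the target gene's Ct is normalized to a
# reference gene within each condition, then the sample condition is
# compared to a calibrator condition. All Ct values here are invented.

def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_cal: float, ct_ref_cal: float) -> float:
    """Fold change of target in sample vs. calibrator, reference-normalized."""
    d_ct_sample = ct_target - ct_ref              # sample delta-Ct
    d_ct_calibrator = ct_target_cal - ct_ref_cal  # calibrator delta-Ct
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# e.g. INS in 4V spheres vs. MNP spheres (hypothetical Ct values):
print(relative_expression(24.0, 18.0, 29.0, 18.0))  # 32.0-fold induction
```

A 5-cycle drop in ΔCt corresponds to a 2**5 = 32-fold induction, the same order as the >30-fold increase reported for the four-factor combination.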
Although ELISA studies readily detected proinsulin production by IPCs in our 4V spheres, we failed to detect processed C-peptide by ELISA (Figure 5E) or by immunostaining with antibodies recognizing cleaved C-peptide (data not shown). Thus, we sought methods to enhance proinsulin processing in IPCs produced by genetic conversion. For this, we used Ad-Neurog3-IRES-eGFP and a second adenovirus constructed to express simultaneously the three transcription factors MAFA, PAX6, and PDX1 (Ad-eGFP-M6P) in cultured G1 spheres (Figure 5A, referred to as ‘4TF’ combination). Compared to our standard 5 day post-infection culture (4TF), we found that two additional weeks of culture (referred to as ‘4TFM’) resulted in a 10-fold increase of INS mRNA expression in spheres (Figure 5B,C, Figure 5—figure supplement 1A). We observed conversion to IPCs with five consecutive, independent donors (INS, Figure 5C), demonstrating the robustness of our conversion method. The total number of converted IPCs appeared unchanged after this extended culture compared to 4V cultures (Figure 4G, Figure 5—figure supplement 1B), suggesting that INS mRNA levels per cell were increased in the 4TFM (4 transcription factors in two viruses plus maturation period) condition. In addition, mRNA encoding islet amyloid pancreatic polypeptide (IAPP), a β-cell dense core granule component not detectable in standard 4TF conditions, was readily detected in 4TFM spheres (Figure 5—figure supplement 1A). 
Likewise, multiple mRNAs encoding β-cell factors were expressed at levels comparable to those in purified human islets (Figure 5C), including the transcription factors NKX2.2 and NKX6.1, GCK, glucose transporters SLC2A1 (GLUT1) and SLC2A2 (GLUT2), PCSK1, PCSK2, Zinc transporter SLC30A8, KCNJ11, ABCC8, the voltage-gated calcium channel component CACNA1C, regulators of Ca++-induced insulin exocytosis like RAB3A, SYT3, and VAMP2, and the postulated maturation marker Urocortin 3 (UCN3) (Suckale and Solimena, 2010; Blum et al., 2012). Immunohistochemical analyses corroborated our qRT-PCR analysis, and showed that converted Insulin+ IPCs did not express other islet hormones (Figure 5D). Although we were unable to assess endogenous MAFA and PDX1 in cells with virally-expressed exogenous MAFA and PDX1 protein, we readily detected other known β-cell specific markers including NKX6.1, IAPP, and PC1/3 (Figure 5D). Moreover, Insulin+ cells, but not other hormone+ cells, expressed NKX6.1, a transcription factor with expression normally restricted in islets to β-cells (Figure 5—figure supplement 1C and data not shown).
To assess enhanced IPC maturation after extended culture (4TFM), we measured proinsulin and insulin C-peptide by ELISA. Total insulin (proinsulin + C-peptide) levels ranged from 3.4 to 15.2 pmol/µg DNA (Figure 5E), equal to approximately 9.6% of the total insulin protein level found in human adult islets (Figure 5E). Moreover, the percentage of insulin C-peptide processing in IPCs was comparable to that found in adult human islets (IPCs 77–92%; human islets 96–97%), indicating that maturation of IPCs during extended culture permitted proinsulin processing (Figure 5E). Ultrastructural studies by electron microscopy demonstrated round dense-core vesicles (Figure 5F) resembling those in adult human β-cells, including subsets of immature (light core) and mature (dense or crystallized core) vesicles, and vesicles adjacent to the plasma membrane (Figure 5F, Figure 5—figure supplement 1D). Consistent with the detection of SST mRNA (Figure 5C,D), we also observed rare cells with irregular electron-dense granules characteristic of islet δ-cells (Figure 5—figure supplement 1E; Klimstra et al., 2007).
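The processing comparison above follows directly from the two ELISA readouts (proinsulin and C-peptide). A minimal sketch in Python, using hypothetical readings rather than the measured values, shows the calculation:

```python
def percent_processed(c_peptide, proinsulin):
    """Percent of total insulin (proinsulin + C-peptide) present as
    processed C-peptide, the metric compared between IPCs and islets."""
    total = c_peptide + proinsulin
    if total == 0:
        raise ValueError("no insulin species detected")
    return 100.0 * c_peptide / total

# Hypothetical ELISA values (pmol/ug DNA), chosen to land inside the
# 77-92% range reported for IPCs; not actual measurements.
print(round(percent_processed(c_peptide=8.0, proinsulin=1.2), 1))  # 87.0
```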
Native islet β-cells depolarize and secrete insulin and C-peptide in response to glucose and other physiological or pharmacological stimuli, but reconstituting these hallmark functions in the progeny of purified primary human non-β-cells during in vitro culture has not previously been achieved. Compared to baseline secretion in media with 0.1 mM glucose, IPCs increased insulin C-peptide secretion 2.4-fold upon exposure to 2 mM glucose (Figure 5G). Similar to insulin release by human islet β-cells (Lupi et al., 1999), glucose stimulated the secretion of approximately 4% of total insulin C-peptide in IPCs (Figure 5—figure supplement 1G). This effect was blocked when the cells were incubated with glucose and diazoxide, a drug that opens KATP channels and prevents glucose-stimulated insulin secretion (Figure 5G). However, unlike adult human islet β-cells, insulin release by IPCs was not further increased by 11 mM glucose. Islets from fetal or neonatal stages likewise do not show elevated insulin secretion in response to high-glucose challenge (Rozzo et al., 2009), suggesting that IPCs resemble immature islet β-cells and that further maturation is possible (Figure 5G). Calcium and voltage-dependent calcium channels are important regulators of normal insulin secretion after KATP channel-mediated membrane depolarization in β-cells (Henquin, 2005). When calcium was omitted from the secretion buffer, C-peptide secretion stimulated by glucose was abolished, but was restored upon calcium addition (Figure 5G). Insulin C-peptide release by cultured IPCs was also induced by the depolarizing agent potassium chloride (30 mM KCl), an effect reversed by a subsequent wash in media with 4.8 mM potassium ion (Figure 5G, Figure 5—figure supplement 1H). Treatment with tolbutamide, a KATP channel blocker causing membrane depolarization, also stimulated insulin secretion by IPCs, an effect prevented by omission of calcium (Figure 5G). 
Together with data showing expression of key regulators of stimulus-secretion coupling, these findings provide strong evidence that IPCs produced by conversion and extended culture in our system develop regulated insulin secretion.
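The secretion metrics above reduce to two ratios: fold stimulation over basal release, and the percent of cellular content released during the assay. A small sketch (with hypothetical numbers, not the measured data) shows both calculations:

```python
def stimulation_index(stimulated, basal):
    """Fold increase in C-peptide secretion over basal (0.1 mM glucose)."""
    return stimulated / basal

def percent_content_secreted(secreted, total_content):
    """Percent of total cellular insulin C-peptide released in the assay."""
    return 100.0 * secreted / total_content

# Hypothetical values chosen to reproduce the reported 2.4-fold index
# and ~4% release of total content.
print(stimulation_index(stimulated=1.2, basal=0.5))                # 2.4
print(percent_content_secreted(secreted=1.2, total_content=30.0))  # 4.0
```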
We examined the stability of the conversion of human ducts into IPCs by long-term transplantation of converted spheres into specific transplantation sites of NOD scid gamma (NSG) mice (Figure 5—figure supplement 2; Supplementary file 1A). Human C-peptide was readily detected in kidney grafts harvested at specific times by immunostaining (8/12 cases, Figure 5—figure supplement 2A; Supplementary file 1A) and by ELISA (9/10 cases, Supplementary file 1A), without detectable tumor formation. This included C-peptide+ IPCs remaining at the transplant site beyond 5 months (Figure 5—figure supplement 2A, d151), suggesting that converted IPCs were stable. However, the total number of grafted C-peptide+ cells was drastically reduced within 2 weeks after transplantation, likely due to apoptotic cell death. Nonetheless, in three independent IPC transplants we detected circulating human insulin in the serum of host mice, and its level increased following intraperitoneal glucose challenge (Figure 5—figure supplement 2B; Supplementary file 1B). These data suggest that despite extensive cell death in the early stages after transplantation, IPCs can further mature in vivo and release increased levels of insulin in response to acute glucose challenge.
Methods to regenerate lost or injured cells in diseases like diabetes mellitus are the focus of intensive investigation (McKnight et al., 2010; Benitez et al., 2012). Generation of insulin-producing cells from human stem cell lines such as human ES cells (D’Amour et al., 2006; Kroon et al., 2008) is an important, and oft-cited, ‘benchmark’ in efforts to achieve β-cell replacement. However, in these prior reports, progeny of human ES cells developed largely as poly-hormonal cells, most frequently expressing both glucagon and insulin. Moreover, such hESC progeny failed to secrete insulin in response to glucose or other secretagogues unless transplanted as progenitors for >2 months in mice (Nostro and Keller, 2012). This transplant-based maturation strategy was complicated by tumor formation (Fujikawa et al., 2005). Thus, it has remained unknown whether human cells can develop solely in vitro into glucose-responsive insulin-secreting progeny without tumorigenicity. Our data indicate that in principle this can be achieved, using a small number of genes to convert sorted human pancreatic ductal cells toward an islet fate, including progeny that produce, store, and secrete insulin in response to glucose.
Conversion of mouse acinar cells into insulin-producing cells using adenoviral delivery of Neurog3, Pdx1, and MafA was previously reported (Zhou et al., 2008). However, it has remained unknown whether human pancreatic cells can be converted toward a β-cell fate using transgenic methods. We were unable to culture and expand primary human pancreatic acinar cells (Figure 1B and data not shown); moreover, we found that the combination of these three genes (MNP) was insufficient to reprogram primary or expanded human pancreatic ductal cells toward a β-cell fate, suggesting that transgenic conversion may be restricted by species and cell type. Thus, we postulated that additional transcriptional regulators might be needed to promote human ductal conversion toward a β-cell fate. Like Neurog3, MafA, and Pdx1, the transcription factor Pax6 is expressed in both fetal and adult pancreas, and is required to achieve appropriately high levels of Ins and Gcg expression in mouse islet cell development (Sander et al., 1997; Wang et al., 2009, 2010; Pan and Wright, 2011). Together with the other factors, Pax6 significantly enhanced expression of β-cell markers during ductal reprogramming into β-cells, and proved essential for this process. By systematic addition or omission of each transcription factor, we found that PDX1 is not required for IPC formation. Thus, unlike mouse acinar cells (Zhou et al., 2008) and human hepatocytes (Sapir et al., 2005), human ductal cells do not require exogenous Pdx1 expression for conversion toward an endocrine fate, for reasons that remain unclear. Our findings are also consistent with recent reports that transgenic adult mouse ductal cells can generate endocrine cells in vivo (Al-Hasani et al., 2013).
We initially attempted to induce spontaneous differentiation of pancreatic ductal cells using systematic variations of culture conditions, but these efforts proved unsuccessful (J Lee, unpublished results). During pancreas development, Neurog3 levels surge in a subset of pancreatic progenitors located in primitive ducts, inducing development of endocrine cell fates (Zhou et al., 2007; Miyatsuka et al., 2009). Based on this model, we attempted to mimic induction of Neurog3 in human ductal cells using adenoviral overexpression of Neurog3. We found that Neurog3 was necessary and sufficient for reprogramming human ductal cells, and that the level of ectopic Neurog3 mRNA expressed in ductal cells correlated well with the extent of endocrine reprogramming, including expression of islet hormones (Figure 3G). These findings are reminiscent of studies by Gu et al. showing that reduced Neurog3 gene dosage in mice leads to respecification of pancreatic endocrine progenitors into ductal and acinar cells (Wang et al., 2010). Thus, Neurog3 functions may be evolutionarily conserved in allocating cells toward an exocrine or endocrine fate (whether in development or experimental cell conversion) in a dosage-dependent manner. Consistent with prior work revealing that Neurog3 attenuates islet cell proliferation (Miyatsuka et al., 2011), we did not observe multiple rounds of cell division, an important prerequisite for some de-differentiation events (Hanna et al., 2009), during Neurog3-dependent cell conversion. We also observed that Neurog3 induction alone can rapidly upregulate endocrine molecular signatures in cultured human ductal cells. Thus, the endocrine cell conversion described here may involve direct conversion of human ductal cells into endocrine cells, rather than de-differentiation, but additional studies are required to assess this possibility. 
Our findings, albeit with enforced transcription factor expression in adult cells, indicate that Neurog3 expression is sufficient to induce latent endocrine programs in human adult ductal cells, a capacity not yet clearly demonstrated, to our knowledge.
We demonstrated robust expansion of purified human ductal cells in 3-dimensional culture. The cells were clonally expanded and serially passaged up to seven generations over 3 months, achieving an increase in cell number calculated to be up to 3,200-fold. By contrast, in prior studies, the maximum duration of sustained culture achieved with primary human pancreatic ductal cells was 5 weeks (Trautmann et al., 1993; Bonner-Weir et al., 2000; Rescan et al., 2005; Hao et al., 2006; Yatoh et al., 2007; Hoesli et al., 2012). Moreover, cultured cells in spheres maintained cardinal features of primary pancreatic ducts such as apical-basal polarity and KRT19 expression up to seven generations (Figure 2—figure supplement 1D,E). Thus, features of our culture system may be useful for studying the genetics and biology of human ductal cells.
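The expansion figures above imply an average per-passage growth rate and doubling time, assuming roughly uninterrupted exponential growth over the culture period (approximated here as 90 days for "3 months"). A back-of-envelope sketch:

```python
import math

def per_passage_fold(total_fold, n_passages):
    """Geometric-mean fold expansion per passage implied by the total."""
    return total_fold ** (1.0 / n_passages)

def doubling_time_days(total_fold, duration_days):
    """Average population doubling time over the full culture period."""
    return duration_days / math.log2(total_fold)

# 3,200-fold over seven passages in ~90 days (figures from the text)
print(round(per_passage_fold(3200, 7), 1))     # ~3.2-fold per passage
print(round(doubling_time_days(3200, 90), 1))  # ~7.7 days per doubling
```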
Prior studies have reported that duct-containing fractions from human adult pancreas can form insulin-producing cells in vitro (Bonner-Weir et al., 2000; Hao et al., 2006; Heremans et al., 2002; Noguchi et al., 2006; Koblas et al., 2008; Swales et al., 2012) or after xenotransplantation in mice (Yatoh et al., 2007). However, the possibility of endocrine cell contamination in the initial ductal fraction or the feeder/stromal cells used for co-culture was raised by the detection of mRNAs encoding islet cell hormones and other endocrine markers in these and other studies (Heremans et al., 2002; Gao et al., 2005). Therefore, it remained unclear whether human pancreatic ducts retain the potential to produce islet endocrine cells in the adult. In this report, we used FACS to fractionate CD133+ ductal cells and used molecular and immunocytological studies to demonstrate complete elimination of cells expressing markers of differentiated endocrine cells (including islet hormones). Therefore, subsequent conversion of these cells into functional endocrine cells provides unequivocal evidence that an endocrine cell-free human adult CD133+ ductal cell fraction can be converted into islet endocrine cells. Centroacinar cells are located at the junction of acini and the tips of intercalated ducts (Cleveland et al., 2012), and their properties remain poorly understood. These cells express CD133 (Immervoll et al., 2008), raising the possibility that our fractionated CD133+ cells also include centroacinar cells. Based on their relative paucity in the pancreas, it is unlikely that centroacinar cells are the exclusive source of spheres within this CD133+ fraction, as more than 11% of CD133+ cells were capable of generating spheres (Figure 1B). However, because of difficulties performing lineage-tracing experiments with human samples, we cannot exclude the possibility that centroacinar cells also contribute to the conversion into endocrine cell lineages.
While expression of Pax6 along with Neurog3, Pdx1, and MafA significantly enhanced expression of INS and other β-cell marker genes in converted ductal cells, this transcription factor combination alone was not sufficient to generate mature IPCs. We found that extending the culture period for 2 weeks after viral infection led to maturation of several hallmark β-cell functions, including expression of key β-cell factors, significant increases in INS mRNA and protein levels, proinsulin processing, dense-core granule formation, and insulin secretion in response to glucose or other depolarizing stimuli. We tested four distinct culture media, with or without serum, for this extended culture, and all media permitted maturation of these β-cell properties in IPCs (Figure 5—figure supplement 1F and see ‘Materials and methods’), indicating that the duration of culture is a key variable for promoting β-cell maturation in vitro. After maturation, the spheres contained on average 7% of the total insulin measured in human islet controls, and 7–11% of the cells comprising these spheres produced insulin C-peptide. Thus, we calculate that each reprogrammed Insulin+ cell produced 49–77% of the insulin levels observed in native β-cell controls, comparable to IPCs derived from human ES cells (D’Amour et al., 2006).
Is the capacity of human ductal cells to be converted toward endocrine islet fates unique? A prior study by Sapir et al. (2005) suggests that human hepatocytes can be induced to express insulin. However, the conversion toward an insulin-producing fate was comparatively poor; the resulting cells produced insulin mRNA at levels about 10,000-fold lower than those of human islets, roughly 3–4 orders of magnitude below the levels achieved by conversion of pancreatic duct spheres. In addition, characteristic dense-core vesicles were not observed in converted hepatocytes, indicating insufficient conversion toward β-cells. Here, we also assessed the endocrine potential of primary human dermal fibroblasts, cells successfully ‘reprogrammed’ toward many non-fibroblast fates, including induced pluripotent stem cells (Takahashi et al., 2007), but detected no clear evidence of conversion toward an endocrine or β-cell fate (Figure 5—figure supplement 4, see ‘Materials and methods’ for details). Thus, conversion of human adult duct spheres into cells that produce and secrete insulin is singularly robust. Moreover, unlike prior studies of human ES cells, which showed high variability among the ES cell lines used (D’Amour et al., 2006; Kroon et al., 2008), we demonstrated conversion toward Insulin+ fates by ductal cells from multiple unrelated donors, a further measure of the robustness of our methods.
Expression of factors produced from viral transgenes persisted in Insulin+ cells for at least 5 months, as evidenced by GFP expression in transplanted insulin-producing cells (Figure 5—figure supplements 2A and 3). Transgenes delivered by adenovirus do not generally persist in dividing cells (Zhou et al., 2008). We speculate that cell cycle arrest in Insulin+ cells may be induced by Neurog3 (Miyatsuka et al., 2011), thereby preventing dilution of viral transgene-encoded factors. Further studies are needed to investigate how persistent expression of conversion factors like Neurog3 affects maintenance and maturation of endocrine phenotypes in converted cells. Survival of transplanted insulin-secreting cells produced from ductal cells was poor, and reduced yields following transplantation of ductal cells precluded physiological studies in mouse models of diabetes. Promoting survival of transplanted insulin-secreting cells is a general problem for transplant-based islet replacement approaches. Thus, studies of factors that enhance survival of Insulin+ ductal cell progeny are an important current focus.
In conclusion, our study provides unique evidence that primary human cells can generate progeny that produce, store, and secrete insulin in response to glucose or depolarizing agents, the hallmark features of pancreatic β-cells. We also show that human pancreatic exocrine cells, as in mice (Zhou et al., 2008), can be converted by transgenes toward an endocrine islet-like cell fate. We speculate that gene-based strategies like those described here may be combined with other methods, including culture modulation by growth factors and small molecules (Warren et al., 2010), to optimize endocrine differentiation or conversion of diverse cellular sources to advance cell replacement for diabetes. Our cell culture system may also serve as a foundation to investigate the genetics and pathogenesis of diverse human diseases rooted in pancreatic ductal cells, including pancreatitis, cystic fibrosis, and adenocarcinoma.
Institutional review board approval for research use of human tissue was obtained from the Stanford University School of Medicine. Human islet-depleted cell fractions were obtained, with appropriate consent, from healthy non-diabetic organ donors who died of acute traumatic or anoxic causes, and were shipped overnight from the following facilities: Division of Transplantation (Massachusetts General Hospital, MA), UAB Islet Resource Facility (University of Alabama at Birmingham, AL), UCSF Diabetes Center (University of California, San Francisco, CA), Kidney/pancreas transplantation center (University of Pennsylvania, PA), Islet Core of the University of Pittsburgh (Pittsburgh, PA), and Human Islet Isolation Program (The Hospital of the University of Virginia, VA). Donor samples used for this study (age range 16–63 years, mean 38.24 years) are listed in Table 1. On receipt, the cell fractions were washed with PBS and cultured with CMRL media (Mediatech, Inc, Manassas, VA) supplemented with 10% heat-inactivated fetal bovine serum (FBS, HyClone, Logan, UT), 2 mM GlutaMax (Life Technologies, Grand Island, NY), 2 mM nicotinamide (prepared in PBS, Sigma, St. Louis, MO), and 100 U Penicillin and 100 µg Streptomycin (Pen/Strep, Life Technologies) in a non-coated culture dish at 25.5°C in 5% CO2 until use. For dissociation, the cell pellet was washed with PBS, trypsinized with 0.05% Trypsin-EDTA solution (Life Technologies) for 5 min, and quenched with 5 vol of FACS buffer (10 mM EGTA, 2% FBS in PBS). Cells were collected by centrifugation and further digested in 1 U/ml dispase solution (Life Technologies) containing 0.1 mg/ml DNaseI in PBS on a nutating mixer at 37°C for 30 min. A PBS wash was performed after each enzymatic digestion step. After centrifugation, the cell pellet was resuspended in FACS buffer and passed through a 40-µm cell strainer. 
Cell viability and number were assessed using a Vi-Cell analyzer (Beckman Coulter, Fullerton, CA), and samples exceeding 70% viability were used for subsequent antibody staining for FACS.
Dissociated cells were stained with biotin-conjugated CD133 antibodies (clones AC133 and 293C3, Miltenyi Biotec, Auburn, CA) and then Allophycocyanin-conjugated Streptavidin (eBioscience, San Diego, CA) for 15 min each at room temperature. Cell pellets were collected by centrifugation and washed with PBS after each staining step. Propidium Iodide (Life Technologies) staining was used to exclude dead cells. The cells were sorted using a FACSAria II (BD Biosciences, Bedford, MA), collected in 100% FBS, washed with PBS twice, and resuspended in ice-cold Advanced DMEM/F-12 media (Life Technologies) at a density of 8000 cells/µl. The average percentage of the CD133+ fraction was 32.73% (n = 32). 50 µl of growth factor-reduced Matrigel (BD Biosciences) was then added to 30 µl of cell suspension, and the mixture was placed around the bottom rim of each well. After solidification at 37°C for 60 min, each well was overlaid with 500 µl of modified crypt culture media (Sato et al., 2009) comprising Advanced DMEM/F-12 media supplemented with recombinant human (rh) EGF (50 ng/ml, Sigma), rhR-spondin I (500 ng/ml, R&D systems, Minneapolis, MN), rhFGF10 (50 ng/ml, R&D systems), recombinant mouse Noggin (100 ng/ml, R&D systems), 10 mM Nicotinamide in PBS, and Pen/Strep. The media was changed twice weekly. The spheres were harvested after 2 to 3 weeks for passaging or viral infection. Static and time-lapse images of sphere growth were collected using a Zeiss Axiovert 200 inverted microscope and a Zeiss Observer.Z1 equipped with a temperature- and CO2-controlled chamber, using Axiovision (Carl Zeiss, Germany) and MetaMorph (Molecular Devices, Sunnyvale, CA) software, respectively. For harvesting spheres, 500 µl of 2 U/ml dispase (Life Technologies) solution containing 0.1 mg/ml DNaseI in PBS was added to each well, and the Matrigel was mechanically disrupted by pipetting and incubated at 37°C for 45 min. 
The released spheres were collected, washed twice with PBS and used for subsequent applications. For passaging spheres, the harvested spheres were trypsinized at 37°C for 5 min followed by quenching with FBS. The dispersed cells were then used for cell counting with a hemocytometer or were plated as described above.
Ad-eGFP and Ad-RFP control adenoviruses were purchased from Vector Biolabs (Philadelphia, PA). Ad-MafA and Ad-Neurog3-IRES-eGFP were described previously (Tashiro et al., 1999). To construct Ad-RFP-Neurog3 and Ad-RFP-Pdx1 adenoviruses, mouse cDNAs for Neurog3 (BC104326) and Pdx1 (BC103581) were purchased from Open Biosystems (Lafayette, CO), and the inserts were obtained by restriction enzyme digestion with EcoRV/BamHI and EcoRV/MscI, respectively. The inserts were then subcloned into the multiple cloning site of the Dual-RFP-CCM shuttle vector (Vector Biolabs), and adenoviruses were constructed by Vector Biolabs. For Ad-eGFP-M6P, human MAFA cDNA (gift from M German), PDX1 (NM_000209; GeneCopoeia, Rockville, MD), and PAX6 (BC011953; Open Biosystems) were used for PCR amplification with the primers shown in Supplementary file 1C to add T2A, P2A, restriction enzyme sites, and/or protein tags (Figure 5—figure supplement 5). A fused MAFA-T2A-PAX6 construct was generated by PCR with the MAFA and PAX6 amplicons as templates. Similarly, PCR products for PAX6 and PDX1 were used to construct PAX6-P2A-PDX1. Next, MAFA-T2A-PAX6, PAX6-P2A-PDX1, and the pDual-GFP-CCM vector (Vector Biolabs) were cut with BglII/PstI, PstI/EcoRI, and BglII/EcoRI, respectively, and ligated with the NEB Quick Ligation kit (New England Biolabs, Ipswich, MA), followed by transformation of TOP10 chemically competent cells (Invitrogen, Carlsbad, CA). The construct was then used for generating adenoviruses by Vector Biolabs.
Spheres were infected at 37°C in suspension overnight at a multiplicity of infection (MOI) of 100 for Ad-MafA and Ad-eGFP-M6P, or an MOI of 500 for the remaining viruses. The spheres were then washed twice with culture medium and embedded in Matrigel as described above. The infected spheres were overlaid with sphere growth media without R-spondin I and with 0.33 µM all-trans retinoic acid (Sigma), and cultured for 5 days. For extended culture, the media was replaced with either (1) DMEM with high glucose (Life Technologies) supplemented with 10% FBS (Hyclone) and Pen/Strep (Life Technologies) for 2 weeks (referred to as ‘DF’ in Figure 5—figure supplement 1F), (2) DF plus 20 mM KCl and 10 µM Ro-28-1675 (glucokinase activator; Axon Ligands) for 2 weeks (referred to as ‘DFK’), (3) DF for one week and then DMEM/F-12 media (Life Technologies) supplemented with 0.5 × N2 supplement (Life Technologies), 0.5 × B27 (Life Technologies), 0.2% BSA (Sigma), 1% ITS supplement (Life Technologies), 10 mM nicotinamide, 10 ng/ml recombinant human basic FGF (R&D systems), 50 ng/ml Exendin-4 (R&D systems), and recombinant human BMP-4 (R&D systems) for an additional week (referred to as ‘Z’; Zhang et al., 2009), or (4) DMEM high glucose supplemented with 1 × B27, 55 nM GLP-1, 50 ng FGF10 (R&D Systems), and Pen/Strep for 3 days, followed by 5 days with DMEM high glucose supplemented with 1 × B27, 55 nM GLP-1 (Sigma), 10 µM DAPT (Sigma), and Pen/Strep, then for 6 days with CMRL1066 media (Mediatech) supplemented with 1 × B27, 55 nM GLP-1, 50 ng HGF (R&D Systems), 50 ng IGF-1 (R&D Systems), and Pen/Strep (referred to as ‘T’; Thatava et al., 2011). The media was replaced every other day unless otherwise noted.
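For readers reproducing the infection step, the only arithmetic involved is converting an MOI into a volume of viral stock. The stock titer below is a hypothetical placeholder (the text specifies only the MOIs), so treat this as a sketch:

```python
def virus_volume_ul(n_cells, moi, titer_pfu_per_ml):
    """Volume of adenoviral stock (in ul) needed to infect n_cells at
    the given MOI, where MOI = infectious particles (PFU) per cell."""
    pfu_needed = n_cells * moi
    return 1000.0 * pfu_needed / titer_pfu_per_ml

# Hypothetical example: 1e5 sphere cells at MOI 500, stock at 1e10 PFU/ml
print(virus_volume_ul(n_cells=1e5, moi=500, titer_pfu_per_ml=1e10))  # 5.0
```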
Total RNA was prepared from sorted cells or cultured spheres with the QIAGEN RNeasy micro kit (QIAGEN Sciences, MD) and used for cDNA synthesis with the QIAGEN Omniscript RT kit (QIAGEN), according to the manufacturer’s protocol. Relative mRNA levels were measured by qRT-PCR of each cDNA in duplicate with gene-specific probe sets (Applied Biosystems, Foster City, CA), TaqMan Universal PCR Master Mix (Applied Biosystems), and the ABI Prism 7500 detection system (Applied Biosystems). Normalization across samples was performed using β-actin primers. Information on the primer and probe sets is available upon request.
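The text states only that samples were normalized to β-actin; the 2^−ΔΔCt (Livak) calculation sketched below is one common way such normalization is done, included here as an assumption with hypothetical Ct values rather than the study's own data:

```python
def relative_expression(ct_target, ct_actb, ct_target_ref, ct_actb_ref):
    """2^-ddCt relative quantification: the target gene is normalized to
    beta-actin within each sample, then expressed relative to a
    reference sample (e.g., human islet cDNA)."""
    delta_sample = ct_target - ct_actb
    delta_ref = ct_target_ref - ct_actb_ref
    return 2.0 ** -(delta_sample - delta_ref)

# Hypothetical Ct values: sample INS lags its actin signal by 10 cycles
# relative to islets, i.e., ~0.1% of islet-level expression.
print(relative_expression(30.0, 20.0, 18.0, 18.0))  # 0.0009765625
```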
For immunohistochemical analyses, cultured spheres were harvested, washed with PBS, mixed with 20 µl of Collagen Gel Kit (Nitta Gelatin, Osaka, Japan), solidified at 37°C for 1 hr, fixed with 4% paraformaldehyde for 2 hr at 4°C, cryoprotected in 30% sucrose solution in PBS overnight, embedded in OCT on dry ice, and sectioned at 8 µm thickness. For sorted cells, the cell suspension was washed once, resuspended in 20 µl of PBS, placed on a Polysine slide (Thermo Scientific, Waltham, MA), and left for 30 min at room temperature (RT) to allow the cells to settle onto the slide by gravity. The solution was then removed carefully and 40 µl of 4% paraformaldehyde was added. After 10 min of incubation at RT, the fixative was removed and the slides were washed with PBS three times for 5 min each. After removal of PBS, the slides were dried at RT for 1 hr and stored at −20°C. For immunostaining of transplanted IPCs, grafted organs (kidney, EFP, or liver) were harvested, fixed with 4% paraformaldehyde overnight at 4°C, cryoprotected in 30% sucrose solution in PBS overnight, embedded in OCT on dry ice, and sectioned at 8 µm (kidney and liver) or 40 µm (EFP) thickness. 
The primary antibodies used were rabbit anti-Amylase (1:1000; Sigma), goat anti-Amylase (sc-12821; 1:200; Santa Cruz Biotechnology, Dallas, TX), CD133 (1:100 each; clone AC133 and 293C3; Miltenyi Biotec, Auburn, CA), rabbit anti-ChromograninA (20085; 1:100; Immunostar, Hudson, WI), mouse anti-ChromograninA (LK2H10; 1:200; Cell Marque, Rocklin, CA), mouse anti-CK19 (KRT19) (M0888; 1:200; DAKO, Carpinteria, CA), rabbit anti-CK19 (319R-15; 1:200; Cell Marque), rabbit anti-CPA1 (1810-0006; 1:100; AbD Serotec, UK), rabbit anti-C-peptide (#4593B; 1:200; Cell Signaling Technology, Danvers, MA), mouse anti-C-peptide (capt) (1:100; Mercodia, Sweden), mouse anti-Flag (F1804; 1:1000; Sigma), goat anti-GHRL (sc-10368; 1:200; Santa Cruz Biotechnology), guinea pig anti-Glucagon (4031-01; 1:200; Linco, Billerica, MA), mouse anti-HA (MMS-101P-1000; 1:1000; Covance), mouse anti-HuNu (MAB1281; 1:200; Millipore, Billerica, MA), mouse anti-IAPP (MCA1126T; 1:200; AbD serotec), rabbit anti-Ki-67 (NCL-Ki67p; 1:100, Leica Microsystems, Germany), rabbit anti-Myc (sc-789; 1:1000; Santa Cruz Biotechnology), mouse anti-NeuroD (sc-46684; 1:10; Santa Cruz Biotechnology), mouse anti-Neurog3 (F25A1B3; 1:4000; DSHB, Iowa City, IA), mouse anti-Nkx2.2 (74.5A5; 1:10; DSHB), mouse anti-Nkx6.1 (F55A10; 1:200; DSHB), rabbit anti-PC1/3 (PCSK1, AB10553; 1:200; Millipore), rabbit anti-phospho-H3 (06-570; 1:500; Millipore), goat anti-PPY (NB100-1793; 1:200; Novus Biologicals, Littleton, CO), rabbit anti-Somatostatin (1:200, DAKO), goat anti-Somatostatin (sc-7819; 1:200; Santa Cruz Biotechnology), goat anti-SUR-1 (sc-5789; 1:50; Santa Cruz Biotechnology). Tyramide signal amplification (Perkin Elmer, Waltham, MA) was used for antibodies against Neurog3, NeuroD, Nkx2.2, Nkx6.1, and PC1/3. Antigen unmasking (H-3300; Antigen Unmasking Solution, Citric Acid Based, Vector Laboratories, Burlingame, CA) was performed for anti-Flag antibody staining. 
The Neurog3, Nkx2.2, and Nkx6.1 antibodies developed by Dr OD Madsen were obtained from the Developmental Studies Hybridoma Bank (DSHB) developed under the auspices of the NICHD and maintained by The University of Iowa, Department of Biological Sciences, Iowa City, IA 52242. Secondary antibodies used were from Jackson ImmunoResearch (West Grove, PA) or Molecular Probes (Eugene, OR). Stained sections were mounted with VECTASHIELD Mounting Medium with DAPI (Vector Laboratories). Fluorescence images were taken using Zeiss Axio Imager.M1 or Leica SP2 inverted confocal laser scanning microscope.
The samples were fixed in Karnovsky’s fixative: 2% Glutaraldehyde (EMS Cat# 16000) and 4% Paraformaldehyde (EMS; Electron Microscopy Sciences, Hatfield, PA) in 0.1 M Sodium Cacodylate (EMS), pH 7.4, for 1 hr at RT, then cut, post-fixed in 1% Osmium tetroxide (EMS) for 1 hr at RT, washed three times with ultrafiltered water, and then en bloc stained for 2 hr at RT or moved to 4°C overnight. The samples were then dehydrated in a series of ethanol washes for 15 min each at 4°C, beginning at 50%, 70%, and 95%, at which point the samples were allowed to warm to RT, changed to 100% ethanol twice, and then to Acetonitrile for 15 min. The samples were infiltrated with EMbed-812 resin (EMS) mixed 1:1 with Acetonitrile for 2 hr, followed by two parts EMbed-812 to one part Acetonitrile for 2 hr. The samples were then placed into EMbed-812 for 2 hr and transferred into molds, and resin-filled gelatin capsules with labels were oriented over the cells of interest and placed into a 65°C oven overnight. Sections of 75–90 nm were cut on a Leica Ultracut S (Leica, Wetzlar, Germany) and picked up on formvar/carbon-coated slot grids (EMS Cat#FCF2010-Cu) or 100 mesh Cu grids (EMS). Grids were contrast-stained for 15 min in 1:1 saturated uranyl acetate (∼7.7%) to 100% ethanol, followed by staining in 0.2% lead citrate for 3 to 4 min. A JEOL JEM-1400 TEM operated at 120 kV was used for observation, and photos were taken with a Gatan Orius digital camera.
C-peptide secretion assay and content measurement were performed as described previously with minor modification (Chen et al., 2001). Briefly, for the secretion assay, media was replaced the day before the assay. On the day of the assay, each well with Matrigel-embedded spheres was incubated with fresh media for 2 hr, washed twice with plain Krebs-Ringer bicarbonate buffer (KRBB), and incubated twice with plain KRBB for 1 hr each for thorough washing. Next, the spheres were incubated consecutively with 400 µl KRBB containing the indicated concentrations of glucose (Sigma) with or without 0.5 mM diazoxide (Sigma), KCl (30 mM, Sigma), or tolbutamide (0.2 mM, Sigma) for 2 hr each. KRBB without calcium (No Ca++) was prepared by omission of CaCl2 and addition of 1 mM EGTA (Sigma). Secreted C-peptide levels were measured with the Human Ultrasensitive C-peptide ELISA kit (Mercodia). For C-peptide content measurement, the spheres were harvested into a 1.5 ml microfuge tube, washed with PBS, resuspended in 300 µl of ice-cold TE/BSA buffer (10 mM Tris-HCl, 1 mM EDTA, 0.1% wt/vol BSA, pH 7.0), and sonicated with a Bioruptor Sonicator (Diagenode, Denville, NJ). Half of the lysate was used for genomic DNA isolation and quantification with the Quant-iT PicoGreen dsDNA Assay Kit (Invitrogen). An equal volume of acid alcohol (75% vol/vol ethanol, 2% vol/vol concentrated HCl, 23% vol/vol H2O) was added to the remaining lysate to extract C-peptide by rocking overnight at 4°C. The extract was then neutralized with 10 vol of PBS and used for C-peptide ELISA.
Transplantation into the kidney capsule, the epididymal fat pad (EFP), or the liver by portal vein injection was performed as previously described (Kroon et al., 2008; Alipio et al., 2010; Wang et al., 2011). For transplantation into the kidney or EFP, converted spheres with or without extended culture were harvested and mixed with or without mouse embryonic fibroblasts (Supplementary file 1A). The spheres were then mixed with Matrigel to a final volume of 10 µl for kidney transplantation or overlaid on pre-wet gelfoam for EFP transplantation. For liver transplantation, single cells produced by trypsinization of harvested spheres were resuspended in 100 µl PBS and injected into the portal vein with a 27 G needle. All animal experiments and methods were approved by the Institutional Animal Care and Use Committee (IACUC) of Stanford University.
Secretion of human insulin or C-peptide in response to glucose injection was measured as previously described (Kroon et al., 2008). Briefly, transplanted mice were fasted overnight (14–16 hr), and 120 µl of blood was collected from the tail into a Microvette CB300LH (Sarstedt, Germany) to prepare 50 µl of serum. Glucose (3 g/kg) was then injected, and blood was collected again 30 min after glucose administration. Secreted C-peptide or insulin levels were measured with Human Ultrasensitive C-peptide or Insulin ELISA kits (Mercodia).
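The 3 g/kg dose above is weight-based, so the amount injected scales with each mouse. As a hedged illustration of that arithmetic, the sketch below computes the injection volume; the 30% (w/v) stock concentration and the mouse weight are assumptions for the example, not values taken from the study:

```python
# Hypothetical dose calculation for a 3 g/kg glucose challenge.
# The 30% (0.3 g/ml) stock concentration and 25 g body weight are
# illustrative assumptions, not values from the study.

def glucose_injection_volume_ml(body_weight_g, dose_g_per_kg=3.0, stock_g_per_ml=0.3):
    """Volume (ml) of glucose stock needed for a given mouse body weight."""
    dose_g = dose_g_per_kg * (body_weight_g / 1000.0)  # convert weight to kg
    return dose_g / stock_g_per_ml

# A 25 g mouse at 3 g/kg, using a 30% (0.3 g/ml) glucose stock:
print(round(glucose_injection_volume_ml(25.0), 3))  # → 0.25
```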
Human adult dermal fibroblasts (Coriell Institute for Medical Research, Camden, New Jersey, USA) were cultured and maintained as described previously (Yoo et al., 2011). The cells were either trypsinized for suspension infection (as described above for ductal spheres) or infected as adherent cells in six-well plates by direct addition of virus into the culture medium, with Ad-eGFP (GFP) or Ad-eGFP-M6P and Ad-Neurog3-IRES-eGFP (4TFM). The same MOIs used for ductal sphere infection were also used. The suspension-infected cells were harvested the following day and embedded in Matrigel as described above for infected ductal spheres. The culture was maintained for an additional 18 days to match the duration of infected ductal sphere maturation. The infected adherent cells were cultured with virus for 48 hr, after which the media was replaced. The culture was maintained for an additional 10 days, passaged at a 1:3 ratio due to confluency, re-plated, and cultured for an additional 7 days to match the duration of infected ductal sphere maturation. In both cases, media was replaced every other day. Three independent experiments were performed for both conditions, each at least in duplicate. RNA isolation, cDNA preparation, and qRT-PCR were performed with primers specific to human INS, CHGA, and β-actin as described above.
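qRT-PCR comparisons of this kind (target genes such as INS and CHGA normalized to β-actin, infected vs GFP control) are commonly quantified with the 2^(-ΔΔCt) method. The sketch below is illustrative of that calculation; the source does not state which quantification method was used, and all Ct values are made up:

```python
# Illustrative 2**(-ddCt) relative-expression calculation for qRT-PCR data
# normalized to a reference gene such as beta-actin. The method choice and
# all Ct values here are assumptions for the example, not from the study.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene vs a control sample, normalized to a reference gene."""
    delta_ct_sample = ct_target - ct_ref              # sample, normalized to reference
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl   # control, normalized to reference
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Hypothetical example: INS Ct 26 vs beta-actin Ct 18 in converted cells,
# INS Ct 32 vs beta-actin Ct 18 in GFP controls:
print(relative_expression(26.0, 18.0, 32.0, 18.0))  # → 64.0
```

Because each cycle is a doubling, a 6-cycle improvement in the normalized Ct corresponds to a 2^6 = 64-fold increase in expression.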
In vitro cultivation of human islets from expanded ductal tissue. Proc Natl Acad Sci USA 97:7999–8004. https://doi.org/10.1073/pnas.97.14.7999
neurogenin3 is required for the development of the four endocrine cell lineages of the pancreas. Proc Natl Acad Sci USA 97:1607–1611. https://doi.org/10.1073/pnas.97.4.1607
Direct evidence for the pancreatic lineage: NGN3+ cells are islet progenitors and are distinct from duct progenitors. Development 129:2447–2457.
Generating beta cells from stem cells - the story so far. Cold Spring Harbor Perspect Med 2:a007674. https://doi.org/10.1101/cshperspect.a007674
Cell biology of insulin secretion. In: CR Kahn, GC Weir, GL King, AM Jacobsen, AC Moses et al., editors. Joslin’s diabetes mellitus. Boston, MA, USA: Lippincott Williams & Wilkins. pp. 83–107.
VII - Alimentary Tract, 30 - Pancreas. In: SE Mills, editor. Histology for Pathologists. Philadelphia: Lippincott Williams & Wilkins. pp. 723–760.
Isolation and functional characterization of murine prostate stem cells. Proc Natl Acad Sci USA 104:181–186. https://doi.org/10.1073/pnas.0609684104
Lineage tracing and characterization of insulin-secreting cells generated from adult pancreatic acinar cells. Proc Natl Acad Sci USA 102:15116–15121. https://doi.org/10.1073/pnas.0507567102
Neurogenin3 inhibits proliferation in endocrine progenitors by inducing Cdkn1a. Proc Natl Acad Sci USA 108:185–190. https://doi.org/10.1073/pnas.1004842108
Handbook of physiology: a critical, comprehensive presentation of physiological knowledge and concepts, section 7: Endocrinology. American Physiological Society. pp. 25–55.
Mice lacking the homeodomain transcription factor Nkx2.2 have diabetes due to arrested differentiation of pancreatic beta cells. Development 125:2213–2221.
Pancreas vs. islet transplantation: a call on the future. Curr Opin Organ Transplant 15:124–130. https://doi.org/10.1097/MOT.0b013e32833553f8
Janet Rossant, Reviewing Editor; University of Toronto, Canada
eLife posts the editorial decision letter and author response on a selection of the published articles (subject to the approval of the authors). An edited version of the letter sent to the authors after peer review is shown, indicating the substantive concerns or comments; minor concerns are not usually shown. Reviewers have the opportunity to discuss the decision before the letter is sent (see review process). Similarly, the author response typically shows only responses to the major concerns raised by the reviewers.
[Editors’ note: this article was originally rejected after discussions between the reviewers, but the authors were invited to resubmit after an appeal against the decision.]
Thank you for choosing to send your work entitled “Expansion and Conversion of Human Pancreatic Ductal Cells into Insulin-Secreting Endocrine Cells” for consideration at eLife. Your full submission has been peer reviewed by one of our Senior editors, Janet Rossant, and two other reviewers, and the decision was reached after discussions between the reviewers. We regret to inform you that your work will not be considered further for publication at this point.
The reviewers and the Senior editor have had an extensive online discussion about your paper, after exchanging the reviews. While they all feel that the experiments are carefully carried out, and the data presented are robust, in the end they were not convinced that the study as a whole provided a major step forward in the drive towards generating functional beta cells from other cell types. It was noted that you have not demonstrated whether the use of ductal cells (CD133+ cells) is advantageous over the use of other cell types for transdifferentiation. What would happen if the same factors were used to reprogram other cell types, even non-pancreatic cells, such as fibroblasts? The fact that ductal cells can respond to exogenous transcription factors does not directly demonstrate that these cells have latent potential to form beta cells, as claimed in the Abstract. It was also noted that the beta cells produced are not apparently fully mature and, therefore, the long-term significance of this approach for human therapy is unclear. The ability to grow human ductal cells in vitro is interesting, and a further analysis of the capacity of the non-transduced cells to respond to external signals and undergo differentiation into different cell types would be valuable.
Given these major concerns and the amount of extra work that would be needed to address them, the decision is to reject the manuscript at this time. A substantially enhanced experimental study, including assessing whether ductal cells are uniquely responsive to these inducing factors, better characterization of the cells produced, and a further analysis of the unmanipulated ductal cells, could form the basis of a new submission at a later date. The major points from the full reviews are provided below.
In this manuscript the authors show that they can sort human cadaveric pancreas tissue with antibody to CD133 and that this enriches for pancreatic ductal epithelium. They then show that they can generate clonal spheres from these cells that can be passaged several generations in culture. They then infect these cultures with adenovirus expressing first neurogenin and then additional sets of beta cell inducing transcription factors and show that they can induce expression of endocrine phenotypes in the spheres and that a combination of 4 factors generates insulin-producing cells that show some degree of glucose regulation. The final most successful converted cells express 7% of the levels of insulin found in adult islets, but when transplanted to the kidney capsule in mice, they were able to detect some human insulin in serum. However, in these grafts cell survival was poor, so they were unable to test the ability of the cells to rescue diabetic mice.
This study is well performed and does indicate that ductal cells may be responsive to exogenous transcription factors that can drive towards an islet cell fate.
1) It is not clear whether conversion to islet cells is a unique property of ductal cells or whether other pancreatic cells or other cell types could respond in the same way. Other groups have suggested that hepatocytes can be transdifferentiated to insulin-producing cells; is this system more or less effective?
2) The final cocktail of transcription factors is stated to produce monohormonal insulin-producing cells, but this is not directly shown in the figures. This is an important point because many other differentiation assays generate fetal-like polyhormonal cells that cannot respond to glucose in the manner of adult beta cells. The cells produced here are not fully functional adult-type cells.
3) How long does expression of the exogenous factors persist? Is it required for ongoing maintenance of the cells or can you demonstrate independence of the exogenous factors?
4) How sure are they that the starting population is pure ductal cells, given that CD133 is not exclusively expressed in ductal epithelium in the pancreas? Can they double sort with a general epithelial marker to further purify the population, given that CD133 only enriches 4-fold for sphere-forming cells?
In this study the authors describe a method to isolate and expand human ductal cells using an antibody against CD133. Using culture conditions similar to those described by Dr. Hans Clevers (Sato et al.), they were able to culture CD133+ cells as epithelial spheres that maintain a ductal phenotype and lack acinar and endocrine markers. Furthermore, the authors were able to reprogram the CD133-enriched population to endocrine cells by infecting isolated CD133+ cells and/or CD133+-derived spheres with adenoviruses expressing ngn3, MAFA, PAX6 and PDX1. On average the authors are able to generate 10% insulin+ cells that resemble fetal beta cells, as they secrete insulin in response to a low level of glucose (2 mM), but fail to respond to a higher glucose concentration (11 mM). Following transplantation of the reprogrammed spheres into NSG mice, they observed that survival of the graft after transplantation was poor. However, they were able to detect human insulin in the serum of the host mice and showed that insulin levels increased after glucose challenge, suggesting in vivo maturation, although few mice were analysed. In general this work is very well done, with convincing images and clear data. There are only some minor points that need to be clarified.
1) As CD133 is detected in centroacinar cells as well as in ductal cells (Immervoll et al., JHC 2011), the authors should include additional acinar markers in their expression profile (Figure 1D) to exclude acinar contamination.
2) Please include the average percentage of CD133+ cells detected in human pancreas and the purity of the sorted populations.
3) Figure 2A please include co-staining of KRT19 and CD133.
4) The authors state “Time-lapse imaging revealed that spheres arose from single isolated CD133+ ductal cells”. However, this statement is not accurate unless the purity of the sort was 100%.
5) Does the percentage of CD133+ cells decrease in culture? What is the percentage of CD133 after 3 months in culture?
6) The authors state that insulin+ IPCs did not express other islet hormones; co-staining of C-peptide with GCG, PP, and Ghrelin should be included in Figure 5D.
The claim that adult human pancreatic duct cells have a latent capacity for endocrine differentiation is correct, but only in this context of extremely strong transcription-factor-based enforced differentiation. I always wonder what MNP6 would do to a non-pancreatic cell type, and therefore if this effect is a specific latency of pancreatic duct cells or not (the paragraph starting ‘The transcription factors MafA, Neurog3 and Pdx1…’ is more important as a result if there is something these cells can do that is not ever seen with the 4-factor combination MNP6 (4TF) or 4TFM).
Essential is the claim that the insulin-producing cells are mono-hormone-positive, but this is not shown in the paper. The authors refer to Figure 5D as the one showing no double-positives, but no Gcg, PP, or Ghrl are shown there.
Some more clarity on this aspect seems critical. Assays for Gcg and other hormones are either referred to as data not shown, or this aspect is stated but the figure does not have the supporting data. Gcg immunodetection was done on cells from N alone? Gcg, PP were tested on the 4-factor combinations?
Some explanation is needed of why spontaneous (that is, non-TF-enforced) differentiation to endocrine fates was ruled out.
Is Pax6 already expressed in the MNP-adenovirus cultures? If so, why is more needed?
The part on “We sought new methods to mature the cells” (my words) reads strangely. It's the same method, just extending the time frame, I believe, and I would simplify this text. Another comment here is that we revert to the MNP6 mixture (4TF), having just been told that Pdx1 can be removed without impact - why?
Figure 2C–figure supplement 1 has an incorrect y-axis. 10,000 percent to the log(base10) is 4, correct. This graph needs altering – why not just “fold” for cell number? Related to this, Discussion asserts 3,000-fold, but this is just once? Up to 3,000-fold, and more typically xx-fold? Why do the cultures suddenly go bad at G7 (see text) – what happens – sudden apoptosis? #48 seems to be continuing even at G7 – please amend this text.
Does the 3-factor system (MNP) work in the authors' hands on mouse acini? And/or duct?
Does the CD133 separation method include centroacinar cells (CAC) or not? What is the photograph in Figure 1 – ‘tip’ of duct dives out of plane of section before it is reached, and therefore the CAC cannot be seen in this panel? CAC could have a specific latency not seen in duct cells.
The idea that lineage-tracing methods are hard to apply in human cells should be stated, as everything else hinges on numerical arguments on cleanliness of cell separations, etc.
chgA is an endocrine differentiation marker. This statement is meant to indicate that full-blown differentiation to the hormone-expressing state requires a substantial pulse of Ngn3. What about other hormones, even indicating non-pancreatic cell types?
A major point is the longevity of the pulse of Ngn3 and the other factors achieved with these methods, and the detection of a program that runs from the endogenous loci with or without the continuous presence of N, MNP, MNP6 (4TF, 4TFM). Would this method pulse the cells or not, and is continuous presence of some of the factors preventing their full differentiation to the most mature state?
Title of the section starting ‘Although ELISA studies readily detected Proinsulin production by IPCs…’ reads odd to me: ‘genetic conversion’ reads as if the genome of the adult duct cells is being altered in some manner.
Systematic removal of factors from MNP6 mixtures: Why can Pdx1 be removed without any impact?
Various ‘obvious’ markers were not tested, or the manuscript is incorrect in not showing such ‘easily pointed out’ questions. Pdx1 is produced, by immunofluorescence assay, within these induced beta cells? To normal levels? MafA/B, etc? Nkx6.1 was assessed, but the ‘dogmatic’ mature β-cell TF list should be addressed, at least.
It does seem difficult to follow 4V, 4TF, 4TFM, MNP. Seems as if there is a mixed descriptor used for the same manipulation, at least sometimes; I suggest simply checking for a way of making the text fully consistent throughout. https://doi.org/10.7554/eLife.00940.019
- Jonghyeob Lee
- Jing Wang
- Seung K Kim
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
We thank Drs H-E Hohmeier and Christopher Newgard (Duke University) for helpful discussions and advice, Dr M German (UCSF) for human MAFA cDNA, Drs K D’Amour and E Kroon (Viacyte) for advice on EFP transplantation, Ms S Bryant (University of Alabama), Drs A Naji and C Liu (University of Pennsylvania), Dr X Huang (University of Virginia) for human non-islet cell processing, Mr J Perrino and the Stanford Cell Sciences Imaging Facility for transmission electron microscopy and confocal microscopy, Ms E Snyder and Mr J Albright for technical support, and members of the Kim Laboratory for comments on the manuscript.
Animal experimentation: This study was performed in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. All of the animals were handled according to approved institutional animal care and use committee (IACUC) protocols (#10160) of Stanford University. All surgery was performed under anesthesia, and every effort was made to minimize suffering.
- Janet Rossant, Reviewing Editor, University of Toronto, Canada
- Received: May 14, 2013
- Accepted: October 8, 2013
- Version of Record published: November 19, 2013 (version 1)
© 2013, Lee et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited. | <urn:uuid:648cfc05-4e0c-4fd4-b9aa-373acb5cd8e5> | CC-MAIN-2018-22 | https://elifesciences.org/articles/00940 | s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867559.54/warc/CC-MAIN-20180526151207-20180526171207-00572.warc.gz | en | 0.916764 | 17,181 | 2.9375 | 3 |
Very few contemporary legal thinkers turn to natural law for help in interpreting the Constitution. Among the many reasons for this, one is the belief that natural law supports slavery. Justin Dyer takes on this belief and attempts to rehabilitate natural law thinking by introducing us to a natural law constitutional tradition that vigorously opposed slavery. Moreover, while examining that tradition, he argues that natural law is an essential presupposition of English and American constitutionalism altogether. Dyer’s title may have an academic ring to it, but his book aims high, since its goal is to recover the moral foundation of Anglo-American constitutionalism.
By “natural law” Dyer has in mind not the opinions of philosophers, but those of judges and statesmen. It is what Thomas Jefferson called the “American mind” – a set of ideas that (in Jefferson’s words) rested on “the harmonizing sentiments of the day” (p. 1). At the core of these ideas are the natural rights expressed in the Declaration of Independence, understood as resting on the premise that reason can discover in human nature general, morally obligatory principles of action (or justice) that are ultimately traceable to God, the creator and orderer of nature.
That the US Constitution rests on a moral foundation of natural law is a controversial claim. The idea was rejected years ago by Oliver Wendell Holmes when he argued that law is founded on “arbitrary desires, beliefs, and wishes of society” rather than on any pre-existing rational order (25). More recently, scholars have argued that there were “multiple traditions” at the founding period and that there is no good reason to single out any particular one of them (e.g., Jefferson’s) as correct: there is no authoritative guide, only a multiplicity of conflicting positions, some of which are proslavery, some antislavery. Still another view, ultimately traced to Thomas Hobbes, maintains that the purpose of constitutionalism is to secure peace rather than justice. These approaches differ greatly from one another, Dyer argues, but they share two assumptions: that there is no discernible moral order, and that the Constitution, if not predominantly proslavery, is at best neutral with regard to the contrary aspirations of proslavery and antislavery constitutionalists.
Against these views, Dyer appeals to Abraham Lincoln’s description of the Constitution as an apple of gold in a picture of silver. Lincoln regarded the Constitution (and the Union and government established by it) as fundamentally an instrument (the silver frame) whose purpose is to adorn and preserve an aspiration towards the golden apple of human equality and freedom. Despite the Constitution’s compromises with slavery, therefore, Lincoln could admire and support it because he believed that it put slavery on the path to eventual extinction. Dyer’s book is a defense of Lincoln’s view and is in many ways an excavation of its premises and precursors.
The basic difficulty is easily stated. If all men are created equal, so that government rests on the consent of the governed and is limited by inalienable natural rights, and if these principles underlie the government established in the Constitution, then the institution of race-based chattel slavery is a gross contradiction of the founding principles of American government. Worse even than this contradiction between principle and fact are the “disharmonies” in the Constitution itself. While it is the foundation of American liberties, the Constitution also has many provisions that appear to protect slavery: the notorious three-fifths clause and the interstate rendition of fugitive slaves, to mention only two examples. Far from being the glorious “liberty document” Lincoln thought it was, the Constitution seems at best ambiguous on the question of slavery and at worst directly to support it.
How then did men with antislavery sentiments work within the constitutional order to promote their views? The heart of Dyer’s answer is sketched in chapters two through five, which reflect on a series of court cases that occurred between 1772 and 1857. It is a complex, even convoluted story, in part because there is no simple linear progress, but also because each case involves different parts of the law, different legal and political circumstances, and different aspects of the contradictions mentioned above. With the occasional detour to visit a scholarly controversy, Dyer guides the reader through these multiple complex considerations with an eye for the essential fact and the heart of an argument. No one is likely to be satisfied with every argument, but he raises many essential questions and I know of no more succinct introduction to the issue.
In the first case discussed, the English case of Somerset v. Stewart (1772), Dyer introduces some of his main arguments. James Somerset was a Virginia plantation slave whose master brought him to England. After a foiled escape attempt, the master bound Somerset, intending to export him to Jamaica, where he was to be sold as punishment for his behavior. After antislavery activists petitioned him, Lord Chief Justice Mansfield issued a writ of habeas corpus to review the legality of Somerset’s detention. In his decision freeing Somerset, Mansfield focused not on his slavery as such, even though slavery was not legally protected in England, but on the seemingly narrower question of his binding and intended forcible exportation. But this question, Dyer argues, raised fundamental issues. For habeas corpus presupposes the idea that no man, not even the sovereign, may use arbitrary power over another man or rule him by private will, which is what was done to Somerset. Because the rule of law is the remedy for willful, arbitrary rule, habeas corpus implies that the rule of law is inconsistent with slavery: the very idea or logic of the rule of law (hence, also of constitutionalism) presupposes freedom. This is one reason Dyer can argue that “a judicial posture tending toward the establishment and preservation of universal liberty was necessary for the maintenance of that particular constitutional heritage that finds its expression in the rule of law” (72). It follows that even if a constitution contains within it some elements that support slavery, these must be regarded as less fundamental than its tendency to liberty. Indeed, Dyer argues that “the fundamental principles undergirding the English and American claims to constitutional liberty offered a strong normative challenge to the existing institution of chattel slavery” (43).
Principle, however, was not the only thing Mansfield had to concern himself with. He could not ignore the fact that the rights of the master over his slave had long been recognized by custom or tradition in the British Empire, including in America. And both this fact and the general premise of British law (freedom) appear in the Chief Justice’s famous assertion that the nature of slavery “is so odious … nothing can be suffered to support it but positive law” (37, 54). This may not seem like a ringing endorsement of freedom, but Dyer shows that it influenced antislavery constitutionalism in America for 50 years. Mansfield’s implicit distinction between positive law and natural law suggests that freedom is the default position, which places the burden on slave-owners to prove ownership. Constitutionally, this means that in cases where no positive law protected slavery, or where positive law was ambiguous, the presumption was that men are free.
After the criminalization of the trans-Atlantic slave trade in 1808, antislavery constitutionalists made good use of these ideas in such cases as The Amedie (1810), The Fortuna (1811), The Donna Marianna (1812), and La Jeune Eugenie (1822). A typical case might involve a slave brought temporarily into a jurisdiction where no positive law sanctioned slavery. And in such a situation, the slave might appeal to the Somerset principle to claim freedom, or at least that any reintroduction of him into slavery “constituted an illegal assault subject to habeas corpus review and judicial redress” (39).
The general tendency to expand liberty was arrested in the late 1820s in The Antelope (1825) and The Slave Grace (1827), when judges appealed to custom rather than natural liberty as the foundation of law (but see p. 70). And despite the efforts of John Quincy Adams in La Amistad (1841) to restore the Declaration to a central role in constitutional interpretation, the tendency to liberty seemed to receive double knockout punches from Prigg v. Pennsylvania (1842) and Dred Scott v. Sandford (1857). Prigg gave masters an absolute right to seize and recapture slaves who had escaped to free states, and it declared unconstitutional state laws against kidnapping blacks and carrying them across state lines. Dyer notes, however, that this harsh opinion was believed by Justice Story, a man of strong antislavery opinions, to have been a great “triumph for freedom” (106). Story believed that he served the cause of freedom by preserving the Union, for if the free states could give immunity to escaped slaves, the result would be perpetual animosity and strife among the states. Story also hoped that by keeping power over fugitive slaves in the hands of the federal government rather than in the states, Congress would at some time be able to change the law in favor of freedom. Thus, although the decision contradicted natural law morality, Story hoped it would preserve the possibility of future movement towards freedom. Whatever we might think of these arguments, Dyer’s discussion here and elsewhere nicely presents the difficult choices facing antislavery constitutionalists, who had to balance principle against both positive law and political realities. In the event, Congress did not remake the law in favor of freedom, and Dyer admits that the case “presents a challenge to my thesis concerning the relevance and operational impact of the antislavery tradition in constitutional adjudication” (57, n.64).
Dred Scott is the most important case Dyer discusses. Among the difficult questions it raised, one was whether or not the Constitution protected slavery in the federal territories. The issue turned partly on whether or not the property protected by the Fifth Amendment (“no person shall … be deprived of … property without due process of law”, which is another way of expressing habeas corpus) could include human beings. But what, under the Constitution, is property? And, what is a human being? On these questions, the Constitution is ambiguous or silent, so that interpreters are forced to resort to principles outside the Constitution. As is well known, Chief Justice Roger Taney argued that it is inconceivable that men who owned slaves could have thought that the principle, “all men are created equal”, applied to members of the black race (117). With this appeal to the practice of the Founders, or to history, Taney denied the universality of the natural law principle of freedom.
A faint ember of the natural law tradition was kept alive in Dred Scott in the dissent of the often disregarded Justice John McLean, whose reputation Dyer goes some way to defend. But McLean was on the losing side, and after Dred Scott the torch of natural law antislavery argument is picked up by Abraham Lincoln, probably its greatest advocate. Dyer’s discussion of Lincoln’s constitutional principles and statesmanship is subtle and helpful, but too complex to discuss in any detail here. The heart of the argument is that, like McLean, Lincoln considered it necessary to determine whether slavery is or is not a wrong, and also like McLean, he thought this could be done only with reference to human nature. Saying that whether slavery is practiced by a king or a whole race, it is based on “a tyrannical principle” (132), Lincoln argued that all the arguments that can be made to justify enslaving another man rest on “something won by convention or superior strength”, which implies that there is no “right principle of action but self-interest”. But, Lincoln continued, what belongs to a man rightfully belongs to him by nature and cannot be won or lost by either convention or superior strength. Furthermore, if the public ever came to think that force or self-interest could be the basis for right, which it would think if Taney’s approach prevailed, the moral basis for republican government would be undermined. To prevent this, Lincoln was willing to risk war, and his intimations of providential intent in the Civil War suggest that peace may not be the final and highest value in every case.
With Lincoln in mind, Dyer returns to the present, in which some scholars argue that to resort to such “foundational” or “ultimate” justifications as natural law is to risk conflict and war. But unlike Lincoln, who thought that certain moral truths might be worth that risk, these scholars argue that comprehensive moral or religious justifications ought to be abandoned in favor of “public reasons that can be mutually affirmed by citizens independent of their disparate philosophical and theological premises” (161). This “neo-Hobbesian” argument is made most prominently by the political thinker John Rawls, whose argument is critiqued in chapter 6. Granting the legitimacy of the desire for peace, Dyer argues persuasively that in the 1850s and 60s Rawls would have been unable to side with Lincoln or Frederick Douglass against Taney. Not only did the former pair appeal to “ultimate” justifications to oppose slavery, but at the time “public reason” spoke at least as strongly for as against slavery. In this situation, it is difficult to see how one could defend freedom without appealing to “foundational” ideas. The implication is that the “public reason” approach can avoid ultimate questions about human nature or natural order only because it regards those questions as settled, or perhaps because it takes for granted one particular answer to them. Thus, besides establishing that natural law arguments were crucial historically for the advance of freedom, Dyer’s book challenges us to wonder whether a constitutionalism dedicated to liberty can survive if it does not know and refuses to investigate its own basic assumptions about man and his place in the world. 
The Bansuri is a traditional bamboo flute from India that was adapted in the twentieth century for the performance of Indian classical music. The Bansuri can have six or seven holes. These flutes produce their fundamental note when the top three holes are closed. The bamboo used for this instrument is very thin-walled and light. The ends are reinforced with metal rings.
This instrument is a bamboo flute that originated in northern India, built from knotless bamboo of medium thickness that is very thin-walled and lightweight. The Bansuri originally had just six holes, but in the last century Pannalal Ghosh added a seventh hole at the end, giving the instrument an extra note and a few more of the ornaments used in Indian classical music; before that, it had been used only for Indian folk music.
It has a range of about three octaves. The tonic of the instrument is produced by covering only the top three holes. The holes are not covered with the fingertips but with the middle phalanges of the fingers, which allows a relaxed hand position: the fingers lie fully stretched across the holes, giving better coverage of each hole and the flexibility needed for the half-hole ornaments typical of the style.
There are two systems, the Hindustani system of North India and the Carnatic system of the South. The Bansuri used for folk and classical music follows the Hindustani system of North India, while other types of Bansuri with eight holes use the Carnatic system. The Bansuri is of particular importance in India because of its association with Krishna.
Some of the most widely recognized Indian flutists are Pandit Pannalal Ghosh, Hari Prasad Chaurasia, G. S. Sachdev, and Raghunath Seth, among others.
Mining Mookaite in Western Australia at Mooka Springs
A lot has changed since the early days of mining
Heavy machinery is a huge force multiplier, and makes it easier to restore the ground to original condition when mining is completed.
Mining is a team effort in this remote wilderness country
It is important to maintain communications with the outside world for reasons of safety
There is nothing quite like the feeling of extracting gem material. Highly addictive! No known cure!
Gem material is often large and spectacular.
A hard day’s work means a big appetite! Chef and Master gem cutter Mick prepares a meal.
We always remember the blokes from earlier days with respect and fondness.
We are very fortunate to have the friendship and cooperation of the traditional landowners.
Mookaite has been a valued commodity for tool making and projectile points here for more than 40,000 years.
After extraction, quite a bit of transportation is required
The bush does not release the treasure readily. Bogging of vehicles is frequent.
Back at the main road the depot is prepared for load out to Melbourne.
Once the rough material arrives at the Crystal World Lapidary factory in Melbourne, the processing starts.
Some is sent to Bali for carving.
We have a massive amount of material slabbed or completely rough for lapidarists.
Mookaite is a silicified porcellanite or jasper formed in the sedimentary environment of the ocean floor.
Siliceous ooze is a siliceous pelagic sediment that covers large areas of the deep ocean floor. Siliceous oozes consist predominantly of the remains of microscopic sea creatures, mostly those of diatoms and radiolarians.
The silica material from which it is derived comes from the abundant zooplankton of the Radiolaria family.
Radiolarians have a highly geometric skeleton made of very fine silica; this is what forms Mookaite.
Have you ever wondered how to make a dried leaf bag? This article will teach you; just follow these few easy steps.
1. Gather all the materials needed.
2. Cut out the plastic cover in whatever size and shape you want it to be. Be careful when handling the scissors, and ask for help if you can't do it.
3. Place the dried leaves on the plastic cover, then place another plastic cover over them.
4. Place more dried leaves on the plastic cover. Then put some duct tape on the side to close the covers so the leaves won't fall out.
5. Sew together the two sides (there's supposed to be a zipper on the other side). After adding the zipper, sew the other sides together, but without zippers.
6. Attach the handle to the bag. If you don't want a handle, use a lace or make it a shoulder bag.
7. Make sure that the bag is sewn properly and doesn't come apart easily.
Things You'll Need
- Dried leaves from your backyard
- Duct tape
- A sewing machine
- A fixed old bag handle
- A thick plastic cover
- A zipper
It turns out that the fountain of youth is not in a bottle, and no pharmaceutical company can lay claim to it. In fact, senior citizens do not even need insurance to take advantage of one of the most powerful anti-ageing remedies ever invented. A fascinating 2001 study conducted at Rutgers University in New Jersey found that the answer to many of the most common, and most serious, problems associated with ageing can be as close as your local garden.
This study could not have come at a more opportune time. We all know the population is ageing, and more than 10 million of our fellow citizens are over 65 years of age. Those seniors often face significant challenges, from declining health to the lack of a sufficient social structure. While simply tending a garden is not a panacea to the problems of ageing, the Rutgers study does suggest that simply being around flowers can have a profound impact on the way seniors view the world and those around them.
The results reported by Rutgers University are the result of a six-month study specifically conducted on the health impact of flowers on the senior population. The study found that being around flowers helped to ease depression in older citizens. But the study also found other effects, including the ability of flowers to inspire social interaction and keep memory sharp.
These studies could turn out to be quite significant, as senior-care experts and social scientists look for safe and non-pharmaceutical ways to help seniors keep their minds sharp and their social skills intact. While no one, including the Rutgers researchers, expects floral therapy to replace pharmaceutical drugs, the study is a promising and intriguing one.
The 2001 Rutgers study is actually a follow-up on a 2000 study that found a link between flowers and greater levels of happiness and life satisfaction among female participants. This study was intriguing in itself, and it prompted the folks at Rutgers to do a follow-up study on the effect of flowers on senior citizens and the unique challenges they face.
The Rutgers study involved over 100 senior citizen volunteers. Some of those seniors received flowers, while others in the study did not. The results of the study are intriguing to say the least, and they found that flowers do indeed have a strong impact on the lives of senior citizens. One thing that the Rutgers University researchers found was that study participants who received flowers experienced a significantly greater increase in positive moods and self-reported happiness than those who did not receive flowers. More intriguingly, seniors who received flowers scored better on memory tests, and also experienced richer personal memories while flowers were in the room.
The study also found that flowers encouraged companionship and greater levels of social interaction among the seniors participating in the study. The group that received flowers engaged with people around them more consistently than those who did not receive flowers.
The percentages tell the tale, and they are quite impressive. In the Rutgers study, some 81% of participating seniors reported a decrease in depression after receiving flowers. Some 40% reported making new social contacts, and 72% scored well on tests of memory, generally much higher than those who were in the non-flower group.
So while flowers might not be the garden of Eden, the Rutgers study certainly indicates that the garden in your backyard can have a profound impact on your mood. You may have known that flowers make you feel better, but the Rutgers study provides proof that flowers are good for you as well.
This may be described as a decade of wild decadence and rebellion in which public consciousness first burst out of the colonial mindset underpinning the economic rebuilding after the Great Depression. New-found wealth made new forms of subsistence possible, and this encouraged the formation of experimental communities espousing philosophies aligned with deep ecology, humanism and communalism. Ironically, this was occurring at the same time that Australia's traditional markets were collapsing.
Levothyroxine is an oral thyroid hormone medication used in dogs and cats to treat hypothyroidism or other thyroid conditions due to low circulating thyroid hormone. It usually needs to be given for the life of the animal. Levothyroxine is available as chewable tablets, as an oral solution, as a powder or as tablets.
WHAT IS THIS DRUG?
- Levothyroxine is a synthetic thyroid hormone
- Levothyroxine is given by mouth
REASONS FOR PRESCRIBING:
- To treat conditions associated with low circulating thyroid hormone (hypothyroidism). This is a common disease of middle aged and older pets (where the animal's thyroid gland does not produce enough thyroid hormone)
- Cats don't often receive levothyroxine, but may be prescribed it for a short period to correct overtreatment of hyperthyroidism or if their thyroid gland was surgically removed
WHAT DOGS/CATS SHOULD NOT TAKE THIS MEDICATION?
- Hyperthyroid animals (pets which produce too much thyroid hormone)
- Use with extreme caution in older or debilitated animals, those with heart disease, high blood pressure, anemia, Addison's disease (hypoadrenocorticism), or diabetes
- Pets who have ever had thyrotoxicosis or have an uncontrolled adrenal problem
- Pregnant or nursing animals
- Pets known to have had an allergic reaction to levothyroxine or like products
Check with your veterinarian if this product can be given with or without food.
Give medication as directed by your veterinarian. This medication is usually given once or twice daily.
Read and follow the label carefully.
There are many different brands of thyroid replacement therapy. Differences do exist between brands. If you must change brands, your veterinarian may need to recheck thyroid hormone levels and adjust dosing accordingly.
Give the exact amount prescribed and only as often as directed. Missed doses reduce the effectiveness of therapy.
Ideally, give the medication at the same time daily.
Give any vitamin or mineral supplements an hour before or 4 hours after giving levothyroxine.
It usually needs to be given for the life of the animal.
Ensure your pet has fresh, clean drinking water at all times.
WHAT IF DOSE IS MISSED?
If a dose is missed, give it as soon as you can. If it is time already for the next dose, skip the missed dose and go back to the normal schedule. Do not give two doses at the same time.
STORAGE AND WARNINGS:
Store in a tight, light resistant, childproof container in a cool, dry place at room temperature away from heat and direct sunlight.
Refrigerate oral solution if instructed.
Keep this and all medication out of reach of children and pets. Call your physician immediately if you accidentally take this product.
POTENTIAL SIDE EFFECTS:
- This medication is usually well tolerated by dogs and cats when given at the correct dose
- Contact your veterinarian if your pet experiences any of these symptoms: fast heart rate, excessive ingestion of food, inability to tolerate heat, excitability, nervousness, excessive panting.
- Long term use may cause osteoporosis (bone loss)
- If you notice anything unusual, contact your veterinarian
CAN THIS DRUG BE GIVEN WITH OTHER DRUGS?
- Yes, but possible interactions may occur with antidepressants, digoxin, epinephrine, estrogens, insulin, ketamine, norepinephrine and warfarin
- If you give your pet sucralfate (Carafate) or aluminum antacids (Maalox, Mylanta), give these products 4 hours before or after giving levothyroxine
- If your pet experiences any unusual reactions when taking multiple medications, contact your veterinarian
Contact your veterinarian immediately if pet consumes more than the prescribed amount.
WHAT TO TELL/ASK VETERINARIAN BEFORE GIVING MEDICATION?
Talk to your veterinarian about:
- Thyroid hormone levels will need to be monitored with blood tests every few weeks until the dose is stabilized. Blood should be drawn 6-8 hours after the morning dose of medication. Schedule your appointment accordingly. Your veterinarian may also advise periodic liver and kidney function testing.
- Risks and benefits of using this drug
Tell your veterinarian about:
- If your pet has experienced side-effects on other drugs/products
- If your pet has experienced digestive upset now or ever
- If your pet has experienced liver or kidney disease now or ever
- If your pet has experienced any other medical problems or allergies now or ever
- All medicines and supplements that you are giving your pet or plan to give your pet, including those you can get without a prescription. Your veterinarian may want to check that all of your pet's medicines can be given together.
- If your pet is pregnant or nursing or if you plan to breed your pet
WHAT ELSE SHOULD I KNOW?
Notify your veterinarian if your animal's condition does not improve or worsens despite this treatment.
As with all prescribed medicines, levothyroxine should only be given to the dog/cat for which it was prescribed. It should be given only for the condition for which it was prescribed. It is important to periodically discuss your pet's response to levothyroxine at regular check ups. Your veterinarian will best determine if your pet is responding as expected and if your pet should continue receiving levothyroxine.
This is just a summary of information about levothyroxine. If you have any questions or concerns about levothyroxine or the condition for which it was prescribed, contact your veterinarian.
The Pacific lionfish has taken over the Caribbean, killing reefs and decimating local species. Could Whole Foods be the answer?
- By Christopher PalaChristopher Pala, a former New York Times and Science magazine contributor from Hawaii who now lives in Washington, D.C., often writes about ocean issues.
Since Pacific lionfish were first detected off the coast of Florida three decades ago, they have spread around the Caribbean, gobbling up everything that fits in their mouths and reproducing at a phenomenal rate. Scientists have shown that soon after they descend upon a reef, there is a sharp fall in the number of small fish, notably the herbivores on which coral depends for survival. “They’re eating their way through the reefs like a plague of locusts,” said Mark Hixon, a lionfish specialist at the University of Hawaii. It is by far the most destructive invasive species ever recorded at sea, and the blight is believed to have started with aquarium fish released off the Florida Atlantic coast in the mid-1980s.
However, in the last few months, a set of unrelated trends has resulted in two U.S. supermarket chains, Whole Foods and Wegmans, offering Florida lionfish, which has a white, delicate flesh, to consumers with much fanfare. Early signs suggest that the state’s fishery might just be big enough to protect the native denizens of at least some reefs from being decimated.
“If the commercial fishermen can keep their numbers down, we should see an increase in the native species that are being eaten by lionfish,” said Lad Akins, the founder of the Reef Environmental Education Foundation (REEF) in Key Largo, Florida, and head of its lionfish study project. “That would be the first time a commercial market controls an invasive species.”
Popular among aquarium keepers for their stunning russet-and-cream stripes and 18 sharp, venomous spines that spread out like a fishing boat’s outriggers, lionfish also boast modest space requirements (near stillness is their default state), a surprising resilience, and an awfully good bang for the buck (currently under $50 apiece) in the sometimes stratospheric market of tropical aquarium fish.
The lionfish is relatively common in the tropical Pacific but has a negligible effect on reef life there because it’s never seen gathering in large concentrations, though exactly why this is so remains unclear; no predator has been identified. But in the Caribbean, where reefs are in far worse shape, it’s a different story.
When Christopher Columbus traversed the New World, marine life was so rich that his chronicler wrote that the crew could almost walk ashore on the backs of swimming turtles. Underwater, marine biologists believe the scenery was then just the opposite of what one can see now: Instead of many multicolored little fish fluttering carelessly around a reef and the blue outline of an occasional shark or barracuda in the distance, large predators made up the bulk of marine life. Little fish were even more numerous, as is still the case near certain pristine Pacific islands.
As people settled on reef-fringed coasts everywhere, overfishing — decimating a population of a certain species faster than it can reproduce — first affected the larger species of edible fish, both carnivores like grouper and snapper and herbivores like parrotfish and wrasse. These grazers played an essential role in the reef: They prevented algae from taking over, leaving space for coral to grow — which provides shelter and food for smaller marine life.
However, when the big grazers grew scarce, algae blossomed, and coral started to die; then fishermen moved onto the smaller ones, too, accelerating the process. In the 1980s, a mysterious disease wiped out sea urchins, which are algae grazers and friends of the coral. So by the time the lionfish arrived in the Caribbean islands, peeling off the Florida coast to first colonize the Bahamas in the late 1990s, average live coral cover in the region had fallen to 14 percent from more than 50 percent in the 1950s, according to Alan Friedlander, a marine ecologist at the University of Hawaii who has been studying Caribbean reefs for more than 30 years.
“The lionfish found a reef system that was exceptionally vulnerable,” Friedlander said. “The reefs are relying almost exclusively on the very fish that the lionfish are taking out.”
As a result, offshore Caribbean coral reefs are now dying and crumbling faster, reducing the populations of fish that millions of people depend on for healthy, cheap protein and removing a bulwark against storms that will accelerate coastal erosion and the destruction of beachfront communities.
At first glance, averting the disaster seems like a cakewalk: Spearing lionfish is so easy it doesn’t even require a spear gun; a two-foot shaft with a barbed hook propelled by a piece of rubber tubing is enough, because the lionfish can usually be approached to within two feet. Some can be found in the first 15 feet of water where snorkelers swim, though most live on the lower end of a scuba diver’s range, below 100 feet. As they invaded the pretty reefs that recreational divers frequent, the divers started spearing them, keeping their numbers low and the coral healthy, particularly in places with large diving communities like the Florida Keys, Bonaire, the Caymans, and Cozumel, Mexico.
But the lionfish’s average Caribbean density of four per 100 square meters has generally been too small to make it economical to fish for a living. Most lionfish make up a single portion; some can feed two. But their white, tender meat has led more and more restaurants to put them on the menu as the notion of eating the invader caught the imagination of much of the community press in Florida, to a far greater degree than the invasive (but less damaging) Asian snakehead has on the Eastern Seaboard.
Rachel Lynn Bowman, who describes herself on Instagram as a #lionfishhuntress in the Florida Keys, says her average is 50 speared lionfish a day, which she sells to Whole Foods or local restaurants. But until now scientists have believed that the commercial cull, just like that of the recreational divers, was too small to have any effect.
That’s changing in the Florida Panhandle, particularly from Mobile, Alabama, to Apalachicola, Florida, where their density has risen more than tenfold to at least 50 per 100 square meters, says Kristen Dahl of the University of South Alabama. “They eat pretty much anything that fits in their mouth, and they have a predilection for young vermilion snapper,” she said, referring to a popular commercial species.
Just when traditional fishermen were complaining that their fish traps were coming up with lionfish instead of the more valuable grouper or snapper, the Monterey Bay Aquarium’s Seafood Watch program decided to list the lionfish as a “best choice,” noting that “reduction or removal from the Atlantic, Caribbean and Gulf of Mexico will greatly benefit the native species.” Whole Foods, which decides what seafood to sell in part based on the aquarium’s recommendations, swung into action.
David Ventura, the company’s seafood coordinator for Florida, was aware that his lobster suppliers often caught lionfish in their traps, so he offered them $3.50 a pound in March. “They sold faster than we could get them in,” he said. In an interview shortly after the April 1 end of the lobster season, he said he’d love to get more lionfish but admitted he had no idea where to find them.
He learned fast. By mid-May, he had created a network of spearfishing commercial divers who helped him stock the 26 Whole Foods Markets in Florida with lionfish at $9.99 a pound. In an interview at the May 14 lionfish derby in Pensacola, which pulled in over 8,000 fish, more than tripling the previous record, Ventura sounded positively thrilled. He said he’d sold more than he expected that weekend and would now be offering it permanently at all the state stores near posters that read, “Take a bite out of lionfish — be a part of the solution.”
“Almost every customer that’s approaching our seafood teams [is] chatting about it,” he said. “Demand is very strong, and given the dedication of the divers, I’m confident the supply will be there.” In July, lionfish appeared in some stores in the South, Southwest, Rocky Mountains, and California, added McKinzey Crossland, a Whole Food spokeswoman. Wegmans, which has stores in New York, Pennsylvania, New Jersey, Virginia, Maryland, and Massachusetts, has also started selling it.
At the Pensacola derby, where several dozen tents displayed everything from lionfish dishes to toys, the mood was euphoric. A small community of divers that had been picking off the fish for years — sometimes selling their catch for $10 a pound to restaurants, sometimes eating it themselves — felt its time had come.
With sales soaring, Rebecca Jones, a former sales executive, invested her savings in a lionfish-fishing operation with Ty McCall, a professional diver. “We’re pulling in 200 to 300 pounds a day,” McCall said. Under one tent at the lionfish derby, waitress Clara Proctor, who has been devoting all her spare time to making and selling a tasty lionfish dip under the brand Edible Invaders, was hoping the lionfish’s growing fame would allow her to quit her night job. At another, chef Irv Miller of Jackson’s Steakhouse restaurant was serving up a delicious fried lionfish mousse.
Ryan Chadwick, who believes he’s the first to serve lionfish in New York at his restaurant Norman’s Cay, now offers it in four other restaurants and has opened a wholesale service. “Demand has really gone through the roof since the derby,” he said. “A couple of years ago, most people hadn’t heard of lionfish,” said Dahl, the University of South Alabama scientist. “Now it’s almost like there’s a race to fish them.”
But putting a dent in their numbers won’t be easy. “We know that there are many, many places with big lionfish colonies way beyond the reach of divers, from 200 feet to 1,000,” she explained. “So that means that a reef that’s been fished out can be recolonized quite fast.”
Akins, of REEF, agrees. “The question is: Will the small fish have time to repopulate the reef before the lionfish come back?” A trap that works across all depths with minimal bycatch “could be the silver bullet we’ve been looking for,” he added. “A lot of very bright minds have been working on it, and now all this publicity is making it likelier that someone will come up with the right one.”
Lionfish have also begun appearing in the Mediterranean, but colder waters there should keep their numbers down, says Friedlander, the marine biologist. But in Gulf and the Caribbean, “It’s a race against the clock.”
Photo credit: Ethan Miller/Getty Images
Last modified: 2014-03-30 by zoltán horváth
Keywords: uzbekian ssr | uzbekistan | hammer and sickle (yellow) | hammer and sickle (grey) | star: 5 points (fimbriated) | sun: rising | cotton | u3.c.c.p. | ŭz.s.s.r. | ўз.С.С.Р. |
Red with blue bar, fimbriated white, with the following measures
(respective to the height of the flag, from top to bottom):
2/5ths of red, 1/50th of white, 8/50ths of blue, 1/50th of white
and 2/5ths of red. See here also
detailed construction information of the hammer and sickle.
Mark Sensen, 25 May 1997
Stripes: 20+1+8+1+20. Star is contained in imaginary circle of
diameter one-tenth of flag height. Hammer and sickle in imaginary square of sides
one-fifth of flag height. Imaginary circle of star touches the imaginary
square of hammer and sickle. Centre of star is at point one-tenth of flag height
from upper edge of flag. Vertical ax of star and hammer and sickle at one-third of
flag height (= one-sixth of flag length).
Mark Sensen, 20 Jun 2001, quoting [sol85]
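The construction data quoted above (stripe ratio 20:1:8:1:20, a star circle of diameter h/10 centred h/10 below the top edge, a hammer-and-sickle square of side h/5, and a vertical axis at one-third of the flag height) can be turned into concrete coordinates. Here is a small Python sketch; the function and variable names are mine, and the 1:2 height-to-length proportion follows from the page's note that one-third of the height equals one-sixth of the length:

```python
# Sketch of the 1952 Uzbek SSR flag geometry from the quoted construction data.
# Assumptions (mine, not from the page): top-left origin, 1:2 proportions.

def flag_geometry(h):
    """Stripe bands and emblem placement for a flag of height h."""
    w = 2 * h                      # "one-third of flag height (= one-sixth of flag length)"
    units = [20, 1, 8, 1, 20]      # red : white : light-blue : white : red
    total = sum(units)             # 50
    bands, y = [], 0
    for u in units:
        bands.append((y, y + h * u / total))
        y += h * u / total
    return {
        "width": w,
        "bands": bands,
        # star: circle of diameter h/10, centre h/10 below the top edge
        "star": (h / 3, h / 10, h / 10),   # (axis x, centre y, diameter)
        # hammer and sickle: square of side h/5 on the same vertical axis,
        # its imaginary square touching the star's circle
        "hammer_sickle": (h / 3, h / 5),   # (axis x, square side)
    }

g = flag_geometry(100)
print(g["bands"])  # [(0, 40.0), (40.0, 42.0), (42.0, 58.0), (58.0, 60.0), (60.0, 100.0)]
```

For a flag 100 units tall this gives a 16-unit blue band with 2-unit white fimbriations, matching the 2/5 + 1/50 + 8/50 + 1/50 + 2/5 fractions given in the earlier description.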
According to Sokolov’s book [sol85],
the thin white stripes are not just due to a heraldic concern, as «The
light-blue stripe symbolises the cloudless sky over Uzbekistan sending
generous rays of the sun to the fertile soil. White edgings at the
light-blue stripe represent advanced cotton growing — “the white gold”
of the Republic.».
Mark Sensen, 20 Jun 2001
The last change of the soviet era was made on 29 August 1952 (Decree
of the Presidium of the Supreme Soviet), when the striped flag was adopted.
The blue is for the sky and the white is for the cotton; the red is the
revolutionary struggle of the working masses; the hammer and sickle is the
union of the workers and peasantry, and the star is the symbol of the international
proletariat. The flags have several regulations and dispositions about
this: 1 November 1952, 30 December 1953, 31 October 1955, 27 September 1974,
14 October 1974 and 30 July 1981.
Jaume Ollé, 08 Oct 1996
image by António Martins, 28 Oct 2002
No hammer, sickle and star on the reverse.
Mark Sensen, 25 May 1997
Officially the reverse looked like the obverse without the star and hammer-and-sickle.
But in fact I never saw these flags without the star and hammer-and-sickle. Real
flags (all 15) usually had either a reverse analogous
to the obverse (but with the star and hammer-and-sickle near the hoist) or a
reverse that mirrored the obverse.
Victor Lomantsov, 30 Nov 2002
Many (all?) soviet republics had these
banners. Usually it was a red field with the republican
coat-of-arms and the name of the republic.
They had gold fringe. I know of such a banner of Uzbekistan.
Victor Lomantsov, 09 Jan 2002
The emblem of the SSR was introduced on 14 February 1937 by
art. 143 of the then constitution (according to Hesmer
[hes92]) and it was replaced by
the current one July 1992.
The current emblem retains many parts of the old SSR one: the
grain and cotton wreaths, the ribbon (in the national colours
now, though) with inscription, the sun, and even the star: this
is, however, an eight-pointed blue star now instead of the
communist five-pointed red star.
Marcus Schmöger, 16 Sep 2001
Extracting pure copper metal from low-grade metal ores will benefit from the latest coordination chemistry research.
Katharine Sanderson/Budapest, Hungary
Extracting pure copper metal from low-grade metal ores will benefit from the latest coordination chemistry research, thanks to a molecule that can hold negative and positive ions in place, UK chemists claim.
Peter Tasker’s group at the University of Edinburgh made a specific zwitterionic ligand - a compound with both acidic and basic groups - to take copper sulfate through an entire ionic extraction circuit without generating unwanted sulfuric acid: a crucial breakthrough in extraction technology.
Cationic exchange reagents have been used by metallurgists to generate copper from ore for over 15 years. In this oxidative leach process, used as an alternative to smelting, copper oxide goes through a series of efficient cyclical processes. First sulfuric acid is used to produce copper sulfate, then the aqueous solution of copper ions is bound to another ligand in an organic solution, and then a solvent extraction process makes very pure copper sulfate when retreated with aqueous sulfuric acid. The final step in the circuit is electrolysis to extract pure copper. Products from each of these steps can feed the others so theoretically the overall process consumes just electricity. This is ’the ideal holy grail,’ Tasker said. ’It’s possible if your coordination chemistry [at the ligand binding stage] is very selective.’
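The first step of the circuit described above is the sulfuric-acid leach of copper oxide, CuO + H2SO4 → CuSO4 + H2O. As a rough illustration of the arithmetic involved, here is a hedged stoichiometry sketch in Python; the molar masses are standard values, the function name is mine, and the 100% recovery assumption is an idealisation rather than a claim from the article:

```python
# Hedged sketch: stoichiometry of the oxidative leach step,
#   CuO + H2SO4 -> CuSO4 + H2O
# followed by ideal solvent extraction and electrowinning.

M = {"Cu": 63.55, "O": 16.00, "S": 32.06, "H": 1.008}  # g/mol, standard values

def copper_from_cuo(mass_cuo_g):
    """Mass of pure copper (g) recoverable from a given mass of CuO,
    assuming complete leaching, extraction, and electrowinning."""
    m_cuo = M["Cu"] + M["O"]       # 79.55 g/mol
    moles = mass_cuo_g / m_cuo     # 1 mol CuO -> 1 mol CuSO4 -> 1 mol Cu
    return moles * M["Cu"]

# 1 kg of copper(II) oxide yields roughly 799 g of copper at the ideal limit.
print(round(copper_from_cuo(1000), 1))  # 798.9
```

In practice each stage of the circuit falls short of 100% efficiency, which is exactly why the selectivity of the coordination chemistry at the ligand-binding stage matters so much.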
Oxides are one thing, but the earth isn’t kind enough to supply a pure oxidic ore. Sulfidic ores are the next challenge, because they exist in much greater quantities. Gareth Bates, at the University of Southampton, who has worked with Tasker, explained that sulfidic ores ’can be converted to their oxides but release sulfur dioxide, and are therefore not environmentally sound.’ Also, if conventional ligands are used in the purifying (or stripping) process, sulfuric acid can build up in the system.
Tasker’s trick was to keep copper sulfate levels steady throughout the extraction cycles, preventing the sulfate ions from generating extra sulfuric acid, which would then disrupt the other cycles. ’That’s not easy,’ Tasker said. ’You’ve got to essentially get copper sulfate soluble in kerosene, and sulfate hates to be in kerosene. You really need to get very special reagents that are able to complex both to copper and to sulfate.’
That special reagent has now been made, and recently patented by Tasker’s group. It’s a diacid ligand - one ligand equivalent can take up one copper ion with its double positive charge. It’s difficult to get a ligand with two acidic sites where the second site is sufficiently acidic. ’In order to pull two protons off, these ligands have to form very specifically stable copper complexes.’
The important property of the complexing ligand is the way the copper sulfate binds. The metal ion sits in the place Tasker expected, bound to a nitrogen atom, and the sulfate binds to another nitrogen. But the sulfate ion also binds to copper, keeping a complete copper sulfate molecule. ’It’s done exactly what we wanted,’ said Tasker. ’It comes along and finds copper sulfate and takes copper sulfate into the organic phase. That means that theoretically we can run the whole process [of moving copper sulfate across the circuit].’
Bates agrees: ’It’s a way to extract the pure metal from low-grade sulfidic ores.’ This offers the ’possibility of real commercial application,’ he said. Tasker’s system is proven in a lab solvent-extraction system, and he is happy with the coordination chemistry. But the solubility of the copper complex in kerosene needs to be refined. ’That’s a challenge. We will have to put a lot of organic solubilising things on the ligand.’ The development is now in the hands of the engineers, as well as a close collaboration with a reagent supplier who hopes to bring the system to life in the copper mines of South America.
The US Department of the Interior Bureau of Land Management (BLM) hired Burleson to prepare a removal site investigation (RSI) report to evaluate the need for a removal action at the Helen Mine.
Helen Mine was discovered in 1871, and several hundred flasks of mercury were produced from 1873 to 1876.
The mine operated intermittently from 1900 to 1956 and produced about 7,000 flasks of mercury. BLM needed to know if the contamination from the site created the need for a removal action.
Burleson compiled site and watershed wide analytical data for soils, sediment, water, and biota to support development of a conceptual site model (CSM) and risk screening. Burleson also identified data gaps.
Preliminary Conceptual Site Model
Burleson prepared a preliminary CSM to identify complete exposure pathways for human and ecological receptors.
Site risks to humans, biota and water quality were evaluated for completed pathways shown in the CSM. Burleson evaluated:
- Potential risks to human health by comparing mercury concentrations in sediment and mine waste at the Helen Mine to screening benchmarks.
- Potential risks to ecological receptors by comparing mercury concentrations reported in sediment at the mine to the median risk management criteria for ecological receptors and by comparing the results of fauna sampling for mercury near Helen Mine to mercury in fauna downstream from the mine and in regional data sets.
- Potential water quality impacts by comparing detected concentrations to water quality criteria.
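Screening comparisons like these are often summarized as hazard quotients: the ratio of a measured concentration to its screening benchmark, with values above 1 flagged for closer review. The numbers below are hypothetical placeholders for illustration, not values from the Helen Mine report:

```python
# Sketch of a risk-screening comparison using hazard quotients (HQ).
# HQ = measured concentration / screening benchmark; HQ > 1 flags the analyte.
# All concentrations and benchmarks here are hypothetical, not site data.

def hazard_quotients(measurements, benchmarks):
    """Return {analyte: HQ} for every analyte that has a benchmark."""
    return {a: measurements[a] / benchmarks[a]
            for a in measurements if a in benchmarks}

def flagged(hqs, threshold=1.0):
    """Analytes whose HQ exceeds the threshold, sorted for stable output."""
    return sorted(a for a, hq in hqs.items() if hq > threshold)

sediment = {"mercury": 2.4, "arsenic": 6.0}      # mg/kg (hypothetical)
benchmarks = {"mercury": 1.06, "arsenic": 9.8}   # mg/kg (hypothetical)

hqs = hazard_quotients(sediment, benchmarks)
print(flagged(hqs))  # ['mercury']
```

In practice, exceedances feed into the weight-of-evidence evaluation rather than triggering action automatically.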
Based on the NCP removal factors and the results of the risk screening, Burleson found that a removal action was not necessary at the Helen Mine. Burleson recommended that BLM conduct sampling and analysis to determine the mercury content of waste rock at the site. Burleson also recommended sampling and analysis of sediment in Dry Creek upstream and downstream of the tributary from the Helen Mine to evaluate impacts of this feature on the watershed.
Prescription drugs are the most common, and usually the first, type of treatment given for Parkinson's disease.
But there are several other therapies that can help manage Parkinson's disease and its symptoms.
Deep Brain Stimulation
Deep brain stimulation (DBS) uses an electrode to stimulate certain parts of the brain, much like a cardiac pacemaker.
Although DBS is the most common surgery for Parkinson's disease, it's not appropriate for everyone.
In this procedure, a pulse generator (with a battery pack) is implanted in the chest near the collarbone. A wire from the generator sends finely controlled, painless electrical signals to the brain to interfere with signals that cause the motor symptoms of PD.
DBS can be used on one or both sides of the brain. For whichever side of the brain it's used, it will mainly affect the opposite side of the body.
If you get DBS surgery, you may have to return to the medical center frequently for a few months to have the stimulation carefully adjusted. After the first few months, you will need it checked occasionally.
The battery pack in the pulse generator needs to be replaced every three to five years.
DBS isn't recommended for people who have Parkinson's disease that responds well to the drug levodopa (L-dopa).
The procedure seems to help people for whom levodopa became less effective over time, or for people who developed disabling side effects from levodopa (like dyskinesia — involuntary twisting or writhing movements).
One current area of research involves whether starting DBS earlier in the course of the disease — while levodopa is still working — might be helpful.
DBS isn't recommended for people with memory problems, hallucinations, severe depression, poor health, or a consistently poor response to levodopa.
DBS also hasn't been shown to benefit people with other parkinsonisms (disorders that cause Parkinson's-like symptoms).
As with any surgery, DBS surgery carries a risk of infection. Because DBS is a brain surgery, there's also a small risk of brain hemorrhage or stroke.
Other Surgeries for Parkinson's
Pallidotomy and thalamotomy are surgeries that permanently destroy parts of the brain that are causing motor symptoms.
These surgeries were more common before deep brain stimulation (DBS) became available.
Pallidotomy and thalamotomy have become the subject of research again recently because there are now ultrasound versions of these procedures that can be performed noninvasively, without the need for surgery.
Diet for Parkinson's Disease
A normal, healthy diet is beneficial for anyone, including people with Parkinson's disease.
Constipation can be a problem for people with Parkinson's, so a fiber-rich diet with plenty of fluids may help.
Protein in the diet can limit the absorption of the drug levodopa, so this medication is best taken without a lot of protein.
There's ongoing research about the possible benefits of antioxidants, caffeine, and supplements in people with Parkinson's. But right now, there's no conclusive evidence that any specific dietary factors are helpful in preventing or treating the condition.
Always tell your doctor about any supplements or herbs that you're taking, as they may interact with medications.
Physical Therapy for Parkinson's
Regular aerobic exercise and strength training can help to improve strength, flexibility, and balance — as well as fight depression — in people with Parkinson's disease.
The following exercises and activities may be helpful:
Tai chi This exercise has been shown to help improve mobility, flexibility, and balance in people with mild to moderate Parkinson's.
Speech therapy A speech therapist may be able to help you overcome problems related to speaking and swallowing.
Occupational therapy An occupational therapist can help you develop techniques to aid with daily activities like dressing, eating, bathing, and writing.
Alexander technique This practice — which involves examining posture, balance, and how you use your muscles — may help reduce muscle tension, pain, and the risk of falling.
Yoga This popular activity may help improve flexibility and balance. Most yoga poses can be modified to suit your physical abilities.
You should always check with your doctor before starting a new exercise program.
Alternative Treatments for Parkinson's
Some people with Parkinson's disease have found alternative therapies to be helpful in coping with their symptoms.
Some alternative therapies for Parkinson's include:
Coenzyme Q10 Some studies have found this supplement to be helpful in the early stages of Parkinson's, if it's taken for more than 16 months. But other studies have shown it to have no effect.
Massage Getting a massage may help with muscle tension. Massage is often not covered by insurance.
Acupuncture This technique may help reduce pain. A trained practitioner of this traditional Chinese therapy places tiny needles into the skin at specific points.
Meditation This practice may help reduce stress and pain. By quietly reflecting and focusing your mind on an idea or image, you may feel an increase in your sense of well-being.
Music or art therapy Exposure to the arts may help improve your mood and motor skills.
Pet therapy Interacting with a pet may help expand your movement and flexibility, while also improving your emotional health.
- Parkinson’s Disease: Hope Through Research; National Institute of Neurological Disorders and Stroke.
- FDA Approves Medtronic DBS Therapy for Early PD; Parkinson's Disease Foundation.
- Li et al. (2012) “Tai chi and postural stability in patients with Parkinson’s disease.” New England Journal of Medicine.
- Nutritional Supplements and Vitamins: Alternatives to Help Parkinson’s Disease? Parkinson's Disease Foundation.
- Diseases and Conditions: Parkinson’s disease; Mayo Clinic.
Last Updated: 7/1/2016 | <urn:uuid:94e2b8d5-1c6e-4dac-adc7-4293cb08d4d5> | CC-MAIN-2017-17 | http://www.everydayhealth.com/parkinsons-disease/guide/treatment/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121355.9/warc/CC-MAIN-20170423031201-00006-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.93642 | 1,200 | 2.9375 | 3 |
- Obama, Romney to debate domestic and foreign policy over three debates
- Telegenic John F. Kennedy outshined Richard Nixon in first debate in 1960
- Romney emerged victorious from a primary season that included nearly 20 debates
- Then-candidate Obama easily handled Republican John McCain in 2008 debates
After months of talking about each other and their policies, the world finally gets to see Barack Obama and Republican challenger Mitt Romney go toe-to-toe on the same stage in three televised debates ahead of the U.S. election.
Unlike other countries, such as the United Kingdom, where the prime minister must defend his policies under televised duress from the opposition nearly every week, face-to-face showdowns between the two men fighting for the White House only happen every four years.
And while debates rarely swing the outcome of an election, a gaffe -- or a silver-tongued swipe at the opposition -- under the bright lights can alter the perception of the two contenders, for better or worse.
What's the history of U.S. presidential debates?
Presidential debates are a relatively recent phenomenon. The first televised debate was between Republican Richard Nixon and Democrat John F. Kennedy, on black-and-white TV in 1960.
Many people listening on the radio to that first of four Nixon-Kennedy debates thought Nixon had won - but on live TV, a tan and youthful-looking Kennedy trounced a sweaty, haggard Nixon (who'd recently suffered a staph infection) in the appearance department. While Nixon improved in later debates, Kennedy went on to win the election.
There were no debates again until Jimmy Carter took on Gerald Ford in 1976. Since then, the Republican and Democratic hopefuls have matched wits in a series of (usually three) debates every election year - and twice, in 1980 and 1992, an independent candidate has joined the duo onstage.
In 1980, President Jimmy Carter refused to take part in the first debate with Ronald Reagan because John Anderson, an independent candidate, had been invited to take part. Carter's boycott led to a dramatic decline in the anticipated viewership for that debate. The second debate was cancelled, and Anderson was dropped from the program for the third round several weeks later.
What are the debates about?
In recent election cycles, the three debates have consisted of a domestic policy debate, a foreign policy debate, and a general debate in a town hall format, where members of the audience also offer up questions. Vice presidential candidates also face off in a single debate in the run-up to the election.
Generally speaking, candidates are asked questions by a moderator, who in recent years has come from one of America's major broadcast news networks. Candidates then have a set period of time for responses and rebuttals.
A coin-flip determines the order of answers at debates. Tonight Obama will answer first, but Romney will have the final word.
The dates and sites for the debates, which typically take place at universities across America, are chosen from a list of applicants by the non-partisan Commission on Presidential Debates.
Do debates even matter to the public?
While the debates offer Romney and Obama a chance to expand on their views and rebut each other's plans directly, experts say that the vast majority of Americans have already decided who they're voting for along party lines.
But although debates aren't typically seen as deciding an election's outcome, there have been a few exceptions over time.
Kennedy's telegenic dominance of Nixon during the first televised debate helped swing momentum in the Democrat's direction in 1960.
In a 1980 debate, facing a barrage of assertions and accusations from incumbent Jimmy Carter, Ronald Reagan coolly replied with a smile: "There you go again." His famous retort momentarily took the wind out of Carter's sails. After entering debate season behind in opinion polls, eventual winner Reagan left the podium with the advantage over Carter.
Sometimes it's not the debate that hurts a candidate - it's the post-game review. In 2000, cameras caught a visibly annoyed Al Gore sighing and shaking his head when George W. Bush spoke.
The clip was played over and over again and lampooned on television, to the point that "people began to project onto Gore a personality trait of just annoyance and irritation of people in general," according to historian Doris Kearns Goodwin. A clear favorite before the debates, Gore lost his lead during the debate season. He eventually lost the controversial election after the Supreme Court ruled in Bush's favor.
Are Romney and Obama any good at debating?
As mentioned above, American politics don't involve many head-to-head debates between Republicans and Democrats, but both candidates are seen as more than competent debaters.
Obama handled Republican John McCain in all three contests four years ago, says debate coach Todd Graham, staying on track in his arguments, showing poise, and refusing to take attacks on his policies personally.
Obama's quick wit may have backfired on him during a 2008 Democratic primary debate. He responded to Hillary Clinton saying he was likeable with, "You're likeable enough, Hillary." The audience laughed, but many viewers saw the remark as a mean-spirited swipe.
Graham says despite Obama's reputation as a great orator, his debate performances have not lived up to the standards of his speeches - and that at times the president can be awkward and long-winded in his debate answers.
Romney is currently the better-practiced of the two, having emerged victorious from a Republican primary season that featured nearly 20 debates. Graham says Romney is consistently solid, has great opening lines to questions, and has a firm grasp of the issues.
Romney's biggest weakness, according to some experts, is that he often comes across as fake. Graham says Romney's broad smile and "thank yous" following heated answers make it seems like "he's practicing his speeches," not debating his opponent.
Are debates about great politics or great theater?
Long stretches of presidential debates involve dry policy speeches, but it's usually a single gaffe or clever one-liner that comes to define a debate in the annals of national memory.
Reagan was already the oldest president in history in 1984, and when asked during a debate about whether age would be an issue for him, the 73-year-old famously replied: "I will not make age an issue of this campaign. I am not going to exploit, for political purposes, my opponent's youth and inexperience." Even Democratic challenger Walter Mondale, then age 56, had to laugh.
Sometimes body language matters more than words in a debate. In 1992, President George H. W. Bush took a glance at his watch while an audience member was asking a question - a move that made Bush, whose re-election hopes were rapidly slipping away, seem uninterested in the concerns of the public.
John McCain sparked controversy when he referred to Obama as "that one" during the second 2008 presidential debate. At a dinner attended by both senators several days later, Obama joked that his first name was Swahili for "that one," according to the New York Times.
Vice presidents and their counterparts have delivered just as many memorable lines as their bosses have over the years. Lloyd Bentsen's sharp "Senator, you're no Jack Kennedy" reprimand of Republican vice presidential candidate Dan Quayle in 1988 remains one of the all-time greats -- along with Perot running mate James Stockdale's "Who am I? Why am I here?" debate opener in 1992, which drew guffaws from the audience.
A bad enough gaffe can help derail your campaign long before the first primary votes are cast, as Republican Rick Perry showed in late 2011.
At a primary debate in Michigan, Perry became the first candidate in history to say "oops" during a debate after forgetting the name of the third government agency he'd pledged to cut.
When pressed for an answer, Perry said: "The third agency of government I would do away with, the Education ... uhh the Commerce, and, let's see. I can't. The third one, I can't. Sorry. Oops."
After the debate, Perry owned up to the gaffe as only a Texas governor could: "I'm sure glad I had my boots on because I sure stepped in it out there." | <urn:uuid:052c89d2-97d4-470f-9804-187ff2fcf720> | CC-MAIN-2016-36 | http://www.cnn.com/2012/10/03/politics/presidential-debates-explainer/index.html?iref=mpstoryview | s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982952852.53/warc/CC-MAIN-20160823200912-00206-ip-10-153-172-175.ec2.internal.warc.gz | en | 0.975924 | 1,718 | 2.515625 | 3 |
Object NGC 2310 is located exactly in the center of the picture.
|NGC 2310 - galaxy in the constellation Puppis|
Type: S0 - lenticular galaxy
The angular dimensions: 4.30'x0.8'
magnitude: V=11.8m; B=12.7m
The surface brightness: 13.0 mag/arcmin²
Coordinates for epoch J2000: Ra= 6h53m53.8s; Dec= -40°51'49"
redshift (z): 0.003809
The distance from the Sun to NGC 2310, estimated from the redshift (z): 16.1 Mpc
Other names of the object NGC 2310 : PGC 19811, ESO 309-7, MCG -7-15-1, AM 0652-405
Nearby objects: NGC 2308, NGC 2309, NGC 2311, NGC 2312
The full catalog NGC / IC
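The distance quoted above can be reproduced from the listed redshift with Hubble's law, d ≈ cz/H0. The value of the Hubble constant is an assumed parameter here, not part of the catalog entry (the quoted 16.1 Mpc corresponds to H0 of about 71 km/s/Mpc):

```python
# Sketch: low-redshift distance estimate via Hubble's law, d = c*z / H0.
# H0 is an assumed parameter; the catalog does not state which value it used.

C_KM_S = 299792.458  # speed of light in km/s

def hubble_distance_mpc(z: float, h0: float = 71.0) -> float:
    """Distance in Mpc from redshift z (valid only for small z)."""
    return C_KM_S * z / h0

print(round(hubble_distance_mpc(0.003809), 1))  # ~16.1 Mpc
```

For z this small, peculiar velocities of a few hundred km/s can shift the estimate by a megaparsec or more, so redshift distances at this range are approximate.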
This version of the NGC catalog uses imagery from NASA, ngcicproject.org and other sources. At their original locations, the pictures are described as free of licensing restrictions. In case of doubt, authors are asked to let me know and the images will be removed.
Researchers with the University of British Columbia have, for the first time, used drones to monitor the behavior of killer whales in an attempt to better understand their feeding patterns.
It is hoped that the footage will help researchers determine if endangered southern killer whales are getting enough of their preferred prey, the Chinook salmon.
“In order to help these whales, we need to know more about them – how they hunt, how they forage and where their food is,” said Andrew Trites, the research team’s lead and the director of the Marine Mammal Research Unit (MMRU) at the University of British Columbia. “This is the first time drones have been used to study killer whale behavior and their prey. It’s allowing us to be a fly on the wall and observe these animals undisturbed in their natural settings.”
Researchers monitored northern and southern resident killer whales off the coast of British Columbia. Southern resident killer whales are endangered, whereas the northern population has increased steadily since the 1970s.
“Observing both populations of killer whales means we’ll be able to compare the foraging conditions and hunting behaviors of the two groups and see whether it is more difficult for southern residents to capture prey,” said Sarah Fortune, a postdoctoral fellow at MMRU. Credit: UBC/Hakai Institute via Storyful | <urn:uuid:69e7e8e1-69a9-43fc-8a84-1ccf0aea018e> | CC-MAIN-2024-10 | https://sg.news.yahoo.com/drones-used-monitor-killer-whales-094250775.html | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474744.31/warc/CC-MAIN-20240228175828-20240228205828-00625.warc.gz | en | 0.932526 | 282 | 3.8125 | 4 |
Students from all over India can now test their academic knowledge on a racetrack. Twelve years after its launch in the UK, the Formula One School Programme has come to India, where school and college students from 12 to 19 years will build an F1 scale-model and test it on a 20-metre track. The aim is to help students understand the requirements of the corporate world and use their academic knowledge in a real life situation.
The right formula
Students will be required to use CAD/CAM software to design, test and build a scale-model F1 car from a block of balsa wood, and race it in a competition. “The competition challenges students to take their academic knowledge and apply it to solving real-life problems,” says Aditya Tangri, in-country coordinator for F1 in Schools in India. Participating students will need to find sponsors, design the car, devise a marketing strategy, promote themselves on various social media platforms, present their cars to a panel of judges and, finally, test them on a 20-metre track.
Each student has to take responsibility, and a role is assigned accordingly in the team. “We want them to understand what they really want to do in life. One may excel in Mathematics, but does he/she have the aptitude to be an engineer? He could be the team manager, resources manager, manufacturing manager, graphics designer or design engineer,” explains Tangri.
Two schools from Mumbai, Poddar Education Network and Bal Bharti (Navi Mumbai) have already registered for the programme. For schools to participate, they must become a member of the franchise; memberships are accepted only if the school meets its requirements.
All in the game
Participants are divided into three categories — junior, intermediate and senior, and will compete at zonal and national levels. The national winners of the intermediate and senior levels will also participate in the international competition. The organisers will provide these students with technical training including how to design a car using CAD/CAM, and each car design has to be approved by its team of experts, before it appears in the competition.
“If a car fails to meet the safety and other standards, it’s sent back for re-designing,” says Tangri. Students must pay a competition fee of ₹5,000 (per head); a maximum of six students can form a team. These students will get between 80 and 90 days to design their car.
In the first part we consider the first two of the "dead" Baltic languages listed above. Galindian (or Golyadian) is the language of the Baltic tribe of the Galindians, first mentioned in Russian chronicle texts of the 11th-12th centuries. The Galindians are the earliest-mentioned of all the known Baltic tribes. The tribe's name probably comes either from a word meaning "edge" or "margin", or from hydronyms (the name of a tributary of the Narew, or the name of a lake). According to some assumptions, the name Galindians covered several ethno-linguistic groups that existed around the Baltic Sea. Different historical sources give different accounts of where the Galindians lived. From the writings of Ptolemy one can gather that they were located between the territories of the Veneti tribes and the Iranian Alans, to the east of the former.
Russian chronicles place them in the basin of the river Protva, south-west of Moscow, while Peter of Dusburg assigns the Galindians to the Prussian lands. The area associated with the Galindians is thus large, a fact explained by the tribes' active migrations in ancient times, particularly to the south and south-west, and later westwards into Europe. The beginning of these migrations coincides with the "Roman period", which explains the Galindian title borne by the emperor Volusianus, who ruled in the 3rd century. The Galindian language has left its mark on the formation of several European languages, including French, Spanish, Polish and Portuguese, as well as on some dialects of Russian. Although not much data on the language of this tribe survives, its importance in the formation of some European languages is considerable.
When it comes to weight loss, many people believe that high-intensity exercises are the only way to shed a few pounds.
But weight loss isn’t about the activities you do once in a while to improve your health, it’s about your daily routine.
That’s why exercises like walking can have such a profound impact on your weight and physical health.
The Joys of Walking
The best part about walking is that it doesn’t present itself as exercise. It’s easy to spend the day on your feet without even noticing how much energy you’re spending.
Because it’s low impact, walking doesn’t put much strain on the muscles and bones, making it suitable for just about anyone. With regular practice, walking long distance can improve muscle tone and boost cardiovascular health.
Walking for Weight Loss
The amount of weight you’ll lose by walking depends on your body weight and walking pace. The heavier you are, the more calories you’ll burn.
For an average-sized adult, walking 4 miles in an hour translates to about 400 calories spent. | <urn:uuid:adc02980-f3aa-4a37-9590-7c38630ebdae> | CC-MAIN-2019-18 | https://dailyhealthpost.com/how-much-walking-to-lose-weight/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578655155.88/warc/CC-MAIN-20190424174425-20190424200425-00273.warc.gz | en | 0.908852 | 234 | 2.90625 | 3 |
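A rough rule of thumb consistent with the figure above is about half a calorie per pound of body weight per walked mile. The 0.53 coefficient below is a commonly cited approximation, not an exact figure, and the 185 lb "average adult" is an assumption for illustration:

```python
# Rough estimate of walking calories: ~0.53 kcal per lb of body weight per mile.
# The coefficient and the example body weight are assumptions, not exact values.

def walking_calories(weight_lb: float, miles: float,
                     kcal_per_lb_mile: float = 0.53) -> float:
    """Approximate kcal burned walking `miles` at an ordinary pace."""
    return weight_lb * miles * kcal_per_lb_mile

# An average-sized (here, 185 lb) adult walking 4 miles:
print(round(walking_calories(185, 4)))  # ~392 kcal, close to the ~400 in the text
```

The estimate scales linearly with body weight, which matches the text's point that heavier people burn more calories over the same distance.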
This section is from the book "British Wild Flowers - In Their Natural Haunts", Vols. 5-6, by A. R. Horwood.
The habitat is heaths. The plant is a parasite on heath, gorse, etc. The flowers are pale rose-colour. The calyx is reddish, shorter than the corolla tube. The corolla is cylindrical, with spreading lobes, prominent scales nearly closing the tube. The plant is 6 in. to 2 ft. long, flowering in August and September, and is an annual parasite. | <urn:uuid:32856903-a93d-43ee-89b6-54d10c3f9c67> | CC-MAIN-2017-43 | http://chestofbooks.com/flora-plants/flowers/British-Wild-Flowers-2/Flowers-Of-The-Heaths-And-Moors-Order-Convolvulaceae.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820556.7/warc/CC-MAIN-20171017013608-20171017033608-00602.warc.gz | en | 0.918803 | 131 | 2.796875 | 3 |
Men and women have the same need and longing to connect with each other, but they also have different ways of reacting to stress that can drive them apart. Psychotherapist Patricia Love believes that these instinctive coping strategies can trigger the fear and shame that isolate partners from each other. Depression makes the disconnection that much worse.
These coping strategies can come up in relationships as a typically male sensitivity to shame and a typically female sensitivity to fear.
It’s always touchy to talk about gender differences, but Love and her colleagues approach this with the idea that differing male and female patterns are coping strategies, not fixed genetic traits. They recognize that individual men and women can exhibit behaviors across a broad spectrum.
There are no stereotypes that limit the roles of men and women, nor is there is a difference in their desire or need for feeling and relationships. But different styles of reacting to stress often lead to behaviors that create problems.
Fight-or-Flight or Tend-and-Befriend?
The differences that researchers have found relate to the ways in which humans have evolved to respond to threats and danger. For decades, scientists described the classic stress response of fight or flight as the basis for a lonely world of constant struggle for survival. It wasn’t until the 1990s that a group of social psychologists led by Shelley E. Taylor realized that all these observations had been based on human and animal studies that used mostly male subjects.
When they broadened research to include women in studies of stress, they identified another coping mechanism that was collaborative rather than competitive. Taylor summarized this research in her classic book, The Tending Instinct.
This alternative coping strategy is known as “tend and befriend.” In times of distress, women take care of those close to them while seeking the support of others for protection of the group. It’s a highly social way of dealing with danger that depends on bonds of trust and connection. Researchers found that this pattern, like the fight or flight response, was rooted in neurobiology as well as behavior.
Patricia Love and other psychotherapists have found that these distinctively male and female reactions to stress can contribute to the problems of couples, especially in the presence of depression. Here is a nutshell version of what can happen.
The Fear-Shame Dynamic
For men, the important thing is to demonstrate their ability to remove a danger or solve a problem through action and reasoning. Words and feelings can get in the way and don’t get the job done. The almost instinctive response is to do something on their own, without seeking help. If their ability to handle a situation is called into question, men tend to feel shame.
Rather than take action on their own, women often need to feel connected to others to feel safe. Isolation triggers fear. Expressing their worries to their partners is a way of reassuring themselves that the connection for support is there. Talking and expressing feelings are part of the process of connecting and handling stress.
Hearing about his partner’s worries, however, can also trigger a man’s vulnerability to shame. Instead of understanding a woman’s concerns as the need for connection, he can hear them as criticism that he has failed to do his job of providing and protecting. Her distress comes across to him as an accusation that it’s his fault.
Instead of reaching out to connect, he reacts defensively and angrily pushes his partner away. She is left alone with her fear, which is now intensified by the withdrawal of her primary source of support.
Each keeps triggering the main vulnerability of the other. The man looks for the respect and praise he needs to feel he’s fulfilled his male role but gets only a response he experiences as shaming. As he pulls away in anger, the woman feels more alone and fearful than ever.
How Depression Makes It Worse
Depression adds the perfect storm of isolation and emotional withdrawal. Many men see depression itself as a source of shame, a weakness and sign of their inability to perform. In the majority of cases, they refuse to get treatment or even to acknowledge it. That leaves the woman alone and excluded from the relationship during the crisis of illness.
When a man hears from his wife that he is depressed, that may sound to him like “you are a failure.” Anger is the typical response rather than feeling supported by his partner’s attentiveness. It’s very hard to get around the initial reaction that he’s hearing her words of concern as a criticism.
In depression, the man whom the woman looks to for reassurance and support can become himself the greatest threat to safety. This realization triggers an especially deep fear and vulnerability. She has to live in a constant state of alertness and easily gets angry. The stress level is high and can’t be relieved by the comfort of connection.
Connection Comes Before Communication
According to Pat Love and recent research, the fear-shame dynamic is an instantaneous reaction that begins well outside of our awareness through a process of emotional attunement. This is the reading of nonverbal signals in body language, facial expression and tone of voice. These communicate much faster than words. This nonverbal language was our primary method of evaluating and communicating the safety of situations long before language and reasoning became so prominent in human development.
Because it is tied into neuro-circuitry, the dynamic of reactions is almost impossible to deflect simply by talking. Fear and shame keep you from hearing each other no matter how much mirroring and active listening you try to do.
It’s a lack of connection rather than a lack of communication that is the problem. Reconnecting on a non-verbal level is as important as finding the right words to get back together. This perception is behind the title of the book Love wrote with Steven Stosny, How to Improve Your Marriage Without Talking About It.
Their strategies for improving relationships include plenty of talking, but the words are secondary in importance to the level of interest and concern partners show each other through touch, looks and facial expressions.
Retraining to Reconnect
Love emphasizes that men and women are equally in need of love and intimacy and equally capable of experiencing it. Her approach is to train couples to be sensitive to their differing vulnerabilities and to practice ways of connecting without triggering fear or shame.
For example, she urges women to understand that for men, a relationship during a time of stress may not feel like a place of safety. Instead, it may seem more like a testing ground for their ability to perform and protect. If they feel they will face judgment about how well they're doing their jobs as men, they might well try to avoid the relationship when dealing with hard problems.
Men don’t realize that a woman’s fear of isolation and deprivation can be triggered by leaving her out of the important parts of his life. Men abandon their wives to manage their own dread of failure and inadequacy, leaving them alone with their needs. As Love and Stosny put it, “A man needs to value the longing of a woman’s heart, or he will leave her alone in her dreams and become the failure he dreads.”
The key is to understand the core vulnerabilities and avoid setting them off while also offering assurance in response to the underlying fear or shame.
I’d like to know if you have found these general ideas about the ways men and women react to stress to be accurate in the context of your own relationship. Do you think these tensions have added to the problems of depression? | <urn:uuid:f469abd4-0dea-4a6b-ab03-8afb78c7026e> | CC-MAIN-2016-50 | http://www.storiedmind.com/depressed-partners/reconnecting-depressed-partners-despite-fear-shame/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542712.12/warc/CC-MAIN-20161202170902-00281-ip-10-31-129-80.ec2.internal.warc.gz | en | 0.970851 | 1,556 | 3.65625 | 4 |
Background: Cataract remains the leading cause of global blindness. Evidence from population-based surveys carried out up to 2000 and the launch of the VISION 2020 initiative to address avoidable blindness showed that women in low- and middle-income countries had lower cataract surgical coverage (CSC) than men.
Methods: A systematic review identified population-based surveys reporting CSC in low- and middle-income countries published since 2000. Researchers extracted data on sex-specific CSC rates and estimated the overall CSC differences using meta-analyses.
Results: Among the 23 surveys selected for this review, 21 showed higher CSC among men. The pooled Peto odds ratio indicated that men were 1.71 times (95% CI: 1.48 to 1.97) more likely to have had cataract surgery than women. The risk difference in the rates of surgery varied from –0.025 to 0.276, and the combined average was 0.116 (95% CI: 0.082 to 0.149).
Discussion: Gender inequity in the use of cataract surgical services persists in low- and middle-income countries. We estimate that blindness and severe visual impairment from cataract could be reduced by around 11% in low- and middle-income countries if women were to receive cataract surgery at the same rate as men. Additional effort globally is needed to ensure that women receive the benefits of cataract surgery at the same rate as men.
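The combined estimates above come from standard meta-analysis pooling. As a rough illustration of how a set of per-survey risk differences can be combined into one average with a 95% confidence interval, here is a minimal fixed-effect (inverse-variance) sketch; the input numbers are hypothetical, not data from this review:

```python
# Hedged sketch: fixed-effect (inverse-variance) pooling of per-survey
# risk differences, the general approach behind a combined estimate and
# 95% CI like those reported above. The inputs below are made up.
import math

def pool_risk_differences(estimates):
    """estimates: list of (risk_difference, variance) pairs."""
    weights = [1.0 / var for _, var in estimates]
    pooled = sum(w * rd for (rd, _), w in zip(estimates, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Two hypothetical surveys of equal precision:
pooled, ci = pool_risk_differences([(0.10, 0.01), (0.20, 0.01)])
# pooled is the precision-weighted mean, here 0.15
```

A fixed-effect pool weights each survey by the inverse of its variance, so larger, more precise surveys dominate the combined average.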
Math Tests and Quizzes Aligned to the Common Core Math Curriculum. We now have a full line of tests and topic/skill-based quizzes for each section of the Common Core curriculum.
Tests are fully available to members for immediate download. There is a test sampler in each section for those who haven't signed up yet and want to see what it is all about. We will continue to add new tests on a monthly basis. Yummy Math - Real World Math Tasks. Math Worksheets Land - For All Grade Levels. Great Maths Worksheets - FREE resources, NEW and differentiated. Free Geometry Worksheets. PrepFactory. Interactive Math Activities, Demonstrations, Lessons with definitions and examples, worksheets, Interactive Activities and other Resources. Virtual Nerd: Real math help for school and home.
CK-12: Math & Science Lesson Plans, Worksheets, Real World Examples & Teacher Resources. Why Do I Need to Learn Math. Real Life Math. Starter Of The Day. Start your Mathematics Lesson with our Starter Of The Day.
Boost your mind power with these brain exercises. Give your mental ability a workout with a range of mathematical puzzles, speed tests and creative ideas. Click on the date in the calendar below for the Starter Of The Day. The Starter for the 9th April is the ultimate customisable revision lesson Starter. Math Teacher Tools. Free Math Worksheets. Key Stage 3 SAT Maths Tests. Math. L1 lesson plans for functional maths. Free Online Textbooks, Flashcards, Practice, Real World Examples, Simulations. Simplifying Fractions Game. Multiply Fractions Soccer Shootout. Maths Games for KS2: designed by a teacher for teachers. A+ Click Math Problems and Logic Puzzles for Grade K-1 K-12.
Starfall. Free Educational Videos for K-12 Students. Numbers and Equations. Educational Videos and Games for Kids about Science, Math, Social Studies and English. Tessellation Creator. The discovery of a historic "tile" shakes the world of mathematics. SCIENCE - A team of mathematicians has shaken up the world of math by discovering a new type of pentagon capable of "tiling the plane," meaning that the tiles can fit together on a flat surface without overlapping or leaving gaps.
Only fifteen pentagons of this type have been discovered so far, and none had been found in thirty years. "It's almost as impressive as discovering a new atom," said Dr. Casey Mann, a mathematics lecturer at the University of Washington Bothell and a member of the team. The team made the discovery using a computer program designed for the occasion. "We discovered the tile by doing an exhaustive computer search through a very large but finite set of possibilities," Casey Mann explained to The Guardian, adding that the team had been "a bit surprised" to discover a new type of pentagon.
The new pentagon. Prime Numbers Up To 100 Game. Resources for mathematics enrichment. Geometry Resources Wall. Area and Perimeter. Math Games - HOODA MATH - 500 Cool Math Games. Free Online Textbooks, Flashcards, Practice, Real World Examples, Simulations. Interactive Unit Circle. Www.mathsrevision.com. A+ Click Math Problems and Logic Puzzles for Grade K-1 K-12. Area. Problems of the Month.
A problem that you can’t solve in less than a day is usually a problem that is similar to one that you have solved before. But in real life, a problem is a situation that confronts you and you don’t have an idea of where to even start. If we want our students to be problem solvers and mathematically powerful, we must model perseverance and challenge students with non-routine problems. Administrators, teachers and parents should facilitate and support students in the process of attacking and reasoning about the problems. The solution is not as important as the process of problem solving. The educator or parent should not be impatient with the student’s struggle.
The principal should embrace the concept of problem solving and model problem-solving leadership, being a facilitator of non-routine problems. Once the problem is presented to the students, the principal should be visible in facilitating the tasks alongside the teachers. Math is Fun - Maths Resources. Welcome to Math Playground. Play with numbers and give your brain a workout! Worksheets - www.m4ths.com GCSE & A Level maths DVD's.
Maths-it Podcast - Revision podcasts and worksheets for GCSE and A Level. Math Worksheets Land - Tons of Printable Math Worksheets From All Grade Levels. Maths Videos. Here is a collection of hand-picked mathematical videos freely available on YouTube.
If you are looking for a particular topic you may like to begin on our topics page, where you can also find starters, visual aids and interactive resources. Please let us know if you find any interesting videos we should include in this list. A History of the Calendar - a fast-paced animation explaining the development of the modern calendar. [Time: 3:55] A Mathematical Fable - it's not just the squares on the sides of right-angled triangles that add up! [Time: 8:56] Angle Properties Song - a song about the angles made with parallel lines. Maths Videos. AQA GCSE Mathematics Papers - Modular. Hint. Lessonquiz_national. Mathn01m. Tarsia.
The activities created using this software can be presented in printable form, ready to cut out. Formulator Tarsia known earlier as Formulator Jigsaw is an editor designed for Teachers of Mathematics creating the activities in a form of jigsaws or dominos etc for later use in a class. It includes the powerful equation editor for building the math-expressions for the activities. An advanced feature of text placement along the side of the shape makes this tool irreplaceable software for fast activity creation.
Formulator Tarsia has become a powerful tool for learning activities, since it supports activity templates. The Formulator Tarsia installation package contains contributed samples, which Hermitech Laboratory greatly appreciates. Table of Contents - Math Open Reference. MathTV - Videos By Topic. Geometry Rap Songs - Learn Terms, Shapes & Formulas - Flocabulary. Negative numbers introduction. Learn how to add and subtract negative numbers. Math. Math Help for Parents And Their Kids.
Pythagorean Theorem Jeopardy. Probability explained. Statistics Worksheets - printable math worksheets for high school Statistics and Probability. Monty Hall. The Probability Song. GCSE MATHS TAKEAWAY. HippoCampus - Homework and Study Help - Free help with your algebra, biology, environmental science, American government, US history, physics and religion homework.
WatchKnowLearn - Free Educational Videos for K-12 Students. LearnZillion. MathTV - Videos By Topic. Problemlösning. Neo K-12. Math TV. Algebra 1 Online Modules! Factors and Multiples Jeopardy Game.
Let's get started. Copyright © 2015 LearnZillion. MathTV - Videos By Topic. Problemlösning. Neo K-12. Math TV. Algebra 1 Online Modules! Factors and Multiples Jeopardy Game. Factors and Multiples Jeopardy is a free online game for middle school students and teachers.
The questions in this game focus on important concepts such as factors, multiples, prime factorization, GCF, and LCM. Important facts you should know that will help with this game:
- A number is divisible by its factors.
- A number is divisible by 2 if the last digit is even.
- A number is divisible by 5 if its last digit is 0 or 5.
- A number is divisible by 3 if the sum of all of its digits is divisible by 3.
Visit this link to play other free jeopardy math games. Return from the Factors and Multiples Jeopardy game to the Middle School Math Games webpage. Math. I Love Maths Games. Math Games Arcade - Math Fighter: Integer Operations. Volume Experiment. Opus: Free, Common Core Math. A+ Click Math Problems and Logic Puzzles for Grade K-1 K-12. Free Videos. The Game of SKUNK.
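The divisibility shortcuts above are easy to check by brute force. A small sketch (illustrative only, not part of the game):

```python
# Check the divisibility facts stated above: last-digit rules for 2 and 5,
# and the digit-sum rule for 3.

def divisible_by_2(n: int) -> bool:
    return abs(n) % 10 in (0, 2, 4, 6, 8)      # last digit is even

def divisible_by_5(n: int) -> bool:
    return abs(n) % 10 in (0, 5)               # last digit is 0 or 5

def divisible_by_3(n: int) -> bool:
    return sum(int(d) for d in str(abs(n))) % 3 == 0  # digit-sum rule

# The shortcut rules agree with direct remainder tests:
for n in range(1, 1000):
    assert divisible_by_2(n) == (n % 2 == 0)
    assert divisible_by_5(n) == (n % 5 == 0)
    assert divisible_by_3(n) == (n % 3 == 0)
```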
An earthquake might destroy my house; should I buy insurance? My mathematics teacher might collect homework today; should I do it? Ask students to share their responses to each of these scenarios. Ask students why their responses may be different from their classmates. Every day each of us must make choices like those described above. Making the connection between choice and chance is basic to understanding the significance and usefulness of mathematical probability.
The game of SKUNK presents middle-grade students with an experience that clearly involves both choice and chance. The Game of SKUNK. Virtual Nerd: Real math help for school and home. Home Page. We provide teachers and students with mathematics relevant to our world today … Parabola. The parabola has many important applications, from a parabolic antenna or parabolic microphone to automobile headlight reflectors to the design of ballistic missiles.
They are frequently used in physics, engineering, and many other areas. Strictly, the adjective parabolic should be applied only to things that are shaped as a parabola, which is a two-dimensional shape. However, as shown in the last paragraph, the same adjective is commonly used for three-dimensional objects, such as parabolic reflectors, which are really paraboloids. Sometimes, the noun parabola is also used to refer to these objects. Though not perfectly correct, this usage is generally understood. Math Open Reference. ULearniversity. Get The Math. | <urn:uuid:32e91da8-be1e-4a0c-80fb-eb7f58540149> | CC-MAIN-2019-39 | http://www.pearltrees.com/annafusco | s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514577363.98/warc/CC-MAIN-20190923150847-20190923172847-00251.warc.gz | en | 0.806679 | 2,251 | 2.5625 | 3 |
The art of pastry making finds its roots in antiquity. Sweet dishes were served alongside or after meals, to add a sweet touch. The French pastry arts were strongly influenced by Italian pastry and other European styles, until the 19th century, when French pastry truly came into its own. Today, Paris is home to most of the new developments in French pastry, which then spread throughout the globe! French pastry chefs are some of the most renowned in the world, due to their know-how and their creativity. New technology and techniques lead to the constant creation of new and exciting recipes!
Parisian Pastry Chefs
Many of the world’s great pastry chefs stand out thanks to their creativity, innovating new recipes and blending new flavors. Let’s take a look at some of the great French pastry chefs that have been key figures in these sweet developments.
Charles Dalloyau was first noticed by the king at the court of Versailles in 1682, and his family continued cooking for the court until the creation of the Dalloyau pastry house at the beginning of the 19th century. The Dalloyau house was created by Jean-Baptiste, one of Charles’ descendants. He invented the idea of ready-made pastry, an innovative service at the time. He set up shop on rue du Faubourg Saint-Honoré, where the store still stands today.
Gaston Lenôtre (1920-2009) was a famed Parisian pastry chef and the author of several cookbooks. He was also one of the great innovators of the art of French pastry, lauded as much by his fellow professionals as by the media. He trained several well-known pastry chefs, including our macaron and pastry expert.
Pastry chef Pierre Hermé began his career with Lenôtre, before becoming the head pastry chef at Fauchon and then Ladurée. He finally created his own pastry shop, Pierre Hermé Paris. He eliminated the elaborate décor that is usually so cumbersome to pastry chefs, instead creating new and exciting recipes, using sugar as a seasoning to bring out nuances in flavor, much as salt is used in traditional gastronomy.
We also must mention Jean-Paul Hévin, an artisan chocolatier and pastry chef, as well as Christophe Michalak, the head pastry chef at the Plaza Athénée in Paris.
Famous Parisian Pastries
The Paris-Brest is a pastry that appears to be in the form of a crown, but it’s actually representative of a bicycle wheel! This pastry was invented in honor of the famed Paris-Brest bicycle race. A choux pastry wheel is stuffed with praline cream and garnished with slivered almonds.
The Opera combines mocha flavors, with a sponge cake soaked in Grand Marnier or Cointreau syrup sandwiching ganache and buttercream. The cake is covered with a dark chocolate icing. It was invented in 1966 by Cyriaque Gavillon for Dalloyau.
The Saint-Honoré was invented in Paris in 1848 by Chiboust. The cake’s circular base is made of puff pastry or pie dough, surrounded by caramel-coated cream puffs. Its center is filled with either kirsch cream or vanilla whipped cream. Its name comes from the street where the pastry chef’s shop could be found, but Saint Honoré is also the patron saint of bakers.
The puits d'amour – French for "well of love" – is a small round of puff pastry with a hollowed-out center – the "well" – filled with caramel pastry cream. In the original 18th century recipe, it was filled with redcurrant jelly, but in the 19th century, pastry chefs began replacing it with cream and topping it with a choux pastry crown. In Paris, two pastry shops still use the original recipe: Coquelin and Stohrer.
Millefeuille, also known as Napoleon in English, was created by the pastry chef Pierre François de la Varenne in the 17th century. Three layers of puff pastry sandwich two layers of pastry cream. The pastry is topped with fondant or icing sugar. Its French name, when directly translated, means “a thousand layers,” a reference to the sheets of puff pastry.
Bourdaloue tart gets its name from the Parisian street that was home to the shop of its inventor. It is made up of a sweet, crunchy dough, almond cream and soft, cooked pears. With its buttery aroma and flavors of vanilla and amber rum, it’s a true delight!
The history of the financier, an almond-flavored cake in the form of a gold brick, is fairly amusing: pastry chef Lasne invented it near the Paris stock market in 1890. The little cake was perfect for the busy bankers, who could eat it in one bite without getting their hands dirty!
The macaron has recently become popular in the States. This little round almond cookie has a long history, but it wasn’t until the 20th century that a pastry chef from Ladurée (founded in Paris in 1862) invented the “Parisian macaron,” made of two shells assembled with a flavored cream. Other than those at Ladurée, the best macarons can be found at Pierre Hermé and Fauchon; all three houses are constantly coming up with new and interesting flavor combinations!
Guide to Integrating Forensic Techniques into Incident Response: Recommendations of the National Institute of Standards and Technology
Karen Kent (author)
Forensic science is defined as the application of science to the law. Digital forensics is the application of science to the identification, collection, examination, and analysis of data while preserving the integrity of the information and maintaining a chain of custody for the data. This guide provides recommendations for performing the forensic process. It also provides information about using the analysis process with four types of data sources: files, operating systems, network traffic, and applications. It focuses on explaining the basic components and characteristics of data sources within each type, as well as techniques for the collection, examination, and analysis of data from each type. It also provides recommendations for how multiple data sources can be used together to gain a better understanding of an event.
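As one concrete illustration of the integrity and chain-of-custody concern mentioned above (an illustrative sketch, not a procedure taken from the guide), examiners commonly record a cryptographic hash of each collected file so that later analysis can demonstrate the data was not altered:

```python
# Illustrative sketch (an assumption, not a step quoted from the guide):
# recording a cryptographic hash of each collected file is one common way
# to preserve data integrity for the chain of custody, since any later
# alteration of the file changes its hash.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Hash a file in fixed-size chunks so large evidence files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Recomputing the same hash during examination and comparing it to the value recorded at collection time is a simple way to verify that the evidence copy is unchanged.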
ROUTING: Routing & Trimming Polypropylene
Polypropylene is what many people mean when they say "plastic" because it is the base of many products and is capable of being fabricated by many processes. From fiber to film, from injection molding to thermoformed sheet, polypropylene is as versatile as it is varied. It can be formulated to result in a wide range of melt points, weights, stiffness and machinability. Formulations can provide a substance somewhere between traditional rubber and conventional plastics. Other possibilities may be filled or reinforced grades, which offer good stiffness.
One property of polypropylene, chemical or solvent resistance, makes it ideal for tanks, vessels and bottles used in the chemical industry. PP is also used for clean room furniture and fixtures. Auto interior and trim parts, shrouds, covers and storage bins are other PP products which may be trimmed or routed in the fabrication process.
Most PP products are machined on CNC routers. Hand held electric or air routers do not normally give satisfactory results. In most instances, PP is a difficult material to work with because of its gummy nature. It is always susceptible to reweldment or wrap around of waste material on the cutting tool. It can be challenging to obtain a proper finish on the end product. Feed rates are critical for productivity and cutting tool selection is critical for best results.
Anyone who has cut or trimmed anything but the most dense and stable PP can attest to the above. Machining PP is a continuous improvement process often initiated by a basic trial and error process.
There are a few principles to employ when machining polypropylene. Every attempt should be made to cut large chips. This can be accomplished by use of slow helix tools shown in Figures 1 and 2. Slow helix tools tend to take a larger chip than conventional helix tools and are available in single or double flutes in both upcut and downcut spirals. Here is where some trial cuts should be made to determine whether single or double flutes and up or down spiral works better in a specific application. A single edge O flute, shown in Figure 3, may also be the best answer for a particular job. Slow helix tools are also available with a bearing pressed on the end of the cutting edge for guided trimming operations if a CNC router is not available.
Because of the gummy nature of PP and the inherent heat generated by cutting action, high-speed steel tools are not recommended. Solid carbide bits will outperform high-speed steel, carbide tipped or diamond tools and are the only type recommended for cutting PP.
High feed rates should be employed along with lower spindle speeds. This will tend to abate reweldment behind the cut and waste wrap around. Feed rates should be increased until such time as the finish is unacceptable. Spindle speed should then be reduced until the finish is once again acceptable. The process can then be repeated until the optimal result is achieved. This process should be repeated, then cataloged, for each unique setup.
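The iterate-feed-then-speed procedure above is effectively tuning chip load, the material removed per cutting edge per revolution. A small sketch of that relationship, with purely illustrative numbers rather than recommendations from this article:

```python
# Hedged sketch of the feed/speed relationship being tuned above.
# Chip load (inches per tooth) = feed rate / (spindle speed * flutes).
# Raising the feed or lowering the RPM both increase chip size, which is
# the stated goal for a gummy material like PP. The example numbers are
# illustrative only.

def chip_load(feed_ipm: float, rpm: float, flutes: int) -> float:
    return feed_ipm / (rpm * flutes)

# A single-flute tool at 300 in/min and 18,000 RPM:
load = chip_load(feed_ipm=300.0, rpm=18000.0, flutes=1)
# load is about 0.0167 in/tooth; halving the RPM would double the chip size.
```

Cataloging the feed, speed, and resulting chip load for each setup, as suggested above, makes the next job's starting point a calculation instead of a guess.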
One may want to consider a two-pass process to optimize both feed rate and piece part finish. If a tool changer is available, the second pass can be taken with a finishing tool such as shown in Figure 4. In all instances, when the depth of cut exceeds the cutting edge diameter of the tool by more than a factor of three, multiple passes should be taken. When such is the case, the second pass should be taken with the same tool as the first cut.
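The depth-of-cut rule above can be expressed as a small pass-planning helper. This is an illustrative sketch built only on the stated three-times-diameter rule; splitting the passes into equal depths is an added assumption:

```python
import math

# Sketch of the rule stated above: once the depth of cut exceeds 3x the
# cutting-edge diameter, split the cut into multiple passes. Dividing the
# total depth into equal passes is an assumption for illustration.
def plan_passes(depth: float, tool_diameter: float) -> list:
    max_per_pass = 3.0 * tool_diameter
    n = max(1, math.ceil(depth / max_per_pass))
    return [depth / n] * n

# A 1.0 in deep cut with a 0.25 in tool (0.75 in maximum per pass)
# becomes two passes of 0.5 in each:
passes = plan_passes(1.0, 0.25)
```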
Fixturing also becomes very important to achieve acceptable productivity. It is often recommended that PP setups use gasket tape to improve hold-down. Many times, however, the tape is just placed on the spoilboard surface. When the vacuum is turned on, the foam has no place to go and compresses flat. The result is that the tape loses its memory and allows the part being cut to vibrate. Any inconsistency or warpage in the part will also be exaggerated by the flattened gasket tape. Either situation will facilitate tool breakage and a less than achievable finish. This problem can be mitigated by making a channel in the spoilboard before applying the gasket tape. Typically the channel should be one half the thickness of the gasket tape. Grooving the spoilboard will enable the part to achieve a better vacuum and will prolong both the life of the tool and the gasket tape. See Figure 5 for a description.
Polypropylene can be cut effectively in a CNC router environment. It is, however, a more complex material to machine than either PET or ABS, whose routing and trimming we have previously discussed.
What is acupuncture?
Acupuncture is a therapy of traditional Chinese medicine that originated in China over 5,000 years ago. It is based on the belief that people have a vital energy, called “Qi,” that circulates through twelve invisible energy channels, known as “meridians,” on the body. These energy channels are like rivers flowing through the body to irrigate and nourish the tissues. Each meridian is associated with a different organ system. An imbalance in the flow of Qi throughout a meridian is how disease begins. An obstruction in the movement of these energy rivers is like a dam that backs up, causing stages of excess and deficiency in the body.
What is a treatment like?
Acupuncturists insert needles into specified points along meridian lines to unblock the obstructions at the dams and reestablish the regular flow of Qi through the body. There are over 1,000 acupuncture points on the body. Along with the usual method of puncturing the skin with the fine needles, the practitioners of acupuncture also use heat, pressure, suction or impulses of electromagnetic energy to stimulate the points. Prior to starting a session, a patient’s health is assessed by taking their pulses and looking at their tongue, both providing information on the internal function of the body. A session usually lasts around 30 minutes.
What can it treat?
Acupuncture treatments can help the body's internal organs correct imbalances in their digestion, absorption and energy production activities, and in the circulation of their energy through the meridians. Some of the conditions treated at the Center for Natural Healing with acupuncture include migraines, tension headaches, sinusitis, common cold, arthritis, menstrual cramps, fibromyalgia, low back pain and infertility. Each protocol is individually designed based on the patient's needs. All ages and all conditions can benefit from acupuncture.
Suggested Reading: "Between Heaven and Earth, A Guide to Chinese Medicine" by Harriet Beinfield, L.Ac.
Which of the following was symbolic of the rise of American influence in the fine arts after the Second World War?
Select an Answer
(A) Mary Cassatt’s work in Impressionism
(B) Thomas Eakins’s work in Realism
(C) Jackson Pollock’s work in Abstract Expressionism
(D) Grant Wood’s work in Regionalism
(E) John S. Copley’s work of realistic portraiture
Jackson Pollock was a leading American artist of the 1940s and 1950s, and his Abstract Expressionist work helped make American painting much more prominent in the international art world following the Second World War. By contrast, Mary Cassatt’s work (A) was most prominent in the late nineteenth century. Thomas Eakins (B) was likewise most active in the late nineteenth century. Grant Wood’s (D) most famous work appeared in the 1930s. John S. Copley (E) worked in the late eighteenth century.
Learn About PHMSA Programs
Energy products and hazardous materials support the American economy and our way of life. Americans use oil and natural gas every day to heat, cool, and light our homes and businesses, travel to work, and provide raw material for the many other things we use, including plastics, fibers, and paints. We also use a variety of chemicals to clean our water, create medicines, fertilize crops, and manufacture clothing and many other essential commodities.
PHMSA is charged with ensuring that the transportation of these chemicals and energy products is conducted in the safest possible manner, free of risk to the American public, property, and the environment, regardless of the transportation mode. To learn more about PHMSA's hazardous materials and pipeline safety programs, including the issuance of regulations and ensuring compliance through inspections and enforcement, visit the links below.
It’s not often we see our planet and satellite from the perspective of another world, and we’ve never seen them like this before. That little white dot zooming around the slightly bigger white dot, that’s the Moon orbiting Earth as seen from Mars.
To celebrate its 20th anniversary, the European Space Agency’s (ESA) Mars Express orbiter not only gave us the world’s first live stream from Mars but also turned back to face its home planet and snapped a nostalgic family portrait of Earth and the Moon, inspired by the iconic “Pale Blue Dot”.
Thirty-three years ago, on its way out of the Solar System, Voyager turned around one last time and took what would become an iconic image of our planet. “That's here. That's home. That's us,” said Carl Sagan, who was instrumental in that image being taken, when it was first released. This new image of Earth and its natural satellite may not be the most impressive of our planet you’ve ever seen, but it still manages to capture that feeling of “Hey, wow, that’s us”.
“[W]e wanted to bring Carl Sagan’s reflections back to the present day,” explained Jorge Hernández Bernal, who is part of the Mars Express team behind the image.
“In these simple snapshots from Mars Express, Earth has the equivalent size as an ant seen from a distance of 100 metres [328 feet], and we are all in there. Even though we have seen images like these before, it is still humbling to pause and think: we need to look after the pale blue dot, there is no planet B.”
The first planetary image Mars Express took 20 years ago was also of Earth and the Moon, taken from 8 million kilometers (~5 million miles) away on its way to Mars. The new images were taken from 300 million kilometers (186.4 million miles) away, after the orbiter has completed over 24,000 orbits of the Red Planet.
The Mars Express mission was originally designed to last just one Martian year (687 Earth days). In the 20 years since its launch, it has traveled over 1.1 billion kilometers in its orbit of Mars, communicated with 12 other orbiters, landers, and rovers that have since landed on the planet's surface, and contributed to over 1,800 published scientific papers helping us explore Mars's atmosphere, climate, and surface. It shows no signs of slowing down and has been approved to continue until at least 2026.
“Perhaps it will only be another 20 years before humans can look up from the surface of Mars to see Earth in the night sky,” said Mars Express project scientist Colin Wilson.
Not much happening in the sky today, so let’s talk about “planets out of bounds.” In terms of astrological energy, you can think of a planet out of bounds as a little dog digging under the fence and running around shouting “FREEEEE!”
But what does it mean physically? What is the astronomy of “out of bounds?” Where are these “bounds” and who decided on them?
There are two ways of looking at the solar system and the planetary movements. (Well, more than that, of course, but let’s keep it as simple as possible.)
The ancient astronomers, of course, saw the sky, the stars, and the “wandering stars” (“planetes” in Greek) from the surface of the Earth.
They saw that the planets all moved in roughly the same line across the sky, night after night. Sometimes they were above the line, sometimes below, but almost always they stayed within a band that covered about one-quarter of the visible sky.
In imagination, they took the Earth’s equator and projected it outwards onto the dome of the sky, calling this the “celestial equator.”
Then they observed the furthest north and south points that the Sun reaches at the two solstices, and projected those lines onto the sky dome as well. (We know these lines as the Tropics of Cancer and Capricorn.)
The image at right shows these lines on the surface of the Earth. The image also shows that the Earth is tilted to the plane of the ecliptic, which is the plane in which the planets move around the Sun.
Now let’s look at the solar system as a whole from outside, as an astronomer would view it.
This image, from NASA, is a diagram that shows that the planets don’t all move in exactly the same plane. The solar system is not like the rings of Saturn, where all its moons and the rings are in one plane like a sheet of paper.
The “plane” in which the planets move is kind of an average. Some are tilted up, some are tilted down, and some are truly wacky. So you can see that the tilted ones sometimes appear to be north of the celestial equator and sometimes south.
Now the third image… a view of the night sky from mid-northern latitude on today’s date. I’ve added the approximate locations of the celestial equator and the tropics of Cancer (+23) and Capricorn (-23) as dashed lines.
As you see, all the planets in this view are currently in bounds, although Mars and Pluto are close to the southern boundary. Right now only Mercury is out of bounds to the south. It will remain south of the Tropic of Capricorn until June 24, when its orbital tilt, as seen from Earth, brings it back within the safe confines of the zodiacal zone.
Summary: When a planet is “out of bounds,” the astronomy is simply that the combination of its tilted orbit and our tilted planet makes it appear to be north or south of the Tropics in the sky. Astrologically, its energy behaves as though it has slipped the leash and is for a time free of constraints, both good and bad.
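The "out of bounds" test described above reduces to a simple declination comparison, which can be sketched in a few lines. This is only an illustration: the 23.44° obliquity constant is the standard modern value, and the sample declinations are made-up numbers rather than real ephemeris data.

```python
# A body is "out of bounds" when its declination (angle north or south of
# the celestial equator) exceeds the Sun's maximum declination, i.e. the
# obliquity of the ecliptic, roughly 23.44 degrees.
OBLIQUITY_DEG = 23.44

def is_out_of_bounds(declination_deg: float) -> bool:
    """Return True if a body at this declination lies outside the Tropics."""
    return abs(declination_deg) > OBLIQUITY_DEG

# Illustrative declinations (not real ephemeris values):
print(is_out_of_bounds(-24.1))  # south of the Tropic of Capricorn -> True
print(is_out_of_bounds(18.3))   # safely inside the zodiacal band  -> False
```

Real astrological software would pull the declination from an ephemeris for a given date; the comparison itself is all there is to the "bounds."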
Automatically Created Palindromes
Palindromes are spelled the same both forwards and backwards.
There are many great palindromes, and many collections of
them exist -- just search for "palindrome" on any web search engine.
However, the palindromes below are entirely original. Furthermore,
most of them, as indicated, were created with the AI Palindrome Discovery System (a homework
project (including Java code) for exploring search strategies
in our Artificial Intelligence course). To my knowledge, this is the
first automatic creation of palindromes. Read more about the project
in our published cs education paper.
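The core of such a system is easy to sketch, though the real Java project is far more elaborate. Below is a toy reconstruction in Python, not the original system: a palindrome test that ignores case and punctuation, plus a brute-force "search" that wraps a seed word with pairs from a tiny vocabulary (the word list is an assumption for illustration only).

```python
import re

def is_palindrome(phrase: str) -> bool:
    """True if the phrase reads the same both ways, ignoring case and punctuation."""
    letters = re.sub(r"[^a-z]", "", phrase.lower())
    return letters == letters[::-1]

def seeded_search(seed: str, vocabulary: list[str]) -> list[str]:
    """Naive search: try wrapping the seed with every pair of vocabulary words."""
    found = []
    for left in vocabulary:
        for right in vocabulary:
            candidate = f"{left} {seed} {right}"
            if is_palindrome(candidate):
                found.append(candidate)
    return found

print(is_palindrome("Madam, I'm Adam."))  # True
print(seeded_search("a", ["straw", "warts", "no", "on"]))
```

The actual system searched far larger spaces with smarter strategies (that was the point of the AI homework), but the validity check at its heart is exactly this letters-only comparison.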
Also, you are urged to check out this animated palindrome GIF (by
Ph.D. student Pablo Duboue). It shows one of our favorite
computer-discovered palindromes in 3D. All the letters rotate 180
degrees and then the whole thing rotates resulting in the same phrase.
New: Moreover, check out this other
animated palindrome applet, by Michael Birken. It shows the same
palindrome in a cool circular 2D animation that would only be
possible for a palindrome.
Symmetric "Kargo", by Michael
Birken. You cannot argue that this does not say Kargo. You cannot
argue that this is not symmetric (looks the same in a mirror).
'Course, it is not a palindrome, but Michael was thinking a lot about
palindromes when he made it, and he works at Kargo with me.
Automatically created palindromes.
The student in parentheses wrote the search
procedure, ran the experiments and manually selected the worthy
palindrome from the system's output, which is usually big.
The following are the same as above, but by Yours Truly,
seeded with one of the
words: AI, OOP, Java, taught, SIG, or computer:
- re: Solomon no mo' loser (Tejpaul Bhatia)
- Sex or excel? Alec xeroxes. (Corey Robert Tripp)
- Minimax or excel? Alec xerox a minim. (Corey Robert Tripp)
- spacecraft farce caps (Boyle Lee)
- Len serf Fresnel. (Boyle Lee, whose friend Len's computer is named Fresnel!)
- Sex if fat affixes. (Haoqiang Zheng)
- trade man named art (Haoqiang Zheng)
- sat siva, a vistas (Boyle Lee)
- Siva is a Hindu god representing destruction... if he were sitting in front of
you, that would definitely be a spectacular sight (a vistas).
- Infidels led Ifni. (Gautam Nair)
A former Spanish possession on the Atlantic coast of southwest Morocco. It
was ceded to Spain in 1860, but overseas control was nominal until 1934.
Ifni was returned to Morocco in 1969.
- Snog at coy octagons. (Daniel Liu)
- Iambic IBM AI. (Daniel Liu)
- Seven eves. (Daniel Liu)
- Sad, I misuse Jesus, I Midas. (Daniel Liu)
- red dustcart tame hem attracts udder (Arie Addi)
- mix a man a maxim. (Huafeng Xu)
- Red limpid dip milder. (Jun Hou)
- Dog's God. (Jun Hou)
- Sub in money: yen, omnibus. (Jun Hou)
- Eureka mixes sex -- I make, rue. (Jun Hou)
- Putrid yapping noshes rep insures a mallet. Sahib Bart? Rabbi? Ha,
Stella! Maseru snipers, eh? Song, nip paydirt up. (Mike Blount)
- Masseuse sues Sam. (Jenny Weisenberg)
- Massive Levis, Sam! (Jenny Weisenberg)
- sun ate thor oh tetanus (Tarun Kapoor)
- wo cow (Tarun Kapoor)
- Tips? Secondrater retard! No cesspit! (Corey Leong)
- Sex Elicits Alex, Elastic Ilexes. (Alexander Day Chaffee)
- stunk college israel lear siegel locknuts (Andrew Bryant Ross)
- Sane Rat Arenas (Corey Tripp)
The following, by yours truly, were deemed grammatical, and
were produced from searches seeded with random words:
- Poor OOP.
- SUN, a Java Janus. (God)
- Dumb, O Java; Java job mud.
- ALA me taught: Evil liveth Guatemala.
- Debra, Java, LSD, Slav; ajar bed.
- Plato grasp OOP -- poops argot alp.
- Poofs note rap -- pareto NSF-OOP.
- Poops? I lewd! We LISP OOP.
- Poor iambics USC IBM air OOP.
- Ahem, ANSI SIG: Isis name -- ha!
- Ah computer, fret up; mocha.
- ``Adieu; grasp OOP - oops!'' argue Ida.
- None: Decimal SIG, Islamic Eden, on!
The following were not deemed grammatical, but win anyway:
- Infinitesimal clam I set in Ifni.
- Twenties acne? Encase it, Newt.
- Name none, Dr. Arden: one man.
- Revolted I -- Bidet, lover!
- Deepmined gossip piss, Ogden -- I'm peed.
- Wow! Wop powwow.
- Elf farm raffle.
- Net-safe fasten.
- Madame Nile pips pipeline, madam.
- Tangy gnat.
- Evade Dave.
- Re-sampled art? Trade LP maser.
- A mini-mower crew-o-minima.
- Nomad do-gooder, redo. O, goddam -- on!
- Redo? O Goddam -- on, nomad do-gooder!
- Wo, honey's deeds -- ye nohow!
- Hex, a jam! Ajax, eh?
- A ha-ha-foolery: Gyre loofah, aha!
- ETA misdeeds? I mate.
- Gentlewoman I dine mo' -- welt neg.
- Spacecapsule dome remodel U.S. pace caps.
- Sex elicit ore erotic ilexes
- A retaliatory pyro tail at ERA.
- Nil murderer, evil liver, ere drumlin.
- Nob Beebe, oh Phoebe, ebb on.
Manually created (no computer) by people Eric Siegel know's personally:
- On AI, play Al piano. (yours truly)
- Net safety? Byte fasten. (yours truly -- possibly the first palindrome joke)
- Y'no palindromes; Semor'd nil a pony. (yours truly -- possibly the first meta-palindrome)
- Are Macarena ban ERA camera? (yours truly)
- Dogma: I am God. (yours truly, I think; maybe I saw that
somewhere a long time ago)
- Screw! We R cs. (Huafeng Xu)
- Hey Emily, evade Davey! Limey, eh? (Homin Kaie)
- Not now Gib. A ton, not a big wonton. (Carl Sable)
- Straps laminate pet animal's parts. (Carl Sable)
- We? I vote cinema. Me! Nice to view. (Carl Sable)
- I, Len, am Oreo boys, sissy oboe Roman Eli. (Michael Littman)
- C Prof. May, asleep, peels a yam for PC. (Michael David Birken)
- On AI, Prof. Siegel, asleep, peels a leg; "E" is for piano. (Michael David Birken)
- To Prof. Siegel: lock college is for pot! (Michael David Birken)
- He's no me, Dr. Seus; Siegel Law allege issues R demons, eh?(Michael David Birken)
- I was Semor D. Nila: Prof. Siegel IRC! As sacrilege is for palindromes, saw I.
(Michael David Birken, regarding that I don't use my real name on IRC)
- Margo, realm the html aerogram. (Faith D'Lamater)
- Sit on a potato pan, Otis. (I don't know, but Dave Evans has it
as his email siggy)
Classic palindromes (not original)
- Madam, I'm Adam.
- Able was I ere I saw Elba.
- A man, a plan, a canal; Panama.
- Go hang a salami, I'm a lasagna hog.
- Doc note, I dissent, a fast never prevents a fatness; I diet on cod.
- Satan oscillate my metallic sonatas.
- No sir, away, a papaya war is on!
© 1999 Eric V. Siegel
May 07, 2010
The Jewish President of the European Central Bank, Jean-Claude Trichet, told Forbes that global governance is extremely necessary if we want to prevent another financial crisis. In his prepared printed and spoken remarks to the Council on Foreign Relations, Trichet emphasized that politicians, economists, and financiers must work across the Atlantic and collaborate on methods to create an international set of standards. It is his belief that through global governance, the resiliency of the global financial system can be assured, noting that ultimately it was governments’ use of taxpayer’s money, equivalent to around 25% of GDP on both sides of the Atlantic, that prevented another catastrophic great depression from occurring. With the backdrop of a U.S. financial regulation bill being stuck in the Senate, he argued three main points in support of creating internationally agreed rules.
370-28 Pull and Junction Boxes
Article 370 covers the installation and use of all boxes (and conduit bodies) used as outlet, junction, or pull boxes, depending on their use. [370-1] Boxes containing No. 18 through No. 6 conductors must be sized in accordance with the specifications in 370-16. These boxes are calculated from the sizes and numbers of conductors. Unlike those boxes, pull and junction boxes containing conductors of No. 4 or larger, under 600 volts, are calculated from the sizes and numbers of raceways.
Specific provisions for calculating pull or junction boxes containing these larger-size conductors are covered in Section 370-28. These provisions apply to both metallic and nonmetallic boxes. Because of various types of raceway arrangements, two calculation methods are provided-straight pulls and angle or U pulls. For raceways containing conductors of No. 4 or larger, and for cables containing conductors of No. 4 or larger, the minimum dimensions of pull or junction boxes installed in a raceway or cable run must comply with 370-28(a)(1) and (2). [370-28(a)]
370-28(a)(1) Straight Pulls
Boxes containing straight pulls are sized according to the largest raceway entering the box. The length of the box must not be less than eight times the trade diameter of the largest raceway. The first step is to determine the type of pull. Installing a raceway in each end of a wireway (gutter) is one example of a straight pull.
The ends of this pull box provide only enough room for one raceway each. Although these raceways are straight across from each other, this is only one type of installation. A straight pull does not necessarily mean the raceways are directly opposite one another. Boxes containing raceways on opposite walls, regardless of the offset, qualify as straight pulls.
Computing the minimum dimensions of a box containing straight pull(s) is an easy calculation—simply multiply the largest raceway (trade diameter) by eight. For example, calculate the minimum dimension for a pull box with two 2-inch conduits. The conduits are directly across from each other on opposite walls.
Since the conduits are the same size, multiply either trade diameter (2 inches) by eight. The minimum length required for this box is 16 inches. What about the width and depth of this pull box? Unless a raceway enters the back of the box, no requirement specifies the width or depth. The box’s width and depth must be large enough to provide proper installation of the raceway (or cable), including locknuts and bushings.
Use the largest raceway to size pull boxes containing different-size raceways. For example, a pull box contains two conduits located on opposite walls. While one conduit has a trade diameter of 2 inches, the other conduit is 3 inches. What is the minimum length for this pull box?
Simply select the larger raceway (3 inches) and multiply by eight. This pull box’s length must be at least 24 inches.
Boxes having more than two raceway entries are calculated exactly the same, provided the box only contains straight pulls. For example, the left side of a box contains three conduits, one 3-inch, and two 2-inch conduits. The right side contains one 4-inch, one 3-inch, and one 2-inch conduit. What is the minimum length required for this box?
Since the largest single raceway is 4 inches, multiply four by eight. The minimum length for this box is 32 inches. No extra space is required for additional raceways when calculating the minimum length of a box containing straight pulls. Again, the width and depth of the box must be large enough to provide proper installation of the raceway (or cable), including locknuts and bushings.
Just because a box has raceway entries on all four sides does not automatically mean it contains angle pulls—the conductors could pass straight through the box.
Where raceways enter on all four sides and all the pulls are straight, two separate calculations are needed. First, find the largest-size raceway (trade diameter) for the left/right (horizontal) dimension and multiply by eight. Next, find the largest-size raceway for the top/bottom (vertical) dimension and multiply by eight. For example, the left and right sides of a box contain one 3-inch conduit each. The top and bottom of the same box contain one 2-inch conduit each. What are the minimum length and height dimensions required for this box?
The largest-size raceway for the left/right (horizontal) dimension is a 3-inch conduit. Therefore, the minimum length for this dimension is 24 inches. Since the largest-size raceway for the top/bottom (vertical) dimension is a 2-inch conduit, the minimum height of the box is 16 inches.
A 24- by 16-inch box is required for the configuration shown in Figure 8. What if, upon installation, this box is rotated 90 degrees? Of course, one dimension would be correct, but the other would not. A solution, which eliminates this possibility, is to buy a square box equal to the largest dimension.
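Where straight pulls enter on all four walls, the same 8x rule is simply applied once per axis, and choosing a square box at the larger of the two results removes the rotation problem just described. A hypothetical helper along the same lines (diameters in inches):

```python
def four_wall_straight_pull_dims(horizontal_in, vertical_in):
    """Apply the 8x straight-pull rule separately to each axis for raceways
    entering on all four walls of the box (Section 370-28(a)(1))."""
    length = 8 * max(horizontal_in)  # governed by left/right wall raceways
    height = 8 * max(vertical_in)    # governed by top/bottom wall raceways
    square = max(length, height)     # square box works in either orientation
    return length, height, square

# Example from the text: 3-inch conduits left/right, 2-inch top/bottom.
length, height, square = four_wall_straight_pull_dims([3, 3], [2, 2])
print(length, height, square)  # 24 16 24
```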
Sometimes, confusion arises concerning these large boxes enclosing smaller conductors. For example, a 16-inch square box has one 2-inch conduit entering each side. Each conduit is filled to capacity with No. 12 and No. 10 conductors. Unless the cover is removed, the box appears to be the correct size. However, sizing this box per Section 370-28(a) is incorrect. Boxes enclosing conductors smaller than No. 4 must comply with Section 370-16. Another major concern pertains to the number of current-carrying conductors within each conduit. When more than three current-carrying conductors are in a raceway or cable, the allowable ampacity of each conductor must be reduced (derated) as shown in Table 310-15(b)(2)(a).
Next month’s In Focus, beginning with Section 370-28(a)(2), will continue discussion of pull and junction box sizing. This section covers calculation procedures for boxes containing angle or U pulls.
MILLER, owner of Lighthouse Educational Services and author of Illustrated Guide to the National Electrical Code, can be reached by phone at (615) 333-3336, or via e-mail at charles@charlesRmiller.com.
Company using Biomimicry
Biopower Systems is a company involved in creating biomimicry-based systems for generating renewable energy from the marine environment. In this article we will learn about two such systems, Biowave and Biobase. We will also discuss the various advantages and special features of such systems.
The Biowave system replicates the swaying motion that waves produce in sea plants. Power is generated through the hydrodynamic interaction of the specially designed blades with the oscillating flow field, which leads to absorption of maximum energy over the full water depth. The structure is designed so that the device can self-orient in the direction of the wave. For this reason, the device is able to absorb all the possible energy from the waves, irrespective of their direction. In case of rough weather, the system automatically ceases to operate and lies flat on the ocean bed.
The main aim of the system is not to produce energy by resisting the motion or forces of waves but to work in concordance with the movements of the ocean. This enables the system to be less complicated in structure and significantly lighter in weight. A simple construction also means a cost-effective, low-maintenance structure. Biowave also utilizes the O-Drive generators that we discussed in the previous article.
Biopower Systems has brought a revolutionary change in the way renewable energy is generated. Let’s take a look at the main advantages this system demonstrates:
- Once the system is installed, it works as an independent, autonomous unit.
- The structure is so simple that it doesn’t require insurance and also restricts the overhead costs
- Low maintenance, due to its simple mechanism and its ability to lie flat on the sea bed in heavy weather, which prevents damage from loads beyond its tolerances
- It is also environment friendly, creating minimum seabed disturbances due to its smooth motion.
- As it moves in the same direction as the waves, it produces slow, natural movement without creating any noise.
- The system doesn’t disturb the natural flow of the currents, neither does it creates variation in salinity and turbidity.
- It doesn’t emit any kind of pollutants or foreign particles in the sea water
- Carries a direct drive power conversion for continuous flow of energy
- Covers a large swept area
- Smoothly aligns itself with the direction of wave propagation
- Simple in construction and hydro dynamically optimized.
Biobase is not a power-generating system, but it utilizes the same concept of bionics. Biobase is a singular mounting system that imitates the seabed holdfast mechanism of large sea plants such as giant kelp. Unlike conventional mounting systems, Biobase doesn’t have a main shaft but instead has many small separate elements that share the overall load of the mounting. These elements are called “roots”. The vertical and lateral loads that are passed to the seabed are distributed equally among the roots; the mechanism thus helps to evenly distribute the forces and alleviate excess loads. Biobase utilizes rockbolting technology to fix the multiple roots to the seabed, so it doesn’t require a specialized drill ship for installation.
Once Biobase is installed, the surmounting structure attaches easily to the system with the help of a single surface vessel. Thus Biobase is an effective tool for securing offshore wind turbines and deepwater foundations.
The usage of bionics as a specialized engineering stream to generate new technologies that work in harmony with nature makes a strong and valid statement: working against the methodology of nature in order to create usable technologies for human welfare leads to harmful effects on both nature and human beings. Instead, taking inspiration from natural ecosystems to create engineering designs has not only led to more efficient sources for human benefit but has also proved to be more eco-friendly and reliable.
The Magic of Orson Welles
In the spotlight in this week’s History of Magic is Orson Welles, whom movie fans will know for ground-breaking films like Citizen Kane, The Lady from Shanghai, and The Third Man. Even today, cinema students study his innovative film techniques, while historians remember him for the night he terrorised a nation with his radio play, “The War of the Worlds.” Orson Welles was a writer, director, actor, broadcaster and all-around genius. He was also a magician. He loved to perform magic and at this he was superb.
Orson Welles was born on May 6th 1915 and got his first magic set at age three. As a child, he saw Harry Houdini perform and was only 11 years old when the great escape artist died in 1926. At the age of 10, he ran away from home and was discovered two days later at a Chicago street corner surrounded by a crowd while doing a magic act. The stage and theatre soon became a magnetic attraction for Welles and all through school he was writing and acting and even directing on the New York stage. By the time he was 20, he went into radio as a broadcaster and presented the weekly show called “The Mercury Theatre of the Air.”
In 1940, when he was only 25, Hollywood beckoned to Welles. He created what many still believe is one of the greatest films of all time, Citizen Kane. Welles continued to work at his magic and some felt that his films were an extension of his magic. When World War II broke out, he decided to put on a major show for servicemen and women. In 1943 in a huge tent in Hollywood, he produced the “Mercury Wonder Show.” Service men and women got in free while the public had to pay. A number of motion picture celebrities presented circus-type acts while Orson Welles did magic. He performed the dangerous Bullet Catching trick, Harry Houdini’s Needle trick, and the Borrowed Bill in an Orange, along with lesser known but dazzling effects. During one of his World War II magic shows featuring his wife Rita Hayworth as his illusion assistant, a newspaper reviewer wrote that Hayworth was sweating profusely as she emerged from a cabinet illusion. Welles took offense and wrote a letter to the editor in response, stating that Miss Hayworth does not sweat… she glows!
In the same year, Universal Studios did a film called Follow the Boys. In this film, numerous stars did vaudeville acts while Welles sawed actress Marlene Dietrich in half. In the 1950’s, Welles did a three-week stint at the Riviera Casino in Las Vegas with his own magic act for which he received the fee of $45,000. Part of his show also included recitations from Shakespearean plays that had growing appeal in America.
Welles always had trouble raising money to produce his own films and spent much of his time acting in films of others, where he often played magicians. In Henry Jaglom’s film A Safe Place (1971), he played the part of a magician who befriends a young girl. In the film Get to Know Your Rabbit, he taught actor Tommy Smothers to become a tap-dancing magician. In 1973, he directed and acted in the film F for Fake, in which he performs a number of magic tricks including an Asrah levitation. Earlier, in 1949, in his film Black Magic, Welles played the part of Count Alessandro Cagliostro, an 18th-century occultist and adventurer of dubious fame.
On October 30th 1938, as part of the radio broadcast drama War of the Worlds (based upon the H.G. Wells novel), Orson broke into the programme at various intervals to deliver a series of simulated news bulletins with news of an alien invasion by Martians. In many parts of America, people panicked, as the resonant, deep voice of Welles was quite realistic and convincing. The whole thing was a hoax but it certainly established Welles’s fame as a dramatist. As a result of this event, the Federal Communications Commission established strict rules on radio where there was a possibility of a broadcast inducing mass panic.
For many years, Welles worked on a film project called The Orson Welles Magic Show but the plot of the film kept changing. A significant part of the movie was meant to showcase Welles’s magic capabilities. It was to be filmed using only one camera angle and no editing cuts. He wanted to prove that his magic was not accomplished by camera trickery or special editing. However, the project was never completed for release.
Orson Welles continued to perform magic right until his death. He had a number of television special performances and I have a recollection of him performing at the Magic Castle. He was interviewed on numerous American talk shows and he was a frequent guest on the Merv Griffin Show. He often performed magic on these occasions and the night before he died, he actually performed a card trick on Griffin’s show. He died on October 10th 1985 at age 70.
There is no doubt that the genius of Orson Welles was better known in the motion picture industry than in the history of magic. Be that as it may, his magic was memorable and it was something he was involved in all his life.
(Grateful thanks to Gale Molovinsky of Ann Arbor Magic Club Ring 210 & SAM Assembly 88 USA)
Follow the Magic Tricks For Kids Team to Keep Up With New Posts!
Do you want to read more about the history of magic? Click here! | <urn:uuid:cbc30430-9e52-4088-a959-c9a821b9c303> | CC-MAIN-2021-04 | http://magictricksforkids.org/history-of-magic-orson-welles/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703495936.3/warc/CC-MAIN-20210115164417-20210115194417-00327.warc.gz | en | 0.988708 | 1,189 | 2.734375 | 3 |
The community room was packed, the mood a mix of curiosity and urgent anticipation. It was not a normal Tuesday night at the Antigonish People’s Place Public Library.
Close to 100 people had come to take part in a town-hall style meeting which is currently being repeated in 150 communities across Canada.
“There are people in those communities who are doing the exact same thing we’re doing,” said Chad Brazier, who helped organize the event in Antigonish on May 21. “People who see the writing on the wall as presented to us by global scientists and which we can see with our own eyes.”
Halifax, Mahone Bay, Charlottetown, Edmundston, Moncton and Saint John are the Atlantic Canadian municipalities which have already declared states of climate emergency.
There are over 300 municipalities across Canada that have done the same thing in order to highlight the need to reduce carbon emissions.
However, these declarations do not lead to specific actions. That’s where The Pact for a Green New Deal comes in.
On May 5, 2019, over 67 separate organizations across the country came together to endorse The Pact for a Green New Deal, a non-partisan coalition calling for a 50 percent reduction in Canada’s fossil fuel emissions within the next 10 years.
What exactly the American-inspired Green New Deal is, and how it would look in Canada, is what people had come to learn and decide.
“It’s a vision that acknowledges where we’re at today, that we are in crisis,” said David Elliot, another organizer of the gathering. “It lays out a vision, a groundwork for where we could be at the end of a set period of time.”
That time being between now and 2030, when, the Intergovernmental Panel on Climate Change report states, the average surface temperature on earth will likely reach 1.5 degrees C above pre-industrial levels. This could severely affect sea-level rise, biodiversity, human health, and our long-term survivability.
In order to avoid that outcome, the IPCC calls for “rapid, far-reaching and unprecedented changes in all aspects of society,” and that’s also what The Pact for a Green New Deal demands.
Once this was explained to those gathered in the community room, everyone in attendance was given the chance to contribute ideas on how Canada can meet those demands.
There were six tables spread throughout the library where people were asked to write down what a Green New Deal in Canada should include, and also what should be left out. According to the organizers, the information will be collated, packaged, and then sent to The Pact for a Green New Deal national body, which will then deliver the information to policy makers in Ottawa.
“When your voice is heard you feel like something might actually change,” said Nancy Turniawan, who lives in the community of Lakevale in Antigonish County. “People have enough to say and are so interested that an hour is not enough.”
“It can’t come from the top down,” said Elliot. “No one knows better what needs to be done than the people in this room, the people in the community, the grassroots. That’s what we’re here to do today.”
When it first arrived in this faraway corner of the Union, it was simply known as football or, in a publication’s initial reference, association football. It was the sport of immigrants, and during the 1880s immigrants were among the tens of thousands pouring into the territory and, by November 1889, the newly admitted State of Washington. During that decade alone, the state population grew nearly fivefold (most of it in the final five years of the century), and in the next 20 years it would triple again.
Western Washington was densely forested and rich coal reserves had been discovered in the Cascade foothills, and, whether it was the timber or mining industries, workers were in great demand. By 1900, 20 percent of the state population was foreign-born, over half from northern Europe, where association football had taken root. Black Diamond Mining Co., founded in 1884 in the southeastern King County foothills, featured a workforce of mainly Irish, Welsh and Italian heritage. Awaiting them was a local sporting culture of gridiron football, baseball and horse racing. Cleaving to what they knew, the immigrants brought association football, a game associated with Europe’s working class.
Those laboring in mining and lumber had dangerous jobs and little free time: a typical work week was six 10-hour days. Yet they thoroughly enjoyed competing, the game was growing throughout Cascadia, and beyond picking teams for after-work games, it was common to challenge other work camps and neighboring towns to a match.
The first published word of association football surfaced in 1890, both in rural Washington and urbanized Puget Sound. A spring team had formed in the southwestern logging community of Chehalis, and a Christmas Day match was played at Tacoma’s baseball park versus a side from Portland. By 1893, transcontinental rail service and association football had reached Seattle, where a challenge match against Everett took place on the shores of Lake Washington, at Madison Park.
With precious little recreation time and challenging travel conditions throughout the region, regular intercity play was not possible at first. Seattle Association Football Club (AFC) sailed up to Vancouver Island in May 1894 only to be humbled by British Columbia champion Victoria Wanderers, 5-1, for unofficial bragging rights of Cascadia.
Thanks to generous donors, prestigious prizes were at stake. In 1894, the Great Northern Railway donated “at great expense” a silver challenge cup as a prize for teams to compete for throughout Washington and Oregon. In 1899, more silverware was donated by Thomas Lippy to the winner of the state championship, to be settled on Thanksgiving Day. A second Seattle club, Thistles, was formed, and by 1905 they were joined by Wanderers, soldiers from Fort Lawton and millworkers from Port Blakeley, on the eastern side of the Sound. It was sufficient to form the Northwestern Association Football League, which opened in January 1906 with six charter members, including Tacoma. Lumberman Jack McMillan’s trophy was the most recognizable: the McMillan Cup was presented to winners of several different competitions over a span of 35 years (1913-1948).
There was good reason to be optimistic about the game taking hold. Big matches had moved from Madison Park to Woodland Park, which was served by a trolley line. The City of Seattle had purchased the park from Guy Phinney in 1899. Five years later, the Olmsted Brothers-designed park, featuring a zoo, opened. An athletic field was situated at the park’s southwest corner and known as Upper Woodland. Crowds were soon flocking to the field; an estimated 2,000 watched Wanderers and Thistles play in March 1906.
Across the nation and in Washington, the game was getting a new name to better distinguish itself from American football after initially spelling it ‘socker,’ then soccer. Newspapers chronicled league play, major friendlies and even schoolboy leagues. Quality play was termed ‘fast,’ referees often received marquee billing and occasionally complete lineups were listed, usually prior to the match.
It was a winter sport, with league play beginning in mid-November and concluding in late February. A knockout tournament followed, keeping each club engaged to the end. Friendlies continued to be played, more often than not on holidays such as Thanksgiving, Christmas, Boxing Day and New Year’s Day. Those would involve either established clubs or picked (all-star) sides meeting Canadian counterparts. By 1908, in addition to the Northwestern League, a picked team, Seattle United, was admitted to the B.C.-based Pacific Coast City League and held its own (2-2-1). Ultimately, however, the costs of added travel were too much to bear; they did not return for a second season.
Improved transportation made local travel much more practical, however, allowing the miners from Black Diamond, Carbonado and Renton to join – and dominate – the league, which grew to 10 members for the first time in 1913. The economy, with logging once more prosperous, was strong and interest was widespread. In the southwest corner of the state, communities and logging camps formed a senior league. Youth play was introduced to schools in Seattle (Green Lake, in 1910) and Spokane by 1912, and women sought to play informally around Seattle.
As the state approached its fourth decade of soccer, the sport was booming, but looming was U.S. involvement in the Great War. After the outbreak of World War I in Europe in 1914, Puget Sound became a thriving manufacturing and shipbuilding center. Skinner & Eddy opened the largest shipyard in the Pacific Northwest in 1916, and by 1918 there were 13 yards launching steel- and wood-hulled vessels. Six shipyards – Skinner & Eddy, Seattle Construction & Dry Dock, Ames, Duthie’s, Todd and North Pacific – formed teams and joined the league, and during the war years the power shifted from the hills to the docks.
These are heady times. A booming wartime economy, automobiles beginning to shrink the distances between communities and, for soccer, seemingly endless possibilities. Alas, the Twenties would begin with a whimper rather than a roar. | <urn:uuid:94b2a245-985a-4e0a-9f09-9c9c7f7c3580> | CC-MAIN-2021-10 | https://history.wasoccerlegends.org/year/1919 | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350846.9/warc/CC-MAIN-20210225065836-20210225095836-00530.warc.gz | en | 0.977512 | 1,286 | 2.859375 | 3 |
Motivated by a more profound dynamic than most pop musicians, David Bowie possessed a unique universe of unusual sounds and perpetual vision. Unwilling to remain on the treadmill of rock legend, and keeping a strategic distance from the descent into ever more belittling and diminishing circles of banality, Bowie composed and performed what he wanted, when he wanted. His absence from the unending list of “critical occasions” merely fuelled intrigue; constant conjecture about what he was up to has even driven some to wonder whether this is his most noteworthy reinvention ever. This paper seeks to analyze his life and influence in the art industry.
David Bowie was born David Robert Jones on 8 January 1947 in Brixton, London. At thirteen, he began saxophone lessons with Ronnie Ross. The bands he played in at a young age provided him with the skills and proficiency he needed to set his music career rolling. His career spanned nearly five decades, from the 1960s into the 2010s. During that time, Bowie underwent various transformations in his stage performance, outlook and dress, which became instrumental in shaping other pop artists. During the early seventies, the release of “The Man Who Sold the World” marked a new chapter in music history; it is this album that helped give birth to glam rock, which had a significant influence on the rock artists who came after him. In this period, Bowie also created Ziggy Stardust, a novel and spectacular live persona that propelled him to stardom. Other than music, Bowie had a taste of film and television: he voiced a character in a “SpongeBob SquarePants” television special, made a small appearance in “Extras” (2005), and played Nikola Tesla in “The Prestige” (2006). These ventures influenced the development of his musical career.
Bowie had a significant influence on the development of the music industry as well as on fashion design. His thrilling performances and ideas were inspirational to other musicians within the pop industry. The origins of the moonwalk are also attributed to Bowie: although he performed such moves only statically, Michael Jackson perfected them, drawing heavily on Bowie's original ideas. Bowie likewise had a considerable impact on punk rock musicians; his musical innovations were incorporated into the genre and significantly influenced its development. Further, Bowie introduced a sophistication into rock music that enhanced its beauty and flavor. He had a comparable influence on fashion during his life. From dressing as the “Thin White Duke” to the knitted jumpsuit, Bowie’s ideas were novel and showed remarkable creativity, and his designs were later adopted by others in the fashion industry as well as by the general public. Through his creativity and innovativeness, Bowie managed to influence both the music and fashion industries in his lifetime.
Bowie’s openness about his sexual inclination was an inspiration and motivation for the LGBT community. During the early 70s, Bowie led a sex life that many people of his time considered somewhat obscene. In a 1972 interview, he openly stated that he was gay, a confession that coincided with the “Ziggy Stardust” character Bowie sought to advance in society. Later, in 1976, he said that he was bisexual. Such openness about sexual orientation seemed rather strange to society at the time, yet Bowie was determined to advocate his stand indirectly. Through the use of his costumes, he managed to slowly change the perception of society and gradually foster appreciation of the gay community. It is due to these efforts that he is considered an inspiration and motivation for the LGBT community, as Bowie sought to rise against the cultural stereotypes of his era regarding sex and sexuality.
In conclusion, the life of David Bowie is an inspiration to both the music community and the fashion industry. Through his creativity, innovativeness, and open-mindedness, Bowie developed and enhanced the progression of pop music among other genres. He also nurtured novel fashion ideas that became popular in society. It is due to these contributions that Bowie remains an influential figure in the visual and performing arts.
Vermont’s not exactly known for its tropical weather, but even with our long cold winters, you can still grow and harvest your own chocolate indoors. The cacao trees below were grown from a pod harvested in New Hampshire and germinated in my Vermont home, both zone 4.
The New Hampshire parent tree grown by a friend is about 6 feet tall, and produces a crop of 2 to 5 pods per year, blooming in the summer and ripening mid-winter. That’s not bad when you consider a tree growing outdoors in the tropics produces only 20 pods a year.
On our homestead, we love the novelty of growing our own tropical edibles. We’ve already had success with homegrown ginger, turmeric, mango trees, coffee, vanilla, lemons and oranges…why not add chocolate to the mix?
A few years back, I asked my cacao growing friend to save me a pod. Mid-February I got a call that my pod was ripe and ready to go. When I arrived, I found that they’d literally written my name on it to prevent anyone else from claiming it.
Since not everyone has a friend that happens to be growing cacao, you can order your own cacao pod online here
If you want to skip the germination steps, and get right to growing your own tree indoors, cacao trees are available here.
A bit of nomenclature: Theobroma cacao is the tree name, spelled cacao. The processed product, or cocoa mass, is spelled cocoa, the same letters rearranged. So a cacao tree is what's needed to grow your own cocoa or chocolate.
It’s important that the seeds are fresh, inside an intact pod. Once the pod is opened they rapidly spoil, and they’ll only germinate while fresh.
Seeds cannot be dried and stored like garden vegetable seed packets. As a tropical plant, in nature, the seeds would be kept warm and moist, and they wouldn’t have the opportunity to dry down like a package of typical garden seeds.
Each cacao bean is coated in a sticky-sweet coating that tempts tropical animals to crack open the tough pods and gorge on the interior nectar. The beans themselves are then discarded as the animal moves throughout the canopy, planting the next generation of cacao trees.
The first step in growing chocolate from seed is to crack open the seed pod, which is roughly 1 centimeter thick. It takes a good butcher knife or chef’s knife and quite a bit of elbow grease, so be careful with your fingers.
Avoid cutting into the seeds, because they’re surprisingly soft, gummy and fragile.
To prepare the seeds, you’ll need a few adventurous friends. I invited over just about everyone I knew when we cut it open because it’s not every day that you get to taste fresh grown raw chocolate.
The most efficient way to clean and prepare the seeds is by placing them into your mouth and sucking off the white cacao nectar. It’s sweet and fruity, and in the group I assembled every single person loved it.
In the tropics, they ferment it into a liquor and since the coating spoils so quickly, if you don’t grow your own your only chance to taste it fresh would involve a very expensive plane ride.
For germination, the seeds want to be kept warm and moist. My drafty 1850s schoolhouse in February didn’t seem like it fit the bill, so I created a hot water bottle for them with a Ziploc bag filled with warm water, wrapped in a wet towel. I then placed the freshly cleaned seeds in a wet paper towel and put that on top of the water-filled bag.
I put the whole setup into my oven with the oven light on for a small amount of extra heat. After just a few days, the seeds had begun to germinate and I transferred them to the soil.
With this method, I had a roughly 50% germination rate. Not bad for a cheap hacked setup.
If you’re investing in buying a cacao pod and having it shipped to you, you might as well try a small countertop seed germination setup or at least invest in a seedling heat mat to better ensure success.
Once you’ve got healthy cacao trees, either by germinating your own cacao pods or by starting with a live cacao tree, all you have left to do is wait.
In nature, cacao trees are a zone 10 plant, so they want to be kept warm, but ordinary room temperature, consistently between 65 and 70 degrees Fahrenheit, is sufficient for them to thrive. They’re an understory plant, so filtered light indoors is actually ideal, and they grow wonderfully even in northern climates near a south-facing window or in a sunroom.
It takes 5-6 years from germination to see your first crop. The flowers will appear directly out of the stem, and though the plant will produce hundreds of tiny flowers, only a few will actually go on to produce cacao pods even in ideal conditions.
The fruit will begin to form and will grow slowly for 6 to 8 months. Harvest happens in February or March for northern grown indoor cacao trees.
Be sure to have plenty of friends on hand for the harvest, to share in your success, and help you enjoy the sticky sweet cacao seed coating. When you harvest, you can continue to propagate from the seeds, or you can try eating the fresh raw seeds themselves. They have a unique flavor, and texture somewhat like a very firm grape or kiwi.
It really is a rare treat to get to enjoy your own fresh, raw chocolate from a homegrown tree. Best of luck, and get growing! | <urn:uuid:304454f5-d27e-41b8-9d02-e8d415e20ebc> | CC-MAIN-2021-25 | https://practicalselfreliance.com/grow-chocolate-tree-indoors/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487608856.6/warc/CC-MAIN-20210613131257-20210613161257-00543.warc.gz | en | 0.954224 | 1,218 | 2.53125 | 3 |
It has long been a principle of criminal law that criminal liability requires mens rea, a “guilty mind” or intent to commit a wrongful act. That’s what we teach in Introduction to Law. The insanity defense is premised on the defendant’s lack of mens rea, his inability to understand the consequences of his actions or that his act was wrong. It is Criminal Law 101.
Except the United States Congress did not get the memo. The Wall Street Journal reported recently (As Federal Crime List Grows, Threshold of Guilt Declines) that “Congress has repeatedly crafted laws that weaken or disregard the notion of criminal intent.” The article relates the following tale:
When the police came to Wade Martin’s home in Sitka, Alaska, in 2003, he says he had no idea why. Under an exemption to the Marine Mammal Protection Act, coastal Native Alaskans such as Mr. Martin are allowed to trap and hunt species that others can’t. That included the 10 sea otters he had recently sold for $50 apiece. Mr. Martin, 50 years old, readily admitted making the sale. “Then, they told me the buyer wasn’t a native,” he recalls.
The law requires that animals sold to non-Native Alaskans be converted into handicrafts. He knew the law, Mr. Martin said, and he had thought the buyer was Native Alaskan.
He pleaded guilty in 2008. The government didn’t have to prove he knew his conduct was illegal, his lawyer told him. They merely had to show he had made the sale.
In other words the law imposed strict liability: commit the act the statute defines as criminal (the actus reus for those who remember back to our discussion of criminal law four weeks ago) and you are guilty.
This is not a crazy idea in theory. I was taught “ignorance of the law is no excuse.” The Journal notes “that principle made sense when there were fewer criminal laws, like murder, and most people could be expected to know them. But according to University of Virginia law professor Anne Coughlin, when “legislators ‘criminalize everything under the sun, it’s unrealistic to expect citizens to be fully informed about the penal code.'” With reduced intent requirements “suddenly it opens a whole lot of people to being potential violators.”
The problem is the intersection between proliferating federal criminal statutes–“there are an estimated 4,500 crimes in federal statutes, plus thousands more embedded in federal regulations, many of which have been added to the penal code since the 1970s”–and lesser mens rea requirements: “more than 40% of nonviolent offenses created or amended during two recent Congresses—the 109th and the 111th, the latter of which ran through last year—had ‘weak’ mens rea requirements at best.”
Perhaps the moral is don’t leave home without the United States Code and the Code of Federal Regulations. | <urn:uuid:005524ba-e05e-427e-97ff-f025ce0c7b8e> | CC-MAIN-2022-27 | https://trudalane.net/2011/10/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104585887.84/warc/CC-MAIN-20220705144321-20220705174321-00687.warc.gz | en | 0.973742 | 647 | 2.65625 | 3 |
Whether you’re writing your first essay or you have been writing essays for decades, it can be easy to become confused about where to start. One of the most important things to remember is that composing an essay involves coming up with a topic and a thesis statement, and creating a clear and concise argument. Essay writers should remember that they have to determine their main point and support it with concrete illustrations, facts, and references.
Since there are numerous essay writing tips for writers to follow, the best place for them to begin is by identifying their own areas of expertise. If they feel that they have an exceptional perspective on a particular topic, it would be a good idea to explore that area further. The topics that they come up with should be based on their unique skills. Perhaps they are more experienced in researching companies than others, which means they might want to write an essay on the topic of workplace safety.
The following step for essay writers would be to research the particular advice they will be writing about. When researching information for an article, it is important for the writer to bear in mind that they will need to present information that is both accurate and current. It is also a good idea for the article to be organized. Essay topics should always be written around a central idea or topic. This allows the composition to flow nicely and to achieve its goal.
The last thing that essay writers will need to think about is the structure of the essay. Writing an essay should always follow a logical order. It might not be necessary to stick to this arrangement 100%, but it is going to provide the essay writers a much better sense of direction. If the essay includes a poor structure, it will lack substance and won’t compel the reader to keep reading. This could lead to the reader leaving before completing the whole essay.
Finally, essay writers must ensure that they research any topic they choose to write about. Doing research gives them the ability to answer any question that may arise, such as “Who came before me?” or “What is the first sentence of every paragraph?”
The main thing to remember is that an essay needs to be written by someone who is knowledgeable about the topic. Essay writers should be able to answer any question a reader may ask and to provide the information needed to support their point of view. By following these steps, essay writers will be able to produce an essay that is of great benefit to the reader.
Dr Deborah Biggerstaff, Mental Health and Wellbeing, Division of Health Sciences, Warwick Medical School, discusses the pressures facing mental health care that have been highlighted in this week's (w/c 8 November) report.
This briefing paper, which makes for uncomfortable reading in places, is a timely reminder that mental health services need support to deliver appropriate care that meets patients’ needs. People’s needs are often complex when it comes to mental illness, since we are talking about patients who need professional support when they are at their most vulnerable. Many professionals working in the NHS and related services acknowledge that this change is long overdue. Mental health services today are so over-stretched and under-resourced that this has led to high levels of stress among the very professionals committed to caring for patients. The system in which they work is perceived by many to be close to breaking point, and many mental health professionals feel they are currently working in less than ideal situations. The report highlights that change is imperative. However, backing is needed for any such reform, and this would involve a sustained financial commitment and support for the policy from all involved. There is much rhetoric, but it needs to translate into reality and delivery for these reforms.
Mental healthcare is complex, often involving a variety of professionals trying to deliver services to patients in their care when they are at their most vulnerable. Often these patient groups need help for a variety of different conditions. Adult mental health services, for instance, are currently defined as being for the over 16s, yet we know that teens and younger adults would benefit from services more tailored to the specific mental health needs frequently encountered in this age group i.e. those relating to adolescence and young adults (for example treatment and support for eating disorders; early onset psychosis; serious depression and anxiety that need specialist treatment; psychological / developmental problems).
The King’s Fund briefing document draws our attention to the very real issues and challenges facing crisis care with the attendant increased risk of suicide for those individuals when they experience acute mental health problems. Evidence is provided from other sources highlighting the current lack of resources for urgent care when individuals and their families seek help (e.g. Care Quality Commission, 2015).
The overall message is troubling: pressures facing our mental health services are now critical, with stressors in one area having a negative effect on another part of the system. This is such that patients frequently report not feeling safe in the very system that should be providing them with care and support. The potential therapeutic effect of any treatment offered, whether acute care or care in the community, by overworked and stressed health professionals is unlikely to help patients find relief from their symptoms, nor does it bode well for the chance of a meaningful recovery in the longer term; these patients then return to the system and thus the cycle continues.
It remains challenging for the NHS to get the balance right between ‘voice and choice’ especially since patients, and their families or carers, may not be in a position to either want, or be able to demand access to information about their care when unwell. This dilemma is one that many working in NHS mental health services acknowledge; change is needed but support is needed for this and any proposed reforms need to be mindful of mental health care patients’ needs. However, these views and concerns do need to be heard and acted on, if patient trust and care delivery is to become truly more ‘patient centred’ in future.
For further details please contact Nicola Jones, Communications Manager, University of Warwick 07824 540863 or N.Jones.email@example.com | <urn:uuid:b69ebd76-4539-4426-83b3-690574512507> | CC-MAIN-2018-51 | https://warwick.ac.uk/newsandevents/expertcomment/mental_health_under/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824338.6/warc/CC-MAIN-20181213010653-20181213032153-00249.warc.gz | en | 0.965072 | 742 | 2.546875 | 3 |
On This Day, February 19th in 1807, Aaron Burr, a former U.S. vice president, is arrested in Alabama on charges of plotting to annex Spanish territory in Louisiana and Mexico to be used toward the establishment of an independent republic.
In November 1800, in an election conducted before presidential and vice-presidential candidates shared a single ticket, Thomas Jefferson and his running mate, Aaron Burr, defeated Federalist incumbent John Adams with 73 electoral votes each. The tie vote then went to the House to be decided, and Federalist Alexander Hamilton was instrumental in breaking the deadlock in Jefferson’s favor. Burr, because he finished second, became vice president.
During the next few years, President Jefferson grew apart from his vice president and did not support Burr’s renomination to a second term in 1804. A faction of the Federalists, who had found their fortunes drastically diminished after the ascendance of Jefferson, sought to enlist the disgruntled Burr into their party. However, Alexander Hamilton opposed such a move and was quoted in a New York newspaper as saying that he “looked upon Mr. Burr to be a dangerous man, and one who ought not to be trusted with the reins of government.” The article also referred to occasions when Hamilton had expressed an even “more despicable opinion of Burr.” Burr demanded an apology, Hamilton refused, and Burr challenged his old political antagonist to a duel.
On July 11, 1804, the pair met at a remote spot in Weehawken, New Jersey. Hamilton, whose son was killed in a duel in 1801, deliberately fired into the air, but Burr fired with intent to kill. Hamilton, fatally wounded, died in New York City the next day. The questionable circumstances of Hamilton’s death effectively brought Burr’s political career to an end.
Fleeing to Virginia, he traveled to New Orleans after finishing his term as vice president and met with U.S. General James Wilkinson, who was an agent for the Spanish. The exact nature of what the two plotted is unknown, but speculation ranges from the establishment of an independent republic in the American Southwest to the seizure of territory in Spanish America for the same purpose.
In the fall of 1806, Burr led a group of well-armed colonists toward New Orleans, prompting an immediate investigation by U.S. authorities. General Wilkinson, in an effort to save himself, turned against Burr and sent dispatches to Washington accusing Burr of treason. On February 19, 1807, Burr was arrested in Alabama for treason and sent to Richmond, Virginia, to be tried in a U.S. circuit court.
On September 1, 1807, he was acquitted on the grounds that, although he had conspired against the United States, he was not guilty of treason because he had not engaged in an “overt act,” a requirement of treason as specified by the U.S. Constitution. Nevertheless, public opinion condemned him as a traitor, and he spent several years in Europe before returning to New York and resuming his law practice. | <urn:uuid:736ceb98-85f0-415b-9e51-69807bfcf200> | CC-MAIN-2022-40 | https://minuteman-militia.com/2018/02/18/aaron-burr-arrested-for-treason/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00427.warc.gz | en | 0.97766 | 631 | 3.625 | 4 |
The eleven scientists, engineers, and physicians chosen in 1967 as NASA’s newest astronauts had just arrived in Houston when Deke Slayton, Director of Flight Crew Operations, called them into a meeting and told them the bad news. The program NASA had envisioned, the program that would have had people living in space stations and studying lunar soil in the 1970s, was over. Project Apollo’s budget was declining, and while American astronauts were still two years away from walking on the moon, the men who would fly Apollo missions had already been selected weeks earlier. The choice Slayton offered the new recruits, astronaut Joe Allen later recounted, was stark: They could hang around NASA if they wished, but they would never be “space flyers.” The first chapter of human spaceflight history was ending, and these new astronaut-candidates had arrived too late.
This week America’s latest plans to return astronauts to the moon were similarly consigned to the recycle bin, and NASA’s astronauts are no doubt wondering the same thing their colleagues did 40 years ago: “What about us?”
Between 1975 and space shuttle Columbia’s maiden flight in 1981, no Americans flew in space. Now the shuttle itself is due to retire, without any clear idea yet of what will replace it. Aspects of President Barack Obama’s 2011 budget proposal dealing with space exploration will likely disappoint (though not surprise) those who have watched NASA navigate four decades of hard questioning about what kind of space program the nation needs, and where (if anywhere) it should send its talented astronauts next. To be sure, no President since Dwight Eisenhower has been willing to pull the plug on sending Americans into space, and one suspects that none will anytime soon. NASA’s Constellation program—a broad series of piloted explorations to the Moon or even Mars—will, though, receive no funding; neither will the next generation of launch vehicles NASA hoped would carry future spacecraft to Earth orbit and beyond. The United States will retain a human spaceflight program—it will still send people to the International Space Station—but it will rely upon private industry for help, and there is no explicit plan for where they will head next, or when.
The merits of the President’s decision and the conclusions of the Augustine Commission that recently surveyed future spaceflight options will likely fill blogs for years to come. One thing that is certain, though, is that with the shuttle’s retirement, Americans are facing an extended period of time when relatively few of their citizens will travel into space, making a skilled astronaut workforce even harder to maintain. For NASA, it’s 1975 all over again.
To their credit, none of NASA’s astronaut-candidates of 1967 resigned after Slayton’s announcement, even though he consoled them that they’d “make no enemies” if they did. The boredom and frustration of the years that followed, though, led several of the men, like Philip Chapman and Donald Holmquest, to leave early. For those who stayed behind, working at NASA little resembled the bustle of the 1960s. By the time Americans next flew in space, most veteran astronauts—the ones who had squeezed into the capsules of Project Mercury, scooted around Earth in the two-seater Gemini, or landed on the moon in Apollo—had retired. Instead, the early shuttle crews were made of people who had waited out the great spaceflight bust: journeymen military aviators who had moved to NASA a little too late to find a seat, and the scientist-astronauts Slayton didn’t much want to begin with. Eventually, even Joe Allen donned a helmet, and, after a 15-year wait, flew on the Space Shuttle in 1982.
Space flying has never been a job with much security. Especially lately (and even before Constellation was canceled), the concern among career spacefarers has been that there would be too few flights, a problem that strains morale and keeps the most experienced people from lingering long in Houston. Until recently, though, journalists and space watchers could speculate as to which of NASA’s current astronauts might, by virtue of their youth and stick-to-itiveness, hope to last long enough in the agency to be the first person to bunk on the moon since 1972 (see Michael Cassutt, “Fly Us to the Moon”). With the current economic downturn, one wonders if any will.
Astronauts haven’t been in short supply since 1962; the United States can rely upon an abundance of aviators, scientists, engineers, and science fiction fans willing to sign up for a space mission. But with each astronaut generation that passes, something is lost, and we reach further into our cultural memory for a time when astronauts were ten feet tall, had perfect hair, and never feared that humanity’s grand adventure skyward would ever end.
Matthew Hersch, a Ph.D. candidate in the University of Pennsylvania’s Department of History and Sociology of Science, is this year’s HSS/NASA Fellow in the History of Space Science. He is writing a labor history of American astronauts. | <urn:uuid:772d0701-89b8-4ab5-ace8-7d3ceccaa7b8> | CC-MAIN-2014-49 | http://www.airspacemag.com/space/no-stimulus-plan-for-astronauts-6806535/ | s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400373050.63/warc/CC-MAIN-20141119123253-00136-ip-10-235-23-156.ec2.internal.warc.gz | en | 0.96212 | 1,075 | 3.046875 | 3 |
To accompany the sweeping advances in naval construction and administration from 1901 to 1909, the distribution and internal organization of the American battle fleet underwent a revolutionary change. For the first time the ships of the United States Navy were concentrated in a few heavy units rather than dispersed in small squadrons all over the face of the earth. Here above all was Mahan's influence on Roosevelt apparent, for the former's doctrine of the superiority of the power of a concentrated battle fleet was put into actual practice with the formation of the Atlantic Fleet.
Before reviewing the actual events of the development of fleet organization, it would be well to take a look at the general strategy behind Roosevelt's moves. In his Annual Message to Congress in 1907, just as the fleet was starting out on its world cruise, he clearly outlined his policy.
Recalling the panic that spread along the Atlantic seaboard during the Spanish-American War because of fear of an attack by the Spanish Fleet, he warned Congress that:
"We need always to remember that in time of war the navy is not to be used to defend harbors and sea‑coast cities. . . . The only efficient use for the navy is for offense. The only way in which it can efficiently protect our coast against the possible action of a foreign navy is by destroying that navy. For defense against a hostile fleet which actually attacks them, the coast cities must depend upon their forts, mines, torpedoes, submarines and torpedo boats and destroyers, . . . but [these] in no way supply the place of a thoroughly efficient navy capable of acting on the offensive. . . . But the forts and the like are necessary so that the navy may be footloose.
In time of war there is sure to be demand, under pressure of fright, for the ships to be scattered so as to defend all kinds of ports. Under penalty of terrible disasters, this demand must be refused. The ships must be kept together, and their object made the enemy's fleet.
". . . Unless there exists such a navy the fortifications are powerless by themselves to secure the victory. For of course the mere deficiency means that any resolute enemy can at his leisure combine all his forces upon one point with the certainty that he can take it."1
Nothing could have presented Roosevelt's case more clearly than these words, in which he summed up the naval strategy of his administration. Furthermore, in January 1907 Mahan wrote the President asking that the battleship fleet be not split between the Atlantic and the Pacific, for it was rumored that this was intended. Roosevelt replied immediately that he had no such idea and that if a fleet were to go to the Pacific it would be the strongest one possible.2
In his 1907 Message to Congress he said the same thing. He urged that the fleet be shifted back and forth between the Atlantic and the Pacific every two or three years, but that "until our battle fleet is much larger than at present it should never be split into detachments so far apart that they could not in event of emergency be speedily united."3
At the conclusion of the cruise he again resisted a demand from the West Coast for the division of the battleships between the Atlantic and the Pacific,4 and the day before leaving office he wrote a letter to Taft, saying:
"One closing legacy. Under no circumstances divide the battleship fleet between the Atlantic and Pacific oceans prior to the finishing of the Panama Canal. . . . I should obey no direction of Congress and pay heed to no popular sentiment. . . .
". . . There were various factors which brought about Russia's defeat; but most important by all odds was her having divided her fleet between the Baltic and the Pacific. . . ."5
This naval policy has consistently been followed by the United States ever since, although the opening of the Panama Canal made it possible for the fleet to assemble more quickly if it were divided.
Up to Roosevelt's Administration the Navy had never in peacetime, and very inadequately in wartime, provided for drilling the ships in fleet formation. Such drills had been impossible largely because there were too few modern battleships. But in addition the Navy had failed to provide any drills or maneuvers for what ships there were in the squadrons assigned to the various stations of the fleet. The feeling in Congress that such practice was unnecessary before war actually began and was also an extravagant waste of money, was doubtless a factor.
Roosevelt, however, at once tried to disabuse the nation of this fallacious idea. In his first Message to Congress he declared that "even in time of peace a warship should be used until it wears out" so as to keep it fit for any emergency. Furthermore, the vessels should constantly be maneuvered in squadrons made up of all the necessary types of ships. "A battleship worn out in long training of officers and men is well paid for by the results, while, on the other hand, no matter in how excellent condition, it is useless if the crew be not expert."6
This was the general opinion of the progressive elements within the Navy, though a few of the older officers would not agree to the new methods.7 To carry out this program an adequate number of the various types of warships was necessary. It was agreed that the first requirement for a squadron was similarity of speed and turning circles.8 This was exactly what the American Navy lacked, for many of the older battleships were very slow, for example, the Iowa and the Kentucky.
Although battleships were the backbone of a fleet's power, as shown by the Russo-Japanese War and by naval programs abroad,9 the growing importance of torpedo boats and destroyers was also recognized. In 1905 the President pointed out to Congress that recent naval history had shown that: ". . . seagoing torpedo boats or destroyers are indispensable, not only for making night attacks by surprise upon an enemy, but even in battle for finishing already crippled ships."10
The naval maneuvers and war games in 1907 showed that torpedo boats could get in on the battleships almost every time, for there were only twenty destroyers for our sixteen battleships.11 Roosevelt saw this lack and every year asked Congress for more of these small ships with large fuel capacity so that they might have a cruising radius great enough to accompany the battle fleet.12 Unfortunately Congress refused to heed the President's requests, so that he was forced to sacrifice auxiliary vessels to obtain the capital ships he wanted.13 It was even charged that the only reason we had any submarines at all was that the submarine builders had such a powerful lobby in Washington.14
In 1901 the few battleships that the Navy had were scattered about as flagships for the various overseas stations after their temporary concentration during the Spanish-American War. Roosevelt called all these together in 1902 in the Caribbean for the first large-scale maneuvers in our history under the immediate command of the Admiral of the Navy.15 Furthermore, when the drills were over most of the battleships were not sent back to their original posts. This was a very important decision, for it marked the beginning of the policy of maintaining permanently an effective, united battle fleet.16 It was decided to keep a fleet of eight battleships in the North Atlantic and three in Asiatic waters. Cruisers only were to be used for the other stations.17 Officials realized that constant training and drill were exceedingly important,18 and that a fleet of eight battleships, such as was assembled in the spring of 1903, was the minimum number possible for effective maneuvers.19 By the beginning of 1903 the North Atlantic and Asiatic Fleets had been created, and by 1905 the South Atlantic and European Stations had been abolished, the ships of these squadrons being reassigned to other units.20
In 1906 it was finally decided to keep all the battleships in the Atlantic Fleet, leaving only the cruisers in the Asiatic Fleet. The only other regularly organized unit was the Pacific squadron based on the West Coast.21 In 1907 our naval forces were further concentrated by merging this squadron with the Asiatic Fleet into a new Pacific Fleet.22 In that same year the strength of the fleet was more than doubled. To the less than 160,000 tons of vessels in December 1906, there were added in one year battleships totaling 139,792 tons, armored cruisers of 29,000 tons and protected cruisers of 19,400 tons. The Atlantic Fleet then had sixteen battleships.23 This concentration made possible really effective drills and games carried out under war conditions.
This powerful new Atlantic Fleet was destined to do much more than merely conduct maneuvers along the coast, for the famous World Cruise began in 1907. In 1906 and 1907 there had been mounting tension between the United States and Japan over discrimination against Japanese on the Pacific Coast. The jingo elements in both countries clamored for war, and it is surprising to note how the threat of a Japanese attack on our Pacific Coast worried the President since no one understood more clearly than he the almost insuperable obstacles to such an offensive at that time.24 Since 1898 the United States had come more and more to face toward the Pacific and its relations with Asia were becoming as important as those with Europe. Roosevelt saw this and was intensely worried about Japanese-American relations, which were taking a more or less permanent turn for the worse. He admired Japan but confessed that he did not trust her fully.
Figure 7. Cartoon from the Minneapolis Tribune. Admiral Yamamoto: "Good morning, Mr. President. We are going to have a war . . ." President Roosevelt: "What's that?" Admiral Yamamoto: "We are going to have a warm day today." President Roosevelt: "Oh yes, yes. I think we are."
". . . I wish I were certain that the Japanese down at bottom did not lump Russians, English, Americans, and Germans, all of us, simply as white devils inferior to themselves not only in what they regard as the essentials of civilization, but in courage and forethought, to be treated politely only so long as would enable the Japanese to take advantage of our various national jealousies, and beat us in turn."25 He managed to reach a temporary solution of the problem by granting slight concessions in the Root-Takahira Agreement and, at the same time, by making a show of force in sending the fleet around the world.
This was not his only object in the cruise of the fleet. He felt the practice would be of great importance, as he explained to Root in a letter dated July 13, 1907, about ten days after the cruise was announced:
"I am more concerned over the Jap situation than almost any other. Thank Heaven we have the navy in good shape. It is high time, however, that it should go on a cruise around the world. In the first place I think it will have a pacific effect to show that it can be done; and in the next place . . . it [is] absolutely necessary for us to try in time of peace to see just what we can do in the way of putting a big battle fleet in the Pacific, and not make the experiment in time of war."26
Since Roosevelt was at this time renewing his drive in Congress for additional battleships, it seems likely that a third reason was to impress the nation with the need for a large navy and to dramatize its might.27 Many people objected to the cruise on the grounds that the United States would be left open to attack. Furthermore, it was charged, such action would immediately provoke Japan to declare war. The United States was by then the second naval power of the world and Japan the fifth. It was originally announced that the fleet was only being moved to San Francisco, a step in which the Japs could find little legitimate cause for complaint.28 However, such a move to the Pacific was generally taken to mean that the United States was prepared to defend Hawaii and the Philippines at all costs.29
"Taking Notice," New York American
Although at first the Navy Department opposed the cruise on the grounds that neither the ships nor the men were able to make it, Roosevelt brusquely overrode this opinion and even decided to send along the destroyers, though many people were sure these small vessels could not stand the trip.30
"A Bone in His Teeth"
The fleet of sixteen battleships finally left Hampton Roads in December 1907 under the command of Rear-Admiral Robley D. Evans. The story of this trip has often been told, so it need not be repeated here.31 In San Francisco Rear-Admiral Sperry took command for the rest of the trip since Evans would soon pass the retiring age. The fleet finally returned to Hampton Roads on February 22, 1909, where it was reviewed by the President. It had been gone 432 days and had covered 45,000 miles.32
"Trying on Her New Necklace"
During the entire trip the ships had taken care of nearly all their own repairs and there had been but little damage.33 Admiral Sperry reported on his return that: ". . . the condition of the ships is better today than when they sailed from Hampton Roads in December of 1907. During these fourteen months the Fleet has been practically self-sustaining in the matter of repairs. . . . The results prove that the ships have been better cared for than when they depended upon the Navy yards."34
The Secretary of the Navy's report in 1909 agreed with this, stating that the fleet was more efficient than ever in its history and that the morale, efficiency and discipline of the men had all improved.35
One lesson in particular learned by the Navy as a result of the cruise was that the lack of a merchant marine seriously handicapped the fleet. The law required that all coal for the Navy be carried in American ships, but there were not enough of them to supply coal for the entire trip, and those available charged exorbitant prices. The Navy had requested additional colliers for years, but up to that time Congress had refused to provide them.36 The result was that the fleet had to take more than twenty foreign colliers and supply ships along with it. Still Congress refused to authorize any more naval auxiliaries or to subsidize the merchant marine.37
By the time Roosevelt left office the United States had an Atlantic Fleet of twenty battleships, two armored cruisers and auxiliaries; and a Pacific Fleet of eight armored cruisers, seven protected cruisers and auxiliaries. In addition to these ships there were eight dreadnoughts already building or authorized. This was a far cry from the situation in 1901, when the seven American battleships and the cruisers were divided among five stations. Within this eight-year period the Navy first took on its modern organization and left behind for good the old Jeffersonian idea of passive coast defense and sporadic commerce raiding.
No matter how well organized strategically or how well drilled in tactics, it was necessary that the fleet be able to outshoot any enemy once the battle was joined. This entailed accurate and rapid long-range gunnery. Although a few people claimed that American naval gunnery had been excellent throughout our entire history,38 and although Admiral Evans even tried to defend the record of the Navy in the Spanish-American War,39 the actual facts did not give a very flattering picture. At the Battle of Manila Bay the glorious American victory was won by making one hit out of every fifty shots.40 At Santiago only three per cent of the shots, or 120 out of about 9,000 were hits. This was at a range of approximately 2,800 feet, and the targets were at least two hundred feet long and from twenty to thirty feet high.41 Fortunately for the United States the war was against the Spanish Navy.
This wretched showing can be partially excused because modern methods of sighting and range finding were still largely in the experimental stage in 1898. It was the introduction of smokeless powder and more powerful guns that spurred the development of this apparatus, for longer battle ranges rendered the old methods obsolete.42
In 1898 Sir Percy Scott of the British Navy devised a new method of gunfire. Before this time the gunners set the open gunsights and fired the guns when the downward roll of the ship brought the sights on the target. The new system kept the guns continuously aimed at the target by adjusting gun elevation to the motion of the ship. With this new procedure Scott attained on H. M. S. Scylla the hitherto impossible record of 80 per cent of hits, and in 1900 on the cruiser Terrible he bettered even this remarkable record.43
In that same year W. S. Sims, who was destined to play an important part in improving American naval gunnery, landed at Hongkong and met Sir Percy Scott. From him and the other British officers Sims learned all that he could about the new system of continuous‑aim firing. Sims sent back to the Navy Department detailed reports of the amazing target practice record of H. M. S. Terrible. Continuous‑aim firing, he discovered, necessitated an efficient elevating gear, accurate telescopic sights, and long and careful practice on the part of the gunner.44
Between 1890 and 1894 Bradley A. Fiske of the United States Navy had developed a telescopic sight. This instrument had been tested by the Navy, but not until the last years of the century, after much further research, had the Department begun to produce and install such sights. The response from the officers was divided, many declaring telescopic sights a nuisance and advocating removal of all sights, others insisting that the telescopic sights were not only essential but must be perfected as soon as possible.45 Sims, studying the success of Scott's firing methods, discovered that the British sights were more powerful than the American sights, that they were mounted right on the guns, and that, unlike the American sights, they were so cushioned that the observer could keep his eye continuously against the eyepiece without being injured by the recoil of the gun.46
In later reports Sims pointed out further inadequacies of the American telescopic sights. The cross-wires of the lens were too coarse, and it was impossible for the gunner to make corrections for speed and wind except by "pointing the sight off the target." In short, our "latest telescope" was a "practically complete failure."47
These endeavors to make the Department aware of the very poor record of American marksmanship were not without effect. A. P. Niblack, a close friend of Sims, was made Inspector of Target Practice, and a number of innovations were introduced.48 Late in 1902 Sims himself was called back to Washington to succeed Niblack.49
His first trip to the fleet convinced Sims that the service sights then being used were quite inadequate for continuous‑aim firing. Early the next year a perfected design was submitted to the Department, along with a full report of the failure of gunnery practice. Nevertheless, a year later the old sights were still in use and nothing had been done. Sims presented a new report on the subject and saw to it that it reached President Roosevelt. Against the opinion of the Department that it would take seven years to replace the old sights, Sims declared that it could be done in one. The President took a hand, declaring that he would ". . . give the bureaus an alternative. Either they must find the money to re‑sight the Navy with the best possible designs of instruments or I shall take the matter up with Congress and tell them that the Navy's sighting devices are obsolete and inefficient."50 The Navy Department installed new sights within two years.
Because of the extremely long ranges at which the guns had to be fired, it was vitally important that there be an accurate method for determining the range. The usual way was to have observers high up in the mast to see whether the shots were hitting their target or not. By a trial-and‑error method the correct range could finally be determined. One difficulty with such a fire-control system was that a vital part of it had to be exposed in the mast with little or no protective armor, although the use of periscopes for the observers later reduced this danger somewhat.51 Another was the problem of communications between the spotters and the gun crews. The increased rapidity of fire put such a strain on the spotting method and its communication system that the observers did not have enough time to note the fall of the shot and figure the correction. Experiments made in 1904 further demonstrated that a spotter "could control fire up to thirty-five hundred yards with considerable accuracy." At ranges of six thousand yards he could tell by the splashes, although he could not follow the shell. As soon as battle ranges exceeded that distance improved methods of range-finding were necessary.52
As in the case of the gun sights, the Navy Department long held out against range-finders, even though Fiske and other young officers developed more accurate instruments with much longer bases so that they worked well at long ranges. In 1903 Fiske installed one of his instruments experimentally on the U. S. S. Maine.53 From the lessons learned in this trial, which showed that the range-finder was too delicate to be put on a turret and still not accurate enough, he developed a device that proved completely successful on the Monitor Arkansas in 1907. Although the Bureau of Ordnance agreed that Fiske's invention was a success, its installation on other ships was not recommended because the Bureau thought it unnecessary.54 However, Rear-Admiral Mason, Chief of the Bureau of Ordnance, appeared before the House Naval Affairs Committee just about a year later asking for the installation of range-finders with long bases on all ships.55 After 1910 all ships authorized had them built into their turrets.56
New methods of target practice were also introduced. Sir Percy Scott of the British Navy had developed the "dotter," a device which moved a small target around in front of the sights. The gunner had to follow this, and when he had the sights all set on the target, he pressed a button and the hit was scored on the "dotter."57
W. S. Sims, it will be recalled, had met Scott in China and had studied at first hand Scott's new firing methods. Sims had been naval attaché in Paris and St. Petersburg, and had enjoyed ample opportunities to study foreign navies. He was much impressed with Scott's ideas. He discovered that American ships on the China station were hitting the target less than ten per cent of the time, while H. M. S. Terrible was hitting eighty to eighty-five per cent of the time. By November 1901 he had written over eleven thousand pages of reports which had gone to Washington. Although few visible results had yet appeared in the Department, Sims had in fact convinced many of his colleagues and superiors that something must be done about the poor marksmanship record of the American Navy.60
In November 1901 Sims finally wrote directly to President Roosevelt, pointing out the "present very inefficient condition" of the fighting forces. He mentioned especially the armament and protection of our battleships, and the "crushingly inferior" marksmanship, as evidenced in a recent target practice by the North Atlantic Squadron in which five ships fired for five minutes at a hulk only twenty-eight hundred yards distant, and only two hits were made.61
The legendary story is that on studying up the actual gunnery record of the United States Navy, Roosevelt took up Sims's case and sent for him to come to Washington immediately, where he was made Director of Target Practice.62 To the Navy Department Roosevelt is supposed to have said: "Give him entire charge of target practice for eighteen months. Do exactly as he says. If he does not accomplish something in that time, cut off his head and try somebody else."63
As a matter of fact, the President actually wrote to Sims that he thought him "unduly pessimistic," especially in view of the American victory over Spain in 1898. Nevertheless he was glad to receive the letter and would welcome further criticisms or suggestions. Roosevelt immediately ordered all of the reports Sims had sent from China condensed and prepared for publication, so that they could be distributed to all officers in the Service.64
Meantime Sims, now Fleet Intelligence Officer and Inspector of Target Practice on the China Station, was trying out the new methods of gunnery. He initiated dotter practice, and held target practice using the larger British targets and counting only actual hits. The improved gunnery of the squadron soon began to vindicate his contentions. Sims continued to send reports back to Washington, and one on the superiority of British marksmanship created a stir in the Department. In September 1902 he was ordered home, where he was appointed to succeed Niblack as Inspector of Target Practice.65
Back in Washington Sims faced the double job of convincing the Navy Department of the importance of continuous‑aim firing and of putting the new methods into practice. Dotters and Morris Tube Targets were installed on the ships, and crews began daily practice. After three months of this a preliminary practice was held off Pensacola. Conditions were ideal — short ranges, smooth water, slow speeds — and rapidity of fire was not counted. All previous gunnery records were shattered. The large guns scored from forty to seventy-five per cent of hits, the smaller ones averaged fifty-five per cent.66
These results were encouraging but much remained to be done.
Much of the gunnery equipment was poor, or even improvised. There were no uniform regulations for gunnery drill. The story of the struggle for improved gunsights has already been told. A new Drill Book was issued in June 1903. Roosevelt instituted special prizes for excellent gunnery. The best ship in each year's practice was awarded a bronze plaque, and gunners received two to ten dollars a month extra pay, depending on the size of the gun, for making certain scores. To increase the rapidity of loading, dummy breeches were installed and the men practiced loading as an athletic event.67
Under the old system of target practice the target was a triangular sail floating upright on a raft. It was hit only by luck, if at all. Shots which would have hit if the target were a full-sized ship were counted as hits. The result was extremely inefficient marksmanship. For example, in 1897 the entire North Atlantic Squadron had steamed by an old target ship at twelve knots and poured broadsides into her at 4,000 yards. Nothing had happened. The performance had even been repeated at half the range, but still no serious damage had been done, even with the 6‑inch guns also in use.68 Under the new system, larger targets were used and only hits were counted.
After the Pensacola trials a new basis for scoring hits was adopted. Previously the number of hits out of the total number of shots fired was the only factor considered. Speed was not counted at all. But Roosevelt realized that what really counted was ". . . the number of large projectiles that can be landed against an enemy's hull in a given time . . ." and not the total volume of fire.69 Under this new system of counting only the hits per gun per minute, some excellent records were made in September 1903 off Martha's Vineyard. The 13‑inch guns averaged a hit a minute. These guns on the Alabama were loaded and fired in thirty-eight seconds. Five years earlier the official time allowance for firing a gun of this caliber had been six minutes.70 This development made the intermediary batteries largely unnecessary because the only reason for them had been their ability to fire so much more rapidly than the big guns. New rectangular targets were also adopted and only actual hits were counted in scoring.71
The records of the practice of 1903 and of a preliminary practice in 1904 showed such excellent progress that the complete success of the new gunnery methods seemed certain. Then came the terrible accident to the Missouri in which thirty-four men were killed when a flareback from a gun ignited a powder charge and fire dropped down the hoist into the handling room. This disaster (which is described elsewhere in connection with the design of ammunition hoists) put in jeopardy the whole new system of gunnery.72
Many of the older men took the view that the rapid firing of the guns was to blame. Sims prepared a lengthy paper discussing in detail various technical aspects of the accident, defending the new firing methods, and recommending broken hoist construction to prevent similar accidents in the future. But the opposition continued. A round-robin was circulated in the Department asking that target practice be placed once more back on the old system based on the total percentage of hits.73
Sims carried the matter directly to the President. The two had met personally for the first time only a few months before, although Sims had from the first enjoyed the President's support in his struggle to improve the gunnery of the Navy. Roosevelt sent a peremptory order to the Navy Department to continue gunnery practice as it was.74
But the battle still was not won. The new gunnery records were attacked on the ground that the conditions — of smooth water, short ranges, and slow speed — in no way approximated battle conditions. Sims explained at length the technical reasons for the use of this short range and small target. The latter was of the exact size, in proportion to the range, to allow for errors of the gun and yet at the same time to test the gunners' accuracy.75 It was not until the target practice of 1906 that attempts were made to simulate battle conditions. Meanwhile, the Report of Autumn Target Practice, 1905 embodied the general principles of fire control as it is practiced today.76
The results of the 1906 target practice, carried on at ranges greater than four thousand yards, were very satisfactory. In 1907 the large guns averaged a hit per minute, and the fleet average, excluding guns smaller than six‑pounders, was 77.6 per cent hits where it had been 40 per cent in 1903. Although by this time great improvements had been made in guns and mechanical firing equipment, most of the fire-control instruments, as Sims wrote to Roosevelt, were still improvised by the officers.77
Even at this date, and in the face of successful practice records, opposition to the new methods continued. In a letter to Sims dated August 29, 1907, Roosevelt deplored the resurrection of arguments ". . . against rapidity in delivering accurately aimed shots . . ., arguments which one supposed were decently buried a decade ago."78
Roosevelt and Sims were responsible for an improvement in gunnery estimated variously at from three to five hundred per cent in a period of seven years.79 Whatever the exact degree of improvement might have been, Rear-Admiral Mason, Chief of the Bureau of Ordnance, declared that: "The most striking feature in the history of the Navy during the last few years has been the progress in gunnery. Results of . . . shooting have been such as were thought impossible a few years ago. . . ."80 Actually the percentage of misses in 1909 about equaled the percentage of hits in 1901. Writing to Secretary Newberry, Roosevelt estimated that "our fighting power is at least five times greater than it was before our training had been improved by Commander Sims' methods."81
Modern naval gunnery in the United States was thus born and developed during Roosevelt's Administration. Its rise led to revolutionary changes in battle tactics as well, for close, hand-to‑hand combat became impossible. Battle ranges have ever since been growing greater and greater.
1 Roosevelt, State Papers, pp472‑3.
3 Roosevelt, State Papers, p473.
4 Bailey, Theodore Roosevelt and the Japanese-American Crises, p298.
5 Bishop, op. cit., Vol. II, pp119‑20.
6 Roosevelt, State Papers, pp120‑1.
7 Fiske, "American Naval Policy," Proceedings of the U. S. Naval Institute, 31 (1905), p36.
8 ibid., pp38‑9.
9 Navy Department, Annual Report, 1907, pp7‑8.
10 Roosevelt, State Papers, p310.
11 Reuterdahl, "Needs of Our Navy," McClure's, 39 (1908), p257.
12 e.g. Roosevelt, State Papers, p260 (Annual Message, 1904).
13 Sprout, op. cit., p269, note 71 (Refers to 63rd Cong. 3rd Sess. House Committee on Naval Affairs, Hearings on Estimates for 1915, p702).
14 H. A. Evans, op. cit., p158.
15 J. C. O'Laughlin, "The American Fighting Fleet, Its Strategic Disposition," Cassier's Magazine, 24 (1903), p385.
16 Fiske, Autobiography, p350.
17 Navy Department, Annual Report, 1902, p392 (Report of Bureau of Navigation).
18 ibid., p4.
19 Converse, Refutation of Alleged Defects, p2 (60th Cong. 1st Sess. Sen. Doc. No. 298)
20 Sprout, Rise of American Naval Power, p278.
21 Navy Department, Annual Report, 1906, pp403‑5 (Report of Bureau of Navigation).
22 Navy Department, Annual Report, 1907, p6.
23 ibid., p13.
24 Bailey, Theodore Roosevelt and the Japanese-American Crises, pp277‑8.
25 Roosevelt to Spring-Rice, 1905, quoted in Dennett, op. cit., pp46‑7.
26 Bishop, op. cit., Vol. II, p64.
27 Bailey, "The World Cruise of the American Battleship Fleet," Pacific Historical Review, 1 (1932), p400.
28 ibid., p403.
29 See, e.g. "Will the U. S. Make a New Declaration of Imperialism?" Baltimore Sun, July 3, 1907, in Roosevelt's Scrapbooks, Editorial Comment Series, Vol. 23, p11. Also "16 Big Battleships for the Pacific," New York Herald, July 2, 1907, in Events of Interest Series, Vol. 27, p43.
30 Sims, "Roosevelt and the Navy," Part II, McClure's 54 (1922), pp59‑60.
31 See, e.g., Bailey, Theodore Roosevelt and the Japanese-American Crises, Chap. XII.
32 Brassey, Naval Annual, 1909, p34.
33 Navy Department, Annual Report, 1908, p6.
34 Quoted in Brassey, Naval Annual, 1909, p35.
35 Navy Department, Annual Report, 1909, p29.
36 R. D. Evans, op. cit., p411.
38 See, e.g. Gleaves, op. cit., p4895.
39 R. D. Evans, op. cit., p415.
40 Sims, "Roosevelt and the Navy," McClure's, 54 (1922), p33.
41 Sims, Letter to Naval Personnel Board, Sept. 24, 1906, p22.
42 Navy Department, Annual Report, 1902, p29.
43 E. E. Morison, Admiral Sims and the Modern American Navy (Houghton Mifflin Co., 1942), pp82‑3; Sims, "Roosevelt and the Navy," McClure's, 54 (1922), p34.
44 Morison, Admiral Sims pp83‑5; Sims, "Roosevelt and the Navy," McClure's, 54 (1922), p34.
45 Morison, Admiral Sims pp83‑4, 109‑10; Sims, "Roosevelt and the Navy," McClure's, 54 (1922), p37.
46 ibid., p84.
47 ibid., pp116‑17.
48 ibid., pp123 ff.
49 ibid., pp126 ff.
50 ibid., Chapter IX, especially p137; Hermann Hagedorn, "Theodore Roosevelt and the Navy," The Republican, 7 (1942), p32.
51 U. S. 60th Cong. 1st Sess. House Committee on Naval Affairs, Hearings and Communications, 1907‑1908, p157 (Testimony of Rear-Admiral N. E. Mason).
52 ibid., p153. Also "Ten Years' Development of the Battleship," Scientific American, 97 (1907), p406; Morison, Admiral Sims, pp146‑7.
53 Fiske, Autobiography, p360.
54 ibid., pp400‑1.
55 U. S. 60th Cong. 1st Sess. House Committee on Naval Affairs, Hearings and Communications, 1907‑1908, p153.
56 Fiske, Autobiography, p401.
57 Gleaves, op. cit., p4897; Morison, Admiral Sims, pp84‑5.
58 R. D. Evans, An Admiral's Log (D. Appleton, 1910), pp132‑3. Also Fiske, Autobiography, pp352‑3, and Morison, Admiral Sims, p125.
59 R. D. Evans, An Admiral's Log, p137.
60 "Admiral Sims Dies of a Heart Attack," New York Times, Sept. 29, 1936, p27; Morison, Admiral Sims, pp83 ff.
61 "Admiral Sims Dies," cited; Morison, Admiral Sims, pp102‑4.
62 Morison, op. cit., p104.
63 "Admiral Sims Dies," cited; Morison, op. cit., p104.
64 Morison, op. cit., pp104‑5.
65 ibid., pp122 ff., 126 ff.
66 ibid., pp131 ff.; Gleaves, op. cit., p4901.
67 Morison, op. cit., pp134 ff.; Sims, "Roosevelt and the Navy," Part II, McClure's, 54 (1922), p38; Gleaves, op. cit., pp4900‑1.
68 Gleaves, op. cit., pp4900‑1.
69 Roosevelt, Letter to House Naval Affairs Committee, Jan. 11, 1907, quoted in Brassey, Naval Annual, 1907, p388.
70 Gleaves, op. cit., pp4901‑3.
71 Navy Department, Annual Report, 1903, p25.
72 Morison, op. cit., pp135 ff., 138 ff.
73 ibid., pp139‑41.
74 Sims, "Roosevelt and the Navy," Part II, McClure's, 54 (1922), pp58‑9; Morison, op. cit., p141.
75 Morison, op. cit., pp141‑3.
76 ibid., pp147, 236.
77 ibid., pp236 ff.
78 Sims, "Roosevelt and the Navy," Part II, McClure's, 54 (1922), pp58‑9.
79 See ibid., p58; Also Roosevelt, Autobiography, p213.
80 U. S. 60th Cong. 1st Sess., House Committee on Naval Affairs, Hearings and Communications, 1907‑1908, p153.
81 "Admiral Sims Dies," cited, p1.
There are 55 authoritarian leaders in power throughout the world. Eleven of these leaders are 69 years old or older, and they are in varying stages of declining health. Most of these aging dictators, such as Angola’s Jose Eduardo dos Santos (73 years old), Kazakhstan’s Nursultan Nazarbayev (75 years old), and Zimbabwe’s Robert Mugabe (91 years old), have been in power for decades. At first blush this paints a hopeful picture for democracy watchers, who have recently documented a slow but steady authoritarian resurgence. Surely the fact that 20 percent of the world’s autocracies face the specter of succession provides an opportunity for new democracies to emerge — or does it?
Alternatively, perhaps the number of aging and ailing dictators is a cause for concern. Some fear that the deaths of these longtime leaders will spark intense political infighting or public unrest that could plunge their countries into chaos. The fact that most of this aging cohort, such as Algeria’s Abdelaziz Bouteflika, Cameroon’s Paul Biya, and Sudan’s Omar al-Bashir, has yet to identify a political successor seems to add credibility to these concerns.
Both perspectives seem plausible — but our research shows that there is little merit to either of them. In our review of the 79 dictators who have died in office from 1946 to 2014, we find that the death of a dictator almost never ushers in democracy. Nor does it typically bring down the regime. Instead, in the vast majority (92 percent) of cases, the regime persists after the autocrat’s death. The deaths of Hugo Chávez in Venezuela in 2013, Meles Zenawi in Ethiopia in 2012, and Kim Jong Il in North Korea in 2011 illustrate this trend. Compared with other forms of leadership turnover in autocracies — such as coups, elections, or term limits — which lead to regime collapse about half of the time, the death of a dictator is remarkably inconsequential.
Not only is it exceedingly rare for an autocrat’s death in office to result in democracy, but it also does not improve a country’s longer-term prospects for liberalization. Leaders who come to power following the death of an autocrat and who seek to deviate from the status quo are likely to provoke resistance from the “old guard” — elements of the regime who maintain control over the levers of power and find it in their interest to limit changes in the new system.
It is often forgotten today that the brutal Syrian dictator, Bashar al-Assad, came to power after his father’s death in 2000 with hopes of liberalizing his country. Soon after inheriting power, he began a series of political reforms, including efforts to increase press freedoms, release political prisoners, and expand Internet use. But President Assad’s ability to change the system was limited by influential figures from his father’s regime who exerted their political power and influence to block policy changes and inhibit their implementation.
We also find that coups and public revolts are rare following a dictator’s death. During the year of a leader’s death in office, coups have occurred in only 6 percent of cases, compared with 32 percent when autocrats have left power via other means. Similarly, mass public protests are far less likely to break out following a dictator’s death than after other forms of authoritarian leader exit. This pattern persists even when we adjust our time frame and look at the five-year period following a leadership transition.
In some cases, such as Kuwait or Saudi Arabia, the resilience of authoritarian regimes following the deaths of their leaders reflects the durability of monarchies, where highly institutionalized succession processes ensure stability across generations. In other cases, a regime’s resilience is driven by the ability of fathers to position their sons as heirs, such as in Syria (2000), Azerbaijan (2003), and Togo (2005). But countries with less formal or obvious mechanisms for succession, such as Venezuela in 2013, Zambia in 2008, or Turkmenistan in 2006, have also endured their leaders’ deaths.
Perhaps we shouldn’t be surprised that there is little change following a dictator’s death in office. Autocrats who die in office tend to be particularly adept politicians — having evaded myriad threats to their rule — and they are likely to have fashioned entrenched political systems capable of persisting beyond their passing. On average, dictators who die in office have enjoyed 16 years in power, compared with just seven for those who exit by all other means. Such longevity is only possible by developing an inner circle of elite supporters who are highly invested in the status quo and are equipped with institutions that they can use to maintain it. In other words, the very strategies that are key to a dictator’s ability to stay in office until death increase his regime’s resilience after his passing.
The presence of a well-functioning support party is among the key strategies that enhance the durability of autocracies and facilitate the succession process. A strong body of academic studies demonstrates the prolonging power of political parties in authoritarian settings. While these parties differ from political parties in democracies, they do serve important functions in autocracies, such as counterbalancing interventionist militaries, distributing benefits to citizens, and promoting the regime’s ideology. Moreover, well-functioning political parties can co-opt individuals with political aspirations or those seeking to gain access to the spoils of office. Once these potential political rivals to the regime are incorporated and incentivized to participate in the system, the party serves as a focal point for negotiations over the choice of a new leader who can continue to protect their interests.
Although a leader’s death in office infrequently prompts the downfall of the regime or instability, these events do occasionally occur. So when should we worry about prospects for instability? Regimes governed by “strongmen” — where political power is highly concentrated in the hands of an individual — tend to be more at risk of instability following a leader’s death. But even then, instability is rare because many personalized regimes rule with the aid of a political party. The depth of the party matters, and those that invest in their development tend to be the regimes that more seamlessly outlive the death of the leader. For example, after the deaths of Hafez al-Assad in Syria in 2000 and Ethiopia’s Meles Zenawi in 2012, both of whom led highly personalized regimes, the ruling political parties — the Baathist Party in Syria and the Ethiopian People’s Revolutionary Democratic Front — were critical in ensuring the regimes’ resilience.
In addition to countries where the regime lacks an effective ruling party, we find that countries that have recently experienced protests and domestic instability also have an elevated risk of coups and protests in the wake of a leader’s death. These findings are consistent with a body of research indicating that recent instability enhances the prospects that a country will experience instability in the future. Periods of instability produce segments of the population with networks and experience that prove useful in mobilizing further protests in the face of any discontent during a leadership transition. For example, previous episodes of instability likely contributed to unrest in the aftermath of Guinean President Lansana Conté’s death in 2008 and Gabonese President Omar Bongo’s death in 2009.
In another small subset of cases that we reviewed, a leader’s death set into motion dynamics that spurred instability in the longer term. In these cases, instability stems not from immediate disagreements over a potential successor, but from the tactics the new leader uses to consolidate power. In ethnically or geographically divided societies, opportunistic leaders can leverage divisions to boost their popularity. This was the case in Ivory Coast, where the death of President Félix Houphouët-Boigny in 1993 triggered the rise of Ivoirian nationalism that planted the seeds for civil war nine years later.
In its 2015 “Freedom in the World” report, Freedom House reported that the risk of a widespread democratic decline is higher now than at any time in the last 25 years. Unfortunately, our results show that the advanced age of 11 of the world’s autocrats offers little hope for reversing this trend. Instead of creating space for change, the deaths of these long-standing leaders will most likely leave in place the resilient autocratic systems they’ve created. Though most leadership transitions generate opportunities for political transformation in dictatorships, death in office is not among them. Death in office, it turns out, is a remarkably unremarkable event. | <urn:uuid:260998fa-0695-4188-8556-61e4b0a757c0> | CC-MAIN-2018-13 | http://www.africacradle.com/when-dictators-die/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647782.95/warc/CC-MAIN-20180322073140-20180322093140-00121.warc.gz | en | 0.951819 | 1,812 | 2.78125 | 3 |
ABOVE: A significant rock carving overlooking the Baisha Xi River RD&E and Visitor’s Center greets visitors to Hailongtun, China. Courtesy of VOA
Q & A with Daryl LeBlanc, AIA – Associate Principal, VOA Associates, Inc.
How does storytelling relate to entertainment design?
Storytelling is an integral part of entertainment design. Most projects have an underlying reason for being that can be expressed through stories. Whether it is creating an experience based around an IP, an educational component, or a significant culturally relevant venue, we find that stories are necessary to set the ground rules for the design. It would be problematic to define and develop the content for these projects without knowing how the story relates to the experience we are trying to convey.
What kind of projects do you employ storytelling on?
We try to incorporate storytelling into all of our projects. The most natural projects for this strategy include attractions and theme parks, museums and learning centers and RD&E [retail, dining and entertainment] venues. However, we have also found this to be an effective strategy in the design of projects within other markets, notably hotels and resorts.
How is the story developed?
This is dependent on the complexity and scale of the project. There have been occasions when we collaborate with consultants to research and write the stories, and in others, we have taken the lead for these efforts. Before design begins, an intense level of research takes place at the beginning of a project; the initial ideas are brought to the surface in a series of open discussions and workshops. Concepts, story outlines and experience fragments are discussed and debated until the ideal story for the client is developed. At that time, the actual writing begins. In some cases, this is no more than a list of several statements that comprise a series of “Goals and Outcomes” that we want all guests to experience. Other times, these “Goals and Outcomes” become expanded into a series of developed narratives that describe the experience from a guests’ point of view. The goal is to identify and describe the things we want visitors to know, feel and do.
How do you incorporate storytelling in your design process? How is it expressed?
Depending on the scope and scale of the project, the composition of the project team including the Owner, and other related factors, we typically employ a range of storytelling techniques. The design process begins with the definition of the story. This may seem strange to those who are not familiar with this process, but it can actually form the basis for a more organized and linear method of designing, especially with large, multi-disciplinary teams. When story threads, goals and outcomes are defined prior to putting pen to paper, it is easier to judge the validity of a specific design idea as we move forward. We are then asking ourselves whether this design concept supports and enhances the story, as opposed to something more subjective, such as whether we like it. It is therefore easier to keep a larger team engaged and moving forward in the same direction. It is a matter of constantly asking how this idea/detail/material supports the story. As part of this process, we develop the actual design work through techniques like storyboarding to create a series of vignettes that express how the story relates to the guest’s perspective. In some cases, we are more interested in creating a sequential series of experiences than in determining practical solutions, such as where is the front door? We find that technical issues like that resolve themselves quickly, leaving us time to focus on crafting the experience.
Does storytelling translate in the experience of the space? How?
I would say that these techniques do translate into the experience of the space. In some cases, this can be a very subtle overlay to what is happening, and in other cases, the story is at the forefront of how guests relate to the space. This range in story line prominence should be discussed early in the process. In either scenario, we feel that the use of stories enhances any design by making all components work together. Whether everyone fully understands the complete extent of the story at the end of their visit is more difficult to control, but they should at least get a sense that the spaces are richly textured, highly compelling and ultimately interesting enough to capture their attention and imagination, ideally making them want to return.
Will this way of approaching design continue to be important or is there something else that will replace it?
In my opinion, this method of designing will continue to be important. It is a way of distinguishing spaces and creating experiences that are about more than just developing interesting and unique forms. There is a deeper connection to the individuals who experience these spaces, and hopefully the memories created resonate longer as a result. In the future, the evolution of story-based design will include the ability for stories to be even more individualized, with the potential to be specifically tailored to each guest. While this is happening now, think of components in a museum that cater to different levels of engagement, or elements in a theme park that are meant to captivate different age groups. I believe this individualization will become more important in the future. The idea that I can have one experience in a venue that may be completely different from someone else’s, yet both scenarios create the same sense of fulfillment and desire to return, is a powerful notion that we should consider. I believe this extends outward past the physical environment and affects the way guests make initial plans to visit these types of spaces, which is usually via the Internet. The guest experience does not start when you arrive at a facility; it starts far earlier, when we begin our planning to visit such a space. Similarly, these experiences do not stop when we leave, either. It is the creation of lasting memories that is at the root of these efforts. People remember stories, especially the ones that personally involve them. Creating this highly personal connection is the reason we use stories as the basis of design, and why we feel that this strategy will continue to be a strong design tool in the future.
• • •
Since joining VOA’s Orlando office in 2002, Daryl has been instrumental in building a thriving hospitality and commercial practice, currently serving as Senior Director for VOA’s Hospitality Practice Group. He brings exceptional expertise and a deep knowledge base to the design and project management of complex themed entertainment environments, multi-component resorts and hotel facilities.
A new exhibition on the Jewish painter Marc Chagall has opened in Moscow to celebrate the 125th anniversary of his birth and aims to explore the impact his Russian and cultural background had on his work.
Marc Chagall:The Origins of the Master’s Creative Language at the Tretyakov Gallery in Moscow explores some of the celebrated artist’s lesser known works, in the form of drawings, watercolours, sculptures and paintings to show how the untaught master drew his inspiration from his natural habitat and his exposure to Jewish and Russian culture.
Chagall was born Moishe Segal in 1887 to a poor Jewish family on the outskirts of Vitebsk, in modern-day Belarus. Following the Russian Revolution of 1917, which abolished anti-Semitic laws, he was briefly made Fine Arts Commissioner in his hometown, which itself features heavily in his works.
Following his resignation from the post, after a dispute with a fellow painter, Chagall left his birthplace and shortly afterwards fled to Paris, where he made a name for himself.
A noted realist who drew both realistic and fantastical references from real-life events and locations, Chagall filled his works with notable Jewish references, including the repeated symbolism of menorahs and the Torah (the Jewish bible). When posed the question as to whether his art was based on fantasy or reality, Chagall is recorded as having answered: "It is not true, that my art is fantastic. I'm realist and I love normal life on earth!"
Although noted around the world, his work was, for some time after his death in 1985, frowned upon by the former Soviet Union on account of the Communist state’s disregard for its “bourgeois” tendencies. Following former leader Mikhail Gorbachev’s political reforms in 1987, however, he finally earned an audience in his motherland, with thousands queuing that same year outside Moscow’s Fine Arts Museum for the first major exhibition of his paintings inside Russia.
One of the main components of this latest Russian tribute to the artist is a display of his family collection of portraits, never before exhibited in Russia. The series, entitled My Life and painted in 1922, provides an illustrative background to his family life, including all its key members, as well as substantial references to his hometown of Vitebsk.
Exhibition curator Ekaterina Selezneva claims the showing, which runs until September 30, “must help people to understand the mystery of Chagall”, whose Russian roots are often overlooked due to his home nation’s former rejection of his work. “His works are flying around the earth, but return to their place of origin,” she added. | <urn:uuid:a06ca89f-441c-4d91-a573-8eed57e5e2d4> | CC-MAIN-2013-20 | http://www.eju.org/news/europe/moscow-exhibition-explores-chagall%E2%80%99s-russian-jewish-roots | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705352205/warc/CC-MAIN-20130516115552-00038-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.985708 | 556 | 2.671875 | 3 |
Description: A map of Walton County showing county lines, the county seat (De Funiak Springs), railroads, and cities current to 1904.
Place Names: Walton, De Funiak Springs, Sterling, Red Bay, Portland, Euchee Anna, Mossy Head, Deerland
ISO Topic Categories: transportation, inlandWaters, boundaries, oceans
Keywords: Walton County, physical, political, transportation, physical features, county borders, railroads, inlandWaters, boundaries, oceans, Unknown, 1904
Source: Tunison's Florida (1904)
Map Credit: Courtesy the private collection of Roy Winkelman. | <urn:uuid:affee815-e0eb-4d0c-9277-cc24360c0f43> | CC-MAIN-2013-20 | http://fcit.usf.edu/florida/maps/pages/1800/f1843/f1843.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704133142/warc/CC-MAIN-20130516113533-00017-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.749164 | 138 | 2.65625 | 3 |
Expert chefs have known it for a long time: with copper cookware, any meal can be prepared in a precise and gentle way. Copper pots and pans are among the best for cooking and roasting, largely because copper has excellent material properties.
Copper is a near-perfect heat conductor: heat spreads evenly over the cookware and is delivered precisely. Professionals and ambitious amateur cooks all over the world know what this quality is worth. Beyond that, copper has other features that add value to your cookware but are not commonly known.
Copper cookware is Naturally Antibacterial
Copper cookware has an antibacterial effect because germs and bacteria cannot survive on copper. Ancient civilizations exploited the antimicrobial properties of copper long before the concept of microbes became understood in the nineteenth century (Wikipedia). Copper is toxic to these microorganisms, but for humans the material poses no threat at all. For this reason, water-supply pipes and hospital door handles were made of copper for a long period, so that the transmission of germs could be avoided.
“Regular cleaning does not reduce the bioburden, but copper works 24/7 for years and never loses activity, even with biofilms and proteins present.”
Professor Tom Elliott.
Why top pastry chefs beat egg whites in copper bowls
That proper egg whites must be whipped in an unlined copper bowl has long been taken for granted among top pastry chefs. Copper bowls have been used in France since the eighteenth century to stabilize egg foams. Unlike other copper cookware items, which benefit mostly from copper's supreme heat conductivity, the copper bowl takes advantage of another natural feature of the metal: its mild reactivity, which helps create a kind of copper salt that forms a tighter bond with reactive sulfur groups in foods such as egg whites.
The bond created is so tight that the sulfurs are prevented from reacting with any other material, so the protein within the egg white stays more pliable and more expandable. That also raises the threshold temperature of denaturation, giving roughly 10 more degrees in which to expand and increase the egg-white volume. And you don’t need to worry about copper toxicity: copper contamination from this method is minimal.
Copper versus Copper Core versus Copper Bottom
Besides solid copper cookware, there is also copper-core and copper-bottom cookware. What is the difference between them? Copper-core cookware has an internal layer of copper, while the other layers are stainless steel, aluminum or another alloy. This design is meant to keep the heat-distribution advantage copper offers while avoiding the main downsides of copper cookware: high cost and the constant need for cleaning and maintenance.
Of course, copper-core cookware lacks the shining visual appeal of pure copper cookware. What you get from it is copper's good heat distribution, which is an important reason for purchasing copper cookware in the first place.
Unlike cookware with an internal layer of copper, copper-bottom cookware has only a copper base. These pots and pans are easily identified because the bottom is copper-colored while the rest of the cookware is not. Like copper-core cookware, copper-bottom cookware tries to offer the benefit of uniform heat distribution without the higher expense of 100% copper. Cookware with a copper core or copper bottom will improve your cooking, but you cannot expect it to perform quite the same as pure copper.
Why must a copper jam pan be unlined for high-quality jam?
Copper pots without a coating are particularly suitable for making jam. Jam pots, and the so-called “Poêlon à Confiseur” that pastry chefs use to make syrup, caramel, chocolate and the like because of its excellent heat conductivity, are not tin-plated: sugar melts at over 180 °C, which would damage a tin coating.
The supreme heat conduction of a copper jam pan means your jam can be ready in a shorter time, which preserves more of the flavor and texture of your fruit. Using a copper pan helps avoid over-boiling, which would harm the quality and taste of the jam. Another useful feature of copper during jam making is its even heat distribution, which dramatically reduces the need for stirring. It also responds rapidly to changes in heat.
When you lift the pot off the heat, it stops boiling almost instantaneously, and when you lower or raise the heat, the change in temperature is transferred to the pot accordingly. With it you can make a large amount of jam with little effort. This is exactly why the copper jam pan is regarded as a game changer by many jam experts.
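The near-instant response described here follows from copper's high thermal diffusivity. As a rough back-of-the-envelope sketch (the property values below are approximate textbook figures, not numbers from this article), the time for a temperature change to cross a pan wall of thickness L scales as L²/α, where α = k/(ρ·c_p):

```python
# Rough estimate of how fast a heat change crosses a 3 mm pan wall.
# Property values are approximate textbook figures for illustration only.

def diffusion_time_s(thickness_m, conductivity, density, specific_heat):
    """Characteristic time t ~ L^2 / alpha, with alpha = k / (rho * c_p)."""
    alpha = conductivity / (density * specific_heat)  # thermal diffusivity, m^2/s
    return thickness_m ** 2 / alpha

wall = 0.003  # 3 mm pan wall

copper = diffusion_time_s(wall, conductivity=401, density=8960, specific_heat=385)
steel = diffusion_time_s(wall, conductivity=16, density=7900, specific_heat=500)

print(f"copper: ~{copper:.2f} s")                 # ~0.08 s
print(f"stainless steel: ~{steel:.1f} s")         # ~2.2 s
print(f"copper responds ~{steel / copper:.0f}x faster")  # ~29x
```

With these rough numbers the copper wall settles in well under a tenth of a second, versus a couple of seconds for stainless steel, which is consistent with the pan's quick response when it is lifted off the heat.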
Its excellent heat conductivity makes copper a perfect base material for pots and pans. Copper cookware is heavy, quick to cool and long-lasting, but a little expensive. Beyond these points, the other facts about copper cookware described in this post help explain why it holds the king's position. | <urn:uuid:cf8ad31e-8cb3-4084-9dab-05d4aa09bdfc> | CC-MAIN-2017-26 | http://www.coppercookwareinfo.com/facts-about-copper-cookware/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319575.19/warc/CC-MAIN-20170622135404-20170622155404-00548.warc.gz | en | 0.934314 | 1,043 | 2.671875 | 3
Young Climate Warriors
Can you find that special key?
Updated: Sep 24, 2021
Autumn 2021 is an exciting time for taking action to combat climate change. Representatives from nearly 200 countries are meeting in November in Glasgow to help decide our global climate change strategy and plans, so we will be hearing lots about climate change over the next few months. Do you think it is just the governments and big businesses that hold the ‘key’ to solving our climate crisis? What can we do at home? Well, Young Climate Warriors also hold a ‘key’ … this week’s challenge is to find a very special key, and use it … it’s a key to your electricity meter. Ask your parent/carer if they can help you find it. Do you think it looks like a car key or a house key? Or more like a Playmobil/Meccano spanner?
Your challenge this week (with your parent / carer’s permission) is to write down some meter readings and work out how much electricity you use – in a day, in a week? How does your electricity consumption compare to the average UK household - using 8-10kWh a day on average across the year? You could check again in the middle of winter – when it is dark and cold outside our electricity usage is normally higher. Do you think you could reduce the amount of electricity you consume? You probably already have lots of ideas of how to do this, and over this year Young Climate Warriors challenges will provide many other fun suggestions. Understanding how much we use is a helpful first step in then seeing how we can reduce it.
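If you enjoy a bit of maths, the arithmetic of the challenge can be sketched in a few lines of Python (the meter readings below are invented example numbers, not real data):

```python
def daily_usage_kwh(first_reading, second_reading, days_between):
    """Average electricity used per day between two meter readings, in kWh."""
    if days_between <= 0:
        raise ValueError("days_between must be positive")
    return (second_reading - first_reading) / days_between

# Hypothetical readings taken one week apart.
per_day = daily_usage_kwh(41523.4, 41588.1, days_between=7)
print(f"Average use: {per_day:.2f} kWh per day")        # 9.24 kWh per day
print("Within the typical UK range (8-10 kWh)?", 8 <= per_day <= 10)  # True
```

The same function works for any pair of readings: take one now, one in midwinter, and compare the two daily averages.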
When you have found and used your electricity meter ‘key’ please HIT THE RED BUTTON and let us know!
Electricity is generally produced by burning fossil fuels (coal, oil, gas), which release carbon emissions when burnt; these emissions are the main cause of climate change. Excitingly, however, carbon-emissions-free, renewable sources of energy are increasingly being used – like SOLAR (harnessing the sun), HYDRO (harnessing waves or fast-running or falling water), WIND (turbines on land and out at sea) and GEOTHERMAL (using heat from the earth’s core).
A QUARTER of GLOBAL electricity is now created from RENEWABLE sources. In 2020 RENEWABLE energy made up just over 40% of the UK’s electricity generation – great progress is being made!! We still need to play our part in reducing our electricity consumption to help us reduce our carbon emissions more quickly.
If you’d like to see a ‘live’ snapshot of how much electricity is being produced by renewables, at this exact time – you could have a look at this website: https://gridwatch.co.uk/renewables/percent/
If you want to remind yourself of ways in which you can save energy at home, and reduce your electricity consumption and carbon emissions, you could watch this European Commission YouTube clip. No doubt you will also think of plenty of ideas of your own!
Don’t forget to HIT THE RED BUTTON when you have put the electricity meter key to good use! | <urn:uuid:b11d4739-4ab3-48f4-b2bd-6ced0109f949> | CC-MAIN-2023-23 | https://www.youngclimatewarriors.org/post/can-you-find-that-special-key | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648695.4/warc/CC-MAIN-20230602140602-20230602170602-00563.warc.gz | en | 0.942141 | 670 | 2.90625 | 3 |
Online Workshops About UDL and Accessibility
Introduction to Accessibility and Universal Design for Learning
Developed by Blackboard, Inc. and offered by the Boise State IDEA Shop, this course is an introduction to building online courses that are usable and accessible. The content is intended to inspire further exploration and advocacy for designing materials that benefit all students and help them achieve their educational goals. While the focus of this course is on developing online courses, much of the information about accessibility and universal design for learning is also applicable to face-to-face courses and hybrid courses.
The course is self-paced but facilitated by instructional design consultants from the IDEA Shop. You can move through the material on your own schedule, though a recommended schedule is provided.
The course content is organized into three modules:
- Universal Design
- Assistive Technology
For details about upcoming offerings, see “Workshops & Events” on the Center for Teaching and Learning website, or contact Kevin Wilson, instructional design consultant, IDEA Shop (firstname.lastname@example.org).
Universal Design in Education: An Online Tutorial
Developed by the Center for Universal Design in Education, this extensive tutorial covers the history, definition, and principles of universal design while also providing numerous examples of specific applications of universal design.
Universal Design for Learning Module for Postsecondary Education
Developed by the Florida Consortium on Postsecondary Education and Intellectual Disabilities, this online course presents UDL as a manageable framework of strategies and techniques that support a diverse population of students while maintaining the integrity and content of postsecondary coursework. By using the techniques described in this module, you will discover that students who once struggled to learn become more engaged and leave with a deeper understanding of the course content.
Universal Design for Learning: Online Training Module
According to the Center for Teaching and Faculty Development at San Francisco State University, after completing the online training course, you will be able to do the following:
- Identify the principles of Universal Design for Learning
- Identify the benefits of Universal Design for Learning, both for you and your students
- Explain the rationale for implementing Universal Design for Learning
- Identify alternatives in conducting your classes
- Develop strategies for implementing Universal Design for Learning in your classes
Accessibility: Designing and Teaching Courses for All Learners
This Open SUNY course offers an opportunity to obtain a better understanding of accessibility as a civil rights issue and develop the knowledge and skills you need to design learning experiences that promote inclusive learning environments. Prepare to engage in thoughtful discussions, participate in peer review assignments, take short self-check quizzes, watch videos, and explore relevant readings. You will also earn badges that recognize your mastery of these competencies.
During this 6-week course (1 to 2 hours per week), you’ll learn how to:
- Recognize and address challenges faced by students with disabilities related to access, success, and completion.
- Articulate faculty and staff roles in reducing barriers for students with disabilities.
- Apply the principles of Universal Design for Learning (UDL) in designing accessible learning experiences.
- Analyze the benefits of Backward Design when developing learning experiences.
- Use Section 508 standards and WCAG 2.0 guidelines to create accessible courses.
- Determine which tools and techniques are appropriate based on course content. | <urn:uuid:f7fdda69-0579-4de0-b217-d148467cde1d> | CC-MAIN-2022-27 | https://www.boisestate.edu/accessibility/faculty/online-workshops-about-udl-and-accessibility/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103271763.15/warc/CC-MAIN-20220626161834-20220626191834-00281.warc.gz | en | 0.90694 | 691 | 3.078125 | 3 |
Where Jews, Muslims have lived in relative harmony
To people pondering with dismay the situation in today’s Middle East, it may seem strange that less than a century ago Jews and Muslims frequently lived together in relative harmony. But the Jews of Morocco, with a 2,000 year presence in the region, provide a classic example.
Though its precise location is unknown, the land of Tarshish to which Jonah fled (Jonah 1:3) is often equated with Spain. If that identification is correct, Jewish merchants may have visited Spain and Morocco as early as the 10th century B.C., when Solomon is said to have traded there (1 Kings 10:22). Small permanent Jewish communities first appear in Morocco (Mauritania) during the time of the Roman Empire.
At that time, Jews were scattering throughout the empire, at first for economic reasons and then, after A.D. 70, as refugees from the near-genocidal Roman wars in Judea. In Romans 15:22-29, Paul talks about his plans — probably never fulfilled — to visit Jews in Spain, which may have included a hoped-for visit to the smaller Jewish communities in Morocco.
Paradoxically, the Arab Muslim conquests of North Africa and Spain in the seventh and early eighth centuries brought an increase of Jewish migration into Morocco and nearby Spain. This migration began with Jewish merchants, but soon expanded to include Jews from all walks of life. Most medieval Jewish migrants went to Spain, which afforded greater economic and cultural opportunities.
However, the wars of the Spanish Christian Reconquista (1002-1492) created political instability, causing Jews to begin to leave Spain. The completion of the Christian reconquest of Spain from the Muslims culminated in the 1492 Alhambra Decree of Ferdinand and Isabella, which demanded either conversion or expulsion of all of Spain’s Jews. (Five months later, Columbus departed on the expedition that discovered the New World.)
Portugal issued a similar decree in 1497. These expulsions transformed the nature of the Jewish communities in Morocco.
As a general rule, Jewish refugees from Spain after 1492 were welcome throughout the Muslim world, largely for economic reasons. As a contemporary Sultan of Turkey is reported to have said, “The king of Spain is a fool; by expelling his Jews he impoverishes himself and enriches me.” Many of these Jewish emigrants were skilled craftsmen, merchants, bankers and scholars.
The most famous Spanish Jewish refugee was the rabbi and philosopher Moses Maimonides (1135-1204), who was born in Cordoba but migrated to Fez, Morocco, where he composed his famous commentary on the Mishnah. He later moved to Egypt, where he served as court physician to Saladin and as the official representative of the Egyptian Jews to the Sultan.
The Jews expelled from Spain spread throughout the Mediterranean, but most went to North Africa, just a short distance across the Straits of Gibraltar. The majority settled in Fez or Marrakesh, transforming Morocco’s greatest cities at the time into centers of Jewish economic, cultural and scholarly activity. The Jewish quarter of southeast Marrakesh flourished for centuries. In 1492, Jewish emigrants from Spain founded the Laazama (al-Azama) Synagogue, which is still in operation, though mainly for Jewish tourists.
The nearby Jewish Cemetery of Bab Ghmat is likewise half a millennium old; Marrakesh’s surviving Jews are still buried there.
Other Moroccan Jews settled in the countryside, remaining among the native Muslim Berbers there for nearly 500 years. Among the founders of these Jewish-Berber communities was Shlomo bel Hensh. Revered as a tzadik (“righteous” holy man), his tomb in the Ourika Valley of the Atlas Mountains is still a site of Jewish pilgrimage. Of course, relations between Jews and Muslims in Morocco were not always friendly, but as a whole, Jewish communities thrived in Morocco during the early modern period.
Ironically, the establishment of the state of Israel in 1948 led to the decline of independent Jewish communities in the Middle East, North Africa and Europe. Whereas in 1900 there were as many as 300,000 Jews in Morocco, today they number only a few thousand. Most have emigrated to Israel, where their descendants number around 1 million, roughly one sixth of contemporary Israeli Jews.
Jews from Spain and North Africa, known as Sephardic Jews (Sephardim), form one of the major cultural forms of modern Jewry along with Ashkenazi (European), and Mizrahi (Middle Eastern) Jews. In recent years, Morocco’s King Hassan II has encouraged Israeli Jews of Moroccan descent to return to their homeland, with only meager success.
Daniel Peterson chairs The Interpreter Foundation and blogs on Patheos. William Hamblin is the author of several books on premodern history. They speak only for themselves. | <urn:uuid:36d3dd90-1e13-4944-9226-6442e99b29f8> | CC-MAIN-2019-43 | https://www.thespectrum.com/story/life/features/2018/04/16/where-jews-muslims-have-lived-in-relative-harmony/33861701/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986718918.77/warc/CC-MAIN-20191020183709-20191020211209-00110.warc.gz | en | 0.969666 | 1,023 | 3.484375 | 3 |
Transgender Day of Remembrance 2022
Thank you for sharing this moment in time with us. I know these past years have been a lot, not to mention this week, as we celebrated Trans Awareness.
The TDOR holiday was started in 1999 by transgender advocate Gwendolyn Ann Smith as a vigil to honor the memory of Rita Hester, a trans woman who was killed in 1998. The vigil commemorated all the trans people lost to violence since Rita Hester’s death and began an important tradition that has become the annual Transgender Day of Remembrance. Roses seeks to highlight the losses we face due to anti-transgender bigotry and violence. We know Black and brown trans girls often experience higher levels of violence at home, school, and work. It goes without saying that trans folk are no strangers to the need to fight for our rights, and the right to simply exist comes first and foremost.
With transgender people being erased, sometimes in the most brutal ways possible, it is vitally important that those we lose are not forgotten and that we continue to fight for justice while keeping their names alive. It is pivotal to remember as we mourn that many, if not all, of these victims were killed by acquaintances, partners, or strangers - some of whom have been arrested and charged, but more often than not their killers have yet to be identified and held accountable. As we know, in most cases there is a clear sign of anti-trans bias at play. But we often fail to talk about how these queer and trans folks’ identities may have put them in harm’s way, such as by forcing them into unemployment, poverty, homelessness, and/or survival sex work.
We know it’s clear that Black and brown trans women are disproportionately affected by violence, and more specifically Black trans women at the intersections of misogynoir, homophobia, and transphobia. Unchecked violence and discrimination continue to deprive them of housing, healthcare, employment, and other necessities needed to thrive. Let us continue the fight against anti-trans rhetoric and violence while simultaneously engaging in Trans Joy, cherishing these moments with one another on the road to Liberation for all.
-Mulani Jackson (She/Her), Roses Initiative National Organizer | <urn:uuid:27bfe478-fd13-457e-b67b-2487db807301> | CC-MAIN-2022-49 | https://ourtranstruth.org/tdor-2022/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710918.58/warc/CC-MAIN-20221203011523-20221203041523-00513.warc.gz | en | 0.966124 | 467 | 2.90625 | 3 |
“One of the greatest shortcomings of human logic is the unquestioned belief that psychological problems, be it of behaviour or intelligence, are influenced only by psychological factors and that physiological factors are only influenced by physiological factors. This presupposes that the mind and body are separate, that psychological stress makes muscles tense. Ask a chemist, an anatomist and a psychologist to define where the mind starts and the body ends and they will find that the two are intimately interconnected”
Double Nobel Prize winner Dr Carl Pfeiffer
1. Balance your blood sugar – this means eating food that gives you stable energy throughout the day. It is thought that insulin (the hormone responsible for keeping our glucose in balance) might be contributing to the rise in dementia that has been seen in the last 50 years.
2. Make sure that you recognise and eliminate food allergies – there has been a rise in allergies over the last generation. The gut is sometimes referred to as “The Second Brain”: it has more receptors for the happy hormone serotonin than anywhere else in the body. Keeping the gut happy keeps the mind happy.
3. Look at your exposure to pollution – your body uses essential nutrients to detoxify pollutants, and if those nutrients are depleted you could end up with problems of the mind. Zinc is especially important, as are the B vitamins; however, making sure you have the full spectrum of nutrients is what matters most.
4. Address stress by looking after the adrenal glands (the glands that produce the stress hormone cortisol) – make sure you are not having too many stimulants, e.g. strong caffeine. Try green tea, which does contain caffeine, but only around 15 mg (some coffees could have more than 150 mg per cup).
5. The focus for Mental Awareness Week is mindfulness – currently the therapy of choice, and easily practised by yourself without the need for fancy equipment. Mindfulness involves staying in the present moment (as opposed to ruminating about the past or projecting the future). Keeping focused on the NOW enables you to be more aware of your body, your mind and your choices. I like Mark Williams’s book “Mindfulness” as a great start. | <urn:uuid:8a2f244f-7e52-4d68-a7bb-457e4e96e80e> | CC-MAIN-2021-43 | https://katecook.biz/mental-performance/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585380.70/warc/CC-MAIN-20211021005314-20211021035314-00072.warc.gz | en | 0.951677 | 452 | 2.828125 | 3
From the Oceanic Discoverer in New Zealand
Feb 3, 2013 - Oceanic Discoverer
We woke to calm seas and a slightly overcast sky, with pleasant temperatures in the mid-60s. Just before breakfast, White Island appeared on the horizon; diving Australasian Gannets, flying fish and a large manta ray escorted us in. As we approached, plumes of steam were visible from many miles off, but today, as we came closer, dense plumes laden with mud and rock occasionally shot into the air at the far end of the crater. Lying at the northern end of the Taupo-Rotorua volcanic zone, part of the Pacific Ring of Fire, it is one of four active volcanoes on that line; it often emits clouds of steam and the occasional spurt of ash-laden cloud which can be seen hanging over the island, but today it was at its most spectacular. Its Maori name is Whakaari (to make visible); Captain Cook, on his first visit to New Zealand in 1769, gave the volcano its English name, inspired by the dense clouds of smoke or steam.
At the end of the 19th century there was a huge demand for sulphur for farm fertilizer, and the first sulphur was mined on White Island in the early 1880s. An eruption in September 1914 caused a mudflow that swept the mining settlement out to sea, leaving only a cat alive and no trace of the 12 people who worked there. Parts of the abandoned workings could be seen when we landed on the south-eastern side of the island. The White Island Sulphur Company gave the island to the father of the present owner, and in 1953 it was declared a Private Scenic Reserve, now administered by the Department of Conservation (National Park Service).
Wildlife abounds around the island. Three small colonies of Australasian Gannets have become established, and petrels and other sea birds nest on the nearby islands and rock stacks, including Whale Island, which is smaller but again an active volcano.
Today a heavy swell allowed only the nimble-of-foot to land, but those of us who viewed the activity from close inshore aboard the ship had excellent views of the increasing volcanic activity.
It was a great day and a spectacular way to finish the cruise. | <urn:uuid:7fe0c4f7-bd91-4228-bfc9-8551a8728ee4> | CC-MAIN-2013-20 | http://www.expeditions.com/daily-expedition-reports/155244 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701281163/warc/CC-MAIN-20130516104801-00087-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.965336 | 479 | 2.71875 | 3 |
Deciding whether news is in the public interest
Journalists should always apply the public interest test before deciding whether to cover a story. For most issues it's fairly clear what is and what is not in the public interest; however, for some issues it's more complicated, particularly where privacy and power are concerned.
Why public interest matters
The first task is to separate what is in the public interest from those things members of the public are interested in; they are not necessarily the same - although they could be.
The fact that the public may be interested in something often has nothing to do with whether investigating and covering that issue is in the public interest.
The public interest is in having a safe, healthy and fully-functioning society. In a democracy, journalism plays a central role in that. It gives people the information they need to take part in the democratic process. If journalists are good at their job, they hold governments and other institutions to account.
This is what real journalists do. They scrutinise the executive, shine a light in dark places, and dig where others don't - and all in the public interest.
The public interest is more than what the public is interested in
Deciding what is in the public interest
So there is a public service ethic at the heart of all serious journalism. To fulfil this public service role, journalists must build and retain the trust of their audiences by behaving in an ethical and professional manner.
This site has a section covering editorial ethics for journalists. It contains training modules which deal with many of the issues involved in ensuring journalism is in the public interest. However, sometimes, there are reasons to vary from standard good practice in order to bring an important subject to the public’s attention. This is where you apply the public interest test.
For example, journalists should normally be honest about who and what they are; they should always give their names and say which news organisation they work for. A journalist needs to be straight themselves if they want straight answers from others.
However, there are times when a journalist might have to go undercover and hide their true identity and the real reason for their actions. Such cases could include the investigation of crime or political wrongdoing. This is an act of deception, which is generally to be avoided, but, if it brings justice, and an end to criminal activity, it may be justified in the wider public interest.
The private lives of public figures
Journalists should not normally intrude into the private lives of people - but there might be a case for doing so if the person being investigated is a public figure who is behaving differently in private from what he or she is advocating in public.
In this case, media intrusion - normally an objectionable practice - could expose hypocrisy and dishonesty. However, such intrusion must be clearly shown and clearly seen to be in the wider public interest. You must be able to justify your actions to yourself, your colleagues and, perhaps later, to your audience.
Things become more difficult when the story in question may actually involve a journalist breaking the law, or encouraging someone else to do so. Here you need to have a serious discussion with colleagues about the circumstances, the public interest benefit in covering the story, the risks involved and the likely consequences.
Some countries build "the public interest" into their legal systems. So if you want to publish a difficult or controversial item because it is "in the public interest" it is highly advisable to know whether the legal framework will give you any protection, or not.
Of course, in other countries, those in power might actively oppose journalists revealing information which is in the public interest because it might threaten their control of society. In such cases, the public interest test takes on another meaning. How those in power define the public interest might be more about control than freedom of information. Here, extra care is required.
Some public interest justifications
If the decision is taken to publish, it’s likely to be because the story would do one of these things:
- Correct a significant wrong.
- Bring to light information affecting public well-being and safety.
- Improve the public’s understanding of, and participation in, the debate about a big issue of the day.
- Lead to greater accountability and transparency in public life.
None of this is easy. Journalists grapple with these issues every day. And there are many other factors at play that are not even touched on in this brief training module, but if you get the public interest test right, you will be fulfilling the highest purpose of journalism.
Please try our public interest test scenario and see if you pass the test.
The image above is adapted from a photograph by andercismo from Flickr under Creative Commons. | <urn:uuid:798fa19a-84dc-4bb0-9548-bb8082c25355> | CC-MAIN-2017-09 | http://www.mediahelpingmedia.org/training-resources/journalism-basics/360-applying-the-public-interest-test-to-journalism | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00351-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.958884 | 966 | 2.875 | 3 |
Why are women at higher risk for stroke than men?
May 9, 2016
The good news for women is that they generally live longer than men. The less than great news is that the risk for stroke increases with age, which means that women typically have a higher stroke risk.
In 2016, about 55,000 more women than men will have a stroke. More women than men die from stroke each year because older women outnumber older men. Strokes are the leading cause of disability for women, and they kill twice as many women as breast cancer.
Stroke ranks No. 4 among all causes of death, behind heart disease, cancer, and chronic lower respiratory disease. However, it’s the third-leading cause of death among women, and women accounted for almost 60 percent of stroke deaths in 2013.
In fact, strokes are so common in Texas that we’re included in the “Stroke Belt” – an area from Virginia down to Florida and west to Texas that has America’s highest rates of morbidity and death from stroke.
May is National Stroke Awareness month, and it’s the perfect time for women to think about their heart health and how it affects their risk for stroke. With that in mind, let’s discuss why women are more at risk, how women tend to handle their unique symptoms, and how we all can stay healthy together.
Women’s heart health and stroke risk
There are specific reasons why the risk of stroke is higher in women than men, including:
- Postmenopausal changes: The risk for vascular diseases increases as we age, and certain conditions that arise after menopause can increase that risk. These include high blood pressure, high cholesterol, and diabetes.
- Preeclampsia/eclampsia: This condition can double a woman’s risk of having a stroke for years after the pregnancy.
- Cerebrovascular disorders: Women have a higher rate of aneurysms and subarachnoid hemorrhage, which is bleeding in the area between the brain and the thin tissues that cover it. This is an additional risk factor for stroke.
- Migraines with aura: This condition can more than double a woman’s risk of stroke.
- Hypertension: High blood pressure is one of the most common – and most treatable – risk factors for stroke.
- Atrial fibrillation: Women generally have a higher rate of atrial fibrillation (AFib, or irregular heart rhythm) than men. AFib is a major risk factor for large embolic strokes. In fact, having AFib puts a person at five times greater risk for stroke.
Women also should pay attention to other stroke risk factors, in particular those that are related to cardiovascular disease. It’s important for all people to keep their hearts healthy, and women should be particularly mindful of this to decrease their risk of stroke.
How symptoms of stroke in women differ from men
In general, men and women face the same stroke symptoms, including sudden onset of:
- Difficulty walking, balancing, or speaking
- Numbness or weakness
- Severe headache with no apparent cause
- Vision problems
However, women often report these additional stroke symptoms:
- Difficulty breathing
- Fainting or loss of consciousness
- Feeling weak all over
- Sudden behavior changes or agitation
- Vomiting or nausea
The way men and women tend to react to their symptoms also is notably different. Men are infamous for minimizing their symptoms. They don’t assign a high degree of urgency or importance to them and often shrug them off.
Women, on the other hand, acknowledge their symptoms but try to assign other reasons for not feeling well. For example, a woman is likely to say her blood pressure is high because she’s upset about something or recently started a new medication, or to say she’s in pain or feels weak because she didn’t sleep well, and so forth.
Women also tend not to seek care for stroke symptoms because they don’t want their friends or family to worry. Or sometimes, they don’t want to deal with a serious medical diagnosis because too many people depend on them.
But when stroke symptoms occur and you don’t act quickly, severe and sometimes catastrophic consequences can ensue. You may not be able to do simple things like go to the grocery store alone or pick up your kids or grandkids. Women must ask themselves, is avoiding a visit with the doctor worth living that way?
What you can do to prevent stroke
I tell men and women alike that, first and foremost, heart-healthy lifestyle changes can dramatically decrease stroke risk:
- Eat in moderation. Include leafy greens, fruits, and lean meats, and consume less sodium.
- Exercise more. I recommend moderate exercise, three to four times a week.
- Check your blood pressure. A quarter of Americans have high blood pressure. Check your blood pressure daily at home and provide the readouts to your doctor. The doctor can help you calibrate your monitor to make sure it reads correctly.
- De-stress. Take walks with friends or practice yoga or meditation.
- Get routine well checks. Seeing your doctor regularly can help you stay on top of existing risk factors and prevent new ones from developing.
- Stop smoking. Smoking increases high blood pressure, which is a major risk factor for stroke.
Medication also can help support a healthy heart and reduce stroke risk. There are several ongoing studies of drugs for heart disease and AFib, and we’re working on how these drugs may be used to prevent stroke safely and effectively.
In the past, some stroke prevention medications that were designed to reduce clotting also increased the risk of bleeding. Drugs like warfarin often have food and drug interactions that were difficult to manage. Plus warfarin has to be monitored with frequent blood tests.
However, oral medications such as new oral anticoagulants (NOACs) are safe and effective and may even reduce the risk of bleeding. Also, there are no food or drug interactions to manage with NOACs. The older drugs may be cheaper, but in my mind the risks just aren’t worth it for most people.
Healthier lifestyles, healthier communities
Unfortunately, it sometimes takes suffering a stroke to really catch people’s attention and get them to focus on their health. After a stroke, most patients want to do whatever they can to avoid another because they know how awful a stroke can be and how long it can take to recover.
Theoretically, if men and women lived healthier lifestyles, and all of their risk factors were identified and well treated, the risk of stroke overall would decrease by about 80 percent. Leading a healthy lifestyle includes identifying risk factors, working with your doctor to control those risk factors, and seeking medical care right away when stroke symptoms occur. Nationally, this could result in 600,000 fewer strokes.
There’s no better time than right now to start focusing on healthy lifestyle choices, not only to detect and treat risk factors, but also to enjoy a healthier life together. Remember, the best way to treat stroke is to prevent it! | <urn:uuid:b0986588-1e3f-4a04-b43c-ce68edc43832> | CC-MAIN-2022-05 | https://utswmed.org/medblog/stroke-symptoms-women-risk/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300616.11/warc/CC-MAIN-20220117182124-20220117212124-00594.warc.gz | en | 0.955209 | 1,522 | 2.953125 | 3 |
The wet soil underneath a foundation might shift from loss of strength or swelling. Maintaining a dry foundation is important because this change is known to have a negative impact on structural integrity. Dampness is also a problem since it presents an ideal environment for mold growth. Apart from making below-ground spaces unpleasant, mold might be a health hazard. Unfortunately, conventional concrete is not waterproof, and even though it might keep out liquid water when it does not have cracks, water vapor penetrates easily. As such, preventing water from moving through concrete and ensuring it drains away from the foundation is essential to structural integrity.
Depending on factors such as climate, soil or water table conditions, geographic location, topography, and depth of the foundation, the task of installing proper drainage to guarantee dry below-grade spaces might be relatively straightforward or somewhat involved. Systems designed to keep water out consist of three basic components:
• Drains for moving water away from the foundation’s bottom.
• Wall treatment designed to route water downwards and into the drains, which helps to ensure moisture does not move through the wall.
• Ground surface treatment installed adjacent to the building, which helps by directing surface water away.
While leaky basements are a common problem, taking precautionary measures to keep your below-grade interior spaces from flooding might be the best solution, especially since you now know that water is the leading cause of structural issues. Basement waterproofing is an excellent investment because it helps prevent water damage, which often leads to costly repairs.
Before you engage an expert, you’ll want to have some basic knowledge of the methods and products used to prevent basement leaks. Knowing a little bit more about the process of curing a leaky basement or what waterproofing involves is important. Apart from knowing what to expect, arming yourself with this information might help you choose the right service provider.
Waterproofing situations are often entirely different, and so are the basements involved. As such, identifying the exact location of moisture penetration is just as important as developing an ideal and sustainable solution. Here’s a look at the three methods used to make below-grade spaces impermeable.
1. Interior waterproofing and sealants
Cracks, which usually form in concrete foundations, are a common water entry point. Fortunately, many professionals have the skills and equipment to seal such cracks. To seal off potential moisture entry points, waterproofing experts will usually inject a special sealant into the crack openings from the inside of below-grade spaces, making sure it penetrates all the way to the outside. The sealants used should have the ability to prevent leaks even when subjected to high levels of humidity.
You can, however, choose to use watertight coatings. This waterproofing method is a viable alternative since the coating employed will adhere to your concrete walls permanently and function effectively even when subjected to dampness or minor condensation. Unfortunately, the use of watertight coats or sealants does not fix major leaks or flooding. This inability is mainly because waterproof coats and sealants don’t perform well when subjected to intense water pressure.
2. Exterior waterproofing
Exterior waterproofing involves extensive excavation, which needs to be done around the entire house and all the way to the foundation’s base. Once the excavation is complete, experts will use a waterproof coating to seal the walls and make them watertight. This helps by directing water away and towards a drainage system. Thanks to this waterproofing method, homeowners can keep water from seeping into below-grade spaces through the walls or foundation. Additionally, exterior waterproofing prevents moisture from causing damage to a home’s interior and foundation.
3. Installation of interior and exterior drainage systems
Proper drainage systems allow better control over water, even in cases where water has already penetrated into the house. Using a sump pump to collect and drain water from below-grade spaces is one of the simplest ways to enhance drainage. Even so, you’ll want to be sure you’re pumping the water away from your house.
Interior drainage systems serve to drain underground water from below-grade spaces as well as the water that collects along the foundation. Though some homeowners take the functionality of an interior drainage system lightly, these systems can provide extensive benefits when there’s a power outage, heavy rainfall or heavy melting of snow. Apart from helping to ensure the area is free of both mold and mildew, proper drainage is effective in keeping water out of the home.
Taking preventive measures is important, which is why you should contact an expert and prepare your home before it’s too late. | <urn:uuid:ee58f92c-76f1-4d02-80b3-2421c871182a> | CC-MAIN-2020-29 | https://www.raleighwaterproofinginc.com/2017/02/ | s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890092.28/warc/CC-MAIN-20200706011013-20200706041013-00306.warc.gz | en | 0.942853 | 955 | 2.953125 | 3 |
Sufism, also known as Tasawwuf (Arabic: تصوف), is an Islamic mystical tradition whose adherents aim to achieve obliteration of the self and complete oneness with Allah. Sufis believe that nothing exists outside of Allah (similar to the Kabbalistic view on the nature of God), and that the Qur'an has always existed on some metaphysical realm outside of our own, and was therefore merely revealed by Allah rather than authored by him. This belief caused much persecution of Sufis during some periods of history. Sufis, like most Muslims, view Muhammad as the prime example of spiritual piety, the al-Insān al-Kāmil, and hold that he was not only the greatest prophet, but also the greatest man to have lived.

Sufis are particularly fond of dhikr, the practice of reciting the various "names of Allah," of which there are 99.
Practitioners are called Sufis, dervishes, or murīdīn. They group themselves into orders based around a murshid, or a guide. These orders are called Tariqa. The etymological origin of Sufism is not clear, but scholars generally agree that ṣūf or "wool" is probably the root word of "Sufi." Sufism has many parallel beliefs alongside different religions, including Buddhism, Christianity, Hinduism, Judaism, as well as Greek concepts.
- ↑ Carl W. Ernst, The Cambridge Companion to Muhammad, Muḥammad as the Pole of Existence, Cambridge University Press, p. 130 | <urn:uuid:6883387a-c13e-4133-9731-354e06f10352> | CC-MAIN-2018-09 | https://rationalwiki.org/wiki/Sufism | s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816351.97/warc/CC-MAIN-20180225090753-20180225110753-00211.warc.gz | en | 0.96412 | 348 | 3.15625 | 3 |
Gene responsible for regulating chronic pain discovered
13 September, 2012
ISLAMABAD: Researchers have discovered a gene that is linked to chronic pain - a finding that could open the door to better pain medications.
University of Cambridge researchers have isolated a gene called HCN2, which produces a protein that causes chronic pain.
Chronic pain comes in two main varieties. The first, inflammatory pain, occurs when a persistent injury (e.g. a burn or arthritis) results in an enhanced sensitivity of pain-sensitive nerve endings, thus increasing the sensation of pain.
More obdurate is the second variety, neuropathic pain, in which nerve damage causes ongoing pain and hypersensitivity to stimuli; it is also a common component of lower back pain and other chronic painful conditions.
Neuropathic pain, which is often lifelong, is a surprisingly common condition and is poorly treated by current drugs. Neuropathic pain is seen in patients with diabetes (affecting 3.7m patients in Europe, USA and Japan) and as a painful after-effect of shingles, as well as often being a consequence of cancer chemotherapy.
"Individuals suffering from neuropathic pain often have little or no respite because of the lack of effective medications. Our research lays the groundwork for the development of new drugs to treat chronic pain by blocking HCN2," said lead researcher Professor Peter McNaughton, Head of the Department of Pharmacology at the University of Cambridge, said.
The researchers engineered the removal of the HCN2 gene from pain-sensitive nerves and then carried out studies using electrical stimuli on these nerves in cell cultures to determine how their properties were altered by the removal of HCN2.
Following promising results from the in vitro studies in cell cultures, the researchers studied genetically modified mice in which the HCN2 gene had been deleted.
By measuring the speed with which the mice withdrew from different types of painful stimuli, the scientists were able to determine that deleting the HCN2 gene abolished neuropathic pain.
Interestingly, researchers also found that deleting HCN2 does not affect normal acute pain (the type of pain produced by a sudden injury, such as biting one's tongue).
The study is published in the journal Science. | <urn:uuid:3a294c56-13c8-49d2-869a-a609d951eb7c> | CC-MAIN-2016-40 | http://paktribune.com/news/Gene-responsible-for-regulating-chronic-pain-discovered-253256.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661778.39/warc/CC-MAIN-20160924173741-00271-ip-10-143-35-109.ec2.internal.warc.gz | en | 0.955524 | 462 | 3.34375 | 3 |
There have been 40 years of research into parental engagement. A report done in 2008 for the UK government by Janet Goodall and Alma Harris was titled “Do Parents Know They Matter?” This is an interesting question. Do you, as parents, know that you can make a difference not only to your child’s academic achievement but also to their overall wellbeing by supporting their learning in the home?

Janet Goodall has put together a 6-point model of elements of parental engagement that have been found to be effective in supporting children’s learning. These elements all work together and are underpinned by effective parenting practices.
- Authoritative parenting: combining clear and consistent boundaries with practices that encourage your child to move towards autonomy and the ability to exercise self-control.
- Learning in the home:
  - expressing high educational aspirations
  - teaching values in the home – decide what these values are
  - making time and space to read and do homework – routines
  - everyday activities – discussions while doing household tasks or driving home
- Beginning engagement with learning early:
  - begin early – start when they are small
  - kids need structure – regular bedtimes and mealtimes
  - the greater the structure, the better they are able to cooperate with others and follow rules and norms, and the higher their academic development (Harris & Goodall, 2008)
  - make it part of everyday activities – read, sing songs and nursery rhymes, take them on visits, and give them regular opportunities to play with friends at home
- Staying engaged throughout school:
  - although parents often feel less in touch with their child’s learning as they grow older, usually due to the increasing difficulty of the curriculum, it is even more important to discuss the future, hold high expectations and realistic boundaries, talk about school activities, and check on progress at school (Clinton & Hattie, 2013)
- Holding and passing on high expectations:
  - value education highly and pass that on to your kids
  - what parents say and think about education has a direct effect on students’ own beliefs and actions (Harris & Goodall, 2008)
  - have realistic academic expectations for them
- Active interest:
  - involvement with homework, attending parent-teacher conferences, and being involved in the school
  - this is more than checking homework; it includes linking what is being learned at school to other things and having set structures for study.
Have a great year.
For more from Cathy Quinn head to her webpage.
To learn more about effective parental engagement visit our Parents as Partners page.
Clinton, J. & Hattie, J., (2013). New Zealand students’ perceptions of parental involvement in learning and schooling. Asia Pacific Journal of Education, 33(3), 324-337. DOI: 10.1080/02188791.2013.786679
Goodall, J. (2013). Parental engagement to support children’s learning: a six point model, School Leadership & Management: Formerly School Organisation, 33(2), 133-150, DOI: 10.1080/13632434.2012.724668
Harris, A. & J. Goodall (2008). Do Parents Know They Matter? Engaging All Parents in Learning. Educational Research 50(3): pp. 277 – 289. http://wrap.warwick.ac.uk/29194/ | <urn:uuid:00334df2-8108-4298-b572-d894442e3c62> | CC-MAIN-2019-43 | https://www.3plearning.com/blog/parents-matters/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986657586.16/warc/CC-MAIN-20191015055525-20191015083025-00052.warc.gz | en | 0.940295 | 674 | 3.640625 | 4 |
Being homeless can make it harder to succeed in school. But City Schools can help.
What does it mean to be homeless?
Under U.S. federal law, students who have no fixed, adequate nighttime residence are considered “homeless.” This includes students staying in shelters, motels, or cars, as well as those sharing the housing of others because of a loss of housing.
Is doubling up or couch-surfing the same as being homeless?
When students — with or without other family members, parents, or guardians — stay with extended family or friends because of a loss of housing and lack of resources to obtain new, permanent housing, they are considered homeless under the law. This means that they are entitled to and eligible for services provided all other students and families experiencing homelessness.
What will schools do to support homeless youth and families?
Once a school knows a student is experiencing homelessness, staff will work with the student and family to provide support and connect them with the services they are entitled to.
What can a student or family do if they disagree with a decision about school placement or resources?
If a student or family disagrees with City Schools’ determination of eligibility or type of services provided, they may appeal the decision. To start the appeal, the student or family should ask for a formal conference with the school principal. If the student or family still disagree with the decision, they may request a grievance hearing.
During the appeals process, students have the right to enroll immediately in the closest school that they are eligible to attend or to remain enrolled at the school of origin. | <urn:uuid:de94a37c-31d7-4f9f-b770-72f4336300c7> | CC-MAIN-2021-31 | https://www.baltimorecityschools.org/homeless-services | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154805.72/warc/CC-MAIN-20210804111738-20210804141738-00702.warc.gz | en | 0.960972 | 293 | 2.953125 | 3 |
California can do it, say emissions reduction forecasters
It sounds like an impossible equation. America's most populous state, California, aims to reduce its carbon emissions to 80% below its 1990 levels in the next 40 years, while its population is expected to rise from 37 million to 55 million and the demand for energy is expected to double.
However, scientists at Lawrence Berkeley National Laboratory are optimistic in a new report that the state can hit the target.
'California's Energy Future: The View to 2050' looks at a variety of scenarios for future energy use to achieve this target, including electrification, enhanced efficiency, nuclear energy, renewable energy sources, grid modernization, and carbon capture and sequestration (CCS).
And, according to the report, California is already on the way to the massive emissions reductions: ''California can achieve emissions roughly 60 percent below 1990 levels with technology we largely know about today if such technology is rapidly deployed at rates that are aggressive but feasible.''
Developments in new technologies - including artificial photosynthesis, fusion energy, more efficient and sustainable biofuels, hydrogen fuel, more effective CCS and advanced batteries for both vehicles and grid storage - can make up the shortfall say the Berkeley Lab scientists, whose colleagues are already working to improve these technologies.
If nothing changes then California's emissions in 2050 will be twice those of 1990, but just by using current technology more efficiently that increase can be limited to 20 percent.
A new way of distributing energy will be key to hitting the reduction target says the report. ''The grid as it currently stands is entirely unsustainable,'' says Berkeley Lab researcher Jeff Greenblatt. ''We're going to see a very different grid in 2050 than we have now.''
Jim McMahon, head of the Energy Analysis Department in the Environmental Energy Technologies Division, explained: ''We need either more storage on the grid – whether with batteries or compressed air or something else – or a very intelligent system that's able to respond to what's available. For example, since the wind tends to blow more at night, a smarter system would heat your water at night when you have the power and store that water, and not in the morning when everybody wants to take a shower.''
Generation too will need to change, with California's renewable energy resources needing to grow significantly, to the extent that 1.3 percent of the state's land area would have to be devoted exclusively to renewables.
Acting quickly to make the machines we already use more efficient is key say the team. Space and water heating, vehicles, domestic cooking and bus and rail fleets should be electrified whenever possible.
The final ingredient is the state's population, who will have to make significant changes to their behaviour, says McMahon: ''It's things like changing your diet, changing transportation to carpool more and use public transit, thermostat setbacks so you're cooling or heating your house a little less, eco-driving – in Europe they've taught people how to drive more efficiently. If you had 10 percent of people telecommuting, you'd have 10 percent less traffic.''
''There's a portion of the population very interested in green living. I tend to think it's generational – there are a lot of young people trying to figure out how to live more sustainably on the earth. So over time they may have more and more say over what we do.''
PCP, or phencyclidine, is a “dissociative” anesthetic that was developed in the 1950s as a surgical anesthetic. Its sedative and anesthetic effects are trance-like, and patients experience a feeling of being “out of body” and detached from their environment. Use of PCP in humans was discontinued in 1965, because it was found that patients often became agitated, delusional, and irrational while recovering from its anesthetic effects.
At high doses, PCP can cause hallucinations as well as seizures, coma, and death (though death more often results from accidental injury or suicide during PCP intoxication). Other effects that can occur at high doses are nausea, vomiting, blurred vision, flicking up and down of the eyes, drooling, loss of balance, and dizziness. High doses can also cause effects similar to symptoms of schizophrenia, such as delusions, paranoia, disordered thinking, a sensation of distance from one’s environment, and catatonia. Speech is often sparse and garbled.
PCP is addicting; that is, its repeated use often leads to psychological dependence, craving, and compulsive PCP-seeking behavior.
People who use PCP for long periods report memory loss, difficulties with speech and thinking, depression, and weight loss. These symptoms can persist up to a year after cessation of PCP use. Mood disorders also have been reported. | <urn:uuid:41402f8b-18a2-4893-89bc-e406f113a3fc> | CC-MAIN-2016-22 | http://www.cameron.edu/wellnesscenter/drugs/barbiturates/pcp | s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049273946.43/warc/CC-MAIN-20160524002113-00034-ip-10-185-217-139.ec2.internal.warc.gz | en | 0.97268 | 289 | 3.21875 | 3 |
Autumnal Recrudescence––Say What???
Perhaps you’ve noticed it too: birds singing away in mid-autumn, long after the breeding season has ended. So what physiological reason can we attribute this to? It’s due to a phenomenon with a hard-to-pronounce name – autumnal recrudescence.
The theory about autumnal recrudescence is this: Some sex hormones are triggered to be released based on the hours of daylight, and certain hormones “inspire” a bird to sing. At some point, when the daylight hours of autumn match those of just the right time in spring, those hormones are re-triggered and drive some birds to sing for a short period in the fall.
In other words, when Fall days are roughly as long as the days of the Spring mating season, birds experience a hormonal surge that triggers singing by males, despite the fact that they will not be breeding for many months.
The poem below, written by Susan Stiles in 1973 poetically sums up the phenomenon:
The Autumnal Recrudescence of the Amatory Urge
When the birds are cacophonic in the trees and on the verge
Of the fields in mid-October when the cold is like a scourge.
It is not delight in winter that makes feathered voices surge,
But autumnal recrudescence of the amatory urge.
When the frost is on the pumpkin and when leaf and branch diverge,
Birds with hormones reawakened sing a paean, not a dirge.
What’s the reason for their warbling? Why on earth this late-year splurge?
The autumnal recrudescence of the amatory urge.
-Written by Susan Stiles, copyright December 1973
- An excellent book that describes autumnal recrudescence, and many other topics in bird biology and ecology, is The Birder’s Handbook by Paul Ehrlich, David Dobkin, and Darryl Wheye.
I visited the opening of an industrial internet of things lab at National Instruments this week to see what is happening in the industrial side of the connected world. First off, I make a distinction between smart home, enterprise IoT and industrial IoT. I will likely define them in a later post, but for now, know that this event was focused squarely on industrial IoT.
Perhaps the most compelling demo I saw was associated with a standard that has been added to Ethernet called Time Sensitive Networking. The goal of the standard is to make Ethernet work well enough for the needs of industrial automation so it can replace the dozens of proprietary standards out there such as HART, CAN bus and more.
In dedicated networks, such as those for smoke detection systems or managing factory automation, data moves along an uncontested path to its destination. When we’re talking about information such as setting off a fire alarm or detecting a problem in a multi-million-dollar piece of equipment, delays can be costly.
The internet (IP) is different. It’s mostly best effort and information can get waylaid. That’s a function of its design.
In today’s factories, companies want to link their dedicated operations networks with their best-effort IP networks. That’s where timing can get screwy. You can’t transfer video (such as computer vision data from robots) and equipment health data using Ethernet without causing delays in your important equipment data.
This is why companies including NI, Cisco, Schneider Electric, Bosch and others are getting behind an IEEE standard called Time Sensitive Networking (TSN). The goal is to combine these two networks while still ensuring the important pieces of information from physical systems get to their destination without delay.
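The core scheduling idea (give critical traffic its own queue that is always served first, so it never waits behind a burst of best-effort frames) can be illustrated with a toy simulation. This sketches the general principle only; the real TSN mechanisms, such as time-aware gates and stream reservation, operate at the Ethernet MAC layer and are far more involved. All names and numbers here are made up for illustration:

```python
from collections import deque

def simulate(frames, priority=False):
    """Toy link scheduler: each frame takes one time unit to transmit.

    frames: list of (arrival_time, kind) with kind "critical" or "best-effort".
    Returns the worst queuing delay seen per traffic class.
    """
    pending = sorted(frames)
    crit, best, fifo = deque(), deque(), deque()
    delays = {"critical": [], "best-effort": []}
    t, i = 0, 0
    while i < len(pending) or crit or best or fifo:
        # Move frames that have arrived by time t into the queue(s).
        while i < len(pending) and pending[i][0] <= t:
            arrival, kind = pending[i]
            if priority:
                (crit if kind == "critical" else best).append((arrival, kind))
            else:
                fifo.append((arrival, kind))
            i += 1
        queue = (crit or best) if priority else fifo  # critical always served first
        if queue:
            arrival, kind = queue.popleft()
            delays[kind].append(t - arrival)
            t += 1  # transmit the frame
        else:
            t = pending[i][0]  # link idle until the next arrival
    return {kind: max(d) if d else 0 for kind, d in delays.items()}

# Five best-effort frames arrive in a burst; a critical frame arrives at t=2.
burst = [(0, "best-effort")] * 5 + [(2, "critical")]
shared = simulate(burst)                    # everything in one FIFO queue
scheduled = simulate(burst, priority=True)  # critical queue served first
```

In the FIFO run the critical frame queues behind the burst; in the priority run its worst-case delay drops to zero, which is the kind of guarantee industrial networks are after.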
This is now part of the IEEE Ethernet standard so it is available in Ethernet chips hitting the market (and some that are already on the market). I saw it demonstrated at the NI IIoT lab and was impressed at how well this worked, but caveats abound.
First, this standard works by looking for the MAC address of specific devices and assigning a data path between them, so you have to know where you are sending your time-sensitive data. Second, it’s only ready for a subnet on a specific IP address, which means it’s for private networks today.
But if you’re crazy excited about this idea, then the Internet Engineering Task Force has a working group dedicated to trying to apply these concepts to the Internet at large. For more on that, check out its sections on deterministic networking.
There are clear benefits for this sort of blending in industrial networks, and other demonstrations in the lab showed off how this technology could help with things such as asset tracking and managing the unreliable flow of renewable energy back onto the electric grid.
The question is whether TSN can help Ethernet make inroads onto the factory floor despite its flaws. Jim Theodoras, who is vice president of global business development at ADVA Optical Networking, says yes.
“It seems like it can’t win but then they add so many extensions to make it fit, and when you look at the sheer cost differential, it’s got to go Ethernet,” Theodoras says. “They’ll have to do a lot to make sure everything works together, but it would be worth the pain just to get the economies of scale that Ethernet has.”
That’s why efforts like NI’s are so important. They let vendors guarantee that their versions of the industrial-IoT-strength Ethernet are interoperable.
“Nothing can stand up to Ethernet,” Theodoras says. | <urn:uuid:7e42dcf4-805c-4d0c-9380-5442542ada8d> | CC-MAIN-2019-04 | https://staceyoniot.com/technology-profile-time-sensitive-networking-aims-to-bring-ethernet-to-industrial-iot/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659056.44/warc/CC-MAIN-20190117163938-20190117185938-00088.warc.gz | en | 0.952721 | 764 | 2.578125 | 3 |
Governments need to develop new policies and refine existing development regulations to address environmental sustainability. This means that the physical resources of this earth should be used in a way that maintains the ability of natural systems to support human and other life. This should be done while meeting basic human needs, being fair, and respecting cultural and
The problems with incorporating sustainability into the planning and development process include the lack of a framework for defining sustainability, a lack of a methodology for applying sustainability principles to the project at hand, and a lack of understanding of the continuum from principles to policies to desired outcomes.
The starting point for the effort should be scientifically and ecologically based—preventing the accumulation of substances in places and in quantities that cannot be absorbed by natural systems, ensuring the continued productivity of natural renewable systems, using non-renewable resources efficiently while developing alternatives, and preserving ecological diversity.
These scientific principles must be applied in a fair and equitable manner and should be integrated into good planning practice, including the concepts of Smart Growth (limiting low-intensity growth and placing more emphasis on collaborative decision making), The New Urbanism (emphasizing urban design, including form-based principles, within a regional context), and Permaculture (emphasizing sustainable agriculture, design with nature, and appropriate materials). Sustainability indicators should be considered for inclusion in the implementation program and, where used, should be integrated with quality-of-life and other indicators.
As a start, local sustainability programs must include provisions that lead to better designed, more compact communities. The most effective way of achieving efficient land use patterns is to make compact communities more desirable places to live so people will have little incentive to move farther and farther away. Compact communities will reduce vehicle miles traveled, reduce energy and materials used for infrastructure, and preserve land for recreation, agriculture, and habitat.
After more than 30 years of professional practice, Robert Odland Consulting is now focusing its efforts on implementation programs and development regulations in the following areas:
• Comprehensive Sustainability Programs
• Integrated Sustainability Regulations
• LEED for Neighborhood Development
• Downtown Revitalization
• Land Use & Renewable Energy
• Environmental Conservation
• Community Design in Extreme Climates
• Growth Management Systems
• Intergovernmental Coordination
Bob Odland possesses a unique background that enables him to deal with these diverse issues of sustainability. His interests and career combine a broad body of professional knowledge, an ability to synthesize input from varied disciplines, and an ability to deal with the transition between vision and reality.
Early in his professional career, his work focused on resource and environmental management. He was the coastal planner for the Association of Bay Area Governments and then worked with the California Coastal Commission in its initial year of operation. Subsequent coastal work included managing the coastal plan for Volusia County, Florida, and preparing the implementing ordinance for the Pacific Grove coastal program.
Other resource management work included serving on the staff of the California Assembly Revenue and Taxation Committee, where he worked on agricultural land preservation. Bob also incorporated agricultural lands preservation and other environmental issues into general plan and regulatory projects he managed. One notable example was the Comprehensive Plan and Development Regulations for the Disney World area, which dealt in detail with development on sensitive land, habitat protection, stormwater runoff, construction wastes, wellhead protection, and other environmental issues.
Bob was one of the original staff members of the Solar Energy Research Institute (SERI), now renamed the National Renewable Energy Laboratory, where he first began his work of pulling the various parts of good planning and management practices into coordinated systems at the local level. He was responsible for the creation of the Community and Consumers Affairs Branch, and became its Branch Chief.
During his tenure at SERI, he became a member of the National Review Board for the Community Technology Assessment Program sponsored by the U.S. Department of Energy, one of the first programs at any government level to recognize the significance of an interdisciplinary systems approach to land use, environmental protection, housing, utilities, resource conservation, and energy use. He also organized and was chair of two national conferences on community-scale energy systems.
After leaving SERI he co-managed the first EIR/EIS in the country on wind energy production and analyzed wind energy opportunities in Montana and Colorado. He then worked for a European wind and photovoltaic energy manufacturing company for two years. Subsequent energy work included managing one of the first studies of the relationship of energy use to land-use patterns, carried out for the Southern California region with assistance from the California Energy Commission.
Bob was the project manager and author of “The Sustainable City,” a proposed chapter of the City of Los Angeles General Plan Framework, to which he contributed. He also was the sustainable development consultant to the East Bay Conversion and Reinvestment Commission, which dealt with military base conversions.
Bob has prepared downtown mixed-use ordinances for many cities, such as Anaheim and Oakland. He also worked with the Los Angeles Central City Association to identify barriers to residential development in downtown Los Angeles and with the City of Denver in assuring that changes to the downtown regulations will enable Denver to implement its new Downtown Plan, part of the mayor’s sustainability program.
Bob is currently preparing the downtown development regulations for Anchorage, which are based on form-based principles.
Bob was a member of the team that developed the Pajaro Valley Growth Strategy, a successful intergovernmental effort to save agricultural land by focusing more development within the City of Watsonville. He prepared environmentally sensitive ordinances for the portion of Douglas County, Nevada, within the Lake Tahoe Basin and for hillside development in Pittsburg, California. He is currently preparing a model sustainable land use code for Taos County, New Mexico.
He was on the board of Urban Ecology for ten years and managed the initial phases of its award-winning Blueprint for a Sustainable Bay Area before he left to begin work in Eastern Europe on land use issues, including advising the Union of Russian Cities on implementing sustainable development. While working in Europe, he participated in the United Nations Habitat II Conference at Istanbul, which addressed housing and sustainable development.
Upon returning from Europe in 1996, he was a consultant to the Association of Bay Area Governments in support of the Bay Area Alliance for Sustainable Development, a regional offshoot of the President’s Council on Sustainable Development; a consultant to the North State Institute for Sustainable Communities; and a sustainability consultant to the OZ Entertainment Company.
Bob has made presentations on sustainability at the National APA Conference, CNU Congress, and CCAPA Conference. He has also lectured on the subject at UC Berkeley, UCLA School of Law, Sonoma State University, University of Colorado, and the University of New Mexico.
Twitter haiku: 17 syllables and 140 characters through US history
You gotta love the Twitter. Seriously. Even if you choose not to use it at a personal level, there’s just too much stuff you and your students can do with it.
Historical re-creations. Tweets as historical characters. Exit card activities. Assign homework. Virtual study rooms. Question and answer sessions with students. Connect with parents. With other teachers. With other classrooms. Provide study tips. Ask questions. Share ideas. Real time chats. Follow breaking news and current events.
History as haiku.
H.W. Brands, well-known author of Andrew Jackson, is tweeting his way through the history of the United States. Which is cool enough – and not the simplest of tasks, by the way. But Brands is not just tweeting his way through centuries of American history, he’s doing it in traditional 17-syllable Japanese haiku.
Apparently Brands has been preaching for years that history can be written in any format:
I’ve been saying this for some years when one semester a student said, ‘Well, Professor, have you ever done it?
His bluff called, Brands jumps in and does it. One or two tweets a day. For some historical events, one tweet will do. For others, it can take longer.
I covered 10,000 years in North American history in two or three haiku, but by the time I got to the Civil War, I found, in fact, that I was losing ground. It was taking me longer to write the haiku than it was for the events to roll out. The Battle of Gettysburg lasted only three days, but I wrote it in probably 10 or 15 haiku that were spread out over three weeks.
Recent tweets focus on the Homestead Strike:
I really like using this with students. For a couple of reasons. First, 17 syllables forces kids to focus on the Big Idea of the topic and provides a quick way to check for understanding. Use the haiku idea as an exit card activity. Ask different kids to create haiku from different perspectives. What is the Big Idea if we’re looking at events from a historian’s perspective a century later? What is the Big Idea if I’m Carnegie? If I’m a striker? If I’m a scab?
Second, the Twitter tool provides a broader audience for student tweets. And forces even more concentration of focus.
Extra bonus? Ask kids to make decisions about what events, people, places, topics should be addressed via the haiku slash Twitter strategy. Ask them to justify why they picked what they picked.
A quick reminder of haiku rules:
- A haiku must be three lines
- The lines must follow the 5-7-5 format
- The first line contains five syllables
- The second line contains seven syllables
- The third line contains five syllables
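If you want a quick machine check of the 5-7-5 rule (handy when student tweets come in faster than you can count on your fingers), a naive vowel-group counter gets surprisingly close. English syllable counting is irregular, so treat this as a rough first pass to be eyeballed, not an authority:

```python
import re

def count_syllables(word: str) -> int:
    """Rough English syllable count: runs of vowels, minus a silent final 'e'."""
    word = word.lower().strip(".,;:!?'\"")
    count = len(re.findall(r"[aeiouy]+", word))
    # Heuristic: a final 'e' is usually silent ("five"), unless the word
    # ends in 'le' ("simple").
    if word.endswith("e") and not word.endswith("le") and count > 1:
        count -= 1
    return max(count, 1)

def is_575(lines) -> bool:
    """True if the three lines follow the 5-7-5 haiku pattern."""
    counts = [sum(count_syllables(w) for w in line.split()) for line in lines]
    return len(lines) == 3 and counts == [5, 7, 5]
```

A combined Twitter check is then just `is_575(lines) and len("\n".join(lines)) <= 140`.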
Find out more about how Brands is creating his history via haiku (including a not to missed seven minute interview with PBS) here. | <urn:uuid:dcdb6895-4dea-4096-ac11-10d0d380b179> | CC-MAIN-2014-49 | http://historytech.wordpress.com/2014/04/29/twitter-haiku-17-syllables-and-140-characters-through-us-history/ | s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400379636.59/warc/CC-MAIN-20141119123259-00118-ip-10-235-23-156.ec2.internal.warc.gz | en | 0.953506 | 638 | 2.953125 | 3 |
“Silent Night” is a popular Christmas carol, composed in 1818 by Franz Xaver Gruber to lyrics by Joseph Mohr in the small town of Oberndorf bei Salzburg, Austria. It was declared an intangible cultural heritage by UNESCO in March 2011. The song has been recorded by a large number of singers from every music genre.
The song was first performed on Christmas Eve 1818 at St Nicholas parish church in Oberndorf, a village on the Salzach river. The young priest, Father Joseph Mohr, had come to Oberndorf the year before. He had already written the lyrics of the song “Stille Nacht” in 1816 at Mariapfarr, the hometown of his father in the Salzburg Lungau region, where Joseph had worked as a coadjutor.
The melody was composed by Franz Xaver Gruber, schoolmaster and organist in the nearby village of Arnsdorf. Before Christmas Eve, Mohr brought the words to Gruber and asked him to compose a melody and guitar accompaniment for the church service. Both performed the carol during the mass on the night of December 24.
The original manuscript has been lost. However a manuscript was discovered in 1995 in Mohr’s handwriting and dated by researchers at ca. 1820. It shows that Mohr wrote the words in 1816 when he was assigned to a pilgrim church in Mariapfarr, Austria, and shows that the music was composed by Gruber in 1818. This is the earliest manuscript that exists and the only one in Mohr’s handwriting.
In 1859, the Episcopal priest John Freeman Young, then serving at Trinity Church, New York City, published the English translation that is most frequently sung today. The version of the melody that is generally used today is a slow, meditative lullaby, differing slightly (particularly in the final strain) from Gruber’s original, which was a sprightly, dance-like tune in 6/8 time. Today, the lyrics and melody are in the public domain.
The carol has been translated into about 140 languages.
The song was sung simultaneously in French, English and German by troops during the Christmas truce of 1914 during World War I, as it was one carol that soldiers on both sides of the front line knew. | <urn:uuid:9a44fee7-5f77-499c-8c45-3363434a3a70> | CC-MAIN-2018-13 | https://exequy.wordpress.com/2013/12/25/silent-night/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645069.15/warc/CC-MAIN-20180317120247-20180317140247-00256.warc.gz | en | 0.981714 | 489 | 3.828125 | 4 |
Rats and Mice Bothering You .... To Bait or Not to Bait?
Our warm, wet summer, has seen the population of rodents (rats and mice) multiply more than usual. As the weather cools, you can expect these pesky little intruders to seek shelter in your roof cavity, in your sheds, and if you are unlucky, indoors too. Rats and mice carry disease and can be, well, downright smelly - and most people will want to control their population.
How do you know when you have too many mice? Of course, it all depends on your own personal tolerance – for me, it is when they move indoors, begin eating my home-grown fruit and vegetables, or nibble on electrical cables and damage items in the shed that they become a problem. It then becomes important to control their populations in and around your home and outbuildings.
Depending on your ethics, whether you have small children or dogs, and the rodent population levels on your property, there are several different measures you can use to control these pesky pests. Importantly, rat and mice poisons are just that, POISONOUS. Please take care when using poisons in and around your home. Be sure to wear gloves when handling baits and baiting stations, and ensure that no poisons can be touched or eaten by children and dogs - using specially made bait stations is the best way to avoid accidental poisoning and off target damage to other species. Know what baits you have on your property, and if your dog, or a neighbour’s dog is accidentally poisoned by baits be sure to seek veterinary care immediately and tell the vet what the active ingredient in the bait was.
There are two common active ingredients in baits available to home owners.
Warfarin-based baits (such as Double Strength Ratsak) must be eaten by rats and mice over several consecutive days to build up a lethal dose. Using warfarin-based baits can reduce the possibility of secondary poisoning, as larger amounts of poison need to be consumed per kilogram of body weight, and they need to be consumed on consecutive days.
Brodifacoum-based baits (Fast Action Ratsak, Talon) are more lethal, and a single feed can deliver a lethal dose. Most rats and mice will die within four to seven days of consuming brodifacoum.
If you choose to use baits, it’s important that you continue placing baits in your bait stations until the baits are no longer taken – generally, maintaining an uninterrupted supply for a couple of weeks will allow you to get effective control of rodents in your home and garden.
There are several bait stations on the market, some disposable, some re-usable. They are child proof (… and when I have cold fingers, adult proof too!). If you have an area where children and dogs are excluded, you can make your own bait station by purchasing a length of PVC downpipe (say 1m long) and screwing an end cap on one end. Place the bait in the pipe, and tie the pipe along the bottom of the fence (rats and mice will generally run along walls and fencelines). This method is very effective if you have problems with rats in your chicken run or hen house.
Wax blocks are manufactured to use in damp and outdoor locations, whereas packets of throw baits (generally wrapped in paper) are perfect to put into your roof cavity where it is cool and dry.
Old-fashioned traps are still available, and modern live-catch (and release) traps are also available. The dilemma, of course, with live-catch traps is what to do with the mice and rats. If you let them out of the trap in your own garden, they are likely to find their way inside again. I know of a gorgeous, kind and oh-so-caring home owner who has taken mice to her local vet to be euthanised – but that, of course, is a very expensive option if you have a large population of rats and mice.
Good garden and shed hygiene will help to deter rats and mice. If you have hens, rabbits, guinea pigs and other pets, be sure to feed only what your animals can eat each day. Overfeeding will only encourage rats and mice, as will unkempt outbuildings, messy woodpiles and rubbish. And of course, exclusion is important to prevent them entering the home. Fill any holes and entry points with No More Gaps, or wedge fine steel wool around pipe inlets under the sink and in the laundry.
Whichever method suits your home and garden, be sure to keep using it until you have your population under control, and if you have any questions our garden centre staff will be happy to speak with you.
#rats #mice #rodents #rodentcontrol #mouseplague #gardenpests #mitre10 #mightyhelpful #barrowandbench #unley #inthegarden #ratpoison #mousepoison #traps | <urn:uuid:2c72a9b7-efbe-46d7-8d8e-cbd0f290575d> | CC-MAIN-2019-22 | http://barrowandbench.com/blog/38/Rats-and-Mice-Bothering-You-....-To-Bait-or-Not-to-Bait | s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257316.10/warc/CC-MAIN-20190523164007-20190523190007-00445.warc.gz | en | 0.937597 | 1,035 | 2.625 | 3 |
However, in socially monogamous and multibrooded species like the Eastern Phoebe, the potential for extra-pair copulations
may remain high through the second brood (Weeks 2011).
Because male M1 initiated another brood, perhaps he became less tolerant to prevent extra-pair copulations
between the second female (F2) and the helper male.
Conversely, males maintaining proximity for mate guarding purposes should be vigilant throughout the day as extra-pair copulations
can potentially occur at any time of day (Venier et al.
The female warblers engaged in extra-pair copulations
with neighboring males, but only if the neighbor sang more songs than the female's steady mate.
Cranes are monogamous with long-term pair bonds (Walkinshaw 1973) suggesting extra-pair copulations
should be rare.
Waiting until your own female is no longer fertile before you go looking for extra-pair copulations
is part of the male strategy," he asserts.
Elevated circulating testosterone during the incubation stage may be beneficial to male Tree Swallows to facilitate extra-pair copulations
(Raouf et al.
This can lead to fewer opportunities for intraspecific encounters and fewer extra-pair copulations
Communal calling assemblages are unlikely to have a role in social mate choice, but they could provide an opportunity for already-mated trogons to gain extra-pair copulations
I posit the territorial male was inspecting the female as part of a mate guarding strategy, perhaps investigating for evidence of extra-pair copulations | <urn:uuid:d22d0689-575f-41b9-85db-d58da478b03f> | CC-MAIN-2017-17 | http://www.freethesaurus.com/Extra-pair+copulations | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125849.25/warc/CC-MAIN-20170423031205-00547-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.913999 | 327 | 2.5625 | 3 |
English form of the Greek name Λουκας (Loukas), which meant "from Lucania", Lucania being a region in southern Italy (of uncertain meaning). Luke was a doctor who travelled in the company of the apostle Paul. According to tradition, he was the author of the third gospel and Acts in the New Testament. He was probably of Greek ethnicity. He is considered a saint by many Christian denominations.

Due to his renown, the name became common in the Christian world (in various spellings). As an English name, Luke has been in use since the 12th century alongside the Latin form Lucas. A famous fictional bearer was the hero Luke Skywalker from the 'Star Wars' movies, beginning in 1977.
Knowledge Scape – from muddles to models
What do I mean by the Knowledge Scape?
In the industrial age, business was primarily based in physical locations or ‘places’. In the information age, the work place has been radically altered to reflect very different sorts of interactions in time and space. The catalyst for this transition to the information age has largely been developments in electronics and telecommunications globally. The rapid rise in computerisation has been aided in recent years by the inclusion of multi-media, increased storage capacity for soft and complex forms of information such as pictures, documents, video and sound, and increased speed of transmission.
Knowledge Scape refers to all of the ways in which knowledge is created and used within an organisation. The analogy is to a landscape or soundscape. The Knowledge Scape is a product of the information age.
Knowledge Scape is a five-day study that explores the use of information and knowledge assets within an organisation. The study can be completed at a corporate or business unit level.
At the end of the study you will have a detailed plan that describes short-term actions and longer-term initiatives that will deliver measurable improvements in the use of information and knowledge. The study also provides analysis of the four areas that are critical if information and knowledge are going to deliver substantial business benefits. Finally, the study looks at the underlying processes that create and use knowledge or intellectual assets within an organisation. Without an understanding of these processes, information and knowledge will remain diffuse – a source of untapped potential.
The study is based on a simple but effective model of the Knowledge Scape. I first developed this model in 1995. It has evolved since – from my research into the use of information and knowledge and from the practical experience of applying the model on client engagements.
Three Steps in a Knowledge Scape Study
The Knowledge Scape is a dynamic, uncertain landscape. It can be navigated in three simple steps.
A study typically involves:
- Understanding and defining the four focus areas in the context of your own organisation’s needs.
- Understanding the processes that link these four focus areas and deciding where these need to be changed or enhanced.
- Developing an action plan to help manage the transition from your present position to where you want to be in the future.
The Four Main Components or Focus Areas
There are four main components or focus areas that are used to offer a better understanding of the composition of the Knowledge Scape in your organisation, its structural elements and how they may be used to promote sustainable business advantage.
The four critical areas are:
- Personal knowledge
- Information assets
- Sustainable advantage
- The enabling work space
These four areas form the circles in the Knowledge Scape model. From experience, each of these four areas is a vital part of the overall picture. To ignore one of them would mean an incomplete understanding of the corporate Knowledge Scape. Experience also shows that each of these areas is critical when devising information or knowledge strategy and consequent action plans. Without including all four areas, plans tend to be short-term in outlook, with disappointing returns relative to investment.
Key questions include:
- What do people in the organisation know?
- Who knows that they know?
- Where did they gain this knowledge?
- How is their knowledge applied for business advantage?
- How can the organisation learn from this knowledge?
- Should any personal knowledge be transformed into a corporate asset?
Much of the knowledge that is applied within an organisation is actually Personal Knowledge. In other words, it is the expertise and experience that an individual has acquired and which they are able to apply to business work contexts. The module dealing with Personal Knowledge looks at the knowledge held by individuals who make up the organisation. If someone leaves the organisation or moves into a different department, then the knowledge available to that organisation or department will change. It is important for an organisation to understand the types of personal knowledge that are being used and the impact of changing this knowledge, for example through training, work experience, or organisational development programs. Organisations are often not aware of the degree to which personal knowledge contributes to the effective operation of their business.
The Knowledge Scape model helps to identify the types of personal knowledge that are available to an organisation, how they are currently deployed, and the potential for using personal knowledge in more effective ways. This module explores the question “what do you know”, and by extension, “what don’t you know” and “what didn’t they tell you” (something that might jokingly be called Ignorance Management).
Key questions include:
- What types of information asset does the organisation own?
- How are these corporate assets used?
- What does the organisation do to gain maximum advantage from these assets?
- Who knows what information is available?
- Who is responsible for managing the information assets?
- How can information assets be used to create new products or services?
The Information Asset module looks at information and knowledge from a corporate perspective. Information Assets are resources that could be owned at a corporate level. They include information that is stored in information systems and databases, as well as materials that are stored in filing cabinets or paper documents. Examples of information assets include data, patents, copyrights, business processes, information systems or product and service brands. Once again, many organisations do not have a clear picture of all of the intellectual assets that they own. This is particularly true of information systems that are poorly documented or information assets that are not managed using a formal or rigorous method.
The Knowledge Scape model helps to identify the types of information asset that are available to an organisation, how they are currently deployed, and the potential for using information assets in more effective ways. The module dealing with Information Assets contains checklists that are used to identify and complete an inventory of the tangible assets in an organisation. The module also uses a Life Cycle of Intellectual Assets to help determine the relative value of the various information assets owned by an organisation. I believe that an organisation is limited by its capacity to utilise its information assets effectively. To this end, this focus area explores how to improve the effective use of information assets, both with and without computers.
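One way to picture the inventory this module produces is as a simple asset register, with each asset's owner and life-cycle stage recorded. The fields, stage names and sample entries below are hypothetical illustrations, not part of the study materials:

```python
from dataclasses import dataclass

# Hypothetical life-cycle stages for an intellectual asset.
STAGES = ("create", "deploy", "exploit", "maintain", "retire")

@dataclass
class InformationAsset:
    name: str
    kind: str          # e.g. "database", "patent", "process", "brand"
    owner: str         # who is responsible for managing the asset
    stage: str         # position in the life cycle of intellectual assets
    documented: bool   # is the asset formally documented?

def undocumented(register):
    """Assets nobody has formally documented — the usual blind spot."""
    return [a.name for a in register if not a.documented]

def by_stage(register, stage):
    """Assets at a given life-cycle stage."""
    return [a.name for a in register if a.stage == stage]

register = [
    InformationAsset("customer database", "database", "IT", "exploit", True),
    InformationAsset("pricing process", "process", "Sales", "deploy", False),
    InformationAsset("legacy billing system", "database", "Finance", "retire", False),
]
```

Even a register this small makes the questions above answerable: who knows what information is available, and who is responsible for it.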
Key questions include:
- Does the organisation gain advantage from knowledge and intellectual assets?
- Is any advantage sustainable?
- Could knowledge and information have a greater impact?
- How do we create unique, advantage-sustaining intellectual assets?
- Does the organisation measure:
- The contribution of knowledge to performance?
- Information complexity?
- Return From Information (RFI)?
- The Ratio of Information to Infrastructure?
- Corporate IQ?
Sustainable business advantage is the end-goal. Without gaining some business advantages from knowledge, then personal knowledge and information assets have no validity. Many organisations do not measure the contribution of knowledge or information to business success. This module develops measurements of effectiveness for an organisation in the information age, including the use and handling of information.
This is in turn related to how the organisation develops or transforms over time, how it perpetuates itself, and how it can simultaneously improve its position. I explore the relationship between information assets and the ways in which an organisation is organised! How does an organisation invest in its own future? In what ways can improvements in organising information assets contribute to improvements in the organisation's ability to organise itself?
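The measurements listed above (Return From Information, the ratio of information to infrastructure) are named by the model but not given formulas. One plausible, purely illustrative reading treats two of them as simple ratios; the function names and figures below are my own, not part of the model:

```python
def return_from_information(value_from_information: float,
                            information_cost: float) -> float:
    """Hypothetical RFI: business value attributed to the use of information,
    per unit of cost spent acquiring and managing that information."""
    if information_cost <= 0:
        raise ValueError("information_cost must be positive")
    return value_from_information / information_cost

def information_to_infrastructure(information_spend: float,
                                  infrastructure_spend: float) -> float:
    """Hypothetical ratio of spending on information content versus the
    hardware and networks that carry it."""
    if infrastructure_spend <= 0:
        raise ValueError("infrastructure_spend must be positive")
    return information_spend / infrastructure_spend

# Illustrative figures only.
rfi = return_from_information(1_200_000, 400_000)          # 3.0
ratio = information_to_infrastructure(250_000, 1_000_000)  # 0.25
```

The point of such ratios is less the exact number than the habit of tracking them over time, which is what turns "knowledge contributes to performance" from a slogan into a measurement.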
The Enabling Work Space
Key questions include:
- What is the culture and working environment in the organisation?
- What support is available from information technology?
- What factors inhibit the use of knowledge and information?
- How are people rewarded for using information or creating knowledge?
- What is the awareness of the value of knowledge and information assets?
- Do people know about knowledge or information management theory and practice?
- Is the work space conducive to the use of personal knowledge?
- How can the work space be optimised for the effective use of corporate information?
Information and knowledge count for little if the working environment needed to make use of them is missing. This module looks at the prerequisites or foundations for effective use of information assets and personal knowledge. Part of this unit examines information politics within the organisation. Information politics can be a constructive “discipline that enables the transformation of the workplace from excessive managerial co-ordination to widespread empowerment of customers, suppliers and producers of services” [Paul Strassmann]. This component improves or expands what individuals and groups can accomplish. I also analyse the balance between management practice and the design of information systems.
The sub-title – From Muddles to Models – suggests that without some ideas and guidance we end up in a muddle and become more ignorant instead of more knowledgeable. Instead, I propose the use of various forms of models – intellectual representations of the Knowledge Scape – that can be used to create sustainable business advantage.
My methodology explores the processes and interactions between the four main focus areas and provides a way of introducing more effective management of the Knowledge Scape within an organisation.
An Overview of the Key Processes
The key processes are the links between the four components that constitute the Knowledge Scape. These processes add a dynamic aspect to the more structural elements.
The key processes listed below are not as amenable to easy definition as those, say, of a manufacturing operation or an industrial economy. Attempts to manage, control or define these processes should combine conventional approaches with the skills of improvisation, innovation and creativity.
When organisations examine the six sets of processes that operate between the four components of the Knowledge Scape, they ensure that their more qualitative aspects are not overlooked.
Some of the processes that will be explored include: | <urn:uuid:10defa40-6cc4-40df-89f6-eecf5d401557> | CC-MAIN-2017-22 | http://www.evernden.net/articles/knowledgescape/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608084.63/warc/CC-MAIN-20170525140724-20170525160724-00034.warc.gz | en | 0.920815 | 2,002 | 2.765625 | 3 |
Turbochargers: The Future Is Charged
When people talk about technological advances in the automotive industry, they’re usually talking about things like smart cars or hybrid electrics, but great strides have been made for traditional diesel and gas engine-powered vehicles as well. Although turbochargers have been around for over a hundred years, they only started showing up in cars in the late ’60s and ’70s. And even then they existed on the fringe for a while, mostly within the racing community. Lately, though, turbocharger technology has been trickling down to the average car buyer. Some turbochargers are built into the design of new models, but after-market installs have also grown in popularity, allowing anyone to benefit from the increased performance they provide.
A turbocharger’s operation is pretty brilliant. Engines require air to run, and turbochargers force more air into the chamber on each intake stroke, upping combustion efficiency. Their basic construction is two fans that connect and spin on the same shaft inside a snail-shaped housing. It sits between the engine and the exhaust, as close to the block as space will allow. As exhaust gases exit, their force sets the bottom fan (turbine) spinning. Since it’s connected to the top fan (compressor), that one spins as well and, as it does, its whirring blades pull in air from the outside, centrifugally condensing it in the housing before forcing it out the other side of the turbocharger, through the intake and into the chamber. More air means a stronger combustion stroke and increased power from the engine.
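To get a feel for the numbers (every figure below is an illustrative assumption, not a value from this article), the ideal gas law — air density is proportional to pressure divided by temperature — gives a rough estimate of how much extra air a given boost pressure packs into each intake stroke:

```python
# Rough, illustrative estimate of the extra air a turbocharger supplies,
# using the ideal gas law (density proportional to pressure / temperature).
# All figures are assumed for illustration only.
ATMOSPHERIC_PSI = 14.7       # sea-level ambient pressure
boost_psi = 8.0              # assumed boost delivered by the turbo
ambient_temp_k = 298.0       # ~25 °C intake air
charge_temp_k = 330.0        # assumed charge temperature after an intercooler

pressure_ratio = (ATMOSPHERIC_PSI + boost_psi) / ATMOSPHERIC_PSI
density_ratio = pressure_ratio * (ambient_temp_k / charge_temp_k)

print(f"Air mass per intake stroke: ~{density_ratio:.2f}x naturally aspirated")
```

More air mass per stroke means more fuel can be burned per stroke, which is where the extra power comes from. It also shows why the intercooler matters: cooler charge air is denser air.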
The Power Impels You
Sounds like a sweet deal, right? Well, hang on there, because although the operating principle is relatively simple, installation is not. For one, simply finding the space for it in the engine bay can be tough. Beyond that, there are a number of modifications you’ll have to make to your intake and exhaust systems, and other supporting components will probably have to be installed too, such as an intercooler, wastegate and blow-off valve. With modification, most cars can handle a turbocharger, but unless you have serious mechanical experience, this is not a simple DIY and should be left to the professionals.
These days, many manufacturers are designing turbocharged engines by default. In part, this is to meet new government emissions targets, but it doesn’t hurt that rising prices at the pumps are causing customers to look more closely at fuel efficiency. Furthermore, while turbocharging used to be the domain of diesel engines, new technology has made turbocharged gas engines more of a viable option.
Turbochargers are, in principle, simple mechanisms to increase engine performance. And with dependable aftermarket turbos on the market, they’re kind of a no-brainer for anyone looking to get more power behind the wheel. Due to the technicality of the installation, however, this is one project you want to talk to your mechanic about.
For more information on turbochargers, chat with a knowledgeable expert at your local NAPA AUTO PARTS store.
Photo courtesy of Blair Lampe. | <urn:uuid:c81cfbd0-60af-4d46-9033-a1ea22a1a935> | CC-MAIN-2018-34 | http://knowhow.napaonline.com/turbochargers-future-charged/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215222.74/warc/CC-MAIN-20180819145405-20180819165405-00180.warc.gz | en | 0.953336 | 668 | 2.53125 | 3 |
We early Texans of the upper Brazos had to go to Dallas or McLennan county, Texas, for our breadstuff in those early times. Near the beginning of the Civil War, Cravens and Darnell built an inclined wheel cornmill in Golconda, the first name given to Palo Pinto, run by the tread of oxen on the wheel, and we fared well, as we were able to grow corn along the Brazos where the Indians had set the example before us. But the Red Man, always bent on some mischief, came along and killed the big mulatto negro who was the miller while he was out hunting his oxen, and we had to fall back on the old hand steel mill, which was demonstrative evidence that man should eat his bread by the sweat of his face.
In 1870 Captain Cureton took an immigrant train of 70 people overland from Texas to California, and owned most of the herd of cattle carried by his boys to the Pacific coast at the same time. Captain Cureton returned from California, and was sheriff of Bosque county from 1876 to 1880, the period of time immediately succeeding the reconstruction days, when the country was infested by the worst of criminals, and when the sheriff and his deputies literally stood between the inhabitants of the community and assassins and thieves. He died and was buried in Bosque county in May, 1881, survived by all of his children and by his wife, who survived him until May, 1906, when she died at the home of her daughter, the wife of Judge O. L. Lockett, of Cleburne, Texas.
I was one of the earliest settlers in Bandera county, when that section was wild and unsettled. The country was full of game. I established my ranch on the West Prong of the Medina River. As with most of the pioneers of those days, I erected a log house, and left the opening for the fireplace, and was waiting for a chimney builder to come and put up my chimney. To keep the rain out, | <urn:uuid:da3f8d1e-692f-488f-871e-62cb1d459c7d> | CC-MAIN-2015-06 | http://www.lib.utexas.edu/books/texasclassics/traildrivers/txu-oclc-12198638-c-0792.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122220909.62/warc/CC-MAIN-20150124175700-00238-ip-10-180-212-252.ec2.internal.warc.gz | en | 0.988799 | 453 | 2.53125 | 3 |
You find yourself in the basement of a local school. A mobile phone is lying discarded on the floor and a window has been smashed. But what is going on? Explore the online game in this music adventure mystery to search for clues and complete the grade one level written worksheets to unlock hidden evidence.
The Mystery of the Blue Raven is playable on a computer, laptop, tablet or smartphone and is available with a ‘Student’ or ‘Studio’ licence. The ‘Studio‘ version for teachers includes bonus games and resources and is perfect for music lessons.
All work well during in-person lessons and many are ideal for online teaching using platforms like Zoom or Skype. Simply open a web browser on your computer, laptop or tablet, select a game and play!
Seven online chapters
Search the school using seven online interactive chapters or games playable on a computer, laptop, tablet or smartphone. What’s hiding in the music teacher’s locker and why does that picture look odd in the art room?
Ten theory worksheets
The ‘Blue Raven’ includes a printable set of ten theory worksheets (with hidden clues!) to keep forever. The book comes in a PDF format for easy printing after checkout and more information about the topics covered can be found below.
Case Files and Suspects
Can you solve the mystery by working out the identity of the criminal mastermind and also what the Blue Raven actually is? Keep track with a handy set of Case Files and rule out any suspects. Successful student investigators automatically unlock a printable certificate.
Music theory worksheets
There are questions to answer in both the written and online parts of the mystery and are all set at grade one level. It follows the new ABRSM theory syllabus from 2020 – but works great for students following any study path or exam board in any country!
- Worksheet 1 Pitch: treble and bass clef notes on the lines and spaces of the stave; rules for correctly writing notes on the stave
- Worksheet 2 Rhythm: time values (terminology is not used, so it is perfect for those who use ‘crotchet’ as well as those who use ‘quarter note’) of quarter, half, 1, 2, 3, and 4 beat notes, plus dotted notes, and ties.
- Worksheet 3 Rhythm: time signatures of 2, 3, and 4 beats in a bar (2/4, 3/4, 4/4) including missing bar-lines, and adding the time signature to an extract of music.
- Worksheet 4 Rhythm: groupings of notes in the above time signatures
- Worksheet 5 Pitch: accidentals (sharps, flats and natural signs), plus tones and semitones (it also includes the names of the white keys on a piano).
- Worksheet 6 Keys and Key Signatures: C, G, D and F major
- Worksheet 7 Scales: degrees of the scale (numbers only), major scales of the above keys
- Worksheet 8 Intervals: above the tonic by number only
- Worksheet 9 Tonic Triads: in the keys of C, G, D and F major in both treble and bass clefs
- Worksheet 10 Terms and Signs: dynamic signs, tempo terms, instruction terms and musical signs such as the pause, staccato signs etc.
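As a side note (this sketch is not part of the product), the major scales covered in Worksheets 6 and 7 all follow the same tone–semitone pattern — tone, tone, semitone, tone, tone, tone, semitone — which a few lines of Python can generate:

```python
# Illustrative sketch: building the major scales covered by the worksheets
# (C, G, D and F major) from the tone/semitone pattern T-T-S-T-T-T-S.
SHARP_NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
FLAT_NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # semitone counts: tone = 2, semitone = 1

def major_scale(tonic, use_flats=False):
    """Return the one-octave major scale starting on the given tonic."""
    notes = FLAT_NOTES if use_flats else SHARP_NOTES
    index = notes.index(tonic)
    scale = [notes[index]]
    for step in MAJOR_STEPS:
        index = (index + step) % 12
        scale.append(notes[index])
    return scale

print(major_scale("G"))                  # ['G', 'A', 'B', 'C', 'D', 'E', 'F#', 'G']
print(major_scale("F", use_flats=True))  # ['F', 'G', 'A', 'Bb', 'C', 'D', 'E', 'F']
```

Running it for G major gives the one-sharp scale covered in Worksheet 6, and F major gives the one-flat scale.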
Rule out the suspects one by one
Discover clues and hidden evidence by searching the mysterious school. What is the Blue Raven and who is the criminal mastermind behind it all?
Bonus games for teachers
The Mystery of the Blue Raven ‘Studio’ version has two bonus games perfect for music lessons. ‘Spooky Sid’ is set in a school of magic where students collect spells to top the worldwide leaderboard! The question cards are customisable by the teacher and include draggable notes, sharps and flats, and changeable clef signs and time signatures.
‘Who Stole My Homework?’ is an interactive mystery where detectives need to use their music theory knowledge to crack the case! | <urn:uuid:1a814f84-a991-46f5-ac8c-29922e076537> | CC-MAIN-2022-33 | https://learnatune.co.uk/mystery-of-the-blue-raven/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571472.69/warc/CC-MAIN-20220811133823-20220811163823-00014.warc.gz | en | 0.895171 | 852 | 2.734375 | 3 |
Recently I’ve been pondering what makes something “news.” It seems simple, but when I asked people for their own definitions almost nobody agreed. Does it have to be recent? Does it have to be life changing? Does it have to be strictly facts? News is very subjective in its definition.
It’s also very subjective in its impact. For example, finding out you’re pregnant is big news to the people involved, but not news to 99.99% of the world. News can be personal or global.
I began noodling on all the different responses people gave me, and how to tell if something is really “news” or just “information.” What I came up with are different attributes of news, and a bit of a range within each attribute. The idea is anything you find to be “news” should have some combination of these, and different news may have different attributes.
This is still a rough draft, but I’d love any feedback. Does something in here describe just about anything you would consider news? Are there any attributes missing? Are the ranges clear?
And I know this topic has been cracked in other ways, by other people and institutions, but I’m doing this as my own exercise in deconstructing and learning about the topic. So I heartily welcome your personal thoughts, but am not interested in any links/references to other material.
I’m also totally uninterested in a discussion of the different news outlets you use (or hate) by name. This is about the concept of news itself.
Disclaimers out of the way, here is the list I came up with:
- World view – It changes how I view the world on a global/national level.
- Overall life – Changes how I live my life.
- Short term plans – I’ll act on the information sometime soon.
- Immediate – I need to act on this right away!
- Breaking – Critical and worth interrupting me to learn about.
- General – I could go a short time without learning.
- Of interest – I could learn about it anytime.
- Completely fact based – No perspective or commentary.
- Some perspective – Perspective or context that helps enhance the news item.
- Opinion and viewpoint – Analysis and summary from respected individuals.
- Peer reviewed journals – Factual and objectively vetted.
- Traditional media – Formally trained journalist in TV/newspapers/etc., either local or national/global.
- Independent media – Self-trained reporters/bloggers.
- Social networks – Hearing it from people you are friends with or connected with in life or online.
- Self-discovery – Learning about it yourself.
- Just happened – Occurred within minutes.
- Recent – Occurred within hours.
- New to me – Occurred a while ago, but I only just found out.
- Global – Happened somewhere else in the world.
- National – Happened within my country.
- Community – Happened within my city/tribe/group.
- Personal – Happened just to me and possibly immediate company.
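One way to make the attributes concrete is to encode them as data. The Python sketch below does that; note that the six group names (impact, urgency, objectivity, source, recency, scope) are labels I've assigned for illustration, not fixed terminology:

```python
# Hypothetical encoding of the news attributes above as data.
# The six group names are illustrative labels, not fixed terminology.
NEWS_ATTRIBUTES = {
    "impact": ["world view", "overall life", "short term plans", "immediate"],
    "urgency": ["breaking", "general", "of interest"],
    "objectivity": ["completely fact based", "some perspective",
                    "opinion and viewpoint"],
    "source": ["peer reviewed journals", "traditional media",
               "independent media", "social networks", "self-discovery"],
    "recency": ["just happened", "recent", "new to me"],
    "scope": ["global", "national", "community", "personal"],
}

def is_valid_profile(profile):
    """Check that every attribute in a news item's profile is a known
    attribute with a value drawn from its range."""
    return all(
        attribute in NEWS_ATTRIBUTES and value in NEWS_ATTRIBUTES[attribute]
        for attribute, value in profile.items()
    )

# The pregnancy example from earlier: big personal news, tiny global news.
pregnancy_news = {"impact": "overall life", "scope": "personal",
                  "recency": "just happened"}
print(is_valid_profile(pregnancy_news))  # True
```

Describing any news item as a combination of one value per attribute would make it easy to compare how two people's definitions of "news" differ.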
What do you think? | <urn:uuid:acb14f53-98ef-4969-9a50-e0f0125b8ee1> | CC-MAIN-2021-17 | http://improvmedia.com/tag/definitions/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038062492.5/warc/CC-MAIN-20210411115126-20210411145126-00346.warc.gz | en | 0.93353 | 674 | 2.625 | 3 |
Taxonomy and Botany of Sky Pencil Holly
'Sky Pencil' holly has one of those cultivar names that succeeds in being descriptive while also "selling" successfully in a market that favors flashy names. Plant taxonomy lists the genus and species as Ilex crenata.
Sky Pencil holly is grouped botanically with the broadleaf evergreen shrubs. A related shrub is Ilex crenata 'Hetzii'. Both are types of Japanese holly, which is valued for its small, tightly-packed leaves.
Many a casual observer has made the mistake of thinking these shrubs (especially 'Hetzii') to be boxwoods, which have similar leaves. In fact, a common name sometimes used for Ilex crenata is "box-leaved holly." Like boxwood bushes, this plant has leaves with fairly smooth edges (not prickly edges, as many types of hollies have).
Shrub Characteristics and the Male Pollinator Question
Sky Pencil holly is a slow-growing, columnar shrub that reaches 4-10 feet in height, with a width just a small fraction of that. When one of these holly shrubs reaches 6 feet tall, its width may be only 14 inches at the widest point. The "column" is narrowest at the base, slowly tapering out the higher it goes.
Holly shrubs are dioecious; that is, there are separate male and female plants. Sky Pencil is strictly a female cultivar (she does not come with a corresponding "mate" of the same cultivar name). Yet a male plant is needed to pollinate the female if you want berries.
So how does one solve this dilemma?
But the matter is not as hopeless as it sounds, at first. The presence of any of a number of different types of male holly plants will allow you to achieve pollination. This gardener's own Sky Pencil apparently gets pollinated annually by a 'Blue Prince' holly that is way over on the other side of his landscape.
If you wish to stick with Japanese hollies, you can use Ilex crenata 'Beehive' (a male cultivar) as a pollinator. 'Beehive' is similar to 'Hetzii' in its growth habit, only it stays shorter (2-3 feet tall), being something of a dwarf.
The small, greenish-white flowers of Sky Pencil are unimpressive, but when pollination does occur, black berries will be produced (the same color as on Ilex crenata 'Hetzii' and the aptly named "inkberry"). While not showy, the black berries do offer something of a novelty, just because of their unusual color. In addition, the wild birds eat the berries.
Planting Zones, Sun and Soil Requirements
The parent species, Ilex crenata is native to eastern Asia. Sky Pencil holly is often listed for planting zones 6-8, but some gardeners report success growing the plant in zone 5 in a microclimate (a sheltered, relatively warm location, such as on the south side of a house).
Many gardeners in the North find that the plant performs best in full sun. This narrow, columnar shrub is listed as a bush that tolerates shade. While the plant will probably survive if located in shade, its growth may be slower and its branching not quite as dense. If, however, you live at the southern end (zone 8) of the plant's growing range, you may find that the bush profits from being located in a spot that offers partial shade.
But, in the North, you would generally want to avoid growing it on the East side or North side of a house (where there will be more shade than there will be on the South side or West side).
It will grow best in a well-drained soil. You can promote good drainage in the ground that you are planting in by spading in compost. The soil should be allowed neither to become too wet nor too dry. This shrub prefers a soil pH that is acidic.
Uses in Landscaping
There are two different ways to build a "living privacy fence," that is, a barrier that uses plant material:
- Traditional hedges composed of one type of plant, sheared to form a uniform wall.
- Looser borders, featuring a number of different plants in layers.
Sky Pencil hollies could work well as a component in either. Alternatively, where the goal along a border is simply to provide definition, rather than privacy, a row of these narrow bushes could form an attractive "colonnade" of sorts.
The shrub has an interesting enough shape to stand alone and serve as a specimen plant, too.
Many homeowners also find these shrubs useful in foundation plantings. Columnar shrubs are often placed at the corners of such planting beds. But a pair of Sky Pencil hollies would also be a logical choice flanking a house entry (placed on either side of the door). One could use them in the same fashion at the entrance to a driveway. It is possible to grow them in containers, as well. All of these observations suggest a comparison with dwarf Alberta spruce trees, which are used in similar ways. But dwarf Alberta spruce lacks the unusual, columnar shape sported by Sky Pencil holly shrubs.
Care: How to Prune
Fortunately, this columnar shrub retains its unique shape without pruning. If you do choose to prune it, the bush responds well to pruning. For example, is yours growing in a spot where you do not wish it to grow too high? If so, you can periodically trim off excess growth at the top. The plant will eventually generate new growth where the trimming took place, which will prompt you to prune it again (and so on). Likewise, you may choose to let your bush get as wide as it wants to, but one could also control the size of a Sky Pencil holly through pruning in this dimension if one wished to.
If you feel like being fussier with your pruning, you could pinch the tips off as many branches as you can. Pinching encourages the shrub to become bushier. Winter is a good time to prune this holly, although it is not the only possible time to prune it.
If you wish to grow the shrub in zone 5, the best advice would be to situate it in an area sheltered from winds and mulch it in an attempt to help it through the winter. In summer, mulch will help retain moisture in the soil, which is also important for this shrub.
Another item on your winter-care checklist should be to wrap cords or twine (you can use bungee cords) around the shrub in a few places so as to pull the branches into the center. This will help you avoid the damage caused by having heavy snow or ice build up on the branches.
When landscaping for small spaces, a narrow shrub that injects vertical interest into your design can be a real blessing. There are not many shrubs that fit this description. Sky Pencil holly is one of these rare shrubs. Its columnar plant form is also sometimes called "fastigiate."
This shrub is generally considered to be a slow grower. In fact, for many gardeners, it grows so slowly during the first few years that you may start to question whether you might be stuck with a shrub too small for your needs. But the growth rate will pick up once the shrub is established. This could be just the right plant for you if you need a bush that:
- Is tall (but not too tall).
- Is skinny (so that it will fit into a tight spot).
- Does not need to be pruned often.
Alternative Columnar Shrubs
Irish juniper (Juniperus communis 'Stricta') is a needled evergreen shrub that grows to similar dimensions and keeps a columnar form. It is a better choice for gardeners who live where it gets very cold, as it can survive a winter in zone 3.
There is a barberry shrub that has a columnar form, also. It is called Berberis thunbergii 'Helmond Pillar' and averages 4-5 feet tall by just 1-2 feet wide. Like Sky Pencil holly, this choice is a berry-producer (and it bears red berries, which are showier than black). But unlike Ilex crenata, it is deciduous. Grow it in zones 4-8.
Another "living column" to consider comes from the yew bushes. But Taxus baccata 'Fastigiata' may be too big at 15-30 feet high by 4-8 feet wide to comfortably occupy that tight spot that you are trying to fill in a small space. It can be grown in zones 5-8. | <urn:uuid:7d62e6a2-c103-46fb-903e-3bb429abe1fd> | CC-MAIN-2017-43 | https://www.thespruce.com/sky-pencil-holly-2132077 | s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822513.17/warc/CC-MAIN-20171017215800-20171017235800-00571.warc.gz | en | 0.960453 | 1,884 | 2.75 | 3 |
Levon Whyte, ERS, for Zondits
Construction projects cost a lot of money to develop. The last thing any construction firm wants to do is budget additional costs for developing a building energy model. Energy modeling, however, may prove to be more cost-effective than one might initially assume.
Energy models simulate how much energy a designed building will consume, taking into consideration weather, occupancy, building envelope, and proposed mechanical systems. The benefit of investing in an energy model is that designers can use these models to analyze which combination of building envelope, building layout, and HVAC systems will have the lowest operating cost. In many cases, the energy cost savings derived from implementing an optimal building design (as determined from energy modeling) will outweigh the cost of developing the model. If we were to evaluate the investment in an energy model using simple payback, we would find that it pays for itself in a few months.
Anica Landreneau, Director of Sustainability at the architectural and engineering firm HOK, provided evidence for this in her presentation at the Better Buildings Summit held May 9‒11, 2016, in Washington, DC. In an internal study, HOK tracked modeling costs and predicted energy savings for a number of their projects over a few years. What they discovered is that the typical payback period (the cost of developing the model divided by the modeled monthly energy cost savings) for a building energy model is between 1 and 7 months. For example, for one of the projects they worked on, the DC Consolidated Forensic Laboratory, the modeling fees ran about $60,000. The optimal building design, however, was predicted to save $537,855 in energy costs annually. The payback period in this case was a mere 1.3 months.
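The arithmetic behind that 1.3-month figure is easy to verify. A minimal Python sketch, using the Forensic Laboratory numbers quoted above:

```python
# Simple-payback check for an energy model, using the DC Consolidated
# Forensic Laboratory figures cited in the text.
modeling_fee = 60_000        # one-time cost of developing the model ($)
annual_savings = 537_855     # predicted annual energy cost savings ($)

monthly_savings = annual_savings / 12
payback_months = modeling_fee / monthly_savings

print(f"Payback period: {payback_months:.1f} months")  # Payback period: 1.3 months
```

The same two-line calculation applies to any project: divide the modeling fee by the predicted monthly energy cost savings.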
Furthermore, Ms. Landreneau highlighted that modeling fees are not an exorbitant cost compared to gross project fees. Using data from HOK projects she determined that modeling fees are generally between 0.5% and 3.3% of gross costs. You can find her presentation on the Department of Energy’s website.
So, the next time you are involved building design for new construction or renovation you may want to strongly consider investing in an energy model to help determine the most economical design solution for your project. The investment will be well worth it. | <urn:uuid:762d0f15-a7e0-444f-b155-b2fdea8b16f5> | CC-MAIN-2023-50 | https://www.zondits.com/energy-models-economical-investment/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100448.65/warc/CC-MAIN-20231202172159-20231202202159-00415.warc.gz | en | 0.959344 | 476 | 2.875 | 3 |
Lobster Eaters Beware
The US Food and Drug Administration (FDA) has issued a warning to people who eat American lobster:
Don't eat the liver!
The liver, also called tomalley, is the green substance found in the middle of a lobster. Investigators have found that the tomalley contains high concentrations of the neurotoxin that causes Paralytic Shellfish Poisoning (PSP).
PSP is caused by a dinoflagellate, a small marine plankton that occurs during a "red tide." The toxin concentrates in shellfish, and people can be poisoned when they eat the contaminated food. Symptoms of PSP can include numbness and tingling of the face, arms and legs, headache, dizziness, nausea and movement problems. In severe cases, a person will become paralyzed and unable to move. People may even stop breathing.
Cooking will not eliminate the toxin in the tomalley, but the FDA says that the meat of the lobster is still safe to eat.
Copyright © 1996-2008, Eric H. Chudler, University of Washington | <urn:uuid:eb8e5ae4-e19a-4920-bc01-17bb15e6b372> | CC-MAIN-2014-10 | http://faculty.washington.edu/chudler/lobster.html | s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999664120/warc/CC-MAIN-20140305060744-00098-ip-10-183-142-35.ec2.internal.warc.gz | en | 0.942158 | 237 | 2.78125 | 3 |
April 11, 2016 – A research team from Texas A&M University has discovered an enzyme in a common microalga that can assist in the development of biofuel.
By Andrew Macklin
The common green microalga, known as Botryococcus braunii can be found across the globe, except in sea water. The enzyme found within it can be used to make hydrocarbons for high-grade biofuel.
According to the article posted on Texas A&M Today, AgriLife Research biochemist Dr. Tim Devarenne and his team made the discovery as part of a four-year, two million dollar study made possible by the National Science Foundation.