My facebook feed is filled up with a deluge of celebrities and friends taking the challenge to dump a bucket of ice water over their head and then challenge other friends to also participate. It’s all an attempt to raise awareness and funds for ALS (Amyotrophic lateral sclerosis, or Lou Gehrig’s disease). Unlike previous ice bucket challenges that have gone around the web, this one seems to be doing some good. While the challenge is to dump the ice or write a check– well, people are doing both. Happily writing the check after enduring a few seconds of bone-chilling cold.
It’s hard to be a cynic when you read articles like the one recently published on Forbes or from an ALS family. Or the cold hard numbers of dollars being raised. (And this is assuming that we are only counting the money going to ALS associations and not individuals, families, charity walks, etc.)
In our house we three were “challenged” to the ice bucket challenge by some of our church/family camp friends. There are two families near and dear to us dealing with ALS on a daily basis. One is a 33 year old man who is now in a wheelchair on a vent. Five years ago he was an athlete. Tan, tall, handsome with his beautiful blond wife and a new baby. His wife is amazing– taking care of their daughter and her husband and fighting for the best care/treatment/hope for him. He is amazing– fighting for every scrap of his life while still working and taking care (albeit in a new way) of his family. The other is a woman who is a half generation older than us– she and her husband were the “cool adults” (is there such a thing?) when I was a teenager visiting their church youth group. She is generous and kind and prone to laughing in the way that makes everyone else want to be laughing, too. She, too, is in a wheelchair where only her eyes are still alert and responsive– her beautiful voice and smile are gone. Her large family has closed ranks around her with three generations taking care of the woman that took care of them.
We cheerfully invoked their names before we took the plunge. And, honestly, at the time I hoped that it was appropriate. Somewhere I worried, “Is this helpful? Hurtful? Pissing them off?”
Our friend Lindsay, the amazing wife, put my fears to rest with her latest fb update on their family and her husband.
“And another word about the ice bucket challenge…the funds raised is astounding, but the awareness it is bringing is on a whole new level. This might be my lack of sleep talking, but get ready for the wrath of Lindsay if I see any negative comments about it. I hope it never ends and all of your news feeds are completely filled with ice buckets and the letters A-L-S!”
So I’m irritated (with permission!) with the backlash against the #alsicebucketchallenge. It’s come from dear friends, cynical college students I know, and acquaintances. Some take the superior sounding stance of “All that waste of good, clean water!” (An argument that falls apart as long as they are using said water for bathing, car washing, lawn care, etc.) Others have taken the “I don’t get how this helps” whining stance. (The ice water doesn’t help cure ALS. But people talking about ALS and moving it to the front of peoples’ awareness of it might lead to more research funding. More compassion for the people with ALS and their families.) And then there are those who take the stance that is probably the most truthful, “I’m sick of all these ice bucket videos filling up my feed!” (I’m sorry, too. Gosh. It must be terrible to have your Candy Crush invitations cluttered with a devastating condition that has no cure.)
Internet comments drive me crazy. I know better than to read the comments connected to our local newspaper, for example– because it’s like opening a closetful of rats into a room where the floor is strewn with birthday cake. Sometimes I can’t stop myself though. When a friend is involved in an election or a big decision has been made by the school board. Still. I should know better.
I don’t know how to make the naysayers less full of naysay. I wish there was an app that promised them “No ALSicebuckets on your newsfeed!” for a lovely sized donation. Or maybe a compassion stick to smite them upside the head.
Meanwhile– in our house– I’m grateful for the kick in the pants to give a little. And to have a way to spur other people to give a little, too. Do I think a hundred dollars makes a difference? Probably not last week.
But this week– when our money is added to those that challenged us and those that we challenged and so on and so on– yes. I do. And if it fills up Lindsay’s feed with a bunch of us doing something that makes them laugh for a minute and feel some modicum of us thinking about them– not in hushed, pitying ways– but with love and hope and “Suck it, ALS!” attitudes– well. That’s something.
Don’t put this trend on ice yet. There’s more money out there. Maybe when the ice melts so will some of the cynics?
By Albert Wong and Valerie Belair-Gagnon, Information Society Project at Yale Law School
In a recent article in the Columbia Journalism Review, we reported that major US newspapers exhibited a net pro-surveillance bias in their “post-Edward Snowden” coverage of the NSA. Our results ran counter to the general perception that major media outlets lean “traditionally liberal” on social issues. Given our findings, we decided to extend our analysis to see if the same bias was present in “traditionally conservative” and international newspapers.
Using the same methods described in our previous study, we examined total press coverage in the Washington Times, one of the top “traditionally conservative” newspapers in the US. We found that the Washington Times used pro-surveillance terms such as security or counterterrorism 45.5% more frequently than anti-surveillance terms like liberty or rights. This is comparable to USA Today‘s 36% bias and quantitatively greater than The New York Times‘ 14.1% or the Washington Post‘s 11.1%. The Washington Times, a “traditionally conservative” newspaper, had the same, if not stronger, pro-surveillance bias in its coverage as neutral/”traditionally liberal”-leaning newspapers.
In contrast, The Guardian, the major UK newspaper where Glenn Greenwald has reported most of Snowden’s disclosures, did not exhibit such a bias. Unlike any of the US newspapers we examined, The Guardian actually used anti-surveillance terms slightly (3.2%) more frequently than pro-surveillance terms. Despite the UK government’s pro-surveillance position (similar to and perhaps even more uncompromising than that of the US government), the Guardian‘s coverage has remained neutral overall. (Neutral as far as keyword frequency analysis goes, anyway; the use of other methods, such as qualitative analysis of article tone, may also be helpful in building a comprehensive picture.)
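The keyword-frequency comparison the study describes can be sketched in a few lines of Python. This is only an illustration: the term lists below are hypothetical stand-ins, not the study’s actual keyword sets, and the real analysis ran over full newspaper corpora rather than a toy sentence.

```python
import re
from collections import Counter

# Hypothetical term lists; the study's actual keyword sets are not reproduced here.
PRO_TERMS = {"security", "counterterrorism", "safety", "stability"}
ANTI_TERMS = {"liberty", "rights", "privacy", "freedom"}

def surveillance_bias(text: str) -> float:
    """Percentage by which pro-surveillance terms outnumber
    anti-surveillance terms in `text` (negative means anti-leaning)."""
    counts = Counter(re.findall(r"[a-z]+", text.lower()))
    pro = sum(counts[t] for t in PRO_TERMS)
    anti = sum(counts[t] for t in ANTI_TERMS)
    if anti == 0:
        raise ValueError("no anti-surveillance terms found in text")
    return 100.0 * (pro - anti) / anti

sample = ("The debate pits security and counterterrorism "
          "against liberty and privacy; security concerns dominate.")
print(round(surveillance_bias(sample), 1))  # 3 pro vs. 2 anti occurrences: 50.0
```

Under this definition, a figure like the Washington Times’ 45.5% would mean its pro-surveillance terms appeared 45.5% more often than its anti-surveillance terms.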
Our extended results provide additional context for our earlier report and demonstrate that our analysis is “capturing a meaningful divide.”
On a further note, as several commenters suggested in response to our original report, the US media’s pro-surveillance bias may be a manifestation of a broader “pro-state” bias. This theory may be correct, but it would be difficult to confirm conclusively. On many, even most, issues, the US government does not speak with one voice. Whose position should be taken as the “state” position? The opinion of the President? The Speaker of the House? The Chief Justice? Administration allies in Congress? In the context of the Affordable Care Act, is there no “pro-state” position at all, since the President, the Speaker, and the Chief Justice each have different, largely irreconcilable views?
Here’s yet another reason to go for generic drugs when you can: drug makers keep raising prices on brand name products. If you group generics and brand names together, drug prices rose by 3.4% in 2009, according to an industry report. However, if you look at just brand name drugs as the AARP did in a new report, the average price hike was 8.3%. An earlier AARP report from May points out that if you look at specialty drugs “widely used by people in Medicare” then the hike jumps to 9.2%.
The closer you get to the top of the popularity chart for brand name drugs, the worse it gets: the most popular brands jumped 41.5%.
The AARP will release the new study later today.
Or: Space and the Universe are a Palimpsest
For those who have followed archaeological progress over several decades (and reconstructed it from the very beginnings of “modern” archaeology, starting probably with people like Schliemann trying to find Troy etc.) it is quite obvious that not only have many things been unearthed in recent years that former generations would have never believed to have existed, but it seems clear that what has so far been found is but a fraction of what there is to be found at some future point; and then some, because probably most of the stuff will never be found, will go unrecognised or be inadvertently destroyed, e.g. while excavating for a new underground rail system etc. Read more…
There is a great debate in the history of economics whether there can be under-consumption in any real historic moment of the state of an economy and if so, whether it has detrimental effects and if so, what is to be done about it. And modern economists seem to have found a magic wand with which to unfailingly repair any such damage and smoothen the otherwise rough ride of economic cycles. We’ll see if they have and what it might be worth, whether it’s a wand or a cane.
There is a widespread belief that if governments inject a certain amount of money into “an” or “their” economy, it will miraculously multiply and bear fruit beyond what was invested. This is one of the mainstays of Keynesian economics in that it justifies state subsidies, public works, in short just about any intervention by a state in the realm of private enterprise on the expenditure side. Read more…
In a recent post Woodford warns of deflation threat as CPI drops to 3% CrisisMaven found another instance of the widespread belief that not only sinking prices (misspelt “deflation”) are harmful as they cause buyers “to strike” but that the housing market has “deflated”. Read more…
If buyers don’t buy during a period of deflation why would sellers sell during periods of inflation? Read more…
If a robber robs a bank and runs away with the money, never gets caught or if he gets caught has spent the money before that, has that caused inflation?
No – the bank now has less money and thus its money holdings have decreased exactly as much as the robber’s holdings increased. But what if the bank gets “recapitalised”? Read more…
Essential oils from plants have been found to provide relief and healing for a wide variety of human conditions.
What is essential oil?
“The term ‘essential oil’ is a contraction of the original ‘quintessential oil.’ This stems from the Aristotelian idea that matter is composed of four elements, namely, fire, air, earth, and water.
The fifth element, or quintessence, was then considered to be spirit or life force. Distillation and evaporation were thought to be processes of removing the spirit from the plant and this is also reflected in our language since the term “spirits” is used to describe distilled alcoholic beverages such as brandy, whiskey, and eau de vie.
The last of these again shows reference to the concept of removing the life force from the plant. Nowadays, of course, we know that, far from being spirit, essential oils are physical in nature and composed of complex mixtures of chemicals.” (2)
Through distillation, the essence of the plant is extracted; this includes the plant’s smell, taste, and phytochemicals—like boiling something down to its most fundamental components and structure. (3)
Plants produce these oils to protect themselves, attract insect pollinators, and ward off predators. Because of the nature of the concentrated extract, little of it is needed to reap its rewards. Therapeutic-grade essential oils are high-quality distillations and recommended when used for a specific health goal rather than only for their fragrances.
One use for select essential oils is the promotion of eye health.
Before we say anything more, don’t EVER put an essential oil directly in or on the eye. If contact occurs, place a small drop of carrier oil (olive or coconut) in the corner of the eye and blink. Don’t use water, which will spread the oil around the eye. The essential oil will join with the carrier to lift it out of the eye.
There are three effective methods for applying essential oils to treat the eyes; the first time, attempt treatment around one eye to check for sensitivity. You may mix the essential oil with a carrier (jojoba, olive, avocado, or coconut) to dilute:
Rub 1 drop of the oil between your hands and apply sparingly above the eyebrows and at the top of the cheekbones. The essence will be released and penetrate the eyes.
Apply 2-3 drops of essential oil to a warm cotton cloth and place over closed eyes for 10-15 minutes twice a day.
Rub 1 drop of oil between your hands and hold them cupped over—not on—open eyes for 3 minutes, then over the nose and inhale 6-8 times each day.
Essential Oils for Eye Health

Frankincense – a potent anti-inflammatory and antioxidant, this herb has implications for reversing cell degradation. (4) Because essential oils are plants in their most basic form, they permeate cells readily.
Rosemary – reduces oxidative stress in eye cells, preventing degradation. (5)

Blurred Vision

Helichrysum – antioxidant and anti-inflammatory supports nerve function, improving vision.
Lavender – reduces dry eyes, improves circulation, moisturizes, and assists in cell re-growth. (6)
Peppermint – promotes oxygenation of eye cells and is vitamin-rich, including A and C—known for promoting eye health.

General Eye Health

Clary Sage – known in the Middle Ages as “clear eyes” for its ability to support vision; reduces eye strain and fatigue with antispasmodic and anti-inflammatory phytochemicals.
Cypress – strengthens capillaries and improves blood circulation.
Lemongrass – maintains and improves tissue health; dilates blood vessels, promoting oxygenation and blood flow.
Sandalwood – promotes circulation and healing, hydrating surrounding tissues.
It’s a slippery slope and the sheriff of Humboldt County, NV may have wandered over it. [LVRJ] Two travelers on I-80 saw their personal cash confiscated because they were “suspected” of drug trafficking. These civil forfeitures are subject to the provisions of 18 U.S. Code 981, which is pretty lenient on the topic, the person having been perceived as engaging in a “specified unlawful activity.” There are also the provisions of NRS 179.1164-5 to consider. In Nevada, the forfeiture is supposed to be attendant with an arrest, a search with a warrant, or an inspection pursuant to a warrant — the latter a rather large loophole. In short:
“Nevada forfeiture law provides paltry protection for property owners from wrongful forfeitures. The government may seize your property and keep it upon a showing of clear and convincing evidence, a higher standard than many states but still lower than the criminal standard of beyond a reasonable doubt. But the burden falls on you to prove that you are an innocent owner by showing that the act giving rise to the forfeiture was done without your knowledge, consent or willful blindness.” [IJ]
So, what standards are applied by officers in Humboldt County?
“…officers are trained to recognize evasiveness during questioning, including stories about travel routes that don’t add up or a lack of luggage on cross-country trips.” [LVRJ]
Here’s where the specifics become part of the discussion — (1) What constitutes ‘evasion’ during questioning? (2) What elements of a ‘travel route’ make the journey suspicious? (3) How much luggage is presumed reasonable before a ‘lack’ is noticeable?
Let’s look at the first question: “Where are you headed?” Can I respond, “…to California?” Or, must I say that I’m headed for beautiful downtown Fresno to visit my ailing grandmother on Elm Street? Where have I been? Can I say, “Colorado?” Or, must I inform the officer that I’ve been helping my disabled brother-in-law move from Grand Junction to Aurora? Just how much personal information must I divulge to a complete stranger in order to assuage his suspicions that I am not a drug trafficker or a money launderer?
What must I say in order to make my travel route ‘reasonable?’ “Well officer, I tried fishing at Wild Horse, but somebody told me that Knott Creek was better, but I didn’t have any joy there so I thought I’d try the East Carson…..” What would happen if I said, “I dunno’ I just got into my truck and started looking for places that looked interesting and I might end up over at the Bodie, CA park to see the ghost town…” Does my itinerary have to make sense to anyone other than myself?
And, how much luggage must I pack before I am plausible? I do recall, and always with a smile, a former colleague who — with considerable assistance from his wife, I always suspected — could pack everything he needed for a weekend conference in one small attache bag. I also remember, with some nostalgia for the days when gasoline was $1.50 a gallon, when I could take off for a weekend with everything I needed for a bit of sightseeing and photography scrunched into a single duffel bag. Would this be enough to convince The Officer I wasn’t drug trafficking or doing a bit of ‘asset hiding?’
Without some very clear guidelines, Humboldt County could find itself categorized with such infamous places as Tenaha, TX,
“Police in an East Texas city will no longer enrich their coffers by seizing assets from innocent Black and Latino drivers and threatening them with baseless criminal charges, under a settlement reached today with the American Civil Liberties Union.
The ACLU settled a class-action suit, pending court approval, against officials in Tenaha and Shelby County, where it is estimated police seized $3 million between 2006 and 2008 in at least 140 cases. Police officers routinely pulled over motorists in the vicinity of Tenaha without any legal justification, asked if they were carrying cash and, if they were, ordered them to sign over the cash to the city or face charges of money laundering or other serious crimes.” [ACLU]
The infamous Boatright Case is not one with which Humboldt County, NV authorities would want to be associated either. Unfortunately, the Humboldt County cases are not isolated instances; a recent article in Forbes publicizes other incidents in many other states. Police in one singularly repugnant incident confiscated church donations on their way to the bank and didn’t release the funds until a former Reagan Administration appointee to the Justice Department’s Asset Forfeiture Office took the case pro bono.
When remote, rural Humboldt County hits ABC NewsBreak, it’s time to give more serious consideration to the nature and implementation of drug enforcement policy, rather than to what gives the appearance of hopping on the Cash for Freedom bandwagon of law enforcement officers in too many jurisdictions.
Bones of Contention
Let’s begin with the proposition that no one wants to facilitate drug trafficking, and no one would seriously advocate that we should make it easy for people to hide assets from legitimate scrutiny (and taxation), nor do we want to make it easy for international criminals to transfer funds with alacrity. That said:
(1) How can we balance the need for legitimate law enforcement activity with the personal privacy and security in our “persons, houses, papers, and effects,” as guaranteed by our 4th Amendment?
(2) How do we adhere to the precept that we are innocent until proven guilty if we allow the civil forfeiture without a standard at least as robust as that required to justify an arrest?
The sorriest part of this state of affairs is that Humboldt County, Nevada, although newsworthy at the moment, is not all that far from the more egregious behavior of police operations in other states. Instead of attracting a reputation as a bulwark of ‘liberty,’ the county has adopted a culture of convenience, especially when it comes to collecting cash.
Mission and Values
The primary mission of the University of Oregon School Psychology Master’s Program is to prepare our students to become skilled practitioners and leaders in the field of school psychology. Our program is intervention-focused, with an emphasis on prevention and early intervention. We seek to recruit and train students who have the desire to make a substantial impact in the fields of school psychology and education at the state, national, and international levels. We are particularly known for and seek to maintain our strong emphasis on state-of-the-art applied research and development efforts in the field of education. Through these efforts, our faculty, students, and alumni help to improve systems of service in schools, and to improve outcomes for children, youth, and their families. Our scientist-practitioner program values linkages across disciplines and systems, and opportunities for such linkages are built into the program requirements. We value the diversity of backgrounds and characteristics that our students bring to the training program, and we actively seek to maintain and increase this diversity. We also value the empowerment of our students, and the perpetuation of a highly collegial program environment, where we strive for positive and cooperative professional relationships among faculty, among students, and between faculty and students.

Program Philosophy
We are behaviorally influenced in our theoretical and philosophical orientations, meaning we focus on observable relations that require low-level inferences. Within this general framework, our individual theoretical orientations range from behavior analytic to social-interactional theory. From these perspectives we strive to train school psychologists as scientist-practitioners, with a data-oriented problem-solving emphasis. Our program is intervention-focused, training graduate students to conduct and evaluate research and to deliver evidence-based interventions to children and youth in schools and in related settings within a behaviorally-oriented perspective and at a variety of levels. These levels of service delivery and intervention include (a) with individuals, (b) within small groups and classrooms, and (c) across entire schools and systems. Although the program prepares graduates to provide effective evidence-based services to individuals and groups who have a wide variety of needs, problems, or deficits, we particularly value primary prevention and early intervention approaches, which seek to provide universal screening and prevention services to all students in school settings, and to detect and intervene early before problems become severe. This emphasis supports an outcomes-driven model of service delivery, which is focused on health rather than pathology, and is focused on desired outcomes rather than on problems.

MS Program Overview
The UO School Psychology Program offers a Doctor of Philosophy (PhD) degree and a Master of Science (MS) degree in School Psychology. Both our doctoral and master’s programs are approved by the Oregon Teacher Standards and Practices Commission (TSPC), which oversees the educational licensure of school psychologists in Oregon. The doctoral program is accredited by the American Psychological Association (APA) and both our master’s and doctoral programs are approved by the National Association of School Psychologists (NASP).
Although the Oregon University System does not currently provide a mechanism for awarding a specialist (EdS) degree, the University of Oregon master’s program is designed as a specialist-level program of study. The master’s program requires 3 years of full-time study, completion of a minimum of 93 (quarter) credit hours of program coursework, including a sequence of supervised field experiences, practica, and a 1200-hour internship.
Students who complete the master’s program and pass the required licensure tests are eligible for the Oregon TSPC Initial School Psychologist License, and for licensure in most other states. Graduates are also prepared to apply for the Nationally Certified School Psychologist (NCSP) credential, offered through NASP.
Talk of The Industry
April 2015
Big data, predictive analytics, telemedicine, virtual reality, 3D, and other technologies are transforming healthcare delivery, offering innovative avenues to improve medical outcomes and reduce costs.
However, cutting-edge technologies can cut into network performance for other essential applications. In today’s transformative environment, two out of three IT leaders expect demand for bandwidth will increase at their company over the next year.1 As a result, more than half of IT executives say they will need to add capacity to keep pace.2
Tap into the Talk of the Industry to learn about healthcare innovations that can impact your network and how your organization can stay ahead of the curve.
1 InformationWeek 2014 Next-Generation WAN Survey
2 Computerworld Forecast 2014
The sidewalk on the east side of Georgia Avenue in downtown Silver Spring just got a makeover, with new brick pavers and street trees. But will it have enough room for everyone who wants to use it?
Montgomery County’s Department of Housing and Community Affairs (DHCA) managed the $650,000 project, which began this summer and lasted about five months. The agency’s main goal was to level and lower the sidewalk to meet the requirements of the Americans With Disabilities Act. It replaced the existing concrete sidewalk, built in the 1980s, with sturdier and more attractive brick pavers, and created large new bumpouts at some intersections.

The new sidewalk is very attractive and will hopefully encourage visitors and shoppers to stray from the Ellsworth Drive strip and check out the businesses on Georgia. But it also reveals the tension between different users on Silver Spring’s often-cramped sidewalks.

DHCA also removed all of the mature Zelkova trees along Georgia, arguing that the sidewalk reconstruction would disturb the trees and kill them. The new trees are Princeton or Lacebark Elm trees, which will apparently improve the visibility of shops and restaurants from the street. The old sidewalks had trees in tree grates, allowing room that businesses could put out tables and chairs and leave enough sidewalk for people to walk past comfortably. But the new trees now sit in long, wide planter boxes with little gaps in between for street lights or people getting out of parked cars.

This isn’t the only place in downtown Silver Spring with new planters. The county’s Department of Transportation (MCDOT) also installed the same planters along Ellsworth Drive and Fenton Street, except with three-foot-high hedges. Some planters, like one on Fenton Street, extend for most of a city block to discourage jaywalking.

In 2009, when planning on the Georgia Avenue sidewalk project started, county-hired arborist Steve Castrogiovanni recommended doing the same thing with the new trees to “strike a [balance] between the trees’ needs and the needs of pedestrians.” But officials endorsed the bigger planters, saying it would give the trees more soil and help them live longer.
Street trees have a lot of health and environmental benefits. They can provide a feeling of enclosure on a street or sidewalk, calming traffic on busy streets like Georgia Avenue, and making pedestrians feel safer. However, these planter boxes seem to provide the wrong kind of enclosure.

Crowded sidewalks can be a good thing, creating a feeling of excitement and vitality on a city street. But when you push pedestrians and outdoor dining tables into too small a space, it can feel uncomfortable, and people won’t want to stick around and spend money.

That’s why restaurateur Jackie Greenbaum, who owns Jackie’s, Sidebar, and Quarry House Tavern, all on Georgia Avenue, didn’t want trees planted on the narrow sidewalk outside her businesses. “THIS WILL ELIMINATE MUCH OF MY PATIO SEATING!” she wrote in a 2010 email to DHCA. “This is NOT an improvement and is unnecessary, even undesirable.” In the end, DHCA agreed not to plant any there.

Having healthy street trees and vibrant sidewalks aren’t mutually exclusive. DHCA could have still created a bigger soil pit for the trees, giving them room to grow, while putting tree grates or permeable pavers on top, ensuring that there’s still enough sidewalk space.
Wider sidewalks mean ample room for walking, for dining, and for nature. Photo by Jim Malone on Flickr.
And if county officials really wanted planters, they could have at least used a more attractive design, like these low, stone planters in NoMa that provide space for trees and plants while staying out of the way. Or they could have looked at a bioswale that cleans and filters stormwater in addition to looking pretty.

The real issue isn’t the planters, but that the sidewalks on Georgia Avenue aren’t appreciably wider. DHCA’s project was simply to make the sidewalks meet ADA regulations. This sidewalk may not get rebuilt for another 30 years, meaning we’ve missed an opportunity to have a larger conversation about how Georgia Avenue works.

Wider sidewalks mean we wouldn’t have to decide between landscaping, walking space, and outdoor seating. They mean we could have added new features, like benches, or a “shared use trail” for cyclists similar to the Green Trail on Wayne Avenue. Doing this would require taking space for cars, which today constitutes the vast majority of Georgia Avenue, and giving it back to people. While that would probably be bad for drivers passing through, it would ultimately be a good thing for downtown Silver Spring, whose historic main street would become a more attractive, pleasant, and safer place to walk and spend time.
Zach Rosenberg believes the cliché “happy wife, happy life” is a terrible mantra to live by.
Here’s a proposition: let’s find a replacement for the mantra “happy wife, happy life.” I’ve never found that simply making my wife happy inherently made me happy. Oh sure, I enjoy doing things for my wife that make her happy. And in turn, that makes me more or less happy. But if we’re talking about down-to-brass-tacks, real, soul-filling happiness, I need something more out of my marriage.
Every man, the day after he’s married (unless you immediately jet-set off to your honeymoon), hears “happy wife, happy life” primarily from…well, his wife. If you’re lucky, you’ll get to hear it from your mother-in-law as well.
Men got stiffed on this because, well, nothing rhymes with husband. I’ve checked. And saying “Happy Husband and he’ll go to work every day and maybe go to war and raise two children with you and when it’s time, empty the retirement fund and you’ll both go on that really cool looking Alaska cruise” just doesn’t sound as catchy.
The phrase has its roots in logic. When a woman’s “place” was in the home, she was, in essence, in charge of the daily life of the family: cooking, cleaning, laundry, mothering. Mom was the home. But it wasn’t really a happy job—it was a lot of work, and she still had to gussy-up for the husband and his friends or for husband-approved social functions.
More recently, women found a place in the job market. Really cool, successful places. So, maybe it was a swing in values or maybe someone had a gun to someone’s head, but men started caring about their wives. And, well, “happy wife, happy life” sprung up from that. Maybe it was just that if you kept her happy by “letting” her work, she’d still be happy enough to come home and make dinner as well. I don’t know, I’m young and stupid.
Nevertheless, even more recently, it’s changed. The “happy wife, happy life” image that’s unceremoniously dumped on husbands daily is this: keep your wife happy, and you will be happy. Or, “if you do things that specifically make your wife happy, like shut up and surrender, she won’t poison your food, and hopefully you will end up on the business end of some private parts later.”
So, the message is clear for men: keep your eyes peeled, keep your ear to the street, keep your nose to the grindstone, and hope that some of that sweet trickle-down happiness ends up in your mouth. But how does this work out in light of the fact that a record number of men are eschewing the job market and staying home with the children? Do they still have to be consumed with making sure their wife is happy, while the wife has to make sure the rest of the household is in-gear?
That archaic “happy wife, happy life” coexists with the idea that a wife is a husband’s “better half.” That’s a lot of pressure, wives. While we’re busy making you happy, you’re busy being better. Ugh, we’ve already done this “have it all” thing to death, haven’t we?
♦◊♦
Recently, Lesley at xoJane tackled a Weight Watchers commercial in which you get to see the “happy wife, happy life” equation play out. Lesley does a great job of explaining the finer details of how Meg, the “wife-mom” keeps hubby Matt in check, explicitly, for his own good. She even does the “I see you” eye-point that I do to my son when he’s outside playing with the neighbor girl and I see him start to get all rough-housey.
“The commercial ends with Meg saying, cutely, ‘Happy wife, happy life, right?’” Lesley responds in her article, “which sort of cements the idea that this whole joint diet was her idea…”
And the audience is left wondering how Meg is actually “happy” having to manage both her diet AND Matt’s. And how having a child for a husband is soul-fulfilling. I mean, sure, they’re both skinny rakes now, but Matt can’t have a Snickers at work without Skyping his wife to show her that it’s a “fun size” and not a full size.
Psssh… more like “happy better half, happy on my behalf.” Wait, does that work?
Here’s what I’d prefer: “happy spouse, happy house.”
This solution puts the whole household into the mix, even the kids—because believe me, it doesn’t matter if my wife says she’s “happy”, if our four year old son isn’t happy, no one’s happy.
But for you non-child-rearing folks (first of all, god bless you), this “happy spouse, happy house” thing just assumes a two-way street between you and your spouse.
Oh, and for you same-sex partner-types out there, this works for you too. I know that at least a handful of you were saying “well, which one of us is which? We don’t know who’s supposed to assume the lion’s share of happiness!” Problem solved.
Who loves you, baby?
People, please, join me in replacing the old, tired, gendered “happy wife, happy life” trickle-down system with the much more postmodern “happy spouse, happy house.” My wife and I have found great success in making each other happy simultaneously. And it’s a much deeper sense of a “happy life” than me going to bed feeling like I’m a slave to my wife’s happiness and her feeling like she’s got to manage the whole house and raise our kid alone.
My wife and I are happy spouses who care for each other on an equal level. We’re teammates in life; neither of us are a “better half.” And because of that, we parent better and are raising a happy kid too.
Happy spouse, happy house. It’s not just my mantra, it’s our mantra.
Photo: Flickr/ vonderauvisuals | 6,118 | 2,781 | 0.000399 |
warc | 201704 | I’ve got some exciting news to share!! Goodwill’s brand new Crystal Lake store opens its doors TOMORROW (March 27th)! Doors open at 9am. Just wait until you see this store! I was there yesterday, and all I have to say is W-O-W! You’re going to love it!
Stop by and see exactly what I’m talking about. In fact, go through your closet and pull out a couple things that you no longer use or wear, and donate them to Goodwill while you’re there. The golden rule is, if you haven’t used or worn an item in one year or longer … donate it! The revenue from your donation (and the treasures you buy when you stop in and see the new store) helps to fund Goodwill’s mission programs that provide job training, employment, financial education, and many other life enhancing opportunities to people with barriers in your very own community! It’s a win/win! You can make room in your closet by donating, help someone in need live a better life, and then fill that closet with great items you’ll see in our new store! Do you see why I love Goodwill so much?
Hope to see you tomorrow in Crystal Lake!
1016 Central Park Drive in Crystal Lake (just north of Walmart)
For this week’s blog I want to share another success story of my co-worker’s participant, Samantha.
Samantha came to Goodwill seeking help finding a job. Through Goodwill’s Placement Program, Samantha learned resume building, interview skills and job searching skills that enabled her to find employment and succeed in her goals. She learned tips on following up with applications and contacted potential employers in the community.
Samantha is a food service worker at CoCo Key’s Water Resort at the Clocktower Hotel. Samantha is doing well and has also secured a second part-time job at Culver’s. She learned both jobs very quickly and has inquired about a promotion at the Clocktower. She is able to run the concession stand independently and handle the cash and all of her duties accurately.
Samantha is happy and excited to be working at CoCo Keys and Culver’s. She feels good about making extra money and having her own money to do things that she wants to do. She is feeling more independent from her parents.
Before coming to Goodwill, Samantha had been out of work for a year. Now that she is working, Samantha feels more independent and likes having extra money. Having overcome anxiety, depression and other physical limitations that Samantha has been diagnosed with, and thriving in the workforce, Samantha has proven to herself that she is a success!
This week I want to highlight another participant success story. Ashley recently completed her 90-day probation period at her job, which means that she successfully completed Goodwill’s Placement Program! It is always exciting, as an Employment Specialist, to see a participant reach their employment goals and come so far from that first meeting, which is why I like to share their success.
When Ashley was diagnosed with epilepsy, she was told that finding and maintaining competitive employment would be difficult if not impossible. Ashley has seizures and difficulty following instructions, which is why she was told she would need on-the-job coaching in order to succeed at any job she did get. After graduating from high school, Ashley let this obstacle hold her back, but with the help of her DRS (Department of Rehabilitation) Counselor and the Epilepsy Foundation, Ashley joined the Goodwill Placement Program with the goal of finding employment and contributing to the workforce.
“My Placement Specialist taught me hard work, goal setting and achievement, as well as resume and skill building which helped me find my job,” said Ashley. With the skills that she received and her determination Ashley was able to find a job working as a Crew Member at McDonalds. Ashley was able to learn her job and work with her co-workers without a job coach on site, though she did use her support group as needed.
With her job she said she is able to earn money for herself for the first time, go out more with friends and feel less guilty by taking the burden away from her mom by having to pay for everything. Congratulations Ashley!
I am sure I am not alone when I say that I have heard of the Special Olympics, and certainly of the Olympics as a whole; but until today, when I was looking for this week’s topic, I had never heard of the Deaflympics. As a person who works with people with disabilities and volunteers with the Special Olympics each year by plunging myself into cold water in winter, I was surprised but also intrigued.
I have always thought it was important for people to have the opportunity to get involved, but it is easy to take things for granted. While everyone should have the opportunity to participate equally, that is not always the case. Finding out that the Deaflympics has been around for so long was inspiring to me.
The Deaflympics began as a gathering of 148 athletes from nine European nations competing in the Silent Games in Paris, France, in 1924. Held every four years, the Deaflympics are the longest-running multi-sport event other than the Olympics themselves. In order to qualify, athletes must have a hearing loss of 55 dB in their “better ear.” Hearing devices such as aids or implants are not allowed in competition, so that all athletes compete equally. Because Deaflympians cannot hear, officials guide them by alternative methods, such as waving a flag to start a game instead of using a whistle, or a light on the track instead of a starter pistol. It is also common practice for spectators to wave instead of clapping or cheering.
This year, on July 26 in Sofia, Bulgaria, the Deaflympics will be held once again, giving athletes around the world a chance to compete. Joining them this year, for the second time, is Palatine, Illinois resident Jenny Woyahn, ranked number 10 in the world among tennis players and a junior tennis instructor at an athletic club.
Jenny has been hearing impaired since shortly after her birth; she is able to communicate verbally and uses hearing aids to better understand those around her. When she went to the Deaflympics for the first time in 2009, it was also her first time meeting people with hearing impairments from other countries. Along with many other talented athletes, Jenny will compete again this year in hopes of bringing home a medal and proving that anything is possible with skill and determination.
My lesson for today: though we may not always be aware of it, every day people are doing things (whether big or small) to bridge the gap of inequality. What will you do today?
In 2004, a proclamation was signed designating July as Elder Abuse Awareness Month in Illinois, in an effort to “Break the Silence.” On Monday, Governor Quinn went one step further: he signed a new law to better protect adults with disabilities and elderly residents living in Illinois.
The Adult Protective Services Act will create an adult unit within the state Department on Aging that will be responsible for investigating cases of abuse, financial exploitation and neglect of adults with disabilities and elderly individuals. In addition, the law will require caretakers to have special training and establish a team to investigate any suspicious deaths that may occur.
These reforms were proposed following the failure of the state to investigate 53 deaths of adults with disabilities who lived at home, despite receiving calls from a state hotline regarding alleged abuse or neglect.
What is Adult Protective Services?
Adult Protective Services (APS) investigates reports alleging abuse, neglect and exploitation of frail elderly and disabled adults and intervenes to protect vulnerable adults who are at risk.
Services to vulnerable adults include:
• assessment;
• counseling;
• case management;
• emergency assistance with basic needs;
• short-term homemaker services;
• coordination of legal services to obtain emergency orders or guardians/conservators;
• placement into long-term care facilities or assisted living homes; and
• coordination of social, assessment and medical services with other agencies and providers.
More information will be forthcoming regarding the new legislation, but at this time it is good to know that more people are being made aware of the abuse that can be present and what to do about it. Only with knowledge can people “break the silence” and stop it from happening.
Do you need to brush up on your clerical skills? Register now for the Goodwill Goodworks Clerical Skills Program taking place July 15th-August 2nd from 9 am until 12 noon at our Missions Services Center. During these classes you will learn basic Microsoft Word and Excel skills, as well as email and resume creation. You’ll also learn about using social media and interview skills. Call 815.987.6223 to register today!
For this week’s blog post I wanted to share a news article that one of my coworkers sent to me. It features a local foundation called Fish-Abled. It is a not-for-profit organization in Rockford that helps individuals with disabilities participate in and attend social activities such as baseball games, fishing outings, concerts, bowling outings, etc., free of charge.
After reading this article I was encouraged, because I felt that there are a lot of people, not only across the U.S. but locally, who want to make a difference and help. Having a disability shouldn’t cause a person to stop living and having fun, but for some individuals it can complicate matters or limit their activity. To have an organization or group of people willing to ensure that individuals with disabilities can still interact socially and participate is really important.
To read the article visit:
http://m.rrstar.com/rrs/db_/contentdetail.htm?contentguid=FJlR1Tva&full=true#display | 10,045 | 4,611 | 0.000221 |
warc | 201704 | end of the week random links…
July 23, 2010
some random notes:
The POWERful classes are innovative and off the beaten path of standard childbirth classes because they serve a dual purpose. The first is to share information with women about their pregnancy, birth and postpartum so that they can make informed and empowered decisions about their health and the health of their baby. The second purpose is to introduce women to social justice organizing so that they can impact positive change as leaders in their communities.
The classes, which were also offered last year at Power U, will cover topics ranging from birthing options, nutrition and breastfeeding to reducing toxic housing conditions, improving neighborhood schools and negotiating fair rent prices.
“I feel more respected in these classes,” stated one class participant, who is also a teen mom.
–this weekend i am doing the printable pdf for outlaw midwives zine. pulling out my geometry brain…any help in this arena would be much appreciated…
–aza insists on being called: princess mafina or amira mafina. but not aza. definitely not aza.
–midwife pamela on fb linked to this article:
This is especially true when it comes to pregnant drug using women. For nearly two decades popular media claimed that any illegal drugs used by pregnant women would inevitably and significantly damage their babies.
The actual scientific research contradicts this assumption. Carefully constructed, unbiased scientific research has not found that prenatal exposure to any of the illegal drugs causes unique or even inevitable harm. This research is so clear that courts and leading federal agencies have concluded that what most people heard was “essentially a myth.” As the National Institute on Drug Abuse explains, “babies born to mothers who used crack cocaine while pregnant, were at one time written off by many as a lost generation. . . . It was later found that this was a gross exaggeration.”
–some of these notes may develop into blog post. or maybe not.
–i am basically nanowrimo-ing a memoir and then after a couple of weeks seeing if it is worth working on. i had just figured that i didnt have the emotional energy to do it. but i hate having something sitting there undone staring at me. me, unsure if it works or it doesnt. so i am writing my ass off and then when i am done, i can see what the next step would be.
anyways the writing reminded me of living in the woods reading the peace pilgrim. and how reading her little book really did act as a guide for how to live in this world as a free person no matter what.
–oh there are a couple of awesome posts on checking dilation during labor without a vaginal exam. lovely.
–i will write soon about the viva palestina september/october convoy to deliver aid to gaza. but here is the link to it for now…
–while the more that i learn about the placenta, the more amazed i am by it, i am not sure if i could knowingly eat placenta lasagna.
–aza is running around with a can of tuna. habibi is cooking potatoes. it is july in cairo and the heat swims in the air like a prayer. i can drink smoothies all day. mornings are chaos here. | 3,282 | 1,678 | 0.000618 |
warc | 201704 | Quick update. I decided to check out what randomly linked from last post. It turns out one of them is to a parenting blog and that children should be taught strict manners. Clearly, I’m in general opposed to this (of course, children shouldn’t go around terrorizing, but teaching them to blindly follow rules that have no justification is a scary thought as well).
All I have to say is that posting some of my views in the comments was not a good idea. Lots of scary moms started yelling at me that I had no idea what I was talking about. How dare I criticize when I don’t have any children. I find this slightly ironic considering the post. It seems to me quite impolite to dismiss someone’s opinion because they approach it from outside your little group. I actually find that the opinions I value most come from outside the group. They have a fresh perspective I haven’t thought about before.
Also, this is a scary attitude to have: the “you can only have a valid opinion if you have a vested interest” attitude. Essentially, they are like a tobacco company doing research on the health effects of smoking. Of course, when you don’t go outside the group, all your data is going to fit your predetermined conclusion. They seem to think that parents who raise their kids to be polite produce polite kids, and parents who don’t instill some sort of military-like rigid manners produce horrible people. In my experience there seems to be very little correlation.
I wanted to write: a child grows up, and the character that person becomes is more than the sum of its parts…erm, I mean, is more than the sum of one part: parenting. There are: brain chemistry, early psychology developed while at school and with friends, hormone alterations, genetics, mass media. That is just a tiny list, too. How could a parent possibly have that much impact with so much out of their hands? I’m not saying that parenting isn’t important. It is one of the parts, but it shouldn’t be considered the sole factor in how a child turns out as these mothers seem to think.
I just had to get that out there. I couldn’t do it on their site. Is this really that unreasonable? | 2,244 | 1,143 | 0.000906 |
warc | 201704 | Are you looking for tasty jerky recipes to snack on? Look no further, because I’ve rounded up chewy and savory jerky recipes to satisfy your salty cravings!
15 Flavorful Jerky Recipes To Make At Home!
Jerky is another genius way to eat meat as a snack and treat. It can be sweet, but it’s mostly savory. If you’re wondering how jerky can be a tasty treat, the magic starts in the marinade! It’s actually easy to make jerky at home. You can make jerky even without fancy equipment such as a jerky press and dehydrator. You can use an oven, which can actually handle a larger amount of meat at once than most dehydrators. The first time I tried making jerky, it turned out pretty well, despite my initial misgivings. Thanks to this list, I had used the right recipe! I’m glad I did! I am completely sold on these jerky recipes and have already tried three on this list. They are all perfect and beyond what I expected. These jerky recipes are guaranteed to soothe your cravings for chewy and salty treats, and I can’t wait to share them with you!
Store bought jerky is usually loaded with an insane amount of chemicals and straight up nasties, which is a shame because we love us some jerky! If you’re in the same boat, I hope these jerky recipes inspire you to ditch the chemical-laden version and hop on the
au naturel train! These jerky recipes are easy to make and will ignite your taste buds’ passion! Homemade jerky is healthier and better tasting than the kinds you find at the store, so don’t keep your family and friends waiting any longer and let’s make some tasty jerky!
1. Hot and Spicy Homemade Beef Jerky
Have you ever wondered how to make beef jerky? Then thank goodness you came across this list! This hot and spicy jerky recipe is a classic and has just the right flavor to turn this one-time snack into a lifetime obsession.
2. Spicy Teriyaki Beef Jerky
If you like your meaty treat a bit sweeter and spicier, this is the jerky recipe for you! This jerky is marinated in teriyaki sauce, which in turn makes it moist, tender, and not too tough to chew on. While we love ripping apart bits of jerky, our jaws don’t, so this recipe provides a nice little break for them.
3. Salmon Jerky
I love seeing unusual proteins get turned into jerky and salmon is one of them! This recipe turns a classic snack into one packed with omega-3s, protein, and essential amino acids. So, now you can snack guilt-free and to your heart’s content! Woooh!
4. Bak Kwa (Chinese Pork Jerky)
Bak Kwa or dried meat is a Malaysian/Chinese-style pork jerky which is moist and grilled to perfection over a charcoal fire. Bak Kwa is a treat that is usually pretty pricey, so the homemade version is definitely the best way to enjoy this delectable delicacy.
5. Duck or Goose Jerky
Duck or goose meat makes great jerky! Duck meat has a very distinct taste and texture that allows it to easily absorb whatever spices and herbs you choose to sexy up your dish with. When making this recipe, be sure to get the ratio of salt right and marinate the meat for at least 24 hours (but 3 days is recommended). This will ensure your jerky is the best it can be!
6. Spicy Sriracha Tofu Jerky
Yay! A jerky version just for vegans! While most vegan jerky recipes are made with wheat gluten, this recipe switches it up a bit by making it with tofu! The touch of sriracha in this homemade recipe gives this tofu jerky the perfect ratio of sweet, spicy, and sexy all over. And you thought tofu was boring…Psh!
7. Venison Jerky
Want to learn how to make venison jerky? Sweet – let’s get to it then! Deer meat tastes a lot like beef, except leaner. This jerky recipe tastes just like the kind you buy at the store, except healthier, tastier, and without all those weird preservatives. Isn’t home cooking fun?
8. Korean Beef Jerky
If you’ve ever tried Korean BBQ, you know, first hand, how ridiculously delicious it is. The marinade for Korean BBQ is unlike any other marinade I’ve ever tried, so of course, Korean BBQ jerky was a must-make on my list. If your taste buds are looking for something new and exciting, then switch it up a bit and try this Korean BBQ jerky recipe.
9. Candied Bacon Jerky
A bacon jerky recipe packed with sweet, crispy, bacon goodness. This candied bacon jerky recipe yields crispy salty-sweet bacon with a kick of cayenne and is absolutely everything you’re looking for in a snack. In comparison to other jerky recipes, this one is relatively easy to put together.
10. Old-Fashioned Deer Jerky
Make your favorite jerky recipe the old-fashioned way without a meat grinder or jerky press. This authentic jerky recipe is delicious and will remind you of all the reasons you love jerky.
11. Teriyaki Chicken Jerky
Beef is great, but sometimes, poultry is even better. Be warned though because when you make this recipe you are about to taste one of the best chicken recipes ever! This is a sweet, spicy, and chewy treat that will make you forget any other kind of jerky even exists.
12. Cauliflower Jerky
Cauliflower is actually a great meat substitute and can be used in recipes for buffalo chicken, pizza crust, and steak. In this recipe, cauliflower is drenched in a rich tahini sauce and other flavors. This jerky treat is a brilliant way to enjoy veggies and will leave you happy and satisfied.
13. Sweet and Spicy Beef Jerky for Purim
Purim is a Jewish festival held in spring and this is a recipe made specifically for it. This beef jerky uses three kinds of sweeteners – honey, maple syrup, and sugar which make it the best recipe for sweet-toothed meat eaters! That said, if you’re not into sweets, this recipe also packs some heat from cayenne and red pepper flakes. Have a bit and I bet you won’t be able to stop at just one.
14. Eggplant Jerky
This jerky is made with eggplant and acorn squash and tastes so similar to the real thing! Think of bacon and beef but in a wholesome form. If you’re looking for a healthier alternative to meat jerky, this may just be the recipe for you.
15. Kentucky Bourbon Beef Jerky
Excite your taste buds with this jerky recipe packed with flavor plus a mild punch from whiskey. This is such a manly snack perfect for a get-together, picnics, Father’s Day, or even just for an ordinary day.
Watch this video tutorial from Allrecipes to learn the basics of making homemade beef jerky:
Jerky is the perfect treat for when your salty cravings hit. You don’t have to splurge on tasty jerky because you can actually make your own at home. This list is loaded with uniquely flavorful jerky recipes to inspire you not to buy nasty and chemical-laden jerky and make a healthier and better homemade version instead. I’m pretty sure you’ll find a favorite on this list and I would be delighted to hear from you!
Which jerky recipes are you planning to try? Share your experience with me in the comments section below. How about healthier recipes to snack on? Try these 12 Healthy Chips Recipes To Try At Home! Don’t forget to keep in touch, foodies! Sign up for our newsletter here! Also, make sure to follow us on social media:
Featured Image via Traeger | 7,334 | 3,365 | 0.000305 |
warc | 201704 | Hydro Ottawa offers services to contractors and developers, most of which are completely free, to help ensure that construction and renovation projects are electrically safe.
The services we offer range from installing electrical services in new buildings and developments, to assessing and replacing current electrical equipment to ensure that it is up to safety standards, to helping customers install renewable generation equipment as part of the microFIT program.
You can browse and request many of these services online using our service request form. So whether you are building a new development, renovating a house or business, or have a question or a concern, we have you covered.
If you own high voltage electrical equipment in a vault, it is important that you maintain your equipment through regular inspections and tests.
Hydro Ottawa offers a wide range of services to meet our customer needs.
Trees that are in close proximity to overhead power lines could compromise public safety and the reliability of your electricity supply.
We can help make your move a little easier. Simply request that we turn on the electricity at your new address, or turn off service at your old address... all online. | 1,219 | 624 | 0.001613 |
warc | 201704 | Solvency Determinants of Conventional Life Insurers and Takaful Operators
The business of insurance is based on the trust of its policyholders, who expect that their losses will be compensated should the need arise at any time. Thus, sound financial condition is the most important criterion for insurance firms, as well as for takaful operators. Although the policyholder, through premium payments, may be the most important source of insurer finance (a debt holder from an economic point of view), the policyholder is not well placed to assess the financial strength or solvency of the life insurer. Various measures of insurer solvency are used in the industry, such as margin of solvency (MOS), risk-based capital (RBC), and claims-paying ability (CPA) rating. Unfortunately, none of these can provide information to policyholders on the financial position of the insurer: the MOS and RBC of each insurer are confidential information held by the company and the regulator, while the CPA rating is limited to insurers who wish to be evaluated, so the assessment is not comprehensive. Because of these shortcomings, this study provides a platform for policyholders to get an idea of the solvency of insurers/takaful operators. Furthermore, this study identifies factors that affect the solvency of insurers/takaful operators in Malaysia. Using random effects regression on panel data for 2003-2007, it finds that investment income, the ratio of total benefits paid to capital and surplus, financial leverage, and liquidity are significantly related to solvency: investment income has a positive relationship, while the other three have negative relationships. From these results, policyholders/consumers can assess an insurer’s financial strength through the solvency determinants of the insurers/takaful operators, even though the actual level of solvency is not known.
To some extent, this information can help policyholders/consumers make smarter choices when choosing insurers/takaful operators.
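The abstract's random effects regression on panel data can be sketched in a few lines. Everything below is a hypothetical illustration on synthetic data, not the paper's analysis: the regressor labels ("investment income", "leverage") and coefficient values are made up, and the true variance components are plugged into the GLS quasi-demeaning step for clarity, where a real study would estimate them first (e.g. via Swamy-Arora).

```python
import numpy as np

rng = np.random.default_rng(0)

# Balanced panel: N insurers observed over T years (cf. the paper's 2003-2007 window).
N, T = 200, 5

# Hypothetical coefficients: intercept, "investment income", "leverage".
beta_true = np.array([1.0, 0.5, -0.3])

u = rng.normal(0.0, 1.0, size=N)          # firm-level random effect u_i
X = rng.normal(size=(N, T, 2))            # two regressors per firm-year
e = rng.normal(0.0, 0.5, size=(N, T))     # idiosyncratic error e_it
y = beta_true[0] + X @ beta_true[1:] + u[:, None] + e

# Random-effects GLS via quasi-demeaning:
#   theta = 1 - sqrt(sigma_e^2 / (sigma_e^2 + T * sigma_u^2))
# True variance components are used here for clarity; in practice they
# would be estimated (e.g. Swamy-Arora) before this step.
sigma_u2, sigma_e2 = 1.0 ** 2, 0.5 ** 2
theta = 1.0 - np.sqrt(sigma_e2 / (sigma_e2 + T * sigma_u2))

# Subtract theta times each firm's time average; the intercept column
# becomes the constant (1 - theta).
y_star = (y - theta * y.mean(axis=1, keepdims=True)).reshape(-1)
X_quasi = X - theta * X.mean(axis=1, keepdims=True)
intercept = np.full((N, T, 1), 1.0 - theta)
X_star = np.concatenate([intercept, X_quasi], axis=2).reshape(-1, 3)

# OLS on the transformed data gives the random-effects GLS estimate;
# beta_hat should recover beta_true up to sampling noise.
beta_hat, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
```

With theta = 0 this collapses to pooled OLS and with theta = 1 to the fixed-effects (within) estimator, which is why the random-effects estimator sits between the two.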
If you experience problems downloading a file, check if you have the proper application to view it first. In case of further problems read the IDEAS help page. Note that these files are not on the IDEAS site. Please be patient as the files may be large.
As the access to this document is restricted, you may want to look for a different version under "Related research" (further below) or search for a different version of it.
Volume (Year): 6 (2012); Issue (Month): 2 (June); Pages: 1-25.
Handle: RePEc:bpj:apjrin:v:6:y:2012:i:2:n:3. Provider web page: http://www.degruyter.com
When requesting a correction, please mention this item's handle: RePEc:bpj:apjrin:v:6:y:2012:i:2:n:3. See general information about how to correct material in RePEc.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: (Peter Golla)
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If references are entirely missing, you can add them using this form.
If the full references list an item that is present in RePEc, but the system did not link to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your profile, as there may be some citations waiting for confirmation.
Please note that corrections may take a couple of weeks to filter through the various RePEc services. | 6,331 | 2,961 | 0.00034 |
Location decisions of manufacturing FDI in China: implications of China's WTO accession
No abstract is available for this item.
If you experience problems downloading a file, check if you have the proper application to view it first. In case of further problems read the IDEAS help page. Note that these files are not on the IDEAS site. Please be patient as the files may be large.
As the access to this document is restricted, you may want to look for a different version under "Related research" (further below) or search for a different version of it.
Semiparametric estimates and tests of base-independent equivalence scales
No abstract is available for this item.
A Case Study of Privatization without Consideration: The Failure of Voucher Privatization in the Czech Republic
The theory of coupon (voucher) privatization was developed by Milton Friedman in the 1970s as a quicker alternative to stock-market privatization. Following this theory, several transition countries in Central and Eastern Europe transferred their state-owned enterprise assets, to varying degrees, into private ownership. The largest voucher transfer took place in the Czech Republic, where more than half of the total privatized value passed into private hands through this institutional method. The current study presents the socioeconomic motivations, achievements, and failures of this radical privatization model, and finally draws lessons and conclusions regarding the Czech application of this, in an economic-political sense, extreme privatization technique. The study will form part of a dissertation offering a comparative analysis of European privatization models.
Volume 7 (2011), Issue 02, Pages 79-86
Handle: RePEc:mic:tmpjrn:v:7:y:2011:i:02:p:79-86
Provider web page: http://www.gtk.uni-miskolc.hu/
The Political Economy of Institutions, Stability and Investment: A Simultaneous Equation Approach in an Emerging Economy. The Case of South Africa
The modern theory of investment identifies the importance of uncertainty to investment. A number of empirical studies have tested the theory on South African time series, employing political instability measures as proxies for uncertainty. This paper verifies that political instability measures are required in the formulation of the investment function for South Africa. It also establishes that there are distinct institutional factors that influence the uncertainty variable such as property rights and crime levels. We find that rising income and property rights lower political instability, and that rising crime levels are positively related to political instability. The inference is that political instability in South Africa may not represent uncertainty directly, since it is systematically related to a set of determinants. Instead, uncertainty would have to be understood as being related to a broader institutional nexus that in concert may generate uncertainty for investors. The paper highlights the significance of getting institutions right to ensure that uncertainty is kept to a minimum by providing a predictable long-term environment. Stability at a systemic level appears crucial if investment rates are to rise in South Africa and this paper demonstrates that stability in turn is driven by a sound institutional environment that has multiple dimensions.
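The simultaneous-equation logic above, where investment depends on political instability while instability itself is driven by income, property rights and crime, is the classic setting in which plain least squares is biased and an instrumental-variables estimator is used instead. The sketch below illustrates the idea on synthetic data; every variable name and coefficient is made up for illustration and is not taken from the paper.

```python
import random
import statistics as st

def cov(xs, ys):
    """Sample covariance of two equal-length sequences."""
    mx, my = st.fmean(xs), st.fmean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

rng = random.Random(0)
n = 20_000

# Illustrative data-generating process: the two structural errors are
# correlated, so instability is endogenous in the investment equation.
income = [rng.gauss(0, 1) for _ in range(n)]
e_inst = [rng.gauss(0, 1) for _ in range(n)]
e_inv = [0.8 * u + rng.gauss(0, 1) for u in e_inst]

# Equation 1: instability falls with income (income serves as the instrument).
instability = [-0.5 * inc + u for inc, u in zip(income, e_inst)]
# Equation 2: the true causal effect of instability on investment is -1.0.
investment = [2.0 - 1.0 * s + v for s, v in zip(instability, e_inv)]

# OLS slope is biased because instability correlates with e_inv.
beta_ols = cov(instability, investment) / cov(instability, instability)
# IV slope: instrument instability with the exogenous income variable.
beta_iv = cov(income, investment) / cov(income, instability)

print(f"OLS slope: {beta_ols:+.2f}   IV slope: {beta_iv:+.2f}")
```

With these made-up parameters the OLS slope is pulled well away from the true value, while the instrumented estimate lands near -1.0, which is the point of modelling stability and investment as a system rather than a single equation.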
Volume 44 (2008), Issue 7, Pages 1056-1079
Handle: RePEc:taf:jdevst:v:44:y:2008:i:7:p:1056-1079
DOI: 10.1080/00220380802150854
Provider web page: http://www.tandfonline.com/FJDS20
Institutional Theories and Public Institutions
This chapter covers the evolution of institutional theory and its application to public administration over the last 20 years. It discusses the various streams, their research agendas and their contributions, as well as the limits of what they add to knowledge.
Date of creation: 2011
Publication status: Published in Peters, B.G. and J. Pierre, The Handbook of Public Administration, Sage, pp. 185-101, 2011
Handle: RePEc:hal:journl:halshs-00638348
Note: View the original document on the HAL open archive server: https://halshs.archives-ouvertes.fr/halshs-00638348
Provider web page: https://hal.archives-ouvertes.fr/
Productivity and structural heterogeneity in the Brazilian manufacturing sector: trends and determinants
This paper discusses the evolution of firms' productivity and structural heterogeneity (SH) in the Brazilian manufacturing industry in the 2000s. SH is defined (following the Latin American structuralist tradition) as a situation in which a large share of total firms is in the lowest productivity groups of the production structure, and there are very large differences in labour productivity between groups and firms. The paper combines and makes compatible several databases on manufacturing production, innovation and micro-social data for Brazil, in order to measure productivity and SH, to analyse its evolution between 2000 and 2008, and to discuss its determinants. Econometric analyses (k-means cluster methodology to identify productivity groups, and ordered probit models to analyse the determinants of SH) show that increasing returns in innovation and learning prevailed in the 2000s, while policies failed to encourage the catching up process by laggard firms. As a result, SH did not fall in the Brazilian manufacturing sector.
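The grouping step described in the abstract, partitioning firms into productivity strata with k-means, can be illustrated with a plain Lloyd's-algorithm run on one-dimensional data. This is a generic sketch on synthetic labour-productivity figures, not the paper's data or code:

```python
import random
import statistics

def kmeans_1d(values, k, iters=50, seed=0):
    """Plain Lloyd's algorithm for 1-D data (e.g. firm labour productivity)."""
    rng = random.Random(seed)
    centers = sorted(rng.sample(values, k))
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each firm to its nearest group centre.
            i = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        new_centers = [statistics.mean(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:
            break
        centers = new_centers
    return sorted(centers), clusters

# Synthetic firm-level labour productivity: a large low-productivity mass and
# a small high-productivity tail, the pattern the paper calls SH.
rng = random.Random(1)
low = [rng.gauss(10, 2) for _ in range(800)]
high = [rng.gauss(40, 5) for _ in range(200)]
centers, clusters = kmeans_1d(low + high, k=2)
print(centers)  # two group means, roughly 10 and 40
```

With k set higher (the paper works with several productivity groups, not two), the same procedure yields the strata whose relative sizes and gaps define the SH measure.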
Date of creation: 29 Nov 2012
Handle: RePEc:ssa:lemwps:2012/20
Provider: Piazza dei Martiri della Liberta, 33, 56127 Pisa; Phone: +39-50-883343; Fax: +39-50-883344; Web page: http://www.lem.sssup.it/
On Christmas Day 2009, a man boarded a flight from Amsterdam to Detroit and allegedly tried to detonate a bomb hidden in his underwear. The incident, thought to be a failed Al-Qaeda terrorist plot, led to the introduction of Advanced Imaging Technology (‘AIT’) full-body scanners in UK airports and the acceleration of their use in the US.
In the US, the introduction of the new technology has just been declared lawful. In the UK, the interim code of practice setting out their acceptable use has just been the subject of a public consultation.
On 15 July 2011, the United States Court of Appeals for the District of Columbia Circuit upheld the use of full-body scanners to screen air travellers in the case of
Electronic Privacy Information Center (EPIC), et al. v United States Department of Homeland Security, et al. (US Court of Appeals, DC Circuit, 15 July 2011)
The Transportation Security Administration (“TSA”) had begun to deploy scanners that used AIT rather than the traditional magnetometer in 2007 and introduced them more widely in early 2010. The AIT scanner produces a crude image of an unclothed person, enabling the operator of the machine to detect a non-metallic object, such as a liquid or powder without touching the passengers coming through the checkpoint. No passenger was in fact ever required to submit to an AIT full body scan but could opt instead for a ‘pat down’ search. The TSA also took steps to mitigate the effect on the passengers’ privacy, including obscuring facial features, ensuring that the images were viewable only by an officer sitting in a remote and secure room and deleting the images as soon as the passenger was cleared.
However, EPIC and two other individuals petitioned for a review of the TSA’s decision to screen by the new method. They argued that the decision violated various federal statutes, the Fourth Amendment of the United States Constitution and in any event should have been subject to a consultation (‘notice and comment’) before being adopted.
Judge Ginsburg, who was sitting with Judge Henderson and Judge Tatel, stated that “it is clear that by producing an image of the unclothed passenger, an AIT scanner intrudes upon his or her personal privacy in a way a magnetometer does not”. However, he ruled that the decision was not unlawful under:
the Video Voyeurism Prevention Act, 18 U.S.C., because it does not apply to any “lawful law enforcement, correctional or intelligence activity”; and the Privacy Act, 5 U.S.C., because the TSA does not maintain data from AIT scanners in any system of records linked to names or any other identifier, nor had EPIC offered any reason to believe that the TSA had attempted to identify the images from any other sources.
EPIC also failed in their argument that the Chief Privacy Officer (CPO) of the Department for Homeland Security had not done enough to safeguard privacy. The judges refused to disturb the CPO’s conclusion that the built in privacy protections were sufficiently strong.
Nor had the TSA’s decision breached the Fourth Amendment, which guards against unreasonable searches and seizures. EPIC had argued that the search was more invasive than was necessary to detect weapons or explosives; however, the judges relied on the US Supreme Court’s continued refusal to declare that “only the least intrusive search practicable can be reasonable under the Fourth Amendment” (
City of Ontario v Quon, 130 S. Ct. (2010)) and the fact that passengers could opt out of the search.
However, the Judges ruled that the TSA should have conducted a “notice-and-comment rulemaking” procedure before implementing their decision and remanded the matter to the TSA for further proceedings. Marc Rotenberg, the EPIC Executive Director, responded to the ruling by saying that “many Americans object to the airport body scanner program. Now they will have an opportunity to express their views to the TSA and the agency must take their views into account as a matter of law”.
Last week’s ruling serves as a reminder that the AIT system is also in use across UK airports. In January 2010, the Government announced a package of measures to enhance the protection of the travelling public following the failed Christmas Day plot, including the introduction of AIT full body scanners.
An interim code of practice was published by the Department for Transport to support the introduction of AIT scanners at Heathrow and Manchester airports and a consultation on the code was commenced on 29 March 2010. The purpose of the consultation was to seek the public’s views on the interim code with a view to preparing the final code of practice.
Responses were made to the consultation by, amongst others, Liberty, the Equality and Human Rights Commission and the Islamic Human Rights Commission. Each organisation was concerned at the impact the implementation of the AIT scanners would have on a traveller’s privacy. Unlike in the United States, if an individual is selected for body scanning in the UK, an alternative will not be offered. This was emphasised in Liberty’s response which said that “the issue here is not a refusal to submit to a security search, but the disproportionate impact on some people’s privacy…caused by the lack of an alternative means of being searched. While some may object to revealing these intimate details and may prefer this to be being physically touched, human rights concerns the dignity of individuals and, a majority-rules approach is not only inappropriate, it may also be unlawful.”
The consultation on body scanners in UK airports closed on 19 July 2010. Clear questions were raised in the responses, but no final code has yet been produced.
Perhaps the recent judgment in the US Court of Appeals will revive the issue on this side of the pond. | 5,929 | 2,771 | 0.000372 |
There are lots of good things to write about, so it is with despondent irony that I am writing about something very bad for anyone who reads or writes. The something in question is “Spritz”, a new app or computer program or something that force-feeds written material to you at the frenetically nauseating pace of three to five hundred words per minute. The average reading speed for a well-read adult is at most 220 w.p.m., so this is a staggering increase of roughly 35-130% in the speed that words are flying from the page—er—screen—to your brain.
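The arithmetic behind those paces is simple: a rapid-serial-presentation reader just divides the minute by the rate, so each word is flashed for 60000 / wpm milliseconds. A quick generic sketch (not Spritz's actual code) makes the jump over a 220 w.p.m. baseline concrete:

```python
def ms_per_word(wpm: int) -> float:
    """Display time per word, in milliseconds, at a given words-per-minute rate."""
    return 60_000 / wpm

def speedup_vs(wpm: int, baseline_wpm: int = 220) -> float:
    """Percentage increase over a baseline reading speed."""
    return (wpm / baseline_wpm - 1) * 100

for wpm in (220, 300, 500):
    print(f"{wpm} wpm -> {ms_per_word(wpm):.0f} ms/word, "
          f"{speedup_vs(wpm):+.0f}% vs. a 220 wpm reader")
```

At 500 w.p.m. each word gets 120 milliseconds on screen, a 127% jump over the baseline, which is why a sneeze costs you a sentence or two.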
The creators of Spritz claim that this technology will revolutionize the world, make you a better person, improve your love life, etc. It certainly will change things if it catches on. If the boundless exuberance on my social media feeds is any indicator, people plan to finally “read” all the books they’ve been lying about having read for all these years. I put “read” in scare quotes, because after watching a sample feed of words at three, four, and five hundred w.p.m. I simply cannot buy that anyone who chokes down
Gatsby, Lolita, or Infinite Jest on Spritz will have learned a damn thing. They might be able to name characters, outline the plot, even talk about what they liked and didn’t like, but the sweeping impressionism of Fitzgerald’s prose or Wallace’s asinine footnotes and re-references will be entirely lost on these one-word-at-a-time phonies.
Based on my brief demo of the program, I was almost unable to worry about the societal implications because my brain was teetering dangerously close to epilepsy. The words whir by so quickly that, even though it does work better than I’d like to admit, it feels like some sort of unsustainable warp speed that will eventually catastrophically fail when the center ceases to hold. After reading about five sentences at 500 w.p.m. I felt like I needed a breather. Which led me to miss two more sentences and wonder what happens if you have to sneeze or write down a favorite line (not that you can pick favorites when you can’t see syntax, etc.).
By sterilizing the reading experience, we are dehumanizing ourselves. Humans are unique in the elaborateness of their written and spoken languages, their completely-unrelated-to-survival artistic endeavors, and their hellish bent toward self-destructive progress. From nuclear weapons to hyperconnectivity to reading sans fun, we are remarkably proficient at taking good ideas several steps too far. Harnessing atoms to make energy out of nothing seems wise, especially with our insatiable appetites for electricity and the finite amounts of coal and crude available to us. Connecting all corners of the world has obvious advantages. Helping people dig into the classics who wouldn’t otherwise read at all is a noble cause. And yet, somewhere along the way, these have all become perverted.
We make energy out of nothing to level nations or mutually assure one another that such destruction is imminent. We are so connected that walking peaceably down a quiet street and being wholly present seems as quaintly archaic as black and white video footage of a soda parlor. And now, we are bent on eradicating the frustrating, relaxing, challenging, rewarding endeavor that is reading a good book. I wonder when Beethoven’s complete canon will be compressed into a minute-long ultradensesupertechnowunder soundbyte or when every Picasso painting will be downloadable straight to your hippocampus via GoogleContacts.
This is not technophobic or dystopic radicalism. It is a not-so-far-fetched look at what might happen soon, and, more importantly, a sincere “why?” What do we gain by ‘knowing’ great works of art if we haven’t engaged with them? There might be a hint of democracy to downloading the Mona Lisa, since traveling to Paris is not exactly free, but everyone knows what she looks like. The pilgrimage to Napoleon’s Great Pillage that is the Louvre is as beautiful an experience as sitting on a blanket on the beach for three hours and reading barely ninety pages of Steinbeck’s densest storytelling. If we crush these slow joys and miniature (or major) vacations out of our lives in favor of efficiency, what will we do with the time we ‘save’?
I might answer “get in more training time for triathlons” or “spend more time with people I love,” but how long until those things are somehow condensed too (see the gimmicky workouts and workout machines that promise incredible results in minimal time)? Until we learn to (re)value the process, be it of creating or appreciating art, of building, repairing, maintaining, we are going to watch the uniquely human aspects of our lives get marginalized until A.I. is not so Hollywood anymore. This is not strictly about robots becoming convincingly human, but about us forgetting how to turn pages and wrenches. And things are only enjoyable because they are counterbalanced by other things (there is a mechanical analogy here, but it is likely lost on everybody because our cars are too complex to bother understanding anymore), moderation being the key to savoring anything at all.
The complexity of all our gadgetry has created a unique problem that could fill volumes of books, were they still being printed and sold en masse. Technology has become so good and improvements come about so quickly (cheers, Gordon Moore) that rather than maintaining it or learning how it works, we use it until it becomes unbearably irrelevant (oftentimes less than two years).
I’m not arguing against inevitability or standing on a Luddite soapbox. Some things we cannot change or resist. But as long as we create the demand for great literature and architectural triumphs filled with artistic masterpieces, and keep putting pockets in our pants and shelves in our homes to stash the smartphones for a few minutes, we will maintain a degree of resistance to the fierce, invisible force of homogeneity behind technological ‘advances’. Read something printed on paper, be it a newspaper article or even ten pages of a great book. Walk outside for twenty minutes without checking your phone. Go to an art museum. Dedicate yourself daily anew. Something like that.
A tragedy leads to another tragedy.
This unfortunate event began when a young man from the village went missing after swimming in the lake.
A rescue party of 83 men went to find him, and they managed to recover his body.
A week after the rescuers went home, more than 10 of them fell ill with fever and myalgia, and 5 succumbed to the illness.
Today, the health minister told the press that the cause of death was co-infection with melioidosis and leptospirosis.
The five victims (4 in their 50s, 1 in his early thirties) were diabetic patients, a group often targeted by melioidosis, which is endemic in this region.
Back in 2000, an outbreak of leptospirosis struck a group of Eco-Challenge participants in Sabah, diagnosed only after they returned to their home countries.
As medical doctors, we always need to rule out leptospirosis in patients with recent jungle travel; it often presents with low platelets (similar to dengue) together with deranged liver function and worsening kidney function.
Early treatment is warranted. The 5 victims are believed to have had delayed treatment.
The mysterious illness was thus attributed to leptospirosis and melioidosis.
Quote from the site: “A 15-year-old boy died and 37 others from a village in Beaufort district have been admitted to hospital after swimming in a stream near an oil palm plantation over the weekend. This has sparked off alarm in Kampung Kebatu, about 115km from here, as villagers now are afraid to use the stream where they have been bathing for many years.”
Please use this identifier to cite or link to this item:
http://hdl.handle.net/2381/34416
Title: Functional analysis of the Escherichia coli haemolysin and its export mechanism.
Authors: Gray, Lindsay D.
Award date: 1987
Presented at: University of Leicester
Abstract: Haemolytic activity produced by E. coli strains carrying the recombinant plasmid pLG570 was investigated. This plasmid contains a haemolytic determinant subcloned from the wild-type E. coli LE2001, isolated from a human urinary tract infection. Quantitative assays for haemolytic activity and for the separation of soluble subcellular compartments were developed. These revealed the absence of any detectable periplasmic pool of haemolytic activity in strains expressing the intact haemolytic determinant. Restriction mapping the haemolytic determinant enabled the identification, subcloning and independent expression from separate plasmids of the hly genes. The production of secreted haemolytic activity was shown to require the expression of four contiguous genes, hlyC, hlyA (the haemolysin polypeptide), hlyB and hlyD. Quantitative assays of haemolytic activity were undertaken on subcellular fractions of strains expressing subsets of hly genes. This enabled the following observations. HlyC is required for activation, but not for secretion, of HlyA. Both HlyB and HlyD are required for secretion of HlyA. Absence of any combination of HlyC, HlyB and/or HlyD did not result in periplasmic accumulation of HlyA. Thus it was concluded that the secretion of haemolysin involves tightly coupled translocation through both bacterial membranes without release into the periplasm. Identification of a "nucleotide binding site" in HlyB implied that at least part of this protein was exposed at the cytoplasmic face of the bacterial inner membrane. A topogenic domain at the C-terminal of HlyA was identified. Truncation of hlyA, resulting in the deletion of 27 amino acids from the C-terminal of HlyA, was shown to produce a haemolytic polypeptide which was not secreted. In addition, HlyA-derived polypeptides, expressed from subcloned 3' portions of the hlyA gene, were shown to be secreted. Thus it was concluded that the C-terminal 113 amino acids of HlyA apparently include all the information required for interaction with the hly secretion machinery.
Links: http://hdl.handle.net/2381/34416
Type: Thesis
Level: Doctoral
Qualification: Ph.D.
Rights: Copyright © the author. All rights reserved.
Appears in Collections: Theses, Dept. of Genetics
Leicester Theses
Items in LRA are protected by copyright, with all rights reserved, unless otherwise indicated.
Marketing is changing at an increasingly rapid rate. New digital tools and tactics are constantly emerging to dethrone long-time kings. Disruptive products and players crop up on a constant basis with novel approaches and innovative ideas, vying for your market share. Even long-established pillars of marketing are being fundamentally changed by analytics, technology, and shifting consumer habits; it’s all too easy to fall behind.
If you’ve worked in digital marketing long enough, you’ve almost certainly worked under someone who was not a digital native: a senior manager or head of marketing still leveraging the same outlook and tactics that made them successful ten or twenty years ago. Those principles were enough to bring them quick career growth at the time but have since lost relevance, and it’s apparent to anyone working with them who has a more modern perspective. As a digital marketer yourself, it’s imperative that you start adjusting your career journey now to avoid a similar fate.
This is no longer a profession suited for people with a static understanding of their marketing discipline. Perhaps once you could have gotten by with just a strong foundation of brand principles or PR best practices or direct response fundamentals, and that would be enough to carry you through your career. But that hasn’t been the case for some time now. Becoming complacent for just a few months can mean you’ll have to work overtime to catch back up and ride the wave. Fall behind a couple of years, and you might want to start considering a new career direction. That’s why, if you hope to have long and successful careers in marketing, it’s critically important to become what I like to call a “lifelong learner.” Otherwise you’ll be destined to commit the same sins as previous generations and quickly become irrelevant.
Climbing to the Top, One Step at a Time
Regardless of the path you take to the top, you must always have the humility it takes to realize that you have a lot more to learn. The days of a successful marketing leader being out of touch with digital and the rest of marketing’s cutting edge are long gone. At the same time, a marketing head who is digitally minded but can’t take advantage of traditional media or more abstract concepts like branding will also have a short shelf life. If you have long term plans of a career in executive leadership with an eventual spot in the CMO chair of a large and prestigious organization, you must have an understanding of how all the pieces of modern marketing fit together. Experienced CMO executive recruiters will accept nothing less.
As fast as digital marketing is moving now, you need a consistent big-picture understanding of the major trends in marketing and the primary factors that are impacting your success. But that’s not something you can just acquire overnight–or over a year. We see too many marketers put off learning about a key new strategy or system and then rush to understand it all at once when an urgent need arises. These days, trying to acquire new skills or update your understanding in big “chunks” only when there is a pressing need for it doesn’t work for digital; you won’t get the real-time context and ability to absorb that information that makes it truly valuable.
Instead, it’s better to constantly build yourself and your knowledge of digital marketing consistently, on a daily basis. Read an article here, have a conversation with a subject matter expert there, watch a video in between. On any given day, this consistent learning and growing won’t make a huge impact. But the accumulation of knowledge, experience and skills builds up bit by bit into a powerful and flexible big-picture understanding of how to make the most of digital.
An experienced digital marketing recruiter can easily differentiate a professional who dedicated themselves to learning and growing constantly ten years ago from one who only does so in fits and starts when the need arises. The former is immeasurably wiser, more strategic and better prepared for the future than the latter, which can make all the difference down the road when CMO executive recruiters are evaluating you for a big job opportunity.
Becoming a Lifelong Learner Yourself
From a strictly professional perspective, most of the knowledge you’re acquiring as a lifelong learner should be related to marketing, business, leadership and technology. But the most successful lifelong learners don’t limit their perpetual education to their career; they’re constantly learning personal skills, practicing their social aptitude, exposing themselves to new ideas and tirelessly forging themselves into stronger, better people overall. For them, learning isn’t a professional duty limited to business hours; it’s a full-time lifestyle choice.
Don’t take this advice to mean that you’re expected to know everything about every facet of marketing; that’s simply impossible. The list of moving parts that make up today’s marketing is already too long, and growing every day. Can you imagine learning all there is to know about online advertising, PR, user experience, analytics, branding, and web design? Of course not–and that’s only the tip of the iceberg!
Instead, it’s better to have a general understanding of major trends, a big-picture view of emerging technology and tactics, and know how everything fits together as part of a large, constantly changing and interdependent puzzle.
For instance, regardless of your background you can better understand what your customer experience is, why it’s important, and the value (or detriment) it brings your business. But if your background and strengths lie in product marketing or media buying, you don’t need to get obsessed with learning the meticulous details of improving online UX, providing responsive customer support and developing a seamless omnichannel offering. For the finer points, you should rely on your team of top performers in every field to handle day-to-day execution and advise you on strategy.
A good way to start is to find a variety of reliable resources that deliver a regular schedule of high-quality, up-to-date and easily accessible content. Thanks to the magic of the internet, you have an overwhelming amount of well-written, interesting, informative and even entertaining resources in your pocket at all times. Make use of them! Blogs, web publications, podcasts, video series and more bring guides, thought leadership, trending news and industry forecasts right to your fingertips, often entirely free. There are also webinars, industry conferences, mentorships, workshops, and the like. Our CMO executive recruiters have written about several of these subjects before; you’ll find these a good place to get started if you’re just beginning (or continuing) your journey as a digital marketer and lifelong learner:
18 Exceptional Blogs That Will Keep You Up to Speed on Digital Marketing
5 Free Apps Marketers Can Use to Learn New Digital Skills and Stay on Top of Trends
5 Podcasts to Keep Yourself on The Cutting Edge of Digital Marketing
11 Resources Any Digital Marketer Can Use to Stay on Top of SEO Strategy
9 Eye-Catching Certifications that Boost Your Marketing Resume
Big gaps in basic knowledge about the numbers and causes of apparently inexplicable heart attacks among young sportsmen and women are seriously hampering our ability to prevent them, says a sport and exercise medicine specialist in the British Journal of Sports Medicine.
At the very least, we need to start building reliable databases of all such events across sport, in a bid to start plugging these knowledge gaps, say Dr Richard Weiler and colleagues.
His comments come in the wake of the recent high profile case of premier league footballer, Fabrice Muamba, who collapsed on pitch, in front of a stadium packed with spectators, after sustaining a sudden heart attack.
Fortunately, Mr Muamba recovered, but cases like these, although rare, are still likely to occur despite screening programmes, and they are poorly understood, emphasises Dr Weiler.
These cases have prompted improvements in pitch-side and acute sports medicine, including emergency life support, defibrillation and the development of practical education courses and emergency care guidelines, says Dr Weiler.
None the less, he says: "We still lack many answers to basic questions about these afflictions. We do not know the exact numbers and trends in prevalence or incidence, and do not understand the [multiple causes] that trigger sudden cardiac death in previously healthy athletes."
Issues that still need further investigation are the roles of gender and ethnicity, geography and genes, he says.
For example, Sub-Saharan Africa may be a "cardiac hotspot," with recent research linking sudden heart attacks to sickle cell trait.
Other research suggests that African Americans are three times more prone to sudden cardiac death/arrest than white athletes, although the rates vary considerably depending on the type of sport played.
And another study found that heart (ECG) tracing patterns differ between white and black athletes, although whether this is normal or indicates a higher risk for sudden cardiac death is not known, says Dr Weiler.
Screening programmes throw up a considerable number of false positive results, and it is still far from clear whether screening actually cuts the number of deaths, whether it is cost effective, and how to manage any abnormal findings, he says.
"It is vital that we start to answer these questions based on reliable science and evidence," he insists. "To achieve this, we propose the collection and recording of reliable data across sport of every sudden cardiac death/arrest," he writes.
But for this to happen, cooperation and collaboration will be needed among sporting organisations, federations, and clubs, in addition to the establishment of sport specific and national registries for these incidents, he suggests.
Dr Weiler cites a FIFA (International Football Federation) initiative. This requires a medical assessment before a match for all FIFA competitions, and includes a recently established database for all its 208 member associations in a bid to build up an evidence base and better understand the condition.
"This is one of many efforts needed to fill knowledge gaps and enable us to mitigate the risks of sudden cardiac arrest/death," concludes Dr Weiler, adding that minimum standards of pitch-side medical care across all sports are essential.
More information: What can we do to reduce the number of tragic cardiac events in sport? doi: 10.1136/bjsports-2012-091252
At least one in three patients with depression won’t respond well to a series of treatments, and experts in the field have joined together to outline practical treatments to tackle the issue in the Medical Journal of Australia Open.
Depression can be a stubborn problem—at least one in three patients fail to respond to proven therapies—and experts in the field have put their heads together to outline practical treatment approaches for general practitioners in an MJA Open supplement on "difficult-to-treat depression". "While GPs have many skills in the assessment and treatment of depression, they are often faced with people with depression who simply do not get better, despite the use of proven therapies, be they psychological or pharmacological", wrote Professor David Castle, Chair of Psychiatry at St Vincent's Health and the University of Melbourne, and coauthors.
They wrote that they hoped the approaches outlined in the supplement could assist clinicians—and GPs in particular—to improve the outcomes of patients with difficult-to-treat depression. In an article on pharmacological approaches to the problem, Dr Herng-Nieng Chan and Professor Philip Mitchell, psychiatrists with the University of New South Wales and the Black Dog Institute, and coauthors outlined the latest evidence-based drug treatment strategies for people with difficult-to-treat depression, based on studies including a US trial of almost 3000 patients.
The US study found that 30% of patients failed to achieve remission of their depression after using up to four different antidepressants. "This finding reflects the reality of clinical practice and highlights the need to employ the best available evidence in the management of people with complex depression", they wrote.
Professor Paul Fitzgerald, a psychiatrist from Monash Alfred Psychiatry Research Centre, wrote that electroconvulsive therapy remained the most widely used and effective biological non-drug treatment for difficult-to-treat depression.
However, he also detailed innovative new forms of brain stimulation, including magnetic seizure therapy and vagus nerve stimulation, which showed promise. "Ongoing work is required to define which treatments are likely to be most useful, and in which patient groups", he wrote.
Dr Melissa Casey, director of psychology at Southern Health, and coauthors wrote that evidencebased psychological approaches including cognitive behaviour therapy, interpersonal psychotherapy and family-based therapy could improve outcomes in difficult-to-treat depression.
As thought patterns and behaviour played a large role in determining outcomes of treatment for people with depression, they wrote, they were "prime candidates for intervention through a psychosocial treatment regimen".
Authors: Halil Kesselring, Rebecca Wheatley and Dustin J Marshall
Published in: Marine Ecology Progress Series, volume 465, doi: 10.3354/meps09865
Abstract
An understanding of the effects of intraspecific variation in offspring size is important from both an ecological and an evolutionary perspective.
While the relationship between offspring size and overall offspring performance is key, most studies are restricted to examination of the effects of offspring size on early life-history stages only, and too few have examined the effects of offspring size throughout the life history.
Here, we examine the effects of offspring size on post-metamorphic survival, growth, and fecundity under field conditions for the polychaete Janua sp.
Larger offspring became larger adults and had higher levels of fecundity than those from smaller offspring, though the effect on fecundity was weaker and more variable over different experimental runs. Adults derived from larger larvae had shorter lifespans than adults derived from smaller larvae.
Our results suggest that the maternal effect of offspring size can influence the frequently observed trade-off between longevity and fecundity.
Future studies should seek to measure the effects of offspring size over as much of the life history as possible in order to avoid misestimating the relationship between offspring size and fitness.
Full paper
Kesselring H, Wheatley R, Marshall DJ (2012) Initial offspring size mediates trade-off between fecundity and longevity in the field. Marine Ecology Progress Series, 465: 129–136. doi: 10.3354/meps09865
A rant by an America-loving immigrant. Ohio State University jihadist, Abdul Razak Ali Artan, was not just a Muslim immigrant from Somalia. He had the coveted refugee status, which means the US government provided him with food, shelter, and education grants. What made him so mad at Americans that he went on an indiscriminate killing rampage? Was it something he heard from ISIS – or was it the school-approved “social justice” rhetoric about how much he suffered in this country as a Muslim? It could be both, not necessarily in that order. The University police officer who shot and killed him, Alan Horujko, is another young man from an immigrant family. His ancestors hailed from Ukraine or any of the bordering areas in Poland, Belarus, Slovakia, or another Eastern European country. They’d be shocked to hear that someday the U.S. government would feed and house immigrants – especially the types of immigrants that have a proven record of going on murderous rampages and indiscriminately killing and maiming Americans simply for being Americans. Based on these two examples, which type of immigration is beneficial to America and which one is harmful?
The answer is obvious, but here’s the kicker: current immigration laws, as written by the late Senator Ted Kennedy, make it next to impossible for any European to obtain a green card (it took me 20 years to get mine). Instead, the law focuses on “multiculturalism” and “diversity,” which means the preference is given to immigrants from the Third World countries who are least likely to assimilate. By now the entire world is aware of the benefits that come with the “refugee status”; for many it is the equivalent of winning a life lottery: guaranteed food stamps, subsidized housing, free healthcare, grants, and other perks on the taxpayer’s dime.
If today’s laws were in existence at the time when the Horujkos came to America, there would be no Alan Horujko around to stop the jihadist because his ancestors would be turned away. America itself would also be different, looking more like a Third World country in its ways, culture, and system of government. Terror attacks would be the norm and human life would have as little value as it has in Somalia.
Knowing this, any sensible American would demand an urgent overhaul of these insane immigration laws. Unfortunately, proponents of the existing system have been able to condition American conservatives to see red at the very mention of “immigration reform” because it sounds like a “liberal” idea. By and large, traditional conservatives seem to have convinced themselves that all that’s needed is simply to “enforce the existing laws,” oppose “illegal immigration,” proudly support “legal immigration,” and forget about any reforms. By doing that they effectively support a continuation of multiculturalism that precludes assimilation and renders the proverbial “melting pot” obsolete.
Indeed, any reform conducted by a leftist government is likely to make things worse. But the leftist hold on the U.S. government will be over as early as January. Once the Trump administration takes office, we should demand a completely new set of immigration laws based on a cardinally different philosophy.
This is not an argument to filter people by race or ethnicity, but rather by their individual readiness to assimilate into the American culture by accepting American values and American way of life.
It’s not racist to want your children to be safe when they walk the streets, ride the trains, or, in this case, go to school after a Thanksgiving break. We need an immigration system that would let more Horujkos in and keep Abdul Razak Ali Artans out.
* * *
On another note, three weeks ago I was arrested for hanging these anti-terror posters on George Mason University campus. Given today’s terrorist attack on Ohio State campus, these posters turned out to be prophetic (PBUH).
Will the George Mason authorities apologize to me now and drop the charges? I have three more hashtags for them:
#IToldYouSo
#TheIrony #IDontHoldMyBreath
And on December 2 they have a scheduled Discussion with OSU Police Department: “We will have a discussion about the police relations with the muslim (sic) community and how we can adress (sic) distrust. In a time where bigotry against minority communities is prevalant (sic) it is important for our community to have friends within the Law Enforcement Community.”
Why do bad things always happen to their community?
Perhaps, if only their community could meet with the other community to talk about their communities sooner, much bigotry could have been prevented – and a community of 11 random non-Muslim students wouldn’t be in the hospital today with stab wounds and fractured skulls. And Abdul Razak Ali Artan would have abandoned the jihadi community and set his mind on helping the Catholic community that took care of him when he first came to join the American community. Or maybe not.
* * *
On yet another note, if Abdul Razak Ali Artan voted for a U.S. president this month, who did he vote for? In Ohio, according to reports, Somali immigrants are being driven to the polls by the van load, while many of them don’t even speak English, which makes them unlikely citizens. Abdul is said to have been a “permanent resident,” which comes before citizenship. So if he did vote, can we now chuck his vote from the much discussed “popular vote” number? In fact, how many more votes like his are there in the “popular vote” of which Hillary is so proud?
Mises Daily Articles
Net Neutrality: Unwarranted Intervention
With the recent ruling in the Comcast case,1 it seems that the debate about net-neutrality regulation has paused.2 Feeling the carpet pulled out from under it, the Federal Communications Commission (FCC) is struggling to find new legal ground for its intervention.
In the meantime, the stakeholders should reflect on the consequences a lobbying success would bring: the creation of regulations on net neutrality.
The position of telecom operators is clear enough, as they have opposed from the beginning this attack on their assets and their businesses.
But what about the several net-neutrality proponents, like Google, Twitter, Facebook, Amazon and many others?3 Have they really thought about the consequences of net-neutrality regulation — not only in the short but also in the long term? Are they not aware that any regulation has unintended consequences?
Access to content
Access to applications and services
Connection of devices
Access to competitive options
Nondiscrimination
Transparency
These are the proposed obligations, but the real issue behind them is network management. Telecommunications networks are scarce resources and, as such, they require management. In an environment of scarcity, someone has to decide how to assign the limited capacity available.
For most of their history, the management techniques required for telecommunications networks have been simple. But that is changing now, with the explosive growth of data traffic, and, specifically, the data generated from cellular devices. Suddenly, network capacity is becoming scarcer than ever, and telecom operators are starting to analyze how to manage the new situation with the resources at their disposal.
In summary, the question of net neutrality is not about the need for network management; it is about who will decide how the network is managed. There is no chance of having "dumb pipes," as net-neutrality proponents seem to demand; what we will have is either dumbly managed pipes or smartly managed pipes.
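The point that scarce pipes must be managed by someone, dumbly or smartly, can be made concrete with a toy allocation sketch. All names and numbers below are hypothetical; no real operator schedules traffic this way:

```python
# Toy illustration: a scarce link has fixed capacity, and *someone* must
# decide how to split it among traffic flows. "Dumb" management divides
# capacity equally; "smart" management weights flows by subscriber demand.

def dumb_split(capacity, flows):
    """Equal share per flow, regardless of demand."""
    share = capacity / len(flows)
    return {name: share for name in flows}

def smart_split(capacity, flows):
    """Share proportional to subscriber demand for each flow."""
    total = sum(flows.values())
    return {name: capacity * demand / total for name, demand in flows.items()}

demand = {"video": 60, "web": 30, "email": 10}  # hypothetical relative demand
print(smart_split(100, demand))  # {'video': 60.0, 'web': 30.0, 'email': 10.0}
```

Under equal splitting, video is starved while email capacity sits idle; weighting by demand serves what customers actually ask for. The policy question is who gets to pick the weights: the network's owner, or a regulator.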
There are basically two options for network management: it may be designed and carried out by the owner of the network, or by "stakeholders" not owning the network (i.e., by means of government intervention and regulations).
And now we return to the original question. What would new regulation mean for Google and its net-neutrality allies?
The owner of the network is interested in obtaining as much revenues as possible. He can only do so if the service he provides is satisfactory to his customers. Otherwise, they will change to another network or discontinue the contract. Of course, this threat will have important implications for the way the owner manages the network. He will try at any moment to use the scarce resource in a way that most perfectly satisfies the majority of clients.
This seems in line with the interests of Google. If Google is providing a good service to its users, telecom clients will demand access to Google services, and the telecom operator will not have the option to deny such access. If he does, he will lose customers and, more importantly, revenues.
What if the management is done by the government? Then we abandon the realm of service and enter into the realm of policy. The interest of the customer gives way to the "public interest." And Google will not be able to rely anymore on the satisfaction of its users in order to secure the needed capacity to reach them.
Google, Facebook, Amazon, eBay, and the rest of the big Internet companies will have to rely on lobbying instead of on merits. The network, managed by the government (of course, not directly, but by increasing regulation of telecom operators), will be assigned according to arbitrary policy criteria not necessarily related to the actual preferences of people. For example, Spanish movies could be given priority over Google contents due to cultural issues (or due to better lobbying by those stakeholders).
If network owners are deprived of the ability to manage their assets, they will stop investing in them. After all, nobody wastes money on something they can't use. The capacity of the network will not grow and it will not be able to carry the vast and growing amount of data demanded. Once again, this does not look good for Google and its allies, whose businesses rely so heavily on the increase of data traffic.
In conclusion, net-neutrality regulation is not good, not even for its most ardent supporters. Google, eBay, Facebook, Amazon, and other net-neutrality proponents have a proven track record of success in serving their customers. Why they prefer to put their future in the hands of governments instead of depending on their own proven capacities of service is simply beyond me.
1. United States Court of Appeals for the District of Columbia Circuit: Comcast v. Federal Communications Commission and United States of America, On Petition for Review of an Order of the Federal Communications Commission, Case No. 08-1291, April 6, 2010.
2. Net-neutrality proponents assert that the Internet must be neutral regarding the contents and services demanded by Internet users. In other words, telecom operators (the owners of the network) should be forbidden to act on the traffic, for example, to block, manage, or prioritize it.
3. The Open Internet Coalition, for instance.
Finicelli, Andrea and Pagano, Patrizio and Sbracia, Massimo (2009):
Ricardian selection.
Abstract
We analyze the foundations of the relationship between trade and TFP in the Ricardian model. Under general assumptions about the autarky distributions of industry productivities, trade openness raises TFP. This is due to the selection effect of international competition --- driven by comparative advantages --- which makes "some" high- and "many" low-productivity industries exit the market. We derive a model-based measure of this effect that requires only production and trade data. For a sample of 41 countries, we find that Ricardian selection raised manufacturing TFP by 11% above the autarky level in 2005 (6% in 1985), with a neat positive time trend and large cross-country differences.
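The selection mechanism described in the abstract can be written down in the standard Eaton-Kortum framework. The sketch below uses the textbook Fréchet setup and is only an illustration; the paper's own derivation may differ in detail:

```latex
% Industry productivity z is drawn from a Fréchet distribution:
F(z) = \Pr[Z \le z] = \exp\!\left(-T z^{-\theta}\right),
\qquad T > 0,\ \theta > 1 .
% In autarky every draw is used, so average productivity is
\mathbb{E}[Z] = T^{1/\theta}\,\Gamma\!\left(1 - \tfrac{1}{\theta}\right).
% Under trade, an industry survives only if home is the least-cost
% supplier, which truncates the low-productivity tail of the
% distribution. The productivity of surviving industries therefore
% first-order stochastically dominates F, so any productivity average
% taken over survivors -- measured TFP -- rises above its autarky
% level. This is the "Ricardian selection" gain the authors quantify
% (11% for manufacturing TFP in 2005).
```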
Item Type: MPRA Paper
Original Title: Ricardian selection
English Title: Ricardian selection
Language: English
Keywords: selection effect, Eaton-Kortum model, international competition
Subjects: F - International Economics > F1 - Trade
D - Microeconomics > D2 - Production and Organizations > D24 - Production; Cost; Capital; Capital, Total Factor, and Multifactor Productivity; Capacity
O - Economic Development, Innovation, Technological Change, and Growth > O4 - Economic Growth and Aggregate Productivity
Item ID: 16950
Depositing User: Massimo Sbracia
Date Deposited: 26 Aug 2009 13:47
Last Modified: 12 Feb 2013 04:54
References:
Alcalá F., Ciccone A. (2004), Trade and Productivity, Quarterly Journal of Economics, Vol. 119, pp. 612-645.
Alvarez F., Lucas R.E.Jr. (2007), General Equilibrium Analysis of the Eaton-Kortum Model of International Trade, Journal of Monetary Economics, Vol. 54, pp. 1726-1768.
Anderson J.E., van Wincoop E. (2004), Trade Costs, Journal of Economic Literature, Vol. 42, pp. 691-751.
Armington P.S. (1969), A Theory of demand for Products Distinguished by Place of Production, IMF Staff Papers, No. 16.
Balakrishnan N., Nevzorov V.B. (2003), A Primer on Statistical Distributions, Wiley, Hoboken, New Jersey.
Bernard A.B., Eaton J., Jensen J.B., Kortum S. (2003), Plants and Productivity in International Trade, American Economic Review, Vol. 93, pp. 1268-1290.
Bernard A.B., Jensen J.B. (1999), Exceptional Exporter Performance: Cause, Effect, or Both?, Journal of International Economics, Vol. 47, pp. 1-25.
Bernard A.B., Jensen J.B., Redding S.J., Schott P.K. (2007), Firms in International Trade, Journal of Economic Perspectives, Vol. 21, pp. 105-130.
Chaney T. (2008), Distorted Gravity: The Intensive and Extensive Margins of International Trade, American Economic Review, Vol. 98, pp. 1707-1721.
Chari V.V., Restuccia D., Urrutia C. (2005), On-the-Job Training, Limited Commitment, and Firing Costs, unpublished, University of Minnesota.
Conway P., Nicoletti G. (2006), Product Market Regulation in the Non-Manufacturing Sectors of OECD Countries: Measurement and Highlights, OECD Economics Department Working Papers, No. 530.
Demidova S., Rodríguez-Clare A. (2009), Trade Policy under Firm-Level Heterogeneity in a Small Economy, Journal of International Economics, forthcoming.
Dekle R., Eaton J., Kortum S. (2007), Unbalanced Trade, American Economic Review, Vol. 97, pp. 351-355.
Dollar D., Kraay A. (2003), Institutions, trade, and growth, Journal of Monetary Economics, Vol. 50, pages 133-162.
Dornbusch R., Fischer S., Samuelson P. A. (1977), Comparative Advantage, Trade, and Payments in a Ricardian Model with a Continuum of Goods, American Economic Review, Vol. 67, pp. 823-839.
Eaton J., Kortum S. (2002), Technology, Geography, and Trade, Econometrica, Vol. 70, pp. 1741-1779.
Eaton J., Kortum S. (2009), Technology in the Global Economy: A Framework for Quantitative Analysis, Princeton University Press, forthcoming (title tentative).
Frankel J.A., Romer D. (1999), Does Trade Cause Growth?, American Economic Review, Vol. 89, pp. 379-399.
Finicelli A., Pagano P., Sbracia M. (2009a), Trade-revealed TFP, unpublished, Bank of Italy.
Finicelli A., Pagano P., Sbracia M. (2009b), The Eaton-Kortum Model: Empirical Issues and Extensions, unpublished, Bank of Italy.
Gumbel E.J. (1960), Bivariate Exponential Distributions, Journal of the American Statistical Association, Vol. 55, pp. 698-707.
Hall R., Jones C. I. (1999), Why Do Some Countries Produce So Much More Output Per Worker Than Others?, Quarterly Journal of Economics, Vol. 114, pp. 83-116.
Jones C.I. (2005), The Shape of Production Functions and the Direction of Technical Change, Quarterly Journal of Economics, Vol. 120, pp. 517-549.
Kortum S. (1997), Research, Patenting, and Technological Change, Econometrica, Vol. 65, pp. 1385-1419.
Lagos R. (2006), A Model of TFP, Review of Economic Studies, Vol. 73, pp. 983-1007.
Mardia K.V. (1970), Families of Bivariate Distributions, Lubrecht & Cramer (Griffin's Statistical Monographs, No 27).
Melitz M.J. (2003), The Impact of Trade on Intra-Industry Reallocations and Aggregate Industry Productivity, Econometrica, Vol. 71, pp. 1695-1725.
Melitz M.J., Ottaviano G.I.P. (2008), Market Size, Trade, and Productivity, Review of Economic Studies, Vol. 75, pp. 295-316.
Pavcnik N. (2002), Trade Liberalization, Exit, and Productivity Improvements: Evidence from Chilean Plants, Review of Economic Studies, Vol. 69, pp. 245-76.
Rodríguez-Clare A. (2007), Trade, Diffusion and the Gains from Openness, NBER Working Papers, No. 13662, National Bureau of Economic Research.
Tawn J.A. (1990), Modelling Multivariate Extreme Value Distributions, Biometrika, Vol. 77, pp. 245-253.
Waugh M.E. (2008), International Trade and Income Differences, unpublished, University of Iowa.
URI: https://mpra.ub.uni-muenchen.de/id/eprint/16950
In September 1979, 3,000 Senegalese workers demonstrated in Dakar against the decision of the nationalised company, Bud-Senegal, to close down, leaving them unemployed. Bud’s Senegalese history is not long. It operated first as a private enterprise from 1971-1976, then as a nationalised company from 1976-1979. Still, the company’s brief fling in Senegal illustrates the nest of problems foreign agribusiness can create for a Third World country trying to develop through outside investment.
The company’s founder, Lester V. ‘Bud’ Antle, started his lettuce packing and shipping operation in California after the Second World War. Using state-of-the-art technology, Bud became the world’s largest private producer of ‘Iceberg’ lettuce — the crunchy, bland salad vegetable most North Americans think of when they hear the word lettuce.
By 1978, the company was a multinational operation selling about $80 million worth of vegetables a year. Seven years earlier, Fritz Marschall, an executive with one of Bud’s European affiliates, had noted on a visit to Senegal the similarity of the West African country’s climate to California’s. The question arose: What was to stop Bud from setting up another California-style operation? The restaurants and dinner tables of relatively affluent Europeans, with their fierce appetite for fresh fruit and vegetables during the long winter months, would provide a ready market.
In addition to climate, there were a few other good reasons to set up shop in Africa. The distance between Senegalese ports and Europe’s major cities was not great by refrigerated cargo ship. And of course labour would be cheap. The World Bank was willing to help finance the project and the Senegalese government was also very helpful to the enterprise. In addition to holding 48 per cent of the stock, Senegal agreed to a moratorium on taxes and import-export duties for ten years, to charge very low land rental fees and to supply water for irrigation. The government also very kindly expropriated 800 hectares of land from villagers for one of Bud’s projects.
Operating through its Dutch affiliate, the company set up two different types of farming concerns in Senegal. One was a plantation type using the expropriated land and the other was based on local small farmers actually growing the crops.
The plantations were initially established on a 450 hectare site, 38 kilometres from Dakar, convenient to the port and to the airport. This was a high-technology operation using drip-irrigation to produce melons, green beans, green peppers and tomatoes. Once shipped and air freighted to Amsterdam, Brussels, Paris and Stockholm the produce could be sold by the company’s marketing agencies.
The drip-irrigation techniques which Bud was relying on had proven successful in Israel and other arid areas. Drip-irrigation involves a system of small tubes hooked up to pipes connected to a main water supply. In areas where water is scarce this has the advantage of giving the moisture directly to the plants. Fertilizers as well as water can be transmitted to the root systems by the drip method.
The system is very expensive and is also capital intensive, providing little employment for local people. The long-term effects of drip-irrigation were not studied beforehand and the whole operation was essentially competing with other parts of the country for scarce fresh water.
Bud cleared the land of the huge Baobab trees that picturesquely dot the West African landscape, resembling upside down grey carrots. The trees, however, are more than another picturesque ingredient to the local landscape. They protect the soil from erosion while providing the local inhabitants with an edible fruit and raw material for making houses, ropes and other household goods. Two and sometimes three large Caterpillar tractors were required to remove a single one of the large, deeply rooted trees. An inadequate crop rotation on the company’s land further impoverished the soil.
Bud’s second project, Senegold, made use of another natural curiosity called niayes. These are depressions near Senegal’s coast where ground-water is close to the surface. This land was to be farmed by small holders. Their inputs (fertilizer, seeds, pesticides, and advice) would come from the company, which would also be their only marketing outlet. If things did not work out, the farmers would be the principal losers, since the company’s investment was minimal.
Furthermore, Bud’s farmers would be competing with other local producers. If the former were successful they might drive their neighbours out of business. On the other hand, if farmers not involved in the schemes produced their crops more cheaply than those assisted by Bud, the participants in Senegold would be out of luck. In either case one group of farmers would wind up with the short end of the stick.
Senegal has suffered from food shortages in recent years. And Bud’s agribusiness activities were not doing much to provide food for the local population. It was the food preferences of Europeans and price advantages that determined what was grown. When green bean prices in Europe dropped below the cost of growing and shipping Senegalese beans, the company destroyed the harvest.
As one Bud official explained, 'since the Senegalese are not familiar with green beans and don’t eat them we had to destroy them.’ Furthermore, from May until December the European tariff system makes it unprofitable to export vegetables. The Senegalese land could have been left fallow or local people might have used it to grow food for themselves. Bud decided the best use for the land would be to grow food for livestock.
By 1976, Bud’s projects had run into financial difficulties. The company refused to invest any more money in Senegal and the Senegalese government became the majority stockholder with 61 per cent of the shares and nominal control of the corporation. There were charges the company had written off profits as losses by selling to its own marketing arm at exceptionally high prices, then had claimed it could not afford to maintain the operations and had allowed the Senegalese government to buy more shares out of government coffers.
The Senegalese government was left with eroded soil and imported machinery which could not be adequately maintained in Senegal. The state could continue growing vegetables but transportation and marketing networks were completely out of its hands. Transportation had been a persistent problem and there seemed no way out of this if the produce was to be sold in Europe.
In 1979 alone, 600 tons of peppers were lost due to transport problems. Finally, the government brought the operation to a total halt.
The International Finance Corporation, part of the World Bank, identified ‘management weaknesses’ and ‘high personnel costs’ among Bud’s problems. However, this mismanagement and an alleged loss of $27 million did not prevent the giant agribusiness company Castle and Cooke from acquiring Bud-Antle in 1978. Castle and Cooke markets tuna fish, pineapple, bananas, mushrooms, and other products under various brand names. They are also in the real estate and weapons business. This means that in the future Bud will have even more resources to tackle the international scene. There is no reason to believe the results will be happier than they were in Senegal.
While long-time observers of North Korea are more than familiar with the regime’s appalling record on human rights, the report released earlier this week by the U.N. Commission of Inquiry on Human Rights in North Korea makes a critically important contribution by documenting and raising public awareness of these issues.
Based on extensive testimony by survivors, witnesses, and experts, the report paints a picture of a country where the systematic denial of rights and freedoms does not “have any parallel in the contemporary world.” It describes a prison camp network in which hundreds of thousands of political prisoners have been killed and tormented over the years through “deliberate starvation, forced labour, executions, torture, rape, and the denial of reproductive rights through punishment, forced abortion, and infanticide.” And it illuminates a dark world in which the average citizen faces an “almost complete denial of the right to freedom of thought, conscience and religion, as well as of the rights to freedom of opinion, expression, information and association.”
Concluding that it has documented offenses entailing “crimes against humanity,” the Commission makes recommendations for holding perpetrators to account. As the United States and the international community consider these recommendations, we will also continue efforts to focus attention on the horrific human rights situation in the DPRK, including through the work of our Special Envoy for Human Rights in North Korea, Robert King. Recognizing that the plight of the North Korean people is too often crowded out of international headlines by the regime’s belligerent and vitriolic behavior, the State Department’s annual Human Rights Reports will continue to tell their story. And on the multilateral stage, the U.S. government will continue to work with our partners—including at the U.N. Human Rights Council, where the report will be presented next month—to help ensure the ongoing engagement of the international community.
In the meantime, we applaud the work of the U.N. Commission for giving survivors of North Korean abuses the opportunity to publicly tell their stories, and for shining a clear, bright light on human rights violations perpetrated by the North Korean regime.
Stephen Pomper is Senior Director for Multilateral Affairs and Human Rights on the National Security Council.
Ingredients

2 large ripe avocados
Salt to taste
4 Tablespoons Tabasco Sauce, divided
6 to 7 cups chicken broth
3 skinless, boneless chicken breast halves (about 1 lb.)
2 Tablespoons uncooked rice
1 large tomato
½ cup chopped onion
¼ cup finely-chopped cilantro
Tortilla chips

Instructions

Halve avocados and remove pit. Scoop pulp into a bowl and mash with a fork; add salt to taste and 1 Tablespoon of the Tabasco Sauce. Blend well and set aside.
Bring chicken broth to boiling in a 4-quart saucepan. Add chicken, lower heat, and cook until meat is white throughout, about 15 minutes. Remove chicken and set aside to cool.
Add rice to broth and cook until tender, about 15 minutes. Cut chicken into bite-size pieces and return to saucepan.
Just before serving, seed and chop tomato and chop onion. Stir tomato, onion, cilantro, and remaining 3 Tablespoons Tabasco Sauce into soup.
To serve, place a few tortilla chips in bottom of each bowl. Ladle soup over chips and top with a spoonful of avocado mixture.

Kashrut Instructions

FRESH HERBS

DESCRIPTION: Fresh chives, basil, cilantro, dill, mint, oregano, parsley, rosemary, sage, and thyme are often used as spices or garnishing. Please note: curly leaf parsley is very difficult to check. It is therefore recommended that only flat leaf parsley be used.

INFESTATION: Aphids, thrips, and other insects may often be found on the leaves and stems of these herbs. Insects tend to nestle in the crevices between the leaves and branches of herbs. These insects can curl up and stick to the leaf once they come in contact with water.
Vegetable spinners, power hoses, and light boxes are not always available in the home. We therefore recommend the following alternate procedure.
RECOMMENDATION: In order to determine whether a particular bunch of herbs is infested prior to washing, bang it several times over a white cloth. This is most important when checking oregano, rosemary, sage, and thyme. If only one or two insects are found, proceed with the steps below. If three or more insects are detected in a particular bunch of herbs, it should not be used.

INSPECTION: Soak herbs in a solution of cold water and vegetable wash. The proper amount of vegetable wash has been added when some bubbles are observed in the water. (In the absence of vegetable wash, several drops of concentrated unscented liquid detergent may be used. However, for health reasons, care must be taken to thoroughly rinse off the soapy solution.) Agitate the herbs in the soapy water in order to loosen the sticking excretion of the bugs. Using a heavy stream of water, thoroughly wash the soap and other foreign matter off the herbs. Check both sides of each leaf under direct light. If one or two insects are found, rewash the herbs. If any insects are found after repeating the agitation process twice, the entire bunch must be discarded.

Please note: To prepare herbs such as cilantro, dill, or parsley for use in soups, wash them thoroughly and place in a cooking bag.
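The inspection procedure above is effectively a decision rule, which can be sketched in Python. The function name, arguments, and structure are our own illustration of the rule as stated; for actual practice, follow the original instructions, not this sketch:

```python
def inspect_herbs(bang_test_insects, inspection_counts):
    """Decision rule from the kashrut instructions above (illustrative only).

    bang_test_insects: insects seen after banging the bunch over a white cloth.
    inspection_counts: insect counts from successive under-light leaf checks,
    each check following a soak-agitate-rinse cycle.
    """
    if bang_test_insects >= 3:
        return "discard"              # three or more insects: do not use the bunch
    for attempt, found in enumerate(inspection_counts):
        if found == 0:
            return "use"              # clean under direct light: herbs may be used
        if attempt >= 2:
            return "discard"          # still infested after repeating twice
    return "rewash"                   # insects seen; wash and check again
```

For example, a bunch that shows one insect on the first check but none after a rewash may be used, while one that keeps showing insects after two rewashes is discarded.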
Following a record-breaking warm Arctic winter, Arctic Ocean sea ice looks set to hit a record low maximum level. The ice hit a maximum of 14.478 million km² on the second day of March. If the ice does not grow any more this year, this will set a new record, beating the previous low maximum set last year.
Although a low maximum does not necessarily lead to a low minimum record in the late summer, it does have impacts on Arctic wildlife. New polar bear mothers emerging from a long fast in the dens where they gave birth over winter need quick access to sea ice to feed and regain their strength.
"This year is the most extreme I've ever seen," said Jon Aars, a polar bear researcher with the Norwegian Polar Institute whose field work on the Norwegian Svalbard archipelago is partly supported by WWF.
Aars expects that this spring he will find that fewer female polar bears even made it to shore to den.
"It is very bad in the more eastern parts of the archipelago, where we traditionally have had the most important denning areas. This means that it is very difficult for the bears to reach those areas," said Aars.
Aars also expects that the low ice conditions will have a negative effect on the polar bears' main prey, ringed seals, which need sea ice to give birth to their pups.
"This year marks another grim statistic in the continuing disappearance of Arctic sea ice, with major consequences for wildlife and weather in the northern hemisphere," said Samantha Smith, Leader of WWF's Global Climate and Energy Initiative. "If we needed reminding, we have it: Governments, businesses and cities must act immediately on their climate commitments from the UN negotiations in December."
Late last year, governments meeting in Paris adopted a deal that lays the foundation for long-term voluntary efforts to fight climate change. The Paris Agreement includes a long-term temperature goal of well below 2°C of warming and a reference to a 1.5°C goal, sending a strong signal that governments are committed to following the latest science.
The Paris Agreement is also the first agreement that joins all nations in a common cause on climate change based on their historic, current and future responsibilities.
"As the latest scientific findings from the Arctic show, there is no alternative to implementing climate commitments. We must stop the steady destruction of our planet's delicate ecosystems and
start building a new and renewable energy future," said Smith.
Last week, Canada and the United States agreed to several Arctic initiatives, underscoring the need to take action on both mitigating climate change and on Arctic conservation to reduce the impacts of change already being experienced.
On April 22, governments will meet at the United Nations in New York when the new global climate deal is opened for signature. This coincides with the global observance of Earth Day.
Context and Communication Styles
Direct talking style
Very clear communication
With no hidden implications
Challenges Of International Assignment
American Novelties Staffing strategy:
ethnocentric; sent an American from an American company to the foreign land
Did not provide any predeparture training
Little notice was given
Todd had no previous overseas experience
Chosen based on expertise, but possibly also because Todd had no dependents
Comparison of Cultures using Hofstede's (2001) Dimensions
Brief Movie Overview
Outsource – to obtain goods or services from an external or foreign supplier
Todd has his job outsourced to India, where he is sent to train the new call centre manager in international and interpersonal relations with Americans. During his time there he finds himself learning through experience after a series of cross-cultural circumstances.
Todd, Dave, Puro, Asha and Aunty Ji
Power Distance
Attitude towards inequality
Perceived societal equality
India scores 77, U.S.A. scores 40
Unevenly distributed hierarchy
Todd’s conversation with Puro
“Mr. Todd”
The Wall
How this played out in ‘Outsourced’:
Asha says “A girl in my position has her whole life mapped out in front of her.”
Todd promotes her to assistant manager, saying he believes that “Asha can do anything.”
Uncertainty Avoidance
Independence vs Interdependence
India-48, U.S.A.-91
“The American Dream”
Immediate vs Extended family
Todd vs Puro
Individualism
Masculinity
A masculine society is associated with assertive, competitive go-getter mentality.
US scores 62 versus India scoring 56 on the masculinity value dimension. (Hofstede)
In “Outsourced”, a clear example of American masculinity is the basic premise of the movie: the company wants to generate more revenue and therefore chooses to reduce labor expenses by outsourcing, first to India and then to China.
Time Orientation
Long-term orientation is associated with thrift, or saving, due to the focus on long-term perseverance.
Short-term orientation is associated with a feeling of security and a lack of worry about the future.
US scores 29 versus India scoring 61 on this dimension (Hofstede)
In “Outsourced”, the moment Todd sets foot in India he’s already got Dave pressuring him about the goals he needs to achieve; if he doesn’t achieve them, he will never be able to leave India.
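The four score comparisons quoted in this presentation can be pulled together in one place. A small Python sketch follows; the dictionary and gap calculation are our own, and only the scores themselves come from the slides:

```python
# Hofstede scores for the US and India as cited in this presentation.
scores = {
    "power distance":        {"us": 40, "india": 77},
    "individualism":         {"us": 91, "india": 48},
    "masculinity":           {"us": 62, "india": 56},
    "long-term orientation": {"us": 29, "india": 61},
}

# Absolute gap per dimension: where should Todd expect the most friction?
gaps = {dim: abs(s["us"] - s["india"]) for dim, s in scores.items()}
widest = max(gaps, key=gaps.get)
print(widest, gaps[widest])  # individualism 43
```

On these numbers, individualism is the widest gap and masculinity the narrowest, which matches the presentation's emphasis on Todd's independence clashing with Puro's interdependence.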
Outsourced - A Cross Cultural Analysis
Cultural perception of time is named Chronemics by Hall and can be broken down into two functions; Monochronic and Polychronic.
Monochronic – Doing one thing at a time, concentrate on the job, take commitments seriously, follow plans seriously and are low-context and need information.
Polychronic – Doing many things at once, can be easily distracted and manage interruptions well, are committed to people and human relations and are high-context.
In “Outsourced”, Todd has no clue about his future, his job security, or which country he will be in next, yet his sole focus is on getting his MPI down to a flat six because it is the most immediate and important deadline on his schedule.
Chronemics
LOW CONTEXT
Indirect talking style
Verbally implicit
Inward reaction and emotion
HIGH CONTEXT
Todd's (American)
First reaction of seeing the call centre's environment
Reaction to the call centre’s high MPI
Employees (Indian)
Decline in a very indirect way
Hide their emotions
Puro declines the request
Nobody explicitly says they are shocked
India's major religious demographic is Hinduism
-> cows are sacred animals
Todd fails to understand this when he:
a) describes the branding of baby cows to the employees in a nonchalant manner
b) desperately tries to find a beefburger in India, only to find a veggie burger
Beneath the 'Cultural Iceberg' - Religion
Lewis' Cultural Categories
Todd's U- Curve of Adjustment
did not pass
phase 3
Organisation and Non- Work Culture Novelty
Individual Perception and Relation skills
Highlighted factors affecting the degree of Todd’s Adjustment
References
Black, J.S. & M. Mendenhall (1991). The u-curve adjustment hypothesis revisited: A review and theoretical framework. Journal of International Business Studies, 22(2), 225-247.
Black, J.S., M. Mendenhall & G. Oddou (1991). Toward a Comprehensive Model of International Adjustment: An Integration of Multiple Theoretical Perspectives. Academy of Management Review, 16(2), 292-317.
Browaeys, M-J. & Price, R. (2011), Understanding Cross-Cultural Management, 2nd edition, FT Prentice Hall, pp. 123-124
Hall, E. T., Birdwhistell, R. L., Bock, B., Bohannan, P., Diebold Jr, A. R., Durbin, M. & Vayda, A. P. (1968). Proxemics [and Comments and Replies]. Current anthropology, 83-108.
Hofstede, G. (2010). India - Geert Hofstede. [online] Geert-hofstede.com. Available at: http://geert-hofstede.com/india.html [Accessed 29 May 2014].
Javidan, M., Stahl, G., Brodbeck, F. and Wilderom, C. (2005). Cross-border transfer of knowledge: Cultural lessons from Project GLOBE. The Academy of Management Executive, 19(2), pp. 59-76.
Outsourced (2006), Film. Directed by John Jeffcoat. USA: ShadowCatcher Entertainment
Conclusion
Thank you for your time, we invite any questions
Culture Shock
Todd experienced culture shock in his workplace, relationship, food, environment.
Workplace – Frustration and tensions because the way he did things were different from how Indians did things.
Relationships – On the train when the little boy sat on his thigh. Also when Asha told him she has been engaged to be married since she was 4.
Food – His stomach could not handle the food Aunt Ji made. Always had to use the toilet and he was turning pale.
Environment – When he came out of the airport, the crowd and the taxi drivers. - Also when he tried to get on the train his own way.
Repatriation
Todd (the expatriate) could not feel at home in his own home anymore. He puts the “third eye” on a picture.
Feels loneliness - No friends or family - Calls his mother
Changed as a person, could not get back to his old habits. For example, he now prefers a lot of sugar in his coffee.
Aims
Identify theories highlighted in Outsourced
Critically analyse the theories discussed in class
Amalgamate these points and apply them in a cross cultural context
Lower adjustment at phase 1
Hofstede's Cultural Dimensions
Verbal Communication Styles
Non-verbal Communication
Staffing strategies & decision
Adjustment
Culture Shock
Repatriation
Conclusion
Personal space
Rankings in organisations
The Wall
Overpopulation – city, trains
Touch
Need for respect
Use of left hand
Proxemics and Haptics
/ gives good excuses
Understanding power distance is imperative in effective cultural integration
Perception of time affects understanding of communication styles & business approach
Americans with a low context culture are direct and task oriented while Indians with a high context culture are indirect and people oriented
Lack of training and role ambiguity resulted in Todd's low starting adjustment
Outsourced shows the difference in an expatriate's attitude at different stages of the U-curve, particularly in culture shock, adjustment and repatriation
1205268 1300897 1200454 1104009 1301972
USGS Scientific Investigations Report 2007-5160
Prepared in cooperation with the National Park Service
By Kenneth E. Hyer
U.S. Geological Survey Scientific Investigations Report 2007-5160, 18 pages (Published online January 2008)
This report is available in PDF format: SIR 2007-5160 (1.3 MB)
Although fecal contamination of streams is a problem of national scope, few investigations have been directed at relatively pristine streams in forested basins in national parks. With approximately 1.8 million visitors annually, Shenandoah National Park in Virginia is subject to extensive recreational use. The effects of these visitors and their recreational activities on fecal indicator bacteria levels in the streams are poorly understood and of concern for Shenandoah National Park managers.
During 2005 and 2006, streams and springs in Shenandoah National Park were sampled for Escherichia coli (E. coli) concentrations. The first study objective was to evaluate the effects of recreational activities on E. coli concentrations in selected streams. Of the 20 streams that were selected, 14 were in basins with extensive recreational activity, and 6 were in control basins where minimal recreational activities occurred. Water-quality sampling was conducted during low-flow conditions during the relatively warm months, as this is when outdoor recreation and bacterial survivorship are greatest. Although most sampling was conducted during low-flow conditions, approximately three stormflow samples were collected from each stream. The second study objective was to evaluate E. coli levels in backcountry drinking-water supplies throughout Shenandoah National Park. Nineteen drinking-water supplies (springs and streams) were sampled two to six times each by Shenandoah National Park staff and analyzed by the U.S. Geological Survey for this purpose.
The water-quality sampling results indicated relatively low E. coli concentrations during low-flow conditions, and no statistically significant increase in E. coli concentrations was observed in the recreational streams relative to the control streams. These results indicate that during low-flow conditions, recreational activities had no significant effect on E. coli concentrations. During stormflow conditions, E. coli concentrations increased by nearly a factor of 10 in both basin types, and the Virginia instantaneous water-quality standard for E. coli (235 colonies per 100 milliliters) frequently was exceeded.
The sampling results from drinking-water supplies throughout Shenandoah National Park indicated relatively low E. coli concentrations in all springs that were sampled. Several of the streams that were sampled had slightly higher E. coli concentrations relative to the springs, but no E. coli concentrations exceeded the instantaneous water-quality standard. Although E. coli concentrations in all the drinking-water supplies were relatively low, Shenandoah National Park management continues to stress that all hikers must treat drinking water from all streams and springs prior to consumption.
After determining that recreational activities in Shenandoah National Park did not have a statistically significant effect on low-flow E. coli concentrations, an additional concern was addressed regarding the quality of the water releases from the wastewater-treatment plants in the park. Sampling of three wastewater-treatment plant outfalls was conducted in 2006 to evaluate their effects on water quality. Samples were analyzed for E. coli and a collection of wastewater organic compounds that may be endocrine disruptors. Relatively elevated E. coli concentrations were observed in 2 of the 3 samples, and between 9 and 13 wastewater organic compounds were detected in the samples, including 3 known and 5 suspected endocrine-disrupting compounds.
Abstract
Introduction
Purpose and Scope
Description of the Study Area
Study Design and Sample Collection
Selection and Sampling of Sites for the Evaluation of Recreational Activities on E. coli Concentrations
Selection and Sampling of Drinking-Water Supply Sites
Sampling of Wastewater-Treatment Plants
Analytical Technique for E. coli
Evaluating the Effect of Recreational Activities on E. coli Concentrations
Low-Flow Conditions
Stormflow Conditions
Pinefield Hut Samples
E. coli Concentrations in Backcountry Drinking-Water Supplies
Water Quality of Wastewater-Treatment Plant Releases
Summary and Conclusions
Acknowledgments
Literature Cited
Suggested citation: Hyer, K.E., 2007, Escherichia coli concentrations in recreational streams and backcountry drinking-water supplies in Shenandoah National Park, Virginia, 2005–2006: U.S. Geological Survey Scientific Investigations Report 2007–5160, 18 p. (available online at https://pubs.water.usgs.gov/sir2007-5160)
For more information, please contact Kenneth E. Hyer.
Jonathan Lewis’ recent blog post and accompanying video on fundraising hit the nail on the head: “The best fundraisers don’t fundraise. Instead, they teach people to take realistic – and unrealistic! – risks in the service of a better world.”
“Teaching” and “risk-taking” in service of a better world.
Maybe if we used that language more often we would have more great people getting into fundraising, more people in fundraising with the right mindset and orientation, and more funders taking risk.
I’m with Jonathan all the way until the closing paragraph, where he says, “Infuriating indeed is the patronizing ‘don’t take it personally’…If you believe in your mission and if you are giving it your all, then it’s always personal. Every committed social entrepreneur takes organizational rejection personally!”
As I told Jonathan, I don’t think this is quite right. Of course I feel it personally when I am rejected, when someone doesn’t share my passion or, worse, when my explanation of what we are trying to do at Acumen fails to capture the imagination of someone who I know is aligned with my passion and vision (in which case, shame on me). I don’t think I would be human if I didn’t feel it; indeed, if the day comes when I stop feeling it I’d have to question my own passion and sense of commitment.
But when I let the rejection feel personal, and when I see other fundraisers do the same thing, I think that’s a big mistake.
The person I’m meeting with came into the meeting with a worldview, with ideas, with momentum in a certain direction…and so did I. I feel like my job is to listen, explore, connect, tease out alignment, and then to inspire action (aside: the “inspire action” bit is really important and not easy to get right.)
But when that alignment isn’t there and I end up feeling personally rejected then I believe I’m misdiagnosing what just went on in that meeting.
When someone says no, it could be an execution error on my part: maybe I handled the meeting poorly, didn’t listen enough, was off my game, didn’t have a real and compelling ask, didn’t tell compelling stories, or didn’t articulate how Acumen could help the funder realize their vision. Hopefully, after fundraising for nearly seven years I make fewer and fewer of those mistakes, but I’m sure I do make them plenty. When this is what’s gone wrong, I need to use a rejection to figure out how I can get better, how I can hone my craft, how I can turn “no’s” into “not now’s.” Taking these sorts of rejections personally places blame in the wrong place: I didn’t do my job well, plain and simple.
And when what I’m fundraising for doesn’t inspire a funder or align with their vision, then something entirely different is at play. That’s a question of worldview, a question of where they are in their journey. It’s about lack of alignment of vision and values and aspiration. What they’re looking for is not what I’m selling.
(Note that it’s easy to see, when I’m selling database software or consumer copiers, the difference between being turned down because the person isn’t buying anything right now, buys from my competitor, or decides to buy productivity software and a high-end color printer instead. In philanthropy what we mostly see is the person giving or not giving to us, so everything gets much more muddled.)
Almost always, it’s not personal. I have not been rejected. The moment I take rejection too personally is the moment I lose forward momentum, the moment I begin to question myself at a more fundamental level, the moment I forget that real long-term partnerships happen because of a deep sense of alignment, not because someone chose to buy what I’m selling.
This post originally ran a year ago. I dusted it off because I was looking up medical terms online last night and encountered some photos that brought the old squeamishness back. -Jackie
Patient empowerment is all the rage lately. While I distrust the way the “e” word sometimes verges on ideology, I’m all for learning what’s happening when we get that front row seat to medicine thanks to cancer or another big diagnosis.
But how best to learn if you tend to be medically squeamish? My previous patient experience was limited to an annual visit, with a handful of garden-variety illnesses and the inevitable screening tests required once you hit your 40s and 50s. I’ve never had a problem with those tests, or with needles, but once I learned my breasts were going to be the focus of a cancer adventure I felt a bit queasy.
The thing is, I can’t even stand nipple rings. Back when my husband Bruce and I used to take his Harley to the big bike rally in Sturgis, S.D., I averted my eyes a lot. I found myself doing the same thing now as I loaded up on breast cancer books. How do those DCIS cells act? Sure. An illustration of a nipple floating off into space during a mastectomy? Not so much.
I wanted to know what to expect without getting too much detail, if that makes any sense. So while I learned enough to know I wanted implants instead of tissue replacement surgery for reconstruction, I didn’t read about surgery details, and I couldn’t look at before and after reconstruction photos available online.
I had gone through the mastectomy and first-stage reconstruction before I became curious about things like how my surgeon was able to balance tissue removal and skin preservation during the mastectomy, or how my plastic surgeon was able to recreate a nipple.
Believe it or not, I actually watched him do it, since it only required local anesthetic. If you had asked me five years ago if I wanted to watch myself getting a nipple built, I probably would have yakked on your shoes. But this was my fifth surgery in nine months, so I had gotten used to it. And I’m really glad I watched because it was fascinating.
But that’s me, and it happened over time. You may want every last detail, or you may prefer letting the experience wash over you. And there’s nothing wrong with that. I would recommend learning enough to be able to make an informed treatment choice, and giving yourself enough time to make that choice. Whether you ever learn what they do with those scalpels or watch them do it is totally up to you.
For the record, nipple rings still gross me out.
When Lieza Dagher SM ’04, MCP ’04 was 21, she lived in Italy across the street from a two-story market where she could purchase fresh, local food directly from farmers.
Years later, she realized it was not as easy to find the same quality of food.
“I wanted to have the most nutrient-dense and flavorful food for my family. I saw that this was possible as a young woman living abroad,” says Dagher, the mother of two young children. “As a mom, I source as much as I can from my community farmers and food artisans.”
Dagher’s experience in Italy was one spark in a lifelong passion for food, which she now pursues professionally as the Director of the Kitchen at the Boston Public Market, the education arm of the Boston Public Market that offers year-round programming focused on the intersection between food and agriculture.
“The Kitchen is a platform that teaches people about the benefits of sourcing from the local food system,” she says. “There are health benefits to your body, economic and cultural benefits for your community, and of course the benefits to the health of the planet.”
The Kitchen is located within the Boston Public Market, an indoor, year-round marketplace adjacent to Boston’s Faneuil Hall that houses about 40 local farmers, fishers, and food entrepreneurs who provide New England-sourced groceries and agricultural products. The Kitchen partners with these businesses and other mission-similar organizations to create hands-on cooking classes, lectures, and events that help to connect visitors back to the land.
“At the Kitchen you can learn about where your food is sourced, experiment with the different varieties of products yielded right here in Massachusetts, and meet the people behind these delicious ingredients,” Dagher says. “We’re showcasing the sea-change that is happening in New England’s vibrant local food system and helping to restore the craft of cooking in our communities.”
The Kitchen hosts free and ticketed hands-on classes and events, multiple times per week, that cover topics like seasonal cooking, menu planning on a budget, local wine and spirits tastings, and a variety of family and children’s cooking classes.
Dagher spoke to Slice of MIT at the 2016 HUBweek festival, where she was part of the Kitchen-hosted session, “Making Dinner Your Family’s Favorite Time Together.” Dagher was one of more than 40 MIT alumni who presented at the festival.
“MIT helped me to consider all the forces at play in shaping a community,” Dagher, an alumna of MIT’s Department of Urban Studies and Planning, says. “In my work to help transform our Massachusetts food system, and increase wellness in our local communities, I use the lessons I learned at MIT every day.”
Alternative energy plays have been around for decades, including Ballard Power Systems Inc. (NASDAQ/BLDP), a maker of hydrogen fuel cells that went public in 1993. The stock traded as high as $100.00 as a speculative investment opportunity in early 2000 but was unable to break into the automotive market. It is currently drifting at the $4.00 level.
However, what Ballard was hoping for is now materializing for battery-powered automaker Tesla Motors, Inc. (NASDAQ/TSLA), which has built a superhighway of charging stations across the U.S. and is expanding into Europe and China. Tesla is a great story and a decent possible investment opportunity.
Yet it’s not only vehicles that demand alternative sources of energy; we also see demand coming from numerous applications and, in some cases, manufacturing facilities.
The demand for alternative energy can be based on wind, solar, or water and has led to the development of a strong solar industry as an investment opportunity.
A small-cap that has been exciting the stock market while producing sizzling gains for speculators has been Plug Power Inc. (NASDAQ/PLUG), a developer of hydrogen fuel cells that power forklifts and other devices. The stock traded as low as $0.32 over the past 52 weeks, surging to $6.37 on Thursday morning after reporting strong results. Plug Power has been on my technical analysis screens for some time, as the stock consistently breaks higher. If interested, I would suggest investors look to this stock on weakness for a volatile speculative investment opportunity.
Chart courtesy of www.StockCharts.com
Another possible investment opportunity that may interest investors in the alternative energy space is FuelCell Energy, Inc. (NASDAQ/FCEL), which has a market cap of $616 million. The stock has traded as low as $1.12 and as high as $4.74 over the past 52 weeks. The current price is halved at $2.37, so there’s a potential aggressive investment opportunity here.
Chart courtesy of www.StockCharts.com
FuelCell is a developer of fuel cell solutions by way of its stationary “Direct FuelCell” power plants, built to deliver ultra-clean, efficient, and reliable green power. The process involves harnessing the energy of renewable biogas from wastewater treatment and food processing.
Clients are varied and include commercial, industrial, government, and utility businesses. Sectors served include the food and beverage, manufacturing, hospital and prison, college and university, hospitality, utilities, and wastewater treatment areas.
FuelCell says its energy produced is up to two times more efficient than fossil fuel plants. The company’s plants produce output ranging from 300 kilowatts (kW) to 2.8 megawatts (MW) and are expandable to more than 50 MW. There are currently more than 50 plants worldwide that have generated more than 300 million kilowatt hours (kWh) of electricity.
FuelCell is expanding in Southeast Asia, including South Korea, Indonesia, Thailand, Malaysia, and Singapore, which the company sees as an investment opportunity.
Revenues are estimated to rise 7.2% to $201.16 million in FY14 followed by 22.6% to an estimated $246.54 million in FY15, according to Thomson Financial.
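As a quick back-of-the-envelope check on those projections (an illustrative sketch using only the Thomson Financial estimates quoted above), the implied FY13 base and the FY15 figure can be computed directly:

```python
fy14_revenue = 201.16  # $ millions, reached after a 7.2% rise
fy14_growth = 0.072
fy15_growth = 0.226    # estimated rise for FY15

# Implied FY13 base revenue and projected FY15 revenue.
fy13_revenue = fy14_revenue / (1 + fy14_growth)
fy15_revenue = fy14_revenue * (1 + fy15_growth)

print(round(fy13_revenue, 2))  # 187.65
print(round(fy15_revenue, 2))  # 246.62, in line with the $246.54M estimate
```

The small gap between 246.62 and the quoted 246.54 simply reflects rounding in the published growth percentages.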
I suggest investors keep an eye on a company like FuelCell, as this volatile investment opportunity has tremendous upside if it can deliver results.
This article Alternative Energy the Next Big Play? was originally published at Daily Gains Letter
This article was syndicated from Business 2 Community: Alternative Energy the Next Big Play?
More cool water is being dumped on the once-hot real estate market in St. John’s, according to data released by two housing groups.
The Canadian Mortgage and Housing Corporation released a survey showing construction activity was down in March in the capital city, after a small recovery in the latter half of 2015.
Home prices have also been flat throughout the last year in St. John's, according to a second survey by Royal LePage.
In its release, the CMHC called construction in the St. John's area "weak" in March.
"A lack of economic growth, mostly due to low oil prices, continued to restrain new home construction activity in March," said CMHC analyst Chris Janes in a press release.
Construction started on only seven houses in St. John's last month, compared to 15 during the same period one year ago.
The CMHC's "trend measurement" — a longer-term measurement of activity — also shows that the housing market is slowing down.
As well, 2016 has seen a big drop in construction activity for "multiple housing" units such as apartment buildings and row houses.
In the first three months of 2016, only 16 units were started in these buildings, compared to 127 in the first three months of last year.
The Royal LePage survey of home prices in the area shows more dark clouds hovering.
Prices for bungalows and two-storey homes were relatively flat, both increasing less than one per cent over the last year.
The price for condominiums plummeted, falling almost 10 per cent in the same time frame.
Across Canada, the average price of a home increased by eight per cent year-over-year.
Glenn Larkin, a realtor in St. John's with Royal LePage, said that the slow market did have an upside.
"First-time home buyers in the city are benefiting from a buyer's market, which is providing them with more options and the opportunity to negotiate," he said in a press release.
Royal LePage said its survey is showing "remarkable" differences between regional housing markets.
The group says home prices in Alberta and Newfoundland are just starting to adjust to the oil downturn.
**This is the first in a four-part weekly series**
This is yet another one of the resources that I received from my boot camp a little while ago. Since I will be working with children with autism, this will definitely come in handy. This is a list of 57 questions that you should ask yourself when addressing behavior problems. I have decided to do this in a four part series so that the posts won’t be too long. Here is Part 1, numbers 1-21, which deal with the layout of furniture and materials.
When a problem occurs, consider the following:
Physical structure increases the likelihood of success during learning and free times. Limits that are physically clear to the individual may be an initial step towards self-control.
Ask:
- Is there a clearly defined space where the individual keeps his/her belongings?
- Is furniture spaced sufficiently for movement?
- Are work areas located in the least distractible setting?
- Are work areas spaced sufficiently to discourage interactions with others during work times?
- Does the individual need to stay in a relatively closed space to reduce wandering off?
- Is the furniture appropriately sized for the individual?
- Is the furniture sturdy?
- Can other furniture (e.g., dividers, bookcases, etc.) be used to cut down distractions for individuals with difficulties focusing on their work?
- Besides furniture, are there other means of defining separate spaces in the room (e.g., tape on the floor, rugs, etc.)?
- Are windows, doors, cabinets, and other tempting materials less available or less accessible to distractible individuals?
- Are individual work areas clearly differentiated from group work areas?
- Can the staff see all or the majority of work areas in the room?
- Are group areas and independent work areas located in close enough proximity that the staff can monitor both?
- Are there clear means of transit between areas (i.e., while the individual is moving between work areas, is there an opportunity for him to distract another individual)?
- Is the individual distracted by available materials when moving between work areas?
- Are there too many work materials in the work area? Do these act as a disorganizing influence?
- Are work materials in a centralized area?
- Are the individual’s work materials easily accessible to him/her?
- Are materials which the individual is not allowed to use in a different place from those he/she can use?
- Is the leisure or break area situated where little or no supervision is necessary (i.e., away from exits, dangerous materials, or staff’s materials)?
- Is the free time area clearly defined?
- Do all the areas in the room have a simple label (possibly paired with a visual symbol) so that individuals know where to go (e.g., “Go to the blue table.”)?
- Is lighting sufficient in the work area?
- Is the temperature easily controlled?
- Is noise level a problem?

In summary: Does the layout of furniture and materials assist in the development of behaviors and skills which we want the individual to have?

**Watch for Part 2 next week, October 8, which deals with schedules.**

Source: Love, S. (2004). Professional Seminar: Behavior management for individuals with autism. Asheville TEACCH Center.
A new USA Today/Gallup poll out this week showed that more Americans do not support the health care bill that is now law:
50% of respondents said passing the bill was a “bad thing,” while 47% said it was a “good thing.”
Well, seems cut and dry, then, that the country would prefer they were not guaranteed health insurance — that is until you look a bit deeper at the survey details NOT broadcast.
First, the margin of error in this survey is enough to make the comparison one of balanced 50/50 support or non-support. Second — and most illuminating — is the fact that the survey only asked about whether one thought the new law “good” or bad,” while ignoring what THAT means.
For instance what if I were surveyed? I know that a Single-Payer model would be the most efficient and effective, and I also know that a Public Option would have forced the Private Insurance Industry to play by the spirit of the law… So, I might be inclined to say that the new health care law is “bad” since it contains neither of the options I know would make it better.
BUT… I still am thrilled the bill became law and know that it was the best that could be obtained given the political realities of confronting the obstructionist Party-Of-No!
OK, so here’s the deal: two other studies have shown that 13% of the country believe it is not the optimal plan (or is “bad”) yet… are pleased to see the new health care law enacted.
So… take the 13% away from the 50% within the USA Today/Gallup poll, and the results are actually:
37% of Americans do not support the new health care law, while 60% of Americans do support that we have at least the new health care law enacted, with 13% of the nation preferring that the law would have included single-payer or a Public Option.
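The reallocation argued for above is simple arithmetic; here is a quick sketch using the figures quoted in this post (it follows the post's own reasoning of moving the 13% who said "bad" but still support enactment out of the opposition column):

```python
said_bad = 50            # % who called passing the bill a "bad thing"
said_good = 47           # % who called it a "good thing"
bad_but_supportive = 13  # % who say "bad" because it doesn't go far enough

genuine_opposition = said_bad - bad_but_supportive
support_at_least_this = said_good + bad_but_supportive

print(genuine_opposition, support_at_least_this)  # 37 60
```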
Born in Kansas, educated in Texas and Oklahoma, but Louisiana has become my home.
I have a degree in Petroleum Engineering from the University of Oklahoma and have worked for upstream oil and gas companies, large, medium and small, during college and since graduation in 1978. Currently, I am Operations Manager for a small exploration and production company, with operations in the shallow waters of the Gulf of Mexico, as well as the inland waters and onshore South Louisiana.
For most of my career, my technical specialty was reservoir engineering, the branch of petroleum engineering that focuses on describing and quantifying what’s going on in the petroleum reservoir. Reservoir engineers are usually the guys who are most in tune with the economics of a venture, because the single most important piece of information about the value of an oil and gas well (or prospect) is the reserve estimate. We work a lot with the concepts of risk and probability and the time value of money. It helps to have a healthy appreciation for the science of geology.
In terms of credentials, I am a registered Professional Engineer in the State of Oklahoma. That would qualify me to testify as an expert in my field. I am also a member of the Society of Petroleum Engineers and the Society of Petroleum Evaluation Engineers.
The Obama Administration’s energy policies are misinformed, malformed and wrongheaded. They will damage our country and our collective prosperity. My blogging is inspired out of frustration with those policies and a commitment as a patriot and a citizen to defeat them.
Contact me at
s m a l e y 1 3 0 \ a t / g m a i l . c o m
but remove the blanks.
E-waste disposal is becoming a serious threat due to rising health and environmental concerns. Therefore, recycling is not just about eliminating harmful effects; it is also turning into a business proposition that any company can consider.
In general, electronic waste includes damaged or unwanted electronic and electrical devices and components, such as computers, printers, monitors, mobile phones, batteries, televisions, and many others. Previously, all this e-waste ended up in landfills, was incinerated, or was dumped into water bodies such as oceans. Since these options were far from the best solutions, they raised serious health and environmental issues.
At this juncture, recycling e-waste becomes the better option, as it not only helps save the environment, but also helps conserve natural resources by minimizing the greenhouse gas emissions and water and air pollution that occur while fabricating virgin materials.
In a recent EPA survey, it was revealed that over 3,500 US homes can be supplied with electricity if one million laptops are recycled in a year.
Getting into some facts, an average PC contains many hazardous waste materials, and it is extremely important to dispose of such materials, including plastic, ferrous and non-ferrous metals, glass, and electronic boards, in a proper manner. Since computer recycling has become mandatory in most parts of the world, many big organizations today prefer appointing professionals who can efficiently deal with recycling heavy metals like the lead in circuit boards.
DNF Recycling Services, a business unit of Dynamic Network Factory, runs an e-waste disposal program. It offers a one-stop solution for recycling your old equipment through a hassle-free five-step process.
1. DNF Recycling Services will arrange for pickup and transportation of the equipment to their office premises. Typically there is no fee associated with the pickup and removal of equipment; however, travel is currently limited to the state of California. If travel outside of California is required, please call in advance at 510.962.5012 and ask for details.
2. After the discarded electronic equipment is picked up and transported to their facility, a detailed inventory and assessment of the equipment is done, and a report is generated and provided to the customer at no cost.
3. Each unit is then carefully disassembled, and parts are sorted based on their recyclable properties and whether or not they contain hazardous materials. This process is performed in full compliance with the US Environmental Protection Agency (EPA) and clients.
4. If the client provides hard disks and wants them disposed of responsibly, DNF will erase the data from the drives using one of three methods. The first method is to write 0s and 1s to the drive, which permanently erases traces of any data. The second method is drilling holes into the hard drive platters in a manner that prevents further access to the platters. The third method involves sanding the platters using an industrial sander; this is the most thorough of the three methods for ensuring the data cannot be recovered. All this is done for a nominal fee.
5. Finally, DNF Recycling offers its customers a credit toward buying new components or electronics. The credit will depend on many factors aligned with DNF's benchmarks.
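The overwrite-based erasure method described above can be illustrated at the file level. This is a hypothetical sketch, not DNF's actual tooling: wiping a real drive means overwriting the raw block device, and flash media may remap sectors, which is why physical destruction remains the surest option.

```python
import os

def overwrite_file(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents in place with zero bytes (illustrative only)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to stable storage

# Demo on a throwaway file.
with open("secret.bin", "wb") as f:
    f.write(b"confidential data")
overwrite_file("secret.bin")
with open("secret.bin", "rb") as f:
    print(f.read() == b"\x00" * 17)  # True: original bytes are gone
```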
To learn more about what your e-waste can earn for you, call 510.962.5012 or visit the DNF Recycling web page.
Toxoplasmosis
Toxoplasmosis is a type of food poisoning caused by the parasite Toxoplasma gondii.
How To Prevent It:
- Don't eat raw or undercooked meat.
- Don't use utensils that were in contact with raw meat on cooked meat.
- Don't use the same cutting board for raw meats and cooked meats.
- Don't come into contact with cat feces or contaminated water.
- Don't receive an organ transplant from an infected donor.
Toxoplasmosis comes from being in contact with cat feces, or with someone else who has been. It is very dangerous for pregnant women to have this parasite.
Some of the common symptoms of Toxoplasmosis are:
- Headaches
- Fevers
- Muscle pain
- Sore throat
- Confusion
- Blurred vision
- Seizures
Another video describing Toxoplasmosis.
If a woman is infected before pregnancy and later becomes pregnant, the unborn child is protected by the mother's immunity. But if she is infected while pregnant, the unborn child could become infected. The unborn child may not show signs of infection until later in life, and it could result in blindness or mental disability.
This is what Toxoplasmosis looks like under a microscope.
The darker the state, the more cases of Toxoplasmosis that have occurred in the state since 1865.
Need A Couples Counseling Expert?
People love to have a partner for companionship in life: a person with whom they can share their emotions and feelings and make love. People search for a partner who shares their interests and build a life together; most of the time, however, couples fail to understand each other, and gaps develop. These gaps widen, and people start splitting up. This not only brings bitterness into the relationship, but also shatters confidence and faith. In this situation, it is recommended to visit couples counseling experts who can help you bring back the harmony in a relationship.
Couples counseling experts
Couples counseling experts understand that anger, frustration, and misunderstanding can create gaps in a relationship and split up even well-matched people. They know that modern professional life is stressful and frustrating. People face many kinds of pressure from bosses and peers, which is why they are sometimes unable to balance their personal and professional lives.
Unhealthy patterns of communication
Sometimes unhealthy patterns of communication and strong anger can damage a healthy relationship and ultimately destroy a marriage. This is why it is always good to visit a couples counseling expert. They can bring peace and rekindle love in a deteriorating marriage. They can show options and many ways to overcome bitterness, suggest different approaches to communication, simplify complexities, and demonstrate the value of love, passion, and feelings.
Logic of life
Couples counseling experts in Reston, VA, explain the logic of life in the simplest manner and talk to you and your partner in person to understand your concerns, beliefs, emotions, and inner desires. Needless to say, the intervention of a counselor can transform your life and save your marriage. Sometimes, under external tension and stress, a husband talks very angrily and even becomes violent, but after some time he returns to normal and feels regret. All the wife needs to do is show patience, understand the situation, and react accordingly.
Understanding, love, care and faith
A little understanding, love, care, and faith can save a relationship and bring fun and enjoyment into it. Life is short and every second must be enjoyed to the fullest. Differences of opinion and distractions come and go. You should have faith and show your true character in order to tackle difficult situations.
Of course, handling pressure, anger, and stress effectively has always been difficult, but it is good to control impulsive behavior and show respect to your partner. It will not only bring you both closer, but also strengthen your relationship.
About The Author
Shawn Hughes is a celebrated therapist and psychologist in Reston, VA, who also writes many articles and blogs related to his field. Through these, he tries to help people understand their psychological issues and find the most effective solutions for them.
Calculated Columns (SSAS Tabular)
Applies To: SQL Server 2016
Calculated columns, in tabular models, allow you to add new data to your model. Instead of pasting or importing values into the column, you create a DAX formula that defines the column’s row level values. The calculated column can then be used in a report, PivotTable, or PivotChart as would any other column.
Formulas in calculated columns are much like formulas in Excel. Unlike Excel, however, you cannot create different formulas for different rows in a table; instead, the DAX formula is automatically applied to the entire column.
When a column contains a formula, the value is computed for each row. The results are calculated for the column when you enter a valid formula. Column values are then recalculated as necessary, such as when the underlying data is refreshed.
You can create calculated columns that are based on measures and other calculated columns. For example, you might create one calculated column to extract a number from a string of text, and then use that number in another calculated column.
A calculated column is based on data that you already have in an existing table, or created by using a DAX formula. For example, you might choose to concatenate values, perform addition, extract substrings, or compare the values in other fields. To add a calculated column, you must have at least one table in your model.
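As a sketch of the concatenation case, a calculated column that combines two existing text columns might look like the following. The Customers table and its column names here are hypothetical, not part of this topic's sample data:

```dax
=Customers[FirstName] & " " & Customers[LastName]
```

The & operator concatenates text in DAX; as with any calculated column, the formula is entered once and is evaluated row by row for the entire column.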
This example demonstrates a simple formula in a calculated column:
=EOMONTH([StartDate],0)
This formula extracts the month from the StartDate column. It then calculates the end of the month value for each row in the table. The second parameter specifies the number of months before or after the month in StartDate; in this case, 0 means the same month. For example, if the value in the StartDate column is 6/1/2001, the value in the calculated column will be 6/30/2001.
By default, new calculated columns are added to the right of other columns in a table, and the column is automatically assigned the default name of
CalculatedColumn1, CalculatedColumn2, and so forth. You can also right click a column, and then click Insert Column to create a new column between two existing columns. You can rearrange columns within the same table by clicking and dragging, and you can rename columns after they are created; however, you should be aware of the following restrictions on changes to calculated columns:
Each column name must be unique within a table.
Avoid names that have already been used for measures within the same model. Although it is possible for a measure and a calculated column to have the same name, if names are not unique you can get calculation errors. To avoid accidentally invoking a measure, when referring to a column always use a fully qualified column reference.
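To illustrate the difference (table and column names hypothetical): a fully qualified reference always names the table, so it can only ever resolve to a column, whereas a bare bracketed name could resolve to a measure that happens to share the name:

```dax
=Sales[Quantity] * Sales[UnitPrice]   // fully qualified column references
=[Quantity] * [UnitPrice]             // unqualified; ambiguous if a measure shares the name
```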
When you rename a calculated column, any formulas that rely on the column must be updated manually. Unless you are in manual update mode, updating the results of formulas takes place automatically. However, this operation might take some time.
There are some characters that cannot be used within the names of columns. For more information, see "Naming Requirements" in DAX Syntax Reference.
The formula for a calculated column can be more resource-intensive than the formula used for a measure. One reason is that the result for a calculated column is always calculated for each row in a table, whereas a measure is only calculated for the cells defined by the filter used in a report, PivotTable, or PivotChart. For example, a table with a million rows will always have a calculated column with a million results, and a corresponding effect on performance. However, a PivotTable generally filters data by applying row and column headings; therefore, a measure is calculated only for the subset of data in each cell of the PivotTable.
A formula has dependencies on the objects that are referenced in the formula, such as other columns or expressions that evaluate values. For example, a calculated column that is based on another column, or a calculation that contains an expression with a column reference, cannot be evaluated until the other column is evaluated. By default, automatic refresh is enabled in workbooks; therefore, all such dependencies can affect performance while values are updated and formulas refreshed.
To avoid performance issues when you create calculated columns, follow these guidelines:
Rather than create a single formula that contains many complex dependencies, create the formulas in steps, with results saved to columns, so that you can validate the results and assess performance.
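For example, the number-from-text scenario mentioned earlier could be built in two columns rather than one deeply nested formula. The Products table, the ProductCode format, and the intermediate column name CodeNumber are hypothetical:

```dax
// Calculated column 1, named CodeNumber: extract three digits from a product code
=VALUE(MID(Products[ProductCode], 5, 3))

// Calculated column 2: reuse the validated intermediate result
=Products[CodeNumber] * Products[UnitsSold]
```

Splitting the work this way lets you inspect CodeNumber's values before building on them, and isolates which step is slow if performance suffers.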
Modification of data will often require that calculated columns be recalculated. You can prevent this by setting the recalculation mode to manual; however, if any values in the calculated column are incorrect the column will be grayed out until you refresh and recalculate the data.
If you change or delete relationships between tables, formulas that use columns in those tables will become invalid.
If you create a formula that contains a circular or self-referencing dependency, an error will occur.
Topic: Create a Calculated Column (SSAS Tabular)
Description: Tasks in this topic describe how to add a new calculated column to a table.
warc | 201704 | The 2011 New York state law that made renting out apartment units or rooms in residential buildings for less than 30 days unlawful, coupled with Airbnb’s booming short-term rental business, has served a one-two punch to the city’s traditional bed and breakfasts.
The number of bed and breakfasts has fallen by half since 2011, according to data from BnbFinder.com cited by Crain’s. Currently, the site lists only nine such properties in New York City.
One New York City innkeeper launched a nonprofit advocacy group in 2011 called StayNYC.org, which lobbies legislators and serves as a resource for Airbnb owners. The group’s aim, according to founder and Ivy Terrace townhouse owner Vinessa Milandro, is to persuade the state to carve out an exemption for them. Such a move would involve registering with the city’s Department of Finance and paying the same taxes as hotels.
“We decided that we needed to get our message out there,” Milandro told Crain’s. “What has happened since 2011 is that Airbnb has become so popular, and legislators are not rushing to change any laws.” [Crain’s] — Julie Strickland | 1,172 | 664 | 0.001596 |
warc | 201704 | Neutropenia is a common but yucky side effect that affects about half of cancer patients on chemotherapy or treatment with a biological agent (like Nexavar). When a cancer patient is neutropenic, it means that her white blood cell count is low and she is much more suceptible to infection. She is given direction to avoid possible sources of infection, to stay away from crowds, to wash hands and insist that others wash hands before coming into contact with her, and to change her diet to avoid the risk of infection from food.
All foods must be freshly cooked. None can come from restaurants or uncontrolled sources. Bread must be bagged and not homemade. Cutting boards must be changed between preparation of meats and other foods. No deli meat. No deli cheese. No hand-packed or soft ice cream or froyo. No soft cheese. No popcorn. Nothing from a bakery. No raw veggies, fresh fruits, except those with a very thick skin (oranges and bananas), or dried fruits. No spices, salad bars, buffets, or restaurants.
I’m pretty sure the guidelines would say no contact with little children who bring home infections from preschool, but what can I do? I have two who need me, and one is home sick today. | 1,207 | 661 | 0.001524 |
warc | 201704 | “If anyone believes that the downsizing of the PMU industry here in North America is advantageous for the mares and foals who suffer at the hands of Big Pharma, they are sadly mistaken” — Jane Allin
The news in January 2009 of pharmaceutical giant Pfizer’s announcement to take over rival company Wyeth for $68 billion in a cash and stock merger brought speculation and uncertainty regarding the fate of the PMU farming industry in North America. At that time Norm Luba, executive director of The North American Equine Ranching Information Council (NAERIC) posited that it was too early to assess the impact of the Pfizer-Wyeth merger. But more aptly, and much more perceptively, Shane Boyes, a registered Quarter horse-owner and PMU producer in Saskatchewan made the comment:
“They made a billion dollars from sales of Premarin alone last year. I don’t think Pfizer would dump a billion dollar industry. We will just have to wait and see what happens.” [1]
Indeed the strategy behind Pfizer’s announcement was in part to garner the lucrative HRT segment of Wyeth’s portfolio. Historically, Pfizer’s reputation as an innovative drug maker is less than stellar and they are renowned for securing drugs with several years of patent protection, high sales profits and potential future blockbusters in late-stage testing for FDA approval. There was no question that Pfizer would retain these product lines given the profits realized from these drugs derived from pregnant mares’ urine, regardless of the inexcusable exploitation of women and horses alike. And with the impending approval of Aprela®, a combination HRT-osteo drug, in Europe, Japan and North America, profits are expected to climb even further despite the fact that conjugated “equine” estrogens are classified as carcinogenic agents. In both 2010 and 2011 Pfizer’s revenues worldwide from the sales of the Premarin® family of drugs were in excess of 1 billion dollars. [2]
Disgraceful and disheartening.
This brings us back to the apprehension expressed by Luba and Boyes.
No, they need not have worried about Pfizer getting out of the HRT business. That was never an option nor was it part of their strategic plan. What they did have to worry about was the stability of the PMU industry in North America.
The gradual downsizing of the PMU ranches began shortly after the WHI results were released and now stand at approximately 22 farms with an estimated total of 800 to 900 mares after the shuttering of a farm in North Dakota earlier this year. A far cry from 2002 and 2003 where there were as many as 300 PMU farms housing 40,000 mares that churned out gallon upon gallon of estrogen-rich urine.
While it is true that immediately following the release of the WHI study, citing the horrors associated with CEE-derived HRT, there was a precipitous drop in sales of Premarin® and Prempro®, some of that market was reclaimed in time via reduced dosage recommendations, aggressive marketing strategies, price increases and the wide availability of these drugs without prescriptions through Internet sales.
What no one realized at the time was that Wyeth had probably begun to move their PMU facilities prior to or just after the damning results of the WHI were released to the public. As evidenced by their ghost written Prempro® and Premarin® articles between 1996 and 2004, Wyeth was well aware of the risks involved. It was only a matter of time before their unethical marketing tactics were exposed.
A simple Internet search shows that several pharmaceutical-based companies in mainland China and India produce bulk conjugated estrogens in powder form. Interestingly enough, or perhaps clearly enough, Wyeth/Pfizer is and has been a customer for several years. Whether their procurements are limited to conjugated estrogens is unknown. But really, who else would purchase these products in such quantities without a vested interest in them? Moreover, the Premarin® patent only just expired in February 2012 and hence no generics were permitted on the market up until this time.
In fact, according to information we procured from an inside source from one such company, PMU farms have been in operation in Northern China for the last 8 years. The company’s manufacturing site is also in the same general location.
Based on the limited information available it is currently unknown exactly how many of these farms exist and the number of mares housed at these facilities. Nonetheless it is evident that Wyeth, in its unremitting deception, quietly contracted out the manufacture of these conjugated equine estrogens to avoid publicity surrounding the evils of CEEs while concealing their true agenda.
It is well recognized that both Pfizer and Wyeth established presence in China in the mid-eighties as strategic protocol to gain a foothold in the rapid economic growth and tantalizing market potential for a variety of pharmaceutical products. Why then the “cloak and dagger” regarding the PMU industry? Wyeth could just as easily have launched their own PMU enterprises rather than rely on secondary network distribution systems. Perhaps cost was a factor or conceivably a more sinister motive was involved or perhaps a combination of both. One will never know.
As for the manufacturers of conjugated equine estrogens in India, and possibly other countries, it is only speculation at this point. One thing however is clear. If anyone believes that the downsizing of the PMU industry here in North America is advantageous for the mares and foals who suffer at the hands of Big Pharma, they are sadly mistaken. With revenues of over 1 billion dollars in 2011 and sales projected to skyrocket with the introduction of Aprela® to an aging population, the PMU industry will be alive and kicking, just not here in North America. As history tells us the horses are likely to suffer to a greater degree given the heinous crimes against animals that China is so very renowned for.
In any case, there is no question that Pfizer/Wyeth will continue to market and sell this vile family of drugs regardless of the fact that the patent expiration date for Premarin® was February 26, 2012. Prempro® is patent protected until January 2015, and once Aprela® is approved for sale it will have patent protection for many years, so a monopoly of sorts is guaranteed. That said, Pfizer has seemingly made a very tactical move in regard to Premarin® among other drugs in their repertoire, most notably the blockbuster statin Lipitor® which lost patent protection in November 2011.
Interestingly enough on February 17, 2012, just shortly before the Premarin® patent expired, Pfizer and Zhejiang Hisun Pharmaceutical, a leading pharmaceutical company in China, signed a framework agreement pursuant to an earlier announcement which would establish a joint venture to develop, manufacture and commercialize off-patent pharmaceuticals in China and global markets.[3] The key word here is “off-patent”.
A pattern similar to that observed in Europe in the 1980s and 1990s driven by EU over-regulation of the pharmaceutical industry is currently replaying itself here in North America. In part due to a rise in regulatory pressures here in North America as well as hefty plaintiff compensatory awards in mass torts drug litigation.
However a more compelling reason for Pfizer and other large drug companies to ramp up investments in emerging markets, particularly China, has been widely interpreted to be a response to the loss or impending loss of patent protection and subsequent profits for several of their blockbuster drugs. According to IMS health when a drug loses patent protection more than 80% of its prescription sales are replaced by generics within 6 months. [4]
Even more sobering for Big Pharma, DRX Inc., a health-care data provider reports that during the first 180 days after patent expiration the price falls about 10% and as much as 80% in some cases thereafter. [5]
For big name drugs this translates to big money. In 2011 Pfizer’s top money-making drug Lipitor netted them over $9 billion, the Premarin® family of drugs (12th) $1 billion with the overall revenue from the top 15 drugs on the list a whopping $32 billion representing over 55% of the total revenue from biopharmaceutical products. [6]
With this much money at stake this is not simply a cost-effective strategy; the realignment also has the potential to drive brand sales over the longer term and substantially boost overall sales given China’s burgeoning population. As a number of industry researchers have indicated, China’s prescription drug market, set to be the world’s second largest by 2020, is estimated to be worth more than $110 billion by 2015, up from $50 billion in 2010. [7]
More importantly, from our perspective, what will be dealt the horses?
Forever swallowed in the melee of human progress as we know it there is nothing to suggest that compassion and respect for our fellow companions on this earth will ever be realized by Big Pharma – animal or human alike. Big Pharma has been described as an albatross of medical monopoly that exploits illness for power and profit while spending almost twice as much on promotional propaganda as it does on research and development. To think that the horses would be spared is imaginably naïve.
Nothing has been, or ever will be, appealing or beneficial about the PMU industry and the Premarin® family of drugs; they are clearly harbingers of death from both sides of the equation.
Sadly the relocation of the PMU industry to distant and less regulated countries will carry with it the legacy of deceit, exploitation of the innocent and ever-zealous hunger for profit at all costs. The writing is on the wall. It is only a matter of time before Pfizer/Wyeth completes the transition.
No longer will the cataclysm and protest against the atrocities of PMU farming in North America be taken at face value by Big Pharma. No longer will Pfizer need to give this reprehensible exploitation the gloss of moral sanction. A new dawn for Pfizer perhaps – a blood red dawn for the PMU mares and foals, one governed by misery and certain slaughter.
Regrettably the suffering will continue at its worst; the new beginning in China bodes a critical end for the innocent mares and their foals. Distance – the great separator. Who will be there for them in their new homeland?
Perhaps there is a glint of hope.
While the horrors of animal abuse in China have painted an ugly picture for many around the globe, over the past several years there have been major developments in support of the plight of animals at the hands of these abhorrent and unforgiving industries and practices. Raising awareness and educating those who now preside over the welfare of the PMU mares and foals will be paramount in transferring our efforts to a distant land unaccustomed to the realities of the negative implications of these drugs from both a human and equine perspective.
It does not matter where one hails from; it is the goodness in each of us that promotes welfare and justice.
————
Written and Researched by JANE ALLIN
© Int’l Fund for Horses (HorseFund.org) 2012 | 11,388 | 5,169 | 0.000198 |
warc | 201704 | Neutral advice from the IPCC?
By Richard Ingham (AFP) – 3 hours ago
PARIS — A leaden cloak of responsibility lies on the shoulders of UN scientists as they put the final touches to the first volume of a massive report that will give the world the most detailed picture yet of climate change.
Due to be unveiled in Stockholm on September 27, the document will be scrutinised word by word by green groups, fossil-fuel lobbies and governments to see if it will yank climate change out of prolonged political limbo.
The report will kick off the fifth assessment by the Intergovernmental Panel on Climate Change (IPCC), an expert body set up in 1988 to provide neutral advice on global warming and its impacts. http://www.google.com/hostednews/afp/article/ALeqM5hHmcL4DZjT-PZWhEHO3VDb5gjsrA?docId=CNG.db54bf0fa84dd93ad7cf71578fe1dcef.681
=================================================================
Regarding your “Trenberth’s IPCC claim” post, you may like to mention Green & Armstrong (2007) (available here) in which we addressed Trenberth’s IPCC-don’t-forecast line in some detail. As far as I’m aware, our subsequent paper (Green, Armstrong, & Soon 2009, here) provides the *only* forecast of global mean temperatures over the 21st Century. That is, we state that we are making a forecast (not a scenario or projection), the forecast is stated clearly (annual average temperatures will be within 0.5 C of the 2008 figure), and is unconditional (no matter what happens to CO2 emissions, etc). Unlike Trenberth et al., who try to have it both ways by calling for “action” but aren’t prepared to say they are making forecasts, we stand by our forecast and the clear implication that government climate policies are neither needed nor desirable.
Cheers, Dr Kesten C Green
===================================================================
Park Service personnel recently discovered evidence of a buried forest dating back to at least 1170 AD high in the Forelands near the current glacier’s edge…Exit Glacier advanced from the Harding Icefield during the Little Ice Age, burying this existing forest and advancing to a maximum marked by the terminal moraine dated to 1815…
===================================================================
It’s baaaaaaack….
Eastern US water supplies threatened by a legacy of acid rain
Noted ecologist Gene Likens, founding director of the Cary Institute of Ecosystem Studies and a co-discoverer of acid rain, was among the study’s authors. The extent of alkalinity change in streams and rivers exceeded his expectations: “This is another example of the widespread impact humans are having on natural systems. Policymakers and the public think that the acid rain problem has gone away, but it has not.”
====================================================================
Dr. Roy Spencer continues his greenhouse experiments:
In Part I of this series, I mentioned how Wood’s (1909) “greenhouse box” experiment, which he claimed suggested that a real greenhouse did not operate through “trapping” of infrared radiation, was probably not described well enough to conclude anything of substance. I provided Wood’s original published “Note”, which was only a few paragraphs, and in which he admitted that he covered the issue in only cursory detail.
Wood’s experiment was not described well enough to replicate. We have no idea how much sunlight was passed through his plate of rock salt-covered box versus the glass-covered box. We also don’t know exactly how he placed another glass window over the rock salt window, which if it was very close at all, invalidated the whole experiment.
====================================================================
New witch hunt: “Environmental Campaign Suggests Naming Vicious Storms After Climate-Change Deniers”
New York agency Barton F. Graf has turned its roguish attention to the issue of climate change, and is helping 350 Action, a climate change activist group, with the amusing video below. According to the YouTube description: “Since 1954, the World Meteorological Organization has been naming extreme storms after people. But we propose a new naming system. One that names extreme storms caused by climate change, after the policy makers who deny climate change and obstruct climate policy. If you agree, sign the petition at climatenamechange.org.” The snarky tone preaches to the choir, but it’s hard to resist lines like, “If you value your life, please seek shelter from Michele Bachmann.”
=================================================================
Satellite temps flat for 200 months now
By Werner Brozek
If the global warming era started in June 1988 with Jim Hansen’s drama-queen congressional testimony, then atmospheric temps have been flat 67% of the time since.
warc | 201704 | AS ALWAYS, there was a lot in the Budget with plenty of detail to keep Budget aficionados going for a good while. But for all the detail and undoubted complexity of some of the measures, I’d argue that there was a seam – even theme – of simplification. The Office of Tax Simplification (OTS) made a mark on this Budget and plans to do more in future ones with our competitiveness project.
We published in January significant reports on partnerships and employee benefits & expenses and were awaiting the chancellor’s formal response to our various recommendations. It is pleasing that these are indeed going to be taken forward (see paras 2.212-217 in the Red Book) and we had a
fuller response from David Gauke, exchequer secretary to the Treasury.
Some of the highlights included employee benefits – voluntary payrolling of benefits will be brought in; £8,500 limit (the old ‘higher paid’) – consultations on abolishing this outdated limit and the effects of so doing; employee expenses – consultation on an exemption for qualifying business expenses; partnerships – lots of our ‘short-term fixes’ are being taken forward, including better guidance and we are being invited to do more work on our longer-term ideas.
It’s not just these recent reports that are being taken forward. In last year’s pensioners’ report, we recommended abolishing the 10% savings rate of income tax as being complex and ineffective. To compensate, we recommended a pragmatic increase in ISA limits. The difficulty with our proposal – which we always acknowledged – was that there wouldn’t be a precise match between winners and losers. The announcement in the Budget does not formally abolish the savings rate but by increasing the band and reducing the rate to zero achieves most of the practical simplification we were after.
Our continuing efforts to make the case for significant reforms to NICs (see our Employee Benefits & Expenses report) have borne some fruit with the taking forward of collecting Class 2 NICs via self assessment.
The OTS report on ‘unapproved’ share schemes has led to a number of measures in the Finance Bill 2014. Two of our more radical proposals – on the timing of the tax charge and for an ‘employee shareholding vehicle’ – are to be taken forward by way of a discussion document. It has taken some effort to make the case for what we think are potentially valuable reforms so it is pleasing to see those efforts bearing fruit.
One potential downside of all this reform that I have to acknowledge is that changing the tax system always adds to complexity in the short term. We have to be sure that OTS recommendations pass the ‘worth it’ hurdle. I think these do: for example, our programme of reforms in the employee benefits & expenses area could, we believe, largely eliminate the 4.4 million P11Ds completed annually.
As always, we need to do more and one of the documents published with the Budget is our
paper on the competitiveness project. Our brief is to look for ways of improving the competitiveness of the UK tax administration, with particular reference to the World Bank’s ‘Paying Taxes’ report.
The chancellor wants our ideas on improving competitiveness – and that means we want your ideas. What would make business taxes – corporation tax, PAYE/NIC, VAT and the rest – more efficient? If you could suggest one change to improve the competitiveness of the UK tax administration, what would it be? Let us know at competitiveness@ots.gsi.gov.uk.
John Whiting is tax director at the Office of Tax Simplification
Related reading
Report argues that the government must change the way it makes tax and budget decisions
Committee expresses concern about costs to businesses and April 2018 implementation date
Andrew Tyrie airs views on the Finance Bill, 'Making Tax Policy Better' report, and Brexit
Top 25 firm HW Fisher & Co has acquired London firm Rhodes & Rhodes | 4,070 | 1,978 | 0.000529 |
warc | 201704 | FOR IMMEDIATE RELEASE
CONTACT: (212) 549-2666; media@aclu.org
NEW YORK – The American Civil Liberties Union and the ACLU of New Mexico filed a class action lawsuit today challenging the Defense Department's discriminatory policy of cutting in half the separation pay of service members who have been honorably discharged for being gay. The separation pay policy is not part of the "Don't Ask, Don't Tell" statute, and can be changed without congressional approval.
"By denying servicemen and women full separation pay, the military is needlessly compounding the discrimination perpetuated by 'Don't Ask, Don't Tell,'" said Joshua Block, staff attorney with the ACLU Lesbian, Gay, Bisexual and Transgender Project. "The Obama administration has repeatedly said the 'Don't Ask, Don't Tell' statute is wrong, but that it needs to work with Congress to repeal the law. But the separation pay issue is entirely within the administration's control. The administration can at least take a preliminary step toward backing up its rhetoric with action by addressing this issue promptly and protecting gay and lesbian service members from needless additional discrimination."
Federal law entitles service members to separation pay if they have been involuntarily discharged from the military after completing at least six years of service. But in 1991, the Defense Department adopted an internal policy that automatically cuts a former service member's separation pay in half if the service member is discharged because of "homosexuality." The separation pay policy was adopted two years before Congress enacted the "Don't Ask, Don't Tell" statute. The ACLU and the Servicemembers Legal Defense Network contacted the Defense Department in November 2009 to request that the separation pay policy be revised to eliminate the discrimination against gay and lesbian service members, but the department has refused to do so.
Today's class action lawsuit was filed in the U.S. Court of Federal Claims. The lawsuit was brought on behalf of all service members involuntarily discharged in the past six years who received honorable discharges and were otherwise eligible for full separation pay but had that pay cut in half because of "homosexuality." The ACLU estimates that over 100 former service members will qualify as part of the class of plaintiffs.
The lead plaintiff in the case is Richard Collins, a former staff sergeant in the Air Force who served for nine years until he was discharged under "Don't Ask, Don't Tell." Collins was stationed at Cannon Air Force Base in New Mexico when two civilian co-workers observed him exchange a kiss with his civilian boyfriend and reported it to his superiors. The kiss occurred while Collins and his boyfriend were in a car stopped at an intersection 10 miles off base and while Collins was off duty and out of uniform. Collins received an honorable discharge from the Air Force but discovered after the discharge had been completed that his separation pay had been cut in half on the grounds of "homosexuality."
"After nine years of honorable service, it's not fair that I should be deprived of the same benefits given to other dedicated service members who are adjusting to civilian life," said Collins. "I hope that the Defense Department will adjust its policy and show some justice to anyone who has been discharged from the military under 'Don't Ask, Don't Tell.'"
"Mr. Collins's case is a perfect example of how discrimination on the basis of sexual orientation is unfair and unconstitutional," said Laura Ives, a staff attorney for the ACLU of New Mexico. "Mr. Collins's sexual orientation did not prevent him from serving his country ably and honorably. The least the government can do is provide him with the same separation pay it provides other honorably discharged service members."
Attorneys on the case, Collins v. United States, include Ives and Matt Garcia of the ACLU of New Mexico; Block and Leslie Cooper of the ACLU LGBT Project; George Bach, cooperating attorney with the ACLU of New Mexico; and Sara Berger of Freedman Boyd Hollander Goldberg Ives and Duncan, PA.
For more on this case, please visit: www.aclu.org/lgbt-rights/collins-v-united-states-class-action-military-separation-pay | 4,281 | 1,906 | 0.00053 |
warc | 201704 | Holds are barred, long ones on the tarmac anyway
The Department of Transportation today gave the final go-ahead to implement, on April 29, a federal rule barring airlines from holding passengers aboard a flight stuck on the tarmac for more than 3 hours, deciding it will cover all domestic flights from day one.
It rejected requests for a temporary exemption from the new rule from JetBlue, American Airlines, U.S. Airways, Delta Air Lines and Continental Airlines.
JetBlue wanted a delay because of construction at JFK airport, and its request prompted Delta Air Lines, American Airlines and Continental Airlines to make similar requests for that and other New York airports. US Airways sought a delay for Philadelphia because it feared delays in New York would spill over. All of the airlines warned that without delays they could have to cancel flights. United and Spirit then asked that all airlines be treated the same.
Transportation Secretary Ray LaHood announced the decision in a statement.
"Passengers on flights delayed on the tarmac have a right to know they will not be held aboard a plane indefinitely," he said. "This is an important consumer protection, and we believe it should take effect as planned."
In the statement, the department said the airlines could easily avoid delays by rerouting or rescheduling flights at JFK to allow the airport's other three runways to absorb the extra traffic.
The 3-hour rule was part of new "passenger protection" rules the Obama administration announced following a series of incidents in which passengers were held on planes waiting to take off or land for hours and hours.
The department fined Continental Airlines, ExpressJet Airlines and Mesaba Airlines a total of $175,000 for their roles in one of the incidents, a nearly six-hour overnight ground delay in letting passengers off at Rochester, Minn.
Under the new rules, airlines:
- Can be fined up to $27,500 per passenger if they don't let passengers get off a domestic flight that has been on the tarmac for three hours. There is an exception for safety and security reasons, or if returning to the terminal would disrupt airport operations.
- Must provide food and water to passengers after two hours sitting on the tarmac.
- Must display flight delay information on their websites for every domestic flight they operate.
- Must respond "in a timely and substantive fashion" to consumer complaints.
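The per-passenger fine cap translates into substantial exposure on a single held flight. A back-of-the-envelope sketch: the $27,500 figure is from the rule, while the 150-seat flight is an illustrative assumption.

```python
# Worst-case civil penalty exposure under the 3-hour tarmac rule.
# The per-passenger cap comes from the rule; passenger counts are illustrative.

MAX_FINE_PER_PASSENGER = 27_500  # dollars, per the DOT rule

def max_exposure(passengers: int) -> int:
    """Maximum potential fine for a single flight held past the 3-hour limit."""
    return passengers * MAX_FINE_PER_PASSENGER

# A typical full narrow-body domestic flight:
print(max_exposure(150))  # 4125000 -- over $4 million for one flight
```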
Kate Hanni, executive director of FlyersRights.org, a group that has been pushing for more consumer friendly rules, praised the decision to move forward.
"We are absolutely thrilled. It is good for airline passengers," she said.
She said passengers will finally be assured of getting snacks and water if flights are held.
"It's not going to be steak dinner, but they will be fed something," she said.
Hanni said the rule will also likely result in the three airlines that extensively use JFK -- JetBlue, Delta and American -- having to pare back the number of flights they have between 6 a.m. and 8 a.m. and between 3 p.m. and 5 p.m.
"I believe there will be the same number of flights available, but some will be moved to the middle of the day," she said.

A US Airways spokeswoman said the airline's concern was that it would be disadvantaged if other carriers won reprieves. She said flights would operate normally. JetBlue said in a statement it would comply but warned that the rule "could have unintended consequences, and result in harming consumers with more delays and cancellations rather than protecting their interests."
www.dot.gov/affairs/2010/dot7610.htm
Preserve: Proving Plastic Needn't Be Bad for the Environment
When Preserve founder Eric Hudson was meeting with plastics manufacturers back in the mid-'90s to make the first toothbrush for his new company, he would start by talking about how it would encourage better brushing.
Specifically, Hudson would walk would-be manufacturers through the unique forward-angle design, which he created with the help of his dad, a former car and boat designer.
It wasn't until the third or fourth meeting, when he thought he might have them sold on the idea, that Hudson would tell them what really made his brushes special.
Hudson didn't want to make the brushes out of new plastic, which is easy to work with. He wanted to use recycled plastics -- and not just any plastic, but No. 5 polypropylene, the stuff yogurt cups and bottle tops are made out of. At the time, most of it was ending up in landfills.
Out of that trash, Preserve would come to produce an entirely different creation.
Chapter 1: Redefining Sustainability
Hudson's reluctance to mention his vision upfront may seem strange now, but in the mid-1990s recycling was just gaining a foothold across America. While cities had begun collecting plastic, there were very few useful products being made from it, and none were being made from No. 5 polypropylene.
Hudson was neither a plastics expert nor an environmental crusader. But the Massachusetts native and former Fidelity securities trader was fresh out of Babson College, where he'd picked up an MBA and nurtured his entrepreneurial spirit. His grandparents were the founders of Brookstone, and his father was an industrial engineer. After years of trading stock, he longed to create something real. Now he just needed a product.
He'd always cared about the environment, and after reading a news story about the Fresh Kills Landfill on New York's Staten Island -- which was turning away trash barges because it was bursting at the seams with garbage -- an idea started taking shape.
"I remember reading about this and going, 'We're really running out of landfill space,' " Hudson says. "We really need to take on this recycling challenge and say, 'We've got to start converting this stuff into useful products made in the United States of America.' "
Speaking of useful products, Hudson had long been kicking around an idea for a reverse-angle toothbrush -- one that would allow him to brush as his dentist had always instructed. He worked with plastics and dental experts on the design, and a few months after submitting plans to the Food and Drug Administration, he was given the green light. Now he just needed a manufacturer.
Recycled plastics don't always melt and blend as neatly as virgin materials, so they can be hard to work with. Hudson eventually found a company in Tennessee willing to take up the challenge.
The next challenge would be finding consumers to buy and use it. Did the world need another toothbrush, especially one made out of garbage?
"Does it make sense to launch a product everybody uses, and that everybody uses in their mouth, made from recycled materials?" Hudson remembers asking himself. "Is this OK?"
Two decades later, the answer is yes.
Preserve is now a well-established company with 13 employees, a massive supply chain, and four manufacturing partners making not just toothbrushes, but also razors, reusable tableware -- including plates, cups, and cutlery -- and kitchenware items like colanders and food-storage containers.
Hudson no longer downplays the environmental angle. He puts it right on the company's packaging: "Made with love and recycled yogurt cups."
Q: I am addicted to Pepsi, and I have developed thirst and frequent urination problems. I drank about one litre of it every day. I feel really thirsty at times, and at night I sometimes wake up two or three times to urinate. Even drinking water sometimes doesn't help. I drink a lot of water. I also get tired easily: at school my friends and I were hitting each other as a joke, and while the others were fine, I got really tired fast and started breathing heavily. Do I have diabetes?
A: Thank you for your question. It is possible that your consumption of soda has been leading to your symptoms. Excessive consumption of sugar can lead to such symptoms. We cannot definitively diagnose you with diabetes without further examination and blood testing to confirm. It is important that you try to cut back your consumption of soda and switch to a healthier alternative such as water and, if absolutely necessary, artificially sweetened drinks in moderate amounts. Please follow up with your doctor as soon as you can for further evaluation.
(*)These Q&A’s are for educational purposes and should not be relied upon as a substitute for medical advice you may receive from your physician. If you have a medical emergency, please call 911. These answers do not constitute or initiate a patient/doctor relationship.
When biological agents are made into or used as a weapon or as a potential weapon, they are being weaponized. How a biological agent is weaponized depends on what the state/nonstate actor wants to accomplish. Several questions have to be answered. How many people will be targeted? Where is the attack going to take place? Is this attack meant to incite fear in the population at large or to take over a particular region? Is this BW going to be used as a WMD or for assassination?
There are several ways in which a biological agent can be released. It can be sprayed into the air as a wet or dry aerosol; placed wet or dry in food or water sources; injected into people, plants or animals; or released by explosive dissemination. The means of release depend in part on the BW used and the number of victims the attacker wants to infect.
BW released into the air as aerosols have the potential to infect many people in highly populated areas. Release into water and food sources can also infect large numbers of people. Injection of a BW usually targets just a few people at a time. Aerosol, water, and food contamination can pose a problem for attackers who want to occupy the area of the BW release, since they are likely to become infected themselves. This is especially the case if the BW persists in the environment for long periods of time. However, if the attacker does not want to occupy the area of the BW release, then the use of aerosols and contaminated water sources may achieve the objectives of causing disease and fear while at the same time draining the enemy’s resources as they attempt to decontaminate the area.
Some BW are more effective if injected into the bloodstream or muscles, so some means of injecting the BW would have to be employed to infect a target. It is more difficult to infect large numbers of people using injections. It also usually requires the attacker to be in closer proximity to the victims, increasing the chances of getting caught. In one of the most famous historical examples, a poison dart filled with ricin and fired from an umbrella in London in 1978 killed Bulgarian dissident Georgi Markov (54).
Some BW infect people following ingestion. A strain of Salmonella was used by the Rajneesh cult in Oregon to infect others at a local restaurant. The Rajneeshees grew the organism in a nutrient-rich liquid medium and placed it on food items on the salad bars. In this case, relatively little had to be done to prepare the BW other than growing it and sprinkling it on the food (5).
Other BW are most effective if inhaled. Typically, aerosols can be released by two basic means: point or line source dissemination. Point source dissemination involves the use of stationary aerosol generators or bomblets. Line source dissemination involves release of the BW from low-flying aircraft or speedboats along the coast. This allows the attacker to be remote from the site of the attack, making it less likely that they would be caught in the act.
Aerosol release is more dangerous in that the aerosols if released prematurely can also infect the attackers (Boomerang Effect; 30). Even though aerosols can infect many more people there is a lot more involved in weaponizing such BW. It usually requires concentration of the BW and then generation of the right particle size so that it will float in the air and yet be able to settle in the lungs once it is inhaled.
Aerosols can be either wet or dry and are made up of small particles. Dry aerosols tend to travel further than wet aerosols. The larger the particles in the aerosol, the more rapidly it will settle to the ground, making it less effective. Particles larger than 5 micrometers (the diameter of a human hair can be from 17-181 micrometers) do not go down into the lungs but are caught up in the mucus that lines the nose and the back of the throat. The particles are then swallowed, and the acid in the stomach kills most of the organisms. If the particles were too small, they would be inhaled into and exhaled from the lungs before they could settle down on the inner surfaces of the lungs.
An effective aerosol must have particle sizes between 0.5 and 5 micrometers. These aerosols are invisible, do not have any odor, and can remain in the air for long periods of time depending on the weather conditions.
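Why larger particles settle out faster can be illustrated with Stokes' law for the terminal velocity of a small sphere falling through still air. The particle density and air viscosity below are assumed typical values, not figures from the text, and the formula ignores the slip correction that matters for sub-micron particles.

```python
# A rough illustration, via Stokes' law, of why larger aerosol particles
# settle out of the air faster. Assumed values (not from the text):
# water-like particle density and standard air viscosity.

G = 9.81          # gravitational acceleration, m/s^2
MU_AIR = 1.8e-5   # dynamic viscosity of air, Pa*s
RHO_P = 1000.0    # assumed particle density, kg/m^3

def settling_velocity(diameter_um: float) -> float:
    """Stokes terminal settling velocity (m/s) for a sphere of the given diameter (micrometers)."""
    d = diameter_um * 1e-6  # micrometers -> meters
    return RHO_P * G * d ** 2 / (18 * MU_AIR)

# Velocity scales with the square of the diameter:
for d in (0.5, 5.0, 50.0):
    print(f"{d:5.1f} um -> {settling_velocity(d):.2e} m/s")
```

Because velocity scales with diameter squared, a 50-micrometer particle falls 100 times faster than a 5-micrometer one, consistent with the point above that larger particles settle to the ground before they can be inhaled.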
© 2005 Neal Chamberlain. All rights reserved.
Site Last Revised 5/13/05 Neal Chamberlain, Ph.D. A. T. Still University of Health Sciences/Kirksville College of Osteopathic Medicine. Site maintained by: Neal R. Chamberlain Ph.D.: nchamberlain@atsu.edu
Academia
Brittany Gross
Nikkita Patel
Two students have been named the 2010 Penn Vet Student Inspiration Award winners. The awards are given annually to two University of Pennsylvania School of Veterinary Medicine students in recognition of their plans to substantially advance the frontiers of the profession.
Brittany Gross, South Sterling, Pa., is a second-year student who earned a bachelor's in biology from the University of Vermont.
Gross' proposal involves constructing an educationally focused dairy farm in the rural northeast region of Thailand. The dairy would be the site of after-school programs that provide hands-on involvement and instruction in herd care, raw milk handling, and dairy product processing. The students would learn valuable skills in a facility that models methods and technologies that are implementable by the farmers in the region.
Nikkita Patel, Knoxville, Tenn., is a fourth-year student who earned a bachelor's from the University of Tennessee and a master's in public health from Yale University.
Her proposal, titled, "Veterinary Public Outreach 2.0," is about veterinarians educating the public on the depth and breadth of current problems they are working to solve, encompassing public health, conservation, and environmental health. Using Internet tools can be an efficient and effective means of doing so. Patel plans to use these tools for a veterinary intervention to educate and engage individuals and provide a powerful resource for policymakers.
No, it's not against the law, but it is not a good idea. At 8 years of age, your daughter should not be sleeping with her father. You might want to gently suggest to him that either she or he should sleep on the couch. You will also want to keep an eye on your daughter to see if there are any behavior changes that might indicate a problem. If you do see behavior changes or changes in the quality of her school work, get her to a therapist.
Is this a divorce situation or a child born out of wedlock? If it is a divorce situation, check your Judgment and Decree to see if there is anything in that document about sleeping arrangements. If it is a paternity case (a child born out of wedlock), if there has never been a court order giving the father some rights, he has no rights and you can withhold his parenting time until he makes the appropriate sleeping arrangements. You will not, of course, want to withhold them completely, but rather change the arrangements so that he doesn't have her overnight. You can't arbitrarily change his parenting time if there is a court order giving him specific rights.
I'm not necessarily suggesting that the father is intentionally doing something bad, but we don't know what we do in our sleep. Even if he is not intentionally doing something wrong, he could be reaching out in his sleep and making inappropriate contact with her.
Table of Contents
Tax Proposals for Exempt Organizations to Watch in 2015
With Charitable Giving Tactics Old and New, Nimble Nonprofits Win
Community College for All - the Road to Universal Access?
Compensation Committee - Do We Really Need One?
Going Concern: What Nonprofit Management Teams Need to Know
Assessing Financial Stability
2014 Changes to Form 990 and Schedules
Effectively Communicating Your Mission and Accomplishments: Form 990 and Beyond
BDO's Rebekuh Eley Appointed to AICPA's Exempt Organization Taxation Technical Resource Panel
Nonprofit Organizations and the Tangible Property Regulations
2015 Outlook: Nonprofit Healthcare Organizations
Nonprofits Beware: The Hidden Gem in an Office Lease
Nonprofit Facts: Did you know
Other Items to Note
Nonprofit & Education Webinar Series
BDO Professionals in the News
Tax Proposals for Exempt Organizations to Watch in 2015
By Laura Kalick, JD, LLM in Tax
At this point, we are just about through the first quarter, and 2015 has already seen a slew of legislative proposals that could considerably impact exempt organizations. From the President’s FY 2016 budget proposal, to last year’s Tax Reform Act of 2014 (TRA 2014), to a new proposal requiring that the Internal Revenue Service (IRS) give exempt organizations notice before their exempt status is revoked for non-filing, nonprofits are in the midst of a legislative landscape potentially poised for reform. As we look to the weeks and months ahead, here are a few major pieces of legislation that nonprofits should be monitoring:
Reduction of the excise tax on the investment income of private foundations:
A private foundation is generally subject to a two percent excise tax on its net investment income, and this rate is reduced to one percent in any year in which a foundation exceeds the average historical level of its charitable distributions. TRA 2014 had a provision to reduce the excise tax on the investment income of private foundations from two percent to one percent. This provision found its way into the America Gives More Act of 2014, as well as other tax provisions that were passed by the House of Representatives, but ultimately did not become law last year.
Meanwhile, the President’s budget contains a proposal to reduce the two percent tax to 1.35 percent across the board. Many in the nonprofit community are opposed to the President’s proposal because it could actually result in a tax increase for organizations that are able to reduce the tax to one percent under the current tax law formula.
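The arithmetic behind that objection is straightforward. A minimal sketch, using a hypothetical $1 million of net investment income; note that the real one percent qualification test under current law is a formula based on a foundation's historical distribution levels, reduced here to a simple flag:

```python
# Compare the current two-tier private foundation excise tax with the
# proposed flat rate. The qualification test is simplified to a boolean;
# under current law it depends on historical charitable distribution levels.

def current_tax(net_investment_income: float, beats_historical_average: bool) -> float:
    """Current law: 2% base rate, reduced to 1% in qualifying years."""
    rate = 0.01 if beats_historical_average else 0.02
    return net_investment_income * rate

def proposed_tax(net_investment_income: float) -> float:
    """President's FY 2016 proposal: a flat 1.35% across the board."""
    return net_investment_income * 0.0135

income = 1_000_000  # hypothetical net investment income

# A foundation that already qualifies for the reduced rate would pay MORE:
print(round(current_tax(income, True), 2))   # 10000.0
print(round(proposed_tax(income), 2))        # 13500.0

# A foundation stuck at the base rate would pay less:
print(round(current_tax(income, False), 2))  # 20000.0
```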
Make the IRA rollover to charity and enhanced deductions for conservation and food inventory permanent:
These provisions aren’t permanent, but they keep getting renewed every year. Legislation in 2014 would have made permanent the tax-free distributions from individual retirement accounts (IRAs) for charitable purposes, an enhanced deduction for contributions of food inventory and also the tax deduction for charitable contributions by individuals and corporations of real property interests for conservation purposes. The America Gives More Act of 2015 that makes these provisions permanent was passed by the U.S. House of Representatives on February 12, 2015. In order to become law, the Senate will also have to pass the provisions and the legislation will require final signoff by the President.
Charitable contribution extensions and simplified rules:
TRA 2014 had a number of provisions that would have impacted charitable giving, including one that would allow taxpayers to treat charitable contributions made up until April 15 as deductible in the previous year’s taxes. Although this provision surfaced again in 2014, we have not seen it yet this year.
Meanwhile, the President’s proposals aim to simplify the rules regarding limitations on the maximum amount of charitable contribution deductions for a single year, regardless of whether contributions are made to public charities or private foundations, whether they are cash or property, and whether they are for the use of the organization. The proposal would also increase the carryforward period for an unused charitable deduction that is in excess of the limits from five years to fifteen years.
College and professional sports under scrutiny:
Both TRA 2014 and the President’s FY 2016 budget proposals have placed sports on the radar in a number of capacities. Under present law, those who donate to colleges and universities and receive in exchange the right to purchase tickets for seating at an athletic event may deduct 80 percent of their contribution. This is in contrast to the usual rule that only the contribution in excess of the fair market value received in return can be deducted. Both TRA 2014 and the President’s budget proposals aim to eliminate this deduction.
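The gap between the special 80 percent rule and the usual fair-market-value rule is easy to see with hypothetical numbers; the dollar amounts in the sketch below are illustrative, not from the source.

```python
# Contrast the general charitable deduction rule with the special rule for
# gifts that carry the right to purchase athletic-event seating.
# All dollar figures are hypothetical.

def usual_deduction(contribution: float, value_received: float) -> float:
    """General rule: only the portion exceeding the value received back is deductible."""
    return max(contribution - value_received, 0.0)

def seating_rights_deduction(contribution: float) -> float:
    """Special rule: 80% of the contribution is deductible."""
    return 0.80 * contribution

gift = 10_000.0  # donation that grants the right to buy season tickets

print(round(seating_rights_deduction(gift), 2))  # 8000.0
# Under the general rule, if the seating right were worth the full gift,
# nothing would be deductible:
print(round(usual_deduction(gift, value_received=10_000.0), 2))  # 0.0
```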
There are also two other tax proposals aimed at sporting events. TRA 2014 would have eliminated the ability of professional sports organizations such as the NFL, NHL and others to be exempt under Internal Revenue Code (IRC) 501(c)(6). Additionally, although the President’s proposals would expand the use of tax-exempt financing for infrastructure and research, they would repeal exempt financing of professional sports facilities on the basis that it transfers the benefits of exempt financing to private professional sports teams, rather than the general public.
New tax bills introduced in the Senate:
Three new tax bills were also recently introduced in the Senate, were vetted in a hearing of the Senate Finance Committee and were approved by voice vote in Executive Session. They include a bill to require the IRS to give an exempt organization 65 days’ notice before it has its exempt status revoked for failing to file information returns (Form 990 series); a bill to make certain agricultural research organizations public charities; and a bill to provide an exception to the private foundation excess business holding rules for certain philanthropic business holdings.
TRA 2014:
The Tax Reform Act of 2014 contained many legislative proposals for tax exempt organizations including: disallowing losses from one unrelated business income (UBI) activity to offset the income from another UBI activity; changes to the corporate sponsorship and royalty rules; expansion of the reach of intermediate sanctions to 501(c)(5) and (6) organizations; and the imposition of a 25 percent excise tax on compensation paid to a nonprofit organization’s top five executives in excess of $1 million. It is possible that some of these proposals could resurface again in the year ahead.
Gifts to 501(c)(4), (5) and (6) organizations:
Finally, we know that gifts to organizations other than 501(c)(3) organizations do not qualify for charitable deductions. However, whether gifts over $14,000 are subject to the gift tax if made to other nonprofit organizations has never been clear, and there have been times when the IRS has threatened to apply the tax when a gift was made to a 501(c)(4), (5) or (6) organization. To remedy the situation, Ways and Means Oversight Subcommittee Chairman Peter Roskam just introduced H.R. 1104, the Fair Treatment for All Donations Act, which would permanently ensure that donations to 501(c)(4), (5) and (6) organizations are not subject to the gift tax.
Stay tuned to the Nonprofit Standard blog and future newsletters in the weeks and months ahead, as we’ll be keeping a close eye on these proposals as they progress through the legislative process, and will keep you updated.
Article adapted from the Nonprofit Standard blog.
For more information, contact Laura Kalick, national director, Nonprofit and Healthcare Tax Consulting, at lkalick@bdo.com.
Return to Table of Contents
With Charitable Giving Tactics Old and New, Nimble Nonprofits Win
By Laurie De Armond, CPA
The numbers are out.
Charitable giving grew by 2.1 percent in 2014, according to the newly released 2014 Charitable Giving Report from Blackbaud, and this modest growth will no doubt prompt nonprofit fundraisers and executives to take a step back and evaluate their own fundraising results from the past year. But behind this solitary, lackluster statistic, there’s a more complex and profound transformation taking place in the U.S. charitable giving environment.
Above all, nonprofits currently face a challenging combination of longstanding norms and evolving trends. Nonprofit trade journals are full of articles about online giving trends, social media tactics and crowdfunding triumphs that provide resounding success stories and helpful tips around improving fundraising effectiveness. These newer fundraising models are critical, and will only become more important over time, but they are just one piece of the puzzle. Online donations accounted for only 6.7 percent of all U.S. giving in 2014, according to Blackbaud, and nonprofits are still largely working to secure donations via traditional channels, attract and retain new donors, and encourage affluent donors to extend their generosity through large gifts. To be sure, these perennial challenges are not going anywhere, but in the face of evolving donor behavior, nonprofits must evolve, as well.
Consider, for example, what’s occurring among the largest charities in the United States. According to The Chronicle’s Philanthropy 400 index, these top organizations saw an 11 percent boost in donations during 2013, driven largely by affluent donors. Despite this growth, donor preferences shifted notably, and the affluent donors that contributed the lion’s share of revenue to the 25 largest nonprofits increasingly gave to donor-advised funds (DAFs). In fact, four of the top 10 nonprofits by revenue were DAFs last year, and a growing number of these funds continue to move up the ranks. For traditional large charities (which saw 1.3 percent growth in donations during 2014), as DAFs receive a greater share of contributions from America’s philanthropists, the ongoing challenge of attracting and retaining donors is only further intensified.
This is just one of many major shifts in donor behavior, but its impact and ramifications are clear: Even the sector’s behemoths face competitive threats and the draining effects of donor abandonment. Charities of all sizes and across all segments rely on large bases of generous givers. But as new generations of donors gain financial means, and as the interests and giving preferences of existing donors transform, so must charities’ fundraising strategies.
What remains constant is the need for engagement. However, shifts in technology mean that connecting with donors requires new mediums of engagement that are accessible, relevant and appealing. For most organizations, antiquated tactics like telethons, telephone solicitation and direct mail campaigns no longer suffice. Effectively competing for funds now demands an adaptive and strategic approach—one that clearly and creatively communicates outcomes; one that creates an impassioned community of advocates; and one that, ultimately, transforms these advocates into a strong base of donors for sustained fundraising growth.
Moving forward, savvy and successful organizations will be those that not only adapt strategically, but also tactically. Digital platforms—online donation portals, mobile-friendly sites, text and email campaigns, social media campaigns—offer the ability to constantly and creatively engage existing and potential donors, as well as build online communities of advocates and financial supporters. Just as importantly, they provide donors with ease and accessibility for actually making donations. With 8.9 percent growth in overall online donations during 2014, a strong online presence is now essential for nonprofits.
Still, staying relevant in today’s highly competitive environment requires constant tactical innovation. From fun and engaging social media campaigns like the ALS Ice Bucket Challenge, to the rise of community-building giving days like #GivingTuesday, organizations are starting to realize that aside from large gifts, big results can come from outside-the-box fundraising initiatives that encourage peer-to-peer giving. Expect to see more nonprofits pushing the creative bounds and achieving new levels of success in the year ahead.
Article reprinted from the Nonprofit Standard blog.
For more information, contact Laurie De Armond, partner, at ldearmond@bdo.com.
Does Your Organization’s Development Plan Need Refreshing?
In an era where nonprofits can be sharply criticized by donors and watchdog organizations for spending too much on fundraising, some nonprofits shy away from making critical investments in their development efforts—investments which, in the long run, could substantially impact their financial stability.
We’ve created a checklist below with questions that organizations should consider when determining whether their overall development plan needs refreshing. While not all of the questions can be weighed equally, if you answer “No” to more than five, it may be a sign that your organization needs to strategically reassess its plan.
Is fundraising seen as the lifeblood of your organization?
Is your development department stable and able to achieve key fundraising objectives?
Does your organization regularly review its development plan?
Does your organization annually consider how effectively it’s achieving its mission?
When reviewing your plan, are you considering the changing demographics of your organization’s donor base and proactively addressing these changes?
Have you discussed new ways to reach potential donors and advocates in the last two years?
Are your fundraising materials current?
Have your recently introduced a new fundraising campaign?
Do you have an online giving program?
Do you have a mobile giving platform?
Have you planned or conducted a social media fundraising campaign?
Do you offer opportunities for potential donors and advocates to get involved in activities that directly fulfill your mission?
Is your organization flexible and responsive to new fundraising trends and tactics?
If your organization is experiencing declining donations, does it have a strategic plan in place for increasing contributions?
Community College for All – The Road to Universal Access?
By Tom Gorman, CPA
In his State of the Union Address in January, President Obama outlined the framework for universal access to free education at community colleges. That announcement followed several previews in the days before the speech and has ignited intense debate since. Loosely modeled after a similar plan being implemented in Tennessee, the call for nation-wide access to free education has drawn praise and re-ignited concerns over further government regulation of what many claim to be an over-regulated industry.
The Proposal
In his address, the President laid out a broad framework for the proposal that would cost an estimated $60 billion over 10 years. The stated goal of the proposal is to increase the number of students attending community college, and hopefully increase the number of graduates with job skills to enter the workforce. The proposal would be funded through a variety of higher education tax reforms. The most controversial of those was taxing distributions from Section 529 college savings plans. This option was soundly opposed by both Democrats and Republicans, and was subsequently withdrawn from consideration.
Under the proposal, students would need to attend school at least half-time, maintain a 2.5 GPA and show progress towards completing their degree program. In exchange, the federal government would provide 75 percent of the cost of tuition, and states would be required to provide the remainder. In addition, community colleges would need to ensure their credits would be fully transferrable to four-year institutions for those that choose to continue their studies. For their part, four-year institutions may very well develop and expand articulation agreements with community colleges to enhance the pipeline of students with a demonstrated interest in completing their degrees.
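The proposal's cost-sharing and eligibility terms reduce to simple arithmetic. A hypothetical sketch, using the thresholds stated above and, for the tuition figure, the College Board's $3,347 average cited later in this article; the function names are illustrative.

```python
# Sketch of the proposed federal/state cost split and student eligibility
# test, as described in the article. Names and structure are illustrative.

FEDERAL_SHARE = 0.75
STATE_SHARE = 0.25

def eligible(enrolled_half_time: bool, gpa: float, making_progress: bool) -> bool:
    """Students must attend at least half-time, keep a 2.5 GPA, and progress toward a degree."""
    return enrolled_half_time and gpa >= 2.5 and making_progress

def funding_split(tuition: float) -> tuple[float, float]:
    """Return the (federal, state) shares of a student's tuition bill."""
    return tuition * FEDERAL_SHARE, tuition * STATE_SHARE

fed, state = funding_split(3347.0)  # College Board average tuition and fees
print(fed, state)                   # 2510.25 836.75
print(eligible(True, 2.5, True))    # True
print(eligible(True, 2.4, True))    # False
```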
Today’s Reality
Many critics of the proposal point out that the average annual price of tuition and fees at community college was recently estimated at $3,347 by the College Board. This is well below the maximum award limit for the Federal Pell Grant of $5,730 for the 2014-2015 award year. Others argue that relieving community college students of the financial burden of attending will enhance their completion and graduation rates, and ultimately result in the workforce improvements hoped for by the Obama administration.
The other sticking point of the proposal is the reliance on states to fund one-quarter of the cost of the program. During the economic downturn, state support of higher education—at all levels—fell to some of its lowest levels in recent times. Only recently has state support of higher education increased, and it has just barely returned to pre-recession levels. Many worry that placing additional burdens on states to fund the community college program will simply shift funding from other priorities.
Access, Outcomes and Where We Go From Here
It is hard to look at this free community college proposal in isolation, given the backdrop of the proposed higher education rating system that is being rolled out. The focus on greater access to a college education is laudable; however, funding tied to outcomes and completion rates may prove more challenging. The community college proposal is still at the conceptual stage, with legislative action still to come. We will keep you updated as this proposal takes shape in the coming months.
For more information, contact Tom Gorman, director, at tgorman@bdo.com.
Return to Table of Contents
Compensation Committee – Do We Really Need One? By Michael Conover
In the Fall 2013 issue of the Nonprofit Standard, I contributed a similarly titled article, “Compensation Consultant… Do We Really Need One? Really?” Nearly a year and a half later, it is important to note that for many organizations, the need still exists. But along with compensation consultants, organizations looking to maintain compliance—and their tax-exempt status—are well-advised to also establish compensation committees.
Adoption of final regulations for the Internal Revenue Service (IRS) Intermediate Sanctions (Internal Revenue Code (IRC) Section 4958) in 2002 prompted many 501(c)(3) and 501(c)(4) organizations to formally designate a board-level committee with specific responsibility for oversight of the compensation of their most senior executive position(s). This governance structure was a practice adopted long ago by most for-profit and publicly held organizations. It also satisfies one of three criteria stipulated by the IRS for affording a nonprofit organization the ‘Presumption of Reasonableness’ for its pay practices. The Form 990 and the information requested in Schedule J provide still more evidence of an expectation of formal governance and oversight of executive pay.
While not every organization has a need for a compensation committee specifically dedicated to this subject, the need for independent board members and the proper process to govern pay is nearly universal for any tax-exempt organization that pays its senior-most executive(s). It is not unusual to find an executive committee of the board or some similar subset of the board fulfilling this role. This arrangement may have been in place for many years prior to the Intermediate Sanctions, revised Form 990 and the increased scrutiny toward executive pay practices of nonprofit and for-profit organizations alike.
In some of these organizations without a committee dedicated to compensation, longstanding methods of ‘handling’ executive pay may have failed to keep pace with the growth in size and complexity of the organization and/or IRS requirements. Generally, these organizations are categorized as having no compensation committee. The symptoms are often fairly obvious: There is little or no evidence of any policy or process for executive compensation decision-making; there are no external sources of compensation practices for comparable organizations; and there are no meaningful minutes of board discussions and decisions about pay. The oversight of executive compensation is simply a part of the annual chorus of required board votes: “Do I hear a motion? A second? All those in favor.”
Almost as troubling is another scenario in which a board compensation committee has been created, but the commitment of the organization or individual members to the committee’s role is inadequate. Admittedly, many board members assigned to the committee are volunteers, and they are frequently selected for their interest in the organization’s mission—not for their expertise in executive compensation. Nonetheless, two different causes create what can be considered “a compensation committee in name only.”
The first cause is a committee with members having little to no understanding of executive compensation in the nonprofit sector and little or no interest in learning any more about it. These individuals often fail to engage in the discussions and decisions that directly impact the leadership of the organization. Careful consideration of competitive pay practices, thoughtful discussions about the organization’s beliefs about pay, effective evaluation of executive performance and related pay actions are not present. Compensation decision-making is often reduced to predictable, annual upticks in executive salary with sporadic attention to other components of pay (e.g., retirement benefits, life insurance, etc.), often without regard to the executive’s total compensation program.
The second cause is membership turnover. Significant changes in the makeup of the committee on a year-to-year basis can severely reduce its ability to be effective. Without the benefit of the compensation background previously provided to former committee members, or continuity with past discussions and decisions, new members are a compensation committee in name only. This new group of committee members is essentially starting all over again. If past committees have left no policies or processes in place, the new members will potentially need to create a compensation strategy for their tenure.
Organizations without compensation committees, or where the committee is not properly performing the role—or performing it in name only—are at risk. Inattentive or even well-intentioned decision-making without the benefit of effective policies and processes for managing executive pay may have negative consequences. At a minimum, an opportunity for an objective assessment of the executive’s performance and the reasonableness of compensation in light of competitive practices by comparable organizations may be lost. In more serious cases, an organization may be startled by the realization that executive pay has become the focal point of embarrassment and adversity.
Above all, organizations that pay their senior executive(s) would be well-advised to consider the following recommendations:
Formally assign responsibility for oversight of executive pay to a committee of independent board members. It may be a committee already in existence, or a new compensation committee may need to be established.
Draft a charter describing the role and accountability of the committee. In addition to monitoring competitive pay practices for comparable organizations, consider the role the committee could play in managing the performance/evaluating the effectiveness of the executive(s) for which it is responsible.
Establish membership guidelines for the committee. Ideally, a member should serve through two or more annual cycles of the process. In addition, committee membership and committee chair terms should be staggered to ensure adequate continuity on a year-to-year basis, but also allow the introduction of new members in the process.
For more information, contact Michael Conover, senior director, Specialized Tax Services– Global Employer Services, at wconover@bdo.com.
Return to Table of Contents
Going Concern: What Nonprofit Management Teams Need to Know By Lee Klumpp, CPA, CGMA
Financial reporting issues remain hot topics for those in the nonprofit industry, but one of these issues in particular has historically lacked direction and guidance for for-profit and nonprofit organizations alike: going concern.
To help provide clarity around the issue, the Financial Accounting Standards Board (FASB) recently issued Accounting Standards Update (ASU) No. 2014-15, Presentation of Financial Statements - Going Concern (Subtopic 205-40): Disclosure of Uncertainties about an Entity’s Ability to Continue as a Going Concern.
For the sake of background, the principle of going concern is embedded in our conceptual accounting framework. It’s based on the assumption that an organization will remain in operation for the foreseeable future (i.e., a reasonable period of time). Conversely, this also means the organization will not be forced to cease its operating and programmatic activities and liquidate its assets in the near term. By making this assumption, management is justified in deferring the recognition of certain expenses until a later period, when the organization will presumably still be operating to achieve its mission and using its assets in the most effective manner possible.
The going concern principle is presumed as the basis for preparing financial statements—unless and until the organization’s liquidation becomes imminent. If and when a nonprofit’s liquidation does become imminent, financial statements should be prepared using the liquidation basis of accounting in accordance with Subtopic 205-30. For years, U.S. auditing standards assisted auditors in evaluating whether there was substantial doubt about an organization’s ability to continue as a going concern for a reasonable period of time, not to exceed one year beyond the date of the financial statements being audited. However, in practice, this created difficulties between auditors and management.
To clarify, an organization is assumed to be a going concern in the absence of significant information to the contrary (e.g., an organization’s inability to meet its obligations as they come due without substantial asset sales or debt restructurings). Even if an organization’s liquidation is not imminent, there may be conditions or events that raise substantial doubt about the organization’s ability to continue as a going concern. In those situations, financial statements should continue to be prepared under the going concern basis of accounting. However, the user of the financial statements should be informed that these conditions exist. With the issuance of ASU 2014-15, there is now guidance in generally accepted accounting principles (GAAP) about management’s responsibility to evaluate whether there is substantial doubt about an organization’s ability to continue as a going concern, and if so, provide related footnote disclosures.
Auditors have always been required to consider the possible financial statement effects, including footnote disclosures, of uncertainties about an organization’s ability to continue as a going concern for a reasonable period of time (the American Institute of Certified Public Accountants’ Codification of Statements on Auditing Standards Section AU-C 570, The Auditor’s Consideration of an Entity’s Ability to Continue as a Going Concern). With the implementation of ASU 2014-15, management must now perform this analysis and determine the impact on the financial statements.
ASU 2014-15 now requires that management evaluate whether there are conditions or events, considered in the aggregate, that raise substantial doubt about the organization’s ability to continue as a going concern within one year after the date that the financial statements are issued (or, when applicable, within one year after the date that the financial statements are available to be issued for entities with conduit debt). Management should consider, among other issues, the following items in deciding if there is a substantial doubt about an organization’s ability to continue as a going concern (See the related article entitled “Assessing Financial Stability” for a checklist of items to consider):
Negative trends in operating results, such as a series of losses;
Loan defaults by the organization;
Denial of trade credit to the organization by its suppliers;
Uneconomical long-term commitments to which the organization is subjected; and
Legal proceedings against the organization.
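As an illustration only, the "considered in the aggregate" step above might be sketched as a simple collection of flagged indicators. The field names below are hypothetical shorthand for the bulleted items, and the actual evaluation under ASU 2014-15 is a matter of management judgment, not a formula:

```python
from dataclasses import dataclass

@dataclass
class GoingConcernIndicators:
    """Illustrative flags corresponding to the conditions listed above."""
    recurring_operating_losses: bool = False   # negative trends in operating results
    loan_defaults: bool = False
    trade_credit_denied: bool = False
    uneconomical_commitments: bool = False
    adverse_legal_proceedings: bool = False

def conditions_present(ind: GoingConcernIndicators) -> list[str]:
    """Collect the indicator names management would evaluate 'in the
    aggregate' when assessing substantial doubt under ASU 2014-15."""
    return [name for name, flagged in vars(ind).items() if flagged]

flags = GoingConcernIndicators(recurring_operating_losses=True, loan_defaults=True)
print(conditions_present(flags))  # ['recurring_operating_losses', 'loan_defaults']
```

The point of the sketch is that no single flag is decisive; management documents whichever conditions are present and then weighs them together, along with any mitigating plans, as described next.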
This evaluation should be based on relevant conditions and events that are known and reasonably foreseeable at the date that the financial statements are issued—or at the date that the financial statements are available to be issued. Substantial doubt about an organization’s ability to continue as a going concern exists when relevant conditions and events indicate that it’s probable the organization will be unable to meet its obligations as they become due within one year after the date that the financial statements are issued (or available to be issued). The term probable as defined in the Accounting Standards Codification (ASC) Topic 450 means that the future event or events are likely to occur.
The mitigating effect of management’s plans should be considered only to the extent that (1) it is probable that the plans will be effectively implemented and, if so, (2) it is probable that the plans will mitigate the conditions or events that raise substantial doubt about the organization’s ability to continue as a going concern. If conditions or events raise substantial doubt about an organization’s ability to continue as a going concern, but the substantial doubt is alleviated as a result of consideration of management’s plans, the organization should disclose information that enables the users of the financial statements to understand all of the following:
a. Principal conditions or events that raised substantial doubt about the organization’s ability to continue as a going concern (before consideration of management’s plans);
b. Management’s evaluation of the significance of those conditions or events in relation to the organization’s ability to meet its obligations; and
c. Management’s plans that alleviated substantial doubt about the organization’s ability to continue as a going concern.
It is also possible for nonprofits to mitigate their going concern status by having a third party guarantee their debts or agree to provide additional funds as needed. By doing so, a nonprofit’s management can be reasonably assured that they will remain functional for a reasonable period of time as stipulated by GAAP.
If conditions or events raise substantial doubt about an organization’s ability to continue as a going concern, and substantial doubt is not alleviated after consideration of management’s plans, an organization should include a statement in the footnotes indicating that there is substantial doubt about the organization’s ability to continue as a going concern within one year after the date that the financial statements are issued (or available to be issued). Additionally, the organization should disclose information that enables users of the financial statements to understand all of the following:
a. Principal conditions or events that raise substantial doubt about the organization’s ability to continue as a going concern;
b. Management’s evaluation of the significance of those conditions or events in relation to the organization’s ability to meet its obligations; and
c. Management’s plans that are intended to mitigate the conditions or events that raise substantial doubt about the organization’s ability to continue as a going concern.
In a situation where management believes the organization may no longer be a going concern, the question of whether the organization’s assets are impaired needs to be addressed, as it may call for writing down their carrying amounts to liquidation value. The underlying concept is that the value of an organization assumed to be a going concern is higher than its break-up value, since a going concern can potentially continue to fulfill its mission and serve the public good by providing programmatic activities to its beneficiaries.
ASU 2014-15 is effective for fiscal years ending after December 15, 2016, with early adoption permitted.
In the meantime, we encourage you to familiarize yourself with the FASB’s ASU 2014-15, which provides helpful guidance in GAAP about management’s responsibilities surrounding these issues.
Article adapted from a post on the Nonprofit Standard blog.
For more information, contact Lee Klumpp, director, at lklumpp@bdo.com.
Return to Table of Contents
Assessing Financial Stability By Dick Larkin, CPA, and Elizabeth Pilacik, CPA
As part of sound financial management practices, management has a responsibility to evaluate its nonprofit organization’s ability to continue as a going concern (i.e., the organization’s ability to continue operating both financially and programmatically for a reasonable period of time). This review by management should occur every time the financial statements are prepared and made available to the users of those financial statements (no less than annually). The checklist included below identifies key items, as well as other indicators, that management should consider in documenting their assessment. Use of this checklist provides both management and the organization’s external auditors with a basis for evaluating certain financial stability factors. For more background information, see Lee Klumpp’s article, “Going Concern: What Nonprofit Management Teams Need to Know” on page 6.
Going Concern Checklist for Nonprofit Organizations
Purpose
– To assist a nonprofit’s management team in evaluating and documenting its assessment of the organization’s ability to continue as a going concern. Of course, there may be other indicators not listed here that should also be considered in management’s analysis.
For each indicator that applies, describe what mitigating factors, if any, may lessen the impact on the organization’s financial stability. Conclude as to whether the evaluation of these key items, indicators and mitigating factors raise substantial doubt about the organization’s ability to continue as a going concern, and whether that substantial doubt can be alleviated. Ensure that management’s plans are documented and determine appropriate financial reporting and disclosure requirements.
Use the following key items, where appropriate, in evaluating the indicators:
the latest available interim financial statements and other key financial and operating data;
events after the statement of financial position (balance sheet) date;
minutes of meetings of the governing board and key committees of the board, including at least the executive, finance, audit and investment committees; and
correspondence with lawyers.
Community Support
Decline in utilization of the organization’s services by the local community (fewer students, patients, visitors, members, concertgoers or other users)
Decline in real dollar support through gifts (cash and in-kind), grants, bequests and member dues
Decline in hours of time made available by volunteers
Increasing incidence of turndown of grant requests
Increasing reliance on a small number of sources of support
Criticism of the organization or its programs by public figures or media, sanctions imposed by programmatic or charity regulators, or being found out of compliance with charity watchdogs’ standards
Concerns about the intent or ability of affiliated organizations to provide continuing support
Financial Stability
A growing percentage of expenditures for basic operations funded by restricted grants
A growing percentage of own-source unrestricted revenues committed to meet matching-fund requirements or needed to supplement restricted revenues for special projects
Operating reserves appear inadequate to support the size of the operations
Continuing decline or substantial deficit in operating income or unrestricted net assets
Continuing decline or overdraft in cash and cash equivalents
A net liability (unrestricted net asset deficit, exclusive of net equity in fixed assets) or net current liability (working capital deficiency) position
Significant deterioration in key ratios (e.g., debt to equity, gross profit from business-type activities and, if applicable, days’ sales in accounts receivable and inventory turnover)
Long overdue accounts, loans, pledges receivable or excessive inventory
Difficulty in obtaining trade credit or in paying bills in a timely manner (e.g., due to negative cash flows)
Organization is financing activities out of overdue suppliers and other creditors
Obligations
A growing debt burden
Fixed-term borrowings approaching maturity without realistic prospects of renewal or repayment
Violations of loan or grant covenants where appropriate waivers are not likely
A lender has refused to provide financing for operations or new activities, a line of credit or a guarantee has not been renewed or there have been loan defaults (principal or interest) or other deterioration of lender relationships
Obligations to fulfill uneconomic long-term contracts
Significant unfunded pension or other employee future benefit obligations
Legal proceedings against the entity that may, if successful, result in judgments that could not be met
Current Revenues and Costs
Cost per unit of service rising rapidly
Rapid increases in fixed cash costs (salaries and fringes, rent, debt service or others)
Number of employees per unit of service rising rapidly
User fee rates rising rapidly (unless resulting from a deliberate management decision to reduce the amount by which such fees are subsidized from other revenue sources)
Increasing incidence of revenue shortfalls
Recent declines in profit margins from business-type activities
A pattern of budget cost overruns, either overall or in specific programs/departments
Proceeds of long-term debt or sales of long-term investments being used for current operating purposes
Deferring needed maintenance of capital assets
Low or declining funding of replacement of capital assets near the end of their useful life
Failure to pay payroll or other taxes when due (Note: Also represents a possible personal liability)
Inability to pay salaries or other expenses when due, or borrowing to cover such amounts shortly before payment
Borrowing of cash or other assets from restricted funds, or other diversions of restricted resources to inappropriate purposes
Management Practices
Earnings on investments declining disproportionately to general trends of investment yields
Interest rates charged by lenders increasing disproportionately to general trends of interest rates, unwillingness of lenders to lend to organization or insistence by lenders on burdensome debt covenants
Levels of receivables, inventory or prepaid expenses increasing faster than related activity would dictate
Heavy reliance on the success of a significant project or new product
Increasing incidence of funding source challenge or disallowance of expenses
Loss of key employees or volunteers
Violation of laws, regulations or other statutory requirements
Financial and operating data provided to board members and management is delayed, unclear, or incomplete; explanations of key items and variances are unavailable or of doubtful validity
Failure on the part of board members and/or management to understand and accept the seriousness of the financial situation
Industry and Environmental Factors
Recent financial failures of similar organizations
Recent technological developments that threaten a key program
Recent changes in legislation or government policy that could have a significant adverse impact on the entity
Key customers or suppliers have been lost or experienced financial difficulty
Shortages of important supplies
Labor disputes, strikes or work stoppages involving the organization or its key suppliers
Uninsured or underinsured catastrophe, such as fire, drought, earthquake or flood
Significant negative consequences from environmental remediation problems
Threats of receivership or forced bankruptcy
Are there any other indications that the organization may not be able to continue as a going concern? If so, describe:
Conclusion
– Based on management’s evaluation of the noted key items, indicators and mitigating factors, management should determine whether substantial doubt about the organization’s ability to continue as a going concern does / does not exist. Where substantial doubt does exist, consideration of management’s plans needs to be performed to determine whether these plans have / have not alleviated the substantial doubt. Management’s plans should be clearly documented and supported by appropriate and verifiable information.
Checklist completed by _________________________________
Date ____________________
For more information, contact Dick Larkin, director, at dlarkin@bdo.com, or Elizabeth Pilacik, director, at epilacik@bdo.com.
Return to Table of Contents
2014 Changes to Form 990 and Schedules By R. Michael Sorrells, CPA
As in the past few years, most of the 2014 changes to the Form 990 and its many schedules are fairly minor. There is, however, one exception—Schedule A: the schedule required for all Section 501(c)(3) public charities, which has gone from four to eight pages and now includes eighteen pages of instructions.
Schedule A Changes
All public charities must check a box on Schedule A indicating why they are a public charity and, based upon that, may have to complete one of two possible numerical support tests to prove public support over a five-year period including the current year. Certain organizations, such as schools and colleges, hospitals and Section 509(a)(3) supporting organizations, do not have to complete either of the support schedules. Supporting organizations are precisely the reason Schedule A has expanded this year. The Internal Revenue Service (IRS) and Congress see significant opportunity for abuse with these organizations and have decided to use Schedule A to gather a large amount of information from them to determine whether they are in compliance. Other (non-supporting) organizations will see no change to the information required on Schedule A.
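The basic mechanics of the five-year numerical support tests can be sketched as follows. This is a simplified illustration, not tax advice: it assumes the one-third public support threshold of the Section 170(b)(1)(A)(vi) test and ignores refinements such as the cap on support counted from any single large donor and the facts-and-circumstances fallback, all of which the Schedule A instructions cover in detail:

```python
def public_support_percentage(public_support: list[float],
                              total_support: list[float]) -> float:
    """Simplified five-year public support percentage for the Schedule A
    numerical support test (illustrative; omits large-donor caps and the
    facts-and-circumstances alternative)."""
    assert len(public_support) == len(total_support) == 5, "test spans five years"
    return sum(public_support) / sum(total_support)

pct = public_support_percentage(
    [40_000, 45_000, 50_000, 55_000, 60_000],    # gifts, grants, dues, etc.
    [100_000, 110_000, 120_000, 130_000, 140_000],
)
print(f"{pct:.1%}, passes one-third test: {pct >= 1/3}")  # 41.7%, passes one-third test: True
```

The percentage is computed over the aggregate of all five years, not year by year, which is why a single weak year does not by itself jeopardize public charity status.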
As the name implies, supporting organizations are organized to provide financial or programmatic support for one or more publicly supported organizations. There are four types of supporting organizations: Type I, Type II, Type III Functionally Integrated and Type III Non-Functionally Integrated. Type I and Type II organizations are both controlled by the supported organization(s), while both Type III organizations are not. The Type III organizations (especially Non-Functionally Integrated Type IIIs) pose the most risk in the eyes of the IRS and Congress. Thus, Type III Non-Functionally Integrated organizations are subject to a set of rules very similar to those for private foundations in terms of making minimum distributions to their supported organization(s). All supporting organizations are subject to a variety of rules concerning transactions with, and control by, disqualified persons. Some supporting organizations are also subject to the private foundation excess business holding rules of Section 4943.
The 2014 Schedule A has a battery of questions for all supporting organizations regardless of type that go to the heart of the various rules governing them. However, for Type III Non-Functionally Integrated organizations, there are two pages of fairly complex financial data required in order to prove that distributions have been made at the proper level.
All supporting organizations should take a careful look at the new Schedule A so they are not caught by surprise by the disclosures required. Supporting organizations may wish to discuss their status (type) with outside advisors before completing the various parts of Schedule A.
Other 2014 Form 990 and Schedule Changes
Part VII and Schedule J Compensation:
The Form 990 and Schedule J instructions now clearly state that any deferred income actually paid within 2½ months of the end of the calendar year included in the fiscal year being reported, and which is included in reportable income (W-2) on Form 990, is not reported as deferred income on Form 990 Part VII or on Schedule J.
Group Returns:
In Form 990 Appendix E, Group Returns, new instructions are provided for group returns with Section 509(a)(3) supporting organizations.
Schedule L Transactions with Interested Persons:
This four-part schedule discloses various transactions between nonprofit organizations and various interested persons (“insiders”). Previously, each section had slightly different definitions for interested persons. The 2014 Schedule L instructions now say that the definition for insiders is “harmonized” for Parts II, III, and IV, with some special definitions still applicable to Part III, Grants or Assistance Benefiting Interested Persons. Additionally, for Part IV, Business Transactions with Interested Persons, transactions with publicly traded companies in the ordinary course of business, on the same (or better for the organization) terms as the company offers to the general public, are excluded from being reported here.
The IRS expects organizations to make a reasonable effort to obtain information about such transactions with interested persons. For 2014, the reasonable effort definition has been harmonized for all four types of transactions, so that the same efforts will pass muster for each. An example of a reasonable effort is for the organization to distribute a questionnaire annually to each person it believes to be an interested person requesting information relevant to determining if the transaction is reportable. See Schedule L instructions for more information.
For more information, contact R. Michael Sorrells, national director, Nonprofit Tax Services, at msorrells@bdo.com.
Return to Table of Contents
Effectively Communicating Your Mission and Accomplishments: Form 990 and Beyond By Joyce Underwood, CPA
The Form 990 plays an important part in communicating your nonprofit organization’s mission and accomplishments to the world, and is also a key means for promoting your organization. Filing the Form 990 satisfies your tax compliance requirements, but it is also a public document that is distributed widely and repackaged by third parties on an ever-increasing basis. Ensuring that the information in your Form 990 is accurate and consistent with your other communications about your organization is important.
You want to ensure the messaging aligns with your website and social media communications and conforms to your intended public image. You can describe your organization in a way that attracts a certain type of supporter, or speaks to an intended generation of donor. You will also want to describe your accomplishments in terms of effective outcomes, and consider whether to focus your message on the giver or on the receiver of your resources. These days, more and more donors seek to support organizations with which they find a personal connection, and demonstrating outcomes is one of the most effective ways of showing the progress and impact your organization is making, which helps to create those connections.
The first page of the Form 990 (Part I, line 1) is intended to highlight the organization’s mission or most significant activities in condensed form. You should be direct but brief here, providing succinct wording to describe the organization. Page two, the Statement of Program Service Accomplishments (Part III), is where you can shine. Part III requires more detailed information about the mission and activities, and provides an opportunity to further promote the organization and capture interest. The mission described here should reflect the language in the approved mission adopted by the governing body, either initially or through amendment.
It is important to communicate any discontinued, changed or new activities to the Internal Revenue Service (IRS), as your tax exemption depends on your approved mission and activities. If your mission or activities evolve over time from their original intent and are not communicated, or are no longer appropriate for your status, you could face tax exemption issues.
Organizations are also required to describe their top three program activities, as measured by expenses, and to list any other programs carried on; you can describe all program activities if you prefer. Public charities and public welfare organizations must also include the revenue and expenses attributable to these programs, but even if your organization is neither, you are still required to describe your activities. The IRS requests that you describe program service accomplishments through specific measurements such as clients served, days of care provided, number of sessions or events held, and/or publications issued; describe the activity’s objective, both for this time period and the longer term, if the output is intangible, such as in a research activity; and give reasonable estimates for any statistical information if exact figures are not readily available, indicating that the information is estimated. As long as you satisfy these requirements, you are free to word your descriptions to present your organization in its best light or to speak to another audience.
Generational Issues
An organization should consider its target audiences in addition to the IRS. Have you considered whether you are targeting Baby Boomers, Gen-Xers or Millennials? There has been a lot of discussion in recent years regarding the different behaviors and preferences of individuals based upon their age and experience. Focusing on these characteristics in your communication style can have an impact on attracting donors and program participants. In designing your communication, be sure to ask yourself the following questions:
Do the people you want to attract respond better to certain types of communications?
Does your mission seem meaningful and engaging? Does it address issues that a specific generation is attracted to or has concerns about?
Do your descriptions help a donor feel like they can be an active participant in your mission?
Do you provide validation to donors in your description of accomplishments that helps them see the outcomes of their support?
Will referencing support of a “member” or “partner” help a donor feel they can become involved?
Are you using the right language to attract donors that may have an interest in giving through a will or living trust?
Would it be effective to describe and provide links to more far-reaching digital tools, such as social media campaigns, to attract a large number of small donations?
Do you effectively utilize a blog, Twitter or other social networking sites?
Do you know how to speak to and attract a long-term donor?
Do you know how to send a message that will get noticed, catch on and spread the word?
Giving your readers information about what you do with their generational needs and concerns in mind can be helpful in connecting with them as you describe your mission and activities in your Form 990.
Measuring Impact
Many donors and charity evaluators have become highly focused on outcomes in recent years. Since Form 990 information is more readily available and increasingly in a readable format, many industry partners are jumping on the bandwagon, whether for philanthropic or commercial purposes. Data about organizations is now gathered and analyzed to benchmark your nonprofit against other organizations. At the same time, others are compiling and analyzing data and demanding more concrete information about performance. Organizations are now often expected to devote resources to measuring and communicating how they have used funding to produce outcomes, as well as to justifying that those outcomes are appropriate. It can be a balancing act between serving your mission and satisfying the new performance-oriented donor or member. An organization that does not focus on and effectively communicate its results may be at a disadvantage when seeking grants, contributions, dues and other resources.
Share your Story
Storytelling can be an effective communication tool. Stories influence readers and help people remember facts and circumstances. As such, they provide nonprofits an opportunity to better connect with donors, which can prompt them to be more generous. Stories can also instill a sense of urgency or need, as they have the power to paint a picture. Be sure to state what you stand for and connect it with an image or activity readers will remember. To accomplish this, you may want to include other team members’ perspectives on your organization in the Form 990 preparation process. Many organizations now have a final review by their development department or social media team to help provide this collaborative overview.
Above all, whatever you use to describe your organization for Form 990 purposes, you should always consider your larger audience. Although you may want to be brief and quickly scratch off the Form 990 preparation from your to-do list, targeting your readers and telling your story can provide a significant advantage to nonprofits. But remember, you’re also talking to the IRS.
For more information, contact Joyce Underwood, director, Nonprofit Tax Services, at junderwood@bdo.com.
Return to Table of Contents
BDO’s Rebekuh Eley Appointed to AICPA’s Exempt Organization Taxation Technical Resource Panel
BDO is pleased to announce that Rebekuh Eley, tax senior director in Chicago and a member of the Nonprofit & Education practice, has been appointed to serve as an associate member on the American Institute of Certified Public Accountants (AICPA) Exempt Organization (EO) Taxation Technical Resource Panel (TRP). In this role, she will contribute to important panel projects, including legislative and regulatory activities, as well as provide tools, information and counsel to other members in the practice regarding improvements to tax practices and procedures.
Eley becomes one of just 30 associate members in this influential group, where she joins BDO tax partner Jeff Schragg. Also an associate member, Schragg has served on the EO TRP for several years, and is active in a number of the AICPA’s nonprofit initiatives. He serves as an advisory council member for the AICPA’s new Not-For-Profit Member Section, and is a member of the Steering Committee for the AICPA Not-For-Profit Industry Conference.
Eley, Schragg and the rest of the panel are currently tackling various advocacy and member service projects, including the Corporate Governance task force, EO Accounting Method Change task force, EO E-Filings task force and Form 990 and Instructions task force. The panel also maintains a direct dialogue with the EO Commissioner of the IRS to inquire about IRS initiatives and focus for the year, as well as additional information and guidance from the agency.
“It’s an honor to be a part of such a great group of thought leaders and learn from their experience, as well as discuss the trends and changes within the industry,” said Eley. “I look forward to working both within the AICPA and in conjunction with the IRS to further advance their important EO initiatives.”
“We are very pleased to have Rebekuh join Jeff on this important industry panel,” said Bill Eisig, National Nonprofit & Education Practice Leader and National Director of the BDO Institute for Nonprofit Excellence℠. “This is an invaluable opportunity for these BDO professionals to not only grow their industry knowledge and better serve our clients, but also to make a meaningful difference in the nonprofit sector overall.”
BDO’s Neena Masih, assurance partner, was also recently appointed to the AICPA’s Not-For-Profit Task Force on Revenue Recognition, where she liaises with other members of the task force and coordinates with the AICPA to help nonprofit organizations apply the provisions of the Financial Accounting Standards Board’s (FASB) ASU 2014-09, Revenue from Contracts with Customers, which addresses revenue recognition.
Return to Table of Contents
Nonprofit Organizations and the Tangible Property Regulations By Nathan Clark, CPA
What are the tangible property regulations?
These regulations were issued by the Internal Revenue Service (IRS) to provide guidance for the acquisition, production or improvement of tangible property—buildings, furniture, fixtures and equipment assets, typically—which must be capitalized and depreciated, deducted in the future or deducted immediately. On a more granular level, these rules dictate how to establish a basic capitalization policy (“de minimis expenses”), identify repair and maintenance costs, account for materials and supplies, determine which costs must be capitalized for the improvement or acquisition of buildings and equipment, and when disposed property may be written off.
These regulations apply equally to all businesses subject to U.S. tax law, regardless of for-profit or exempt status, organization size, legal entity, or industry. They apply to taxable years beginning on or after January 1, 2014. However, in certain situations, the regulations could affect capitalization of costs incurred in years prior to 2014, regardless of a tax return’s normal statute period.
Prior to this new guidance, the previous regulations governing tangible property were a subject of constant disagreement between taxpayers and the IRS, which led to a patchwork of court cases, rulings and other guidance that was not always consistent, nor easily applicable across industries. The IRS, with considerable feedback and input from taxpayers, rewrote these regulations through proposed and temporary versions before finalizing them. The prior guidance applied to nonprofits just as the new regulations do. Many for-profit and nonprofit organizations are addressing these regulations now because of their broad application and added complexity relative to the old guidance.
But My Organization is Tax-Exempt!
The primary impact of the tangible property regulations is the capitalization of tangible property on the statement of financial position (balance sheet) and the computation of taxable income. Expenditures could be capitalized as improvements to existing buildings, leasehold improvements or equipment assets and deducted over time through depreciation, or conversely, deducted as a repair and maintenance expense, de minimis property, or as materials and supplies.
Nonprofits that pay unrelated business income tax, have taxable subsidiaries, or lose their tax exempt status will need to consider the impact of these regulations and determine if there is a change to current methods of calculating taxable income. Amounts may be re-characterized as capital improvements that were historically deducted, or vice versa.
For example, consider a nonprofit university that operates a convention center and hotel, generating unrelated business income tax for the university. In 2012, the hotel underwent a renovation to repaint guest rooms, replace broken lighting and plumbing fixtures, and replace the roof, among other miscellaneous expenditures. The university must compare the facts and circumstances of the expenditures to the regulations to determine whether they were improvements that must be capitalized and depreciated, or repair costs that were deductible when incurred. Depending on the scope of the work performed, these amounts could have been capital improvements or deductible repairs. The regulations introduce the concept of a “unit of property,” which identifies the asset that is being repaired or improved. Historically, an entire building was the unit of property; now the regulations subdivide building property into nine different “building systems.” When evaluating whether an expenditure is a repair or an improvement, it must be compared to the relevant building system rather than to the entire building. This change in how the asset is defined can result in expenditures being characterized as capital or deductible differently than they were historically.
Optional Elections
Elections are a formalized manner of adopting tax return positions provided by the Internal Revenue Code and regulations. There are three new elections in the regulations that each nonprofit should consider making. All of the elections described below require a statement to be attached to the organization’s timely filed federal income tax return, including extensions. Further, these are annual elections that will need to be considered for 2014 and every subsequent tax year.
De Minimis Expensing Safe Harbor
Most organizations have historically had a capitalization policy or practice where amounts beneath a specified amount were not capitalized as fixed assets. Prior to these regulations, there was no guidance on establishing such a practice. The regulations introduce the De Minimis Safe Harbor to establish a basic capitalization policy. The key requirements are as follows:
The organization must have a capitalization policy in place at the beginning of the year specifying that amounts incurred for the purchase of tangible property beneath a fixed dollar amount will not be capitalized for financial accounting or tax purposes;
The capitalization threshold cannot exceed $5,000 if the organization’s financial statements are audited by external auditors, or $500 if the organization’s financial statements are not audited; and
The policy must be in writing if the organization has an audited financial statement.
If an organization follows the above practices—and most importantly follows the practice equally for financial accounting and tax purposes—then the IRS will not question the expensing of amounts beneath the threshold. The capitalization threshold may change (but not exceed the safe harbor limits above) as necessary to meet the changing business practices and needs of the organization.
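The safe harbor requirements above lend themselves to a simple check. The following sketch (the function names and structure are illustrative, not part of the regulations; the dollar ceilings come from the safe harbor described above) tests whether a given purchase may be expensed:

```python
def de_minimis_limit(has_audited_financials: bool) -> int:
    """Per-item de minimis safe harbor ceiling in dollars.

    The written-policy threshold may not exceed $5,000 per item for
    organizations with audited financial statements, or $500 otherwise.
    """
    return 5000 if has_audited_financials else 500

def may_expense(item_cost: float, policy_threshold: float,
                has_audited_financials: bool) -> bool:
    """True if an item can be expensed under the safe harbor: the
    organization's own policy threshold must not exceed the safe
    harbor ceiling, and the item must fall under that threshold."""
    return (policy_threshold <= de_minimis_limit(has_audited_financials)
            and item_cost <= policy_threshold)

# An audited organization with a $5,000 policy may expense a $3,200 item:
print(may_expense(3200, 5000, True))   # True
# An unaudited organization's $5,000 policy exceeds the $500 ceiling:
print(may_expense(450, 5000, False))   # False
```

Note that the second example fails not because of the item's cost, but because the policy threshold itself exceeds what the safe harbor permits for an unaudited organization.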
Small Taxpayer Safe Harbor
The regulations provide an election for a simplified repair versus improvement analysis for small taxpayers. A small taxpayer, for purposes of this safe harbor, is an organization with average annual revenue for the prior three years of not more than $10 million. Small taxpayers meeting the revenue threshold may expense costs to repair, improve or maintain building property(s) if those expenditures in aggregate, per building, do not exceed the lesser of $10,000 or two-percent of the original building cost. This simplified analysis may be applied to each building a small taxpayer owns that has an original cost (or total amount of rent payments expected to be paid by the lessee under the term of the lease, including renewal periods) of not more than $1 million.
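As an illustrative sketch of the thresholds just described (the function names and structure are our own; the dollar figures come from the safe harbor above):

```python
def small_taxpayer_ceiling(building_cost: float) -> float:
    """Per-building annual ceiling on repair/improvement/maintenance
    costs: the lesser of $10,000 or 2 percent of the building's
    original cost."""
    return min(10_000, 0.02 * building_cost)

def qualifies(avg_annual_revenue: float, building_cost: float,
              annual_rim_costs: float) -> bool:
    """Check the three conditions of the small taxpayer safe harbor:
    the $10 million average revenue test, the $1 million cap on the
    building's original cost, and the per-building spending ceiling."""
    return (avg_annual_revenue <= 10_000_000
            and building_cost <= 1_000_000
            and annual_rim_costs <= small_taxpayer_ceiling(building_cost))

# A $400,000 building: ceiling is the lesser of $10,000 and 2% * $400,000
print(small_taxpayer_ceiling(400_000))        # 8000.0
print(qualifies(2_500_000, 400_000, 7_500))   # True
print(qualifies(2_500_000, 400_000, 9_000))   # False (exceeds $8,000)
```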
Conformity to Book Capitalization of Repair and Maintenance
It is common for an amount to be capitalized for financial accounting purposes but deductible under the regulations for taxable income purposes, or vice versa. Another means of simplifying the adoption of the regulations is an election that allows a taxpayer to capitalize, for taxable income purposes, amounts that would otherwise be deductible, provided those amounts are capitalized for financial accounting purposes.
In addition to the formal elections discussed above, the regulations contain numerous other elections, not discussed herein, that are made simply by taking a position on the organization’s tax return.
Manner of Adoption
The regulations, as stated above, are adopted through elections where indicated and by filing Form(s) 3115, Application for Change in Accounting Method, as prescribed by the IRS in separate guidance. Consideration should be given to the taxable situation and nature of each nonprofit’s activities to determine whether filing of Form 3115 is necessary. A Form 3115 is generally, but not always, filed with a retroactive catch-up adjustment that is the difference between the deductions claimed to date under the old method and the deductions that should have been claimed to date under the new method. This adjustment will factor in all amounts from prior years unless otherwise specified by IRS guidance. However, adjustments for certain accounting method changes are required to be calculated on a prospective basis. Filing a Form 3115 provides protection against IRS audit penalties where the old method of accounting could have resulted in unfavorable IRS audit adjustments.
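The retroactive catch-up adjustment described above is, at its core, a simple cumulative difference. A minimal sketch, assuming a sign convention (ours, for illustration) in which a positive result is an additional deduction:

```python
def catch_up_adjustment(deducted_old_method: float,
                        deductible_new_method: float) -> float:
    """Retroactive catch-up adjustment filed with a Form 3115: the
    cumulative difference between deductions that should have been
    claimed to date under the new method and deductions actually
    claimed under the old method. Positive = additional deduction;
    negative = amounts to take back into income."""
    return deductible_new_method - deducted_old_method

# Hypothetical: $120,000 should have been deducted to date as repairs
# under the new method, but only $45,000 of depreciation was claimed
# under the old method:
print(catch_up_adjustment(45_000, 120_000))   # 75000 (favorable)
```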
The IRS recently provided relief to simplify the procedures for small businesses making accounting method changes. A small business, for this purpose, is defined as a trade or business with average total revenues for the prior three years of not more than $10 million or total assets not more than $10 million. If either test is met, then a nonprofit may adopt the regulations through these simplified procedures. The new procedures allow small businesses to change a method of accounting on a prospective basis without filing Form 3115 or calculating retroactive adjustments. No formal election or statement is required to be filed to request a change in accounting method under these simplified procedures. Eligible small businesses following the simplified adoption procedures will compute taxable income starting with the 2014 tax returns according to the regulations without having to file a Form 3115. However, taxpayers will still be subject to additional taxes, penalties and interest if, upon IRS exam, amounts were deducted in pre-2014 years that should have been capitalized and deducted in a later year or depreciated over time.
Plan for the Future
Organizations should consider how they are affected by these regulations currently, as well as potential future implications. An organization that does not currently have unrelated business taxable income may feel little impact from the regulations from an income tax perspective, and there will be little incentive for the IRS to enforce the regulations for nonprofits where no income tax is paid. However, a nonprofit that pays unrelated business income tax, has taxable subsidiaries, or loses its tax-exempt status should absolutely address these regulations now. A nonprofit that expects to become subject to unrelated business income tax in future years should anticipate filing Form 3115, even if it is not currently subject to income tax. It is generally recommended that an organization make the de minimis safe harbor election regardless of whether it currently pays unrelated business income tax.
Consideration should also be given to whether these regulations could change the way overhead percentages are calculated for benchmarking purposes; the statement of financial position (balance sheet) implications for assets or activities potentially eligible for a future sale, spin-off or other separation, which could raise tax concerns upon separation; and the taxable income implications for activities that cycle in and out of taxable status. The tangible property regulations may also result in changes to capitalization practices for financial accounting purposes.
The regulations provide broad ranging guidance on what must be capitalized as tangible property. However, they do not change any depreciation rules. For amounts required to be capitalized under these regulations, nonprofits will need to continue to apply the appropriate tax depreciation rules to elect appropriate tax depreciation methods.
Although these regulations affect many issues related to tangible property, they offer flexibility and options for determining the best course of action. Nonprofits, although not affected in the same manner as for-profit entities, are nevertheless subject to these regulations, and should discuss with their tax adviser the potential implications in order to plan for and mitigate unexpected and potentially adverse consequences.
For more information, contact Nathan Clark, senior tax director, at nclark@bdo.com.
Return to Table of Contents
2015 Outlook: Nonprofit Healthcare Organizations By Adam Cole, CPA, Cortney Marcin, and Patrick Pilch, CPA, MBA
Pressures to perform and transform, merge, acquire or consolidate, and protect are building and converging on nonprofit healthcare organizations. As such, 2015 promises to be a seminal year for these organizations and the communities they serve.
According to the American Hospital Association’s 2013 Hospital Statistics, there are about 5,686 hospitals in the U.S., 2,904 of which are nongovernment nonprofit community hospitals and 1,010 are tax-exempt state and local government community hospitals.
The pressure to perform and transform is informed by numerous data points that are combining to make the current operating model structure unsustainable:
The Medicare Payment Advisory Commission’s 2014 Data Book shows a demonstrable decline in Medicare inpatient and outpatient margins from 2002 to 2012. For that period, the overall hospital Medicare margin decreased from 2.2 percent to a negative 5.4 percent.
Modern Healthcare analyzed earnings for approximately 200 hospitals and health systems and included a mix of nonprofit and investor-owned through 2013. The study revealed a shrinking of margins for all hospitals to 3.1 percent in 2013.
A Moody’s report on operating margins of nonprofit hospitals and health systems for 2013 showed overall operating margin deterioration to 2.2 percent.
The typical hospital’s payer mix is 40 percent Medicare. This year, 8 percent of Medicare payments to hospitals will be value- or risk-based. Moreover, in February, Health and Human Services (HHS) announced a goal of tying 30 percent of existing fee-for-service Medicare payments to value-based payment models such as Accountable Care Organizations (ACOs) or bundled payment models by the end of 2016, and ultimately 90 percent of all traditional Medicare payments by 2018.
The trend is accelerating, and makes the current operating model structure unsustainable. The credit rating agencies know this. For the third consecutive year, the three major credit rating agencies all forecast a negative outlook for the nonprofit healthcare and hospital sector. They pointed to declining cash flows from operations because of lower revenues associated with the shift to risk-based reimbursement from volume-driven fee-for-service reimbursement, coupled with the rising costs of operations. The agencies also anticipate further rating downgrades due to the continuing challenges associated with the implementation of the Affordable Care Act (ACA).
The shift away from traditional fee-for-service to value-based payments has been underway for some time through various innovation models, and has accelerated under the ACA. The acceleration has been the result of a greater emphasis being placed on overall population health, preventative care, reductions in hospital admissions and readmissions, and providing services in lower cost of care settings to reduce overall costs.
Adding to the uncertainty is the upcoming U.S. Supreme Court decision in the King v. Burwell case, which may overturn the ACA’s provision for insurance coverage purchased through federally operated state exchanges.
One might ask, “Have similar dynamics occurred in other industries?” The tipping point faced by the healthcare industry is similar to that faced by another capital-intensive industry in the late 20th century: U.S. steel. In that emblematic case study, new, more nimble competitors eroded the country’s global market advantage by introducing more modern methods and technologies. A flurry of capital restructuring and operational redesign followed, and the industry ultimately shifted to more efficient mini-mill structures. By the 1980s, the rush of competition not only forced the shutdown of aging mills, but also began to threaten some of the more thinly capitalized new entrants.
We posit that 2015’s outlook for nonprofit healthcare organizations will reflect a similar dynamic; nonprofit hospitals will need to access the right capital aligned with the new—and different—operating and delivery models, and they must monitor and adapt to outside factors that will impact access to this capital, as well as their operations and reimbursements. New regulations and stressed government budgets threaten access to grants and tax-exempt bonds, and even tax-exempt status itself. Compliance will be critical in the face of these evolving requirements and new scrutiny.
Typically, for nonprofit healthcare organizations, capital is provided through tax-exempt bond financing, charitable contributions through foundation development and, occasionally, government or private grants. Tax-exempt bond financing represents the primary source of capital. Rates for these borrowings are lower than their taxable comparables, but easy access to such financing is challenging given the negative outlook for reimbursement and the sector, as well as the need for more efficient capital and scale to redesign nonprofit healthcare organizations. The need for capital has accelerated mergers and acquisitions and consolidation activity. For the last several years, this activity has been robust among nonprofit hospitals and within the social service sector. In addition to traditional M&A activity, nonprofit healthcare organizations have been pursuing risk sharing arrangements through ACOs, bundled payment arrangements and managed care contracts. Other capital arrangements through joint ventures with Real Estate Investment Trusts (REITs) also create access to new sources of capital. We expect these activities to continue, and even accelerate.
Potential regulatory changes may place nonprofit healthcare organizations’ tax exempt status at risk in two important ways.
First, in return for tax-exempt status, federal law requires nonprofit health systems to provide services to the poor and uninsured/underinsured, as well as to provide community benefits to the general public. The ACA contains a provision that requires hospitals to make “reasonable efforts” to assess whether patients qualify for financial assistance before taking an aggressive step like filing a lawsuit. This provision has bipartisan support in Congress. Recent news reports suggest that the government intends to enforce this provision in the form of significant penalties to those organizations that appear to cross the line. While a loss of tax exempt status has not yet happened under this provision, it remains highly possible this provision will impact the tax exempt status of organizations that do not comply.
Second, billions of dollars in tax exemptions granted to nonprofit hospitals are being challenged by regulators and politicians as federal, state and municipal budgets have been strained significantly since the recession. Nonprofit healthcare organizations need to ensure that they are in compliance with the new provisions of the ACA, as well as state and local regulations, in order to protect their nonprofit status.
There are myriad additional regulatory and compliance requirements taking effect in 2015, including notable changes impacting federal funding as affected by the Office of Management and Budget’s (OMB) new Uniform Guidance. The guidance is the most comprehensive set of changes to occur to the OMB regulations in decades, and will impose more consistent guidelines on both grant recipients and organizations issuing grants to sub-recipients, which is more common with certain healthcare funding arrangements. Compliance will be critical in the face of these evolving requirements and new scrutiny.
So, what should nonprofit healthcare organizations consider for 2015 and beyond?
1. Understand your cost of care and cost of operations.
This is often easier said than done. Care delivery is complex and fragmented, and outcomes are often disassociated from financial and market analytics. There is significant market opportunity to reposition nonprofit healthcare organizations for future sustainability.
2. Understand your investment thesis.
Nonprofit healthcare CFOs can take a page from global corporations whose CFOs must evaluate their enterprises from a portfolio perspective. In the same way that steel industry CFOs redeployed capital into new mini-mill models, so too can healthcare providers examine their assets in terms of ROI. Reduce or moderate investment in lower ROI assets in favor of aligning investments with higher-ROI businesses in emerging or growing markets or assets. Think care design and new risk-based models.
3. Understand your market and your “customer.”
Nonprofit healthcare leaders need to understand the implications of the risk shifts from payer to provider to consumer, as well as the opportunities for investing in a customer-focused relationship. Understanding the market need through visual analytics will serve nonprofit healthcare organizations well in redesigning their models around the population they serve. Tapping into best practices from consumer-focused industries will be helpful.
4. Understand who you are and what your organization means.
Do you have the vision, leadership, appropriate resources, ideas, capital and partners to mitigate the risks and take advantage of the opportunity for your organization? Each CFO must ask himself or herself: “Where are the gaps? Can we execute the change?”
5. Understand what your future state could look like—the art of the possible.
Look to the future and assess what will be the successful models in five or 10 years, taking into account the first four recommendations.
The next step is to get started!
For more information, contact Adam Cole, partner, at acole@bdo.com, Cortney Marcin, manager, at cmarcin@bdo.com, or Patrick Pilch, managing director, The BDO Center for Healthcare Excellence & Innovation, at ppilch@bdo.com.
Return to Table of Contents
Nonprofits Beware: The Hidden Costs in an Office Lease By Patrick Gioffre, The EZRA Company
Have you ever heard of a “gross up” clause? Do you know why operating expense provisions could cost you thousands? There can be inconsistencies and dangers lurking in an office lease, and nonprofits should be aware of the ways some building owners aim to pass additional expenses on to tenants. Here’s what to look for and how to prepare.
The financial components of most leases for office space include a base rent, an annual escalator, a tenant improvement allowance, perhaps some free rent and a pass-through of increases in operating expenses over a base year.
For example, a 10-year office lease could have a base rent of $41 per square foot (which is increased by 2 percent per year), 10 months of free rent, an improvement allowance of $60 per square foot to be used by the tenant to build out its space and a pass-through to the tenant of increases in operating expenses over a base year of 2015.
All of the financial components of this sample deal are easy to estimate, except the increase in operating expenses over the 2015 base year. If not properly negotiated, increases in operating expenses can be significant. Let’s consider the operating expense component of the lease a little more closely.
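As an illustration of how the predictable components can be modeled, here is a sketch of total base rent over the sample 10-year term. It assumes a 10,000-square-foot lease and that the 10 free months are credited at the year-one rate; both assumptions are ours, while the other figures come from the example above.

```python
def lease_base_rent_total(sq_ft: float, base_rate: float, years: int,
                          escalator: float, free_rent_months: int) -> float:
    """Total base rent over the lease term: the per-square-foot rate
    escalates annually, and free-rent months (assumed here to apply at
    the start, at the year-one rate) are credited against the total."""
    total = sum(sq_ft * base_rate * (1 + escalator) ** year
                for year in range(years))
    free_credit = sq_ft * base_rate * free_rent_months / 12
    return total - free_credit

# 10,000 sq ft at $41/sq ft, 2% annual escalator, 10-year term,
# 10 months of free rent:
print(round(lease_base_rent_total(10_000, 41, 10, 0.02, 10)))
```

This is the easy part; as the article notes, the operating expense pass-through is the component that resists simple estimation.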
Operating Expense Provisions
Most leases have a provision that allows the landlord to pass its pro rata increase in operating expenses to the tenant. If the tenant occupies 10 percent of the building, the landlord can pass to the tenant 10 percent of all increases in operating expenses over a base year (i.e., usually the initial rental year).
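As a sketch of that pro rata calculation (the figures in the usage example are hypothetical):

```python
def opex_pass_through(tenant_sq_ft: float, building_sq_ft: float,
                      current_year_opex: float,
                      base_year_opex: float) -> float:
    """Tenant's pro rata share of the increase in building operating
    expenses over the base year (floored at zero: no credit is assumed
    if expenses fall below the base year)."""
    share = tenant_sq_ft / building_sq_ft
    return share * max(0.0, current_year_opex - base_year_opex)

# A tenant occupying 10% of a building where operating expenses rose
# from $2,000,000 in the base year to $2,150,000:
print(opex_pass_through(10_000, 100_000, 2_150_000, 2_000_000))  # 15000.0
```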
It is the landlord’s objective to define operating expenses as broadly as possible to provide the landlord with maximum flexibility to pass through any and all building costs. Operating expenses include real estate taxes, insurance, and common-area maintenance expenses that the landlord incurs to maintain the building, including:
janitorial services
gas, water and electricity utilities
repairs and maintenance to the building systems such as elevators, heating, ventilating and air conditioning (HVAC) systems, electrical systems and plumbing
property management fees
labor
If properly negotiated by the tenant’s real estate agent, the operating expense provisions should have numerous exclusions and adjustments, which are often lengthy and include complex mathematical formulas and highly detailed definitions that can run for pages. Unfortunately, some real estate agents don’t have a comprehensive understanding of operating expense provisions, so the documents are often not negotiated very well, if at all. The result is that the tenant pays a significant amount more than is fair or necessary over the term of the lease.
For example, if the base year is understated by $1 per square foot in favor of the landlord on a 10,000-square-foot lease for a 10-year term, this item alone would cost the tenant $100,000 over the term of the lease ($1 × 10,000 square feet × 10 years). It is typically the landlord’s goal to keep the base-year expenses as low as possible, in order to pass more operating expenses to tenants.
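The pass-through mechanics behind a figure like this are easy to model. A rough sketch (the function name and per-square-foot expense figures are hypothetical illustrations, not from any actual lease):

```python
def annual_pass_through(current_opex_psf, base_year_psf, square_feet):
    """Tenant's pass-through for one year: the increase in operating
    expenses per square foot over the base year, times the leased area."""
    return max(0.0, current_opex_psf - base_year_psf) * square_feet

# Hypothetical figures: a fair base year of $10.00/sq ft versus one
# understated by $1.00, with current expenses of $11.50/sq ft.
fair, understated, current = 10.00, 9.00, 11.50
overcharge = sum(
    annual_pass_through(current, understated, 10_000)
    - annual_pass_through(current, fair, 10_000)
    for _ in range(10)
)
print(overcharge)  # 100000.0 over the 10-year term
```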
In a new building, equipment like elevators and HVAC systems are under warranty for the first year. If the first year is also the base year in the lease, then the base-year operating expenses could be understated because of the warranties. If the operating expense provision does not specifically state that the base-year expenses should be increased for the value of the warranties, then the tenant will likely pay for the impact of these warranties for the entire term of the lease.
This may seem unfair, but if the lease does not specifically address adjusting the base-year operating expenses for items such as warranties, then the landlord will most likely not make those adjustments.
Another example that could have a significant economic impact for nonprofits concerns repairs and maintenance associated with a parking garage. When a landlord charges a parking fee in addition to rent, any cost attributable to the operation of the parking garage should not be included in operating expenses. Again, if this is not specifically addressed in the lease, then most likely the landlord will pass these expenses to the tenant. These are just two examples, and there are numerous others.
The “Gross Up” Clause
Another area that could have a significant economic impact if not properly addressed in the lease negotiation is the “gross up” clause. Grossing up expenses is a method of extrapolating certain expenses that vary based on occupancy. If the building is not fully leased, the operating expenses should be adjusted to accurately reflect the expectation of the parties. For example, if the building is only 50 percent occupied during the base year, then operating expenses for items such as janitorial services will be significantly less than if the building was fully occupied.
For the purpose of calculating the operating expenses for the base year, the expenses for janitorial services should be increased as if the building were fully occupied. If this is not done correctly and the base-year operating expenses are understated, then the expenses passed to the tenant for the entire term of the lease will be overstated.
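A simple linear gross-up captures the idea. This is an illustrative sketch (names and figures are mine); actual leases may prescribe more elaborate formulas and typically gross up only the variable portion of expenses:

```python
def gross_up(expense, actual_occupancy, target_occupancy=1.0,
             variable_fraction=1.0):
    """Extrapolate an occupancy-sensitive expense to a target occupancy.
    Only the variable portion scales with occupancy; any fixed portion
    is carried through unchanged."""
    variable = expense * variable_fraction
    fixed = expense - variable
    return fixed + variable * target_occupancy / actual_occupancy

# Janitorial costs of $120,000 incurred at 50% occupancy, treated as
# fully variable, gross up to $240,000 for base-year purposes.
print(gross_up(120_000, 0.50))  # 240000.0
```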
The Audit Clause
This provision of the lease allows the tenant to audit the operating expenses for the base year and each year thereafter. It is important that this clause require the landlord to give the tenant a detailed accounting of the operating expenses on an annual basis. It should require that requested documents be provided to the tenant to determine that all items included in operating expenses are allowable as well as provide for an adequate amount of time to conduct an audit. Tenants should exercise this option in the lease to ensure operating costs are appropriate.
Ask For Help
Be sure you hire and/or consult with real estate and financial professionals who are capable of negotiating the complicated provisions throughout your office space lease. By taking steps to assure you are getting the most for your organization’s dollars, you are helping to support your important mission.
Reprinted with permission. Copyright, ASAE: The Center for Association Leadership, December 2014, Washington, D.C.
For more information, contact Patrick Gioffre, senior vice president at The EZRA Company, an independent firm focusing on tenant and buyer representation, at 571-214-8532 or pgioffre@ezracompany.com.
Return to Table of Contents
Other Items to Note
Issuance of Uniform Guidance
On December 19, 2014, the joint interim final rule was issued by the Office of Management and Budget (OMB) implementing the Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards at 2 CFR 200 (Uniform Guidance) in the Federal Register. This joint interim final rule incorporates the implementing regulations of all the federal awarding agencies and was necessary to bring into effect the new Uniform Guidance for Federal Awards under 2 CFR 200. It was effective on December 26, 2014.
Included in this are certain technical corrections to language included in the original Uniform Guidance (previously referred to as the Supercircular) which are highlighted below:
The effective applicability date has been revised to allow a grace period of one fiscal year for non-federal entities to implement changes to their procurement policies and procedures in accordance with the revised procurement standards.
CFR Section 200.320 was revised to clarify that the requirement for sealed bids to be advertised and opened “publicly” is applicable to state, local and tribal entities only.
In several places in the guidance, “should” was revised to “must”.
Management of nonprofit organizations should review the Uniform Guidance to ensure that all requirements of the guidance have been addressed by their organization.
2015 OMB Compliance Supplement
The OMB has provided the AICPA Governmental Audit Quality Center (GAQC) with a draft version of the 2015 OMB Compliance Supplement (Supplement) for their review. The major change in the Supplement this year is the incorporation of the requirements and guidance from OMB’s Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards (Uniform Guidance).
During the period covered by the 2015 Supplement, organizations will have federal awards expended that are subject to requirements from different sources. For example, federal awards made before December 26, 2014, are subject to the “old” OMB cost principles and administrative requirements. However, new federal awards are subject to the cost principles and administrative requirements contained in the Uniform Guidance. To address this transitional situation, a new section will be included in Part 3 of the 2015 Supplement. The new Part 3.2 will apply to compliance testing of new federal awards and incremental funding made on or after December 26, 2014. Part 3.1, which is the previous Part 3 from the 2014 Supplement updated for normal annual changes, will apply to federal awards subject to the “old” rules.
It is important for management to review these guidelines when the Supplement is issued to ensure they are in compliance. The new Section 3.2 will be effective for March 31, June 30, and September 30 year-end single audits if federal funds have been expended under federal awards subject to the new Uniform Guidance.
There is no stated date for the release of the 2015 Supplement; however, OMB’s goal is to issue it earlier than in past years because of the significant changes being made as a result of the Uniform Guidance.
FASB Prepares to Release Not-for-Profit Financial Reporting Proposal Draft
For the last 18 months, we’ve been closely monitoring updates from the Financial Accounting Standards Board (FASB or the Board) regarding the organization’s Not-for-Profit Financial Statement Reporting Project. To date, we’ve highlighted the Board’s tentative decisions surrounding not-for-profit financial reporting, expense report requirements and cash flow statements in both our blog and newsletter.
On March 4, the Board announced long-awaited news: It voted 5-2 to release a proposal for updates to the existing net asset classification scheme, as well as requirements concerning liquidity, financial performance and cash flow information that nonprofits must present and/or disclose. The proposal’s exposure draft will likely be issued to the public for review and comment in early to mid-April.
Stay tuned to our Nonprofit Standard blog in the weeks ahead, as we’ll provide a detailed overview of the FASB’s proposal draft once it’s released. In the meantime, you can review all of our past posts on the Board’s deliberations up to this point:
Return to Table of Contents
Nonprofit Facts: Did you know…
Data from a survey of 261 leading U.S. companies, including 62 Fortune 100 companies, reveals they contributed more than $25 billion in total giving in 2013, equivalent to around 1 percent of pre-tax profits, or more than $600 per employee, according to an article in The Conversation.
According to the 2014 Fundraising Effectiveness Survey, 43 percent of donors who made a gift in 2012 did so again in 2013.
Twenty percent of adults under 30 volunteered in 2013, up from 14 percent in 1989, according to census data analyzed by the Corporation for National and Community Service.
According to the Chronicle of Philanthropy, the viral Ice Bucket Challenge campaign drove an estimated $220 million in donations globally in 2014, with $115 million going to the ALS Association.
Based on the latest numbers from the Blackbaud Index, overall charitable giving increased just 0.1 percent for the three months ending November 2014, compared to the same period of 2013, with online giving increasing 4.4 percent.
Donations on the latest #GivingTuesday surged to an estimated $45.7 million, more than double the amount raised in 2012 when the event began, according to a new report by the Giving USA Foundation and Indiana University’s Lilly Family School of Philanthropy.
According to a survey of 80 nonprofit executives released by GiveCentral, only 23 percent of respondents work for nonprofits that measure mobile payment processing, and only 6 percent have donors who pay via their mobile devices.
Eighty-seven percent of volunteers say there is overlap between the organizations they support financially and where they volunteer, with 43 percent describing the overlap as significant or total, according to a recent Fidelity Charitable report.
A new Journal of Consumer Research study found that giving donors fewer options spurs more gifts at the end of a campaign.
Nineteen nonprofits, including the American Red Cross, Livestrong Foundation, and UNICEF, now have the ability to receive donations through their Facebook pages via a donation feature that Facebook recently introduced.
Return to Table of Contents
Nonprofit & Education Webinar Series
The BDO Institute for Nonprofit Excellence℠ provides a complimentary educational series that is designed specifically for busy professionals in nonprofit and educational institutions.
The BDO Institute for Nonprofit Excellence℠ is proud to announce our 2015 BDO KNOWLEDGE Nonprofit and Education Webinar Series to keep you abreast of trends, issues and challenges that are impacting the nonprofit environment. We invite you to take part in this program with members of your organization, including board members. All webinars are conveniently scheduled from 1:00 to 2:45 p.m. Eastern Time and offer two hours of CPE credit.
Stay tuned to the Nonprofit Standard blog or refer to www.bdo.com for further details and registration information.
The 2015 calendar of events currently scheduled is below.
4/16/2015 - Social Media – The Changing Landscape of Fundraising
5/7/2015 - Measuring and Monitoring Program Impact and Outcomes – What You Need to Know!
6/11/2015 - Rethinking Risk to Build a Better Investment Portfolio – Investment, Accounting and Tax Considerations
9/10/2015 - Annual Nonprofit Tax Update
10/8/2015 - Annual Nonprofit Audit and Accounting Update
11/4/2015 - Nonprofit Entity Risk Management – How to Manage Risk to Ensure Success
Return to Table of Contents
BDO Professionals in the news
BDO professionals are regularly asked to speak at various conferences due to their recognized experience in the industry. You can hear BDO professionals speak at these upcoming events:
APRIL
Michael Sorrells will be presenting a session entitled “Current Hot Items: IRS Issues, Form 990 and Legislation” at the Maryland Association of CPAs Nonprofit Conference on April 17 in College Park, Md.
MAY
Lee Klumpp will be presenting a session entitled “Surveying the Scene: An Overview of Recent FASB Projects” at the Virginia Society of CPAs Business & Industry Conference on May 13 in Williamsburg, Va.
Sorrells will also be presenting the session entitled “Current Hot Items: IRS Issues, Form 990 and Legislation” at the BDO Nonprofit Conference on May 14 in San Antonio, Tex.
Klumpp will present a session entitled “NFP Audit and Accounting Update” on May 14 at the BDO Nonprofit Conference in San Antonio, Tex.
Sandra Feinsmith and Laura Kalick will be conducting a session entitled “Annual Nonprofit Tax Update” at the Georgia Society of CPAs Annual Nonprofit Conference on May 15 in Atlanta, Ga.
Klumpp will be conducting a session entitled “Accounting and Auditing Standards Update” at the Florida Institute of CPAs’ 2015 Not-for-Profit Organizations Conference on May 28 in Ft. Lauderdale, Fla., and on May 29 at the same conference in Tampa, Fla.
JUNE
Klumpp will be conducting an all-day session on June 8 entitled “COSO’s Updated Internal Control Framework: Critical Concepts in Design, Evaluation, Implementation and Monitoring” for the New Hampshire Society of CPAs in Manchester, N.H.
Klumpp will also be conducting an all-day course on June 9 entitled “Performing Single Audits in 2015 and Beyond” for the New Hampshire Society of CPAs in Manchester, N.H.
Several professionals are scheduled to speak at the 2015 AICPA Not-for-Profit Industry Conference being held June 15-17 in National Harbor, Maryland. Here is a summary of the BDO speakers:
Patty Brickett and Jeffrey Schragg will be presenting a session entitled “Onboarding to Termination: Tax Reporting for Employees” on June 16.
Laurie De Armond and Schragg will be presenting a session entitled “Protecting Your Brand: Challenges Associated with Chapter/Affiliate Relationships: Interactive Unplugged” on June 16.
De Armond and Sorrells will be presenting a session entitled “Tax Red Flags for Auditors and Financial Managers” on June 17.
Rebekuh Eley and Schragg will be presenting a session entitled “Mythbusters! EO Tax Edition (or Mythperceptions)” on June 17.
Several BDO professionals, including Terri Albertson, Tom Gorman, Kalick and Klumpp, will be presenting various nonprofit topics at the BDO Annual Florida Government and Nonprofit Conference on June 18 in Fort Lauderdale, Fla. and on June 19 in Miami, Fla.
Klumpp will be conducting a half-day session entitled “Frequent Frauds Found in Government and Yellow Book Audits” on June 22 for the Arkansas Society of CPAs in Little Rock, Ark.
Klumpp will be conducting an all-day session entitled “Applying OMB Circular A-133 to Nonprofit & Governmental Organizations” on June 24 for the Virginia Society of CPAs in Fairfax, Va.
Klumpp will also be presenting an all-day session entitled “FASB Review: Common GAAP Issues Impacting all CPAs” on June 25 for the Greater Washington Society of CPAs in Washington, D.C.
Return to Table of Contents
Opponents of campus affirmative action typically rest their case on the immorality of using racial or ethnic categories (more delicately called “diversity”) versus treating people as individuals. That objection is certainly valid, but when it comes to hiring of faculty, the damage far exceeds just violating a principle. Racial preferences deeply corrupt and will inevitably undermine academic excellence in ways that campus outsiders seldom grasp.
To appreciate this damage, consider Brown University’s recent National Diversity Summit in which the school announced plans to double its “underrepresented” minority (i.e., black) faculty by 2025 — from the current 9% to 18% (women don’t count here since the proportion of female faculty is already more than 50% but the plan nevertheless calls for a substantial increase in women in science departments).
Strategies included creating post-doctoral fellowships for black scholars to be mentored by Brown faculty and attracting young blacks to the Brown campus with conferences. More forceful measures will entail asking departments to develop a “diversity action plan” whose annual goals would be monitored and requiring faculty search committees to ensure a diverse pool of minority candidates. In numbers, 410 black professors will have to be recruited…
His critics believe he has done nothing outside his crazed and controversial war against drugs, but the business community thinks otherwise: amid the controversies of President Rodrigo Duterte’s drug war lies a potential for economic boom that the Philippines hasn’t seen in many, many years.
It has been less than two months since he began his term, but the business sector believes that change is about to happen. Of course, there is still former President Benigno Aquino to thank for laying the groundwork for a strong economy, but the fact that Duterte is seen as a strong leader who fulfills his promises is itself a quality the business sector believes will bring an economic boom to the country.
Moreover, the war against drugs is not the only one the president has focused on, despite what his critics are saying. In fact, he has also been busy fighting corruption and red tape in the government. With less time needed to process documents and no need to pay someone just to hasten the procedures, businesses see a better future with the Duterte government.
Also, there are plenty of plans for expansion of development outside Manila, especially in Mindanao. This means more work for the people and more potential for businesses to flourish.
Henry Schumacher of the European Chamber of Commerce in the Philippines said, “I believe infrastructure is going to grow very fast and it will have a double or triple effect. Money will be available. An iron fist is going to be behind it.”
It is also worth noting that the president told the telecommunications companies in the country to speed up back in May. If they don’t find a way to do it sooner, we’re willing to bet they will soon earn the ire of the president and find a solution pronto!
With faster and, hopefully, cheaper internet and telecommunications options, the country will surely attract more investors in the BPO and other sectors relying on the internet to get things done.
There’s also hope that the public transport options might also be improved, especially with the upcoming construction of the Mindanao Railway System planned in 2017, possible improvements with the MRT/LRT systems in Metro Manila, crackdown against colorum vehicles, and many others.
Aside from all those things, remittances from OFWs have also been getting stronger and better, thanks to the administration’s steps to help out these workers.
Indeed, this administration is just starting out but if his staunch supporters and the business community are to be believed, something good is about to happen to the country’s economy in the coming months… We are surely hoping that would come soon!
Source: GMA News
This report is based on the revenue generated through land-based casinos and not online casinos. It covers the present scenario and the potential growth of the market.
In addition, the study notes that the perception of casinos is also evolving. What once appeared to be a way to lose money is now seen as a real pleasure.
This growth can be explained by the changing profile of players visiting casinos. For a long time, men were the most numerous, but women now account for over 50% of players.
The study also explains that dematerialized (cashless) money allows vendors to better understand the habits of players and to create personalized marketing plans.
However, the report explains that some tax regulations are expected to affect the market during the study period.
Progress?
Dear friends,
It's not something you hear often when it comes to climate negotiations: "progress has been made."
At 4AM on Saturday morning in Cancun, delegates emerged from the UN negotiations, all of them sleep-deprived and most of them smiling. They had managed to agree on a foundation for future talks. The agreements that came out of Cancun won't be enough to get the world back to 350--but they offer a glimpse at a path forward that just might.
The feeling of momentum emerging from Cancun was refreshing: countries rebuilt trust, and wrestled with difficult issues like deforestation and transparency. This trust was in serious doubt after last year's failed negotiations in Copenhagen--and even in the final hours of negotiations in Cancun.
These countries will now have to negotiate with the world’s climate--and the physics and chemistry that govern the climate won’t negotiate. In the wake of the modest progress achieved in Cancun, it’s tempting to overlook the fact that delegates mostly avoided the real crux of the negotiations: exactly how much will countries reduce their planet-heating emissions?
In fact, the current pledges contained in the negotiating text are still grossly inadequate, leaving the planet on a crash course with at least 4 degrees Celsius of temperature rise--a terrifying prospect that would put us closer to 750ppm than 350ppm. That’s very far from where we must be, and that gap won’t be fixed by simply waiting until next year’s convention in Durban, South Africa.
To close the gap between scientific necessity and political possibility, we must fight the influence of big polluters on the political process. At the end of last week, thousands of you spoke up in support of the most vulnerable countries, sending your messages of solidarity from all corners of the planet. (http://action.350.org/content_item/show-solidarity) Our team in Cancun delivered your messages directly to the delegates (http://www.facebook.com/350.org?v=photos#%21/album.php?aid=262723&id=12185972707), and reminded them just how much the world is counting on them to stand up to big polluters.
By building a public movement around the climate solutions that science and justice demand, we've helped keep this process alive when major polluters tried to destroy it. We've made the science clear. And thanks to your messages of solidarity, we've strengthened the voices of vulnerable nations, who have pledged to keep the fight for bold climate action alive.
In the months and years to come, that will continue to be our fight as well. In the final hours of the talks in Cancun, members of the 350.org team were among a group of young people who stood peacefully at the entrance to the negotiating halls and slowly counted upwards towards 21,000, the number of deaths attributed (http://www.350.org/en/21000) to climate related disasters in the first 9 months of this year. After two weeks of abstract negotiations, this event was a poignant reminder of the stakes in this struggle--and of the strength of the bonds of this global network.
There will be those receiving this email who would wish us to condemn the agreements that came out of Cancun -- as well as those who might like us to call it a hope-filled victory.
But we didn’t get involved in this movement to condemn or cheer: we got in it to win.
To do that, we’ll have to win our country’s capitols first, and to do that, we’ll have to organize in all the communities where we live. We’ve begun that work, but we still have much more work to do.
We will do it with hope, with passion, and with unwavering determination. And above all, we will do it together.
Onwards,
May Boeve for...
Guideline for the Prevention and Control of Norovirus Gastroenteritis Outbreaks in Healthcare Settings, 2011
V. Background
Norovirus is the most common etiological agent of acute gastroenteritis and is often responsible for outbreaks in a wide spectrum of community and healthcare settings. These single-stranded RNA viruses belong to the family Caliciviridae, which also includes the genera Sapovirus, Lagovirus, and Vesivirus.[1] Illness is typically self-limiting, with acute symptoms of fever, nausea, vomiting, cramping, malaise, and diarrhea persisting for 2 to 5 days.[2,3] Noteworthy sequelae of norovirus infection include hypovolemia and electrolyte imbalance, as well as more severe medical presentations such as hypokalemia and renal insufficiency. As most healthy children and adults experience relatively mild symptoms, sporadic cases and outbreaks may be undetected or underreported. However, it is estimated that norovirus may be the causative agent in over 23 million gastroenteritis cases every year in the United States, representing approximately 60% of all acute gastroenteritis cases.[4] Based on pooled analysis, it is estimated that norovirus may lead to over 91,000 emergency room visits and 23,000 hospitalizations for severe diarrhea among children under the age of five each year in the United States.[5,6]
Noroviruses are classified into five genogroups, with most human infections resulting from genogroups GI and GII.[6] Over 80% of confirmed human norovirus infections are associated with genotype GII.4.[7,8] Since 2002, multiple new variants of the GII.4 genotype have emerged and quickly become the predominant cause of human norovirus disease.[9] As recently as late 2006, two new GII.4 variants were detected across the United States and resulted in a 254% increase in acute gastroenteritis outbreaks in 2006 compared to 2005.[10] The increase in incidence was likely associated with potential increases in pathogenicity and transmissibility of, and depressed population immunity to, these new strains.[10] CDC conducts surveillance for foodborne outbreaks, including norovirus or norovirus-like outbreaks, through voluntary state and local health reports using the Foodborne Disease Outbreak Surveillance System (FBDSS). CDC summary data for 2001-2005 indicate that caliciviruses (CaCV), primarily norovirus, were responsible for 29% of all reported foodborne outbreaks, while in 2006, 40% of foodborne outbreaks were attributed to norovirus.[11] In 2009, the National Outbreak Reporting System (NORS) was launched by the CDC after the Council of State and Territorial Epidemiologists (CSTE) passed a resolution to commit states to reporting all acute gastroenteritis outbreaks, including those that involve person-to-person or waterborne transmission.
Norovirus infections are seen in all age groups, although severe outcomes and longer durations of illness are most likely to be reported among the elderly.[2] Among hospitalized persons who may be immunocompromised or have significant medical comorbidities, norovirus infection can directly result in a prolonged hospital stay, additional medical complications, and, rarely, death.[10] Immunity after infection is strain-specific and appears to be limited in duration to a period of several weeks, despite the fact that seroprevalence of antibody to this virus reaches 80-90% as populations transition from childhood to adulthood.[2] There is currently no vaccine available for norovirus and, generally, no medical treatment is offered for norovirus infection apart from oral or intravenous repletion of volume.[2]
Food or water can be easily contaminated by norovirus, and numerous point-source outbreaks are attributed to improper handling of food by infected food-handlers, or through contaminated water sources where food is grown or cultivated (e.g., shellfish and produce). For more information visit: Updated Norovirus Outbreak Management and Disease Prevention Guidelines [PDF - 850 KB]. The ease of its transmission, with a very low infectious dose of 10-100 virions, primarily by the fecal-oral route, along with a short incubation period (24-48 hours),[12,13] environmental persistence, and lack of durable immunity following infection, enables norovirus to spread rapidly through confined populations.[6]
Institutional settings such as hospitals and long-term care facilities commonly report outbreaks of norovirus gastroenteritis, which may make up over 50% of reported outbreaks.[11] However, cases and outbreaks are also reported in a wide breadth of community settings such as cruise ships, schools, day-care centers, and food services, such as hotels and restaurants. In healthcare settings, norovirus may be introduced into a facility through ill patients, visitors, or staff. Typically, transmission occurs through exposure to direct or indirect fecal contamination found on fomites, by ingestion of fecally-contaminated food or water, or by exposure to aerosols of norovirus from vomiting persons.[2,6] Healthcare facilities managing outbreaks of norovirus gastroenteritis may experience significant costs relating to isolation precautions and personal protective equipment (PPE), ward closures, supplemental environmental cleaning, staff cohorting or replacement, and sick time.
The pathogenesis of human norovirus infection
The P2 subdomain of the viral capsid is the likely binding site of norovirus, and is the most variable region on the norovirus genome.[14] The P2 ligand is the natural binding site with human histo-blood group antigens (HBGA), which may be the point of initial viral attachment.[14] HBGA is found on the surfaces of red blood cells and is also expressed in saliva, in the gut, and in respiratory epithelia. The strength of the virus binding may be dependent on the human host HBGA receptor sites, as well as on the infecting strain of norovirus. Infection appears to involve the lamina propria of the proximal portion of the small intestine,[15] yet the cascade of changes to the local environment is unknown.
Clinical diagnosis of norovirus gastroenteritis is common, and, under outbreak conditions, the Kaplan Criteria are often used to determine whether gastroenteritis clusters or outbreaks of unknown etiology are likely to be attributable to norovirus.[16] These criteria are:
Submitted fecal specimens negative for bacterial and, if tested, parasitic pathogens
Greater than 50% of cases reporting vomiting as a symptom of illness
Mean or median duration of illness ranging between 12 and 60 hours
Mean or median incubation period ranging between 24 and 48 hours
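Applied mechanically, the criteria amount to a simple four-part screen. A sketch (parameter names are mine; this is a screening heuristic for outbreaks of unknown etiology, not a diagnostic test for individual patients):

```python
def meets_kaplan_criteria(stool_negative_for_bacteria,
                          pct_cases_vomiting,
                          median_illness_duration_hrs,
                          median_incubation_hrs):
    """True if a gastroenteritis cluster of unknown etiology matches
    the Kaplan Criteria for suspected norovirus."""
    return (stool_negative_for_bacteria
            and pct_cases_vomiting > 50
            and 12 <= median_illness_duration_hrs <= 60
            and 24 <= median_incubation_hrs <= 48)

# A cluster with negative bacterial cultures, 70% of cases vomiting,
# 36-hour median duration and 30-hour incubation fits the profile.
print(meets_kaplan_criteria(True, 70, 36, 30))  # True
```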
The current standard for norovirus diagnostics is reverse transcriptase polymerase chain reaction (RT-PCR), but clinical laboratories may use commercial enzyme immunoassays (EIA) or electron microscopy (EM).[6] Enzyme-linked immunosorbent assays (ELISA) and transmission electron microscopy (TEM) demonstrate high sensitivity but lower specificities against the RT-PCR gold standard. The use of ELISA and EM together can improve the overall test characteristics, particularly test specificity.[17] Improvements in PCR have included the development of multiple nucleotide probes to detect a spectrum of genotypes as well as methods to improve detection of norovirus from dilute samples or low viral loads and those containing PCR-inhibitors.[18] While the currently available diagnostic methods are capable, with differing degrees of sensitivity and specificity, of detecting the physical presence of human norovirus from a sample, its detection does not directly translate into information about residual infectivity.
A significant challenge to controlling the environmental spread of norovirus in healthcare and other settings is the paucity of data on the ability of human norovirus strains to persist and remain infective in environments after cleaning and disinfection.[19] Characterizing the physical and chemical properties of norovirus is limited by the fact that human strains are presently uncultivable in vitro. The majority of research evaluating the efficacy of both environmental and hand disinfectants against human norovirus over the past two decades has used feline calicivirus (FCV) as a surrogate. It remains unclear whether FCV is an appropriate surrogate for human norovirus, with some research suggesting that human norovirus may be more resistant to disinfectants than FCV.[20] Newer research has identified and utilized a murine norovirus (MNV) surrogate, which exhibits physical properties and pathophysiology more similar to those of human norovirus.[20] Currently, the Environmental Protection Agency (EPA) maintains a list of approved disinfectants with demonstrated efficacy against FCV, and the Food and Drug Administration (FDA) is responsible for evaluating hand disinfectants whose label claims use FCV as a surrogate for human norovirus (among other epidemiologically significant pathogens). It is unknown whether tolerance to disinfectants and other virucidal agents varies among the human norovirus genotypes. Other research pathways are evaluating the efficacy of fumigants, such as vapor-phase hydrogen peroxide, as well as fogging methods, as virucidal mechanisms to eliminate norovirus from environmental surfaces.
Alice Suter, M.S. Ed., Ph.D., James P. Keogh Award Winner for 2012
Alice Suter, M.S. Ed., Ph.D., has been a leader in occupational hearing conservation for the past four decades. Dr. Suter received a BA from the American University in 1959 and earned an MS Ed in deaf education from Gallaudet College in 1960. She earned a PhD in audiology from the University of Maryland in 1977.
Dr. Suter joined the Office of Noise Abatement and Control at the US Environmental Protection Agency in 1973. Her technical contributions and leadership formed the basis of the "Health Effects Criteria Document" and the publication of "Levels of Noise Requisite to the Protection of Public Health and Welfare with an Adequate Margin of Safety." These documents were key to the global adoption of recommendations for health protection and recognition by the World Health Organization. Addressing the psychological, physiological, performance, and communication effects of noise, many of these recommendations remain "best practices" today.
In 1978, Dr. Suter transferred to the U.S. Department of Labor, where she led the development of the Hearing Conservation Amendment to the OSHA noise standard. A "safety net" for workers exposed to noise, the standard has led to employer-provided earplugs, annual hearing tests, training at work about noise and hearing loss, as well as many international hearing conservation policies.
Joining NIOSH in 1988 as a visiting scientist, Dr. Suter focused on agriculture and construction workers who were not covered by the noise regulations. She coauthored the NIOSH Publication "A Practical Guide to Effective Hearing Conservation Programs in the Workplace."
Dr. Suter has authored over 50 publications, including the "Hearing Conservation Manual" (currently in its fourth edition). Written for the Council for Accreditation in Occupational Hearing Conservation, the manual has helped shape the training of audiologists and occupational hearing conservationists in the United States. She is a fellow in the American Speech-Language-Hearing Association and the Acoustical Society of America (ASA). She has received a Distinguished Service Citation from ASA, the Alice Hamilton Award from the American Industrial Hygiene Association, and the Michael Beall Threadgill Award, Outstanding Hearing Conservationist Award, and Lifetime Achievement Award from the National Hearing Conservation Association.
Page last reviewed: April 26, 2012
Farmer dies after being pinned between tractor tire and hayrack.
Source
NIOSH 1996 Mar:1-3
Abstract
The victim was alone at the time that the incident occurred. This report is based upon a review of a written sheriff's department report, a review of copies of their photos of the incident, and a telephone interview with a sheriff's deputy who responded to the scene. A 70-year-old male farmer (victim) died from injuries sustained after he was pinned between a rear tractor tire and a hayrack. The victim left his farmyard near mid-day with a tractor and two hayracks and drove to a hay field to fill the racks with large round hay bales. He arrived at the hay field, unhooked the hayracks and began loading bales onto the racks. After he filled the front hayrack and loaded four bales on the second hayrack, he hooked the hayracks to the tractor drawbar. He drove to another location in the field, stopped the tractor and got off of it to unhook the hayracks. When he removed the hitch pin, the hayracks rolled forward and pinned him between the left rear tractor tire and a large round hay bale that extended beyond the edge of the hayrack. The victim's wife became concerned when he did not return from the hay field. She called a neighbor and told him that she hadn't seen her husband since he left the farmyard earlier during the day. The neighbor drove to the hay field and found the victim pinned between the tractor tire and the hayrack. He called emergency medical personnel who arrived at the scene shortly after being notified. They removed the victim and pronounced him dead at the scene. MN FACE investigators concluded that in order to reduce the likelihood of similar incidents, the following guidelines should be followed: 1. whenever possible, operators should select terrain that is level when unhitching equipment; and 2. operators should block the wheels of equipment if it must be unhitched on sloping terrain.
Keywords
Region-5; Accident-analysis; Accident-prevention; Accidents; Injuries; Injury-prevention; Traumatic-injuries; Work-operations; Work-analysis; Work-areas; Work-performance; Work-practices; Safety-education; Safety-equipment; Safety-measures; Safety-monitoring; Protective-measures; Farmers; Agricultural-machinery; Agricultural-workers; Tractors; Equipment-operators
Document Type
Field Studies; Fatality Assessment and Control Evaluation
Funding Type
Cooperative Agreement
NTIS Accession No.
PB2014-105070
Identifying No.
FACE-95MN070; Cooperative-Agreement-Number-U60-CCU-507283
Source Name
National Institute for Occupational Safety and Health
Performing Organization
Minnesota Department of Health
The Political Economy of Business Ethics in East Asia: A Historical and Comparative Perspective deals with modes of ethical persuasion in both public and private sectors of the national economy in East Asia, from the fourteenth century to the modern era. Authors in this volume ask how, and why, governments in pre-modern Joseon Korea, modern Korea, and modern Japan used moral persuasion of different kinds in designing national economic institutions.
Case studies demonstrate that the concept of modes of exchange first developed by John Lie (1992) provides a more convincing explanation on the evolution of pre-modern and modern economic institutions compared with Marx’s modes of production as historically-specific social relations, or Smith’s free market as a terminal stage of human economic development.
The pre-modern and modern cases presented in this volume reveal that different modes of exchange have coexisted throughout human history. Furthermore, business ethics or corporate social responsibility is not a purely European economic ideology because manorial, market, entrepreneurial, and mercantilist moral persuasions had widely been used by state rulers and policymakers in East Asia for their programs of advancing dissimilar modes of exchange. In a similar vein, the domination of the market and entrepreneurial modes in the twenty-first century world is also complemented by other competing modes of change, such as state welfarism, public sector economies, and protectionism.
- Compares Chinese, Japanese, and Korean business ethics from a comparative and historical context
- Explores recent theoretical approaches to capitalist development in modern history in non-Western regions
- Discusses the theoretical usefulness of new institutionalism, modes of exchange, and neoclassical discussions of business ethics
- Evaluates historical texts in their own languages in its attempt to compare Chinese, Japanese, and Korean business ethics in the pre-modern and modern times
Ottawa Humane Society
Ottawa, ON K2E 1A6
Executive Director: Bruce Roney
Board Chair: Mike Laviolette
Website: www.ottawahumane.ca
Charitable Reg. #: 12326 4715 RR0001
Social Results Reporting Grade: A- (the grade is based on the charity's public reporting of the work it does and the results it achieves)
Full-time staff: 57
Avg. compensation: $54,172
Top 10 Staff Salary Range:
$350k+: 0; $300k-$350k: 0; $250k-$300k: 0; $200k-$250k: 0; $160k-$200k: 0; $120k-$160k: 1; $80k-$120k: 4; $40k-$80k: 5; < $40k: 0
About Ottawa Humane Society:
Founded in 1888, Ottawa Humane Society (OHS) protects animals in the Ottawa-Carleton area by sheltering animals in need of a home, investigating cases of animal cruelty, running veterinary clinics for animals in need of medical care and educating the community about humane treatment practices.
Ottawa Humane Society's largest program is its Animal Shelter activities, which made up 57% of program costs in F2015. According to the charity's current annual report, OHS took in 9,589 animals in 2014/15. The majority of animals were from pet-owner surrenders. Ottawa Humane Society found homes for 4,158 animals in 2014/15, equal to 43% of total animals taken in during the year. Adoptions related to special needs animals – animals requiring additional care and treatment – totalled 243 and grew by 50% compared to last year.
Ottawa Humane Society's volunteer and outreach programs made up 18% of program costs in F2015. As a part of its humane education efforts, OHS runs school presentations that teach kids about the importance of animal kindness and how to take care of a pet responsibly. OHS made 395 school presentations in 2014/15, up by 106% from the year before. These presentations reached 9,724 students.
Ottawa Humane Society's cruelty investigations and rescue activities made up 11% of program costs in F2015. The charity's 2014/15 annual report states that OHS's Rescue and Investigation Services team conducted 1,225 investigations in the year, with 377 reports of dogs left in hot cars as the top investigation type. The team's investigation activities saved 1,521 animals throughout the year, of which 1,154 were wild animals. OHS reports that the number of wild animals saved in 2014/15 grew by more than 10% compared to last year.
Ottawa Humane Society also has veterinary clinics that perform procedures such as neuters, dental work and x-rays. Clinics made up 10% of program costs in F2015. OHS's 2014/15 annual report states that within the year, veterinarians performed 4,233 surgeries, including 3,062 spay or neuter procedures. OHS clinics also recently adopted a new "High Volume Spay and Neuter" technique that enables veterinarians to go from performing only 15 cat spay procedures to 29 cat spay and 14 cat neuter procedures in a four-hour surgical block, which increases clinic output performance by over 200%.
Financial Review:
Ottawa Humane Society is a Big-cap charity with total donations and special events revenues of $5.1m in F2015. Administrative costs are 10% of revenues and fundraising costs are 33% of donations. $0.43 of every donated dollar goes toward overhead costs, which falls outside Ci's reasonable range for overhead spending. Fundraising costs have increased steadily from F2013 to F2015.
Ottawa Humane Society's funding reserves of $3.8m can cover only 82% of annual program costs, indicating a funding need.
This charity report is an update that is currently being reviewed by Ottawa Humane Society. Changes and edits may be forthcoming.
Updated on July 18, 2016 by Katie Khodawandi.
Financial Ratios (fiscal year ending March):
                                          2015    2014    2013
Administrative costs as % of revenues    10.1%    7.1%   10.1%
Fundraising costs as % of donations      32.5%   28.2%   24.5%
Program cost coverage (%)                81.5%   85.9%   85.2%
Summary Financial Statements (all figures in $000s):
                              2015    2014    2013
Donations                    4,502   4,896   3,822
Government funding              70     372     287
Fees for service             2,234   2,287   1,981
Special events                 625     466     464
Investment income               25     139     122
Other income                   110      88      90
Total revenues               7,566   8,248   6,767
Program costs                4,671   4,267   4,057
Administrative costs           764     576     671
Fundraising costs            1,667   1,509   1,050
Other costs                      0     125     129
Cash flow from operations      465   1,772     860
Funding reserves             3,807   3,666   3,455
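The headline ratios in the financial review can be reproduced from the F2015 summary figures (a sketch; it assumes, as the stated "$5.1m in donations and special events" implies, that the fundraising ratio is taken over donations plus special-events revenue):

```python
# F2015 figures from the summary financial statements (in $000s)
total_revenues = 7_566
donations = 4_502
special_events = 625
admin_costs = 764
fundraising_costs = 1_667
program_costs = 4_671
funding_reserves = 3_807

# Administrative costs as a percentage of total revenues (~10.1%)
admin_pct = admin_costs / total_revenues * 100

# Fundraising costs as a percentage of donated dollars (~32.5%)
fundraising_pct = fundraising_costs / (donations + special_events) * 100

# Overhead per donated dollar: the two ratios summed (~$0.43)
overhead_per_dollar = admin_pct / 100 + fundraising_pct / 100

# Funding reserves as a percentage of annual program costs (~81.5%)
program_coverage_pct = funding_reserves / program_costs * 100
```

The "$0.43 of every donated dollar" figure is the sum of the two overhead ratios, and the 82% coverage figure is reserves divided by annual program costs.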
A painful period is every woman's worst nightmare! Whether you are a teenager, a college student, a housewife or a working professional, you have probably groaned every month at sharp, unbearable period pain. Menstrual pain usually includes gripping pain and constant cramps in the lower abdomen, back, thighs and legs. Some women also experience other symptoms such as nausea, headaches, digestive problems and breast tenderness. According to gynecologists, some pain during the menstrual cycle is considered normal. However, if the pain is excessive, it could signal primary or secondary dysmenorrhea. Chemist-4-u stocks a huge array of tablets, caplets, capsules and heat packs specially formulated to provide relief from menstrual discomfort, pain and cramps. Along with period pain, these formulas also relieve associated symptoms such as headache, muscular pain and backache.
Let’s suppose there is an imaginary company in the packaged goods business that is a heavy user of web analytics data to improve their online product offerings, campaigns, and digital partnerships. And let us suppose they had for several years used outside web analytics expertise to deliver much-needed actionable insights to their business users. The business teams have been getting good, actionable information and have begun to rely more heavily on the outside experts who know the tools and techniques necessary to generate value.
Now let’s suppose there is a small cadre of IT people who decide, without asking the business users, that they can do web analytics themselves – booting out the trusted third-party experts the business had relied upon. Let’s suppose they may have gotten a little scared that such key functions in the business were relying on processes they did not control, but ostensibly perhaps ought to control. And that their supposition was that analytics rightfully “belongs to IT,” as the mistake is often made.
They could not make a good business case for jettisoning the experts, so they claimed instead that the consultants had “broken a security protocol.” Though this was a thin veil for their plan to make themselves more indispensable, it passed muster, and soon the consultants were out and the IT team was positioned to take on analytics.
One day I visited a friend whose office was right next door to Mario, a VP of marketing, as he met with William, a member of the abovementioned IT cadre. Via the magic of modern thin-wall construction techniques, we were able to overhear the following:
Mario: We’ve had no reports now for six weeks. I tried calling Dan over at Analytics Consulting and he said they were not working with us anymore.
William: We had to let them go. We are now handling analytics internally.
Mario: When Debra, Laura, Brian, and Vinay reported to me this week, they said they had lost all visibility into the success of the Powder Ridge campaign and that no one on your team knew when it would be restored.
William: It is possible that Dan may have corrupted something.
Mario: In seven years Dan’s team never corrupted anything. Please tell me when we will have reports again.
William: We are in review of the entire analytics package. It may not be right for us.
Mario: My team was very happy. It seemed right for them. Why was it not right for you?
William: We had detected issues with certain internal protocols relating to structural inefficiencies.
Mario: I am detecting a run-around. Who can I speak with about getting reports up and running again?
William: I will admit we need to have a small brush-up on tagging.
Mario: We just finished going through a complete re-tagging of the site and it had been QA’d and everything was fine. And then suddenly no reports. Also I have six new campaigns rolling out next month that need detailed tracking.
William: The IT team is dedicated to your success.
Mario: To whom may I speak about getting accurate, meaningful reports again?
William: Some of these reports may have become defective.
Mario: Since when?
William: Only recently. I am sure the data can be corrected.
Mario: Recently – since the experts left?
William: I cannot say.
Mario: What if I told you that the marketing team had decided your computers were too dusty?
William: I do not understand.
Mario: That your computers were dusty and that they all had to be replaced with flat-screen televisions so we could watch 30 Rock because that show sparked our creative juices? And that this would also somehow make us more able to manage the server farm as well?
William: Our computers are very clean.
Mario: What if I told you that the marketing team was going to prevent you from writing any emails without thorough review for grammar, tone, and brevity?
William: I would say that would be a misstep on your part – and it would be encroaching on our ability to be effective in our jobs.
Mario: That is what I thought you would say. Because that is exactly what you have done to my marketing team and for no good reason.
William: We have some very capable people internally and we wanted to give them a chance at learning analytics.
Mario: I’m sorry, I didn’t hear that.
William: I didn’t say anything.
Mario: How about this: in two weeks I will speak with you again. If that conversation does not include a presentation of all our analytics in a manner commensurate with what we used to get from Dan’s company, then I will blow the whistle on your team.
William: Do you have Dan’s number? I seem to have lost it.
At that point my friend returned from the water cooler and we went out to lunch. On the way back, I saw William in the park reading the first chapter of “Web Analytics: An Hour a Day” by Avinash Kaushik.
These animations depict the three major Milankovitch Cycles that impact global climate, visually demonstrating the definitions of eccentricity, obliquity, and precession, and their ranges of variation and timing on Earth.
This image depicts a representative subset of the atmospheric processes related to aerosol lifecycles, cloud lifecycles, and aerosol-cloud-precipitation interactions that must be understood to improve future climate predictions.
This short video, the sixth in the National Academies Climate Change, Lines of Evidence series, explores the hypothesis that changes in solar energy output may be responsible for observed global surface temperature rise. Several lines of evidence, such as direct satellite observations, are reviewed.
This short video, is the fifth in the National Academies Climate Change, Lines of Evidence series. It focuses on greenhouse gases, climate forcing (natural and human-caused), and global energy balance.
This animated video outlines Earth's energy. The video presents a progression from identifying the different energy systems to the differences between external and internal energy sources and how that energy is cycled and used.
Like previous generations, today's teens seem to be constantly on the phone. But now they're doing a lot more texting than talking.
One third of teens in the U.S. text more than 100 times a day, according to a study released Tuesday by Pew Internet and American Life Project.
Based on a survey and focus groups conducted with teenagers between 12 and 17, Pew found that text messaging is by far the most common way that kids communicate with each other, more than chatting on the phone, e-mailing, using social-networking sites, or talking face to face.
More than 75 percent of teens now own cell phones, notes Pew, up from just 45 percent in 2004. Around 72 percent of all teens, or 88 percent of teens who own mobile phones, use text messages to communicate. That marks a big jump from 2006 when only 51 percent of teens texted on their phones.
Among those 12 to 17 years old, half of them send 50 or more text messages a day, while 15 percent tap out more than 200 instant messages every day. Results vary by gender and age. On average, boys send and receive around 30 text messages each day, while girls send and receive around 80 per day. Older girls are the biggest texters, with those 14 to 17 sending out more than 100 messages a day. Younger kids ages 12 to 13 are lighter users, typically sending and receiving around 20 texts each day.
Though teens love their cell phones both for texting and talking, parents have mixed feelings. Mom and Dad often buy mobile phones for their children so they can keep track of them--98 percent of parents questioned said the main reason they give their kids phones is to stay in touch with them no matter where they are. As a result, those parents do exercise some control over the use of those phones.
Almost half of all parents limit the amount of time their kids can use the cell phone. Around 64 percent of them said they look at the contents of their kid's cell phone, while 62 percent reported they've taken away their kid's phone as a form of punishment.
Teachers aren't wild about the use of cell phones either, and as a result, many schools limit or ban their use. Around 24 percent of teens said their school bans all cell phones from the campus entirely, while 62 percent said they're allowed to bring a phone to school but not into the classroom.
But 65 percent of teens whose schools exclude cell phones from campus said they bring them anyway, and 58 percent of them said they've sent text messages in class despite the ban. Meanwhile, kids have come up with ways to avoid having their phones taken away. One teen surveyed said he has a real phone and a fake phone so that if the teacher catches him, he can give her the fake phone.
To gather the data, Pew surveyed 800 kids by phone, as well as one of their parents, between June and September of 2009. Working in collaboration, the University of Michigan also held a series of nine in-depth focus groups with teens 12 to 18 between June and October of last year.
Hot-Dip Galvanizing (HDG) Definition - What does Hot-Dip Galvanizing (HDG) mean?
Hot-dip galvanizing (HDG) is the process of coating iron, steel or other ferrous materials with a layer of zinc. This is done by passing the metal through molten zinc at a temperature of 860°F (460°C); the coating weathers to form zinc carbonate (ZnCO3). Zinc carbonate is a strong material that protects steel and can prevent corrosion in many circumstances. Hot-dip galvanizing can be carried out cheaply and in large batches.
Hot-dip galvanizing is also known as hot-dip coating.
Corrosionpedia explains Hot-Dip Galvanizing (HDG)
Hot-dip galvanizing involves three main steps:
1. Preparation: The galvanizing reaction will only occur on a chemically clean surface, so the first step of the process involves removing contamination. The metal is degreased using a caustic solution and then dipped in hydrochloric acid to remove rust, mill scale, welding slag, paint and grease. This is followed by a rinse and a dip in a flux solution, which is usually about 30 percent zinc ammonium chloride.
2. Galvanizing: When the clean iron or steel component is dipped into the molten zinc (at 842°F (450°C)), zinc-iron alloy layers form as a result of a metallurgical reaction between the iron and zinc. When the component is withdrawn from the galvanizing bath, a layer of molten zinc sits on top of the alloy layer; on cooling, it takes on the bright, shiny appearance associated with galvanized products.
3. Inspection: After galvanizing, the coated materials are inspected for coating thickness and coating appearance. A variety of simple physical and laboratory tests may be performed to determine the thickness, uniformity, adherence and appearance of the zinc coating.
I've been doing some quite fun stuff involving advanced maths over the last couple of days. It's good to do hard theoretical stuff from time to time. Because it's been a while, I had to revise my stats quite a bit. As soon as I got beyond binomial and normal distributions to chi-squared distributions, G-tests and beta functions, I turned to the textbooks. Actually, first I turned to Google, but all I could find was course listings for university courses saying things like 'This course includes the beta function, chi-squared distribution etc. etc.'. I guess most of the notes are only available on intranets etc. It is remarkably hard to find information online about advanced maths.
Wikipedia is actually a very good resource here (this is the kind of thing wikipedia generally rocks at - the right answer is indisputable, so you don't get edit wars etc.). To see how comprehensive it is, check out the pages on the Beta function and its distribution (though there's precious little about how to calculate it).
When it comes to numerical methods, there's still nothing to beat Numerical Recipes but I didn't really want to have to write my own routines to calculate things like chi-square distributions. Luckily, after a bit of googling, I found that we could install statistics functions for php. Unfortunately, they are the worst-documented functions I have ever seen (check out the documentation for the chi-square distribution function!) but that's another blog post.
## Why I am looking up all these ridiculous things
I'm in the process of writing a new tool to help us do better at ppc management - a similar idea to Clickmuse's Adwords Optimizer, that we use and find handy - but instead of telling you when one advert is out-performing another, this is designed to spot keywords that need to be targeted differently (either moved into their own ad group in order to gain the benefit of a high-performing keyword or deleted / moved out of an ad group to avoid pulling the performance of your other keywords down). With the way that quality score is calculated, taking into account ad group performance as a whole and factoring it into your cost-per-click via the quality score, it is becoming ever more important to monitor ad groups closely and group keywords together even better.
At first, I thought this was a pretty simple statistical task - all I needed to do was look at each keyword and its clicks in order to calculate the probability of getting this many clicks if the keyword has an underlying long-run click-through rate equal to the ad group it is in. If this probability was low (below 5%, say), we could be relatively confident that this keyword should be moved out of this ad group as it is an outlier compared to the rest of the group.
I started coding this without thinking too much further (great software development process there) and hit a hurdle when I needed to calculate the probability of getting at least n clicks out of I impressions with a supposed click-through rate of p. It's simple when I is small (it's just the sum of binomial distribution probabilities) but as I grows, this involves calculating factorial of a bunch of large numbers. At this point I started dredging up some of my old stats courses and remembered that you can use Normal and Poisson approximations to the Binomial distribution for large populations (which you use depends on the probability - for reasonable size expected numbers of clicks, you can use the Normal distribution, for small expected numbers of clicks you have to use the Poisson distribution).
Calculating the normal distribution in PHP was pretty straightforward, but calculating Poisson distributions when large numbers are involved means calculating Beta functions and other such fun. I needed to read up on the subject. Hence why I turned to Google, failed, and turned to wikipedia.
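The three routes described above — exact binomial summation for small impression counts, a normal approximation for moderate expected clicks, and a Poisson approximation for small expected clicks — can be sketched as follows. This is a Python sketch for illustration (the post itself works in PHP); the function names and the continuity correction are mine:

```python
import math

def binom_tail_exact(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), by direct summation (fine for small n)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def binom_tail_normal(k, n, p):
    """Normal approximation with continuity correction (reasonable when n*p is large)."""
    mu = n * p
    sigma = math.sqrt(n * p * (1 - p))
    z = (k - 0.5 - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))  # P(Z >= z) for standard normal Z

def binom_tail_poisson(k, n, p):
    """Poisson approximation (large n, small p), with lambda = n*p."""
    if k <= 0:
        return 1.0
    lam = n * p
    term = math.exp(-lam)   # P(X = 0)
    below = term
    for i in range(1, k):   # accumulate P(X <= k-1) term by term
        term *= lam / i
        below += term
    return 1.0 - below
```

Multiplying each Poisson term from the previous one avoids the large factorials that make the naive formulas blow up.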
This post isn't really about what the solution to my problem was in the end, but just in case anyone is interested, I have gone with calculating a G-statistic for the set of data of:
Category           Clicks   Non-clicks
Specific keyword   c_k      n_k
Rest of ad group   c_a      n_a
This statistic then has a chi-squared distribution with 1 degree of freedom (I think - because there are 2 independent variables given the total number of impressions and we are estimating one parameter - the click-through rate of the whole ad group). Testing this for significance then tells us the probability that the keyword clicks and 'rest of ad group' clicks are drawn from a universe with the same click-through rate. I think this is a better approach because it allows for both the possibility that the keyword we are interested in has an unexpected number of clicks and that the rest of the ad group has an unexpected number of clicks.
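A minimal sketch of that test (again in Python rather than the post's PHP; names are mine). For one degree of freedom, the chi-square upper-tail probability reduces to erfc(sqrt(G/2)), which keeps the sketch dependency-free:

```python
import math

def g_test_2x2(ck, nk, ca, na):
    """G-statistic and p-value for a 2x2 clicks table.

    ck, nk: clicks / non-clicks for the specific keyword
    ca, na: clicks / non-clicks for the rest of the ad group
    """
    obs = [[ck, nk], [ca, na]]
    total = ck + nk + ca + na
    row_totals = [ck + nk, ca + na]
    col_totals = [ck + ca, nk + na]
    g = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / total
            if obs[i][j] > 0:  # lim x->0 of x*ln(x/e) is 0, so zero cells contribute nothing
                g += 2 * obs[i][j] * math.log(obs[i][j] / expected)
    g = max(g, 0.0)  # guard against tiny negative float error
    # Chi-square upper tail with 1 degree of freedom: P(X > g) = erfc(sqrt(g/2))
    p_value = math.erfc(math.sqrt(g / 2))
    return g, p_value
```

A small p-value (say below 0.05) suggests the keyword's click-through rate genuinely differs from the rest of its ad group, so it is a candidate for moving out or into its own ad group.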
## Where this all fails
I find the concept of degrees of freedom very difficult to grasp (always have, despite having it explained by some of the best minds in the world - I find it especially hard when you start talking fractional degrees of freedom) and when I'm working with real-world examples, I'm always sure I've got it wrong.
Wikipedia is very poor on the subject and I can't find good explanations online.
If you happen to understand this kind of stuff and can tell me whether my problem has 1 or 2 degrees of freedom, it would be very much appreciated!
Please use this identifier to cite or link to this item:
http://hdl.handle.net/10419/23225
Full metadata record
Title: Child care and the welfare to work transition
Authors: Lemke, Robert J.; Witt, Robert; Witte, Ann Dryden
Date issued: 2001
Type: Working Paper
Series: Working Papers / Wellesley College, Department of Economics, 2001,02
JEL classification: I38; J22; H40; I20
Keywords: Child Care; Welfare Reform; Vouchers; Labor Supply; Time Limits
Abstract: We assess the role of child care in the welfare to work transition using an unusually large and comprehensive data base. Our data are for Massachusetts, a state that began welfare reform in 1995 under a federal waiver, for the period July 1996 through August 1997. We find that both the nature of the child care market and the availability and policies of subsidized care and early education affect the probability that current and former welfare recipients will work. Regarding the child care market, we find that the availability of care is most consistently related to employment. However, the price and quality of care also matter. We also find that increased funding for child care subsidies, and the availability of full-day kindergarten and Head Start, significantly increase the probability that current and former welfare recipients work. Higher state payments to providers are associated with increased probabilities of work. Finally, recipients are more likely to work when they are subject to a work requirement. The effects of imposing time limits on cash assistance are less clear.
Items in EconStor are protected by copyright, with all rights reserved, unless otherwise indicated.
Elearning allows both students and business executives to learn anywhere and at any time. You can learn from virtually any place with a computer or mobile device and an internet connection, meaning you can study from home, on vacation, or during your break. But elearning is about more than convenience, and there are fundamental differences between elearning in the corporate sector and in education.
Corporate training
The role of corporate training is to ensure an employee has the knowledge and skills to undertake a specific operation so that the organization can continue to operate. Fundamentally, corporate training is centered on knowledge transfer. For example, conferences and workshops are an essential yet expensive part of business, and elearning makes them affordable and efficient: sales people, for instance, can receive their training on new products and sales strategies online. Elearning translates into lower costs and training delivered in a shorter period of time, especially when employees are spread worldwide.
Corporate education, however, adds another dimension and depth to training by involving learners as participants in generating new knowledge that helps an organization develop and evolve.
The main characteristics of corporate learning are:
Fast-paced: Enterprise learning is mostly “fast paced” because “time is money” in the corporate world. Training needs to be delivered in as short a time frame as possible with maximum results.
Career-related: Enterprise learning helps employees gain new skills to advance their careers inside the company. Enterprise LMSs have additional modules to facilitate that process.
Benefits the organization: Enterprise learning focuses mainly on pragmatic issues with immediate benefits for the organization rather than just individual benefit. Ultimately, training is required for the organization to function correctly, and corporate education for it to evolve and develop.
Training vs. education: Enterprise is mostly focused on training, while education is mostly about learning through “igniting curiosity” (check out this related post on ‘Learning through Curiosity’). Training usually means the act of being prepared for something, of being taught or learning a particular skill and practicing it until the required standard is reached. This has obvious practical implications for the workplace.
Return on investment: An enterprise needs to be able to calculate the ROI of its learning investment. In an educational context this ROI is difficult to calculate, and usually the effects of learning take years to show.

Education sector
In comparison with corporate learning, learning in the education sector focuses primarily on knowledge transfer and not on training i.e. in education we mainly strive to learn things with global scope (e.g. a subject such as mathematics) whilst corporate elearning is more focused on business needs (e.g. new recruit induction). The word education means to gain general theoretical knowledge and this may or may not involve learning how to do any specific practical work, tasks or skills. Please note that there is some overlap and that the word ‘education’ can also refer to a process of training or receiving tuition. For example, basic training in a field such as health services is usually a combination of theoretical, educational and practical learning skills.
Convergence
Corporate elearning professionals can learn from academic elearning initiatives and vice versa, and we are currently seeing a convergence of academic and corporate elearning needs. For example, the academic space is starting to gravitate towards incorporating corporate methods in the classroom on how certain topics are taught. And on the corporate side they’re shifting the model of utilizing technologies in a way that supports the traditional classroom of academics especially with regards to blending technologies.
There is obvious overlap: mobile learning for example is becoming increasingly popular with learners having one if not more mobile devices in their possession and taking these devices to school or work. Learners have access to the internet and social networks via these mobile devices so all the technologies required to gather information, create content and communicate with other people are readily available and naturally create an environment conducive to learning. Currently both the education and corporate sectors are struggling to answer the exact same questions: how do we use these for learning? How do instructional design, and teaching methodologies and theories apply to delivering content via mobile devices? It’s only natural for knowledge to be shared across the table.
About the author
Roberta Gogos is a Social Media & Content Marketing Consultant and eFront Learning’s Community Manager. She is a contributing author to a number of blogs and focuses on social media, culture-specific communication, technology, and elearning. She can be contacted @rgogos or via LinkedIn.
The development of high quality eLearning is hard work. No short cuts, no straight pathway. The first step in distinguishing a quality eLearning course from a less well-made one is to determine the degree of its effectiveness...
Many articles have been written to try and share with you the essential elements that lead to eLearning success. As educational technology and strategies progress, learners become more advanced and proactive in learning. New...
It’s been said before, and it’ll be said again: it’s a brutally competitive market out there when it comes to getting the best talent onboard and, more importantly, keeping hold of them. Here’s a figure for you: it costs...
The abundant use of technology in all contexts of life, work, education, entertainment or any other you can imagine and we haven’t, has led to the development of a vast ecosystem of tech. The study of the interactions and...
Life never stands still: There’s always something more to do, somewhere to go, someone to see. We’re always in motion. Indeed, we need that impetus to move forward. The same goes with your employees’ careers. Perhaps...
Many books and job aids have been created to instruct trainers on how to create compelling programs for an organization. The need for the development of learning organizations is greater than ever. The quest for the best...
Knowledge management is one of the biggest challenges in today’s corporate world. The general notion is that eLearning no longer needs to be just a professionally authored solution. It’s not like there is no place...
How can you tell that you’ve learned something or made progress? One surefire way to know is when someone else tells you that you have, of course.
It’d be hard to find any corporation these days which relies solely on face-to-face, classroom-based courses. Everyone has embraced eLearning because of the flexibility and convenience it offers. Yet, you’ll find some...
Let’s set the stage: you’ve got an idea for a fantastic course. You’ve storyboarded it, created your outlines and are ready to crank up the production and let it roll. You’re sure that the learners are going to love...
Lighting is a very significant device of theatre. It is not just a decoration or a put-on, something superficial and imposed from the outside. Lighting can make or break a performance. In fact, its role in theatre is functional, and it participates semantically and semiotically in the theatrical meaning-effect. Lighting determines the stage-space and develops the spectacle, defining and limiting it. Its subtle shades create the differential pattern of theatrical meaning. The colour of light has obvious symbolic connotations too.
To give an example from 20th-century theatrical practice, lighting in Samuel Beckett's theatre is an elementary tool of communication. In Breath, the tiniest play in the world, Beckett writes down the exact configurations of light, using a hypothetical range to talk about its increase and decrease. In a play like Not I, there is no full human subject, only a mouth, and it is the spotlight that effectively renders this reduction. In Play, the lights become a character: that of the interrogator. The lights keep flashing from one urned face to the other to extract compulsive speech from him or her. The light in theatre makes not just the audience but also the actors captive. Beckett's meticulous insistence on a grey light on stage is a definitive articulation of a post-war light, hazed with the traces of bombardment and devastation.
Although considerable information is available on various aspects of the durability of high-performance concrete (HPC), no systematic information is available on the durability of HPC used for bridge decks. In this study, 12 HPC mixtures were prepared with different combinations of fly ash, slag, and silica fume at three different w/cm ratios (0.40, 0.35, and 0.30) to study durability primarily as it concerns bridge decks. Results show that at the higher w/cm ratios, both rapid chloride penetration and chloride diffusion depend mostly on the supplementary cementitious materials. A combined freezing-and-thawing/salt-scaling test on a properly made test specimen is representative of the real-life situation for a bridge deck slab. Physical sulphate attack is minimal for HPC with low w/cm ratios. The present experimental study serves to establish a basis for the future optimisation of HPC mixture proportions as well as test methods through critical durability evaluations.
Keywords: bridge decks, chloride permeability, durability evaluation, freezing, thawing, high-performance concrete, HPC, scaling, concrete durability, fly ash, slag, silica fume, chloride penetration, chloride diffusion, sulphate attack, mixture proportions
Including ecotoxic impacts on warm‐blooded predators in life cycle impact assessment
In current life cycle impact assessment, the focus of ecotoxicity is on cold‐blooded species. We developed a method to calculate characterization factors (CFs) for the impact assessment of chemical emissions on warm‐blooded predators in freshwater food chains. The method was applied to 329 organic chemicals. The CF for these predators was defined as a multiplication of the fate factor (FF), exposure factor (XF), bioaccumulation factor (BF), and effect factor (EF). Fate factors and XFs were calculated with the model USES‐LCA 2.0. Bioaccumulation factors were calculated with the model OMEGA, for chemical uptake via freshwater, food, and air. Effect factors were calculated based on experimental median lethal doses (LD50). The concentration buildup (CB) of the chemicals (i.e., FF, XF, and BF over the 3 routes of exposure) spanned 7 to 9 orders of magnitude, depending on the emission compartment. Effect factors displayed a range of 7 orders of magnitude. Characterization factors spanned 9 orders of magnitude. After emissions to freshwater, the relative contributions of the uptake routes to CB were 1% (90% confidence interval [CI]: 0%–2%) for uptake from air, 43% (11%–50%) for uptake from water, and 56% (50%–87%) for uptake from food. After an emission to agricultural soil, the contributions were 11% (0%–80%) for uptake from air, 39% (5%–50%) for uptake from water, and 50% (11%–83%) for uptake from food. Uptake from air was mainly relevant for emissions to air (on average 42%, 90% CI: 5%–98%). Characterization factors for cold‐blooded species were typically 4 orders of magnitude higher than CFs for warm‐blooded predators. The correlation between both types of CFs was low, which means that a high relative impact on cold‐blooded species does not necessarily indicate a high relative impact on warm‐blooded predators.
Depending on the weighting method considered, the inclusion of impacts on warm‐blooded predators can change the relative ranking of toxic chemicals in a life cycle assessment. Integr Environ Assess Manag 2012; 8: 372–378. © 2011 SETAC
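The abstract's multiplicative decomposition can be sketched in a few lines of code. This is a toy illustration only: the function follows the CF = FF × XF × BF × EF structure described above, with the bioaccumulation factor taken over three uptake routes, but every numeric value below is invented for the example and is not data from the study.

```python
# Hedged sketch of the characterization-factor decomposition described in the
# abstract: CF = FF * XF * BF * EF, where BF is accumulated over the three
# uptake routes (air, water, food). All numbers are illustrative assumptions.

def characterization_factor(ff, xf, bf_by_route, ef):
    """CF = fate * exposure * (route-summed bioaccumulation) * effect."""
    concentration_buildup = ff * xf * sum(bf_by_route.values())  # the "CB" term
    return concentration_buildup * ef

# Illustrative route split and factor values (not from the paper):
bf = {"air": 0.25, "water": 0.25, "food": 0.5}
cf = characterization_factor(ff=2.0, xf=0.5, bf_by_route=bf, ef=1e-3)
print(cf)  # 0.001
```

Because the factors multiply, the 7 to 9 orders of magnitude reported for CB and the 7 orders for EF compound directly into the wide range observed for the CFs.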
Meditation isn’t easy. It takes time and energy. Some people wonder why they should bother with meditation at all.
Here is an inversion of meditation: this is what meditation isn’t.
There are a number of common misconceptions about meditation. The same things come up over and over.
In Mindfulness in Plain English, Henepola Gunaratana deals with 11 of these preconceptions one at a time.
Misconception 1: Meditation is Just a Relaxation Technique
The bugaboo here is the word just. Relaxation is a key component of meditation, but vipassana-style meditation aims at a much loftier goal. The statement is essentially true for many other systems of meditation. All meditation procedures stress concentration of the mind, bringing the mind to rest on one item or one area of thought. Do it strongly and thoroughly enough, and you achieve a deep and blissful relaxation, called jhana. It is a state of such supreme tranquillity that it amounts to rapture, a form of pleasure that lies above and beyond anything that can be experienced in the normal state of consciousness. Most systems stop right there. Jhana is the goal, and when you attain that, you simply repeat the experience for the rest of your life. Not so with vipassana meditation. Vipassana seeks another goal: awareness. Concentration and relaxation are considered necessary concomitants to awareness. They are required precursors, handy tools, and beneficial byproducts. But they are not the goal. The goal is insight. Vipassana meditation is a profound religious practice aimed at nothing less than the purification and transformation of your everyday life.
Misconception 2: Meditation is Going Into a Trance
Here again the statement could be applied accurately to certain systems of meditation, but not to vipassana. Insight meditation is not a form of hypnosis. You are not trying to black out your mind so as to become unconscious, or trying to turn yourself into an emotionless vegetable. If anything, the reverse is true: you will become more and more attuned to your own emotional changes. You will learn to know yourself with ever greater clarity and precision. In learning this technique, certain states do occur that may appear trancelike to the observer. But they are really quite the opposite. In hypnotic trance, the subject is susceptible to control by another party, whereas in deep concentration, the meditator remains very much under his or her own control. The similarity is superficial, and in any case, the occurrence of these phenomena is not the point of vipassana. As we have said, the deep concentration of jhana is simply a tool or stepping stone on the route to heightened awareness. Vipassana, by definition, is the cultivation of mindfulness or awareness. If you find that you are becoming unconscious in meditation, then you aren’t meditating, according to the definition of that word as used in the vipassana system.
Misconception 3: Meditation is a Mysterious Practice That Cannot be Understood
Here again, this is almost true, but not quite. Meditation deals with levels of consciousness that lie deeper than conceptual thought. Therefore, some of the experiences of meditation just won’t fit into words. That does not mean, however, that meditation cannot be understood. There are deeper ways to understand things than by the use of words. You understand how to walk. You probably can’t describe the exact order in which your nerve fibers and your muscles contract during that process. But you know how to do it. Meditation needs to be understood that same way— by doing it. It is not something that you can learn in abstract terms, or something to be talked about. It is something to be experienced. Meditation is not a mindless formula that gives automatic and predictable results; you can never really predict exactly what will come up during any particular session. It is an investigation and an experiment, an adventure every time. In fact, this is so true that when you do reach a feeling of predictability and sameness in your practice, you can read that as an indication that you have gotten off track and are headed for stagnation. Learning to look at each second as if it were the first and only second in the universe is essential in vipassana meditation.
Misconception 4: The Purpose of Meditation is to Become Psychic
No. The purpose of meditation is to develop awareness. Learning to read minds is not the point. Levitation is not the goal. The goal is liberation. There is a link between psychic phenomena and meditation, but the relationship is complex. During early stages of the meditator’s career, such phenomena may or may not arise. Some people may experience some intuitive understanding or memories from past lives; others do not. In any case, these phenomena are not regarded as well-developed and reliable psychic abilities, and they should not be given undue importance. Such phenomena are in fact fairly dangerous to new meditators in that they are quite seductive. They can be an ego trap, luring you right off the track. Your best approach is not to place any emphasis on these phenomena. If they come up, that’s fine. If they don’t, that’s fine, too. There is a point in the meditator’s career where he or she may practice special exercises to develop psychic powers. But this occurs far down the line. Only after the meditator has reached a very deep stage of jhana will he or she be advanced enough to work with such powers without the danger of their running out of control or taking over his or her life. The meditator will then develop them strictly for the purpose of service to others. In most cases, this state of affairs occurs only after decades of practice. Don’t worry about it. Just concentrate on developing more and more awareness. If voices and visions pop up, just notice them and let them go. Don’t get involved.
Misconception 5: Meditation is Dangerous and a Prudent Person Should Avoid it
Everything is dangerous. Walk across the street and you may get hit by a bus. Take a shower and you could break your neck. Meditate, and you will probably dredge up various nasty matters from your past. The suppressed material that has been buried for quite some time can be scary. But exploring it is also highly profitable. No activity is entirely without risk, but that does not mean that we should wrap ourselves in a protective cocoon. That is not living, but is premature death. The way to deal with danger is to know approximately how much of it there is, where it is likely to be found, and how to deal with it when it arises. That is the purpose of this manual. Vipassana is development of awareness. That in itself is not dangerous; on the contrary, increased awareness is a safeguard against danger.
Misconception 6: Meditation is for Saints and Sadhus
It is true, of course, that most holy men meditate, but they don’t meditate because they are holy men. That is backward. They are holy men because they meditate; meditation is how they got there. And they started meditating before they became holy, otherwise they would not be holy. This is an important point. A sizable number of students seems to feel that a person should be completely moral before beginning to meditate. It is an unworkable strategy. Morality requires a certain degree of mental control as a prerequisite. You can’t follow any set of moral precepts without at least a little self-control, and if your mind is perpetually spinning like a fruit cylinder in a slot machine, self-control is highly unlikely.
…
There are three integral factors in Buddhist meditation— morality, concentration, and wisdom. These three factors grow together as your practice deepens. Each one influences the other, so you cultivate the three of them at once, not separately. When you have the wisdom to truly understand a situation, compassion toward all parties involved is automatic, and compassion means that you automatically restrain yourself from any thought, word, or deed that might harm yourself or others; thus, your behavior is automatically moral.
Misconception 7: Meditation is Running Away from Reality
Incorrect. Meditation is running straight into reality. It does not insulate you from the pain of life but rather allows you to delve so deeply into life and all its aspects that you pierce the pain barrier and go beyond suffering.
Misconception 8: Meditation is a Great way to get High
Well, yes and no. Meditation does produce lovely blissful feelings sometimes. But they are not the purpose, and they don’t always occur. Furthermore, if you do meditation with that purpose in mind, they are less likely to occur than if you just meditate for the actual purpose of meditation, which is increased awareness. Bliss results from relaxation, and relaxation results from release of tension. Seeking bliss from meditation introduces tension into the process, which blows the whole chain of events. It is a Catch-22: you can only experience bliss if you don’t chase after it. Euphoria is not the purpose of meditation. It will often arise, but should be regarded as a byproduct.
Misconception 9: Meditation is Selfish
It certainly looks that way. There sits the meditator parked on a little cushion. Is she out donating blood? No. Is she busy working with disaster victims? No. But let us examine her motivation. Why is she doing this? The meditator’s intention is to purge her own mind of anger, prejudice, and ill will, and she is actively engaged in the process of getting rid of greed, tension, and insensitivity. Those are the very items that obstruct her compassion for others. Until they are gone, any good works that she does are likely to be just an extension of her own ego, and of no real help in the long run. Harm in the name of help is one of the oldest games.
Misconception 10: When you Meditate, you Sit Around Thinking Lofty Thoughts
Of course, lofty thoughts may arise during your practice. They are certainly not to be avoided. Neither are they to be sought. They are just pleasant side effects. Vipassana is a simple practice. It consists of experiencing your own life events directly, without preferences and without mental images pasted onto them. Vipassana is seeing your life unfold from moment to moment without biases. What comes up, comes up. It is very simple.
Misconception 11: A Couple of Weeks of Meditation and All My Problems will go Away
Sorry, meditation is not a quick cure-all. You will start seeing changes right away, but really profound effects are years down the line. That is just the way the universe is constructed. Nothing worthwhile is achieved overnight. Meditation is tough in some respects, requiring a long discipline and a sometimes painful process of practice. At each sitting you gain some results, but they are often very subtle. They occur deep within the mind, and only manifest much later. And if you are sitting there constantly looking for huge, instantaneous changes, you will miss the subtle shifts altogether. You will get discouraged, give up, and swear that no such changes could ever occur. Patience is the key. Patience. If you learn nothing else from meditation, you will learn patience. Patience is essential for any profound change.
Mindfulness in Plain English is worth reading. | 11,385 | 4,926 | 0.000205 |
Remarks by Governor Laurence H. Meyer
At the University of Wisconsin, LaCrosse, Wisconsin
October 24, 2000
The Politics of Monetary Policy: Balancing Independence and Accountability
It is widely believed, at least among central bankers, that "independence" is a prerequisite for achieving the goals that traditionally have been assigned to central banks--specifically for achieving price stability. "Independence" does not mean literally independence from government, because central banks here and abroad are almost always part of government. The relationship of central banks to the rest of government is, in practice, therefore much more complex than the term "independence" might suggest.
The motivation for granting independence to central banks is to insulate the conduct of monetary policy from political interference, especially interference motivated by the pressures of elections to deliver short-term gains irrespective of longer-term costs. The intent of this insulation is not to free the central bank to pursue whatever policy it prefers--indeed every country specifies the goals of policy to some degree--but to provide a credible commitment of the government, through its central bank, to achieve those goals, especially price stability.
Even a limited degree of independence, taken literally, could be viewed as inconsistent with democratic ideals and, in addition, might leave the central bank without appropriate incentives to carry out its responsibilities. Therefore, independence has to be balanced with accountability--accountability of the central bank to the public and, specifically, to their elected representatives.
It is important to appreciate, however, that steps to encourage accountability also offer opportunities for political pressure. The history of the Federal Reserve's relationship to the rest of government is one marked by efforts by the rest of government both to foster central bank independence and to exert political pressure on monetary policy.
The purpose of this paper is to clarify the relationship of central banks within government, to explain the nature, degree of, and rationale for the independence afforded to many central banks--with a special focus on the role of the Federal Reserve within the U.S. government--and to discuss the balancing of independence and accountability in principle and in practice.
It is useful to distinguish two types of independence for central banks: goal independence and instrument independence. If a central bank is free to set the final objectives for monetary policy, it has goal independence. If a central bank is free to choose the settings for its instruments in order to pursue its ultimate objectives, it has instrument independence.
Most central banks have specific legislative mandates and therefore do not have goal independence. Thus the "independence" of "independent" central banks is instrument independence under which the central bank has authority to choose settings for its instruments in order to pursue the objectives mandated by the legislature, without seeking permission from, or being overturned by, either the executive or the legislature. However, countries vary considerably in the specificity of the mandated goals and hence in the degree of discretion of central banks in the conduct of monetary policy.
On the other hand, it appears that elected officials in many countries apparently understood the incentives under which they operate and have structured charters for their central banks that, in effect, tie their own hands--that is, limit political interference with monetary policy to enhance the prospects of achieving and maintaining price stability. Nevertheless, the urge to exert political pressure--to support the objectives of the Administration as well as those of the Congress, to take the U.S. case, and at other times to support the re-election of the President or of congressional incumbents--sometimes becomes irresistible. At such times, the tradition of independence at the Fed, the leadership of its Chairman, the influence of long terms for governors, and the presence of Reserve Bank presidents on the Federal Open Market Committee (FOMC) become especially important.
In addition, budget priorities and monetary policy objectives can be in conflict. The executive branch generally wants to keep the cost of servicing its debt low, and this preference might be at odds with the need for monetary policy to vary interest rates to maintain price stability. This tension has been present during both World Wars and for several years following World War II.
Finally, especially in countries where debt markets are not well developed, central banks might be called upon to finance budget deficits by printing money, again interfering with maintaining price stability. The Federal Reserve, for example, was asked to directly underwrite government debt during World War I but a statutory prohibition on directly purchasing government debt was later added to the Federal Reserve Act.
Some have worried that even an independent central bank could succumb to the temptation to stimulate the economy today at the expense of higher inflation in the future. This is referred to as the problem of time inconsistency. That is, the central bank has an incentive to commit itself to price stability and then to renege on this promise in order to gain employment in the short run with relatively little initial sacrifice in the form of higher inflation. In the long run inflation would rise and the central bank would either have to tolerate the higher rate of inflation or push output below potential for a while to restore price stability. Once the public understood this process, moreover, it would expect higher inflation, so that, in the longer run, the result could be higher inflation without any short-run gain in output.
Several solutions to the time inconsistency problem have been offered. First, the rest of government could impose a rule on the central bank, restricting its ability to play the game described above. The rule would ensure a credible commitment to price stability, thereby anchoring the public's expectations and removing the inflationary bias that otherwise might result. Second, the government could appoint conservative central bankers--central bankers with a greater commitment to price stability than the public--and thereby offset the inflationary bias that would otherwise arise. Third, central bankers could be forced to operate under performance or incentive contracts, whereby they could be penalized for failure to maintain price stability. The Governor of the Reserve Bank of New Zealand operates under such a performance contract; he can be removed from office for failure to achieve his inflation target.
I have never found the literature on time inconsistency particularly relevant to central banks. Surely central banks realize they are facing a repeated game, not a one-time game. They will therefore be reluctant to undermine their credibility over the longer run by pretending to pursue price stability while stimulating the economy for short-run gain. Long terms and other institutional ways of insulating central banks from short-term political pressures allow central bankers to take this longer view and make them less likely to follow time-inconsistent policies. Still, the problem highlighted in the time inconsistency literature may reinforce the case for both a price stability legislative mandate and instrument independence for the central bank.
Independence also is likely to reinforce the credibility of a central bank's commitment to price stability. This enhanced credibility may then yield additional benefits. First, it could allow the central bank to reduce the cost of lowering inflation. It is generally agreed that to lower inflation monetary policy must reduce output for a while, relative to potential, by reducing aggregate demand. The resulting loss of output during the transition to lower inflation is a measure of the cost of reducing inflation. The more quickly inflation expectations fall, the more rapidly will inflation itself decline, and the lower will be the cost of reducing inflation.
A credible central bank could also be more effective in conducting stabilization policy. If aggregate demand were to slow, a stimulative monetary policy move would be less likely to undermine confidence in the central bank's pursuit of price stability when the central bank is independent (and has a price stability mandate). In addition, if inflation moved upward, inflation expectations would be less likely to follow immediately, making it easier for the central bank to contain inflation.
Changes in Britain, Japan, and continental Europe made 1998 a banner year in the history of central bank independence. The Bank of England, one of the oldest central banks in the world, was founded by an act of Parliament in 1694. It was involved in commercial activity until the end of the 19th century, but it had gradually shifted during those 200 years toward exclusive focus on central bank activity. The Bank of England had substantial independence for much of the 18th and 19th centuries, but by the 20th century it had essentially become an agency of the British Treasury. Then, in June 1998, it was reborn as an independent central bank under the current Labour government.
The Bank of Japan gained operational independence in April 1998. The Bank is still not legally independent, a status prevented by the Japanese constitution. In addition, representatives of both the Ministry of Finance and the Economic Planning Agency attend meetings in a nonvoting capacity. But before then, the Ministry of Finance could require the Bank to delay implementation of a change in policy; now it can only ask. Recently, the Ministry of Finance indeed asked the policy committee of the Bank of Japan to delay a decision to raise the Bank's target interest rate. In an exercise of the Bank's newly attained power, the policy committee rejected the request.
The European Central Bank (ECB) began operating on June 1, 1998, and assumed responsibility for monetary policy in the euro area on January 1, 1999. The ECB is the world's first supranational central bank and probably qualifies as the most independent central bank in the world. The charter for the European System of Central Banks (composed of the ECB and the national central banks of the member countries) is an international treaty that can be changed only by unanimous consent of its signatories. With its supranational status, the ECB is further removed from the political pressure of national governments than even the most independent national central banks. In addition, there is no political counterpart to the supranational ECB. The European Parliament carries out oversight hearings on monetary policy but does not have any authority with respect to the ECB.
The major question for the founders was the degree to which the U.S. central bank should be a public or a private institution. Bankers wanted a largely private central bank. Populists wanted a public institution. President Wilson and Congressman Glass steered a middle course. There would be a Federal Reserve Board that was completely public and Federal Reserve Banks that would have significant characteristics of private institutions. During the first half century of Federal Reserve history, the Congress continued to focus more on issues involving the structure of the Federal Reserve than on providing a clear legislative mandate for monetary policy or oversight of the conduct of monetary policy.
A former Fed governor, Andrew Brimmer, in a 1989 paper entitled "Politics and Monetary Policy: Presidential Efforts to Control the Federal Reserve," describes the record of almost "continuous and at least public and vigorous conflicts" between Presidents and the Federal Reserve. In his view, twelve of the fourteen Presidents between the founding of the Federal Reserve and the time he was writing--from Woodrow Wilson to George Bush--had "some kind of public debate, conflict, or criticism of Federal Reserve monetary policy," the exceptions being Calvin Coolidge and Gerald Ford. He alleged that Presidents resented the delegation of monetary policy by the Congress to an independent Federal Reserve and sought ways to bring monetary policy under their influence, often by exerting direct political pressure on the Federal Reserve, but principally through the appointment process. Examples of the latter cited by Brimmer include Nixon, believing that the Federal Reserve had cost him the election in 1960, replacing Chairman William McChesney Martin with Arthur Burns in February 1970 when Martin's term expired; Carter, appointing William Miller to replace Chairman Burns in 1978; and Reagan, appointing Alan Greenspan as Chairman in 1987. For the most part, their best efforts to appoint sympathetic choices as Chairmen have, in Brimmer's judgment, been frustrated by the systematic tendency of Chairmen and other Board members to insist on exercising their congressional mandate.
Thomas M. Havrilesky, in a 1992 book, also provides an account of, and some attempts to measure, the intensity of political pressure over time, based on the number of comments on monetary policy made by Administration officials, including the President, and by members of the Congress. He concludes that there was little pressure from the executive branch during the Eisenhower and Ford Administrations, but many more such efforts in the Kennedy, Johnson, and Nixon Administrations.
My experience on the Board is that the Clinton Administration has respected the independence of the Federal Reserve to a degree that, given the accounts of others, may exceed that of any previous Administration. To be sure, President Clinton has had opportunities to make appointments to the Federal Reserve Board and he has twice reappointed Alan Greenspan as Chairman. But to my knowledge the Administration has never made any public or private effort to influence monetary policy.
The Federal Reserve has been technically independent of the President from the beginning, even though the Secretary of the Treasury and the Comptroller of the Currency originally sat on the Board. Although it is a creature of the Congress, the Federal Reserve Act delegated control over the currency to the Board and Congress insulated the Federal Reserve from elective politics to a large degree. The current structure of the Federal Open Market Committee was introduced in the Banking Act of 1935, which became effective in March 1936. At that time the Secretary of the Treasury and the Comptroller of the Currency were removed from the Board.
The legislation was also a battle between the Administration and the Congress. The Administration wanted to shift the power over monetary policy toward the centralized and presidentially appointed Federal Reserve Board governors, a group they had a better opportunity to influence through the appointment process. The Congress partly resisted and diluted the control of the Administration by allowing a role for the Reserve Bank presidents on the FOMC.
During both World Wars, Treasury wanted to issue securities at low interest rates to ease the burden of financing and the Fed went along because it felt bound to facilitate wartime financing. In addition, during World War I, Reserve Banks bought most of the government's first $50 million certificate issue directly from the Treasury despite strong objections from some System officials. Such direct purchases were later eliminated and the statutory prohibition on direct underwriting of government debt is today considered one of the principal protections of the independence of a central bank. After World War I, the Treasury opposed raising the discount rate to combat inflation, but the Fed did so anyway.
During World War II, the Fed sacrificed its independence by agreeing to peg the Treasury yield curve to ensure low rates for wartime financing. After the war, the Fed wanted to resume an independent monetary policy, fearing that it would otherwise become an engine of inflation, but the Treasury was still concerned about minimizing the service cost of the debt. To resolve this conflict, an agreement was negotiated in 1951 by Assistant Secretary of the Treasury William McChesney Martin and Fed officials. The Congress, led by Senator Paul Douglas, also played an important role through its support for Federal Reserve independence. Under the terms of the Accord, as it came to be known, the Fed was no longer obligated to peg the interest rates on Treasury debt, but it was agreed that active consultation between the Fed and Treasury would continue. That active consultation continues today.
From the end of World War II until the mid-1970s, the mandate for monetary policy was based on the Employment Act of 1946. This legislation set out a general mandate for the government. Although it did not explicitly refer to the Federal Reserve, it was widely understood that the act applied to the central bank as a part of government. The act identified the government's macroeconomic policy objectives as fostering "conditions under which there will be useful employment opportunities…for those able, willing, and seeking to work, and to promote maximum employment, production, and purchasing power."
Conflict between the executive branch and the Federal Reserve erupted dramatically in December 1965. President Johnson did not want the Administration's stimulative fiscal policy undermined by restrictive monetary policy. Chairman Martin supported an increase in the discount rate as an appropriate step to contain the risk of higher inflation. A key vote occurred on a proposed increase in the discount rate at a Board meeting on December 3. Although the President tried to influence the Chairman's position, and others in the Administration put pressure on other members of the Board, the Board of Governors voted 4-3 to support the Chairman. Following the vote, the President summoned the Chairman to the President's ranch in Texas. But the vote stood. The independence of the Fed was preserved and indeed used for precisely the purpose it was intended. Subsequently, virtually everyone agreed it had been the correct decision. The system worked.
The Congress became more involved in the monetary policy process in the 1970s. This was a response to both poor economic performance and changing views about the importance of monetary aggregates in shaping economic developments, especially inflation. Inflation began to rise in the late 1960s and escalated further in the 1970s. During this period, monetarism was an increasing influence, with its focus on the importance of limiting the rate of growth of the money supply to control inflation. But it was the sharp recession in 1974-75 that really provoked the Congress to provide more detailed instructions to the Federal Reserve about the objectives that should guide monetary policy.
In 1975, the House and Senate passed Concurrent Resolution 133 calling on the Fed to lower long-term interest rates and expand the monetary and credit aggregates to promote recovery. The Fed was also instructed to set money growth targets and to participate in periodic congressional hearings on monetary policy. For the first time, the Congress explicitly identified the objectives for monetary policy. The same language about the objectives applies today. Still, with its focus on the conduct of monetary policy at a point in time (rather than on general guidelines on policy objectives to be applied over time), the resolution was a clear instance of action by the Congress to intervene and influence monetary policy.
The monetary policy objectives written into the concurrent resolution were added by an amendment to the Federal Reserve Act in 1977 and were further elaborated in the Full Employment and Balanced Growth Act of 1978, often referred to as the Humphrey-Hawkins act after its co-sponsors.
Another clear attempt at political interference emerged in February 1988 when an undersecretary of the Treasury sent a letter to Federal Reserve officials urging them to ease monetary policy. The request was promptly and publicly rebuked by Chairman Greenspan. Having an attempt at political pressure become public and be sharply rejected was an unusual event in the history of the relationship between the executive branch and the Federal Reserve.
The reporting requirements in the Humphrey-Hawkins act expired in May 2000. As a result, the Congress is now reconsidering the monetary policy oversight process. In part because the link between money growth and nominal spending appears to be less tight than it was earlier, the role of money growth in monetary policy deliberations has diminished and it appears likely that the Congress will no longer require the Fed to set and report money growth ranges. However, the current language about the objectives of monetary policy seems likely to be retained, as does semiannual testimony on monetary policy.
The legislation creating an "independent" central bank--or in many cases revisions to such legislation--often entirely takes away goal independence by mandating objectives for monetary policy, but otherwise sets up a structure that confers and protects instrument independence. The most important requirement for instrument independence is that the central bank be the final authority on monetary policy. That is, monetary policy decisions should not be subject to veto by the executive or legislative branches of government. Instrument independence is further protected if other institutions of government are not represented on the monetary policy committee. A lesser protection would be to allow government representation but only in a nonvoting capacity.
Instrument independence is further facilitated by long, overlapping terms for members of the monetary policy committee; by limited opportunities for reappointment; and by committee members not being subject to removal except for cause--where "cause" refers to fraud or other personal misconduct but explicitly excludes differences in judgment about policy. An intangible contributor to independence, but arguably the most important, is the appointment of a capable, respected, politically astute, and "independent minded" chairman.
A third important protection of independence is achieved by freeing the central bank from the appropriations process. Many central banks have been granted the seignorage function--issuing currency for the government--and cover the cost of their operations from the earnings on their portfolio of government securities acquired in the process, returning the excess to the government.
Finally, it is critically important to ensure that the central bank will not be required to directly underwrite government debt. As I noted above, the Treasury or Finance Ministry will have an incentive to keep interest rates low to reduce the cost of servicing the government debt. Indeed, perhaps the first principle of central bank independence is independence from the fiscal authority.
If independence is also defined in terms of assuring the ability and commitment of the central bank to achieve price stability, this commitment can be protected by an explicit price stability mandate from the government. That is, a government that explicitly imposes this mandate is less likely to interfere in a central bank's pursuit of this objective. Independence, by this definition, is viewed as greatest if price stability is the exclusive objective of monetary policy, or at least the principal objective.
Bade and Parkin (1988) ranked the political independence of twelve industrial country central banks on the basis of answers to questions such as "Is the bank the final policy authority?" and "Is there no government official (with or without voting power) on the bank board?" Grilli, Masciandro, and Tabellini (1991) also incorporated information on the length of terms of monetary policy committee members and on policy goals of the central bank with respect to monetary policy, specifically whether there is a mandate for monetary stability (including money growth or price stability objectives). Cukierman (1992) also takes into account restrictions on the ability of the public sector to borrow from the central bank. A central bank is more independent if it is protected, for example, from directly underwriting the government debt.
Germany (prior to its participation as part of the ECB) and Switzerland have been uniformly ranked the most independent of central banks. The United States fell in the second tier in the Bade and Parkin rankings; was just below the most independent central banks in the Grilli, Masciandro, and Tabellini rankings of eighteen industrial countries; and was tied for fourth place among seventy countries in Cukierman's rankings. The Federal Reserve lost points in these rankings because of the brevity of the Chairman's term (less than five years) and the failure to single out price stability as the unique or principal objective.
Of these studies, I prefer Bade and Parkin's methodology for ranking independence because they included only those institutional characteristics that afforded a measure of independence to the central bank. Grilli, Masciandro, and Tabellini and Cukierman also included in their measures the nature of monetary policy objectives, ranking independence higher if there is a price stability objective and, in Cukierman's case, higher still if price stability is the only or at least principal objective. In the latter case, a central bank with more discretion--for example, as a result of multiple objectives, as in the case of the United States--is ranked as less independent than a central bank that has little discretion on account of a single, precisely defined price stability objective. Of course, defining independence to involve a mandate making price stability the single or principal objective increases the potential for an inverse relationship between "independence" and inflation.
Every organization's performance is likely to be enhanced by appropriate incentives. In the private sector, the incentives for a business are profitability and, indeed, survival. In the public sector, other means must be found to provide incentives. Elections of course play this role for elected officials. With central banks having been given an arms-length relationship with the electoral process, some have suggested that central bank policymakers should operate under explicit incentive contracts. But, for the most part, accountability is achieved for central banks both through the appointment process and by regular oversight by the legislature.
Accountability is facilitated by providing the central bank with a specific, external (usually legislatively imposed) mandate. Two aspects of designing the objectives for monetary policy are important. First, a single objective (typically price stability) makes the central bank more accountable, because multiple targets always carry trade-offs, at least in the short-run, which are subject to the discretion of the central bank. Second, explicit numerical targets make central banks more accountable than more general targets. Specifically, an explicit numerical inflation target makes the central bank more accountable than a more general commitment to price stability.
There are, however, other considerations that are relevant to setting the mandate. First, if there is a single target for a central bank, it will surely be price stability, given that monetary policy is the principal, even unique, determinant of inflation in the long run. While a single target is more precise, few legislatures would tolerate a central bank disclaiming any responsibility for the cyclical state of the economy or at least failing to respond to cyclical weakness. Indeed, given the inescapable trade-off between inflation variability and output variability, a central bank naturally, even inevitably, accounts for the variability of output around full employment when deciding how rapidly to try to restore price stability in cases where shocks or policy mistakes move the economy away from this goal. Inflation-targeting central banks often take account of output variability by defining a period of time over which any return to price stability should occur, typically two years. But such a fixed boundary may not encompass the optimal response to all shocks.
A second consideration in setting the mandate is that flexibility can be a valuable asset for policymakers, given the variety of shocks that the economy may face, structural changes that could affect the nature of trade-offs faced by policymakers, and the possibility of short-run trade-offs among multiple targets. So, less precise objectives and multiple targets provide flexibility for the policymaker. To the extent that there is a single and explicit target, accountability is narrowly about performance relative to that target. On the other hand, when there are multiple targets and hence inherent shorter run flexibility and less precisely defined targets, the oversight by the legislature will typically focus more broadly on the judgments that the central bank has made in pursuing its legislative mandate.
A second source of accountability is through the reappointment process. If terms are short and especially if the Chairman and other voting members can be reappointed for additional terms, more control can be exercised through the appointment process, and committee members can more easily be held accountable for their policy votes. This is a clear example of the trade-off between independence (facilitated by long terms without the possibility of reappointment) and accountability (facilitated by short terms with opportunities for reappointment).
As I noted earlier, Federal Reserve Board governors are appointed by the President, subject to Senate confirmation, for nominal fourteen-year terms. Such long, overlapping terms facilitate independence. However, if a Board member resigns before his or her term has expired, the successor is appointed for the remainder of that term. At the end of a partial term, a governor can be reappointed for a full term, but reappointment is at the discretion of the President and is again subject to confirmation by the Senate. Once a full term has been served, no reappointment is possible. The average actual tenure of governors has been between five and six years over the last twenty-five years.
However, the term of the Chairman and the Vice Chairman of the Board of Governors is only four years and both can be reappointed for additional terms as Chairman and Vice Chairman for as long as they remain on the Board. Such short and renewable terms reduce independence but facilitate accountability. In addition, they provide an important opportunity for the President to try to influence monetary policy decisions by pressures exerted on the Chairman subsequent to appointment. To a lesser degree, appointment of governors and direct pressure on them are further avenues of political influence that have been employed, at least on occasion.
The authors of the Banking Act of 1935, which established the FOMC in its modern form, implemented the system of long overlapping terms for governors and shorter renewable terms for the Chairman and Vice Chairman. It seems to me they made a conscious effort to balance independence and accountability. The short, renewable term for the Chairman would enhance accountability and encourage a strong working relationship between the Chairman and the executive and legislative branches. On the other hand, the long and effectively nonrenewable terms for governors would protect the fundamental independence of monetary policy. So the Federal Reserve loses points in some indices of independence because of the short term of the Chairman, but the resulting balance between independence and accountability has, in my view, contributed over the years to a successful relationship between the central bank and the rest of government.
The Congress has, over time, made efforts to increase the transparency of the monetary policy process and widen the scope of disclosures of monetary policy decisions and of the discussions leading up to those decisions. Historically, the Federal Reserve has responded initially by trying to preserve the status quo, but over time it has come to accept and even appreciate the evolution toward greater transparency and disclosure. Nevertheless, continuing concerns have been the potentially deleterious effect of still greater transparency and disclosure on the effectiveness of the deliberative process and the possible effects on the volatility of financial markets.
Transparency is influenced by the operating procedures used to implement monetary policy. It is furthered by announcements of policy changes, along with statements explaining the rationale underlying policy actions, and by timely and sufficiently detailed reporting of the substantive discussions leading to the policy decisions.
The Federal Reserve used to set its policy in terms of the tightness of reserve positions (so-called "reserve market conditions"). This was a very imprecise way of setting and explaining policy, making it more difficult for the public and the Congress to monitor and evaluate monetary policy decisions. One of the developments of FOMC practice under Chairman Greenspan was to set policy explicitly in terms of a target rate for the federal funds rate.
Initially, these decisions were not directly conveyed to the public. Instead, the Federal Reserve Bank of New York altered the way in which it implemented open market operations to alert financial markets to the change in policy. In February 1994, the Federal Reserve began announcing on the day of each meeting any change in its federal funds target and formalized that decision in February 1995. At the same time, it began to offer a brief statement explaining the rationale for the policy change. A policy of issuing a statement even when there was no change in policy was implemented last year.
The effect of monetary policy derives not only from the explicit policy actions taken, but also from expectations about future policy. Until quite recently, the Federal Reserve opposed earlier release of its directive or minutes precisely because that would provide some hints about prospects for future policy and this could result in volatility in financial markets. Today, however, not only does the FOMC immediately announce its policy decisions and provide a rationale for policy changes, it also reveals whether the committee believes the risks to achieving its goals are balanced or unbalanced--and, if unbalanced, in what direction. Since the early 1980s, the so-called tilt had been part of the directive, but in May 1999 the FOMC began to report changes in the tilt on the day of the meeting and, since that time, the markets have focused considerable attention on what the Federal Reserve says about the future.
Transparency is also enhanced by disclosure--including the announcement of policy actions and of the rationale for policy actions, the release, in the minutes, of a summary of the Committee's substantive discussion about the economic outlook and the appropriate course of policy, and testimony before the Congress. The Federal Reserve releases minutes of each meeting shortly after the following meeting--in effect, a delay of six or seven weeks. Some have encouraged a further step toward enhanced transparency by speeding this release.
The transcripts of an entire year of meetings--lightly edited verbatim records of the deliberations, with redactions for sensitive information related to foreign governments or specific businesses or individuals--are released with a lag of five years. The decision to do so was made in February 1995. Until late 1993, it was not widely known--inside or outside the Federal Reserve--that verbatim records of FOMC meetings (transcribed from audio tapes) were retained. Once the minutes were released, the tapes themselves were erased--actually taped over--in conformance with Committee directives. When the Congress learned of the availability of the transcripts, they demanded that they be released to the public, and the current procedures were negotiated between the Federal Reserve and the Congress. The transcripts are a useful historical record of FOMC meetings and provide scholars as well as current Board members with insights into the monetary policy process and its evolution over time.
One important reason for consultation and communication between the Federal Reserve and both the executive and legislative branches is the desirability of effective coordination of monetary and fiscal policies. The executive and legislative branches collectively set fiscal policy, while the central bank sets monetary policy. The appropriate monetary policy must give substantial weight to prevailing and expected fiscal policies. To a lesser degree, the same principle is at work in the formation of fiscal policies. I say to a lesser extent because I believe fiscal policies since the early 1980s have been set more on the basis of longer-run considerations--such as promoting growth--than for short-run stabilization purposes. As a result, stabilization policy is today principally a concern of the central bank except under extreme circumstances. The forecast of the central bank must consider current and prospective fiscal policies, and monetary policy must adjust to fiscal policy changes, while the executive and legislative branches are somewhat freer to implement changes in long-run strategies as the political consensus allows or dictates.
On the other hand, the absence of active stabilization efforts by the executive branch (and the Congress) might increase their frustration about the stabilization policies pursued by the Federal Reserve--or, perhaps more likely, about the perceived failure of the Federal Reserve to pursue full employment aggressively enough--and increase efforts at political interference with the conduct of monetary policy.
The relationship between the Federal Reserve and the executive branch has evolved over the last half century toward a more informal and less structured relationship. The Eisenhower Administration established an Advisory Board on Economic Growth and Stabilization (ABEGS) which included the chairman of the President's Council of Economic Advisers and the Fed Chairman--initially Arthur Burns and William McChesney Martin respectively--plus cabinet members. With Arthur Burns in the lead, the ABEGS functioned as a forum for frequent consultations on the policy mix. During that period, fiscal policy had a more prominent role in stabilization policy, with monetary policy playing a more supporting role. Some previous Fed Chairmen also have acted as close advisors to the president--for example, Mariner Eccles for President Franklin Roosevelt and Arthur Burns in the case of President Nixon--sometimes in discussions unrelated to the coordination of stabilization policy.
The Kennedy-Johnson Administration inherited the recession of 1960-61 and was determined to use fiscal policy to promote recovery. In the prelude to the 1964 tax cut, Chairman Martin was included in policy discussions as part of a "Quadriad" consisting also of the CEA chairman, the Secretary of the Treasury, and the director of the Budget Bureau. Chairman Martin was included mainly to ensure that the Fed did not offset the expected effect of the tax cut.
Coordination slackened as Vietnam War spending stimulated the economy, to the alarm of Fed officials, leading to the December 1965 confrontation. This was the most dramatic example of attempted political interference with the conduct of monetary policy after the 1951 Accord. The minutes of the meeting at which the Board decided to raise the discount rate include interesting discussions of the tension between independence and coordination. Some governors wanted to defer to, or at least negotiate further with, the Administration on this issue because they viewed the Administration as having the primary responsibility for the conduct of national economic policy. Should the Federal Reserve frustrate the direction of that policy? And during the congressional hearings that followed there was much discussion about the dangers to a ship with two captains.
Today, the interaction between the Federal Reserve and the Administration is more informal but also perhaps more continuous. However, that relationship is less focused on monetary-fiscal policy coordination than on regulatory and international economic issues. This change reflects the smaller role of fiscal policy in stabilization ever since the early 1980s, when the Reagan Administration shifted the focus to longer-run issues related to encouraging more rapid trend growth. Stabilization policy since that time has been dominated by the Federal Reserve, with coordination of policies becoming especially important at major turning points in the thrust of fiscal policy--for example, when the Clinton Administration decided to make a reduction in the structural federal budget deficit the centerpiece of its economic policy strategy in 1993.
The Secretary of the Treasury and the Chairman of the Federal Reserve meet frequently, many times for breakfast or lunch, often two or three times a week. The meetings are generally short, but not always, with no formal agenda and no staff. These meetings date back to the Treasury-Federal Reserve Accord in 1951 but today, apart from telephone consultations, they are the main source of ongoing contact between the Chairman and the Administration.
There are a number of other opportunities for regular contact among Federal Reserve governors and members of the Administration's economic team. A governor (on a rotating basis) hosts a weekly lunch for senior staff of the Treasury and the Federal Reserve. While the meetings are often social as well as substantive, they are opportunities to discuss issues of mutual concern. Some of the most effective meetings are "theme" meetings, when we agree in advance to focus on a particular issue. Given the Treasury's respect for the independence of the Fed, participants will rarely discuss monetary policy, although they occasionally touch on the economic outlook. Regulatory issues, debt management, or international economic issues tend to dominate. But the contacts made and refreshed at these meetings are extremely constructive when discussions between the Federal Reserve and Treasury are called for, again most often on regulatory issues.
Members of the Board and members of the CEA also meet monthly for lunch. Once again, discussions of the economic outlook and monetary policy are rare. But the discussions often involve interesting issues related to the outlook, such as the sources of the increases in productivity growth and why most other countries have not benefited significantly thus far from the same developments.
The President and the Chairman of the Federal Reserve meet occasionally--more recently, generally a couple of times a year. These meetings typically are informal discussions without agendas and without announcements before or after the meetings. They usually also include the Vice President, the Secretary of the Treasury, and the President's chief of staff. These are typically opportunities for the Chairman to brief the President on the U.S. and global economic outlooks. The frequency of meetings between the Chairman and both the Secretary of the Treasury and the President has varied across Chairmen and Administrations, depending to an important degree on the individuals involved.
The Federal Reserve and the Treasury participate in a variety of working groups--including the President's Working Group on Financial Markets. Treasury and Federal Reserve officials often serve together on U.S. delegations to international organizations--including meetings of G-7 and G-10 finance ministers and central bank governors; regional organizations such as the Asia Pacific Economic Cooperation Council, the Manila Framework, and the Forum for Latin American Central Bank Governors; the Financial Stability Forum, OECD Working Party 3 and Economic Policy Committee, and G-22; and bilateral economic dialogues, for example, with China and India. Before each such forum, it is typical for the U.S. delegation to meet together to coordinate their participation. Consultations were intense during the Mexico crisis in 1995, the Asian financial crisis in 1997-98, and in the discussions about reforming the international financial architecture since these events.
It is sometimes difficult to separate direct political involvement in monetary policy from essential congressional oversight of monetary policy. Today it seems out of place for the Administration to comment directly on the conduct of monetary policy, though this may reflect the extraordinary relationship between this Administration and the Federal Reserve and the exceptional economic environment of the last several years. Only time will tell. At any rate, for the present, the relationship between the Administration and the Federal Reserve on monetary policy is confined to the President's making appointments to the Board while the members of the administration and the Board (and especially the Chairman and the Secretary of the Treasury) engage in regular but informal consultations. On the other hand, the Congress cannot fulfill its oversight responsibilities without actively engaging the Federal Reserve in a dialogue about the conduct of monetary policy.
The Congress conveys its views on monetary policy through a variety of vehicles, including letters, speeches, statements and questions at hearings, committee reports on monetary policy, and bills and resolutions. The Congress over the years has used a variety of approaches to influence monetary policy. Perhaps most important, the Congress has set the goals for monetary policy in law. In addition, the Senate confirms nominees to the Board of Governors, and individual Senators can hold up Board member confirmations in an attempt to influence policy and appointments. The Congress can, if it decides, pass legislation that directly requires a specific monetary policy action. The Congress can threaten to change the structure of the Federal Reserve--abolish the Federal Reserve at the extreme, or specify particular qualifications for Board members, or alter the composition of the FOMC--in an attempt to influence monetary policy. The Congress can demand an accounting of policy by summoning the Chairman, Board members, and Reserve Bank presidents to congressional hearings, in addition to the formal semiannual testimonies by the Chairman.
The line between oversight and direct involvement in the conduct of policy is perhaps crossed when the Congress passes or even introduces a resolution or legislation that gives specific direction to raise or lower interest rates and, especially, when such directions are accompanied by proposed legislation that would reduce the independence of the Federal Reserve.
The history of the past twenty years shows that members of the Congress do try to influence monetary policy, especially when the economy is performing poorly or when interest rates are high or rising, but that the Congress has rarely gone so far as to pass legislation to direct policy. Such legislation has, however, often been introduced; indeed, the introduction of such legislative mandates may be thought of as one way the Congress tries to persuade the Federal Reserve to alter its conduct of monetary policy.
There are, to my knowledge, only one or two examples of legislation that passed with specific monetary policy directives and thus compromised the delegation of instrument independence to the Federal Reserve. I mentioned previously Concurrent Resolution 133 passed in 1975. At the end of November 1982, forty-two Senate Democrats introduced a resolution calling on the Fed to "achieve low enough interest rates to generate significant economic growth and thereby reduce the current intolerable level of unemployment." The Democratic House approved this language in its version of the year-end continuing resolution. However, the final language in the continuing resolution included an important qualification that, in effect, left the discretion about the conduct of monetary policy to the Federal Reserve. Specifically, the Congress added the qualifying phrase "with due regard for combating inflation." This is an excellent example of the broader tension in the relationship between the Congress and the Federal Reserve. On the one hand, the Congress honors the Fed's independence in establishing policy and, on the other hand, individual members, particularly in difficult economic times, work to influence policy.
At the most extreme end of efforts to change the structure of the Federal Reserve, bills have been introduced to repeal the Federal Reserve Act (thereby abolishing the Federal Reserve), to abolish the FOMC, to remove Reserve Bank presidents from the FOMC, or even to impeach Chairman Volcker and all the members of the FOMC at that time. Congressman Henry Gonzalez, a longstanding member and ultimately chairman of the House Banking Committee, brought a special zeal to these efforts and over the years was the author of several such measures.
In return for granting the Federal Reserve "independence," it seems to me that the Congress asks three things of us. First, we must do a good job promoting the objectives that the Congress has identified. Second, we have to accept a certain amount of grumbling about the decisions that impose short-run costs, especially when unemployment is high or policy tightens preemptively to contain what the Fed perceives as inflation risks. We are always the one taking away the punch bowl just as the party is getting good, with members of the Congress among those who always question the timing of any restraint. To be fair, members of the Congress are among the first to congratulate us when we lower rates! And there has, I have to admit, been plenty of praise for the Federal Reserve's contribution to the recent exceptional economic performance. Third, the Federal Reserve must be fully prepared to get a substantial part of the blame for bad results (whether or not we caused them).
The Congress keeps its part of the bargain by leaving the core of our operations alone, so long as things go right, and intervening only around the edges (hearings, speeches, letters, and the introduction of an occasional bill or resolution) to show they remain alert to their oversight responsibilities and reflect the concerns of their constituents.
The appointment process is an important element of the relationship with the Congress. Governors are subject to confirmation by the Senate. The confirmation process is a way for the Congress to influence the conduct of Federal Reserve policy, just as the appointment process offers this opportunity to the President. When the President and congressional majority are from different parties, party politics can affect the Federal Reserve and may explain, in part, why today the Board has two vacancies plus a governor who is serving after the expiration of his term (since a governor can continue to serve in such a case until reappointed or until a new governor is appointed and confirmed).
Another important relationship with the Congress is through hearings. The Chairman testifies frequently before the Congress, with the one-year record being twenty-five appearances in 1995, although only seven were directly about monetary policy. Other governors testify also, though less frequently, with a range of eight to twenty-two appearances per year in recent years. Typically the Chairman alone testifies on monetary policy. The most important testimony on monetary policy is delivered at semiannual hearings before the House and the Senate that began, as I mentioned, with the now-expired provisions of the Humphrey-Hawkins act. From 1978 until today, these were semiannual appearances, in each case before the Senate and the House. But the Chairman is invited for many other hearings, including appearances before the budget committees, the Joint Economic Committee, and the banking committees.
In addition, on rarer occasions, the Chairman and other Board members will visit with some members of the House and the Senate, either at the Board or on the Hill, mostly at the request of the legislators. These meetings are rarely about monetary policy and most focus on regulatory issues, including banking bills and the Community Reinvestment Act, but they are also sometimes about global economic developments, international financial crises, or international financial architecture issues. In addition, contact at the staff level between the Board and congressional committees is common. The Board's staff is routinely asked for technical assistance in drafting legislation on banking, consumer protection, and amendments to the Community Reinvestment Act and other areas.
Finally, members of the Congress often write letters to the Board--individually and in groups--typically urging a specific direction for monetary policy. Since joining the Board in June 1996, I have seen numerous such letters--all either expressing concern about high interest rates or, in most cases, urging the FOMC either to not raise interest rates or to lower them. The largest number of signers during this period was eighty, for a September 23, 1996 letter urging the FOMC not to raise interest rates.
These letters are perhaps best understood as attempts by the Congress to alert the Fed to the pain of constituents as a byproduct of the conduct of monetary policy--typically when the Fed is preemptively raising interest rates in an effort to prevent higher inflation or trying to unwind an earlier increase in inflation, or not sufficiently stimulating a sluggish economy. Once having admonished us in an effort to make sure we understood the consequences of our policies, the Congress has generally relied on us to balance inflation and stabilization objectives, as is the implicit contract under a regime under which the Congress has delegated instrument independence to the Federal Reserve.
The legislative mandate under which the Federal Reserve operates is, as I noted earlier, different from the mandate applied to most other central banks. It explicitly sets out a dual mandate and, should there be short-run conflicts, does not identify any priority between the two objectives. There has long been a small group in the Congress who would like to revise the language related to the policy mandate to elevate the role of price stability to the single or at least principal objective for monetary policy. They press this issue not out of dissatisfaction with the conduct of monetary policy but because they believe such a revised policy mandate would strengthen the credibility of the Federal Reserve's commitment to price stability and thereby allow the central bank to carry out this commitment in the most efficient way. However, a larger, if less vocal, group in the Congress strongly opposes abandoning a commitment by the Federal Reserve to promote full employment through its conduct of monetary policy.
A related issue is the precision with which the objectives should be stated. The mandate, for example, sets out full employment and price stability as objectives but leaves to the Federal Reserve the precise definition of those goals. As recent experience confirms, it would be difficult and unwise to set any numerical target for full employment, given the uncertainty about what that target should be and the likelihood that this target would vary over time with demographic changes in the labor force, government policies, and changes in the efficiency of the matching process between jobs and unemployed workers. The Federal Reserve has never set an explicit numerical target for inflation. Chairman Greenspan has defined price stability as inflation so low and stable that it no longer affects the decisions of households and businesses. However, today, a growing number of governments have set explicit numerical targets for their central banks.
A second broad issue has to do with transparency and disclosure. Over the years, there has been an evolution toward greater disclosure and transparency, but some believe that we ought to be looking for opportunities for further progress. The major questions relate to the speed of release of the minutes and of the transcripts.
References
Bade, R., and M. Parkin. "Central Bank Laws and Monetary Policy," Working Paper, University of Western Ontario, 1982.
Barro, Robert J., and David B. Gordon. "Rules, Discretion and Reputation in a Model of Monetary Policy," Journal of Monetary Economics, vol. 12, 1983.
Briault, Clive, Andrew Haldane, and Mervyn King. "Independence and Accountability," Working Paper Series, no. 49, Bank of England, April 1996.
Brimmer, Andrew F. "Politics and Monetary Policy: Presidential Efforts to Control the Federal Reserve," before 75th Anniversary Luncheon, Board of Governors of the Federal Reserve System,
Cukierman, A. Central Bank Strategy, Credibility, and Independence: Theory and Evidence. Cambridge, Mass.: MIT Press, 1992.
Debelle, G., and S. Fischer. "How Independent Should a Central Bank Be?" mimeo, MIT, 1994.
Greider, William. Secrets of the Temple: How the Federal Reserve Runs the Country. New York: Simon and Schuster, 1987.
Grier, Kevin. "Congressional Influence on U.S. Monetary Policy: An Empirical Test,"
Grier, Kevin. "Presidential Elections and Federal Reserve Policy: An Empirical Test,"
Grilli, Vittorio, Donato Masciandaro, and Guido Tabellini. "Political and Monetary Institutions and Public Financial Policies in the Industrial Countries," Economic Policy, vol. 6, 1991.
Havrilesky, Thomas M. The Pressures on American Monetary Policy. Boston: Kluwer Academic Publishers, 1993.
Kydland, Finn E., and Edward C. Prescott. "Rules Rather Than Discretion: The Inconsistency of Optimal Plans," Journal of Political Economy, vol. 85, 1977.
Lohmann, Susanne. "Optimal Commitment in Monetary Policy: Credibility vs. Flexibility," American Economic Review, vol. 82, 1992.
Minutes of the Board of Governors of the Federal Reserve System, December 3, 1965.
Nordhaus, William D. "The Political Business Cycle," Review of Economic Studies, vol. 42, 1975.
Rogoff, Kenneth. "The Optimal Degree of Commitment to an Intermediate Monetary Target," Quarterly Journal of Economics, vol. 100, 1985.
Schwartz, Anna J. "Central Banking in a Democracy,"
Tufte, Edward. Political Control of the Economy. Princeton, N.J.: Princeton University Press, 1978.
U.S. House of Representatives.
U.S. House of Representatives.
U.S. Senate.
Walsh, Carl. "Optimal Contracts for Central Bankers," American Economic Review, vol. 85, 1995.
Footnotes
1 During the debate on the Banking Act of 1935, Carter Glass expressed the view that having the Secretary of the Treasury on the Federal Reserve Board resulted in enormous influence by the Administration over the decisions of the Board. He felt he had been able to get the Board to do whatever he wanted when he was Secretary of the Treasury in 1919. Certainly his predecessor as Treasury Secretary, W.G. McAdoo, was a dominant presence whenever he attended Board meetings. At the time, this arrangement didn't completely snuff out Federal Reserve independence because the Reserve Banks were important in formulating policy (more important than the Board through much of the 1920s). When the FOMC was established in 1935, ensuring that a Board with all seats filled would have the majority of the votes on policy decisions, it became imperative to end Administration influence by removing the Secretary of the Treasury and the Comptroller of the Currency from the Board.
2 It could be argued that policy began to evolve in this direction in 1983 when the FOMC set targets for borrowed reserves. The latter target was, in effect, a proxy for the federal funds rate. But the proxy was imprecise and it wasn't until the late 1980s that the committee became clearer about its federal funds rate expectation.
3 When Susan Phillips' term expired on January 31, 1998, it took the Administration a year and a half to nominate Carol Parry for that position. Alice Rivlin announced on June 4, 1999, that she would be resigning from the Board, but the President has not nominated anyone to replace her. In the meantime, the President renominated Vice Chairman Roger Ferguson to a full term as governor after the expiration of his short partial term on January 31, 2000. The Republican leadership of the Senate Banking Committee has refused to hold hearings for either the new nominee, Parry, or Ferguson, reserving the opportunity for the new President to make these appointments and therefore possibly to convert the positions from Democratic to Republican appointments. In the meantime, the Board remains below its statutory number of members and could remain so for the many months after the new President takes office, if past experience is any guide to the time it takes to make appointments and get them through the confirmation process.
4 A practice in the Senate called a "hold" allows a member--through his or her majority or minority leader and without exposing his or her identity--to hold up, virtually indefinitely, a vote on a nomination to the Federal Reserve Board or any other government position that requires confirmation by the Senate. I have first-hand knowledge of this practice, since a hold delayed the confirmation of my nomination for several months. Actually, the hold was applied to the renomination of the Chairman, but the nominations of myself and Alice Rivlin as governors were viewed as part of a package and therefore action on all the nominations was delayed. Even without a formal hold, the chairman of the relevant committee has discretion on whether or not to begin the confirmation process because the chairman controls the scheduling of a hearing.
2000 Speeches
Droppers: America's First Hippie Commune, Drop City
Communes have long been a feature of American life. For centuries, religious, cultural, and political groups have attempted to create their own societies to provide respite from the imperfections of the wider world. In the 1960s, however, communes took on a new life, as many young people chose to “turn on, tune in, drop out.” According to some estimates, up to 10,000 communes sprang up during the decade, housing approximately half a million people.
One of the first and most influential of the 1960s’ hippie communes was Drop City, a small settlement founded in 1965 outside Trinidad, Colorado. The main instigator behind the commune was Eugene Bernofsky, the son and grandson of radical communists, along with his wife and a few college friends. In his new book, Droppers, journalist and former wildland firefighter Mark Matthews documents the rise and fall of Drop City, from its idealistic beginnings to its disillusioned end. Along the way, Matthews provides valuable background on the history of communes in the United States and the countercultural context in which Drop City arose.
Bernofsky, who serves as Matthews’ main subject and source of information, was less interested in politics than he was in simply escaping from the confines of middle-class life. Consequently, Drop City had no philosophy other than to allow its members to live as they pleased. Since the commune had no source of income, Bernofsky and his friends lived off food stamps and occasional donations. Their iconic dome-shaped shelters were built out of scavenged materials such as car tops, and were inspired by maverick designer Buckminster Fuller’s geodesic domes.
As the hippie counterculture gained notoriety over the course of the 1960s, so did Drop City. In the spring of 1967 it hosted a “Joy Festival,” which drew visitors from all over the country. The commune’s increased public profile was a source of contention amongst its members, however, and just after the festival Bernofsky decided to leave. Over the next few years the rest of the founding members departed one by one, until Drop City was finally dissolved in 1973.
While Drop City never realized Bernofsky’s greatest hopes, Matthews stresses that it did achieve something valuable. “For two glorious years, the original droppers achieved many of their ambitions and realized many of their visions,” he writes. “They succeeded in tuning out the persistent static of war, racial conflict, poverty, mass murders, and social strife in order to maintain optimism and hope.” And while communes may be less popular today than they were in the 1960s, the story of Drop City is likely to interest anyone concerned with the far idealistic reaches of American life. Though it may have ultimately failed, Drop City was nothing if not a noble experiment.
Disclosure: This article is not an endorsement, but a review. The author provided free copies of his/her book to have his/her book reviewed by a professional reviewer. No fee was paid by the author for this review. Foreword Reviews only recommends books that we love. Foreword Magazine, Inc. is disclosing this in accordance with the Federal Trade Commission’s 16 CFR, Part 255.
warc | 201704 | VOICE DISORDERS TEST 1 SP14
The flashcards below were created by user andreaucoslp15 on FreezingBlue Flashcards.
The structure of the vocal folds consists of: (2 things)
the thyroarytenoid muscle and the mucous membrane
The mucous membrane of the vocal folds is subdivided into the: (2 things)
epithelium and lamina propria
Name the point where the glottal space is the widest during adduction.
The point of maximum excursion of the vocal folds.
One vibratory cycle of the mucosal wave includes:
open at the bottom, open at the top, close at the bottom, close at the top
Breathy onset:
air flows through the glottis before the vocal folds are adducted and vibrating (anxious to start speaking)
Simultaneous onset:
air is flowing through the glottis as vibration of the vocal folds and adduction begin
Hard glottal attack:
air flows through the glottis after the vocal folds are adducted, and vibration begins with a jolt
Jitter is:
frequency perturbation
Shimmer is:
amplitude perturbation
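The two cards above define jitter and shimmer as the same kind of measure applied to different signals. A minimal Python sketch of that shared idea (the helper function and the sample values are hypothetical illustrations, not clinical data or any toolkit's actual formula):

```python
# jitter  = cycle-to-cycle variability of glottal periods (frequency perturbation)
# shimmer = cycle-to-cycle variability of peak amplitudes (amplitude perturbation)

def perturbation(values):
    """Mean absolute cycle-to-cycle difference, relative to the mean value."""
    diffs = [abs(b - a) for a, b in zip(values, values[1:])]
    return (sum(diffs) / len(diffs)) / (sum(values) / len(values))

periods_ms = [8.0, 8.1, 7.9, 8.05, 7.95]     # consecutive vocal fold cycle periods
amplitudes = [0.50, 0.52, 0.49, 0.51, 0.50]  # corresponding cycle peak amplitudes

jitter = perturbation(periods_ms)    # frequency perturbation
shimmer = perturbation(amplitudes)   # amplitude perturbation
```

A perfectly steady voice would give zero for both measures; real voices always show small, nonzero perturbation.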
During swallowing, the ventricular vocal folds are:
adducted
During regular phonation, the ventricular folds:
abduct
Name the five types of breathing.
abdominal/diaphragmatic; costal/thoracic (up higher); clavicular (shallow breathing); mixed; combined thoracic and abdominal
Define tidal volume.
the amount of air inspired and expired during a typical respiratory cycle; it is determined by the oxygen needs, not the speaking or singing needs, of the individual
Define inspiratory reserve volume.
the maximum volume of air that can be inspired beyond the end of a tidal inspiration
True or False. The infant has a proportionately smaller glottis than the adult.
False. Absolute glottis dimensions increase with age.
When does the fundamental frequency of the voice drop?
Toward the end of a sustained utterance.
In typical development, what feature of speech develops before the phonetic pattern?
Intonation
Until puberty, the larynx is of ________ size in the male and female.
equal
In temperate climates, the onset of puberty in females ranges from ___ to ___, and in males from ___ to ___. Near the equator, the onset is accelerated ____ to ____ years.
What is the average time from onset to completion of adolescent voice change?
3-6 months, one year at most
True or False. Adolescent voice change does not constitute a voice disorder.
True.
Name the six structural changes of both the respiratory system and the vocal folds that occur with age.
atrophy of the laryngeal muscles; thinning and dehydration of the laryngeal mucosa; loss of elasticity of the ligaments; changes in the elasticity of the vocal folds; calcification of cartilages; a glottal gap
Define expiratory reserve volume.
The maximum volume of air that can be expired beyond the end of a tidal expiration.
Define residual volume.
The volume of air that remains in the lungs after a maximum expiration.
Define vital capacity.
The total amount of air that can be expired from the lungs and air passages following a maximum inhalation.
True or False. It is not likely to expect a relationship between the relative size of individuals and their relative vital capacities.
False. It is reasonable to expect a relationship between the relative size of individuals and their relative vital capacities.
True or False. Vital capacity tends to decrease with age.
True.
Define total lung capacity.
It represents the total volume of air that can be held in the lungs and airways after maximum inspiration.
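The volume and capacity definitions in the cards above fit together arithmetically. A short sketch using illustrative adult values in liters (the numbers are assumptions for demonstration; actual values vary by individual):

```python
# Illustrative lung volumes (liters) -- not measurements
tidal_volume = 0.5          # TV: air moved in a typical respiratory cycle
inspiratory_reserve = 3.0   # IRV: extra air beyond a tidal inspiration
expiratory_reserve = 1.2    # ERV: extra air beyond a tidal expiration
residual_volume = 1.3       # RV: air remaining after maximum expiration

# Vital capacity: all air that can be expired after a maximum inhalation
vital_capacity = tidal_volume + inspiratory_reserve + expiratory_reserve

# Total lung capacity: vital capacity plus the air that cannot be expired
total_lung_capacity = vital_capacity + residual_volume
```

With these sample values, vital capacity comes to 4.7 L and total lung capacity to 6.0 L, which matches the definitions: total lung capacity exceeds vital capacity by exactly the residual volume.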
Which muscle, when contracted, stretches and thins the vocal folds?
The cricothyroid muscle.
Which vocal fold muscle increases fundamental frequency?
The cricothyroid muscle.
True or False. The cricothyroid muscle gives the voice a higher pitch quality.
True.
Which muscle, when contracted, shortens the vocal folds?
The thyroarytenoid muscle.
Which vocal fold muscle decreases fundamental frequency?
The thyroarytenoid muscle.
True or False. The thyroarytenoid muscle, when contracted, creates a higher pitch.
False. It lowers pitch.
What three things must happen in order to increase perceived loudness?
Take in more air (increase sub-glottal pressure); close the vocal folds (increase in vocal fold adduction); open the mouth wide
The difference in voices comes from the ___________ that happens in the head.
resonance
True or False. Resonance changes the frequency of the voice as well as the quality.
False. Resonance does not change the frequency, only the quality.
By action of the pharyngeal constrictors and other supra glottal muscles the overall dimensions of the __________ are always changing.
pharynx
Of all our resonators, the _______ __________ is capable of the most variation in size and shape.
oral cavity
Which moving structure of the oral cavity is the most mobile articulator?
The tongue.
Name the important resonating cavities.
the pharynx the oral cavity the nasal cavity
Many things can go wrong with the _________, which would most likely cause a ________________ voice.
ASHA affirms the practice of _____________ of the _________ by both otolaryngologists and speech-language pathologists.
Is leukoplakia a precursor to cancer?
yes
Laryngeal cancer comprises approximately ____% of all malignancies diagnosed annually in the US.
6
Name the three classifications of laryngeal cancers.
supraglottal, glottal, subglottal
What is the most prevalent cause of stridor in the neonate?
Laryngomalacia; it accounts for 75% of all congenital anomalies of the larynx.
____________ stenosis results from an interruption of the cricoid cartilage or arrested development of the conus elasticus during embryologic development.
Congenital
____________ stenosis may occur following endotracheal ____________ either related to life-saving procedures or surgery.
Tracheoesophageal Fistulas (TEF) are __________ that occur between the __________ and the _________.
_____________ ______________ is an abnormal occlusion of the esophagus.
Esophageal atresia. (the esophagus is either misshapen or missing parts)
One of the most common ORGANIC voice disorders is:
contact ulcers
Contact ulcers are considered a _________ ________ disease of the ________.
chronic inflammatory larynx
Common symptoms of contact ulcers are: (2)
hoarse or rough voice quality; throat clearing
Are contact ulcers unilateral or bilateral?
Bilateral
Name the three causes (or combination thereof) that result in contact ulcers.
hard glottal attack along with throat clearing and coughing; laryngopharyngeal reflux (LPR); endotracheal intubation
Contact ulcers are more common in:
adults (rather than children) and men (rather than women)
Reactive tissue irritation will lead to the formation of granulomas (contact ulcers). What is the most common reactive lesion?
Teflon granuloma of the larynx
True or False. Voice therapy can begin in the presence of contact ulcers without a complete laryngeal examination.
False.
True or False. Surgically induced irritation with resulting granuloma require medical-surgical resolution of the problem.
True.
Any _______________ disease of the larynx, such as ___________ or ____________ can lead to granulomatous tissue damage.
inflammatory; tuberculosis; syphilis
In LPR, stomach acid is forced up the ____________.
esophagus
In LPR, the stomach acid exits the upper _____________ ____________.
esophageal sphincter
LPR irritates the area between the:
arytenoids
Are cysts usually unilateral or bilateral?
Unilateral
True or False. Cysts will resolve spontaneously.
False. Cysts rarely resolve spontaneously.
How are hemangiomas different from contact ulcers and granulomas?
Whereas a granuloma is usually a firm granulated sac, a hemangioma is a soft, pliable, blood-filled sac.
Granulomas and hemangiomas often occur on the ____________ glottis.
posterior
Hemangiomas are associated with:
vocal hyperfunction; hyperacidity; intubation
Is hyperkeratosis a precursor of cancer?
Yes. It may be a precursor of malignant tissue change.
Name the common sites of hyperkeratosis.
under the tongue; the vocal folds at the anterior commissure; posteriorly on the arytenoid prominences
Hyperkeratosis is an _____________ lesion in the ___________ or ____________.
True or False. Infectious laryngitis is viral in origin.
True.
Total laryngectomy alters what three things?
respiration, swallowing, speech
Name the three methods of alaryngeal communication.
esophageal speech, tracheoesophageal speech, artificial larynx
What is the sound source for esophageal speech?
the pharyngoesophageal segment
True or False. Leukoplakia and cancer of the larynx can be distinguished by visual inspection.
False.
What is the most common cause of pediatric hoarseness?
recurrent respiratory papillomatosis (the most common benign laryngeal neoplasm)
Describe the differences between GERD and LPR.
GERD is the passage of gastric juices from the stomach into the esophagus. LPR occurs if these contents move superiorly, exit the upper esophageal sphincter, and spill into the pharynx.
True or False. Webbing is life-threatening.
True.
What is the primary function of the larynx?
To protect the airway.
Name the main vocal problems.
True or False. No one knows the incidence of voice disorders in the normal population.
True. In part because many cases, such as infectious laryngitis, resolve by themselves.
| 9,508 | 3,829 | 0.000266 |
warc | 201704 | Reliefs: employee-ownership trusts: conditions: the 'all-employee benefit requirement': cases in which the requirement is treated as met: 'significant interest' condition
TCGA92/S236L(1)(b)(ii) and (2)
The trustees held a ‘significant interest’ in the company on 10 December 2013 if all the following conditions applied on that date.
They held 10% or more of the ordinary share capital of C and had powers of voting on all questions affecting C which, if exercised, would have yielded 10% or more of the votes capable of being exercised on such questions.
They were entitled to 10% or more of the profits available for distribution to the equity holders of C.
They would have been entitled on a winding-up of C to 10% or more of the assets available for distribution to its equity holders.
There were no provisions in any agreement or instrument affecting the constitution or management of C, or its shares or securities, whereby the first to third conditions above could cease to be satisfied without the trustees’ consent.
Example 20
The trustees of the Sulaphat Widgets Limited Employee Trust own 20% of the ordinary share capital of Sulaphat Widgets Limited, which entitles them to 20% of the voting rights in matters affecting the company, to 20% of its profits available for distribution and to 20% of any assets in a winding up. There is an agreement in place, dated 19 February 2011, under which in certain circumstances the trustees are not permitted to vote on matters affecting the company. The existence of this agreement means that the trustees did not hold a ‘significant interest’ in Sulaphat Widgets Limited on 10 December 2013.
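The four limbs of the test can be sketched as a small predicate. This is an illustrative reading of the guidance, not anything defined in TCGA92; the function name and percentage parameters are assumptions made for the example.

```python
def held_significant_interest(share_pct, voting_pct, profit_pct,
                              winding_up_pct, no_override_agreement):
    """Sketch of the 'significant interest' test on the relevant date.

    All four limbs must hold: at least 10% of the ordinary share capital
    and voting power, at least 10% of distributable profits, at least 10%
    of winding-up assets, and no agreement or instrument under which the
    first three limbs could cease to be satisfied without the trustees'
    consent (modelled here as the boolean no_override_agreement).
    """
    return (share_pct >= 10 and voting_pct >= 10
            and profit_pct >= 10 and winding_up_pct >= 10
            and no_override_agreement)

# Example 20: 20% on every numeric limb, but the 2011 agreement means
# the conditions could lapse without the trustees' consent.
print(held_significant_interest(20, 20, 20, 20, False))  # False
```

In Example 20 the failing limb is the fourth: generous shareholdings cannot compensate for an agreement that can strip the trustees' rights without their consent.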
For more information on the circumstances in which trustees hold a ‘significant interest’, see CG67875. | 1,792 | 809 | 0.001267 |
warc | 201704 | Workplace Assessment Overview
The concept of the “workplace of the future” is constantly evolving, as most technology initiatives are centered on social, cloud, mobile, and analytics. The advanced workplace is able to gauge the needs of its users and craft an infrastructure equipped with the essential applications to bring about cost efficiency and enhance productivity and agility. However, user profiling is critical here, as an organization comprises various kinds of users. HCL’s User Profile Assessment services help clients segregate user profiles with data analytics, provide better visibility into working style, productivity, and experience, and help companies understand device workload trends and the necessary transformations to be put in place.
HCL’s Kaleidoscope™ services enable organizations to segregate user profiles with the use of real-time data analytics and enhance visibility into user work styles, productivity, mobility, security, experience, device workload trends, etc., to help identify and create a conducive work environment for all profiles. HCL’s Kaleidoscope™ will furnish companies with better workplace assessment reports that include resource inventory and utilization alongside workplace sustainability health and diagnostics reports. Kaleidoscope™ services help provide improved workplace profiles to better understand profile definitions in order to identify the workplaces of the future. Kaleidoscope™ also furnishes fit-for-purpose solutions and services for all end users.
The benefits offered by these solutions extend to all the stakeholders involved as it will help organizations attract potential talents, enhance ROI through better optimization of every user profile and help improve user satisfaction and productivity. The services will also provide the right solutions for the end users and facilitate an improved user experience. From an IT point of view, user profiling services can improve the security and monitoring of the end-user workplace environment as well as cut down incident and problem tickets. | 2,144 | 990 | 0.001025 |
warc | 201704 | Top 10 doctor insights on: Some juice from an orange went down the wrong way. What will happen to the juice?

It gets absorbed: I take it from your question that some orange juice went down your trachea, or airway. We call this aspiration, and unless it was a large amount or happens repeatedly it is not likely to cause any problem. It is acidic, so it will be even more uncomfortable than other liquids in the airway, but it will be absorbed into the body. There is a small chance it could cause a pneumonia, but this is not likely.

Q: This morning I drank some orange juice and felt a burning feeling in my stomach and blacked out from the pain. What's wrong with me?

Q: My stomach has been making noises and hurting for the past month, even if I'm full. It started when I drank vodka and orange juice. What's wrong?
A: Still drinking? If you put alcohol on a wound or a cut, you know that it BURNS very badly. Well, it does the same thing to some people's stomach lining, particularly those who have other risk factors for getting peptic ulcer or gastritis (infection with a certain bacteria called H. pylori, use of aspirin or NSAIDs, heredity). If your stomach is still bothering you, see your PCP for an evaluation and treatment.

Q: I wasn't able to hold any food down. I drank orange juice and then puked, but I didn't eat until about 5. Is my baby still okay? I'm 17 weeks pregnant.
A: It's OK: your baby will take from you, and it would take several days of fairly persistent puking to cause an issue for you and the baby. If you're having nausea and vomiting issues you can call your OB provider, even on the weekend. Most of us doing OB have an answering service, and there are meds we can call in for nausea and vomiting.

Q: I am 5 weeks pregnant and unable to keep anything down: orange juice, lemon drops, peppermints, smoothies, saltines, fruit, corn or pretzels. Help, please?

Q: If I accidentally consumed orange juice that had spoiled, what could happen? And wouldn't I be able to tell by the taste if it was spoiled?
A: Orange juice is tricky. We've been told for years how healthy it is, right? True, it has vitamin C, calcium (if you buy the calcium-fortified stuff), and it tastes good, but there are a few other things to keep in mind: an 8 oz glass of OJ has almost as much sugar as a can of soda, and OJ has little to no fiber, so that sugar rush can cause an insulin spike which eventually leads to high cholesterol, diabetes, high blood pressure and weight gain. Eating the whole fruit is much better for you because the fiber found in the fruit causes the sugars to be digested more slowly, which avoids the insulin spike. The stuff you buy in the carton has been processed with many chemicals to increase shelf life, even the expensive stuff! If you must have OJ, it's much better to squeeze it yourself. The two best bits of dietary advice I can give pertaining to juice are "everything in moderation" and "bypass the juice and just eat the fruit." Your body will thank you. | 3,533 | 1,783 | 0.000562 |
warc | 201704 | Top 10 doctor insights on: What are the best medicines for sciatica pain? It's been about 8 weeks ongoing.

Medical supervision: Medical treatment is the first line of treatment for sciatica. The three types of medications are nonsteroidal anti-inflammatory medications, muscle relaxants, and narcotic analgesics. These medications should be taken under medical supervision. After 8 weeks of pain, you may need more intensive treatment, such as injections or surgery.
Low back pain is pain that occurs in the back above the buttock area and below the ribs. Low back pain can be sharp, dull, intermittent or constant. Pain can be at rest or associated with activity. Back pain can also be accompanied by pain that shoots or radiates down into the lower extremities or legs...
Stretching: Many times acute sciatic pain will resolve after stretching the area a bit. The muscles will loosen up and help resolve the pain. If not, then consider further imaging studies to see which exact nerve in your spine is getting pinched, for further treatment options.

Sciatica alternatives: Outside of stretching and exercise, alternatives for healing sciatica are massage, acupuncture, heat, topical heating gels, chiropractic care, Feldenkrais, physical therapy, Motrin for pain, and natural supplements such as arnica to decrease inflammation.

First of all, who gave you the diagnosis? Your description does not tell us any symptoms. It can be due to a disk in the lumbar spine, which can cause pain, numbness, and weakness of the muscles that the specific nerve root innervates. Was there an injury? The best advice is for you to see a fellowship-trained spine surgeon for an evaluation. General precautions are to avoid anything that makes it worse.

Mostly time: 90% of those with sciatica due to disc herniation get better without surgery, independent of any specific treatment; time is the biggest healer. Treatment can include medication, injections, activity modification, physical therapy, acupuncture and chiropractic. Add regular exercise (cardiovascular, stretching, and core strengthening of the stomach, back and legs) while staying trim and not smoking. Running is fine if there is no pain.

Options: You have multiple options. The mainstays of treatment are interdisciplinary and include physical therapy, acupuncture, medications (anti-inflammatories, nerve pain agents) and epidural steroid injections. Depending on your response, surgery may need to be considered.
There continue to be emerging technologies to treat sciatica in a minimally invasive fashion.

Spine pain options: Chronic pain in the distribution you described is the result of an irritated nerve, facet joints or other injury, typically in the lumbar spine (low back), caused by herniated disks, spinal stenosis, degenerative disc disease, etc. It requires further evaluation by a spine specialist, and you may be a candidate for facet injections/radiofrequency ablation and epidural steroid injection. | 3,708 | 1,717 | 0.000583 |
warc | 201704 | Top 10 doctor insights on: What are the treatments for nevus sebaceous?

Surgery vs. observe: Most nevi sebaceous do not develop into skin cancers. However, it is estimated that ~10-15% develop a skin cancer called basal cell carcinoma (BCC). Because of that risk, many dermatologists recommend prophylactic surgical removal around puberty. This is controversial: BCC is not life-threatening, and some would suggest excision only when skin cancer transformation is observed.

Often no treatment is best: It depends on the case. Even good plastic surgery is likely to disfigure. When these are excised, they tend to grow back because they originate in the deep nerves and are resupplied with cells from there. Ask ten of your real friends whether to try to remove it or leave it alone. You'll be surprised. Invite folks to touch it for good luck.

Biopsy needed: There is rarely a substitute for a pathology review to confirm a diagnosis and to monitor for any changes. This is done by having a biopsy performed by a dermatologist or plastic surgeon. It is also important to have any previous biopsies or reports available for the current physician to review.

Not likely: Nevus sebaceous is a benign lesion but does have some small risk of converting into a malignant skin cancer over time. In addition, it can cause local inflammation and irritation that can be bothersome. It can be removed; see a dermatologist or facial plastic surgeon to discuss your options and have the lesion examined.

Nevus: There is no natural way to remove it; see a dermatologist to have it examined and for a complete skin body check.

Q: My son is only age 25 and diagnosed with nevus sebaceous, and has patches of hair loss on his head. Should he have these patches removed?
A: Nevus sebaceous is an organoid nevus that occurs in 3 of 1,000 babies. 50% are on the scalp and 45% are on the head and neck. They persist throughout life and enlarge in adult life. The risk of tumor development increases with age, occurring in more than 10% of cases. Both benign and malignant tumors occur. Surgical removal is generally recommended but can be delayed until adulthood.

Q: Can a sebaceous nevus turn to cancer? If so, how likely? And how often should it be checked? I'm scared; what should I do?
A: Remove the lesions: as I may have mentioned earlier, the lesions need to be excised. Even though the lesions are benign, they will grow.

Q: I am 22, male, with 5-6 benign acquired melanocytic nevi on my face. I want to get all of these removed. What is the best possible treatment of choice? | 3,112 | 1,487 | 0.000673 |
warc | 201704 | WBCSD in collaboration with UNEP, it has been a roaring success, reaching, by some estimates, over a million students. More importantly, it has enabled the WBCSD message to go directly to a whole generation of future business leaders. In ‘preparing’ for the exam by reading through the site, they have been exposed to WBCSD’s version of what sustainable development is and how it is most likely to be achieved; it is this version that they are most likely to accept and absorb.
The success of the initiative was largely instrumental in WBCSD deciding to form, in 1996, the Foundation for Business and Sustainable Development (FBSD), whose goal is to ‘promote the business understanding of sustainable development and to encourage education and competence building, research, and demonstration projects in the field of sustainable development.’ In 1998 FBSD produced a 180-page book, The Sustainable Business Challenge: A Briefing for Tomorrow’s Business Leaders, which it hopes will be widely used as a textbook in university-level business courses. Plans are underway to produce a television series on the same theme. Other programmes of the Foundation include an Internet-based ‘Eco-Efficiency Kit’, a multilingual ‘Global Sustainable Development Dictionary’, the use of global scenarios as a management learning tool, and support for various research and demonstration projects. Capping all of these is the ‘WBCSD Virtual University’, which is a joint project with the University of Cambridge and the Norwegian School of Management. Its aim is to ‘bring knowledge and appreciation of sustainable development, the way WBCSD members understand it, to a global audience through combining the latest distant learning and data technology with proven training traditions’ (emphasis added). The outreach potential of such a programme is daunting, and if successful it could ultimately make WBCSD’s definition of sustainable development as well known as the Brundtland Commission’s.
The practice of consistently focusing on best practice within industry on environmental issues has been of strategic as well as substantive importance to WBCSD. Just about every report and book to have come out from WBCSD or its predecessor organizations focuses much of its attention on highlighting case-studies of how specific businesses, mostly member companies, have taken decisions that benefit the environment while maintaining or increasing their long-term profits. This is obviously a heartwarming message, but also has deeper strategic value. For big business in general, which is much more used to being depicted as an environmental rogue, this provides not just good publicity but vindication. In an era when environmental sensibilities among consumers can cause major dents in corporate profits, this alone can justify the US$30,000 annual membership fee. In effect, WBCSD provides big
business with the exact antidote to the many environmental NGOs that have, for years, been highlighting ‘worst practice’. In this regard, however, WBCSD is as guilty of focusing on only one side of the coin as those NGOs have been for focusing only on the other.
Having said the above, the reason for focusing on best practice most often cited by the WBCSD relates to the aspiration (and self-perception) of its member companies to ‘be among the leaders in good environmental practice’. WBCSD offers them the ability to ‘share their experience and expertise with others and keep abreast of best practice in fields to which they might not otherwise have access’. The promise, to the business executive, is of advance information; the attraction, for the environmental policy maker, is of the potential for early dissemination of win–win solutions. The most attractive of these win–win concepts is eco-efficiency, which is ‘at the heart of the WBCSD’s philosophy’. After having introduced the concept the organization has spent much effort in propagating it, and to its credit it is now, indeed, ‘firmly entrenched in the business lexicon’.
Box 3: Eco-Efficiency
In introducing the concept in Changing Course, BCSD had not provided an exact definition of eco-efficiency beyond stating that ‘corporations that achieve ever more efficiency while preventing pollution through good housekeeping, materials substitution, cleaner technologies, and cleaner products and that strive for more efficient use and recovery of resources can be called “eco-efficient”.’[1] By 1993 BCSD had a formal definition: ‘Eco-efficiency is reached by the delivery of competitively priced goods and services that satisfy human needs and bring quality of life, while progressively reducing ecological impacts and resource intensity throughout the life cycle, to a level at least in line with the earth’s estimated carrying capacity.’[2] By 1997 WBCSD was ready to publish a major book on the subject which promoted the concept as a ‘marketing philosophy’ that has been ‘developed by business for business’ and highlights the fact that ‘the first word of the concept encompasses both ecological and economic resources—the second says we have to make optimal use of both.’[3] It went on to specify seven guidelines for operationalizing the concept: a) reduce the material intensity of goods and services; b) reduce the energy intensity of goods and services; c) reduce toxic dispersion; d) enhance material recyclability; e) maximize sustainable use of renewable resources; f) extend product durability; and g) increase the service intensity of products.
YEARBOOK OF INTERNATIONAL CO-OPERATION ON ENVIRONMENT AND DEVELOPMENT 1999/2000 | 5,873 | 2,787 | 0.000373 |
warc | 201704 | Recently I have been asked a lot of questions about what employers can and can’t disclose about a former employee’s performance and reasons for leaving the company. So, I decided to find out what the law says as well as what in-house counsel would generally advise an HR department about disclosures. I consulted with my attorney friend Carole Jurkash, a fellow University of Chicago graduate who went on to get her law degree from Yale Law School to find out what the law says about this topic. Carole really knows what she is talking about because she has 17 years of experience advising various corporations on general business matters as an in-house attorney.
Carole made it clear that in most states employees are hired “at will,” which means they can be fired at any time for almost any reason. The exceptions to “almost any reason” are that an employer can’t fire you for any of the following: your gender, your race, your religion, your sexual orientation, your age, any disabilities you might have, or your marital status. If you are fired for any of those reasons you might have grounds to sue your former employer.

Employers are not prohibited by law from telling a potential employer who calls for a reference the reasons that a former employee left, as long as the information they share is truthful. However, a lot of employers opt not to share the reasons that employees have left the company or to give any kind of references for any former or current employees.

While an employer may be able to fire you for just about any reason, it is in the employer’s interest to be consistent with all employees in order to avoid employment discrimination claims. In other words, as a best practice to avoid liability in employment cases, many lawyers advise employers to adopt a set of policies that are applied to all employees equally. Consistency is a very important element in understanding why employers may or may not choose to discuss the reasons a former employee left the company, as you will see in a minute.

One thing that employers want to avoid is a disparagement lawsuit. “Disparagement” means saying something about a former employee that isn’t true, that is slanderous, or is intended to hurt the former employee. In order to avoid the possibility of a disparagement lawsuit, many employers opt not to give any references at all. That’s right – no references for anyone. Instead, many employers choose to institute a policy of only confirming dates of employment and salary information. But why not give good references to employees who leave on good terms? Why give no references at all?
Employers are cautious about disclosing information about the performance of former employees because of a combination of two things: the need for consistent treatment of all employees to avoid employment discrimination claims and a desire to avoid risking disparagement lawsuits. For example, if an employer discloses information related to a former employee’s poor performance the former employee in question could challenge that claim in court and claim that the employer is slandering them. Or that there was some sort of discrimination (whether there was or not). Even if an employer is perfectly justified in firing a poor performer it is likely that the employer’s attorney will advise them to keep quiet about the reasons for the firing. Why? Because, as you can see there is really no upside to the employer to disclose that information. Attorneys try to minimize risk for their clients. Since disclosing reasons for termination could be considered a risk, it is likely that most employers will simply not do so. On the other hand if an employer gives glowing references for its former employees who were star performers while staying mum about the poor performers, they run the risk that a poor performer could sue them for being inconsistent in their policies. Seem crazy? The “poor” performer’s argument goes like this: the “poor” performer claims that the real reason the employer is refusing to give a reference is based on an unlawful discriminatory reason [race, religion, etc.], and that the employer always gives good references, for example, to ex-employees who are Catholic males under forty with Irish surnames regardless of the quality of their performance, and never gives references to Buddhist females over forty. Well the key to successfully avoiding or defending this type of claim is for employers to treat all employees equally. 
So, if they say great things when someone calls to check references for “good” performers and say nothing about the “bad” performers, they are not treating everyone equally. So many employers won’t give any kind of reference at all. If you happen to have been fired for poor performance, this situation is certainly better for you than some alternatives. So, to wrap this up: can a former employer disclose information about your job performance or the reasons you left the company to someone who calls to check your references? The short answer is: yes they can, as long as they are truthful in what they disclose. The longer answer is that most employers choose to minimize the risk of certain types of lawsuits and therefore don’t disclose any performance-related information about former employees or the reasons that employees have left the company. If you are leaving a company for any reason, ask your HR representative or the company’s legal counsel what the policy is about references for former employees. Finding out the company policy is the only way to know what you can expect in terms of a reference from a former employer. Special thanks to Carole Jurkash for offering her thoughts on this important topic. | 5,846 | 2,436 | 0.000421 |
warc | 201704 | Stanford Computer Security Workshop Recap
Last week, Dan Boneh and I hosted a security workshop with a mix of thought leaders from both academia and industry. Dan is a well-known Stanford professor of Computer Science who specializes in security and cryptography. At this workshop we brought together researchers and practitioners working on web application security. The discussions were about recent trends in secure web application design, common vulnerabilities in existing systems, and upcoming security architectures for the web.
One of the recurring problems discussed during the workshop is the overwhelming number of false positives that results in triage fatigue among security operators.
For example, suppose a traditional web application firewall (WAF) tries to distinguish a “good” request from a “bad” request based solely on the requests received by the web server. Trying to determine which of two near-identical requests is legitimate and which may be an attack is close to impossible. In this specific case, you may be able to determine that the “Referer” header is misspelled (or rather, correctly spelled as “Referrer”) and thus was not generated by a legitimate browser, which is indicative of an attack.
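The Referer observation can be expressed as a one-line heuristic. The header names below are real (HTTP's standard header is the historically misspelled "Referer"), but the detection function itself is only an illustrative sketch of this kind of server-side signal, not any vendor's actual WAF logic:

```python
def non_browser_referrer(headers: dict) -> bool:
    """Flag requests carrying a correctly spelled 'Referrer' header.

    Stock browsers send the historically misspelled 'Referer' header,
    so the correct English spelling suggests a hand-crafted request.
    """
    return any(name.lower() == "referrer" for name in headers)

print(non_browser_referrer({"Referer": "https://example.com/"}))   # False
print(non_browser_referrer({"Referrer": "https://example.com/"}))  # True
```

A heuristic this narrow also illustrates the broader point of the talk: with only server-side visibility, a WAF is reduced to fragile signals like header spelling.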
The problem I focused on during my talk was the false positives from security tools that focus just on the web server, or just on the database, or just on the client. Since they lack the complete context to classify a request, they are forced to use heuristics. No matter how good those heuristics are, either they are overly conservative and miss vulnerable requests, or they're overly aggressive and generate the false positives which developers and ops hate so much. The approach that we have been experimenting with leverages DOM virtualization to mitigate this problem by augmenting the server-side WAF with client-side information to make the detection more precise and less prone to false positives.
Kunal Anand, Co-Founder & CTO of Prevoty, tackled the context problem from a different direction and introduced the participants to Language-based Security. His approach took advantage of building parsers for different types of data to enforce security at the application level rather than browser level to increase the amount of context, and as a result reduce false positives.
Dan Boneh focused on a system called Stickler that he and his team have built to let end users verify the end-to-end authenticity of web content served while still being able to reap the benefits of caching CDNs give them. The research explores what kind of integrity guarantees can be made without modifying the browser.
Deian Stefan, Assistant Professor of CSE, UC San Diego, on the other hand, explored the security properties that modifying or augmenting browsers may give us. He described the security extensions to the web specification the W3C is working on: new standards like HTTP Strict Transport Security to enforce HTTPS, Content Security Policy for controlling content on the page, and a Confinement System for the Web for label-based confinement on the web. These proposals are in different stages of standardization and promise to make capturing browser state and enforcing security easier and more expressive.
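As a concrete illustration of the first two standards mentioned, a server might emit response headers like the following. The header names are standard; the policy values are example assumptions, not recommendations for any particular site:

```python
# Example security response headers (header names are standard; the
# policy values here are illustrative, not recommended defaults).
SECURITY_HEADERS = {
    # HSTS: instruct browsers to use HTTPS for one year, subdomains included.
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    # CSP: allow scripts only from this origin and forbid framing.
    "Content-Security-Policy": "script-src 'self'; frame-ancestors 'none'",
}

for name, value in SECURITY_HEADERS.items():
    print(f"{name}: {value}")
```

Because these policies are delivered as ordinary headers, they can be rolled out incrementally and enforced by every standards-compliant browser without server-side request inspection.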
Parisa Tabriz, Security Princess at Google Chrome, made an extended metaphor comparing human health with Chrome health. One of the things I loved about her analogy was the opportunity to reflect on the many different proxies we use to evaluate the quality of a software project – from hard metrics to symptoms more akin to "aches, pains and a general feeling of malaise."
There were three more reflective talks on how security threats have evolved and how companies have evolved in response:
Michael Stoppelman, SVP of Engineering at Yelp, spoke about the many challenges Yelp took on as it grew, moving from reactive to proactive in tackling everything from XSS to denial of service. A recurring theme during the day, which Michael was the first to cover, was the challenge of creating a security team that balances “builders” and “breakers.”
Upendra Mardikar, VP of Security Strategy Architecture and Engineering at American Express, gave a deep look into how web applications have changed in the financial industry and the impact and value that compliance programs have on securing financial systems.
Neil Daswani, CISO of Lifelock, took a step back from compliance and gave a great overview of how metrics can help give a sense of an organization’s security posture. While there are pitfalls in the metrics that we have today, he argued that an essential part of securing an organization is to track progress and improvement over time – something which metrics are able to provide.
One of the more provocative talks at the workshop was given by my colleague, Parvez Ahammad, who is Instart Logic's Head of Data Science and Machine Learning. He gave a detailed history of machine learning and security and in particular tackled the skepticism that many security experts have about it (shared by several members of the panel discussions). There are many different types of machine learning systems which come into and out of vogue. Overall, though, machine learning algorithms follow the “No Free Lunch” theorem, which states that there is no one model that works best for every problem. Parvez argued that many of the failures that machine learning algorithms have suffered in security are rooted in the idea that machine learning can be treated as a black box into which data is directed, only to have security or anomalies magically emerge.
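A toy way to see the “No Free Lunch” point: each of the two rules below wins on one traffic mix and loses on the other, so neither can be declared the universal best. The rules and data are invented for illustration, not taken from Parvez's talk:

```python
# Two toy detectors scored on two traffic mixes; labels: 1 = attack, 0 = benign.
def length_rule(req):
    # Flags unusually long requests.
    return 1 if len(req) > 40 else 0

def keyword_rule(req):
    # Flags requests containing a suspicious token.
    return 1 if "SELECT" in req else 0

mix_a = [("SELECT * FROM users WHERE id=1", 1), ("GET /index.html", 0)]
mix_b = [("A" * 50, 1), ("SELECT a product category", 0)]

def accuracy(rule, data):
    return sum(rule(req) == label for req, label in data) / len(data)

for name, rule in [("length", length_rule), ("keyword", keyword_rule)]:
    print(name, accuracy(rule, mix_a), accuracy(rule, mix_b))
```

The keyword rule is perfect on the first mix and useless on the second; the length rule is the reverse. Which model "works" depends entirely on the traffic it sees, which is why treating any one model as a black-box oracle tends to fail.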
There were also two different, very lively panel discussions - one led by Michael Abbott, General Partner at Kleiner Perkins Caufield & Byers, with Ganesh Krishnan, Head of Security and Identity at Atlassian, Gene Golovinsky, Director of Security R&D at Intuit, and Diogo Mónica, Security Lead at Docker; and another with Collin Greene, Senior Security Engineering Manager at Uber, Hemant Raju, Director of Engineering, Application Security & Security Architecture at Walmart, and Bryan Payne, Engineering Manager of Product and Application Security at Netflix. The panelists covered everything from choosing good security vendors, to the shortage of good security developers, to the biggest mistakes and the most exciting future developments in the field. I cannot do justice capturing the information nuggets, one-liners and repartee that were shared. This is part of the reason why we will be sharing videos of those panels and all the other talks over the next few weeks!
Overall, it was fantastic to have such a diverse group of security enthusiasts as participants, in the panels and in the audience. I’m especially thankful to Stanford and to Dan Boneh for giving us the chance to host a thought provoking and stimulating workshop. If you missed it, you can view the on-demand workshop sessions now. | 6,998 | 3,277 | 0.00031 |
warc | 201704 | Exam day has come, and you know you have prepared adequately, but you may still be anxious when the time comes to actually take the exam. Don’t be embarrassed. Many students feel stressed, nervous, and worried when they have to demonstrate what they’ve learned through an exam.
The following tips will guide you through exam day. Remember that every exam is different. This test-taking guide is written in a general sense, with an eye toward the typical college-level exam.
Preparation for exam day

Avoid cramming the night before. You will retain more, both on test day and afterwards for comprehensive exams, if you study regularly and at a reasonable pace. While a brief review will help, avoid an exhaustive cramming session that leaves you facing the test tired.

Prepare your equipment. You should have two or three pens or pencils with good erasers, as well as any books, note cards, or “cheat sheets” your instructor permits. If you are taking a math or science test, bring a calculator with good batteries. Also, since you won’t be allowed to use your cell phone, bring a watch to keep track of time. Lastly, if allowed, bring some chewing gum to deal with nervous tension.

Be physically ready. Your previous preparation can go to waste if you don’t get a good night’s sleep before test day. You should also eat a healthy meal and be well hydrated before the exam begins. Avoid overeating or consuming excessive caffeine before your test. Also, use the restroom before the test begins, and if it is permitted, bring a bottle of water.

Find out as much as you can about the exam before it begins. Find out details about the format of the test. Ask your professor if you will have to write any essays. If essays are your weak point, research potential essay topics and create an outline in order to save time for other sections of the test. Also, remember to ask about the rules for test day. Will you be permitted to go to the restroom during the test? Is there a strict time limit?

During the exam

Read the test directions closely. If you have questions, ask your instructor to clarify the matter, either to you personally or to the entire class. Don’t be embarrassed: your fellow students will likely have the same questions. If other students ask questions, don’t get so engrossed in your test that you miss out on answers to their questions.

Remember to breathe. If you feel yourself panicking or stressing out, put down your pencil and take several long, deep breaths. Do this several times throughout the test to clear your mind and fill your blood with oxygen. Imagine yourself relaxing and visualize a calm image.

Survey the test before beginning. Glance over the entire test and form a loose plan for how you will spend your time. You do not need to closely inspect every question, but your plan may be very different for a test with fifteen multiple-choice questions and six essay questions than for one with ninety multiple-choice questions and one essay. If the professor provides the point value of each question or section, focus on the sections with the highest point value if you expect to be pressed for time. Briefly look at any bonus questions, and answer those you know before spending time on complex, challenging questions.

Read every question closely. Sometimes teachers will write questions that are deliberately reversed from what you might expect in order to challenge you. If you feel that a question is nonsensical, hard to understand, or contains typos, ask your instructor for clarification; misprints and editing accidents can happen.

Strategize for multiple-choice and true/false questions. Read the question thoroughly, and if it helps, solve the problem on scratch paper. If the answer is not immediately clear, you may wish to skip it for the moment and solve problems that you know you can handle quickly. For multiple-choice questions, rule out as many options as you can, and make an educated guess. You won’t get it right if you don’t try. For true/false questions, remember that absolute or near-absolute answers, such as those that use “always” or “never,” are often false.

Look for key words in essay questions. Read the question thoroughly and be sure you understand the specific topic, as well as what you are supposed to “do” with your essay. Keywords include “define,” “explain,” and “compare.” Prepare a short outline on scratch paper to organize your thoughts, and consider the time you have. Address the topic with a direct response, and address all aspects of the question with specifics, not just general statements. You should use technical vocabulary from the course correctly, but don’t feel you need to show off. Even if you and your teacher differ in perspective on a course topic, you can write an informed answer that reflects your knowledge of different angles on the topic.

Don’t get distracted by other students taking the test. If they are being disruptive, ask them to be quiet or inform the instructor. Avoid looking toward their papers. Don’t feel pressured if other students complete the test quickly and leave early; some students take tests very quickly, and this has little bearing on their actual performance. If you find yourself racing to finish and “get it over with,” be sure to review your answers and check your work to spot mistakes or questions you overlooked.

After the exam

Once you have completed your test and double-checked it for mistakes, try not to dwell on how it went. Even if you felt you did poorly, it is now beyond your control. Do something that relaxes you, like playing a sport or listening to music, and go about your routine otherwise.

If you receive your test paper back, look at where you made mistakes to determine your strengths and weaknesses for future attempts. In particular, professors often provide commentary on answers to essay questions if you have had problems presenting your argument or recalling factual material. Save your tests to study for midterms and final exams; even if the exact questions aren’t repeated, you can learn a lot from the way a professor asks questions.

If your instructor has a test-review session, don’t skip it. Reviewing the material will help you learn and will enhance your performance on future tests. Sometimes, instructors even award credit for errors they made (which may require you to be present). Some professors allow you to “revise” your test for an improvement in score, and others award bonus points simply for attending the post-test review session. | 6,680 | 2,968 | 0.000345 |
warc | 201704 |
Compare the democratic forms of government in the United States and Great Britain
History and Geography Lifepac 902, Aaron Ang, 3312004

Although the need for government to have leadership that provides direction is universal among states, the form that the government leadership assumes varies. Government structure varies significantly between the United States and Great Britain, despite that each is a democracy and they share a common history. In fact, the common history of the United States and Great Britain suggests reasons to explain the broad differences between the governments of each respective state. In the wake of the American Revolution, the people of the United States rejected the forms and institutions (most notably a monarchy and Parliament) of British government, as well as British sovereignty. Possessing a democratic presidential government, the United States has two separately elected agencies of government. The executive and legislative branches of the United States (the President and Congress, respectively) both derive their power from the people, whereas in Great Britain only the legislative branch (Parliament) derives its power from the people, as the executive is elected by Members of Parliament, thus effectively combining both branches within a single institution. The Parliamentary system in Great Britain and the Presidential system in the United States both have histories marked by an absence of abject failure, yet neither system can be considered truly perfect. Consequently, the analyst cannot conclude that either system is better; rather, he must recognize that there are merits and faults in both systems. The Parliamentary system tends to legislate efficiently, whereas a presidential system tends toward gridlock. However, the presidential system grants both elected representatives and citizens greater influence in government. The Parliamentary system tends to favor Prime Ministers who have much experience, whereas the Presidential system favors Presidents who are responsive to the general will of the people. Also, every week the British prime minister appears before the House of Commons and
| 2,520 | 1,177 | 0.000854 |
warc | 201704 | Physician and Boomer Institute founder Courter, in his debut, offers a general self-help book for seniors.
The author crafted this guide, he writes, in order to help enhance the quality of life of millions of aging baby boomers. Drawing on years of medical experience, he addresses several key areas, including emotional and physical health and activity; mental focus and spirituality; and new habits to redefine and revitalize the self. Many people see one’s golden years as a time of deteriorating health and alienation from society, and some retirees with depleted pensions have been forced to return to work to make ends meet. The author, however, views life’s last trimester as an opportunity to embrace a holistic lifestyle. His suggestions include exercising with weights, eating more plant-based foods and avoiding genetically modified organisms, taking probiotics to improve one’s mood, and drinking hot water or tea to cleanse one’s system. He even suggests expanding one’s vocabulary and writing a memoir. Establishing new habits, and thus creating new neural pathways, he asserts, can lead to greater happiness and a renewed sense of purpose. Although self-help books for the elderly abound, few cover their myriad concerns as comprehensively as Courter’s guide does. Informative, user-friendly, and brimming with advice, its tone is neither preachy nor condescending. Appropriately, the author relates his own experiences, including a notable golf story, with compassion and humility, and his upbeat, enthusiastic approach may persuade many readers to see their circumstances in a more positive light. However, although the book briefly mentions sexual dysfunction, it doesn’t adequately explore sexuality in the senior years. It also occasionally provides unclear statistics, as when it claims that two-thirds of the U.S. population is overweight and one-third is obese; is no one underweight or at the ideal weight? These are minor quibbles, though, given the abundance of worthwhile information here.
An inspirational manual designed to make seniors’ last years their best ones. | 2,157 | 1,153 | 0.000886 |